  • Ubuntu "No space left on device": is it a lie, or have I run out of inodes?

    Yesterday one of my development servers decided it was going to do some very strange things. WordPress and other websites stopped working properly, I got session errors when trying to use phpMyAdmin, and I couldn't upload files through web forms (the server complained there was no temporary directory). So I logged in to try to work out what was going on. The temporary directory was there and had the correct permissions, but if I tried to create a file in it I was told:

    $ touch /tmp/testfile

    Unable to create file /tmp/testfile: No space left on device

    So I must have run out of disk space, which is odd as I had loads last time I checked.

    $ df -h

    Filesystem            Size  Used Avail Use% Mounted on

    /dev/sda1              15G  8.5G  6.5G  57% /

    devtmpfs              299M  112K  299M   1% /dev

    none                  308M     0  308M   0% /dev/shm

    none                  308M   64K  308M   1% /var/run

    none                  308M     0  308M   0% /var/lock

    none                  308M     0  308M   0% /lib/init/rw

    /dev/sdc1              40G  6.4G   32G  17% /home

    Oh, I have plenty of disk space! What the hell is going on then? As my server is an Amazon EC2 instance, my first thought was that there was a problem with the block storage. So I spent an hour or so trying to find clues in their forums and got nowhere.

    After another few hours of scouring the internet for people having similar problems and finding nothing at all, I was about to give up. As a last-ditch attempt to find the solution I checked my Munin stats for the server, and immediately noticed that the inode graph for one of the mounted disks had been rising steadily over the last few weeks and had just reached 100%!

    $ df -i

    Filesystem            Inodes   IUsed   IFree IUse% Mounted on

    /dev/sda1             983040  983040       0  100% /

    devtmpfs               76490    1957   74533    3% /dev

    none                   78747       1   78746    1% /dev/shm

    none                   78747      34   78713    1% /var/run

    none                   78747       2   78745    1% /var/lock

    none                   78747       1   78746    1% /lib/init/rw

    /dev/sdc1            2621440   13238 2608202    1% /home

    So then, where are all these files? There must be hundreds of thousands of them to use up 100% of just under a million inodes.

    To count all the files in a directory and all of its subdirectories:

    $ for i in /*; do echo "$i"; find "$i" | wc -l; done

    Then you can narrow down your search by replacing /* with any directory that contains an unusually large number of files. For me it was /var:

    $ for i in /var/*; do echo "$i"; find "$i" | wc -l; done
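Looping find over each directory works, but it walks overlapping trees repeatedly. A single-pass alternative is to print each file's parent directory once and tally the results. This is a sketch of my own, not the command from the write-up, and it assumes GNU find (the -printf action is not in POSIX):

```shell
# Count entries per parent directory under /var, busiest first.
# -xdev stays on this filesystem, since inodes are a per-filesystem
# resource; %h prints each entry's parent directory.
find /var -xdev -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head
```

The first few lines of output point straight at the directories hoarding the most inodes.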

    Eventually I narrowed it down to the reports generated by sarg, the Squid proxy server report generator, so the simple fix was to clear out all the old reports and stop sarg from auto-generating reports every day.

    $ rm -rf /var/log/sarg/*
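One caveat with that command: the shell expands the /var/log/sarg/* glob into one giant argument list, which can itself fail with "Argument list too long" when a directory holds hundreds of thousands of entries. A sketch of an alternative that never builds that list (the 30-day retention window is my assumption, not from the original fix):

```shell
# Delete entries older than 30 days without expanding a glob in the
# shell; find -delete removes each match as it is found. The directory
# check keeps this runnable on machines without sarg installed.
[ -d /var/log/sarg ] && find /var/log/sarg -mindepth 1 -mtime +30 -delete
```

Dropping -mtime +30 reproduces the original "delete everything" behaviour, but a retention window is safer if sarg is still writing reports.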

    And that's it! Server fixed and back up and running without any problems. All I have to do now is remember to keep an eye on any auto-generated logs and reports and make sure that old ones are actually being deleted!
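If you don't have Munin graphing inodes for you, a small cron-able check does the same job. This is a hypothetical sketch of my own: the 90% threshold and the warning format are assumptions, though the df -iP output it parses is standard:

```shell
#!/bin/sh
# Warn about any filesystem whose inode usage is at or above THRESHOLD.
# df -iP prints one portable-format line per filesystem with IUse% in
# column 5 and the mount point in column 6.
THRESHOLD=90
df -iP | awk -v t="$THRESHOLD" 'NR > 1 {
    gsub(/%/, "", $5)                 # strip the % sign from IUse%
    if ($5 + 0 >= t)                  # +0 coerces "-" (tmpfs) to 0
        printf "WARNING: %s at %s%% inodes (%s)\n", $1, $5, $6
}'
```

Run it from cron and mail yourself the output, and a slow inode leak like this one shows up weeks before touch starts failing.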

  • Original article: https://www.cnblogs.com/zhangzhang/p/3090301.html