Ubuntu "No space left on device": is it a lie, or have I run out of inodes?

    Yesterday one of my development servers decided it was going to do some very strange things. WordPress and other websites stopped working properly, I got session errors when trying to use phpMyAdmin, and I couldn't upload files through web forms (the server complained there was no temporary directory). So I logged in to try and work out what was going on. The temporary directory was there and had the correct permissions, but if I tried to create a file in it I was told:

    $ touch /tmp/testfile

    Unable to create file /tmp/testfile: No space left on device

    So I must have run out of disk space, which is odd as I had loads last time I checked.

    $ df -h

    Filesystem            Size  Used Avail Use% Mounted on

    /dev/sda1              15G  8.5G  6.5G  57% /

    devtmpfs              299M  112K  299M   1% /dev

    none                  308M     0  308M   0% /dev/shm

    none                  308M   64K  308M   1% /var/run

    none                  308M     0  308M   0% /var/lock

    none                  308M     0  308M   0% /lib/init/rw

    /dev/sdc1              40G  6.4G   32G  17% /home

    Oh, I have plenty of disk space! What the hell is going on then? As my server is an Amazon EC2 instance, my first thought was that there was a problem with the block storage. So I spent an hour or so trying to find any clues in their forums and got nowhere.

    After another few hours of scouring the internet for people having similar problems and finding nothing at all, I was about to give up. As a last-ditch attempt to find the solution I checked my Munin stats for the server, and immediately noticed that the inode graph for one of the mounted disks had been rising steadily over the last few weeks and had just reached 100%!

    $ df -i

    Filesystem            Inodes   IUsed   IFree IUse% Mounted on

    /dev/sda1             983040  983040       0  100% /

    devtmpfs               76490    1957   74533    3% /dev

    none                   78747       1   78746    1% /dev/shm

    none                   78747      34   78713    1% /var/run

    none                   78747       2   78745    1% /var/lock

    none                   78747       1   78746    1% /lib/init/rw

    /dev/sdc1            2621440   13238 2608202    1% /home

    So then, where are all these files? There must be hundreds of thousands of them to be using up 100% of just under a million inodes.
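Every file, directory, and symlink consumes one inode regardless of its size, which is why `df -h` can show plenty of free blocks while the filesystem is effectively full. The same counters `df -i` reads can be queried for a single path with GNU coreutils `stat`; a quick sketch (`%c` and `%d` are the total and free file-node counts):

```shell
# Show total and free inodes for the filesystem containing /.
# -f queries the filesystem rather than the file itself (GNU coreutils).
stat -f -c 'total=%c free=%d' /
```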

    To count all the files in a directory and all its subdirectories:

    $ for i in /*; do echo "$i"; find "$i" | wc -l; done

    Then you can narrow down your search by replacing /* with any directory that has an unusually large number of files in it. For me it was /var:

    $ for i in /var/*; do echo "$i"; find "$i" | wc -l; done
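The loop above is easier to scan if the count comes first and the output is sorted numerically, so the worst offender ends up on the last line. A minimal sketch, assuming the directory names contain no newlines:

```shell
# For each entry under /var, count everything beneath it, then sort
# by count. 2>/dev/null hides permission errors when not running as root.
for i in /var/*; do
  printf '%8d %s\n' "$(find "$i" 2>/dev/null | wc -l)" "$i"
done | sort -n
```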

    Eventually I narrowed it down to the reports being held by sarg, the Squid proxy server report generator, so the simple fix was to clear out all the old reports and stop sarg from auto-generating reports every day.

    $ rm -rf /var/log/sarg/*

    And that's it! Server fixed and back up and running without any problems. All I have to do is remember to keep an eye on any auto-generated logs and reports and make sure that old ones are actually being deleted!
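One way to make that automatic is a small cleanup script run from cron. A sketch, assuming the reports live under /var/log/sarg and anything older than 30 days is safe to delete:

```shell
#!/bin/sh
# Delete sarg report files older than 30 days, then remove any
# directories left empty, freeing the inodes they were using.
find /var/log/sarg -type f -mtime +30 -delete
find /var/log/sarg -mindepth 1 -type d -empty -delete
```

Dropped into /etc/cron.daily/ and made executable, this keeps the report tree from growing without bound.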

Original article: https://www.cnblogs.com/zhangzhang/p/3090301.html