  • Ubuntu "No space left on device": is it a lie, or have I run out of inodes?

    Yesterday one of my development servers decided to do some very strange things. WordPress and other websites stopped working properly, I got session errors when trying to use phpMyAdmin, and I couldn't upload files through web forms (the server complained there was no temporary directory). So I logged in to try and work out what was going on. The temporary directory was there and had the correct permissions, but if I tried to create a file in it I was told:

    $ touch /tmp/testfile

    touch: cannot touch '/tmp/testfile': No space left on device

    So I must have run out of disk space, which is odd as I had loads last time I checked.

    $ df -h

    Filesystem            Size  Used Avail Use% Mounted on

    /dev/sda1              15G  8.5G  6.5G  57% /

    devtmpfs              299M  112K  299M   1% /dev

    none                  308M     0  308M   0% /dev/shm

    none                  308M   64K  308M   1% /var/run

    none                  308M     0  308M   0% /var/lock

    none                  308M     0  308M   0% /lib/init/rw

    /dev/sdc1              40G  6.4G   32G  17% /home

    Oh, I have plenty of disk space! What the hell is going on then? As my server is an Amazon EC2 instance, my first thought was that there was a problem with the block storage. So I spent an hour or so trying to find any clues in their forums and got nowhere.

    After another few hours of scouring the internet for people having similar problems and finding nothing at all, I was about to give up. As a last-ditch attempt to find the solution, I checked my Munin stats for the server and immediately noticed that the inode graph for one of the mounted disks had been rising steadily over the last few weeks and had just reached 100%!

    $ df -i

    Filesystem            Inodes   IUsed   IFree IUse% Mounted on

    /dev/sda1             983040  983040       0  100% /

    devtmpfs               76490    1957   74533    3% /dev

    none                   78747       1   78746    1% /dev/shm

    none                   78747      34   78713    1% /var/run

    none                   78747       2   78745    1% /var/lock

    none                   78747       1   78746    1% /lib/init/rw

    /dev/sdc1            2621440   13238 2608202    1% /home

    So then, where are all these files? There must be hundreds of thousands of them to be using up 100% of just under a million inodes.

    To count all the files in a directory and all of its subdirectories:

    $ for i in /*; do echo "$i"; find "$i" | wc -l; done

    Then you can narrow down your search by replacing /* with any directory that has an unusually large number of files in it. For me it was /var:

    $ for i in /var/*; do echo "$i"; find "$i" | wc -l; done
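    On systems with GNU coreutils 8.22 or newer, du --inodes can do this counting more directly; a sketch (the path and depth are just examples):

```shell
# Count inodes used per top-level directory under /var, largest first.
# --inodes needs GNU coreutils 8.22+; -x keeps du on one filesystem.
du --inodes -x -d 1 /var 2>/dev/null | sort -rn | head
```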

    Eventually I narrowed it down to the reports generated by sarg, the Squid proxy report generator, so the simple fix was to clear out all the old reports and stop sarg from auto-generating reports every day.

    $ rm -rf /var/log/sarg/*
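    Rather than wiping everything each time, a more durable fix is to age out old reports. A sketch, assuming the reports live under /var/log/sarg and that 30 days is a sensible retention window:

```shell
# Delete sarg report files older than 30 days, then prune empty directories.
# The path and the 30-day window are assumptions; adjust for your setup.
find /var/log/sarg -type f -mtime +30 -delete
find /var/log/sarg -mindepth 1 -type d -empty -delete
```

    Run daily from cron, this stops the reports from piling up unnoticed.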

    And that's it! Server fixed and back up and running without any problems. All I have to do now is remember to keep an eye on any autogenerated logs and reports and make sure that old ones are actually being deleted!
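    That "keeping an eye on" can be automated too. A minimal check (a sketch; the 90% threshold is an arbitrary choice) parses df -i and warns when any filesystem gets close to inode exhaustion:

```shell
#!/bin/sh
# Warn when any mounted filesystem's inode usage reaches the threshold.
# df -iP gives a predictable column layout: Filesystem, Inodes, IUsed,
# IFree, IUse%, Mounted on.
threshold=${1:-90}
df -iP | awk -v t="$threshold" 'NR > 1 {
    use = $5
    sub(/%/, "", use)
    if (use + 0 >= t)
        printf "WARNING: %s at %s%% inode usage (%s)\n", $1, use, $6
}'
```

    Dropped into cron, a check like this would have flagged /dev/sda1 weeks before it hit 100%.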

  • Original post: https://www.cnblogs.com/zhangzhang/p/3090301.html