  • linux undelete

    http://www.tldp.org/HOWTO/archived/Ext2fs-Undeletion-Dir-Struct/index.html

    http://www.giis.co.in/debugfs.html

    http://www.kossboss.com/linux---debugfs-explained-ext2-ext3-ext4-filesystem-recovery

      

    http://www.cyberciti.biz/tips/linux-ext3-ext4-deleted-files-recovery-howto.html

      apt-get install testdisk

      photorec

    ls -d

    logdump -i <12341234>

      Blocks:  (0+1): 2142216

    dd if=/dev/xxx of=yyy bs=4096 count=1 skip=2142216

I don't yet know how to handle files that are too large.
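The single-block extraction step above can be rehearsed safely on a scratch file standing in for /dev/xxx. The file names and the block number 2 below are illustrative, not values reported by logdump:

```shell
# Sketch of the dd extraction step, using a scratch file in place of the
# real device (paths and block number are made up for the demo)
IMG=/tmp/fakedev.img
OUT=/tmp/recovered.bin
BS=4096
BLOCK=2
# build a fake "device": two zero-filled blocks, then our data in block 2
dd if=/dev/zero of="$IMG" bs=$BS count=2 2>/dev/null
printf 'hello from block 2' >> "$IMG"
# the actual recovery step: pull exactly one block at the reported offset
dd if="$IMG" of="$OUT" bs=$BS count=1 skip=$BLOCK 2>/dev/null
cat "$OUT"
```

On a real ext3 volume you would substitute the device for $IMG and the block number printed by logdump for $BLOCK.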

     
     
     
debugfs : A unique command

    Hi,
I searched for a user guide on the debugfs command (other than the man page) but didn't find much documentation about it :( So here is a doc for you, where I try to explain some basic stuff that can be done with debugfs.

I assume you are the root user and know ext2/ext3 file system terms like nova, supernova.. oops sorry - I mean block, superblock, inode, group descriptors, bitmap. If not, please check out http://web.mit.edu/tytso/www/linux/ext2intro.html You can also look into Gadi Oxman's ext2ed project, which provides good documentation on ext2 internals. If you want an unreliable guide [written by myself], then check out Kick Start with ext3 :-)

OK. Before we begin, a few things to set out loud and clear :-)

*Let me tell you one thing: this is not a perfect user guide for debugfs by any means. I'm going to write this doc as I learn about debugfs. So if you mess up your system - do not blame me. After all, it's your system; you have to take responsibility for your actions - Be a Responsible Linux User :-)

*And also I adamantly refuse/reject any comments on my English. [Though I know it's very poor - I'm not interested in improving it :-) as I prefer spending that time improving my programming skills]

Let's jump into debugfs. What's debugfs? The man page says "debugfs - ext2/ext3 file system debugger". Type debugfs in a terminal and it'll display something similar to the output given below - a version number and a date [I guess it should be the release date]:

    # debugfs
    debugfs 1.37 (21-Mar-2005)
    debugfs:
Which file system partition to explore? If you don't know about partitions, peek into man df. By the time you looked into the man page of df.. I opened sda5 on my system using
    debugfs: open /dev/sda5
Now what to do with this? I've got absolutely no idea. Let's seek some help using the command
    debugfs: help
It displays the list of available options. Let me count... one.. two.. three.. four........ wow, 51 options. Don't get scared by the number. It'll be easy if you are ready to learn it, and it'll be very very easy if you are very much interested in learning :-) See, we already know 3 options, like open, help. Yes, that's two, not three :-) Here we go for the third one: quit.
    debugfs: quit
but before we quit it's always better to close /dev/sda5, which we opened using 'open', so type
    debugfs: close
It'll display the message 'File closed successfully.' [just kidding - it won't display any message] Try and repeat the same command:
    debugfs: close
    close: Filesystem not open

Now we know there is nothing to close, so type quit. Take a tea/coffee break - come back later :-) I'm also going out for a little break.
OK. Back to debugfs. Now let's try providing the partition name while invoking debugfs itself:
    # debugfs /dev/sda5
Let's check the first option listed when we type help, i.e. show_debugfs_params (or) params:
    debugfs: params
    Open mode: read-only
    Filesystem in use: /dev/sda5
Yes, we got the supplied arguments as the result. Hey, look here - there is an option called features [Set/print superblock features]. Let's see what that really means.
    debugfs: features
    Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Yes. Just what I expected - it's... it's... the features of the file system :-) OK.. OK.. you got me... I agree right now - I'm 10000% sure I've got no idea about this result :-) Let's explore this later :-) .. Let's check the 'dirty' option:
    debugfs: dirty
    dirty: Filesystem opened read/only
hmmmm... how do we open a file system read-write using debugfs, so that we can check this command? Yeah - got to check the syntax of the 'open' option:
    open: Usage: open [-s superblock] [-b blocksize] [-c] [-w]
Wondering how I got the syntax of open for debugfs? Just type:
    debugfs: open
Please "forget" to provide the device name so that it'll give you the syntax :-) So close it once and open with the -w option like this:
    debugfs: close
    debugfs: open -w /dev/sda5
Seems like the file system is opened in write mode. How to verify that? Let's try the params option:
    debugfs: params
    Open mode: read-write
Yes, finally we opened the file system in write mode too :-) Now try that 'dirty' option. If my guess is correct, dirty will mark the file system as dirty so that on the next boot a file system check (fsck) will be performed.

I typed dirty once.
Twice..
Do you want me to try a third time?
OK. Here I go:
    debugfs: dirty
    debugfs: dirty
    debugfs: dirty
Let's quit debugfs and restart the system to check whether I'm right or not [I have to save this file
and take a backup too - before the reboot :-)]
No.. I'm wrong - it didn't do any file system check (fsck) during reboot :( .... Maybe it works with the ext2 file system; I'm using ext3. So the next option is:
'init_filesys': it says "Initialize a filesystem (DESTROYS DATA)" - I'm not going to run debugfs to try this option :-) A quick check of the man page says "Creates an ext2 file system on the device".
The usage of this command is
initialize <device> <blocksize>
You have to provide the device name and its block size for creating a new ext2 FS.
Then comes 'show_super_stats', or simply 'stats'. When I typed stats I could see plenty of info, like
    Filesystem volume name: /opt
    and
    Last mounted on:
    and
    Filesystem UUID: cbd9db58-ce06-47e3-add0-5631faef0d37
and here goes the magic signature of the FS:
    Filesystem magic number: 0xEF53
    Filesystem revision #: 1 (dynamic)
and clearly the features of the FS:
    Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
    and
    Default mount options: (none)
and we get the state of the FS:
    Filesystem state: clean
wait... wait... isn't this what we tried earlier with the 'dirty' option??? Let me type dirty again:
    debugfs: dirty
    now run stats.
    Filesystem state: not clean

Yes. Let's reboot now and look for that fsck - I hope it works now - keeping fingers crossed :-) No, again... it didn't give the expected results - it didn't run a file system check at boot time :-( I ran 'stats' and it said the state is 'clean'??? How did it become clean now??? I wonder whether something resets this state while rebooting.
Let me try one last time: run dirty, and stats shows the file system state as 'not clean'. Now I'll disconnect the power supply rather than rebooting or shutting down properly. Let's wait for it...... Yes. Looks like a file system check was performed while booting; it said "PASSED". But is that the correct approach? Let me try disconnecting the power without manually making the FS dirty using debugfs.
YES. This time it didn't perform a file system check. So dirty forces a file system check. I hope someone will help me understand more about this. [Why doesn't it perform a file system check if I reboot, but does if I disconnect the power??? My guess: a normal reboot cleanly unmounts the filesystem, and unmounting writes the state back to 'clean', while pulling the power skips that step. Any other insights into this issue - ping me :-)]
OK, enough of this 'dirty' stuff.... Where were we?.. Yes, digging into 'stats'. Run stats yourself and see FS settings like inode count, block count, free blocks, free inodes, block size etc. etc. And also note that at the end we have the group descriptor information. Very useful details for exploring and exploding your FS - isn't it? ;-)
'ncheck' takes an inode as input and gives the path name.
    debugfs: ncheck 2348010
    Inode Pathname
    2348010 /oss/man/cat1
Then we have 'icheck': when you provide a block number as input it will tell you which inode it belongs to. First, how to find a valid block number??? Try the 'stat' option with a file; it'll display the inode info. Beware: 'stat' gives a file's inode details whereas 'stats' provides the file system information that resides in the superblock. I guess the 's' in stats refers to the superblock.
    debugfs: stat ttt
    Inode: 13 Type: regular Mode: 0644 Flags: 0x0 Generation: 1899498200
    User: 0 Group: 0 Size: 11
    File ACL: 1621061 Directory ACL: 0
    Links: 1 Blockcount: 16
    Fragment: Address: 0 Number: 0 Size: 0
    ctime: 0x48050386 -- Wed Apr 16 01:05:34 2008
    atime: 0x48050385 -- Wed Apr 16 01:05:33 2008
    mtime: 0x48050386 -- Wed Apr 16 01:05:34 2008
    BLOCKS:
    (0):1515291
    TOTAL: 1
stat gives the inode details of the given file, like inode number, file type, user, size, times, links and blocks. So we got a valid block, 1515291. Let's use this block number with icheck:
    debugfs: icheck 1515291
    Block Inode number
    1515291 13
It gives the expected inode number 13 of the file 'ttt'.
No comments needed about the following options:
    chroot
    cd
    ls
    ln
    unlink
    mkdir
    rmdir
    rm

What's that kill_file? The name itself creates curiosity. It deallocates the inode & blocks but doesn't remove the entry from the directory structure.
    $ls -il giis.txt
    15 -rw-rw-r-- 1 oss oss 18 Apr 16 15:54 giis.txt
    Now i used kill_file on giis.txt.
    debugfs: kill_file giis.txt
    Even now
    $ls -il giis.txt
    15 -rw-rw-r-- 1 oss oss 18 Apr 16 15:54 giis.txt
    gives the same result.
    debugfs: stat giis.txt
    Inode: 15 Type: regular Mode: 0664 Flags: 0x0 Generation: 3139576194
    User: 500 Group: 500 Size: 18
    File ACL: 505359 Directory ACL: 0
    Links: 1 Blockcount: 16
    Fragment: Address: 0 Number: 0 Size: 0
    ctime: 0x4805d3eb -- Wed Apr 16 15:54:43 2008
    atime: 0x4805d3e7 -- Wed Apr 16 15:54:39 2008
    mtime: 0x4805d3eb -- Wed Apr 16 15:54:43 2008
    dtime: 0x4805d445 -- Wed Apr 16 15:56:13 2008
    BLOCKS:
    (0):10234
    TOTAL: 1

By using kill_file, giis.txt's inode and block are marked as free. They can be reused by some other file. You can verify that by checking with the ffi and ffb options, which give the next available free inode number and the next available free block number.

    debugfs: ffi
    Free inode found: 15

    debugfs: ffb
    Free blocks found: 10234

Note that the inode & block numbers from ffi and ffb are the same as in the stat result. I'm not sure what the purpose of kill_file is. Anyway, quite interesting stuff.
    clri says it'll clear inode contents.
    debugfs: clri ttt

Now stat ttt shows a clean inode, with the times dating back to Thursday Jan 1 1970, exactly 5.30 [the Unix epoch, shown in my local time].
All the other entries are set to 0 and finally the type is set to 'bad type' :-)

Let's try the next debugfs option - testi, which says it'll "Test an inode's in-use flag". In other words, using testi you can tell whether an inode is in the allocated or the free state.

    debugfs: testi Laks
    Inode 880417 is marked in use
Let's check the same for giis.txt, on which we used kill_file:
    debugfs: testi giis.txt
    Inode 15 is not in use
'seti' will mark the inode as in-use, i.e. it's no longer a free inode. Let's check the same with giis.txt:
    debugfs: seti giis.txt
    Now let's verify that :
    debugfs: testi giis.txt
    Inode 15 is marked in use
Yes. Previously inode 15 was marked as free - but now, with the help of seti, it's marked as in-use. Next we have 'freei'; as the name indicates, it will mark an inode as free.
    debugfs: freei giis.txt
    debugfs: testi giis.txt
    Inode 15 is not in use
Cool. You can just play with the inode status :-)
Similar to testi, seti, freei we have testb, setb, freeb, with which we can mark a block as free / in-use.
Usage: testb <block> [count]
To verify whether block number 10234 is free or not:

    debugfs: testb 10234
    Block 10234 not in use
You can test more than one block with the help of count. Let's check 2 blocks starting from 10234:

    debugfs: testb 10234 2
    Block 10234 not in use
    Block 10235 marked in use

Then we have freeb and setb; their usage is:
debugfs: setb
setb: Usage: setb <block> [count]
debugfs: freeb
freeb: Usage: freeb <block> [count]

I guess you know what these will do... setb and freeb are homework for you :-)

The mi option : mission impossible :-)
It's short for modify_inode. Yes, it's a really impossible task which can be achieved only with mi. With this you can modify the inode contents of a file. That's great - isn't it?
When I typed,
    debugfs: mi giis.txt
    Mode [0100664]

and it awaits my input - I don't want to change the mode, so just press enter; then comes

    User ID [500]
    Group ID [500]
    Size [18] 20
    Creation time [1208341483]
    Modification time [1208341483]
    Access time [1208341479]
    Deletion time [1208341573]
    Link count [1]
    Block count [16]
    File flags [0x0]
    Generation [0xbb222182]
    File acl [505359]
    High 32bits of size [0]
    Fragment address [0]
    Fragment number [0]
    Fragment size [0]
    Direct Block #0 [10234]
    Direct Block #1 [0]
    Direct Block #2 [0]
    Direct Block #3 [0]
    Direct Block #4 [0]
    Direct Block #5 [0]
    Direct Block #6 [0]
    Direct Block #7 [0]
    Direct Block #8 [0]
    Direct Block #9 [0]
    Direct Block #10 [0]
    Direct Block #11 [0]
    Indirect Block [0]
    Double Indirect Block [0]
    Triple Indirect Block [0]
    debugfs:

Note that I entered the size field as 20 while the actual size is 18. That's really cool. As you can see, the data block numbers can also be set using this - that's super-duper stuff. Maybe I'll play with this later. Remember: anytime, anywhere, it's always dangerous to play with inodes :-) But if you are a brave Linux user/programmer, just ignore my warning and enjoy yourself :-)

The expand option: the man page says it expands a directory. What's there to expand in 'a directory'?? Let me check. First create a new directory called ext3crave and also create a sample test file in there. Then take a backup of the current stat info of the directory ext3crave:

    debugfs: stat ext3crave
    Inode: 1926563 Type: directory Mode: 0775 Flags: 0x0 Generation: 3139576210
    User: 500 Group: 500 Size: 4096
    File ACL: 505359 Directory ACL: 0
    Links: 2 Blockcount: 16
    Fragment: Address: 0 Number: 0 Size: 0
    ctime: 0x4806e498 -- Thu Apr 17 11:18:08 2008
    atime: 0x4806e49e -- Thu Apr 17 11:18:14 2008
    mtime: 0x4806e498 -- Thu Apr 17 11:18:08 2008
    BLOCKS:
    (0):1951197
    TOTAL: 1

    ok...now go for expand.

    debugfs: expand ext3crave

No messages displayed :-( ... What happened? Run stat and hope for some clue....
    debugfs: stat ext3crave
    Inode: 1926563 Type: directory Mode: 0775 Flags: 0x0 Generation: 3139576210
    User: 500 Group: 500 Size: 8192
    File ACL: 505359 Directory ACL: 0
    Links: 2 Blockcount: 24
    Fragment: Address: 0 Number: 0 Size: 0
    ctime: 0x4806e498 -- Thu Apr 17 11:18:08 2008
    atime: 0x4806e661 -- Thu Apr 17 11:25:45 2008
    mtime: 0x4806e498 -- Thu Apr 17 11:18:08 2008
    BLOCKS:
    (0):1951197, (1):1951200
    TOTAL: 2

wow.... look, the size is doubled along with the block count, and the total says 2. Let's quickly check the directory:

    ls -il ext3crave

No.. nothing special there. But look at the stat carefully: you can see block 1951200 was added. My guess is that a directory normally has a size of one block, and with 'expand' you can increase its size.
    debugfs: expand ext3crave
    debugfs: stat ext3crave
    Inode: 1926563 Type: directory Mode: 0775 Flags: 0x0 Generation: 3139576210
    User: 500 Group: 500 Size: 12288
    File ACL: 505359 Directory ACL: 0
    Links: 2 Blockcount: 32
    Fragment: Address: 0 Number: 0 Size: 0
    ctime: 0x4806e498 -- Thu Apr 17 11:18:08 2008
    atime: 0x4806e661 -- Thu Apr 17 11:25:45 2008
    mtime: 0x4806e498 -- Thu Apr 17 11:18:08 2008
    BLOCKS:
    (0):1951197, (1-2):1951200-1951201
    TOTAL: 3

See, I tried expand again and a new block, 1951201, is added now. Now the directory size is
12288, i.e. (4096+4096+4096). I tried to verify this increased size with ls -il but it shows only 4096:
$ls -il ext3crave
1926563 drwxrwxr-x 2 oss oss 4096 Apr 17 11:18 ext3crave
--- I don't know why???? But debugfs stat ext3crave gives the expected 12288 as the size.
Something wrong with the ls command???? [My guess: the filesystem is mounted, and the kernel's in-memory copy of the inode hasn't picked up what debugfs wrote directly to disk.]
    Then use lsdel to list deleted inodes.
    debugfs: lsdel
    Inode Owner Mode Size Blocks Time deleted
    15 500 100664 20 1/ 1 Wed Apr 16 15:56:13 2008
    1 deleted inodes found.

I guess the Blocks column represents the total number of data blocks released
when this file was deleted. Next comes my favourite one - undelete :-)
undel: Usage: undelete <inode> [pathname]
    debugfs: undel 15 /opt/oss/heee
    15: File not found by ext2_lookup

What I tried here is simple: we got the deleted inode number from lsdel and tried to undelete it to a new path. But it displayed an error message. I believe this undelete works only on ext2, not on ext3.

'write' looks similar to the cp command.

    debugfs: write /opt/exthide.txt exthide2.txt
    Allocated inode: 17

First, ls -il /opt gave me this:
? ?--------- ? ? ? ? ? exthide2.txt

but after some time I got:
17 -rw-rw-r-- 1 root root 21 Apr 17 12:11 exthide2.txt

Is it because the inode was still in a buffer and took some time to be written to disk??

dump or dump_inode
debugfs: dump giis.txt giis_inode.txt
dump writes the file contents into an output file.
Similar to the cp command? I wonder, am I missing something here? [One difference: dump reads the file through debugfs itself, so it works even when the filesystem cannot be mounted.]

Right... keep reading, we've got very few commands left to explore. You've already read 77.732454% of this document; just 22.267546% left to reach EOF :-) Keep going.... :-)

The next option is 'rdump' - it copies a directory and its contents to another location.
debugfs: rdump ext3crave /opt/extcrave
Now ext3crave is copied to /opt/extcrave. Similar to the cp -r command. Next option: set_super_value, or ssv, used to set values in the superblock. Use ssv -l to see the list of superblock fields which can be set. Its syntax is:
Usage: set_super_value <field> <value>
I wonder whether this option affects all superblock copies or only the first superblock? [I prefer not to play with the superblock as of now - since I may lose this file :-)] Similar to the superblock we have sif - set inode field. Its syntax is similar to ssv:
set_inode_field <file> <field> <value>
I'm trying to set the size to 100, but the user id is changing instead... maybe I'm using an older version of
debugfs.

    debugfs: sif <17> size 100
    debugfs: stat exthide2.txt
    Inode: 17 Type: regular Mode: 0664 Flags: 0x0 Generation: 0
    User: 100 Group: 0 Size: 0
    File ACL: 0 Directory ACL: 0
    Links: 1 Blockcount: 8
    Fragment: Address: 0 Number: 0 Size: 0
    ctime: 0x4806f111 -- Thu Apr 17 12:11:21 2008
    atime: 0x4806f111 -- Thu Apr 17 12:11:21 2008
    mtime: 0x4806f111 -- Thu Apr 17 12:11:21 2008
    BLOCKS:
    (0):1515007
    TOTAL: 1

Remember we can set inode details with the mi option too. But mi takes a filename as input and prompts for every field, whereas sif takes an inode (like <17>) and sets a single field. Enough of sif. The next option in our queue is 'logdump':
    debugfs: logdump
    Journal starts at block 1, transaction 402979
    Found expected sequence 402979, type 1 (descriptor block) at block 1
    Found expected sequence 402979, type 2 (commit block) at block 3
    Found expected sequence 402980, type 1 (descriptor block) at block 4
    No magic number at block 66: end of journal.

Journaling information. I'll learn about the journal stuff and let you know about it soon. Till then, just remember you can view journal log data with the logdump option. And then we have very simple options like 'htree', 'hash'. Use a file with hash:
    debugfs: hash giis.txt
    Hash of giis.txt is 0x444b98da (minor 0x0)

    If you got any idea what's this mean - then mail me [i'm serious] :-)

dirsearch - search for a file within a directory.
Usage: dirsearch <dir> <filename>
Should be an easy one. Try it yourself.

'imap' is an interesting option which gives the on-disk location of the specified file's inode.
    debugfs: imap giis.txt
    Inode 15 is part of block group 0
    located at block 1006, offset 0x0700
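A quick sanity check on those numbers - assuming a 4 KiB block size (check yours with stats), the absolute byte position of the inode on disk works out as block * block_size + offset:

```shell
# Convert the imap result above into an absolute byte offset on the device
# (assumes a 4096-byte block size)
BLOCK=1006
OFFSET=$((0x0700))   # 0x0700 = 1792 bytes into the block
BLOCK_SIZE=4096
echo $(( BLOCK * BLOCK_SIZE + OFFSET ))
```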
See.. giis.txt has inode number 15, which is part of block group 0, stored at block 1006, offset 0x0700, inside that group's inode table. Then be careful while using 'dump_unused'. It dumps unused blocks to the screen, with messages like
    debugfs: dump_unused

    Unused block 1515083 contains non-zero data:
    //garbage comes here.

It'll dump the contents of all deleted files. Finally we have bmap. Help says "Calculate the logical->physical block mapping for an inode".
    debugfs: bmap giis.txt 10234
    0

Again, I've got no idea about bmap. [As far as I can tell, the second argument is a logical block number within the file (0, 1, 2, ...); this one-block file has no logical block 10234, hence the 0.]

Congrats!!!!! You have successfully completed the course!!! You will be awarded $1000000.... I'm not serious here -- just kidding.. so please don't mail me asking for the $1000000 :-) :-) :-)

To explore your system, try debugfs in read mode... and to explode your system, try the same in write mode :-) If you like this doc, or got any questions, or got any answers to my questions - you can mail me.

    LINUX - debugfs explained, ext2 ext3 ext4 filesystem recovery - how to use debugfs - examples

This is my gift to you all, as this is a learned art and took a while to master (although I shouldn't say I'm a master, there is always room to be better, a lot of room), because most of these topics are just scattered across the web.
     
     
DEBUGFS SCRIPT: DUMP A FULL CORRUPT VOLUME TO A DIRECTORY (READ THE ARTICLE BELOW FIRST)
    #################################################################################
     
NOTE: first read the article below
     
First mount a USB / samba share / ntfs / iscsi target and note its full absolute pathname. Remove the last slash, ensure the first slash is there, and put that into the BACKUPLOCATION variable.
     
Change the THE_FS variable to match the corrupt volume.
     
    Here is the script
     
#!/bin/bash
################################################
# UPDATE - 1/10/2014
# DESCRIPTION: RDUMP CORRUPT FILESYSTEM TO USB
# USING DEBUGFS WITH CATASTROPHIC MODE
# THIS WILL NOT CARE ABOUT METADATA AS MUCH
# requirements: debugfs, sed, egrep, awk
################################################
# ONLY THING TO CHANGE: BACKUPLOCATION, TO WHERE YOU'RE DUMPING DATA (note share names preserved on dump)
# NOTE: BACKUPLOCATION STARTS WITH A / AND ENDS WITHOUT A /
# YOU CAN ADD EXTRA ARGS TO DEBUGFS IF NEEDED (SOMETIMES JUST -c DOESNT WORK)
################################################
# Change /dev/sda1 to match your volume name
# BEFORE RUNNING THIS, TEST LIKE SO:
# debugfs -R ls -c /dev/sda1
# MAKE SURE ALL YOUR FOLDERS SHOW UP WITH THIS:
# debugfs -R ls -c /dev/sda1 | sed -e 's/)/ /g' | egrep -i "[[:alpha:]]" | awk '{print $1}'
#################################################
BACKUPLOCATION="/mnt/_BACKUP" # <---------------------------------- change this to match your dump location
THE_FS="/dev/sda1" # <--------------------------------------------- change this to match your corrupt volume
OTHER_OPTIONS="" # if you need to add extra args to the debugfs command
cd ${BACKUPLOCATION}
echo "FROM ${THE_FS} GOING TO BACKUP THIS:"
debugfs ${OTHER_OPTIONS} -R ls -c ${THE_FS}
for i in `debugfs ${OTHER_OPTIONS} -R ls -c ${THE_FS} | sed -e 's/)/ /g' | egrep -i "[[:alpha:]]" | awk '{print $1}'`
do
echo "WORKING ON: ${i}"
time debugfs ${OTHER_OPTIONS} -R "rdump /$i ${BACKUPLOCATION}" -c ${THE_FS}
du -hc ${BACKUPLOCATION}/${i} | nl > ${BACKUPLOCATION}/du-${i}.txt 2>&1 &
done
     
     
HOW TO USE DEBUGFS TO RECOVER A CORRUPT EXT FILESYSTEM
    ######################################################
     
PLEASE READ THIS ALL THE WAY BEFORE DOING ANYTHING - I MENTION SOME KEY FACTS IN THE MIDDLE WHICH I FEEL I SHOULD HAVE MENTIONED IN THE BEGINNING :-)
     
ALSO PLEASE FORGIVE ME IF ALL OF A SUDDEN I JUST GO VERY BASIC ON YOU, THIS IS JUST MY RANDOM STYLE. IN THE END I'M VERY THOROUGH I PROMISE - ALMOST TOO THOROUGH.
     
NOTE: I TRY TO PREPEND LINES THAT ARE COMMANDS WITH A #. I WILL NOT PUT # ON COMMANDS IN THE MIDDLE OF A SENTENCE, AND I WILL NOT PUT # IN A SCRIPT
     
Some filesystems get so corrupt that a simple mount doesn't work. Even mounting with other superblocks doesn't work. A filesystem check gets way too many errors, pages and pages of errors. It's a horrible situation but possible to get out of, kinda - not guaranteed to recover anything.
     
PRESTEPS: if the filesystem is not too corrupt and you have it mounted, then unmount it. In this example I'm going to unmount sda1 as that's the fs we are working on. If that is your root filesystem, then pop in a linux recovery system like Knoppix and work from there. Knoppix will give you another root environment to work from, so that you can unmount sda1 or whatever your device is.
     
Run a filesystem check with no fixing - that's the -fn option. (Remember the filesystem check-and-repair, the -fy option, runs an automatic fix that's kind of like a blender: you end up with mumbojumbo. Sometimes it's good, sometimes it's bad, and sometimes the stuff lands in lost+found. The safe assumption is that the end result is bad, which is why, before doing a filesystem check and repair with "-fy", you should always backup/clone the disks or the currently available data.)
     
The way I run the filesystem check is like this (for this article I'm pretending the data is on sda1):
     
    # fsck -fn -C0 /dev/sda1
     
The -fn makes sure that we are safe and only check the filesystem without repairing it. Remember the whole purpose is to avoid doing any write operations. The -C0 gives a percentage progress bar.
     
Better than that is to run the filesystem check in a "nohup &" wrapper, which runs the command in the background and writes the screen output to a file called nohup.out. This file goes to the directory from where you ran the command (there is a way to redirect that file elsewhere and thus name it something else; just google "redirect nohup").
     
    # cd /
    # nohup fsck -fn -C0 /dev/sda1 &
     
Hit enter twice for this one, and then you will be back at the bash prompt while the command runs safely in the background.
     
    To see the output of the fsck with nohup, just go into the nohup file like this:
     
    # tail -f /nohup.out
     
That's why I cd to / before running nohup, so that the nohup.out file automatically goes to / for easy finding.
     
Cancel the view at any time with Ctrl+C; it doesn't hurt anything. To cancel the fsck itself you can "killall fsck" or "killall -9 fsck", or find out the PID of the fsck with "ps" or "ps aux" and then kill it with "kill PID#" or "kill -9 PID#", where PID# is the actual PID number. So if for example the PID of my fsck was 555, I would first try "kill 555", then check with the "ps" commands whether it's still there; if it didn't get killed, then do "kill -9 555".
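That kill sequence can be rehearsed harmlessly on a throwaway process, with sleep standing in for fsck:

```shell
# Harmless rehearsal of the kill-by-PID steps (sleep stands in for fsck)
sleep 100 &
PID=$!
kill $PID                       # polite SIGTERM first; escalate to kill -9 if needed
wait $PID 2>/dev/null || true   # reap it so ps won't show a zombie
kill -0 $PID 2>/dev/null && echo "still alive" || echo "gone"
```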
     
    Also run the following commands:
     
    # dumpe2fs -h /dev/sda1
     
That spits out the header information of the filesystem. In there we are looking for the following information:
     
Block count, Free blocks, and Block size - these will give us an idea of how much data we are going to recover.
     
    The formula is: Total Data = (Block Count - Free Blocks) * Block Size
Remember that the Block size is given in bytes, so if it's 4096 then that means 4096 bytes.
After crunching that formula down you will be left with how many bytes of data your system holds.
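As a worked example of that formula - the dumpe2fs numbers below are made up for illustration:

```shell
# Worked example of: Total Data = (Block Count - Free Blocks) * Block Size
# (these dumpe2fs -h values are hypothetical)
BLOCK_COUNT=244190390
FREE_BLOCKS=13987425
BLOCK_SIZE=4096
TOTAL_BYTES=$(( (BLOCK_COUNT - FREE_BLOCKS) * BLOCK_SIZE ))
echo "$TOTAL_BYTES bytes (~$(( TOTAL_BYTES / 1024 / 1024 / 1024 )) GiB) to recover"
```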
     
From dumpe2fs -h /dev/sda1 we are also looking at the "Filesystem state". If it says anything other than "clean" then you have problems. The best status check of a filesystem is the above "fsck -fn -C0 /dev/sda1" command; however that takes time, while dumpe2fs -h is instant. Also, a state of "clean with errors" can be bad or okay: bad meaning your FS won't mount, okay meaning there are some errors but it still mounts. (If the FS mounts, I would just back up the data from a read-only mount of the filesystem. If it's currently mounted, remount it like so: "mount -o remount,ro /dev/sda1", and then back up the data. If it's not yet mounted - for example you have your unit booted into a recovery system like Knoppix - then do "mount -o ro /dev/sda1 /randommountpoint1"; you can always name your mount point whatever you want.)
     
Next, let's get the locations of all the backup superblocks (the superblock is metadata describing the filesystem - data about the data, not the actual file contents themselves):
     
    # dumpe2fs /dev/sda1 | grep -i "superblock"
    # mke2fs -n /dev/sda1
     
    We are looking for a list that looks like this:
     
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848, 512000000, 550731776, 644972544
     
And I want you to turn it into a list that looks like this - space-delimited only:
     
    32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544
     
Set this big list into a variable:
    BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"
    Then we can refer to this at any time by typing $BIGLIST or ${BIGLIST}
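If you don't want to reformat the list by hand, the comma-and-newline-separated output can be flattened with tr (the numbers here are the ones from the example above):

```shell
# Flatten the comma-separated superblock list into the space-delimited BIGLIST
RAW="32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544"
BIGLIST=$(printf '%s' "$RAW" | tr ',\n' '  ' | tr -s ' ' | sed 's/^ *//; s/ *$//')
echo "$BIGLIST"
```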
     
We are going to use this list later to try several different recoveries. First we will try to recover without pointing at an alternate superblock, and then we will try every superblock listed above - not manually, but with a for loop.
     
If the problem is that your filesystem is not mounting, you can try something like this. If it works you can stop right here, back up your data, and you're done - at any point where you have access to the data and can successfully extract it, you can stop following this article:
     
    for i in $BIGLIST; do
    echo "===Trying to mount with SUPERBLOCK: $i===="
    mount -o sb=$i /dev/sda1 /randommountpoint1
    done
     
Or shrunk down to one line (notice where I place the semicolons: one at the end of every command, not including the "do" on the first line):
     
    for i in $BIGLIST; do echo "===Trying to mount with SUPERBLOCK: $i===="; mount -o sb=$i /dev/sda1 /randommountpoint1 ; done;
     
If none of that worked then we are left with debugfs - the debug filesystem tool for the ext filesystems. I prefer to only use it in catastrophic mode (the -c option), as it's my last resort before I give up (and move to something like Foremost or PhotoRec).
     
An excerpt from the man page explains what -c does:
     
    "Specifies that the file system should be opened in catastrophic mode, in which the inode and group bitmaps are not read initially. This can be useful for filesystems with significant corruption, but because of this, catastrophic mode forces the filesystem to be opened read-only."
     
    First of all though I should probably show how the manpage describes debugfs: "The debugfs program is an interactive file system debugger. It can be used to examine and change the state of an ext2, ext3, or ext4 file system. device is the special file corresponding to the device containing the file system (e.g /dev/hdXX)." 
     
    It also tells us that you shall use the command in this syntax / Synopsis: "debugfs [ -Vwci ] [ -b blocksize ] [ -s superblock ] [ -f cmd_file ] [ -R request ] [ -d data_source_device ] [ device ]"
     
Of these options, I only use -c, -R, -s, and -b.
     
Final man page excerpts:
    "-b blocksize 
    Forces the use of the given block size for the file system, rather than detecting the correct block size as normal. 
     
    -s superblock 
    Causes the file system superblock to be read from the given block number, instead of using the primary superblock (located at an offset of 1024 bytes from the beginning of the filesystem). If you specify the -s option, you must also provide the blocksize of the filesystem via the -b option. 
     
    -R request 
    Causes debugfs to execute the single command request, and then exit...."
     
    The plan of action is as such:
     
1. Set up a mount destination (I won't go deep into this; just mount a USB drive or a network share - whichever you use, make sure it has enough space to store what we calculated above as "Total Data")
     
2. Create a subdirectory in the mount destination to dump everything to; this is optional, I'm just obsessive about keeping folders organized
     
3. We will try to enter debugfs using regular methods, without specifying any alternate superblock or alternate blocksize; if that fails, we will run a script to find the winning combination.
     
4. I will show you how to use debugfs as a script. debugfs is normally set up as an interactive prompt program, like ftp, where you type commands into it, so scripting is handy for mass operations - especially since debugfs has one big catch (more annoying than big)
     
What is that catch of debugfs? With this I will explain the rdump command of debugfs.
     
    Imagine our /dev/sda1 filesystem has the following folders on the root: media, etc, sys, var, home, data
     
Well, with debugfs you can't extract (or dump - or, to be more technical, rdump for recursive dump) all of /dev/sda1 in one go; you have to specify each folder one by one. However, since we have the ability to dump recursively, all we have to do is one rdump for media, one for etc, one for sys, one for var, one for home, and one for data. It is easier to automate that with a script. In case you are wondering, a single rdump will extract all of the contents of the named folder.
     
Let me also give you an example of how rdump lays out its output, because to me it matters whether it creates the folder or not. Most people don't worry about that, but I hate to mix my directories up.
     
For example, say I am located in an empty directory called /destination, which is where my USB disk is mounted, and I run "rdump /media ." (a note on syntax: I'm telling rdump to take the /media folder from /dev/sda1 and dump it here - hence the dot - "here" being the /destination directory I am in, i.e. the USB). The good news is that I get the media folder itself there, not just the insides of the media folder. So in the end I have a /destination/media folder with all of its correct contents.
     
If you're wondering whether there is a way to do "rdump / .", the answer is no. That is a limitation of the program, so you have to rdump every folder and file on the root one by one, as explained above. But that is not a big deal - especially since no one in their right mind should have more than a dozen folders (and a dozen files) on the root of their filesystem; if they do, that's fine, it's just more rdump commands.
     
    So for the scenario above I would just do:
    "rdump /media ."
    "rdump /etc ."
    "rdump /sys ."
    "rdump /var ."
    "rdump /home ."
"rdump /data ."
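Since debugfs can run a single command non-interactively with -R, the per-folder rdumps above can be generated by a small shell loop instead of being typed one by one. This is only a sketch using the example folder names; the helper name gen_rdump_cmds and the review-then-pipe-to-sh pattern are my own additions, not part of debugfs:

```shell
# Hypothetical helper: print one non-interactive debugfs rdump command
# per top-level folder. Review the output first, then pipe it to sh.
gen_rdump_cmds() {
    for d in "$@"; do
        printf 'debugfs -R "rdump /%s ." -c /dev/sda1\n' "$d"
    done
}

gen_rdump_cmds media etc sys var home data
# once the printed commands look right:
# gen_rdump_cmds media etc sys var home data | sh
```

The indirection through printed commands lets you sanity-check the folder list before anything touches the disk.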
     
    So Lets begin.
     
    THE STEPS
    #########
     
    (Step 0) 
     
Make the /mnt directory if it is not there; if it is there and something else is mounted on it, unmount it first. Check mounts with "mount" and unmount things with "umount ..." - look on Google for more info on that.
     
    mkdir /mnt
       
    (Step 1)
Mount something to put the DATA on - either a remote share or a USB drive
     
    Mount USB
     
Plug in your USB drive and check if, and how, it shows up:
    # lsusb
    # cat /proc/partitions
    # dmesg
     
Whatever letter & partition your USB gets - I'll just call the letter "b" and the partition number 1:
    # mount /dev/sdb1 /mnt
     
--or, if you're mounting a share--
     
LIST THE SHARES - IF ASKED FOR A PASSWORD, PUT IN THE PASSWORD OF A USER THAT HAS ACCESS TO THAT SHARE; IT WILL ENUMERATE THE SHARES IF YOU GAVE CORRECT CREDENTIALS
     
    # smbclient -L remote-ip
    # smbclient -L remote-ip -U username
     
    MOUNT THE SHARE
    # mount -t cifs //remote-ip/sharename /mnt
# mount -t cifs -o user=username //remote-ip/sharename /mnt
     
MAKE SURE THE DESTINATION HAS ENOUGH SPACE FOR WHAT debugfs WILL DUMP; AS LONG AS YOU HAVE MORE SPACE THAN THE "Total Data" NUMBER WE CALCULATED ABOVE, YOU'RE SET
     
    # df
    # df -h
     
    or both at once:
     
    # df && df -h
     
    (Step 2)
     
    Create the optional directories to dump to
     
    # mkdir /mnt/dump
     
    (Step 3)
     
Enter debugfs in catastrophic mode (-c) - this mode tries its best to open the damaged filesystem; without catastrophic mode, even debugfs won't work here
     
FIRST, GO TO THE FOLDER WHERE YOU WANT TO DUMP THE RECOVERY
     
    # cd /mnt/dump
    # debugfs -c /dev/sda1
     
    DEBUGFS OPENS UP A NEW PROMPT, IN IT TYPE THE FOLLOWING (ONLY TYPE THE STUFF AFTER THE debugfs: PART)
     
    FIRST - LIST THE CURRENT DIRECTORIES WE WILL ATTEMPT TO RECOVER
    # debugfs: ls
     
THE OUTPUT OF ls IS OMITTED HERE BECAUSE I'M MAKING THIS EXAMPLE UP AS I GO (you should see the folder name, the inode in parentheses, and some other names; we are just interested in the names of the folders)
     
LET'S RECOVER A FOLDER USING rdump [filesystem directory] [local directory - dump destination]; rdump STANDS FOR RECURSIVE DUMP. WE TELL IT THE FOLDER TO DUMP FROM /dev/sda1 - IN THIS CASE /backup (WHICH IS THE BACKUP SHARE) - AND THEN WE TELL IT WHERE TO DUMP TO, WHICH IS . (THE CURRENT WORKING DIRECTORY - /mnt/dump - REMEMBER WE cd'd INTO THIS FOLDER BEFORE RUNNING debugfs -c)
     
    # debugfs: rdump /backup .
     
IF IT CAN RECOVER, IT WILL. IT WILL ALSO TAKE FOREVER ON BIG FOLDERS, AND IT WILL COME UP WITH SOME PERMISSIONS ERRORS - JUST IGNORE THOSE. IT WILL RECOVER ALL THAT IT CAN
     
Now, when that is done, just repeat the "ls" command and the "rdump /[folder or file] ." commands until you have all of it.
REPEAT FOR ALL THE FOLDERS THAT YOU SEE IN THE "debugfs: ls" OUTPUT. WHEN IT'S DONE, JUST TYPE "debugfs: quit" TO EXIT. WHILE IT'S COPYING, YOU CAN DO STEP 4 BELOW TO WATCH THE PROGRESS.
     
WHAT IF STEP 3 DIDN'T WORK - SPECIFICALLY, THE NORMAL SUPERBLOCK DIDN'T WORK
    ########################################################################
     
I know the man page says that with the catastrophic -c option "the inode and group bitmaps are not read initially", but I still try it with different superblocks.
     
If debugfs -c didn't return any ls information, we need to run through a loop. Remember that list of superblock numbers I had you get?
     
    Type the following to see that list again:
     
    # echo $BIGLIST
    32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544
     
Did you forget how to get $BIGLIST?
Let me recap from the beginning (not the whole beginning, just how I got the list):
Run "mke2fs -n /dev/sda1" or "dumpe2fs /dev/sda1 | grep superblock", which should give you a list of superblocks; convert that list into the following command and hit enter after you type it or paste it in:
     
    # BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"
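If you would rather not hand-edit the list, the numbers can be pulled out of the dumpe2fs output with a small pipeline. This is a sketch: the helper name extract_superblocks is mine, and it assumes dumpe2fs prints lines of the form "Backup superblock at 32768, Group descriptors at ..." - check your own output before relying on it:

```shell
# Hypothetical helper: pull backup superblock numbers out of dumpe2fs output.
# Assumes lines like: "  Backup superblock at 32768, Group descriptors at 32769"
extract_superblocks() {
    grep -i 'backup superblock at' | sed 's/.*superblock at \([0-9]*\).*/\1/'
}

# Normally: BIGLIST=$(dumpe2fs /dev/sda1 | extract_superblocks | tr '\n' ' ')
# Demonstrated here on two sample lines:
BIGLIST=$(printf '  Backup superblock at 32768, Group descriptors at 32769\n  Backup superblock at 98304, Group descriptors at 98305\n' | extract_superblocks | tr '\n' ' ')
echo "$BIGLIST"
```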
     
So let's try the debugfs command with all those superblocks, and let's have debugfs automatically run the "ls" command for us so we don't have to type it - that's the -R switch. The -s switch is where we will try all the different superblock numbers from $BIGLIST. We will also try a couple of different filesystem block sizes; I'm familiar with 4K and 16K filesystem blocks, which translate to 4096 and 16384 bytes respectively - you can try your own if you want:
     
    BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"
    for z in 4096 16384; do
    for i in $BIGLIST; do
    echo "====BLOCK SIZE: $z==SB: $i===="
    debugfs -s $i -b $z -R "ls" -c /dev/sda1
    done
    done
     
    Or shrunk to one line:
     
    BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"
    for z in 4096 16384; do for i in $BIGLIST; do echo "====BLOCK SIZE: $z==SB: $i====" && debugfs -s $i -b $z -R "ls" -c /dev/sda1; done; done;
     
    Or if you dont wanna use $BIGLIST variable and just do it all in one:
     
    # for z in 4096 16384; do for i in 32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544; do echo "====BLOCK SIZE: $z==SB: $i====" && debugfs -s $i -b $z -R "ls" -c /dev/sda1; done; done;
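The probe loop can also be wrapped so that it only prints combinations that actually return a listing, instead of you scrolling through the noise. A sketch: try_ls is a hypothetical wrapper around the real debugfs call (debugfs's exit status is not a reliable success signal, so we test whether ls produced any output at all):

```shell
# Hypothetical probe: print only the block-size/superblock pairs for which
# debugfs's "ls" returns something. try_ls wraps the real debugfs call so
# the loop logic is separate from the device access.
try_ls() {  # $1 = superblock, $2 = block size
    debugfs -s "$1" -b "$2" -R "ls" -c /dev/sda1 2>/dev/null
}

find_combos() {
    for z in 4096 16384; do
        for i in $BIGLIST; do
            if [ -n "$(try_ls "$i" "$z")" ]; then
                echo "BLOCK SIZE: $z SB: $i"
            fi
        done
    done
}
# usage: BIGLIST="32768 98304 ..."; find_combos
```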
     
So, great, you ran one of these commands - now what? Hopefully one of them returned a folder and file listing of some sort; then you can use that magical superblock number and block size to enter back into debugfs.
     
For example, let's say I just ran the script above and it returned a whole bunch of nothing until I got to the line
 
====BLOCK SIZE: 4096==SB: 11239424====
 
which just had the full file listing below it, similar to this (sorry if this doesn't follow the past examples of which folders are on the root of /dev/sda1 - the point is to illustrate that this returned superblock and block size are the winning combination):
 
====BLOCK SIZE: 4096==SB: 11239424====
     2  (12) .    2  (12) ..    11  (20) lost+found    29532161  (16) media
     46256129  (16) backup    14548993  (12) home    3022849  (12) Alpha
     12290  (20) aquota.user    12291  (20) aquota.group
     88870913  (12) Bravo
     
    Now that we know the winning block size is 4096 and superblock # is 11239424
     
    We can start debugfs like this:
     
    # debugfs -b 4096 -s 11239424 -c /dev/sda1
     
    Then do your magic from there.
     
As a side note, you don't have to specify both the superblock and the blocksize; you can let it figure out one or both. The next 3 are perfectly legal commands:
     
    # debugfs -b 4096  -c /dev/sda1
    # debugfs -s 11239424 -c /dev/sda1
    # debugfs -c /dev/sda1
     
If all 4 (these 3 plus the full command above) work, that's fine - pick whichever; they should all give the same results.
     
    The final one is the original from Step 3.
     
Once inside, you do the usual "ls" and "rdump /[folder] ." as you choose. For example, to recover the above system I would do the following:
     
    cd /mnt/dump
    debugfs -b 4096 -s 11239424 -c /dev/sda1
    ls
    rdump /lost+found .
    rdump /media .
    rdump /backup .
    rdump /home .
    rdump /Alpha .
    rdump /aquota.user .
    rdump /aquota.group .
    rdump /Bravo .
     
Of course you would have to wait forever between each one before starting the next, which brings me to the next subject - script this stuff so you don't have to wait.
     
    GREAT NOW LETS SCRIPT WITH THIS
    ###############################
     
So let's just jump right in, and then I'll explain. Say I want to do the above in a script so I don't have to wait. As you can already foresee, I will restate the above in the NOT SCRIPT section, and then jump into the script in the SCRIPT section:
     
NOT SCRIPT - ORIGINAL - WHAT WE DON'T WANT, BECAUSE IT WAITS FOR US TO TYPE A NEW COMMAND EVERY TIME (remember, we don't type the "debugfs: " prompt part; that's already there):
    cd /mnt/dump
    debugfs -b 4096 -s 11239424 -c /dev/sda1
    debugfs:  ls
    debugfs: rdump /lost+found .
    debugfs:  rdump /media .
    debugfs:  rdump /backup .
    debugfs:  rdump /home .
    debugfs:  rdump /Alpha .
    debugfs:  rdump /aquota.user .
    debugfs:  rdump /aquota.group .
    debugfs:  rdump /Bravo .
     
    SCRIPT - FINAL:
    cd /mnt/dump
    debugfs -b 4096 -s 11239424 -R "ls" -c /dev/sda1
    debugfs -b 4096 -s 11239424 -R "rdump /lost+found ." -c /dev/sda1
    debugfs -b 4096 -s 11239424 -R "rdump /media ." -c /dev/sda1
    debugfs -b 4096 -s 11239424 -R "rdump /backup ." -c /dev/sda1
    debugfs -b 4096 -s 11239424 -R "rdump /home ." -c /dev/sda1
    debugfs -b 4096 -s 11239424 -R "rdump /Alpha ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /aquota.user ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /aquota.group ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /Bravo ." -c /dev/sda1
     
You can just select all of those, copy, and paste them right in; you will get the following directory structure afterwards:
     
    /mnt/dump/lost+found
    /mnt/dump/media
    /mnt/dump/backup
    /mnt/dump/home
    /mnt/dump/Alpha
/mnt/dump/aquota.user
/mnt/dump/aquota.group
/mnt/dump/Bravo
     
    You can also combine the above commands into one pasteable line, instead of one pasteable chunk of commands:
     
The following 2 are done separately:
    # cd /mnt/dump
    # debugfs -b 4096 -s 11239424 -R "ls" -c /dev/sda1
     
    Then combine to 1 line:
# debugfs -b 4096 -s 11239424 -R "rdump /lost+found ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /media ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /backup ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /home ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /Alpha ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /aquota.user ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /aquota.group ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /Bravo ." -c /dev/sda1;
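The repeated per-folder lines can also be collapsed into a single loop. A sketch using the example's superblock and block size; the DEBUGFS variable is my own addition so the loop can be previewed with DEBUGFS=echo before anything runs for real:

```shell
# Sketch: one loop instead of one pasted line per folder. DEBUGFS is
# overridable (DEBUGFS=echo) to preview the commands without running them.
DEBUGFS=${DEBUGFS:-debugfs}
run_rdumps() {
    for d in lost+found media backup home Alpha aquota.user aquota.group Bravo; do
        "$DEBUGFS" -b 4096 -s 11239424 -R "rdump /$d ." -c /dev/sda1
    done
}
# preview first:  DEBUGFS=echo run_rdumps
# then for real:  run_rdumps
```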
     
JUST FOR THE CURIOUS: if you don't want to specify -b and -s, and it works without specifying the superblock and block size, then you can simply do this:
     
    cd /mnt/dump
    debugfs -R "ls" -c /dev/sda1
    debugfs -R "rdump /lost+found ." -c /dev/sda1
    debugfs -R "rdump /media ." -c /dev/sda1
    debugfs -R "rdump /backup ." -c /dev/sda1
    debugfs -R "rdump /home ." -c /dev/sda1
    debugfs -R "rdump /Alpha ." -c /dev/sda1
debugfs -R "rdump /aquota.user ." -c /dev/sda1
debugfs -R "rdump /aquota.group ." -c /dev/sda1
debugfs -R "rdump /Bravo ." -c /dev/sda1
     
    Or in 1 command style:
     
    First do these:
    # cd /mnt/dump
    # debugfs -R "ls" -c /dev/sda1
     
    Then do this single line:
# debugfs -R "rdump /lost+found ." -c /dev/sda1; debugfs -R "rdump /media ." -c /dev/sda1; debugfs -R "rdump /backup ." -c /dev/sda1; debugfs -R "rdump /home ." -c /dev/sda1; debugfs -R "rdump /Alpha ." -c /dev/sda1; debugfs -R "rdump /aquota.user ." -c /dev/sda1; debugfs -R "rdump /aquota.group ." -c /dev/sda1; debugfs -R "rdump /Bravo ." -c /dev/sda1;
     
That's pretty much all the important notes I have on debugfs. Here is how to use the console portion of debugfs (the non-script part, running it without -R as we did in Step 3):
     
    HOW DEBUGFS IS USED:
    ####################
     
debugfs operates like this: it uses commands similar to the ftp command. A quick recap: run local commands with a prefix of the letter l or !. For example, !pwd tells me the current working directory I will dump to; !pwd returns "/mnt/dump", and !ls lists nothing because there are no folders in /mnt/dump yet. Now a plain ls lists all the folders on the root of the /dev/sda1 filesystem, so it lists the following for me:
     
    # debugfs -c /dev/sda1
    debugfs 1.41.14 (22-Dec-2010)
    /dev/sda1: catastrophic mode - not reading inode or group bitmaps
    debugfs:
    debugfs:  ls
     2  (12) .    2  (12) ..    11  (20) lost+found    29532161  (16) media
     46256129  (16) backup    14548993  (12) home    3022849  (12) Alpha
     12290  (20) aquota.user    12291  (20) aquota.group
     88870913  (12) Bravo
     
Other local commands use the l prefix, as mentioned. For example, we are in the /mnt/dump directory; what if I wanted to change to /mnt/other/ so that I can dump the files there:
    "lcd /mnt/other"
     
    Type "help" to get a list of all the options.
     
(STEP 4) Optional: watch the progress!
     
To watch the progress, open another shell; or, if you're using screen, open another screen window; or, if you're using detach, detach - whatever works:
     
    watch -n0.5 "df && df -h" 
     
BEFORE BACKING UP, IT'S A GOOD IDEA TO RUN "df && df -h" TO SEE THE SIZE, IN KILOBYTES AND HUMAN-READABLE FORM, OF THE DESTINATION - HOPEFULLY IT'S CLOSE TO EMPTY. THEN YOU KNOW THE SIZE OF THE DATA WE WANT TO BACK UP FROM /dev/sda1.
     
IF YOU DIDN'T RUN df && df -h BEFORE THE DEBUGFS COMMAND, THAT'S FINE; REMEMBER WE STILL HAVE OUR OUTPUT OF dumpe2fs -h /dev/sda1, WHICH WE USED TO CALCULATE Total Data. THAT NUMBER SHOULD BE IN BYTES.
     
TO RECAP: RUN "dumpe2fs -h /dev/sda1" BEFORE RUNNING debugfs AND GET THE FOLLOWING NUMBERS: Block count, Free blocks, AND Block size. Block size IS USUALLY 4096, MEANING 4096 BYTES, OR 4 KILOBYTES. THEN DO THE FOLLOWING MATH TO FIND THE AMOUNT OF DATA WE WILL BE BACKING UP (NOTE: THIS IS THE NUMBER WE WANT OUR watch COMMAND TO REACH, ASSUMING YOU STARTED WITH AN EMPTY DESTINATION DEVICE/SHARE): ([block count] - [free blocks]) * [block size] = [total amount of data in bytes]
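The arithmetic can also be done directly in the shell rather than on paper. A sketch using the example numbers that appear later in this article; substitute the values from your own dumpe2fs output:

```shell
# Total Data = (Block count - Free blocks) * Block size, in bytes.
# Example values; take yours from "dumpe2fs -h /dev/sda1".
BLOCK_COUNT=1459093504
FREE_BLOCKS=410639656
BLOCK_SIZE=4096
TOTAL_BYTES=$(( (BLOCK_COUNT - FREE_BLOCKS) * BLOCK_SIZE ))
echo "$TOTAL_BYTES bytes"   # roughly 4.29 TB for these numbers
```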
     
    SUMMARY VIA QUICK FULL EXAMPLE
    ##############################
     
Since this is a lot to take in, and I have a long-winded way of writing, let me throw in an example taken from the beginning - this is a script-style example.
     
    Scenario: Root filesystem failed, linux machine doesnt boot up.
     
    1. Download Knoppix on another PC and burn it to a CD
     
2. Pop the Knoppix CD into the problem PC, start it up, and open a terminal shell.
Run the following commands to identify your main corrupt filesystem and how it is labeled:
    # dmesg | grep "[hs]d[abcdefghijklmnopqrstuvwxyz]"
    # cat /proc/partitions
     
    Lets say the filesystem in this case was also /dev/sda1
     
Hopefully you can get the filesystem size information; this is optional, it's just so that while the backup is happening we know when it's close to done:
     
    # dumpe2fs -h /dev/sda1
     
    We get the following information:
    Block count:              1459093504
    Free blocks:              410639656
    Block size:               4096
     
PLUGGING INTO MY FORMULA GIVES: (1459093504-410639656)*4096 = 4.294467e+12 BYTES,
WHICH GIVES ME AN EXCUSE TO SHOW YOU WOLFRAMALPHA FOR UNIT CONVERSION (it's an amazing calculator)...
GO TO www.wolframalpha.com AND TYPE "(1459093504-410639656)*4096 bytes" IN THE BOX AND HIT ENTER
ONE OF THE ANSWERS IS: 4.29 TB - NOTE ALL OF THE ANSWERS ARE CORRECT; IT SHOWS YOU MANY FORMS OF THE CORRECT ANSWER, WHICH IS WHY I LIKE IT
     
3. Mount the destination - where we will dump the damaged data to - I will show this example both with a USB drive and with a mounted share
     
3a. I want to back up all my stuff to a USB drive - plug in the USB (the one you found that has 5 TB of storage, lol) and run the following commands to identify it; let's pretend in this case it's sdb1.
     
    # dmesg | grep "[hs]d[abcdefghijklmnopqrstuvwxyz]"
    # cat /proc/partitions
    # mount /dev/sdb1 /mnt
     
3b. I want to back up all my stuff to a share: first, on a Windows machine (IP address 10.10.10.10) that has enough space to cover the Total Data of 4.29 TB, I make a folder called "sally" on the volume with enough space, right-click the folder, and enable sharing on it. I make sure sharing is set to full control for everyone, but I limit security to a user called "fred" with the password "12345678" and full control for user "fred". Then on Linux I do the following:
     
    # smbclient -L 10.10.10.10
     
OR, if it wants a username, give it "fred":
     
    # smbclient -L 10.10.10.10 -U fred
     
If asked for a password, just try the 12345678 that is fred's password. It should show me the sally share I made.
     
    I mount the share with this
     
# mount -t cifs -o user=fred //10.10.10.10/sally /mnt
     
4. Make the subdirectories for organization - optional, I just like to have folders within folders within folders - folderception
    # mkdir /mnt/dump
     
    5. Get into the folder
     
    # cd /mnt/dump
     
    6.  Debug FS time: Get the file listing
    # debugfs -R "ls" -c /dev/sda1
     
FAIL!!! Oh no! Well, let's try another superblock.
     
    7. Find out the superblock numbers:
    # mke2fs -n /dev/sda1
I take the superblock numbers from the output, paste them into a text editor, remove the commas and newlines, and add double quotes until I get:
"32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544" (with the quotes)
     
    8. Make the BIGLIST Variable out of it
    # BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"
     
    9. Run the following scriptlet:
    # for z in 4096 16384; do for i in $BIGLIST; do echo "====BLOCK SIZE: $z==SB: $i====" && debugfs -s $i -b $z -R "ls" -c /dev/sda1; done; done;
     
In this case I get a file listing with 16384 block size and superblock 819200
     
10. debugfs time: get the file listing - revisited, and not failed this time, unlike step 6:
# debugfs -b 16384 -s 819200 -R "ls" -c /dev/sda1
     
We get a file listing similar to the one earlier in this article - as an obvious side note for the confused, this is the same listing we saw in step 9 when we found the correct superblock and block size:
     2  (12) .    2  (12) ..    11  (20) lost+found    29532161  (16) media
     46256129  (16) backup    14548993  (12) home    3022849  (12) Alpha
     12290  (20) aquota.user    12291  (20) aquota.group
     88870913  (12) Bravo
     
11. So let's say I want to extract as much as I can of the following: lost+found, media, backup, home, Alpha, aquota.user, aquota.group, and Bravo
     
    I can copy paste this giant code in, or even write it into a bash script: 
debugfs -b 16384 -s 819200 -R "rdump /lost+found ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /media ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /backup ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /home ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /Alpha ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /aquota.user ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /aquota.group ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /Bravo ." -c /dev/sda1
     
Or I can shrink this down to one command and paste it in - I would rather do it this way, since with the multi-line version the last command sometimes doesn't run if you don't select the final newline character, so in my opinion this next command is the best way to do it:
     
# debugfs -b 16384 -s 819200 -R "rdump /lost+found ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /media ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /backup ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /home ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /Alpha ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /aquota.user ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /aquota.group ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /Bravo ." -c /dev/sda1;
     
As a quick tip, we could nohup it and put it in the background. nohup won't take a subshell directly, so wrap the command list with bash -c:
     
# nohup bash -c 'debugfs -b 16384 -s 819200 -R "rdump /lost+found ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /media ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /backup ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /home ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /Alpha ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /aquota.user ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /aquota.group ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /Bravo ." -c /dev/sda1' &
     
Follow the output with "tail -f nohup.out"; the nohup file in this case will be in /mnt/dump, since that's where we ran the command from.
     
    12. To follow the progress just do the following:
    # watch -n0.5 "df && df -h" 
You know it's done when you reach the 4.29 TB, or whatever the size of Total Data was.
     
13. When you're done, just unmount your USB or share.
Type sync first, just to ensure all the writes are finalized and synced across the system:
     
    # sync
    # cd /
    # umount /mnt/
     
Original source: https://www.cnblogs.com/jvava/p/3990542.html