Disk Maintenance under Linux (Disk Recovery)
The next utility we'll look at is dumpe2fs. To invoke this utility, type dumpe2fs device to get the block group information for a particular device. You will actually get more information than you're likely to use, but if you understand the physical file-system structure, the output will be comprehensible. A sample output is shown in Listing 1.
We really need only the first 22 lines of output. (The very first line, with the version number, is not part of the output table.) Most of these lines are fairly self-explanatory; however, one or two could use further explanation. The first line tells us the file system's magic number. This number is not random; it is always 0xEF53 for an ext2 file system. The 0x prefix identifies the number as hexadecimal. The EF presumably stands for Extended Filesystem, though I am unclear about the background of the 53. (Early ext2fs versions had 51 as the final digits and are incompatible with the current version.) The second line indicates whether a file system is clean or unclean. A file system that has been properly synced and unmounted will be labeled clean. A file system that is currently mounted read-write, or that was not properly synced before shutdown (as with a sudden power failure or a hard reset), will be labeled not clean. A not clean indication will trigger an automatic fsck on the next normal system boot.
Another important line for us is the block count, which tells us how many blocks we have on the partition; we'll need this number later with e2fsck and badblocks. However, I already know how many blocks I have on the partition; I see it every time I invoke df to check my disk usage. (If this were a game show, the raspberry would have sounded.) Check the output of df against dumpe2fs; the numbers are not the same. The block count in dumpe2fs is the one we need. The number df gives us is adjusted to show only the 1024-byte blocks we can actually access in one form or another; superblocks, for example, aren't counted. Have you also noticed that the "used" and "available" numbers don't add up to the total number of blocks? This discrepancy occurs because, by default, the system reserves approximately five percent of these blocks for the superuser. This percentage can be changed, as can many of the other parameters listed in the first 22 lines of the dumpe2fs readout; but again, unless you know what you are doing, I strongly recommend against it.
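As a quick sanity check on that discrepancy, the arithmetic can be sketched in the shell. The block count below is made up for illustration; substitute the Block count line from your own dumpe2fs output:

```shell
# Hypothetical figures: a 102400-block (1K-block) partition
# with the default five percent reserved-block percentage.
block_count=102400
reserved=$(( block_count * 5 / 100 ))   # blocks held back by default
echo "$reserved"                        # reserved blocks
echo $(( block_count - reserved ))      # roughly what df treats as usable
```

The gap between "used" plus "available" and the total in df's output should be close to that reserved figure.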
By the way, the information you are reading in the dumpe2fs output is a translation into English of the partition's superblock information, stored in block one. Copies of the superblock are also maintained at each group boundary for backup purposes. The Blocks per group value tells us the offset of each backup copy: the first superblock begins at block one, and the succeeding copies are located at multiples of the Blocks per group value, plus one.
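That arithmetic can be sketched in the shell. The 8192 here is an assumed blocks-per-group value, typical of an ext2 file system with 1024-byte blocks; check the Blocks per group line in your own dumpe2fs output:

```shell
# Compute the first three backup superblock locations.
# 8192 is an assumed blocks-per-group value; yours may differ.
blocks_per_group=8192
for i in 1 2 3; do
    echo $(( i * blocks_per_group + 1 ))
done
```

For this example, the loop prints 8193, 16385 and 24577, which should match the group boundaries in the second part of the dumpe2fs listing.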
While we don't really need more than the first 22 lines of information, a quick look at the rest of the listing can be useful. The information is grouped by blocks and reflects how your disk is organized to store data. The superblocks are not specifically mentioned, but they account for the first two blocks that are apparently missing from the beginning of each group. The block bitmap is a simple map showing the usage of the blocks in a group: it contains a one or a zero for each block, corresponding to used and empty blocks, respectively. The inode (information node) bitmap is similar to the block bitmap, but corresponds to the inodes in the group. The inode table is the list of the inodes themselves. The next line gives the number of free blocks. Note that, while some groups have no free blocks, they all have free inodes. These inodes will not be used; they are extras. Some files use more than one block to store information but need only one inode to reference the file, which explains the unused inodes.
Now that we have the information we need (finally), we can run badblocks. This utility does a surface scan for defects and is invoked by typing, as a minimum:

badblocks device blocks-count
The device is the one we need to check (/dev/hda1, /dev/sda1 and so on), and the blocks-count is the block count we noted after running dumpe2fs (above).
Four options are available with badblocks. The first is -b, which takes the block size as its argument; it is needed only if fsck will not run or is confused about the block size. The second is -o, which takes a filename argument and saves to that file the block numbers badblocks considers bad. If this option is not specified, badblocks sends all output to the screen (stdout). The third is -v, for verbose (self-explanatory). The final option is -w, which will destroy all the data on your disk by writing new data to every block and verifying the write. (Once again, you've been warned.)
Your best bet here is to run badblocks with the -o filename option. As bad blocks are encountered, they will be written to the file as a number, one to a line. This will be very helpful later on. In order to run badblocks in this way, the file system you are writing the file to must be mounted read-write. As root—and you should be root to do this maintenance—you can switch to your home directory, which should be located somewhere in the root partition. badblocks will save the file in the current directory unless you qualify the filename with a full pathname. If you need to mount the root partition read-write to write the file, simply type: mount -n -o remount,rw /.
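Because the -o file contains one bad-block number per line, it is easy to inspect with standard tools. The block numbers below are made up for illustration; a real file would come straight from badblocks:

```shell
# Simulate a badblocks -o output file: one hypothetical
# bad-block number per line, just as badblocks writes them.
printf '1034\n1035\n20481\n' > badblocks.txt

wc -l < badblocks.txt             # how many bad blocks were found
sort -n badblocks.txt | head -n 1 # the lowest-numbered bad block
```

This one-number-per-line format is also exactly what e2fsck accepts with its -l option, which is one reason saving the list to a file pays off later.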
Once you have your list of bad-block numbers, you'll want to check those blocks to see whether they are in use and, if not, mark them as in use. If a block is already marked in use, we may want to clear it (since the data in it might be corrupted) and then reset it as allocated. Print the list of bad blocks; you'll need it later.