Disk Maintenance under Linux (Disk Recovery)
The next utility we'll look at is dumpe2fs. To invoke this utility, type dumpe2fs device to get the block group information for a particular device. You will actually get more information than you're likely to use, but if you understand the physical file system structure, the output will be comprehensible to you. A sample output is shown in Listing 1.
We really need only the first 22 lines of output. (The very first line, with the version number, is not part of the output table.) Most of these lines are fairly self-explanatory; however, one or two could use further explanation. The first line tells us the file system's magic number. This number is not random: it is always 0xEF53 for ext2 file systems. The 0x prefix identifies the number as hexadecimal. The EF presumably stands for Extended Filesystem, with 53 as a version or mod number, though I am unclear about the background of the 53. (Original ext2fs versions had 51 as the final digits and are incompatible with the current version.) The second line indicates whether the file system is clean or unclean. A file system that has been properly synced and unmounted will be labeled clean. A file system that is currently mounted read-write, or that was not properly synced before shutdown (as with a sudden power failure or a hard reset), will be labeled not clean. A not clean indication will trigger an automatic fsck on the next normal system boot.
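As an illustration, the magic number can be read straight off the disk: the superblock begins 1024 bytes into the partition, and the s_magic field is a little-endian 16-bit value at offset 56 within it. The following sketch (purely illustrative, run here against a simulated partition image rather than a real device) checks for 0xEF53:

```python
import io

EXT2_SUPER_MAGIC = 0xEF53
SUPERBLOCK_OFFSET = 1024   # superblock starts 1024 bytes into the partition
MAGIC_OFFSET = 56          # s_magic field offset within the superblock

def read_ext2_magic(dev):
    """Read the 16-bit little-endian magic number from an ext2 superblock."""
    dev.seek(SUPERBLOCK_OFFSET + MAGIC_OFFSET)
    return int.from_bytes(dev.read(2), 'little')

# Simulated partition image: 2048 zero bytes with the magic patched in.
image = bytearray(2048)
image[1080:1082] = EXT2_SUPER_MAGIC.to_bytes(2, 'little')
print(hex(read_ext2_magic(io.BytesIO(bytes(image)))))  # 0xef53
```

On a real system you would pass an open file object for the device node (for example /dev/sda1), which requires root.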
Another important line for us is the block count, which tells us how many blocks we have on the partition; we'll need this number later with e2fsck and badblocks. However, I already know how many blocks I have on the partition; I see it every time I invoke df to check my disk usage. (If this were a game show, the raspberry would have sounded.) Check the output of df against dumpe2fs: the numbers are not the same, and the block count in dumpe2fs is the one we need. The number df gives us is adjusted to show only the 1024-byte blocks we can actually access in one form or another; superblocks, for example, aren't counted. Have you also noticed that the used and available numbers don't add up to the total number of blocks? This discrepancy occurs because, by default, the system reserves approximately five percent of the blocks. This percentage can be changed, as can many other parameters listed in the first 22 lines of the dumpe2fs readout; but again, unless you know what you are doing, I strongly recommend against it.
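The arithmetic is easy to check. With hypothetical figures (the numbers below are made up for illustration, assuming the default five-percent reserve), used plus available falls short of the total by exactly the reserved count:

```python
# Hypothetical df-style figures; the numbers are made up for illustration.
total_blocks = 1_000_000              # 1024-byte blocks on the partition
reserved = total_blocks * 5 // 100    # default five-percent reserve
used = 300_000
available = total_blocks - used - reserved

print(used + available)                   # 950000: short of the total...
print(total_blocks - (used + available))  # ...by the 50000 reserved blocks
```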
By the way, the information you are reading in the dumpe2fs output is a translation into English of the partition's superblock, stored in block one. Copies of the superblock are also maintained at each group boundary for backup purposes. The Blocks per group value tells us the offset of each copy: the first superblock begins at block 1, and the succeeding copies are located at multiples of the Blocks per group value plus 1.
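Given the Blocks per group value, the backup superblock locations can be computed directly. A small sketch (assuming a backup in every group, as early ext2 file systems had; the later sparse_super feature puts backups in only some groups):

```python
def superblock_locations(blocks_per_group, total_blocks):
    """First superblock at block 1, backup copies at multiples of
    blocks_per_group plus 1, up to the end of the partition."""
    locations = [1]
    n = 1
    while n * blocks_per_group + 1 < total_blocks:
        locations.append(n * blocks_per_group + 1)
        n += 1
    return locations

# 8192 blocks per group is typical for a 1024-byte block size.
print(superblock_locations(8192, 40_000))  # [1, 8193, 16385, 24577, 32769]
```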
While we don't really need more than the first 22 lines of information, a quick look at the rest of the listing is useful. The information is grouped by blocks and reflects how your disk is organized to store data. The superblocks are not specifically mentioned, but they account for the blocks that are apparently missing from the beginning of each group. The block bitmap is a simple map of block usage in the group: it contains a one or a zero for each block, indicating used or free, respectively. The inode (information node) bitmap is similar to the block bitmap, but corresponds to the inodes in the group. The inode table is the list of inodes themselves. The next line is the number of free blocks. Note that, while some groups have no free blocks, they all have free inodes. These inodes are extras and will never be used: some files occupy more than one block to store their data but need only one inode to reference them, which explains the surplus.
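The block bitmap lookup itself is just a single bit test. In ext2, bit 0 of byte 0 corresponds to the group's first block, bit 1 to the next, and so on; a minimal sketch:

```python
def block_in_use(bitmap: bytes, index: int) -> bool:
    """Test the bit for block `index` (relative to its group) in a block
    bitmap; bit 0 of byte 0 is the group's first block."""
    byte, bit = divmod(index, 8)
    return bool((bitmap[byte] >> bit) & 1)

bitmap = bytes([0b00000101])      # blocks 0 and 2 used, block 1 free
print([block_in_use(bitmap, i) for i in range(3)])  # [True, False, True]
```

The inode bitmap works the same way, one bit per inode in the group.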
Now that we have the information we need (finally), we can run badblocks. This utility does a surface scan for defects and is invoked by typing, as a minimum:

badblocks device blocks-count

The device is the one we need to check (hda1, sda1 and so on), and the blocks-count is the value we noted after running dumpe2fs (above).
Four options are available with badblocks. The first, -b, takes the block size as its argument; it is needed only if fsck will not run or is confused about the block size. The second, -o, takes a filename argument and saves to that file the block numbers badblocks considers bad. If this option is not specified, badblocks sends all output to the screen (stdout). The third option is -v for verbose (self-explanatory). The final option is -w, which will destroy all the data on your disk by writing new data to every block and verifying the write. (Once again, you've been warned.)
Your best bet here is to run badblocks with the -o filename option. As bad blocks are encountered, they will be written to the file as a number, one to a line. This will be very helpful later on. In order to run badblocks in this way, the file system you are writing the file to must be mounted read-write. As root—and you should be root to do this maintenance—you can switch to your home directory, which should be located somewhere in the root partition. badblocks will save the file in the current directory unless you qualify the filename with a full pathname. If you need to mount the root partition read-write to write the file, simply type: mount -n -o remount,rw /.
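Because badblocks -o writes one decimal block number per line, the resulting file is trivial to post-process. A sketch (the filename is hypothetical):

```python
def parse_badblocks(text: str) -> list[int]:
    """Parse badblocks -o output: one decimal block number per line."""
    return sorted(int(line) for line in text.split() if line)

# In practice: parse_badblocks(open('/root/badblocks.list').read())
sample = "4097\n12\n8190\n"
print(parse_badblocks(sample))  # [12, 4097, 8190]
```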
Once you have your list of bad block numbers, you'll want to check these blocks to see whether they are in use, and if not, mark them as in use so they are never allocated. If a block is already marked in use, we may want to clear it (since the data in it might be corrupted) and then mark it as allocated again. Print the list of bad blocks; you'll need it later.