Disk Maintenance under Linux (Disk Recovery)

The ins and outs of disk maintenance: what we all should know and DO.

Down in the Dumps

The next utility we'll look at is dumpe2fs. To invoke this utility, type dumpe2fs device to get the block group information for a particular device. You will actually get more information than you're likely to use, but if you understand the physical file system structure, the output will be comprehensible. A sample output is shown in Listing 1.
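For example, on a system where the ext2 partition in question is /dev/hda1 (substitute your own device name), the invocation might be as simple as:

dumpe2fs /dev/hda1 | less

Piping through less is optional, but the output runs to many lines, so it helps.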

Output from dumpe2fs

We really need only the first 22 lines of output. (The very first line with the version number is not part of the output table.) Most of these lines are fairly self-explanatory; however, one or two could use further explanation. The first line tells us the file system's magic number. This number is not random; it is always 0xEF53 for ext2 file systems. The 0x prefix identifies the number as hexadecimal. The EF53 presumably means Extended Filesystem (EF) version and mod number 53; however, I am unclear about the background of the 53. (Original ext2fs versions had 51 as the final digits and are incompatible with the current version.)

The second line indicates whether a file system is clean or unclean. A file system that has been properly synced and unmounted will be labeled clean. A file system that is currently mounted read-write, or that has not been properly synced prior to shutdown (such as with a sudden power failure or a hard reset), will be labeled not clean. A not clean indication will trigger an automatic fsck on the next normal system boot.
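If you want just the superblock summary without the per-group detail, dumpe2fs will print it alone when given the -h switch. One quick way to confirm the magic number and the clean/not clean state (again assuming /dev/hda1 as the device) might be:

dumpe2fs -h /dev/hda1 | grep -Ei 'magic number|filesystem state'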

Another important line for us is the block count, which tells us how many blocks we have on the partition (we'll need this number later). We'll use it when necessary with e2fsck and badblocks. However, I already know how many blocks I have on the partition; I see it every time I invoke df to check my disk usage. (If this were a game show, the raspberry would have sounded.) Check the output of df against dumpe2fs; the two are not the same. The block count in dumpe2fs is the one we need. The number df gives us is adjusted to show only the number of 1024-byte blocks we can actually access in one form or another. Superblocks, for example, aren't counted. Have you also noticed that the “used” and “available” numbers don't add up to the total number of blocks? This discrepancy occurs because, by default, the system reserves approximately five percent of the blocks. This percentage can be changed, as can many other parameters listed in the first 22 lines of the dumpe2fs readout; but again, unless you know what you are doing, I strongly recommend against it.
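As a rough sketch of how the two numbers differ, and of how the reserved percentage can be adjusted with tune2fs if you are certain you want to, the following assumes /dev/hda1 is the partition (adapt the device name, and treat the tune2fs line as strictly optional):

df                                              # block counts as df reports them, in 1024-byte units
dumpe2fs -h /dev/hda1 | grep -i 'block count'   # the raw block count we want for badblocks
tune2fs -m 1 /dev/hda1                          # lower the reserved percentage from 5% to 1% (optional)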

By the way, the information you are reading in the dumpe2fs output is a translation into English of the partition's superblock information, stored in block one. Copies of the superblock are also maintained at each group boundary for backup purposes. The Blocks per group value tells us the offset for each superblock: the first begins at block one, and the succeeding copies are located at multiples of the Blocks per group value, plus one.
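If you want to see where those backup copies actually sit, a full dumpe2fs run mentions each one, so something like the following (device name assumed, as before) should list them. With 8192 blocks per group and 1024-byte blocks, for instance, you would expect copies at blocks 1, 8193, 16385 and so on:

dumpe2fs /dev/hda1 | grep -i superblock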

While we don't really need more than the first 22 lines of information, a quick look at the rest of the listing could be useful. The information is grouped by blocks and reflects how your disk is organized to store data. The superblocks are not specifically mentioned, but they are the first two blocks apparently missing from the beginning of each group. The block bitmap is a simple map showing the usage of the blocks in a group; it contains a one or a zero for each block, corresponding to used or empty, respectively. The inode (information node) bitmap is similar to the block bitmap, but corresponds to the inodes in the group. The inode table is the list of inodes. The next line is the number of free blocks. Note that, while some groups have no free blocks, they all have free inodes. These inodes will not be used; they are extras. Some files use more than one block to store information but need only one inode to reference the file, which explains the unused inodes.
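debugfs can show much the same per-group information interactively; it opens the device read-only by default, so browsing this way is reasonably safe. A minimal sketch, again with an assumed device name:

debugfs -R stats /dev/hda1   # dump the superblock and group descriptors, much like dumpe2fs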

badblocks

Now that we have the information we need (finally), we can run badblocks. This utility does a surface scan for defects and is invoked by typing, as a minimum:

badblocks device blocks-count

The device is the one we need to check (hda1, sda1, etc.) and the blocks-count is the value we noted after running dumpe2fs (above).
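So, if dumpe2fs reported a block count of, say, 2440035 for /dev/hda1 (both the device name and the count here are only placeholders for whatever your own system reports), the minimal invocation would be:

badblocks /dev/hda1 2440035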

Four options are available with badblocks. The first is -b, which takes the block size as its argument. This option is needed only if fsck will not run or is confused about the block size. The second option, -o, which takes a filename argument, will save the block numbers badblocks considers bad to that file. If this option is not specified, badblocks will send all output to the screen (stdout). The third option is -v, for verbose (self-explanatory). The final option is -w, which will destroy all the data on your disk by writing new data to every block and verifying the write. (Once again, you've been warned.)

Your best bet here is to run badblocks with the -o filename option. As bad blocks are encountered, they will be written to the file as numbers, one to a line. This will be very helpful later on. In order to run badblocks this way, the file system you are writing the file to must be mounted read-write. As root (and you should be root to do this maintenance), you can switch to your home directory, which should be located somewhere in the root partition. badblocks will save the file in the current directory unless you qualify the filename with a full pathname. If you need to remount the root partition read-write to write the file, simply type: mount -n -o remount,rw /.
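Put together, a session might look something like the following sketch; the device name, block count and output filename are all placeholders to adapt to your own system:

mount -n -o remount,rw /              # only needed if / is currently mounted read-only
cd /root
badblocks -v -o bad.blocks /dev/hda1 2440035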

Once you have your list of bad block numbers, you'll want to check these blocks to see if they are in use and, if not, mark them as in use. If a block is already marked in use, we may want to clear it (since the data in it might be corrupted) and then mark it as allocated again. Print the list of bad blocks; you'll need it later.
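One commonly used shortcut, if you would rather not mark the blocks by hand, is e2fsck's -l option, which adds every block number listed in the file to the file system's bad block list. A sketch with the same placeholder names (the file system should not be mounted read-write when you run it):

e2fsck -l bad.blocks /dev/hda1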

______________________

Comments


As long as the hard disk is

Erase Hard Drive

As long as the hard disk is not toasted, completely broken, or data is overwritten, you can retrieve your data. When a file is removed from the hard drive, it is not actually deleted.

Recover linux hard drive

Linux hard drive recovery

To recover data deleted from a Linux hard drive, I have used the Stellar Phoenix software for Linux data recovery.

Linux data recovery software From Stellar Data recovery

Maria

I too have used the software and found it awesome.

A file whose inode is 0

who can help me?

My system is Linux 5

Here is a file: .bash_profile
When I run the command ls -ai, the result returned by the shell is:

> ----------------------------------
> cd dmsystem
> ls -ai
> [root@TCJ dmsystem]# ls -ai
> 21200897 . 21200909 data 21200906 .mozilla
> 2 .. 21200898 .emacs 21200994 src
> 21201266 .bash_history 21200944 exe 21200984 .viminfo
> 21200901 .bash_logout 21200903 .kde 21200900 .zshrc
> 0 .bash_profile 21201265 .lesshst
> 21200902 .bashrc 21200986 log
> ----------------------------------

Can anyone suggest a command so that I can delete the file .bash_profile?

Wow, over 10 years old. Good

directhex

Wow, over 10 years old. Good info. Thanks.

Magic Numbers.

Ralph Corderoy

> The EF53 presumably means Extended Filesystem (EF) version and mod number 53. However, I am unclear about the background of the 53.

I've always assumed the `5' is meant to be read as an `S' since hexadecimal doesn't have an S and the intention is Extended File System 3.

Cheers,

Ralph.

The author says: "The file

LucMove

The author says:

"The files in these directories will have the form ./#nnnn, where nnnn is the inode number used as the file name. You may be able to determine what the file is by inspecting it using cat. If cat returns what appears to be garbage, you probably have a binary file. In this case, you can do a chmod +x #nnnn, and then run the file. These procedures should give you enough information to learn what the file is."

But nowadays there is a program called 'file' that is a much better idea. Just run:

$ file #nnnn

... and it will make a very good guess at the file's format.

tutorial: Repair hard disk (linux)

daniel

here the link
