Disk Maintenance under Linux (Disk Recovery)

The ins and outs of disk maintenance—what we all should know and DO.
fsck—The File System Checker

Invoking fsck from the command line on any given partition will probably not result in a check being run, because you have not reached the predetermined maximum mount count; therefore, the system believes the file system is clean and not in need of checking. To force the check, invoke fsck with -f.
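For example, to force a full check of a file system that is not currently mounted (the device name here is only a placeholder; substitute the partition you actually want to check):

   umount /dev/hda2
   fsck -f /dev/hda2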

At this point, one of two things will happen: fsck will run correctly and check your disk partition (possibly hesitating at bad spots on the disk and issuing appropriate error messages before continuing), or it will terminate without running, leaving error messages behind. If fsck does not run, you'll have to give the program the additional information indicated in those messages. The most common items you'll need to pass to e2fsck are the location of an alternate superblock, or the block size so that e2fsck can calculate where an alternate superblock is located. The -b switch tells e2fsck to use an alternate superblock, but we have to tell e2fsck where to find one. On ext2 file systems, backup superblocks are normally located at block 8193, block 16385 and higher multiples of 8192, plus one (see the dumpe2fs explanation below). As an alternative, we can pass e2fsck the block size with the -B switch (once we have that information) and let e2fsck calculate the alternate superblock locations itself. Later I'll tell you where to get the block size value if you ever need it.
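As a sketch, assuming a 1024-byte block size and the same hypothetical partition /dev/hda2, either of the following forms should get e2fsck past a damaged primary superblock:

   e2fsck -b 8193 /dev/hda2
   e2fsck -B 1024 -b 8193 /dev/hda2

The second form simply spares e2fsck the work of guessing the block size while it locates the backup superblock.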

At this point, it's worth mentioning two other mutually exclusive switches available to fsck and e2fsck. The first is the -n switch, which tells fsck to answer no to all queries, leaving the file system in its original condition with no repairs made. The second is the -y switch, which tells fsck to answer yes and automatically correct any errors it finds. To speed things up, you may be tempted to run fsck with the -y switch. So why not use this option all the time? I strongly recommend against doing so if you suspect problems with the file system. While fsck will usually not encounter trouble, typing fsck -y and then taking a coffee break, leaving the machine to take care of itself, is not particularly prudent. If, in the interest of speed, you do use the automatic yes switch for routine checks, be sure to list your lost+found directories from time to time. You'll also want to note the block or inode numbers that appear while fsck runs, so that you can check later whether they are allocated to files.
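On the hypothetical /dev/hda2, the two styles look like this:

   e2fsck -n /dev/hda2     (report problems only; change nothing)
   e2fsck -y /dev/hda2     (answer yes to every repair prompt)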

The other available options for fsck and e2fsck can be found in the man pages. I consider the fsck and e2fsck man pages fairly well written, as is appropriate considering the importance of these utilities to your file system's health.

Some Common fsck Messages

You may encounter messages asking if you want fsck to correct an error. Answering no will normally terminate the program so that you can fix the problem and rerun fsck. However, most error messages you're likely to encounter are fairly routine, and you may safely answer yes to them. If you see a message such as inode 1234 unattached, it means the file pointed to by inode (index node) 1234 has, for one reason or another, lost its filename. This can occur for several reasons, including a power failure or a computer reset without a proper disk sync.

Other common errors include zero-time inodes, which are also caused by the disk not being properly synced before shutdown. If you see these errors frequently even though you've been shutting down your system correctly, you may have any number of other problems. In that case, begin by checking your power and data connections, and check your power supply for voltage fluctuations or excessive noise. Finally, check your hard disk parameters. I must caution you that altering the default hard disk parameters can do serious damage to your file system or corrupt your files, so be careful.
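If you do want to look at those parameters before touching anything, the hdparm utility (assuming it is installed, and again using /dev/hda only as an example) will report them without making changes:

   hdparm /dev/hda       (current driver settings for the drive)
   hdparm -i /dev/hda    (identification information reported by the drive itself)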

The lost+found Directories

One lost+found directory should be located in the top-level (root) directory of each file system. If you have, for example, two other mounted file systems, /usr and /home, you should have three lost+found directories. These directories will contain files whose inodes have become disconnected from their file names. The files in these directories will have the form ./#nnnn, where nnnn is the inode number used as the file name. You may be able to determine what the file is by inspecting it using cat. If cat returns what appears to be garbage, you probably have a binary file. In this case, you can do a chmod +x #nnnn, and then run the file. These procedures should give you enough information to learn what the file is. If the file is important, it can be renamed and moved to its original location; otherwise, it can be deleted.
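A minimal inspection session, with 12345 standing in for whatever inode number you actually find, might look like this:

   cd /lost+found
   ls
   cat ./#12345          (readable text will usually identify itself)
   chmod +x ./#12345     (if cat shows garbage, try treating it as a program)
   ./#12345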

______________________

Comments

As long as the hard disk is

Erase Hard Drive

As long as the hard disk is not toasted or completely broken and the data has not been overwritten, you can retrieve your data. When a file is removed from the hard drive, it is not actually deleted.

Recover linux hard drive

Linux hard drive recovery

To recover data deleted from a Linux hard drive, I have used the Stellar Phoenix software for Linux data recovery.

Linux data recovery software From Stellar Data recovery

Maria

I too have used the software and found it awesome.

A file whose inode is 0

who can help me?

My system is Linux 5

There is a file here: .bash_profile
When I run the command ls -ai,

the result returned by the shell is:

> ----------------------------------
> cd dmsystem
> ls -ai
> [root@TCJ dmsystem]# ls -ai
> 21200897 .              21200909 data       21200906 .mozilla
>        2 ..             21200898 .emacs     21200994 src
> 21201266 .bash_history  21200944 exe        21200984 .viminfo
> 21200901 .bash_logout   21200903 .kde       21200900 .zshrc
>        0 .bash_profile  21201265 .lesshst
> 21200902 .bashrc        21200986 log
> ----------------------------------

Who can suggest a command so that I can delete the file .bash_profile?

Wow, over 10 years old. Good

directhex

Wow, over 10 years old. Good info. Thanks.

Magic Numbers.

Ralph Corderoy

> The EF53 presumably means Extended Filesystem (EF) version and mod number 53. However, I am unclear about the background of the 53.

I've always assumed the `5' is meant to be read as an `S' since hexadecimal doesn't have an S and the intention is Extended File System 3.

Cheers,

Ralph.

The author says: "The file

LucMove

The author says:

"The files in these directories will have the form ./#nnnn, where nnnn is the inode number used as the file name. You may be able to determine what the file is by inspecting it using cat. If cat returns what appears to be garbage, you probably have a binary file. In this case, you can do a chmod +x #nnnn, and then run the file. These procedures should give you enough information to learn what the file is."

But nowadays there is a program called 'file' that is a much better idea. Just run:

$ file ./#nnnn

... and it will make a very good guess at the file's format.

tutorial: Repair hard disk (linux)

daniel

here the link
