Linux System Administration

Got that sinking feeling that often follows an overzealous rm? Our system doctor has a prescription.

There was recently a bit of traffic on the Usenet newsgroups about whether Linux needs an undelete command. The idea is that if you typed rm * tmp when you meant rm *tmp, such a command would let you recover your files quickly.
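To see why that one stray space is so dangerous, remember that the shell expands the glob before rm ever runs. A quick illustration (the file names are made up, and echo is used here only to show what rm would actually receive):

$ ls
notes.tmp  old.tmp  report.txt
$ echo rm *tmp       # matches only names ending in "tmp"
rm notes.tmp old.tmp
$ echo rm * tmp      # the space makes * match everything, plus a literal "tmp"
rm notes.tmp old.tmp report.txt tmp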

The main problem with this idea from a filesystem standpoint involves the differences between the way DOS handles its filesystems and the way Linux handles its filesystems.

Let's look at how DOS handles its filesystems. When DOS writes a file to a hard drive (or a floppy drive) it begins by finding the first block that is marked “free” in the File Allocation Table (FAT). Data is written to that block, the next free block is searched for and written to, and so on until the file has been completely written. The problem with this approach is that the file can be in blocks that are scattered all over the drive. This scattering is known as fragmentation and can seriously degrade your filesystem's performance, because now the hard drive has to look all over the place for file fragments. When files are deleted, the space is marked “free” in the FAT and the blocks can be used by another file.

The good thing about this is that, if you delete a file that is out near the end of your drive, the data in those blocks may not be overwritten for months. In this case, it is likely that you will be able to get your data back for a reasonable amount of time afterwards.
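To make that first-free-block behaviour concrete, here is a toy model in bash. It is purely an illustration, not the real on-disk FAT format: the "disk" is an array, a dot marks a free block, and a new file simply fills holes from left to right.

#!/bin/bash
# Toy FAT-style allocator: data goes into the first free blocks, wherever they are.
blocks=(A A . . B . B .)          # two files already on "disk", with holes between them

allocate() {                      # allocate $2 blocks for a new file named $1
    local name=$1 count=$2 i
    for ((i = 0; i < ${#blocks[@]} && count > 0; i++)); do
        if [ "${blocks[i]}" = "." ]; then
            blocks[i]=$name
            count=$((count - 1))
        fi
    done
}

allocate C 3                      # write a three-block file
echo "${blocks[@]}"               # prints: A A C C B C B .  -- file C is fragmented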

Linux (actually, the second extended filesystem that is almost universally used under Linux) is slightly smarter in its approach to fragmentation. It uses several techniques to reduce fragmentation: it segments the filesystem into independently managed groups, temporarily reserves large chunks of contiguous space for files, and starts the search for new blocks to be added to a file from the current end of the file rather than from the start of the filesystem. This greatly decreases fragmentation and makes file access much faster. The only case in which significant fragmentation occurs is when large files are written to an almost-full filesystem, because the filesystem is probably left with lots of free areas too small to tuck files into nicely.

Because of this policy for finding empty blocks for files, when a file is deleted, the (probably large) contiguous space it occupied becomes a likely place for new files to be written. Also, because Linux is a multi-user, multitasking operating system, there is often more file-creating activity going on than under DOS, which means that those empty spaces where files used to be are more likely to be used for new files. “Undeleteability” has been traded off for a very fast filesystem that normally never needs to be defragmented.
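If you want to check that claim on your own system, e2fsck reports fragmentation as a percentage in its summary line. Something like the following (the device name and the figures are only an example; the filesystem should be unmounted or mounted read-only when you check it):

# e2fsck -fn /dev/hda1
  ...
/dev/hda1: 9851/122400 files (2.3% non-contiguous), 287420/488448 blocks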

The easiest answer to the problem is to put something in the filesystem that says a file was just deleted, but there are four problems with this approach:

  1. You would need to write a new filesystem or modify a current one (i.e. hack the kernel).

  2. How long should a file be marked “deleted”?

  3. What happens when a hard drive is filled with files that are “deleted”?

  4. What kind of performance loss and fragmentation will occur when files have to be written around “deleted” space?

Each of these questions can be answered and worked around. If you want to do it, go right ahead and try—the ext2 filesystem has space reserved to help you. But I have some solutions that require zero lines of C source code.
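As a hint of that reserved space, ext2 already defines an "undeletable" file attribute that chattr can set, although stock kernels do not actually act on it when the file is removed (the file name here is just a placeholder):

chattr +u precious-file    # ask ext2 to keep the contents when this file is deleted
lsattr precious-file       # the u attribute shows up, but the kernel doesn't honor it yet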

I have two similar solutions, and your job as a system administrator is to determine which method is best for you. The first method is a user-by-user no-root-needed approach, and the other is a system-wide approach implemented by root for all (or almost all) users.

The user-by-user approach can be done by anyone with shell access; it requires no root privileges, only a few changes to your .profile and .login (or .bashrc) files and a bit of drive space. The idea is to alias the rm command so that it moves files to a holding directory instead of deleting them. The next time you log in, the files that were moved there are purged from the filesystem with the real /bin/rm command. Because the files are never actually deleted by the user, they remain accessible until that next login. If you're using the bash shell, add this to your .bashrc file:

alias waste='/bin/rm'
# an alias can't use $1; let mv take the file names as trailing arguments instead
alias rm='mv -t ~/.rm --'

and in your .profile:
# empty last session's trash, then recreate the directory
if [ -d ~/.rm ]; then
    /bin/rm -rf ~/.rm
fi
mkdir ~/.rm
chmod og-r ~/.rm
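Once those pieces are in place, a typical session might look like this (the file names are invented for the example):

rm report.txt            # actually runs: mv -t ~/.rm -- report.txt
ls ~/.rm                 # report.txt sits there until your next login
mv ~/.rm/report.txt .    # changed your mind? just move it back
waste core               # waste really is /bin/rm, so this one is gone for good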

Advantages:

  • can be done by any user

  • only takes up user space

  • /bin/rm is still available as the command waste

  • automatically gets rid of old files every time you log in.

Disadvantages:

  • takes up filesystem space (bad if you have a quota)

  • not easy to implement for many users at once

  • files get deleted each login (bad if you log in twice at the same time)

______________________

Comments


Bad code?


This line: if [ -n "$BASH" == "" ] ;
appears to be nonsense. Shouldn't it be if [ -n "$BASH" ] ;

Yes, bad code


Yes, it's wrong, yours is correct. He probably meant:

  if [ ! "$BASH" == "" ]; ...

Mitch Frazier is an Associate Editor for Linux Journal.
