How a Corrupted USB Drive Was Saved by GNU/Linux
To summarize exactly what fixed the USB device:
Step 1: create a filesystem image of the right size, with FATs and the directory in the right places:
# dd if=/dev/zero of=/tmp/r2x bs=512 count=1001952
# losetup /dev/loop2 /tmp/r2x
# mkfs.msdos -n mkfs__msdos -s 16 -R 64 /dev/loop2
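The count of 1001952 is not arbitrary; it has to match the size of the original device in 512-byte sectors. If you saved the corrupt device to a raw image first (r1, the name used below), one way to derive the count from that image — a sketch, assuming the image is an exact sector-for-sector copy and that GNU stat is available:

```shell
# Derive dd's count= from the saved image "r1" (assumed to be a
# sector-for-sector copy of the device).
img=r1
bytes=$(stat -c %s "$img")   # total size in bytes (GNU stat)
count=$(( bytes / 512 ))     # number of 512-byte sectors
echo "$count"
```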
Step 2: copy bytes from the corrupt image, except the boot sector, onto the filesystem image created in step 1:
# dd if=r1 of=r2x bs=512 skip=1 seek=1
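It's worth sanity-checking that the copy really did spare block 0 and duplicate everything else. A sketch, using the r1/r2x names from above; cmp exits nonzero at the first difference, so silence followed by the message means the two tails match byte for byte:

```shell
# Everything after sector 0 of r2x should now match the corrupt image r1
# exactly; only the boot sector (from mkfs.msdos) should differ.
dd if=r1  bs=512 skip=1 of=/tmp/r1.tail  2>/dev/null
dd if=r2x bs=512 skip=1 of=/tmp/r2x.tail 2>/dev/null
cmp --silent /tmp/r1.tail /tmp/r2x.tail && echo "payload copied intact"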
Step 3: execute filesystem repair on that image:
# fsck.msdos -f -r /dev/loop2
Because I knew that FAT1 was bogus, I told it to use FAT2, and it reported success. It asked me whether to write the changes, and I said yes.
The filesystem images in /tmp/r2x and /dev/loop2 now were consistent. The acid test was to try to mount the filesystem:
# mkdir /tmp/r2d
# mount -t vfat /dev/loop2 /tmp/r2d
# ls -lRA /tmp/r2d
After which all kinds of good stuff appeared.
Note: A good result from ls -lR showed that I was lucky in one other way: I didn't know whether the boot sector held a good value for the size of the root directory (the -r parameter to mkfs.msdos). I simply used the default, and it turned out fine.
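If you'd rather check than guess, the FAT boot sector records the root-directory entry count in the two bytes at offset 17, stored little-endian. A sketch of pulling it out of the image with od (for the defaults used here, mkfs.msdos picks 512 entries on hard-disk-sized media, so that's the number I'd expect):

```shell
# Read the two bytes at offset 17-18 of the boot sector individually,
# so host byte order doesn't matter, then combine them little-endian.
lo=$(od -An -tu1 -j17 -N1 /tmp/r2x)
hi=$(od -An -tu1 -j18 -N1 /tmp/r2x)
echo "root directory entries: $(( lo | (hi << 8) ))"
```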
At this point, I decided I had better burn a CD. I burn and read CDs all the time on Linux, but I rarely burn CDs to be read by Windows. Again I did a Web search, and a page from IBM's DeveloperWorks site turned up. I had searched "linux burn CD windows" or something like that. So I tried this:
# mkisofs -J -r -v /tmp/r2d | \
    cdrecord -v -pad -eject fs=4m speed=4 dev=0,0,0 -
I wasn't 100% sure that Windows would like this CD, but fortunately I have Windows95 under Win4Lin. Its sole purpose for me is to run Quicken and TurboTax, but I fired it up and pointed Windows Explorer at the just-burned CD-ROM. Explorer loved it. I used gimp(1) to capture a screenshot and e-mailed the image to my friend's brother--he was ecstatic.
Shell jockeys need not read this.
 1 #!/bin/bash
 2 # parameters added to mkfs.msdos....
 3 ARGS="$*"
 4 if mount | grep /tmp/r2d; then umount /tmp/r2d; fi
 5 losetup -d /dev/loop2
 6 losetup /dev/loop2 /tmp/r2x
 7 mkfs.msdos -n mkfs__msdos -s 16 $ARGS /dev/loop2
 8 mount -t vfat /dev/loop2 /tmp/r2d
 9 yes hello | dd bs=8192 count=3 of=/tmp/r2d/foo.txt
10 umount /tmp/r2d
Line 1 identifies to exec(2) that this is supposed to be run by the shell. I've become accustomed to bash, the Bourne again shell.
Line 2 is a comment explaining line 3: whatever parameters you type after b.sh are appended to the mkfs.msdos command line.
Lines 4-6 establish /dev/loop2 as the block device whose contents are in the filesystem image kept in /tmp/r2x. Line 4 unmounts the artificial filesystem if it is mounted; this is done because we're about to make some changes to it. Lines 5-6 make sure that /dev/loop2 is connected to /tmp/r2x and only to /tmp/r2x.
Line 7 creates an artificial filesystem image with whatever additional parameters the user gave--remember $ARGS from line 3?
Line 8 mounts the filesystem onto /tmp/r2d. Line 9 creates a file of about 24KB (three clusters), so I have a filename to look for at the beginning of the directory.
Line 10 then unmounts the artificial filesystem image, so the kernel does not think there are inconsistencies if I play with /tmp/r2x.
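A hypothetical session with b.sh: rebuild the image with some -R value, then hunt for foo.txt's directory entry in the raw image. FAT stores names in space-padded 8.3 form, so the entry literally contains the bytes "FOO     TXT", and grep can report the byte offset where the directory begins:

```shell
# Hypothetical session (b.sh itself needs root for losetup/mount):
./b.sh -R 64
# -a treats the binary image as text, -b prints the byte offset of the
# match, -o prints only the matched bytes.  Note: 8.3 name = 8-char
# name field ("FOO" plus five spaces) followed by the "TXT" extension.
grep -abo 'FOO     TXT' /tmp/r2x
```

Watching that offset move as you vary -R and -r is a quick way to see how the reserved area and root directory size shift the data region around.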