Advanced File Recovery
So, a few weeks ago, someone made the mistake of upgrading a computer from Windows XP to Windows Vista. Besides the headache of an 8-hour upgrade process (what was it doing?), it also left the machine unusable. This person ended up reinstalling Windows XP and also installing Ubuntu. After the whole process was over, this person claimed to have lost important files. Excellent :-)
Since the partitions were completely destroyed and the OS reinstalled, file-system-level access software would not be helpful. So, I dug deep, and came up big: foremost. This software seems to have been written by the US military and, as a work of the US government, is in the public domain by default, so you can use it freely.
I read up on the docs, and here's how I got almost all the files back:
* Boot an Ubuntu Live CD (never run recovery from the system installed on the disk you are recovering)
* Add extra software repositories to apt
$ sudo sed -i "s/main restricted$/main restricted universe multiverse/g" /etc/apt/sources.list
* Install the application 'foremost'
$ sudo aptitude install foremost
* Create a recovery directory (put this on USB, instead, if you have one)
$ sudo mkdir /root/recovery
* Search the drive for files!
$ sudo foremost -v -i /dev/hda -o /root/recovery
* Maybe you just want all those JPG files you lost and nothing else?
$ sudo foremost -v -i /dev/hda -o /root/recovery -t jpg
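Before running that sed one-liner against the real /etc/apt/sources.list, you can preview what it does on a sample line (the mirror URL and release name below are just illustrative placeholders):

```shell
# Preview the sources.list substitution on a sample line
# (URL and release name are placeholders, not from a real file).
line="deb http://archive.ubuntu.com/ubuntu hardy main restricted"
echo "$line" | sed "s/main restricted$/main restricted universe multiverse/g"
# prints: deb http://archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
```

Once the scan finishes, foremost also drops an audit.txt into the output directory, listing every file it carved out, which is handy for checking how well the recovery went.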
Finally, copy your files somewhere safe. If you don't have enough room in the live system, you could mount a remote partition over NFS or SSHFS.
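If you go the SSH route, here's a rough sketch of getting the recovered files off the machine with SSHFS; user@backuphost and /backup are hypothetical stand-ins for your own server:

```shell
# Mount a remote directory over SSH (requires the sshfs package;
# user@backuphost and /backup are placeholders for your own server).
mkdir -p /mnt/backup
sshfs user@backuphost:/backup /mnt/backup

# Copy the carved files across, then unmount.
cp -a /root/recovery /mnt/backup/
fusermount -u /mnt/backup
```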
Some of you may be asking yourselves what to do if foremost does not support the file types you lost. Maybe you lost some precious OGG files? No worries! Why don't you give 'magicrescue' a try? magicrescue lets you define the "magic bytes" that identify a file type; basically, the same signatures the 'file' command checks against. You can see all of file's magic definitions in /usr/share/file/magic. You can define your own and use them as input when you run magicrescue. The possibilities are endless.
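As a sketch of how that fits together: Ogg streams start with the four ASCII bytes "OggS", which is exactly the kind of magic a magicrescue recipe matches on. The recipe path below is where the Debian/Ubuntu package installs its stock recipes; check your own system, since names and paths may differ:

```shell
# Every Ogg page starts with the ASCII magic "OggS" -- the kind of
# signature a magicrescue recipe matches on.
printf 'OggS' > /tmp/magic-demo
head -c 4 /tmp/magic-demo    # prints: OggS

# List the stock recipes (path as packaged on Debian/Ubuntu), then scan
# the raw device with one of them; /dev/hda is the example device above.
ls /usr/share/magicrescue/recipes
sudo magicrescue -d /root/recovery -r /usr/share/magicrescue/recipes/jpeg-jfif /dev/hda
```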
So, if you need to get your data back desperately, these are great options for you! Keep on hacking :-)