Best of Technical Support
Before I came to my current job, the archive system was sporadic at best, using multiple media formats from DAT to Exabyte. I have been given the task of copying all these formats to a single standard (in this case AIT). I am trying to copy one tape to another and cannot find a proper command. We are using the GNU version of tar and would like to keep this standard. I have tried
dd if=/dev/<device file> of=/dev/<device file2>
and cannot even get the command to read the tape. —Charles Long, firstname.lastname@example.org
The command you are using looks correct; the only thing that may be missing is a block size (bs), in case the tapes were written with a specific blocking factor (which you will have to find out). The command would then be
dd if=/dev/XXXX of=/dev/YYYY bs=<your_block_size_number>
—Felipe E. Barousse Boué, email@example.com
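If the block size is unknown, one way to discover it (a sketch, assuming a no-rewind tape device at /dev/nst0 and a drive in variable-block mode; your device name may differ) is to read a single record into a file and check its size:

dd if=/dev/nst0 bs=128k count=1 of=/tmp/firstblock
ls -l /tmp/firstblock

The size of /tmp/firstblock is the record size the tape was written with; use that as the bs value for the copy.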
dd should read a tape regardless of the program used to write it (e.g., tar, dump, NetBackup). The problem could be one of the following: 1) the tape was written on a different or incompatible drive, 2) the tape is faulty, 3) the heads on the drive are dirty or 4) there is a header at the start of the tape, in which case you should skip past it before reading (use mt fsf with the no-rewind device). Also, try reading the tape(s) with the original program in read-only mode to make sure the tape(s) and drive(s) are working correctly. If you cannot read the tapes in any way, then the backups are useless. —Keith Trollope, firstname.lastname@example.org
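Putting those pieces together, a minimal sketch of skipping a header and then copying, assuming no-rewind devices /dev/nst0 (source) and /dev/nst1 (destination) and a 32K block size (all three are assumptions; substitute your own):

mt -f /dev/nst0 rewind   # start at the beginning of the source tape
mt -f /dev/nst0 fsf 1    # skip forward past the first file (the header)
dd if=/dev/nst0 of=/dev/nst1 bs=32k   # copy the next tape file

Because the no-rewind device does not reposition the tape when it is closed, dd starts reading where mt fsf left off, just past the first file mark.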
I want to install Red Hat 7.1 on only my master disk. I have a slave drive with Win98 installed at present. The master disk is empty, so I'm not trying to dual boot my PC in that sense. Short of physically disconnecting it, is there a way to autopartition the one disk? How would you recommend I divide up my 40GB disk? —Paul Henman, email@example.com
Red Hat definitely lets you partition the drives as you wish. I usually recommend the following:
- /: 100MB
- /safe: 100MB
- /usr: 3-4GB (more in your case, if you want)
- /var: everything else
You need to symlink /tmp to /var/tmp/tmp and /home to /var/home. Your root partition is critical, so the advantage of this scheme is that keeping it small reduces the chances of corruption, and you can keep an on-line backup copy of it in /safe. —Marc Merlin, firstname.lastname@example.org
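A minimal sketch of that setup, assuming it is done right after installation, while /tmp and /home are still empty:

mkdir -p /var/tmp/tmp /var/home
chmod 1777 /var/tmp/tmp   # /tmp must be world-writable with the sticky bit
rmdir /tmp && ln -s /var/tmp/tmp /tmp
rmdir /home && ln -s /var/home /home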
I installed SuSE 6.3 on an extended partition of my hard drive (so no LILO). I later created a boot disk from the CD with the rawrite utility. I am able to access the command-line interface by booting from the floppy and specifying the installed partition. Can I initiate the GUI interface in any way? —Manoj Ramakrishnan, email@example.com
There are different ways to initiate the GUI interface. You can directly start X from the command line with startx. You can start xdm, kdm or gdm as root; they often have an init script in /etc/init.d/, or they can be started from the command line. The last and probably easiest option is to set the right runlevel, 3, in YaST2. —Marc Merlin, firstname.lastname@example.org
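A quick sketch of the first two options (the init-script path is an assumption and can vary between SuSE releases):

startx                  # start an X session directly from the console
/etc/init.d/xdm start   # or, as root, launch the xdm display manager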
I would like to know how to set the inode size for each partition on my hard drive. Is there a way I can set the inode size during Red Hat 7.0's installation, or do I have to use debugfs to change inode sizes? —Matt Walters, email@example.com
You cannot change the bytes per inode after the filesystem has been created. If you want to set it when you install Red Hat, you can drop to the shell (F2 from the text-mode installer) and create the filesystems yourself. After that, you can return to the installer, which has the option of not formatting partitions. A sample command would be:
mke2fs -s 1 -b 4096 -i 8192 -m 1 /dev/sda1
where -s 1 enables sparse superblocks (Linux 2.2 or better), -b 4096 sets the disk block size to 4K, -i 8192 sets bytes per inode to 8K and -m 1 sets the reserved blocks to 1% of your partition size (appropriate for today's big disks and partitions). —Marc Merlin, firstname.lastname@example.org
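Once the filesystem exists, you can verify what you got by listing the superblock parameters (a sketch, assuming the same /dev/sda1 as above):

tune2fs -l /dev/sda1   # shows block size, inode count, reserved block count and more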