Best of Technical Support
Before I came to my current job, the archive system was sporadic at best, using multiple media formats from DAT to Exabyte. I have been given the task of copying all these formats to a single standard (in this case AIT). I am trying to copy one tape to another and cannot find a proper command. We are using the GNU version of tar and would like to keep this standard. I have tried
dd if=/dev/<device file> of=/dev/<device file2>
and cannot even get the command to read the tape. —Charles Long, email@example.com
The command you are using looks correct; the only missing item may be a block size (bs), in case the tapes were written with a specific blocking factor (which you will have to find out). The command then becomes
dd if=/dev/XXXX of=/dev/YYYY bs=<your_block_size_number>
—Felipe E. Barousse Boué, firstname.lastname@example.org
dd should read a tape regardless of the program used to write it (e.g., tar, dump, NetBackup). The problem could be one of the following: 1) the tape was written on a different or incompatible drive, 2) the tape is faulty, 3) the heads on the drive are dirty, or 4) there is a header at the start of the tape. If it's the header, skip past it before reading (use mt fsf with the no-rewind device), and try reading the tape(s) with the original program in read-only mode to make sure the tapes and drives are working correctly. If you cannot read the tapes in any way, then the backups are useless. —Keith Trollope, email@example.com
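As a sketch of that workflow, the example below uses ordinary files in place of the tape devices so it can run anywhere; on real hardware you would use no-rewind device nodes (names like /dev/nst0 and /dev/nst1 are assumptions, not from the original question), and the mt positioning commands are shown as comments because they require an actual drive:

```shell
# On real drives you would first position the source tape, for example:
#   mt -f /dev/nst0 rewind
#   mt -f /dev/nst0 fsf 1     # skip past the first file mark/header
# Here we copy between plain files to illustrate the dd invocation
# with an explicit block size.
echo "archived payload" > source.img
dd if=source.img of=target.img bs=32k 2>/dev/null
cmp -s source.img target.img && echo "copy verified"
```

If the copy truncates or fails partway on real tapes, the block size is the usual suspect; try the source drive's original blocking factor.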
I want to install Red Hat 7.1 on only my master disk. I have a slave drive with Win98 installed at present. The master disk is empty, so I'm not trying to dual boot my PC in that sense. Short of physically disconnecting it, is there a way to autopartition the one disk? How would you recommend I divide up my 40GB disk? —Paul Henman, firstname.lastname@example.org
Red Hat definitely lets you partition the drives as you wish. I usually recommend the following:
- / 100MB
- /safe 100MB
- /usr 3-4GB (in your case, more if you want)
- /var everything else
You need to symlink /tmp to /var/tmp/tmp and /home to /var/home. The advantage of this scheme is that your root partition is critical, so if you keep it small, you reduce the chances of corruption, and you can keep an on-line backup copy of it in /safe. —Marc Merlin, email@example.com
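A minimal sketch of that symlink layout, done in a scratch directory ($ROOT stands in for the real / so the example runs without root privileges or touching your system):

```shell
# $ROOT stands in for the real filesystem root in this demonstration.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/var/tmp/tmp" "$ROOT/var/home"
# Relative symlinks, as you would create them on the real root partition:
ln -s var/tmp/tmp "$ROOT/tmp"
ln -s var/home "$ROOT/home"
ls -l "$ROOT/tmp" "$ROOT/home"
```

Relative link targets are deliberate: they keep working if the root partition is mounted somewhere else (say, under /mnt for recovery).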
I installed SuSE 6.3 on an extended partition of my HDD (so no LILO). I later created a boot disk from the CD with the rawrite utility. I am able to access the command-line interface by booting from the floppy and specifying the installed partition. Can I start the GUI in any way? —Manoj Ramakrishnan, firstname.lastname@example.org
There are several ways to start the GUI. You can start X directly from the command line with startx. You can start xdm, kdm or gdm as root; they usually have an init script in /etc/init.d/, or they can be started from the command line. The last and probably easiest option is to set the right runlevel, 3, in YaST2. —Marc Merlin, email@example.com
I would like to know how to set the inode size for my entire hard drive, each partition. Is there a way I can set the inode size during Red Hat 7.0's installation, or do I have to use debugfs to change inode sizes? —Matt Walters, firstname.lastname@example.org
You cannot change the bytes per inode after the filesystem has been created. If you want to set it when you install Red Hat, you can drop to the shell (F2 from the text-mode installer) and create the filesystems yourself. After that, you can return to the installer, which has the option of not formatting partitions. A sample command would be:
mke2fs -s 1 -b 4096 -i 8192 -m 1 /dev/sda1
where -s 1 enables sparse superblocks (Linux 2.2 or better), -b 4096 sets the disk block size to 4K, -i 8192 sets bytes per inode to 8K and -m 1 sets the reserved blocks to 1% of your partition size (appropriate for today's big disks and partitions). —Marc Merlin, email@example.com
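To see what the -i ratio buys you, the arithmetic below estimates the inode count for a hypothetical 40GB partition (the size is an assumption for illustration; mke2fs actually allocates inodes per block group, so real numbers differ slightly):

```shell
# Bytes-per-inode determines roughly how many inodes mke2fs creates:
#   inode count ~ partition size / bytes-per-inode
PART_BYTES=$((40 * 1024 * 1024 * 1024))   # hypothetical 40GB partition
BYTES_PER_INODE=8192                      # the -i 8192 from the answer
echo $((PART_BYTES / BYTES_PER_INODE))    # prints 5242880
```

Raising -i gives fewer inodes but more usable data space; lower it only if the partition will hold huge numbers of small files.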