Best of Technical Support

Our experts answer your technical questions.

Slow Backups

I moved our Samba server (Red Hat 7.3, PII) to a new PC (Red Hat 9, P4). I have a cron job set up to create daily backups from shares using smbtar. I have installed all the latest patches using up2date. Problem: this backup script is running much more slowly on the new configuration than on the old one. Any ideas why this might be?


Zoltan Sutto


sutto.zoltan@rutinsoft.hu

My first guess is Ethernet drivers. Make sure they are the latest and greatest. I also have had issues with Ethernet auto-negotiating speed. Make sure you are at 100BT/full duplex.
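One way to check what the card actually negotiated on kernels of this vintage is mii-tool, from the net-tools package (the interface name eth0 is an assumption; adjust for your system):

```shell
# Show the negotiated link speed and duplex for eth0:
mii-tool eth0
# If autonegotiation picked the wrong mode, force 100Mb/s full duplex:
mii-tool -F 100baseTx-FD eth0
```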


Christopher Wingert


cwingert@qualcomm.com

If you really wanted to analyze the problem, you'd take advantage of the fact that smbtar is a shell script: start by running it with tracing turned on (the -x option to bash) and eyeball the output to see which commands take a long time. More invasively, you could edit a copy of the script, inserting calls that take timestamps (relative and absolute) between calls to external commands. These could be written to a profiling file or simply sent to the system logs using the logger command. You can use shell expressions like:

START_TIME="$(date +%s)"; LAST_TIME="$START_TIME"
...
NOW="$(date +%s)"; REL_TIME="$(( NOW - LAST_TIME ))"; LAST_TIME="$NOW"

to take timestamps (as a number of seconds since the epoch in 1970) and compute the time between them. Thus, the total elapsed time for your script would be the current time minus the $START_TIME that you set as the first line of the script.
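Putting those pieces together, a minimal sketch of such a profiling wrapper might look like this (the log tag and messages are illustrative, and sleep stands in for a real external command):

```shell
#!/bin/sh
# Record elapsed time around each external command and send the
# measurements to the system logs via logger.
START_TIME="$(date +%s)"
LAST_TIME="$START_TIME"

mark () {
    NOW="$(date +%s)"
    # seconds since the previous mark, and since the script started
    logger -t backup-profile \
        "$1: +$(( NOW - LAST_TIME ))s, total $(( NOW - START_TIME ))s"
    LAST_TIME="$NOW"
}

sleep 1             # stands in for a real command such as smbtar
mark "after smbtar"
```

Watching the resulting log entries (or a profiling file) across a full backup run quickly shows which step eats the time.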

Also consider that differences in your configuration might be introducing odd network name-service delays. For example, your old /etc/hosts file may have had entries that made reverse DNS queries work and the new installation failed to preserve them; or your old /etc/nsswitch.conf checked only local files while your new one is somehow querying NIS, LDAP or winbind (MS Windows domain) sources. Because winbind appeared in Red Hat releases after 7.3, it could be the culprit.
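Two quick checks along those lines, run on the new server (the address here is just an example):

```shell
# Which sources does the resolver consult, and in what order?
grep '^hosts:' /etc/nsswitch.conf
# Time a reverse lookup the way Samba would perform one; it should
# return essentially instantly.
time getent hosts 127.0.0.1
```

If the getent call stalls for seconds, a misbehaving source in the hosts: line is your bottleneck.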

Performance tuning is a process of taking measurements (profiling) to find bottlenecks (analysis) and eliminate those where possible (tuning). Usually the elimination of bottlenecks involves finding cases where the system is doing work unnecessary to your application, for example, querying network-based directory services rather than simply using local files.

Sometimes you should consider an entirely different approach to the task at hand. In this case, I'd seriously consider not using smbtar to back up these Samba shares. You simply can use rsync to synchronize the selected (shared) directory trees to one large holding disk on the system with the tape drive. Then, back that up directly to tape.
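For example (the share path, holding-disk path and tape device below are hypothetical; adjust them to your layout):

```shell
# Mirror the shared trees to a holding area on the backup host;
# rsync transfers only what changed since the last run.
rsync -a --delete /srv/samba/shares/ /backup/holding/shares/
# ...then write the holding area to tape.
tar -cf /dev/st0 /backup/holding/shares
```

Because rsync copies only the differences, the nightly network traffic drops to roughly the size of that day's changes.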


Jim Dennis


jimd@starshine.org

It could be that your new system is not getting as much throughput to your hard disks as it should. I'm assuming you have IDE disks. Some Linux distributions don't enable DMA by default; it has to be turned on explicitly after installation. You can use hdparm to verify and test your drive (in my case, the system is on /dev/hda):

[root@hamtop ~]# hdparm /dev/hda

/dev/hda:
multcount    = 16 (on)
IO_support   =  0 (default 16-bit)
unmaskirq    =  0 (off)
using_dma    =  1 (on)
keepsettings =  0 (off)
readonly     =  0 (off)
readahead    =  8 (on)
geometry     = 3648/255/63, sectors = 58605120, start = 0

Check the using_dma entry. If yours is set to 0, that could explain the slowdown. Try enabling DMA with hdparm -d1 /dev/hdX, where X is your drive letter. Then test it:

[root@hamtop ~]# hdparm -tT /dev/hda

/dev/hda:
Timing buffer-cache reads:   128 MB in  0.82 seconds = 156.10 MB/sec
Timing buffered disk reads:  64 MB in  2.68 seconds = 23.88 MB/sec

You should see the buffered disk reads go up considerably compared to what you get from running the same test without DMA enabled. Thoroughly test the drive with DMA enabled before relying on it, as in rare cases older drives don't behave well with this set. If this does fix it, read up on how your particular distribution can be made to enable this at boot. In the case of Red Hat, it can be controlled through /etc/sysconfig/harddisks.
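On Red Hat, the relevant line in /etc/sysconfig/harddisks looks like this; uncommenting it enables DMA on the drives at boot:

```
USE_DMA=1
```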


Timothy Hamlin


thamlin@zeus.nmt.edu

How to Recover a Kernel .config File?

I have reconfigured the Linux kernel on my computer to version 2.4.22, but at the boot screen I still can choose between versions 2.4.20-8 and 2.4.22. My problem is that I do not have the .config file for the 2.4.20-8 kernel. Is there a command to generate this file?


Jan Nicolas Myklebust


jan-nicolas.myklebust@cnes.fr

If this is the default Red Hat kernel, you can unpack the kernel source package and grab the .config file from the /usr/src/linux-2.4/configs directory.
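For example, to seed a rebuild with the stock configuration (the exact filename depends on your architecture, so list the directory first; the name below is a guess):

```shell
ls /usr/src/linux-2.4/configs
# pick the file matching your kernel flavour, for example:
cp /usr/src/linux-2.4/configs/kernel-2.4.20-i686.config \
   /usr/src/linux-2.4/.config
```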


Christopher Wingert


cwingert@qualcomm.com

There isn't a command to generate a .config file from a kernel image in 2.4.x and earlier. The new 2.6 kernels have a compile-time option (CONFIG_IKCONFIG) that builds the configuration into the kernel image and, with CONFIG_IKCONFIG_PROC, exposes it at /proc/config.gz.


Jim Dennis


jimd@starshine.org

bash without History

The February 2004 BTS column had a question about hiding mistakenly entered information from the bash history. If you kill your own bash process with kill -9 $$ instead of logging out, it doesn't write history to disk.
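A gentler alternative, assuming bash, is to keep the shell from writing the file at all; run these in the interactive session you want to sanitize:

```shell
unset HISTFILE   # this session will not write a history file on exit
history -c       # also clear the history already held in memory
```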


Jack Coates


jack@monkeynoodle.org

Can't Make a Partition on Free Disk Space

The current partitioning on my Red Hat 9 system is:

hda1 20GB Windows
hda2  7GB Linux /
hda3 12GB Linux /usr
swap  1GB

I have resized hda1 down to 8GB using GNU Parted, freeing 12GB of disk space. Now I want to make a new Linux partition in the unused 12GB. The problem is, the parted mkpart command simply says "can't make partition", and fdisk's n command says to delete a partition before making a new one.


Hiroshi Iwatani


HGA03630@nifty.ne.jp

Sounds like you already have four primary partitions, which is the maximum. You need to delete one of them and create an extended partition in its place; an extended partition can contain many logical partitions. I would turn off swap, delete the swap partition, create an extended partition spanning all the free space, add a new logical swap partition, run mkswap, add and format your data partition, and then turn swap back on. You also should update /etc/fstab for the new swap and data partitions.
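A sketch of those steps, assuming the old swap partition is hda4 and the new logical partitions come out as hda5 and hda6 (verify the actual device names with fdisk -l before running anything destructive):

```shell
swapoff /dev/hda4    # stop using the old swap partition
fdisk /dev/hda       # delete hda4; create an extended partition over
                     # the free space, then logical partitions for
                     # swap (hda5) and data (hda6); write and quit
mkswap /dev/hda5     # initialize the new swap area
mke2fs -j /dev/hda6  # format the data partition (ext3)
swapon /dev/hda5     # turn swap back on
# finally, update /etc/fstab with the new swap and data entries
```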


Christopher Wingert


cwingert@qualcomm.com

Quick Crossover Networking

How can I use a crossover Ethernet cable to transfer data from one computer to another, both when the two machines run Debian sarge and when one runs sarge and the other Microsoft Windows?


Akos Zelei


azelei@freemail.hu

You simply can give each of the two machines an arbitrary IP address from the same network (I'd recommend using the RFC 1918 address blocks reserved for this purpose: 192.168.x.*, so call one 192.168.1.1 and the other 192.168.1.2). If you choose the addresses wisely (or follow my example), you can leave the netmask and broadcast values at their defaults. You then should be able to ping each machine from the other, and at that point you also should be able to run any normal TCP/IP protocols over the link. You can use the IP addresses directly or add entries (for example, left and right) to the /etc/hosts files on each machine, then use rsync, scp or any other protocol you like across them.

As for the Windows system: create a static IP address configuration manually, then either use its native file sharing (and configure Samba on the Debian GNU/Linux system) or install the Cygwin suite for MS Windows and use rsync over SSH, and so on.
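On the two Debian machines, the setup can be as simple as this (the interface name eth0 is an assumption):

```shell
# On the first machine:
ifconfig eth0 192.168.1.1 netmask 255.255.255.0 up
# On the second:
ifconfig eth0 192.168.1.2 netmask 255.255.255.0 up
# Then, from the first machine, verify the link:
ping -c 3 192.168.1.2
```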


Jim Dennis


jimd@starshine.org

If you don't want to set up the Linux system as a Samba server, install PuTTY on the Windows box (www.chiark.greenend.org.uk/~sgtatham/putty). Or, if the Windows box is already set up to share files, you can use smbclient from Linux.
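For example (the host, share and user names are hypothetical):

```shell
# List what the Windows box is sharing:
smbclient -L winbox -U username
# Open an FTP-like session to one of its shares:
smbclient //winbox/share -U username
```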


Don Marti


dmarti@ssc.com
