Best of Technical Support
I moved our Samba server (Red Hat 7.3, PII)
to a new PC (Red Hat 9, P4). I have a cron job
set up to create daily backups from shares using
smbtar. I have installed all the latest patches
using up2date. Problem: this backup script is running
much more slowly on the new configuration than on the old
one. Any ideas why this might be?
My first guess is Ethernet drivers. Make sure they are the latest
and greatest. I also have had issues with Ethernet auto-negotiating
speed. Make sure you are at 100BT/full duplex.
If you really wanted to analyze the problem, you'd start by running the smbtar script with tracing turned on (the -x option to bash); smbtar is itself a shell script. Then you could eyeball the trace to see which commands were taking a long time.
You also could (more invasively) edit a copy of the script, inserting calls that take timestamps (relative and absolute) between the calls to external commands. These could be written to a profiling file or simply sent to the system logs using the logger command. You can use shell expressions like:
START_TIME="$(date +%s)"; REL_TIME="$START_TIME"
...
REL_TIME="$(( $(date +%s) - $REL_TIME ))"
to get the current time (as a number of seconds since the epoch in 1970). Thus, the total elapsed time for your script would be the current time minus the $START_TIME that you set as the first line of the script.
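As a sketch of that instrumentation (the checkpoint helper and the sleep stand-ins are illustrative, not part of smbtar):

```shell
#!/bin/sh
# Sketch: timestamp checkpoints around the slow calls in a backup script.
START_TIME="$(date +%s)"
REL_TIME="$START_TIME"

checkpoint() {
    now="$(date +%s)"
    echo "$1: +$(( now - REL_TIME ))s elapsed, $(( now - START_TIME ))s total"
    REL_TIME="$now"
}

sleep 1              # stand-in for an smbtar call on the first share
checkpoint "share1"
sleep 1              # stand-in for the second share
checkpoint "share2"
```

The same lines could be piped through logger instead of echo to land in the system logs.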
Also consider that differences in your configuration might be introducing odd network name service delays. For example, your old /etc/hosts file may have had entries that made reverse DNS queries resolve quickly, and the new installation failed to preserve them; or your old /etc/nsswitch.conf checked only local files while the new one is somehow querying NIS, LDAP or winbind (MS Windows domain) sources. Because winbind appears in Red Hat releases newer than 7.3, it could be the culprit.
Performance tuning is a process of taking measurements (profiling) to find bottlenecks (analysis) and eliminate those where possible (tuning). Usually the elimination of bottlenecks involves finding cases where the system is doing work unnecessary to your application, for example, querying network-based directory services rather than simply using local files.
Sometimes you should consider an entirely different approach to the
task at hand. In this case, I'd seriously consider not using smbtar to
back up these Samba shares. You simply can use rsync to synchronize
the selected (shared) directory trees to one large holding disk on the
system with the tape drive. Then, back that up
directly to tape.
It could be that your new system is not getting as much throughput to your hard disks as it should be. I'm assuming you have IDE disks. Some Linux distributions don't enable DMA by default; it has to be turned on explicitly after install. You can use hdparm to verify/test your drive (in my case, my system is on /dev/hda):
[root@hamtop ~]# hdparm /dev/hda

/dev/hda:
 multcount    = 16 (on)
 IO_support   =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 3648/255/63, sectors = 58605120, start = 0
Check the using_dma entry. If yours is set to 0, that could explain the slowdown. Try enabling it with hdparm -d1 /dev/hdX, where X is your drive letter. Then test it:
[root@hamtop ~]# hdparm -tT /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  0.82 seconds = 156.10 MB/sec
 Timing buffered disk reads:   64 MB in  2.68 seconds =  23.88 MB/sec
You should see the buffered disk reads go up considerably compared to
what you get from running the same test without DMA enabled.
Thoroughly test the drive with DMA enabled before relying on it, as
in rare cases older drives don't behave well with this set.
If this does fix it, read up on how your
particular distribution can be made to enable this at boot. In the
case of Red Hat, it can be controlled through /etc/sysconfig/harddisks.
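As a sketch, the relevant setting in that file looks like this (verify the exact variable against your distribution's init scripts before relying on it):

```shell
# /etc/sysconfig/harddisks (Red Hat): ask the init scripts to
# enable IDE DMA at boot
USE_DMA=1
```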
I have reconfigured the Linux kernel on my computer to
version 2.4.22, but at the boot screen, I still have the option to
choose between version 2.4.20-8 and 2.4.22. My problem is I do not
have the .config file for the 2.4.20-8 kernel. Is there a
command that can generate this file?
Jan Nicolas Myklebust
If this is the default Red Hat kernel, you can unpack
the kernel source package and grab the .config file
from the /usr/src/linux-2.4/configs directory.
There isn't a command to generate a .config file from a kernel image
in 2.4.x and earlier. The new 2.6 kernels have a compile-time option
(CONFIG_IKCONFIG) that embeds the configuration in the kernel image;
with CONFIG_IKCONFIG_PROC set, the running kernel also exposes it
as /proc/config.gz.
The February 2004 BTS column had a question about
hiding mistakenly entered information from the bash
history. If you kill your own bash process with
kill -9 $$
instead of logging out, the shell is killed before it can write its
history to ~/.bash_history.
The current partitioning on my Red Hat 9 system is:
hda1   20GB   Windows
hda2    7GB   Linux /
hda3   12GB   Linux /usr
swap    1GB
I have resized hda1 down to 8GB using GNU parted, thus
getting 12GB of free space. Now I want to make a new
Linux partition on the unused 12GB. The problem is,
the parted mkpart command simply says it can't make the
partition, and the fdisk n command says to delete a
partition before making a new one.
Sounds like you have four primary partitions already, and the maximum is
four.
You need to delete a partition and add an extended partition, which can
hold several logical partitions. I would turn off swap,
delete the swap partition,
add a logical partition including all free space,
add a new swap partition,
add and format your data partition and
then turn on swap.
You should also update /etc/fstab for the new swap and data partition.
How can I use a crossover Ethernet cable to
transfer data from one computer to the other when
both are Debian sarge and when one is sarge and
the other is Microsoft Windows?
You simply can give each of the two machines any
arbitrary IP addresses from the same network (I'd recommend using
the RFC1918 address blocks reserved for these purposes:
call one 192.168.1.1 and the other 192.168.1.2). If you choose the
addresses wisely (or follow my example) you can leave the
subnet and broadcast values at their defaults.
You then should be able to ping each from the other. At that point,
you also should be able to run any normal TCP/IP protocols over that
link. You can use the IP addresses or add entries for left and
right in the /etc/hosts files on each. At that point you'd use rsync,
scp or any protocol you liked across them.
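As a sketch of that setup (eth0, the names left/right, and the temp file standing in for /etc/hosts are all assumptions):

```shell
#!/bin/sh
# Static addressing on a crossover link. Run on each machine, swapping
# the addresses on the second one:
#   ifconfig eth0 192.168.1.1 netmask 255.255.255.0 up
# Then name the endpoints so tools can use hostnames instead of raw
# addresses. (Written to a temp file here; on real machines these
# lines go in /etc/hosts.)
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
192.168.1.1   left
192.168.1.2   right
EOF
cat "$HOSTS"
# After that, "ping right", "rsync -a dir/ right:dir/" and the like
# should work from either side.
```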
As for the Windows system: you can create a static IP address
configuration manually and either use its native file sharing (configure Samba
on the Debian GNU/Linux system) or install the Cygwin for MS Windows
suite and use rsync over SSH and so on.
If you don't want to set up the Linux system
as a Samba server, put PuTTY on the Windows box.
If the Windows box is already set up to share files,
you can use smbclient from Linux.