Best of Technical Support
I moved our Samba server (Red Hat 7.3, PII)
to a new PC (Red Hat 9, P4). I have a cron job
set up to create daily backups from shares using
smbtar. I have installed all the latest patches
using up2date. Problem: this backup script is running
much more slowly on the new configuration than on the old
one. Any ideas why this might be?
My first guess is Ethernet drivers. Make sure they are the latest
and greatest. I've also had issues with Ethernet speed auto-negotiation;
make sure the link is running at 100BaseT, full duplex (mii-tool or
ethtool will show what was negotiated).
If you really wanted to analyze the problem, you'd start by running the smbtar script with tracing turned on (the -x option to bash); smbtar is itself a shell script. You then could eyeball the trace to see which commands were taking a long time. More invasively, you could edit a copy of the script, inserting calls that take timestamps (relative and absolute) between calls to external commands. These could be written to a profiling file or simply sent to the system logs using the logger command. You can use shell expressions like:

START_TIME="$(date +%s)"                      # first line of the script
REL_TIME="$(( $(date +%s) - START_TIME ))"    # elapsed seconds at any checkpoint

to get the current time as a number of seconds since the epoch (January 1, 1970). Thus, the total elapsed time for your script is the current time minus the $START_TIME that you set on the first line.
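The timestamp idea can be sketched as a small runnable fragment; the sleep and the logger tag here are placeholders standing in for a real external command such as an smbtar or smbclient invocation:

```shell
#!/bin/sh
# Minimal profiling sketch: record the script's start time, then log
# the elapsed seconds after each slow step.
START_TIME="$(date +%s)"

sleep 2    # placeholder for a slow external command

REL_TIME="$(( $(date +%s) - START_TIME ))"
echo "step finished after ${REL_TIME}s"

# The same figure could go to the system logs instead:
# logger -t backup-profile "step finished after ${REL_TIME}s"
```

Sprinkling such checkpoints between the external commands in a copy of smbtar quickly shows where the time is going.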
Also consider that differences in your configuration might be introducing odd network name-service delays. For example, your old /etc/hosts file may have had entries that made reverse DNS queries work and the new installation failed to preserve them; or your old /etc/nsswitch.conf was checking only local files while your new one is somehow querying NIS, LDAP or winbind (MS Windows domain) sources. winbind appeared in Red Hat releases after 7.3, so it could be the culprit.
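For example, a hosts line in /etc/nsswitch.conf that consults only local files and DNS, with no NIS, LDAP or winbind sources, looks like the fragment below; comparing the new system's line against the old one is a quick test of this theory:

```
hosts:      files dns
```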
Performance tuning is a process of taking measurements (profiling) to find bottlenecks (analysis) and eliminate those where possible (tuning). Usually the elimination of bottlenecks involves finding cases where the system is doing work unnecessary to your application, for example, querying network-based directory services rather than simply using local files.
Sometimes you should consider an entirely different approach to the
task at hand. In this case, I'd seriously consider not using smbtar to
back up these Samba shares. You simply can use rsync to synchronize
the selected (shared) directory trees to one large holding disk on the
system with the tape drive. Then, back that up
directly to tape.
It could be that your new system is not getting as much throughput to your hard disks as it should be. I'm assuming you have IDE disks. Default installs on some Linux distributions don't necessarily enable DMA by default; it has to be enabled explicitly after install. You can use hdparm to verify/test your drive (in my case, my system is on /dev/hda):
[root@hamtop ~]# hdparm /dev/hda

/dev/hda:
 multcount    = 16 (on)
 IO_support   =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 3648/255/63, sectors = 58605120, start = 0
Check the using_dma entry. If yours is set to 0, that could explain it. Try enabling DMA with hdparm -d1 /dev/hdX, where X is your drive letter. Then test it:
[root@hamtop ~]# hdparm -tT /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  0.82 seconds = 156.10 MB/sec
 Timing buffered disk reads:   64 MB in  2.68 seconds =  23.88 MB/sec
You should see the buffered disk reads go up considerably compared to
what you get from running the same test without DMA enabled.
Thoroughly test the drive with DMA enabled before relying on it, as
in rare cases older drives don't behave well with this set.
If this does fix it, read up on how your
particular distribution can be made to enable this at boot. In the
case of Red Hat, it can be controlled through /etc/sysconfig/harddisks.
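On Red Hat of this vintage, the fragment below is roughly what making the setting persistent looks like (the exact stock contents of the file vary by release; USE_DMA is the relevant knob):

```
# /etc/sysconfig/harddisks: settings applied to IDE disks at boot
USE_DMA=1
```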
I have reconfigured the Linux kernel on my computer to
version 2.4.22, but at the boot screen, I still have the option to
choose between version 2.4.20-8 and 2.4.22. My problem is I do not
have the .config file for the 2.4.20-8 kernel. Is there a command
to generate this file?
Jan Nicolas Myklebust
If this is the default Red Hat kernel, you can unpack
the kernel source package and grab the .config file
from the /usr/src/linux-2.4/configs directory.
There isn't a command to generate a .config file from a kernel image
in 2.4.x and earlier. The 2.6 kernels add a compile-time option,
CONFIG_IKCONFIG, that embeds the configuration in the kernel image;
with CONFIG_IKCONFIG_PROC also set, the running kernel's configuration
can be read from /proc/config.gz.
The February 2004 BTS column had a question about
hiding mistakenly entered information from the bash
history. If you kill your own bash process with
kill -9 $$
instead of logging out, bash exits without writing its history file.
The current partitioning on my Red Hat 9 system is:
hda1   20GB   Windows
hda2    7GB   Linux /
hda3   12GB   Linux /usr
swap    1GB
I have resized hda1 down to 8GB using GNU parted, thus
getting 12GB of free space. Now I want to make a new
Linux partition on the unused 12GB. The problem is,
the parted mkpart command simply reports that it can't make the
partition, and the fdisk n command says to delete a partition
before making a new one.
Sounds like you have four primary partitions already, and the maximum is four.
You need to delete a partition and add an extended partition, which can
encompass many more logical partitions. I would:

- turn off swap (swapoff),
- delete the swap partition,
- add an extended partition covering all the free space,
- add a new logical swap partition inside it,
- add and format your data partition, and
- turn swap back on (swapon).
You should also update /etc/fstab for the new swap and data partition.
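The new /etc/fstab entries might look like the following; the device names and the /data mountpoint are assumptions, so use whatever fdisk actually assigned on your system:

```
/dev/hda6    swap     swap    defaults    0 0
/dev/hda7    /data    ext3    defaults    1 2
```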
How can I use a crossover Ethernet cable to
transfer data from one computer to the other when
both are Debian sarge and when one is sarge and
the other is Microsoft Windows?
You simply can give each of the two machines an arbitrary IP address
from the same network (I'd recommend using the RFC 1918 address blocks
reserved for these purposes: call one 192.168.1.1 and the other
192.168.1.2). If you choose the addresses wisely (or follow my
example), you can leave the subnet mask and broadcast values at their
defaults.
You then should be able to ping each from the other. At that point,
you also should be able to run any normal TCP/IP protocols over that
link. You can use the IP addresses or add entries for left and
right in the /etc/hosts files on each. At that point you'd use rsync,
scp or any protocol you liked across them.
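The /etc/hosts entries mentioned would look like this on both machines; left and right are just arbitrary labels, and the addresses assume an RFC 1918 pair such as the one suggested:

```
192.168.1.1    left
192.168.1.2    right
```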
As for the Windows system: you can create a static IP address
configuration manually and either use its native file sharing (configure Samba
on the Debian GNU/Linux system) or install the Cygwin suite on the
MS Windows side and use rsync over SSH and so on.
If you don't want to set up the Linux system as a Samba server, put
PuTTY (which includes the pscp and psftp file-transfer tools) on the
Windows box. Conversely, if the Windows box is already set up to share
files, you can use smbclient from Linux.