The dd command is one of the original Unix utilities and should be in everyone's tool box. It can strip headers, extract parts of binary files and write into the middle of floppy disks; it is used by the Linux kernel Makefiles to make boot images. It can be used to copy and convert magnetic tape formats, convert between ASCII and EBCDIC, swap bytes, and force to upper and lowercase.
For blocked I/O, the dd command has no competition in the standard tool set. One could write a custom utility to do specific I/O or formatting but, as dd is already available almost everywhere, it makes sense to use it.
Like most well-behaved commands, dd reads from its standard input and writes to its standard output unless input and output files are specified on the command line. This allows dd to be used in pipes, and remotely with the rsh remote shell command.
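For instance (the host name below is only a placeholder), a floppy can be imaged across the network by piping a local dd into a remote one:

# Read the local floppy and let a second dd on the remote host write the image.
dd if=/dev/fd0 bs=18b | rsh remotehost dd of=/tmp/floppy.image bs=18b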
Unlike most commands, dd uses a keyword=value format for its parameters. This was reputedly modeled after IBM System/360 JCL, which had an elaborate DD "Data Definition" specification for I/O devices. A complete listing of all keywords is available from GNU dd with dd --help or info dd.
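As a minimal illustration of the keyword=value style (the file names here are placeholders), a simple copy might be written:

# Copy four 1024-byte blocks from infile to outfile; every parameter
# is a keyword=value pair.
dd if=infile of=outfile bs=1024 count=4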
Some people believe dd means “Destroy Disk” or “Delete Data” because if it is misused, a partition or output file can be trashed very quickly. Since dd is the tool used to write disk headers, boot records, and similar system data areas, misuse of dd has probably trashed many hard disks and file systems.
In essence, dd copies and optionally converts data. It uses an input buffer, a conversion buffer if conversion is specified, and an output buffer. Reads are issued to the input file or device for the size of the input buffer, optional conversions are applied, and writes are issued for the size of the output buffer. This allows I/O requests to be tailored to the requirements of a task. Output to standard error reports the number of full and short blocks read and written.
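The input and output request sizes can be set independently with ibs= and obs=; a sketch (the output file name is a placeholder) that reads a tape in 10KB requests while writing the result in 512-byte requests:

# Read /dev/st0 with 10240-byte requests, write copy.out with 512-byte requests.
dd ibs=10240 obs=512 if=/dev/st0 of=copy.out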
A typical task for dd is copying a floppy disk. As the common geometry of a 3.5" floppy is 18 sectors per track, two heads and 80 cylinders, an optimized dd command to read a floppy is:
Example 1a: Copying from a 3.5" floppy

dd bs=2x80x18b if=/dev/fd0 of=/tmp/floppy.image
1+0 records in
1+0 records out
The 18b specifies 18 sectors of 512 bytes, the 2x multiplies that track size by the number of heads, and the 80x multiplies by the number of cylinders, for a total of 1,474,560 bytes. This issues a single 1,474,560-byte read request to /dev/fd0 and a single 1,474,560-byte write request to /tmp/floppy.image, whereas the corresponding cp command:
cp /dev/fd0 /tmp/floppy.image
issues 360 reads and writes of 4096 bytes. While this may seem insignificant for a 1.44MB file, when larger amounts of data are involved, reducing the number of system calls can improve performance significantly.
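If strace is available on your system, you can verify the difference yourself by counting the read and write system calls each command issues (a sketch, assuming the same device and file names as above):

# Summarize the read()/write() calls made by each command.
strace -c -e trace=read,write dd bs=2x80x18b if=/dev/fd0 of=/tmp/floppy.image
strace -c -e trace=read,write cp /dev/fd0 /tmp/floppy.image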
This example also shows the factor capability in the GNU dd number specification. This has been around since before the Programmer's Workbench and, while not documented in the GNU dd man page, is present in the source and works just fine, thank you.
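To illustrate the notation with the same floppy copy as above: the b suffix multiplies by 512, the k suffix by 1024, and x chains factors, so these block-size specifications are equivalent:

# All three request a single 1,474,560-byte block.
dd bs=2x80x18b if=/dev/fd0 of=/tmp/floppy.image
dd bs=1474560  if=/dev/fd0 of=/tmp/floppy.image
dd bs=1440k    if=/dev/fd0 of=/tmp/floppy.image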
To finish copying a floppy, the original needs to be ejected, a new diskette inserted, and another dd command issued to write to the diskette:
Example 1b: Copying to a 3.5" floppy

dd bs=2x80x18b < /tmp/floppy.image > /dev/fd0
1+0 records in
1+0 records out
This example shows the stdin/stdout usage; in this respect, dd is like most other utilities.
The original need for dd came with the 1/2" tapes used to exchange data with other systems and boot and install Unix on the PDP/11. Those days are gone, but the 9-track format lives. To access the venerable 9-track, 1/2" tape, dd is superior. With modern SCSI tape devices, blocking and unblocking are no longer a necessity, as the hardware reads and writes 512-byte data blocks.
However, the 9-track 1/2" tape format allows for variable length blocking and can be impossible to read with the cp command. The dd command allows for the exact specification of input and output block sizes, and can even read variable length block sizes, by specifying an input buffer size larger than any of the blocks on the tape. Short blocks are read, and dd happily copies those to the output file without complaint, simply reporting on the number of complete and short blocks encountered.
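A sketch of that technique (the output file name is a placeholder): pick an input block size known to be larger than any block on the tape and let dd report what it found:

# 64KB requests exceed the largest block on the tape, so each
# variable-length block is returned as one full or short record.
dd if=/dev/st0 bs=64k of=tape.image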
Then there are the EBCDIC datasets transferred from systems such as MVS, which are almost always 80-character, blank-padded Hollerith card images! No problem for dd, which will convert these to newline-terminated, variable record length ASCII. Producing the fixed-length EBCDIC format from ASCII is just as easy, and dd again is the right tool for the job.
Example 2: Converting EBCDIC 80-character fixed-length records to ASCII variable-length, newline-terminated records

dd bs=10240 cbs=80 conv=ascii,unblock if=/dev/st0 of=ascii.out
40+0 records in
38+1 records out
The fixed record length is specified by the cbs=80 parameter, and the input and output block sizes are set with bs=10240. The EBCDIC-to-ASCII conversion and fixed-to-variable record length conversion are enabled with the conv=ascii,unblock parameter.
Notice the output record count is smaller than the input record count. This is because the trailing pad spaces are removed from each record and replaced with a single newline character, making the output file smaller than the input.
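Going back the other way uses the complementary conversions; a sketch (file and device names as above) that rebuilds 80-character, blank-padded EBCDIC card images from the ASCII file:

# Pad each newline-terminated ASCII record to 80 characters and
# convert it to EBCDIC.
dd bs=10240 cbs=80 conv=ebcdic,block if=ascii.out of=/dev/st0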