Archiving and Compression
Although the differences are sometimes blurred in casual conversation, there is in fact a real difference between archiving files and compressing them. Archiving means that you take 10 files and combine them into one file, with no savings in size. If you start with 10 100KB files and archive them, the resulting single file is roughly 1000KB. On the other hand, if you compress those 10 files, you might find that the resulting files range from only a few kilobytes to close to the original size of 100KB, depending upon the original file types.
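To see the difference in practice, here's a minimal sketch using tar (an archive-only tool) and gzip (a compression tool), both discussed below. The file names file1.dat through file10.dat and their sizes are hypothetical, not part of the examples that follow:

$ ls -l file*.dat                         # ten files of roughly 100KB each
$ tar -cf archive.tar file*.dat           # archive only: archive.tar is about 1000KB, plus a little tar header overhead
$ gzip -c archive.tar > archive.tar.gz    # compress the archive; -c leaves the original archive.tar in place
$ ls -l archive.tar archive.tar.gz        # archive.tar.gz is usually noticeably smaller, depending on the file types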
Note - In fact, you might end up with a bigger file during compression! If the file is already compressed, compressing it again adds extra overhead, resulting in a slightly bigger file.
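A quick way to see this for yourself, using a hypothetical photo.jpg (JPEG is already a compressed format, so the file name here is purely illustrative):

$ zip photo.zip photo.jpg       # zip reports something like "stored 0%" because there's nothing left to squeeze
$ ls -l photo.jpg photo.zip     # photo.zip ends up slightly larger than photo.jpg because of the zip format's own bookkeeping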
All of the archive and compression formats in this chapter — zip, gzip, bzip2, and tar — are popular, but zip is probably the world's most widely used format. That's because of its almost universal use on Windows, but zip and unzip are well supported among all major (and most minor) operating systems, so things compressed using zip also work on Linux and Mac OS. If you're sending archives out to users and you don't know which operating systems they're using, zip is a safe choice to make.
gzip was designed as an open-source replacement for an older Unix program, compress. It's found on virtually every Unix-based system in the world, including Linux and Mac OS X, but it is much less common on Windows. If you're sending files back and forth to users of Unix-based machines, gzip is a safe choice.
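If you just want to try gzip, here's a minimal sketch using one of the files from the examples below. Note that, unlike zip, gzip replaces the original file with the compressed version by default, and it works on single files only; it doesn't archive, which is why it's usually paired with tar:

$ gzip young_edgar_scott.tif         # produces young_edgar_scott.tif.gz and removes the original
$ gunzip young_edgar_scott.tif.gz    # decompresses and restores young_edgar_scott.tif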
The bzip2 command is the new kid on the block. Designed to supersede gzip, bzip2 creates smaller files, but at the cost of speed. That said, computers are so fast nowadays that most users won't notice much of a difference in the time it takes gzip or bzip2 to compress a group of files.
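If you want to measure the trade-off on your own files, a rough sketch like this works (edgar_baby.tif comes from the examples below; the -c option tells both commands to write to standard output so the original file is left untouched):

$ time gzip -c edgar_baby.tif > edgar_baby.tif.gz
$ time bzip2 -c edgar_baby.tif > edgar_baby.tif.bz2
$ ls -l edgar_baby.tif.gz edgar_baby.tif.bz2    # bzip2's output is usually smaller, but it typically takes longer to produce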
Note - Linux Magazine published a good article comparing several different compression formats, which you can find at www.linux-mag.com/content/view/1678/43/.
zip, gzip, and bzip2 are focused on compression (although zip also archives). The tar command does one thing, archiving, and it has been doing it for a long time. It's found almost solely on Unix-based machines. You'll definitely run into tar files (also called tarballs) if you download source code, and almost every Linux user can expect to encounter a tarball at some point in his or her career.
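As a rough preview of how tar is usually combined with gzip, here's a minimal sketch using the edgar_scott directory from the examples below:

$ tar -cvf edgar_scott.tar edgar_scott/        # archive the directory, with no compression
$ tar -zcvf edgar_scott.tar.gz edgar_scott/    # archive and run the result through gzip in one step
$ tar -zxvf edgar_scott.tar.gz                 # extract a gzipped tarball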
zip both archives and compresses files, making it great for sending multiple files as email attachments, backing up items, or saving disk space. Using it is simple. Let's say you want to send a TIFF to someone via email. TIFF images are typically uncompressed, so they tend to be pretty large. Zipping the file up should help make the email attachment a bit smaller.
Note - When using ls -l, I'm only showing the information needed for each example.
$ ls -lh
-rw-r--r-- scott scott 1006K young_edgar_scott.tif
$ zip grandpa.zip young_edgar_scott.tif
  adding: young_edgar_scott.tif (deflated 19%)
$ ls -lh
-rw-r--r-- scott scott 1006K young_edgar_scott.tif
-rw-r--r-- scott scott  819K grandpa.zip
In this case, you shaved almost 200KB off the original file's size, or 19%, as zip helpfully informs you. Not bad. You can do the same thing with several images at once.
$ ls -l
-rw-r--r-- scott scott  251980 edgar_intl_shoe.tif
-rw-r--r-- scott scott 1130922 edgar_baby.tif
-rw-r--r-- scott scott 1029224 young_edgar_scott.tif
$ zip grandpa.zip edgar_intl_shoe.tif edgar_baby.tif young_edgar_scott.tif
  adding: edgar_intl_shoe.tif (deflated 4%)
  adding: edgar_baby.tif (deflated 12%)
  adding: young_edgar_scott.tif (deflated 19%)
$ ls -l
-rw-r--r-- scott scott  251980 edgar_intl_shoe.tif
-rw-r--r-- scott scott 1130922 edgar_baby.tif
-rw-r--r-- scott scott 2074296 grandpa.zip
-rw-r--r-- scott scott 1029224 young_edgar_scott.tif
It's not too polite, however, to zip up individual files this way. For three files, it's not so bad: the recipient will unzip grandpa.zip and end up with three individual files. If the payload were 50 files, however, the user would end up with files strewn everywhere. It's better to zip up a directory containing those 50 files, so when the user unzips it, he's left with a single tidy directory instead.
$ ls -lF
drwxr-xr-x scott scott edgar_scott/
$ zip -r grandpa.zip edgar_scott
  adding: edgar_scott/ (stored 0%)
  adding: edgar_scott/edgar_baby.tif (deflated 12%)
  adding: edgar_scott/young_edgar_scott.tif (deflated 19%)
  adding: edgar_scott/edgar_intl_shoe.tif (deflated 4%)
$ ls -lF
drwxr-xr-x scott scott     160 edgar_scott/
-rw-r--r-- scott scott 2074502 grandpa.zip
Whether you're zipping up a file, several files, or a directory, the pattern is the same: the zip command (with -r if you want a directory's contents included), followed by the name of the Zip file you're creating, and finished with the item(s) you're adding to the Zip file.
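On the receiving end, the counterpart is unzip, which is installed on most Linux distributions. A minimal sketch:

$ unzip -l grandpa.zip    # list the archive's contents without extracting anything
$ unzip grandpa.zip       # extract into the current directory, recreating the edgar_scott directory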