Handle Compressed and Uncompressed Files Uniformly


When looking at log files or other files that are compressed and rotated automatically, it's useful to be able to deal with them in a uniform fashion. The following bash function does that:

function data_source ()
{
 local F=$1

 # strip the .gz if it's there
 F=$(echo "$F" | perl -pe 's/\.gz$//')

 if [[ -f $F ]] ; then
  cat "$F"
 elif [[ -f $F.gz ]] ; then
  nice gunzip -c "$F.gz"
 fi
}

Now, when you want to process the files, you can use:

for file in * ; do
 data_source "$file" | ...
done

If you have bzip2 files, just modify the data_source function to check for that also.
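For example, here is a minimal sketch of that extension (it assumes bunzip2 is on the PATH and keeps the same extension-based logic, so the caveats raised in the comments below still apply):

```shell
# Sketch: data_source extended to handle .bz2 as well as .gz.
# Assumes gunzip and bunzip2 are installed.
data_source () {
 local F=$1

 # strip a trailing .gz or .bz2 if present (pure bash, no perl)
 F=${F%.gz}
 F=${F%.bz2}

 if [[ -f $F ]] ; then
  cat "$F"
 elif [[ -f $F.gz ]] ; then
  nice gunzip -c "$F.gz"
 elif [[ -f $F.bz2 ]] ; then
  nice bunzip2 -c "$F.bz2"
 fi
}
```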



Comments

GPL tool to handle this

Rob Russell

The big reason why zcat shouldn't be used for this is that zcat will fail if you give it uncompressed data. The whole point is to have a cat that will handle _both_ compressed and uncompressed data. Also, I don't think that checking the filename extension is a valid way to determine file contents.

I wrote ccat to handle not only plaintext and gzipped data, but also bzipped data; it is extensible to handle other compression methods, like compress(1). ccat is at www.administra.tion.ca
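Worth noting: while plain zcat does fail on uncompressed input, gzip itself can act as a "cat for both kinds" via its --force flag, which makes gzip -cd copy non-gzip data to stdout unchanged:

```shell
# gzip -cdf decompresses gzip data and passes anything else through
# unchanged (-c to stdout, -d decompress, -f force), so it handles
# compressed and uncompressed files uniformly:
echo "plain text" > sample
echo "gzipped text" | gzip > sample.gz

gzip -cdf sample      # prints: plain text
gzip -cdf sample.gz   # prints: gzipped text
```

This still doesn't cover bzip2 or other formats, which is where a tool like ccat earns its keep.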

Stripping suffix

macias

Why not just execute


a blast from the past?

Anonymous

Wasn't this same thing done a few months ago? It sure seems familiar.

Function broken

Wodin

This function will do the wrong thing if you have "file.gz" and "file" in the same directory and try to do something with "file.gz".

$ echo "uncompressed file" >file
$ echo "COMPRESSED file" | gzip >file.gz
$ data_source file.gz
uncompressed file

Also, if you make a mistake with the filename, the function just does nothing:

$ data_source fiel.gz

so you might not notice your typo.
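One possible fix for both problems (a sketch, not the author's version): honor an explicit .gz suffix when the caller gives one, and fail loudly when neither file exists.

```shell
# Sketch of a fix: if the caller names a .gz file explicitly,
# decompress it; otherwise prefer the plain file; and report an
# error instead of silently printing nothing.
data_source () {
 if [[ $1 == *.gz && -f $1 ]] ; then
  nice gunzip -c "$1"
 elif [[ -f $1 ]] ; then
  cat "$1"
 elif [[ -f $1.gz ]] ; then
  nice gunzip -c "$1.gz"
 else
  echo "data_source: no such file: $1" >&2
  return 1
 fi
}
```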

overkill, redundant

Anonymous

1/ As indicated, zcat does the trick very well already.
2/ Using Perl to do a simple substitution is huge overkill.
3/ Detecting the type of a file is probably better done with the file(1) command (-i prints a MIME type, -b omits the filename):

$ file -ib foo.tar.bz2

$ file -ib foo.gz

$ mv foo.gz foo.bz2; file -ib foo.bz2
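Building on that point, here is a sketch of a data_source that dispatches on file contents rather than on the filename (it assumes a file(1) that supports -b and --mime-type, and accounts for older versions reporting application/x-gzip):

```shell
# Sketch: decide how to read the file from its contents, not its name.
# file -b prints only the type; --mime-type gives a stable string.
data_source () {
 case $(file -b --mime-type "$1") in
  application/gzip|application/x-gzip) gunzip -c "$1" ;;
  application/x-bzip2)                 bunzip2 -c "$1" ;;
  *)                                   cat "$1" ;;
 esac
}
```

With this version, a gzip file misnamed foo.bz2 is still decompressed correctly.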

Why function?

Martin Zikmund


Try "man zgrep" instead of using this frantic function and piping it.
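For instance, zgrep (which runs gzip -cdfq internally) searches compressed and plain files alike, so rotated logs need no wrapper at all:

```shell
# zgrep handles compressed and uncompressed files uniformly;
# -h suppresses the filename prefix on matches.
echo "ERROR one" > app.log
echo "ERROR two" | gzip > app.log.1.gz

zgrep -h ERROR app.log app.log.1.gz
```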

I miss the point or it lacks the point

Fussy Penguin

I don't see the point, if this is meant as a scripting exercise: it's shooting a mosquito with a Perl cannon, when Bash itself, together with basename, would do the trick.

I guess zcat, zless, zgrep and z"whatever" have been created for that purpose.

Diseducative, after all.

mixing bash scripts w/perl

Anonymous

IMHO, it seems like a bit of overkill to use Perl (and a subprocess, pipe, etc.) for a search/replace inside a bash script. I tend to think that if I'm going to use Perl in a script, it's probably best just to write the entire script in Perl.

While I admittedly haven't gone to the full length of replacing (or even running) the exact piece of code presented, I did throw together a proof-of-concept piece of code that works on fc6 (bash --version: 3.1.17(1)-release):

file="test.gz"  # file with a .gz extension
file2="test"    # file without a .gz extension for comparison

nogzfile=${file/%.gz/}      # replace the post string (%) .gz with nothing
nogzfile2=${file2/%.gz/}    # same replacement on the other file

# look at the output
echo "file: $file"              # test.gz
echo "file2: $file2"            # test
echo "nogzfile: $nogzfile"      # test
echo "nogzfile2: $nogzfile2"    # test

You may want to incorporate this search/replace into your script with something like (not tested):
local F=${1/%.gz/}

In case you're interested, I pulled the pattern-matching part from my favorite bash scripting resource, the Advanced Bash-Scripting Guide:
Reference page used in this example

All in all, a nice function. I'm sure it will provide many people with a useful snippet. Keep up the good work.