Handle Compressed and Uncompressed Files Uniformly


When looking at log files or other files that are compressed and rotated automatically, it's useful to be able to deal with them in a uniform fashion. The following bash function does that:

function data_source ()
{
 local F=$1

 # strip the .gz suffix if it's there (the dot must be escaped
 # so that only a literal ".gz" is removed)
 F=$(echo "$F" | perl -pe 's/\.gz$//')

 if [[ -f $F ]] ; then
  cat "$F"
 elif [[ -f $F.gz ]] ; then
  nice gunzip -c "$F.gz"
 fi
}

Now, when you want to process the files, you can use:

for file in * ; do
 data_source "$file" | ...
done
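For example, a hypothetical run over rotated syslog files (the messages* names and the ERROR pattern are purely illustrative):

# search all rotations, compressed or not, with the same pipeline
for file in /var/log/messages* ; do
 data_source "$file" | grep ERROR
done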

If you have bzip2 files, just modify the data_source function to check for that also.
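A minimal sketch of that extension (the bzip2 branch is my own addition, not part of the original tip, and assumes bunzip2 is installed):

function data_source ()
{
 local F=$1

 # strip a known compression suffix, if present
 F=${F%.gz}
 F=${F%.bz2}

 if [[ -f $F ]] ; then
  cat "$F"
 elif [[ -f $F.gz ]] ; then
  nice gunzip -c "$F.gz"
 elif [[ -f $F.bz2 ]] ; then
  nice bunzip2 -c "$F.bz2"
 fi
}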

______________________

Comments


GPL tool to handle this

Rob Russell

The big reason why zcat shouldn't be used for this is that zcat will fail if you give it uncompressed data. The whole point is to have a cat that will handle _both_ compressed and uncompressed data. Also, I don't think that checking the filename extension is a valid way to determine file contents.
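A quick illustration (a hypothetical transcript; the exact message comes from GNU gzip and may differ on other systems):

$ echo "plain text" > notes.txt
$ zcat notes.txt
gzip: notes.txt: not in gzip format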

I wrote ccat to handle not only plaintext and gzipped data but also bzipped data; it is extensible to handle other compression methods, such as compress(1). ccat is at www.administra.tion.ca

Stripping suffix

macias

Why not just execute
X=${Y%.gz}

?
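That bash parameter expansion removes a trailing .gz with no subprocess at all, so inside data_source the perl pipeline could become simply:

 # remove a trailing .gz, if present (pure bash, no perl needed)
 F=${F%.gz}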

a blast from the past?

Anonymous

Wasn't this same thing done a few months ago? It sure seems familiar.

Function broken

Wodin

This function will do the wrong thing if you have "file.gz" and "file" in the same directory and try to do something with "file.gz".

$ echo "uncompressed file" >file
$ echo "COMPRESSED file" | gzip >file.gz
$ data_source file.gz
uncompressed file
$

Also, if you make a mistake with the filename, the function just does nothing:

$ data_source fiel.gz
$

so you might not notice your typo.
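One possible repair, purely as a sketch of my own (not from the original tip): honor the name actually given before falling back, and report failure instead of staying silent:

function data_source ()
{
 local F=$1

 if [[ $F == *.gz && -f $F ]] ; then
  nice gunzip -c "$F"    # caller explicitly named a .gz file
 elif [[ -f $F ]] ; then
  cat "$F"
 elif [[ -f $F.gz ]] ; then
  nice gunzip -c "$F.gz"
 else
  echo "data_source: $F: no such file" >&2
  return 1
 fi
}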

overkill, redundant

Anonymous

1/ As indicated, zcat already does the trick very well.
2/ Using perl for a simple substitution is huge overkill.
3/ Detecting the type of a file is probably better served by the 'file' command, which looks at content rather than the name:

$ file -ib foo.tar.bz2
application/x-bzip2

$ file -ib foo.gz
application/x-gzip

$ mv foo.gz foo.bz2; file -ib foo.bz2
application/x-gzip
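Building on that, a content-based variant of data_source might look like this (a sketch only; the flag spelling for file varies across versions, with older ones taking -ib and newer ones -b --mime-type, and the MIME strings shown are typical of GNU/Linux):

function data_source ()
{
 local F=$1

 # dispatch on what the file actually contains, not its name
 case $(file -b --mime-type "$F") in
  application/x-gzip|application/gzip) nice gunzip -c "$F" ;;
  application/x-bzip2)                 nice bunzip2 -c "$F" ;;
  *)                                   cat "$F" ;;
 esac
}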

Why function?

Martin Zikmund

Hello,

try "man zgrep" instead of using this frantic_function and piping it.

I miss the point or it lacks the point

Fussy Penguin

I don't see the point, if this is meant as a scripting exercise: it's shooting a Perl cannon at a mosquito when Bash itself, plus basename, would do the trick.

I guess zcat, zless, zgrep and z"whatever" have been created for that purpose.

Miseducative, after all.

mixing bash scripts w/perl

Anonymous

IMHO, it seems like a bit of overkill to use perl (plus a subprocess, pipe, etc.) for a search/replace inside a bash script. I tend to think that if I'm going to use perl in a script, it's probably best just to write the entire script in perl.

While I admittedly haven't gone to the full length of replacing (or even running) the exact piece of code presented, I did throw together a proof-of-concept piece of code that works on fc6 (bash --version: 3.1.17(1)-release):

#!/bin/bash
file="test.gz"  # file with a .gz extension
file2="test"    # file without a .gz extension for comparison

nogzfile=${file/%.gz/}      # replace the post string (%) .gz with nothing
nogzfile2=${file2/%.gz/}    # same replacement on the other file

# look at the output
echo "file: $file"              # test.gz
echo "file2: $file2"            # test
echo "nogzfile: $nogzfile"      # test
echo "nogzfile: $nogzfile2"     # test

You may want to incorporate this search/replace into your script with something like (not tested):
local F=${1/%.gz/}

In case you're interested, I pulled the pattern-matching part from my favorite bash scripting resource, the Advanced Bash-Scripting Guide:
Reference page used in this example

All in all, nice function. I'm sure it will provide many people with a useful snippet. Keep up the good work.
