Tech Tip: Using Ghostscript to Convert and Combine Files
Ghostscript gives you the power to combine files, convert files, and much more, all from the command line.
It is easy to combine several input files into one combined PDF using Ghostscript:
gs -sDEVICE=pdfwrite \
   -dNOPAUSE -dBATCH -dSAFER \
   -sOutputFile=combined.pdf \
   first.pdf \
   second.pdf \
   third.pdf [...]
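If you combine files often, the invocation is easy to wrap in a small shell function. This is just a sketch, assuming gs is on your PATH; combine_pdf is a name of our own invention, not a Ghostscript tool:

```shell
# Hypothetical helper: first argument is the output file,
# the rest are the input files, merged in the order given.
combine_pdf() {
  out="$1"; shift
  gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dSAFER \
     -sOutputFile="$out" "$@"
}
# Usage: combine_pdf combined.pdf first.pdf second.pdf third.pdf
```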
Your input files don't even need to be PDF files. You can also use PostScript or EPS files, or any mixture of the three:
gs -sDEVICE=pdfwrite \
   -dNOPAUSE -dBATCH -dSAFER \
   -sOutputFile=combined.pdf \
   first.pdf \
   second.ps \
   third.eps [...]
The combined.pdf file will contain the input files in the order given on the command line. If you don't want the combined file to be PDF, but PostScript instead, you may want to use this (note that newer Ghostscript releases replace the pswrite device with ps2write):
gs -sDEVICE=pswrite \
   -dNOPAUSE -dBATCH -dSAFER \
   -sOutputFile=combined.ps \
   first.pdf \
   second.ps \
   third.eps [...]
Should you for whatever reason want PostScript level 1 output, add a language level parameter:
gs -sDEVICE=pswrite \
   -dLanguageLevel=1 \
   -dNOPAUSE -dBATCH -dSAFER \
   -sOutputFile=combined.ps \
   first.pdf \
   second.ps \
   third.eps [...]
The default PostScript language level for output is 2. A value of "1.5" is also supported: language level 1 with color extensions.
You can convert color input files into grayscale or monochrome (black-and-white) PostScript like this:
gs -sDEVICE=psgray \
   -dNOPAUSE -dBATCH -dSAFER \
   -sOutputFile=combined.ps \
   first.pdf \
   second.ps \
   third.eps [...]

gs -sDEVICE=psmono \
   -dNOPAUSE -dBATCH -dSAFER \
   -sOutputFile=combined.ps \
   first.pdf \
   second.ps \
   third.eps [...]
Should you for some reason need a series of single-page EPS files made up of pages from various input files, try this (newer Ghostscript releases replace the epswrite device with eps2write):
gs -sDEVICE=epswrite \
   -dNOPAUSE -dBATCH -dSAFER \
   -sOutputFile=p%08d.eps \
   5page-first.pdf \
   7page-second.ps \
   1page-third.eps [...]
The resulting files will be nicely named p00000001.eps through p00000013.eps.
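The p%08d part of -sOutputFile is an ordinary printf-style, zero-padded page counter, so you can preview the names Ghostscript will generate right from the shell:

```shell
# %08d pads the page number to eight digits, one name per argument:
printf 'p%08d.eps\n' 1 13
# prints:
# p00000001.eps
# p00000013.eps
```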
But be aware, converting PDFs back to PostScript (or EPS), as the last five commands did, may lose some or much of the original quality. For example, PostScript can't handle transparencies directly (it fakes them by converting them into bitmap patterns), and converting such a PostScript file back to PDF will not restore the original transparency feature. Other aspects of the graphic quality of the input PDFs may deteriorate as well.
So in general, it's better to stay with PDF and avoid round-trip conversions to PostScript and back to PDF...
Should you need TIFFs or JPEGs from all pages of your input files, try this:
gs -sDEVICE=tiffg4 \
   -dNOPAUSE -dBATCH -dSAFER \
   -sOutputFile=p%08d.tif \
   -r600x600 \
   5page-first.pdf \
   7page-second.ps \
   1page-third.eps [...]

gs -sDEVICE=jpeg \
   -dNOPAUSE -dBATCH -dSAFER \
   -r600x600 \
   -sOutputFile=p%08d.jpg \
   5page-first.pdf \
   7page-second.ps \
   1page-third.eps [...]
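The -r600x600 switch used above sets the render resolution to 600 dpi. The pixel dimensions of each output image are simply the resolution times the page size in inches, which you can sanity-check in the shell. A quick example for a US Letter page (8.5 x 11 in, an assumption about your page size):

```shell
# width: 600 dpi * 8.5 in; height: 600 dpi * 11 in
echo "$(( 600 * 85 / 10 ))x$(( 600 * 11 ))"
# prints 5100x6600
```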
Graphic gurus, check this out. To create color separations (CMYK), use:
gs -sDEVICE=tiffsep \
   -dNOPAUSE -dBATCH -dSAFER \
   -r600x600 \
   -sOutputFile=p%08d.tif \
   5page-first.pdf \
   7page-second.ps \
   1page-third.eps [...]
We included an extra parameter, -r600x600, in the last few examples to make the output resolution 600 dpi, because we don't like the default 72 dpi when it comes to pure full-page image files. Now, you may be surprised: with tiffsep, each single page of the input files automatically produces five different files:
p000000XX.tif
p000000XX.Cyan.tif
p000000XX.Magenta.tif
p000000XX.Yellow.tif
p000000XX.Black.tif
The *.tif file will be the biggest, since it contains a single 32-bit composite CMYK image (tiff32nc format). The four *.Colorname.tif files are not really colored (as one might think from their names); in reality, they are tiffgray files meant for creating offset printing plates for the respective separation in four-color CMYK printing. If Ghostscript autodetects so-called "spot colors" in the input files, these get their own separation files, named *.s1.tif, *.s2.tif, etc. (up to 64 different process and spot colors are supported).
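The per-page naming scheme is easy to reproduce in the shell; this loop is purely illustrative, just echoing the names tiffsep would use for page 1:

```shell
# Composite file first, then the four process-color plates:
for sep in "" ".Cyan" ".Magenta" ".Yellow" ".Black"; do
  printf 'p%08d%s.tif\n' 1 "$sep"
done
# prints:
# p00000001.tif
# p00000001.Cyan.tif
# p00000001.Magenta.tif
# p00000001.Yellow.tif
# p00000001.Black.tif
```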
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Interview with Patrick Volkerding
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.

Get the Guide