Overview of Linux Printing Systems
Today, the PostScript language remains the primary interface for printing in the UNIX world. All major applications can output at least generic PostScript, which is then processed by the printing system until it reaches the printer. This model is quite limited: applications have no unified way of querying printer features, or even of knowing whether a job printed correctly. Very few applications are able to use PPD files to access printer features, although StarOffice and OpenOffice are notable exceptions.
But the situation is improving. For instance, CUPS provides a basic C API that allows applications to integrate more easily with the printing system. This API includes functions to communicate with a CUPS daemon through IPP, as well as functions to read and parse PPD files, and thus to gather detailed information about printers and their capabilities. This is still of limited use to the application developer, however, as it works only with CUPS and similar IPP servers.
On the free software side, the GNOME and KDE desktop projects now both include middle-level layers to facilitate printing: KDEPrint and GNOME-Print. These frameworks aim to provide a unified API to applications by abstracting the underlying printing system.
With the emergence of more advanced printing systems, things are much better than they were just a few years ago. As printing is essential to enterprises, we are beginning to see support from big-name vendors, such as HP and IBM, that strive to improve this infrastructure.
Moreover, the Free Standards Group is working on the OpenPrinting project, whose stated goal is to define the next generation of the printing infrastructure for the Linux operating system. Gathering many experts from industry, this workgroup is defining APIs and standards that will bring Linux up to speed with its competitors.
Stephane Peter is a senior software engineer working for Codehost, Inc. in Culver City, CA. When not playing with printing systems, he can be found playing his guitar or biking around Southern California.