Linux Print System at Cisco Systems, Inc.
The client sends its print job to the central print server and disconnects. The print server takes the job and adds it to the queue for the designated printer. The print server then connects to the printer, and sends the job. Any status is sent to the print server, not the client.
Since the print server has significant storage capacity, it can receive jobs at any time, regardless of what the printer is doing. The client machine can send the job, then move on to another job.
The jobs go through a central queue, which prints them in the order received. Each user should be able (operating system permitting) to see all jobs waiting to print on a printer by looking at this print server queue.
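The spool mechanics behind this central queue are simple: each submitted job lands as a file in the printer's queue directory, and the daemon works through them oldest-first. A toy shell sketch of the idea (the `df-` filenames and layout are illustrative only, not the real lpd spool format):

```shell
# Toy model of an lpd-style spool: each submitted job is a file in the
# queue directory, and the daemon prints them oldest-first.
SPOOL=$(mktemp -d)
for user in alice bob carol; do
    echo "job data from $user" > "$SPOOL/df-$user"
    sleep 1                      # distinct timestamps -> unambiguous order
done

# "lpq": show waiting jobs in the order they arrived (oldest first)
ls -tr "$SPOOL"

# "lprm": remove one job from the queue (here, bob cancels his own)
rm "$SPOOL/df-bob"
ls -tr "$SPOOL"
```

Real spoolers add control files, locking and job IDs, but the ordering and cancellation model is essentially this.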
A system administrator may kill any job on the print server, regardless of its source.
If a printer fails, it is easy to re-route all the jobs from the broken printer to a working one.
Any printer changes can be made on the central print server alone, since this is the only machine that talks directly to the printer.
A central print server system is more complex. It requires a system administrator to set up the print server and keep it running.
If the print server dies, all printing stops, unless a good backup print server is available.
The users have no queue control. Menial tasks such as print job cancellations fall on the shoulders of system administrators, since the users no longer have the permissions or the means to do it themselves as they do in the direct client-to-printer case.
Most larger companies make a half-hearted attempt at the central server approach. The real problems begin when more than one “central” server is implemented. The UNIX system administrator sets up a UNIX print server, the Windows administrator sets up an NT server, and some of the clients skip the servers completely and go directly to the printer. All jobs meet at one printer, where chaos ensues.
You now have all the problems of the central server approach compounded with all the problems of the client-to-printer approach, plus a few extra thrown in for good measure. Printer changes must be implemented on multiple servers by multiple system administrators, leading to multiple potential errors. Multiple machines (now servers instead of clients) compete for the same printer, there's no orderly queueing, and we still don't know where that 2,000-page document is coming from.
To make matters worse, each environment has a different name for the same printer, which makes tracking down printers even more difficult. When a user has a problem, he most likely doesn't know which environment he is trying to print from. He'll call the wrong system administrator, who can't find the user's printer name in his environment. The system administrator will suggest the user call a different group, who will pass the user to another group, and so on. Five system administrators later, the user is back to the first one. Overall, a frustrating experience for everyone. This situation was beginning to occur at Cisco.
After a few months of dealing with these problems, I decided to find a better way. I sat down and detailed what I believed to be the “ideal print system”. It had to have the advantages of the server approach, yet mitigate some of the disadvantages.
- Multi-protocol: The server must speak every protocol in use, both those the clients use to submit jobs and those the printers accept for receiving them.
- Ultra-reliable: Use redundancy to remove the single point of failure inherent in most central server approaches.
- Single point of queueing: No matter where a job comes from or what route it takes, all jobs for a particular printer must land in a single queue handled by one machine.
- Expandable and flexible: Cisco is a growing company. Any system has to be able to scale well and allow frequent reorganization.
- Centrally, de-centrally and remotely manageable: Cisco has offices worldwide, some of which have local expertise and some of which don't.
- Cheap: The system has to be affordable for the small offices, yet expandable for use at headquarters.
- Queue management devolved to the users: System administrators don't have time; users want control.
- Avoid duplication: Any information duplicated by hand is prone to error. Even entering the IP address into both the printer and the print server should be considered duplication.
- Simple to manage: No matter how many servers are added for redundancy or capacity, managing them must remain simple.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
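The find-plus-grep example just described can be written as a single pipeline. The directory, file names and search string below are made up for illustration (a temporary directory stands in for /home so the snippet is self-contained):

```shell
# Stand-in for /home so the example is self-contained
DIR=$(mktemp -d)
echo "ERROR: disk full"   > "$DIR/app.log"
echo "all quiet tonight"  > "$DIR/other.log"
echo "not a log at all"   > "$DIR/notes.txt"

# find selects the .log files; grep -l names the ones containing the entry
find "$DIR" -name '*.log' -exec grep -l 'ERROR' {} +
```

The `-exec … {} +` form batches the matched files into as few grep invocations as possible, which is exactly the kind of tool composition the text describes.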
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
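For readers who haven't looked at one recently, a traditional crontab packs an entire schedule into five time fields per line. A sample crontab (the script paths here are hypothetical) shows both its economy and its limits: there is no way to express job dependencies, retries, or "run only if the previous job succeeded":

```
# min  hour  day-of-month  month  day-of-week  command
0      2     *             *      *            /usr/local/bin/nightly-backup.sh
*/15   *     *             *      1-5          /usr/local/bin/poll-queue.sh
```

The first entry runs every day at 02:00; the second runs every 15 minutes on weekdays (days 1-5).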
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Google's SwiftShader Released
- Non-Linux FOSS: Caffeine!
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model: it considers neither the total cost of ownership nor the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide