The Arrival of NX, Part 1
Here is an early summary of how NX achieves its extraordinary performance. We delve into more details in subsequent parts of this article series.
NX combines three essential ingredients:
an efficient compression of what normally would be X traffic
an intelligent mechanism to store and re-use (cache) data that is transferred between server and client
a drastic reduction in the number of time-consuming X roundtrips, bringing their total close to zero.
If identical data needs to be transferred repeatedly, NX takes it from the cache. If similar data needs to be transferred repeatedly, NX boils that transfer down to a differential one: what it pipes through the link is not the complete data, but only the delta.
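The principle can be illustrated with ordinary shell tools. This is only a toy sketch using diff and patch, not NX's actual protocol or encoding: the first transfer is cached in full on the client side, and a later, similar version is reconstructed from the cache plus a small delta.

```shell
# Toy illustration of cache-plus-delta transfer (not NX's real protocol).
mkdir -p /tmp/nx_demo
printf 'line1\nline2\nline3\n' > /tmp/nx_demo/server_v1.txt
# First transfer: the full data crosses the link and is cached by the client.
cp /tmp/nx_demo/server_v1.txt /tmp/nx_demo/client_cache.txt
# Later, the server-side data changes slightly.
printf 'line1\nline2 CHANGED\nline3\n' > /tmp/nx_demo/server_v2.txt
# Second transfer: only the delta crosses the link...
diff /tmp/nx_demo/client_cache.txt /tmp/nx_demo/server_v2.txt > /tmp/nx_demo/delta.patch
# ...and the client rebuilds the new data from its cache plus the delta.
patch -s /tmp/nx_demo/client_cache.txt /tmp/nx_demo/delta.patch
cmp -s /tmp/nx_demo/client_cache.txt /tmp/nx_demo/server_v2.txt && echo "client reconstructed v2 from cache + delta"
```

The delta file is far smaller than the full payload for large, mostly unchanged data, which is where the bandwidth savings come from.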
These two techniques are not entirely new; previous implementations of them exist, but NX's implementation is arguably more elegant, optimized down to the last bit. Keith Packard thought it was nearly impossible to compress much beyond ZLIB and still perform fast enough, and he concluded that roundtrip suppression was an important factor for success. Yet before NX, roundtrip suppression never had been made efficient enough for X connections, and for reasons unknown to me, Packard never pursued a solution. Solving roundtrip suppression is the most decisive breakthrough in NX. It is the missing link that reduces traffic between the NX client and server enough to make a believable low-bandwidth remote GUI experience possible, and it yields an X compression technology that is far better than plain ZLIB.
We discuss more about NX roundtrip suppression and traffic compression in Parts 2 and 3 of this article series.
To learn more about FreeNX and witness a real-life workflow demonstration of a case of remote document creation, printing and publishing, visit the Linuxprinting.org booth (#2043) at the LinuxWorld Conference & Expo in San Francisco, August 8–11, 2005. I will be there along with other members of collaborating projects.
Kurt Pfeifle is a system specialist and the technical lead of the Consulting and Training Network Printing group for Danka Deutschland GmbH, in Stuttgart, Germany. Kurt is known across the Open Source and Free Software communities of the world as a passionate CUPS evangelist; his interest in CUPS dates back to its first beta release in June 1999. He is the author of the KDEPrint Handbook and contributes to the KDEPrint Web site. Kurt also handles an array of matters for Linuxprinting.org and wrote most of the printing documentation for the Samba Project.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from its stability and efficiency) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's easy to string these tools together to build even more powerful ones, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
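As a sketch of that last example, the two tools chain together in a single command. Here /tmp/demo_logs and the string "ERROR" are stand-ins for /home and whatever entry you are hunting; the demo creates its own sample files so it runs anywhere:

```shell
# Sketch of the "erector-set" example: find every .log file under a
# directory, then grep each one for a particular entry.
mkdir -p /tmp/demo_logs/sub
echo "ERROR: disk full"  > /tmp/demo_logs/app.log
echo "all systems go"    > /tmp/demo_logs/sub/quiet.log
# -H prefixes each match with its filename; '{} +' batches files per grep call
find /tmp/demo_logs -name '*.log' -exec grep -H "ERROR" {} +
```

Against the article's actual example, the same one-liner is `find /home -name '*.log' -exec grep -H "ERROR" {} +`.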
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Doing for User Space What We Did for Kernel Space
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Google's SwiftShader Released
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here: just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide