Once upon a time, there was the mainframe. All application processing was centralized to this enormous beast, and desktop equipment did nothing but display its output. Then the personal computer arrived, ending the tyranny of the mainframe. Individual users suddenly were empowered to install their own applications. Software development and innovation boomed. The personal computers were networked. Thus, the mainframe was slain.
But all did not live happily ever after. The cost of maintaining a workstation on every desktop outgrew the purchase cost long ago. The fact that the dominant operating system is like a Petri dish for viruses and spyware has exacerbated the situation to a point that should be considered intolerable. In most situations, it simply is not desirable to allow users to install software. The only sane management decision is to draw a clear line between users and administrators.
This can be accomplished in large part by using a secure system like Linux on the desktop. Viruses and spyware disappear, and maintenance costs can plummet. But there is still a full system on every desktop that must be maintained. Hard drives fail. Fans fail. Major OS updates are not automatic. Desk space is consumed.
One solution is a step forward that feels like turning back the clock. The thin client is the modern equivalent of the text terminal. It provides a low-profile, low-maintenance appliance for the desktop. Application processing is off-loaded to a centralized system called a terminal server. Linux has emerged as the OS of choice on the thin client, even when the terminal server runs MS Windows. But let's not go halfway. Let's explore in detail how to deploy a Linux thin client with a Linux terminal server.
What makes a client thin? Most important, thin clients have minimal local software that can be stored on a Flash memory module that is read-only for the local user. This is usually a standard CompactFlash card or a Disk On Module (DOM), which is Flash memory with an IDE interface. A small portion of Flash is made writable for saving configuration information, but in a properly configured system, the user will not be able to modify this. Once configured, it is very nearly an appliance as far as the user is concerned.
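The split between a read-only system image and a small writable configuration area can be sketched as an fstab. The device names, mount points and sizes here are illustrative, not taken from any particular vendor's firmware:

```shell
# /etc/fstab on a hypothetical Flash-booted thin client.
# The DOM appears as an IDE disk: the first partition holds the
# read-only system image, and a small second partition is
# mounted read-write for saved configuration.
/dev/hda1  /            ext2   ro,noatime   0 0
/dev/hda2  /etc/config  ext2   rw,noatime   0 0
# Logs and scratch data go to RAM rather than Flash,
# which has a limited number of write cycles.
tmpfs      /var/log     tmpfs  size=4m      0 0
tmpfs      /tmp         tmpfs  size=16m     0 0
```

Pointing /var/log and /tmp at tmpfs keeps routine writes out of Flash entirely, so the only writes the module ever sees are deliberate configuration changes.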
Because most of the processing is performed by the terminal server, a slower CPU can be used; 533MHz is typical. This diminishes the cooling requirements greatly, which means fewer or no fans. The silence is golden.
Because there are no internal drives or expansion cards, motherboard components are reduced, allowing very small form factors. The small form factor, reduced cooling requirements and lack of drives mean a very small enclosure. The model I typically use measures 9.5" tall and 1.75" wide, and has a maximum power consumption of 30W. The smaller power supply also means a smaller UPS. Compare a 700 VA workstation UPS costing $120 US and weighing 17 pounds to a 350 VA thin-client UPS costing $40 US and weighing 11 pounds.
Thin clients have two distinct modes of operation: client and standalone. In standalone mode, the thin client isn't really a client. All necessary applications are loaded in Flash and executed locally, which can drive the purchase cost up by increasing the Flash requirements. The most common application of this is a Web appliance. Any decent thin client will have the ability to boot directly into a Web browser and even prevent the user from exiting the browser or modifying its configuration.
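On a generic Linux box, this browser-appliance behavior can be approximated with an X session that runs nothing but the browser and restarts it if the user ever manages to close it. The browser and URL below are placeholders, not any vendor's actual configuration:

```shell
#!/bin/sh
# ~/.xinitrc for a hypothetical browser-only kiosk session.
# With no window manager and no desktop, there is nothing to
# fall back to; if the browser exits, it is simply relaunched.
xset s off -dpms        # keep the display from blanking
while true; do
    firefox http://intranet.example.com
done
```

A vendor image does essentially the same thing, plus locking down the browser's own preferences so the user cannot change them.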
Here is a big caveat to thin clients: vendor dependence. You can't simply download the latest version of Firefox and install it on a thin client as you can with a workstation. The manufacturer must provide a special image for your make and model. This is something that needs to change, but for now, the software that the manufacturer makes available is a crucial factor in selecting a thin client. If you want Firefox on a standalone thin client, the manufacturer has to provide it. If you want Flash and Java to work, the manufacturer must provide the plugins. Don't expect the plugins to be current releases either. The size of some plugins has outpaced even the plummeting cost of memory. In particular, Acrobat and Java have grown so enormous that it is more reasonable to use an older release than pay for the additional Flash and RAM required to run them.
How software is made available depends on the manufacturer. There are basically two methods. One is to provide individual modules. This allows you to pick and choose, but more labor is involved in preparing the clients. The other method is for the manufacturer to provide monolithic images with all the options needed. This can be practical if the manufacturer is flexible about providing custom images.
When using thin clients in client mode, the applications are all normal installations on the terminal server, which is simply a high-performance server with enough horsepower to do the application processing.
In client mode, the thin client has a dual nature. It is a client in respect to the application services provided by the terminal server, but it is also a server in respect to providing those applications with access to local hardware. The local hardware being served up is primarily a keyboard, video and mouse (KVM), but there also can be local audio, USB storage devices and printers.
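In X terms, this inversion is literal: the X display server runs on the thin client, and the applications on the terminal server are its clients. Assuming the terminal server runs a display manager with XDMCP enabled, and using ts1 as a placeholder hostname, the client side reduces to something like:

```shell
# Start the local X server and ask the terminal server's
# display manager for a login screen via XDMCP. "ts1" is a
# placeholder; the real name comes from the client's config.
X :0 -query ts1

# Alternatively, broadcast to find any willing terminal
# server on the local network:
X :0 -broadcast
```

Everything the user sees after that point, from the login screen on, is running on the server; the client is only drawing it.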
Thin clients are available with Linux, Windows CE and Windows XP Embedded. Barring some desire to use Internet Explorer in standalone mode, there really isn't any reason to consider anything but Linux for a thin client. Even if the terminal server is MS Windows, the fact that Linux is running on the thin client is completely transparent to the user. CE and XP only add software license costs to each client, and XP doubles the Flash and RAM requirements on the client (128MB minimum Flash and RAM for Linux vs. 256MB Flash and RAM for XP). Because of this, the most commonly deployed thin-client configuration today is Linux thin clients connecting to MS Windows terminal servers.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
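That particular combination can be written as a one-liner. The ERROR string and the sample files below are invented for the demonstration; the article's /home is replaced with a temporary directory so the pipeline can be run anywhere:

```shell
# find selects the .log files; grep -l prints only the names
# of files that contain the entry being searched for.
tmp=$(mktemp -d)
mkdir -p "$tmp/alice" "$tmp/bob"
printf 'boot ok\nERROR: disk full\n' > "$tmp/alice/app.log"
printf 'boot ok\n' > "$tmp/bob/app.log"

# Against /home this would be:
#   find /home -name '*.log' -exec grep -l 'ERROR' {} +
matches=$(find "$tmp" -name '*.log' -exec grep -l 'ERROR' {} +)
echo "$matches"
```

The `{} +` form hands find's results to grep in batches, so grep runs once over many files instead of once per file.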
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide.