Desktop Guerrilla Tactics: a Portable Thin Client Approach
As an operating system, Linux has reached the point where it has entered mainstream computing. No longer do people scratch their heads at the mention of its name, nor shake them when they hear it's being used for enterprise applications. Linux has proven its value propositions of cost, scalability and performance in the real world. The final frontier for Linux to conquer is the desktop.
We confronted the reality of Linux on the desktop firsthand while working for an organization in the midst of deploying Linux. Several servers already had been migrated to Linux with much success. Now, the managers were casting a wary eye at the desktop. We demonstrated several desktop-oriented distributions: Lindows, Xandros, Knoppix and Red Hat. Of these, the managers liked the Red Hat environment and support structure the best.
Although they liked what they saw in Linux desktops, the managers felt the responses to the demos were too subjective and too theoretical to justify committing to a wholesale deployment. They wanted to see how the users would react to this shift. The only way to find out was through a pilot group.
We were working against two major constraints. First, the managers wanted to run the pilot group without any major disruptions in their day-to-day operations. If we installed Linux on the pilot group's existing desktops, we would have to do the entire job in half a day. If the pilot group did not like what they saw, we would have to restore the existing Windows desktops just as quickly.
Second, we were working with a hodge-podge of old machines. The desktops were a varied mix of Pentium II and Pentium III computers with different memory and hard disk configurations and no CD-ROM drives. Worse, the hard disks generally had less than 500MB of free space. No way could we dual-boot a decent Linux distribution on these machines.
So, here was the challenge: how could we bring Linux quickly onto the desktop to penetrate the users' defenses? Just as importantly, how could we take Linux out of the environment in case the opposition proved overwhelming? We would have to take a guerrilla approach to conquering the desktop.
One thing working in our favor was the office network. Fortunately, the company had invested in a decent Ethernet infrastructure, and all the machines already were connected. This setup immediately led us to consider a thin client approach to our project.
A thin client approach meant we would be running all the applications off a fat server. The desktops themselves would be responsible only for displaying output on the monitor and accepting input from the keyboard and mouse. But how would we accomplish this?
We were aware of several open-source thin client projects, most notably, the Linux Terminal Server Project (www.ltsp.org) and Netstation (netstation.sourceforge.net). Although these packages have proven popular, we found them complicated to set up and maintain. They required us to put together a tightly coupled server and client environment: critical client files needed to be served through NFS, for example.
An approach we liked better was Virtual Network Computing (VNC) from AT&T (www.uk.research.att.com/vnc). VNC is a remote display system that allows you to view a computing desktop environment from anywhere on a network and control it as if you were sitting in front of that computer. The beauty of VNC is that it works with a wide variety of platforms for both the client and the server. The server and the clients communicate primarily through the VNC protocol, so they are not as tightly linked. We could run it on almost any type of client and any type of server.
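As a concrete illustration, the server side of a VNC session is usually configured through a per-user ~/.vnc/xstartup script that decides which desktop to launch when a virtual display is created. The sketch below assumes the standard vncserver wrapper script; the choice of gnome-session, the geometry and the host name are illustrative, not a record of our exact setup:

```shell
# Prepare a per-user VNC startup script; the vncserver wrapper
# runs this script when it creates the virtual desktop.
mkdir -p "$HOME/.vnc"
cat > "$HOME/.vnc/xstartup" <<'EOF'
#!/bin/sh
# Launch a full GNOME session for the thin client user.
exec gnome-session
EOF
chmod +x "$HOME/.vnc/xstartup"

# A virtual desktop then would be started on the server with, e.g.:
#   vncserver :1 -geometry 1024x768 -depth 16
# and a client would attach to it with:
#   vncviewer server-host:1
```

Because the session lives entirely on the server, disconnecting the viewer leaves the desktop running; the user can reattach later and pick up exactly where he or she left off.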
We thought we had found our answer, so we installed the VNC server on our Linux machine. We put VNC clients on the desktop, running within the Windows environment. Using VNC, our users could access the Linux desktop that was running on our server.
Needless to say, this approach failed dismally. Users followed the path of least resistance and opted to ignore the VNC icons on their Windows desktops. Instead of trying Red Hat, they continued to use their old applications. Luckily, we found this out before deployment to our pilot group.
We were left with only one recourse: we would have to package a small floppy-based distribution that contained a VNC client. Then, with their hard drives disconnected for the duration of the pilot, the users would have no option but to use our thin client network. If the pilot failed, we would reconnect their hard disks and they would be back in their old environment.
Here, in broad strokes, is the thin client approach on which we settled. We assembled a small floppy-based distribution with an SVGA VNC client, and then we set up our Linux machine to act as a fat server to our thin clients. We then deployed our floppy distribution to the client machines. All our work was done with a stock distribution of Red Hat 9, with the exception of some packages we downloaded from the Internet.
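To give each pilot user a dedicated desktop, a fat server of this kind runs one VNC display per user; display :N serves the VNC protocol on TCP port 5900+N, which is what the floppy client's viewer connects to. The loop below is a hedged sketch of that mapping, with the usernames and display numbers purely illustrative:

```shell
# Illustrative mapping of pilot users to VNC display numbers.
# Display :N listens for viewer connections on TCP port 5900+N.
users="alice bob carol"
display=1
for u in $users; do
    port=$((5900 + display))
    echo "$u -> display :$display (tcp port $port)"
    # On the real server, each desktop would be started as that user:
    #   su - "$u" -c "vncserver :$display -geometry 1024x768 -depth 16"
    display=$((display + 1))
done
```

Keeping one display per user means each person's session state survives a client reboot, which matters when the client boots from a floppy with no local storage at all.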