Using Linux in a Training Environment
Our original prototype training room uses a stock 60MHz Pentium processor in its host machine, although newer training rooms are coming on-line with Pentium 90MHz processors or better. There is a huge difference in processing speed between the older Pentium 60 models and the newer Pentium 90 units, although I have had great success with both systems. If you are still hesitant about obtaining a Pentium machine, a 486DX/100 unit will provide comparable performance.
For most training classes, including those on the X Window System and Motif, 16MB is adequate. Of course, greater performance can be obtained by upgrading the host to 32MB of RAM. I recommend getting the full 32MB initially rather than purchasing it later. While our MIS department has been quite accommodating to our hardware requests, your organization may not be as generous; if you have corporate red tape to cut through, request the 32MB up front.
We currently use the Adaptec 1542CF SCSI controllers. These are ISA-based cards which have been stable under Linux for quite some time. I have experimented with the Adaptec 2940 PCI-based controller, but it was a bit too squirrelly for my tastes. Even though the 1542 units are 16-bit ISA cards, my aim was stability first and foremost. A few other cards which I can personally attest to are the Future Domain 1680 series and the older Always IN-2000 cards.
Our first training room used an older 500MB IDE drive. While it served admirably and reliably, it also reached maximum capacity in a hurry. For a full install of Linux, complete with XFree86, I allow a liberal 200MB or so. However, some other storage requirements must be taken into consideration during the planning phase. For instance:
Motif—With the newer X11R6 distributions of SWiM, roughly 30MB of storage is needed for a full install from CD.
Student lab work—Plenty of storage must be set aside for student lab work. Some courses, such as the Shell Programming course, don't require much; others, like our X/Motif Development course, require quite a bit. For a class of eight students, I recommend around 20MB per student for course work.
Linux kernels—If you plan on experimenting with newer revisions of the Linux kernel, set aside plenty of extra room; I recommend around 20MB per revision.
Temporary storage—Plan on setting aside a liberal amount of storage for temporary files (i.e., the /tmp directory). In fact, I recommend that you make this directory a separate file system altogether. I like to have 100-200MB available for a typical temporary storage area.
WWW storage—We run an internal training Web, complete with on-line prep tests for our students. I must point out that even the smallest working Web requires a good bit of storage. We currently have around 20MB or so of web information on-line (including the web server software and our image library).
Working storage—Of course, we need plenty of room to sock away on-line course materials (completed solutions to lab work, shell scripts, etc.). Our instructors also do quite a bit of development and experimentation, so that must be taken into account as well. A few hundred megabytes will work nicely.
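Summing the figures above gives a rough lower bound for the drive. Here is a minimal sketch in shell; the kernel-tree count and the working-storage figure are assumptions chosen for illustration, so plug in your own numbers:

```shell
#!/bin/sh
# Rough disk budget for one training host, in MB, using the figures above.
base=200            # full Linux install, complete with XFree86
motif=30            # SWiM X11R6 Motif install from CD
labs=$((8 * 20))    # eight students at ~20MB each
kernels=$((3 * 20)) # room for three experimental kernel trees (assumed count)
tmp=200             # separate /tmp file system, upper estimate
web=20              # internal training web, server software and images
working=300         # working storage for course materials (assumed figure)
total=$((base + motif + labs + kernels + tmp + web + working))
echo "Estimated total: ${total}MB"
```

Even with conservative numbers, the estimate lands near a gigabyte, which explains why the original 500MB IDE drive filled up in a hurry.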
Of course, any good Linux system needs a CD-ROM unit attached. With most software packages shipping on CD-ROM these days (including Linux), it pays to have one of these drives in place. Should disaster strike, it's much easier to reload the base operating system from CD-ROM than from a tape backup unit. I have had great success with several models from NEC, Sony and Sanyo. Try to stay away from proprietary SCSI interfaces, such as those found on some Compaq CD-ROM drives. That old single-spin wonder unit in the attic may make a perfect candidate for this job, since it won't be used all the time.
Tape drives make perfect solutions for backups. These drives are so fast and quiet that I have actually performed system backups while a class was in session. Any major brand should work nicely, although I can personally attest to the 2GB and 4GB models from Colorado. Even if you have to perform backups on an older 120/250MB Colorado Jumbo, the issue of system and working backups should be addressed immediately.
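A simple tar invocation covers the working backups described above. The sketch below archives a demonstration directory to a plain file; on a tape unit you would point the target at the drive's device node instead (e.g. /dev/st0 for a SCSI tape). The paths here are stand-ins for illustration:

```shell
#!/bin/sh
# Minimal backup sketch: archive a lab-work directory, then list the
# archive contents to confirm the backup is readable.
SRC=${SRC:-/tmp/labwork-demo}
TARGET=${TARGET:-/tmp/labwork-backup.tar}

# Build a small demonstration tree (stands in for a student work area).
mkdir -p "$SRC"
echo "sample lab file" > "$SRC/lab1.c"

# Create the archive, then list its contents as a basic sanity check.
tar -cf "$TARGET" -C "$(dirname "$SRC")" "$(basename "$SRC")"
tar -tf "$TARGET"
```

Dropping a script like this into cron is the natural next step, which is exactly where the scheduling question below comes in.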
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
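The find-plus-grep combination described above fits on one line. The sketch below builds a small demonstration tree so it runs anywhere; the directory layout and the "ERROR" pattern are stand-ins for /home and whatever entry you are after:

```shell
#!/bin/sh
# Build a demo tree with two user directories, each holding a log file.
DEMO=/tmp/logdemo
mkdir -p "$DEMO/alice" "$DEMO/bob"
echo "ERROR: quota exceeded" > "$DEMO/alice/session.log"
echo "all quiet"             > "$DEMO/bob/session.log"

# Find every .log file under the tree and report which ones contain
# the entry.  Against the real tree this would be:
#   find /home -name '*.log' -exec grep -l 'ENTRY' {} +
find "$DEMO" -name '*.log' -exec grep -l 'ERROR' {} +
```

The `-exec ... {} +` form hands the whole file list to a single grep invocation, and `-l` prints only the names of matching files rather than every matching line.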
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
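As a baseline for that comparison, cron handles a simple recurring job with a single crontab entry. The sketch below runs a script at 2:30am on weekdays; the script path is a hypothetical example:

```
# min  hour  day-of-month  month  day-of-week  command
30     2     *             *      1-5          /usr/local/sbin/nightly-backup.sh
```

Where cron falls short is everything around that line: dependencies between jobs, retries on failure, and centralized visibility across many hosts, which is where dedicated scheduling infrastructure comes in.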
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide