Linux in a Windows Workstation Environment, Part I
This series of articles covers the development of a Linux-based server that supports a number of workstations running MS Windows in the computer laboratory of a 55+ RV resort in Mesa, Arizona. The age-old stereotype of senior citizens playing shuffleboard by day and bingo by night is outdated, if it was ever true. Such activities have their place; however, our residents are just as likely to be in the computer room, sending and receiving e-mail or browsing the Web to research their latest financial, medical or recreational question. Our facility protects the local machines from inexperienced and/or inept users, but it also offers sophisticated services for users who need them.
Prior to beginning this project, I had experience using a wide variety of computer systems; however, my UNIX and Linux experience was minimal. My computer background began in 1963 as a Fortran programmer on mainframes in support of my own scientific research. In the early 1970s, I was part of a small team that developed real-time software and hardware for interfacing PDP-11s to scientific instruments. In 1981, I became system manager for a VAX-11/780 and went on to run various systems, gaining some experience with UNIX and Linux along the way, until my retirement in mid-1999. At that time, I became a full-time RV resident, dedicated to the avoidance of cold weather.
In November 1999, we arrived in Mesa, Arizona, and occupied a site in the Mesa Regal RV Resort, which is a 55+ community. Given my long-time involvement with computers, I naturally joined the computer club. It had been established six or seven years earlier, when one of the residents transported his personal computer to a classroom once a week to teach the residents how to use such a machine. The next step was for him and his students to conduct fund-raising and purchase a single PC for teaching purposes.
By the time of my arrival in Mesa, the computer club had expanded from that humble beginning to a dedicated computer room, populated with 8 Windows-based PCs. These machines shared a DSL broadband Internet connection with routing and network address translation services provided by WinGate software running on one of the PCs. This configuration was proposed and implemented by a consultant, as the computer club had no internal expertise in networking.
During my first season in Mesa, the club facilities expanded to 12 computers. The following summer (2000), the RV resort was sold to Cal-Am Properties, Inc., which has a commitment to providing computer access to the residents of its properties. The company's initial contribution was to add ten new workstations and replace the two oldest computers. At this point, however, we ran into a problem: our WinGate license would support only five concurrent Internet sessions, which was not nearly enough for 22 workstations. Because I had more networking experience than the other members, I was asked to propose solutions to this problem. I rejected the first option, purchasing additional licenses for the WinGate software, as it would have been relatively expensive. In addition, my real-time background and experiences with Windows made me highly distrustful of using Windows 98 in a mission-critical role.
The second option was to convert one of the recently retired machines, which did not have sufficient resources to run Windows 98, into a router. I learned that Linux could operate nicely on minimal hardware, so I began developing a router on a 133MHz Pentium with 16MB of RAM, a 1.4GB hard drive and two Ethernet interfaces. The resulting system was built from a SuSE 6.4 distribution, employing a 2.2.x kernel. The firewall and network address translation functions were provided by the ipchains facility. Not only was this system built from surplus equipment at no cost for hardware, it clearly could handle all the workstations. It also added firewall functionality. The only "cost" was development time. This system went into service in November 2000 and served us well for more than one year.
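The heart of such a 2.2.x-era router fits in a few ipchains rules. The following sketch shows the general shape; the interface names and the private subnet are assumptions for illustration, and the club's actual rule set was more extensive:

```shell
#!/bin/sh
# Minimal ipchains masquerading router sketch for a 2.2.x kernel.
# Assumes eth0 faces the DSL modem and eth1 faces a private
# 192.168.1.0/24 lab network -- adjust for your own hardware.

# Enable packet forwarding between the two interfaces.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Refuse to forward anything not explicitly allowed.
ipchains -P forward DENY

# Masquerade (NAT) traffic from the internal network as it
# leaves via the external interface.
ipchains -A forward -i eth0 -s 192.168.1.0/24 -j MASQ
```

Because the workstations all appear to the outside world as the router's single address, one DSL connection serves the entire room.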
In January 2002, the router memory was upgraded from 16MB to 32MB, the kernel was upgraded to the 2.4.x series, and the firewall was rewritten using the iptables facility, which added stateful tracking of each connection. Not only could we block external connection attempts based on TCP or UDP port, we also could pass inbound only those packets belonging to connections that an internal machine had explicitly requested.
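A stateful 2.4.x rewrite of the same policy might look like the sketch below. Again, the interface names and subnet are illustrative assumptions, not the club's exact configuration:

```shell
#!/bin/sh
# Stateful iptables firewall sketch for a 2.4.x kernel.
EXT=eth0            # DSL side (assumed)
INT=eth1            # lab side, 192.168.1.0/24 (assumed)

echo 1 > /proc/sys/net/ipv4/ip_forward

# Drop any forwarded packet not explicitly allowed.
iptables -P FORWARD DROP

# Let internal machines open connections to the Internet...
iptables -A FORWARD -i $INT -o $EXT -j ACCEPT

# ...but from outside, pass only packets that belong to
# connections the inside already initiated.
iptables -A FORWARD -i $EXT -o $INT \
         -m state --state ESTABLISHED,RELATED -j ACCEPT

# Hide the private addresses behind the router's public one.
iptables -t nat -A POSTROUTING -o $EXT -j MASQUERADE
```

The `--state ESTABLISHED,RELATED` match is what makes the difference over ipchains: unsolicited inbound packets simply fall through to the DROP policy.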
In late 2002, a computer with a 400MHz processor, 64MB of RAM and a 6GB hard drive became available. The router system was transferred to this machine, and the previous unit became a cold spare. It never was called on, though, as the newer unit also was stable. It ran until June 2004, with reboots needed only for kernel upgrades and long power failures that exhausted the battery backup unit. At one time, the router had run in excess of one year between reboots.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful ones, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
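As a concrete sketch of that composition, the .log example above can be written as a single command (the search string "ERROR" is just a placeholder):

```shell
# Find every .log file under /home and list those containing
# the string "ERROR". The "-exec ... +" form batches many
# filenames into each grep invocation rather than running
# grep once per file.
find /home -name '*.log' -type f -exec grep -l 'ERROR' {} +
```

Swapping `grep -l` for plain `grep` would print the matching lines themselves instead of just the filenames.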
Cron traditionally has been considered just such a tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide.