A Linux-based Lab for Operating Systems and Network Courses
A Unix systems administration course is currently being taught using the laboratory. On the second class day, students were required to choose a slot in a table whose rows reflected the tasks of a system administrator and whose columns reflected the hardware platforms available in the laboratory. The rows were called “interest groups” and the columns were called “teams”. The interest groups of the table were:
Hardware group—responsible for hardware installation, maintenance, upgrades.
Software group—responsible for software installation, maintenance and upgrade of system software.
Network group—concentrating on networking hardware and software.
Backup and security group—responsible for prevention and monitoring of security for the laboratory.
Documentation group—responsible for maintaining appropriate documentation of a system. The person choosing the documentation group slot for a team was also given the leadership role for that team.
The above-mentioned Pentium processor systems and two older SPARC processor systems were the hardware platforms offered to the student teams. Additionally, each team was allowed to choose an operating system to manage on its machine, subject to license availability. For the Pentium processors, the students were offered Solaris for the 386 or Linux; for the Sun systems, Solaris or SPARC Linux was available. One team expressed considerable interest in integrating Windows NT into the laboratory, and since one NT license was available, it was agreed that this team could support NT, so long as class assignments could be performed on their system.
Class assignments were of three forms: user assignments, team assignments and interest group assignments. Because each team was to provide accounts on another team's platform, each team became a user base for another team. Thus, individual user assignments became team assignments. For example, a user assignment to enter an HTML page containing a Java applet became a team assignment to install a web server and a Java compiler on their system.
Interest group assignments were those that altered or reconfigured the basic capabilities of the laboratory. For example, creating two subnets in the laboratory, creating a backup system for the laboratory machines, monitoring and securing the laboratory were duties in which the interest groups joined together to accomplish the assigned task.
Finally, to address distributed system management issues, all teams were assigned the responsibility of installing, providing and managing file and print services for the laboratory. An Iomega Jaz drive was configured as the boot disk for the machine in the laboratory that was to be the NFS and print server. This machine was also configured with two additional SCSI hard disks, a tape drive and a printer. Each team checked out a Jaz disk and built an operating system on that disk. The only team that had a problem using the removable disks was the Windows NT team, which discovered that (by design) Windows NT cannot place its page files on a removable drive.
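A team's file-server build would have included NFS export and printer configuration along these lines. This is a minimal sketch only: the export path, host wildcard and printer device are hypothetical stand-ins, and the printcap entry uses the classic BSD lpd syntax of the era.

```shell
# /etc/exports -- export home directories, read-write, to lab hosts
# (the lab.example.edu domain here is a hypothetical placeholder)
/home   *.lab.example.edu(rw)

# /etc/printcap -- one shared printer on the first parallel port
lp|lablp|laboratory printer:\
        :lp=/dev/lp0:\
        :sd=/var/spool/lpd/lp:\
        :sh:

# On a system with nfs-utils, re-read the exports table with:
#   exportfs -a
```

Each team's Jaz disk would carry its own copy of these files, so swapping disks swapped the server's entire service configuration along with the operating system.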
Initially a working system was provided for the file server, but occasionally a team's Jaz disk would become the working system for the laboratory. If problems were discovered, the initial system could easily restore needed services, and the team could be given the responsibility of repairing the problem without disrupting services. Maximum chaos could easily be achieved by assigning problems such as rebuilding the kernel on the Jaz disk system.
Because Auburn's Engineering Network Services group was concerned about students having root access to machines connected to the campus network, the Linux machines were isolated on a subnet. This subnet connects to the main College of Engineering network through a secure router that knows the address of the machine connected to each port and routes packets only to the machine designated to receive them. We physically secured the machines by locking the room in which they were kept whenever no paid employee was in the department (not necessarily in the same room), and we routed a fiber optic cable connected to an alarm box through all the machine cases. We soon found this cable frustrating, because it required the assistance of the department's single systems administrator any time a hardware change was necessary (and such changes occurred several times a day during the first months). In spite of these precautions, we lost the motherboards and disk drives of two machines to theft during that quarter. We had failed to realize that the sort of commodity hardware used in this lab is a much more attractive target for theft than the engineering workstations, minicomputers and parallel machines we have traditionally used for academic research and instruction. Further, the constant need to access the hardware inside the PC cases meant that more opportunities existed for the alarms to be disconnected. There seems to be a fundamental tension in a systems lab between making the lab a good development environment and making it secure against a dishonest student: one simply cannot lock down every cable, nut and bolt if any work is to get done. On the other hand, insurance policies are difficult to obtain without evidence of adequate security.
Our current security approach is multifold:
Fiber optic cables run through the machine cases
A lock on each computer's case—keys are available any time during business hours
A digital camera, connected to a 486 machine, used to snap pictures, send them to a file server in a very secure room and serve the latest one to the lab's web page at http://dn102af.cse.eng.auburn.edu/~root/labimage.html
Locking the lab when none of the departmental staff is around
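The camera measure can be sketched as a periodic cron job on the 486. This is a sketch only, not the actual deployment: the capture command, script path and server names are hypothetical stand-ins for whatever hardware driver and hosts the machine actually used.

```shell
# crontab entry on the camera's 486: grab a frame every five minutes
# (the */5 step syntax is Vixie cron)
*/5 * * * *  /usr/local/bin/snap-lab.sh

# /usr/local/bin/snap-lab.sh -- capture a frame and push it to the
# file server in the secure room, which serves the latest image on
# the lab's web page. "capture-frame" is a hypothetical camera driver
# command; "fileserver" is a placeholder host name.
capture-frame /tmp/labimage.jpg
rcp /tmp/labimage.jpg fileserver:/home/root/public_html/labimage.jpg
```

The point of the design is that the images leave the lab immediately, so a thief cannot erase the evidence by taking or wiping the camera machine itself.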
Any individual measure can certainly be circumvented, and of course, a dedicated effort can defeat any security system. So far this system seems to provide a credible theft deterrent without interfering too heavily with productivity.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from its stability and efficiency) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem always to have the right tool for the job.
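That find-plus-grep combination looks like this in practice. The sample tree built here stands in for /home so the example is self-contained; against a real system you would point find at /home directly, and the "ERROR" pattern is just an illustrative entry.

```shell
# Build a small sample tree standing in for /home
mkdir -p /tmp/labdemo/alice /tmp/labdemo/bob
echo "ERROR: disk full" > /tmp/labdemo/alice/httpd.log
echo "all quiet"        > /tmp/labdemo/bob/cron.log

# String the tools together: find locates every .log file under the
# tree, and grep -l prints the names of those containing the entry.
find /tmp/labdemo -name '*.log' -exec grep -l 'ERROR' {} +
# prints /tmp/labdemo/alice/httpd.log
```

The `-exec … {} +` form hands find's results to grep in batches, so the pipeline stays efficient even over thousands of files.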
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
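Cron's model is purely time-based triggering. A crontab like the following (the script paths are hypothetical) runs each job on its own fixed schedule, but cannot express that one job should wait for another to succeed; that gap in dependency handling is exactly what dedicated job schedulers address.

```shell
# min hour dom mon dow  command
30   2    *   *   *     /usr/local/sbin/backup-home.sh    # nightly backup at 02:30
0    3    *   *   0     /usr/local/sbin/rotate-logs.sh    # weekly rotation, Sunday 03:00
```

If the backup overruns into the rotation window, cron starts the rotation anyway; there is no built-in way to chain or gate the two jobs.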
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems. Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.