A Linux-based Lab for Operating Systems and Network Courses
A Unix systems administration course is currently being taught using the laboratory. On the second class day, students were required to choose a slot in a table whose rows reflected the tasks of a system administrator and whose columns reflected the hardware platforms available in the laboratory. The rows were called “interest groups” and the columns were called “teams”. The interest groups were:
Hardware group—responsible for hardware installation, maintenance and upgrades.
Software group—responsible for installation, maintenance and upgrades of system software.
Network group—concentrating on networking hardware and software.
Backup and security group—responsible for backups and for preventing and monitoring security problems in the laboratory.
Documentation group—responsible for maintaining appropriate documentation of a system. The person choosing the documentation group slot for a team was also given the leadership role for that team.
The above-mentioned Pentium processor systems and two older SPARC-based Sun systems were the hardware platforms offered to the student teams. Additionally, each team was allowed to choose an operating system to manage on its machine, subject to license availability. For the Pentium processors the students were offered Solaris x86 or Linux; Solaris or SPARC Linux was available for the Sun systems. One team expressed considerable interest in integrating Windows NT into the laboratory, and since one NT license was available, it was agreed that this team could support NT, so long as class assignments could be performed on their system.
Class assignments were of three forms: user assignments, team assignments and interest group assignments. Because each team was to provide accounts on another team's platform, each team became a user base for another team. Thus, individual user assignments became team assignments. For example, a user assignment to enter an HTML page containing a Java applet became a team assignment to install a web server and a Java compiler on their system.
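For illustration, a minimal page for that user assignment might have looked like the sketch below; the class name HelloApplet is invented, not taken from the course materials.

```html
<!-- Hypothetical assignment page; HelloApplet.class is an invented name -->
<html>
  <head><title>Applet Assignment</title></head>
  <body>
    <h1>Hello from the lab</h1>
    <applet code="HelloApplet.class" width="200" height="100">
      Your browser does not support Java applets.
    </applet>
  </body>
</html>
```

Serving such a page is what turned the individual exercise into a team assignment: the hosting team had to have a web server running and a Java compiler installed before any of its users could complete the page.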
Interest group assignments were those that altered or reconfigured the basic capabilities of the laboratory. For example, creating two subnets in the laboratory, creating a backup system for the laboratory machines, monitoring and securing the laboratory were duties in which the interest groups joined together to accomplish the assigned task.
Finally, to address distributed system management issues, all teams were assigned the responsibility of installing, providing and managing file and print services for the laboratory. An Iomega Jaz drive was configured as the boot disk for the machine in the laboratory that was to be the NFS and print server. This machine was also configured with an additional two SCSI hard disks, a tape drive and a printer. Each team checked out a Jaz disk and built an operating system on its disk. The only team that had a problem using the removable disks was the Windows NT team, which discovered that (by design) Windows NT cannot place its page file on a removable drive.
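On the Linux systems, the file and print services described above reduce to a pair of small configuration files. The sketch below shows the general shape; the host names, paths and printer device are hypothetical, not taken from the lab.

```
# /etc/exports on the Jaz-booted NFS server (hypothetical hosts and paths)
/home        lab01(rw) lab02(rw) lab03(rw)
/usr/local   lab01(ro) lab02(ro) lab03(ro)

# /etc/printcap entry for the shared lab printer
lp|labprinter:\
        :sd=/var/spool/lpd/lp:\
        :lp=/dev/lp1:\
        :sh:
```

Because each team built its own system on a removable Jaz disk, each team had to get files like these right on its own disk before that disk could serve the laboratory.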
Initially a working system was provided for the file server, but occasionally a team's Jaz disk would become the working system for the laboratory. If problems were discovered, the initial system could easily restore needed services, and the team could be given the responsibility of repairing the problem without disrupting services. Maximum chaos could easily be achieved by assigning problems such as rebuilding the kernel on the Jaz disk system.
Due to concerns by Auburn's Engineering Network Services group about students having root access to machines connected to the campus network, the Linux machines were isolated on a subnet, connected to the main College of Engineering network through a secure router that knows the address of the machine connected to each port and routes packets only to the machine designated to receive them. We physically secured the machines by locking the room in which they were kept whenever no paid employee was in the department (not necessarily in the same room), and we routed a fiber-optic cable, connected to an alarm box, through all the machine cases. We soon found this cable frustrating, because it required the assistance of the department's single systems administrator any time a hardware change was necessary (this occurred several times a day during the first months).
In spite of these precautions, we lost the motherboards and disk drives of two machines to theft during that quarter. We had failed to realize that the sort of commodity hardware used in this lab is a much more attractive target for theft than the engineering workstations, minicomputers and parallel machines we have traditionally used for academic research and instruction. Further, the constant need to access the hardware inside the PC cases meant that more opportunities existed for the alarms to be disconnected. There seems to be a fundamental tension in a systems lab between making the lab a good development environment and making it secure against a dishonest student: one simply cannot lock down every cable, nut and bolt if any work is to get done. On the other hand, insurance policies are difficult to obtain without evidence of adequate security.
Our current security approach has several layers:
Fiber optic cables run through the machine cases
A lock on each computer's case—keys are available any time during business hours
A digital camera, connected to a 486 machine, used to snap pictures, send them to a file server in a very secure room and serve the latest one to the lab's web page at http://dn102af.cse.eng.auburn.edu/~root/labimage.html
Locking the lab when none of the departmental staff is around
Any individual measure can certainly be circumvented, and of course, a dedicated effort can defeat any security system. So far this system seems to provide a credible theft deterrent without interfering too heavily with productivity.
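The camera measure above could be automated with something as simple as a cron job. The entry below is a sketch only: the article does not say how the images were captured, so the capture command, host name and paths are assumptions.

```
# Hypothetical /etc/crontab entry on the 486: grab a frame once a minute
# and push it to the file server's web tree ("snap" stands in for whatever
# tool actually drives the camera).
* * * * * root snap /tmp/lab.jpg && rcp /tmp/lab.jpg fileserver:/home/root/public_html/lab.jpg
```

The lab's web page then only needs to reference the latest copied image, so the secure file server, not the exposed 486, is what visitors actually contact.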