A Linux-based Lab for Operating Systems and Network Courses
This article describes our experiences installing and using a teaching laboratory based on the Linux operating system. The lab is a platform for undergraduate operating systems and networking education in the Department of Computer Science and Engineering at Auburn University. We have deliberately made the software and hardware environments of the lab quite heterogeneous, but Linux has always been the workhorse of the lab. Linux was chosen primarily due to the wide range of hardware supported, the sophistication of its kernel and the availability of source code and documentation. We believe that “hands-on” experience is an essential component of computer science education and that most current curricula rely far too heavily on simulation when teaching systems issues.
Our primary motivation for building this laboratory was to avoid shortchanging our students when it comes to practical experience with systems programming. The shortcomings we most wanted to avoid are:
Out-of-date technology—A good systems programming instructor must be familiar with the state of the art and, more importantly, the state of practice. A competent instructor should be aware of such things as memory speeds and sizes, disk drive performance, network standards and CPU performance. These things change rapidly, and far too often we throw up our hands and say “I'll teach the fundamental concepts, but not dirty my hands with implementation details.” Such thinking is fundamentally flawed when applied to systems programming, since what works and what does not is determined by the economics and technology that make some solutions viable and others not.
Too much reliance on simulation—Systems programming is inherently a messy business. Machines can crash; device drivers can hang the system or even the network; hardware can be damaged by poor programming or incorrect wiring. There is a risk that students will use the systems lab to crack other systems. Poorly written network software can result in inadvertent denial of service to other legitimate users of the network.
All these things make a dedicated systems laboratory very difficult to accommodate in today's typical university computing environment of networked Windows PCs and commercial Unix workstations. Systems administrators don't want to put at risk a part of their network which is, almost by definition, broken most of the time.
Thus, systems programming courses often rely exclusively on simulators to give students some experience in systems programming. The problem with this approach is that the real world does not run on a simulator. Simulation adds an extra layer of complexity that the novice must grasp, and it insulates the novice from the fact that real development tools may be primitive or nonexistent and that the designer is often responsible for building the design tools as well. Students need to learn to build their own tools, leveraging the capabilities they have in order to achieve the ends they need when developing new systems.
Aversion to team exercises—We in the computer science and engineering department at Auburn often ask our industrial contacts, “What should we be teaching our students to prepare them for work at your firm?” Inevitably, we are told that the important skills include teamwork and experience with large systems. Because of the difficulties inherent in assessing team performance and in managing the interpersonal issues involved with team-based projects, most university exercises tend to be individual activities based on small, or even toy, systems. Perhaps the underlying obstacle is an unwillingness to depart from the traditional lecture-and-examination method of teaching. Professors have a term that is generally used to refer to teamwork—cheating.
Security concerns—It is a fact of life that not all university students can be trusted in the same ways that employees are trusted. As mentioned above, some protections must be set up so that hacking does not interfere with the work of other students or with that of university staff. Further, physical precautions must be taken against the theft of equipment available for public use. Unfortunately, these concerns seem directly aimed at frustrating any attempt to provide students with “real” systems programming experience, for which they may need root privileges, and for which they will often need access to the inside of the machine.
A dedicated laboratory for systems programming courses, running Linux on commodity hardware, isolated to some extent from the main campus network, seemed to us to be a good way around most of these shortcomings.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from its stability and efficiency) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful ones, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
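A pipeline in that spirit takes only one line: combine find and grep to list every .log file under a directory that contains a given entry. A minimal sketch, using a throwaway /tmp tree as a stand-in for /home (the paths and "ERROR" entry are purely illustrative):

```shell
# Set up a small illustrative tree (stand-in for /home).
mkdir -p /tmp/tooldemo/alice /tmp/tooldemo/bob
echo "ERROR: disk full" > /tmp/tooldemo/alice/app.log
echo "all quiet today"  > /tmp/tooldemo/bob/app.log

# Find every .log file under the tree, then print the names of
# those that contain the entry "ERROR" (grep -l lists matching files).
find /tmp/tooldemo -name '*.log' -exec grep -l 'ERROR' {} +
```

Here find supplies the file-selection criteria and grep supplies the content search; neither tool knows about the other, yet together they form exactly the log-scanning tool described above.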
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
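For context, cron's whole interface is one crontab line per job: five time fields (minute, hour, day of month, month, day of week) followed by a command. A minimal sketch, with a hypothetical script path:

```
# m  h  dom mon dow  command
 30  2   *   *  1-5  /usr/local/bin/nightly-report.sh
```

This runs the report at 2:30am on weekdays. The format expresses time-based recurrence well, but it has no notion of job dependencies, retries or cross-machine coordination, which are the kinds of gaps that typically motivate moving beyond cron.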
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide