School Laboratory Management Blues
One of the main challenges in running a school computer laboratory is managing the software installed on the PCs allocated for student use. System administrators face a variety of scenarios, all stemming from the nature of a learning environment in which every user has superuser access to his or her machine. For example:
Essential system programs might be inadvertently or maliciously deleted, thus bringing down an application or the operating system itself; or
Installation of new applications might necessitate a change in some system libraries, thereby creating conflict with existing applications; or
Classes that require scratch installations of an operating system may disrupt subsequent classes dealing with databases or web servers.
In addition, there are several variations of the scenarios depicted above. Compounding the problem are scheduling pressures, as school computer laboratories frequently must accommodate a variety of classes, each one having its own unique requirements. Even administrators who manage a laboratory dedicated to one class per week still have to reinstall their systems for subsequent classes.
System management tools designed for the enterprise generally are inappropriate for a school computer laboratory environment. They were meant to manage a more-or-less consistent profile of computer systems across a company, one that changes less frequently and one with stricter usage policies than a learning environment. The cost of a commercial system management suite also can be prohibitive for a school.
Scripted installation tools, such as Red Hat's Kickstart program, can automate the installation of the operating system, but external and customized applications must be packaged separately by administrators. While this is a straightforward process for some applications, it can be quite complex for larger applications, especially those requiring interactive user input for configuration.
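For illustration, here is a minimal Kickstart file sketch. The directives are standard, but every value shown (password, partition sizes, packages) is a placeholder to replace with your own; note also that the installer writes a ready-made record of any manual installation to /root/anaconda-ks.cfg, which makes a convenient starting point:

# minimal Kickstart sketch -- all values are placeholders
install
lang en_US.UTF-8
keyboard us
rootpw changeme
timezone America/Chicago
bootloader --location=mbr
# wipe the disk and lay out root and swap partitions
clearpart --all --initlabel
part / --fstype ext3 --size 4096
part swap --size 512

%packages
@ base
httpd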
One solution that addresses these issues is to maintain an archive of system images required for every class that is run in the laboratory. A system image can be restored to a personal computer as needed, typically in under half an hour. Far more efficient than a scratch OS installation, the system image does not need to be configured or loaded with additional required applications: configuration files and programs are stored as part of the image itself.
Thus, if an operating system is crippled because of a deleted system file, the administrator can simply restore the computer with the image pertaining to a certain class. Software with conflicting library requirements or even different operating system flavors can be installed in different system images and restored as needed. Classes with pre-established software requirements no longer need to go through the whole installation process to set up their systems.
Additionally, system images can be built on one master machine and re-used on machines with the same or similar configurations. This is a significant time saver for system administrators, as they can now build the image once and deploy it to several machines simultaneously.
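One simple way to move an image to another machine over the network, sketched here with the netcat (nc) utility and a hypothetical host name, is to pipe dd through a TCP connection. On the target machine, booted from the master operating system, listen on a port and write whatever arrives to the partition:

nc -l -p 9000 | dd of=/dev/hda2

Then, on the machine holding the image, send it:

dd if=myimage.img | nc target-pc 9000

The listening syntax varies slightly between netcat flavors, and this reaches one target per connection; truly simultaneous deployment to many machines is the province of multicast tools such as udpcast.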
Several commercial system image utilities are on the market, Ghost and DriveImage being the most popular, and on the whole they are not expensive. However, school system administrators under severe budget constraints can avoid the purchase altogether, and gain greater flexibility, by using basic utilities that ship with almost every Linux distribution.
Typically, an operating system, along with its configuration files and all the installed applications, resides in an entire partition of a computer's hard disk. When the computer boots up, the bootloader points to this partition as the source of all the required programs for normal operation. So at the heart of it all, you need a utility to back up an entire hard-disk partition to a file and, conversely, to restore it from a file to a partition.
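To see which device name corresponds to which partition, the fdisk utility that ships with virtually every distribution can print the disk's partition table (the device name here is only an example):

fdisk -l /dev/hda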
The dd utility suits this purpose. dd's documentation modestly claims that it copies and converts files. In reality, it is a powerful utility because Linux and other UNIXes treat everything as files. For example, if you wanted to write an image of the second partition of your first hard drive (/dev/hda2) to a file, you would issue the command
dd if=/dev/hda2 of=myimage.img
Because dd copies every block of the partition, used or not, a 2GB partition yields a 2GB image file, so make sure the directory you are copying the image to has sufficient capacity. Also, make sure the partition you are cloning is not mounted or otherwise in use.
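If space is at a premium, you can compress the image as it is created by piping dd through gzip. This is only a sketch; the bs= option is merely a throughput tweak, and a compressed image has to be unpacked through gunzip when it is restored:

dd if=/dev/hda2 bs=4M | gzip > myimage.img.gz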
Restoring an image to a partition is just as easy:
dd if=myimage.img of=/dev/hda2
The imaging utility can't run by itself, of course; it must be run from an operating system, and that operating system must have access both to the partitions you are modifying and to the location where the image files are stored. The other required components are a bootloader and a partitioning utility.
The way this would work in practice, then, is:
Boot from the master operating system.
Use dd to restore a saved image to a preset partition.
Reboot the system and use the bootloader to select the partition from which the machine should boot.
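To make the last step concrete, here is a hypothetical menu stanza for the legacy GRUB bootloader's menu.lst, booting the system restored to the second partition. The kernel and initrd file names are placeholders that will differ on your machines (LILO users would make the equivalent entry in lilo.conf):

title Class image on /dev/hda2
root (hd0,1)
kernel /boot/vmlinuz ro root=/dev/hda2
initrd /boot/initrd.img

Note that GRUB counts disks and partitions from zero, so (hd0,1) refers to /dev/hda2.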