School Laboratory Management Blues
One of the main challenges in running a school computer laboratory is managing the software running on the different PCs allocated for use by students. System administrators face various scenarios, all stemming from the nature of a learning environment in which users all have superuser access to their machines. For example:
- Essential system programs might be inadvertently or maliciously deleted, bringing down an application or the operating system itself.
- Installation of new applications might necessitate a change in some system libraries, creating conflicts with existing applications.
- Classes that require scratch installations of an operating system may disrupt subsequent classes dealing with databases or web servers.
In addition, there are several variations of the scenarios depicted above. Compounding the problem are scheduling pressures, as school computer laboratories frequently must accommodate a variety of classes, each one having its own unique requirements. Even administrators who manage a laboratory dedicated to one class per week still have to reinstall their systems for subsequent classes.
System management tools designed for the enterprise generally are inappropriate for a school computer laboratory environment. They were meant to manage a more-or-less consistent profile of computer systems across a company, one that changes less frequently and one with stricter usage policies than a learning environment. The cost of a commercial system management suite also can be prohibitive for a school.
Scripted installation tools, such as Red Hat's Kickstart program, can automate the installation of the operating system, but external and customized applications must be packaged separately by administrators. While this is a straightforward process for some applications, it can be quite complex for larger applications, especially those requiring interactive user input for configuration.
One solution that addresses these issues is to maintain an archive of system images required for every class that is run in the laboratory. A system image can be restored to a personal computer as needed, typically in under half an hour. Far more efficient than a scratch OS installation, the system image does not need to be configured or loaded with additional required applications: configuration files and programs are stored as part of the image itself.
Thus, if an operating system is crippled because of a deleted system file, the administrator can simply restore the computer with the image pertaining to a certain class. Software with conflicting library requirements or even different operating system flavors can be installed in different system images and restored as needed. Classes with pre-established software requirements no longer need to go through the whole installation process to set up their systems.
Additionally, system images can be built on one master machine and re-used on machines with the same or similar configurations. This is a significant time saver for system administrators, as they can now build the image once and deploy it to several machines simultaneously.
Several commercial system image utilities are on the market, Ghost and DriveImage being the most popular, and on the whole they are reasonably priced. However, school system administrators under severe budget constraints can skip the purchases altogether, and gain greater flexibility, by using basic utilities that ship with almost every Linux distribution.
Typically, an operating system, along with its configuration files and all the installed applications, resides in an entire partition of a computer's hard disk. When the computer boots up, the bootloader points to this partition as the source of all the required programs for normal operation. So at the heart of it all, you need a utility to back up an entire hard-disk partition to a file and, conversely, to restore it from a file to a partition.
The dd utility suits this purpose. dd's documentation modestly claims that it copies and converts files; in reality, it is a powerful utility, because Linux and other UNIX systems treat everything, including disk partitions, as files. For example, to write an image of the second partition of your first hard drive (/dev/hda2) to a file, issue the command:
dd if=/dev/hda2 of=myimage.img
A 2GB partition produces a 2GB image file, so make sure the directory you are writing the image to has sufficient free space. Also ensure that the partition you are cloning is unmounted and not being used for any other activity.
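Because a saved image may sit in the archive for weeks before a class needs it, it is worth recording a checksum at backup time and re-checking it before a restore. The sketch below uses a scratch file in place of the unmounted partition, and the filenames are illustrative, not from the article:

```shell
# Scratch file standing in for an unmounted partition such as /dev/hda2
dd if=/dev/urandom of=part.demo bs=1M count=2 2>/dev/null
# Take the image, exactly as in the article's dd command
dd if=part.demo of=myimage.img bs=1M 2>/dev/null
# Record a checksum at backup time...
md5sum myimage.img > myimage.img.md5
# ...and verify it before trusting the image for a restore
md5sum -c myimage.img.md5
```

On a healthy archive the last command reports the image as OK; a corrupted or truncated image fails the check before it can be written to a student machine.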
Restoring an image to a partition is just as easy:
dd if=myimage.img of=/dev/hda2
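Since an image is as large as its partition, a long-term archive fills up quickly. dd pipes cleanly through gzip, which shrinks images considerably when a partition contains unused (often zeroed) space. A minimal sketch, again using scratch files rather than a real device:

```shell
# Mostly-empty scratch "partition" standing in for /dev/hda2
dd if=/dev/zero of=zero.demo bs=1M count=4 2>/dev/null
# Backup: pipe the partition through gzip instead of writing raw
dd if=zero.demo bs=1M 2>/dev/null | gzip -c > myimage.img.gz
# Restore: decompress on the fly, writing back through dd
gunzip -c myimage.img.gz | dd of=restored.demo bs=1M 2>/dev/null
cmp zero.demo restored.demo && echo "round trip intact"
```

The compressed image of a mostly-empty partition is a tiny fraction of the raw size; the trade-off is the CPU time spent compressing and decompressing during backup and restore.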
dd cannot run by itself; it must be run from an operating system that has access both to the partitions being modified and to the area where the image files are stored. The other required components are a bootloader and a partitioning utility.
The way this would work in practice, then, is:
1. Boot from the master operating system.
2. Use dd to restore a saved image to a preset partition.
3. Reboot the system and use the bootloader to select the partition from which the machine should boot.
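The steps above lend themselves to a small helper script run from the master operating system. The function below is a hypothetical sketch: the image directory, class names and target partition are placeholders, not details from the article, and the demonstration runs against scratch files rather than a real disk:

```shell
# Hypothetical restore helper; paths and names are assumptions.
restore_class() {
    image_dir=$1 target=$2 class=$3
    image="$image_dir/$class.img"
    if [ ! -f "$image" ]; then
        echo "no image archived for class '$class'" >&2
        return 1
    fi
    # refuse to overwrite a partition that is currently mounted
    if grep -q "^$target " /proc/mounts; then
        echo "$target is mounted; unmount it first" >&2
        return 1
    fi
    dd if="$image" of="$target" bs=4M 2>/dev/null
}

# Demonstration: a 1MB scratch image restored to a scratch "partition"
mkdir -p /tmp/images
dd if=/dev/urandom of=/tmp/images/databases.img bs=1M count=1 2>/dev/null
restore_class /tmp/images /tmp/hda2.demo databases && echo "databases image restored"
```

With one image archived per class, restoring a machine between sessions reduces to a single invocation, after which the administrator reboots and selects the restored partition from the bootloader.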