Remote Linux Explained
As we all know, life without a root filesystem (/) is meaningless. When booting locally, the root filesystem is almost always found on the local hard drive. So where does root come from on a network-booted client? There are two choices: a root filesystem mounted over NFS from the server, or a RAM disk. If you provide root via NFS, by default the kernel looks for root in /tftpboot/ip, where ip is the IP address of your client. This requires starting NFS on the server and exporting /tftpboot (or /tftpboot/ip for each node). To get the client node to boot to a login prompt, the root filesystem must meet several requirements: it needs the init and shell binaries; devices, at a minimum the console device; and any dynamically loaded libraries that the init and shell binaries depend on.
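For the NFS-root case, the export can be as simple as one line in /etc/exports on the boot server (a sketch; the subnet shown is an assumption chosen to match the article's 192.168.64.x client addresses):

```
# /etc/exports on the boot server -- export the client root trees
# read-write and without root squashing, so the client's root user
# keeps ownership of its own files:
/tftpboot   192.168.64.0/255.255.255.0(rw,no_root_squash)
```

After editing the file, running exportfs -ra tells most Linux NFS servers to re-read it.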
A quick-and-dirty method of populating a remote root filesystem would be to copy init, sh, the necessary libraries and a console device, as in:
cp /sbin/init /tftpboot/192.168.64.1/sbin/init
cp /bin/sh /tftpboot/192.168.64.1/bin/sh
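These copies assume the remote root's directory skeleton already exists. A minimal sketch of creating it (using /tmp/remote-root as a stand-in for /tftpboot/192.168.64.1 so it can run unprivileged):

```shell
# Create the handful of directories a bare remote root needs.
# /tmp/remote-root stands in for /tftpboot/192.168.64.1 here.
ROOT=/tmp/remote-root
mkdir -p "$ROOT/sbin" "$ROOT/bin" "$ROOT/lib" "$ROOT/dev" "$ROOT/etc"
ls "$ROOT"
```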
To determine the dynamically loaded libraries for init, use the ldd command:
ldd /sbin/init
ldd /bin/sh

and then copy the libraries listed by the ldd commands to /tftpboot/ip/lib. To make the devices, there is a handy MAKEDEV command, part of the MAKEDEV package:
/dev/MAKEDEV -d /tftpboot/192.168.64.1/dev console

If your other services are up and running correctly on the server, when you force a network boot on the client, it will run the init binary from its remote root, using the console provided there, bring up a shell and prompt for a runlevel (since there is no /etc/inittab file in the remote root). Enter s for single-user mode, and just like that, your client is up and running at a shell prompt.
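The copy-the-libraries step above can be scripted rather than done by hand. A sketch, again using /tmp/remote-root in place of /tftpboot/192.168.64.1; note that ldd also lists the dynamic loader (ld-linux*.so) on a line of its own, and it must be copied as well:

```shell
# Copy every shared object /bin/sh depends on into the remote
# root's lib directory; repeat the same loop for /sbin/init.
# /tmp/remote-root stands in for the real /tftpboot/192.168.64.1.
ROOT=/tmp/remote-root
mkdir -p "$ROOT/lib"
# ldd output looks like:
#   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
#   /lib64/ld-linux-x86-64.so.2 (0x...)
# Keep the resolved path from "=>" lines plus the bare loader path.
ldd /bin/sh | awk '/=> \// { print $3 } $1 ~ /^\// { print $1 }' |
while read -r lib; do
    cp "$lib" "$ROOT/lib/"
done
ls "$ROOT/lib"
```

On a real system the libraries land in the client's lib directory under /tftpboot, alongside the init and sh binaries copied earlier.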
A special shell called sash (for standalone shell) is available and is extremely useful in the remote environment. This is because sash has no dynamically loaded libraries and provides built-in versions of standard commands that manipulate filesystems (mount, umount, sync), change file permissions and ownership (chmod, chgrp, chown) and archive files (ar, tar), among other things. Instead of starting sh, for example, you can copy /sbin/sash to /tftpboot/ip/sbin/sash, and the kernel will bring up the standalone shell instead. You also might want to provide your own rudimentary inittab file to run sash on startup.
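Such an inittab might look like the following (a hypothetical sketch; entries use the standard SysV init id:runlevels:action:process format):

```
# /tftpboot/192.168.64.1/etc/inittab (hypothetical sketch)
# Default to single-user mode and respawn the standalone shell
# on the console.
id:1:initdefault:
co:1:respawn:/sbin/sash
```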
In this article we've explored a few of the services and methods used to boot Linux remotely. Remote Linux is extremely fertile ground for continuing research. As networks become faster and can support greater numbers of remote clients, and as clusters become larger and have greater dependency on centralized administration, remote Linux techniques will play an even greater role in the industry. With the advent of dense server technology, remote Linux has become not just a convenience but a necessity.
I gratefully acknowledge the research of Vasilios Hoffman from Wesleyan. “V”, as he likes to be called, demonstrated the use of loopback devices in creating RAM disks and how to create modular network bootable kernels correctly. V is simply a wealth of Linux information.
Richard Ferri is a senior programmer in IBM's Linux Technology Center, where he works on open-source Linux clustering projects such as LUI (oss.software.ibm.com/lui) and OSCAR (www.openclustergroup.org). He has a BA in English from Georgetown University and now lives in upstate New York with his wife, Pat, three teen-aged sons and three dogs of suspect lineage.