Remote Linux Explained
For a node to boot correctly over the network, the server and client node must cooperate in several well-defined ways. There are several basic requirements for setting up nodes for remote booting:
A well-stocked server: the server must have the proper services running (we'll talk about those later) that can provide information required by the remotely booting node.
A method of remote power control: to start the node's boot sequence, you have to be able to power up or reset the node. You really cannot rely on the operating system on the node to be present at this time—after all, sometimes you need to reset the node because the operating system crashed.
A network: the nodes all have to have a route, direct or indirect via a gateway, to the server. We'll only talk about Ethernet networks here.
A network-bootable Linux kernel: the kernel can be either compressed or uncompressed, tagged or untagged. Tagged kernels, discussed later in this article, are used by the Etherboot solution. Also, see the Etherboot web site in the Resources section for more information.
A network loader: a method of reading the network boot kernel from the server and placing it correctly in the memory of the node.
A filesystem: in the good old days, the root filesystem was served over the network via NFS. With solutions like SYSLINUX (whose PXELINUX loader handles network booting), you actually can provide the entire filesystem as a RAM disk, sent over the network along with the kernel.
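The server-side pieces behind the list above can be sketched concretely. The following is a minimal sketch assuming ISC dhcpd, a TFTP root of /tftpboot and an NFS-exported root at /export/node1; all of the paths, addresses and the MAC are examples, not requirements, and the fragments are written to a demo directory rather than /etc:

```shell
# Example-only paths and addresses; adjust for your own network.
CONF_DIR=${CONF_DIR:-/tmp/remote-boot-demo}   # demo dir, not /etc
mkdir -p "$CONF_DIR"

# 1. DHCP: answer the node's broadcast and point it at the network loader.
cat > "$CONF_DIR/dhcpd.conf" <<'EOF'
subnet 192.168.1.0 netmask 255.255.255.0 {
    host node1 {
        hardware ethernet 00:50:56:aa:bb:cc;   # the node's MAC address
        fixed-address 192.168.1.10;
        next-server 192.168.1.1;               # the TFTP server
        filename "pxelinux.0";                 # the network loader
    }
}
EOF

# 2. NFS: export the node's root filesystem.
cat > "$CONF_DIR/exports" <<'EOF'
/export/node1 192.168.1.10(rw,no_root_squash,sync)
EOF

echo "wrote demo configs to $CONF_DIR"
```

The real files would, of course, live at /etc/dhcpd.conf and /etc/exports, and the services would need to be restarted after editing them.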
Now that we have a basic idea of the flow of the remote booting process and a general list of the client's requirements for the server, let's discuss in practical terms the options for providing all these services.
Remote power control is a small subset of the available possible remote hardware functions. Various manufacturers provide specialized hardware and software with their offerings that provide robust remote hardware control features, such as monitoring of temperature, fans and power, power control, BIOS updating, alerts and so on. The only remote hardware control features we really need in order to boot the nodes remotely are the ability to power on and power off, although having a reset feature is handy as well.
To force a network boot, the boot order on the workstation probably has to be modified. Typically, the boot order is something like diskette, CD-ROM and then hard drive. Most often, booting over the network is either not on the boot list at all or at the very bottom of the list, after the last of the local media. Since most workstations are shipped with some viable operating system already installed on the hard drive, a reset or power-on of the node will boot it from its local hard drive.
To do true network booting, you must have an Ethernet adaptor that is PXE-compliant. PXE is the Preboot eXecution Environment, part of Intel's Wired for Management (WfM) initiative. PXE-compliant means that the Ethernet adaptor is able to load and execute a network loader program from a server prior to bringing over the kernel itself. Both the BIOS on the node and the firmware on the Ethernet adaptor must cooperate to support PXE booting. A PXE-compliant system is capable of broadcasting the adaptor MAC address, receiving a response from the server, configuring the adaptor for TCP/IP, receiving a network loader over the network and transferring control to it.
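The network loader that the PXE firmware fetches comes down via TFTP, so the server must offer that service too. On servers of the era, tftpd typically ran under inetd; a hedged example line for /etc/inetd.conf follows (the daemon path and the /tftpboot root are assumptions that vary by distribution):

```
tftp  dgram  udp  wait  root  /usr/sbin/in.tftpd  in.tftpd -s /tftpboot
```

The -s flag runs tftpd in secure mode, chrooted to /tftpboot, so the loader filename handed out by DHCP is resolved relative to that directory.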
Although this is an article about remote Linux, and a diskette is a local medium, there is a hybrid of remote booting that is so important it must be mentioned. Since many Ethernet adaptor/node BIOS combinations are not PXE-compliant, an open-source solution has sprung up: Etherboot. Etherboot provides a method of creating a boot diskette that contains a simple loader and an Ethernet device driver. Set the boot list to diskette, and when the client is booted, the Etherboot diskette takes over, loading the driver into memory and broadcasting the MAC address, looking for a server. In the PXE case, the server is conditioned to respond with a network boot loader; in the Etherboot case, the server must respond with a tagged network boot kernel. A kernel is tagged by running the mknbi command against the kernel. (See etherboot.sourceforge.net/doc/html/mknbi.html for further information on mknbi.) The point of network booting the node is to get the kernel into local memory. Regardless of whether you choose the PXE scenario or the Etherboot scenario, the network boot path quickly coalesces—the kernel is in memory and control is passed to it.
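Tagging a kernel for Etherboot is a short affair on the server. The following command sketch assumes the mknbi tools from the Etherboot project are installed; the kernel path, NFS server address and exact option names are illustrative and may differ between mknbi versions, so check the mknbi documentation cited above:

```shell
# Hypothetical paths; a kernel built with NFS-root support is assumed.
KERNEL=/boot/vmlinuz-2.4.18
TFTPROOT=/tftpboot

# Tag the kernel so the Etherboot loader can place it in the node's memory,
# appending the kernel arguments that point it at its NFS root filesystem.
mknbi-linux --output=$TFTPROOT/vmlinuz.nb \
            --append="root=/dev/nfs nfsroot=192.168.1.1:/export/node1" \
            $KERNEL
```

The resulting vmlinuz.nb lives in the TFTP root, where the server hands it to the Etherboot loader in place of the plain kernel image that PXE setups use.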