MILLE-XTERM and LTSP
Diskless terminals need a way to store configuration data, such as screen resolution and available printers. Under LTSP, a central file called lts.conf stores the configuration of terminals and has to be edited manually. With thousands of terminals, you need a hierarchical database—that's the purpose of the configurator.
This component is written in PHP and has two interfaces. The first is dedicated to terminals. During the boot process, the terminal requests its configuration from the server using its MAC address as a parameter. The server generates the corresponding configuration and sends it to the terminal in the standard lts.conf format. A wrapper around the getltscfg command ensures backward compatibility with the other LTSP scripts.
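For reference, what the terminal receives is an ordinary lts.conf fragment. A minimal, hypothetical example (the MAC address section and the directive values below are illustrative, not taken from an actual MILLE-XTERM deployment) might look like this:

[Default]
        SERVER              = 192.168.0.254
        X_MODE_0            = 1024x768
        RUNLEVEL            = 5

[00:11:22:33:44:55]
        X_MODE_0            = 800x600
        PRINTER_0_DEVICE    = /dev/lp0
        PRINTER_0_TYPE      = P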
The other interface lets administrators manage the configuration of the terminals via a Web browser. Administrators can organize terminals hierarchically by groups and apply configurations according to specific criteria, such as location or hardware type. But the configurator serves yet another function. It is designed to work with links, a console text browser, as shown in Figure 3. The terminal can boot in a special admin mode that does not require running the X server. To boot in this mode, the option mode=admin is appended to the kernel options in the bootloader configuration. Then, links is launched with the configurator URL, passing the terminal's MAC address as a parameter. The administrator can change the terminal settings directly. When the changes are complete, the terminal reboots and receives its new configuration.
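On a terminal booted with PXELINUX, for instance, the admin mode could be selected with an entry along these lines (a sketch only; the kernel and initrd filenames are assumptions, and mode=admin is the part that matters):

label admin
  kernel vmlinuz.ltsp
  append ro initrd=initrd.img root=/dev/ram0 init=/linuxrc mode=admin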
The configurator also is useful for building terminal inventories. Hardware information is sent to the configurator during the boot process. Administrators can generate reports regarding the state of the terminals. Also, every connection to the configurator is logged and then can be analyzed to determine terminal usage, user login information and much more. You know how managers like reports!
When a terminal boots, it requests a display from an application server. To dispatch users across the available application servers, MILLE-XTERM provides a load balancer. The first version of the load balancer (a proof of concept) required five lines of PHP and returned a random address from a static list of application servers. Although simple, this approach had some drawbacks. First, an off-line server should be removed from the list so that it is never returned to a terminal. Second, reliable load balancing has to take several factors into account, such as the number of processors, their speed and the load average. Therefore, a much more robust and complete Python system has replaced the initial prototype (Figure 4). The load-balancer agent runs on every application server, collecting data on the state of that server and waiting for requests from the load-balancer server. The balancer itself is also a Python script; it runs on the boot server, contacts each load-balancer agent to determine its state and computes a weight for each server. A greater weight indicates that the server is less loaded and, statistically, will be selected more often to accept new users. When a terminal requests an application server, the load-balancer server picks one at random from the weighted list.
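The selection step is easy to picture. The short Python sketch below mimics it with hard-coded weights (the server addresses and weight values are hypothetical; the real lbserver derives its weights from the data reported by each lbagent):

import random

# Hypothetical weights: a higher value means the server is less
# loaded and should receive proportionally more new users.
weights = {"10.0.0.1": 8, "10.0.0.2": 3, "10.0.0.3": 5}

def pick_server(weights):
    """Return one server address, chosen at random in proportion to its weight."""
    addresses = list(weights)
    return random.choices(addresses, weights=[weights[a] for a in addresses])[0]

# Over many requests, servers with a greater weight come back more often.
print(pick_server(weights))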
Let's examine a concrete example: three application servers and two boot servers. Install the mille-xterm-lbagent package on each application server, and install mille-xterm-lbserver on each boot server. Make sure that the respective services, lbagent and lbserver, are started. Then, add one node entry for each application server in the file /etc/mille-xterm/lbsconfig.xml:
<?xml version="1.0"?>
<lbsconfig>
  <nodes>
    <group default="true" name="PROD">
      <node address="http://10.0.0.1:8001" name="xapp1"/>
      <node address="http://10.0.0.2:8001" name="xapp2"/>
      <node address="http://10.0.0.3:8001" name="xapp3"/>
    </group>
  </nodes>
</lbsconfig>
Copy this file to every boot server. Fire up a browser and enter the URL of the load balancer to see it in action. By default, lbserver listens on port 8008, so don't forget to append the port to the URL: http://localhost:8008/. The IP address of the chosen application server will be displayed. Press the refresh button to get a new IP, and you're set!
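You also can exercise the balancer without a browser by polling it from a short script (a sketch; it assumes the default port mentioned above and that the response body contains the chosen server's address):

import urllib.request

URL = "http://localhost:8008/"  # default lbserver port

for _ in range(5):
    with urllib.request.urlopen(URL) as response:
        # Each request should come back with the address of a
        # (possibly different) application server.
        print(response.read().decode().strip())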