MOSIX: A Cluster Load-Balancing Solution for Linux
After starting the 14 nodes as a MOSIX cluster, we wanted to test our installation. By default, all the diskless CPUs mount an NFS directory on PC1, so we placed the Linux kernel 2.2.14 source tree under that NFS space, making it visible to all nodes. We then started the kernel compilation using MExec/MPMake, a contributed parallel make that assigns new processes to the best available cluster nodes (available for download from the MOSIX web site).
Figures 3, 4, 5 and 6 show snapshots of mon, a MOSIX tool that displays the load on all the nodes. As Figure 3 shows, there was a high load on node 14 because it was the node on which the compilation started. A few seconds later, Figures 4 and 5 show less load on node 14, and then Figure 6 shows a good distribution of the load among all the nodes.
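Because MOSIX migrates ordinary Linux processes transparently, even a plain parallel `make -jN` on the NFS-mounted source tree produces many concurrent compile jobs that the cluster can balance. The sketch below demonstrates the parallel-make pattern on a throwaway Makefile in a temporary directory; the targets and paths are purely illustrative, not the kernel tree from our test.

```shell
# Illustrative only: a tiny Makefile with four independent targets,
# built with "make -j" so the jobs run concurrently. On a MOSIX
# cluster, concurrent jobs like these are candidates for migration
# to less-loaded nodes.
tmp=$(mktemp -d)
printf 'all: a b c d\na b c d:\n\ttouch $@\n' > "$tmp/Makefile"
make -C "$tmp" -j4          # four compile-like jobs in parallel
count=$(ls "$tmp" | wc -l)  # Makefile plus the four built targets
rm -rf "$tmp"
echo "$count"
```

On the real cluster, the same idea scales up: MPMake drives the kernel build, and the per-file compile processes spread across the 14 nodes as mon shows in the figures.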
MOSIX supports configurations with large numbers of computers, with minimal scaling overheads to impair performance. A simple low-end setup may consist of several PCs connected via Ethernet; larger configurations can include workstations and servers connected via higher-speed LANs such as Fast Ethernet. A high-end configuration may also include a large number of SMP and non-SMP workstations and servers connected via a high-performance LAN such as Gigabit Ethernet.
Our last experiment will be to test MOSIX on a new self-contained, NEBS-compliant cabinet that holds 16 500MHz Pentium III processors, each with 512MB of RAM. Each CPU has two on-board Ethernet ports and is also paired with a four-port ZNYX Ethernet card (used to provide Ethernet redundancy). Eight of the CPUs have a RAID setup (RAID 0 and RAID 5) with three 18GB SCSI disks.
MOSIX for Linux is subject to the GNU General Public License version 2, as published by the Free Software Foundation. It is available for download from the MOSIX web site (see Resources).
MOSIX lets us uninstall it and clean up the kernel sources it modified. During the initial installation, mosix.install modifies the following system configuration files: /etc/inittab, /etc/inetd.conf, /etc/lilo.conf, /etc/rc.d/init.d/atd and /etc/cron.daily/slocate.cron.
The original contents of these files are saved with a .pre_mosix extension, and the changes made to kernel files are logged to the mos_uninstall.log file in the kernel-source directory. To uninstall MOSIX, run the command ./mosix.install uninstall and answer the questions. When asked whether you want to clean the Linux kernel sources, answer “yes”. The script will then attempt to revert all the changes made during a previous MOSIX installation. At the end, you need to reboot the node to start it as a plain Linux node.
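The save-and-restore convention described above is simple: at install time each modified file is copied aside with a .pre_mosix suffix, and the uninstall moves the copy back. Here is a minimal sketch of that convention, simulated in a temporary directory so no real /etc files are touched; the file name is illustrative.

```shell
# Simulate the .pre_mosix backup/restore cycle on a scratch copy
tmp=$(mktemp -d)
printf 'original contents\n' > "$tmp/inittab"   # pristine config file
cp "$tmp/inittab" "$tmp/inittab.pre_mosix"      # install: save the original
printf 'mosix-modified\n' > "$tmp/inittab"      # install: apply MOSIX changes
mv "$tmp/inittab.pre_mosix" "$tmp/inittab"      # uninstall: restore original
restored=$(cat "$tmp/inittab")
rm -rf "$tmp"
echo "$restored"
```

In the real uninstall, ./mosix.install uninstall performs the equivalent restore for each of the configuration files listed above, and uses mos_uninstall.log to revert the kernel-source changes.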
Clustering offers several advantages, chief among them the ability to share processing across machines and achieve higher performance. If you want to cluster your servers with efficient load-balancing software and you need high performance, MOSIX can certainly be useful for you. It is easy to install and configure, and it works.
However, our initial interest in MOSIX was to understand its algorithms and investigate the possibility of using it for efficient distribution of web traffic over multiple processors. We found that MOSIX is not directly suitable for the type of functionality we want for the near-telecom internet server we are aiming to prototype, mainly because it lacks a front-end tool for transaction-oriented load balancing, such as distributing HTTP requests.
Many requests on the MOSIX mailing list ask about HTTP traffic distribution with MOSIX. I believe that if the authors added this functionality, MOSIX would become one of the most popular software packages for Linux clusters.