Configuring a Linux/VMware GSX Server
Last time we talked, I described how you could utilize a single high-powered computer running Linux and VMware GSX server to host many virtual servers running Windows NT, 2000, 98, FreeBSD and so on. In this article, we will talk about how to configure a Red Hat Linux server for the VMware GSX environment, add additional network interface cards to reduce virtual server bottlenecks and add an external drive array to provide plenty of disk space for our SQL databases and VMs.
Out of the box, the Linux kernel comes configured to support a great many devices, filesystems and networking protocols. But only a small portion of those devices are needed for a typical GSX server, and some drivers that aren't included in the default kernel may need to be added. For some of you, the stock kernel configuration may work fine for your GSX implementation. Depending on your needs or any special hardware requirements, however, you may have to build a custom kernel. For the purposes of this article, we will be using Red Hat 7.2 with the 2.4.7-10smp kernel. If you are using a different distribution for your GSX server (SuSE, Caldera or TurboLinux), make the equivalent kernel modifications for your version.
In order to build our custom kernel, we must run make config from the command line. (If you prefer a GUI version, run make xconfig within a terminal window under X.) But first we should make a backup of our default working kernel and add an entry for it in LILO or GRUB (whichever bootloader you are using). To back up the working kernel, change to the /boot directory and copy the kernel image (vmlinuz-2.4.7-10) and System.map file (System.map-2.4.7-10) either to a backup directory or to renamed files in the current /boot directory (e.g., vmlinuz-2.4.7-10.old). Next, change to the /etc directory, open lilo.conf in vi or your favorite editor, and add a new entry pointing to the backup kernel we just copied or renamed. To save time, we can copy the stanza for the original kernel and edit the copy to point at the backup kernel.
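As a sketch, the backup steps might look like the following. The file names assume Red Hat 7.2's stock 2.4.7-10 kernel, and the root= device and initrd line are examples — copy those values from the existing stanza in your own lilo.conf:

```shell
# Back up the working kernel image and its System.map
cd /boot
cp vmlinuz-2.4.7-10 vmlinuz-2.4.7-10.old
cp System.map-2.4.7-10 System.map-2.4.7-10.old

# Append a fallback stanza to /etc/lilo.conf, modeled on the original entry.
# root= and initrd= below are illustrations -- match them to your system.
cat >> /etc/lilo.conf << 'EOF'
image=/boot/vmlinuz-2.4.7-10.old
        label=linux-backup
        initrd=/boot/initrd-2.4.7-10.img
        read-only
        root=/dev/hda2
EOF
```

The label (linux-backup here) is what you will type at the LILO prompt to select the fallback kernel.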
We want a way to boot the server back to the stock kernel should our custom kernel behave badly or, worse, fail to boot at all. After saving the modified lilo.conf, be sure to update your bootloader to recognize the changes. For LILO, type /sbin/lilo at the command prompt and press Enter; the command rewrites the boot map, and its output lists each entry it adds, so check that both kernels appear. Now is a good time to reboot the server and try out the backup kernel. Once the server boots successfully from the backup kernel, it's time to move on and build our custom kernel.
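The update and test boot, sketched below; the -v flag makes LILO list each boot entry as it writes the map, and linux-backup is an assumed label for the fallback stanza:

```shell
# Rewrite the LILO boot map; -v prints each entry as it is added,
# so you can confirm both the stock and backup kernels are present
/sbin/lilo -v

# Reboot and choose the backup entry (e.g., "linux-backup") at the
# LILO prompt to verify the fallback kernel boots cleanly
shutdown -r now
```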
Now to customize our kernel for a specific GSX environment. Personally, I prefer to configure kernels from the command line rather than within an X session, but use the method that best suits you. To start the process, log in as root and change to the /usr/src/linux-2.4 directory. Type make config to step through a list of items the kernel currently supports. Carefully page through the list and disable support only for items you are absolutely sure won't affect the server's ability to function (e.g., sound, infrared, Toshiba laptop, joysticks, Ham radio and so on). Please note: this step is not necessary to configure the GSX server; I include it only because it makes for a smaller, quicker-loading kernel.
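The start of that session looks roughly like this. The paths are Red Hat 7.2's; the prompts shown are examples from a 2.4-series tree and will vary with your kernel version:

```shell
cd /usr/src/linux-2.4
make config
# Answer n to subsystems this server will never use, for example:
#   Sound card support (CONFIG_SOUND) [Y/m/n/?] n
#   Amateur Radio support (CONFIG_HAMRADIO) [Y/n/?] n
#   IrDA (infrared) support (CONFIG_IRDA) [N/y/m/?] n
```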
Okay, back to our kernel config file. If you make a mistake while paging through the config list, press Ctrl-C to quit without making any changes, then start over. As you look through the file, notice that some options are listed next to each supported device or protocol. Here's what they mean: Y(es) will add support into the kernel itself; N(o) means no support will be provided for this item; M(odular) means the item will be supported as a loadable module. The ?(Help) option is also available. Other options are specific to the functions of the item, such as the maximum memory a server can support.
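For instance, a typical 2.4-series prompt looks like this (exact wording varies by kernel version):

```
SCSI disk support (CONFIG_BLK_DEV_SD) [Y/m/n/?]
```

Answering Y compiles the SCSI disk driver into the kernel image itself; m builds it as a loadable module (sd_mod.o), which can be inserted and removed with modprobe; n leaves it out entirely; and ? prints the help text for that option.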
As mentioned earlier, if you want to optimize the custom kernel, you can trim down its footprint by disabling support for unnecessary devices. But don't get too hung up on disabling everything you don't feel is necessary. We don't want to neuter the kernel and reduce the server to a whimpering mass, only trim it down a bit for better performance. Again, this is optional.
Once you are ready to make your changes, run make config again and carefully page through the list, adding or removing support as needed. For this article, we want to add an external SCSI drive array to provide additional disk space for virtual servers and SQL databases. To do this, we must add support for the new hardware that will talk to the external disk array (i.e., RAID controllers, SCSI controllers and so on). We also need to decide whether each new device should be compiled into the kernel or built as a module. Keep in mind that if you compile a device into the kernel rather than as a loadable module, its support stays in memory permanently instead of being loaded and unloaded dynamically as needed, as is the case with modules. For critical devices, such as RAID controllers and filesystems, compiling support into the kernel is the safer choice. But for less frequently accessed devices, modular support may be better.
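As a sketch, here is how enabling SCSI support and then building and installing the new kernel might look. The Adaptec controller shown is only an illustration — answer y for whichever driver matches your array's host adapter — and the custom image name is an assumption:

```shell
cd /usr/src/linux-2.4
make config
# Example answers for an external SCSI array (pick your own controller):
#   SCSI support (CONFIG_SCSI) [Y/m/n/?] y
#   SCSI disk support (CONFIG_BLK_DEV_SD) [Y/m/n/?] y
#   Adaptec AIC7xxx support (CONFIG_SCSI_AIC7XXX) [Y/m/n/?] y

# Build and install the kernel and its modules (the 2.4-era sequence)
make dep && make bzImage && make modules && make modules_install

# Install the new image alongside the originals, then add a lilo.conf
# stanza pointing at it and rerun /sbin/lilo before rebooting
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.7-10custom
cp System.map /boot/System.map-2.4.7-10custom
```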