Kernel Korner - Exploring Dynamic Kernel Module Support (DKMS)
Source is a wonderful thing. Merged module source in the kernel tree is even better. Most of all, support for that source is what really counts. In today's explosion of Linux in the enterprise, the ability to pick up the phone and find help is critical. More than ever, corporations are driving Linux development and requirements. This often is met with skepticism and a bit of anxiety from the community, but if done correctly, the benefits are seen and felt by everyone.
The dynamic kernel module support (DKMS) framework should be viewed as a prime example of this. DKMS, a system designed to help Dell Computer Corporation distribute fixes to its customers in a controlled fashion, also speeds driver development, testing and validation for the entire community.
The DKMS framework is basically a duplicate tree outside of the kernel tree that holds module source and compiled module binaries. This duplication allows for a decoupling of modules from the kernel, which, for Linux solution and deployment providers, is a powerful tool. The power comes from permitting driver drops onto existing kernels in an orderly and supportable manner. In turn, this frees both providers and their customers from being bound by kernel drops to fix their issues. Instead, when a driver fix has been released, DKMS serves as a stopgap to distribute the fix until the code can be merged back into the kernel.
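To make that duplication concrete, the following is only a rough sketch of how the DKMS tree is commonly laid out. In current DKMS releases the tree lives under /var/lib/dkms; the module name, version, kernel and architecture shown here are placeholders, and the exact layout can differ between DKMS versions:

/var/lib/dkms/<module>/<module-version>/
    source/                          link back to /usr/src/<module>-<module-version>/
    <kernel-version>/<arch>/module/  module binary built for that particular kernel

Because each kernel and architecture gets its own slot in the tree, a single module version can be built, installed and removed per kernel without disturbing anything shipped with the kernel itself.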
Staying with the customer angle a bit longer, DKMS offers other advantages. The business of compiling from source, installing or fiddling with rebuildable source RPMs has never been for the faint of heart. The reality is that more Linux users are arriving with less experience, which calls for simpler solutions. DKMS bridges these issues by providing a single executable that can be called to build, install or uninstall modules, as sketched below. Further, using its match feature, configuring modules on a new kernel could not be easier: the modules to install can be determined solely from the configuration of some previously running kernel. In production environments, this is an immense step forward, as IT managers no longer have to choose between some predefined solution stack and the security enhancements of a newer kernel.
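As a sketch of that single-executable interface, the usual lifecycle looks something like the following; the module name, version and kernel are placeholders, and the -m, -v and -k flags select the module, its version and the target kernel:

dkms build -m <module> -v <module-version> -k <kernel-version>
dkms install -m <module> -v <module-version> -k <kernel-version>
dkms uninstall -m <module> -v <module-version> -k <kernel-version>

In current DKMS releases, omitting -k makes DKMS act on the currently running kernel.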
DKMS also has much to offer developers and veteran Linux users. The aforementioned idea of the decoupling of modules from the kernel through duplication (not complete separation) creates a viable test bed for driver development. Rather than having to push fixes into successive kernels, these fixes can be distributed and tested on the spot and on a large scale. This speedup in testing translates to an overall improvement in the speed of general development. By removing kernel releases as a blocking mechanism to widespread module code distribution, the result is better tested code that later can be pushed back into the kernel at a more rapid pace—a win for both developers and users.
DKMS also makes developers' lives easier by simplifying the delivery process for kernel-dependent software. In the past, for example, Dell's main method for delivering modules was RPMs containing kernel-specific precompiled modules. As kernel errata emerged, we often were dragged through the monotonous and unending process of recompiling binaries for each new kernel, a situation no developer wants to be in. However, Dell still favored this delivery mechanism because it minimized the work and knowledge customers needed to install modules. With DKMS, we can meet these usability requirements and significantly decrease our workload on the development side. DKMS requires only that the module source code be present on the user's system; the DKMS executable takes care of building and installing the module for any kernel users may have on their systems, eliminating the kernel catch-up game.
With all of this up-front hype about DKMS, it is probably best to settle into the particulars of how the software actually is used. First, using DKMS for a module requires that the module source be located on the user's system in the directory /usr/src/<module>-<module-version>/. In addition, a dkms.conf file containing appropriately formatted directives must exist within that directory to tell DKMS such things as where to install the module and how to build it. More information on the format of the dkms.conf file can be found later in this article. Once these two requirements have been met and DKMS has been installed on the system, the user can begin using DKMS by adding a module/module-version to the DKMS tree. The example add command:
dkms add -m qla2x00 -v v6.04.00
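Before the add can succeed, the dkms.conf mentioned above has to describe the module. The following is only a rough sketch of such a file for the qla2x00 example, assuming directive names used by current DKMS releases (older versions may expect a slightly different set); the make invocation in particular depends entirely on the driver's own Makefile and is shown purely as an illustration:

PACKAGE_NAME="qla2x00"
PACKAGE_VERSION="v6.04.00"
# Build and clean commands handed to the driver's Makefile; kernelver is supplied by DKMS.
MAKE[0]="make KERNELDIR=/lib/modules/${kernelver}/build"
CLEAN="make clean"
# Name of the built module and its destination relative to /lib/modules/<kernel-version>/.
BUILT_MODULE_NAME[0]="qla2x00"
DEST_MODULE_LOCATION[0]="/kernel/drivers/scsi"

With the source tree and dkms.conf in place, the add command above registers the module/module-version pair in the DKMS tree, after which it can be built and installed for any kernel on the system.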