Say Goodbye to Reboots with Ksplice
To prepare a Ksplice rebootless update, you need a few ingredients. First, you need the source code of the running kernel—your Linux distribution typically makes this available through your package manager. You also need the kernel configuration file and the System.map file. Finally, you need to point Ksplice at your kernel headers by creating a symbolic link.
Ideally, the versions of the compiler and assembler on your system should be the same as the ones that built the original kernel. If they are too different, the Ksplice tools will notice and complain before trying to install the update. (I explain why later in this article.)
With all of the materials mentioned above, you can build a replica of your running kernel.
In these examples, I assume that the directory /usr/src/linux already contains the running kernel's source. The following commands prepare your setup appropriately, as described above:
$ mkdir /usr/src/linux/ksplice
$ cp /boot/config-`uname -r` /usr/src/linux/ksplice/.config
$ cp /boot/System.map-`uname -r` /usr/src/linux/ksplice/System.map
$ ln -s /lib/modules/`uname -r`/build /usr/src/linux/ksplice/build
Next, you need the patch to the kernel that you want to apply. This can be an ordinary patch taken from Linus Torvalds' git tree or a patch of your own design. Let's use an example patch that modifies the behavior of printk, the Linux kernel function that is responsible for printing messages to the kernel log. I assume that you have placed this patch in ~/printk.patch:
--- linux-2.6/kernel/printk.c
...
+++ linux-2.6-new/kernel/printk.c
...
@@ -609,6 +609,7 @@
 	va_list args;
 	int r;
 
+	vprintk("Quoth the kernel:\n", NULL);
 	va_start(args, fmt);
 	r = vprintk(fmt, args);
 	va_end(args);
Once this patch is applied, all messages that are printed using printk will be preceded by the message “Quoth the kernel:”.
To create the rebootless update, run the following command from the directory /usr/src/linux/kernel:
ksplice-create --patch=~/printk.patch /usr/src/linux
It should output something like Ksplice update tarball written to ksplice-8c4o6ucj.tar.gz. This is the rebootless update that corresponds to your source code patch.
Feeding your patch and the kernel's source code into ksplice-create does the following: first, it compiles your kernel twice—once without the patch and once with the patch applied.
Second, it compares the output of the two compilations, looking for differences. In particular, it needs to find functions that have changed. For each changed function, it pulls out a copy of both the old and the new versions and puts them in the output file.
At this point, Ksplice has determined what functions have been changed by the source code patch, and it has saved old and new versions of the changed functions. Now, it must figure out how to install the new versions of the functions safely, while the system is running.
Applying the update from your perspective is quite simple. As root, run:
ksplice-apply ksplice-8c4o6ucj.tar.gz

from the directory /usr/src/linux/kernel, where ksplice-8c4o6ucj.tar.gz is the name of the tarball created in the step above.
If the update has been applied successfully, kernel messages should appear with “Quoth the kernel” in front of them. Let's verify this by running dmesg, which allows us to look at the kernel's log.
If all has gone well, you will see something like:
# dmesg | tail -n2
Quoth the kernel: ksplice: Update 8c4o6ucj applied successfully
What's happening under the hood to make this possible? Remember that Ksplice has a list of functions that need to change in the running kernel. In particular, it has the old versions (that is, the versions that should be in memory right now) and the new versions.
First, it has to locate the functions that it's trying to change. So if it's trying to change printk, as in this example, it first needs to find it in kernel memory.
Once it has located printk, it compares the code in memory against the old copy of printk from the tarball. Remember that this version of printk was compiled from the unmodified kernel source, with the same compiler and assembler, so the two versions should match exactly. If they do not match, Ksplice acts conservatively and gives up. This safety check is why Ksplice requires the same compiler and assembler in the ksplice-create step.
Now that it has found the old copy of the function and confirmed that it is the correct code, it needs to replace it. It accomplishes this by first loading the new version of the function elsewhere in memory, using the kernel's module loader. Next, at a safe time, it overwrites the first instruction of the old function with a jump instruction that goes to the new function. This is called a trampoline, because it “bounces” all of the callers of the old function immediately over to the new function.
When is it safe to do this replacement? At a high level, we want to replace the code when no one else is using it. If the code is being used while it is being replaced, we potentially could end up with a problem. For example, if the old version of a function locked a resource in one way, and the new version locks it in another, and both run at the same time, we could end up in a situation in which they step on each other's toes.
So how does Ksplice make sure that no one is using the code while it is being replaced? It examines the stack of every kernel thread to ensure that no one has a pointer into the code that is being replaced. Said another way, if no one can reference the old code, no one is using the old code, so it's safe to replace it. This whole process takes place while the machine is briefly paused using Linux's stop_machine mechanism, to make sure that no new references get added when we're not looking.
If this check concludes that it is not a safe time to update the code (that is, if someone is holding a reference to the old code), ksplice-apply aborts the update process. Trying again is harmless, however, and if the update does not apply right away, it will generally apply after a few tries. This is because essentially none of the code in the kernel is constantly in use.