klogd: The Kernel Logging Dæmon
klogd reads kernel log messages and helps process and send those messages to the appropriate files, sockets or users. This month we discuss memory address resolution and how to modify klogd's default behavior using command-line switches.
The syslog dæmon takes messages sent through the syslog application programming interface (the openlog, syslog and closelog library calls) and dispatches them to various files, sockets, users, pipes, or the bit-bucket (that is, it discards them: any log message that doesn't match a selector in the syslog.conf file is dropped). Kernel messages, however, cannot use the syslog API.
The basic reason for this is that the syslog API is provided by the kernel itself. Code that can safely be called again while a previous invocation is still in progress is said to be “reentrant”. In order to be reentrant, a secondary call must have no effect on a prior call. If code makes any use of global or static variables, for example, it cannot be reentrant, since a change to a static variable made by the secondary call would also change it in the primary call.
Think about it this way: what would happen if a program were in the middle of a syslog call when the kernel needed to call the syslog function to report some kernel event? Unless the syslog call were totally reentrant, the kernel call would clobber the user-space call.
Many parts of the kernel are reentrant, either by being truly reentrant or through the use of selective locking. It would be very bad news indeed if a user-space program could block kernel execution merely by getting stuck in a system call. For this reason (among others), a very clean separation is maintained between the functions that kernel-space code may use and the functions that user-space code may use. No kernel API function is dependent on the state of any user-space call (or, to be a bit more precise, those dependencies that exist are predictable and well understood). Thus, the kernel doesn't have to worry about which user-space programs are calling which user-space API functions from moment to moment. The kernel code may simply get on with business.
The kernel does, however, need a way to send messages to report abnormal situations, and, when debugging kernel layer code, to “see how far we got”. Thus, the kernel has its own logging API.
(For the curious, the user-space syslog call is provided by a function in the kernel called sys_syslog. The kernel syslog call is called printk. You can find the source code for both in your own copy of the Linux kernel source, which is more than likely to be in the /usr/src/linux directory on your system. Go ahead and look. Remember, it's your source!)
The kernel is built for simplicity and speed. Thus, the kernel's conception of logging is a bit more basic than syslog's. Kernel messages are simple text, with the convention that the priority is encoded as an <n> prefix (where n is the priority, from 0 to 7) prepended to the message text. The kernel logging API has no concept of a “facility” as syslogd does. Level 7 is the lowest priority and level 0 is the highest.
Sometimes, as when a protection fault occurs, the kernel logs contain memory addresses. A raw protection fault report from a Linux kernel isn't much use to anyone in debugging your problem, because the addresses it contains depend on the exact kernel build, and kernels are almost certainly locally compiled. Even if you have never recompiled your kernel, the sheer number of distributions and distribution versions out there makes it impossible for someone else to help you with a raw protection fault log.
Fortunately, Linux kernels since 1.3.43 have reported addresses in a standard format. The klogd program recognizes that format and attempts to resolve addresses to symbol names so that one can actually find the object or code an address refers to. Later on, we'll cover how klogd does this.
These aspects of kernel messages (separate API, non-syslog attributes and memory address resolution) are the reason for a separate dæmon for kernel messages. There were (and are) patched versions of syslog that pick up kernel messages, but this practice is no longer popular, and with good reason. A clean separation of user-space and kernel-space features makes sense. If syslogd did some of these things, it would tie syslogd fairly tightly to the kernel version. Of course, klogd is bound fairly tightly, but this is considered much more acceptable, since it “encapsulates” this “dependent” code and then passes it on to a standard logging mechanism (syslogd).
So what klogd does, basically, is read kernel log messages, transform them slightly (by resolving kernel memory addresses to symbols) and then call the user-space syslog API with the kernel facility and the priority encoded in the kernel message. This is the dæmon's default behavior. Let's take a look at how that default behavior can be modified.
First off, klogd does not have a configuration file as syslogd does. Its behavior can be modified only through command-line switches and signals. We'll cover the switches first, then we'll discuss address resolution. Finally, we'll go over the signals to which klogd responds.