klogd: The Kernel Logging Dæmon
Linux beginners are unlikely to encounter this, but more seasoned users will often have more than one bootable kernel on their system at a time. A hobbyist's or kernel hacker's box is likely to have a number of stable and development-series kernels on it. I myself always keep three generations of stable kernels on my systems, so that if a bug shows up, I can immediately reboot into an older kernel.
When klogd starts, it identifies the kernel version (all kernels since 1.3.43 put version information in the map files) and then looks at:
/boot/System.map
/System.map
/usr/src/linux/System.map
It will use the kernel version information to choose the correct map, if possible. When I have a development-series kernel on a box, I leave the stable kernel's map in /boot and the development kernel's map in /usr/src/linux. As for my two “old” stable kernels, I simply live with the fact that klogd will not be able to resolve addresses in the event of a fault if I boot into them. Remember, you can use the -k switch on klogd to force it to use a particular map file if you build an archive of them.
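If you do keep such an archive, a setup along these lines works; the version-suffixed file names below are hypothetical, but the -k switch itself is standard klogd:

# Archive the map file for each kernel you build
cp /usr/src/linux/System.map /boot/System.map-2.2.19

# Start klogd against the map that matches the running kernel
klogd -k /boot/System.map-2.2.19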
Just bear in mind as you read this discussion that these events are so rare that in 15 machine years (three machines running Linux 24x7 for five years), I have seen this happen twice, and both times it was due to failing hardware.
In addition to the command-line switches, klogd will respond to certain signals. You send signals with the kill command.
The signals klogd responds to are:
SIGTSTP suspends kernel logging and SIGCONT resumes it. Resuming includes re-initialization, so you can use this pair to, for example, unmount the /proc file system without killing klogd:
kill -TSTP <pid>
umount /proc
kill -CONT <pid>
SIGUSR1 reloads the kernel module symbols, and SIGUSR2 reloads both the static kernel symbols and the module symbols. A short example follows this list; see the Memory Address Resolution section for more information.
SIGINT, SIGTERM, and SIGHUP all gracefully shut down klogd.
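For instance, after loading a new module you can signal klogd to reread module symbols so that fault addresses inside the new module resolve correctly. A minimal sketch, assuming a stock sysklogd installation that records its pid in /var/run/klogd.pid (the module name is a placeholder):

# Load a module, then tell klogd to reload module symbol information
insmod mymodule.o
kill -USR1 $(cat /var/run/klogd.pid)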
klogd works with syslogd to handle the dispatch of kernel messages; it exists because the kernel itself is unable to use the syslog API directly. klogd also provides for resolving raw memory addresses into kernel symbol names.
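Once klogd hands a message off to syslogd, it is routed like any other message under the kern facility. For example, a line such as the following in /etc/syslog.conf collects kernel messages in a file of their own (the destination path is only an illustration):

# Route all kernel messages to a dedicated file
kern.*      /var/log/kern.log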