The System Logging Dæmons, syslogd and klogd
At this point, you may be asking what klogd has to do with any of this. The answer is simple: the kernel can't call the syslog API. There are many reasons for this, but the core reason, and the easiest to understand, is that Linux actually provides two completely separate programming environments. The more familiar one is the standard C library used by user-space applications; this is where syslog() lives. The other is the kernel's internal environment, used by code that runs as part of the kernel itself. That code needs services similar to those offered by the application programming interface, but for numerous technical (and a few aesthetic) reasons, kernel code cannot use the application-side API. For this reason, the kernel has its own entirely separate mechanism for generating messages. The klog dæmon, klogd, is a user-space process that ties the kernel messaging system to syslogd. It also can dispatch kernel messages directly to files, but most configurations use the klogd default, which is to prepare kernel messages and, in essence, resubmit them through syslog().
There is quite a bit more to klogd if you wish to delve into the depths, but for the purposes of this article, it is sufficient to know that klogd feeds kernel messages to syslogd, where they appear to be coming from the kern facility.
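Because klogd's resubmitted messages arrive tagged with the kern facility, they can be routed like any other syslog traffic. A minimal /etc/syslog.conf fragment might look like this (the file paths are illustrative, not a recommendation):

```
# Send all kernel messages, at every severity, to a dedicated file.
kern.*        /var/log/kern.log

# Also write critical-and-above kernel messages to the console.
kern.crit     /dev/console
```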
syslogd provides a powerful yet simple mechanism for managing messages from multiple applications in a highly configurable manner. Its ability to “demultiplex” the message stream makes the syslog API an appealing option for application developers, and I would encourage you to consider using that API in your own programs.
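Submitting messages is a matter of three calls: open the log with an identifier, submit messages at a chosen facility and severity, and close the log. The API itself is C, but Python's standard syslog module is a thin wrapper over those same calls, so a minimal sketch (the "mydaemon" identifier is just an example) looks like this:

```python
import syslog

def priority(facility: int, level: int) -> int:
    """Combine a facility code and a severity level into the single
    integer priority value that syslog() expects."""
    return facility | level

# Open the log with an identifier; LOG_PID appends the process id
# to each message, and LOG_USER is the default facility.
syslog.openlog(ident="mydaemon", logoption=syslog.LOG_PID,
               facility=syslog.LOG_USER)

# Submit messages at different severities; syslogd routes them
# according to its configuration.
syslog.syslog(priority(syslog.LOG_USER, syslog.LOG_INFO),
              "service started")
syslog.syslog(priority(syslog.LOG_USER, syslog.LOG_WARNING),
              "disk usage high")

syslog.closelog()
```

The same openlog()/syslog()/closelog() sequence applies in C; the facility and level constants carry identical values in both environments.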