The System Logging Dæmons, syslogd and klogd
There are some drawbacks to using the “out-of-the-box” Red Hat syslog.conf file. Notably, all of its “actions” simply write to local files. The syslogd dæmon can do much more than that, so let's take a look at actions next.
Actions may send messages to any of these destinations:
A regular file: this is what you see in our example. This is simply the name of a file to which the message is appended.
A named pipe: named pipes, or FIFOs (First-In, First-Out), are a simple form of inter-process communication supported by Linux and many other operating systems. You create a named pipe with the mkfifo command; a FIFO appears in the file system. You tell syslogd it is writing to a FIFO instead of a file by putting a pipe character (|) in front of the FIFO name. Take a look at the man pages for mkfifo, both the command and the system call, and the man page for “fifo”, which is a description of the special file. You read and write FIFOs with the normal file system calls. A description of FIFO programming is beyond our scope, although I can highly recommend the excellent book UNIX Network Programming by W. Richard Stevens, from Prentice-Hall.
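A minimal sketch of the FIFO mechanics described above. The path /tmp/syslog.fifo and the syslog.conf selector are illustrative, not from any particular distribution's configuration:

```shell
# Create the named pipe; it appears in the file system like any file.
rm -f /tmp/syslog.fifo
mkfifo /tmp/syslog.fifo
ls -l /tmp/syslog.fifo    # first character of the mode field is 'p'

# In /etc/syslog.conf, a leading pipe character tells syslogd
# the destination is a FIFO rather than a regular file:
#   kern.*    |/tmp/syslog.fifo
#
# Any process can then consume the messages with ordinary reads:
#   cat /tmp/syslog.fifo
```

Note that syslogd expects a reader to be present; with no process reading the FIFO, writes will eventually block or be dropped, depending on the implementation.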
A terminal or console: if you specify a tty device (such as /dev/console), syslogd is smart enough to figure out that it is a device, not a file, and treat it accordingly. This can be fun if you have a dumb terminal—you can send all your messages to /dev/ttyS1 (for example) and get all your messages on the terminal screen while you work on your console. This is state-of-the-art 1970 technology—I love glass teletypes!
A remote machine: now this is the true power. Let's assume you have many Linux boxes on a network. Do you want to log in to each one to check its logs for certain conditions? Of course you don't, and you don't have to. Syslogd can optionally listen on the network for messages as well. To forward messages, just put an at sign (@) followed by the host name as the action.
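Putting these destinations together, a sketch of what such /etc/syslog.conf entries might look like. The selectors, file name, FIFO path and tty device are illustrative; the remote host is the one discussed below:

```
# append to a regular file
mail.*          /var/log/maillog
# write to a named pipe (note the leading |)
kern.*          |/var/run/kmsg.fifo
# write to a terminal device
*.info          /dev/ttyS1
# forward to a remote machine (note the leading @)
*.crit          @sol.n0zes.ampr.org
```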
A line such as “*.crit @sol.n0zes.ampr.org” sends all critical and above messages from all facilities to sol.n0zes.ampr.org, which will then apply its own syslog.conf file to save them. Syslogd will not forward a message received from the network to another host: in other words, you get one and only one hop. This default may be overridden with switches when syslogd is invoked, but it seems like a reasonable one, since even the possibility of circular message routing would be enough to scare the dickens out of any network administrator.
This capability has obvious advantages for centralized logging and log scanning for security violations and so forth. It also has obvious deficiencies: it is hard to maintain a complete log when your network is down. Take advantage of the fact that you can route messages to more than one action by making sure every message finds its way to a local file before you send it to the remote logger.
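A sketch of that belt-and-suspenders routing, reusing the remote host named above; the *.info selector is illustrative:

```
# keep a local copy first...
*.info          /var/log/messages
# ...then forward the same messages to the central logger
*.info          @sol.n0zes.ampr.org
```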
One feature that newer users of Linux may not be aware of is console messaging. It isn't used much any more, thanks to talk, irc and other far more interactive “chat” mechanisms with much cleaner user interfaces. You can, however, send a text message to any user logged in to your system with the write command. It is an unpopular facility for several reasons. First, in today's windowed environments, a user probably has many “terminals” active, and it is hard to know which one to write to. Second, if users are in the middle of some intense full-screen activity (such as editing a large file with vi) and you blast a bunch of text at them that confuses their editor and screws up their screen, they will not like you very much. Most Linux distributions I have seen turn messaging off for users by default. Syslogd uses this same ability to write to a user's terminal to send messages directly to their screen: just put a comma-separated list of user names as the action. Save this for truly critical stuff. You might turn it on to try it, but I bet you will turn it off again before too long.
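A sketch of such an entry; the selector and user names are illustrative:

```
# write critical kernel messages to these users' terminals
kern.crit       root,operator
```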
There is a similar mechanism, called wall (write-to-all), which sends a text message to every user logged in to the system. The superuser can do this whether users choose to accept messages or not; this is how shutdown sends its warning messages. You can have syslogd send a message to everyone by this mechanism by specifying an asterisk (*) as the action. Save this one for the most dire of dire messages, if you must use it at all. Warning of an impending crash qualifies—anything less is probably overkill.
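A sketch of a wall-style entry; the *.emerg selector is an illustrative choice for “most dire” messages:

```
# broadcast emergency messages to every logged-in user
*.emerg         *
```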