Linux's Tell-Tale Heart, Part 1

by Marcel Gagné

Welcome back, everyone, to the Corner, the SysAdmin's Corner, that is. I do hope you are all rested up from last week. Today's column marks the beginning of a new series, where we dig deep within the heart of your Linux system and listen. Whether you know it or not, your Linux system is constantly talking to you in a kind of system monologue, or diary, keeping you abreast of everything in its life. All you need to do is pay attention. Sometimes the information will look like idle chit-chat. Sometimes, the health of your system depends on it.

As time goes on and you get more comfortable with your Linux system, you've no doubt realized just how powerful it really is. There's a lot going on under the hood, and a lot of feedback is being generated from all that information. Today, I'm going to show you how some of this information is generated, how you can customize it, and how you can be in several places at once!

Shall we begin?

A good deal of the logs your system generates come to you courtesy of the syslogd dæmon. The syslogd dæmon is a program that runs in the background, independent of whatever else you may do on your system, but it does pay attention. That's its job: to collect information on what is going on and report it. Actually, this is as good a place as any for a definition. For those of you who may not already know this, the first part of my description of syslogd is a pretty good definition of what a dæmon is. By definition, a dæmon is a program which, after being spawned (either at boot or by a command from a shell), disconnects itself from the terminal that started it and runs in the background. If you then disconnect from the terminal session that started the program or log out entirely, the program continues to run in the background. What it does there is a function of what the dæmon is for. The inetd dæmon listens for network connections, while syslogd watches, monitors and logs.

What it logs is defined in a file called /etc/syslog.conf. Each line of the file consists of two fields: a selector and an action. The selector (made up of a facility and a priority, joined by a dot) defines what we log by identifying where the information came from and its level of importance or severity. The action field tells syslogd where the information goes or what to do with it. Not counting comment lines, each line looks something like this:

     facility.priority                   /var/log/filename

The file names you see there may already be familiar ones (messages, maillog and secure), but as you can see, they may also be changed. A listing of your /var/log directory will show you the various log files your system keeps. Some of the files you'll see there (the samba logs for instance) are written by other processes.
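To make this concrete, here are a few entries along the lines of what ships in a stock Red Hat /etc/syslog.conf (your file may well differ; note the special priority none, which excludes a facility from an otherwise broad selector):

     # Log anything info or above, except mail, to messages.
     *.info;mail.none                    /var/log/messages
     # All mail messages end up in one place.
     mail.*                              /var/log/maillog
     # Authentication messages, with restricted access.
     authpriv.*                          /var/log/secure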

Actually, the action above can also be an additional process or even another host, whose own syslogd will handle the message according to its own syslog.conf. More on that shortly, but first the selector field. The facility will be one of auth (security info), authpriv (more or less the same as auth), cron (your cron scheduling system), daemon (various dæmons), kern (messages generated by the kernel), lpr (the print spooler), mail, mark, news, security, syslog, user (user programs), uucp, or local0 through local7. Not all of these are useful or even used. For instance, mark is basically no more (it just gets ignored), and security has been superseded by auth. Each facility is paired with a priority that defines the severity of the report. In ascending order, these are debug (debug statements), info (whatever doesn't fit elsewhere), notice (getting important), warning (very important and potentially a bad thing), err (error conditions), and finally the biggies: crit (for critical), alert, and emerg (it doesn't get any worse). Notice, as well, that you can specify a wild card ("*") to say that you want reports on every priority level associated with a given facility. For example, to send every kernel message, whatever its priority, to the console:

     kern.*                             /dev/console
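One subtlety worth knowing: when you name a specific priority, syslogd matches that level and everything more severe, not just that one level. So a line like the following (the file name here is just an example of my own) catches crit, alert and emerg messages from the mail system, while mail.* would catch everything down to debug:

     mail.crit                           /var/log/mail-trouble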

The bottom line here is that any given Linux system is generating an awful lot of information. Pretend for a minute that you are the system administrator on a small, medium or large network. You have your own Linux machine running as your desktop, but somewhere on your network is your main machine. It runs mail, firewall services, etc. Should something dreadful happen to that machine in the form of a disk crash, a cracker breaching security, you name it, it is entirely possible that by the time you have a chance to look at the logs to determine what happened, it is already too late; the logs may be gone for good. Pretend again that you have several of these critical systems to keep an eye on. How can you possibly keep an eye on every one of them?

Here's what you do. Modify the syslog.conf file (in /etc) and add a new line with a very different action to the file. In the example that follows, I have added another line that defines what to do with authpriv messages. In other words, if I get a message telling me that someone is trying to log in to my machine, I want to know about it. But just in case the evidence is removed before I have another chance to look at the logs, it would be nice if a copy of that log entry were passed on to my other machine, a machine I call "shadow".

     # The authpriv file has restricted access.
     authpriv.*                                              /var/log/secure
     authpriv.*                                              @shadow

When messages that normally show up in the /var/log/secure file are generated, I will get them in shadow's /var/log/secure file. Those messages will be prefaced by the host name on which they were generated. For example ...

     Jun 21 12:22:06 shadow in.telnetd[17002]: connect from
     Jun 21 12:22:10 shadow login: LOGIN ON 5 BY natika FROM shadow
     Jun 22 12:57:31 website in.telnetd[1245]: connect from

Notice that the first two lines in shadow's /var/log/secure file are fronted by shadow's host name. The last is a report from our Internet gateway, a machine called "website". Pretty cool, eh? There is a catch, though. In order for shadow to accept and record these messages, you need to stop shadow's system logger and restart it with a different set of options. If I do a ps ax to look at how syslogd is running on shadow, I get something like this:

     [root@shadow /root]# ps ax | grep syslog
     17171 ?        S      0:00 syslogd -m 0 -r
     17220 pts/6    S      0:00 grep syslog

If you do it on your machine, you will notice that the -r option is probably missing. You need to stop syslogd and restart it with -r, the option that tells syslogd to listen for remote syslog messages (which arrive on UDP port 514). To stop syslogd on my Red Hat system, I can use this command:

     /etc/rc.d/init.d/syslog stop

You can also stop the process directly with a kill command (on Red Hat systems, syslogd records its process ID in /var/run/syslogd.pid):

     kill pid_of_syslogd

To restart, I could just re-issue that command with start instead of stop, but I would get syslogd running with the same default options as before. If you just want to try this for yourself, you can (as root) simply type syslogd -m 0 -r and your machine should start accepting logs from your other machine. For this to happen each time you boot, you need to change the boot script itself. On my Red Hat system, that is /etc/rc.d/init.d/syslog. Here are a few lines from that script.

     # See how we were called.
     case "$1" in
     start)
             echo -n "Starting system logger: "
             # we don't want the MARK ticks
             daemon syslogd -m 0

That last line is the one we want to change. Now when we reboot, syslogd will start with our new options. Incidentally, any changes you make to /etc/syslog.conf also require a restart (of the process, not the whole system).
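With the option added, that last line of the start stanza would read:

     daemon syslogd -m 0 -r

As an aside, if the only thing you changed was /etc/syslog.conf itself, you don't even have to kill the dæmon: sending syslogd a HUP signal (kill -HUP followed by its process ID) makes it reread its configuration file.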

If you are particularly concerned about what is going on with your various systems, you could check your new, enriched log files with occasional tail -10 /var/log/log_file_name commands in a terminal window. If you don't want to issue that many keystrokes, you could write yourself a silly little program like this Perl script:

     $eternity = 1;
     while ( $eternity )
             { &show_me;
               sleep 10;
             }

     sub show_me {
             $secure_tail = `tail /var/log/secure`;
             print "$secure_tail\n";
     }

I can now keep a terminal window open and have the last 10 lines of whatever file I am watching (in this case, /var/log/secure) refresh every 10 seconds. You could, of course, do something much more creative than my little example, but it is a starting point.
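Since each relayed entry carries the name of the host it came from, you can also sift one machine's entries out of the merged log. Here's a small sketch using awk; the here-document simply stands in for /var/log/secure, reusing the sample lines from earlier:

```shell
# The host name is the fourth whitespace-separated field
# (after the month, day and time), so match on $4.
# In real use, point awk at /var/log/secure instead of the here-document.
awk '$4 == "website"' <<'EOF'
Jun 21 12:22:06 shadow in.telnetd[17002]: connect from
Jun 21 12:22:10 shadow login: LOGIN ON 5 BY natika FROM shadow
Jun 22 12:57:31 website in.telnetd[1245]: connect from
EOF
```

This prints only the line reported by website; the same one-liner pointed at the live file gives you a quick per-host view of a shared log.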

Next time around, I will show you ways to get more out of your logs, and some creative ways to monitor what is going on in your system. Until then, remember, your Linux system is talking to you. Are you listening?
