Open-Source Intrusion-Detection Tools for Linux
As an ISP, we are especially vulnerable to attack because of the open nature of our networks. Unlike corporate networks, which can limit access and backtrack their users, we have to monitor continuously for attacks and, more importantly, for successful intrusions. We use several open-source security tools to monitor for intrusions and system compromises. These tools have alerted us to attacks and allowed us to respond quickly to intrusions and system compromises.
System administrators are known to be trained paranoids. Not only must they keep their systems updated against the latest program vulnerabilities, hacking tools and kiddie scripts, but they must also stay vigilant against security holes and exploits that are not yet known or documented. Most good system administrators can protect their systems against low-level crackers by using standard system-hardening techniques (“Post-Installation Security Procedures”, Linux Journal, December 1999). It is the unknown holes, and the knowledgeable cracker, that pose the much greater danger.
Intrusion detection and recovery is a goal of all system security. It usually involves looking for signs of system compromise: checking logs for unusual activity and unusual connections, and checking for alterations in system files. Of course, a system must already be secured for intrusion detection and recovery to be effective; system administrators would be working double time if they kept finding people breaking into their systems and had to recover from each break-in.
There are many open-source tools available to system administrators to aid in this process.
We use Tiger, Logcheck and Tripwire in cron jobs to continuously monitor the system for incursions and intrusions. They alert us to attacks on our system and allow us to respond quickly and limit the damage. We also use other security programs, but to keep this article manageable, we will concentrate on programs with stable releases.
Tiger is a set of scripts that search a system for weaknesses that could allow an unauthorized user to change system configurations, gain root access or change important system files. System administrators may remember Tiger as the successor to COPS (Computer Oracle and Password System).
Tiger was originally developed at Texas A&M University and can be downloaded from net.tamu.edu/pub/security/TAMU. There are currently several versions of Tiger available: tiger-2.2.3p1, the latest Tiger with updated check_devs and check_rhost scripts; tiger-2.2.3p1-ASC, tiger-2.2.3p1 with contributions from the Arctic Regional Supercomputer Center; tiger-2.2.4p1, tiger-2.2.3 with Linux support; and TARA (Tiger Analytical Research Assistant), an upgrade to tiger-2.2.3 from Advanced Research Corporation.
Tiger scans cron entries, inetd configuration, the passwd file, file permissions, aliases and PATH variables to see whether they can be used to gain root access. It checks for system vulnerabilities in inetd, host equivalents and PATH to determine whether a user can gain remote access to the system. It also uses MD5 digital signatures to determine whether key system binaries have been altered.
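The signature check rests on a simple idea: record MD5 checksums of key binaries while the system is known to be clean, then recompute and compare them later. The sketch below demonstrates that general technique with the standard md5sum utility on a throwaway copy of /bin/sh; it is an illustration of the method, not Tiger's actual signature-checking code, and all paths in it are our own.

```shell
#!/bin/sh
# Sketch of MD5-based binary integrity checking, the general technique
# behind Tiger's signature checks (illustrative only -- not Tiger's code).
# Uses a throwaway copy of /bin/sh as the "key binary" being watched.

workdir=$(mktemp -d)              # stand-in for a protected baseline store
cp /bin/sh "$workdir/sh.bin"

# 1. Record the baseline checksum while the system is known clean.
( cd "$workdir" && md5sum sh.bin > baseline )

# 2. Re-verify later; any change to the binary changes its MD5 sum.
if ( cd "$workdir" && md5sum --check --quiet baseline ); then
    echo "binaries unchanged"
fi

# 3. Tampering with the binary makes the check fail.
echo x >> "$workdir/sh.bin"
( cd "$workdir" && md5sum --check --quiet baseline ) || echo "tamper detected"

rm -rf "$workdir"
```

In practice the baseline must be stored where an intruder cannot rewrite it (read-only media, or off-host), or a cracker who replaces a binary can simply regenerate the checksums to match.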
Tiger is made up of scripts which may be run together or individually. This allows system administrators to determine the best strategy for checking their system depending on system size and configuration. The Tiger script files are available for download at ftp.linuxjournal.com/pub/lj/listings/issue78.
Tiger can be used interactively, or configured through the tigerrc file to run any or all of the scripts. In addition, a site file must list the locations of files Tiger may use, such as Crack and Reporter. Sample tigerrc and site files are included in the distribution. We usually start with everything turned on in the tigerrc and site-sample files, then customize which scripts to run after the initial audit. Tiger can be run by typing ./tiger in the Tiger directory. Run interactively, Tiger executes the scripts without regard to how long they take to complete.
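A tigerrc excerpt looks like the following. The variable names follow the pattern used in the sample tigerrc shipped with Tiger, but treat them as illustrative and consult the sample file in your own distribution, since names can vary between versions.

```shell
# Excerpt of a tigerrc-style configuration. Y enables a check script,
# N disables it. Variable names are shown as illustration -- consult the
# sample tigerrc in your distribution for the authoritative list.
Tiger_Check_PASSWD=Y        # look for passwd file problems
Tiger_Check_CRON=Y          # audit cron entries
Tiger_Check_INETD=Y         # audit inetd configuration
Tiger_Check_RHOSTS=N        # skip .rhosts checks on this host
```

Turning everything on for the first audit, as we do, costs nothing but run time; the switches become useful once you know which checks are noise on your particular system.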
The individual checks can be spread out over time using the tigercron script, without cluttering up your crontab file. Tigercron uses the cronrc file, in a manner similar to crontab, to schedule the Tiger scripts. Tigercron differs a little from the conventional crontab process: tigercron itself is run from crontab every hour, and it then consults cronrc in the Tiger directory to determine which scripts to run. One advantage of tigercron is that it e-mails a security report only when something has changed since the previous scan. This keeps status information from cluttering the system administrator's e-mail, and it makes every Tiger e-mail a red flag that the system has changed and a possible intruder may be at work.
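The report-only-on-change behavior amounts to comparing each new report against the saved previous one. The sketch below shows that logic in portable shell; the file names, report text and printed messages are all illustrative stand-ins, since real tigercron manages its own report files and sends mail rather than printing.

```shell
#!/bin/sh
# Sketch of tigercron's report-on-change behavior: compare the newest
# report against the previous one and raise an alert only on a difference.
# File names, report text and messages are illustrative, not tigercron's.

dir=$(mktemp -d)
printf 'no problems found\n' > "$dir/report.prev"
printf 'no problems found\n' > "$dir/report.new"

report_if_changed() {
    if cmp -s "$1" "$2"; then
        echo "no change: stay quiet"
    else
        echo "CHANGED: mail report to admin"   # tigercron would mail here
        cp "$2" "$1"                           # remember for the next run
    fi
}

report_if_changed "$dir/report.prev" "$dir/report.new"   # identical reports

printf 'setuid file found\n' >> "$dir/report.new"        # scan finds something
report_if_changed "$dir/report.prev" "$dir/report.new"   # now it differs

rm -rf "$dir"
```

Tigercron itself is started from an ordinary hourly crontab entry along the lines of 0 * * * * /usr/local/tiger/tigercron (the install path here is an assumption; use wherever you unpacked Tiger).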
Tiger outputs a security report upon completion. The output is detailed, and each entry is prefixed by a cryptic error code. Fortunately, Tiger comes with a utility, tigexp, to explain these messages. An explanation of each message in a report can be obtained by running tigexp -f <report filename>.
Tiger 2.2.4 has been updated to run with Red Hat Linux using a 2.0.35 kernel. This is great for people who use Red Hat 5.1 and have not upgraded. Systems using other flavors of Linux, or systems that have been upgraded, will have to update their binary signature file by using the util/mksig script. This allows Tiger's binary checks to stay current as new programs are upgraded or installed.
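Regenerating the signature file is a matter of running the mksig script from Tiger's util directory after system binaries change. The commands below are a sketch only: the install location is an assumption, and mksig's exact arguments and output handling are documented in the script itself.

```shell
cd /usr/local/tiger    # wherever Tiger was unpacked (assumed path)
sh util/mksig          # rebuild the MD5 signatures for installed binaries
```

Re-run it after any package upgrade that touches the binaries Tiger watches; otherwise the next scan will flag the legitimate upgrade as a possible compromise.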
Tiger is an important part of our system-integrity monitoring. It proactively checks for ways root can be compromised and scans for changes in important system files. In the past, Tiger has alerted us to system intrusions.