Intrusion Detection for the Masses
As impregnable as we hope our hardened systems are, security isn't a game of absolutes: the potential for system breaches must be recognized. Tripwire Open Source is a free and open-source software package that gives us a reasonable chance of being notified of possible breaches as soon as they occur.
Integrity checkers such as Tripwire create cryptographic “fingerprints” of system binaries, configuration files and other things likely to be tampered with in the course of, or subsequent to, a security breach. They then periodically check those files against the stored fingerprints and e-mail discrepancies back to you.
Tripwire is the best-known and most mature integrity-checking system, and the one we're about to discuss in depth. You may also be interested in AIDE, which runs on more platforms than Tripwire Open Source, and FCheck, which is written 100% in Perl and, thus, even less platform-dependent than AIDE or Tripwire (it even runs on Windows). See the Resources section at the end of this article for links to AIDE's and FCheck's web sites.
Integrity checking mechanisms are like system backups; we hope we'll never need them, but heaven help us if we do and they're not there. Also, like system backups, integrity checking is an important component of a larger plan. If a system has been hardened, patched and maintained according to the industry's highest standards (or at least common sense), an integrity checker will provide a final safety net that helps minimize the damage done by whatever brilliant cracker manages to sneak in.
The principle on which integrity checkers operate is simple: if a file changes unexpectedly, there's a good chance it's been altered by an intruder. For example, one of the first things a system cracker will often do after “rooting” a system is install a “rootkit”: trojaned replacements for common system utilities such as ls, ps and netstat that appear to work normally but conveniently fail to list the files, processes and connections (respectively) that might betray the cracker's presence.
Integrity checkers can be used to create a database of hashes (checksums) of important system binaries, configuration files or anything else we don't expect or want to have changed. By periodically checking those files against the integrity checker's database, we can minimize the chances of our system being compromised without ever knowing it. The less time between a system's compromise and the administrator's learning of it, the greater the chance administrators can catch, or at least evict, the intruders.
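The hash-database idea can be sketched in a few lines of Python. This is purely illustrative: Tripwire's real database format, hash algorithms and signing mechanism are all different, and the file list here is hypothetical.

```python
import hashlib
import os

def fingerprint(path):
    """Return a SHA-256 hex digest of the file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_database(paths):
    """Record a trusted hash for each watched file.

    Crucially, this must be run on a known-clean system, right after
    installation from trusted media.
    """
    return {p: fingerprint(p) for p in paths if os.path.exists(p)}

def check(database):
    """Compare current hashes against the trusted database; return
    the list of files whose contents no longer match."""
    return [path for path, trusted in database.items()
            if fingerprint(path) != trusted]
```

A real checker run periodically (e.g., from cron) would then mail the output of check() to the administrator; an empty list means nothing watched has changed.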
One caveat: any integrity checker with an untrustworthy database is worthless. It is imperative to create this database as soon as possible after installing the host's operating system from trusted media. I repeat: installing, configuring and maintaining an integrity checker is not worth the effort unless its database was initialized on a clean system.
Among the most celebrated and useful things to come out of Purdue's COAST project (http://www.cerias.purdue.edu/coast/) is Tripwire, created by Dr. Eugene Spafford and Gene Kim. Originally both open source and free, Tripwire went commercial in 1997, and fee-free use was restricted to academic and other non-commercial settings.
Happily, last October Tripwire, Inc. released Tripwire Open Source, Linux Edition. Commercial versions of Tripwire until then had included features not available in the older Academic Source Release. In contrast, Tripwire Open Source is a more-or-less current version of the commercial product. Other than lacking enterprise features such as centralized management of multiple systems, it is very similar to the Tripwire for Servers product.
Note that Tripwire Open Source is free for use only on non-commercial Unices, i.e., Linux and Free/Net/OpenBSD. In fact, it's only officially supported on Red Hat Linux and FreeBSD, although there's no reason it shouldn't compile and run equally well on other Linux and BSD distributions. Only the older Academic Source Release is free for use on commercial Unices such as Sun Solaris and IBM AIX; the proprietary version must be purchased for these systems.
But we're all Linux geeks here, right? For the remainder of this discussion I'll focus on Tripwire Open Source, Linux Edition.
As of this writing, the most current version of Tripwire Open Source is 2.3.1-2. It can be downloaded as a source-code tarball at http://sourceforge.net/projects/tripwire/. I strongly recommend that you obtain, compile and install this version. While Tripwire has had only one significant security problem (and only a denial-of-service risk, at that) in its history, we use Tripwire because we're paranoid. For paranoiacs, only the latest (stable) version is good enough.
Having said that, the binary version included with Red Hat 7.0 is reasonably up-to-date. As far as I can tell, the differences between Red Hat's v2.3-55 RPM and the official source-release v2.3.1-2 involve non-security-related bugfixes; therefore you're probably taking no huge risk in using your stock RH 7.0 RPM. But don't say I told you to!
To compile Tripwire Open Source, move the archive to /usr/src and un-tar it, e.g., tar -xzvf ./tripwire-2.3.1-2.tar.gz. Next, check whether you have a symbolic link from /usr/bin/gmake to /usr/bin/make (non-Linux Unices don't all come with GNU make, so Tripwire explicitly looks for gmake--of course, on most Linux systems this is simply called make). If you don't have it, the command to create this link is ln -s /usr/bin/make /usr/bin/gmake.
Another thing to check for is a full set of subdirectories in /usr/share/man—Tripwire will need to place man pages in man4, man5 and man8. On my Debian system /usr/share/man/man4 was missing, and as a result the installer created a file called /usr/share/man/man4 that was actually a man page incorrectly copied to that name rather than into that directory.
Finally, read the source's README and INSTALL files, change to the source-tree's src directory (e.g., /usr/src/tripwire-2.3.1-2/src), and make any changes you deem necessary to the variable-definitions in src/Makefile. Be sure to verify that the appropriate SYSPRE definition is uncommented (SYSPRE = i386-pc-linux, SYSPRE = sparc-linux, etc.).
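For instance, on a standard PC the relevant portion of src/Makefile should end up looking something like this (the exact set of commented-out alternatives varies by Tripwire version):

```make
# src/Makefile: uncomment the SYSPRE definition matching your platform,
# leaving the others commented out.
#SYSPRE = sparc-linux
SYSPRE = i386-pc-linux
```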
Now we're ready to compile: type make release. This will take awhile, so now is a good time to grab a sandwich. When the build is done, navigate up one directory level, e.g., to /usr/src/tripwire-2.3.1-2, and execute these two commands:
cp ./install/install.cfg .
cp ./install/install.sh .
Now open install.cfg with your favorite text editor; while the default paths are probably fine, you should at the very least examine the Mail Options section. This is where we initially tell Tripwire how to route its logs. If we set TWMAILMETHOD=SENDMAIL and specify a value for TWMAILPROGRAM, Tripwire will use the specified local mailer (sendmail by default) to deliver its reports to a local user or group.
If instead we set TWMAILMETHOD=SMTP and specify values for TWSMTPHOST and TWSMTPPORT, Tripwire will mail its reports to an external e-mail address via the specified SMTP server and port. Note that if you change your mind later, Mail Options settings can be changed in Tripwire's configuration file at any time.
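Concretely, the two alternatives in install.cfg look something like the following (the variable names are Tripwire's own; the mailer path, hostname and port shown here are illustrative values you should replace with your own):

```
# Local delivery via sendmail (or another local mailer):
TWMAILMETHOD=SENDMAIL
TWMAILPROGRAM="/usr/lib/sendmail -oi -t"

# ...or relay reports through an external SMTP server instead:
# TWMAILMETHOD=SMTP
# TWSMTPHOST="smtp.example.com"
# TWSMTPPORT=25
```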
If the system on which you're installing Tripwire is a multiuser system, and one that you or other system administrators routinely log on to and read e-mail, the SENDMAIL method is probably preferable. If the system is a host you typically administer remotely from other systems, the SMTP method is probably better.
Once install.cfg is set to your liking, it's time to install Tripwire. Simply enter sh ./install.sh. You will be prompted for site and local passwords; the site password protects Tripwire's configuration and policy files, whereas the local password protects Tripwire databases and reports. This allows the use of a single policy across multiple hosts in such a way as to centralize control of Tripwire policies but distribute responsibility for database management and report generation.