Verifying Filesystem Integrity with CVS
Now let's put the collector.pl and cvschecker.pl scripts into action with a couple of intrusion examples. Assume the target system is a Red Hat 6.2 machine with the IP address 192.168.10.5, and that HBIDS data was collected from this machine before any external network connection was established.
Suppose machine 192.168.10.5 is cracked, and the following command is executed as root:
# cp /bin/sh /dev/... && chmod 4755 /dev/...
This copies the /bin/sh shell to the /dev/ directory as the file “...” and sets the setuid bit. Because the resulting file is owned by root and executable by any user on the system, an attacker only needs to know the path /dev/... to obtain a root shell. Obviously, we would like to know if something like this has happened on the 192.168.10.5 system. Now, on the collector box, we execute cvschecker.pl, and the following e-mail is sent to root@localhost, which clearly shows /dev/... as a new setuid file:
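The suidfiles data that collector.pl gathers is essentially a listing of setuid binaries. Here is a minimal sketch of how such a list could be built; the /tmp/suid-demo directory and its backdoor file are hypothetical stand-ins for this illustration, and a real collector would scan the whole filesystem rather than a demo directory:

```shell
# Create a hypothetical demo directory containing one setuid file.
mkdir -p /tmp/suid-demo
touch /tmp/suid-demo/backdoor
chmod 4755 /tmp/suid-demo/backdoor

# List every setuid file under the directory, one "ls -l" line each --
# the same shape as the suidfiles entries shown in the diffs below.
find /tmp/suid-demo -type f -perm -4000 -exec ls -l {} \;
```

Storing the output of a scan like this in CVS is what lets a later revision diff expose a newly planted setuid file as a single added line.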
From: hbids@localhost
Subject: Changed file on 192.168.10.5: suidfiles
To: root@localhost
Date: Sat, 10 Nov 2001 17:35:13 -0500 (EST)

Index: /home/mbr/192.168.10.5/suidfiles
=======================================================
RCS file: /usr/local/hbids_cvs/192.168.10.5/suidfiles,v
retrieving revision 1.3
retrieving revision 1.4
diff -r1.3 -r1.4
4a5
> -rwsr-xr-x   1 root   root   512668 Nov 10 18:40 /dev/...
Now suppose an attacker is able to execute the following two commands as root:
# echo "eviluser:x:0:0::/:/bin/bash" >> /etc/passwd
# echo "eviluser::11636:0:99999:7:::" >> /etc/shadow
Note that the uid and gid for eviluser are both set to 0 in the /etc/passwd entry, and that there is no encrypted password string in the /etc/shadow entry. Hence, any user on the system could become root without supplying a password simply by typing su - eviluser. As in the previous example, after running cvschecker.pl, we receive the following e-mails in root's mailbox:
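The diffs below make this class of backdoor easy to spot by eye, and the same check is simple to automate. Here is a hedged sketch run against a hypothetical sample file rather than the live /etc/passwd; the file path and account names are invented for the illustration:

```shell
# Hypothetical two-line passwd fragment: a legitimate root entry
# plus a planted UID-0 account of the kind shown in the article.
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
eviluser:x:0:0::/:/bin/bash
EOF

# Flag any UID-0 account that is not root (field 3 is the uid).
awk -F: '$3 == 0 && $1 != "root" { print "extra UID-0 account: " $1 }' \
    /tmp/passwd.sample
```

An analogous awk one-liner over /etc/shadow could flag entries whose password field (field 2) is empty.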
From: hbids@localhost
Subject: Changed file on 192.168.10.5: /etc/passwd
Delivered-To: root@localhost
Date: Sat, 10 Nov 2001 17:43:17 -0500 (EST)

Index: /home/mbr/192.168.10.5/etc/passwd
=======================================================
RCS file: /usr/local/hbids_cvs/192.168.10.5/etc/passwd,v
retrieving revision 1.2
retrieving revision 1.3
diff -r1.2 -r1.3
26a27
> eviluser:x:0:0::/:/bin/bash

and
From: hbids@localhost
Subject: Changed file on 192.168.10.5: /etc/shadow
Delivered-To: root@localhost
Date: Sat, 10 Nov 2001 17:43:18 -0500 (EST)

Index: /home/mbr/192.168.10.5/etc/shadow
=======================================================
RCS file: /usr/local/hbids_cvs/192.168.10.5/etc/shadow,v
retrieving revision 1.2
retrieving revision 1.3
diff -r1.2 -r1.3
26a27
> eviluser::11636:0:99999:7:::
Finding changes in the filesystem can be an effective method for detecting intruders. In this article we have illustrated some simple Perl code that bends CVS into a homegrown, host-based intrusion-detection system. At my current place of employment, USinternetworking, Inc., a large ASP in Annapolis, Maryland, we use a similar (although greatly expanded) custom system called USiOasis to help verify filesystem integrity across several hundred machines in our network infrastructure. The machines are loaded with various operating systems that include Linux, HPUX, Solaris and Windows, and run many different types of server applications. The system includes a MySQL database back end, a rather large CVS repository and a custom web/CGI front end written mostly in Perl. Making use of a CVS repository to perform difference tracking also comes with an important additional benefit: an excellent visualization tool written in Python called ViewCVS. Storing operating system and application configuration files within CVS also aids several areas outside of detecting intrusions, such as troubleshooting network and application-level outages, disaster recovery and tracking system configurations over time.
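The mechanism behind cvschecker.pl is ordinary revision diffing, and the core idea can be sketched without a CVS repository at all: diff a freshly collected file against a saved baseline. All paths and file contents below are hypothetical stand-ins for the suidfiles data described above:

```shell
# Hypothetical baseline and fresh collection of the suidfiles list.
mkdir -p /tmp/hbids-demo
printf '%s\n' '-rwsr-xr-x 1 root root 35168 /usr/bin/passwd' \
    > /tmp/hbids-demo/suidfiles.baseline
printf '%s\n' '-rwsr-xr-x 1 root root 35168 /usr/bin/passwd' \
              '-rwsr-xr-x 1 root root 512668 /dev/...' \
    > /tmp/hbids-demo/suidfiles.new

# A nonzero diff exit status means the setuid list changed; in the
# CVS-based system this is the point where the alert mail is sent.
diff /tmp/hbids-demo/suidfiles.baseline /tmp/hbids-demo/suidfiles.new \
    || echo "suid file list changed"
```

What the CVS repository adds over a single baseline is the full revision history, so any two points in time can be compared with cvs diff, and tools such as ViewCVS can browse the changes.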