Open-Source Intrusion-Detection Tools for Linux
When an intrusion has been detected, the system administrator must first regain control of the compromised system by disconnecting it from the network. This prevents further intrusion and stops possible denial-of-service attacks against the Internet from originating on the compromised host. An image of the system should then be backed up so the intrusion can be analyzed and referenced later.
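One common way to capture such an image is a raw copy with dd, checksummed so its integrity can be demonstrated later. The sketch below is illustrative only and uses a scratch file as a stand-in for the disk; in a real incident the input would be a block device such as /dev/sda and the output would go to trusted external storage.

```shell
#!/bin/sh
# Hedged sketch: image a "disk" and record a checksum of the image.
# SRC stands in for the compromised disk so this is safely runnable;
# on a real system it would be a device like /dev/sda.
SRC=/tmp/demo_disk.bin
IMG=/tmp/compromised.img
dd if=/dev/zero of=$SRC bs=1k count=64 2>/dev/null     # create the stand-in "disk"
# conv=noerror,sync keeps dd going past read errors on damaged media.
dd if=$SRC of=$IMG bs=64k conv=noerror,sync 2>/dev/null
# Checksum the image so later analysis can prove it was not modified.
md5sum $IMG > $IMG.md5
md5sum -c $IMG.md5
```

The checksum file should be stored separately from the image, on media the intruder cannot reach.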
The system must then be analyzed thoroughly, starting with the log files. These are a primary source of information on how, when and where the intrusion occurred. All system binaries and configuration files, including the kernel, need to be verified as unaltered. To do this, the system administrator must first ensure the analysis tools themselves are clean and do not contain Trojans. System data should also be checked to make sure the intruder has not changed it. Intruders may “park” data or programs on the system, including programs to be used in other intrusions and data from other compromised systems. They may also install network sniffers and other monitoring programs in hopes of capturing information that will allow them to access other hosts. Once an intrusion has been detected on one system, all the other systems on the network should also be checked for possible intrusion: the intruder may have used the compromised system to gain access to other hosts on the network, or may have used other hosts to gain access to the system with the detected intrusion.
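Verifying binaries in practice means comparing them against a baseline recorded while the system was known clean (package tools such as rpm -V can do this on RPM-based systems). The following is a minimal sketch of the idea using md5sum; the file paths are illustrative, and the "tampering" step merely simulates what an intruder's Trojan would do.

```shell
#!/bin/sh
# Hedged sketch: detect an altered binary by checking it against a
# baseline checksum list taken when the system was known clean.
BASELINE=/tmp/baseline.md5
mkdir -p /tmp/bin_demo
printf 'original\n' > /tmp/bin_demo/tool     # stand-in system binary
md5sum /tmp/bin_demo/tool > $BASELINE        # baseline recorded while clean
printf 'trojaned\n' > /tmp/bin_demo/tool     # simulate an intruder's change
# md5sum -c reports every file that no longer matches its baseline.
md5sum -c $BASELINE 2>/dev/null | grep FAILED
```

The baseline itself, and the md5sum binary used for checking, must be kept on read-only or off-line media; otherwise the intruder can simply update them to match the Trojaned files.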
System administrators should file an incident report for all compromised hosts with a computer coordination center, such as CERT. Intruders usually use compromised accounts to attack other systems, and it is difficult or impossible for an individual site to track down the origins of a knowledgeable attacker. Cooperation among system administrators, however, makes it possible: closing down avenues of attack and access limits the attacker to hosts and systems where they can be monitored and identified.
Once an intrusion has been analyzed and reported, then comes the task of recovering from it. First, a clean version of the system should be installed, preferably from the original installation media. If a backup is used instead, the system binaries should be restored from copies known to be clean. The system administrator should take the paranoid stance that the latest backup may itself contain altered programs and data, and take care not to reinstall compromised files.
Once a compromised system has been restored, it must be secured to prevent another intrusion. Steps in hardening a system include disabling all unnecessary services, installing all vendor security patches, consulting CERT and other security advisories, and changing passwords.
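Disabling unnecessary services on a Linux system of this era typically means commenting them out of /etc/inetd.conf and signaling inetd to reread its configuration. The sketch below works on a scratch copy so it is safely runnable; the service entries are illustrative.

```shell
#!/bin/sh
# Hedged sketch: disable unneeded services in an inetd-style config.
# We edit a scratch copy; on a real host the file is /etc/inetd.conf.
CONF=/tmp/inetd.conf.demo
cat > $CONF <<'EOF'
ftp    stream tcp nowait root /usr/sbin/in.ftpd    in.ftpd
telnet stream tcp nowait root /usr/sbin/in.telnetd in.telnetd
EOF
# Comment out every service we no longer want to offer (here: ftp, telnet).
sed -i 's/^\(ftp\|telnet\)/#&/' $CONF
grep -c '^#' $CONF    # both entries are now disabled
# On a real host, make inetd reread the file: kill -HUP "$(pidof inetd)"
```

After trimming services, checking what is still listening (for example with netstat -an) confirms that only the intended ports remain open.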
Detecting and recovering from an intrusion may actually be the start of a system administrator's security journey. Intrusions only highlight the need for system security. With millions of users on the Internet, one has to assume that, while individually they may pose minimal threat, collectively they are more knowledgeable and have more resources than any system administrator or security program.
Bobby S. Wen (email@example.com) holds two engineering degrees and an MBA. He started playing with Linux in 1994 with a Slackware pre-1.0 distribution and has been addicted ever since. Even though he has a computer for every man, woman, child, and dog at home, he still has to wait his turn for a computer, because the only computer his children want to play with is the one he is working on. He currently multi-boots Linux, FreeBSD, Solaris, BeOS, Windows 98 and Windows NT.