Security Research Laboratory and Education Center
COAST graduate students have been studying ways of enhancing audit trails on Linux systems. Additionally, penetration and vulnerability analysis efforts have benefited from the use of Linux machines with the enhanced auditing systems.
Generally, operating systems' audit trails or logs are inadequate for a variety of applications such as intrusion detection. The students have developed two different approaches to enhancing the data collected by Linux. One approach was to use the technique of interposing shared objects to collect new application-level audit data. Using this technique, a program can be instructed to record and act upon certain library calls and their arguments without modifying the binary or source code of the program. (See Figure 6.)
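On Linux, this sort of interposition is typically done with the dynamic loader's LD_PRELOAD mechanism: a shared object is loaded ahead of libc, its functions shadow the library's, and each wrapper records the call before handing it off to the real implementation via dlsym(RTLD_NEXT, ...). The sketch below illustrates the general technique (it is an illustration, not the COAST group's actual tool) by wrapping the C library's open() call:

    /* audit_open.c -- a minimal sketch of library-call interposition.
     *
     * Build: gcc -shared -fPIC -o audit_open.so audit_open.c -ldl
     * Use:   LD_PRELOAD=./audit_open.so some_program
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Shadow the C library's open(): record the call and its arguments,
     * then forward to the real implementation found by dlsym(RTLD_NEXT). */
    int open(const char *path, int flags, ...)
    {
        static int (*real_open)(const char *, int, ...);
        mode_t mode = 0;

        if (!real_open)
            real_open = (int (*)(const char *, int, ...))
                            dlsym(RTLD_NEXT, "open");

        /* open() takes a third argument only when O_CREAT is set. */
        if (flags & O_CREAT) {
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t) va_arg(ap, int);
            va_end(ap);
        }

        fprintf(stderr, "audit: open(\"%s\", flags=0x%x)\n", path, flags);
        return real_open(path, flags, mode);
    }

Because the preloaded object is searched before libc when symbols are resolved at run time, the target program needs no modification at all; the same pattern extends to any library call one wants to record or act upon.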
Another part of the project involves using a Linux 2.0.34 kernel (see Figure 7) to audit low-level network data. This involves adding a mechanism to the kernel that reports network packet headers to user processes. By correlating these packet headers with other audit data, host-based intrusion detection systems can detect low-level network attacks such as “Land”, “Teardrop” and SYN floods. The mechanism uses a version of the existing kernel log code, modified to accommodate arbitrary binary data.
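The article does not describe the reporting interface itself, but a user-space consumer of such packet-header records might look something like the sketch below. The device path (/dev/netaudit) and the fixed-size record layout are assumptions made purely for illustration; the detection logic is the standard Land signature, a TCP packet whose source and destination address/port pairs are identical:

    /* netaudit_reader.c -- a sketch of a user process consuming packet-header
     * records from the kernel's audit log. The device path and the record
     * framing (one 20-byte IP header plus one 20-byte TCP header per record,
     * no IP options) are hypothetical; the kernel side is not shown.
     */
    #include <stdio.h>
    #include <netinet/ip.h>
    #include <netinet/tcp.h>

    /* A Land attack forges a TCP packet whose source and destination
     * address/port pairs are identical, wedging some vulnerable stacks. */
    static int is_land_attack(const struct iphdr *ip, const struct tcphdr *tcp)
    {
        return ip->saddr == ip->daddr && tcp->source == tcp->dest;
    }

    int main(void)
    {
        unsigned char buf[sizeof(struct iphdr) + sizeof(struct tcphdr)];
        FILE *dev = fopen("/dev/netaudit", "rb");   /* hypothetical device */

        if (!dev) {
            perror("fopen");
            return 1;
        }

        while (fread(buf, sizeof buf, 1, dev) == 1) {
            const struct iphdr  *ip  = (const struct iphdr *) buf;
            const struct tcphdr *tcp =
                (const struct tcphdr *)(buf + sizeof(struct iphdr));
            const unsigned char *src = (const unsigned char *) &ip->saddr;

            /* ihl == 5 means a bare 20-byte header with no IP options. */
            if (ip->ihl == 5 && ip->protocol == IPPROTO_TCP &&
                is_land_attack(ip, tcp))
                fprintf(stderr, "possible Land attack from %u.%u.%u.%u\n",
                        src[0], src[1], src[2], src[3]);
        }

        fclose(dev);
        return 0;
    }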
The vulnerability database and analysis group at COAST is collecting and analyzing computer vulnerabilities for a variety of purposes. The project includes the application of knowledge discovery and data mining tools to find non-obvious relationships in vulnerability data, to develop vulnerability classifications and to develop tools that will generate intrusion detection signatures from vulnerability information. One goal of the group is to develop methods of testing software in order to discover security flaws before the software is deployed.
In the words of Professor Spafford:
With the increasing use of computers and networks, the importance of information security and assurance is also going to increase. Concerns for privacy, safety and integrity may soon become more important to people than speed of computation. This represents a tremendous challenge, but also a tremendous opportunity for those who seek to understand—and provide—workable security.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools; a one-liner such as find /home -name "*.log" -exec grep "some entry" {} + finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Interview with Patrick Volkerding
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- SuperTuxKart 0.9.2 Released
- Tech Tip: Really Simple HTTP Server with Python
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and hardware that multithreads like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide