Security Research Laboratory and Education Center
Keeping the bandits out is not the only reason you will need educated security experts to maintain your system in the future. What will happen when the demand for security administrators is so high that your firm cannot afford them? Or if the total cost of developing secure software exceeds the national debt of the U.S.? Your answer to my last question may be, “We will use open-source code”--good point! However, you will still need experienced security personnel to maintain your system.
As most of the industry struggles to prepare its systems for the year 2000, academia faces the problem of educating enough computer scientists. Government reports predict that in the year 2000, on-line commerce in the U.S. alone will exceed 15 billion dollars per year, and sales of security software will exceed two billion dollars per year. The need for increased training and research in information security will only expand in the coming years as the use of wide-area computer networks spreads.
As computer crime is increasing, Purdue University in Indiana is addressing the issue. For the last seven years, the Purdue Computer Science Department has been the home of the Computer Operations, Audit and Security Technology (COAST) laboratory. COAST is one of the largest academic research groups and graduate studies laboratories in practical computer and network security in the world. The laboratory is expanding into a newly established center.
Purdue University's Center for Education and Research in Information Assurance and Security (CERIAS, as in “serious”) is a pioneer in the area of information security. This new university center was designed to educate the next generation of computer and network security specialists. With projects encompassing Linux, Solaris, Windows 95/NT, smart cards, iButtons, biometrics, ATM networks and firewalls, its research works toward the goal of reducing the threat of so-called information warfare.
The director of the laboratory and of the newly founded center, Professor Gene Spafford, is a computer scientist who has been a major contributor to the discipline of information security. Spafford is an ACM (Association for Computing Machinery) fellow and has written several books on information security. He also helped to analyze and contain the Internet worm in 1988. Together with 15 faculty members and 40 graduate and undergraduate students (see Figure 1), he is steering the center toward a common goal: to provide world-class research and education in information security.
Currently, the faculty and students are drawn heavily from the computer science area. However, the center is opening its doors to a diversity of disciplines (e.g., philosophy, linguistics, political science, industrial engineering, management, sociology and electrical and computer engineering).
The laboratory (see Figure 2) and the new center have attracted professors and students from 13 countries. One reason is that there are few highly competent academic security laboratories with industry support. The diversity does not end with nationality—almost 40 percent of the students are female. Security has drawn the interest of women since the early days, and the number of female students has been increasing steadily in the last few years.
The research includes audit trail formats and reduction, network protection, firewall and software evaluation, and the creation and testing of a vulnerabilities database. Additionally, several undergraduate projects dealing with authentication and a security archive are in progress. The main COAST projects are described briefly below.
Intrusion Detection (ID) is a field within computer security that has grown rapidly over the last few years. The AAFID (Autonomous Agents for Intrusion Detection) project is the COAST laboratory's main effort in this area.
Traditional intrusion detection systems (IDS) collect data from one or more hosts and process it on a central machine to detect anomalous behavior. This approach does not scale to a large number of machines, because of the storage and processing limitations of the host that performs the analysis.
The AAFID architecture uses many independent entities, called “autonomous agents”, working simultaneously to perform distributed intrusion detection. Each agent monitors certain aspects of a system and reports strange behavior or occurrences of specific events. For example, one agent may look for bad permissions on system files, another agent may look for bad configurations of an FTP server, and yet another may look for attempts to perform attacks by corrupting the ARP (address resolution protocol) cache of the machine.
The results produced by the agents are collected on a per-machine level, permitting the correlation of events reported by different agents that may be caused by the same attack. Furthermore, reports produced by each machine are aggregated at a higher (per-network) level, allowing the system to detect attacks involving multiple machines.
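To make the layering concrete, here is a minimal Python sketch of the idea just described: one narrow agent, a per-machine collector and a per-network aggregator. The class names and the file-permission check are illustrative assumptions for this article; they are not taken from the AAFID prototype, which is linked below.

```python
# Sketch of AAFID-style layering: independent agents each watch one aspect of
# a host, a per-machine collector gathers their reports, and a per-network
# aggregator combines reports from many hosts. All names are illustrative.

import os
import stat
from dataclasses import dataclass


@dataclass
class Report:
    host: str
    agent: str
    detail: str


class FilePermissionAgent:
    """Agent performing one narrow check: flag world-writable system files."""

    name = "file-permissions"

    def __init__(self, paths):
        self.paths = paths

    def run(self, host):
        for path in self.paths:
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # file missing or unreadable; not this agent's concern
            if mode & stat.S_IWOTH:  # world-writable is suspicious
                yield Report(host, self.name, f"{path} is world-writable")


class HostCollector:
    """Per-machine layer: runs the local agents and gathers their reports,
    so events caused by the same attack can be correlated on one host."""

    def __init__(self, host, agents):
        self.host = host
        self.agents = agents

    def collect(self):
        reports = []
        for agent in self.agents:
            reports.extend(agent.run(self.host))
        return reports


class NetworkAggregator:
    """Per-network layer: merges reports from many hosts so attacks that
    involve multiple machines can be noticed in one place."""

    def __init__(self):
        self.reports = []

    def ingest(self, host_reports):
        self.reports.extend(host_reports)

    def summary(self):
        return [f"{r.host}: [{r.agent}] {r.detail}" for r in self.reports]


if __name__ == "__main__":
    collector = HostCollector("hostA", [FilePermissionAgent(["/etc/passwd", "/tmp"])])
    aggregator = NetworkAggregator()
    aggregator.ingest(collector.collect())
    for line in aggregator.summary():
        print(line)
```

In a real deployment each agent and collector would run continuously on its own host and forward reports over the network; the sketch only shows how the three layers relate.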
The AAFID group consists of ten graduate and undergraduate students within the COAST laboratory. A prototype implementation (see Figures 3 and 4) can be found on the AAFID project web page at http://www.cs.purdue.edu/coast/projects/autonomous-agents.html.