AlienVault: the Future of Security Information Management
Let's create a simple directive so you can see correlation in action. As an example, you'll monitor suspicious access to the Web server using two different plugins. In order to do so, first turn down the reliability values for your OSSEC plugin. From the OSSIM management page, go to the Plugins section under Configuration. Scroll through the tables to find Plugin ID 7010, and click on the ID column to edit the plugin's values. On the resulting page, change the reliability values for SIDs 5503 and 5716 from 5 to 1 (Figure 8). If you left these values at 5, events matching those SIDs would raise an alarm on their own, before the correlation rule is ever processed. Because the goal is to observe correlation, you need to turn them down.
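If you prefer to script this change rather than click through the UI, the same reliability values live in OSSIM's backing MySQL database. The sketch below is only illustrative, not part of the official procedure: the ossim database name, the plugin_sid table and its plugin_id, sid and reliability columns, and the credentials are assumptions based on a default OSSIM install, so verify them against your own schema before running anything.

#!/usr/bin/env python
# Illustrative sketch only: database name, table and column names, and
# credentials are assumptions based on a default OSSIM install.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost",
    user="root",
    password="your-db-password",   # assumption: replace with your OSSIM DB credentials
    database="ossim",              # assumption: default OSSIM database name
)
cur = conn.cursor()

# Lower the reliability of OSSEC (plugin 7010) SIDs 5503 and 5716 from 5 to 1,
# mirroring the change made on the Plugins page of the web UI.
cur.execute(
    "UPDATE plugin_sid SET reliability = %s "
    "WHERE plugin_id = %s AND sid IN (%s, %s)",
    (1, 7010, 5503, 5716),
)
conn.commit()
print("Rows updated:", cur.rowcount)

cur.close()
conn.close()

If you make the change this way, you typically would restart the OSSIM server process afterward so the correlation engine picks up the new values.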
Click on the Directives link found under the Correlation section of the navigation pane. From here, you get a brief description of how directives are ordered and processed. Click on the Add Directive link in the top left of the page. In the resulting fields, enter “Unauthorized Access to Web Server” as the Name. In the blank field next to Id, enter 101, which places your directive in the Generic directives group. Set the Priority to 2 and click Save. On the next page (Figure 9), click on the + symbol to add a rule to your new directive. In the Name field, type “NMAP Scan on Web Server from Foreign Host”. Enter 1001 as the Plugin Id (Snort). In the Plugin Sid field, type “2000537, 2000545”. Under the Network section, type the IP address of your CentOS server in the To field and set the Port to 22. In the Risk section, set Occurrence to 3 and Reliability to 1. Set the Sticky field to True and Sticky Different to SRC_IP. Click the Save button at the bottom of the page.
In theory, you now have a directive that will send an alarm when a host runs an Nmap scan against port 22 on your Web server. However, you won't receive any alarms yet. In order for a directive to send an alarm, the risk of the directive being tripped must be greater than 1.
Although I have not talked much about risk until now, it is integral to the function of correlation. Risk is the primary factor used by the correlation engine to determine when alarms are generated. It is calculated using a series of subjective numerical values assigned by the agents and administrators. Expressed in mathematical form, the formula for risk looks like this:
Risk = (priority x reliability x asset) / 25
Priority is the number OSSIM uses to prioritize rules. It is set at the Directive level. Priority can have a value of 0–5. A value of 0 means OSSIM should ignore the alert; a value of 5 means OSSIM should treat it as a serious threat. Reliability refers to how reliable a rule is, based on the chance that it's a false positive. It is set at the individual rule level and can be cumulative if there is more than one rule in a directive. Possible values for reliability are 1–10, and they equate to percentages, so 6 would mean a rule is reliable 60% of the time. Asset is the value that represents the importance of a host. You assigned the highest possible asset value (5) to your CentOS server in the Policies section earlier in the article.
At this point, you have one rule under your directive, but no correlation, so you need to add another rule. Click on the + symbol on your directive. Give the new rule a name of “Too Many Auth Failures”. Set the Plugin ID to 7010 (OSSEC), and set the From field to the IP address of your Web server, as the OSSEC agent will show the Web server as the source of the events. Set Occurrence to 4 and Reliability to 0 for now. Click Save. After adding the second rule, navigate to its row and move the mouse over the directional arrows that control how rules are treated inside the directive. The up and down arrows work like AND statements, meaning both rules must match, and the left and right arrows nest rules within each other like nested IF statements. Move your second rule to the right. Open the second rule back up and change the Reliability to +2, which increases the reliability by 2 over the previously processed rule (for a total of 3 if the first rule is met). Now, if both rules are met, the risk will be greater than 1 and an alarm will be generated. Listing 1 shows the directive in XML format.
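To see why the completed directive now crosses the alarm threshold, here is a small worked example of the risk formula in Python. It simply plugs in the values used above: directive priority 2, a combined reliability of 3 once both rules have matched, and the asset value of 5 assigned to the CentOS server. It is an illustration of the arithmetic only, not code that ships with OSSIM.

def risk(priority, reliability, asset):
    """OSSIM risk formula: Risk = (priority x reliability x asset) / 25."""
    return (priority * reliability * asset) / 25.0

# First rule alone: priority 2, reliability 1, asset value 5.
print(risk(2, 1, 5))   # 0.4 -> below 1, no alarm

# Both rules matched: the second rule adds +2 reliability, for a total of 3.
print(risk(2, 3, 5))   # 1.2 -> greater than 1, an alarm is generated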
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Tech Tip: Really Simple HTTP Server with Python
- Parsing an RSS News Feed with a Bash Script
- Google's SwiftShader Released
- SuperTuxKart 0.9.2 Released
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here: just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide