Practical Threat Analysis and Risk Management
The last piece of the threat puzzle we'll discuss before plunging into threat analysis is the attacker. Attackers, also sometimes called “actors”, can range from the predictable (disgruntled ex-employees, mischievous youths) to the strange-but-true (drug cartels, government agencies, industrial spies). When you consider possible attackers, almost any type is possible; the challenge is to gauge which attackers are the most likely.
A good rule of thumb in identifying probable attackers is to consider the same suspects your physical security controls are designed to keep out, minus geographical limitations. This is a useful parallel: if you install an expensive lock on the door to your computer room, nobody will ask, “Do you really think the maintenance staff will steal these machines when we go home?”
Computer security is no different. While it's often tempting to say “my data isn't interesting; nobody would want to hack me”, you have no choice but to assume that if you're vulnerable to a certain kind of attack, some attacker eventually will probe for and exploit it, regardless of whether you're imaginative enough to understand why. It's considerably less important to understand attackers than it is to identify and mitigate the vulnerabilities that can feasibly be attacked.
Once you've compiled lists of assets and vulnerabilities (and considered likely attackers), the next step is to correlate and quantify them. One simple way to quantify risk is by calculating annualized loss expectancies (ALEs).
For each vulnerability associated with each asset, you estimate first the cost of replacing or restoring that asset (its single loss expectancy) and then the vulnerability's expected annual rate of occurrence. You then multiply these to obtain the vulnerability's annualized loss expectancy.
In other words, for each vulnerability we calculate: single loss expectancy (cost per incident) × expected annual rate of occurrence = annualized loss expectancy.
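Expressed as code, the formula is a one-line multiplication; here's a minimal Python sketch (the function name is mine, and the figures are the ones used in the example below):

```python
def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = SLE (cost per incident) x expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# The DoS example from this article: $950 per incident,
# one incident every two years (0.5 incidents/yr).
print(annualized_loss_expectancy(950, 0.5))  # 475.0
```

Keeping the calculation this explicit makes it easy to re-run whenever a cost or frequency estimate changes.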
For example, suppose Mommenpop, Inc., a small business, wishes to calculate the ALE for denial-of-service (DOS) attacks against their SMTP gateway. Suppose further that e-mail is a critical application for their business; their ten employees use e-mail to bill clients, provide work estimates to prospective customers and facilitate other critical business communications. However, networking is not their core business, so they depend on a local consulting firm for e-mail-server support.
Past outages, averaging one day in length, have tended to reduce productivity by about one-fourth, which translates to two hours per day per employee. Their fallback mechanism is a fax machine, but since they're located in a small town, this entails long-distance telephone calls and is expensive.
All this probably sounds more complicated than it is; it's much less imposing expressed in spreadsheet form (Figure 1).
The next thing to estimate is this type of incident's expected annual occurrence (EAO). This is expressed as a number or fraction of incidents per year. Continuing our example, suppose Mommenpop, Inc. hasn't been the target of espionage or other attacks by its competitors yet, and as far as you can tell, the most likely sources of DOS attacks on their mailserver are vandals, hoodlums, deranged people and other random strangers.
It seems reasonable to guess that such an attack is unlikely to occur more than once every two or three years; let's say two to be conservative. One incident every two years is an average of 0.5 incidents per year, for an EAO of 0.5. Let's plug this in to our ALE formula:
$950/incident × 0.5 incidents/yr = $475/yr.
The ALE for DOS attacks on Mommenpop's SMTP gateway is thus $475 per year.
Now suppose some vendors are trying to talk the company into replacing their homegrown Linux firewall with a commercial firewall; this product has a built-in SMTP proxy that will help minimize but not eliminate the SMTP gateway's exposure to DOS attacks. If that commercial product costs $5,000, even if its cost can be spread out over three years (to $2,166 per year after 10% annual interest), such a firewall upgrade would not appear to be justified by this single risk.
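The comparison above is simple arithmetic worth making explicit. A sketch of the cost-versus-risk check, assuming the $2,166/yr figure comes from spreading the price with 10% simple annual interest (the variable names are mine):

```python
# Amortized cost of the proposed firewall vs. the ALE it would mitigate.
purchase_price = 5000.0
years = 3
interest_rate = 0.10  # assumed simple annual interest, matching $2,166/yr

annual_cost = purchase_price * (1 + interest_rate * years) / years
dos_ale = 475.0  # the single risk this purchase would address

print(f"annual cost: ${annual_cost:,.2f}")  # ~$2,166.67
print(f"DoS ALE:     ${dos_ale:,.2f}")
print("justified" if annual_cost < dos_ale else "not justified by this risk alone")
```

Of course, a real purchase decision would weigh the firewall against every risk it mitigates, not just this one; that's exactly what the fuller analysis in Figure 2 supports.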
Figure 2 shows a more complete threat analysis for our hypothetical business' SMTP gateway, including not only the ALE we just calculated but also a number of others that address related assets, plus a variety of security goals.
In this example analysis, customer data in the form of confidential e-mail is the most valuable asset at risk; if it is eavesdropped on or tampered with, customers could be lost (due to losing confidence in Mommenpop), resulting in lost revenue. The different perceived magnitudes of these potential losses are reflected in the single loss expectancy figures for the different vulnerabilities. Similarly, the different estimated annual rates of occurrence reflect the relative likelihood of each vulnerability actually being exploited.
Since the sample analysis in Figure 2 is in the form of a spreadsheet, it's easy to sort the rows arbitrarily. Figure 3 shows the same analysis sorted by vulnerability.
This is useful for adding up ALEs associated with the same vulnerability. For example, there are two ALEs associated with in-transit alteration of e-mail while it traverses the Internet or ISPs, at $2,500 and $750, for a combined ALE of $3,250. If a training consultant will, for $2,400, deliver three half-day seminars for the company's workers on how to use free GnuPG software to sign and encrypt documents, the trainer's fee will be justified by this vulnerability alone.
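Summing ALEs per vulnerability is a one-pass grouped total. A sketch using the two in-transit-alteration figures cited above (the row label is mine; a real list would carry every row of Figure 3):

```python
from collections import defaultdict

# (vulnerability, annualized loss expectancy in $/yr) -- values cited in the text
ale_rows = [
    ("in-transit alteration of e-mail", 2500.0),
    ("in-transit alteration of e-mail", 750.0),
]

# Group rows by vulnerability and sum their ALEs.
combined = defaultdict(float)
for vulnerability, ale in ale_rows:
    combined[vulnerability] += ale

print(combined["in-transit alteration of e-mail"])  # 3250.0
```

The same grouping works for any other column you care to total, such as summing by asset instead of by vulnerability.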
We also see some relationships between ALEs for different vulnerabilities. In Figure 3, the bottom three ALEs all involve losses caused by the SMTP gateway's being compromised. In other words, not only will an SMTP gateway compromise result in lost productivity and expensive recovery time from consultants ($1,200 for either ALE at the top of Figure 3), it will also expose the business to an additional $31,500 risk of e-mail data compromise, for a total ALE of $32,700.
Clearly, the ALE for e-mail eavesdropping or tampering caused by system compromise is high. Mommenpop, Inc. would be well-advised to call that $2,400 trainer immediately.
Problems with relying on the ALE as an analytical tool include its subjectivity (note how often in the example I used words like “unlikely” and “reasonable”) and, therefore, the fact that its significance is ultimately determined by the experience and knowledge of whoever does the calculating rather than by empirical data. Also, this method doesn't lend itself well to correlating ALEs with one another (except in short lists like those in Figures 2 and 3).
The ALE method's strengths, though, are its simplicity and its flexibility. Anyone sufficiently familiar with their own system architecture and operating costs, and possessing even a general sense of current trends in IS security (e.g., from reading CERT advisories and incident reports now and then), can create lengthy lists of itemized ALEs for their environment with little effort. If such a list takes the form of a spreadsheet, ongoing tweaking of its various cost and frequency estimates is especially easy.
Even given this method's inherent subjectivity (not completely avoidable in practical threat-analysis techniques), it's extremely useful as a tool for enumerating, quantifying and weighing risks. A well-constructed list of annualized loss expectancies can help you optimally focus your IT security expenditures on the threats likeliest to affect you in ways that matter.