Practical Threat Analysis and Risk Management
If you've been reading this column awhile, you know I like to balance technical procedures, tools and techniques with enough background information to give them some context. Security is a big topic, and the only way to make sense of the myriad variables, technologies and black magic that figure into it is to try to understand some of the commonalities between security puzzles.
One piece common to each and every security scenario is the threat. Without a threat, there's no need for security measures. But how much time do you spend identifying and evaluating threats to your systems, compared to the time you spend implementing and (I hope) maintaining specific security measures? Probably far too little. If so, don't feel bad; even seasoned security consultants spend too little time on threat analysis.
This is not to say you need to spend hours and hours on it. My point is that, ideally, threats to the integrity and availability of your critical systems should be analyzed systematically and comprehensively; threats to less essential but still important systems at least should be thought about in an organized and objective way.
Before we dive into threat analysis, we need to cover some important terms and concepts. First, what does threat mean? Quite simply, a threat is the combination of an asset, a vulnerability and an attacker.
An asset is anything you wish to protect. In information security scenarios, an asset is usually data, a computer system or a network of computer systems. We want to protect those assets' integrity and, in the case of data, confidentiality.
Integrity is the absence of unauthorized changes. In the case of data, a compromise can mean that bogus data was inserted into the legitimate data, or that parts of the legitimate data were deleted or changed. In the case of computers, it can mean that system or configuration files have been altered by attackers in such a way as to allow unauthorized users to use the system improperly.
We also want to protect the confidentiality of at least some of our data. This is a somewhat different problem from that of integrity, since confidentiality can be compromised completely passively. If someone alters your data, the change is relatively easy to detect and analyze by comparing the compromised data with a known-good copy of the original. If an attacker illicitly copies (steals) it, however, detection and damage assessment are much harder, since the data itself hasn't changed.
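In practice, detecting integrity violations usually comes down to comparing the current data against a known-good baseline, typically via cryptographic checksums (this is the idea behind tools such as Tripwire). A minimal sketch in Python, with made-up data for illustration:

```python
import hashlib

# Baseline digest, recorded when the data was known to be good.
baseline = hashlib.sha256(b"original data").hexdigest()

# Later: recompute the digest and compare against the baseline.
unchanged = hashlib.sha256(b"original data").hexdigest()
tampered = hashlib.sha256(b"original data plus bogus insert").hexdigest()

print(baseline == unchanged)  # True: integrity intact
print(baseline == tampered)   # False: unauthorized change detected
```

Note that this catches only integrity violations; a purely passive attacker who copies the data leaves the digests identical, which is exactly why confidentiality breaches are so much harder to detect.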
For example, suppose ABC Corporation has an SMTP gateway that processes their incoming e-mail. This SMTP gateway represents two assets. The first asset is the server itself, whose proper functioning is important to ABC Corp.'s daily business. In other words, ABC Corp. needs to protect the integrity of its SMTP server so its e-mail service isn't interrupted.
Secondly, that SMTP gateway is host to the data contained in the e-mail that passes through it. If the gateway's system integrity is compromised, confidential e-mail could be read by eavesdroppers and important communications could be tampered with. Protecting the SMTP gateway, therefore, is also important in preserving the confidentiality and integrity of ABC Corp.'s e-mail data.
Step one in any threat analysis, then, is identifying which assets need to be protected and which qualities of those assets need protecting.
Step two is identifying known and plausible vulnerabilities in that asset and in the systems that directly interact with it. Known vulnerabilities, of course, are much easier to deal with than vulnerabilities that are purely speculative. (Or so you'd think, but an alarming number of computers connected to the Internet run default, unpatched operating systems and applications.) Regardless, you need to try to identify both.
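These first two steps amount to building a structured inventory: each asset, the qualities of it you need to protect, and the vulnerabilities (known and speculative) that threaten those qualities. A sketch of what such an inventory might look like, using illustrative entries rather than anything from a real analysis:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    protect: list                        # qualities to protect, e.g. ["integrity"]
    vulns: list = field(default_factory=list)  # known and plausible vulnerabilities

# Step one: identify the assets and which qualities need protecting.
gateway = Asset("SMTP gateway", ["integrity"])
mail = Asset("e-mail passing through the gateway", ["confidentiality", "integrity"])

# Step two: record known and plausible vulnerabilities for each asset.
gateway.vulns.append("known: unpatched MTA release")
gateway.vulns.append("speculative: undiscovered buffer overflow in the MTA")

for asset in (gateway, mail):
    print(asset.name, "->", asset.protect, asset.vulns)
```

Even a plain spreadsheet serves the same purpose; the point is that the inventory is written down and reviewed, not carried around in someone's head.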
Known vulnerabilities often are eliminated easily via software patches, careful configuration or instructions provided by vendor bulletins or public forums. Those that can't be mitigated so easily must be analyzed, weighed and either protected via external means (e.g., firewalls) or accepted as a cost of doing whatever it is that the software or system needs to do.
Unknown vulnerabilities, by definition, can be considered only in a general sense, but that makes them no less significant. The easiest way to illustrate this is with an example.
Let's return to ABC Corporation. Their e-mail administrator prefers to run sendmail on ABC Corp.'s SMTP gateway because she's a sendmail expert, and it's done the job well for them so far. But she has no illusions about sendmail's security record; she stays abreast of all security bulletins and always applies patches and updates as soon as they come out. ABC Corp. is thus well protected from known sendmail vulnerabilities.
ABC's very hip e-mail administrator doesn't stop there, however. Although she's reasonably confident she's got sendmail securely patched and configured, she knows that buffer-overflow vulnerabilities have been a problem in the past, especially since sendmail is often run as root (i.e., hijacking a process running as root is equivalent to gaining root access).
Therefore, she runs sendmail in a “chroot jail” (a restricted subset of the full filesystem) and as user “mail” rather than as root, employing sendmail's SafeFileEnvironment and RunAsUser options, respectively. In this way the SMTP gateway is protected not only against known vulnerabilities but also against unknown ones: even if some undiscovered flaw allows sendmail itself to be compromised, the attacker lands in an unprivileged account confined to a small corner of the filesystem, so the compromise hopefully doesn't extend to the entire system.
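For the curious, here is roughly what those two settings look like; the jail path shown is an assumption for illustration, not a value from the article:

```
dnl In sendmail.mc (rebuild sendmail.cf with m4 afterward):
define(`confRUN_AS_USER', `mail')dnl        drop privileges to user "mail"
define(`confSAFE_FILE_ENV', `/var/spool/smtp')dnl  chroot() here before writing files

dnl Or, set directly in sendmail.cf:
O RunAsUser=mail
O SafeFileEnvironment=/var/spool/smtp
```

The jail directory must, of course, be populated with whatever files and subdirectories sendmail needs to do its job; consult the sendmail documentation for the details on your platform.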