Paranoid Penguin - How to Worry about Linux Security
So, how do these various attackers achieve their nefarious goals? What, in other words, are their weapons?
When I first became a network engineer in the mid-1990s, the attack paradigm we usually worried about was a human being sitting at a computer, interacting more or less directly with her “victim” systems in real time. In other words, Matthew Broderick's character in the movie WarGames was the sort of attacker we assumed (rightly or wrongly) to be most common.
Today, however, it's safe to say that the vast majority of probes and attacks conducted against networked systems are carried out by automated software processes—that is, by viruses, trojans and worms. People are still behind these types of attacks; make no mistake about it—someone has to write, adapt and deploy all that malicious code—but most actual attacks happen by proxy nowadays.
For example, one of the fastest-growing tools of the resource-theft trades (spamming, porn-peddling, DDoS and so on) is the botnet. A bot is a computer that has been infected with a worm or virus that surreptitiously contacts the person who released it; a botnet is an entire network of bots awaiting instruction.
Botnets are part of a strange, complicated and shady economy. Botnet operators who distribute spam in the pay of spammers are commonly paid per distribution node. Most of the merchants who pay them intentionally turn a blind eye to the fact that these nodes are probably not legitimate e-mail servers, but illegally hijacked systems (that is, bots). When spam-botnet operators are caught, the actual source of both their income and their spam invariably claims to have had no idea the spam was being distributed by illegally compromised systems.
Resource thieves aren't the only ones who use botnets. DDoSers use them to conduct highly distributed network bombardments that are difficult both to trace and stop. Identity thieves often carry out phishing attacks, in which spam e-mail purporting to be from a bank or e-commerce site is used to lure people into entering their logon credentials at an impostor Web site. Phishers, even more than garden-variety pharmaceutical and porn spammers, have a strong motivation to hide their tracks, so botnets are especially useful for phishing-spam distribution.
The fully interactive system attacker is still very much with us; not all attackers cast as wide a net as spammers or phishers. Corporate spies, vandals and stalkers are all likely to make use of one-on-one attacks in which attackers focus their attention on one system, and conduct their attacks more or less in real time. Some of these attackers, especially in the corporate-espionage space, are highly skilled and creative experts who are able to crack even carefully secured systems, often by writing customized attack software.
Conventional wisdom, however, is that many if not most Web site defacers and other vandal types are script kiddies—less-skilled attackers who rely on tools they download from the Internet or obtain from friends. Such attackers are much more easily thwarted than the pros, because they tend to be not nearly as adaptable. If the attack scripts they run against a given system fail, they're far more likely to give up and seek a softer target than they are to fine-tune their scripts or write a new script altogether. And, a given script may work only against one version of one application running on one particular architecture (for example, Apache 2.0.1 on Intel x86 platforms).
In summary, the good news is that most attacks are indiscriminate, automated and not very adaptable. Highly focused, human-operated and creative attacks are much less common. The bad news is that the sheer volume and variety of automated attacks (including spam, phishing, malicious code and script kiddies' tools) make them a force to be reckoned with. These attacks cost people and organizations everywhere millions of dollars annually in lost productivity and fraudulent transactions. Furthermore, just because skilled human attackers may not seem like a likely threat in a given scenario doesn't mean you can disregard them altogether.
So, there are attackers, and these attackers have tools. What are the nuts and bolts that these tools manipulate?
Consider this simple formula from threat-modeling parlance: a threat equals an attacker plus some vulnerability. A vulnerability is some characteristic of the attacker's target that presents an opening. What the threat equation tells us is that if a given vulnerability can't be exploited by an attacker (for example, because the system isn't networked and resides in a locked room), it doesn't constitute a threat. Conversely, a system with no vulnerabilities isn't at risk regardless of how many attackers target it.
Obviously enough, there's no such thing as a completely invulnerable system. There are, however, many ways to deal with vulnerabilities that decrease the likelihood of their being exploited by attackers.
Common types of vulnerabilities include:
Bugs in user-space software (applications).
Bugs in system software (kernel, drivers/modules and so forth).
Default (insecure) application or system settings.
Extraneous user accounts.
Extraneous software (with bugs or sloppy/default settings).
Unused security features in applications.
Unused security features in the operating system.
The remedies for these tend to be straightforward. Bugs can be patched, default/insecure settings can be changed, extraneous accounts and applications can be removed, security features can be leveraged and users can be educated. “Straightforward” doesn't necessarily mean “easy” (or “quick” or “cheap”), however.
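To make “extraneous user accounts” concrete, here is a minimal audit sketch, not a complete hardening procedure. It assumes a standard /etc/passwd layout and that shells ending in nologin or false mark accounts that cannot log in; any account it lists that nobody actually uses is a candidate for removal.

```shell
#!/bin/sh
# List accounts whose login shell is interactive. Anything printed
# here that isn't a real, needed user is an "extraneous account".
login_accounts=$(awk -F: '$7 !~ /(nologin|false)$/ { print $1 " (" $7 ")" }' /etc/passwd)
echo "$login_accounts"
```

The same one-tool-per-job approach extends naturally to the other remedies: listing listening services before removing extraneous software, or diffing configuration files against known-good baselines.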
The patch rat race, as I've said many times in this space, is futile—you can't write a patch without first discovering the bug, and what are the odds of the good guys discovering every major bug before the bad guys do? Still, we're stuck with this cycle; patch we must.
The outlook for tightening system and application settings, leveraging application/OS security features, educating users and applying additional security techniques and tools is considerably brighter.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
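A concrete sketch of that find-plus-grep combination follows. The directory tree and the "ERROR" search string are invented purely so the example is runnable as-is; in practice you would point find at /home.

```shell
#!/bin/sh
# Build a throwaway tree standing in for /home; in real use,
# skip this setup and search the actual directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/alice" "$tmp/bob"
printf 'ERROR: disk full\n'  > "$tmp/alice/app.log"
printf 'all quiet\n'         > "$tmp/bob/app.log"
printf 'ERROR: bad login\n'  > "$tmp/bob/auth.log"

# The combined tool: find every .log file under the tree, then
# have grep -l report which of them contain the entry "ERROR".
matches=$(find "$tmp" -name '*.log' -exec grep -l 'ERROR' {} +)
echo "$matches"

rm -r "$tmp"
```

Using -exec ... {} + batches filenames onto as few grep invocations as possible, and it handles unusual filenames more safely than piping find's output through a shell loop.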
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
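For reference, a cron job is just a five-field time specification plus a command; the backup-script path below is a made-up example. The sketch splits an entry into its fields, which shows both what cron provides (time-based triggering) and, by omission, what it doesn't (dependencies, retries, load awareness).

```shell
#!/bin/sh
# A typical crontab line: run a job at 02:15 every night.
# Fields: minute hour day-of-month month day-of-week command
entry='15 2 * * * /usr/local/bin/nightly-backup.sh'

set -f          # disable globbing so the literal * fields survive
set -- $entry   # word-split the entry into its six fields
set +f
echo "minute=$1 hour=$2 dom=$3 month=$4 dow=$5 cmd=$6"
```

Anything beyond that fixed time trigger (run job B only after job A succeeds, retry on failure, skip when the machine is loaded) has to be bolted on by the command itself, which is exactly where cron starts to run out of road.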
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!