Paranoid Penguin - Mental Laziness and Bad Dogma to Avoid
Your DSL router at home has a built-in firewall you've enabled, and your corporate LAN at work has industrial-strength dedicated firewalls. That means you can visit any Web site or download any program without fear of weirdness, right?
In the age of evil-twin (forged) Web sites, cross-site scripting, spyware and active content, you take a risk every time you visit an untrusted Web site. Your home firewall doesn't know or care what your browser pulls, so long as it pulls it via RFC-compliant HTTP or HTTPS. Even Web proxies generally pass the data payloads of HTTP/HTTPS packets verbatim from one session to the other.
Firewalls are great at restricting traffic by application-protocol type and source and destination IP address, but they aren't great at detecting evil within allowed traffic flows. And nowadays, RFC-compliant HTTP/HTTPS data flows carry everything from the hypertext “brochureware” for which the Web was originally designed to remote desktop control sessions, full-motion videoconferencing and pretty much anything else you'd care to do over a network.
With or without a firewall, you need to be careful which sites you frequent, which software you install on your system and which information you transmit over the Internet. Just because your nightclub has a bouncer checking IDs at the door doesn't mean you can trust everybody who gets in.
In olden times, firewalls enforced a very simple trust model: “inside” equals “trusted”, and “outside” equals “untrusted”. We configured firewalls to block most “inbound” traffic (that is to say, transactions initiated from the untrusted outside) and to allow most “outbound” traffic (transactions initiated from the trusted inside).
Aside from the reality of insider threats, however, this trust model can no longer really be applied to computer systems themselves. Regardless of whether we trust internal users, we must acknowledge the likelihood of spyware and malware infections.
Such infections are often difficult to detect (see Mental Laziness 3), and they frequently result in infected systems trying to infect other systems, trying to “report for duty” back to an external botnet controller or both.
Suppose users download a new stock-ticker applet for their desktops. But, unbeknownst to them, it serves double duty as a keystroke logger that silently captures any user names, passwords, credit-card numbers or Social Security numbers it detects being typed on the users' systems and transmits them back out to an Internet Relay Chat server halfway around the world.
Making this scenario work in the attacker's favor depends on several things happening. First, users have to be gullible enough to install the software in the first place, which should be against company policy—controlling who installs desktop software, and why, is an important security practice. Second, the users' company's firewall or outbound Web proxy must not be scanning downloads for malicious content (not that it's difficult for an attacker to customize this sort of thing in a way that evades detection).
Finally, the corporate firewall must be configured to allow internal systems to initiate outbound IRC connections. And, this is the easiest of these three assumptions for a company's system administrators and network architects to control.
If you enforce the use of an outbound proxy for all outbound Web traffic, most of the other outbound Internet data flows your users really need probably will be on the back end—SMTP e-mail relaying, DNS and so forth—and, therefore, will amount to a manageably small set of things you need to allow explicitly in your firewall's outbound rule set.
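On a Linux-based firewall, that explicit outbound rule set can be quite short. The following is a minimal iptables sketch, not a drop-in configuration: the interface and the proxy, SMTP relay and DNS forwarder addresses (10.0.0.10, 10.0.0.25 and 10.0.0.53) are all hypothetical placeholders for your own topology.

```shell
# Default-deny: nothing gets forwarded out unless a rule below allows it.
iptables -P FORWARD DROP

# Allow return traffic for sessions that were legitimately initiated.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Web traffic may leave only via the outbound proxy (hypothetical 10.0.0.10).
iptables -A FORWARD -s 10.0.0.10 -p tcp -m multiport --dports 80,443 -j ACCEPT

# Only the SMTP relay (hypothetical 10.0.0.25) may speak SMTP outbound.
iptables -A FORWARD -s 10.0.0.25 -p tcp --dport 25 -j ACCEPT

# Only the internal DNS forwarder (hypothetical 10.0.0.53) may query outside.
iptables -A FORWARD -s 10.0.0.53 -p udp --dport 53 -j ACCEPT
iptables -A FORWARD -s 10.0.0.53 -p tcp --dport 53 -j ACCEPT
```

With a policy like this, a desktop that tries to connect straight out—to that IRC botnet controller, say—matches no rule and is silently dropped.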
The good news is, even if that isn't the case, you may be able to achieve nearly the same thing by deploying personal firewalls on user desktops that allow only outbound Internet access by a finite set of local applications. Anything that end users install without approval (or anything that infects their systems) won't be on the “allowed” list and, therefore, won't be able to connect back out.
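On Linux desktops, iptables can approximate this per-application control with its owner match, which filters locally generated packets by the account that created them. A rough sketch, assuming a hypothetical dedicated account named websurf for browsing (the account name is an illustration, not a convention):

```shell
# Default-deny all locally generated outbound traffic.
iptables -P OUTPUT DROP

# Loopback and replies for established sessions are still needed.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Only processes running as the hypothetical "websurf" account may open
# outbound HTTP/HTTPS connections.
iptables -A OUTPUT -m owner --uid-owner websurf \
  -p tcp -m multiport --dports 80,443 -j ACCEPT
```

Note that the owner match filters by user, not by binary, so this is coarser than a true per-application personal firewall—but an unapproved applet running under any other account matches nothing and can't phone home.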
Some of us rely on antivirus software less than others. There are good reasons and bad reasons for being more relaxed about this. If you don't use Windows (for which the vast majority of malware is written), if you read all your e-mail in plain text (not HTML or even RTF), if you keep your system meticulously patched, if you disconnect it from the network when you're not using it, if you never double-click e-mail links or attachments, if you minimize the number of new/unfamiliar/untrusted Web sites you visit, and if you install software that comes only from trusted sources, all of these factors together may nearly obviate the need for antivirus software.
But, if none of that applies, and you assume that in the case of infection you can simply re-install your OS and get on with your life, confident you'll notice the infection right away, you're probably asking for trouble.
There was a time when computer crimes were frequently, maybe mostly, motivated by mischief and posturing. Espionage certainly existed, but it was unusual. And, the activities of troublemakers and braggarts tend, by definition, to be very obvious and visible. Viruses, worms and trojans, therefore, tended to be fairly noisy. What fun would there be in infecting people if they didn't know about it?
But, if your goal is not to have misanthropic fun but rather to steal people's money or identity or to distribute spam, stealth is of the essence. Accordingly, the malware on which those two activities depend tends to be as low-profile as possible. A spambot agent will generate network traffic, of course—its job is to relay spam. But, if in doing so it cripples your computer's or your LAN's performance, you'll detect it and remove it all the more quickly, which defeats the purpose.
So, most of us should, in fact, run and maintain antivirus software from a reputable vendor. Antivirus software probably won't detect the activity of malware whose infection it failed to prevent—there will always be zero-day malware for which there is no patch or antivirus signature—but it will be infinitely more likely to prevent infection than wishful thinking is.