Linux Security Threats on the Rise
Every year, heck...every month, Linux is adopted by more companies and organizations as an important, if not primary, component of their enterprise platforms. And the more serious the hardware platform, the more likely it is to be running Linux: 60% of servers, 70% of Web servers and 95% of all supercomputers are Linux-based!
Even if they're not "Linux shops", companies realize certain benefits from bringing Linux in for specific purposes. Its reliability, flexibility, scalability and low cost of ownership offer huge advantages over other OSes...but I don't have to tell you that, do I? You probably earn your keep because of these qualities!
One of the many benefits cited by enterprises bringing in Linux is security and the resultant "cost of ownership" savings that come from, among many other things, not having to deal with security-related issues and attacks. While Gartner and other analyst companies have pooh-poohed the actual cost benefits in the past, a lawsuit showed that Microsoft had actually influenced analysts' computations and models in Windows' favor when calculating total cost of ownership, and real-world anecdotal evidence shows the same. Sterling Ball, CEO of Ernie Ball Guitar Strings said, "What about the cost of dealing with a virus? We don't have 'em....There's no doubt that what I'm doing is cheaper to operate. The analyst guys can say whatever they want."
All that said, at least two factors point to increased security risk for Linux going forward: its sheer size and its ever-growing popularity. Simply put, with 15.8 million lines of code in the most recent kernel, the likelihood of a mistake or mistakes simply increases. And mistakes = vulnerability. Witness the GnuTLS bug from earlier this year. And with more Web servers running Linux than anything else, cracking Linux gets you "where the money is", to paraphrase Willie Sutton.
The Bad Guys love it because they can see and manipulate every line of code for their nefarious purposes. The flip side, though, is that the same things that make it vulnerable make it safe too: the Good Guys also can look at and patch every line of code as vulnerabilities are exposed or need arises! Vigilance is the key.
Mark Cox, Senior Director of Engineering at Red Hat, talks about the most fundamental level of vigilance--things that seem like they should be "no-brainers" but that are so easy to neglect or forget about. "Vulnerabilities in software are found all the time, so the critical piece of advice is to make sure that your servers are kept up to date with security fixes all the time. That means keeping track of all those cool utilities you download, install, and forget about, like a PHP photo album software I found on my server recently that was a couple years old and full of security holes. There are still Windows servers being infected with Nimda and Code Red worms because they've not been patched yet."
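Cox's "download, install, and forget" problem can be caught mechanically. As a rough sketch (the directory and the two-year threshold are illustrative assumptions, not anything Cox or Red Hat prescribes), find can flag web-facing files that haven't been touched in years and so are likely running unpatched:

```shell
# Sketch: flag "install and forget" candidates under a web root.
# The directory and the 730-day (~2 year) threshold are assumptions;
# adjust both for your own environment. A throwaway directory stands
# in for a real web root here so the example is self-contained.
webroot=$(mktemp -d)
touch -d '2019-01-01' "$webroot/old-album.php"   # stale, forgotten software
touch "$webroot/index.php"                       # freshly maintained file

# Anything not modified in the last ~2 years is a patch-audit candidate.
find "$webroot" -type f -mtime +730 -print

rm -rf "$webroot"
```

On a real system you would point find at /var/www (or wherever your web applications live) and review every path it prints against your package manager and upstream security advisories.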
That's vulnerability more from a single-user/small-system point of view. Multiply all those downloads and activities many thousands of times across an enterprise, and you easily can begin to see where vulnerabilities could occur in even the best-intentioned secure environments. To secure systems on an enterprise scale, one needs more than vigilance. One needs real-time, continuous visibility into and across the entire environment, and the ability to establish and enforce security policy across all of it.
Linux Journal is partnering with Bit9 + Carbon Black for a Webinar to address these issues and more. "One Click, Universal Protection: Implementing Centralized Security Policies on Linux Systems" will give you the technical justification for increased vigilance and security measures as well as a roadmap to follow to ensure that your data, your customers' data and all your systems are safe and secure. The Webinar is on Wednesday, August 27, 2014 at 1:00 pm EDT. You owe it to yourself to stay at least one step ahead of the Bad Guys. This Webinar will help! Go here to register now!
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
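The find-plus-grep combination described above fits in a single line. As a self-contained sketch (a throwaway directory and the string "ERROR" stand in for /home and whatever entry you're hunting):

```shell
# Demo of stringing find and grep together: locate every .log file
# under a directory tree, then search each one for a particular entry.
# A temporary directory stands in for /home so this runs anywhere.
dir=$(mktemp -d)
mkdir -p "$dir/alice" "$dir/bob"
printf 'ERROR: disk full\n' > "$dir/alice/app.log"
printf 'all quiet\n'        > "$dir/bob/quiet.log"

# -type f limits matches to regular files; '{} +' batches the file
# names so grep is invoked as few times as possible; -H prefixes each
# matching line with its file name.
find "$dir" -type f -name '*.log' -exec grep -H 'ERROR' {} +

rm -rf "$dir"
```

Against a real system you'd simply run `find /home -type f -name '*.log' -exec grep -H 'pattern' {} +`, which is exactly the "more powerful tool" the paragraph describes.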
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
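For context, the cron baseline the webinar starts from is nothing more than one crontab line per job. The entries below are a hypothetical illustration (the script paths are placeholders, not from the webinar):

```
# Example crontab entries (edit with: crontab -e). Paths are placeholders.
# min hour day-of-month month day-of-week  command
30  2  *  *  *   /usr/local/bin/nightly-backup.sh   # 2:30 am every day
0   *  *  *  *   /usr/local/bin/rotate-logs.sh      # top of every hour
15  6  *  *  1   /usr/local/bin/weekly-report.sh    # Mondays at 6:15 am
```

This simplicity is cron's strength, and also why it strains at enterprise scale: there's no built-in job dependency handling, retry logic or centralized view across hosts, which is the gap the webinar explores.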
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Tech Tip: Really Simple HTTP Server with Python
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
- Google's SwiftShader Released