Where are the Enterprise Management Tools for Linux on the Desktop?
On January 5, Doc Searls asked, "What would you use Exchange for?" A good question, and judging by the number of comments, one that is generating a lot of discussion. As someone currently embedded in a primarily Windows environment, I want to know: where are the enterprise management tools for the Linux desktop?
About every three months, some new vulnerability in Windows or a key subsystem spurs new traffic across a variety of forums along the lines of "if you ran Linux, you wouldn't have these problems." The self-appointed prophets, zealots or kooks are generally speaking from the position of a single platform or a small install base. However, when the scale of an installation exceeds the number of machines you can manage manually (that threshold depends on how dedicated you are, but in most cases it seems to be about ten desktops) and approaches a number where you need a staff to manage them, Linux as a desktop solution begins to lose its luster. This is not to say that Linux as a desktop OS is not possible, but the management and maintenance of the desktop environment becomes the proverbial long pole in the tent and begins to chew up resources, in terms of both manpower and cost.
Doc argues that Exchange has the advantage because it is good. I would disagree that it is good; rather, 1) it combines a number of useful tools into a single suite with a common UI, and 2) it has several supplemental programs, such as BlackBerry support, written for it, which makes it a lock-in technology. When it comes to desktop management, however, there are as many options in the Windows world as there are email systems in the Linux world. Sadly, there are not a lot of homogeneous options for desktop management in the Linux world. This is not to say there are none, but no one or two of them meet most of the needs management has when looking for a desktop management system, and fewer tools still support the baseline requirements or share a common UI.
I have the extra burden of having to work under what is known as the Federal Desktop Core Configuration, a series of settings applied by Group Policy to my Windows desktops. The FDCC is only one part of a much broader secure computing architecture that will soon be a requirement for all operating platforms used in the United States government. As I walked through the process of building and implementing the FDCC image and settings, I began to think about the issues I have encountered over my twenty years in the industry and the requirements for managing desktops. The following requirements always seem to crop up: patching and verification that patches have been applied; configuration-managed desktops; automated application installation and verification; and remote hardware and software inventory. I could dive down through the FDCC and pull up more, but this gives us a framework of the most immediate needs.
Patching
Is there anything more important, or more difficult to manage, than patching? In the Linux environment there are several ways to manage patching, through either external or internal repositories. But patching is more than just applying patches. In an enterprise, you need to test each patch against a number of representative test stations and, in many cases, wait for the howls of stuff breaking. You hope the patch will not negatively affect anyone important, even after your rigorous testing. But in an enterprise, pushing the patch, or making it available, is only the beginning. You then have to close the loop and ensure the patch has actually been deployed. Red Hat's new Satellite system is beginning to make inroads into ensuring that patches install, but there is still a shortage of tools for this in the Linux world at the enterprise level. In many cases it is critical for management and security to know how many of their systems are at risk, and at what sort of risk, either in gross numbers or as a percentage per site.
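Closing that loop is mostly a reporting problem: collect the installed version of the patched package from each host (for example, via rpm -q over ssh) and compare it against the required level. The sketch below is a minimal, hypothetical illustration of that report; the host names, package and versions are invented, and the collection step is assumed to have already happened.

```python
def parse_version(v):
    """Split a dotted version string into a comparable tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def patch_coverage(inventory, package, required):
    """Given {host: {package: version}} gathered from each workstation,
    report which hosts still lack the required patch level and what
    percentage of the fleet is at risk."""
    at_risk = [
        host for host, pkgs in inventory.items()
        if parse_version(pkgs.get(package, "0")) < parse_version(required)
    ]
    pct = 100.0 * len(at_risk) / len(inventory) if inventory else 0.0
    return at_risk, pct

# Hypothetical data, as reported back by three workstations:
inventory = {
    "ws01": {"openssl": "0.9.8"},
    "ws02": {"openssl": "0.9.7"},
    "ws03": {"openssl": "0.9.8"},
}
hosts, pct = patch_coverage(inventory, "openssl", "0.9.8")
# hosts -> ["ws02"]; pct -> roughly 33% of the fleet at risk
```

The same two numbers, gross count and percentage per site, are exactly what management asks for, which is why a real tool needs this report built in rather than left to ad hoc scripts.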
Configuration Management
Both Puppet and its grandfather, Cfengine, are flexible tools that manage a large number of settings and configurations successfully, but certainly not easily. As we are constantly tasked to do more with less, and as even scarce resources continue to disappear, ease of implementation and operation is critical to the success of any management system. Being able to configure large numbers of systems quickly, and correctly the first time, is a critical goal.
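What both tools share is a declarative, convergent model: you state the desired end state once, and each run changes only what has drifted. The toy below sketches that idea in plain Python; it is not Puppet or Cfengine syntax, and the setting names are illustrative.

```python
# Desired state, declared once (keys are "file:setting" pairs here
# purely for illustration -- not either tool's real format).
desired = {
    "/etc/ssh/sshd_config:PermitRootLogin": "no",
    "/etc/login.defs:PASS_MAX_DAYS": "60",
}

def converge(actual, desired):
    """Return only the settings that must change to reach the desired
    state; an unchanged setting is left alone, which is what makes
    repeated runs safe."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

# State as found on one workstation:
actual = {
    "/etc/ssh/sshd_config:PermitRootLogin": "yes",
    "/etc/login.defs:PASS_MAX_DAYS": "60",
}
changes = converge(actual, desired)
# Only the drifted root-login setting remains to be fixed.
```

The model itself is sound; the complaint in the text is about how much work it takes to express a large desktop baseline, such as the FDCC, in these tools and to operate them day to day.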
Automated Application Installation
I have not done an extensive search for tools to install applications remotely on Linux, and I would like to hear about them and how successful they are. As with patching, deploying the application is only half the battle. You still have to report on who got it, when, and what version, and most importantly, it has to work. It is also critical that creating the package not be more complicated than the application being installed; this has been one of the major knocks against using RPM-based installs for custom application deployment.
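The "who got it, when, and what version" report is straightforward once each workstation's install agent phones its result home. The sketch below assumes a hypothetical CSV deployment log in that shape; the hosts, application name and log format are all invented for illustration.

```python
import csv
import io

# Hypothetical deployment log: one line per install attempt,
# reported back by each workstation.
log = """host,app,version,when,status
ws01,officepkg,2.1,2009-03-02,ok
ws02,officepkg,2.1,2009-03-02,failed
ws03,officepkg,2.0,2009-02-15,ok
"""

def deployment_report(log_text, app, version):
    """Close the loop on a rollout: who has the current version,
    whose install failed, and who is still on an old build."""
    rows = list(csv.DictReader(io.StringIO(log_text)))
    ok = [r["host"] for r in rows
          if r["app"] == app and r["version"] == version and r["status"] == "ok"]
    failed = [r["host"] for r in rows
              if r["app"] == app and r["status"] == "failed"]
    stale = [r["host"] for r in rows
             if r["app"] == app and r["status"] == "ok" and r["version"] != version]
    return ok, failed, stale

ok, failed, stale = deployment_report(log, "officepkg", "2.1")
# ok -> ["ws01"], failed -> ["ws02"], stale -> ["ws03"]
```

None of this is hard to script; the point is that an enterprise tool should deliver it out of the box, with a common UI, instead of every shop reinventing it.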
Remote Hardware Inventory
How many people have SNMP running on their workstations? All of their workstations? SNMP is one of the easiest ways to gather hardware information, but the bandwidth required to report hardware inventories, and software inventories for that matter, could overwhelm most production networks, depending on your polling frequencies and security requirements. SNMP also has its own set of problems, from sketchy MIB support on marginal hardware to overwhelming MIB support on others. Finding the happy medium, and the right OID, can lead to sleepless nights for developer and manager alike. And then there are the security arguments for not using SNMP at all.
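The bandwidth question at least is easy to estimate up front: payload per sweep divided by the polling interval. The figures below (fleet size, OIDs per host, bytes per response) are illustrative assumptions, not measurements.

```python
def poll_bandwidth_kbps(hosts, oids_per_host, bytes_per_oid, interval_s):
    """Rough average bandwidth for periodically walking inventory OIDs
    across a fleet: total sweep payload spread over the polling interval,
    returned in kilobits per second. Ignores protocol overhead and
    retransmits, so treat it as a floor, not a ceiling."""
    bytes_per_sweep = hosts * oids_per_host * bytes_per_oid
    return bytes_per_sweep * 8 / interval_s / 1000

# Assumed example: 5,000 desktops, ~200 inventory OIDs each at
# ~100 bytes per response, swept every 15 minutes.
kbps = poll_bandwidth_kbps(5000, 200, 100, 15 * 60)
# Roughly 890 kbps of steady background traffic just for inventory.
```

Shorten the interval for fresher data, or add SNMPv3 authentication and encryption overhead, and the number climbs quickly; that trade-off is exactly why polling frequency and security requirements drive the network impact.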
In all of these cases, separate tools exist to do some of these tasks. For others, there are no tools, but there are ways to work around that with scripts, custom programs or other creative uses of existing tools designed for different purposes. In the spirit of Linux, that is a good thing, but in the real world of enterprise management it is not so good, it is getting worse each day, and it still leaves us without a common UI for doing everything in one place.
For Linux to be a player in the desktop space, especially the enterprise desktop, we need a cohesive tool suite that will not only do the work to make our jobs easier, but also provide the reports that management not only wants, but in many cases requires for compliance and other legal reporting. In the Federal space this becomes even more critical, and as more government contractors begin implementing the Federal standard, it will become harder for Linux to compete on the desktop.