Where are the Enterprise Management Tools for Linux on the Desktop?

On January 5, Doc Searls asked "What would you use Exchange for?" A good question, and judging by the number of comments, it is generating a lot of discussion. As someone currently embedded in a primarily Windows environment, I want to know: where are the enterprise management tools for the Linux desktop?

About every three months, some new vulnerability in Windows or a key subsystem spurs fresh traffic across a variety of forums along the lines of "if you ran Linux, you wouldn't have these problems." The self-appointed prophets, zealots, and kooks are generally saying this from the position of a single platform or a small install base. However, when the scale of the installation exceeds the number of machines you can manage manually (that depends on how dedicated you are; in most cases it seems to be about 10 desktops) and begins to approach a number where you need a staff to manage them, Linux as a desktop solution begins to lose its luster. This is not to say that Linux as a desktop OS is not possible, but the management and maintenance of the desktop environment becomes the proverbial long pole in the tent and begins to chew up resources, in terms of both manpower and cost.

Doc argues that Exchange has the advantage because it is good. I would disagree that it is good; rather, 1) it combines a number of useful tools into a single suite with a common UI, and 2) it has several supplemental programs, such as Blackberry support, written for it, which makes it a lock-in technology. When it comes to desktop management, however, there are as many options in the Windows world as there are email systems in the Linux world. Sadly, there are not a lot of homogeneous options for desktop management in the Linux world. This is not to say there are none, but there are not one or two that meet most of management's needs when they are looking for a desktop management system, and fewer tools in general that support the baseline requirements or share a common UI.

I have the extra burden of having to work under what is known as the Federal Desktop Core Configuration, a series of settings applied by Group Policy to my Windows desktops. The FDCC is only one part of a much broader secure computing architecture that will soon be a requirement for all operating platforms used in the United States government. As I was walking through the process of building and implementing the FDCC image and settings, I began to think about the issues I have had over my twenty years in the industry and the requirements for managing desktops. The following requirements always seem to keep cropping up: patching, and verification that patches have been applied; configuration-managed desktops; automated application installation and verification; and remote hardware and software inventory. I could dive down through the FDCC and pull up more, but this gives us a framework of some of the most immediate needs.

Is there anything more important, or more difficult to manage, than patching? In the Linux environment there are several ways to manage patching, through either external or internal repositories. But patching is more than just applying patches. In an enterprise, you need to test the patch against a number of representative test stations and, in many cases, wait for the howls of stuff breaking. You hope that the patch will not negatively affect anyone important, even after your rigorous testing. But in an enterprise, pushing the patch, or making it available, is only the beginning. You then have to close the loop and ensure that the patch has actually been deployed. Red Hat's Satellite system is beginning to make inroads into ensuring that patches install, but there is still a shortage of tools for this in the Linux world at the enterprise level. In many cases it is critical for management and security to know how many of their systems are at risk, and at what sort of risk, either in gross numbers or as a percentage per site.
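Closing that loop doesn't strictly require a big suite; here is a minimal sketch in shell of the verify-and-report step. The hostnames, the expected version string, and the probe command are all invented for illustration; in production the probe would be an ssh call to the remote package manager.

```shell
# check_compliance HOSTFILE EXPECTED PROBE...
#   Runs PROBE against each host listed in HOSTFILE (one per line),
#   compares its output with EXPECTED, and prints a compliance summary.
#   PROBE is invoked as: PROBE host
check_compliance() {
    hostfile=$1; expected=$2; shift 2
    total=0; ok=0
    while read -r host; do
        [ -n "$host" ] || continue
        total=$((total + 1))
        # in production the probe would be something like:
        #   ssh "$host" "rpm -q --qf '%{VERSION}' some-package"
        got=$("$@" "$host" 2>/dev/null) || got=""
        [ "$got" = "$expected" ] && ok=$((ok + 1))
    done < "$hostfile"
    [ "$total" -gt 0 ] || { echo "no hosts" >&2; return 1; }
    pct=$((100 * ok / total))
    printf '%d/%d hosts patched (%d%%)\n' "$ok" "$total" "$pct"
}
```

This is exactly the kind of number management asks for: a percentage per site, produced from nothing but a host list and a probe.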

Configuration Management
Both Puppet and its grandfather, cfengine, are flexible tools that manage a number of settings and configurations successfully, but certainly not easily. As we are constantly tasked to do more with less, and as we continue to find even scarce resources disappearing, ease of implementation and operation is critical to the success of any management system. Being able to configure large numbers of systems quickly, and correctly the first time, is a critical goal.
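For readers who have not used them, the core idea behind both tools is convergence: describe the desired state and act only when reality differs, so the same run is safe on one machine or a thousand. A minimal sketch of that model in plain shell; the file path and setting are invented for illustration.

```shell
# ensure_line FILE LINE
#   Append LINE to FILE only if it is not already present (idempotent:
#   running it twice changes nothing the second time). Reports when it
#   had to act, stays silent when the system is already converged.
ensure_line() {
    file=$1; line=$2
    grep -qxF "$line" "$file" 2>/dev/null && return 0   # already converged
    echo "$line" >> "$file"
    echo "fixed: $file"
}
```

Real configuration management layers ordering, templating, and reporting on top of this, but the convergent check-then-act loop is the heart of it.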

Automated application installation
I have not done an extensive search for tools to install applications remotely on Linux, and I would like to hear about them and how successful they are. Again, as with patching, deploying the application is only half the battle. You still have to report on who got it, when, and what version, and most importantly, it has to work. It is also critical that creating the package not be more complicated than the application being installed. This has been one of the major knocks against using RPM-based installs for custom application deployment.
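As a sketch of what "deploy and prove it" could look like, here is a small shell function that appends an audit record only after the installed version has been queried back from the host. The query command is injectable, so the same logic works whether the real probe is rpm -q, dpkg-query, or an ssh wrapper; every name below is invented.

```shell
# audit_install LOG HOST PKG QUERY...
#   QUERY is invoked as: QUERY host pkg; it must print the installed
#   version on success and fail otherwise. Only verified installs are
#   logged, so the log answers "who got it, when, what version".
audit_install() {
    log=$1; host=$2; pkg=$3; shift 3
    ver=$("$@" "$host" "$pkg") || { echo "FAILED: $host $pkg" >&2; return 1; }
    printf '%s,%s,%s,%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
        "$host" "$pkg" "$ver" >> "$log"
}
```

A real probe might be a two-line wrapper that runs `ssh "$1" "rpm -q --qf '%{VERSION}' $2"`; the CSV log then becomes the report management keeps asking for.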

Remote Hardware Inventory
How many people have SNMP running on their workstations? All of their workstations? SNMP is one of the easiest ways to gather information about hardware, but the bandwidth required for reporting hardware inventories (and software, for that matter) could overwhelm most production networks, depending on your polling frequencies and security requirements. SNMP also has its own set of problems, from sketchy MIB support on marginal hardware to overwhelming MIB support on others. Finding the happy medium, and the right OID, can lead to sleepless nights for developer and manager alike. And then there are the security arguments for not using SNMP at all.
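One low-bandwidth alternative to SNMP walks, in the spirit of what tools like OCS Inventory do, is to have each client emit a compact record locally on a cron schedule and ship it to a collector over ssh or HTTP. A sketch; the choice of fields here is an assumption, not a standard.

```shell
# inventory_record
#   Print one CSV line describing this machine. Run from cron and
#   appended to a central collector, these lines replace a network-wide
#   SNMP sweep with a trickle of a few dozen bytes per host.
inventory_record() {
    host=$(hostname)
    os=$(uname -s)
    kernel=$(uname -r)
    cpus=$(getconf _NPROCESSORS_ONLN)
    printf '%s,%s,%s,%s\n' "$host" "$os" "$kernel" "$cpus"
}
```

On Linux you would extend this with lspci, dmidecode, or /proc reads; the point is that the client does the work and the network only carries the summary.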

In all of these cases, separate tools might exist to do some of these tasks. In other cases, there are no tools, but there are ways around that: scripts, custom programs, or other creative uses of existing tools designed for other purposes. In the spirit of Linux, that is a good thing, but in the real world of enterprise management it is not so good, it is getting worse each day, and it still lacks a common UI for doing everything in one place.

For Linux to be a player in the desktop space, especially the enterprise desktop, we need a cohesive tool suite that will not only do the work to make our jobs easier, but also provide the reports that management not only wants but, in many cases, requires for compliance and other legal reporting. In the Federal space it becomes even more critical, and as government contractors begin implementing the Federal standard, it will become harder for Linux to compete on the desktop.


David Lane, KG4GIY is a member of Linux Journal's Editorial Advisory Panel and the Control Op for Linux Journal's Virtual Ham Shack



I was pretty surprised to

Anonymous's picture

I was pretty surprised to see how many people disagree with the article. It's completely true. I've been a Linux admin of large environments for almost 15 years, and I've been complaining about the lack of good, integrated lifecycle management almost all that time. I've worked in numerous groups trying to find tools for exactly these issues on several occasions, always ending up with targeted spot-fix solutions that never solve the whole problem. And as a matter of fact, I would argue the tools are pretty far from perfect in the Windows world as well. But MS is definitely way ahead of us.

Yes, many tools exist that fix parts of the problem: systemimager, cobbler, mrepo, linmin, mondo, etc. But a large part of these are far from "complete" and could use a lot of development, and most importantly, they are all separate tools without any integration. Many areas are completely lacking - proper inventory management, compliance control and change management among them. Ironically, security is weak too; promising projects like osiris and ossec are only just beginning to mature.

YES, there are also some commercial products, but it's a bit like how games exist on the Mac. Opsware and Vintela/Quest are really headed in the right direction, but extremely expensive - not helpful when one of the major selling points is supposed to be reduced cost.
And YES, Linux has the bits and pieces in place for all this. You can script yourself a bare-bones but decent management environment with a reasonable amount of effort, but this is really not a satisfactory solution in a large commercial environment; it's too targeted and homebrew, and way too high-maintenance in terms of quality testing. And the lack of commercial support is a total killer.

Spacewalk is definitely an interesting project and I really think it has the potential to change everything. RH Satellite was always pretty good but not enough, and too limited. Spacewalk may well change that. We'll have to wait and see.

But what the enterprise needs (or at least demands - whether they actually need it or not), is tools that make your desktops and servers into centrally controlled appliances, in a way that is integrated, secure, supported, scalable, affordable and preferably has a brand name attached to it. This just doesn't exist. Anyone claiming otherwise is either a fanatic or hasn't worked in a large commercial environment.

I would agree with you however..........

slackware user's picture

I was talking about this very subject with a fellow Linux user friend of mine. There isn't any easy way to control vast numbers of Linux clients or provide profile-based services to different users.

1. Linux needs a customized Active Directory replacement that could control user authentication and information. This would be similar to AD, but in my mind it would go beyond LDAP and have software application objects. It would also have the ability to store other entities, such as printers and network shares, as objects.

2. The Linux directory would have the ability to deploy software packages based on user account. With users and software packages registered in the Linux directory, administrators could create a group and apply a software application to that profile. For example, John in accounting would be in the accountants group. The accountants group contains software application objects for OpenOffice Calc, scientific calculators, etc. The user's home directory is mounted based on the network share object contained within the accounting group. The appropriate printers are also activated based on hardware object attributes.

3. The software packages would be served to the network through a centralized update server. All other automated package repositories would be disabled, and versions for the enterprise would be controlled on the deployment server. The client machine would fetch this information upon user login and install/remove directory objects automatically based on profile attributes.

4. An enterprise Linux base image would be created that contains the initial modifications to join the client machine to the enterprise realm. This image would be available via PXE boot and could be loaded automatically over the network. Once the image was deployed, the rest of the machine configuration would take place upon the user's first login. Depending on the objects for the user, you would have entirely different installations.

5. Upon successful login, the user's machine would render up specific information about itself, such as hostname, user, hardware info, etc., and send it back to the directory server. After the initial check-in with the directory server, it could then report periodically if needed. The hardest part of this design would be keeping machine hostnames persistent across reimaging. This could be controlled through DHCP automatic hostname assignment and tied back to the user account through the initial check-in with the directory server. Another option, though perhaps not as practical, would be pulling the computer name from the BIOS or basing it on the MAC address.

6. The Linux directory server would be easy to administer from both a graphical interface and the command line. The directory server should also have the ability to present or unpresent objects based on timespans. This would allow organizations to schedule hardware switchovers and other tasks that would normally cause unneeded downtime to happen automatically behind the scenes. The server should have the ability to send the client machines a message that would force them to check for updates. The updates would then take effect at the same time for everyone, based on what the server says. The server should also stagger the sending of the update message in advance so that the clients don't create a DDoS situation, and it should possess the logic to check which objects would be affected and only inform the client machines that would be impacted directly.

7. The directory servers should store the information in some kind of database and should have the ability to scale seamlessly. There should be an easy method for adding another directory server and replicating all the objects from within the enterprise's realm. It would update other peer directory servers of its presence and also inform clients, depending on how high availability is set up to work.

8. This system should use only free and open source software and have the ability to adapt to various package management systems, such as Debian's and Red Hat's. It should also provide extended management outside the Linux realm for Mac and Windows client authentication.

9. The system should track all kinds of statistics for users, software usage, hardware performance, etc. This information gathering should be handled mainly on the client and sent to the server periodically for analysis and archiving.

10. Extensive checksum validations should run in the background periodically and report any discrepancies on client machines when verified against the enterprise realm's list of what should be there. These would be stored as extended attributes for various programs, with a separate list for the base image. Other features, such as rkhunter, clamscan, and chkrootkit, should have the ability to run behind the scenes. These extended security features should be able to be enabled or disabled from the directory server on a per-machine or per-user basis.

There's a quick list of what Linux needs for enterprise management. With a mechanism like an enterprise realm server for Linux, every aspect of software management could be controlled from a single location. That is the real reason Linux has not caught on as fast as it could have. It is a difference in philosophy. The Linux user/administrator loves the freedom provided by the Linux system. The Windows administration mindset is based on control, which is the opposite of freedom. The lack of a mechanism that allows an administrator to control everything that takes place on your Linux desktop is very non-appealing. Once there is a way for Windows administrators to limit your freedom, adoption of Linux systems will definitely skyrocket.
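The directory-driven deployment in points 2 and 3 above hinges on one resolution step: map a user's groups to a converged set of packages. A toy sketch of that step in shell; the group names and package sets are invented.

```shell
# packages_for GROUP...
#   Emit the union of the package sets assigned to the given groups,
#   one package per line, deduplicated. This is the "software
#   application objects" lookup reduced to its essence.
packages_for() {
    for g in "$@"; do
        case $g in
            accounting) printf 'openoffice-calc\ngnumeric\n' ;;
            developers) printf 'gcc\nmake\n' ;;
        esac
    done | sort -u
}
```

A client could then converge itself at login with something like `packages_for $(id -Gn) | xargs sudo apt-get install -y`; the directory server's job is to own the group-to-packages table instead of a hard-coded case statement.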

David, i´m sorry but your

T-One's picture

David, I'm sorry, but your post is absolute bullshit.
Do a little research next time before writing such things.

There are plenty of management tools available directly from the big Linux companies:
Red Hat = Red Hat Satellite Server (commercial) or Spacewalk (free)
SUSE = ZENworks (the most powerful management tool on Linux)
Ubuntu = Landscape

All in all

Anonymous's picture

David I'm sorry but I have to say you are clueless on the subject you are writing about. Either that or you are just lazy. You have to change with the times, there are countless examples of how Linux admins accomplish Enterprise Management, and there are countless tools that help them. Do your research.

What most of you are missing

Anonymous's picture

Many large IT departments take on a significant number of people without much experience with either Windows or *nix.
These guys/gals are thrown into the deep end and are expected to pick things up as they go along, but aren't given the time to learn how to use multiple tools or their more advanced features.
In these sorts of situations the centralized 'all in one' software config tools are a must.

If this ain't enough, most large enterprise setups have (mostly IT illiterate) management make the decisions on what tools to use - and they're usually persuaded to use the tools their admins can pick up the quickest.

Absolutely right...

Anonymous's picture

I've been a Windows/Linux mixed environment sysadmin at 3 different companies (so far), and at each company, I had more training on how to use my voicemail than I did for any IT support tools. The tools on the Windows side are standardized. They have extensive documentation. They all work together without the user having to "glue" them together. They all have a well researched set of best practices. And on top of all that, they are simple and menu driven. I can learn a Windows enterprise environment quickly and make sweeping changes with very little effort...

The Linux side is way different. The tools exist (sometimes), but they do not work together automatically, because there is no real framework or set of standards on which to base their operation. The admin has to piece together a set of tools and write a bunch of code to get them to interoperate, which means he also has to write a ton of documentation on what he's done. He has to hope that the program he chooses is well supported and won't be abandoned, and even then there's no guarantee that the documentation will be any good. There's just no comparison...

I'm not a Microsoft fan, but that's because of their inexcusable business practices, not because their technology doesn't work. I would love to see Linux succeed in the enterprise, but until the programmers stop trying so hard not to do anything like Microsoft, and actually acknowledge that Microsoft's enterprise solutions get some things done better, I don't think it ever will.

And I think I can guess

Del's picture

And I think I can guess which of the environments you worked on.

Things changed dramatically five years ago when Red Hat and Suse got serious. One of the main motivations for going with them is predictability: they guarantee your software stack (including the tools most need for management) for an increasing number of years. So your point about abandoned tools is much more relevant to the MS environment, where the risk of that happening is much higher; the trade-off is that you have to stick to what Suse or Red Hat supplies.

When it comes to documentation of tools, that has actually become very good. Head over to Red Hat and have a look. I can also recommend hiring a Red Hat consultant for the set-up, we have had very positive experiences with that.

I can learn a Windows enterprise environment quickly and make sweeping changes with very little effort...

But maybe not the changes you need or want. What you are attacking is partly the flexibility of GNU/Linux environments. There is actually *very* little a sysadmin can do with a windows environment, because changes that give added value to your users are typically very expensive on windows; it boils down to a budget decision, and those are made by management. On GNU/Linux environments, however, a single capable admin can make breathtaking changes in a week, e.g., changing entire server infrastructures.

When it comes to lack of standards, I think you mean too much flexibility. I can understand this frustration from a windows sysadmin's viewpoint: MS tends to push away any competition on the infrastructure side, and you end up with mixed bags like Exchange (is Exchange a mail, calendaring, DNS, DHCP or authentication server?). I am so sick and tired of the standardisation argument, because all it boils down to is IT management buying everything from MS, and users ending up with a crappy and expensive solution.

I'm not a Microsoft fan

I am sorry, but I don't believe you. There are too many standard phrases from Microsoft's sales department in your post. Next time, try to focus on the one point you actually have, instead of going overboard with rubbish from the TCO campaign.

Don't forget the business

Anonymous's picture

Lots of interesting comments. But it seems that everyone is taking the geek approach. Think about how you get Linux desktops into your organization. Somebody has to pay for them. How do you convince the business managers to fork over the money? You tell them how much they'll save compared to paying Microsoft. But then somebody will say something about how complicated Linux is and how they'll need more support staff (we know you won't need that much staff, but someone will claim you do). The business guys all nod knowingly and look at you. Do you say, "no worries, we have a really great enterprise desktop management system"? No you don't because one doesn't exist and you never lie. So you're stuck with Microsoft again.
Why hasn't Novell or Red Hat solved this problem yet?

I encourage you to read the

Del's picture

I encourage you to read the posts again. The more elaborate responses do not seem geeky to me. It is not a *problem*, Red Hat and Novell have solutions. GNU/Linux is successfully deployed in countless enterprise environments.

The difficulty of selling GNU/Linux desktops to management is, in my impression, dominated by lock-in mechanisms. For instance, if all software were available for both operating systems, I see no reason for anybody to choose windows. Actually, I believe the market for XP/Vista would collapse overnight.

Wrong approach

Anonymous's picture

Hi there,

I see a number of questions and issues posed by windows admins who are looking into Linux. Things are different in the *nix world. Where are the enterprise tools? He's sitting right here, behind the keyboard.

@In an enterprise, you need to test the patch against a number of representative test stations and in many cases
=Use KVM. Set up as many "test stations" as you want. Testing that involves specific peripherals is no different than in a Legacy Win32 enterprise. Brush up on IEEE829 and test away (or delegate it to an actual QA person).

@You have to then close the loop and ensure that the patch has been deployed
=Why is this hard? If the patch affects a file, simply ssh to the remote host and check the file. Need to check multiple hosts? Put it in a for loop:

for FOO in `somecommand -thatEnumerates -theHosts` ; do echo ${FOO} && ssh ${FOO} "sum somefile && somecommand --showVersion" ; done >> DidItGetThere.log

You now know which hosts have received the patch and which didn't.

Configuration Management
@Being able to configure large numbers of systems quickly and correctly the first time is a critical goal.

Then you better get the source right "the first time". Replicating a change from one box to N boxes is trivial. CFEngine is handy for a number of things, but can be cumbersome if you don't plan ahead. I don't like "agents" personally.

Automated application installation
@I have not done an extensive search to find tools to install applications remotely in Linux
That's a good thing. I'll stop you before you start. The tools are "ssh" and "packagemanager". For example:

ssh installuser@somemachine "apt-get install someapplication -y"

You can make it fancy and pipe the output somewhere for auditing. You can run the application CLI and check its version. You can even run the app on the remote machine and have X forward the output to your desktop (to confirm it is indeed installed and can be run).

@It is also critical that the creation of the package not be more complicated than the application being installed.
I don't understand. You seem to be suggesting that a custom application installer is more easily created in windows (via installshield, wininstall, or SMS) than under linux rpm/deb. Deb and rpm files are trivial if you are doing simple file extraction. I've only run across issues when there are a large number of dependencies.
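To illustrate the "trivial for simple file extraction" claim: the staging tree for a minimal file-drop .deb is just a control file plus the payload. The package name, paths, and maintainer below are invented, and the final dpkg-deb call is left commented out so the sketch stands on its own.

```shell
# Stage a minimal binary package: DEBIAN/control describes it, and
# everything else in the tree lands on the target filesystem as-is.
mkdir -p pkgroot/DEBIAN pkgroot/opt/customapp
cat > pkgroot/DEBIAN/control <<'EOF'
Package: customapp
Version: 1.0
Architecture: all
Maintainer: ops <ops@example.com>
Description: internal file drop
EOF
echo "payload" > pkgroot/opt/customapp/app.conf

# The actual build is one command (requires dpkg on the build box):
# dpkg-deb --build pkgroot customapp_1.0_all.deb
```

The complexity the article worries about only appears once you add maintainer scripts and deep dependency chains; a plain file drop really is this small.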

Remote Hardware Inventory
@How many people have SNMP
Stop. ssh, lspci, and mysql will get you pretty far. Nagios/Zabbix will also do it for you, and let you make some rules and triggers. If you work in a large enterprise (thousands and thousands of machines) and want to 'pretend' like you actually have some level of asset management, give your HP rep a call and ask about brokenview.

@In all of these cases, separate tools might exist ... but in the real world of enterprise management

Whoa, hold on there... We are "the real world of enterprise management". Me, James across the hall, Daryl down the street. We're *nix admins. We invented the enterprise, not MS or analysts. 40 workstations and Windows SBS are not an enterprise. Those 40 servers and 250 office desktops are not an enterprise. Eight DCs, 100+ racks, 6 offices in 4 different countries... That's an enterprise. Call Red Hat, call Novell, or just call Mark (I hear he has his own Debian-based distro now), learn how to write a for loop and a bash script with an 'if' in it, and the rest is careful planning.


Anonymous's picture

1: SSH running in a for loop will stall on every down system it hits. You wanna start coding multiple processes and exceptions, or are you just going to wait 3 days for all the timeouts?

2: Try clicking the OK button on an install 2000 times.

Please get some experience in a real enterprise.
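Both objections do have a well-worn shell answer: background each probe so one dead host can't stall the rest, and use the package manager's unattended mode instead of clicking OK. A sketch with an injectable probe command, so the logic is testable without real hosts; the ssh line in the comment is illustrative.

```shell
# sweep HOSTFILE PROBE...
#   Run PROBE against every host listed in HOSTFILE, in parallel.
#   PROBE is invoked as: PROBE host
sweep() {
    hostfile=$1; shift
    while read -r host; do
        [ -n "$host" ] || continue
        # each probe runs in the background, so a dead host stalls only
        # its own job; with real ssh you would also cap each attempt:
        #   timeout 5 ssh -o BatchMode=yes -o ConnectTimeout=3 "$host" ...
        "$@" "$host" &
    done < "$hostfile"
    wait    # collect every background probe before returning
}
```

And for the 2000 OK buttons: `apt-get -y` with `DEBIAN_FRONTEND=noninteractive` (or `yum -y`) has been the no-clicking answer for years.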

"You can make it fancy and

Anonymous's picture

"You can make it fancy and pipe the output for auditing" I loved that comment!

I like debunking

stiiixy's picture

I like these responses: satisfying the initial article's requests and debunking Windows users' responses with simple answers. +1 for linux-peeps (sorry, not the Slashdot-style rating system, ewwww, I can't stand that site).

Congruent systems management

Ryan Nowakowski's picture

These guys solved this problem long ago...


Kavey's picture

Apparently people are thinking of *nix as a replacement for Windows. I will be the first to remind you, it is NOT.

Management of Linux systems vary from commercial tools, to free tools, to home grown tools in most large scale corporate environments. I can be the first to tell you that I am on a team of about 9 people who manage thousands of Linux servers and workstations in our region alone. There are other regions with other teams managing other systems, but I'm just talking about our own. I am a Linux admin at a Fortune 500 company.

We can easily update all of our systems from one location. We can run scripts on part or all of the systems at once using a variety of tools. Even better is the software management. Since all the systems are running the same version of the OS, we can load the software on network shares along with shared libraries, etc., and use startup scripts that load the proper paths and libraries for each application. So all users have a custom launch bar with all the applications they require. The tool is updated using one of the methods previously described, and the programs launch from the network locations. Think of it as install once, run locally anywhere. Windows really lacks this kind of power. Yes, you have thin clients and remote applications, but this is not the same thing. This is like installing an application to a single drive on the network and everyone running it on their local machine; i.e., it runs as if it's locally installed on all computers without the need to install it locally.
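A sketch of the kind of launcher wrapper such a setup typically relies on: a tiny script on the share that points the environment at the network install before exec'ing the real binary. Every path and name here is invented for illustration.

```shell
# Generate a launcher that runs a network-installed app as if it were
# local: prepend the share's bin/lib directories, then exec the binary.
cat > launch-app <<'EOF'
#!/bin/sh
APPROOT=/net/apps/someapp/1.4
PATH=$APPROOT/bin:$PATH
LD_LIBRARY_PATH=$APPROOT/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export PATH LD_LIBRARY_PATH
exec "$APPROOT/bin/someapp" "$@"
EOF
chmod +x launch-app
```

Drop one of these per application into the users' launch bar and "install once, run locally anywhere" is just a matter of updating the share.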

We can easily reinstall any system that is on the network. Users can log into any system and have all their settings, files and applications available to them. Replacement of workstations is simple; it's just a pull-and-drop operation. Windows may have roaming profiles, but Linux stores all its data in the user's home directory, which is mounted from a network share. It's an instant login anywhere, as opposed to waiting for the profile to download to the local machine and having to manage the local machine's profiles (which can hurt performance when there are many profiles loaded).

Before I became a Linux admin, I worked for 10 years as a Windows admin (primary job, Linux slowly seeped into my life over those years). I got sick of doing constant break fix work. As a Windows admin, I'd say 90% of my time was break fix, and 10% was testing/implementing new ideas to improve the environment. As a Linux admin, I spend well over 90% of my time testing/implementing new ideas to improve things.

Exactly, bottom line Windows

Jason's picture

Exactly, bottom line Windows was built on a system designed for a single user. Linux was built on a multi-user platform. It's that simple.

Puppet + distro patch system

Anonymous's picture

Considering how many tools we have to use in our Windows environment, and what they cost, this list of patching, configuration management, app installation, and remote hardware inventory would be better applied as "how to do this on Windows without breaking the bank." Puppet handles configuration management, automated remote application installation, and remote hardware inventory. Add in the distro-provided patch system, and all bases are covered with the addition of just one tool. Puppet configuration does have a bit of a learning curve, but so do all the enterprise-level Windows tools. Tivoli, anyone?
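As a taste of that learning curve, here is a two-resource manifest in the declarative style Puppet uses, written out from a shell session; the package and service names are assumptions (they vary by distro), and the apply step is left commented out since it requires Puppet on the box.

```shell
# A minimal Puppet manifest: keep ntp installed and its daemon running
# on every node this is applied to.
cat > ntp.pp <<'EOF'
package { 'ntp':
  ensure => installed,
}
service { 'ntpd':
  ensure  => running,
  require => Package['ntp'],
}
EOF

# puppet apply ntp.pp    # (commented out: requires puppet installed)
```

Once the model clicks, state is described once here and enforced everywhere, which is exactly the consolidation the article is asking for.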

A list of available patches comes from most of the distro-provided tools, so no third-party tool is needed there. Proper use of a local software repository and a test-bed environment (you do pre-test the patches, right?) with nightly updates will make sure all machines are correctly updated.

As far as the FDCC goes, there are several Puppet users working under that and similar setup guidelines, so the tools for doing that are available.

I'm glad to see I'm not crazy

Blaque's picture

Every time I read or hear complaints that there aren't enterprise management tools for Linux, I scratch my head and wonder if the complainers have ever used SSH or a package manager. I'm not an enterprise admin; I just manage web servers, as I'm also a web developer. So I started to believe I was thinking too simplistically and that the enterprise task was over my head.

With some of the comments I see here, I feel now that it's not over my head and I am not crazy. In essence, the things I like about a Linux setup are being overlooked and are the subject of complaints from the Windows world. I like the fact that Linux is powerful enough for me to build my own custom solution to manage it. It doesn't need all the expensive enterprise solutions to get the job done. I've never minded piecing things together, but I guess that's the mind of a developer. As another poster said, I believe the admins in the Windows world ARE looking for some big red button to push so that everything is done. I don't see anything in this article that couldn't be done with SSH access, scripts, package managers, and some other tools that are readily available. If anything, Linux seems far more ready to be managed out of the box than Windows is. It's all a matter of making a plan and then executing.

do your homework before spreading FUD

bigbrovar's picture

OK, I am a young system admin for a university in my country. We run Ubuntu on all our computers, which number about 500. For mass installation we use SystemImager, which lets us build a master system (aka the golden image), create an image of it, send that to the file server, and boot it on all the other machines via PXE. If we need to apply a mass patch, we just apply it to the master build, rsync it with the image on the file server, and use Flamethrower to push the changes out to the other nodes on the network. Everything runs smooth and easy; if a professor needs a piece of software, it can be installed and sent to all machines in less than 10 minutes. There is also cluster SSH for mass administration of computers. We use OCS Inventory, a free (beer and speech) automatic inventory program, on all our machines; it uses a cron job to periodically send information on the state of our machines to the server, where it can be viewed through a web interface and even printed or exported as PDF. We use central LDAP authentication for all users and have their home directories exported from the file server, allowing users to log in from any machine on the network. There is also apt-cacher, a great tool that caches every package downloaded with apt-get, so once a package or update has been downloaded from the Ubuntu repo and installed, it is cached on our network. I can go on and on. Configuring our setup is sure a bitch, but then that keeps my job secure, because not many people can do it; it's not a matter of point and click. Besides, all the tools we use are free (beer and speech).
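The patch-the-master-then-sync cycle described here is mostly one rsync invocation. A local sketch, assuming rsync is installed; in production the destination would be a remote image server rather than a local directory.

```shell
# sync_image MASTER_DIR IMAGE_DIR
#   Mirror the patched master tree into the served image, deleting
#   anything stale, so PXE clients always boot the current build.
#   A remote destination would look like imageserver:/srv/images/lab/
sync_image() {
    rsync -a --delete "$1"/ "$2"/
}
```

Because rsync only transfers deltas, re-pushing a golden image after a small patch costs roughly the size of the patch, not the image.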


Bbob's picture

Lol "An all encompassing application would have one 5 foot thick manual instead of 20 thinner ones, that's all."

Not to mention that the 20 thinner manuals, each worked on by 20 people, are going to be more thorough than a single "5 foot thick" book worked on by probably only 20 people as well.

Server and thin clients are the smart way to run a network

SwiftNet's picture

The whole fat-client model for enterprises is so very dysfunctional. There are some really easy tools to manage Windows PC networks, but what I've found is that inevitably I still run around and tweak things on at least a few Windows stations. The cause can be anything from faulty hardware or software to unauthorized user tweaks.
A smart network is one with a powerhouse server and Linux thin clients. Updating the server is all that is needed. Need a new software package? Install it once and you are done. Knowledge of scripting is necessary when running a 'nix network, where a Windows admin can get away with knowing very little. When there is a problem, the Windows admin usually falls back on wipe and restore, where a 'nix admin will usually go for the fix.

please, at least do five minutes of google first

Ben Franklin's picture

You lost me at "The self appointed prophets, zealots or kooks..." And, as other commenters have said, you haven't done your homework; you just want to complain because there isn't some sort of magic wand to do your work for you.

Well yeah! Isn't that what

Jason's picture

Well yeah! Isn't that what computers are supposed to be?? :)


Daeng Bo's picture

Another poster here saying you didn't do your homework. You should have titled this piece "Dear Lazyweb ..."

If you use Ubuntu, then here's your answer

You're welcome,

Remote management

Anonymous's picture

I know that Novell bashing is common, but what about ZENworks? It will remote control Windows and Linux systems, though not all flavours of Linux; I think only Red Hat and SLES/SLED.


There is a solution

Martin Owens's picture

You could get together with some clued-in chaps, perhaps other sysadmins who know a thing or two about Linux in the enterprise. Set up a biz and design the tools as you think they should work, remembering to use existing infrastructure and libraries to make things quick.


May I suggest....

JohnMc's picture

Mr. Lane,

You have not looked hard enough. Take a look at a product called BigFix (http://www.bigfix.com/). It is not cheap, but I trialed it two years ago on a heterogeneous 5,000-seat test environment of clients and servers. We ran Windows, Linux, HP-UX and AIX against it. It worked.

It can do patching. It can do app installs. It can do equipment surveys. In fact, it can do bare-metal installs if you spend the time to set up the proper model. Single interface. Programmable reporting. The works. Its only downside? It's NOT cheap.

I always thought of the

Anonymous's picture

I always thought of Linux admins more as the Jedi of IT. They might not have all the fancy "gadgets and guns", but they sure know how to wield their "Force". Sure, it takes your time at the beginning, but in the end it pays off much better.

Linux has no "registry"

Anonymous's picture

I'm an enterprise Windows admin who just happens to be a big Linux fan as well, but I believe that the Linux desktop will never see widespread enterprise usage unless it undergoes some core changes.

For instance, in the Windows world, admins have a single searchable configuration database for the entire contents of the machine: the registry. The registry makes things like WMI and group policy possible, and those tools in turn make the enterprise Windows desktop possible.

In the Linux world, all configuration is done via disjointed text files that don't even share a common syntax. Creating tools that take other common tools into consideration on an individual basis has got to be difficult and can't possibly be extensible. Until Linux can offer something similar to the registry, I don't see how enterprise Linux tools can compete with enterprise Windows tools.

I'm actually surprised that some distro hasn't picked up on this already. Linux has so many tools to choose from to do something like this. A distro like Red Hat or Ubuntu could require that all configuration be composed in XML and tie all these separate configurations into a single standardized database of settings on the machine.

The problem with enterprise Linux is that the same freedom it likes to give its programmers and users to write code any way they see fit is also fatal to the standardization that admins and their tools depend upon...

You cannot be serious!

Anonymous's picture

The registry, as it is currently implemented and used, is a freakin' abomination. It cannot keep itself clean, and it makes installing software much less straightforward.

That said, every distro I've worked with DOES have an internal database to track what's installed, what's available, and is highly configurable to boot.

I happen to LIKE the text files. If someone knows how to insert comments in the registry to document what was done and why, and what the purpose of a particular key or hive might be, I'd sure like to know.

Sorry. The registry is crap IMO.

You're right, and you're wrong...

Anonymous's picture

It's true that the registry sucks, but it's hard to argue that it doesn't lend itself to enterprise computing much better than a bunch of non-standard text files do...

The distros do track packages and their installation settings, etc. via their respective package managers (BTW, why so many package managers that do the same thing?). But they don't track user and application settings. There's no API for getting to and enforcing policy on any of these settings, etc. These are the things that make enterprise desktop deployment possible and manageable by a single admin...

If you don't like the registry, fine: make a BETTER registry that makes Linux superior in the enterprise arena. Don't defend an inferior way of doing things just because the Windows way isn't perfect...

Really? ->

Jason's picture

I am inclined to agree with

Del's picture

I am inclined to agree with the conclusion that you must be a Windows user ;) I work in an enterprise environment with numerous Linux machines, and it works very smoothly. It is my impression that support costs are relatively higher in the XP environment, but it is hard to conclude objectively (different usage and users on the two platforms).

One thing that you seem to neglect is that your list does not apply to all. Different environments have different demands. For us, the standard application stack is typically locked, and the leading distributions make it trivial to customize a PXE image containing the software stack we need. Patching the standard software stack can be done on designated test servers first, easily through the package system. It is an admin's dream compared to an XP environment with all users being admins.

Additional commercial software can be installed on network directories that are put in everybody's path, so only one binary needs to be patched, or rather upgraded, for each, typically upgrading and keeping the old version available until it is no longer desired. Totally painless.

SSH is not difficult, nor is Python or apt. ClusterSSH is a treat, enabling you to perform any command on any collection of clients or servers in a snap. Besides, no GUI can replace an admin's need to learn a scripting language.
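To make the ClusterSSH point concrete, here is a tiny illustration with invented workstation names: cssh opens one window whose keystrokes go to every host at once, and a plain loop over ssh covers the non-interactive case.

```shell
#!/bin/sh
# Hypothetical hostnames; the ssh line is commented out so this
# sketch is harmless to run as-is.
#
# Interactive, one window driving all three machines:
#   cssh ws01 ws02 ws03
#
# Non-interactive, the same set from a script:
for h in ws01 ws02 ws03; do
    echo "== $h =="
    # ssh "$h" 'uptime'    # uncomment to actually run the command
done
```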

Configuration management? You have a nice selection here:

Make sure to install Mediawiki on a server to thoroughly document all admin tasks performed, and you are in for an enjoyable ride.

A "quickie" with Webmin

El Perro Loco's picture

Wouldn't Webmin be showing the way to integrate the several tools available under Linux?

I'm just thinking "usability". The command line is geek-cool, but not enterprise-cool.

well, like this?

Anonymous's picture

Linux is for smart people.

Anonymous's picture

Linux is for smart people. IT people are just getting dumber.
Gone are the times of in-house innovation... in are the times of "how-little-work-can-I-do".

If that is the case...

David Lane's picture

Then Linux as an OS is doomed.

Fortunately, I disagree. Most shops are not dumb, but they certainly are stretched far beyond what they were even 10 years ago. Where there used to be four people managing four servers and 100 users, there are now two people managing 100 servers part-time for 5,000 users. That makes it very tough to actually get anything accomplished, much less learn anything new.

David Lane, KG4GIY is a member of Linux Journal's Editorial Advisory Panel and the Control Op for Linux Journal's Virtual Ham Shack

One "smart" admin...

Ryan Nowakowski's picture

...can manage 500 machines more easily than 20 junior guys. That's because everything can be scripted in Linux, from the install to the management. Where do I start to become one of these "smart" Linux admins? I like this site:

I've used it to manage dozens of machines, and the guy who owns the site manages hundreds of critical desktops/servers on a stock trading floor.

This is a moot point...

Anonymous's picture

Tons of stuff can be scripted in Windows too. You have JScript, VBScript, Perl for Windows, batch scripting and now PowerShell, and the list goes on. The claim that one platform can be scripted and another cannot is just silly at best.

The real issue is that much of what can be done with Windows in the enterprise does not require scripting. Things can be automated quickly using a whole bunch of available tools that interface directly with the registry, and for the most part, a monkey could use them.

Linux rocks for the desktop, and it rocks for the standalone server, but for the enterprise? As sad as it is, Windows is just better...

Warning: if you compare

Jason's picture

Warning: if you compare batch files to shell scripts, prepare to be laughed at for hours. My office just ported its primary app from an AIX server to Windows. The problem is that I had all these fantastic admin tools in the form of scripts that gave me instant results exactly as needed. People here honestly thought I had godlike powers. REALITY: I'm a Google expert.

No big deal, I thought. I'll just do a little googling and convert the scripts to batch files. I'm sure someone has a conversion tool; if not, I'll write one. BAH! When I actually reviewed what is available in batch scripting compared to shells, I just hung my head. (Incidentally, there are tools to convert batch files to scripts.)

There is no comparison. Right now I'm checking into Cygwin which basically would give me shell tools on the Windows server. Hopefully that will work. (I'm interested in any feedback from those who've tried this before).
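For a concrete sense of the gap, here is the kind of task that is one pipeline in a POSIX shell and genuinely awkward in classic batch; the directory is just an example argument, not anything from the original story.

```shell
#!/bin/sh
# Report the five largest entries under a directory tree.
# In cmd.exe batch this takes a contorted FOR /R dance; in shell
# it is a single pipeline of standard tools.
dir=${1:-/etc}
du -a "$dir" 2>/dev/null | sort -rn | head -5
```

Under Cygwin, the same pipeline runs unchanged on the Windows server, which is exactly the appeal.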

go for it

cthubik's picture

If you're a linux person who's stuck with windows, Cygwin is pretty useful.

you must be windows user indeed

Anonymous's picture

All the areas that you mentioned look funny to me.

Patching? As easy as pointing machines at different apt proxies. If you have a test lab, make it download patches from a test proxy, one with fresher packages than all the office machines see. And I can assure you that as long as your crond works, all patches will be applied.
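The split described can be as small as two sources.list variants plus one cron script. A hypothetical sketch; the proxy hostnames and release name are invented:

```shell
# Hypothetical /etc/cron.daily/autopatch, identical on every machine:
#
#   #!/bin/sh
#   apt-get update -qq && apt-get -y -qq upgrade
#
# Which proxy a box's sources.list points at decides what it receives:
#   test lab:  deb http://fresh-proxy.example.edu/ubuntu  hardy main
#   office:    deb http://stable-proxy.example.edu/ubuntu hardy main
#
# Packages get vetted on the fresh proxy's clients first, then
# promoted to the stable proxy for everyone else.
```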

Conf management? I use Puppet here and find it OK. It becomes more powerful and easier with each new version, unlike SMS.

Automated app installation? So many ways to do that, you know. Make your own package; use YaST, slack, dpkg, whatever. Software inventory can easily be done with scripts.

Hardware inventory with SNMP? Are you kidding? Out of 10,000 office desktops, 6,666 would be switched off, and the other 3,334 would have DHCP addresses that you would have to resolve to meaningful names. Linux provides you with the best tools for hardware inventory if you just use its internal commands and upload the results to your inventory database, along with username, hostname, S.M.A.R.T. status and whatever else.
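A self-reporting inventory script of the kind described is only a few lines. This sketch is hypothetical: it builds the record a machine would push from cron and just prints it, and the upload endpoint in the final comment is invented.

```shell
#!/bin/sh
# Each box reports on itself from cron, so a powered-off machine is
# simply absent until it boots again; no outside-in network scan.
inventory_line() {
    printf '%s,%s,%s,%s\n' \
        "$(hostname)" \
        "$(uname -rm)" \
        "$(id -un)" \
        "$(date -u +%Y-%m-%dT%H:%MZ)"
}

inventory_line
# A real deployment would pipe this somewhere, e.g. (hypothetical URL):
#   inventory_line | curl -s --data-binary @- http://invsrv/upload
```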

Stop complaining that Linux doesn't have these tools and isn't ready for the enterprise. IT staff have degraded too much with Windows and can't imagine how to do the simplest tasks anymore; they are the ones who are not ready!

Perhaps I wasn't clear

David Lane's picture

It is not that there are not tools out there to do this stuff. There are, mostly. What is missing is the, oh, call it the ease-of-use factor: the ability to close the loop and verify that things have been accomplished as expected.

Puppet is a powerful tool. YaST is a powerful tool. So is SSH. Yet each one of them has a steep learning curve, and different command-line switches and configuration settings that make cross-program configuration difficult.

Linux and UNIX pride themselves on having multiple ways to do things, yet in the current environment, most shops do not have the manpower or the time to sit down and grind through the documentation just to figure out what the additional requirements are for each of the tools, much less the time to do the research to FIND THEM ALL. And then, as I said, there is closing the loop and reporting success or failure.

We have managed to come up with several good network and systems management suites. Why have we not done the same for the desktop management suites?

David Lane, KG4GIY is a member of Linux Journal's Editorial Advisory Panel and the Control Op for Linux Journal's Virtual Ham Shack

What learning curve?

Anonymous's picture

"Puppet is a powerful tool. YaST is a powerful tool. So is SSH. Yet each one of them has a steep learning curve, different command line switches and configuration settings that make cross program configurations difficult."

Not sure about Puppet, but neither YaST nor SSH has any learning curve AT ALL. Or do you mean it is hard to learn how to hack a server that uses them?

You still appear to be

dsr's picture

You still appear to be whining about not having done your homework.

"My Cuisinart, my knife and my mixer have completely different interfaces. They are all powerful tools, but how am I going to learn them all?"

Well, you spend time learning them, that's how. Do your research. Pick your tools. Learn to use them. Join the user communities and keep up to date. Document your methodology.

One way: Use Debian. Establish your own repositories: one a mirror of the stable branch, another to hold your own packages, still others for alpha and beta testing packages. Each machine runs apticron and 'apt-get upgrade -y' every night -- now they are all up to date. Write Puppet routines to configure them. Run a normalized SNMPd configuration, read-only. Unify user management. Kill all end-user knowledge of root passwords, but give them specific sudo rights when they need them. Build an internal Wiki to document everything. You should be able to reproduce a given machine's build just by knowing the class it belonged to and the DNS name you assigned. Backup home directories, or force storage on central servers, which are themselves backed up.
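The "specific sudo rights" piece of this regimen, for example, is a one-line policy file. The group name and command here are hypothetical; the pattern is what matters:

```
# Hypothetical /etc/sudoers.d/helpdesk (edit with 'visudo -f' so the
# syntax is checked before it goes live): the helpdesk group may
# restart the print service as root and may do nothing else, so no
# one outside the admin team ever learns a root password.
%helpdesk ALL = (root) NOPASSWD: /usr/sbin/invoke-rc.d cups restart
```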

Oh, this isn't easy? That's right, you'll need to spend time and energy and a very small amount of money getting it all set up. Then it repays you for years and years.

You're wasting your

Anonymous's picture

You're wasting your time.
What Windows "IT Experts" want is a Big Red button that says "Do it".

The button has to be provided by a large vendor at a high cost.

It does not matter if the button does not work as advertised half the time.

It does not matter that the time they spent working around the quirks of this button could have been better spent writing their own scripts.

All they want is to click that button and be able to whine and complain to the vendor that it does not work as advertised.

The vendor then promises that everything is fixed in the latest version. Just extend your licenses.

This is what Windows "experts" want. I have seen this over and over again.

Sadly, many Unix "experts" are no different.

Why do they have to be in

Anonymous's picture

Why do they have to be in one suite, as you call it?

Why do SSH and Puppet have to come together? As they are, they work fine. You can't seriously expect a config manager and a secure communications tool to have the same command-line switches and settings; they do different things!

An all encompassing application would have one 5 foot thick manual instead of 20 thinner ones, that's all.

If you are complaining that learning linux is too much work, I would counter that learning the Windows tools is just as much work, only more people have done it.

I grant you the breadth of choice makes deciding which pieces to use more difficult, but the end result is often something that works better for the people who chose it, and they understand it better.

More documentation on fitting all the pieces together well is always welcome, of course. That's what this is in a lot of ways: a training issue, not a technical one. (Spoken by someone who's wanted to learn Puppet for a while now and not gotten there.)

Enterprise Management tools exist for the Linux Desktop

Jean Philippe's picture

"If you are complaining that learning linux is too much work, I would counter that learning the Windows tools is just as much work, only more people have done it."

Sorry, but I don't agree. Learning Windows is way more difficult than Linux: lack of documentation, lack of communication.

I think on Linux you have to get things right, while on Windows mistakes are allowed; even if the Windows registry is a complete mess, a workstation might still work (more or less).
On Linux you have to get it right: a wrong path or a typo, and it might not work at all.
That's a real difference.

If you want to do your job right on Windows, you have a lot of things to learn, while on Linux it is more a question of fine-tuning the configuration; and on Linux you can explain why it works or doesn't, which you can hardly do on Windows.

What ? Linux = lack of

Sephi's picture

What? Linux = lack of communication? You can't be SERIOUS... discussion boards, mailing lists, wikis, IRC... what else do you need?

And we can easily see where the "not following the rules" behaviour leads. Take IE, for example: they didn't follow any standards when building that software, and it ended up as crap.

"What ? Linux = lack of

Anonymous's picture

"What ? Linux = lack of communication ? You can't be SERIOUS..."

No, I was speaking of Windows. The fact I wanted to highlight is that the author of the article might use Windows and it will work, however clueless he may be, but if he wants to use Linux he must know exactly what to do.
