/var/opinion - Come Together
The whole PC world is plagued by a lack of good standards. Some of the most frustrating standards problems are hardware-related. For example, what brainiac thought it was a good idea to make the FireWire connectors and USB connectors on motherboards identical? The motherboard manuals are usually careful to point out that if you mix these up, you can damage the motherboard. That's nice, but who made it possible to mix them up in the first place? Dumb.
It's just as troubling to see a continuing lack of good, comprehensive standards among Linux distributions. As with hardware, you can almost always find a way to make something work if you are careful and know what you're doing. But that's no excuse for the lack of standards across distributions, and the few inadequate standards that exist.
Here's what inspired this complaint. If you've been following my columns, you'll know that I've been trying to put together a MythTV box. I followed several how-to pages for installing special drivers for the tuner cards I have tried. Most of the published instructions, including those linked to by some hardware vendors, tell you to place firmware everywhere but the place Ubuntu stores firmware. Ubuntu looks for firmware in the /lib/firmware/<kernel version> directory. Most instructions tell you to put the firmware in /usr/lib/hotplug/firmware. One card, the Hauppauge PVR-150/500, wants firmware files in multiple locations, including the /lib/modules/ directory. It uses different filenames depending on the version of the kernel and driver. I've tested three cards so far, and I finally ran out of patience and used a shotgun approach. I put copies of the firmware just about everywhere but my son's sock drawer. All the drivers work now. I have no idea which copies of the firmware files they are finding, but I don't care anymore.
Personally, I like the Ubuntu approach to locating firmware. Ubuntu uses udev, which many agree is superior to hotplug. It lets you install separate versions of firmware based on the version of the kernel.
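To make the shotgun approach concrete, here is a sketch of the copy step. Two assumptions to flag: v4l-cx2341x-enc.fw is the encoder firmware for cx23416-based Hauppauge cards, so substitute your own card's file, and the script stages everything under ./staging so you can dry-run it without root (a real install would set DESTDIR=/).

```shell
# Sketch: put one firmware blob both where Ubuntu's udev-based loader
# searches and in the older hotplug location most how-tos mention.
FW=v4l-cx2341x-enc.fw
DESTDIR=${DESTDIR:-./staging}   # set DESTDIR=/ (as root) for a real install
KVER=$(uname -r)
[ -f "$FW" ] || touch "$FW"     # placeholder; use the real blob here
# Ubuntu (udev) looks in a kernel-version-specific directory:
install -D -m 644 "$FW" "$DESTDIR/lib/firmware/$KVER/$FW"
# hotplug-era instructions expect this path instead:
install -D -m 644 "$FW" "$DESTDIR/usr/lib/hotplug/firmware/$FW"
```

The kernel-version-specific directory is the point in udev's favor: you can keep a separate copy of the firmware for each kernel you boot.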
Some may argue that this differentiation is what open source is all about. If Ubuntu's choice is good enough, other distributions will adopt it, and it will become the standard. Fair enough, but wouldn't it be more efficient for customers if the distributors simply agreed on such fundamentals as udev and where to put firmware? At least that way we'd be less likely to run across how-to pages that don't apply to our chosen distribution.
As much as I like this one thing about Ubuntu, Ubuntu is far from perfect when it comes to establishing or observing standards. Try to install a vanilla kernel on Ubuntu and see for yourself. You'll notice that you can no longer mount some disk partitions. Ubuntu, by default, installs and uses a logical volume manager (LVM) and enterprise volume management services (EVMS), one or both of which break the mounting of those partitions when you boot a vanilla kernel. I managed to fix the mount problem by editing the configuration files for LVM and EVMS to ignore all the drives on my system. The next version of Ubuntu will add ivman, yet another volume manager. I can't wait to find out what I'll have to reconfigure when the new Ubuntu is ready.
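For the record, the "ignore all the drives" edits look roughly like the following. Treat this as a sketch, not a drop-in fix: the file locations are the stock ones (/etc/lvm/lvm.conf and /etc/evms.conf), but stanza names and patterns vary by release, so check your own files, and note that the hd*/sd* patterns here are examples covering typical IDE and SCSI/SATA device names.

```
# /etc/lvm/lvm.conf -- a regex filter that rejects every block device:
devices {
    filter = [ "r|.*|" ]
}

# /etc/evms.conf -- exclude IDE (hd*) and SCSI/SATA (sd*) disks:
sysfs_devices {
    exclude = [ hd* sd* ]
}
```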
Unfortunately, my suggestion that distributors collaborate is utopian and unrealistic. They don't even work as a team in ways that would benefit them most, such as pressuring hardware vendors to preload Linux. When it comes to standards, most distributors aren't even willing to agree on a package format let alone build a package system where you could install a Mandriva RPM in Fedora without running into dependency problems. They can't agree on where to put firmware files or whether EVMS should be part of the basic system.
The best possible solution would be for all major distributors to build on a single base distribution. This was one of the original ideas posed when Linux Standard Base was first formed, but distributors rejected the idea even though it would save them all a lot of duplicated effort. Why are distributors disinclined to agree on a comprehensive standard distribution? Competition. A standard base distribution would lower the barrier to entry for new competing distributions. Put more bluntly, despite all the lip service Linux distributors give to how their commitment to open source and freedom empowers end users, they really do like having a degree of customer lock-in. Their lock-in just isn't as severe, obvious, destructive or effective as Microsoft's lock-in.
Don't get me wrong. I don't want to see the Linux market homogenized so much that distributions start to disappear. I'm glad there are many distributions from which to choose. I would simply like to see them differentiate their distributions at a much higher level, a level that eliminates needless compatibility problems. But I confess that there are times when frustration leads me to the temptation to start a crusade to get everyone to run Debian. What do you think?
Nicholas Petreley is Editor in Chief of Linux Journal and a former programmer, teacher, analyst and consultant who has been working with and writing about Linux for more than ten years.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
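That last combination is worth spelling out. A minimal sketch, built on a throwaway demo tree so it runs anywhere (the directory names and the "ERROR" search string are invented for illustration; in practice you would point find at /home):

```shell
# Build a tiny demo tree with two .log files, one containing the entry:
mkdir -p demo/logs
printf 'boot ok\nERROR: disk full\n' > demo/logs/syslog.log
printf 'all quiet\n' > demo/app.log
# find selects the files; grep searches them. -H prints each matching
# file's name, and '+' batches many files into one grep invocation.
find demo -name '*.log' -exec grep -H 'ERROR' {} +
```

Swap demo for /home and the search string for whatever you are hunting, and you have exactly the combined tool described above.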
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Doing for User Space What We Did for Kernel Space
- Tech Tip: Really Simple HTTP Server with Python
- SuperTuxKart 0.9.2 Released
- Rogue Wave Software's Zend Server
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here: just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide