The Ultimate Distro
The name of Gaël Duval's new distro, Ulteo, with its hint of the word "ultimate", smacks of a certain ambition. But Duval probably means it in the sense that it is the last distribution you will ever need to install, because thereafter it will "self-upgrade automatically," as the announcement of the alpha release put it. Ease-of-use has been a constant theme in Duval's work. When he launched his first distro, Mandrake, in July 1998, one of his stated goals was "to provide a working and easy-to-install linux-distribution to people who don't want to spend too much time in installing and configuring their Linux system : just install it and USE IT."
But if the vision has been steadfast, the path to achieving it has proved somewhat stony. First Mandrake acquired Conectiva to form Mandriva, and then, in March 2006, Duval was "laid off", as the euphemism has it. If you're interested, you can read Duval's comments on the whole affair, as well as those of François Bancilhon, CEO of Mandriva, and decide for yourself what really happened. But looking at the bigger picture, what's interesting about the Mandrake/Ulteo saga is that it recapitulates so much of the recent history of free software, as new distros have continually been created in an attempt to resolve the perceived shortcomings of existing offerings.
In the beginning, Linus created two floppy discs, called "boot" and "root". As Lars Wirzenius, Linus' Helsinki friend and someone who had the privilege of being present at the birth of Linux, explained to me a few years ago:
The boot disk had the kernel. When that booted, it asked you to insert the other disk, and that had the whole file system for the Linux system. All the stuff that these days would be put on a hard disc was on that floppy. But it was a very, very small file system, very few programs, just enough to be called an independent Unix system.
Copies of these discs were placed on a server at Helsinki University. They were soon mirrored around the world, for example at the Manchester Computing Centre (MCC), part of the University of Manchester, in the UK. It was probably here that, in the nicest possible way, the distro wars started. The MCC decided it could do something a little better than Linus' basic two discs, and put together the MCC Interim distribution, which first appeared in February 1992, barely six months after Linus had revealed Linux to the world. Shortly afterwards, other distros appeared: Dave Safford's TAMU (Texas A&M University) and Martin Junius' MJ collections, followed by Peter MacDonald's famous SLS release.
It was SLS that prompted a rather remarkable diatribe in the very first issue of Linux Journal, dated March 1994, that pinpoints the fundamental challenge facing any distro-maker:
Many distributions have started out as fairly good systems, but as time passes, attention to maintaining the distribution becomes a secondary concern. A case-in-point is the Soft landing Linux System (better known as SLS). It is quite possibly the most bug-ridden and badly maintained Linux distribution available; unfortunately, it is also quite possibly the most popular.
The author of these strong words was a young Ian Murdock, explaining what prompted him to create his own distribution, which he named "Debian" after his wife and himself - Deb+Ian. As he told me in 2000: "I regret how harsh I was, because the guy was just trying to do something good." They may have been typical young man's words, but they are also symptomatic of a feeling that seems to have welled up time and again within the free software community: that the current distros just don't do their job well enough - and that something better is possible.
There's a nice graphical representation of this constant sprouting and growth, and it's interesting to note that Murdock's Debian has proved a strong stock for new shoots of the distro tree. But this shows only a tiny part of the total richness: the indispensable DistroWatch lists over 300 distributions in its main listing.
This is one of free software's greatest and least-appreciated strengths: the fact that it can continue to evolve in an almost organic fashion, untrammelled by constraints of economics, or even feasibility. It is this fecundity that drives free software forward unstoppably, and that distinguishes it from the sterile code monster that is Windows, which, trapped within the carapace of its closed source, only slouches towards Redmond to be born every five years or so. And it is precisely because of this ever-present, irrepressible urge to trump what has gone before, and to create the ultimate distro, that there will never be one.
Glyn Moody writes about free software at opendotdotdot
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
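The example in the paragraph above can be sketched as a single pipeline. This is a minimal illustration, not taken from the article itself: the path `/home`, the `.log` extension and the search string `ERROR` are placeholders you would substitute for your own.

```shell
# Find every regular .log file under /home and list those that
# contain the string "ERROR". find supplies the file selection,
# grep -l prints just the names of the matching files.
find /home -type f -name '*.log' -exec grep -l 'ERROR' {} +
```

Because each tool handles one concern (locating files versus searching their contents), either half can be swapped out independently, for instance replacing `grep -l` with `grep -c` to count matches per file.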
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Google's SwiftShader Released
- Doing for User Space What We Did for Kernel Space
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide