Philosophy and Fancy
Fights between licensing philosophies are not the only issues that help shape distributions. There's also the question of computing models. Sun Microsystems (which started life as Stanford University Network) used to maintain that “The network is the computer.” Because Linux is a from-scratch reimplementation of UNIX—one that inherits conventions from System V and innovations from the Berkeley Software Distribution (BSD)—and UNIX always has been a network-centric operating system family, it makes sense that some, or most, Linux distributions would follow this network-centric philosophy. Time-sharing and time-sharing-with-a-fresh-coat-of-paint (that is, cloud computing) are the major paradigms of the network-centric distribution. Two others are cluster-based computing and dumb terminals running remotely off a central server (both useful in scientific and commercial environments). Some flavors of Red Hat are specifically tailored to this computing model.
On the flip side, we have the desktop distribution. This is the operating system for the personal computing revolution. It stores the operating system and all the user's data locally (where the network-centric model keeps both on a remote server). These distributions are usually general-purpose, including a selection of software that can meet almost every need, from setting up a home Web server to running a small business, from playing games to Web browsing, word processing and podcast production. The desktop distribution is the Swiss Army knife of Linux distros. Ubuntu, SUSE and Mandriva show this approach in action.
You can see a vestige of the early heritage of your particular distribution by looking at the filesystem structure. Does your package manager install software to /usr/local/* or to /usr/*? If the former, your distro probably started life as a network-centric operating system for an office environment. If the latter, your distro has probably been designed (or, in some cases, redesigned) with the desktop in mind.
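As a rough heuristic, you can see which prefixes your system actually populates. This is a hedged sketch, not a definitive test—distros differ, and the prefix list here is just the conventional set:

```shell
# See which conventional prefixes hold installed binaries on this system.
# On a desktop-oriented distro, /usr/bin is typically packed while
# /usr/local/bin stays nearly empty (it is reserved for locally built
# software); a heavily populated /usr/local suggests the older model.
for prefix in /usr/local /usr /opt; do
  if [ -d "$prefix/bin" ]; then
    count=$(ls "$prefix/bin" 2>/dev/null | wc -l)
    printf '%s/bin holds %s entries\n' "$prefix" "$count"
  fi
done
```

For a specific package, `dpkg -L <package>` (Debian-style systems) or `rpm -ql <package>` (RPM-based systems) lists every file it installs, prefix included.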
Alas, there are some things for which the Swiss Army knife just isn't suited, and in the last four years, several custom-purpose distributions have come on the scene to solve the shortcomings of the desktop distribution for different specific purposes. The most obvious of these are the studio distributions, customized for real-time audio and video production in high-demand environments, but there also are customized distributions for firewalls, Web servers and laptops as well as market-specific distros targeting churches, activist groups, hackers and crackers, and grandparents (that is, users who are incapable of interacting with their machines as anything other than appliances).
Moving beyond the customized distro space, there's an entire field of customized Linux distributions that deserves special mention: the live CD. Knoppix was the first mover here, and since then, the space has exploded. With a live CD, you can run Linux on almost any hardware, including the programs you use most often (if one live CD doesn't have it, chances are another will), without touching the machine's hard drive. Live CDs are very useful for diagnostics, for testing whether a distribution will play nice with your hardware or for taking a familiar environment into hostile territory (for example, when visiting relatives you'd rather not have find out that you visit dolphinsex.com purely for research while writing your latest romantic epic about trans-species love among aquatic mammals).
No discussion of the different approaches would be complete without mentioning embedded distributions—versions of Linux and derivative operating systems (such as Rockbox and Android) designed to run on handheld devices, networking appliances, NAS servers and dozens of other gadgets, toys, tools and machines that consumers love to use and hackers love to repurpose. Some of these you can find for download on the Web, but a greater number are created and used in-house at different companies that manufacture devices of different sorts and often include a goodly amount of proprietary code to interact with the device's firmware.
There's a third axis along which distributions sort themselves out, and it has to do with how you answer the question “Whose job is it to administer the system?”
Linux's architecture segregates system functions from user access—a major reason that Linux has proved remarkably insusceptible to viruses and worms. In a classical setup, what I'll call office administration, this means that only the root account can install and remove software, monkey with system settings, load and unload kernel modules, and change the hardware. A user account may merely use the software and access the data files generated by that particular user or shared with it by another user. This is still the most common setup, and it's useful in small-office, home-office and family environments where more than one user will be accessing a given system regularly.
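You can see this segregation directly in the permissions on system paths. A minimal sketch, assuming a conventional Linux filesystem layout:

```shell
# Under the classical (office-administration) model, system directories
# are owned by root and writable only by root; ordinary accounts get
# read/execute access at most.
ls -ld /usr/bin /etc

# -w tests whether the *current* account could modify system software.
if [ -w /usr/bin ]; then
  echo "this account can modify system software (root or equivalent)"
else
  echo "this account cannot modify system software (ordinary user)"
fi
```

Run as an ordinary user, the second branch fires—which is exactly why a worm running with user privileges can't replace system binaries.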
However, laptops and Netbooks often don't need this kind of strict segregation, because the user almost always also is the system administrator. Those distributions aimed at this market and at the single-user desktop operate according to a home administration model—that is, to remove the encumbrance of having to log in to root separately, a number of modern distros do not enable the root account by default. Instead, the primary user is also the sysadmin and must furnish only a password to perform administrative functions. Ubuntu and its derivatives use this scheme by default, although they easily can be converted to the more classical administration method.
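A quick way to tell which model your own system follows is to check for membership in an administrative group. This is a hedged sketch—the group name varies by distro (sudo on Debian/Ubuntu, wheel on Fedora and friends, admin on some older releases):

```shell
# Home-administration systems put the primary user in an admin group and
# route privileged work through sudo instead of a root login.
if id -nG | tr ' ' '\n' | grep -qxE 'sudo|wheel|admin'; then
  echo "home-administration style: this user can run admin tasks via sudo"
else
  echo "classical style: administrative work needs the root account"
fi
```

On Ubuntu, converting back to the classical scheme is as simple as giving root a password (`sudo passwd root`); the sudo-based scheme keeps working alongside it.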
The final major administrative paradigm is most commonly encountered in embedded systems and appliances. These gadgets, such as your trusty blue-box Linksys router, are generally headless and are administered remotely from a master system over SSH or (more commonly) through an easy-to-understand Web interface.