Linux in the Year 2000
Reprinted from Linux Journal, Issue 2, May 1994
In the past seven years we have seen Linux go from an idea for a small Unix-like system to a movement to bring affordable, reliable multi-tasking software to anyone who could buy a rather minimal computer. In fact, we have seen that in some parts of the world people are more likely to have a Linux system than to be connected to the electric power grid.
Now we see Linux and an Internet connection in over 100 million homes worldwide. How did this happen? Cost is the best answer. Some of you probably remember an old program loader called MS-DOS. Back in the 1980s it was being marketed as an operating system and it managed to establish a user base close to that of Linux today. But it had three fatal flaws:
it only ran on one type of computer system and could not be expanded to support the full capabilities of new microprocessors;
it cost money;
it didn't support multi-tasking.
We can excuse the first flaw, as it was originally written for a project at a computer company and was never intended to be marketed to the general public. Although a more visionary company might have made a better decision, we can only say that hindsight is 20/20.
The fact that people actually had to pay for a copy of MS-DOS (or, more properly stated, were supposed to pay for it) could also be considered a very short-sighted decision on the part of Microsoft. As we all know now, it is the added value of training, customization and, of course, the user-specific applications that makes the money. Giving away operating systems helps to sell these services along with hardware. The final flaw, however, is what resulted in the demise of MS-DOS. I remember that back in 1986 I gave a talk at a personal computer users' group meeting in Seattle. I had brought along an IBM-AT (remember those? It had an Intel 80286 processor in it, and people ran MS-DOS on them) and a couple of H19 terminals (Heathkit? Another flash from the past for some.)
I had a version of the Unix system running on this hardware. I talked about Unix systems, pointing out multi-tasking as a primary benefit. I was amazed when these allegedly computer-literate people didn't understand why multi-tasking was absolutely necessary. In fact, one of the group members actually said, "I don't like Unix because it accesses the disk when I'm not doing anything." Today, 99% of computer system users don't even know what a disk is, much less disk access.
With the advent of ISDN in the early 1990s and personal satellite stations in the late 1990s, connectivity became the big issue. People quickly realized that they didn't want to know what their computer was doing; they just wanted to see the results. Could you, for example, imagine manually instructing your computer to call up another computer? Well, Unix systems pretty much pioneered the initial ideas behind these sorts of computer links with the advent of the uucp program suite back in the late 1970s.
When the average user, without using these words, asked for a multitasking computer system, Linux was there and waiting. We have to give credit to early Linux activists (and Linux Journal itself) for going out to companies that intended to market personal Internet stations and pointing out that Linux was a more capable and less expensive base to use for their products. The result, as you can see today, is that most personal Internet stations are based on the Linux operating system.
But there is more to the success of Linux. People recognized they would rather pay for service than for things. Linux, much like my first car, a '55 Chevy, offers a choice for the consumer. They can either fix it themselves or they can hire someone to fix it. That someone can be a representative of the manufacturer or the kid down the street. This was certainly not the case with proprietary operating systems or vehicles of the 1990s.
We are pleased to announce that as of November 1999, 90% of our subscriptions are delivered via the Internet rather than on paper. We do, however, see that other 10% as especially important, because that is where those who are new to computing (yes, there still are some) find out about Linux and how easy it is to get their Linux system on the Internet. Over the years, most of our subscribers have moved from paper copies of LJ to an Internet subscription once we got them up to speed.
To make this electronic version possible, we had to upgrade our offices to a complete Linux network. Even though all of our editorial and advertising work was done on Linux systems from our humble beginnings in 1993, our production, subscription and accounting systems ran on other computers.
Today our seamless ISDN connections (and the satellite link to my office outside of Yaak, Montana) make that startup effort seem like a nightmare rather than the reality of only seven years ago.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem always to have the right tool for the job.
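The find-plus-grep combination mentioned above can be sketched in a few lines of shell. Everything here is illustrative: a throwaway temporary directory stands in for /home, and "ERROR" is just an example log entry to search for.

```shell
# Build a small sandbox with sample log files (layout and contents are made up).
tmp=$(mktemp -d)
mkdir -p "$tmp/home/alice" "$tmp/home/bob"
echo "ERROR: disk full"    > "$tmp/home/alice/app.log"
echo "all systems nominal" > "$tmp/home/bob/app.log"

# find locates every .log file under the tree; grep -l prints only the
# names of the files that actually contain the entry.
matches=$(find "$tmp/home" -name '*.log' -exec grep -l 'ERROR' {} +)
echo "$matches"

rm -rf "$tmp"   # tidy up the sandbox
```

The `-exec … {} +` form hands find's results to grep in batches, which is the same idea as piping find into xargs but without the quoting pitfalls of filenames containing spaces.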
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
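For context, cron's entire interface is a table of time fields and commands. A typical crontab entry (the script and log paths here are hypothetical) looks like this:

```
# min  hour  day-of-month  month  day-of-week  command
30     2     *             *     *            /usr/local/bin/nightly-report.sh >> /var/log/nightly.log 2>&1
```

This one runs a report script at 2:30 a.m. every day. Anything beyond fixed times, such as dependencies between jobs, retries on failure, or scheduling across machines, is exactly where the "beyond cron" question begins.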
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Doing for User Space What We Did for Kernel Space
- Tech Tip: Really Simple HTTP Server with Python
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Google's SwiftShader Released
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide