Netsurfing With Linux
Purportedly, this article is about how an “obscure” operating system called Linux was used to launch a happy band of netsurfers into the wild ocean of the Internet. But it is really a rousing tale of adventure and discovery—with Linux playing the part of a trusty ship. That ship enabled us to chart a map of the vast Internet ocean, in the guise of Netsurfer Digest, a free, sponsor-supported e-zine (electronic magazine), serving as a gateway to on-line adventure for netsurfers all over the world.
It all started in the spring of 1994 when I gathered together a small band of netsurfers and, through the proper application of persuasion, hand waving, and free food, convinced them to put together an on-line publishing company. Following venerable startup tradition, we scribbled on numerous coffee house napkins, ate lots of pizza, exchanged tons of e-mail, and gave birth to Netsurfer Communications. Our goal was to publish interesting and user-friendly e-zines using the hot new technology of the World Wide Web.
Being a fairly on-line literate bunch, we decided that our products would be made available exclusively on-line, and that the company would be run as a virtual corporation. We intended to take every advantage of the vast leverage provided by modern communication technology. By existing entirely on-line, we would be able to effortlessly communicate with our consumers, tap a global pool of talented contributors, and keep our overhead to a minimum.
While prototypes of our flagship publication, Netsurfer Digest, were being prepared and the production process was being designed, a number of technical decisions had to be made. One of the most important was the choice of operating system for our production facilities and for our Internet site. Whatever we chose had to meet a number of fairly stringent requirements.
First, we needed a very reliable e-mail platform. Our system had to be able to support mailing lists serving thousands and provide reliable e-mail storage for our internal editorial communications. Second, we needed a reliable World Wide Web and FTP site. Back issues of our e-zine as well as various background information had to be made available to people all over the world, at all hours of the day and night. We also needed the ability to easily change and update this information, sometimes by automated scripts. Finally, it was important that whatever environment we chose supported good development tools. We planned to create a number of custom programs to aid in production and distribution of our e-zines.
It didn't take a genius to figure out that some flavor of Unix was what we needed. After all, Unix is the native operating system of the Internet and may well be the best development environment ever designed. The only question was which brand of Unix to go with. There were a number of commercial versions available running on expensive workstations or feature-loaded PCs. The key word there was “expensive”. Now, if you've been involved with startups, then you know that next to the occasionally scrambled brains of the startup team, the most precious resource is cash. You only spend it on items absolutely essential for keeping the venture going, such as marketing, hardware, and pizza. This was definitely on our minds when the time came to choose our operating system.
It just so happened that at the time I had a copy of Linux available, which I had purchased on CD-ROM from Trans-Ameritech. I had played around with it at home and had also heard good things about it from my friends. It appeared to be a full-featured, relatively robust operating system which might be able to meet our needs through the early stages of the e-zine. The price was right, and what's more, we did not need some super-expensive machine to run it. We had little to lose by giving Linux a try. If it worked, we had a very inexpensive solution to our requirements. If not, well, we could always go with one of the more expensive commercial operating systems, something we figured we'd have to do anyway as our enterprise grew.
It was clear that Linux was a nice stand-alone operating system, but we needed to find out if Linux could reliably support an Internet site. Early in May of 1994 the big day came. I had already installed Linux version 1.0.9 on our machine, a humble 486DX33 PC with 8 meg of RAM, a 245 meg drive, and an Ethernet card. The first priority was to see if we could hook it up to the Ethernet network at our provider site and, from there, work on getting it on the Internet. So I lugged the machine across the San Francisco Bay to Berkeley and sat down with our netmaster, Bill Woodcock, to install it on his network. I was prepared to spend a few hours fiddling with the system to get it running. I even brought some snacks to munch on while we spent the afternoon getting the whole contraption working.
First, we read the instructions in the Ethernet-HOWTO, which essentially said to make sure that the kernel had been compiled with support for our Ethernet card. No problem; I had already done this. Next we read the NET-2-HOWTO, which told us how to configure the TCP/IP network. This boiled down to either running a utility called ifconfig or changing a few well commented lines in the rc.inet1 and rc.inet2 files. It seemed deceptively simple, and after we made the changes, Bill and I looked at each other skeptically and rebooted the machine.
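For readers curious what those “few well commented lines” amounted to, the classic configuration boiled down to an ifconfig/route pair, whether typed by hand or placed in rc.inet1. The sketch below uses hypothetical addresses (the real values came from our provider); on a modern system the iproute2 `ip` command has largely replaced these tools, but the idea is the same.

```shell
# Bring up the Ethernet interface with its assigned address and netmask.
# 192.0.2.10 and 192.0.2.1 are placeholder (documentation-range) addresses.
ifconfig eth0 192.0.2.10 netmask 255.255.255.0 up

# Point the default route at the provider's gateway.
route add default gw 192.0.2.1

# Sanity check: can we reach the gateway?
ping -c 1 192.0.2.1
```

After a reboot (or after running these by hand), the machine should answer pings from the rest of the network, which is exactly what ours did.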
I've spent all my professional life working with complex hardware and software systems, first as a mainframe design engineer, and later, as a software manager. In my long experience, I've learned that the first time you test a new piece of software, turn on new hardware, or configure a network, it never works. Never. There is always some fiddling and adjusting, or even bug-fixing, which must be done before the whole thing works vaguely the way it was designed to. That's just the nature of the beast. Imagine my consternation when the machine came up, recognized the network, and responded to pings from the rest of the world. This just does not happen in the real world. I was, frankly, stunned and amazed. But in a good way.
In short order, we brought up the standard daemons and had telnet, FTP, and e-mail going between our machine and the rest of the network. What I thought would be a long afternoon of debugging and digging through obscure on-line documentation turned into a half-hour job. We took the rest of the afternoon off and went out to get some pizza. I even sprang for extra sauce.
Bill spent the next few days configuring the machine to our liking. He arranged domain name registration, set up secure FTP and WWW software (all freely available on the Net), wrestled with e-mail configuration, and set up the necessary user accounts. We were on the Internet and ready to support beta testing of our first e-zine, Netsurfer Digest.