Linux Lunacy 2003: Cruising the Big Picture, Part I

Doc Searls' first report from the latest Geek Cruise.
Day Two: Crash Courses on the High Seas

Because our cruise left from Seattle rather than Vancouver (the other primary departure point for Alaska cruises), we skipped the lower parts of the Inside Passage and headed for Juneau by going out to sea and around Vancouver Island and the Queen Charlotte Islands. The seas were described by the ship's bridge (automatically, using a special channel on each cabin's TV) as rough, with waves in the 7.5-12 foot range. A few people felt woozy, but the front desk gave away plenty of free Dramamine. I've been on many other ships (large and small) with rides that were a lot worse.

About the only recreational drawback during this leg of the cruise was the closure of the swimming pools, which instead provided excellent entertainment for the crowds gathered on the Lido deck to study dramatic wave motion in the inboard laboratory the pool had become.

The Wave Pool on the Lido Deck

By timing the photo above to the maximum dip of the Amsterdam's bow (to the right, or fore), I could later tell with my trusty protractor that the effect was produced by a 2° angle of pitch. In other words, it wasn't as bad as it looked.

That morning I attended Ted Ts'o's Introduction to the Linux Kernel session. For a non-technical type like me (and I speak in relative terms here; most of my non-geek friends think I'm as technical as a shop manual), attending a session like this is like learning a language by immersion. As Don pointed out in his report on the cruise, Ted's tutorial is one he has given a number of times (Don wrote about one in June 2002), each version updated in pace with kernel developments. The cruise tutorial was based on the one Ted gave at Usenix in June of this year. That was the month before I saw a talk Ted gave at the O'Reilly Open Source Convention in July. In that talk, Ted said the 2.6 kernel should be out "in a couple of months". Since those months had just gone by, I was curious to hear where he stood now on the subject.

Well, neither Ted nor Linus gave us a way to plot the asymptote. "It'll be ready when it's ready" was the report from both of them. (More later this week in the section on Linus's talk.) Linux in any case is a perpetually unfinished project, and Ted gave his class plenty of interesting new stuff to chew on. Here are my notes, typed live on a laptop, from the tutorial:

  • IBM (Ted's employer) wants hot-pluggable and hot-swappable memory. Look for it early next year.

  • Ted credits authors: Rusty Russell's module loader, Ingo Molnar's O(1) scheduler...

  • More power to the kernel, which now handles relocation and loading... enforces type checking... catches more incompatibilities...

  • Look for better throughput, as new block device drivers make use of new block I/O APIs. They'll also be able to address huge address spaces ... up to 16TB on 32-bit architectures.

  • Hope for laptops... support for new BIOS extensions and hardware... new and better ACPI support, CPU frequency scaling... Advanced Linux Sound Architecture (ALSA)

  • In 2.4, kernel tasks are never pre-empted by other processes. Kernel code may yield explicitly (eventually calling schedule()), or implicitly by taking a page fault. Review of interrupt changes from 2.2 through 2.5, work queues, priorities in kernel-mode CPU time, advantages and disadvantages of various schedulers on SMP systems...

  • "One thing Microsoft did that was a true benefit to the entire industry..." By insisting that a machine would not be ready for Windows 98 unless it had a pure PCI bus, Microsoft killed off the ISA bus. "Afterwards you could actually prove the device in advance..."

  • If you configure a high-performance system, you want to make sure it supports multiple IRQs, so you don't force interrupts to happen needlessly. PCI supports four IRQs...

  • "It's only the amateur professional paranoids who really care about /dev/random ." He adds, "/dev/random is needed for generating long-term cryptographic keys. But for many other uses, a cryptographic, psuedo-random number generator, is quite sufficient; there's no need for "true randomness". OpenBSD has a /dev/crandom device which is a cryptographic pseudorandom generator, but there's no point to do it in the kernel. You can also do it in a user-space library."

  • All kernel code operates in a process context. In 2.4, kernel code is never pre-empted by other processes and may only yield explicitly. In 2.6, kernel code may be pre-empted if CONFIG_PREEMPT is enabled.

  • Some interesting history... Between service packs 3 and 4 of NT, Microsoft ripped out their networking stack and put in a better one. Then they ran the Mindcraft benchmark challenge against Linux. That's how they rigged the benchmark: back then, Linux had a single-threaded networking stack, and in service pack 4 Microsoft's networking stack had changed to its advantage. They set the whole thing up and had Mindcraft push the button. The effect was to inspire the networking crew for Linux 2.4. Thanks to the challenge, the 2.4 networking stack became fully capable, making 2.4 a quantum leap better than it had been -- and than Microsoft's alternatives. Yes, the scheduler and I/O subsystems still needed work, but the networking stack got fixed right away.

  • Corporations want to sell big-ass machines. They want to create scalability to add resources at the high end, but you need to make sure you don't hurt the 2-CPU or few-CPU case. You can't impact low-SMP performance, so locking has to stay cheap on low-SMP systems. You need a kernel that works across many machines at many points in the scale, rather than one point-optimized for a single CPU count in the spectrum. This is the advantage of having a company like IBM involved. A Sun might care mostly about high CPU-count performance; IBM wants to care about the low CPU count as well as the high.
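
To make the /dev/random point concrete, here is a minimal sketch (mine, not from Ted's tutorial) of how a Linux program reads from these devices. It assumes a stock Linux /dev layout: /dev/urandom is the kernel's non-blocking cryptographic pseudo-random interface, while /dev/random draws more directly on the entropy pool and is the one reserved for long-term keys.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char buf[16];
        /* /dev/urandom: the kernel's non-blocking cryptographic PRNG.
         * Swap in /dev/random for the long-term-key case Ted describes. */
        FILE *f = fopen("/dev/urandom", "rb");

        if (f == NULL) {
            perror("fopen /dev/urandom");
            return EXIT_FAILURE;
        }
        if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
            fprintf(stderr, "short read from /dev/urandom\n");
            fclose(f);
            return EXIT_FAILURE;
        }
        fclose(f);

        /* Print the bytes as hex, just to eyeball the output. */
        for (size_t i = 0; i < sizeof buf; i++)
            printf("%02x", buf[i]);
        putchar('\n');

        return EXIT_SUCCESS;
    }

Compiled with gcc, it prints 16 random bytes as hex; for the long-term-key case the only change is opening /dev/random instead.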

The part of the tutorial that impressed me most was the early section on development choices based on sober assessments of all the different kinds of stuff the kernel needs to manage: physical resources (memory, CPU, disks, peripherals), logical resources (processes, security, quotas) and the interfaces between them, which need to be secure and reliable, all while the demands rise in complexity and size. How do you prioritize the development of all that?

An answer occurred to me: "By putting first-person interests in the plural". When Ted said "we" he usually wasn't talking about IBM. He also didn't need to make a sharp distinction between the IBM "we" and the Linux "we"; for his work, the distinction was moot in most cases.

Afterwards, I told Ted the story of a conversation I had with a Microsoft guy at OSCon. The guy opened by saying, "The first thing you have to understand is that our senior management has decided that Linux is our number one threat." I replied that Linux was a project, not a company. "But our competitors contribute funding and manpower to Linux development", he replied. "Look at IBM and HP. They're our competitors." "But aren't they also your OEMs?" I said.

To my surprise, Ted agreed with the guy. "Linux moves much more quickly, because it can take advantage of many companies' interests, and as a result it improves much more quickly than if Microsoft were competing against a single OS being developed by a single company. So Microsoft is right to feel so threatened."

The next talk I attended was Bruce Perens' "GPRS and GSM International Wireless Connectivity for Road Warriors". It was nice to hear Bruce open with kind words for Eric Raymond, with whom he co-authored an open letter to Darl McBride on September 9. We talked SCO for a while -- talking about it is almost unavoidable. "They (SCO) are doing some incredibly dorky things", he said, "and saying so many things that probably are not true." He gave credit where due, saying SCO's moves "do seem to be propping up their stock price... and as long as they keep that value up, they can take millions of dollars out where you won't see it." He also said, "They are a Microsoft proxy. And this is the way we will see Microsoft fighting open source in the future. There are any number of other proxies out there that would be glad to take millions of dollars in license fees from Microsoft."

On the mobile front, two things became clear: 1) a lot of interesting stuff is going on that isn't obvious here in the States; and 2) an awful lot of it is being done with Linux.

The next talk I took in was Charles Roth's "Online Collaboration: Understanding it, Picking It, and Making it Work in the Workplace". Charles and Neil Bauman have known each other since second grade, and Neil credits Charles with turning him on to technology and computing. (Charles cringed when I told him that Neil told Linus that Charles is "the smartest person I know".)

Charles' talk provided some good procedural advice for building and maintaining the forward-moving conversations that create a better "we". He's also a funny guy. Among his one-liners were these:

  • Subversion is a really useful thing.

  • Make power visible. Decisions must not be invisible, and must link to on-line conversation objects. People without power must see the process they're dealing with.

  • Give people ownership, and put them inside the on-line conversation space.

  • Does anybody really use a whiteboard? (On-line, that is.)

The evening talk was Paul Kunz' "Bringing the Web to America". Paul is a high energy physicist with the Stanford Linear Accelerator Center (SLAC, or "Slack") and an old-school technologist in all the best meanings of the label. His career runs from a Princeton PhD through CERN, Fermilab and SLAC, where he has worked since 1974. The man is a Big Scientist, and one of his missions is making clear the role played by both big science and the academic research community in bringing the Net and the Web into the world -- and the self-interested, dumb and ultimately doomed systems they obsoleted and replaced along the way, often against great resistance.

Among the many surprising revelations in Paul's talk (at least for me) was that the European PTTs (national Post, Telephone & Telecommunications authorities) held such a massive monopoly over public networks. Thanks to their enormous political clout, the PTTs established the OSI X.25 packet service as a protocol that was not only mandated by law but also allowed the PTTs to charge by the kilobyte. I winced to recall paying upwards of $3,000/year around the turn of the 90s to communicate over various X.25 networks. "If the prime minister of Germany wanted to meet with the head of the PTT, he had to make an appointment", Paul said. "Even Washington felt the pressure to follow international standards, and ordered all laboratories to have a five-year plan to convert to X.25."

What broke the PTTs' stranglehold? It was a combination of academic and scientific computing centers and networks, starting with ARPANET and various DECnets, but most significantly with BITNET in the US and EARN in Europe. A link from CUNY, on BITNET in the US, to EARN in Italy was established in 1984. Another from Italy to Israel followed, with physicist Haim Harari playing a crucial role. Then came links to Switzerland and southern France. Then the Swiss allowed CERN to connect to Italy. Then, in 1985, the German PTT allowed temporary EARN links to the States "until their X.25 infrastructure was in place". Then DECnet links got hooked in. Then ARPANET linked Scandinavia to the US. So, Paul said, "by the time the PTTs had X.25 in place, the traffic on the temporary networks was too high to handle with X.25."

Along the way, IBM "cleverly or accidentally" appealed to European scientific paranoia about "falling behind Americans because of lack of free networking". High energy physics also funded the spread of networking to Russia and China.

Paul went on to outline the more familiar parts of Internet history, making clear a fact that often gets lost in the telling: "The use of the backbone remains free, and ARPANET open-source culture persists."

While just about every geek knows that Tim Berners-Lee developed the Web while working on a NeXT machine, Paul gives NeXT and NeXTStep additional credit for bringing UNIX into the object-oriented GUI world. "The greatness of NeXTStep can be measured by the large number of quality applications produced by a very small community with an open-source culture. A mere mortal with a good idea could program an application in a reasonable amount of time just to try it out and share it with others." The Web, Paul said, was at least in part a product of Tim Berners-Lee's efforts to solve a high energy physics problem and to do it with others around the world. He did that by buying a NeXT computer, writing a hypertext application and extending the hypertext to documents on remote computers by adding a new protocol to the Net: HTTP.
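
That protocol was tiny by today's standards. As a rough sketch (mine, not anything from Paul's talk), an original-style request amounted to a single GET line sent over a TCP connection to port 80, with the server answering with the document and closing the connection. The example.com host below is just the reserved placeholder domain, and most modern servers no longer honor such bare HTTP/0.9 requests, but the shape of the exchange is the point:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Resolve the placeholder host; any early web server would do. */
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
            fprintf(stderr, "name lookup failed\n");
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            perror("connect");
            return 1;
        }

        /* An HTTP/0.9-style request: one GET line, no headers, no version. */
        const char *req = "GET /\r\n";
        write(fd, req, strlen(req));

        /* The server sends the document, then closes the connection. */
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        freeaddrinfo(res);
        return 0;
    }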

By "complete accident" Paul also had a NeXT machine. This fact, however, didn't cause his pulse to rise when he saw Tim's announcement of the Web on August 19,1991. In fact, he didn't go out of his way to look Tim up when he visited CERN the next month. Instead, it was Tim who caught up with Paul.

After Tim sold Paul on the usefulness of the Web, Paul asked for a demonstration. Tim said all the Web's servers were in the same building. So, said Paul, "We loaded up my NeXT at SLAC with the browser software and ran it there with windows sent back to CERN. It worked well. Remarkably well. I told Tim I was going to put SLAC's SPIRES database on the Web as soon as I got home."

Several months passed before the two were back in touch. History happened when Tim got to see SPIRES on his browser over the Web. (Here are the SLAC screenshots.)

Paul called SPIRES-Web "the first killer app for the Web". Why? "It had 200,000 records physicists wanted to search". In a short time there were thousands of users in forty countries. SPIRES became Tim's demo application at a series of meetings attended by physicists. It also was seen by a growing number of hackers around the high energy physics community, including Marc Andreessen at NCSA. We all know what happened next.

Paul concluded by calling the Net and the Web "dramatic demonstrations of the results from an open, adequately funded, academic research community".

He also said his old NeXT server is still going strong.

See Part II for Days 3 and 4.

Doc Searls is Senior Editor of Linux Journal, covering the business beat. His monthly column in the magazine is Linux For Suits, and his bi-weekly newsletter is SuitWatch.

email: doc@ssc.com


Comments


Re: Linux Lunacy 2003: Cruising the Big Picture, Part I


openbsd actually has /dev/arandom as a cryptographically secure pseudo-random number generator. it's done in the kernel using the ARC4 algorithm (equivalent to the RC4 stream cipher and PRNG). it's not /dev/crandom.

$ uname -a
OpenBSD jose.someplace.com 3.4 GENERIC#57 i386
$ ls -l /dev/*random
crw-r--r-- 1 root wheel 45, 4 Nov 11 04:50 /dev/arandom
crw-r--r-- 1 root wheel 45, 3 Oct 30 14:05 /dev/prandom
crw-r--r-- 1 root wheel 45, 0 Oct 30 14:05 /dev/random
crw-r--r-- 1 root wheel 45, 1 Oct 30 14:05 /dev/srandom
crw-r--r-- 1 root wheel 45, 2 Nov 11 04:50 /dev/urandom

jose nazario, ph.d.
