Linux Lunacy 2003: Cruising the Big Picture, Part I

by Doc Searls


"It's about having a good time and learning a lot along the way". That's what "Captain" Neil Bauman says about Geek Cruises, which he launched in May 2000. He wanted to take those two basic geek imperatives and equip them to an unlikely extreme: running a week-long conference of courses and seminars on a cruise ship and sailing it through some of the most thought-inspiring settings in the world.

That's what happened this past September, when Linux Lunacy III toured Alaska's Inside Passage on board the M.S. Amsterdam, a 780-foot-long Holland America cruise ship. It also happened with Linux Lunacy I (2001) and Linux Lunacy II, which covered the Eastern and Western Caribbean. And it will happen again next year with Linux Lunacy IV (2004), which will depart from Venice and call on ports in the Eastern Mediterranean. LL4 will be the first Geek Cruise in Europe. Like the first three, it will be sponsored by Linux Journal.

Most tradeshow conferences last one to three or four days. "Taking a whole week for everything--both professional and recreational--you can enjoy every port of call, every shore excursion you can book--and still take a full course load on the boat as it sails from port to port", Neil says. "We schedule everything so all the talks and meetings don't coincide with shore visits. We don't want anybody having to make a choice between learning something and having a good time. We want everybody to have it both ways."


Linux Journalists On-Board the 2003 Linux Lunacy Cruise

The Linux Lunacy III curriculum stretched across the greater Linux platform--LAMP for short. (That's Linux, Apache, MySQL, PHP, Perl, Python and everything else that fits in the suite.) Ted Ts'o gave a whole day (two long sessions) to the Linux kernel and added another lecture on filesystems. David Axmark gave sessions on MySQL. Randal Schwartz did Perl. Guido van Rossum did Python. Kara and Steven Pritchard gave LPI certification courses and tests. Bruce Perens covered Linux in tiny embedded applications, plus international wireless connectivity. Mick Bauer taught classes on Linux security. David Fetter taught Linux databases. Greg Haerr taught programming, and Keith Packard taught about graphics in X and fonts in Linux.

For high-level views, Charles Roth gave a talk about on-line collaboration. Ian Shields presented IBM's approach to speed-starting Linux application development. (IBM also was a co-sponsor.) Paul Kunz of the Stanford Linear Accelerator Center (SLAC) gave a talk on bringing the Web to America. I gave a talk on Linux in the enterprise. And Linus held his now-annual Q&A about the state of Linux in general. (A transcript of that talk will be posted on this site later this week.)

Conferences are opportunities to get hang-time with people who hardly have time for their own families, much less anybody else. But if you put a conference on a cruise ship, families can come along. Linus, Guido, Bruce and many others brought theirs. Normally I bring mine, but this year I brought my sister, who coincidentally used to be the COO for the company where Charles Roth was the CTO. So the cruise was a great way to share good times with friends and families and to make new friends along the way.

That even goes for friends at ports of call. On the last cruise we met with the Jamaica Linux Users Group. This time we met with two Pacific seaside LUGs, JLUG in Juneau and VLUG in Victoria.

"A good cruise is a great change in context", Neil says. "You can use the setting just to have a good time, or it can make you think in new ways about familiar subjects. That's one of the reasons geeks like to come on cruises. They can't help thinking anyway, but in a novel setting they often come up with new perspectives and new ideas."

So here's a day-by-day account of where we went and what we learned along the way.

Day One: Shipping out of Seattle

The urge to defy gravity with architecture always has been invested in gargantuan constructions, from pyramids to cathedrals to dirigibles to high-rise buildings. Now some of the world's largest buildings have hulls. Think of today's giant cruise ship as the aquatic equivalent of a high-rise: a long-float.

The Queen Mary 2 recently launched from France, setting a new record size for a cruise ship. It's 1,131 feet long, 21 stories high, carries 2,600 passengers and weighs in at 150,000 gross registered tons. (A "grt" actually is a measure of volume rather than weight. One grt is 100 cubic feet.) A week earlier, Royal Caribbean upstaged the Queen Mary 2's launch by announcing its purchase of an Ultra Voyager, which can accommodate 3,600 passengers and a crew of 1,400. The announcement didn't mention gross tonnage, but it did say the ship is 15% bigger than the rest of the line's gigantic cruise ships, which run in the 140,000 grt range. The Star Princess, one of the large ladies of the P&O Princess fleet, was docked right behind the Amsterdam in Seattle, and it followed us through the whole cruise. The Star Princess is a mere 110,000 tons and looked immense from all angles, including one from above. In Juneau we were able to look down from atop the Mt. Roberts tram and see the golf course on the ship's top deck.

To put all this in perspective, the Titanic was 46,329 grt. The USS Nimitz--the world's largest aircraft carrier--is 95,000 grt.

If there's a limiting factor to nautical gigantism, it's the Panama Canal. Built in the first decade of the last century to accommodate several ships in one lock, it now serves the same purpose as the box at airport counters that tells you the outer limits of your carry-on luggage dimensions. Ships that wish to navigate the Canal have to fit in locks that are 1,000 feet long, 110 feet wide and 70 feet deep. They can bloom out in five directions above the dock line, but otherwise need to fit in that box with at least a few feet to spare on all sides. Of course, ships that don't bother with the canal can be as big as they please. The current record holder is the Jahre Viking supertanker, at 1,504 feet long and 260,851 grt. Needless to say, it's not a Canal-compliant vessel.

Although The Cruise People Ltd. put the Amsterdam at the bottom of their largest passenger ship list (it's 88th), it's plenty huge at more than 61,000 grt. The boat has a passenger capacity of 1,380, three banks of elevators (four apiece, twelve in all) that run to eleven levels, two pools, a casino, several vast dining rooms, plus enough meeting rooms, bars, nightclubs, theaters and other facilities to qualify as a city block of fine upscale hotels. The Amsterdam also is the flagship of the Holland America line and is less than three years old. We were told on board that employees on other ships in the fleet are rewarded for good work by moving up to the Amsterdam. The last two Linux Lunacies traveled aboard the M.S. Maasdam, which also is an excellent ship. But the Amsterdam clearly is a cut above. The service is so tireless and professional that it's rare to go a day without seeing somebody cleaning a counter or polishing a fixture. The food is excellent, too, especially considering the large number of people being fed on a near-constant basis.

Seattle, that notoriously (though not exceptionally) wet city, had an extremely dry summer this year. Although the drought burned up countless lawns (nobody sprinkles in Seattle), it also gave us near-perfect departure weather. Mt. Rainier loomed large behind the city as we eased out of the port and went northward up Puget Sound.

Our companions on the outbound lanes were container cargo ships that provide a useful lesson for the software business, which traditionally has loathed the threat that commoditization poses to vast profit margins. Yet the Port of Seattle, like all the ship and freight depots of the world, abundantly demonstrates the fecundity of commodities as a base ecology of business.

Sure, bits by themselves may be as free as air--or even more free, because they're a cloneable commodity--but that doesn't mean there's no money to be made in storing, shipping, managing or building with them. Tempting as it is to look toward Microsoft (especially in Seattle) as a prototypical software company, the better view is toward IBM, Oracle, SAP and Computer Associates--all of which constantly are adjusting to take advantage of Linux as a building material and open source as a building method.

After the evening's cocktail party up in the Crow's Nest lounge, and as the lights of towns on the US and Canadian coasts faded into the black distance under the stars, I went down to the Internet Cafe, where I had signed up for a week's worth of Internet access by Wi-Fi (an essential grace of Geek Cruising). There I looked up software commoditization and found "Commoditization: The Innovator's Opening" by Ian Murdock (of Debian fame). He concludes with this:

The key thing technologists need to think about is innovating in their business models as much as (if not more than) innovating in their technology. Of course, it's a natural trap for the technologist to think about technology alone, but technology is but a small part of the technology business. Look for your competition's Achilles' heel, which more often than not is an outdated business model in a changing world, not technology. To attack your competition with technology alone is to charge the giants head on, and this approach is doomed to failure the vast majority of the time.

That's a nice answer to both John Carroll's "The Commoditization of Software" and Bruce Sterling's "Freedom's Dark Side" (dredged up by the same search). Both writers worry about what could happen to the software business after freedom and openness get through with it. Ian's simple answer is to leverage something other than the code. It ain't that hard a concept to grasp. (Hell, Ian did it himself with Debian and Progeny.) My favorite Don Marti line is "Information wants to be $6.95". Watching the cargo ships go by, I found myself thinking, "that's what your basic containers tend to cost".

Day Two: Crash Courses on the High Seas

Because our cruise left from Seattle rather than Vancouver (the other primary departure point for Alaska cruises), we skipped the lower parts of the Inside Passage and headed for Juneau by going out to sea and around Vancouver Island and the Queen Charlotte Islands. The seas were described by the ship's bridge (automatically, using a special channel on each cabin's TV) as rough, with waves in the 7.5-12 foot range. A few people felt woozy, but the front desk gave away plenty of free Dramamine. I've been on plenty of other ships (large and small) with rides that were a lot worse.

About the only recreational drawback during this leg of the cruise was the closure of the swimming pools, which instead provided excellent entertainment for crowds gathered on the Lido deck to study dramatic wave motions at the inboard laboratory that the pool had become.


The Wave Pool on the Lido Deck

By timing the photo above to the maximum dip of the Amsterdam's bow (to the right, or fore), I could later tell by my trusty protractor that the effects were produced by a 2° angle of pitch. In other words, it wasn't as bad as it looked.
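
For a rough sense of scale, here's a back-of-envelope calculation of what a 2° pitch does to the water level in a pool (the 40-foot pool length is my assumption for illustration, not a measurement):

    # Rough check: how far does the water level shift end-to-end in a pool
    # when the ship pitches 2 degrees? (Pool length is assumed, not measured.)
    import math

    pool_length_ft = 40            # assumed pool length
    pitch_deg = 2                  # the protractor estimate above

    rise_ft = pool_length_ft * math.tan(math.radians(pitch_deg))
    print(f"End-to-end water-level difference: {rise_ft:.1f} feet")  # about 1.4 feet

A bit over a foot of tilt across the whole pool: plenty to make impressive waves without being dangerous.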

That morning I attended Ted Ts'o's Introduction to the Linux Kernel session. For a non-technical type like me (and I speak in relative terms here; most of my non-geek friends think I'm as technical as a shop manual), attending a session like this is like learning a language by immersion. As Don pointed out in his report on the cruise, Ted's tutorial is one he's given a number of times (Don wrote about one in June 2002), each updated in pace with kernel developments. The cruise tutorial was based on Ted's tutorial from Usenix in June of this year. That was the month before I saw a talk Ted gave at the O'Reilly Open Source Convention in July. In that talk, Ted said the 2.6 kernel should be out "in a couple of months". Since those months had just gone by, I was curious to hear where he stood now on the subject.

Well, neither Ted nor Linus gave us a way to plot the asymptote. "It'll be ready when it's ready" was the report from both of them. (More later this week in the section on Linus's talk.) Linux in any case is a perpetually unfinished project, and Ted gave his class plenty of interesting new stuff to chew on. Here are my notes, typed live on a laptop, from the tutorial:

  • IBM (Ted's employer) wants hot-pluggable and hot-swappable memory. Look for it early next year.

  • Ted credits authors: Rusty Russell's module loader, Ingo Molnar's O(1) scheduler...

  • More power to the kernel, which now handles relocation and loading... enforces type checking... catches more incompatibilities...

  • Look for better throughput, as new block device drivers make use of new block I/O APIs. They'll also be able to address huge address spaces ... up to 16TB on 32-bit architectures.

  • Hope for laptops... support for new BIOS extensions and hardware... new and better ACPI support, CPU frequency scaling... Advanced Linux Sound Architecture (ALSA)

  • In the 2.4 kernel, tasks are never pre-empted by other processes. Kernel code may yield explicitly (eventually calling schedule()), or implicitly by taking a page fault. Review of interrupt changes from 2.2 through 2.5, work queues, priorities in kernel-mode CPU time, advantages and disadvantages of various schedulers on SMP systems...

  • "One thing Microsoft did that was a true benefit to the entire industry..." By insisting that a machine would not be ready for Windows 98 unless it had a pure PCI bus, Microsoft killed off the ISA bus. "Afterwards you could actually prove the device in advance..."

  • If you configure a high-performance system, you want to make sure it supports multiple IRQs, so you don't force interrupts to happen needlessly. PCI supports four IRQs...

  • "It's only the amateur professional paranoids who really care about /dev/random ." He adds, "/dev/random is needed for generating long-term cryptographic keys. But for many other uses, a cryptographic, psuedo-random number generator, is quite sufficient; there's no need for "true randomness". OpenBSD has a /dev/crandom device which is a cryptographic pseudorandom generator, but there's no point to do it in the kernel. You can also do it in a user-space library."

  • All kernel code operates in a process context. In 2.4 you are never pre-empted by other processes. Kernel code may yield explicitly. In 2.6, kernel code may be pre-empted if CONFIG_PREEMPT is enabled.

  • Some interesting history... Between service pack 3 and 4 in NT, Microsoft ripped out their networking stack and put back a better one. Then they ran the Mindcraft benchmark challenge against Linux. That's how they rigged the benchmark. See, back then Linux had a single-threaded networking stack. In service pack 4 the networking stack changed to Microsoft's advantage. They set the whole thing up and had Mindcraft push the button. The effect was to inspire the networking crew for Linux 2.4. Thanks to the challenge, the 2.4 networking stack became fully capable, making 2.4 a quantum leap better than it had been--and than Microsoft's alternatives. Yes, the scheduler and I/O subsystems still needed work, but the networking stack got fixed right away.

  • Corporations want to sell big-ass machines. They want to create scalability to add resources at the high end. But you need to make sure you don't affect the 2-CPU or few-CPU case. You can't impact low-SMP performance; do locking on the cheap in low-SMP systems. You need a kernel that works across many machines at many points in the scale, rather than point-optimizing the kernel for one CPU-count in the spectrum. This is the advantage of having a company like IBM involved. A Sun might care mostly about high CPU-count performance. IBM wants to care about the low-CPU count as well as the high.
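
Here's a minimal sketch of the /dev/random distinction Ted was making (my own illustration, not from the tutorial): for everyday needs, the kernel's non-blocking cryptographic pseudo-random source is plenty; the blocking /dev/random pool matters mainly when you're generating long-term keys.

    # Illustration only (not from Ted's tutorial): everyday randomness vs.
    # the blocking /dev/random pool.
    import os

    # os.urandom() draws from the kernel's non-blocking CSPRNG
    # (/dev/urandom on Linux) -- fine for session tokens, nonces and the like.
    session_token = os.urandom(16).hex()
    print("session token:", session_token)

    # Reading /dev/random directly can block until the kernel thinks it has
    # gathered enough entropy -- a cost Ted says only long-term keys justify.
    with open("/dev/random", "rb") as f:
        long_term_seed = f.read(16)
    print("long-term key seed:", long_term_seed.hex())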

The part of the tutorial that impressed me most was the early section on development choices based on sober assessments of all the different kinds of stuff the kernel needs to manage. There are physical (memory, CPU, disks, peripherals) resources and logical (process, security, quota) resources. And interfaces between resources that need to be secure and reliable. All while the demands rise in complexity and size. How do you prioritize the development of all that?

An answer occurred to me: "By putting first person interests in the plural". When Ted said "we" he usually wasn't talking about IBM. He also didn't need to make a sharp distinction between the IBM "we" and the Linux "we". For his work, the distinction was moot in most cases.

Afterwards, I told Ted the story of a conversation I had with a Microsoft guy at OSCon. The guy opened up by saying "The first thing you have to understand is that our senior management has decided that Linux is our number one threat." I replied by saying that Linux was a project, not a company. "But our competitors contribute funding and manpower to Linux development", he replied. "Look at IBM and HP. They're our competitors." "But aren't they also your OEMs?" I said.

To my surprise, Ted agreed with the guy. "Linux moves much more quickly, because it can take advantage of many companies' interests to improve, and as a result it improves much more quickly than if Microsoft was competing against a single OS being developed by a single company. So Microsoft is right to feel so threatened."

The next talk I attended was Bruce Perens' "GPRS and GSM International Wireless Connectivity for Road Warriors". It was nice to hear Bruce open with kind words for Eric Raymond, with whom he co-authored an open letter to Darl McBride on September 9. We talked SCO for a while--talking about it is almost unavoidable. "They (SCO) are doing some incredibly dorky things", he said, "and saying so many things that probably are not true." He gave credit where due, saying SCO's moves "do seem to be propping up their stock price... and as long as they keep that value up, they can take millions of dollars out where you won't see it." He also said, "They are a Microsoft proxy. And this is the way we will see Microsoft fighting open source in the future. There are any number of other proxies out there that would be glad to take millions of dollars in license fees from Microsoft."

On the mobile front, two things became clear: 1) a lot of interesting stuff is going on that isn't obvious here in the States; and 2) an awful lot of it is being done with Linux.

The next talk I took in was Charles Roth's "Online Collaboration: Understanding It, Picking It, and Making It Work in the Workplace". Charles and Neil Bauman have known each other since second grade, and Neil credits Charles with turning him on to technology and computing. (Charles cringed when I told him that Neil told Linus that Charles is "the smartest person I know".)

Charles' talk provided some good procedural advice for building and maintaining the forward-moving conversations that create a better "we". He's also a funny guy. Among his one-liners were these:

  • Subversion is a really useful thing.

  • Make power visible. Decisions must not be invisible, and must link to on-line conversation objects. People without power must see the process they're dealing with.

  • Give people ownership, and put them inside the on-line conversation space.

  • Does anybody really use a whiteboard? (On-line, that is.)

The evening talk was Paul Kunz' "Bringing the Web to America". Paul is a high energy physicist with the Stanford Linear Accelerator Center (SLAC, or "Slack") and an old-school technologist in all the best meanings of the label. His career runs from a Princeton PhD through CERN, Fermilab and SLAC, where he has worked since 1974. The man is a Big Scientist, and one of his missions is making clear the role played by both big science and the academic research community in bringing the Net and the Web into the world--and the self-interested, dumb and ultimately doomed systems they obsoleted and replaced along the way, often with great resistance.

Among the many surprising revelations in Paul's talk (at least for me) was that the European PTTs (national Post, Telephone & Telecommunications authorities) held such a massive monopoly over public networks. Thanks to their enormous political clout, the PTTs established the OSI X.25 packet service as a protocol that was not only mandated by law, but allowed the PTTs to charge by the kilobyte. I winced to recall paying upwards of $3,000/year around the turn of the 90s to communicate over various X.25 networks. "If the prime minister of Germany wanted to meet with the head of the PTT, he had to make an appointment", Paul said. "Even Washington felt the pressure to follow international standards, and ordered all laboratories to have a five-year plan to convert to X.25."

What broke the PTT's stranglehold? It was a combination of academic and scientific computing centers and networks, starting with ARPANET and various DECNets, but most significantly with BITNET in the US and EARN in Europe. A link from CUNY and BITNET in the US to EARN in Italy was established in 1984. Another from Italy to Israel followed, with physicist Haim Harari playing a crucial role. Then links followed to Switzerland and southern France. Then the Swiss allowed CERN to connect to Italy. Then, in 1985, the German PTT allowed temporary EARN links to the states "until their X.25 infrastructure was in place". Then DECNet links got hooked in. Then ARPANET linked Scandinavia to the US. So, Paul said, "by the time the PTTs had X.25 in place, the traffic on temporary networks was too high to handle with X.25."

Along the way, IBM "cleverly or accidentally" appealed to European scientific paranoia about "falling behind Americans because of lack of free networking". High energy physics also funded the spread of networking to Russia and China.

Paul went on to outline the more familiar parts of Internet history, making clear a fact that often gets lost in the telling: "The use of the backbone remains free, and ARPANET open-source culture persists."

While just about every geek knows that Tim Berners-Lee developed the Web while working on a NeXT machine, Paul gives NeXT and NeXTStep additional credit for bringing UNIX into the object-oriented GUI world. "The greatness of NeXTStep can be measured by the large number of quality applications produced by a very small community with an open-source culture. A mere mortal with a good idea could program an application in a reasonable amount of time just to try it out and share it with others." The Web, Paul said, was at least in part a product of Tim Berners-Lee's efforts to solve a high energy physics problem and to do it with others around the world. He did that by buying a NeXT computer, writing a hypertext application and extending the hypertext to documents on remote computers by adding a new protocol to the Net: HTTP.
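
To make that "new protocol" concrete, here's a minimal sketch of the kind of exchange HTTP introduced: a plain-text request sent over a TCP socket, with the document coming straight back. (The host and the HTTP/1.0 request form are my choices for illustration; Tim's original 1991 protocol, later dubbed HTTP/0.9, was even simpler--just "GET /path", with raw HTML as the reply.)

    # Minimal illustration of an HTTP request over a raw socket.
    # (example.com and the HTTP/1.0 form are assumptions for the example.)
    import socket

    host = "example.com"
    request = "GET / HTTP/1.0\r\nHost: " + host + "\r\n\r\n"

    with socket.create_connection((host, 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    print(response.decode("latin-1")[:300])  # status line, headers, start of the HTML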

By "complete accident" Paul also had a NeXT machine. This fact, however, didn't cause his pulse to rise when he saw Tim's announcement of the Web on August 19,1991. In fact, he didn't go out of his way to look Tim up when he visited CERN the next month. Instead, it was Tim who caught up with Paul.

After Tim sold Paul on the usefulness of the Web, Paul asked for a demonstration. Tim said all the Web's servers were in the same building. So, said Paul, "We uploaded my NeXT at SLAC with the browser software and ran it there with windows sent back to CERN. It worked well. Remarkably well. I told Tim I was going to put SLAC's SPIRES database on the Web as soon as I got home."

Several months passed before the two were back in touch. History happened when Tim got to see SPIRES on his browser over the Web. (Here are the SLAC screenshots.)

Paul called SPIRES-Web "the first killer app for the Web". Why? "It had 200,000 records physicists wanted to search". In a short time there were thousands of users in forty countries. SPIRES became Tim's demo application at a series of meetings attended by physicists. It also was seen by a growing number of hackers around the high energy physics community, including Marc Andreessen at NCSA. We all know what happened next.

Paul concluded by calling the Net and the Web "dramatic demonstrations of the results from an open, adequately funded, academic research community".

He also said his old NeXT server is still going strong.

See Part II for Days 3 and 4.

Doc Searls is Senior Editor of Linux Journal, covering the business beat. His monthly column in the magazine is Linux For Suits, and his bi-weekly newsletter is SuitWatch.

email: doc@ssc.com
