Import This: the Tenth International Python Conference

by Mike Orr

"Import This" was the slogan for the Tenth International Python Conference, ("It Fits Your Brain" was the motto of the ninth conference, which I wrote about in a previous article). The event took place February 4-7, 2002, at the Hilton Alexandria Mark Center in Alexandria, Virginia, a few miles from the Pentagon City station on the Washington, DC metro. According to the conference registrar, 244 people attended, 83% of last year's attendance. The decrease seems to be a result of companies' tighter travel budgets this year and the fact that the conference wasn't held in California.

The keynote talks were unusual this year because both were delivered by Python outsiders. What they did have was experience in other relevant areas, allowing them to give us fresh ideas that we in the Python community may not have been able to come up with ourselves. The two speakers were Andrew König, who played a key role in the standardization of C++, and Tim Berners-Lee, father of the World Wide Web. Besides the keynotes, there were four tracks of seminars: Refereed Papers, Zope, Python Tools, and Web Services and Protocols.

This article describes the keynotes and seminars during the two main conference days, as well as Guido van Rossum's talk and the other discussions held during Developers' Day.

Andrew König: "Notes From a Polyglot Outsider"

Andrew has been programming since 1967 and has used many programming languages. He compared Python with four of them--Fortran, APL, Snobol and ML--that are each quite different from Python and from each other.

Every language has an underlying theme: a specific set of problems it was designed to solve, often because the existing languages weren't adequate for the task. Fortran is designed for efficient numerical computations. APL is good with arrays. Snobol has exceptional string and regular-expression handling. ML is efficient for functional programming. But as Andrew demonstrated by showing the same program in different languages, one language's strength is another's weakness. Snobol, for instance, is weak in data structures.

Surprisingly (or not so surprisingly), Python came out well on all counts. It has good mathematical capability (it was created by a mathematician), especially with the Numeric Python module, good array support, good string and regular-expression handling (now using Perl-style regular expressions with an engine written in C), and good support for functional programming. I pressed Andrew to list some weaknesses in Python. He didn't know any off the top of his head, and he was unwilling to venture any because his knowledge of the language is limited. It's nice to know Python doesn't have any glaring holes somebody with his experience would immediately notice and curse over.

Those of us who have been in the Python community for a few years know about Python's not-easily-fixable problems: slow dictionary lookups (and thus slow lookups of non-local variables), the Global Interpreter Lock (which prevents multiple threads from executing Python code simultaneously), etc., but these are implementation issues rather than language issues. On the language front, we mostly have requests for minor features such as an irange function (returning (index, item) pairs), interfaces, etc. Perhaps the biggest problem Python has in the language area is the tabs-vs-spaces debate, and that's only a problem depending on who you talk to. Many former requests have been implemented or are being implemented, such as the unification of types and classes, the super keyword (currently a function), garbage collection, etc. I won't even bother mentioning the indentation-vs-braces debate because that's a religious issue, and most Pythoneers realize indentation is the One True Way once they get used to it.
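
The irange function mentioned above is easy to sketch in plain Python; this is just an illustration of the requested (index, item) behavior, not a standard-library function:

def irange(sequence):
    # Return a list of (index, item) pairs, one per item in the sequence.
    return [(i, sequence[i]) for i in range(len(sequence))]

irange("abc")          # [(0, 'a'), (1, 'b'), (2, 'c')]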

Andrew encountered Python via Mailman, a mailing-list system written in Python. He looked at the source and liked the way it was designed. Issues such as indentation vs braces, or the fact that assignment (a = b) copies a reference rather than data, are superficial in Andrew's opinion. What is not superficial is that Python:

  • is interactive.

  • has good library support.

  • can do introspection (which is what makes Idle possible).

  • allows you to change data structures dynamically. You can add attributes to existing instances simply by assigning to them, as the short example after this list shows.
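
Here is a quick illustration of that last point; the Config class and the verbose attribute are made up for the example:

class Config:
    pass

c = Config()
c.verbose = 1          # this attribute did not exist until the assignment
c.verbose              # 1
# Other Config instances are unaffected; Config().verbose would raise AttributeError.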

Another major advantage of Python over these other four languages is that it has an active and supportive user community and that the language developers actively solicit suggestions from that community. Python is the only language of the five that was developed by an open group. Python strikes a good balance between central control and user control.

Python also avoided many of the standardization traps C++ fell into. C++ standardization should have been delayed two years, he said, to allow the initial proposals to mature before they fossilized. Another difference is that C++ standardizes itself via bureaucratic decision; Python standardizes itself by somebody making a reference implementation and convincing people to use it.

Tim Berners-Lee: Webizing Python, Part 1: URI Identifiers

Tim started his talk by confirming his credentials as a Python outsider. "I don't know much about Python", he said, "but Python is fun." Guido had tried hard to convince him to use Python, and he tried hard to convince Guido to do something with the web. Finally, Tim became a Python enthusiast when he tried to learn Python on a plane trip. He had already downloaded Python and its documentation on his laptop, and between takeoff and landing he was able to install Python and learn enough to do something with it, "all on one battery."

Not many people can go to a web browser and type a URL and think, "I invented this baby." So it's natural that Tim would want to webize everything and that his talk would be called "Webizing Python". "Webizing" means replacing all identifiers with URIs. ("URI" is the technical term for that thing you call a URL, although it's wider in scope. See http://www.w3.org/Addressing/.) He also proposed a graph data type to help Python process Web-like data, which is discussed in the next section. Both of these goals are consistent with Tim's vision for the Web, as described in his Short History: "The dream behind the Web is of a common information space in which we communicate by sharing information." Currently, the Web excels at making human-readable information universally accessible. It has not yet, however, reached its potential for delivering machine-readable information (database data) in a similar manner. This is the goal of such efforts as XML, RDF (discussed in the next section) and integrating URIs into programming languages.

What's the easiest way to replace identifiers with URIs? Just do it and see what breaks. Then change the program (Python) to recognize the new identifiers and, again, see what breaks. Why would you want to do this in the first place? A statement such as

import http://www.w3.org/2000/10/swap/llyn.py

gives so much more information than

import llyn

such as who created it and/or is responsible for its maintenance and where to find the latest version for automatic updating. It also facilitates module endorsements, which are like digital certificates: they allow a trusted authority to verify that the module you have is the official version made by some reputable party. (The standard module function urllib.urlopen already opens a URI as if it were a read-only file, honoring redirects transparently.) Using a URI does not mean that Python has to dial your modem every time you run the script; it can call a routine that acts as a proxy server and load the module from the local disk, just like Python does now.
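
To make the idea concrete, here is a minimal sketch (mine, not Tim's) of fetching and running a module from a URI with the Python 2 standard library of the time; import_uri is a made-up helper name:

import urllib, imp

def import_uri(uri, name):
    # Fetch the module source; urlopen follows redirects transparently.
    source = urllib.urlopen(uri).read()
    # Build an empty module object and execute the source in its namespace.
    module = imp.new_module(name)
    exec source in module.__dict__
    return module

# llyn = import_uri('http://www.w3.org/2000/10/swap/llyn.py', 'llyn')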

These are all Tim's pie-in-the-sky ideas, not something Python has committed to doing. Making Python (or any language) URI-compatible means overcoming various "closed-world assumptions" in the language, just as making a proprietary database format (or HTML) XML-compatible requires some changes.

What are Python's closed-world assumptions? First, the URI import above won't even compile because ':' and '/' aren't allowed in a module name. We'd either have to extend the identifier syntax, introduce a new quoting mechanism such as <URI>, or make the identifier a quoted string. Using full URIs in expressions would also break, because the '/' would be interpreted as the division operator. One possibility is to assume that "." in Python identifiers is "/" in the corresponding URIs.

Of course, you don't want to type the absolute URI every time you use a variable in an expression anyway. You just want a short alias name. Python's "import ... as ..." syntax already does this, and we can teach import to implicitly assign the name llyn to the long URI above.
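
For comparison, here is today's aliasing syntax, with the webized form Tim imagines shown only as a comment because it is not valid Python:

import xml.dom.minidom as minidom        # bind a long dotted name to a short one

# The hypothetical webized equivalent:
# import http://www.w3.org/2000/10/swap/llyn.py as llyn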

Of course, local variables would not be linked to URIs; that would be silly. A local variable is private to its enclosing function.

Once we have modules accessible via URIs, module attributes are also accessible individually. module.attribute maps directly to a URI as http://example.com/directory/module.py#attribute. Which brings us to Tim's next topic....

Tim Berners-Lee: Webizing Python, Part 2: the Graph Data Type

The more I tried to write this section, the more I realized how little I understood graphs, so I recommend you read this section alongside Tim's slides from the talk, W3C's overview of RDF and a gentle What is RDF? introduction for a more complete picture.

A graph is not what you drew in geometry class, but something like a three-dimensional dictionary, or a dictionary of dictionaries, or a list of triples with the first two parts acting as keys, only it's more than all of that combined. The first part is akin to an ordinary dictionary key: a unique identifier for an object, or what database people call a "primary key". The second part represents an attribute of that object. The third part is the value.

Why stop at one level of attributes? Why not recursively go down an arbitrary number of levels? thats.the.python.way.isnt.it? It turns out that one level of properties is exactly what you need to represent items in a database table (row,column,value), a tree of hyperlinks (URI,fragment,content) or Resource Description Framework (RDF) metadata. RDF is an XML-compatible format that allows you to describe the basic properties of web pages (title, author, abstract, publishing date, etc.). That's enough information to build a smart search engine. Current search engines operate on only one criterion--raw document text--because there are no other criteria available. But with metadata, they could search on multiple criteria.

So one node in an RDF graph might be:

('http://example.com/dir/subdir/important-document.html', 'author', 'Barney Rubble')

But RDF is not limited to indexing mundane web pages. It can be used for medical information or any other type of database data.

For instance, say we have a graph literal like this:

g = {sky color blue, gray; madeOf air.
     sea color grey.
     grey sameAs gray. }

that defines the following graph:

Row      Column    Value
sky      color     blue
sky      color     gray
sky      madeOf    air
sea      color     grey
grey == gray
colour == color
Each word is a variable, which may have been initialized from a string literal, a URI or another object. Each node definition ends in a period ("."). Syntax shortcuts (",", ";") allow multiple nodes to be created from one line.

sameAs aliases two variants together, so that a query for one will also match the other. Here this is used to keep Brits happy when they spell "gray" and "color" wrong (kidding). I'm not sure whether Tim intended sameAs to generate an ordinary graph node or a special alias object, so I have shown it specially in the table.

Armed with your graph object, you can run queries like:

g.about[sky]    # Returns a dictionary: 
                # {color: [blue, gray], madeOf: air }

Here, 'color' maps to a list of multiple values (blue, gray). Python dictionaries cannot have duplicate keys, but graphs may have multiple nodes with identical key pairs. The values will have to be returned somehow, and putting them into a list is as good a way as any.

You may also want to query, "Find all the nodes where X is true." This is a variation of the "find everything under a common parent" task. One or more parts can be wildcarded with a "*" symbol, or maybe Python's None object would be better. Here are some possible APIs under consideration:

g.any({sky color *})                   # Returns a list: [blue, gray]
for row, value in g.all({* color *}):  # Iterates over matching (row, value) pairs.
g = g + {sky color blue}               # Adds a node to the graph.
g.toxml()                              # Serialize to XML format.

If the graph includes some kind of date or version columns, you could also query, "Is there anything out of date that this node depends on?"

Python itself can benefit from graphs to provide a standardized way to return a wide variety of information that is now handled by multiple ad hoc formats: system information returned by os.system, introspection data returned by the inspect module (e.g., the methods provided by a certain object), an improved DB-API (database API) and a serialization/parsing format for any data.

Another thing we'd need is a visual graph browser to inspect, update, reload and delete nodes.

Somebody in the audience asked whether you really needed to change Python, since one can implement a graph as an ordinary class. Tim said yes, you can use a class, but the reason for building the type into Python itself is to provide a more convenient syntax for constructing graph literals. In that sense, it's similar to the complex, slice and ellipsis types, which are not used in ordinary Python but make Numeric Python more convenient.
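
As a rough idea of what such an ordinary class might look like, here is a toy graph built on a list of (row, column, value) triples; the method names echo the API sketches above, but the details are my own assumptions, not Tim's code:

class Graph:
    def __init__(self):
        self.triples = []              # each entry is a (row, column, value) triple

    def add(self, row, column, value):
        self.triples.append((row, column, value))

    def about(self, row):
        # Return {column: [values...]} for every triple whose row matches.
        result = {}
        for r, c, v in self.triples:
            if r == row:
                result.setdefault(c, []).append(v)
        return result

    def any(self, row=None, column=None, value=None):
        # Return matching triples; None plays the role of the "*" wildcard.
        return [(r, c, v) for (r, c, v) in self.triples
                if (row is None or r == row) and
                   (column is None or c == column) and
                   (value is None or v == value)]

g = Graph()
g.add('sky', 'color', 'blue')
g.add('sky', 'color', 'gray')
g.add('sky', 'madeOf', 'air')
g.about('sky')                 # {'color': ['blue', 'gray'], 'madeOf': ['air']}
g.any(column='color')          # every triple whose column is 'color'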

Dan Sugalski: Parrot

One of the most interesting talks occurred in the Python Tools Track, "The Parrot Project: Building a Multi-Language Interpreter Engine" by Dan Sugalski. Parrot originated as an April Fool's joke concocted by Guido and Larry Wall. (What do you expect from somebody who names a language after Monty Python? Parrot is named after The Dead Parrot Sketch. O'Reilly even put up a pseudo catalog entry for the nonexistent Nutshell book.)

But then a funny thing happened on the way to the comedy club. The Perl6 team, charged with making an improved bytecode interpreter for Perl, realized that it would take just a little more work to make the interpreter run Python, Ruby, Objective-C and other languages, too. For Python, this means a third implementation of the language, alongside the standard Python (called CPython because it's written in C) and Jython (a version of Python written in Java). Guido said in his Developers' Day talk that he's not ready to give up on the CPython codebase for Parrot, so CPython will remain the standard and Parrot will be an alternative.

It turns out that most of the visible differences between programming languages--the stuff that religious wars are made of--are really only relevant at the parser level. Once the parser has tokenized the source code, what's left are "variables", "expressions", "functions", "for loops", etc., things that are common in every modern language. Perl, in its attempt to be the kitchen-sink borg language that assimilates everything, in the same way that Emacs is the Editor to End All Editors, has an interpreter that performs a superset of what all other languages need. Anything that another language needs that Perl lacks can more or less easily be added to the interpreter without bothering Perl.

What does Parrot offer a language? A ready-made back-end interpreter that provides OS independence, a rich set of data types, dynamically changeable types (which makes classic optimizations à la C difficult), closures, continuations, matrix math, curried functions and garbage collection. It provides a safe execution environment (resource quotas, access restrictions) for untrusted eval'd code.

Another gee-whiz feature is that you can parse Python source code to Python bytecode, convert that to Perl bytecode and then unparse it to Perl source code. I assume Parrot will come with a command-line utility to take care of the details for you.

Parrot's design goals are to:

  • run Perl code fast.

  • clean up the grotty bits of the existing interpreters (which come "preinstalled on your system for your inconvenience").

  • provide a good base for Perl's language features.

  • be easily extensible and embeddable. (Perl's binary API is so horrid he never wants to use it again.)

  • have a long-range scalable design he won't be embarrassed about ten years from now. ("Often software lasts longer than it should.")

The secret, says Dan, is that "Python, Perl, Objective-C and Ruby are all just ALGOL with a funny hat." They all have object models that are "the same except where they're different". The differences are minor, and anything missing in the hardware or interpreter can be emulated in the runtime library at the price of speed.

Parrot assumes modern hardware with good-sized L1 and L2 caches and lots of RAM. The interpreter tries to build long "pipelines" of machine instructions, so that if an unpredicted decision is made that blows the pipeline and it has to go back to main memory, it does it in a big way that minimizes the need to do it again for a while.

Parrot is register-based rather than stack-based. This performs no worse than stack-based systems on register-starved architectures like the x86 but much better than stack-based systems on other hardware. "If you don't like registers, pretend it's a large named temp cache."
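
To make the stack-versus-register distinction concrete, here is a toy illustration in Python (it has nothing to do with Parrot's actual instruction set): the same expression, a + b * c, written as stack-machine code and as register-machine code, each driven by a few lines of interpreter:

def run_stack(program, env):
    # Stack machine: operands are pushed; operators pop their arguments.
    stack = []
    for op, arg in program:
        if op == 'push':
            stack.append(env[arg])
        elif op == 'mul':
            b = stack.pop(); a = stack.pop()
            stack.append(a * b)
        elif op == 'add':
            b = stack.pop(); a = stack.pop()
            stack.append(a + b)
    return stack.pop()

def run_registers(program, regs):
    # Register machine: every instruction names its destination and sources.
    for op, dest, src1, src2 in program:
        if op == 'mul':
            regs[dest] = regs[src1] * regs[src2]
        elif op == 'add':
            regs[dest] = regs[src1] + regs[src2]
    return regs['r0']

run_stack([('push', 'a'), ('push', 'b'), ('push', 'c'),
           ('mul', None), ('add', None)],
          {'a': 2, 'b': 3, 'c': 4})                          # 14

run_registers([('mul', 'r4', 'r2', 'r3'),
               ('add', 'r0', 'r1', 'r4')],
              {'r1': 2, 'r2': 3, 'r3': 4})                   # 14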

Parrot's native data types include Integer, Float, Parrot String and Parrot Magic Cookies (PMC). PMCs are a generic object type. However, language implementors are encouraged to use only PMC for all their types--even numbers--because Parrot's other three types are really only optimizing shortcuts for the interpreter and do not necessarily have the full behavior needed by your language's types.

Dead objects are detected and garbage collection is done separately. Dan claims garbage collection is very difficult to get right and most languages do a bad job of it. That's why he recommends languages use his garbage collector rather than their own. An audience member argued that Python's reference-counting scheme is more portable, but Dan stuck to his claim that reference counting sucks. However, Parrot does expose an interface to allow languages to do pseudo reference counting for those occasions where it's useful. C extension programmers that create Parrot objects do have to register them with the garbage collector if they hold onto the object past the lifetime of the function that created it. (If the object is returned from the function or is just discarded at the end of the function, the programmer does not have to register it.)

Language parsers can be pushed in and out at runtime. For instance, your base parser may be for Python source code. Then you encounter a regular expression, so you push a regex parser. Then, a while later, you encounter a database query string, so you push a SQL parser.

Parrot's basic operations are done, and the interpreter is Turing complete. The parser isn't done yet, but an external compiler written in Python generates Parrot bytecode for a subset of Python source constructs.

Zope

The Zope keynote, "Open Source Content Management", was delivered by Tony Byrne of CMS Watch, an industry portal for content-management issues. What is a content management system (CMS)? It's a set of business rules and editorial processes managed by people. Specifically, it's not a category of software. Anybody with a web site has a CMS even if they do it all by hand. Even the increasingly-popular blogging is arguably a kind of personal content management. Tony calls the software "CM tools"; however, some others call them CMSs, including another author quoted below.

So, why use CM tools in your CMS?

  • To devolve control and avoid the webmaster bottleneck (meaning, nothing happens unless the webmaster does it).

  • To allow people to specialize in what they do best (creating and maintaining content), letting the machine do what it does best (the mundane tasks).

  • To divide content into flexible, reusable chunks.

  • To easily provide alternate presentation formats for the disabled.

Tony exposed a few industry buzzwords:

Scalable platform vs out-of-the-box

These are two of Tony's favorites. In fact, they're mutually exclusive. Easy-to-install, out-of-the-box products probably won't work for you unless your situation is exactly like the one the authors envisioned. Conversely, scalable products are difficult to install.

XML compliant

This is another gem. How can a product be XML compliant when XML itself is changing? Does merely being able to dump a data structure into an XML text file count as XML compliance? Fine, but anybody and their dog can do that.

Intuitive interface

Intuitive to whom? To the program's authors, of course. Were end-users involved in designing the "intuitive" interface?

Customizable

How? How much effort would be required to take the product as shipped and create the demo program the sales staff initially showed the customer?

Dynamic content management

This is not always necessary. More and more sites are discovering the value of pregenerating "static" pages for data that doesn't change extremely often. Not only does it cut down on server resources, but it's search-engine friendly. The only time dynamic pages are truly necessary is when the page is customized according to unpredictable user input or changes very rapidly (say, the latest stock quotes). A hybrid approach is also possible: pregenerate the portions of a page that don't change often, and leave a box for the content that must be calculated on the fly. But often you'll find that trivial personalizations ("Good morning, Sara. Your last login was 1 day 12 minutes 3 seconds ago.") are more hassle to maintain than they're worth.

Secure

CMS Watch has an article on the six questions you should ask your CMS software vendor regarding security. If they say, "Mega Big Bank uses our CMS and they wouldn't if they weren't sure of the security", the author, Colin Cornelius, responds, "I could tell you a thing or two about how financial institutions select a CMS, and security doesn't always enter into it."

There are three phases of web content management: production (what happens before somebody clicks a link to that page), publishing (what happens after they click) and distribution (how the content is reformatted and sent to alternate output devices). What's the best way to design a workflow system that adequately addresses all three phases and by which you can evaluate potential tools? Use a plain old word processor or spreadsheet.

CMS is really an immature market. There are 220 vendors of CMS software, of varying qualities. Many people use two systems, one for production and one for publishing.

Many CMS companies have gotten out of the search-engine business, and good for them. Designing a good search engine is difficult. Paying $10,000 to a dedicated search-engine company that knows their stuff is well worth it.

Syndication is one thing Tony recommends. That means the sharing of article metadata with other sites, such as LJ's news.rss file or the "recent Slashdot items" links you see on some sites. There's an article about syndication on the CMS Watch site.

Here's Tony's analysis of open-source CM technology, including Zope and all others: good cost, requires substantial support, the support is great but the documentation sucks.

Tony closed with a warning to the Zope community, a list of the top things people say when he mentions Zope:

  • Why does it have such a funny name?

  • I looked at Zope, but I still don't understand what it is.

  • It seems like a kind of religion.

  • I'd consider it for my Intranet, but it won't necessarily work with my Java or Oracle production server.

  • We're a Java/COM/Perl shop.

  • Is the Zope corporation for me or against me? I'm an integration consultant. Does the Zope company want to make me more productive or steal my business?

Tony thinks it's usually better to go with an off-the-shelf content management tool than to roll your own. He predicts that Java will continue to be more and more used for XML and that production and publishing will continue to be separate.

Other Talks

David Cooper demonstrated a large NATO intranet that they converted to Zope in eight months. They made a content management tool called WISE that runs on top of Zope. Like Zope, WISE turned out to be much more popular than they expected. It looks like the tool won't be released publicly, although I'm hearing conflicting information about it.

I gave a talk on the Cheetah string-template system (shameless plug: the paper itself), and Geoff Tavola gave an introductory talk on Webware. There was also a talk on Twisted, an Internet application framework that aims to be the next generation of technology after the Web.

Paul Pfeiffer talked about "The Usability of Python, Perl and TCL", looking at two "moderately complex" programming tasks in all three languages: a paint program and a chat program, both using Tk graphics. He looked mostly at maintainability, because that's where he says 70% of the programming effort is directed. Python outperformed both Perl and Tcl, although Tcl made a good showing in the chat program. The Python programs had the fewest defects, although the difference was more pronounced in the paint program. The biggest problems were object design and sockets (Perl sockets are not first-class objects). Tcl had the best string handling of all three languages. Python had the best performance (even though its Tk implementation calls Tcl behind the scenes), but the tab/space confusion and the lack of non-blocking I/O were hindrances. He found the Perl libraries to be poorly documented.

Developers' Day

Guido's only talk took place on the morning of Developers' Day. Many, including me, wished he could have talked more, but he's been occupied with more important matters: the birth of his son, Orlijn, this past November (pictures on Guido's web site). The talk was called "The State of the Python Union", or "An Overview of the Things that Keep Guido Awake at Night." super, property and slot are evolving into context-sensitive keywords; the first two are builtin functions in Python 2.2, but that was just an experimental step, especially since the current super syntax is so clumsy: super(MyOwnClass, self).method(args...).
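
For readers who haven't tried the 2.2 builtin yet, here is a small illustration of the current spelling and of the clumsiness Guido mentioned; the class names are made up:

class Base(object):
    def greet(self):
        return "hello from Base"

class Derived(Base):
    def greet(self):
        # You must repeat your own class name -- the awkward part.
        return super(Derived, self).greet() + ", extended by Derived"

Derived().greet()      # 'hello from Base, extended by Derived'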

Other development work that may see the light of day includes a logging module, a DB-API 3 with a transactional module, import from a .zip file and some kind of standard module for persistence beyond pickle/shelve. (Contrary to a rumor in my last article, ZODB is not going into the standard library any time soon, because Guido's not sure it's the most appropriate implementation of persistence. Getting into the standard library is a monumental task, and it pretty much only happens when it's obvious that a certain module is by far the best solution.) Zope is trying different approaches for interfaces in Zope 3, and if a clear winner emerges it may trickle into the Python core eventually.

For Guido, the star of the show this conference was PyChecker, a kind of lint tool for Python source (a small example of the kind of code it flags appears after the list). It checks for:

  • No global found (e.g., using a module without importing it)

  • Passing the wrong number of parameters to functions/methods/constructors

  • Passing the wrong number of parameters to builtin functions & methods

  • Using format strings that don't match arguments

  • Using class methods and attributes that don't exist

  • Changing signature when overriding a method

  • Redefining a function/class/method in the same scope

  • Using a variable before setting it

  • Self is not the first parameter defined for a method

  • Unused globals and locals (module or variable)

  • Unused function/method arguments (can ignore self)

  • No doc strings in modules, classes, functions, and methods

[Source: PyChecker web site]
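
To give a feel for the tool, here is a made-up little module containing a few of the mistakes from the list above; running PyChecker over it should flag each one:

# buggy.py -- deliberately bad code of the kind PyChecker complains about

def greet(name, greeting):
    # Format string doesn't match its arguments: two values, one %s.
    return "%s" % (name, greeting)

def main():
    # 'string' is used without being imported, and greet() is called
    # with the wrong number of parameters.
    print string.upper(greet("world"))

unused = 42            # an unused module-level global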

Performance has been going down ("except for things that are faster") in recent versions of Python as more and more "gee-whiz" features have been added fast and furiously. Guido would like to focus future Python versions on performance rather than on language extensions. 50% of the overhead seems to be in the virtual machine and the other 50% in data structures. He'd like to move some C code into Python, especially since many portions already have equivalent Python implementations. The traceback code, the bytecode compiler (from AST), the interactive command line and the import search all have redundant implementations both in C and Python and, of course, the Python versions are easier to debug.

Then came the "Parade of the PEPs". This was supposed to be a rundown through all the Python Enhancement Proposals, looking at their current statuses. Unfortunately there wasn't enough time for this, so it got cut short. I'd really like to see this feature happen in future conferences, or perhaps a quarterly on-line report could be the equivalent. But Guido did discuss the PEPs he wanted community input on, for instance, string interpolation. Is it a priority? He admitted that %(name)s was a mistake: it's too easy to forget the 's', it's awkward to type and it combines two operations (string interpolation and numeric formatting) that are, in practice, rarely done simultaneously (when was the last time you used %(name)d?). Now he prefers $name or ${name}, same as Perl and the shells (and Cheetah).

It's interesting, he said, that if you let a year go by, many issues that were once pressing become obsolete.

Also during Developers' Day were "lightning talks" (short talks) and other talks. One was on the Python Mail System (PMS), an entire library of routines to base a mail client on. State is kept in a pickle, and "folders" can be arbitrary views of physical folders (e.g., filtered, sorted, recursive). There were also talks on optimizing Python, Unicode difficulties and the type-class unification.

"Immutability is all in your mind", so said Barry Warsaw in the type-class unification talk. In Python 2.2, you can subclass int but that doesn't make it mutable. It does mean you can add methods or attributes. Try it at home.

>>> class I(int):
...   lookMa = "No hands!"
... 
>>> i = I(5)
>>> i      
5
>>> i + 3
8
>>> i.lookMa
'No hands!'

Miscellaneous Notes and Conclusion

Andrew Kuchling won O'Reilly's Frank Willison award for the most outstanding contributions to the Python community. He won it for his work on the crypto toolkit, HOWTOs and his "What's New in Python 2.2" series. He had a proud smile on his face when he accepted the award. (The late Frank Willison was an editor at O'Reilly and a big Python supporter.)

We shared the foyer and ballrooms with a group of US Marines. There's a definite irony in the contrast between the spiffily dressed Marines in uniform and the scruffily dressed geeks, although I'm not sure what it is. One Pythoneer commented that if you got a hundred Python attendees against four Marines, it would be a fair fight. But if the activity were information warfare, the ratio would be reversed.

David Ascher during the closing ceremony gave the funniest quote of the conference: "I used to be uptight about garbage collection, but then I realized that the Web has no garbage collection."

There was nothing like Toilet Paper this year.

Many thanks to the conference sponsors for supporting Python even in this difficult economic climate.

The economy brings up another issue: the cost of the conference. $545 (or $840 with Developers' Day) is too steep for most individuals and even for many companies. I know at least three people in my local Python group who would have attended if they could afford it, and I'm sure others can say the same thing. This is especially important for an open-source project, because enthusiastic Pythoneers are the lifeblood of the project, and when they can't participate in something, the whole project suffers. Nevertheless, the fact remains that even the current price is not enough for the conference to break even, and if it weren't for the sponsors there would be no conference. Finding affordable meeting space for three hundred people is a difficult task, and it won't get any easier. (Maybe Zope, Inc. can buy a hotel and call it the Zope Hotel?) So, I wonder if the Python community can put their heads together and come up with a different kind of conference, or something other than a conference, that all enthusiastic Pythoneers can participate in. Perhaps a set of regional meetings rather than one international conference? I don't know what. Certainly, everybody wants to hear Guido speak, and Guido can't go to ten different regional meetings. Maybe we can webcast his talks and other important Python talks (with transcripts for the multimedia-impaired), so that more people can bridge the distance+cost barrier. A call-in radio talk show called "The Python Hour"? Farfetched, but the more we brainstorm ideas, the more we'll come up with something practical.

Next year will bring another round of maturing for Python and Zope and spin-off products from Zope and packages that have recently been released or are still on the drawing boards. What will they look like? All we know is what the Monty Python announcer says, "And now for something completely different...."

Mike Orr is part of the tech staff at SSC, Inc. and Editor of Linux Gazette.

email: mso@ssc.com
