Linux for Suits - Work to Be Done

by Doc Searls

We were flying from the Coliseum north above Figueroa into downtown Los Angeles, about five stories above the road surface. After crossing the Harbor Freeway, we passed the Bob Hope Patriotic Hall on our right, then barely cleared ten lanes of elevated traffic on the Santa Monica Freeway. After zooming past the Convention Center and the Staples Center, we veered to avoid hitting a large building facing south across Olympic that featured monstrous portraits of Kobe Bryant and two other Lakers. From here north, the street became a slot between rows of buildings on both sides. When a break showed up at 5th, we took a hard right to head east a couple of blocks before taking another right on Hill. Below to our right appeared Pershing Square. Suddenly, we dove to the ground and into the escalators leading down into the subway. Here we paused on the platform to watch a train go by, then punched out through the roof and into the air above, where we took Hill north to a right on 4th and then across Broadway and through the arched doorway into the Bradbury Building, a hollow shrine to ironwork and exposed elevator mechanics, best known for the noir scenes it provided for the movies Blade Runner and Chinatown. From here, we flew out to the north and then east on Cesar Chavez to look at plans for turning the concrete trough of the Los Angeles River into a green park of some kind. Suddenly, the park itself appeared, the river blue to the brim.

Our pilot and urban simulation god was Zachary Rynew, a grad student in Architecture at UCLA. The flight was a demonstration of Virtual Los Angeles, a simulacrum of downtown LA. One can fly through and explore Virtual LA in surreal time—that is, right now or at times in the past or future. Virtual LA is designed to facilitate urban planning, emergency response, architectural development, education and a mess of other specialties and combinations thereof. It combines 3-D geometric modeling with street-level and aerial photography. Some of the visuals are so detailed, you can read graffiti and small signs in shop windows. The system was developed originally on SGI hardware but was moved to Linux several years ago. Like all open-source projects, and like the subject it explores, it is under development. Work will continue on it as long as the project is useful.
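
The basic recipe behind that realism is easier to describe than the project is to build: take a building footprint from survey data, extrude it to the building's height, and wrap the resulting block in photographs of the real facades. As a loose illustration of the idea (none of this is Virtual LA's actual code; the dimensions and filenames are invented), here is a toy Python sketch that extrudes a footprint into a block and writes it out as a Wavefront OBJ file, with texture coordinates ready for a facade photo:

    # Toy sketch of photo-textured city modeling. Illustrative only;
    # this is not code from the Virtual LA project.

    def extrude_footprint(footprint, height):
        """Lift a 2-D polygon (list of (x, y) points) into a 3-D block."""
        n = len(footprint)
        base = [(x, y, 0.0) for x, y in footprint]
        top = [(x, y, height) for x, y in footprint]
        # One quad per wall, plus the roof polygon on top.
        walls = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
        roof = [tuple(range(n, 2 * n))]
        return base + top, walls + roof

    def write_obj(path, vertices, faces):
        """Write a minimal Wavefront OBJ with corner UVs for a facade photo.

        A real model would pair this with a .mtl file naming the photograph;
        here the texture coordinates alone show where the image would map.
        """
        with open(path, "w") as f:
            for x, y, z in vertices:
                f.write(f"v {x} {y} {z}\n")
            for u, v in [(0, 0), (1, 0), (1, 1), (0, 1)]:
                f.write(f"vt {u} {v}\n")  # four UV corners, reused per face
            for face in faces:
                f.write("f " + " ".join(
                    f"{v + 1}/{i % 4 + 1}" for i, v in enumerate(face)) + "\n")

    # A made-up 20 x 40 meter downtown lot, 30 meters tall.
    verts, faces = extrude_footprint([(0, 0), (20, 0), (20, 40), (0, 40)], 30.0)
    write_obj("block.obj", verts, faces)

Multiply that by thousands of parcels, drape aerial photography over terrain, and stream it all fast enough to fly through in real time, and you have some sense of the engineering underneath the demo.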

Virtual Los Angeles was one among many visualizations at the Digital Innovation Day Showcase at UCLA in May 2007. Others included the Roman Forum Project, 800 Years of Berlin and the Qumran Visualization Project. In UCLA's Visualization Portal, we were treated to a tour of Santiago de Compostela, a 13th-century cathedral in Spain, as three projectors blended images seamlessly on a curved 180° screen surrounding seating for 40. In each case, the experts talked about how the work of virtual construction (or reconstruction) resulted in better insights and understandings of settings both long gone and not yet built. There was a constant digital dance between analysis and synthesis—and in the midst of it all, the increasingly obvious need for tools that are open and free.
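
That seamless blend across three projectors is a small piece of engineering in its own right. Where neighboring projector images overlap, each projector ramps its output down so the combined light stays even across the seam, usually with a correction for the projector's nonlinear (gamma) response. Here is a minimal sketch of the technique in Python; the overlap width and gamma value are invented for illustration and are not the Portal's actual calibration:

    import math

    # Toy edge-blend ramp for two overlapping projectors. Illustrative only;
    # this is not the Visualization Portal's calibration code.

    def blend_weight(x, start, end, gamma=2.2):
        """Pixel multiplier for the left projector at screen coordinate x.

        Full brightness left of the overlap zone, zero past it, and a
        smooth cosine ramp in between. Raising the ramp to 1/gamma
        pre-compensates the projector's nonlinear response, so the light
        from the two projectors sums to a constant across the seam.
        """
        if x <= start:
            return 1.0
        if x >= end:
            return 0.0
        t = (x - start) / (end - start)
        linear = 0.5 * (1.0 + math.cos(math.pi * t))  # ramps 1 -> 0
        return linear ** (1.0 / gamma)

    # Hypothetical 20-pixel overlap from x=90 to x=110. The right projector
    # gets the mirror-image ramp, so the linear weights always sum to 1.
    for x in range(90, 111, 5):
        left = blend_weight(x, 90, 110)
        right = blend_weight(200 - x, 90, 110)  # mirrored about the center
        print(f"x={x:3d}  left={left:.2f}  right={right:.2f}")

Curved screens like the Portal's typically add geometric warping on top of the blending, so straight lines in the source imagery stay straight on the 180° surface.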

All this work grew alongside and out of a relatively new discipline called digital humanities. I had first heard of the field from Joseph Vaughan, Assistant Director of the UCLA Center for Digital Humanities, in an e-mail responding to a Linux for Suits report on the first Desktop Linux Summit, back in 2003. In that e-mail, Joe reported on gradual progress in getting more Linux and open-source software put to use on digital humanities projects. I ran into Joe at the showcase, and he said that, in fact, a lot of progress had been made during the four years since he sent that e-mail, with much more to go. The same could be said of digital humanities, and for that matter, of all digital technologies.

This was one of the first points made by the keynote speaker, Dr Willard McCarty of the Centre for Computing in the Humanities, King's College London. Dr McCarty's talk was titled “What's going on?”, and he began by telling us that what's happening in computing is by nature perpetually protean. “In the history of inventions”, he said, “computing is in its infancy, its products incunabular. The point I am making is that it and they always will be, however progressively better they get.”

But, he did note some trends:

About 30 years ago, Northrop Frye noted a striking technological change in scholarly resources, from the “portly...tomes” of his childhood, communicating in their physical stature “immense and definitive authority”, to the “paperback revolution” of his adulthood, which better matched the speed of scholarship and gave wing to the diversification of publishing. The demotic character and relative impermanence communicated by these paperbacks also implied the undermining of authority I just mentioned, in this case a weakening of the barrier between author and reader. Running in parallel if not cognate with this physically mediated change came theoretical changes in ideas of textuality, for example, Mikhail Bakhtin's “dialogic imagination”, reader-response theory and, more recently, in anthropological linguistic studies of context. Meanwhile, various parts of computer science have developed congruently, from design of black-boxed, batch-orientated systems of former times to toolkits and implementations of “interaction design”. Computing has become literally and figuratively conversational.

This applies not only to the rise of independent and autonomous development by individuals and groups, but also to participation by everybody willing and able to weigh in, including users, whom McCarty now places at the start rather than at the end of things:

...it makes less and less sense to be thinking in terms of “end users” and to be creating knowledge-jukeboxes for them. It makes more and more sense to be designing for “end makers” and giving them the scholarly equivalent of Tinker Toys. But, we must beware not to be taking away with one hand what we have given with the other. To use Clifford Geertz' vivid phrase, we need rigorous “intellectual weed control” against the Taylorian notions that keep users in their place—notions of knowledge “delivery”, scholarly “impact”, learning “outcomes” and all the rest of the tiresome cant we are submerged in these days. The whole promise of computing for our time—here is my historical thesis—is directly contrary to the obsolete 19th-century cosmology implicit in such talk.

McCarty's proximal context is the academy, particularly the graduate schools and programs where specialties tend to be well-tended and guarded spaces. About these, he says:

Particularly since the advent of the Web, our attention and energy have been involved with the exponential growth of digitization. The benefits for scholarship here are unarguably great. But as ever larger amounts of searchable and otherwise computable material become available, we don't simply have more evidence for this or that business as usual. We have massively greater ecological diversity to take account of, and so can expect inherited ways of construing reality and of working, alone and with each other, to need basic renovation. Here is work to be done. It's not a matter of breaking down disciplinary boundaries—the more we concentrate on breaking these down, the more they are needed for the breaking down. Rather the point is the reconfiguration of disciplinarity. From computing's prospect at least, the feudal metaphor of turf and the medieval tree of knowledge in its formal garden of learning make no sense. We need other metaphors. Here is work to be done.

The challenge McCarty lays out over and over, with those four words—“work to be done”—is not for academics alone, or for academics in cahoots with programmers and other technologists. It's work to be done by everybody.

This comes home for me with ProjectVRM, which I'm heading up as a fellow with the Berkman Center for Internet and Society at Harvard University. Here, I'm working to drive development of tools that fix marketplaces by giving customers both independence from vendor entrapments and better ways of engaging with vendors on an equal power footing. These tools of independence and engagement don't exist yet—though their parts surely do, amid the growing portfolio of free and open-source software and standards already lying around in the world.

The problem is, the only code I know is Morse. A few years ago, that would have disqualified me as a project leader. Now, it doesn't. Because now, it's easier than ever for people who see problems worth solving to find the people and tools that will help those problems get solved. That's what's happening with ProjectVRM. It's what happened with the user-centric identity movement out of which VRM grew as a specialty. And, it's what will lead Linux and other open-source projects off their own turf and out into a larger world where users are at the start of things, and not just the end.

Resources

Virtual Los Angeles: digitalinnovations.ucla.edu/2007/ccc/projects/Jepson.htm

Digital Innovation Day Showcase: www.ucla.edu/spotlight/07/digital-innovation.html

Roman Forum Project: www.digitalinnovations.ucla.edu/2007/ccc/projects/Favro.htm

800 Years of Berlin: www.digitalinnovations.ucla.edu/2007/ccc/projects/Presner.htm

Qumran Visualization Project: digitalinnovations.ucla.edu/2007/ccc/projects/Schniedewind.htm

UCLA's Visualization Portal: www.ats.ucla.edu/portal/default.htm

Santiago de Compostela Tour: digitalinnovations.ucla.edu/2007/ccc/projects/Dagenais.htm

Digital Humanities: en.wikipedia.org/wiki/Digital_Humanities

Dr Willard McCarty: staff.cch.kcl.ac.uk/~wmccarty

Centre for Computing in the Humanities: www.kcl.ac.uk/schools/humanities/cch/index.html

King's College London: kcl.ac.uk

ProjectVRM: projectvrm.org

Berkman Center for Internet and Society at Harvard University: cyber.law.harvard.edu

Doc Searls is Senior Editor of Linux Journal. He is also a Visiting Scholar at the University of California at Santa Barbara and a Fellow with the Berkman Center for Internet and Society at Harvard University.
