Is Transparency the Killer Virtue?
Last Friday I had lunch with my friend Jim Sterne, one of those rare marketing guys who deals in the concrete rather than the abstract. Jim is about as BS-free a marketing guy as you can find. Among other things, he has written a pile of practical books that pull no punches about what doesn't work, while offering plenty of wisdom about what does.
We spent much of the meal wondering together why so many CEOs talk about "accountability" and say they value hard facts, yet show limited interest in what their companies actually do on the Net. Yes, it's nice they finally recognize that having a web site is a good thing, like having a lobby and adequate visitor parking. But beyond that, they're worse than clueless--they're content with that state.
In his new book, Web Metrics: Proven Methods for Measuring Web Site Success, Jim makes the case for accountability, measurement, ROI, best practices, economies of scale and so on. He also gives practical advice not only for delivering those values but for convincing the upper parts of the org chart there's a need for them--kind of like the infantry telling the brass there's a war going on.
Earlier that day I spoke on the phone with Marc Canter, another industry veteran. Marc's a talented guy: too polyhedral a peg to fit in a hole of any shape. You might remember him as the founder of Macromind, now Macromedia. The first time I saw Marc, sometime back in the '80s, he was singing beautifully. He was also busy creating authoring tools that would relieve both him and his customers of the need to program in machine code. The result, in addition to Macromind Director and other products, was a bunch of ideas about "multimedia" that sadly departed from his founding influence.
We talked about all kinds of stuff, but two things stand out in my notes. One is that Marc said he likes hiring people over 40 because their high-mileage wisdom and long-term perspectives are necessary for a company's durable success. The other was that we both (as post-40 guys) saw three famous crashes from three consecutive decades as part of the same trend toward full corporate awakening.
The first crash was the savings and loan crash in the mid-80s. The second was the dot-com crash at the end of the 90s. And the third is the current meltdown of companies like Enron and WorldCom, whose departed value was largely derived from accounting opacities.
Several weeks ago I had lunch in London with another marketing guy, Chris Macrae, son of the economist Norman Macrae and another tireless evangelist of good marketing sense. A lot of what Chris says may be wordy and vague, but I love that the guy thinks out loud about stuff that matters. Lately he has been wrestling publicly with the issue of transparency.
As it happens I've been thinking lately that the real virtue of Linux and other forms of infrastructural software--as well as all the protocols that together make up the Net (which I talk about in the latest SuitWatch newsletter)--is not only that it's open and free, but that it's transparent. It is see-thru infrastructure. In fact, what makes it infrastructural is the fact that you can see through it. You can trust it because it has no secrets. The source of its integrity may not be obvious to everybody, but it's easy to find, to examine and even to improve.
This is Linux's appeal not only to budget-minded companies that recognize the hidden costs of opaque dependencies, but to whole governments that don't want to depend on anything that isn't entirely knowable. In this story about Linux in China from earlier this year, Matei Mihalca, head of Internet research at Merrill Lynch Asia Pacific, said "China wants to control its destiny in terms of the software platform that is used in the country", and he added that "there is full transparency in terms of the underlying code" with Linux.
Credit where due: Bill Gates was right to make a big deal about "trustworthy computing". And maybe Microsoft is beginning to understand that some of Linux's appeal is its transparently trustworthy nature. (And let's also give them points for not trying to squash Ximian's Mono project and for planning a booth at LinuxWorld Expo.)
In his memo (link above), Bill says, "Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony." We should note that all those services are pure infrastructure whose workings are mostly transparent. Yet for all its popularity, Windows lacks that same transparency, which makes it inherently less infrastructural. It's an interesting issue. Opacity may be a virtue of commercial software and drive its value, but ultimately it disqualifies that software as deep infrastructure. The questions for software companies everywhere will increasingly be: What transparent goods do we ubiquitize (or help ubiquitize) as foundational infrastructure? What opaque stuff can we sell as products that run on that infrastructure? For companies accustomed to controlling whole markets by creating dependencies on opaque code, that's a tough choice, but it's one that must be made.
By contrast, Apple has moved ahead of the curve by taking advantage of foundational transparencies everywhere it can find them: BSD (borrowed for Darwin, the open-source form of UNIX on which OS X is built), 802.11b, Jabber, ZeroConf, FireWire and everything else it can either create and share or borrow to help ubiquitize. Whatever else one might say about the company, it clearly groks the transparency issue in a strategic way.
So I'm beginning to think that transparency is the issue to bet on. Customers have always wanted it. Employees have always been uncomfortable with (or at least inconvenienced by) the opacity imperative, as well as the whole cult of secrecy that accounts for countless corporate strategies. But most significantly, stockholders are finally--thanks to Enron and WorldCom--fed up with opaque accounting practices.
How long will it take before they get just as fed up with opaque infrastructural software?
(I'll look for your answers in the comments section below. But also feel free to carry the conversation over to the Stealthy Business Linux Forum. I'll be doing the same.)
Doc Searls is senior editor of Linux Journal.