Grounds for Identity
A year ago, identity was mostly the concern of privacy and crypto guys. The only company taking much public interest was Microsoft, which was busy scaring everybody with its Passport identity management system and the Hailstorm initiative that went along with it. (Microsoft folks tell me they never meant to scare anybody. Privately they refer to Passport as "Piñata" because of all the bashing it takes.)
But over the next three quarters, identity became a big deal, certified by its own high-profile web site and tradeshow: Digital ID World (DIDW). The first DIDW took place in Denver in early October 2002. It was well-run and well-attended for a first effort by people who were, for the most part, new to the business. Those people included PingID.com, which is the commercial counterpart of PingID.org, an open-source effort.
When Don Marti got a look at advance promotion for DIDW, he called the speaker lineup "scary": a lot of big companies and associations (Microsoft and the Sun-led Liberty Alliance, for starters); a lot of small companies trying to sell stuff to big enterprise customers; and almost nobody representing individual interests (especially privacy). Except for me. And frankly, I had to push to get myself added to the speaker lineup, which I did through my position on the advisory board of PingID.
At the show I made as much trouble as I could. On the opening day I moderated a panel on identity and open source. On the closing day I gave a talk about the open-source nature of internet infrastructure--the need for open identity protocols and other standards that commercial interests alone would be unlikely to provide. I presented a slide that compiled a list of phrases assembled from buzzwords I heard in one talk after another at the show:
metadata control exchange system
partnership compliance implementation audit
self-addressing portable entitlement chain
DRM privacy directive store
self-regulating feedback mechanism
persistent federated domain logic audit
enterprise portal crossover
cross domain global security management protocol framework
custody containment certificate
logical domain root browser function
Driving this droning was a default assumption that identity could be managed and controlled--in spite of the fact that the Net is neither. At the end of my open-source panel, Brent Glass said this from the audience (quoting notes taken by another audience member):
I don't want any organization having control of my identity. I don't trust enterprises. I don't trust the government. I want to be the center of my identity. One of the things open source has going for it is it puts the user at the center. Could the panel explain if it can do this for us? Can it give humans control that need not be relinquished?
I believe the answer is yes. But to explain how, I'll start with some history. Back in the late 1980s and early 1990s, Craig Burton, Jamie Lewis and other Novell veterans at The Burton Group quietly changed the way we conceived networks, shifting us from a technical to a service model. Thanks to TBG's efforts, we began talking about networks as collections of interoperable services, including directory, security, management, file, print and messaging. At first the "network services model" was applied to LANs and enterprise systems such as Lotus Notes. But when the Internet began to lithify and support almost everything, the model applied there as well. Protocols such as TCP/IP, HTTP, SMTP, IMAP, POP3, LDAP and DHCP not only define the Net's working infrastructure but also provide its services.
Compared to even an old commercial LAN like Novell's NetWare, the Net's roster of services is still primitive and few. In fact, their primitive nature helps account for much of their ubiquitous adoption. Openness and simplicity are good things to have in protocols. But the fewness of network services on the Net is another matter. If "the history of the Internet is the history of its protocols", as Vint Cerf says, we're still in the Paleozoic era. For example, there still are no common protocols for printing over the Net. Directory services are minimal (DNS covers few bases, and LDAP only covers directory access). Aside from e-mail, messaging is a mess. Jabber's IM protocols are widely adopted, but hardly ubiquitous. Thanks to AOL's and Microsoft's childish refusal to interoperate with each other, instant messaging for most of us remains stuck at the Prodigy vs. CompuServe stage. But if IM is an embryo, ID is an unfertilized egg.
To shift metaphors in a botanical direction, think of the Net as Mother Earth and all this corporate droning as seed thrown on dry ground. What's more, the enthusiastic seed spilling at DIDW reminded me of every other cycle of enthusiasm launched whenever the ground starts to shake. Big companies and governments try to protect and extend the existing order while startups wage a leadership revolution. Both miss the fact that all Net-based architectures, old and new, are grounded on a geology that nobody owns, everybody can use and anybody can improve.
Today big business operates by the grace of the Net. The creators of the Net--the makers of ubiquitous protocols that are as central and beyond ownership as the core of the Earth--are the gods behind the primal forces of today's business world. Those gods still have work to do, as veteran Byte editor Jon Udell explains:
The connected computer is fast approaching ubiquity. We've created cyberspace, but we haven't yet really colonized it because we lack the organizing principle to do so. Having abolished time and space, nothing remains but identity. How we project our identities into cyberspace is the central riddle. Until we solve that, we can't move on.
Project is the right word, not protect.
If we create the protocols, APIs and other standards that let customers relate at full power with the companies they choose, consumer becomes an obsolete noun. The companies now in full charge of the identities they confer on each of us will no longer have full control, because now they will have to relate and not just distribute. But because we show up as customers rather than as consumers, the range of business possibilities is much larger. The trade-off is a good one for both sides.
But it won't begin until we get those protocols and APIs, which won't happen unless somebody decides to write them for everybody. Maybe that effort will come from the noncommercial world, as it did with HTTP and SMTP. Or maybe it will come from the altruistic side of the commercial world, as it did with SOAP and RSS.
My guess is that it will come from both, as it does with Linux (if we give full credit to the companies that employ the developers who continue to improve code that nobody owns and everybody can use). Once it does, there will be real grounds for enthusiasm.
Doc Searls is senior editor of Linux Journal.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
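The find-plus-grep combination described above can be sketched in a single command. This is one illustrative way to string the two tools together; the directory (/home) and the search string ("ERROR") are placeholders to substitute with your own:

```shell
# Locate every .log file under /home, then have grep search each one,
# printing only the names of files that contain the entry "ERROR".
find /home -type f -name '*.log' -exec grep -l 'ERROR' {} +
```

The `-exec ... {} +` form hands find's results to grep in batches, which avoids the word-splitting pitfalls of piping filenames through xargs when paths contain spaces.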
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide