EOF - Now Data Gets Personal
The main problem with data is that it's easy to copy. In fact, sending it from one place to another is essentially an act of replication. The mv command is alien to most people's experience of the Internet. They may use it every day (hardly realizing it) within their own filesystems, but between separate systems on the Net, their experience is that of replication, not relocation.
This alone makes control of one's data problematic, especially when the first-person possessive voice isn't quite right. “My” data often isn't. For example, take profile or activity data kept by a service provider. It's from you and about you, but you don't own it, much less control it. Transaction data is created by a buyer and a seller together, but the canonical form of the data is what's kept by the seller and provided (by copying) in the form of a bill or displayed on an encrypted personal Web connection. Sellers don't go much further than that. The idea of sharing that information in its raw form, either during a transaction or later on request by the buyer, is alien at best to most sellers' IT and legal departments. As John Perry Barlow put it in “Death From Above” (way back in 1995, w2.eff.org/Misc/Publications/John_Perry_Barlow/HTML/death_from_above.html), “America remains a place where companies produce and consumers consume in an economic relationship which is still as asymmetrical as that of bomber to bombee.” In fact, this is still true of the whole business world.
Yet, internetworking of that world brings a great deal of symmetricality to it, imposed by the architecture of the Internet and its growing suite of protocols. The bank that used to occupy the most serious building on Main Street—or a skyscraper in a big city—is now but one location among a trillion on the Web. Yours is another. The word “domain” applies to both of you, even if your bank's “brand” is bigger than yours. Of your own sense of place and power on the Net, the words of William Cowper apply (www.bartelby.com/41/317.html): “I AM monarch of all I survey; / My right there is none to dispute...”
Yet, as William Gibson famously said, “the future is here but not evenly distributed.” Bomber/bombee power asymmetries persist in the B2C (business-to-consumer) world of everyday retailing. When you buy something, the transaction data in most cases comes to you only in the form of a receipt from the seller and a bill from the credit-card company. Neither is offered in formats that allow you to gather data on the spot or later over a secure Net connection—not easily, anyway.
If we could collect that data easily, our self-knowledge and future purchases would be far better informed. In fact, collected data could go far beyond the transaction itself. Time, date, location, duration, sequence—those are obvious ones. How about other bits of data, such as those involved in dealings with airlines? For example, your “fare basis code” (HL7LNR, or some other collection of letters and numbers) contains piles of information that might be useful to you as well as the airline, especially as you begin to add up the variables over time.
A marketplace is no better than the knowledge and practices that buyers and sellers both bring to it. But, while the Net opens many paths for increasing knowledge on both sides, most of the knowledge-gathering innovation has gone into helping sellers. Not buyers.
Today, that's changing. More and more buyers (especially the geeks among them) are getting around to helping themselves. In particular, two new development categories are starting to stand out—at least for me. One is self-tracking, and the other is personal informatics.
Compared to its alternative (basically, guessing), self-tracking is “know thyself” taken to an extreme. Alexandra Carmichael, for example, tracks 40 things about herself, every day. These include mood, chronic pain levels, sexual activity, food intake and so on. She's a star in the Quantified Self community (www.kk.org/quantifiedself), which is led by Gary Wolf and Kevin Kelly. Among topics at QS meetups are chemical body load, personal genome sequencing, lifelogging, self-experimentation, behavior monitoring, location tracking, non-invasive probes, digitizing body info, sharing health records, psychological self-assessments and medical self-diagnostics, to name a few.
Now, would any of these be extreme if they were easy and routine? Well, that's the idea. ListenLog (cyber.law.harvard.edu/projectvrm/ListenLog), one of the projects I'm involved with, doesn't make sense unless it's easy, and unless the data it yields is plainly valuable.
This brings us to personal informatics, which is a general category that includes self-tracking and extends to actions. All this data needs to live somewhere, and stuff needs to be done with it.
In the commercial realm, I see two broad but different approaches. One is based on a personal data store that might be self-hosted by the customer or in a cloud operated by what we call a fourth-party service (serving the buyer rather than the seller—to differentiate it from third parties, which primarily serve sellers). As Iain Henderson (who leads this approach) puts it, what matters here “is what the individual brings to the party via their personal data store/user-driven and volunteered personal information. They bring the context for all subsequent components of the buying process (and high-grade fuel for the selling process if it can be trained to listen rather than shout).” The other approach is based on complete user autonomy, whereby self-tracking and personal relationships are entirely the responsibility of the individual. This is exemplified by The Mine! Project (themineproject.org/about), led by Adriana Lukas. As she puts it, the difference between the two approaches is providing vs. enabling (www.mediainfluencer.net/2009/04/enabling-vs-providing).
Either way, the individual is the primary actor. As distribution of the future evens out, the individual has the most to gain.
Doc Searls is Senior Editor of Linux Journal. He is also a fellow with the Berkman Center for Internet and Society at Harvard University and the Center for Information Technology and Society at UC Santa Barbara.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from its stability and efficiency) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is easy to use and can search vast amounts of data quickly, and find can locate a particular file or files based on all kinds of criteria. It's easy to string these tools together into something more powerful, such as a one-liner that finds all the .log files under /home and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
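The find/grep combination described above can be sketched as a one-liner. The paths and the search string here are illustrative; a small sandbox is created first so the example is self-contained:

```shell
# Build a tiny sandbox to demonstrate the pipeline
# (directory names and log contents are illustrative).
mkdir -p /tmp/demo/home/user
echo "ERROR: disk full" > /tmp/demo/home/user/app.log
echo "all good"         > /tmp/demo/home/user/other.log

# Find every .log file under the tree and list the ones
# that contain the string "ERROR". grep -l prints only
# the matching filenames, not the matching lines.
find /tmp/demo/home -name '*.log' -exec grep -l 'ERROR' {} +
```

Against a real system you would simply point find at /home; the `-exec … {} +` form batches filenames into as few grep invocations as possible, which is faster than spawning one process per file.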
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
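For context, a traditional crontab entry pairs a five-field schedule (minute, hour, day of month, month, day of week) with a command to run. The script path below is purely illustrative:

```shell
# m  h  dom mon dow  command
# Run a log-rotation script every day at 2:30 AM
# (the script path is a placeholder, not a real tool).
30 2 * * * /usr/local/bin/rotate-logs.sh
```

Cron's simplicity is also its limit: it has no built-in notion of job dependencies, retries, or cross-host coordination, which is where the "beyond cron" discussion begins.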
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.