Who is in charge of my privacy?
It should concern us that most computer users -- ourselves included -- see themselves as dependent variables with respect to large companies' privacy policies, rather than as independent variables.
I mean, it's understandable that big companies think of themselves as In Control. Hey: they are. They should have an obligation to care about users' privacy, and to explain their privacy policies. But why should we behave as supplicants to these companies, or even to governments, with respect to how anybody or anything treats what we regard as private information about ourselves and what we do in the world?
The short answer is that we don't have much choice. For individuals, privacy control tools are still limited. Meanwhile, what needs to be controlled remains nearly unlimited -- and intrusive by nature. Cookies, for example. They're these things that live in our browsers and give others the ability to track us like animals. Never mind that they can be used for many Good Things. The fact remains that they are symptomatic of an asymmetry of control. The whole of what we might generously call a "relationship" with cookie-placers is our ability to forbid cookies or get rid of them -- and even figuring out which cookies do what isn't easy, or likely to happen.
This all comes up for me because I'm at a lecture by Peter Fleischer, Global Privacy Counsel for Google. (The link is to his blog, not his job.) He's doing a very good job of explaining all the stuff Google does to care about privacy and to Do The Right Thing, whatever that may be. And conditions do vary, all over the world. There's a lot to care and talk about here.
The problem is that Google's perspective is Google's alone. It's a BigCo perspective. Which is fine, as far as it goes. Where it doesn't go, and where it can't go, is the other direction: from the individual toward the company. That's the side that needs to be built out -- not just so geeks can control their privacy and assert their own privacy and information-usage policies, but so anybody can do the same thing. Easily.
Personal control over one's own online privacy is important, of course. In fact it's necessary -- but it is insufficient for a much larger area of concern and opportunity: relationship.
We have many relationships online. All of them, however, are defined and controlled (sometimes from both sides) within each company's silo. What we don't have are personally controlled global approaches to relationship, including privacy variables.
For example, let's say I want to publish my interest in buying a laptop that weighs less than five pounds and has a 500GB hard drive, when such a thing is ready. Let's also say I want to do this in the open market, outside any company's silo. I don't want to do it only inside Amazon, or Google, or eBay. I want to do it in the open, and on my own terms. Let's also say that I want to make clear that I have good money ready to spend on this product and can be trusted as a customer -- without revealing my name or any other information about myself that I don't want to reveal. Let's also say that I actually have relationships with some companies, and that I am willing to reveal that fact to those companies alone.
What we're talking about here is selective disclosure in the context of what we might call a personal RFP. Joe Andrieu goes into some detail about what this might involve. It is critical to his case, and mine, that we see the user as the point of integration. One reason we haven't made progress on this is that we all still see companies (rather than individuals) as points of integration. This gives us countless CRM (customer relationship management) systems -- by companies -- each with its own silo. When we want to transcend these silos, we look for one bigger silo, which only compounds the problem. A good example of this is the idea of a national identity system, or a single place where everybody's health care data can live.
The ability of individuals to manage relationships with companies (and organizations, and government entities) -- what we call VRM -- is something that needs to live with ourselves. Nobody else can give it to us. In fact it's a mistake to look for them to give it to us, because then it's not ours. This is something we have to build for ourselves. As we've done with many other piles of code.
Doc Searls is Senior Editor of Linux Journal
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
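The find-plus-grep combination described above can be written as a single command. The string "ERROR" is just an example value; any grep pattern works:

```shell
# Find every .log file under /home, then search each one for "ERROR".
# find builds the file list; "-exec ... {} +" hands the files to grep
# in batches; grep -l prints only the names of files that match.
find /home -type f -name "*.log" -exec grep -l "ERROR" {} +
```

Using `-exec ... +` instead of piping filenames through xargs keeps the command safe for filenames containing spaces.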
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
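For reference, the baseline being compared against is the classic crontab entry. A sketch (the script path here is hypothetical):

```shell
# crontab format: minute hour day-of-month month day-of-week command
# Run a hypothetical log-rotation script every day at 2:00 am:
0 2 * * * /usr/local/bin/rotate-logs.sh
```

Cron's five time fields cover simple recurring schedules well; what it lacks -- job dependencies, failure handling, cross-host coordination -- is where dedicated schedulers come in.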
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide!