How to Fix the Edge

In December 2016, Peter Levine of the venture firm Andreessen Horowitz published a post with a video titled "Return to the Edge and the End of Cloud Computing". In it, he outlines a pendulum swing between centralized and distributed computing that goes like this:

  • Mainframe/Centralized/1960–1970 ↔ Client-Server/Distributed/1980–2000

  • Mobile-Cloud/Centralized/2005–2020 ↔ Edge Intelligence/Distributed/2020–

He says the "total addressable market" in that next pendulum swing will include the Internet of Things, with trillions of devices, starting with today's cars and drones. He also says machine learning will be required to "decipher the nuances of the real world".

Thanks to bandwidth and latency issues, most of this will have to happen at end points, on the edge, and not in central clouds. Important information still will flow to those clouds and get processed there for various purposes, but decisions will happen where the latency is lowest and proximity highest: at the edges. Machines will make most of the decisions, working with the data they gather (and the data gathered for them). That's because, he says, humans are bad at many decisions that machines do better, such as driving a car. Peter has a Tesla and says "My car is a much better driver than I am." In driving for us, machine-learning systems in our things will "optimize for agility over power". Systems in today's fighter aircraft already do this for pilots in dogfights. They are symbiotes, operating as a kind of external nervous system for pilots, gathering data, learning and reacting in real time, yet leaving the actual piloting up to the human in the cockpit. (In dogfights, pilots also do not depend on remote and centralized clouds, but they do fly through and around the non-metaphorical kind.)

The learning curve for these systems consists of three verbs operating in recursive loops. The verbs are sense, infer and act. Here's how those sort out.

Sense

Data will enter these loops from sensors all over the place—cameras, depth sensors, radar, accelerometers. Already, he says, "a self-driving car generates about ten gigabytes of data per mile", and "a Lytro camera—a data center in a camera—generates 300 gigabytes of data per second." Soon running shoes will have sensors with machine-learning algorithms, he adds, and they will be truly smart, able to tell you, for example, how well you are doing and how well you ought to be doing.
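
A little arithmetic shows why that firehose can't simply be shipped upstream. Here's a quick Python check using the talk's ten-gigabytes-per-mile figure; the daily mileage and fleet size are invented for illustration:

    # Rough data-volume arithmetic. The per-mile figure is from the talk;
    # the mileage and fleet numbers are illustrative assumptions.
    GB_PER_MILE = 10           # self-driving car, per Levine
    MILES_PER_DAY = 100        # assumed daily driving
    FLEET_SIZE = 1_000_000     # assumed fleet

    per_car_gb = GB_PER_MILE * MILES_PER_DAY        # 1,000 GB (1 TB) per car, per day
    fleet_pb = per_car_gb * FLEET_SIZE / 1_000_000  # about 1,000 PB (an exabyte) per day

    print(f"Per car: {per_car_gb:,} GB/day")
    print(f"Fleet of {FLEET_SIZE:,} cars: {fleet_pb:,.0f} PB/day")

Nobody backhauls an exabyte a day from a single fleet, which is why the curation has to happen at the edge.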

Infer

The data from our smart things will be mostly unstructured, requiring more machine learning to extract relevance, perform task-specific recognition, train for "deep learning", increase accuracy and automate what needs to be automated. (This will leave the human stuff up to the human—again, like the fighter pilot doing what only he or she can do.)

Act

As IoT devices become more sophisticated, we'll accumulate more data and face more processing decisions, and in many (or most) cases, machines will make ever-more-educated choices about what to do. Again, people will get to do what people do best. And they'll do it based on better input from the educated things that also do what machines do best on their own.
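
Strung together, the three verbs are just a loop. Here's a minimal Python sketch of that loop; the sensors, model and actuators are hypothetical stand-ins, not any particular device's API:

    import time

    def sense(sensors):
        """Read raw, mostly unstructured data from local sensors."""
        return {name: read() for name, read in sensors.items()}

    def infer(model, observation):
        """Run a locally stored, cloud-trained model over the observation."""
        return model.predict(observation)

    def act(actuators, decision):
        """Carry out the decision at the edge, with no cloud round trip."""
        actuators[decision.action](decision.params)

    def control_loop(sensors, model, actuators, hz=100):
        """The recursive sense-infer-act loop, running at a fixed rate."""
        period = 1.0 / hz
        while True:
            decision = infer(model, sense(sensors))
            act(actuators, decision)
            time.sleep(period)  # a real system would schedule more carefully

The point of the structure is that nothing in the loop waits on a distant data center; the cloud's job comes later.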

Meanwhile, the old centralized cloud will become what he calls a "training center". Since machine learning needs lots of data in order to learn, and the most relevant data comes from many places, it only makes sense for the cloud to store the important stuff, learn from everything everywhere, and push relevant learnings back out to the edge. Think of what happens (or ought to happen) when millions of cars, shoes, skis, toasters and sunglasses send edge-curated data back to clouds for deep learning, and the best and most relevant of that learning gets pushed back out to the machines and humans at the edge. Everything gets smarter—presumably.
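
In code, that division of labor might look like the following hypothetical push/pull cycle between an edge device and the training center. The endpoint and payload shapes here are invented for illustration:

    import json
    import urllib.request

    TRAINING_CENTER = "https://training.example.com"  # hypothetical endpoint

    def push_curated_data(summaries):
        """Send only edge-curated summaries upstream, never the raw firehose."""
        req = urllib.request.Request(
            f"{TRAINING_CENTER}/ingest",
            data=json.dumps(summaries).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    def pull_latest_model(current_version):
        """Fetch a newly trained model only when the cloud has a newer one."""
        with urllib.request.urlopen(f"{TRAINING_CENTER}/model/latest") as resp:
            meta = json.loads(resp.read())
        if meta["version"] > current_version:
            return meta  # in practice: download and hot-swap the model weights
        return None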

His predictions:

  • Sensors will proliferate and produce huge volumes of geospatial data.

  • Existing infrastructure will back-haul relevant data while most computing happens at edges, with on-site machine learning as well.

  • We will return to peer-to-peer networks, where edge devices lessen the load on core networks and share data locally.

  • We will have less code and more math, or "data-centric computing"—not just the logical kind. (A toy contrast follows this list.)

  • The next generation of programmers won't be doing just logic: IF, THEN, ELSE and the rest.

  • We'll have more mathematicians, at least in terms of talent required.

  • Also expect new programming languages addressing edge use cases.

  • The processing power of the edge will increase while prices decrease, as happens with every generation of technology.

  • Trillions of devices in the supply chain will commoditize processing power and sensors. The first LIDAR for a Google car was $7,000. The new ones are $500. They'll come down to 50 cents.

  • The entire world becomes the domain of IT. "Who will run the fleet of drones to inspect houses?" When we have remote surgery using robots, we also will need allied forms of human expertise, just to keep the whole thing running.

  • We'll have "consumer-oriented applications with enterprise manageability."
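
To make the "less code and more math" prediction concrete, here's a toy contrast in Python, with all numbers invented: the same braking decision written first as IF/THEN/ELSE logic, then learned from example data with ordinary least squares:

    import numpy as np

    # Toy data: [distance_m, closing_speed_mps] -> brake (1) or don't (0).
    X = np.array([[5, 10], [50, 2], [10, 8], [80, 1], [15, 12], [60, 3]])
    y = np.array([1, 0, 1, 0, 1, 0])

    # Logic-centric: a hand-written rule.
    def should_brake_rules(distance, speed):
        return 1 if distance < 20 and speed > 5 else 0

    # Data-centric: the same decision expressed as math, fit by least squares.
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

    def should_brake_learned(distance, speed):
        return int(np.dot(w, [distance, speed, 1.0]) > 0.5)

    print(should_brake_rules(12, 9), should_brake_learned(12, 9))  # both print 1

Both functions agree on this toy case, but only the second improves as the data does, which is the heart of the data-centric argument.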

His conclusion: "There's a big disruption on the horizon. It's going to impact networking, storage, compute, programming languages and, of course, management."

______________________

Doc Searls is Senior Editor of Linux Journal.