Linux and the Internet of Things

I wake up in the middle of the night, mouth parched and vision blurry, and fumble around to find my iPhone. I press my thumb to the fingerprint scanner, and in the dim blue light, just out of instinct, I squint at the screen, find the right app, open it, and check the ambient temperature and air quality indoors. It turns out my goose bumps are lying; the temperature is quite comfortable and not arctic-level freezing, which is how it feels to me. Now, I touch a virtual switch, and warm yellow light illuminates my way to the kitchen. I slowly waddle towards the water bottle without stumbling over one of my cats, have a drink, and safely waddle back to bed. Another tap on my app screen, and the light fades away; I sleep.

This is my reality, and an increasingly common one in many people's homes. The only difference? My tech is mostly self-made—with few exceptions, of course.

Our homes slowly but surely fill up with small, smart gadgets that make our control over our surroundings stronger and more precise. These are the tools that help us create customizable, responsive home environments that anticipate our needs and intelligently adapt to our rhythms. Perhaps these gadgets even make us a little spoiled, but mostly, they help us feel comfortable in and educated about our environments. Never before have we known so much in precisely quantified units about our living spaces, working spaces and ourselves. Smart homes tell us about the air we breathe, the water we drink, and the temperature and humidity in which we live, and vigilantly guard us from strangers or threats. They have become an extension of ourselves in the best possible way, automating important but mundane tasks so we can focus on more important things, like maintaining the late-night cat/human detente.

The Four Factors That Led to IoT

What are the circumstances that allowed this wonderful reality to materialize? From my perspective, four factors are largely responsible:

  1. Tiny, inexpensive, power-efficient-yet-powerful processors replacing older, simpler microcontrollers.

  2. Cloud processing becoming cheap enough to be accessible and affordable for large and small companies alike.

  3. Smartphones becoming powerful multicore computers that are practically ubiquitous.

  4. Linux making it remarkably easy to spin up a smart application that can run on anything from a toaster to a spaceship with minimal effort, and that same Linux OS powering the back-end cloud side.

Let's flesh this out a little more.

Microcontrollers:

Silicon vendors have improved manufacturing technologies so much that each transistor in a processor is now imperceptibly tiny (14 nanometer technology!). They bring magic to life with their new lines of low-powered systems on chips (SOCs) and have brought the market to the point where a chip the size of a small postage stamp costs $2–$7 and still has:

  • A full hardware network stack (that is, a built-in capability to connect to the Internet).

  • A dual-core ARM family processor.

  • Enough computation power to run an operating system.

  • Plenty of other useful features that all run off a single coin cell battery.

All you have to do is stick it in a package—a watch, a small gadget on the wall, a light bulb, or whatever your company desires—and you instantly "smartify" your product. This kind of luxury (which wasn't available even a decade ago) has driven companies to opt for the use of true operating systems on their devices (namely Linux) and to forgo the older, more difficult and less efficient path of direct microcontroller programming with a single "forever" loop and every software aspect done in-house.

The Cloud:

At the same time, organizations like Amazon are offering their cloud infrastructures to anyone for reasonable costs. This enables companies to wield unimaginable computational power—still running Linux, mind you—without investing anything in the purchase, physical installation or maintenance of the hundreds of powerful processing machines at their disposal. A company of two people can bring distributed services to its customers that require intensive computational power; this would have been a complete deal-breaker (even for large companies) in the pre-cloud era.

Smartphones:

There is not much to be added about the role of smartphones in the "smartification" of our environment. As a society, we quickly adapted to the beautiful, intuitive control interfaces and quick response times, and they've become so much a part of our daily lives that parting with one, even just for a few hours, causes most people to feel naked. What many people do not realize, however, is the vast computational power in each and every modern smartphone. Just to bring a few personal reference points into the picture:

  • 1993: my first computer was an Intel 386 40MHz with a whopping 8MB of RAM, 170MB of hard drive storage and a 1MB graphics accelerator.

  • 2015: my smartphone, a Samsung Galaxy, has a Qualcomm Snapdragon with a quad-core 2.5GHz processor, 2GB of RAM, 32GB of fast Flash storage and a 32-pipeline 3D hardware graphics accelerator.

To review, just looking at CPU speed and quantity of cores, my phone has roughly 250 times more computation power than my desktop did. I'm not even taking into account the so-very-useful additional features that every phone packs nowadays—portability, elegant UI, constant connectivity and precise sensors like GPS, accelerometers, gyroscopes, magnetic compasses, ambient light sensors and megapixel/HD cameras. All of those factors are pure gold for making our environment smart and creating IoT concepts.

Linux:

Linux is the final component that makes the Internet of Things a reality—the glue that holds everything together. How, you ask? Well, let's look at a typical product and try to understand how Linux contributes to and affects each step of development.

Let's start with the end point—the wearable or home-based gadget. It usually hides either a high- or low-end ARM-based SOC inside it. (Considering that even low-end microcontrollers are capable of running a small operating system, it's safe to assume this one could as well.) Now, as a company, what sounds easier: a) building an original software environment from scratch that includes task scheduling, memory management, peripheral access that supports multiple technologies (for example, I2C, SPI, SDIO and so on) and creating your own implementation of a network stack, including various cryptography solutions to support the Secure Sockets Layer (SSL, the almost omnipresent secure communication standard over the Internet); or b) taking a free, constantly evolving and improving operating system tested by billions that provides all of those things and more? Obviously, b is the clear winner.

The Rise of Linux

Through the years, Linux has become such a complete solution that you would need to find an incredibly convincing argument to choose an alternate approach to a hardware OS. Not only is the Linux kernel versatile and easy to tweak and adjust to fit the needs of a project exactly, but it's also cross-compilation-friendly; Linux can be brought to almost any given platform, and as operating systems go, it can be very low maintenance in terms of hardware resources. This is an operating system internally built to support almost any imaginable hardware layout or peripheral, and while it's not completely plug-and-play and certainly requires an investment of labor, it comes ready for such work and aims to make it as easy as possible.

Let's move on to the next part of the chain—the cloud, where the power of Linux manifests itself from a different angle. Now, ideally, we want an operating system that is capable of managing vast computation power, many processors on one machine and a high-throughput network stack. And if you don't mind, could you please throw in off-the-shelf, powerful solutions for different aspects of the cloud infrastructure? Oh, and could you also make it all free? Yes, it's all that: your operating system is that same Linux, but this time, it's tweaked to answer your server-side demands, bundled with most of the additional components any company needs (such as message queues, caching, Web servers, databases and so on), and it comes at no cost thanks to the Open Source community. This is mind-bogglingly wonderful.

The last part is the smartphone. Here, it gets a little trickier—how are an Android or iOS phone, an Internet of Things solution, and Linux all connected? Indeed, the link is a less obvious one. (And as a tangent, let's quickly call out that Android is, in fact, Linux! It's the same old Linux with a little polish on top known as the Android services layer.)

The main thing is that Linux has developed a wide, Open Source community that creates and maintains the various infrastructure solutions (like the ones mentioned above), and these same solutions get adjusted and ported to work in the mobile environment as well. To that end, even if Linux itself does not necessarily run on the mobile phone, its derivatives and side products often are present, simplifying the process of integrating the smartphone as part of the Internet of Things ecosystem.

The Inherent Benefits of Linux

So far, I've discussed the direct effect of the evolution of processors on the adoption of Linux, and the versatility of Linux as a factor in its selection as the operating system of choice on both the front end and the back end/cloud. However, there are other aspects of using Linux that benefit companies and further its proliferation in the Internet of Things ecosystem.

First, off-the-shelf solutions that help companies develop and deploy solutions without re-inventing the wheel are available on the cloud side, and are applicable at all the points where Linux is used. This means that your hundreds of cloud servers and tiny smart light bulbs potentially could be using the same exact code base for the same purposes—for example, message parsing or queuing—thus saving money, time and manpower.

Second, manpower and skill set also are transferable across different components, provided that both run Linux. A startup script for a toaster and a startup script for a cloud database server require the exact same Linux system administration skills, meaning your personnel could be more productive and contribute to more parts of the system. This allows for easier transfers between different teams, and as a result, happier engineers. Although this may sound like a soft benefit, this is a very serious advantage for companies that build on Linux.

Finally, it is good to note the benefit of community development. Linux and its components are being developed, used and maintained by a huge community. This is a prime example of Linus's Law ("given enough eyeballs, all bugs are shallow"), meaning that most bugs eventually will be detected and (usually sooner rather than later) fixed by the community that acts as an innate immune system. Thus, the products developed gain the benefit of being more robust and less error-prone.

A Practical Application

One of the things that has been keeping me busy at work lately is the integration of different sensors into our product. Sensors are tricky little guys; some are simple and straightforward, not much more than a resistor that changes according to environmental factors. (Think of a thermistor that's affected by temperature or a photoresistor that changes based on the amount of light it absorbs.) Some are trickier and need a groundwork of special mechanisms before they're used, like I2C or SPI bus-connected sensors.

Although the first kind of sensor can be integrated easily almost anywhere, usually by reading values off a built-in analog-to-digital converter—a feature that exists on most modern SOCs—the second kind would drive you bananas before you could force it to talk to you—unless, that is, your operating system already supports it.

Case in point: let's look at the integration of one of those trickier sensors—an I2C-based temperature and humidity sensor by Sensirion.

The sensor has a slave address of 0x70, and we'll use a single "give us data" command word 0x5C24 to mean "give us a humidity sample followed by a temperature sample".

First, let's assume we're not using Linux and working "bare-bones", so to speak. The SOC in question has a built-in I2C controller—in fact, it has two—and the controller is represented by a collection of registers mapped to the physical memory. Communicating through it would mean changing those values in a timely fashion to make the communication possible. However, there are a few twists; there are three I2C buses that are multiplexed between the two controllers because the second controller handles two buses, and there are multiple devices on each of the buses.

Here is "simple code" to write the bytes to request temperature from the sensor:


mov     r0, #CONTROLLER_BASE
orr     r0, r0, #DEVICE_OFFSET
mov     r1, #0x01
str     r1, [r0, #PRESCALE_LOW]
mov     r1, #0x01
str     r1, [r0, #I2C_ENABLE]
mov     r1, #0x04
str     r1, [r0, #FMCTRL]
mov     r1, #0xE0 // <-- slave address (0x70 shifted left, write bit clear)
str     r1, [r0, #FMDATA]
mov     r1, #0x5C // <-- first part of measurement command
str     r1, [r0, #FMDATA]
mov     r1, #0x24 // <-- second part of measurement command
str     r1, [r0, #FMDATA]
mov     r1, #0x0A
str     r1, [r0, #FMCTRL]

(Keep in mind, this is before waiting for acknowledgement and later reading the actual output values.)

To put it simply, you have to do everything yourself and control the communication process end to end. And as a bonus, it usually completely messes up the readability of your source code.

Now, let's look at the same exact example with Linux i2c support in place (assume there's an open handle for the I2C device):


static char sample_cmd[] = { 0x5c, 0x24 };
ioctl(i2c_fd, I2C_SLAVE, 0x70); /* I2C_SLAVE takes the 7-bit address */
write(i2c_fd, sample_cmd, 2);

Needless to say, reading the output is just as simple, making Linux the obvious solution for this application.

Looking Ahead

I believe that the Internet of Things is not a passing trend, and together with continued improvement in the low-end processors market and decreasing prices of cloud computing, we'll see more and more solutions showing up all around us.

Our homes are the first front of IoT adoption, and with the explosion of smart wearables, our bodies are the second wave. What will the next canvas be?

The main trend for the next decade will be an increased number of interconnected devices filling up the space around us, perhaps with some harvesting green energy as the power demands for such devices become smaller and smaller. Perhaps the streets will become smart and interconnected? I easily can imagine streetlights responding to human presence and adjusting according to personal preferences ("I feel like walking the streets in purple today."), thus conserving energy by not illuminating unattended streets. Or we could see more interconnected engagement with our cars, which already is happening, but on a small scale compared to the potential. Eventually, we will all live in a world that responds very precisely to our needs and wants, a more pleasant space where it is safer for our children and ourselves to live. As for Linux, it will continue to play a crucial part in these advances, either as the main platform of choice or as the platform on top of which all new solutions will be developed. It is possible that soon enough low-end processors will be so powerful that bare-bones Linux will be replaced with Android as the chosen solution for gadgets, but worry not; it will still be the same old Linux, just with benefits.

On this note, I have to finish. My cats are getting hungry and my last attempt at making a smart cat feeder did not end well—for the feeder, that is. Some things remain to be done by hand, at least for now, but I know I'll have better success with the larger/non-feline projects in my life as long as I have Linux.
