Embedding Linux in a Commercial Product

A look at embedded systems and what it takes to build one.
Embedded System—a Definition

One view is that if an application does not have a user interface, it must be embedded, since the user does not directly interact with it. This is, of course, overly simplistic. An elevator-control computer is considered embedded, yet it has a user interface: buttons to select the floor and an indicator to show which floor the elevator is on. For embedded systems connected to a network, the distinction blurs even further if the system contains a web server for monitoring and control. A better definition might focus on the intended functions or primary purpose of the system.

Since Linux provides both a basic kernel for performing the embedded functions and all the user-interface bells and whistles you could ever want, it is very versatile: it can handle both embedded tasks and user interfaces. Look at Linux as a continuum, scaling from a stripped-down micro-kernel with nothing but memory management, task switching and timer services, to a full-blown server supporting a full range of file system and network services.

A minimal embedded Linux system needs just these essential elements:

  • a boot utility

  • the Linux micro-kernel, composed of memory management, process management and timing services

  • an initialization process

To get it to do something useful and still remain minimal, you need to add:

  • drivers for hardware

  • one or more application processes to provide the needed functionality

As you add more capabilities, you might also need these:
  • a file system (perhaps in ROM or RAM)

  • a TCP/IP network stack

  • a disk for storing semi-transient data and swap capability
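
To make the “initialization process” item above concrete, here is a minimal sketch of a custom init written in C. The application path /bin/app and the mounting of the proc file system are illustrative assumptions, not requirements of any particular system:

/*
 * Minimal init sketch: mount /proc, start the application,
 * then reap children forever. Paths are illustrative.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mount.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid;

    /* Mount the proc file system so utilities that read it still work. */
    if (mount("proc", "/proc", "proc", 0, NULL) < 0)
        perror("mount /proc");

    /* Start the single application process that does the real work. */
    pid = fork();
    if (pid == 0) {
        execl("/bin/app", "app", (char *)NULL);
        perror("execl /bin/app");     /* reached only if the exec fails */
        _exit(1);
    }

    /* As process 1, loop forever reaping any children that exit. */
    for (;;) {
        if (wait(NULL) < 0)
            pause();                  /* no children left; idle until a signal */
    }
}

In the simplest systems, a program like this, plus the kernel and a handful of drivers, is essentially the entire software load.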

Hardware Platforms

Choosing the best hardware is a complex job, fraught with the tar pits of company politics, prejudices, legacies of other projects and incomplete or inaccurate information.

Cost is often a key issue. When weighing costs, make sure you look at the total product cost, not just the CPU. A fast, cheap CPU can turn into an expensive dog of a product once you add the bus logic and delays needed to make it work with your peripherals. If you are a software geek, chances are the hardware decisions have already been made. However, if you are the system designer, it is your job to do the due diligence: make a real-time budget and satisfy yourself that the hardware can handle the job.

Start with a realistic view of how fast the CPU needs to run to get the job done—then triple it. It is amazing how fast theoretical CPU capacity disappears in the real world. Don't forget to factor in how your application will utilize any cache.

Also, figure out how fast the bus needs to run. If there are secondary buses, such as a PCI bus, include them as well. A slow bus, or one saturated with DMA traffic, can slow a fast CPU to a crawl.
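
As a purely illustrative budget: suppose the application must service 1,000 interrupts per second, and each interrupt costs roughly 50,000 instructions of handling and follow-on processing. That is 50 MIPS of work before any margin, or about 150 MIPS once you triple it. The same back-of-the-envelope check applies to the bus: a peripheral that streams 20MB/s of DMA across a bus with a practical throughput of 30MB/s leaves only a third of the bandwidth for the CPU's own memory traffic.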

CPUs with integrated peripherals are nice because there is less hardware to be debugged, and working drivers are frequently already available to support the popular CPUs. However, in my projects, these chips always seem to have the wrong combination of peripherals or don't have the capabilities we need. Also, just because the peripherals are integrated, don't assume this leads to the cheapest solution.

Squeezing 10 Pounds of Linux into a 5-Pound Bag

One of the common perceptions about Linux is that it is too bloated to use in an embedded system. This need not be true. The typical Linux distribution set up for a PC has more features than an embedded system needs, and usually more than the PC user needs as well.

For starters, let's separate the kernel from the tasks. The standard Linux kernel is always resident in memory. Each application program that runs is loaded from disk into memory, where it executes. When the program finishes, the memory it occupies is released; that is, the program is unloaded.

In an embedded system, there may be no disk. There are two ways to remove the dependence on a disk, depending on the complexity of the system and the hardware design.

In a simple system, the kernel and all application processes are resident in memory when the system starts up. This is how most traditional embedded systems work, and it can also be supported by Linux.

With Linux, a second possibility opens up. Since Linux already has the ability to “load” and “unload” programs, an embedded system can exploit this to save RAM. Consider a typical system with 8 to 16MB of flash memory and 8MB of RAM. The flash memory can be organized as a file system, with a flash driver used to interface the flash to the file system. Alternatively, a flash disk can be used: a flash part that emulates a disk to the software. One example is the DiskOnChip from M-Systems (http://www.m-systems.com/), which can support up to 160MB. All of the programs are stored as files on the flash file system and are loaded into RAM as needed. This dynamic “load on demand” capability makes it possible to support a range of features:

  • It allows the initialization code to be discarded after the system boots. Linux typically uses a number of utility programs that run outside the kernel. These usually run once at initialization time, then never again. Furthermore, these utility programs can run sequentially, one after the other, in a mutually exclusive fashion. Thus, the same memory can be used over and over to “page in” each program, as the system boots. This can be a real memory saver, particularly for things like network stacks that are configured once and never changed.

  • If the Linux loadable module feature is included in the kernel, drivers can be loaded as well as application programs. The software can check the hardware environment and adaptively load only the appropriate software for that hardware (see the sketch following this list). This eliminates the complexity of having one program handle many variations of the hardware, at the cost of more flash memory.

  • Software upgrades are more modular. You can upgrade the application and loadable drivers on the flash, often while the system is running.

  • Configuration information and runtime parameters can be stored as data files on the flash.
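
As a sketch of the adaptive driver loading mentioned above, the logic might look like the following. The /proc test, the module names and the use of insmod are illustrative assumptions; a real system would probe whatever distinguishes its hardware variants, and the flash file system is assumed to be mounted already (for example, at /flash):

/*
 * Adaptive driver loading sketch: probe the hardware environment,
 * then load only the driver module this board actually needs.
 * Device names, module names and paths are illustrative.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical probe: is a PCI bus present on this board? */
    if (access("/proc/bus/pci", F_OK) == 0) {
        /* PCI variant: load the PCI Ethernet driver from flash. */
        if (system("insmod /flash/modules/eth_pci.o") != 0)
            fprintf(stderr, "could not load eth_pci driver\n");
    } else {
        /* Otherwise fall back to the on-board controller's driver. */
        if (system("insmod /flash/modules/eth_onboard.o") != 0)
            fprintf(stderr, "could not load eth_onboard driver\n");
    }
    return 0;
}

Whether the probe reads /proc, a device register or a configuration file stored on the flash is a design choice; the point is that only the driver the hardware actually needs ever occupies RAM.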
