Embedding Linux in a Commercial Product

A look at embedded systems and what it takes to build one.
Booting—Where's LILO and the BIOS?

When a microprocessor first powers up, it begins executing instructions at a predetermined address. Usually there is some sort of read-only memory at that location, which contains the initial start-up or boot code. In a PC, this is the BIOS. It performs some low-level CPU initialization and configures other hardware. The BIOS goes on to figure out which disk contains the operating system, copies the OS to RAM and jumps to it. Actually, it is significantly more complex than that, but this is sufficient for our purposes. Linux systems running on a PC depend on the PC's BIOS to provide these configuration and OS-loading functions.

In an embedded system, there often is no such BIOS, so you need to provide the equivalent startup code. Fortunately, an embedded system does not need the flexibility of a PC BIOS boot program, since it usually deals with only one hardware configuration. The code is simpler and tends to be fairly boring: it is just a list of instructions that jam fixed numbers into hardware registers. It is critical code, however, because the values must be correct for your hardware, and the writes often must happen in a specific order. In most cases, there is also a minimal power-on self-test module that sanity-checks the memory, blinks some LEDs and may exercise other hardware needed to get the main Linux OS up and running. This startup code is highly hardware-specific and not portable.

Fortunately, most systems use a fairly cookbook hardware design for the core microprocessor and memory. Typically, the chip manufacturer has a demo board that serves as a reference design, and the new design more or less copies it. Startup code is often available for these cookbook designs and can be modified for your needs fairly easily. Rarely will new startup code need to be written from scratch.

To test the code, you can use an in-circuit emulator containing its own “emulation memory”, which replaces the target memory. You load the code into the emulator and debug through it. If an emulator is not available, you may skip this step, but count on a longer debug cycle.

This code ultimately needs to run from some non-volatile memory, usually either a flash or an EPROM chip. You will need some way to get the code into the chip; how this is done depends on the target hardware and tools.

One popular method is to plug the flash or EPROM chip into an EPROM or flash “burner”, which “burns” (stores) your program into the chip. Then plug the chip into a socket on the target board and turn on the power. This method requires the part to be socketed on the board, and some device package formats cannot be socketed.

Another method is via a JTAG interface. Some chips include a JTAG port that can be used to program the chip, and this is the most convenient way to do it. The chip can be permanently soldered onto the board, with a small cable running from a JTAG connector on the board to a JTAG interface, usually a card in a PC. The downside is that some custom programming is usually required on the PC to operate the JTAG interface. The same facility can also be used in production for smaller-quantity runs.

Robustness—More Reliable than a Politician's Promise

Linux is generally considered to be very reliable and stable when running on PC hardware, particularly when compared to a popular alternative. How stable is the embedded kernel itself? For most microprocessors, Linux is quite good. A Linux kernel port to a new microprocessor family usually covers more than just the microprocessor: typically, the kernel is ported to one or more specific target boards, which include particular peripherals as well as the CPU.

Fortunately, much of the kernel code is processor-independent, so porting concentrates on the differences. Most of these are in the memory management and interrupt handling areas. Once these are ported, they tend to be fairly stable. As discussed before, boot strategies vary depending on the hardware specifics, and you should plan on doing some customization.

The device drivers are more of a wild card: some are more stable than others. Also, the selection is rather limited; once you leave the ubiquitous PC platform, you may need to create your own. Luckily, many device drivers are floating around, and you can probably find one close to what you need that can be modified. The driver interfaces are well-defined. Most drivers of a like kind are fairly similar, so migrating a disk, network or serial port driver from one device to another is usually not too difficult. I have found most drivers to be well-written and easy to understand, but keep a book on the kernel structures handy.

In my experience, Linux is at least as stable as the big-name commercial operating systems with which I have worked. Generally, the problems with both those operating systems and Linux stem from misunderstanding the subtleties of how things work, rather than from hard coding bugs or basic design errors. Plenty of war stories abound for any operating system and need not be repeated here. The advantage of Linux is that the source code is available, well-commented and very well-documented. As a result, you are in control of dealing with any problems that come up.

Along with the basic kernel and device drivers, some additional issues arise. If the system has a hard disk, the reliability of the file system comes into question. We have over two years of field experience with an embedded Linux design that employs a disk. These systems are almost never shut down properly; power just gets disconnected at random times. The experience has been very good, using the standard ext2 file system. The standard Linux initialization scripts run the fsck program, which does an excellent job of checking and cleaning up any dangling inodes. One change that may be wise is to run the update program at a five- or ten-second interval instead of the default 30 seconds. This shortens the time window that data sits in the local memory cache before being flushed to disk, thus lowering the probability of losing data.
