Embedding Linux in a Commercial Product

A look at embedded systems and what it takes to build one.
Tools—Breaking the ICE Barrier

A key element in developing embedded systems is the set of available tools. Like any craft or profession, good tools help to get the job done faster and better. At different stages of development, different tools may be required.

Traditionally, the first tool used to develop embedded systems was the in-circuit emulator (ICE). This is a relatively expensive piece of equipment that hooks into the circuitry between the microprocessor and its bus, allowing the user to monitor and control all activity in and out of the microprocessor. An ICE can be difficult to set up and, because of its invasive nature, can cause erratic behavior. However, it gives a very clear picture of what is happening at the bus level and eliminates a lot of guesswork at the very lowest level of the hardware/software interface.

In the past, some projects relied on this as the primary debugging tool, often through all stages of development. However, once the initial software works well enough to support a serial port, most debugging can be done without an ICE using other methods. Also, most newer embedded systems use a fairly cookbook microprocessor design. Often, corresponding working startup code is available that can be used to get the serial port working in short order. This means that one can often get along quite nicely without an ICE. Eliminating the ICE stage lowers the cost of development. Once the serial port is up, it can be used to support several layers of increasingly sophisticated development tools.

Linux is based on the GNU C compiler, which, as part of the GNU tool chain, works with the gdb source-level debugger. This provides all the software tools you need to develop an embedded Linux system. Here is a typical sequence of debug tools used to bring up a new embedded Linux system on new hardware.

  1. Write or port startup code. (We will talk more about this later.)

  2. Write code to print a string on the serial port, e.g., “Hello World”. (Actually, I prefer “Watson, come here I need you”, the first words spoken over a telephone.) A minimal sketch of this step appears after this list.

  3. Port the gdb target code (the gdb remote stub) to work over the serial port. This stub talks to another Linux “host” system running the gdb program. You simply tell gdb to debug the program via the serial port; it then talks over the serial line to the gdb stub on your test computer, giving you full C source-level debugging. You may also want to use this same capability to download additional code into RAM or flash memory.

  4. Use gdb to get the rest of the hardware and software initialization code to work, to the point where the Linux kernel starts up.

  5. Once the Linux kernel starts, the serial port becomes the Linux console port and can be used for subsequent development. For kernel-level debugging, use kgdb, the kernel debugging stub that lets gdb debug the running kernel; often, this step is not required. If you have a network connection, such as 10BaseT, you will probably want to get it working next.

  6. Once you have a fully functional Linux kernel running on your target hardware, you can debug your application processes. Use either gdb or a graphical front end to gdb such as xgdb.
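
By way of illustration, here is a minimal sketch of step 2 for a PC-style target with a 16550-compatible UART at the conventional COM1 address. The port address, the polled-output approach and the x86 inb/outb helpers are assumptions for this example, and the UART is presumed to have already been set up (baud rate, framing) by the startup code of step 1.

#define COM1_BASE 0x3F8            /* COM1 data register (assumed address) */
#define LSR       (COM1_BASE + 5)  /* line status register                 */
#define THR_EMPTY 0x20             /* transmitter holding register empty   */

static inline unsigned char inb(unsigned short port)
{
    unsigned char v;
    __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static inline void outb(unsigned char v, unsigned short port)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(v), "Nd"(port));
}

static void serial_putc(char c)
{
    while (!(inb(LSR) & THR_EMPTY))  /* poll until the UART can accept a byte */
        ;
    outb((unsigned char)c, COM1_BASE);
}

static void serial_puts(const char *s)
{
    while (*s)
        serial_putc(*s++);
}

void start(void)                     /* entered from the startup code of step 1 */
{
    serial_puts("Watson, come here I need you\r\n");
    for (;;)
        ;                            /* nothing else to do yet */
}

For step 3, the host side is just stock gdb: start it with your program's symbol file and attach over the serial line with the command “target remote /dev/ttyS0” (or whichever host port is wired to the target).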

Real Time—Says Who?

Simply put, the majority of real-time systems aren't. Embedded systems are often misclassified as real-time systems, but most simply do not require real-time capabilities. Real time is a relative term. Purists will often define hard real time as the need to respond to an event in a deterministic manner and within a short time, i.e., microseconds. Increasingly, hard real-time functions in this tight time range are being implemented in dedicated DSP (digital signal processor) chips or ASICs (application-specific ICs). Also, these requirements are often simply designed out through the use of deeper hardware FIFOs, scatter/gather DMA engines and custom hardware.

Many designers agonize over the need for real-time performance without a clear understanding of what their real requirements are. For most systems, near real-time response in the one- to five-millisecond range is sufficient. Also, a softer requirement may be quite acceptable, something like:

The Windows 98 Crashed_Yet monitor interrupt must be processed within 4 milliseconds 98% of the time, and within 20 milliseconds 100% of the time.

These soft requirements are much easier to achieve. Meeting them involves a discussion of context switch time, interrupt latency, task prioritization and scheduling. Context switch time was once a hot topic among OS folks. However, most CPUs handle this acceptably well, and CPU speeds have gotten fast enough that this has ceased to be a major concern.
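
A requirement phrased this way can also be measured rather than argued about. The sketch below, using only standard POSIX timing calls, runs a periodic loop and counts how often the wakeup is more than 4 or 20 milliseconds late; the 10 ms nominal period and the sample count are arbitrary assumptions for the example.

#include <stdio.h>
#include <time.h>

#define SAMPLES   1000
#define PERIOD_NS 10000000L                 /* nominal 10 ms period (assumed) */

static long elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    struct timespec before, after, period = { 0, PERIOD_NS };
    int over4ms = 0, over20ms = 0;

    for (int i = 0; i < SAMPLES; i++) {
        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&period, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);

        long late = elapsed_ns(before, after) - PERIOD_NS;  /* wakeup lateness */
        if (late > 4000000L)  over4ms++;
        if (late > 20000000L) over20ms++;
    }

    printf("late > 4 ms: %d of %d    late > 20 ms: %d of %d\n",
           over4ms, SAMPLES, over20ms, SAMPLES);
    return 0;
}

If the measured distribution stays within the stated percentages under realistic load, the soft requirement is met without any special real-time machinery.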

Tight real-time requirements should usually be handled by an interrupt routine or other kernel-context driver functions in order to assure consistent behavior. Interrupt latency, the time required to begin servicing the interrupt once it has occurred, is largely determined by interrupt priority and by other software that may temporarily mask the interrupt.
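
For concreteness, here is a minimal sketch of a kernel-context interrupt handler in the module style of the 2.2-era kernels; the device name, IRQ number and I/O port are placeholders, not a real driver.

#include <linux/module.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <asm/io.h>

#define WIDGET_IRQ   5        /* hypothetical interrupt line   */
#define WIDGET_PORT  0x300    /* hypothetical I/O base address */

static void widget_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
    unsigned char status;

    /* Do only the time-critical work here: acknowledge the device and
     * capture its data; anything slow is deferred to process context. */
    status = inb(WIDGET_PORT);

    /* ... queue the data for a bottom half or an application process ... */
}

int init_module(void)
{
    /* SA_INTERRUPT runs the handler with interrupts disabled, which keeps
     * its own timing tight but adds latency for every other interrupt. */
    return request_irq(WIDGET_IRQ, widget_interrupt, SA_INTERRUPT,
                       "widget", NULL);
}

void cleanup_module(void)
{
    free_irq(WIDGET_IRQ, NULL);
}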

Interrupts must be engineered and managed to assure that the timing requirements can be met, just as with any other operating system. On Intel x86 processors, this job can be handled quite nicely by the real-time extension to Linux (RTLinux, see http://www.rtlinux.org/). This essentially provides an interrupt-processing scheduler that runs Linux as its background task. Critical interrupts can be serviced without the rest of Linux knowing about them, so you get a great deal of control over critical timing. Interfaces are then provided between the real-time level and the normal Linux level, where timing constraints are relaxed. This provides a real-time framework similar to that of other embedded operating systems. In essence, the real-time critical code is isolated and “engineered” to meet the requirement, and the results of this code are handled in a more generic manner, perhaps at the application task (process) level.
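
On the Linux side of such a split, the data produced by the real-time code commonly arrives through a real-time FIFO that looks like an ordinary character device to a normal process. The sketch below assumes the conventional RTLinux FIFO node /dev/rtf0 and a fixed 16-byte record produced by the real-time side; both are placeholders for whatever the real-time code actually provides.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

#define RECORD_SIZE 16    /* hypothetical record written by the RT code */

int main(void)
{
    char buf[RECORD_SIZE];
    int fd = open("/dev/rtf0", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/rtf0");
        return 1;
    }

    /* Block until the real-time side hands over a record, then handle it
     * at ordinary process priority, free of tight timing constraints. */
    while (read(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf)) {
        /* ... log it, update state, notify other processes, etc. ... */
    }

    close(fd);
    return 0;
}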
