Real-Time Applications with RTLinux
The first widely used release of RTLinux, V1, was a simple system, really intended only for low-end x86-based computers. The V1 API was homegrown, designed for the convenience of the implementors without much forethought. By 1998 the RTLinux developers realized that if they stayed with the V1 API, it would have to be extended and patched up to work around old assumptions. Now that other people were using RTLinux in serious applications, the design team wanted a more durable and, well, standard API. The challenge was to move toward a standard API without sacrificing the speed, efficiency and lightweight structure that made RTLinux useful and interesting in the first place.
The team started by assuming that POSIX was out of the question--too big, slow, and incompatible with a lightweight real-time operating system. But the POSIX 1003.13 PSE51 specification defines a standard API that fit surprisingly well. POSIX PSE51 is for a "minimal real-time system profile". Systems following this standard look like a single POSIX process with multiple threads (see Figure 1). Essentially, PSE51 puts threads and signal handlers on a bare machine. So RTLinux ended up as a PSE51 POSIX real-time process where one of the threads is Linux, itself a POSIX operating system (see Figure 2).
POSIX 1003.13 permits sensible limitations on the POSIX services provided in the minimal real-time environment. For example, while any POSIX-compliant operating system must support the open() system call, PSE51 allows strict limitation of the file space, so that support for a general-purpose file system is unnecessary. Opening a file in a general-purpose POSIX operating system is anything but real-time: it might require multiple disk reads following symbolic and hard links, and even network operations. The standard RTLinux open() will open /dev/x for a fixed set of real-time devices and will not support any other path names.
RTLinux is structured as a small core component and a set of optional components.
The core component permits the installation of very low-latency interrupt handlers that cannot be delayed or preempted by Linux itself, and it provides some low-level synchronization and interrupt control routines. This core has been extended to support SMP, and at the same time it has been simplified by removing functionality that can be provided outside the core.
The majority of RTLinux functionality is in a collection of loadable kernel modules that provide optional services and levels of abstraction. These modules include: a scheduler, a component to control the system timers, a POSIX I/O layer, real-time FIFOs, and a shared memory module (a package contributed by Tomasz Motylewski).
The key RTLinux design objectives are that the system should be transparent, modular and extensible. Transparency means that there are no unopenable black boxes, and that the cost of any operation should be determinable. Modularity means that it is possible to omit functionality (and that functionality's expense) if it isn't needed. To support this, the simplest RTLinux configuration supports high-speed interrupt handling and no more. And extensibility means that programmers should be able to add modules and tailor the system to their requirements. As an obvious example, the RTLinux simple priority scheduler can easily be replaced by schedulers more suited to the needs of some specific application.
While developing RTLinux, we have tried to maximize the advantage we get from having Linux and its powerful capabilities available. In fact, RTLinux Development Rule Number One is: if a service or operation is inherently non-real-time, it should be provided in Linux and not in the RT environment.
To facilitate this, RTLinux provides three standard interfaces between real-time tasks and Linux. The first is the RT-FIFO device interface mentioned above, the second is shared memory, and the third is the pthread signal mechanism, which allows real-time threads to generate soft interrupts for Linux. For example, a primitive data acquisition system might consist of a single real-time interrupt handler that collects data from an A/D device and dumps that data into an RT-FIFO, while on the Linux side, logging is taken care of by the shell command line:
cat /dev/rtf_a2d > log
A more elaborate system might use a C program to receive and process data from an RT-FIFO, a Tcl/Tk front end to control the data flow and send control commands to the real-time handler via a second RT-FIFO, and a Perl script sending the data over a network to a second machine for processing and graphical display (see Figure 3).
A more advanced version of this interface between RTLinux and Linux can be seen in the Real-Time Controls Laboratory (RTiC-Lab), an open-source, hard real-time controller system that frees controls engineers from the design of the hard real-time tasks and allows them to focus on the controller algorithms themselves. RTiC-Lab uses RTLinux for the implementation of hard real-time controllers and I/O, and it uses Linux for networking, the GTK+ graphical user interface (through which users can start and stop their controller, update controller parameters, and get real-time data from the controller task) and IPC with other non-real-time tasks (such as plotting packages, FFT algorithms and other user applications). RTiC-Lab uses a mixture of RT-FIFOs and shared memory to communicate between RTLinux and Linux.