Real Time and Linux
Linux is well tuned for throughput-limited applications, but it is not well designed for deterministic response, though enhancements to the kernel are available to help or guarantee determinism. So-called real-time applications require, among other things, deterministic response. In this article I examine the nature of real-time applications and Linux's strengths and weaknesses in supporting such applications. In later articles I will examine various approaches to help real-time applications satisfy hard real-time requirements. Most of these issues are with respect to the Linux kernel, but the GNU C library, for example, plays a part in some of this.
Many definitions are available for the terms relating to real time. In fact, because they have different requirements, different applications demand different definitions. Some applications may be satisfied by some average response time while others may require that every deadline be met.
The response time of an application is that time interval from when it receives a stimulus, usually provided via a hardware interrupt, to when the application has produced a result based on that stimulus. These results may be such things as the opening of a valve in an industrial control application, drawing a frame of graphics in a visual simulation or processing a packet of data in a data-acquisition application.
Let's consider the valve scenario. Imagine a conveyor belt carrying parts to be painted past a paint nozzle, with a sensor positioned just before the nozzle. When a part is in just the right position, the sensor alerts our system to open the valve on the paint nozzle so that paint is sprayed onto the part. Do we need this valve to open at just the right time on average, or every time? Every time would be nice.
We need to open the valve no later than the latest moment at which we can still begin painting the part properly. We also must not close the valve before we have finished painting the part. At the same time, it is desirable not to keep the valve open any longer than necessary, since we really don't want to be painting the rest of the conveyor belt.
We say that the latest possible time to open the valve and still accomplish the proper painting is the deadline. In this case if we miss the deadline, we won't paint the part properly. Let's say that our deadline is 1ms. That is the time from when the sensor alerts us until the time we must have begun painting. To be sure that we are never late, let's say we design our system to begin painting 950µs after we receive the sensor interrupt. In some cases we may begin painting a little before, and sometimes a little afterward.
Of course, it will never be the case that we start painting exactly 950µs after the interrupt, with infinite precision. For instance, we may be early by 10µs one time and late by 13µs the next. This variance is termed jitter. We can see from our example that if our system were to provide large jitter, we would have to move our target time significantly below 1ms to be assured of satisfying that deadline. This also would mean that we frequently would be opening the valve much sooner than actually required, which would waste paint. Thus, some real-time systems will have requirements in terms of both deadlines and jitter. We assume that our requirements say that any missed deadlines are failures.
The term operating environment means the operating system as well as the collection of processes running, the interrupt activity and the activity of hardware devices (like disks). We want to have an operating environment for our real-time application that is so robust we are free to run any number of any kind of applications concurrently with our real-time application and still have it perform acceptably.
An operating environment in which we can determine the worst-case time for a given response is deterministic. Operating environments that don't allow us to determine a worst-case time are called nondeterministic. Real-time applications require a deterministic operating environment, and real-time operating systems are capable of providing one.
Nondeterminism is frequently caused by algorithms that do not run in constant time; for example, an operating system's scheduler may have to traverse its entire run list to decide which process to run next. This algorithm is linear, notated as O(n) and read "on the order of n". That is, as n (the number of processes on the run list) grows, the time to decide grows proportionally. With an O(n) algorithm, unless n itself is bounded, there is no upper bound on the time the algorithm will take. If your response time depends upon your sleeping process being awakened and selected to run, and the scheduler is O(n), then you will not be able to determine the worst-case time. This is a property of the Linux scheduler.
In an environment where the system designer cannot control the number of processes that users of the system may create, this matters. In an embedded system, where characteristics of the system, such as the user interface, make it impossible for there to be more than a given number of processes, the environment has been constrained sufficiently to bound this kind of scheduling delay. This is an example where determinism may be achievable through some aspect of the configuration of the operating environment. Notice that a priority system, among other things, may also be required; but as far as the scheduling time is concerned, the time is bounded.
A visual simulation may require an average target framerate, say 60 frames per second. As long as frames are dropped relatively infrequently, and the framerate averages 60 frames per second over a suitable period, the system may be performing acceptably.
The example of the paint nozzle and the average framerate are examples of what we call hard real-time and soft real-time constraints, respectively. Hard real-time applications must have their deadlines met, otherwise an unacceptable result occurs. Something blows up, something crashes, some operation fails, someone dies. Soft real-time applications usually must satisfy a deadline, but if a certain number of deadlines are missed by just a little bit, the system may still be considered to be operating acceptably.
Let's consider another example. Imagine we are building a penguin robot to aid scientists in studying animal behavior. Through careful observation we determine that upon the emergence of a seal from a hole in the ice, a penguin has 600ms to move away from the hole to avoid being eaten by the seal. Will our robot penguin survive if it moves back, on average, within 600ms? Perhaps, if the seal's attack time happens to vary in step with our penguin's response time. Are you going to build your penguin on that assumption? We also realize that there is a certain distance from the hole that the seal can reach. Our penguin must move farther than that distance within the 600ms. Some would call that line the deadline.
For an operating environment to accommodate a hard real-time application, it must be able to ensure that the application's deadlines always can be satisfied. This implies that all actions within the operating system must be deterministic. If the operating environment accommodates a soft real-time application, this generally means that an occasional delay may occur, but such a delay will not be unduly long.
Requirements for an application may be quantitative or qualitative. A qualitative requirement for a visual simulation would be that the system needs to react quickly enough to seem natural. This reaction time would be quantified to measure compliance. For example, a frame of graphics based upon user input may have a requirement to be rendered within 33.3ms after the user's input. This means that if a pilot moves the control stick in the flight simulator to bank right, the out-of-the-window view should change to reflect the new flight path within 33.3ms. Where did the 33.3ms requirement come from? Human factors--that amount of time is fast enough so that humans perceive the visual simulation as sufficiently smooth.
It is not the value of the time requirement but rather that there is a time requirement that makes this a real-time requirement. If one changed the requirement to have the graphics drawn within 33.3 seconds, instead of 33.3ms, it would still be a real-time system. The difference may be in the means to satisfy the requirements. In a Linux system the 33.3ms may require the use of a special Linux kernel and its functions, whereas the 33.3 second requirement may be achievable by means available within a standard kernel.
This leads us to a tenet: fast does not imply real-time and vice versa. However, fast on a relative scale may imply the need for real-time operating system features. This leads us to the distinction between real-time operating systems and real-time applications. Real-time applications have time-related requirements. Real-time operating systems can guarantee performance to real-time applications.
In practice, a general-purpose operating system, such as Linux, provides sufficient means for an application with relatively long deadlines if the operating environment can be controlled suitably. It is because of this property that one frequently hears that there is no need for real-time operating systems because processors have become so fast. This is only true for relatively uninteresting projects.
One has to remember, though, that if the operating environment is not constrained suitably, deadlines may be missed even though extensive testing never found a case of a missed deadline. Linear-time algorithms, for example, may be lurking in the code.
Another issue to keep in mind is audience effect, often stated as "The more important the audience for one's demo, the more likely the demo will fail." While anecdotal evidence abounds for the audience effect, effects such as nonrepeatability are often due to race conditions. A race condition is a situation where the result depends upon the relative speeds of the tasks or the outside world. All real-time systems, by definition, have race conditions. Well-designed systems have race conditions only around required deadlines. Testing alone cannot prove the lack of race conditions.
Since an operating system is largely responsive instead of proactive, many activities that cause delay can be avoided. In order for a particular application (process) to be able to meet its deadlines, such things as CPU-bound competitors, disk I/O, system calls or interrupts may need to be controlled. These are the kinds of things that constitute a properly constrained operating environment. Characteristics of the operating system and drivers also may be of concern. The operating system may block interrupts or not allow system calls to be preempted. While these activities may be deterministic, they may cause delays that are longer than acceptable for a given application.
A real-time operating system requires simpler efforts to constrain the environment than does a general-purpose operating system.