RTLinux Application Development Tutorial
RTLinux is a hard real-time OS, as opposed to soft real-time systems or those that make no scheduling guarantees at all. It is a small hard real-time kernel that runs Linux (or BSD) as its lowest-priority idle task, scheduled only when there are no real-time demands.
The dual-kernel approach taken by RTLinux requires a slightly different approach to real-time programming. Real-time code is written as a kernel module that is managed by the real-time kernel. All user management code is run as a normal process managed by Linux, communicating through a variety of mechanisms. This code separation abstracts real-time code into a simpler code base, simplifying development of both the real-time code and the management interface.
Some real-time OS approaches try to make a single kernel and its user-space code cope with both real-time and nonreal-time scheduling constraints. Instead, RTLinux takes the traditional UNIX approach, where a tool is written to do one thing and do it well, rather than cramming everything into a one-size-fits-all system. The result is a simpler system that encourages simpler code, while providing a deterministic real-time environment constrained only by the hardware it runs on.
Those of you who have used real-time systems before know that every system has a "special" API, where "special" is usually replaced with a more colorful term. RTLinux's API, however, is based on POSIX PSE51, the minimal real-time system profile designed for embedded real-time systems. This enables developers to use the standard pthread_* calls within a real-time environment: all of the POSIX thread calls, such as pthread_create(), are available to real-time code, along with the mutex calls, condition variables and so on. For cases that introduce nondeterminism and are not covered by the standard, RTLinux provides extensions to make life easier for the developer.
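To make this concrete, here is a minimal periodic-thread module in the style of the classic RTLinux examples. The standard pthread_create() call starts the real-time thread, while pthread_make_periodic_np() and pthread_wait_np() are RTLinux's non-POSIX extensions for periodic execution; the 500ms period is arbitrary, and header names vary slightly between RTLinux versions:

```c
#include <rtl.h>
#include <time.h>
#include <pthread.h>

static pthread_t thread;

static void *thread_code(void *arg)
{
    struct sched_param p = { .sched_priority = 1 };

    pthread_setschedparam(pthread_self(), SCHED_FIFO, &p);

    /* Run every 500ms, starting now; *_np marks RTLinux extensions. */
    pthread_make_periodic_np(pthread_self(), gethrtime(), 500000000);

    while (1) {
        pthread_wait_np();        /* sleep until the next period */
        rtl_printf("tick\n");     /* real-time-safe printk() */
    }
    return NULL;
}

int init_module(void)
{
    return pthread_create(&thread, NULL, thread_code, NULL);
}

void cleanup_module(void)
{
    pthread_cancel(thread);
    pthread_join(thread, NULL);
}
```

Aside from the two *_np calls, this is the same threading code you would write for any POSIX system, which is precisely the point of building on PSE51.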
From the user-space perspective, the nonreal-time code is exactly like any other Linux process. Once the real-time components have been abstracted out, the remaining code is free to behave like any other application, without fear of interfering with real-time operations. A management front end can be written in GTK+ (or Qt, of course), talk to a remote database or even host an entire Oracle instance directly on the real-time system. Nothing done within Linux can affect the execution of real-time code. While this is not license to write sloppy user-space code, it does remove any worry of complications arising from someone mishandling a Java garbage collector or transferring a large file over NFS while the machine is controlling a robotic arm.
As a side note, code running in the real-time kernel also has access to the entire API of the Linux kernel. However, the Linux kernel was not designed with real-time constraints in mind, and many calls are not always safe. Indeterminate blocking may occur, depending on the call chain. It is up to the developer to decide which calls are safe for the environment and problem at hand, and RTLinux provides real-time equivalents of some commonly used functions. As most hard real-time problems involve direct interaction with hardware, having the kernel API available for dealing with tasks like PCI device initialization can prevent many headaches. As demonstrated later in this article, there are places where use of the kernel API is entirely safe, so all is not lost.
How is real-time code managed, if it is off running on its own in the real-time kernel, while the rest is running as a normal Linux task? RTLinux provides a few answers to this problem and solves a variety of needs in differing situations.
First, the most common communication model is the real-time FIFO. Anyone who has used a normal FIFO under Linux (as created with mkfifo) is familiar with how this works. A process on one end writes to a FIFO, which appears as a normal file, while another one reads from the other end. With RTLinux, the reader might be a real-time process, while the writer is a user-space program shuttling directives to the real-time code through the FIFO or vice versa. In either case, the FIFO devices are normal character devices (/dev/rtf*), and both ends can interact with the device through normal POSIX calls, such as open(), close(), read() and write().
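On the real-time side, the FIFO also can be driven through the rtf_* convenience calls found in RTLinux releases; the FIFO number, buffer size and sample structure below are arbitrary choices for illustration:

```c
#include <rtl.h>
#include <rtl_fifo.h>

#define FIFO_NR   0            /* appears in user space as /dev/rtf0 */
#define FIFO_SIZE 4096

struct sample { hrtime_t stamp; int value; };

int init_module(void)
{
    return rtf_create(FIFO_NR, FIFO_SIZE);
}

void cleanup_module(void)
{
    rtf_destroy(FIFO_NR);
}

/* Called from a real-time thread: rtf_put() is nonblocking, so a
 * full FIFO produces an error return rather than stalling the caller. */
void put_sample(int value)
{
    struct sample s = { gethrtime(), value };

    if (rtf_put(FIFO_NR, (char *)&s, sizeof(s)) < 0)
        rtl_printf("rtf%d full, sample dropped\n", FIFO_NR);
}
```

User space sees the same bytes by opening /dev/rtf0 and calling read(), as described above.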
This approach works well for many applications because the user-space code simply has to work with a normal file for communication. From the real-time perspective, the calls to push data into the FIFO are nonblocking and have little impact on execution. However, as real-time code can never be blocked while waiting for user space to work through the data, the FIFO calls allow data to be overwritten, rather than block the real-time caller. This means that if the real-time kernel is under heavy pressure and never schedules the Linux thread (and by extension, your user-space code), the FIFO may fill before user space can read it. In this situation, either the FIFO should be created with a larger buffer, or the real-time code should flush the FIFO to prevent user space from reading stale or inconsistent data.
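The user-space half of the conversation is ordinary POSIX I/O. As a sketch, the helper below drains whatever is currently buffered in a descriptor opened in nonblocking mode; the function name, chunk size and handler callback are assumptions for illustration, not part of any RTLinux API:

```c
#include <fcntl.h>
#include <unistd.h>

/* Drain whatever is currently buffered in a FIFO descriptor opened in
 * nonblocking mode, handing each chunk to an optional handler.
 * Returns the total number of bytes consumed. */
ssize_t drain_rt_fifo(int fd, void (*handle)(const char *, ssize_t))
{
    char buf[256];
    ssize_t n, total = 0;

    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        if (handle)
            handle(buf, n);
        total += n;
    }
    return total;
}
```

A monitoring loop would open the device with open("/dev/rtf0", O_RDONLY | O_NONBLOCK) and call this periodically, or after select() reports data ready.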
Another IPC mechanism is the mbuff driver. This is a shared memory system that allows kernel and user-space code to share access to the same memory region. For some applications, this is a very good way to share information, especially when access might be nonsequential, which real-time FIFOs cannot provide. As with FIFOs, the real-time code must never block while user-space applications handle the data, so no blocking synchronization primitives are provided between the two sides. Should coordination be needed, it is possible to manage access via certain bit flags in the region, but one must be careful not to block real-time code unintentionally.
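The kernel side of an mbuff region might look like the sketch below. The region name "samples", the structure layout and the sequence-counter handshake are all illustrative choices, not part of the mbuff API; the counter is one way to implement the bit-flag coordination mentioned above without ever blocking the real-time writer:

```c
#include <rtl.h>
#include <mbuff.h>

/* Layout shared by both sides.  The seq counter is odd while an
 * update is in flight, so a reader can detect a torn read and retry. */
struct shared_area {
    volatile unsigned long seq;
    double samples[1024];
};

static struct shared_area *shm;

int init_module(void)
{
    /* Both sides call mbuff_alloc() with the same name; the first
     * caller creates the region, later callers attach to it. */
    shm = (struct shared_area *)mbuff_alloc("samples", sizeof(*shm));
    return shm ? 0 : -1;
}

void cleanup_module(void)
{
    mbuff_free("samples", (void *)shm);
}

/* Real-time writer: bracket the update with counter bumps.  The
 * user-space reader rereads if seq was odd or changed underneath it. */
void publish(int i, double v)
{
    shm->seq++;
    shm->samples[i] = v;
    shm->seq++;
}
```

The user-space process attaches to the same region by name and polls it at its leisure; the real-time side never waits on the reader.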
A third system that merits mention is RTLinux's softirq system. It is possible for real-time code to create a software-based IRQ, where a handler is installed under the Linux kernel that runs as if it were intercepting real hardware IRQs. But in this case, the IRQ is really generated by a driver in the real-time kernel. This is a very efficient method of safely accessing potentially blocking calls in the Linux kernel. For example, the RTLinux call rtl_printf(), which behaves like Linux's printk(), allows safe real-time printk() calls by pushing data into a buffer and signaling a software IRQ. The handler installed under Linux intercepts this and safely moves the data into the normal kernel ring buffer without potentially inducing blocking.
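A stripped-down version of this pattern is sketched below; the function names follow the RTLinux 3.x sources, but you should verify them against your version's headers, and a production version would use a ring buffer rather than a single staging buffer:

```c
#include <rtl.h>
#include <rtl_core.h>
#include <linux/string.h>

static int soft_irq;
static char log_buf[512];

/* Runs in Linux kernel context, where blocking calls are legal. */
static void soft_handler(int irq, void *dev_id, struct pt_regs *regs)
{
    printk("%s", log_buf);
}

int init_module(void)
{
    soft_irq = rtl_get_soft_irq(soft_handler, "rt_logger");
    return soft_irq > 0 ? 0 : -1;
}

void cleanup_module(void)
{
    rtl_free_soft_irq(soft_irq);
}

/* Callable from real-time context: stage the message, then pend the
 * soft IRQ so Linux runs soft_handler() whenever it next gets a turn. */
void rt_log(const char *msg)
{
    strncpy(log_buf, msg, sizeof(log_buf) - 1);
    rtl_global_pend_irq(soft_irq);
}
```

The real-time side only touches memory and pends the IRQ, both deterministic operations; all of the potentially blocking printk() work is deferred to Linux context.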