Porting RTOS Device Drivers to Embedded Linux

Transform your wild-and-woolly legacy RTOS code into well-formed Linux device drivers.
RTOS I/O Subsystems

Most RTOSes ship with a customized standard C run-time library, such as pREPC for pSOS, or with selectively patched C libraries (libc) from compiler ISVs, who do the same for glibc. Thus, at a minimum, most RTOSes support a subset of standard C-style I/O, including the system calls open, close, read, write and ioctl. In most cases, these calls and their derivatives resolve to a thin wrapper around I/O primitives. Interestingly, because most RTOSes historically did not support filesystems, those platforms that do offer file abstractions for Flash or rotating media often do so with completely different code and/or different APIs, such as pHILE for pSOS. Wind River VxWorks goes further than most RTOS platforms in offering a feature-rich I/O subsystem, principally to overcome hurdles in integrating and generalizing networking interfaces and media.

Many RTOSes also support a bottom-half mechanism, that is, some means of deferring I/O processing to an interruptible and/or preemptible context. Others do not but may instead support mechanisms such as interrupt nesting to achieve comparable ends.

Typical RTOS Application I/O Architecture

A typical I/O scheme (input only) and the data delivery path to the main application are diagrammed in Figure 1. Processing proceeds as follows:

  • A hardware interrupt triggers execution of an ISR.

  • The ISR does basic processing and either completes the input operation locally or lets the RTOS schedule deferred handling. In some cases, deferred processing is handled by what Linux would call a user thread, herein an ordinary RTOS task.

  • Whenever and wherever the data ultimately is acquired (ISR or deferred context), ready data is put into a queue. Yes, RTOS ISRs can access application queue APIs and other IPCs—see the API table.

  • One or more application tasks then read messages from the queue to consume the delivered data.

Figure 1. Comparison between Typical I/O and Data Delivery in a Legacy RTOS and Linux

Output often is accomplished with comparable mechanisms—instead of using write() or comparable system calls, one or more RTOS application tasks put ready data into a queue. The queue then is drained by an I/O routine or ISR that responds to a ready-to-send interrupt, a system timer or another application task that blocks waiting on queue contents. That routine then performs the I/O directly, either polled or by DMA.
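For concreteness, here is a minimal sketch of the input half of this pattern in VxWorks-flavored C. The queue sensorMsgQ, the routine sensor_isr() and the register address SENSOR_DATA_REG are illustrative names only, and exact msgQLib signatures and ISR hookup differ between RTOSes; the output path simply reverses the roles, with application tasks sending and an I/O routine draining the queue.

/* Sketch only: a queue-based input path, VxWorks-flavored.
 * sensorMsgQ, sensor_isr() and SENSOR_DATA_REG are illustrative
 * names, not part of any shipping BSP. */
#include <vxWorks.h>
#include <msgQLib.h>

#define SENSOR_DATA_REG ((volatile int *)0xFA400000)
#define MAX_MSGS        32

struct sensor_sample { int value; };

static MSG_Q_ID sensorMsgQ;        /* created once at initialization */

void sensor_init(void)
{
    sensorMsgQ = msgQCreate(MAX_MSGS, sizeof(struct sensor_sample),
                            MSG_Q_FIFO);
    /* hook sensor_isr() to the device interrupt here,
       e.g., with intConnect() */
}

/* "Top half": runs in interrupt context and queues ready data */
void sensor_isr(void)
{
    struct sensor_sample s;

    s.value = *SENSOR_DATA_REG;             /* read the device register */
    msgQSend(sensorMsgQ, (char *)&s, sizeof(s),
             NO_WAIT, MSG_PRI_NORMAL);
}

/* Consumer: an ordinary RTOS task blocks on the queue */
void consumer_task(void)
{
    struct sensor_sample s;

    for (;;) {
        msgQReceive(sensorMsgQ, (char *)&s, sizeof(s), WAIT_FOREVER);
        /* process s.value ... */
    }
}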

Mapping RTOS I/O to Linux

The queue-based producer/consumer I/O model described above is one of many ad hoc approaches employed in legacy designs. Let us continue to use this straightforward example to discuss several possible (re)implementations under embedded Linux.

Developers who are reluctant to learn the particulars of Linux driver design, or who are in a great hurry, will likely try to port most of a queue-based design intact to a user-space paradigm. In this driver-mapping scheme, memory-mapped physical I/O occurs in user context by way of a pointer supplied by mmap():


#include <fcntl.h>       /* open(), O_RDWR */
#include <sys/mman.h>    /* mmap() */

#define REG_AREA_SIZE 0x4        /* device register size */
#define REG_OFFSET    0xFA400000 /* physical address of device */

void *mem_ptr;   /* de-reference for memory-mapped access */
int fd;

fd = open("/dev/mem", O_RDWR);
                 /* open physical memory (must be root) */

mem_ptr = mmap(NULL, REG_AREA_SIZE,
               PROT_READ | PROT_WRITE,
               MAP_SHARED, fd, REG_OFFSET);
                 /* actual call to mmap() */

A process-based user thread performs the same processing that the RTOS-based ISR or deferred task would. It then uses the SVR4 IPC msgsnd() call to queue a message for receipt by another local thread or process, which retrieves it by invoking msgrcv().
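A minimal sketch of that hand-off with the System V message-queue calls might look like the following; the message layout, the queue key derived from /tmp/sensor and the helper names are arbitrary examples, not a fixed API.

#include <sys/ipc.h>
#include <sys/msg.h>

#define SAMPLE_MSG_TYPE 1           /* arbitrary, must be > 0 */

struct sample_msg {                 /* SysV messages start with a long type */
    long mtype;
    int  value;
};

/* Producer side: the user-space "ISR replacement" thread */
void queue_sample(int qid, int value)
{
    struct sample_msg m;

    m.mtype = SAMPLE_MSG_TYPE;
    m.value = value;
    msgsnd(qid, &m, sizeof(m) - sizeof(long), 0);
}

/* Consumer side: another thread or process */
int wait_for_sample(int qid)
{
    struct sample_msg m;

    msgrcv(qid, &m, sizeof(m) - sizeof(long), SAMPLE_MSG_TYPE, 0);
    return m.value;
}

/* Both sides obtain qid once, for example:
 *   int qid = msgget(ftok("/tmp/sensor", 'S'), IPC_CREAT | 0600);
 */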

Although this quick-and-dirty approach is good for prototyping, it presents significant challenges for building deployable code. Foremost is the need to field interrupts in user space. Projects such as DOSEMU offer signal-based interrupt I/O with SIG (the silly interrupt generator), but user-space interrupt processing is quite slow—millisecond latencies instead of tens of microseconds for a kernel-based ISR. Furthermore, user-context scheduling, even with the preemptible Linux kernel and real-time policies in place, cannot guarantee 100% timely execution of user-space I/O threads.

It is highly preferable to bite the bullet and write at least a simple Linux driver to handle interrupt processing at kernel level. A basic character or block driver can field application interrupt data directly in the top half or defer processing to a tasklet, a kernel thread or to the newer work-queue bottom-half mechanism available in the 2.6 kernel. One or more application threads/processes can open the device and then perform synchronous reads, just as the RTOS application made synchronous queue receive calls. This approach will require at least recoding consumer thread I/O to use device reads instead of queue receive operations.
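The skeleton below sketches that structure: an interrupt handler as the top half, a wait queue standing in for the RTOS message queue and a blocking read() for the consumer. The device name, static major number, IRQ number and sensor_read_hw() helper are placeholders, and header names and registration calls vary somewhat across kernel versions.

/* Sketch of a minimal interrupt-driven character driver.
 * Real drivers add error handling, locking and a proper
 * bottom half (tasklet or work queue). */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/wait.h>
#include <linux/uaccess.h>

#define SENSOR_IRQ   42             /* placeholder IRQ number */
#define SENSOR_MAJOR 240            /* placeholder static major */

static DECLARE_WAIT_QUEUE_HEAD(sensor_wq);
static int sensor_data;
static int data_ready;

static int sensor_read_hw(void)
{
    return 0;                       /* read the device register here */
}

/* Top half: grab the data and wake any sleeping readers */
static irqreturn_t sensor_isr(int irq, void *dev_id)
{
    sensor_data = sensor_read_hw();
    data_ready = 1;
    wake_up_interruptible(&sensor_wq);
    return IRQ_HANDLED;
}

/* read(): the application blocks here, much as the RTOS task
 * blocked on its queue-receive call. */
static ssize_t sensor_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
{
    if (wait_event_interruptible(sensor_wq, data_ready))
        return -ERESTARTSYS;
    data_ready = 0;
    if (copy_to_user(buf, &sensor_data, sizeof(sensor_data)))
        return -EFAULT;
    return sizeof(sensor_data);
}

static const struct file_operations sensor_fops = {
    .owner = THIS_MODULE,
    .read  = sensor_read,
};

static int __init sensor_init(void)
{
    register_chrdev(SENSOR_MAJOR, "sensor", &sensor_fops);
    return request_irq(SENSOR_IRQ, sensor_isr, 0, "sensor", NULL);
}

static void __exit sensor_exit(void)
{
    free_irq(SENSOR_IRQ, NULL);
    unregister_chrdev(SENSOR_MAJOR, "sensor");
}

module_init(sensor_init);
module_exit(sensor_exit);
MODULE_LICENSE("GPL");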

To reduce the impact of porting to embedded Linux, you also could leave a queue-based scheme in place and add an additional thread or dæmon process that waits for I/O on the newly minted device. When data is ready, that thread/dæmon wakes up and queues the received data for use by the consuming application threads or processes.
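A rough sketch of such a shim thread, reusing the hypothetical /dev/sensor device and System V queue from the earlier examples, could be as simple as this; the same logic also could live in a standalone dæmon process.

#include <fcntl.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct sample_msg { long mtype; int value; };

/* Shim: blocks on the character device, then re-queues the data
 * so the existing queue-based consumer threads stay unchanged. */
void *io_shim_thread(void *arg)
{
    int fd  = open("/dev/sensor", O_RDONLY);              /* hypothetical device */
    int qid = msgget(ftok("/tmp/sensor", 'S'), IPC_CREAT | 0600);
    struct sample_msg m = { .mtype = 1 };

    for (;;) {
        if (read(fd, &m.value, sizeof(m.value)) == sizeof(m.value))
            msgsnd(qid, &m, sizeof(m) - sizeof(long), 0);
    }
    return NULL;                                           /* not reached */
}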

______________________

Comments


Moshe:

As the author and a few commenters rightfully noted, you can go the easy path by mapping all VxWorks tasks to Linux user-mode processes/threads. The downside is that the performance hit can be huge (see the comment above about mmap overhead).

Fortunately, it looks like there is now a solution: www.femtolinux.com allows user processes to run in kernel mode, removing the user/kernel barrier. FemtoLinux processes are pretty much identical to VxWorks tasks.

Been on both sides of this

Baumann:

15 years as a VxWorks developer, now doing the Linux side of the game. 99% of the time, the "real driver" approach is the preferred one - you get protection, etc. (I've ported almost all of my old VxWorks drivers to Linux that way), but there is the odd case - and I'm dealing with one now - where mmap() may buy you the real-time response you need, where even the interrupts are too slow.
(Porting from Linux to VxWorks is the easy direction - you're going from protected to unprotected, life is easy, aside from a few calls that aren't allowed.)
My catch at the moment, though, on the architecture I'm working with, is that the mmap() access is expensive - more than you might think. Each access, by the time it has rolled up and unrolled the various page tables, appears to take 700ns - dropping memory bandwidth to less than 14MB/sec. And that bytes. Pun intended.
Like everything, you've got to evaluate what you're doing, and why.

migration kit for Linux to VxWorks - availability!!

karthik bala guru:

Hi all,
VxWorks-to-Linux migration kits are offered by a number of companies, including MapuSoft, LynuxWorks, MontaVista, and TimeSys.

But why is there no such thing as a Linux-to-VxWorks migration kit?

What is the difficulty in providing such a migration kit? Where is the problem, actually?

If there is a Linux-to-VxWorks migration kit available on any website or shop, do kindly let me know.

thanks and regards,
karthik bala guru

migrating a protocol from Linux to VxWorks - availability!!

karthik bala guru:

Actually, I am porting a protocol stack developed on arm-linux to VxWorks.

Do let me know if there is any migration kit for this.

Cheers,
karthik bala guru

How to implement mmap() in VxWorks?

Anonymous:

Does anybody know how to use a Linux mmap()-like function in VxWorks?
Please let me know!

hmmm

Vijaykc:

Why would you want mmap() in VxWorks? The entire memory space is yours..... :)
I am not quite sure why you need one in the first place.
