Anatomy of a Small Open-Source Kernel for DSPs

Julian provides the technical and historical background of dsp_K, with a particular focus on the kernel.
Context Switch Function

Like the kernel context data, you can find the context switch function in the file dsp_K.c; it is called _DSP_K_SWITCH to avoid any potential confusion. It is called first from DSP_K_RUN to start multitasking and thereafter from DSP_K_TASK_SCHEDULE, until your application ceases multitasking and returns via DSP_K_RUN to main. There are essentially two routes through the kernel to reach DSP_K_TASK_SCHEDULE: directly from a task function, such as DSP_K_TASK_PEND, or from the tick ISR. The context switch function behaves slightly differently depending upon which route is taken.

Although wrapped up in C, the context switch function is written mainly in assembler (the summary in Listing 4 omits the assembler detail). It first performs some task lock checks and decides whether a switch is actually necessary. If so, it gets a pointer to the current task context data structure described above and updates it from the RUN to the READY state (i.e., logically moves the task back onto the ready queue). It then sets about saving the current task context. Not shown in Listing 4: if the ISR route was followed, the code walks up the stack searching for the task frame in place before the interrupt was taken and flips the shadow registers used by the kernel in order to retrieve the task state.

Listing 4. Context Switch Function
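In outline, the flow just described reads something like the C sketch below. Only _DSP_K_SWITCH itself, the t__DSP_K_TASK_DSCR type and the RUN/READY states come from dsp_K; the header name, the helper functions and the state field are assumptions made purely to keep the sketch readable, standing in for the assembler detail.

#include "dsp_K.h"                      /* assumed header for dsp_K types */

/* Assumed helpers standing in for the assembler detail: */
extern t__DSP_K_TASK_DSCR *dsp_k_current_task(void);
extern t__DSP_K_TASK_DSCR *dsp_k_next_task(void);
extern int  dsp_k_tasks_locked(void);
extern void dsp_k_save_context(t__DSP_K_TASK_DSCR *t);
extern void dsp_k_restore_context(t__DSP_K_TASK_DSCR *t);
extern void dsp_k_build_frame(t__DSP_K_TASK_DSCR *t);

void _DSP_K_SWITCH(void)
{
    t__DSP_K_TASK_DSCR *current = dsp_k_current_task();
    t__DSP_K_TASK_DSCR *next    = dsp_k_next_task();   /* picked via DSP_K_TASK_SCHEDULE */

    /* Task lock checks: is a switch actually necessary? */
    if (dsp_k_tasks_locked() || next == current)
        return;

    /* Logically move the running task back onto the ready queue. */
    current->state = READY;

    /* First half: save the current context.  Via the ISR route the real
     * code first walks up the stack to the pre-interrupt task frame and
     * flips to the kernel's shadow registers to retrieve the task state. */
    dsp_k_save_context(current);

    /* Second half: make the new task current and recover its context,
     * essentially the reverse of the save. */
    next->state = RUN;
    dsp_k_restore_context(next);

    /* "Third half": build a fresh C stack frame -- on a SHARC, empty the
     * PC stack, set the status and arithmetic status registers, program
     * the mode registers and load PC, FP and SP. */
    dsp_k_build_frame(next);

    /* An ordinary return now exits the switcher onto the new task frame. */
}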

With a little bit of juggling, saving the current running task's context is fairly straightforward. Essentially, the general-purpose registers are copied into the current task context data structure, followed by the general-purpose index registers. Juggling is necessary to preserve registers until they are saved, to work with the shadow registers if the ISR route is followed, and to collaborate with the compiler. (Depending upon the version of compiler you have, the generated code differs subtly; the affected spots are clearly commented with VDSP_VER, and you should make sure the code behaves sensibly there.) After saving the index registers, any of the optionally configured multiplier, modifier, base and length register values are copied onto the current task context stack, completing the first half of the context switch job.
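Expanding the dsp_k_save_context placeholder from the sketch above, the save order just described would read roughly as follows; the context field names, the configuration switch and the lower-level save helpers are again assumptions rather than dsp_K identifiers.

/* Save order as described in the text: general-purpose registers first,
 * then the index registers, then (if configured) the multiplier, modifier,
 * base and length registers.  All names below are illustrative. */
static void dsp_k_save_context(t__DSP_K_TASK_DSCR *task)
{
    save_general_purpose_regs(task->context.r);   /* R0..Rn                */
    save_index_regs(task->context.i);             /* DAG index registers   */

#ifdef DSP_K_CONFIG_EXTRA_REGS                    /* assumed config switch */
    save_multiplier_regs(task->context.mr);
    save_modify_regs(task->context.m);
    save_base_regs(task->context.b);
    save_length_regs(task->context.l);            /* onto the context stack */
#endif
}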

The second half, recovering the new task context so that it is moved from READY to RUN and thus made current, is likewise fairly straightforward and works essentially as the reverse of a context save.

Then in the third half, a C stack frame has to be set up. This is fiddly for two main reasons: first, the CPU registers have to be prepared in an orderly fashion, and second, the registers recovered in the second half must be preserved. (Again the details are omitted from Listing 4, but you can see them in the dsp_K.c file.) To prepare the CPU registers on a SHARC, the PC stack has to be emptied, the status and arithmetic status registers need to be set and the mode registers programmed in addition to setting up the PC, FP and SP registers. Finally, an ordinary return is executed to exit the context switcher and jump onto the new task frame.

Kernel Personality Services

Still with me? Well, we've just about covered the lowest sublayer functional services and can now move up a bit to the kernel personality services.

One facet of the process-oriented approach used in dsp_K is that tasks run forever (until explicitly halted by a call to DSP_K_TASK_EXIT or _exit), and the DSP_K_TASK_RESET function attends to this. A task stack is established in DSP_K_RUN so that, on reaching the end of its entry point code, an application task returns to the DSP_K_TASK_RESET function. All this does is reset the task context to its initial conditions and, like the end of _DSP_K_SWITCH, set up a C stack frame and return into the task entry point function.
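To see what that means in practice, here is a sketch of a task whose entry point simply runs to the end; apart from DSP_K_RUN, DSP_K_TASK_RESET and DSP_K_TASK_EXIT, the names are hypothetical.

extern void do_one_pass(void);   /* hypothetical application function */

/* Falling off the end of a task entry point does not kill the task.  The
 * stack set up by DSP_K_RUN makes the task return into DSP_K_TASK_RESET,
 * which rewinds the context and re-enters the entry point from the top. */
void sampler_task(void)
{
    do_one_pass();
    /* Returning here lands in DSP_K_TASK_RESET: the task restarts as if
     * wrapped in an endless loop.  To stop it for good, call
     * DSP_K_TASK_EXIT (or _exit) explicitly instead of returning. */
}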

Sane Scheduling

Because tasks run forever, a mechanism for scheduling them needs to be provided. As you might have noticed when I introduced the BSP services above, the kernel services do not include a common scheduler. Rather, in the dsp_K model, task scheduling is carried out through programmer-provided functions attached to each task (refer to the t__DSP_K_TASK_DSCR.scheduler element in Listing 3). The dsp_K distribution includes classic round-robin and priority scheduler functions as generic scheduling methods, but you might want to write your own. The kernel's DSP_K_TASK_SCHEDULE function calls the programmer-provided scheduler function, sanity-checks its result against preset limits so that the kernel won't crash, and then calls the context switcher.
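As an example of writing your own, the function below sketches a home-grown round-robin scheduler. The signature is an assumption: here the scheduler is handed the current task number and returns the number of the task to run next, which DSP_K_TASK_SCHEDULE would then sanity-check against its preset limits before switching; N_TASKS and task_is_ready() are likewise made up for the example.

#define N_TASKS 4                      /* assumed size of the task table  */

extern int task_is_ready(int t);       /* assumed query of the task state */

/* A simple round-robin scheduler function in the spirit described above:
 * start after the current task and pick the first READY task found. */
int my_round_robin(int current_task)
{
    int t = current_task;
    int tries;

    for (tries = 0; tries < N_TASKS; tries++) {
        t = (t + 1) % N_TASKS;
        if (task_is_ready(t))
            return t;
    }
    return current_task;   /* nothing else ready: keep the current task */
}

Such a function would then be plugged into the t__DSP_K_TASK_DSCR.scheduler element of each task that should use it.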

With traditional UNIX or POSIX systems you can start tasks dynamically, so the kernel faces an unpredictable system lifetime. By contrast, many embedded systems are designed for a specific purpose and function statically. It is often said, rightly, that real-time schedulers should provide a hard real-time response to events (i.e., possess a low interrupt latency). We could also argue that building a predictable embedded system requires deterministic scheduling. The purpose of a scheduler function is to let each task in your application decide which task should run next, and it is possible to provide a different scheduler function for each task within a dsp_K application. It is always possible for us to write poor applications that yield non-deterministic scheduling, but such design flaws are in our application code and not in the kernel.
