Anatomy of a Small Open-Source Kernel for DSPs
Before enabling interrupts, you must program each interrupt vector with code (called an interrupt handler) that will run when the corresponding interrupt is raised. The interrupt handlers provided with dsp_K are found in the file dsp_Ki.asm and summarized in Listing 1. These handlers are statically compiled, but it is possible to reprogram them dynamically simply by writing instructions to a vector's program memory address.
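The idea of patching a vector at runtime can be sketched on the host. This is a minimal illustration, not dsp_K code: the simulated vector table, the vector count and the patch function are all assumptions; only the four-word slot size comes from the text.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Host-side sketch of dynamically reprogramming an interrupt vector.
   The array below simulates vector program memory; VEC_WORDS matches
   the SHARC's four-word vector slots, while VEC_COUNT and the word
   type are illustrative assumptions. */

#define VEC_COUNT 32
#define VEC_WORDS 4                               /* four program words per vector */

static uint64_t vector_pm[VEC_COUNT][VEC_WORDS];  /* simulated vector memory */

/* Copy a four-word handler into a vector's slot, as one might do by
   writing instructions to the vector's program memory address. */
void patch_vector(int vec, const uint64_t handler[VEC_WORDS])
{
    memcpy(vector_pm[vec], handler, sizeof vector_pm[vec]);
}
```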
On the SHARC, each interrupt vector provides a four-word space in program memory in which its handler runs. For simple events that, say, require no action or merely set a flag, four words are enough, including the final rti instruction. When the interrupt handling requirements are more complex, the handler must jump elsewhere in the program code to an interrupt wrapper. In Listing 1 you can see the handlers that jump to the dsp_K interrupt wrapper, DSP_K_INTR_RAPPER.
The dsp_K interrupt wrapper is also found in the file dsp_Ki.asm and is further summarized in Listing 2. It is carefully written to avoid overwriting any machine registers or memory that might be used by your application. Furthermore, the ustat1 and ustat2 registers are not assigned by the compiler, so the wrapper code and other parts of the kernel make good use of them. (This does mean you can't reuse them in your application code.) The ustat1 register serves dsp_K as a temporary general purpose register, and ustat2 serves to store the kernel state. You can view the bits (and their assigned meanings) used in ustat2 to record the kernel state in the comment at the top of the file dsp_K.c.
At the label DSP_K_INTR_RAPPER_1, the dsp_K wrapper switches to the SHARC shadow registers, then overcomes a bug in the SHARC simulator before jumping to the C function _DSP_K_INTR_RAPPER to complete the wrapper code. This second jump is taken for two reasons: first, C is easier to read, and second, the seg_rth segment is made to resemble a primary bootstrap of 256 program words, of which about half remain free after the fixed interrupt vectors have their share. The _DSP_K_INTR_RAPPER function, found in the file dsp_K.c, is statically linked into either the seg_krco or seg_pmco segment, each normally configured with several kilobytes of program space. This C function is basically a switch, where each case completes the interrupt handling requirements for a vector, and upon its return the wrapper executes the rti instruction to exit the interrupt.
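The dispatch style of the wrapper's C half can be sketched as a plain switch on the vector number, with one case per interrupt. The vector numbers, flag variables and tick counter below are illustrative assumptions, not the actual contents of _DSP_K_INTR_RAPPER in dsp_K.c.

```c
#include <assert.h>

/* Host-side sketch of switch-based interrupt completion.  Each case
   finishes the handling work for one vector; on return, the assembly
   wrapper would execute rti to exit the interrupt. */

volatile unsigned tick_count;   /* bumped by the (assumed) timer vector */
volatile int      dma_done;     /* set by the (assumed) DMA vector */

void intr_dispatch(int vector)
{
    switch (vector) {
    case 14:                    /* assumed timer-tick vector number */
        tick_count++;
        break;
    case 20:                    /* assumed DMA-complete vector number */
        dma_done = 1;
        break;
    default:                    /* unexpected vector: nothing to do */
        break;
    }
}
```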
The code surrounding the call to _DSP_K_INTR_RAPPER, which preserves register i12 and works around the SHARC simulator bug, exists to correct errant behavior. Such problems are BSP-specific and are found by debugging running code, so I can offer no specific guidance should you attempt to make modifications. The simulator bug was discovered early on, perhaps in release 0.2, while the i12 bug was found later, perhaps in release 0.4. Such problems just happen, usually manifesting as a lockup or a jump to a strange memory address, and you must overcome them in the kernel.
The second functional service provided by the BSP is context switching. As with interrupts, you can find specific details about registers on the SHARC in the relevant Analog Devices manuals. As you probably know, context switching is the method whereby registers and related resources are preserved and restored as different tasks are run. A context switch may occur at various scheduling points, triggered either by an explicit software request or by a hardware event like the tick interrupt.
To carry out a context switch, the kernel copies the values in the CPU registers for the currently running task to a save stack and then copies values from another save stack for the new task into the CPU registers. The kernel then exits, and the newly restored task runs until the next scheduling point occurs. There are further details for you to explore, but that is basically it.
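The save/restore dance described above can be modeled in a few lines of host-side C. This is a sketch under stated assumptions: CTXT_WORDS, the register-file stand-in and the task count are all invented for illustration; the real saved context lives in each task's ctxt array in dsp_K.c.

```c
#include <assert.h>
#include <string.h>

/* Minimal model of a context switch: save the outgoing task's
   "registers" to its save area, then load the incoming task's
   saved values back into the register file. */

#define CTXT_WORDS 16
#define NUM_TASKS  4

static long cpu_regs[CTXT_WORDS];         /* stand-in for the register file */
static long ctxt[NUM_TASKS][CTXT_WORDS];  /* per-task save areas */
static int  current_task;

void context_switch(int next_task)
{
    memcpy(ctxt[current_task], cpu_regs, sizeof cpu_regs);  /* save outgoing */
    memcpy(cpu_regs, ctxt[next_task], sizeof cpu_regs);     /* restore incoming */
    current_task = next_task;
}
```

After the restore, the kernel exits and the incoming task resumes exactly where its registers say it left off.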
All the dsp_K context switch code and related data structures are provided for you in the file dsp_K.c. They are placed into program segments (commonly named seg_krco for kernel code and seg_krda for kernel data) and statically allocated to the correct memory locations by the linker at build time.
The kernel data requirements are small (and can be inspected by studying seg_krda in the optional .map file produced after linking). A structure variable called __DSP_K_context is maintained at runtime, and you can find its definition in the file dsp_K.h, which is also summarized in Listing 3. This main structure consists of a t__DSP_K_tcb substructure per task, in addition to a small number of kernel-specific variables. These kernel variables include the current task and a pointer to the kernel environment comprising the task list supplied when your application main function called DSP_K_RUN, along with the arguments supplied to main from the C startup library.
Each t__DSP_K_tcb substructure stores the data needed to support one task. Basic data includes the task's group ID and parent ID along with any timer or wait events pending. Each task also stores its own exit value and errno for inspection, along with optionally built-in runtime statistics as a primitive profiling capability. The key element for your interest, however, is the array of ctxt integers that are used to save the context (or state) of each task in your application.
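The bookkeeping described above might look roughly like the following. The field names, sizes and layout here are guesses for illustration only; the real definitions of t__DSP_K_tcb and __DSP_K_context are in dsp_K.h (summarized in Listing 3).

```c
#include <assert.h>

/* Rough sketch of the per-task and kernel-wide structures. */

#define MAX_TASKS_SKETCH  8
#define CTXT_WORDS_SKETCH 32   /* assumed context size; set at build time */

typedef struct {
    int  group_id;                    /* task's group ID */
    int  parent_id;                   /* task's parent ID */
    long timer_event;                 /* pending timer event, if any */
    long wait_event;                  /* pending wait event, if any */
    int  exit_value;                  /* stored for later inspection */
    int  task_errno;                  /* per-task errno */
    long ctxt[CTXT_WORDS_SKETCH];     /* saved register context */
} tcb_sketch;

typedef struct {
    tcb_sketch tasks[MAX_TASKS_SKETCH];
    int        current;               /* index of the running task */
    void      *environment;           /* task list handed to DSP_K_RUN */
} kernel_context_sketch;
```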
You configure the task context to be saved at build time, subject to a minimum base. The minimum save requirements are the general-purpose registers, the general-purpose index registers and the status and stack-checking settings. Other SHARC registers can be saved optionally, depending on your application requirements, including the modify, base, length and multiplier register sets. But often you don't need them, and runtime performance improves when they are eliminated from the context switch.
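One common way to express this kind of build-time trimming is with preprocessor switches that decide which optional register sets join the saved context. The macro names and word counts below are assumptions for illustration; dsp_K's actual configuration lives in its own build headers.

```c
#include <assert.h>

/* Sketch: optional register sets are compiled into the context only
   when requested, shrinking the save/restore work otherwise. */

#define SAVE_MODIFY_REGS 0   /* set to 1 if your tasks use the modify regs */
#define SAVE_BASE_LENGTH 0   /* set to 1 if you use circular-buffer b/l regs */

enum {
    CTXT_BASE_WORDS   = 40,  /* assumed minimum: gp regs, index regs, status */
#if SAVE_MODIFY_REGS
    CTXT_MODIFY_WORDS = 16,
#else
    CTXT_MODIFY_WORDS = 0,
#endif
#if SAVE_BASE_LENGTH
    CTXT_BL_WORDS     = 32,
#else
    CTXT_BL_WORDS     = 0,
#endif
    CTXT_TOTAL_WORDS  = CTXT_BASE_WORDS + CTXT_MODIFY_WORDS + CTXT_BL_WORDS
};
```

With both options off, only the minimum base is copied on every switch, which is where the performance win comes from.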