Dissecting Interrupts and Browsing DMA

This is the fourth in a series of articles about writing character device drivers as loadable kernel modules. This month, we further investigate the field of interrupt handling. Though it is conceptually simple, practical limitations and constraints make this an “interesting” part of device driver writing, and several different facilities have been provided for different situations. We also invest some time in looking at DMA, the mechanism that lets a peripheral exchange data with memory without bothering the processor.
Running Predefined Queues

The kernel proper defines three task queues for your convenience and amusement. A custom driver should normally use one of these predefined queues instead of declaring its own; the only reason special queues exist at all is higher performance, since queues with smaller IDs are executed first.

The three predefined queues are:

struct tq_struct *tq_timer;
struct tq_struct *tq_scheduler;
struct tq_struct *tq_immediate;

The first is run in relation to kernel timers and won't be discussed here—it is left as an exercise for the reader. The second is run any time scheduling takes place, and the last is run “immediately” upon exit from the interrupt handler, as a generic bottom half; this is generally the queue you will use in your driver to replace the older bottom-half mechanism.

The “immediate” queue is used like tq_skel above. You don't need to choose an ID and declare the queue, but mark_bh() must still be called with the IMMEDIATE_BH argument, as shown in the listing below. Similarly, the tq_timer queue is marked with mark_bh(TIMER_BH), while the tq_scheduler queue does not need to be marked in order to run, so mark_bh() is not called for it.

This listing shows an example of using the “immediate” queue:

#include <linux/interrupt.h>
#include <linux/tqueue.h>
/*
 * Two different tasks
 */
static struct tq_struct task1;
static struct tq_struct task2;
/*
 * The top half queues tasks, and no bottom
 * half is there
 */
static void skel_interrupt(int irq,
                           struct pt_regs *regs)
{
  do_top_half_stuff();
  if (condition1()) {
    queue_task(&task1, &tq_immediate);
    mark_bh(IMMEDIATE_BH);
  }
  if (condition2()) {
    queue_task(&task2, &tq_immediate);
    mark_bh(IMMEDIATE_BH);
  }
}
/*
 * And init as usual, but nothing has to be
 * cleaned up
 */
int init_module(void)
{
  /* ... */
  task1.routine=proc1; task1.data=arg1;
  task2.routine=proc2; task2.data=arg2;
  /* ... */
}
An Example: Playing with tq_scheduler

Task queues are quite amusing to play with, but most of us lack “real” hardware needing deferred handling. Fortunately, the implementation of run_task_queue() is smart enough that you can enjoy yourself even without suitable hardware.

The good news is run_task_queue() calls queued functions after removing them from the queue. Thus, you can re-insert a task in the queue from within the task itself.

This silly task prints a message only once every ten seconds, up to the end of the world. It needs to be registered only once; after that, it arranges for its own continued existence by re-queueing itself.

struct tq_struct silly;
void silly_task(void *unused)
{
  static unsigned long last;
  if (jiffies/HZ/10 != last) {
    last=jiffies/HZ/10;
    printk(KERN_INFO "I'm there\n");
  }
  queue_task(&silly, &tq_scheduler);
  /* tq_scheduler doesn't need mark_bh() */
}
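
The task must be put in the queue a first time before it can start re-queueing itself. A minimal sketch of that one-time registration, assuming it is the only thing init_module() has to do, looks like this (the routine and data fields are the same ones used in the earlier listing):

int init_module(void)
{
  silly.routine = silly_task;
  silly.data = NULL;   /* silly_task() ignores its argument */
  queue_task(&silly, &tq_scheduler);
  return 0;
}

A real module would also have to make sure the task is no longer queued, and will not re-queue itself, before cleanup_module() returns; that bookkeeping is left out of the sketch.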

If you're thinking about viruses, please hold on, and remember a wise administrator doesn't do anything as root without reading the code beforehand.

But let's leave task queues and begin looking at memory issues.

DMA on the PC—Dirty, Machine-dependent, Awful

Remember the old days of the IBM PC? Yes, those days, when the PC was delivered with 128 KB of RAM, an 8086 processor, a tape interface and a 360 KB floppy. [You got a whole 128KB? Lucky you! —ED] Those were the days when DMA on the ISA bus was considered fast. The idea of DMA is to transfer a block of data from a device to memory, or vice versa, without making the CPU do the boring job of moving individual bytes. Instead, after both the device and the DMA controller on the motherboard have been initialized, the device signals the DMA controller that it has data to transfer. The DMA controller puts the RAM into a state where it accepts data from the data bus, the device puts the data on the bus, and once the data has been transferred, the controller increments its address counter and decrements its length counter, so that further transfers go into the next location.

In those old days this technique was fast, allowing transfer rates of up to 800 kilobytes per second on 16-bit ISA. Today we have transfer rates of 132 MB per second with PCI 2.0, but good old ISA DMA still has its applications: imagine a sound card playing 16-bit music at 48 kHz in stereo. That comes to 192 kilobytes per second (48,000 samples per second x 2 channels x 2 bytes per sample). Transferring the data by interrupt, with the card requesting, say, two words every 20 microseconds, would certainly make the CPU drop a whole lot of interrupts, and polling the data (non-interrupt-driven transfer) at that rate would keep the CPU from doing anything useful, too. What we need is a continuous data flow at midrange speed—perfect for DMA. Linux only has to initiate and stop the data transfer; the rest is done by the hardware itself.

We will deal only with ISA DMA in this article—most expansion cards are still ISA, and ISA DMA is fast enough for many applications. However, DMA on the ISA bus has severe restrictions:

  • The hardware and the DMA controllers know nothing about virtual memory—all they can see is physical memory and its addresses. (This restriction applies to all kinds of ISA DMA.) Instead of using a page here and another there and gluing them together in virtual memory, we must allocate contiguous blocks of physical memory, and we may not swap them out; a sketch of how such a buffer might be allocated follows this list.

  • Intel-based systems have two DMA controllers, one with the lower four DMA channels allowing byte-wise transfers and one with the upper four channels allowing (faster) word transfer. Both have only a 16-bit address counter and a 16-bit length counter. Therefore, we may not transfer more than 65535 bytes (or words, with the second controller) at once.

  • The address counter represents only the lower 16 bits of the address (A0-A15 on DMA channels 0-3, A1-A16 on channels 4-7). The upper 8 bits of address space are represented in a page register. They will not change during a transfer, meaning transfers can take place only within one 16-bit (64 KB) segment of memory.

  • Upper 8 bits? 24 bits of address space in total? Yes, it's sad, but true. We can only transfer within the lower 16MB of RAM. If your data eventually needs to reach another address, the CPU has to copy it “by hand” again, using the DMA-accessible memory as a “bounce buffer”, as it is called.
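
To make these restrictions concrete, here is a minimal, hypothetical sketch of how a driver might set up such a transfer using the ISA DMA helpers declared in <asm/dma.h>. The channel number, buffer size, GFP_DMA allocation and the skel_ names are illustrative assumptions, not taken from any real driver; the board's documentation dictates the actual channel and transfer sizes.

#include <linux/kernel.h>
#include <linux/malloc.h>     /* kmalloc(), kfree() */
#include <linux/errno.h>
#include <asm/dma.h>          /* request_dma(), set_dma_addr(), ... */
#include <asm/system.h>       /* save_flags(), cli(), restore_flags() */

#define SKEL_DMA_CHAN  3      /* hypothetical 8-bit channel */
#define SKEL_BUF_SIZE  4096   /* well below the 64 KB limit */

static char *skel_buf;        /* must live in the lower 16 MB */

int skel_dma_init(void)
{
  /* GFP_DMA asks for memory the ISA controller can reach:
     physically contiguous, below 16 MB, never swapped out */
  skel_buf = kmalloc(SKEL_BUF_SIZE, GFP_KERNEL | GFP_DMA);
  if (!skel_buf)
    return -ENOMEM;
  if (request_dma(SKEL_DMA_CHAN, "skel")) {
    kfree(skel_buf);
    return -EBUSY;            /* channel already in use */
  }
  return 0;
}

/* Program the controller for one device-to-memory transfer */
void skel_dma_start(int count)
{
  unsigned long flags;

  save_flags(flags);
  cli();                      /* program the controller atomically */
  disable_dma(SKEL_DMA_CHAN);
  clear_dma_ff(SKEL_DMA_CHAN);            /* reset the byte-pointer flip-flop */
  set_dma_mode(SKEL_DMA_CHAN, DMA_MODE_READ); /* device-to-memory */
  /* on the PC, at this kernel vintage, the kernel address of a
     kmalloc'd buffer is also its physical address */
  set_dma_addr(SKEL_DMA_CHAN, (unsigned long) skel_buf);
  set_dma_count(SKEL_DMA_CHAN, count);
  enable_dma(SKEL_DMA_CHAN);
  restore_flags(flags);
  /* now tell the device itself to begin transferring */
}

void skel_dma_cleanup(void)
{
  disable_dma(SKEL_DMA_CHAN);
  free_dma(SKEL_DMA_CHAN);
  kfree(skel_buf);
}

Programming the board itself—telling it to start producing or consuming data—is of course device-specific and not shown; the controller side, however, typically follows the disable, program, enable pattern sketched above.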
