Kernel Korner - Using DMA
Second, if you need it, set the overall maximum size of a request:
void blk_queue_max_sectors(request_queue_t *q, unsigned short max_sectors);
Finally, the DMA mask must be programmed into the queue:
void blk_queue_bounce_limit(request_queue_t *q, u64 max_address);
Usually, you set max_address to the DMA mask. If an IOMMU is being used, however, max_address should be set to BLK_BOUNCE_ANY to tell the block layer not to do any bouncing.
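Putting these queue-limit calls together, a setup routine might look like the following sketch. This is kernel-side driver code, not a standalone program; the mask value, the 255-sector cap and the IOMMU test are illustrative assumptions, not taken from the article:

```
/* Sketch: DMA-related queue configuration for a device with a
 * 32-bit DMA mask.  The 255-sector cap is purely illustrative. */
#define MY_DMA_MASK 0xffffffffULL

static int my_have_iommu;       /* set elsewhere by platform probing */

static void my_setup_queue(request_queue_t *q)
{
        blk_queue_max_sectors(q, 255);
        if (my_have_iommu)
                /* IOMMU present: tell the block layer not to bounce */
                blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
        else
                /* bounce any page above the device's DMA mask */
                blk_queue_bounce_limit(q, MY_DMA_MASK);
}
```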
To operate, a device must have a request function (see the BIO documentation) whose job is to loop around, pulling requests from the device queue with:
struct request *elv_next_request(request_queue_t *q);
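A minimal request function built around this call might look like the sketch below; my_start_request is a hypothetical stand-in for the driver's own hardware submission code:

```
static void my_request_fn(request_queue_t *q)
{
        struct request *req;

        /* pull requests until the queue is drained */
        while ((req = elv_next_request(q)) != NULL)
                my_start_request(req);
}
```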
The number of mapping entries required by the request is found in req->nr_phys_segments. You need to allocate an interim table of this size, in units of sizeof(struct scatterlist). Next, do the interim mapping with:
int blk_rq_map_sg(request_queue_t *q, struct request *req, struct scatterlist *sglist);
This returns the number of SG list entries the block layer actually used to fill the interim table. That interim table finally is mapped using:
int dma_map_sg(struct device *dev, struct scatterlist *sglist, int count, enum dma_data_direction dir);
where count is the value returned by blk_rq_map_sg and sglist is the same list passed into that function. The return value is the number of actual SG list entries needed by the request. The SG list is reused and filled with the actual entries that must be programmed into the device's SG entries. The dir argument provides a hint about how to cope correctly with cache coherency. It can have three values:
DMA_TO_DEVICE: the data is being transferred from memory to the device.
DMA_FROM_DEVICE: the device is transferring data into main memory only.
DMA_BIDIRECTIONAL: no hint is given about the transfer direction.
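Taken together, the allocation and two-stage mapping steps above can be sketched as kernel-side code (error handling abbreviated; dev, q and req are assumed to be in scope):

```
struct scatterlist *sg;
int count, hwcount;
enum dma_data_direction dir;

/* interim table sized by the block layer's segment count */
sg = kmalloc(req->nr_phys_segments * sizeof(struct scatterlist),
             GFP_ATOMIC);
if (!sg)
        return;         /* out of memory: retry the request later */

/* stage 1: the block layer fills in the physical pages */
count = blk_rq_map_sg(q, req, sg);

/* stage 2: the platform layer programs the IOMMU and may merge entries */
dir = rq_data_dir(req) == WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
hwcount = dma_map_sg(dev, sg, count, dir);
/* the first hwcount entries of sg now hold the bus addresses */
```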
Two macros should be used when traversing the SG list to program the device's SG table:
dma_addr_t sg_dma_address(struct scatterlist *sglist_entry);
unsigned int sg_dma_len(struct scatterlist *sglist_entry);
which return the bus physical address and segment length, respectively, of each entry.
The reason for this two-stage mapping of the request is that the BIO layer is designed to be generic code; it has no direct interaction with the platform layer, which knows how to program the IOMMU. Thus, the only thing the BIO layer can calculate is the number of SG segments the IOMMU makes for the request. The BIO layer doesn't know the bus addresses the IOMMU assigns to these segments, so it has to pass in a list of the physical memory addresses of all the pages that need to be mapped. It is the dma_map_sg function that communicates with the platform layer, programs the IOMMU and retrieves the bus physical list of addresses. This is also why the number of elements the BIO layer needs for its list may be larger than the number returned by the DMA API.
When the DMA has completed, the DMA transaction must be torn down with:
int dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int hwcount, enum dma_data_direction dir);
where all the parameters are the same as those passed into dma_map_sg except for hwcount, which should be the value returned by that function. And finally, the SG list you allocated may be freed and the request completed.
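The completion path can then be sketched as follows. The end_that_request_* calls are one illustrative way of completing a request in the 2.6 block layer of the time; dir, sg and hwcount must match the values used at map time:

```
/* tear down the DMA mapping with the same parameters used to map */
dma_unmap_sg(dev, sg, hwcount, dir);

/* free the interim table and complete the request */
kfree(sg);
end_that_request_first(req, 1 /* success */, req->hard_nr_sectors);
end_that_request_last(req);
```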
Usually, the device driver operates without touching any of the data it is transferring. Occasionally, however, the device driver may need to modify or inspect the data before handing it back to the block layer. To do this, the CPU caches must be made coherent with the data by using:
int dma_sync_sg(struct device *dev, struct scatterlist *sglist, int hwcount, enum dma_data_direction dir);
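For example, a driver that needs to inspect data arriving from the device before completing the request might do the following (a sketch; for a read from the device, dir is DMA_FROM_DEVICE):

```
/* make the CPU caches coherent with what the device just wrote */
dma_sync_sg(dev, sg, hwcount, DMA_FROM_DEVICE);
/* ... the driver may now inspect or modify the data safely ... */
```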