Kernel Korner - Using DMA
Second, if you need it, set the overall maximum size of a single request:
void blk_queue_max_sectors(request_queue_t *q, unsigned short max_sectors);
Finally, the DMA mask must be programmed into the queue:
void blk_queue_bounce_limit(request_queue_t *q, u64 max_address);
Usually, you set max_address to the DMA mask. If an IOMMU is being used, however, max_address should be set to BLK_BOUNCE_ANY to tell the block layer not to do any bouncing.
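Putting the queue setup together, a minimal sketch might look like the following. The names mydev_request, mydev_lock and mydev_init_queue, the 32-bit mask and the 128-sector limit are all assumptions for illustration, not taken from any particular driver:

#include <linux/blkdev.h>       /* request_queue_t, blk_* helpers */
#include <linux/spinlock.h>

#define MYDEV_DMA_MASK 0xffffffffULL    /* hypothetical 32-bit device */

static spinlock_t mydev_lock = SPIN_LOCK_UNLOCKED;
static request_queue_t *mydev_queue;

static void mydev_request(request_queue_t *q);  /* driver's request function */

static int mydev_init_queue(void)
{
        mydev_queue = blk_init_queue(mydev_request, &mydev_lock);
        if (!mydev_queue)
                return -ENOMEM;
        /* arbitrary example limit: 128 sectors (64KB) per request */
        blk_queue_max_sectors(mydev_queue, 128);
        /* no IOMMU assumed here, so bounce anything above the DMA mask */
        blk_queue_bounce_limit(mydev_queue, MYDEV_DMA_MASK);
        return 0;
}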
To operate, a device must have a request function (see the BIO documentation) whose job is to loop around and pull requests from the device queue using:
struct request *elv_next_request(request_queue_t *q);
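A skeletal request function built around this loop might read as follows; mydev_start_transfer() is a hypothetical stand-in for the mapping and hardware programming described next:

static void mydev_request(request_queue_t *q)
{
        struct request *req;

        /* pull requests off the queue until it is empty */
        while ((req = elv_next_request(q)) != NULL)
                mydev_start_transfer(req);      /* hypothetical: map and start DMA */
}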
The number of mapping entries required by the request is found in req->nr_phys_segments. You need to allocate an interim table of this size, in units of sizeof(struct scatterlist). Next, do the interim mapping with:
int blk_rq_map_sg(request_queue_t *q, struct request *req, struct scatterlist *sglist);
This returns the number of SG list entries used.
The interim table supplied by the block layer is then finally mapped using:
int dma_map_sg(struct device *dev, struct scatterlist *sglist, int count, enum dma_data_direction dir);
where count is the value returned by blk_rq_map_sg and sglist is the same list that was passed into that function. The return value is the number of SG list entries actually needed by the request. The SG list is reused and filled in with the actual entries that need to be programmed into the device's SG entries. The dir argument provides a hint about how to cope correctly with cache coherency. It can have three values (a combined sketch of both mapping stages follows the list):
DMA_TO_DEVICE: the data is being transferred from memory to the device.
DMA_FROM_DEVICE: the device is transferring data into main memory only.
DMA_BIDIRECTIONAL: no hint is given about the transfer direction.
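As a combined sketch of both mapping stages, the body of the transfer setup might look like this. It assumes mydev->dev is the generic struct device doing the DMA, and that the fragment runs inside the request function (so GFP_ATOMIC is used, as the queue lock is held there):

        struct scatterlist *sglist;
        int nents, hwcount;
        enum dma_data_direction dir;

        /* interim table sized by the block layer's segment estimate */
        sglist = kmalloc(req->nr_phys_segments * sizeof(struct scatterlist),
                         GFP_ATOMIC);
        if (!sglist)
                return;                 /* error handling elided */

        /* stage one: the block layer fills in the physical pages */
        nents = blk_rq_map_sg(q, req, sglist);

        /* stage two: the platform layer programs the IOMMU, if any */
        dir = rq_data_dir(req) == WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
        hwcount = dma_map_sg(mydev->dev, sglist, nents, dir);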
Two macros should be used when traversing the SG list to program the device's SG table:
dma_addr_t sg_dma_address(struct scatterlist *sglist_entry);
unsigned int sg_dma_len(struct scatterlist *sglist_entry);
which return the bus physical address and segment length, respectively, of each entry.
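For example, continuing the sketch above, a driver might walk the mapped list like this, where mydev_set_sg_entry() is a hypothetical helper that writes one entry into the hardware's own SG table:

        int i;

        for (i = 0; i < hwcount; i++)
                mydev_set_sg_entry(mydev, i,
                                   sg_dma_address(&sglist[i]),
                                   sg_dma_len(&sglist[i]));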
The reason for this two-stage mapping of the request is that the BIO layer is designed to be generic code with no direct interaction with the platform layer, which knows how to program the IOMMU. Thus, the only thing the BIO layer can calculate is the number of SG segments the IOMMU will make for the request. The BIO layer doesn't know the bus addresses the IOMMU will assign to these segments, so it has to pass in a list of the physical memory addresses of all the pages that need to be mapped. It is the dma_map_sg function that communicates with the platform layer, programs the IOMMU and retrieves the list of bus physical addresses. This is also why the number of elements the BIO layer needs for its list may be larger than the number returned by the DMA API.
When the DMA has completed, the DMA transaction must be torn down with:
void dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int hwcount, enum dma_data_direction dir);
where all the parameters are the same as those passed into dma_map_sg except for hwcount, which should be the value returned by that function. And finally, the SG list you allocated may be freed and the request completed.
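In the completion path (typically the device's interrupt handler), the teardown might look like this sketch; end_request() is the simple block-layer completion helper of that era, and the 1 marks the request as successful:

        dma_unmap_sg(mydev->dev, sglist, hwcount, dir);
        kfree(sglist);                  /* free the interim table */
        end_request(req, 1);            /* complete the request, 1 == success */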
Usually, the device driver operates without touching any of the data it is transferring. Occasionally, however, the device driver may need to modify or inspect the data before handing it back to the block layer. To do this, the CPU caches must be made coherent with the data by using:
void dma_sync_sg(struct device *dev, struct scatterlist *sglist, int hwcount, enum dma_data_direction dir);
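As a sketch, a driver that wants to examine data the device has just written into memory could do:

        /* make the CPU caches coherent before the CPU reads the buffers;
         * without this, non-coherent architectures may see stale data */
        dma_sync_sg(mydev->dev, sglist, hwcount, DMA_FROM_DEVICE);
        /* ... the CPU may now safely inspect the transferred data ... */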