Most of the Slave DMA controllers have the same general principles of
operations.

They have a given number of channels to use for the DMA transfers, and
a given number of request lines.

Requests and channels are pretty much orthogonal: a channel can be
used to serve any of several requests. To simplify, channels are the
entities that will be doing the copy, and requests the endpoints that
are involved.

The request lines actually correspond to physical lines going from the
DMA-eligible devices to the controller itself. Whenever the device
wants to start a transfer, it asserts a DMA request (DRQ) by asserting
its request line.
A very simple DMA controller would only take into account a single
parameter: the transfer size. At each clock cycle, it would transfer a
byte of data from one buffer to another, until the transfer size has
been reached.

A single-byte granularity wouldn't work well in the real world, since
most buses and devices can handle accesses wider than one byte. This
is why most if not all of the DMA controllers can adjust this, using a
parameter called the transfer width.

Moreover, some DMA controllers, whenever the RAM is used as a source
or destination, can group the reads or writes in memory into bursts,
so that instead of a lot of small memory accesses you get a few bigger
transfers. The burst size defines how many single reads/writes the
controller is allowed to do without splitting the
transfer into smaller sub-transfers.

Our theoretical DMA controller would then only be able to do transfers
that involve a single contiguous block of data. However, some of the
transfers we usually have are not, and want to copy data from
non-contiguous buffers to a contiguous buffer, which is called
scatter-gather.
dmaengine, at least for mem2dev transfers, requires support for
scatter-gather. So we're left with two cases here: either we have a
quite simple DMA controller that doesn't support it, and we'll have to
implement it in software, or we have a more advanced DMA controller
that implements scatter-gather in hardware.
The latter are usually programmed using a collection of chunks to
transfer, kept in a structure that is usually either a table or a
linked list. You then push either the address of the table and its
number of elements, or the first item of the list, to one channel of
the DMA controller, and whenever a DRQ is asserted, the controller
will go through the collection to know where to fetch the data from.

Either way, the format of this collection is completely dependent on
your hardware. Each DMA controller will require a different structure
to be used by the hardware, which will have to be prepared by the
driver whenever you want to use DMA.
These were just the general memory-to-memory (also called mem2mem) or
memory-to-device (mem2dev) kinds of transfers. Most devices often
support other kinds of transfers or memory operations that dmaengine
supports, and which will be detailed later in this document.

DMA Support in Linux
====================
Historically, DMA controller drivers have been implemented using the
async TX API, to offload operations such as memory copy, XOR, or
cryptography. For more information on the async TX API, please look
at the relevant documentation file in
Documentation/crypto/async-tx-api.rst.
``struct dma_device`` Initialization
------------------------------------
- ``channels``: should be initialized as a list using the
  INIT_LIST_HEAD macro for example

- ``src_addr_widths``:
  should contain a bitmask of the supported source transfer widths

- ``dst_addr_widths``:
  should contain a bitmask of the supported destination transfer widths

- ``directions``:
  should contain a bitmask of the supported slave directions
  (i.e. excluding mem2mem transfers)

- ``residue_granularity``:
  granularity of the transfer residue reported to dma_set_residue.
  This can be either:

  - Descriptor:
    your device doesn't support any kind of residue
    reporting. The framework will only know that a particular
    transaction descriptor is done.

  - Segment:
    your device is able to report which chunks have been transferred

  - Burst:
    your device is able to report which bursts have been transferred

- ``dev``: should hold the pointer to the ``struct device`` associated
  to your current driver instance
Supported transaction types
---------------------------
- DMA_MEMCPY

  - The device is able to do memory to memory copies

  - No matter what the overall size of the combined chunks for source
    and destination is, only as many bytes as the smallest of the two
    will be transmitted. That means the number and size of the
    scatter-gather buffers in source and destination can differ, and
    the operation is functionally equivalent to a ``strncpy`` where
    the ``count`` argument equals the smallest total size of the two
    scatter-gather list buffers.

  - It's usually used for copying pixel data between host memory and
    memory-mapped GPU device memory, such as found on modern PCI
    video graphics cards.
- DMA_XOR

  - The device is able to perform XOR operations on memory areas

  - Used to accelerate XOR intensive tasks, such as RAID5

- DMA_XOR_VAL

  - The device is able to perform parity check using the XOR
    algorithm against a memory buffer.

- DMA_PQ

  - The device is able to perform RAID6 P+Q computations, P being a
    simple XOR, and Q being a Reed-Solomon algorithm.

- DMA_PQ_VAL

  - The device is able to perform parity check using the RAID6 P+Q
    algorithm against a memory buffer.

- DMA_MEMSET

  - The device is able to fill memory with the provided pattern

  - The pattern is treated as a single byte signed value.
- DMA_INTERRUPT

  - The device is able to trigger a dummy transfer that will
    generate periodic interrupts

  - Used by the client drivers to register a callback that will be
    called on a regular basis through the DMA controller interrupt

- DMA_PRIVATE

  - The device only supports slave transfers, and as such isn't
    available for async transfers.

- DMA_ASYNC_TX

  - The device supports asynchronous memory-to-memory operations.

  - This capability is automatically set by the DMA engine
    framework, and must not be set directly by the device driver.
- DMA_SLAVE

  - The device can handle device to memory transfers, including
    scatter-gather transfers.

  - While in the mem2mem case we were having two distinct types to
    deal with a single chunk to copy or a collection of them, here,
    we just have a single transaction type that is supposed to
    handle both.

  - If you want to transfer a single contiguous memory buffer,
    simply build a scatter list with only one item.

- DMA_CYCLIC

  - The device can handle cyclic transfers.

  - A cyclic transfer is a transfer where the chunk collection will
    loop over itself, with the last item pointing to the first.

  - It's usually used for audio transfers, where you want to operate
    on a single ring buffer that you fill with your audio data.

- DMA_INTERLEAVE

  - The device supports interleaved transfers.

  - These transfers can transfer data from a non-contiguous buffer
    to a non-contiguous buffer, as opposed to DMA_SLAVE that can
    transfer data from a non-contiguous data set to a contiguous
    destination buffer.

  - It's usually used for 2d content transfers, in which case you
    want to transfer a portion of uncompressed data directly to the
    display to print it.
- DMA_COMPLETION_NO_ORDER

  - The device does not support in order completion.

  - The driver should return DMA_OUT_OF_ORDER for device_tx_status if
    the device is setting this capability.

  - All cookie tracking and checking API should be treated as invalid
    if the device exposes this capability.

  - At this point, this is incompatible with polling option for dmatest.

  - If this cap is set, the user is recommended to provide a unique
    identifier for each descriptor sent to the DMA device in order to
    properly track the completion.
- DMA_REPEAT

  - The device supports repeated transfers. A repeated transfer, indicated by
    the DMA_PREP_REPEAT transfer flag, is similar to a cyclic transfer in that
    it gets automatically repeated when it ends, but can additionally be
    replaced by the client.

  - This feature is limited to interleaved transfers; this flag should thus not
    be set if the DMA_INTERLEAVE flag is not set as well. This matches
    the current needs of DMA clients; support for additional transfer types
    can be added in the future if and when the need arises.

- DMA_LOAD_EOT

  - The device supports replacing repeated transfers at end of transfer (EOT)
    by queuing a new transfer with the DMA_PREP_LOAD_EOT flag set.

  - Support for replacing a currently running transfer at another point (such
    as end of burst instead of end of transfer) will be added in the future
    based on DMA clients' needs, if and when the need arises.
Per descriptor metadata support
-------------------------------
Some data movement architectures (DMA controller and peripherals) use
metadata associated with a transaction. The DMA controller's role is to
transfer the payload and the metadata alongside. The metadata itself is
not used by the DMA engine, but it contains parameters, keys, vectors,
etc. for the peripheral or from the peripheral.

The DMAengine framework provides a generic way to facilitate metadata for
descriptors. Depending on the architecture the DMA driver can implement
either or both of the methods and it is up to the client driver to choose
which one to use.
- DESC_METADATA_CLIENT

  The metadata buffer is allocated/provided by the client driver and it is
  attached to the descriptor via the dmaengine_desc_attach_metadata() helper.

  From the DMA driver the following is expected for this mode:

  - DMA_MEM_TO_DEV / DEV_MEM_TO_MEM

    The data from the provided metadata buffer should be prepared for the DMA
    controller to be sent alongside of the payload data.

  - DMA_DEV_TO_MEM

    On transfer completion the DMA driver must copy the metadata to the client
    provided metadata buffer before notifying the client about completion.
    After the transfer completion, DMA drivers must not touch the metadata
    buffer provided by the client.
- DESC_METADATA_ENGINE

  The metadata buffer is allocated/managed by the DMA driver. The client
  driver can ask for the pointer, maximum size and the currently used size
  of the metadata and can directly update or read it.

  From the DMA driver the following is expected for this mode:

  - get_metadata_ptr()

    Should return the pointer to the metadata buffer, its maximum size and
    the currently used / valid (if any) bytes in the buffer.

  - set_metadata_len()

    It is called by the client after it has placed the metadata in the buffer
    to let the DMA driver know the number of valid bytes provided.

  Note: since the client will ask for the metadata pointer in the completion
  callback (in the DMA_DEV_TO_MEM case) the DMA driver must ensure that the
  descriptor is not freed up prior to the callback being called.
Device operations
-----------------
- ``device_alloc_chan_resources``

- ``device_free_chan_resources``

  - These functions will be called whenever a driver will call
    ``dma_request_channel`` or ``dma_release_channel`` for the
    first/last time on the channel associated to that driver.

  - They are in charge of allocating/freeing all the needed
    resources in order for that channel to be useful for your driver.

  - These functions can sleep.
- ``device_prep_dma_*``

  - These functions are matching the capabilities you registered
    previously.

  - These functions all take the buffer or the scatterlist relevant
    for the transfer being prepared, and should create a hardware
    descriptor or a list of hardware descriptors from it.

  - These functions can be called from an interrupt context.

  - Any allocation you might do should be using the GFP_NOWAIT
    flag, in order not to potentially sleep, but without depleting
    the emergency pool either.

  - Drivers should try to pre-allocate any memory they might need
    during the transfer setup at probe time to avoid putting too
    much pressure on the nowait allocator.

  - It should return a unique instance of the
    ``dma_async_tx_descriptor`` structure, that further represents
    this particular transfer.

  - This structure can be initialized using the function
    dma_async_tx_descriptor_init.

  - You'll also need to set two fields in this structure:

    - flags:
      TODO: Can it be modified by the driver itself, or should it
      be always the flags passed in the arguments?

    - tx_submit: A pointer to a function you have to implement,
      that is supposed to push the current transaction descriptor
      to a pending queue, waiting for issue_pending to be called.

  - In this structure the function pointer callback_result can be
    initialized in order for the submitter to be notified that a
    transaction has completed. The result structure has two fields:

    - result: This provides the transfer result defined by
      ``dmaengine_tx_result``. Either success or some error condition.

    - residue: Provides the residue bytes of the transfer for those that
      support residue.
- ``device_prep_peripheral_dma_vec``

  - Similar to ``device_prep_slave_sg``, but it takes a pointer to an
    array of ``dma_vec`` structures, which (in the long run) will
    replace scatterlists.

- ``device_issue_pending``

  - Takes the first transaction descriptor in the pending queue,
    and starts the transfer. Whenever that transfer is done, it
    should move to the next transaction in the list.

  - This function can be called in an interrupt context
- ``device_tx_status``

  - Should report the bytes left to go over on the given channel

  - Should only care about the transaction descriptor passed as
    argument, not the currently active one on the channel

  - The tx_state argument might be NULL

  - Should use dma_set_residue to report it

  - In the case of a cyclic transfer, it should only take into
    account the total size of the cyclic buffer.

  - Should return DMA_OUT_OF_ORDER if the device does not support in order
    completion and is completing the operation out of order.

  - This function can be called in an interrupt context.
- device_config

  - Reconfigures the channel with the configuration given as argument

  - This command should NOT perform synchronously, or on any
    currently queued transfers, but only on subsequent ones

  - In this case, the function will receive a ``dma_slave_config``
    structure pointer as an argument, that will detail which
    configuration to use.

  - Even though that structure contains a direction field, this
    field is deprecated in favor of the direction argument given to
    the prep_* functions

  - This call is mandatory for slave operations only. This should NOT be
    set or expected to be set for memcpy operations. If a driver supports
    both, it should use this call for slave operations only and not for
    memcpy ones.
- device_pause

  - Pauses a transfer on the channel

  - This command should operate synchronously on the channel,
    pausing right away the work of the given channel

- device_resume

  - Resumes a transfer on the channel

  - This command should operate synchronously on the channel,
    resuming right away the work of the given channel
- device_terminate_all

  - Aborts all the pending and ongoing transfers on the channel

  - For aborted transfers the complete callback should not be called

  - Can be called from atomic context or from within a complete
    callback of a descriptor. Must not sleep. Drivers must be able
    to handle this correctly.

  - Termination may be asynchronous. The driver does not have to
    wait until the currently active transfer has completely stopped.
    See device_synchronize.
- device_synchronize

  - Must synchronize the termination of a channel to the current
    context.

  - Must make sure that memory for previously submitted
    descriptors is no longer accessed by the DMA controller.

  - Must make sure that all complete callbacks for previously
    submitted descriptors have finished running and none are
    scheduled to run.

  - May sleep.
- dma_run_dependencies

  - Should be called at the end of an async TX transfer, and can be
    ignored in the slave transfers case.

  - Makes sure that dependent operations are run before marking it
    as complete.

- dma_cookie_t

  - It's a DMA transaction ID that will increment over time.

  - Not really relevant any more since the introduction of ``virt-dma``
    that abstracts it away.

- ``dma_vec``

  - A small structure that contains a DMA address and length.
- DMA_CTRL_ACK

  - If clear, the descriptor cannot be reused by provider until the
    client acknowledges receipt, i.e. has a chance to establish any
    dependency chains

  - This can be acked by invoking async_tx_ack()

  - If set, does not mean descriptor can be reused

- DMA_CTRL_REUSE

  - If set, the descriptor can be reused after being completed. It should
    not be freed by provider if this flag is set.

  - The descriptor should be prepared for reuse by invoking
    ``dmaengine_desc_set_reuse()`` which will set DMA_CTRL_REUSE.

  - ``dmaengine_desc_set_reuse()`` will succeed only when channel support
    reusable descriptor as exhibited by capabilities.

  - As a consequence, if a device driver wants to skip the
    ``dma_map_sg()`` and ``dma_unmap_sg()`` in between 2 transfers,
    because the DMA'd data wasn't used, it can resubmit the transfer
    right after its completion.

  - Descriptor can be freed in few ways

    - Clearing DMA_CTRL_REUSE by invoking
      ``dmaengine_desc_clear_reuse()`` and submitting for last txn

    - Explicitly invoking ``dmaengine_desc_free()``, this can succeed only
      when DMA_CTRL_REUSE is already set

    - Terminating the channel
- DMA_PREP_CMD

  - If set, the client driver tells the DMA controller that the data
    passed to the DMA API is command data.

  - Interpretation of command data is DMA controller specific. It can be
    used for issuing commands to other peripherals/register reads/register
    writes for which the descriptor should be in a different format from
    normal data descriptors.

- DMA_PREP_REPEAT

  - If set, the transfer will be automatically repeated when it ends until a
    new transfer is queued on the same channel with the DMA_PREP_LOAD_EOT
    flag. If the next transfer to be queued on the channel does not have the
    DMA_PREP_LOAD_EOT flag set, the current transfer will be repeated until
    the client terminates all the running transfers.

  - This flag is only supported if the channel reports the DMA_REPEAT
    capability.

- DMA_PREP_LOAD_EOT

  - If set, the transfer will replace the transfer currently being executed at
    the end of that transfer.

  - This is the default behaviour for non-repeated transfers; specifying
    DMA_PREP_LOAD_EOT for non-repeated transfers will thus make no difference.

  - When using repeated transfers, DMA clients will usually need to set the
    DMA_PREP_LOAD_EOT flag on all transfers, otherwise the channel will keep
    repeating the last repeated transfer and ignore the new transfers being
    queued. Failure to set DMA_PREP_LOAD_EOT will appear as if the channel
    was stuck on the previous transfer.

  - This flag is only supported if the channel reports the DMA_LOAD_EOT
    capability.
This is a rather inefficient design though, because the inter-transfer
- Burst: A number of consecutive read or write operations that
  can be queued to buffers before being flushed to memory.

- Chunk: A contiguous collection of bursts

- Transfer: A collection of chunks (be it contiguous or not)