============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
of the API (and actual examples), see Documentation/core-api/dma-api-howto.rst.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-coherent memory
machines.  Unless you know that your driver absolutely has to support
non-coherent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - DMA API
----------------

To get the DMA API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

	void *
	dma_alloc_coherent(struct device *dev, size_t size,
			   dma_addr_t *dma_handle, gfp_t flag)

Coherent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of coherent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: coherent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for coherent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation (the implementation may ignore flags that affect
the location of the returned memory, like GFP_DMA).

::

	void
	dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
			  dma_addr_t dma_handle)

Free a previously allocated region of coherent memory.  dev, size and dma_handle
must all be the same as those passed into dma_alloc_coherent().  cpu_addr must
be the virtual address returned by dma_alloc_coherent().

Note that unlike the sibling allocation call, this routine may only be called
with IRQs enabled.

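
As a brief illustration, a driver might allocate and later free a coherent
descriptor ring like this (a minimal sketch; ``my_dev`` and ``RING_BYTES``
are made-up names, not part of the API)::

	dma_addr_t ring_dma;
	void *ring;

	ring = dma_alloc_coherent(&my_dev->pdev->dev, RING_BYTES,
				  &ring_dma, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* program ring_dma into the device, access 'ring' from the CPU */

	dma_free_coherent(&my_dev->pdev->dev, RING_BYTES, ring, ring_dma);
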

Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the DMA API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages().  Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.

.. kernel-doc:: mm/dmapool.c
   :export:

.. kernel-doc:: include/linux/dmapool.h


Part Ic - DMA addressing limitations
------------------------------------

DMA mask is a bit mask of the addressable region for the device. In other words,
if applying the DMA mask (a bitwise AND operation) to the DMA address of a
memory region does not clear any bits in the address, then the device can
perform DMA to that memory region.

All the below functions which set a DMA mask may fail if the requested mask
cannot be used with the device, or if the device is not capable of doing DMA.

::

	int
	dma_set_mask_and_coherent(struct device *dev, u64 mask)

Updates both streaming and coherent DMA masks.

Returns: 0 if successful and a negative error if not.

::

	int
	dma_set_mask(struct device *dev, u64 mask)

Updates only the streaming DMA mask.

Returns: 0 if successful and a negative error if not.

::

	int
	dma_set_coherent_mask(struct device *dev, u64 mask)

Updates only the coherent DMA mask.

Returns: 0 if successful and a negative error if not.

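
As a sketch, a driver that prefers 64-bit addressing but can fall back to
32-bit addressing might do the following at probe time (the error handling
policy shown here is only an example)::

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
			dev_err(dev, "no usable DMA addressing\n");
			return -EIO;
		}
	}
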
::

	u64
	dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.

::

	size_t
	dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.

::

	size_t
	dma_opt_mapping_size(struct device *dev);

Returns the maximum optimal size of a mapping for the device.

Mapping larger buffers may take much longer in certain scenarios. In
addition, for high-rate short-lived streaming mappings, the upfront time
spent on the mapping may account for an appreciable part of the total
request lifetime. As such, if splitting larger requests incurs no
significant performance penalty, then device drivers are advised to
limit total DMA streaming mappings length to the returned value.

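
As a sketch of how a driver might consume these limits at setup time (the
``adapter->*_xfer_bytes`` fields are hypothetical driver state, not part of
the API)::

	adapter->max_xfer_bytes = min_t(size_t, adapter->max_xfer_bytes,
					dma_max_mapping_size(dev));
	adapter->opt_xfer_bytes = min_t(size_t, adapter->max_xfer_bytes,
					dma_opt_mapping_size(dev));
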
::

	bool
	dma_need_sync(struct device *dev, dma_addr_t dma_addr);

Returns %true if dma_sync_single_for_{device,cpu} calls are required to
transfer memory ownership.  Returns %false if those calls can be skipped.

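
A driver on a hot path might cache this result at mapping time and use it to
skip the per-I/O sync calls (a sketch; ``buf->needs_sync`` and the other
``buf`` fields are hypothetical driver state)::

	buf->needs_sync = dma_need_sync(dev, buf->dma_addr);
	...
	if (buf->needs_sync)
		dma_sync_single_for_cpu(dev, buf->dma_addr, buf->len,
					DMA_FROM_DEVICE);
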
::

	unsigned long
	dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary. If the device cannot merge any DMA address
segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

Streaming DMA allows mapping an existing buffer for DMA transfers and then
unmapping it when finished.  Map functions are not guaranteed to succeed, so
the return value must be checked.

.. note::

	In particular, mapping may fail for memory not addressable by the
	device, e.g. if it is not within the DMA mask of the device and/or a
	connecting bus bridge.  Streaming DMA functions try to overcome such
	addressing constraints, either by using an IOMMU (a device which maps
	I/O DMA addresses to physical memory addresses), or by copying the
	data to/from a bounce buffer if the kernel is configured with a
	:doc:`SWIOTLB <swiotlb>`.  However, these methods are not always
	available, and even if they are, they may still fail for a number of
	reasons.

	In short, a device driver may need to be wary of where buffers are
	located in physical memory, especially if the DMA mask is less than 32
	bits.

::

	dma_addr_t
	dma_map_single(struct device *dev, void *cpu_addr, size_t size,
		       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The DMA API uses a strongly typed enumerator for its direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

	Contiguous kernel virtual space may not be contiguous in
	physical memory.  Since this API does not provide any scatter/gather
	capability, it will fail if the user tries to map a non-physically
	contiguous piece of memory.  For this reason, memory to be mapped by
	this API should be obtained from sources which guarantee it to be
	physically contiguous (like kmalloc).

.. warning::

	Memory coherency operates at a granularity called the cache
	line width.  In order for memory mapped by this API to operate
	correctly, the mapped region must begin exactly on a cache line
	boundary and end exactly on one (to prevent two separately mapped
	regions from sharing a single cache line).  Since the cache line size
	may not be known at compile time, the API will not enforce this
	requirement.  Therefore, it is recommended that driver writers who
	don't take special care to determine the cache line size at run time
	only map virtual regions that begin and end on page boundaries (which
	are guaranteed also to be cache line boundaries).

	DMA_TO_DEVICE synchronisation must be done after the last modification
	of the memory region by the software and before it is handed off to
	the device.  Once this primitive is used, memory covered by this
	primitive should be treated as read-only by the device.  If the device
	may write to it at any point, it should be DMA_BIDIRECTIONAL (see
	below).

	DMA_FROM_DEVICE synchronisation must be done before the driver
	accesses data that may be changed by the device.  This memory should
	be treated as read-only by the driver.  If the driver needs to write
	to it at any point, it should be DMA_BIDIRECTIONAL (see below).

	DMA_BIDIRECTIONAL requires special handling: it means that the driver
	isn't sure if the memory was modified before being handed off to the
	device and also isn't sure if the device will also modify it.  Thus,
	you must always sync bidirectional memory twice: once before the
	memory is handed off to the device (to make sure all memory changes
	are flushed from the processor) and once before the data may be
	accessed after being used by the device (to make sure any processor
	cache lines are updated with data that the device may have changed).

::

	void
	dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
			 enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters passed in
must be identical to those passed to (and returned by) dma_map_single().

::

	dma_addr_t
	dma_map_page(struct device *dev, struct page *page,
		     unsigned long offset, size_t size,
		     enum dma_data_direction direction)

	void
	dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
		       enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

	dma_addr_t
	dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
			 enum dma_data_direction dir, unsigned long attrs)

	void
	dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
			   enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources. All the notes and
warnings for the other mapping APIs apply here. The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

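
For instance, a driver that lets a DMA engine access a peripheral's MMIO
window might map the window like this (a sketch; ``res`` is assumed to be the
struct resource describing the MMIO region, and the error handling is only
illustrative)::

	dma_addr_t dma;

	dma = dma_map_resource(dev, res->start, resource_size(res),
			       DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, dma))	/* see dma_mapping_error() below */
		return -EIO;
	...
	dma_unmap_resource(dev, dma, resource_size(res), DMA_BIDIRECTIONAL, 0);
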
::

	int
	dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping. A driver can check for these errors by testing
the returned DMA address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).

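
Putting the pieces together, a typical single-buffer streaming mapping looks
roughly like this (a sketch; ``buf`` and ``len`` are assumed driver context)::

	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* hand 'dma' to the device and wait for the transfer to finish */

	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
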
::

	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		   int nents, enum dma_data_direction direction)

Maps a scatter/gather list for DMA. Returns the number of DMA address segments
mapped, which may be smaller than <nents> passed in if several consecutive
sglist entries are merged (e.g. with an IOMMU, or if some adjacent segments
just happen to be physically contiguous).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one.  The returned number is the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		     int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

	void
	dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
				size_t size,
				enum dma_data_direction direction)

	void
	dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
				   size_t size,
				   enum dma_data_direction direction)

	void
	dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
			    int nents,
			    enum dma_data_direction direction)

	void
	dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
			       int nents,
			       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the sg mapping API. With the sync_single API,
you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

   You must do this:

   - Before reading values that have been written by DMA from the device
     (use the DMA_FROM_DEVICE direction)
   - After writing values that will be written to the device using DMA
     (use the DMA_TO_DEVICE direction)
   - Before *and* after handing memory to the device if the memory is
     DMA_BIDIRECTIONAL

See also dma_map_single().

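
For a receive buffer that is mapped once and reused across many transfers,
the ownership hand-off might look like this (a minimal sketch; error checking
and ``process_data()`` are illustrative only)::

	/* Map once at setup time. */
	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	/* Before each transfer: hand the buffer to the device. */
	dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);
	/* ... device DMAs data into the buffer ... */

	/* After the transfer: take the buffer back before the CPU reads it. */
	dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
	process_data(buf, len);
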
::

	dma_addr_t
	dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
			     enum dma_data_direction dir,
			     unsigned long attrs)

	void
	dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
			       size_t size, enum dma_data_direction dir,
			       unsigned long attrs)

	int
	dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			 int nents, enum dma_data_direction dir,
			 unsigned long attrs)

	void
	dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
			   int nents, enum dma_data_direction dir,
			   unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in
Documentation/core-api/dma-attributes.rst.

If dma_attrs is 0, the semantics of each of these functions
are identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

	#include <linux/dma-mapping.h>
	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
	 * documented in Documentation/core-api/dma-attributes.rst */
	...

		unsigned long attrs = 0;

		attrs |= DMA_ATTR_FOO;
		....
		n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attrs);
		....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

	void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
				     size_t size, enum dma_data_direction dir,
				     unsigned long attrs)
	{
		....
		if (attrs & DMA_ATTR_FOO)
			/* twizzle the frobnozzle */
		....
	}

Part Ie - IOVA-based DMA mappings
---------------------------------

These APIs allow a very efficient mapping when using an IOMMU.  They are an
optional path that requires extra code and are only recommended for drivers
where DMA mapping performance, or the space usage for storing the DMA
addresses, matters.  All the considerations from the previous section apply
here as well.

::

    bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
		phys_addr_t phys, size_t size);

Is used to try to allocate IOVA space for a mapping operation.  If it returns
false this API can't be used for the given device and the normal streaming
DMA mapping API should be used.  The ``struct dma_iova_state`` is allocated
by the driver and must be kept around until unmap time.

::

    static inline bool dma_use_iova(struct dma_iova_state *state)

Can be used by the driver to check if the IOVA-based API is used after a
call to dma_iova_try_alloc().  This can be useful in the unmap path.

::

    int dma_iova_link(struct device *dev, struct dma_iova_state *state,
		phys_addr_t phys, size_t offset, size_t size,
		enum dma_data_direction dir, unsigned long attrs);

Is used to link ranges to the IOVA previously allocated.  The start of all
but the first call to dma_iova_link() for a given state must be aligned
to the DMA merge boundary returned by ``dma_get_merge_boundary()``, and
the size of all but the last range must be aligned to the DMA merge boundary
as well.

::

    int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
		size_t offset, size_t size);

Must be called to sync the IOMMU page tables for the IOVA range mapped by one
or more calls to ``dma_iova_link()``.

For drivers that use a one-shot mapping, all ranges can be unmapped and the
IOVA freed by calling:

::

   void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
		size_t mapped_len, enum dma_data_direction dir,
		unsigned long attrs);

Alternatively drivers can dynamically manage the IOVA space by unmapping
and mapping individual regions.  In that case

::

    void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
		size_t offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs);

is used to unmap a range previously mapped, and

::

   void dma_iova_free(struct device *dev, struct dma_iova_state *state);

is used to free the IOVA space.  All regions must have been unmapped using
``dma_iova_unlink()`` before calling ``dma_iova_free()``.

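A rough sketch of the one-shot flow, assuming a driver that maps a single
physically contiguous buffer described by ``phys`` and ``len`` (error handling
abbreviated, and the fallback policy is only an example)::

	struct dma_iova_state state;
	size_t mapped = 0;
	int ret;

	if (!dma_iova_try_alloc(dev, &state, phys, len)) {
		/* fall back to the normal streaming mapping API here */
		return -EOPNOTSUPP;
	}

	ret = dma_iova_link(dev, &state, phys, 0, len, DMA_TO_DEVICE, 0);
	if (ret)
		goto out_destroy;
	mapped = len;

	ret = dma_iova_sync(dev, &state, 0, len);
	if (ret)
		goto out_destroy;

	/* program the device with the DMA address obtained from the state */

out_destroy:
	dma_iova_destroy(dev, &state, mapped, DMA_TO_DEVICE, 0);
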
Part II - Non-coherent DMA allocations
--------------------------------------

These APIs allow allocating pages that are guaranteed to be DMA addressable
by the passed in device, but which need explicit management of memory ownership
for the kernel vs the device.

If you don't understand how cache line coherency works between a processor and
an I/O device, you should not be using this part of the API.

::

	struct page *
	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
			enum dma_data_direction dir, gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory.  It
returns a pointer to the first struct page for the region, or NULL if the
allocation failed. The resulting struct page can be used for everything a
struct page is suitable for.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies whether data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.

::

	void
	dma_free_pages(struct device *dev, size_t size, struct page *page,
			dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_pages().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_pages().  page must be the pointer returned by dma_alloc_pages().

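
A minimal sketch of the ownership hand-offs for such an allocation, assuming
a device that writes into the buffer::

	struct page *page;
	dma_addr_t dma;
	void *buf;

	page = dma_alloc_pages(dev, size, &dma, DMA_FROM_DEVICE, GFP_KERNEL);
	if (!page)
		return -ENOMEM;
	buf = page_address(page);

	dma_sync_single_for_device(dev, dma, size, DMA_FROM_DEVICE);
	/* ... device writes into the buffer via 'dma' ... */
	dma_sync_single_for_cpu(dev, dma, size, DMA_FROM_DEVICE);
	/* the CPU may now read 'buf' */

	dma_free_pages(dev, size, page, dma, DMA_FROM_DEVICE);
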
::

	int
	dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
		       size_t size, struct page *page)

Map an allocation returned from dma_alloc_pages() into a user address space.
dev and size must be the same as those passed into dma_alloc_pages().
page must be the pointer returned by dma_alloc_pages().

::

	void *
	dma_alloc_noncoherent(struct device *dev, size_t size,
			dma_addr_t *dma_handle, enum dma_data_direction dir,
			gfp_t gfp)

This routine is a convenient wrapper around dma_alloc_pages that returns the
kernel virtual address for the allocated memory instead of the page structure.

::

	void
	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
			dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_noncoherent().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
dma_alloc_noncoherent().

::

	struct sg_table *
	dma_alloc_noncontiguous(struct device *dev, size_t size,
				enum dma_data_direction dir, gfp_t gfp,
				unsigned long attrs);

This routine allocates <size> bytes of non-coherent and possibly non-contiguous
memory.  It returns a pointer to a struct sg_table that describes the allocated
and DMA mapped memory, or NULL if the allocation failed. The resulting memory
can be used for everything a struct page mapped into a scatterlist is suitable
for.

The returned sg_table is guaranteed to have a single DMA-mapped segment as
indicated by sgt->nents, but it might have multiple CPU side segments as
indicated by sgt->orig_nents.

The dir parameter specifies whether data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

The attrs argument must be either 0 or DMA_ATTR_ALLOC_SINGLE_PAGES.

Before giving the memory to the device, dma_sync_sgtable_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_sgtable_for_cpu(), just like for streaming DMA mappings that are
reused.

::

	void
	dma_free_noncontiguous(struct device *dev, size_t size,
			       struct sg_table *sgt,
			       enum dma_data_direction dir)

Free memory previously allocated using dma_alloc_noncontiguous().  dev, size,
and dir must all be the same as those passed into dma_alloc_noncontiguous().
sgt must be the pointer returned by dma_alloc_noncontiguous().

::

	void *
	dma_vmap_noncontiguous(struct device *dev, size_t size,
		struct sg_table *sgt)

Return a contiguous kernel mapping for an allocation returned from
dma_alloc_noncontiguous().  dev and size must be the same as those passed into
dma_alloc_noncontiguous().  sgt must be the pointer returned by
dma_alloc_noncontiguous().

Once a non-contiguous allocation is mapped using this function, the
flush_kernel_vmap_range() and invalidate_kernel_vmap_range() APIs must be used
to manage the coherency between the kernel mapping, the device and user space
mappings (if any).

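
Taken together, an allocation that the driver also wants to access through a
contiguous kernel mapping could look like this (a sketch; error handling
abbreviated, device assumed to write into the buffer)::

	struct sg_table *sgt;
	void *vaddr;

	sgt = dma_alloc_noncontiguous(dev, size, DMA_FROM_DEVICE,
				      GFP_KERNEL, 0);
	if (!sgt)
		return -ENOMEM;

	vaddr = dma_vmap_noncontiguous(dev, size, sgt);
	if (!vaddr) {
		dma_free_noncontiguous(dev, size, sgt, DMA_FROM_DEVICE);
		return -ENOMEM;
	}

	dma_sync_sgtable_for_device(dev, sgt, DMA_FROM_DEVICE);
	/* ... device writes via the single DMA segment in sgt ... */
	dma_sync_sgtable_for_cpu(dev, sgt, DMA_FROM_DEVICE);
	invalidate_kernel_vmap_range(vaddr, size);
	/* the CPU may now read through 'vaddr' */

	dma_vunmap_noncontiguous(dev, vaddr);
	dma_free_noncontiguous(dev, size, sgt, DMA_FROM_DEVICE);
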
::

	void
	dma_vunmap_noncontiguous(struct device *dev, void *vaddr)

Unmap a kernel mapping returned by dma_vmap_noncontiguous().  dev must be the
same as the one passed into dma_alloc_noncontiguous().  vaddr must be the
pointer returned by dma_vmap_noncontiguous().


::

	int
	dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
			       size_t size, struct sg_table *sgt)

Map an allocation returned from dma_alloc_noncontiguous() into a user address
space.  dev and size must be the same as those passed into
dma_alloc_noncontiguous().  sgt must be the pointer returned by
dma_alloc_noncontiguous().

::

	int
	dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

	This API may return a number *larger* than the actual cache
	line, but it will guarantee that one or more cache lines fit exactly
	into the width returned by this call.  It will also always be a power
	of two for easy alignment.

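
For instance, a driver carving DMA buffers out of a larger allocation might
round each buffer size up to this value so that no two buffers share a cache
line (a sketch; ``buf_size`` is assumed driver context)::

	size_t align = dma_get_cache_alignment();
	size_t slot = ALIGN(buf_size, align);	/* per-buffer stride */
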

Part III - Debug drivers use of the DMA API
-------------------------------------------

The DMA API as described above has some constraints. For example, DMA
addresses must be released with the corresponding unmapping function and with
the same size. With the advent of hardware IOMMUs it becomes more and more
important that drivers do not violate those constraints. In the worst case
such a violation can result in data corruption, up to and including destroyed
filesystems.

To debug drivers and find bugs in the usage of the DMA API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it you can select the "Enable
debugging of DMA API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this code
detects an error it prints a warning message with some details into your
kernel log. An example warning message may look like this::

	WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
		check_unmap+0x203/0x490()
	Hardware name:
	forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
		function [device address=0x00000000640444be] [size=66 bytes] [mapped as
	single] [unmapped as page]
	Modules linked in: nfsd exportfs bridge stp llc r8169
	Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
	Call Trace:
	<IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
	[<ffffffff80647b70>] _spin_unlock+0x10/0x30
	[<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
	[<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
	[<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
	[<ffffffff80252f96>] queue_work+0x56/0x60
	[<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
	[<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
	[<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
	[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
	[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
	[<ffffffff803c7ea3>] check_unmap+0x203/0x490
	[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
	[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
	[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
	[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
	[<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
	[<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
	[<ffffffff8020c093>] ret_from_intr+0x0/0xa
	<EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be silently counted. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver this
can be disabled via debugfs. See the debugfs interface documentation below for
details.

The debugfs directory for the DMA API debugging code is called dma-api/. In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value. If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log. Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled. This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/dump                    This read-only file contains current DMA
                                mappings.

dma-api/error_count             This file is read-only and shows the total
                                number of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops. This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen. If this value goes
                                down to zero the code will attempt to increase
                                nr_total_entries to compensate.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/nr_total_entries        The total number of dma_debug_entries in the
                                allocator, both free and used.

dma-api/driver_filter           You can write a name of a driver into this file
                                to limit the debug output to requests from that
                                particular driver. Write an empty string to
                                that file to disable the filter and see
                                all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a specific device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand. 65536
entries are preallocated at boot; if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the default. Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested. The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated. This is to indicate that a
larger preallocation size may be appropriate, or if it happens continually
that a driver may be leaking mappings.

::

	void
	debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

debug_dma_mapping_error() is the dma-debug interface for debugging drivers
that fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces. This interface clears a flag
set by debug_dma_map_page() to indicate that dma_mapping_error() has been
called by the driver. When the driver does the unmap, debug_dma_unmap() checks
the flag and, if it is still set, prints a warning message that includes the
call trace leading up to the unmap. This interface can be called from
dma_mapping_error() routines to enable DMA mapping error check debugging.

Functions and structures
========================

.. kernel-doc:: include/linux/scatterlist.h
.. kernel-doc:: lib/scatterlist.c