# 8a840ab0 | 06-Jan-2026 | Robin Murphy <robin.murphy@arm.com>
dma-mapping: Remove dma_mark_clean (again)
With IA-64 now gone, there are no users of the dma_mark_clean hook, so we can retire it for good.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/c004927f01962726ff1dcf94d1b4efff84db805a.1767727673.git.robin.murphy@arm.com

# 150745b4 | 15-Oct-2024 | Christoph Hellwig <hch@lst.de>
dma-debug: remove DMA_API_DEBUG_SG
The scatterlist validity checks are pretty simple and cheap, so perform them unconditionally.
Signed-off-by: Christoph Hellwig <hch@lst.de>

# de6c85bf | 28-Aug-2024 | Christoph Hellwig <hch@lst.de>
dma-mapping: clearly mark DMA ops as an architecture feature
DMA ops are a helper for architectures and not for drivers to override the DMA implementation.
Unfortunately driver authors keep ignoring this. Make the fact more clear by renaming the symbol to ARCH_HAS_DMA_OPS and having the two drivers overriding their dma_ops depend on that. These drivers should probably be marked broken, but we can give them a bit of a grace period for that.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> # for IPU6
Acked-by: Robin Murphy <robin.murphy@arm.com>
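
For context, the gating this rename establishes looks roughly like the following sketch (simplified from <linux/dma-map-ops.h>; illustrative, not verbatim):

    #ifdef CONFIG_ARCH_HAS_DMA_OPS
    /* Only architectures that select ARCH_HAS_DMA_OPS get a real lookup. */
    static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
    {
            if (dev->dma_ops)
                    return dev->dma_ops;
            return get_arch_dma_ops();
    }
    #else
    /* Everyone else: no DMA ops, callers fall through to dma-direct. */
    static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
    {
            return NULL;
    }
    #endif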

# b5c58b2f | 24-Jul-2024 | Leon Romanovsky <leonro@nvidia.com>
dma-mapping: direct calls for dma-iommu
Directly call into dma-iommu just like we have been doing for dma-direct for a while. This avoids the indirect call overhead for IOMMU ops and removes the need to have DMA ops entirely for many common configurations.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
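
The resulting dispatch on the map path is roughly the following (a simplified sketch of kernel/dma/mapping.c after this series; error handling and tracing omitted, and the function name here is just for the sketch):

    static dma_addr_t sketch_map_page(struct device *dev, struct page *page,
                    unsigned long offset, size_t size,
                    enum dma_data_direction dir, unsigned long attrs)
    {
            const struct dma_map_ops *ops = get_dma_ops(dev);

            if (dma_map_direct(dev, ops))
                    return dma_direct_map_page(dev, page, offset, size, dir, attrs);
            if (use_dma_iommu(dev))
                    /* new: direct call into dma-iommu, no ops indirection */
                    return iommu_dma_map_page(dev, page, offset, size, dir, attrs);
            /* legacy architectures still go through the indirect ops */
            return ops->map_page(dev, page, offset, size, dir, attrs);
    }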

# fe7514b1 | 07-May-2024 | Alexander Lobakin <aleksander.lobakin@intel.com>
dma: compile-out DMA sync op calls when not used
Some platforms do have DMA, but DMA there is always direct and coherent. Currently, even on such platforms, DMA sync operations are compiled and called. Add a new hidden Kconfig symbol, DMA_NEED_SYNC, and set it only when sync operations are needed, or when DMA ops, swiotlb, or DMA debug is enabled. Compile the global dma_sync_*() and dma_need_sync() helpers only when it is set; otherwise, provide empty inline stubs. The change allows for future optimizations of DMA sync calls depending on runtime conditions.
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
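
The stub pattern described above looks roughly like this (a sketch of the <linux/dma-mapping.h> arrangement; the real header routes through a few more wrappers):

    #ifdef CONFIG_DMA_NEED_SYNC
    void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
                    size_t size, enum dma_data_direction dir);
    bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
    #else
    /* No device in this build can need syncs: calls compile away. */
    static inline void dma_sync_single_for_cpu(struct device *dev,
                    dma_addr_t addr, size_t size, enum dma_data_direction dir)
    {
    }
    static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
    {
            return false;
    }
    #endif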

# 2c8ed1b9 | 05-Oct-2023 | Christoph Hellwig <hch@lst.de>
dma-direct: add a CONFIG_ARCH_HAS_DMA_ALLOC symbol
Instead of using arch_dma_alloc if none of the generic coherent allocators are used, require the architectures to explicitly opt into providing it. This will be used to deal with the case of m68knommu and coldfire, where we can't do any coherent allocations whatsoever, and also makes it clear that arch_dma_alloc is a last resort.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Greg Ungerer <gerg@linux-m68k.org>
Tested-by: Greg Ungerer <gerg@linux-m68k.org>
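
The opt-in amounts to roughly the following on the allocation path (a simplified sketch of kernel/dma/direct.c; the surrounding pool and remap fallbacks are omitted):

    if (!dev_is_dma_coherent(dev) &&
        IS_ENABLED(CONFIG_ARCH_HAS_DMA_ALLOC))
            /* last resort: only reachable if the arch selected the symbol */
            return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);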

# da323d46 | 05-Oct-2023 | Christoph Hellwig <hch@lst.de>
dma-direct: add dependencies to CONFIG_DMA_GLOBAL_POOL
CONFIG_DMA_GLOBAL_POOL can't be combined with other DMA coherent allocators. Add dependencies to Kconfig to document this, and make kconfig complain about unmet dependencies if someone tries.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Greg Ungerer <gerg@linux-m68k.org>
Tested-by: Greg Ungerer <gerg@linux-m68k.org>

# 2dcdf8c1 | 30-Aug-2023 | Christoph Hellwig <hch@lst.de>
dma-contiguous: fix the Kconfig entry for CONFIG_DMA_NUMA_CMA
It makes no sense to expose CONFIG_DMA_NUMA_CMA if CONFIG_NUMA is not enabled, and random config options shouldn't be default unless there is a good reason. Replace the default NUMA with a depends on to fix both issues.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>

# 62708b2b | 01-Aug-2023 | Petr Tesarik <petr.tesarik.ext@huawei.com>
swiotlb: add a flag whether SWIOTLB is allowed to grow
Add a config option (CONFIG_SWIOTLB_DYNAMIC) to enable or disable dynamic allocation of additional bounce buffers.
If this option is set, mark the default SWIOTLB as able to grow and restricted DMA pools as unable.
However, if the address of the default memory pool is explicitly queried, make the default SWIOTLB also unable to grow. This is currently used to set up PCI BAR movable regions on some Octeon MIPS boards which may not be able to use a SWIOTLB pool elsewhere in physical memory. See octeon_pci_setup() for more details.
If a remap function is specified, it must be also called on any dynamically allocated pools, but there are some issues:
- The remap function may block, so it should not be called from an atomic context.
- There is no corresponding unremap() function if the memory pool is freed.
- The only in-tree implementation (xen_swiotlb_fixup) requires that the number of slots in the memory pool is a multiple of SWIOTLB_SEGSIZE.
Keep it simple for now and disable growing the SWIOTLB if a remap function was specified.
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
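
The flag itself is per memory pool, along the lines of this sketch (simplified from the swiotlb code; other fields elided):

    struct io_tlb_mem {
            /* ... existing bookkeeping fields ... */
    #ifdef CONFIG_SWIOTLB_DYNAMIC
            bool can_grow;  /* may allocate additional bounce buffer pools */
    #endif
    };

    /* default SWIOTLB: growable only when no remap function is in use */
    if (!remap)
            io_tlb_default_mem.can_grow = true;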

# bf29bfaa | 12-Jul-2023 | Yajun Deng <yajun.deng@linux.dev>
dma-contiguous: support numa CMA for specified node
The kernel parameter 'cma_pernuma=' only supports reserving the same size of CMA area for each node. We need to reserve different sizes of CMA area for specified nodes if these devices belong to different nodes.
Add another kernel parameter, 'numa_cma=', to reserve CMA areas for specified nodes. Using either of these parameters requires enabling DMA_NUMA_CMA.
At the same time, print the node id in cma_declare_contiguous_nid() if CONFIG_NUMA is enabled.
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Signed-off-by: Christoph Hellwig <hch@lst.de>
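
As a usage example, a boot command line along these lines reserves differently sized CMA areas on two nodes (sizes are illustrative; the syntax follows the numa_cma=<node>:size[,<node>:size] form this patch documents):

    numa_cma=0:128M,2:64M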

# 22e4a348 | 12-May-2023 | Yajun Deng <yajun.deng@linux.dev>
dma-contiguous: support per-numa CMA for all architectures
In commit b7176c261cdb ("dma-contiguous: provide the ability to reserve per-numa CMA"), Barry added DMA_PERNUMA_CMA for ARM64.
But this feature is architecture independent, so support per-numa CMA for all architectures, and enable it by default when NUMA is enabled.
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Tested-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

# 1e6d5dea | 30-Jun-2023 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'dma-mapping-6.5-2023-06-28' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
- swiotlb cleanups (Petr Tesarik)
- use kvmalloc_array (gaoxu)
- a small step towards removing is_swiotlb_active (Christoph Hellwig)
- fix a Kconfig typo (Sui Jingfeng)
* tag 'dma-mapping-6.5-2023-06-28' of git://git.infradead.org/users/hch/dma-mapping:
  drm/nouveau: stop using is_swiotlb_active
  swiotlb: use the atomic counter of total used slabs if available
  swiotlb: remove unused field "used" from struct io_tlb_mem
  dma-remap: use kvmalloc_array/kvfree for larger dma memory remap
  dma-mapping: fix a Kconfig typo

# 370645f4 | 12-Jun-2023 | Catalin Marinas <catalin.marinas@arm.com>
dma-mapping: force bouncing if the kmalloc() size is not cache-line-aligned
For direct DMA, if the size is small enough to have originated from a kmalloc() cache below ARCH_DMA_MINALIGN, check its alignment against dma_get_cache_alignment() and bounce if necessary. For larger sizes, it is the responsibility of the DMA API caller to ensure proper alignment.
At this point, the kmalloc() caches are properly aligned but this will change in a subsequent patch.
Architectures can opt in by selecting DMA_BOUNCE_UNALIGNED_KMALLOC.
Link: https://lkml.kernel.org/r/20230612153201.554742-15-catalin.marinas@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Isaac J. Manjarres <isaacmanjarres@google.com>
Cc: Alasdair Kergon <agk@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jonathan Cameron <jic23@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Saravana Kannan <saravanak@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
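
On the direct-DMA map path, the new check amounts to roughly this sketch (helper names as introduced by this series, simplified from the dma-direct code):

    /* bounce if the buffer may come from an under-aligned kmalloc cache */
    if (dma_kmalloc_needs_bounce(dev, size, dir))
            return swiotlb_map(dev, phys, size, dir, attrs);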

# af2880ec | 12-Jun-2023 | Robin Murphy <robin.murphy@arm.com>
scatterlist: add dedicated config for DMA flags
The DMA flags field will be useful for users beyond PCI P2P, so upgrade to its own dedicated config option.
[catalin.marinas@arm.com: use #ifdef CONFIG_NEED_SG_DMA_FLAGS in scatterlist.h]
[catalin.marinas@arm.com: update PCI_P2PDMA dma_flags comment in scatterlist.h]
Link: https://lkml.kernel.org/r/20230612153201.554742-13-catalin.marinas@arm.com
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Isaac J. Manjarres <isaacmanjarres@google.com>
Cc: Alasdair Kergon <agk@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jonathan Cameron <jic23@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Saravana Kannan <saravanak@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
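
The upshot in struct scatterlist is roughly the following (a sketch of include/linux/scatterlist.h; the field was previously tied to the PCI P2PDMA option):

    struct scatterlist {
            unsigned long   page_link;
            unsigned int    offset;
            unsigned int    length;
            dma_addr_t      dma_address;
    #ifdef CONFIG_NEED_SG_DMA_LENGTH
            unsigned int    dma_length;
    #endif
    #ifdef CONFIG_NEED_SG_DMA_FLAGS
            unsigned int    dma_flags;
    #endif
    };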

# 4d3af20e | 04-Jun-2023 | Sui Jingfeng <suijingfeng@loongson.cn>
dma-mapping: fix a Kconfig typo
'thing' -> 'think'
Signed-off-by: Sui Jingfeng <suijingfeng@loongson.cn>
Signed-off-by: Christoph Hellwig <hch@lst.de>

# 1d3f56b2 | 01-Apr-2023 | Jiaxun Yang <jiaxun.yang@flygoat.com>
dma-mapping: provide CONFIG_ARCH_DMA_DEFAULT_COHERENT
Provide a Kconfig option to allow architectures to set the default value of dma_default_coherent.
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
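
The effect is essentially a one-line initializer in the core mapping code (sketch):

    bool dma_default_coherent = IS_ENABLED(CONFIG_ARCH_DMA_DEFAULT_COHERENT);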

# f5ff79fd | 26-Feb-2022 | Christoph Hellwig <hch@lst.de>
dma-mapping: remove CONFIG_DMA_REMAP
CONFIG_DMA_REMAP is used to build a few helpers around the core vmalloc code, to use them when dma-direct encounters a highmem page, and to let dma coherent allocations in the dma-iommu layer use non-contiguous page allocations.
Right now it needs to be explicitly selected by architectures, and is only done so by architectures that require remapping to deal with devices that are not DMA coherent. Make it unconditional for builds with CONFIG_MMU as it is very little extra code, but makes it much more likely that large DMA allocations succeed on x86.
This fixes hot-plugging an NVMe Thunderbolt SSD for me, which tries to allocate a 1MB buffer that is otherwise hard to obtain due to memory fragmentation on a heavily used laptop.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>

# 3de18c86 | 03-Sep-2021 | Linus Torvalds <torvalds@linux-foundation.org>
Merge branch 'stable/for-linus-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb
Pull swiotlb updates from Konrad Rzeszutek Wilk: "A new feature called restricted DMA pools. It allows SWIOTLB to utilize per-device (or per-platform) allocated memory pools instead of using the global one.
The first big user of this is ARM Confidential Computing where the memory for DMA operations can be set per platform"
* 'stable/for-linus-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb: (23 commits)
  swiotlb: use depends on for DMA_RESTRICTED_POOL
  of: restricted dma: Don't fail device probe on rmem init failure
  of: Move of_dma_set_restricted_buffer() into device.c
  powerpc/svm: Don't issue ultracalls if !mem_encrypt_active()
  s390/pv: fix the forcing of the swiotlb
  swiotlb: Free tbl memory in swiotlb_exit()
  swiotlb: Emit diagnostic in swiotlb_exit()
  swiotlb: Convert io_default_tlb_mem to static allocation
  of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS
  swiotlb: add overflow checks to swiotlb_bounce
  swiotlb: fix implicit debugfs declarations
  of: Add plumbing for restricted DMA pool
  dt-bindings: of: Add restricted DMA pool
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Add restricted DMA alloc/free support
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  ...

# f3c4b134 | 27-Aug-2021 | Claire Chang <tientzu@chromium.org>
swiotlb: use depends on for DMA_RESTRICTED_POOL
Use depends on instead of select for DMA_RESTRICTED_POOL; otherwise it will make SWIOTLB user-configurable and cause compile errors for some architectures (e.g. mips).
Fixes: 0b84e4f8b793 ("swiotlb: Add restricted DMA pool initialization")
Acked-by: Will Deacon <will@kernel.org>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Claire Chang <tientzu@chromium.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

# faf4ef82 | 23-Jun-2021 | Christoph Hellwig <hch@lst.de>
dma-direct: add support for dma_coherent_default_memory
Add an option to allocate uncached memory for dma_alloc_coherent from the global dma_coherent_default_memory. This will allow moving arm-nommu (and eventually other platforms) to generic code for allocating uncached memory from a pre-populated pool.
Note that this is a different pool from the one that platforms that can remap at runtime use for GFP_ATOMIC allocations for now, although there might be opportunities to eventually end up with a common codebase for the two use cases.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Dillon Min <dillon.minfei@gmail.com>

# 0b84e4f8 | 19-Jun-2021 | Claire Chang <tientzu@chromium.org>
swiotlb: Add restricted DMA pool initialization
Add the initialization function to create restricted DMA pools from matching reserved-memory nodes.
Regardless of swiotlb setting, the restricted DMA pool is preferred if available.
The restricted DMA pools provide a basic level of protection against the DMA overwriting buffer contents at unexpected times. However, to protect against general data leakage and system memory corruption, the system needs to provide a way to lock down the memory access, e.g., MPU.
Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

# 6d4e9a8e | 10-Feb-2021 | Christoph Hellwig <hch@lst.de>
driver core: lift dma_default_coherent into common code
Lift the dma_default_coherent variable from the mips architecture code to the driver core. This allows an architecture to default all devices to be DMA coherent at run time, even if the kernel is built with support for DMA noncoherent devices. By allowing device_initialize to set the ->dma_coherent field to this default, the number of arch hooks required for this behavior can be greatly reduced.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
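
On the driver-core side, this boils down to seeding each device at initialization time, roughly as follows (a sketch of the device_initialize() hunk; the exact #if guards may differ):

    #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
        defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) || \
        defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
            dev->dma_coherent = dma_default_coherent;
    #endif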

# 347d81b6 | 22-Dec-2020 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'dma-mapping-5.11' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
- support for a partial IOMMU bypass (Alexey Kardashevskiy)
- add a DMA API benchmark (Barry Song)
- misc fixes (Tiezhu Yang, tangjianqiang)
* tag 'dma-mapping-5.11' of git://git.infradead.org/users/hch/dma-mapping:
  selftests/dma: add test application for DMA_MAP_BENCHMARK
  dma-mapping: add benchmark support for streaming DMA APIs
  dma-contiguous: fix a typo error in a comment
  dma-pool: no need to check return value of debugfs_create functions
  powerpc/dma: Fallback to dma_ops when persistent memory present
  dma-mapping: Allow mixing bypass and mapped DMA operation

# 65789daa | 16-Nov-2020 | Barry Song <song.bao.hua@hisilicon.com>
dma-mapping: add benchmark support for streaming DMA APIs
Nowadays, there are increasing requirements to benchmark the performance of dma_map and dma_unmap, particularly while the device is attached to an IOMMU.
This patch enables that support. Users can run a specified number of threads doing dma_map_page and dma_unmap_page on a specific NUMA node for a specified duration. Then dma_map_benchmark will calculate the average latency for map and unmap.
A difficulty for this benchmark is that dma_map/unmap APIs must run on a particular device, and each device might have a different backend, IOMMU or non-IOMMU.
So we use driver_override to bind dma_map_benchmark to a particular device by:
For platform devices:
  echo dma_map_benchmark > /sys/bus/platform/devices/xxx/driver_override
  echo xxx > /sys/bus/platform/drivers/xxx/unbind
  echo xxx > /sys/bus/platform/drivers/dma_map_benchmark/bind
For PCI devices:
  echo dma_map_benchmark > /sys/bus/pci/devices/0000:00:01.0/driver_override
  echo 0000:00:01.0 > /sys/bus/pci/drivers/xxx/unbind
  echo 0000:00:01.0 > /sys/bus/pci/drivers/dma_map_benchmark/bind
Cc: Will Deacon <will@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
[hch: folded in two fixes from Colin Ian King <colin.king@canonical.com>]
Signed-off-by: Christoph Hellwig <hch@lst.de>

# 8d8d53cf | 29-Oct-2020 | Alexey Kardashevskiy <aik@ozlabs.ru>
dma-mapping: Allow mixing bypass and mapped DMA operation
At the moment we allow bypassing DMA ops only when we can do this for the entire RAM. However, there are configs with mixed-type memory where we could still allow bypassing the IOMMU in most cases; POWERPC with persistent memory is one example.
This adds an arch hook to determine where bypass can still work, in which case the direct DMA API is invoked. The following patch checks the bus limit on POWERPC to allow or disallow direct mapping.
This adds an ARCH_HAS_DMA_MAP_DIRECT config option to make the arch_xxxx hooks no-ops by default.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Christoph Hellwig <hch@lst.de>
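
The no-op default for the hooks looks roughly like this (a sketch of the dma-map-ops header; the sg variants follow the same pattern):

    #ifdef CONFIG_ARCH_HAS_DMA_MAP_DIRECT
    bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr);
    bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle);
    #else
    #define arch_dma_map_page_direct(d, a)          (false)
    #define arch_dma_unmap_page_direct(d, a)        (false)
    #endif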