/linux/kernel/dma/Kconfig

    161: bool "DMA Contiguous Memory Allocator"
    164: This enables the Contiguous Memory Allocator which allows drivers
    165: to allocate big physically-contiguous blocks of memory for use with
    171: For more information see <kernel/dma/contiguous.c>.
    177: bool "Enable separate DMA Contiguous Memory Area for NUMA Node"
    187: comment "Default contiguous memory area size:"
    195: Defines the size (in MiB) of the default memory area for Contiguous
    206: Defines the size of the default memory area for Contiguous Memory
    230: int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
    238: specify the maximum PAGE_SIZE order for contiguous buffers. Larger

/linux/kernel/dma/contiguous.c

      3: * Contiguous Memory Allocator for DMA mapping framework
      9: * Contiguous Memory Allocator
     11: * The Contiguous Memory Allocator (CMA) makes it possible to
     12: * allocate big contiguous chunks of memory after the system has
     18: * IO map support and require contiguous blocks of memory to
    204: * dma_contiguous_reserve() - reserve area(s) for contiguous memory handling
    267: * dma_contiguous_reserve_area() - reserve custom contiguous area
    280: * If @fixed is true, reserve contiguous area at exactly @base. If false,  (in dma_contiguous_reserve_area())
    294: /* Architecture specific contiguous memory fixup. */
    302: * dma_alloc_from_contiguous() - allocate pages from contiguous are
    [all …]
/linux/drivers/iommu/generic_pt/pt_common.h

     16: * The routines marked "@pts: Entry to query" operate on the entire contiguous
     67: * @num_contig_lg2: Number of contiguous items to clear
     97: * concurrent update from HW. The entire contiguous entry is changed.
    106: * by this entry. If the entry is contiguous then the consolidated
    118: * pt_entry_num_contig_lg2() - Number of contiguous items for this leaf entry
    121: * Return the number of contiguous items this leaf entry spans. If the entry
    131: * is contiguous this returns the same value for each sub-item. I.e.::
    144: * If the entry is not contiguous this returns pt_table_item_lg2sz(), otherwise
    145: * it returns the total VA/OA size of the entire contiguous entry.
    196: * For a single item non-contiguous entry oasz_lg2 is pt_table_item_lg2sz().
    [all …]

/linux/drivers/iommu/generic_pt/pt_fmt_defaults.h

     40: * If not supplied by the format then contiguous pages are not supported.
     42: * If contiguous pages are supported then the format must also provide
     43: * pt_contig_count_lg2() if it supports a single contiguous size per level,
     53: * Return the number of contiguous OA items forming an entry at this table level
     95: * pt_entry_oa - OA is at the start of a contiguous entry
     97: * pt_item_oa - OA is adjusted for every item in a contiguous entry
    144: * If not supplied by the format then assume only one contiguous size determined
    212: /* For a contiguous entry the sw bit is only stored in the first item. */  (in pt_test_sw_bit_acquire())
|
/linux/tools/testing/selftests/resctrl/cat_test.c

    248: /* Get the largest contiguous exclusive portion of the cache */  (in cat_run_test())
    293: /* AMD always supports non-contiguous CBM. */  (in arch_supports_noncont_cat())
    299: /* Intel support for non-contiguous CBM needs to be discovered. */  (in arch_supports_noncont_cat())
    327: ksft_print_msg("Hardware and kernel differ on non-contiguous CBM support!\n");  (in noncont_cat_run_test())
    346: /* Contiguous mask write check. */  (in noncont_cat_run_test())
    350: ksft_print_msg("Write of contiguous CBM failed\n");  (in noncont_cat_run_test())
    355: * Non-contiguous mask write check. CBM has a 0xf hole approximately in the middle.  (in noncont_cat_run_test())
    362: ksft_print_msg("Non-contiguous CBMs supported but write of non-contiguous CBM failed\n");  (in noncont_cat_run_test())
    364: ksft_print_msg("Non-contiguous CBMs not supported and write of non-contiguous CBM failed as expect…  (in noncont_cat_run_test())
    366: ksft_print_msg("Non-contiguous CBMs not supported but write of non-contiguous CBM succeeded\n");  (in noncont_cat_run_test())
|
/linux/Documentation/arch/arm64/hugetlbpage.rst

     23: 2) Using the Contiguous bit
     26: The architecture provides a contiguous bit in the translation table entries
     28: contiguous set of entries that can be cached in a single TLB entry.
     30: The contiguous bit is used in Linux to increase the mapping size at the pmd and
     31: pte (last) level. The number of supported contiguous entries varies by page size
|
/linux/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/sve.json

    188: …ecuted operations that prefetch memory due to an SVE predicated single contiguous element prefetch…
    192: … reads from memory with a non-temporal hint due to an SVE non-temporal contiguous element load ins…
    196: …t writes to memory with a non-temporal hint due to an SVE non-temporal contiguous element store in…
    200: …ions that read from memory due to Advanced SIMD or SVE multiple vector contiguous structure load i…
    204: …tions that write to memory due to Advanced SIMD or SVE multiple vector contiguous structure store …
    208: …chitecturally executed operations that read from memory due to SVE non-contiguous gather-load inst…
    212: …rchitecturally executed operations that write to memory due to SVE non-contiguous scatter-store in…
    216: …rchitecturally executed operations that prefetch memory due to SVE non-contiguous gather-prefetch …
|
/linux/arch/arm64/kernel/pi/map_range.c

     15: * map_range - Map a contiguous range of physical pages into virtual memory
     25: * @may_use_cont: Whether the use of the contiguous attribute is allowed
     69: * Start a contiguous range if start and pa are  (in map_range())
     76: * Clear the contiguous attribute if the remaining  (in map_range())
     77: * range does not cover a contiguous block  (in map_range())
|
/linux/drivers/gpu/drm/xe/xe_bo_doc.h

     32: * vmap (Xe can access the memory via xe_map layer) and have contiguous physical
     35: * More details of why kernel BOs are pinned and contiguous below.
    144: * makes this rather easy but the caveat is the memory must be contiguous. Again
    145: * for simplicity, we enforce that all kernel (pinned) BOs are contiguous and
    164: * Do not require kernel BOs to be contiguous in physical memory / restored to
    167: * tables. All of that memory is allocated 1 page at a time so the contiguous
    169: * kernel BOs are not contiguous too.
|
/linux/Documentation/admin-guide/dell_rbu.rst

     32: image methods. In case of monolithic the driver allocates a contiguous chunk
     35: would place each packet in contiguous physical memory. The driver also
     57: copied to a single contiguous block of physical memory.
     60: of contiguous memory and the BIOS image is scattered in these packets.
     84: the file and spreads it across the physical memory in contiguous packet_sized
|
/linux/drivers/gpu/drm/drm_gem_dma_helper.c

     32: * presented to the device as a contiguous chunk of memory. This is useful
     39: * are contiguous in the IOVA space so appear contiguous to devices using
     43: * objects that are physically contiguous in memory.
    124: * The allocated memory will occupy a contiguous chunk of bus address space.
    127: * memory will be physically contiguous. For devices that access through an
    128: * IOMMU, then the allocated memory is not expected to be physically contiguous
    129: * because having contiguous IOVAs is sufficient to meet a device's DMA
    185: * The allocated memory will occupy a contiguous chunk of bus address space.
    456: * another driver. Imported buffers must be physically contiguous i
    [all …]
/linux/Documentation/driver-api/cxl/allocation/hugepages.rst

      7: Contiguous Memory Allocator
     11: carves out contiguous capacity.
     15: at :code:`__init` time - when CMA carves out contiguous capacity.
|
/linux/arch/nios2/Kconfig

     51: int "Order of maximal physically contiguous allocations"
     55: contiguous allocations. The limit is called MAX_PAGE_ORDER and it
     57: allocated as a single contiguous block. This option allows
     59: large blocks of physically contiguous memory is required.
|
/linux/drivers/gpu/drm/exynos/exynos_drm_gem.c

     37: * if EXYNOS_BO_CONTIG, fully physically contiguous memory  (in exynos_drm_alloc_buf())
     38: * region will be allocated else physically contiguous  (in exynos_drm_alloc_buf())
    211: * contiguous anyway, so drop EXYNOS_BO_NONCONTIG flag  (in exynos_drm_gem_create())
    214: DRM_WARN("Non-contiguous allocation is not supported without IOMMU, falling back to contiguous buffer\n");  (in exynos_drm_gem_create())
    437: /* check if the entries in the sg_table are contiguous */  (in exynos_drm_gem_prime_import_sg_table())
    448: * Buffer has been mapped as contiguous into DMA address space,  (in exynos_drm_gem_prime_import_sg_table())
|
/linux/arch/sh/mm/Kconfig

     34: int "Order of maximal physically contiguous allocations"
     41: contiguous allocations. The limit is called MAX_PAGE_ORDER and it
     43: allocated as a single contiguous block. This option allows
     45: large blocks of physically contiguous memory is required.
|
/linux/Documentation/mm/memory-model.rst

      9: spans a contiguous range up to the maximal address. It could be,
     11: for the CPU. Then there could be several contiguous ranges at
     35: non-NUMA systems with contiguous, or mostly contiguous, physical
    114: …page *vmemmap` pointer that points to a virtually contiguous array of
|
/linux/arch/arm64/mm/hugetlbpage.c

    146: * Changing some bits of contiguous entries requires us to follow a
    147: * Break-Before-Make approach, breaking the whole contiguous set
    149: * "Misprogramming of the Contiguous bit", page D4-1762.
    192: * Changing some bits of contiguous entries requires us to follow a
    193: * Break-Before-Make approach, breaking the whole contiguous set
    195: * "Misprogramming of the Contiguous bit", page D4-1762.
    398: * For a contiguous huge pte range we need to check whether or not write
    400: * all the contiguous ptes we need to check whether or not there is a
|
/linux/Documentation/admin-guide/mm/nommu-mmap.rst

     24: In the no-MMU case: VM regions backed by arbitrary contiguous runs of
     52: appropriate bit of the file will be read into a contiguous bit of
     83: sequence by providing a contiguous sequence of pages to map. In that
     93: blockdev must be able to provide a contiguous run of pages without
     95: all its memory as a contiguous array upfront.
    252: filesystem providing the service will probably allocate a contiguous collection
    269: should allocate sufficient contiguous memory to honour any supported mapping.
|
/linux/Documentation/driver-api/dmaengine/provider.rst

     47: that involve a single contiguous block of data. However, some of the
     49: non-contiguous buffers to a contiguous buffer, which is called
    237: - If you want to transfer a single contiguous memory buffer,
    254: - These transfers can transfer data from a non-contiguous buffer
    255: to a non-contiguous buffer, opposed to DMA_SLAVE that can
    256: transfer data from a non-contiguous data set to a continuous
    657: - Chunk: A contiguous collection of bursts
    659: - Transfer: A collection of chunks (be it contiguous or not)
|
/linux/mm/percpu-km.c

      8: * Chunks are allocated as a contiguous kernel memory using gfp
     24: * PAGE_SIZE. Because each chunk is allocated as a contiguous
     30: #error "contiguous percpu allocation is incompatible with paged first chunk"

/linux/mm/cma.c

      3: * Contiguous Memory Allocator
    255: * cma_init_reserved_mem() - create custom contiguous area from reserved memory
    264: * This function creates custom contiguous area from already reserved memory.
    713: * cma_declare_contiguous_nid() - reserve custom contiguous area
    729: * If @fixed is true, reserve contiguous area at exactly @base. If false,
    818: * Do not hand out page ranges that are not contiguous, so  (in cma_range_alloc())
    918: * cma_alloc() - allocate pages from contiguous area
    919: * @cma: Contiguous memory region for which the allocation is performed.
    924: * This function allocates part of contiguous memory on specific
    925: * contiguous memory area.
    [all …]

/linux/mm/util.c

    132: * result is physically contiguous. Use kfree() to free.
    154: * result is physically contiguous. Use kfree() to free.
    170: * result may be not physically contiguous. Use kvfree() to free.
    215: * contiguous, to be freed by kfree().
    241: * physically contiguous. Use kvfree() to free.
    622: * __vmalloc_array - allocate memory for a virtually contiguous array.
    638: * vmalloc_array - allocate memory for a virtually contiguous array.
    649: * __vcalloc - allocate and zero memory for a virtually contiguous array.
    661: * vcalloc - allocate and zero memory for a virtually contiguous array.
   1438: * page_range_contiguous - test whether the page range is contiguous
    [all …]
|
/linux/drivers/iio/common/cros_ec_sensors/Kconfig

     17: tristate "ChromeOS EC Contiguous Sensors"
     20: Module to handle 3d contiguous sensors like
|
/linux/sound/pci/ctxfi/ctresource.c

     37: break; /* found sufficient contiguous resources */  (in get_resource())
     41: /* Can not find sufficient contiguous resources */  (in get_resource())
     45: /* Mark the contiguous bits in resource bit-map as used */  (in get_resource())
     62: /* Mark the contiguous bits in resource bit-map as used */  (in put_resource())
|
/linux/rust/kernel/alloc/allocator.rs

     25: /// The contiguous kernel allocator.
     27: /// `Kmalloc` is typically used for physically contiguous allocations up to page size, but also
     33: /// The virtually contiguous kernel allocator.
     35: /// `Vmalloc` allocates pages from the page level allocator and maps them into the contiguous kernel
     37: /// allocator is not physically contiguous.
|