
Searched full:mappings (Results 1 – 25 of 1341) sorted by relevance

Page 1 of 54

/linux/Documentation/mm/
highmem.rst
15 at all times. This means the kernel needs to start using temporary mappings of
48 Temporary Virtual Mappings
51 The kernel contains several ways of creating temporary mappings. The following
55 short term mappings. They can be invoked from any context (including
56 interrupts) but the mappings can only be used in the context which acquired
64 These mappings are thread-local and CPU-local, meaning that the mapping
89 mappings, the local mappings are only valid in the context of the caller
94 Most code can be designed to use thread local mappings. User should
99 Nesting kmap_local_page() and kmap_atomic() mappings is allowed to a certain
103 mappings.
[all …]
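The highmem.rst hits above describe short-term, thread-local mappings via kmap_local_page(). A minimal kernel-side sketch of the acquire/use/release pattern — illustrative only (the helper name and buffer are hypothetical, and this is kernel code that cannot run standalone):

```c
#include <linux/highmem.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: copy one page of data out through a short-lived,
 * thread-local mapping.  kmap_local_page() may be invoked from any
 * context (including interrupts), but the returned address is valid only
 * in the acquiring context, and nested mappings must be released in
 * reverse order via kunmap_local().
 */
static void copy_from_highmem_page(struct page *page, void *buf)
{
	void *vaddr = kmap_local_page(page);	/* establish temporary mapping */

	memcpy(buf, vaddr, PAGE_SIZE);
	kunmap_local(vaddr);			/* release before returning */
}
```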
hugetlbfs_reserv.rst
87 of mappings. Location differences are:
89 - For private mappings, the reservation map hangs off the VMA structure.
92 - For shared mappings, the reservation map hangs off the inode. Specifically,
93 inode->i_mapping->private_data. Since shared mappings are always backed
121 One of the big differences between PRIVATE and SHARED mappings is the way
124 - For shared mappings, an entry in the reservation map indicates a reservation
127 - For private mappings, the lack of an entry in the reservation map indicates
133 For private mappings, hugetlb_reserve_pages() creates the reservation map and
138 are needed for the current mapping/segment. For private mappings, this is
139 always the value (to - from). However, for shared mappings it is possible that
[all …]
/linux/drivers/gpu/drm/nouveau/
nouveau_exec.c
25 * and unmap memory. Mappings may be flagged as sparse. Sparse mappings are not
29 * Userspace may request memory backed mappings either within or outside of the
31 * mapping. Subsequently requested memory backed mappings within a sparse
33 * mapping. If such memory backed mappings are unmapped the kernel will make
35 * Requests to unmap a sparse mapping that still contains memory backed mappings
36 * will result in those memory backed mappings being unmapped first.
38 * Unmap requests are not bound to the range of existing mappings and can even
39 * overlap the bounds of sparse mappings. For such a request the kernel will
40 * make sure to unmap all memory backed mappings within the given range,
41 * splitting up memory backed mappings which are only partially contained
[all …]
/linux/arch/x86/include/asm/
invpcid.h
13 * mappings, we don't want the compiler to reorder any subsequent in __invpcid()
25 /* Flush all mappings for a given pcid and addr, not including globals. */
32 /* Flush all mappings for a given PCID, not including globals. */
38 /* Flush all mappings, including globals, for all PCIDs. */
44 /* Flush all mappings for all PCIDs except globals. */
/linux/drivers/gpu/drm/
drm_gem_atomic_helper.c
21 * synchronization helpers, and plane state and framebuffer BO mappings
43 * a mapping of the shadow buffer into kernel address space. The mappings
47 * The helpers for shadow-buffered planes establish and release mappings,
70 * In the driver's atomic-update function, shadow-buffer mappings are available
87 * struct &drm_shadow_plane_state.map. The mappings are valid while the state
92 * callbacks. Access to shadow-buffer mappings is similar to regular
212 * The function does not duplicate existing mappings of the shadow buffers.
213 * Mappings are maintained during the atomic commit by the plane's prepare_fb
241 * The function does not duplicate existing mappings of the shadow buffers.
242 * Mappings are maintained during the atomic commit by the plane's prepare_fb
[all …]
/linux/Documentation/admin-guide/mm/
nommu-mmap.rst
29 These behave very much like private mappings, except that they're
133 In the no-MMU case, however, anonymous mappings are backed by physical
147 (#) A list of all the private copy and anonymous mappings on the system is
150 (#) A list of all the mappings in use by a process is visible through
176 mappings made by a process or if the mapping in which the address lies does not
191 Shared mappings may not be moved. Shareable mappings may not be moved either,
196 mappings, move parts of existing mappings or resize parts of mappings. It must
243 mappings may still be mapped directly off the device under some
250 Provision of shared mappings on memory backed files is similar to the provision
253 of pages and permit mappings to be made on that.
[all …]
/linux/Documentation/arch/arm/
memory.rst
62 Machine specific static mappings are also
72 PKMAP_BASE PAGE_OFFSET-1 Permanent kernel mappings
78 placed here using dynamic mappings.
85 00001000 TASK_SIZE-1 User space mappings
86 Per-thread mappings are placed here via
96 Please note that mappings which collide with the above areas may result
103 must set up their own mappings using open() and mmap().
/linux/arch/x86/mm/
mem_encrypt_identity.c
13 * Since we're dealing with identity mappings, physical and virtual
257 * entries that are needed. Those mappings will be covered mostly in sme_pgtable_calc()
260 * mappings. For mappings that are not 2MB aligned, PTE mappings in sme_pgtable_calc()
355 * One PGD for both encrypted and decrypted mappings and a set of in sme_encrypt_kernel()
356 * PUDs and PMDs for each of the encrypted and decrypted mappings. in sme_encrypt_kernel()
381 * mappings are populated. in sme_encrypt_kernel()
402 * decrypted kernel mappings are created. in sme_encrypt_kernel()
423 /* Add encrypted kernel (identity) mappings */ in sme_encrypt_kernel()
429 /* Add decrypted, write-protected kernel (non-identity) mappings */ in sme_encrypt_kernel()
436 /* Add encrypted initrd (identity) mappings */ in sme_encrypt_kernel()
[all …]
/linux/Documentation/driver-api/
io-mapping.rst
44 used with mappings created by io_mapping_create_wc()
46 Temporary mappings are only valid in the context of the caller. The mapping
56 Nested mappings need to be undone in reverse order because the mapping
65 The mappings are released with::
83 The mappings are released with::
/linux/tools/testing/selftests/kvm/
kvm_page_table_test.c
110 * Then KVM will create normal page mappings or huge block in guest_code()
111 * mappings for them. in guest_code()
126 * normal page mappings from RO to RW if memory backing src type in guest_code()
128 * mappings into normal page mappings if memory backing src type in guest_code()
150 * this will create new mappings at the smallest in guest_code()
164 * split page mappings back to block mappings. And a TLB in guest_code()
166 * page mappings are not fully invalidated. in guest_code()
365 /* Test the stage of KVM creating mappings */ in run_test()
375 /* Test the stage of KVM updating mappings */ in run_test()
388 /* Test the stage of KVM adjusting mappings */ in run_test()
/linux/arch/sh/mm/
pmb.c
50 /* Adjacent entry link for contiguous multi-entry mappings */
172 * Finally for sizes that involve compound mappings, walk in pmb_mapping_exists()
424 * Small mappings need to go through the TLB. in pmb_remap_caller()
530 pr_info("PMB: boot mappings:\n"); in pmb_notify()
551 * Sync our software copy of the PMB mappings with those in hardware. The
552 * mappings in the hardware PMB were either set up by the bootloader or
561 * Run through the initial boot mappings, log the established in pmb_synchronize()
563 * PPN range. Specifically, we only care about existing mappings in pmb_synchronize()
567 * loader can establish multi-page mappings with the same caching in pmb_synchronize()
573 * jumping between the cached and uncached mappings and tearing in pmb_synchronize()
[all …]
/linux/Documentation/filesystems/iomap/
design.rst
24 This layer tries to obtain mappings of each file ranges to storage
28 2. An upper layer that acts upon the space mappings provided by the
31 The iteration can involve mappings of file's logical offset ranges to
62 units (generally memory pages or blocks) and looks up space mappings on
64 largest space mappings that it can create for a given file operation and
69 Larger space mappings improve runtime performance by amortizing the cost
238 * **IOMAP_F_MERGED**: Multiple contiguous block mappings were
272 ``IOMAP_INLINE`` mappings.
282 that should be used to detect stale mappings.
286 Filesystems with completely static mappings need not set this value.
[all …]
/linux/mm/
Kconfig.debug
123 bool "Check for invalid mappings in user page tables"
187 bool "Warn on W+X mappings at boot"
192 Generate a warning if any W+X mappings are found at boot.
195 mappings after applying NX, as such mappings are a security risk.
199 <arch>/mm: Checked W+X mappings: passed, no W+X pages found.
203 <arch>/mm: Checked W+X mappings: failed, <N> W+X pages found.
206 still fine, as W+X mappings are not a security hole in
/linux/drivers/gpu/drm/tegra/
submit.c
150 xa_lock(&context->mappings); in tegra_drm_mapping_get()
152 mapping = xa_load(&context->mappings, id); in tegra_drm_mapping_get()
156 xa_unlock(&context->mappings); in tegra_drm_mapping_get()
261 struct tegra_drm_used_mapping *mappings; in submit_process_bufs() local
273 mappings = kcalloc(args->num_bufs, sizeof(*mappings), GFP_KERNEL); in submit_process_bufs()
274 if (!mappings) { in submit_process_bufs()
303 mappings[i].mapping = mapping; in submit_process_bufs()
304 mappings[i].flags = buf->flags; in submit_process_bufs()
307 job_data->used_mappings = mappings; in submit_process_bufs()
316 tegra_drm_mapping_put(mappings[i].mapping); in submit_process_bufs()
[all …]
/linux/arch/hexagon/include/asm/
mem-layout.h
71 * Permanent IO mappings will live at 0xfexx_xxxx
80 * "permanent kernel mappings", defined as long-lasting mappings of
92 * "Permanent Kernel Mappings"; fancy (or less fancy) PTE table
/linux/arch/arm64/mm/
pageattr.c
83 * Kernel VA mappings are always live, and splitting live section in change_memory_common()
84 * mappings into page mappings may cause TLB conflicts. This means in change_memory_common()
88 * Let's restrict ourselves to mappings created by vmalloc (or vmap). in change_memory_common()
89 * Those are guaranteed to consist entirely of page mappings, and in change_memory_common()
210 * p?d_present(). When debug_pagealloc is enabled, sections mappings are
/linux/drivers/soc/aspeed/
Kconfig
13 Control LPC firmware cycle mappings through ioctl()s. The driver
43 Control ASPEED P2A VGA MMIO to BMC mappings through ioctl()s. The
44 driver also provides an interface for userspace mappings to a
/linux/Documentation/driver-api/usb/
dma.rst
19 manage dma mappings for existing dma-ready buffers (see below).
27 don't manage dma mappings for URBs.
41 IOMMU to manage the DMA mappings. It can cost MUCH more to set up and
42 tear down the IOMMU mappings with each request than perform the I/O!
64 "streaming" DMA mappings.)
/linux/Documentation/core-api/
dma-api-howto.rst
35 mappings between physical and bus addresses.
172 The setup for streaming mappings is performed via a call to
234 coherent allocations, but supports full 64-bits for streaming mappings
257 kernel will use this information later when you make DMA mappings.
296 Types of DMA mappings
299 There are two types of DMA mappings:
301 - Consistent DMA mappings which are usually mapped at driver
314 Good examples of what to use consistent mappings for are:
323 versa. Consistent mappings guarantee this.
345 - Streaming DMA mappings which are usually mapped for one DMA
[all …]
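The dma-api-howto.rst hits above distinguish consistent (coherent) from streaming DMA mappings. A hedged driver-side sketch of the two styles — the device pointer, buffer, and helper name are hypothetical, and this is kernel-context code, not runnable standalone:

```c
#include <linux/dma-mapping.h>

/* Hypothetical helper; dev would come from the driver's probe(). */
static int setup_dma(struct device *dev, void *tx_buf, size_t len)
{
	dma_addr_t ring_dma, tx_dma;
	void *ring;

	/* Consistent mapping: long-lived; CPU and device see each other's
	 * writes without explicit sync.  Typical for descriptor rings. */
	ring = dma_alloc_coherent(dev, PAGE_SIZE, &ring_dma, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* Streaming mapping: one-shot and direction-specific; typical for
	 * network or block I/O buffers, unmapped once the DMA completes. */
	tx_dma = dma_map_single(dev, tx_buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, tx_dma)) {
		dma_free_coherent(dev, PAGE_SIZE, ring, ring_dma);
		return -EIO;
	}

	/* ... hand ring_dma / tx_dma to the hardware here ... */

	dma_unmap_single(dev, tx_dma, len, DMA_TO_DEVICE);
	dma_free_coherent(dev, PAGE_SIZE, ring, ring_dma);
	return 0;
}
```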
/linux/arch/sh/kernel/
head_32.S
91 * Reconfigure the initial PMB mappings setup by the hardware.
102 * our address space and the initial mappings may not map PAGE_OFFSET
105 * Once we've setup cached and uncached mappings we clear the rest of the
156 * existing mappings that match the initial mappings VPN/PPN.
175 cmp/eq r0, r8 /* Check for valid __MEMORY_START mappings */
185 * mappings.
/linux/include/drm/
drm_cache.h
56 * for some buffers, both the CPU and the GPU use uncached mappings, in drm_arch_can_wc_memory()
59 * The use of uncached GPU mappings relies on the correct implementation in drm_arch_can_wc_memory()
61 * will use cached mappings nonetheless. On x86 platforms, this does not in drm_arch_can_wc_memory()
62 * seem to matter, as uncached CPU mappings will snoop the caches in any in drm_arch_can_wc_memory()
/linux/Documentation/gpu/rfc/
i915_vm_bind.h
17 * 1: In VM_UNBIND calls, the UMD must specify the exact mappings created
19 * mappings or splitting them. Similarly, VM_BIND calls will not replace
20 * any existing mappings.
22 * 2: The restrictions on unbinding partial or multiple mappings is
23 * lifted, Similarly, binding will replace any mappings in the given range.
93 * Multiple VA mappings can be created to the same section of the object
i915_vm_bind.rst
9 specified address space (VM). These mappings (also referred to as persistent
10 mappings) will be persistent across multiple GPU submissions (execbuf calls)
12 mappings during each submission (as required by older execbuf mode).
27 * Multiple Virtual Address (VA) mappings can map to the same physical pages
30 * Support capture of persistent mappings in the dump upon GPU error.
90 path (where required mappings are already bound) submission latency is O(1)
201 execbuf. VM_BIND allows bind/unbind of mappings required for the directly
231 mapped objects. Page table pages are similar to persistent mappings of a
/linux/fs/xfs/scrub/
bmap.c
47 * while we inspect block mappings, so wait for directio to finish in xchk_setup_inode_bmap()
69 * space mappings for the data fork. Leave accumulated errors in xchk_setup_inode_bmap()
86 /* Drop the page cache if we're repairing block mappings. */ in xchk_setup_inode_bmap()
130 /* May mappings point to shared space? */
660 * Decide if we want to scan the reverse mappings to determine if the attr
661 * fork /really/ has zero space mappings.
672 * to reconstruct the block mappings. If the fork is not in this in xchk_bmap_check_empty_attrfork()
704 * Decide if we want to scan the reverse mappings to determine if the data
705 * fork /really/ has zero space mappings.
720 * to reconstruct the block mappings. If the fork is not in this in xchk_bmap_check_empty_datafork()
[all …]
/linux/include/linux/
rmap.h
351 * folio_dup_file_rmap_ptes - duplicate PTE mappings of a page range of a folio
352 * @folio: The folio to duplicate the mappings of
353 * @page: The first page to duplicate the mappings of
404 * don't allow to duplicate the mappings but instead require to e.g., in __folio_try_dup_anon_rmap()
453 * folio_try_dup_anon_rmap_ptes - try duplicating PTE mappings of a page range
455 * @folio: The folio to duplicate the mappings of
456 * @page: The first page to duplicate the mappings of
458 * @src_vma: The vm area from which the mappings are duplicated
465 * Duplicating the mappings can only fail if the folio may be pinned; device
469 * If duplicating the mappings succeeded, the duplicated PTEs have to be R/O in
[all …]
