/linux/Documentation/core-api/memory-allocation.rst

    15: memory should be allocated. The GFP acronym stands for "get free
    61: ``GFP_HIGHUSER_MOVABLE`` does not require that allocated memory
    65: ``GFP_HIGHUSER`` means that the allocated memory is not movable,
    70: ``GFP_USER`` means that the allocated memory is not movable and it
    82: used to ensure that the allocated memory is accessible by hardware
   142: The maximal size of a chunk that can be allocated with `kmalloc` is
   147: The address of a chunk allocated with `kmalloc` is aligned to at least
   153: Chunks allocated with kmalloc() can be resized with krealloc(). Similarly
   158: request pages from the page allocator. The memory allocated by `vmalloc`
   176: When the allocated memory is no longer needed it must be freed.
   [all …]
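The `kmalloc`/`krealloc` fragments above describe the basic allocate, resize, free lifecycle. A rough userspace sketch of that lifecycle follows; malloc()/realloc()/free() stand in for the kernel APIs, which are not available outside the kernel, and `grow_copy` is an invented helper name used only for illustration.

```c
#include <stdlib.h>
#include <string.h>

/* Allocate a small chunk, copy into it, then resize it, mirroring the
 * kmalloc() -> krealloc() -> kfree() sequence the document describes.
 * realloc(), like krealloc(), preserves the existing contents. */
char *grow_copy(const char *src, size_t newsize)
{
	size_t len = strlen(src) + 1;
	char *buf, *bigger;

	if (newsize < len)
		return NULL;

	buf = malloc(len);		/* cf. kmalloc(len, GFP_KERNEL) */
	if (!buf)
		return NULL;
	memcpy(buf, src, len);

	bigger = realloc(buf, newsize);	/* cf. krealloc(buf, newsize, ...) */
	if (!bigger) {
		free(buf);		/* on failure the old chunk stays valid */
		return NULL;
	}
	return bigger;			/* caller must free(), cf. kfree() */
}
```

The caller owns the returned chunk, matching the rule in the hit at line 176 that allocated memory must eventually be freed.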
|
/linux/Documentation/mm/page_owner.rst

     2: page owner: Tracking about who allocated each page
     8: page owner is for the tracking about who allocated each page.
    28: allocated base pages, which gives us a quick overview of where the memory
    32: allocated base pages (faster to read and parse, eg, for monitoring) that
    55: memory system, so, until initialization, many pages can be allocated and
    56: they would have no owner information. To fix it up, these early allocated
    57: pages are investigated and marked as allocated in initialization phase.
    59: at least, we can tell whether the page is allocated or not,
    60: more accurately. On 2GB memory x86-64 VM box, 13343 early allocated pages
    61: are caught and marked, although they are mostly allocated from struct
    [all …]
|
/linux/tools/perf/pmu-events/arch/x86/broadwell/uncore-interconnect.json

     3: …"BriefDescription": "Number of entries allocated. Account for Any type: e.g. Snoop, Core aperture,…
    31: … that are in DirectData mode. Such entry is defined as valid when it is allocated till data sent t…
    36: … that are in DirectData mode. Such entry is defined as valid when it is allocated till data sent t…
    41: …"BriefDescription": "Total number of Core outgoing entries allocated. Accounts for Coherent and no…
    50: "BriefDescription": "Number of Core coherent Data Read entries allocated in DirectData mode",
    55: "PublicDescription": "Number of Core coherent Data Read entries allocated in DirectData mode.",
    60: …"BriefDescription": "Number of Writes allocated - any write transactions: full/partials writes and…
|
/linux/drivers/net/ethernet/intel/ice/ice_irq.c

    66: * @dyn_allowed: allow entry to be dynamically allocated
    72: * Returns allocated irq entry or NULL on failure.
    88: /* only already allocated if the caller says so */    (in ice_get_irq_res())
   168: * allocated interrupt appropriately.
   172: * were allocated with ice_pci_alloc_irq_vectors are already used
   173: * and dynamically allocated interrupts are supported then new
   174: * interrupt will be allocated with pci_msix_alloc_irq_at.
   176: * Some callers may only support dynamically allocated interrupts.
   197: dev_dbg(dev, "allocated new irq at index %d\n", map.index);    (in ice_alloc_irq())
   216: * Remove allocated interrupt from the interrupt tracker. If interrupt was
   [all …]
|
/linux/include/linux/xz.h

    26: * dictionary doesn't need to be allocated as
    28: * structures are allocated at initialization,
    32: * allocated at initialization, so xz_dec_run()
    35: * allocated once the required size has been
    70: * tried to be allocated was no more than the
   177: * Multi-call mode with dynamically allocated dictionary (XZ_DYNALLOC):
   192: * @s: Decoder state allocated using xz_dec_init()
   211: * xz_dec_reset() - Reset an already allocated decoder state
   212: * @s: Decoder state allocated using xz_dec_init()
   224: * xz_dec_end() - Free the memory allocated for the decoder state
   [all …]
|
/linux/include/linux/idr.h

    51: * DEFINE_IDR() - Define a statically-allocated IDR.
   163: * Initialise a dynamically allocated IDR. To initialise a
   164: * statically allocated IDR, use DEFINE_IDR().
   172: * idr_is_empty() - Are there any IDs allocated?
   175: * Return: %true if any IDs have been allocated from this IDR.
   288: * Return: The allocated ID, or %-ENOMEM if memory could not be allocated,
   306: * Return: The allocated ID, or %-ENOMEM if memory could not be allocated,
   324: * Return: The allocated ID, or %-ENOMEM if memory could not be allocated,
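The IDR contract quoted above (return the allocated ID, or a negative errno) can be imitated with a toy fixed-size allocator. This stand-in is purely illustrative: the real IDR is radix-tree backed and effectively unbounded, and the `toy_id_*` names are invented here.

```c
#include <errno.h>
#include <stdbool.h>

#define TOY_MAX_IDS 4
static unsigned long toy_id_mask;	/* bit n set => ID n is allocated */

/* Return the lowest free ID, or a negative errno when the space is
 * exhausted; cf. "The allocated ID, or %-ENOMEM ..." above. */
int toy_id_alloc(void)
{
	for (int id = 0; id < TOY_MAX_IDS; id++) {
		if (!(toy_id_mask & (1UL << id))) {
			toy_id_mask |= 1UL << id;
			return id;
		}
	}
	return -ENOSPC;
}

void toy_id_free(int id)
{
	toy_id_mask &= ~(1UL << id);
}

/* cf. idr_is_empty(): true if no IDs are currently allocated */
bool toy_id_is_empty(void)
{
	return toy_id_mask == 0;
}
```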
|
/linux/kernel/trace/tracing_map.h

    67: * the way the insertion algorithm works, the size of the allocated
    75: * tracing_map_elts are pre-allocated before any call is made to
    78: * The tracing_map_entry array is allocated as a single block by
    82: * generally be allocated together as a single large array without
    83: * failure, they're allocated individually, by tracing_map_init().
    85: * The pool of tracing_map_elts are allocated by tracing_map_init()
    94: * array holding the pointers which make up the pre-allocated pool of
    95: * tracing_map_elts is allocated as a single block and is stored in
   113: * is created for each tracing_map_elt, again individually allocated
   114: * to avoid failures that might be expected if allocated as a single
   [all …]
|
/linux/drivers/firmware/efi/libstub/relocate.c

    11: * @align: minimum alignment of the allocated memory area. It should
    13: * @addr: on exit the address of the allocated memory
    17: * EFI_LOADER_DATA. The allocated pages are aligned according to @align but at
    18: * least EFI_ALLOC_ALIGN. The first allocated page will not below the address
    97: * @alignment: minimum alignment of the allocated memory area. It
   101: * Copy a memory area to a newly allocated memory area aligned according
   103: * is not available, the allocated address will not be below @min_addr.
   158: * have been allocated by UEFI, so we can safely use memcpy.    (in efi_relocate_kernel())
|
/linux/drivers/firmware/efi/libstub/alignedmem.c

    11: * @addr: On return the address of the first allocated page. The first
    12: * allocated page has alignment EFI_ALLOC_ALIGN which is an
    14: * @max: the address that the last allocated memory page shall not
    19: * Allocate pages as EFI_LOADER_DATA. The allocated pages are aligned according
    20: * to @align, which should be >= EFI_ALLOC_ALIGN. The last allocated page will
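Both EFI stub helpers above promise pages aligned to @align but at least EFI_ALLOC_ALIGN. For power-of-two alignments, the usual round-up computation is sketched below; the 4096-byte minimum is an assumption for illustration (the stub derives the real EFI_ALLOC_ALIGN from the EFI page size).

```c
#include <stdint.h>

#define EFI_ALLOC_ALIGN_GUESS 4096UL	/* assumed stand-in for EFI_ALLOC_ALIGN */

/* Round addr up to a power-of-two alignment, clamping the alignment to
 * the minimum the EFI stub enforces. */
uint64_t round_up_align(uint64_t addr, uint64_t align)
{
	if (align < EFI_ALLOC_ALIGN_GUESS)
		align = EFI_ALLOC_ALIGN_GUESS;
	return (addr + align - 1) & ~(align - 1);
}
```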
|
/linux/drivers/pci/endpoint/pci-epc-mem.c

   145: * Invoke to cleanup the pci_epc_mem structure allocated in
   171: * @epc: the EPC device on which memory has to be allocated
   172: * @phys_addr: populate the allocated physical address here
   173: * @size: the size of the address space that has to be allocated
   239: * pci_epc_mem_free_addr() - free the allocated memory address
   240: * @epc: the EPC device on which memory was allocated
   241: * @phys_addr: the allocated physical address
   242: * @virt_addr: virtual address of the allocated mem space
   243: * @size: the size of the allocated address space
   245: * Invoke to free the memory allocated using pci_epc_mem_alloc_addr.
|
/linux/mm/mempool.c

   201: * May be called on a zeroed but uninitialized mempool (i.e. allocated with
   217: * @pool: pointer to the memory pool which was allocated via
   275: * allocated for this pool.
   298: * allocated for this pool.
   335: * @pool: pointer to the memory pool which was allocated via
   338: * allocated for this pool.
   413: unsigned int count, unsigned int allocated,    (in mempool_alloc_from_pool(), argument)
   420: if (unlikely(pool->curr_nr < count - allocated))    (in mempool_alloc_from_pool())
   425: allocated++;    (in mempool_alloc_from_pool())
   439: return allocated;    (in mempool_alloc_from_pool())
   [all …]
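From the mempool_alloc_from_pool() fragments alone, the visible logic is: bail out when the pool cannot cover the remaining request, otherwise pop pre-allocated elements and count them. The following is a self-contained rendering of just those lines, under stated assumptions: locking, GFP handling, and the rest of the real function are omitted, and `toy_pool` is an invented stand-in for `mempool_t`.

```c
struct toy_pool {
	int curr_nr;		/* number of elements still in the pool */
	void *elements[8];	/* pre-allocated element pointers */
};

/* Take up to @count elements, continuing from @allocated already taken;
 * returns the new total, echoing "return allocated" in the fragment. */
unsigned int toy_pool_alloc(struct toy_pool *pool, void **out,
			    unsigned int count, unsigned int allocated)
{
	while (allocated < count) {
		/* cf. "if (pool->curr_nr < count - allocated)" above:
		 * give up if the pool cannot cover the remainder */
		if (pool->curr_nr < (int)(count - allocated))
			break;
		out[allocated] = pool->elements[--pool->curr_nr];
		allocated++;	/* cf. "allocated++" in the fragment */
	}
	return allocated;
}
```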
|
/linux/tools/perf/pmu-events/arch/arm64/fujitsu/a64fx/cache.json

   105: …nt counts operations where demand access hits an L2 cache refill buffer allocated by software or h…
   108: …nt counts operations where demand access hits an L2 cache refill buffer allocated by software or h…
   111: …ions where software or hardware prefetch hits an L2 cache refill buffer allocated by demand access…
   114: …ions where software or hardware prefetch hits an L2 cache refill buffer allocated by demand access…
   117: …nt counts operations where demand access hits an L2 cache refill buffer allocated by software or h…
   120: …nt counts operations where demand access hits an L2 cache refill buffer allocated by software or h…
|
/linux/fs/kernel_read_file.c

    14: * *@buf is NULL, a buffer will be allocated, and
    16: * @buf_size: size of buf, if already allocated. If @buf not
    17: * allocated, this is the largest size to allocate.
    25: * via an already-allocated @buf, in at most @buf_size chunks, and
    41: void *allocated = NULL;    (in kernel_read_file(), local)
    80: *buf = allocated = vmalloc(i_size);    (in kernel_read_file())
   115: if (allocated) {    (in kernel_read_file())
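The kernel_read_file() lines show a buffer-ownership idiom: if the caller passes *@buf as NULL, the function allocates the buffer itself and records the pointer in a local `allocated`, so that later error paths (cf. "if (allocated)") free only what the function allocated, never a caller-supplied buffer. A userspace sketch of that idiom follows; malloc() stands in for vmalloc(), `fill_buffer` is an invented name, and this sketch has no failure path after the allocation, so the error-time free is shown only in comments.

```c
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/* Copy @src into *@buf, allocating *@buf when the caller passed NULL.
 * Returns the number of bytes written, or -1 on error. */
ssize_t fill_buffer(void **buf, size_t buf_size, const char *src)
{
	void *allocated = NULL;		/* cf. "void *allocated = NULL" */
	size_t len = strlen(src) + 1;

	if (len > buf_size)
		return -1;		/* would overrun the caller's limit */
	if (!*buf) {
		/* cf. "*buf = allocated = vmalloc(i_size)" */
		*buf = allocated = malloc(len);
		if (!allocated)
			return -1;
	}
	memcpy(*buf, src, len);
	/* a later failure would do: if (allocated) free(allocated); */
	return (ssize_t)len;
}
```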
|
/linux/include/uapi/xen/evtchn.h

    39: * Return allocated port.
    49: * Return allocated port.
    59: * Return allocated port.
    68: * Unbind previously allocated @port.
    77: * Unbind previously allocated @port.
   105: * Bind statically allocated @port.
|
/linux/drivers/net/ipa/gsi_trans.h

    79: * @max_alloc: Maximum number of elements allocated at a time from pool
    91: * Return: Virtual address of element(s) allocated from the pool
   107: * @max_alloc: Maximum number of elements allocated at a time from pool
   121: * Return: Virtual address of element allocated from the pool
   123: * Only one element at a time may be allocated from a DMA pool.
   135: * gsi_channel_trans_idle() - Return whether no transactions are allocated
   139: * Return: True if no transactions are allocated, false otherwise
   159: * gsi_trans_free() - Free a previously-allocated GSI transaction
|
/linux/tools/testing/radix-tree/main.c

   243: printv(1, "starting single_thread_tests: %d allocated, preempt %d\n",    (in single_thread_tests())
   247: printv(2, "after multiorder_check: %d allocated, preempt %d\n",    (in single_thread_tests())
   251: printv(2, "after tag_check: %d allocated, preempt %d\n",    (in single_thread_tests())
   255: printv(2, "after gang_check: %d allocated, preempt %d\n",    (in single_thread_tests())
   259: printv(2, "after add_and_check: %d allocated, preempt %d\n",    (in single_thread_tests())
   263: printv(2, "after dynamic_height_check: %d allocated, preempt %d\n",    (in single_thread_tests())
   268: printv(2, "after idr_checks: %d allocated, preempt %d\n",    (in single_thread_tests())
   272: printv(2, "after big_gang_check: %d allocated, preempt %d\n",    (in single_thread_tests())
   280: printv(2, "after copy_tag_check: %d allocated, preempt %d\n",    (in single_thread_tests())
   323: printv(2, "after rcu_barrier: %d allocated, preempt %d\n",    (in main())
|
/linux/Documentation/userspace-api/media/dvb/dmx-reqbufs.rst

    38: Memory mapped buffers are located in device memory and must be allocated
    40: space. User buffers are allocated by applications themselves, and this
    43: allocated by applications through a device driver, and this ioctl only
    54: number allocated in the ``count`` field. The ``count`` can be smaller than the number requested, ev…
    56: function correctly. The actual allocated buffer size can is returned
|
/linux/Documentation/arch/arm64/gcs.rst

   102: the stack will remain allocated for the lifetime of the thread. At present
   117: allocated for it of half the standard stack size or 2 gigabytes,
   121: new Guarded Control Stack will be allocated for the new thread with
   124: * When a stack is allocated by enabling GCS or during thread creation then
   129: * Additional Guarded Control Stacks can be allocated using the
   132: * Stacks allocated using map_shadow_stack() can optionally have an end of
   140: * Stacks allocated using map_shadow_stack() must have a size which is a
   146: * When a thread is freed the Guarded Control Stack initially allocated for
   208: If GCS is enabled via ptrace no new GCS will be allocated for the thread.
|
/linux/tools/perf/arch/powerpc/util/skip-callchain-idx.c

    36: * if return address is in LR and if a new frame was allocated.
    63: * Return address is in LR. Check if a frame was allocated    (in check_return_reg())
    74: * If call frame address is in r1, no new frame was allocated.    (in check_return_reg())
    81: * A new frame was allocated but has not yet been used.    (in check_return_reg())
   139: * 1 if return address is in LR and no new stack frame was allocated
   140: * 2 if return address is in LR and a new frame was allocated (but not
   202: * allocated but the LR was not saved into it, then the LR contains the
   249: * New frame allocated but return address still in LR.    (in arch_skip_callchain_idx())
|
/linux/rust/kernel/pci/irq.rs

    74: /// Represents an allocated IRQ vector for a specific PCI device.
    76: /// This type ties an IRQ vector to the device it was allocated for,
    90: /// - `dev` must point to a [`Device`] that has successfully allocated IRQ vectors.
   117: /// This type ensures that IRQ vectors are properly allocated and freed by
   122: /// The [`Device`] has successfully allocated IRQ vectors.
   151: // - `pci_alloc_irq_vectors` returns the number of allocated vectors on success.    (in register())
   169: // - `self.dev` has successfully allocated IRQ vectors.    (in drop())
   214: /// The allocated vectors are automatically freed when the device is unbound, using the
   225: /// Returns a range of IRQ vectors that were successfully allocated, or an error if the
|
/linux/tools/testing/selftests/bpf/prog_tests/task_local_data.h

    30: * Thread-specific memory for storing TLD is allocated lazily on the first call to
    38: * TLD_DYN_DATA_SIZE - The maximum size of memory allocated for TLDs created dynamically
    43: * possibly known statically, a memory area of size TLD_DYN_DATA_SIZE will be allocated
    44: * for these TLDs. This additional memory is allocated for every thread that calls
    47: * will be allocated for TLDs created with it.
    56: * TLD_DONT_ROUND_UP_DATA_SIZE - Don't round up memory size allocated for data if
   247: * TLD_DYN_DATA_SIZE is allocated for tld_create_key()    (in __tld_create_key())
   291: * enough memory will be allocated for each thread on the first call to tld_get_data().
   315: * An additional TLD_DYN_DATA_SIZE bytes are allocated per-thread to accommodate TLDs
   359: /* tld_data_p is allocated on the first invocation of tld_get_data() */    (in tld_get_data())
|
/linux/arch/mips/math-emu/dsemul.c

    20: * page in response to a call to mips_dsemul(). Each thread may be allocated
    26: * the thread. In these cases the allocated frame will either be reused by
    54: * frame allocated to each thread, allowing us to clean it up at later
    59: * trust that BRK_MEMU means there's actually a valid frame allocated
   153: /* Clear any allocated frame, retrieving its index */    (in dsemul_thread_cleanup())
   156: /* If no frame was allocated, we're done */    (in dsemul_thread_cleanup())
   162: /* Free the frame that this thread had allocated */    (in dsemul_thread_cleanup())
   190: * of the emupage - we'll free the allocated frame anyway.    (in dsemul_thread_rollback())
   294: /* Cleanup the allocated frame, returning if there wasn't one */    (in do_dsemulret())
|
/linux/arch/mips/include/asm/octeon/cvmx-bootmem.h

    79: * Size actually allocated for named block (may differ from
   153: * memory cannot be allocated at the specified address.
   165: * Frees a previously allocated named bootmem block.
   200: * @align: Alignment of memory to be allocated. (must be a power of 2)
   214: * freed. If the requested name block is already allocated, return
   221: * @param align Alignment of memory to be allocated. (must be a power of 2)
   271: * Returns physical address of block allocated, or -1 on failure
   297: * @return physical address of block allocated, or -1 on failure
   307: * of the block that was allocated, or the list will become
|
/linux/drivers/net/ethernet/mellanox/mlx5/core/sf/hw_table.c

    16: u8 allocated: 1;    (member)
    86: if (!hwc->sfs[i].allocated && free_idx == -1) {    (in mlx5_sf_hw_table_id_alloc())
    91: if (hwc->sfs[i].allocated && hwc->sfs[i].usr_sfnum == usr_sfnum)    (in mlx5_sf_hw_table_id_alloc())
    99: hwc->sfs[free_idx].allocated = true;    (in mlx5_sf_hw_table_id_alloc())
   110: hwc->sfs[id].allocated = false;    (in mlx5_sf_hw_table_id_free())
   178: hwc->sfs[idx].allocated = false;    (in mlx5_sf_hw_table_hwc_sf_free())
   201: hwc->sfs[id].allocated = false;    (in mlx5_sf_hw_table_sf_deferred_free())
   216: if (hwc->sfs[i].allocated)    (in mlx5_sf_hw_table_hwc_dealloc_all())
   373: if (sf_hw->allocated && sf_hw->pending_delete)    (in mlx5_sf_hw_vhca_event())
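The mlx5_sf_hw_table_id_alloc() fragments show a linear scan that remembers the first free slot while rejecting a duplicate usr_sfnum. The following self-contained rendering reconstructs that scan; the `toy_sf` struct, the -EEXIST/-ENOSPC error values, and the absence of locking are assumptions layered on top of the visible lines.

```c
#include <errno.h>
#include <stdbool.h>

struct toy_sf {
	bool allocated;			/* cf. "u8 allocated: 1" */
	unsigned int usr_sfnum;
};

/* Return the index of the slot claimed for @usr_sfnum, or a negative
 * errno if the number is already in use or the table is full. */
int toy_sf_id_alloc(struct toy_sf *sfs, int max, unsigned int usr_sfnum)
{
	int free_idx = -1;

	for (int i = 0; i < max; i++) {
		if (!sfs[i].allocated && free_idx == -1)
			free_idx = i;		/* remember first free slot */
		if (sfs[i].allocated && sfs[i].usr_sfnum == usr_sfnum)
			return -EEXIST;		/* duplicate user sfnum */
	}
	if (free_idx == -1)
		return -ENOSPC;			/* no free slot left */
	sfs[free_idx].allocated = true;
	sfs[free_idx].usr_sfnum = usr_sfnum;
	return free_idx;
}
```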
|
/linux/drivers/acpi/acpica/utownerid.c

    81: * allocated and update the global ID mask.    (in acpi_ut_allocate_owner_id())
    92: * permanently allocated (prevents +1 overflow)    (in acpi_ut_allocate_owner_id())
    98: "Allocated OwnerId: 0x%3.3X\n",    (in acpi_ut_allocate_owner_id())
   108: * All owner_ids have been allocated. This typically should    (in acpi_ut_allocate_owner_id())
   110: * allocated upon table load (one per table) and method execution, and    (in acpi_ut_allocate_owner_id())
   130: * PARAMETERS: owner_id_ptr - Pointer to a previously allocated owner_ID
   182: "Attempted release of non-allocated OwnerId: 0x%3.3X",    (in acpi_ut_release_owner_id())
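acpi_ut_allocate_owner_id() hands out IDs from a global bitmask, and the "+1 overflow" comment above hints that stored IDs are biased by one so that 0 can mean "no ID". Below is a single-word sketch of that scheme; the real code spans several mask words, caps the ID count, and takes a mutex, none of which is shown here.

```c
#include <stdint.h>

static uint32_t toy_owner_id_mask;	/* bit n set => ID n+1 is allocated */

/* Return a 1-based owner ID, or 0 when every ID is allocated. Storing
 * IDs +1-biased lets 0 mean "none", per the comment quoted above. */
int toy_owner_id_alloc(void)
{
	for (int bit = 0; bit < 31; bit++) {
		if (!(toy_owner_id_mask & (1u << bit))) {
			toy_owner_id_mask |= 1u << bit;
			return bit + 1;
		}
	}
	return 0;	/* all owner IDs have been allocated */
}

void toy_owner_id_release(int id)
{
	if (id > 0)
		toy_owner_id_mask &= ~(1u << (id - 1));
}
```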
|