/linux/kernel/irq/matrix.c
    13: unsigned int allocated;  (member)
   155: /* Find the best CPU which has the lowest number of managed IRQs allocated */
   159: unsigned int cpu, best_cpu, allocated = UINT_MAX;  (in matrix_find_best_cpu_managed, local)
   167: if (!cm->online || cm->managed_allocated > allocated)  (in matrix_find_best_cpu_managed)
   171: allocated = cm->managed_allocated;  (in matrix_find_best_cpu_managed)
   180: * @replace: Replace an already allocated vector with a system
   198: cm->allocated--;  (in irq_matrix_assign_system)
   252: * This removes not allocated managed interrupts from the map. It does
   268: /* Get managed bit which are not allocated */  (in irq_matrix_remove_managed)
   290: * @mapped_cpu: Pointer to store the CPU for which the irq was allocated
   [all …]
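The hits at lines 155-171 outline a selection loop. A hedged reconstruction in C follows: the struct definitions and the m->maps field are inferred from the snippet, not checked against the full source, so treat them as stand-ins. The loop picks the online CPU with the fewest managed IRQs allocated.

#include <linux/cpumask.h>
#include <linux/limits.h>
#include <linux/percpu.h>

/* Minimal stand-ins for the internal types the hits imply. */
struct cpumap {
	bool		online;
	unsigned int	managed_allocated;
};

struct irq_matrix {
	struct cpumap __percpu *maps;
};

/* Pick the online CPU in @msk with the fewest managed IRQs allocated. */
static unsigned int matrix_find_best_cpu_managed(struct irq_matrix *m,
						 const struct cpumask *msk)
{
	unsigned int cpu, best_cpu = UINT_MAX, allocated = UINT_MAX;

	for_each_cpu(cpu, msk) {
		struct cpumap *cm = per_cpu_ptr(m->maps, cpu);

		/* Skip offline CPUs and candidates no better than the best so far. */
		if (!cm->online || cm->managed_allocated > allocated)
			continue;

		best_cpu = cpu;
		allocated = cm->managed_allocated;
	}
	return best_cpu;
}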
/linux/Documentation/core-api/memory-allocation.rst
    15: memory should be allocated. The GFP acronym stands for "get free
    61: ``GFP_HIGHUSER_MOVABLE`` does not require that allocated memory
    65: ``GFP_HIGHUSER`` means that the allocated memory is not movable,
    70: ``GFP_USER`` means that the allocated memory is not movable and it
    82: used to ensure that the allocated memory is accessible by hardware
   142: The maximal size of a chunk that can be allocated with `kmalloc` is
   147: The address of a chunk allocated with `kmalloc` is aligned to at least
   153: Chunks allocated with kmalloc() can be resized with krealloc(). Similarly
   158: request pages from the page allocator. The memory allocated by `vmalloc`
   176: When the allocated memory is no longer needed it must be freed.
   [all …]
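The kmalloc/krealloc/kfree lines above describe a complete allocation lifecycle. A minimal sketch, assuming process context (a GFP_KERNEL allocation may sleep); note that krealloc() does not free the original buffer on failure, so the old pointer must be kept until the resize succeeds.

#include <linux/slab.h>

static int example_alloc(void)
{
	char *buf, *tmp;

	buf = kmalloc(128, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Resize; on failure the original buffer is still live and must
	 * be freed by us, not by krealloc(). */
	tmp = krealloc(buf, 256, GFP_KERNEL);
	if (!tmp) {
		kfree(buf);
		return -ENOMEM;
	}
	buf = tmp;

	kfree(buf);	/* memory must be freed when no longer needed */
	return 0;
}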
/linux/include/drm/drm_suballoc.h
    19: * @olist: List of allocated ranges.
    20: * @flist: Array[fence context hash] of queues of fenced allocated ranges.
    34: * struct drm_suballoc - Sub-allocated range
    35: * @olist: List link for list of allocated ranges.
    36: * @flist: List link for the manager fenced allocated ranges queues.
    66: * Return: The start of the allocated range.
    77: * Return: The end of the allocated range + 1.
    88: * Return: The size of the allocated range.
/linux/drivers/md/dm-vdo/memory-alloc.h
    35: * @what: What is being allocated (for error logging)
    36: * @ptr: A pointer to hold the allocated memory
    65: * @WHAT: What is being allocated (for error logging)
    66: * @PTR: A pointer to hold the allocated memory
    78: * allocated memory.
    81: * @WHAT: What is being allocated (for error logging)
    82: * @PTR: A pointer to hold the allocated memory
   105: * @what: What is being allocated (for error logging)
   106: * @ptr: A pointer to hold the allocated memory
   120: * @what: What is being allocated (for error logging)
   [all …]
/linux/sound/firewire/iso-resources.c
    31: r->allocated = false;  (in fw_iso_resources_init)
    43: WARN_ON(r->allocated);  (in fw_iso_resources_destroy)
   111: if (WARN_ON(r->allocated))  (in fw_iso_resources_allocate)
   137: r->allocated = true;  (in fw_iso_resources_allocate)
   158: * any resources that were allocated before the bus reset. It is safe to call
   159: * this function if no resources are currently allocated.
   171: if (!r->allocated) {  (in fw_iso_resources_update)
   190: r->allocated = false;  (in fw_iso_resources_update)
   206: * fw_iso_resources_free - frees allocated resources
   209: * This function deallocates the channel and bandwidth, if allocated.
   [all …]
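The hits show an `allocated` boolean guarding every state transition. A hedged generic sketch of that pattern; the struct and function bodies are illustrative, not the driver's actual code.

#include <linux/bug.h>
#include <linux/errno.h>
#include <linux/types.h>

struct resources {
	bool allocated;
};

static int resources_allocate(struct resources *r)
{
	if (WARN_ON(r->allocated))
		return -EBUSY;		/* already holding channel/bandwidth */

	/* ... reserve the isochronous channel and bandwidth ... */

	r->allocated = true;
	return 0;
}

static void resources_free(struct resources *r)
{
	if (!r->allocated)
		return;			/* safe to call when nothing is held */

	/* ... release channel and bandwidth ... */

	r->allocated = false;
}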
/linux/Documentation/mm/page_owner.rst
     2: page owner: Tracking about who allocated each page
     8: page owner is for the tracking about who allocated each page.
    28: allocated base pages, which gives us a quick overview of where the memory
    52: memory system, so, until initialization, many pages can be allocated and
    53: they would have no owner information. To fix it up, these early allocated
    54: pages are investigated and marked as allocated in initialization phase.
    56: at least, we can tell whether the page is allocated or not,
    57: more accurately. On 2GB memory x86-64 VM box, 13343 early allocated pages
    58: are caught and marked, although they are mostly allocated from struct
   124: Page allocated via order XXX, ...
   [all …]
/linux/include/net/page_pool/helpers.h
    24: * allocated from page pool. There is no cache line dirtying for 'struct page'
    29: * page allocated from page pool. Page splitting enables memory saving and thus
   102: * @offset: offset to the allocated page
   107: * Return: allocated page fragment, otherwise return NULL.
   166: * @offset: offset to the allocated page
   167: * @size: in as the requested size, out as the allocated size
   173: * Return: allocated page or page fragment, otherwise return NULL.
   202: * @size: in as the requested size, out as the allocated size
   205: * it returns va of the allocated page or page fragment.
   207: * Return: the va for the allocated pag
   [all …]
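A hedged usage sketch of the fragment allocator these comments describe, assuming a pool already created with page_pool_create() and fragment support (exact pool flags vary by kernel version); rx_frag_alloc() is an illustrative name.

#include <linux/mm.h>
#include <net/page_pool/helpers.h>

/* Request a @len-byte fragment; the pool returns the backing page and
 * fills @offset with where the fragment starts inside that page. */
static void *rx_frag_alloc(struct page_pool *pool, unsigned int len,
			   struct page **pagep, unsigned int *offset)
{
	struct page *page = page_pool_dev_alloc_frag(pool, offset, len);

	if (!page)
		return NULL;

	*pagep = page;
	return page_address(page) + *offset;
}

When the fragment is no longer needed, page_pool_put_full_page() hands the page back to the pool.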
/linux/tools/perf/pmu-events/arch/x86/broadwell/uncore-interconnect.json
     3: "BriefDescription": "Number of entries allocated. Account for Any type: e.g. Snoop, Core aperture,…
    31: …that are in DirectData mode. Such entry is defined as valid when it is allocated till data sent t…
    36: …that are in DirectData mode. Such entry is defined as valid when it is allocated till data sent t…
    41: "BriefDescription": "Total number of Core outgoing entries allocated. Accounts for Coherent and no…
    50: "BriefDescription": "Number of Core coherent Data Read entries allocated in DirectData mode",
    55: "PublicDescription": "Number of Core coherent Data Read entries allocated in DirectData mode.",
    60: "BriefDescription": "Number of Writes allocated - any write transactions: full/partials writes and…
/linux/drivers/net/ethernet/intel/ice/ice_irq.c
    66: * @dyn_allowed: allow entry to be dynamically allocated
    72: * Returns allocated irq entry or NULL on failure.
    88: /* only already allocated if the caller says so */  (in ice_get_irq_res)
   167: * allocated interrupt appropriately.
   171: * were allocated with ice_pci_alloc_irq_vectors are already used
   172: * and dynamically allocated interrupts are supported then new
   173: * interrupt will be allocated with pci_msix_alloc_irq_at.
   175: * Some callers may only support dynamically allocated interrupts.
   196: dev_dbg(dev, "allocated new irq at index %d\n", map.index);  (in ice_alloc_irq)
   215: * Remove allocated interrupt from the interrupt tracker. If interrupt was
   [all …]
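The comments describe a fallback from statically allocated vectors to pci_msix_alloc_irq_at(). A hedged sketch of that dynamic path; the function name and flow are illustrative, and the helpers exist only on kernels with dynamic MSI-X allocation support.

#include <linux/pci.h>

static int alloc_extra_vector(struct pci_dev *pdev, unsigned int index)
{
	struct msi_map map;

	if (!pci_msix_can_alloc_dyn(pdev))
		return -EOPNOTSUPP;	/* static vectors only */

	/* Allocate one more vector at @index once the static ones are used. */
	map = pci_msix_alloc_irq_at(pdev, index, NULL);
	if (map.index < 0)
		return map.index;	/* allocation failed */

	dev_dbg(&pdev->dev, "allocated new irq at index %d\n", map.index);
	return map.virq;		/* Linux IRQ number */
}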
/linux/include/linux/xz.h
    26: * dictionary doesn't need to be allocated as
    28: * structures are allocated at initialization,
    32: * allocated at initialization, so xz_dec_run()
    35: * allocated once the required size has been
    70: * tried to be allocated was no more than the
   177: * Multi-call mode with dynamically allocated dictionary (XZ_DYNALLOC):
   192: * @s: Decoder state allocated using xz_dec_init()
   211: * xz_dec_reset() - Reset an already allocated decoder state
   212: * @s: Decoder state allocated using xz_dec_init()
   224: * xz_dec_end() - Free the memory allocated for the decoder state
   [all …]
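A hedged single-shot usage sketch of the XZ_DYNALLOC mode named above: the dictionary is allocated on demand once its required size is known, capped here at 1 MiB. It assumes the CRC32 table has been initialized (xz_crc32_init()) where the kernel configuration requires it.

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/xz.h>

static int xz_decompress_buf(const u8 *in, size_t in_size,
			     u8 *out, size_t out_size)
{
	struct xz_buf b = {
		.in = in, .in_size = in_size,
		.out = out, .out_size = out_size,
	};
	struct xz_dec *s;
	enum xz_ret ret;

	/* Dictionary allocated lazily, at most 1 MiB. */
	s = xz_dec_init(XZ_DYNALLOC, 1 << 20);
	if (!s)
		return -ENOMEM;

	ret = xz_dec_run(s, &b);
	xz_dec_end(s);	/* frees the memory allocated for the decoder state */

	return ret == XZ_STREAM_END ? 0 : -EINVAL;
}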
/linux/kernel/trace/tracing_map.h
    67: * the way the insertion algorithm works, the size of the allocated
    75: * tracing_map_elts are pre-allocated before any call is made to
    78: * The tracing_map_entry array is allocated as a single block by
    82: * generally be allocated together as a single large array without
    83: * failure, they're allocated individually, by tracing_map_init().
    85: * The pool of tracing_map_elts are allocated by tracing_map_init()
    94: * array holding the pointers which make up the pre-allocated pool of
    95: * tracing_map_elts is allocated as a single block and is stored in
   113: * is created for each tracing_map_elt, again individually allocated
   114: * to avoid failures that might be expected if allocated as a single
   [all …]
/linux/drivers/firmware/efi/libstub/relocate.c
    11: * @align: minimum alignment of the allocated memory area. It should
    13: * @addr: on exit the address of the allocated memory
    17: * EFI_LOADER_DATA. The allocated pages are aligned according to @align but at
    18: * least EFI_ALLOC_ALIGN. The first allocated page will not be below the address
    97: * @alignment: minimum alignment of the allocated memory area. It
   101: * Copy a memory area to a newly allocated memory area aligned according
   103: * is not available, the allocated address will not be below @min_addr.
   158: * have been allocated by UEFI, so we can safely use memcpy.  (in efi_relocate_kernel)

/linux/drivers/firmware/efi/libstub/alignedmem.c
    11: * @addr: On return the address of the first allocated page. The first
    12: * allocated page has alignment EFI_ALLOC_ALIGN which is an
    14: * @max: the address that the last allocated memory page shall not
    19: * Allocate pages as EFI_LOADER_DATA. The allocated pages are aligned according
    20: * to @align, which should be >= EFI_ALLOC_ALIGN. The last allocated page will
/linux/kernel/dma/contiguous.c
    33: * where only movable pages can be allocated from. This way, kernel
    35: * it, allocated pages can be migrated.
   208: * has been activated and all other subsystems have already allocated/reserved
   266: * has been activated and all other subsystems have already allocated/reserved
   313: * dma_release_from_contiguous() - release allocated pages
   314: * @dev: Pointer to device for which the pages were allocated.
   315: * @pages: Allocated pages.
   316: * @count: Number of allocated pages.
   318: * This function releases memory allocated by dma_alloc_from_contiguous().
   390: * dma_free_contiguous() - release allocated pages
   [all …]
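The hits name the allocate/release pair for CMA-backed buffers. A hedged sketch using the size-based wrappers; buffers come from a CMA area of movable pages, which is why unused CMA memory stays usable by the rest of the kernel. The function name is illustrative.

#include <linux/dma-map-ops.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static int use_contiguous(struct device *dev, size_t size)
{
	struct page *page = dma_alloc_contiguous(dev, size, GFP_KERNEL);

	if (!page)
		return -ENOMEM;

	/* ... use page_address(page) or DMA-map the pages; other
	 * allocations can be migrated out of the CMA area to make room ... */

	dma_free_contiguous(dev, page, size);
	return 0;
}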
/linux/mm/zpool.c
   234: * set to the allocated object handle. The allocation will
   248: * zpool_free() - Free previously allocated memory
   249: * @zpool: The zpool that allocated the memory.
   252: * This frees previously allocated memory. This does not guarantee
   267: * zpool_obj_read_begin() - Start reading from a previously allocated handle.
   268: * @zpool: The zpool that the handle was allocated from
   272: * This starts a read operation of a previously allocated handle. The passed
   287: * zpool_obj_read_end() - Finish reading from a previously allocated handle.
   288: * @zpool: The zpool that the handle was allocated from
   301: * zpool_obj_write() - Write to a previously allocated handle.
   [all …]
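A hedged sketch of the handle-based zpool API these comments document. The obj_read/obj_write helpers are recent and signatures vary across kernel versions, so treat every call here as an assumption rather than a stable contract.

#include <linux/gfp.h>
#include <linux/zpool.h>

static int zpool_roundtrip(struct zpool *pool, void *src, size_t len)
{
	unsigned long handle;
	void *mem;
	int err;

	/* On success @handle is set to the allocated object handle. */
	err = zpool_malloc(pool, len, GFP_KERNEL, &handle);
	if (err)
		return err;

	zpool_obj_write(pool, handle, src, len);

	/* Begin/end bracket the read; the backend may hand back a
	 * mapping or a local copy. */
	mem = zpool_obj_read_begin(pool, handle, NULL);
	/* ... consume @mem ... */
	zpool_obj_read_end(pool, handle, mem);

	zpool_free(pool, handle);	/* free previously allocated memory */
	return 0;
}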
/linux/tools/perf/pmu-events/arch/arm64/fujitsu/a64fx/cache.json
   105: …nt counts operations where demand access hits an L2 cache refill buffer allocated by software or h…
   108: …nt counts operations where demand access hits an L2 cache refill buffer allocated by software or h…
   111: …ions where software or hardware prefetch hits an L2 cache refill buffer allocated by demand access…
   114: …ions where software or hardware prefetch hits an L2 cache refill buffer allocated by demand access…
   117: …nt counts operations where demand access hits an L2 cache refill buffer allocated by software or h…
   120: …nt counts operations where demand access hits an L2 cache refill buffer allocated by software or h…
/linux/fs/kernel_read_file.c
    14: * *@buf is NULL, a buffer will be allocated, and
    16: * @buf_size: size of buf, if already allocated. If @buf not
    17: * allocated, this is the largest size to allocate.
    25: * via an already-allocated @buf, in at most @buf_size chunks, and
    41: void *allocated = NULL;  (in kernel_read_file, local)
    80: *buf = allocated = vmalloc(i_size);  (in kernel_read_file)
   115: if (allocated) {  (in kernel_read_file)
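The locals at lines 41, 80 and 115 show the allocate-if-needed pattern. A hedged sketch with a hypothetical read_fn() callback standing in for the actual read loop; the wrapper name is illustrative too.

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/vmalloc.h>

static ssize_t read_into_buf(struct file *file, void **buf, size_t i_size,
			     ssize_t (*read_fn)(struct file *, void *, size_t))
{
	void *allocated = NULL;
	ssize_t ret;

	/* Caller passed no buffer: allocate one and remember that we did. */
	if (!*buf)
		*buf = allocated = vmalloc(i_size);
	if (!*buf)
		return -ENOMEM;

	ret = read_fn(file, *buf, i_size);
	if (ret < 0 && allocated) {
		vfree(allocated);	/* never free a caller-supplied buffer */
		*buf = NULL;
	}
	return ret;
}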
/linux/include/uapi/xen/evtchn.h
    39: * Return allocated port.
    49: * Return allocated port.
    59: * Return allocated port.
    68: * Unbind previously allocated @port.
    77: * Unbind previously allocated @port.
   105: * Bind statically allocated @port.
/linux/drivers/net/ipa/gsi_trans.h
    79: * @max_alloc: Maximum number of elements allocated at a time from pool
    91: * Return: Virtual address of element(s) allocated from the pool
   107: * @max_alloc: Maximum number of elements allocated at a time from pool
   121: * Return: Virtual address of element allocated from the pool
   123: * Only one element at a time may be allocated from a DMA pool.
   135: * gsi_channel_trans_idle() - Return whether no transactions are allocated
   139: * Return: True if no transactions are allocated, false otherwise
   159: * gsi_trans_free() - Free a previously-allocated GSI transaction
/linux/tools/testing/radix-tree/main.c
   243: printv(1, "starting single_thread_tests: %d allocated, preempt %d\n",  (in single_thread_tests)
   247: printv(2, "after multiorder_check: %d allocated, preempt %d\n",  (in single_thread_tests)
   251: printv(2, "after tag_check: %d allocated, preempt %d\n",  (in single_thread_tests)
   255: printv(2, "after gang_check: %d allocated, preempt %d\n",  (in single_thread_tests)
   259: printv(2, "after add_and_check: %d allocated, preempt %d\n",  (in single_thread_tests)
   263: printv(2, "after dynamic_height_check: %d allocated, preempt %d\n",  (in single_thread_tests)
   268: printv(2, "after idr_checks: %d allocated, preempt %d\n",  (in single_thread_tests)
   272: printv(2, "after big_gang_check: %d allocated, preempt %d\n",  (in single_thread_tests)
   280: printv(2, "after copy_tag_check: %d allocated, preempt %d\n",  (in single_thread_tests)
   323: printv(2, "after rcu_barrier: %d allocated, preempt %d\n",  (in main)
/linux/Documentation/arch/arm64/gcs.rst
   102: the stack will remain allocated for the lifetime of the thread. At present
   117: allocated for it of half the standard stack size or 2 gigabytes,
   121: new Guarded Control Stack will be allocated for the new thread with
   124: * When a stack is allocated by enabling GCS or during thread creation then
   129: * Additional Guarded Control Stacks can be allocated using the
   132: * Stacks allocated using map_shadow_stack() can optionally have an end of
   140: * Stacks allocated using map_shadow_stack() must have a size which is a
   146: * When a thread is freed the Guarded Control Stack initially allocated for
   208: If GCS is enabled via ptrace no new GCS will be allocated for the thread.
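A hedged userspace sketch of allocating an additional Guarded Control Stack with map_shadow_stack(), as the hits describe. The syscall number macro and SHADOW_STACK_SET_TOKEN come from recent kernel uapi headers, so both are assumptions about the build environment; alloc_gcs() is an illustrative name.

#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN (1UL << 0)	/* assumed: end-of-stack cap token */
#endif

static void *alloc_gcs(size_t size)
{
	/* size must be a multiple of the page size (see above) */
	long ret = syscall(__NR_map_shadow_stack, 0UL, size,
			   SHADOW_STACK_SET_TOKEN);

	return ret == -1 ? NULL : (void *)ret;
}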
/linux/Documentation/userspace-api/media/dvb/dmx-reqbufs.rst
    38: Memory mapped buffers are located in device memory and must be allocated
    40: space. User buffers are allocated by applications themselves, and this
    43: allocated by applications through a device driver, and this ioctl only
    54: number allocated in the ``count`` field. The ``count`` can be smaller than the number requested, ev…
    56: function correctly. The actual allocated buffer size is returned
/linux/Documentation/core-api/irq/irq-domain.rst
    92: be allocated.
   115: hwirq number. When a hwirq is mapped, an irq_desc is allocated for
   121: allocated for in-use IRQs. The disadvantage is that the table must be
   134: IRQs. When an hwirq is mapped, an irq_desc is allocated and the
   171: range of irq_descs allocated for the hwirqs. It is used when the
   184: been allocated for the controller and that the IRQ number can be
   188: allocated for every hwirq, even if it is unused.
   199: will be allocated on-the-fly for it, and if no range is specified it
   201: descriptors will be allocated.
   207: used and no descriptor gets allocated it is very important to make sure
   [all …]
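A hedged sketch of registering the linear map this document describes, using the older irq_domain_add_linear() spelling (newer kernels prefer irq_domain_create_linear()). The table is sized up front, but an irq_desc is only allocated when a hwirq is actually mapped; irq_domain_simple_ops is used for brevity, and a real controller would supply its own ops.

#include <linux/errno.h>
#include <linux/irqdomain.h>
#include <linux/of.h>

static int register_linear_domain(struct device_node *np)
{
	struct irq_domain *domain;
	unsigned int virq;

	/* Linear map for 32 hwirqs; no descriptors allocated yet. */
	domain = irq_domain_add_linear(np, 32, &irq_domain_simple_ops, NULL);
	if (!domain)
		return -ENOMEM;

	/* Mapping hwirq 5 allocates its irq_desc and returns the Linux IRQ. */
	virq = irq_create_mapping(domain, 5);
	return virq ? 0 : -EINVAL;
}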
/linux/drivers/pci/msi/api.c
    21: * allocate a single interrupt vector. On success, the allocated vector
    44: * free earlier allocated interrupt vectors, and restore INTx emulation.
    99: * Return: number of MSI-X vectors allocated (which might be smaller
   100: * than @maxvecs), where Linux IRQ numbers for such allocated vectors
   142: * On success msi_map::index contains the allocated index (>= 0) and
   143: * msi_map::virq contains the allocated Linux interrupt number (> 0).
   186: * free earlier-allocated interrupt vectors, and restore INTx.
   228: * Return: number of allocated vectors (which might be smaller than
   334: * the MSI(-X) vector was allocated without explicit affinity
   353: /* MSI[X] interrupts can be allocated without affinity descriptor */  (in pci_irq_get_affinity)
   [all …]
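A hedged sketch of the allocation flow these kernel-doc comments cover: request between 1 and 8 vectors of any supported type, then translate each allocated vector to its Linux IRQ number. The function name is illustrative.

#include <linux/pci.h>

static int setup_vectors(struct pci_dev *pdev)
{
	int nvec, i;

	/* May return fewer than 8, but never fewer than 1. */
	nvec = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_ALL_TYPES);
	if (nvec < 0)
		return nvec;	/* not even the minimum could be allocated */

	for (i = 0; i < nvec; i++)
		dev_dbg(&pdev->dev, "vector %d -> Linux IRQ %d\n",
			i, pci_irq_vector(pdev, i));

	/* pci_free_irq_vectors(pdev) releases them and restores INTx. */
	return nvec;
}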
/linux/tools/perf/arch/powerpc/util/skip-callchain-idx.c
    44: * if return address is in LR and if a new frame was allocated.
    71: * Return address is in LR. Check if a frame was allocated  (in check_return_reg)
    82: * If call frame address is in r1, no new frame was allocated.  (in check_return_reg)
    89: * A new frame was allocated but has not yet been used.  (in check_return_reg)
   147: * 1 if return address is in LR and no new stack frame was allocated
   148: * 2 if return address is in LR and a new frame was allocated (but not
   232: * allocated but the LR was not saved into it, then the LR contains the
   279: * New frame allocated but return address still in LR.  (in arch_skip_callchain_idx)