
Searched full:allocation (Results 1 – 25 of 1915) sorted by relevance


/linux/drivers/acpi/acpica/
uttrack.c
4 * Module Name: uttrack - Memory allocation tracking routines (debug only)
14 * Each memory allocation is tracked via a doubly linked list. Each
32 *allocation);
80 * PARAMETERS: size - Size of the allocation
94 struct acpi_debug_mem_block *allocation; in acpi_ut_allocate_and_track()
105 allocation = in acpi_ut_allocate_and_track()
107 if (!allocation) { in acpi_ut_allocate_and_track()
109 /* Report allocation error */ in acpi_ut_allocate_and_track()
118 acpi_ut_track_allocation(allocation, size, ACPI_MEM_MALLOC, in acpi_ut_allocate_and_track()
121 acpi_os_free(allocation); in acpi_ut_allocate_and_track()
[all …]
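uttrack.c keeps every live allocation on a doubly linked list by prepending a debug header to each block. A minimal userspace sketch of that scheme (all names here are hypothetical analogues, not the ACPICA API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Debug header prepended to every tracked allocation; a simplified
 * stand-in for ACPICA's struct acpi_debug_mem_block. */
struct mem_block {
    struct mem_block *prev;
    struct mem_block *next;
    size_t size;
};

static struct mem_block *track_list; /* head of the doubly linked list */

/* Allocate size bytes plus a header, link the header into the list,
 * and hand the caller the payload that follows the header. */
void *tracked_alloc(size_t size)
{
    struct mem_block *blk = malloc(sizeof(*blk) + size);

    if (!blk)
        return NULL;
    blk->size = size;
    blk->prev = NULL;
    blk->next = track_list;
    if (track_list)
        track_list->prev = blk;
    track_list = blk;
    return blk + 1;
}

/* Step back to the header, unlink it, and free the whole block. */
void tracked_free(void *ptr)
{
    struct mem_block *blk = (struct mem_block *)ptr - 1;

    if (blk->prev)
        blk->prev->next = blk->next;
    else
        track_list = blk->next;
    if (blk->next)
        blk->next->prev = blk->prev;
    free(blk);
}

/* Sum of outstanding allocation sizes; nonzero at exit means a leak. */
size_t tracked_outstanding(void)
{
    size_t total = 0;

    for (struct mem_block *blk = track_list; blk; blk = blk->next)
        total += blk->size;
    return total;
}
```

Walking the list at shutdown is what lets the debug build report leftover allocations, as acpi_ut_delete_caches does.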
utalloc.c
4 * Module Name: utalloc - local memory allocation routines
22 * PARAMETERS: size - Size of the allocation
33 void *allocation; in acpi_os_allocate_zeroed()
37 allocation = acpi_os_allocate(size); in acpi_os_allocate_zeroed()
38 if (allocation) { in acpi_os_allocate_zeroed()
42 memset(allocation, 0, size); in acpi_os_allocate_zeroed()
45 return (allocation); in acpi_os_allocate_zeroed()
152 /* Memory allocation lists */ in acpi_ut_create_caches()
222 /* Debug only - display leftover memory allocation, if any */ in acpi_ut_delete_caches()
322 * purposefully bypass the (optionally enabled) internal allocation in acpi_ut_initialize_buffer()
[all …]
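The acpi_os_allocate_zeroed snippet above shows the standard allocate-then-clear pattern: memset only runs after the NULL check, so a failure propagates cleanly. A userspace sketch (function name hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the acpi_os_allocate_zeroed() pattern: allocate, and only
 * zero the memory on success so a NULL return propagates to the caller. */
void *allocate_zeroed(size_t size)
{
    void *allocation = malloc(size);

    if (allocation)
        memset(allocation, 0, size);
    return allocation;
}
```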
/linux/Documentation/core-api/
memory-allocation.rst
4 Memory Allocation Guide
7 Linux provides a variety of APIs for memory allocation. You can
14 Most of the memory allocation APIs use GFP flags to express how that
16 pages", the underlying memory allocation function.
18 Diversity of the allocation APIs combined with the numerous GFP flags
26 Of course there are cases when other allocation APIs and different GFP
45 * If the allocation is performed from an atomic context, e.g interrupt
48 ``GFP_NOWAIT`` allocation is likely to fail. Users of this flag need
52 will be stressed unless allocation succeeds, you may use ``GFP_ATOMIC``.
67 example may be a hardware allocation that maps data directly into
[all …]
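The guide's rule of thumb for atomic context can be sketched as flag composition. The bit positions below are illustrative, not the kernel's actual values, but the relationships match the documented semantics: GFP_KERNEL may enter direct reclaim and sleep, GFP_NOWAIT may only wake kswapd, and GFP_ATOMIC additionally taps emergency reserves.

```c
#include <assert.h>

/* Illustrative GFP bits; positions are made up, relationships are not. */
enum {
    __GFP_DIRECT_RECLAIM = 1u << 0, /* caller may sleep in reclaim */
    __GFP_KSWAPD_RECLAIM = 1u << 1, /* may wake kswapd */
    __GFP_IO             = 1u << 2, /* may start disk I/O */
    __GFP_FS             = 1u << 3, /* may call into the filesystem */
    __GFP_HIGH           = 1u << 4, /* may use emergency reserves */
};

#define __GFP_RECLAIM (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
#define GFP_KERNEL    (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_NOWAIT    (__GFP_KSWAPD_RECLAIM)
#define GFP_ATOMIC    (__GFP_KSWAPD_RECLAIM | __GFP_HIGH)

/* Atomic context (e.g. an interrupt handler) must not sleep; reserve
 * access is for callers that cannot tolerate failure. */
static inline unsigned int gfp_for_context(int atomic_context, int critical)
{
    if (!atomic_context)
        return GFP_KERNEL;
    return critical ? GFP_ATOMIC : GFP_NOWAIT;
}
```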
/linux/rust/kernel/
alloc.rs
3 //! Implementation of the kernel's memory allocation infrastructure.
21 /// Indicates an allocation error.
69 /// Allocation flags.
80 /// Allow the allocation to be in high memory.
88 /// Users can not sleep and need the allocation to succeed.
99 /// The same as [`GFP_KERNEL`], except the allocation is accounted to kmemcg.
107 /// Suppresses allocation failure reports.
152 /// - A memory allocation returned from an allocator must remain valid until it is explicitly freed.
154 /// - Any pointer to a valid memory allocation must be valid to be passed to any other [`Allocator`]
186 // new memory allocation. in alloc()
[all …]
/linux/tools/perf/pmu-events/arch/x86/sapphirerapids/
uncore-cxl.json
12 "BriefDescription": "Number of Allocation to Mem Rxx AGF 0",
22 "BriefDescription": "Number of Allocation to Cache Req AGF0",
32 "BriefDescription": "Number of Allocation to Cache Rsp AGF",
42 "BriefDescription": "Number of Allocation to Cache Data AGF",
52 "BriefDescription": "Number of Allocation to Cache Rsp AGF",
62 "BriefDescription": "Number of Allocation to Cache Req AGF 1",
72 "BriefDescription": "Number of Allocation to Mem Data AGF",
252 "BriefDescription": "Number of Allocation to Cache Data Packing buffer",
262 "BriefDescription": "Number of Allocation to Cache Req Packing buffer",
272 "BriefDescription": "Number of Allocation to Cache Rsp Packing buffer",
[all …]
/linux/tools/perf/pmu-events/arch/x86/emeraldrapids/
uncore-cxl.json
12 "BriefDescription": "Number of Allocation to Mem Rxx AGF 0",
22 "BriefDescription": "Number of Allocation to Cache Req AGF0",
32 "BriefDescription": "Number of Allocation to Cache Rsp AGF",
42 "BriefDescription": "Number of Allocation to Cache Data AGF",
52 "BriefDescription": "Number of Allocation to Cache Rsp AGF",
62 "BriefDescription": "Number of Allocation to Cache Req AGF 1",
72 "BriefDescription": "Number of Allocation to Mem Data AGF",
252 "BriefDescription": "Number of Allocation to Cache Data Packing buffer",
262 "BriefDescription": "Number of Allocation to Cache Req Packing buffer",
272 "BriefDescription": "Number of Allocation to Cache Rsp Packing buffer",
[all …]
/linux/Documentation/trace/
events-kmem.rst
5 The kmem tracing system captures events related to object and page allocation
8 - Slab allocation of small objects of unknown type (kmalloc)
9 - Slab allocation of small objects of known type
10 - Page allocation
17 1. Slab allocation of small objects of unknown type
27 internal fragmented as a result of the allocation pattern. By correlating
29 the allocation sites were.
32 2. Slab allocation of small objects of known type
45 3. Page allocation
54 These four events deal with page allocation and freeing. mm_page_alloc is
[all …]
/linux/Documentation/mm/
allocation-profiling.rst
4 MEMORY ALLOCATION PROFILING
23 When set to "never", memory allocation profiling overhead is minimized and it
30 If compression fails, a warning is issued and memory allocation profiling gets
67 Memory allocation profiling builds off of code tagging, which is a library for
72 To add accounting for an allocation call, we replace it with a macro
76 - calls the real allocation function
81 do not properly belong to the outer allocation context and should be counted
85 Thus, proper usage requires determining which function in an allocation call
92 - switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
108 - Hook your data structure's init function, like any other allocation function.
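The code-tagging mechanism described above can be sketched as a macro that expands to a per-callsite static tag, accounts the size, then calls the uninstrumented `_noprof` allocator. Everything here is a simplified userspace analogue with hypothetical names, not the kernel's alloc_tag machinery:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* One tag per allocation callsite; in the kernel these live in a
 * dedicated section so they can be enumerated for profiling output. */
struct alloc_tag {
    const char *file;
    int line;
    size_t bytes;   /* accounted bytes for this callsite */
};

/* The uninstrumented allocator, analogous to kmalloc_noprof(). */
static void *my_malloc_noprof(size_t size)
{
    return malloc(size);
}

/* The instrumented path: account to the callsite, then allocate. */
static void *my_malloc_prof(size_t size, struct alloc_tag *tag)
{
    tag->bytes += size;
    return my_malloc_noprof(size);
}

/* Each expansion gets its own static tag, i.e. one per callsite.
 * (Statement expressions are a GCC/Clang extension.) */
#define my_malloc(size)                                            \
    ({                                                             \
        static struct alloc_tag _tag = { __FILE__, __LINE__, 0 };  \
        my_malloc_prof((size), &_tag);                             \
    })
```

This is also why wrappers must be hooked explicitly: without their own tag, all their callers' allocations would be charged to the wrapper's single callsite.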
page_frags.rst
11 simple allocation framework for page fragments. This is used by the
17 cache is needed. This provides a central point for the fragment allocation
20 which can be expensive at allocation time. However due to the nature of
23 to be disabled when executing the fragment allocation.
26 allocation. The netdev_alloc_cache is used by callers making use of the
41 avoid calling get_page per allocation.
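The fragment cache described above can be sketched as a bump allocator over one backing page, where a count of live fragments stands in for the page refcount trick that avoids a get_page per allocation. Names and the fixed page size are illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define FRAG_PAGE_SIZE 4096

struct frag_cache {
    char *page;      /* backing "page" */
    size_t offset;   /* next free byte */
    unsigned live;   /* outstanding fragments on this page */
};

/* Hand out the next 8-byte-aligned fragment, or NULL on exhaustion. */
void *frag_alloc(struct frag_cache *fc, size_t size)
{
    void *p;

    size = (size + 7) & ~(size_t)7;
    if (!fc->page) {
        fc->page = malloc(FRAG_PAGE_SIZE);
        if (!fc->page)
            return NULL;
        fc->offset = 0;
    }
    if (fc->offset + size > FRAG_PAGE_SIZE)
        return NULL;
    p = fc->page + fc->offset;
    fc->offset += size;
    fc->live++;
    return p;
}

/* Individual fragments are never freed; the page goes back only once
 * every fragment handed out from it has been released. */
void frag_free(struct frag_cache *fc, void *ptr)
{
    (void)ptr;
    if (--fc->live == 0) {
        free(fc->page);
        fc->page = NULL;
        fc->offset = 0;
    }
}
```

Because allocation is a pointer bump, the real per-CPU caches must run with interrupts (or preemption) disabled, as the text notes.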
/linux/tools/include/linux/
gfp_types.h
10 * typedef gfp_t - Memory allocation flags.
14 * the underlying memory allocation function. Not every GFP flag is
132 * pages being in one zone (fair zone allocation policy).
134 * %__GFP_HARDWALL enforces the cpuset memory allocation policy.
136 * %__GFP_THISNODE forces the allocation to be satisfied from the requested
139 * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg.
141 * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension.
164 * the caller guarantees the allocation will allow more memory to be freed
202 * canonical example is THP allocation where a fallback is cheap but
238 * If the allocation does fail, and the caller is in a position to
[all …]
/linux/include/linux/
gfp_types.h
10 * typedef gfp_t - Memory allocation flags.
14 * the underlying memory allocation function. Not every GFP flag is
132 * pages being in one zone (fair zone allocation policy).
134 * %__GFP_HARDWALL enforces the cpuset memory allocation policy.
136 * %__GFP_THISNODE forces the allocation to be satisfied from the requested
139 * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg.
141 * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension.
164 * the caller guarantees the allocation will allow more memory to be freed
202 * canonical example is THP allocation where a fallback is cheap but
238 * If the allocation does fail, and the caller is in a position to
[all …]
slab.h
157 * the locks at page-allocation time, as is done in __i915_request_ctor(),
354 * sheaf is allocated and filled from slab(s) using bulk allocation.
355 * Otherwise the allocation falls back to the normal operation
366 * Bulk allocation and free operations also try to use the cpu sheaves
698 * Figure out which kmalloc slab an allocation of a certain size
777 * @gfpflags: describe the allocation context
780 * primarily in cases where charging at allocation time might not be possible
788 * internal metadata allocation.
810 * Bulk allocation and freeing operations. These are accelerated in an
889 * @flags: describe the allocation context
[all …]
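The bulk operations and cached-sheaf behavior in the slab.h excerpts can be sketched as a freelist-backed cache: bulk allocation pops cached objects first and falls back to the underlying allocator, and bulk free refills the freelist. This is a simplified userspace analogue, not the kmem_cache API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct obj { struct obj *next; };

struct cache {
    struct obj *freelist;  /* stand-in for a per-CPU sheaf */
};

/* Fill out[0..nr-1]; returns how many objects were actually provided,
 * which may be short on allocation failure, like the kernel bulk API. */
size_t cache_alloc_bulk(struct cache *c, size_t nr, void **out)
{
    size_t i;

    for (i = 0; i < nr; i++) {
        if (c->freelist) {            /* fast path: pop a cached object */
            out[i] = c->freelist;
            c->freelist = c->freelist->next;
        } else {                      /* slow path: fresh allocation */
            out[i] = malloc(sizeof(struct obj));
            if (!out[i])
                break;
        }
    }
    return i;
}

/* Bulk free just pushes everything back onto the freelist. */
void cache_free_bulk(struct cache *c, size_t nr, void **objs)
{
    for (size_t i = 0; i < nr; i++) {
        struct obj *o = objs[i];

        o->next = c->freelist;
        c->freelist = o;
    }
}
```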
/linux/sound/core/
pcm_memory.c
31 MODULE_PARM_DESC(max_alloc_per_card, "Max total allocation bytes per card.");
71 /* the actual allocation size might be bigger than requested, in do_alloc_pages()
77 /* take back on allocation failure */ in do_alloc_pages()
313 * snd_pcm_lib_preallocate_pages - pre-allocation for the given DMA type
317 * @size: the requested pre-allocation size in bytes
318 * @max: the max. allowed pre-allocation size
320 * Do pre-allocation for the given DMA buffer type.
331 …* snd_pcm_lib_preallocate_pages_for_all - pre-allocation for continuous memory type (all substream…
335 * @size: the requested pre-allocation size in bytes
336 * @max: the max. allowed pre-allocation size
[all …]
/linux/drivers/android/binder/
allocation.rs
28 /// Range within the allocation where we can find the offsets to the object descriptors.
30 /// The target node of the transaction this allocation is associated to.
33 /// When this allocation is dropped, call `pending_oneway_finished` on the node.
44 /// Represents an allocation that the kernel is currently using.
50 /// This allocation corresponds to an allocation in the range allocator, so the relevant pages are
52 pub(crate) struct Allocation {
63 impl Allocation {
228 /// Should the looper return to userspace when freeing this allocation?
239 impl Drop for Allocation {
266 // Ignore allocation failures. in drop()
[all …]
transaction.rs
17 allocation::{Allocation, TranslatedFds},
73 allocation: SpinLock<Option<Allocation>>,
142 allocation <- kernel::new_spinlock!(Some(alloc.success()), "Transaction::new"), in new()
179 allocation <- kernel::new_spinlock!(Some(alloc.success()), "Transaction::new"), in new_reply()
363 let mut alloc = self.allocation.lock().take().ok_or(ESRCH)?; in prepare_file_list()
367 *self.allocation.lock() = Some(alloc); in prepare_file_list()
371 // Free the allocation eagerly. in prepare_file_list()
439 let mut alloc = self.allocation.lock().take().ok_or(ESRCH)?; in do_work()
449 // It is now the user's responsibility to clear the allocation. in do_work()
466 let allocation = self.allocation.lock().take(); in cancel()
[all …]
/linux/fs/jfs/
jfs_imap.h
21 #define MAXAG 128 /* maximum number of allocation groups */
23 #define AMAPSIZE 512 /* bytes in the IAG allocation maps */
39 * inode allocation map:
41 * inode allocation map consists of
43 * . inode allocation group pages (per 4096 inodes)
47 * inode allocation group page (per 4096 inodes of an AG)
51 __le32 iagnum; /* 4: inode allocation group number */
73 /* allocation bit map: 1 bit per inode (0 - free, 1 - allocated) */
74 __le32 wmap[EXTSPERIAG]; /* 512: working allocation map */
75 __le32 pmap[EXTSPERIAG]; /* 512: persistent allocation map */
[all …]
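The jfs_imap.h excerpt describes an allocation bitmap with one bit per inode (0 = free, 1 = allocated). A minimal sketch of the find-first-free / set / clear operations over such a map (sizes and names illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define NUM_INODES 128

/* Working allocation map: 1 bit per inode, 0 = free, 1 = allocated. */
static uint32_t wmap[NUM_INODES / 32];

/* Find, mark, and return the first free inode number, or -1 if full. */
int inode_alloc(void)
{
    for (int i = 0; i < NUM_INODES; i++) {
        if (!(wmap[i / 32] & (1u << (i % 32)))) {
            wmap[i / 32] |= 1u << (i % 32);
            return i;
        }
    }
    return -1;
}

/* Clear the inode's bit so the number can be reused. */
void inode_free(int ino)
{
    wmap[ino / 32] &= ~(1u << (ino % 32));
}
```

JFS additionally keeps a persistent map (pmap) alongside the working map (wmap) so that committed and in-flight state can differ; the sketch models only the working map.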
/linux/arch/x86/include/asm/
hw_irq.h
61 * irq_alloc_info - X86 specific interrupt allocation info
62 * @type: X86 specific allocation type
63 * @flags: Flags for allocation tweaks
66 * @mask: CPU mask for vector allocation
68 * @data: Allocation specific data
70 * @ioapic: IOAPIC specific allocation data
71 * @uv: UV specific allocation data
/linux/fs/xfs/libxfs/
xfs_alloc.h
40 xfs_agnumber_t agno; /* allocation group number */
41 xfs_agblock_t agbno; /* allocation group-relative block # */
54 char wasdel; /* set if allocation was prev delayed */
55 char wasfromfl; /* set if allocation is from freelist */
64 #define XFS_ALLOC_USERDATA (1 << 0)/* allocation is for user data*/
103 * space matching the requirements in that AG, then the allocation will fail.
109 * viable candidates in the AG, then fail the allocation.
116 * then the allocation fails.
122 * Best effort full filesystem allocation scan.
124 * Locality aware allocation will be attempted in the initial AG, but on failure
[all …]
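The policy in the xfs_alloc.h comments (locality-aware attempt in the initial AG, then a best-effort scan of all AGs, failing only when no AG can satisfy the request) can be sketched as:

```c
#include <assert.h>

#define NUM_AGS 4

/* Pick an allocation group for a request of `want` blocks: prefer the
 * locality hint, fall back to a full scan, return -1 if nothing fits.
 * free_blocks[] is a stand-in for real per-AG space accounting. */
int pick_ag(int preferred, const int free_blocks[NUM_AGS], int want)
{
    if (free_blocks[preferred] >= want)
        return preferred;                 /* locality hit */
    for (int ag = 0; ag < NUM_AGS; ag++)  /* best-effort full scan */
        if (free_blocks[ag] >= want)
            return ag;
    return -1;                            /* no viable candidate */
}
```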
/linux/Documentation/filesystems/ext4/
bigalloc.rst
15 use clustered allocation, so that each bit in the ext4 block allocation
19 This means that each bit in the block allocation bitmap now addresses
20 256 4k blocks. This shrinks the total size of the block allocation
29 128MiB); however, the minimum allocation unit becomes a cluster, not a
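The bigalloc arithmetic follows directly: one bitmap bit covers a whole cluster, so a single bitmap block maps cluster-ratio times as much disk. The numbers below assume 4KiB blocks, matching the 128MiB-per-group figure for classic ext4 and the 256-blocks-per-cluster example in the text:

```c
#include <assert.h>
#include <stdint.h>

/* Bytes of disk covered by one block group, i.e. by one bitmap block:
 * (bits per bitmap block) * (blocks per cluster) * (block size). */
static uint64_t group_coverage_bytes(uint64_t block_size,
                                     uint64_t blocks_per_cluster)
{
    uint64_t bits = block_size * 8;
    return bits * blocks_per_cluster * block_size;
}
```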
/linux/tools/perf/pmu-events/arch/x86/broadwellx/
frontend.json
15 "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
211 "BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
215 "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread; b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); c. Instruction Decode Queue (IDQ) delivers four uops.",
220 "BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
225 "PublicDescription": "This event counts, on the per-thread basis, cycles when no uops are delivered to Resource Allocation Table (RAT). IDQ_Uops_Not_Delivered.core =4.",
230 "BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Tabl
[all...]
/linux/tools/perf/pmu-events/arch/x86/broadwellde/
frontend.json
15 "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
211 "BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
215 "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread; b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); c. Instruction Decode Queue (IDQ) delivers four uops.",
220 "BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
225 "PublicDescription": "This event counts, on the per-thread basis, cycles when no uops are delivered to Resource Allocation Table (RAT). IDQ_Uops_Not_Delivered.core =4.",
230 "BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Tabl
[all...]
/linux/tools/perf/pmu-events/arch/x86/broadwell/
frontend.json
15 "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
211 "BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
215 "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread; b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); c. Instruction Decode Queue (IDQ) delivers four uops.",
220 "BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
225 "PublicDescription": "This event counts, on the per-thread basis, cycles when no uops are delivered to Resource Allocation Table (RAT). IDQ_Uops_Not_Delivered.core =4.",
230 "BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Tabl
[all...]
/linux/rust/kernel/alloc/
allocator.rs
5 //! Documentation for the kernel's memory allocators can be found in the "Memory Allocation Guide"
9 //! Reference: <https://docs.kernel.org/core-api/memory-allocation.html>
45 /// failure. This allocator is typically used when the size for the requested allocation is not
81 /// - it accepts any pointer to a valid memory allocation allocated by this function.
140 // - passing a pointer to a valid memory allocation is OK,
161 /// Convert a pointer to a [`Vmalloc`] allocation to a [`page::BorrowedPage`].
177 /// // `ptr` is a valid pointer to a `Vmalloc` allocation.
189 /// - `ptr` must be a valid pointer to a [`Vmalloc`] allocation.
201 // this function `ptr` is a valid pointer to a `Vmalloc` allocation. in to_page()
210 // - passing a pointer to a valid memory allocation is OK,
[all …]
/linux/Documentation/userspace-api/
dma-buf-alloc-exchange.rst
180 described as having a height of 1080, with the memory allocation for each buffer
228 Allocation chapter
233 buffer-allocation interface available at either kernel or userspace level, the
234 client makes an arbitrary choice of allocation interface such as Vulkan, GBM, or
237 Each allocation request must take, at a minimum: the pixel format, a list of
239 this set of properties in different ways, such as allowing allocation in more
244 allocation, any padding required, and further properties of the underlying
250 After allocation, the client must query the allocator to determine the actual
272 memory. For this reason, each import and allocation API must provide a separate
314 In the latter case, the allocation is as discussed above, being provided with a
[all …]
/linux/drivers/android/binder/range_alloc/
mod.rs
15 Allocated(Allocation<T>),
50 fn allocate<T>(self, data: Option<T>) -> Allocation<T> { in allocate()
51 Allocation { in allocate()
58 struct Allocation<T> {
63 impl<T> Allocation<T> {
152 /// Try to reserve a new buffer, using the provided allocation if necessary.
241 /// Called when an allocation is no longer in use by the kernel.
253 /// Called when the kernel starts using an allocation.
299 // If the user supplied an allocation that we did not end up using, then we return it here.
306 /// Returned by `reserve_new` to request the caller to make an allocation before calling the method
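The range allocator's `reserve_new` contract (the caller allocates up front, the routine consumes the allocation only if it turns out to be needed, and an unused one is handed back) can be sketched in C. All names here are hypothetical; the point is the shape of the ownership hand-off, which lets the allocation happen outside any lock:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct node { int id; };

struct reserve_result {
    int reserved;          /* nonzero on success */
    struct node *unused;   /* caller's node back, if we did not need it */
};

/* need_node stands in for "the reservation requires a new tree entry".
 * The caller's preallocated node is consumed or returned, never leaked. */
struct reserve_result reserve_new(struct node *prealloc, int need_node)
{
    struct reserve_result r = { 1, NULL };

    if (need_node)
        prealloc->id = 42;     /* consume the caller's allocation */
    else
        r.unused = prealloc;   /* hand it back for the caller to free */
    return r;
}
```

Returning the unused node (rather than freeing it inside) keeps all allocation and deallocation on the caller's side, which matters when the routine itself runs under a spinlock.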
