/freebsd/share/man/man9/memguard.9
  59: can guard all allocations larger than
  61: and can guard a random fraction of all allocations.
  62: There is also a knob to prevent allocations smaller than a specified
  87: Only allocations from that
  94: is modified at run-time then only allocations of the new
  99: Existing guarded allocations will still be properly released by
  150: The default is 0, meaning all allocations can potentially be guarded.
  152: can guard sufficiently large allocations randomly, with average
  155: allocations.
  156: The default is 0, meaning no allocations are randomly guarded.
  [all …]

/freebsd/lib/libpmc/pmu-events/arch/x86/tremontx/uncore-memory.json
  162: "BriefDescription": "Read Pending Queue Allocations",
  168: …"PublicDescription": "Read Pending Queue Allocations : Counts the number of allocations into the R…
  173: "BriefDescription": "Read Pending Queue Allocations",
  179: …"PublicDescription": "Read Pending Queue Allocations : Counts the number of allocations into the R…
  190: … not empty) and the average latency (in conjunction with the number of allocations). The RPQ is u…
  200: … not empty) and the average latency (in conjunction with the number of allocations). The RPQ is u…
  204: "BriefDescription": "Write Pending Queue Allocations",
  210: …"PublicDescription": "Write Pending Queue Allocations : Counts the number of allocations into the …
  215: "BriefDescription": "Write Pending Queue Allocations",
  221: …"PublicDescription": "Write Pending Queue Allocations : Counts the number of allocations into the …
  [all …]

/freebsd/sys/contrib/openzfs/include/os/linux/spl/sys/vmem.h
  50: * On Linux, the primary means of doing allocations is via kmalloc(), which
  57: * memory from which allocations can be done using vmalloc(). It might seem
  61: * 1. Page directory table allocations are hard coded to use GFP_KERNEL.
  62: * Consequently, any KM_PUSHPAGE or KM_NOSLEEP allocations done using
  69: * 3. All vmalloc() allocations and frees are protected by a single global
  70: * lock which serializes all allocations.
  73: * list. The former will sum the allocations while the latter will print
  75: * indefinitely. When the total number of mapped allocations is large
  87: * allocations (8MB in size or smaller) and map vmem_{alloc,zalloc,free}()

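The header comment above boils down to a policy decision: keep requests on the kmalloc()/slab path whenever they are small enough (the hit at line 87 cites 8MB), and reserve the virtually contiguous allocator for requests that genuinely need it. A minimal userspace sketch of that size-threshold dispatch follows; the helper names and the use of plain malloc() as a stand-in for both kernel allocators are illustrative assumptions, not the SPL's actual code.

    #include <stdio.h>
    #include <stdlib.h>

    /* Cutoff taken from the comment above: requests up to 8MB stay on the
     * fast, physically contiguous path; anything larger goes to the
     * virtually contiguous allocator. Both are modeled with malloc() here. */
    #define CONTIG_CUTOFF   (8UL * 1024 * 1024)

    static void *fake_kmalloc(size_t sz) { return malloc(sz); }   /* stand-in */
    static void *fake_vmalloc(size_t sz) { return malloc(sz); }   /* stand-in */

    /* Route small requests to the slab-style path, large ones elsewhere. */
    static void *vmem_alloc_sketch(size_t sz)
    {
        return (sz <= CONTIG_CUTOFF) ? fake_kmalloc(sz) : fake_vmalloc(sz);
    }

    int main(void)
    {
        void *a = vmem_alloc_sketch(4096);                  /* "kmalloc" path */
        void *b = vmem_alloc_sketch(32UL * 1024 * 1024);    /* "vmalloc" path */

        printf("small=%p large=%p\n", a, b);
        free(a);
        free(b);
        return 0;
    }
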
/freebsd/contrib/llvm-project/clang/lib/AST/Interp/DynamicAllocator.h
  1: //==--------- DynamicAllocator.h - Dynamic allocations ------------*- C++ -*-=//
  24: /// Manages dynamic memory allocations done during bytecode interpretation.
  26: /// We manage allocations as a map from their new-expression to a list
  27: /// of allocations. This is called an AllocationSite. For each site, we
  31: /// For all array allocations, we need to allocate new Descriptor instances,
  41: llvm::SmallVector<Allocation> Allocations; member
  46: Allocations.push_back({std::move(Memory)}); in AllocationSite()
  49: size_t size() const { return Allocations.size(); } in size()

/freebsd/contrib/llvm-project/clang/lib/AST/Interp/DynamicAllocator.cpp
  1: //==-------- DynamicAllocator.cpp - Dynamic allocations ----------*- C++ -*-==//
  25: for (auto &Alloc : AllocSite.Allocations) { in cleanup()
  84: It->second.Allocations.emplace_back(std::move(Memory)); in allocate()
  101: auto AllocIt = llvm::find_if(Site.Allocations, [&](const Allocation &A) { in deallocate()
  106: assert(AllocIt != Site.Allocations.end()); in deallocate()
  112: Site.Allocations.erase(AllocIt); in deallocate()

/freebsd/share/doc/papers/kernmalloc/kernmalloc.t
  100: for small allocations and space-efficient for large allocations.
  118: Often the allocations are for small pieces of memory that are only
  190: a set of allocations at any point in time.
  352: the ``Requests'' field is the number of allocations since system startup;
  356: allocations are for small objects.
  357: Large allocations occur infrequently,
  371: Small allocations are done using the 4.2BSD power-of-two list strategy;
  379: the lists corresponding to large allocations are always empty.
  392: for large allocations,
  393: a different strategy is used for allocations larger than two kilobytes.
  [all …]

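Lines 371-393 describe the two-tier scheme: small requests are rounded up to a power of two and served from per-size free lists, while anything over two kilobytes takes a separate page-oriented path. The sketch below illustrates only the bucket-index arithmetic for the small case; the 2KB crossover is taken from the text, and the bucket layout starting at one byte is an assumption for illustration, not the paper's actual data structures.

    #include <stdio.h>
    #include <stddef.h>

    #define SMALL_LIMIT 2048    /* crossover cited in the paper: two kilobytes */

    /*
     * Map a request size to the index of its power-of-two free list:
     * bucket 0 holds 1-byte blocks, bucket 1 holds 2-byte blocks, and so on,
     * so a 100-byte request lands in the 128-byte bucket (index 7).
     */
    static int
    bucket_index(size_t size)
    {
        int indx = 0;
        size_t bucket = 1;

        while (bucket < size) {
            bucket <<= 1;
            indx++;
        }
        return indx;
    }

    int
    main(void)
    {
        size_t sizes[] = { 17, 100, 1024, 2048, 4096 };

        for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
            if (sizes[i] <= SMALL_LIMIT)
                printf("%4zu bytes -> power-of-two list %d (%zu-byte blocks)\n",
                    sizes[i], bucket_index(sizes[i]),
                    (size_t)1 << bucket_index(sizes[i]));
            else
                printf("%4zu bytes -> large-allocation path\n", sizes[i]);
        }
        return 0;
    }
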
/freebsd/share/doc/papers/kernmalloc/appendix.t
  41: * are quite fast. Allocations greater than MAXALLOCSAVE must
  43: * allocations should be done infrequently as they will be slow.
  66: short ks_indx; /* bucket index, size of small allocations */
  67: u_short ks_pagecnt; /* for large allocations, pages allocated */

/freebsd/contrib/llvm-project/llvm/lib/ExecutionEngine/Orc/TargetProcess/SimpleExecutorMemoryManager.cpp
  21: assert(Allocations.empty() && "shutdown not called?"); in ~SimpleExecutorMemoryManager()
  31: assert(!Allocations.count(MB.base()) && "Duplicate allocation addr"); in allocate()
  32: Allocations[MB.base()].Size = Size; in allocate()
  62: auto I = Allocations.find(Base.toPtr<void *>()); in finalize()
  63: if (I == Allocations.end()) in finalize()
  81: auto I = Allocations.find(Base.toPtr<void *>()); in finalize()
  84: if (I == Allocations.end()) in finalize()
  92: Allocations.erase(I); in finalize()
  161: auto I = Allocations.find(Base.toPtr<void *>()); in deallocate()
  164: if (I != Allocations.end()) { in deallocate()
  [all …]

/freebsd/contrib/llvm-project/llvm/lib/ExecutionEngine/Orc/TargetProcess/ExecutorSharedMemoryMapperService.cpp
  192: Allocations[MinAddr].DeinitializationActions = in initialize()
  194: Reservations[Reservation.toPtr<void *>()].Allocations.push_back(MinAddr); in initialize()
  215: Allocations[Base].DeinitializationActions)) { in deinitialize()
  221: auto AllocationIt = llvm::find(Reservation.second.Allocations, Base); in deinitialize()
  222: if (AllocationIt != Reservation.second.Allocations.end()) { in deinitialize()
  223: Reservation.second.Allocations.erase(AllocationIt); in deinitialize()
  228: Allocations.erase(Base); in deinitialize()
  257: AllocAddrs.swap(R.Allocations); in release()
  260: // deinitialize sub allocations in release()

/freebsd/sys/contrib/openzfs/man/man4/spl.4
  40: and improve cache reclaim time but individual allocations may take longer.
  66: allocations should be small,
  75: the largest allocations are quickly noticed and fixed.
  79: when testing so any new largish allocations are quickly caught.
  85: allocations will fail if they exceed
  87: Allocations which are marginally smaller than this limit may succeed but
  92: allocations larger than this maximum will quickly fail.
  94: allocations less than or equal to this value will use

/freebsd/sys/contrib/openzfs/module/os/linux/spl/spl-kmem.c
  31: * As a general rule kmem_alloc() allocations should be small, preferably
  38: * enough to ensure the largest allocations are quickly noticed and fixed.
  42: * allocations are quickly caught. These warnings may be disabled by setting
  52: * Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE.
  53: * Allocations which are marginally smaller than this limit may succeed but
  56: * margin of 4x is set. Kmem_alloc() allocations larger than this maximum
  57: * will quickly fail. Vmem_alloc() allocations less than or equal to this
  137: * GFP_KERNEL allocations can safely use kvmalloc which may in spl_kvmalloc()
  162: * e (>32kB) allocations. in spl_kvmalloc()
  179: * For non-__GFP-RECLAIM allocations we always stick to in spl_kvmalloc()
  [all …]

/freebsd/sys/dev/ice/ice_resmgr.c
  36: * Manage device resource allocations for a PF, including assigning queues to
  37: * VSIs, or managing interrupt allocations across the PF.
  39: * It can handle contiguous and scattered resource allocations, and upon
  52: MALLOC_DEFINE(M_ICE_RESMGR, "ice-resmgr", "Intel(R) 100Gb Network Driver resmgr allocations");
  86: * will only allow contiguous allocations. This type of resmgr is intended to
  87: * be used with tracking device MSI-X interrupt allocations.
  179: /* Scattered allocations won't work if they weren't allowed at resmgr in ice_resmgr_assign_scattered()

/freebsd/sys/dev/ice/ice_resmgr.h
  47: * allocations since interrupt allocations must be contiguous.
  58: * For managing VSI queue allocations
  73: * Represent resource allocations using a bitstring, where bit zero represents
  89: * managing queue allocations.

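Line 73 describes the underlying representation: a bitstring in which each bit stands for one device resource, and a contiguous allocation is simply a run of clear bits that gets marked set. A small self-contained sketch of that idea follows; it uses a plain byte array rather than the driver's bitstring(3) macros, so the names and layout are illustrative only.

    #include <stdio.h>
    #include <string.h>

    #define NUM_RES 64                     /* total tracked resources (illustrative) */

    static unsigned char bitmap[NUM_RES];  /* 0 = free, 1 = allocated; one byte per
                                            * "bit" for clarity instead of packing */

    /* Find and claim a contiguous run of 'count' free resources; return the
     * first index on success or -1 if no such run exists. */
    static int
    claim_contiguous(int count)
    {
        for (int start = 0; start + count <= NUM_RES; start++) {
            int run = 0;

            while (run < count && bitmap[start + run] == 0)
                run++;
            if (run == count) {
                memset(&bitmap[start], 1, count);
                return start;
            }
        }
        return -1;
    }

    int
    main(void)
    {
        printf("first block of 8 starts at %d\n", claim_contiguous(8));
        printf("second block of 8 starts at %d\n", claim_contiguous(8));
        return 0;
    }
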
/freebsd/contrib/llvm-project/llvm/lib/ExecutionEngine/Orc/MemoryMapper.cpp
  102: Allocations[MinAddr].Size = MaxAddr - MinAddr; in initialize()
  103: Allocations[MinAddr].DeinitializationActions = in initialize()
  105: Reservations[AI.MappingBase.toPtr<void *>()].Allocations.push_back(MinAddr); in initialize()
  122: Allocations[Base].DeinitializationActions)) { in deinitialize()
  128: {Base.toPtr<void *>(), Allocations[Base].Size}, in deinitialize()
  134: Allocations.erase(Base); in deinitialize()
  152: AllocAddrs.swap(R.Allocations); in release()
  155: // deinitialize sub allocations in release()
  370: ArrayRef<ExecutorAddr> Allocations, in deinitialize() argument
  384: SAs.Instance, Allocations); in deinitialize()

/freebsd/contrib/llvm-project/llvm/include/llvm/ExecutionEngine/Orc/MemoryMapper.h
  70: virtual void deinitialize(ArrayRef<ExecutorAddr> Allocations,
  96: void deinitialize(ArrayRef<ExecutorAddr> Allocations,
  113: std::vector<ExecutorAddr> Allocations; member
  119: AllocationMap Allocations; variable
  148: void deinitialize(ArrayRef<ExecutorAddr> Allocations,

/freebsd/sys/contrib/openzfs/include/sys/metaslab_impl.h
  156: * size of reserved allocations is maintained by mca_reserved.
  157: * The maximum total size of reserved allocations is determined by
  168: * "normal" for data block allocations (i.e. main pool allocations) or "log"
  169: * for allocations designated for intent log devices (i.e. slog devices).
  172: * to the class can be used to satisfy that request. Allocations are done
  174: * This rotor points to the next metaslab group where allocations will be
  188: * and can accept allocations. An initialized metaslab group is
  229: * ineligible for allocations for a number of reasons such as limited free
  290: * allow us to process all allocations and frees in syncing context
  334: * ensure that allocations are not performed on the metaslab that is
  [all …]

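The "rotor" mentioned at line 174 is essentially a cursor that spreads successive allocations across metaslab groups in round-robin fashion. A minimal sketch of that rotation follows; the structure and field names are invented for illustration, and the real allocator advances its rotor on more criteria than a simple per-call rotation.

    #include <stdio.h>

    #define NGROUPS 3              /* e.g. three top-level vdevs (illustrative) */

    struct alloc_class {
        int rotor;                 /* index of the next group to try */
    };

    /* Pick the group the rotor points at, then advance the rotor so the next
     * allocation is attempted on a different group. */
    static int
    pick_group(struct alloc_class *mc)
    {
        int g = mc->rotor;

        mc->rotor = (mc->rotor + 1) % NGROUPS;
        return g;
    }

    int
    main(void)
    {
        struct alloc_class normal = { 0 };

        for (int i = 0; i < 6; i++)
            printf("allocation %d -> metaslab group %d\n", i, pick_group(&normal));
        return 0;
    }
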
/freebsd/contrib/llvm-project/lldb/include/lldb/Expression/IRExecutionUnit.h
  189: /// Commit all allocations to the process and record where they were stored.
  195: /// True <=> all allocations were performed successfully.
  200: /// Report all committed allocations to the execution engine.
  206: /// Write the contents of all allocations to the process.
  209: /// The process containing the allocations.
  212: /// True <=> all allocations were performed successfully.
  335: /// Allocations made by the JIT are first queued up and then applied in bulk
  392: bool m_reported_allocations; ///< True after allocations have been reported.
  397: ///< is true, any allocations need to be committed immediately -- no

/freebsd/contrib/llvm-project/lldb/include/lldb/Expression/IRMemoryMap.h
  27: /// memory. All allocations made by this class are represented as disjoint
  32: /// allocations still get made-up addresses. If an inferior appears at some
  44: ///< It is an error to create other types of allocations while such
  45: ///allocations exist.
  131: // Returns true if the two given allocations intersect each other.

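Line 27 states the key invariant: every allocation owns a disjoint address interval, so the intersection test mentioned at line 131 reduces to the standard half-open range overlap check. A tiny sketch of that check, with hypothetical field names:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical view of an allocation as a half-open address range
     * [base, base + size). */
    struct alloc_range {
        uint64_t base;
        uint64_t size;
    };

    /* Two half-open ranges intersect iff each one starts before the other ends. */
    static int
    ranges_intersect(struct alloc_range a, struct alloc_range b)
    {
        return a.base < b.base + b.size && b.base < a.base + a.size;
    }

    int
    main(void)
    {
        struct alloc_range x = { 0x1000, 0x100 };
        struct alloc_range y = { 0x10c0, 0x40 };
        struct alloc_range z = { 0x1100, 0x100 };

        printf("x/y intersect: %d\n", ranges_intersect(x, y)); /* 1 */
        printf("x/z intersect: %d\n", ranges_intersect(x, z)); /* 0: ranges touch but do not overlap */
        return 0;
    }
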
/freebsd/crypto/openssl/doc/man3/OPENSSL_malloc.pod
  121: If no allocations have been done, it is possible to "swap out" the default
  144: B<OPENSSL_MALLOC_FAILURES> controls how often allocations should fail.
  148: C<100;@25> or C<100@0;0@25> means the first 100 allocations pass, then all
  149: other allocations (until the program exits or crashes) have a 25% chance of
  155: details about how many allocations there have been so far, what percentage
  180: always because allocations have already happened).

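The OPENSSL_MALLOC_FAILURES syntax quoted at line 148 injects allocation failures at run time, which is only useful if callers actually check the return value of OPENSSL_malloc(). A short example of the calling pattern it exercises is below; it assumes only the documented OPENSSL_malloc()/OPENSSL_free() interfaces, and the failure injection itself requires an OpenSSL build with that debugging support compiled in.

    #include <stdio.h>
    #include <string.h>
    #include <openssl/crypto.h>

    /* With e.g. OPENSSL_MALLOC_FAILURES="100;@25" set in the environment (and
     * a build that honors it), some of these allocations will return NULL, so
     * the error path below gets exercised. */
    int
    main(void)
    {
        int failures = 0;

        for (int i = 0; i < 1000; i++) {
            char *buf = OPENSSL_malloc(64);

            if (buf == NULL) {      /* injected (or real) allocation failure */
                failures++;
                continue;
            }
            memset(buf, 0, 64);
            OPENSSL_free(buf);
        }
        printf("%d of 1000 allocations failed\n", failures);
        return 0;
    }
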
/freebsd/contrib/jemalloc/include/jemalloc/internal/sc.h
  16: * satisfy allocations in the half-open range (base, base * 2]. There are
  18: * each one covers allocations for base / SC_NGROUP possible allocation sizes.
  30: * which covers allocations in (base, base + 1 * delta]
  32: * which covers allocations in (base + 1 * delta, base + 2 * delta].
  34: * which covers allocations in (base + 2 * delta, base + 3 * delta].
  37: * which covers allocations in (base + (SC_NGROUP - 1) * delta, 2 * base].
  52: * allocations (without wasting space unnecessarily), we introduce tiny size
  190: * We cap allocations to be less than 2 ** (ptr_bits - 1), so the highest base

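Concretely, with SC_NGROUP assumed to be 4 for illustration, the group based at 64 bytes has delta = 64 / 4 = 16 and produces the classes (64, 80], (80, 96], (96, 112], (112, 128], after which the next group starts at base 128 with delta 32. The short program below just prints those boundaries; it is a worked example of the arithmetic described above, not jemalloc's actual size-class tables.

    #include <stdio.h>

    #define SC_NGROUP 4   /* assumed group width for illustration */

    /* Print the size classes in the group (base, 2 * base], each class covering
     * delta = base / SC_NGROUP further sizes. */
    static void
    print_group(unsigned base)
    {
        unsigned delta = base / SC_NGROUP;

        for (unsigned i = 1; i <= SC_NGROUP; i++)
            printf("class: (%u, %u]\n", base + (i - 1) * delta, base + i * delta);
    }

    int
    main(void)
    {
        print_group(64);    /* (64,80] (80,96] (96,112] (112,128] */
        print_group(128);   /* (128,160] (160,192] (192,224] (224,256] */
        return 0;
    }
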
/freebsd/secure/lib/libcrypto/man/man3/OPENSSL_malloc.3
  257: If no allocations have been done, it is possible to \*(L"swap out\*(R" the default
  280: \&\fB\s-1OPENSSL_MALLOC_FAILURES\s0\fR controls how often allocations should fail.
  284: \&\f(CW\*(C`100;@25\*(C'\fR or \f(CW\*(C`100@0;0@25\*(C'\fR means the first 100 allocations pass, t…
  285: other allocations (until the program exits or crashes) have a 25% chance of
  291: details about how many allocations there have been so far, what percentage
  317: always because allocations have already happened).

/freebsd/lib/libmemstat/libmemstat.3
  210: limit on the number of allocations, is available only for specific
  281: allocations, return it.
  301: Return the total number of allocations for the memory type over its lifetime.
  307: Return the current number of allocations for the memory type.
  340: allocations for the memory type on the CPU over its lifetime.
  453: allocations performed by the

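Lines 301 and 307 describe accessors for lifetime and current allocation counts. The fragment below sketches how they are typically strung together for a malloc(9) type; it assumes the memstat_mtl_*/memstat_get_* interfaces documented in this manual page and the hypothetical type name "devbuf", so treat it as an outline rather than a verified program.

    #include <sys/types.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <memstat.h>            /* link with -lmemstat */

    int
    main(void)
    {
        struct memory_type_list *mtlp = memstat_mtl_alloc();
        struct memory_type *mtp;

        if (mtlp == NULL)
            return 1;
        if (memstat_sysctl_malloc(mtlp, 0) < 0) {   /* snapshot malloc(9) stats */
            memstat_mtl_free(mtlp);
            return 1;
        }
        mtp = memstat_mtl_find(mtlp, ALLOCATOR_MALLOC, "devbuf");
        if (mtp != NULL)
            printf("devbuf: %ju lifetime / %ju current allocations\n",
                (uintmax_t)memstat_get_numallocs(mtp),
                (uintmax_t)memstat_get_count(mtp));
        memstat_mtl_free(mtlp);
        return 0;
    }
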
/freebsd/sys/net/ieee_oui.h
  40: * addresses. The following allocations exist so that various
  45: * Allocations from this range are expected to be made using COMMON
  57: * gives us 254 allocations of 64K addresses. Address blocks can be

/freebsd/usr.sbin/acpi/acpidb/acpidb.8
  46: .It Ic Allocations
  47: Display list of current memory allocations
  62: .It Ic Stats Op Cm Allocations | Memory | Misc | Objects | Tables

/freebsd/contrib/llvm-project/compiler-rt/lib/hwasan/hwasan_flags.inc
  56: "The number of heap (de)allocations remembered per thread. "
  70: // enough, use malloc_bisect_dump to see interesting allocations.
  76: "Print all allocations within [malloc_bisect_left, "