Lines Matching refs:allocation

46   usage is improved with certain allocation patterns.  As usual, the release and
162 - Allow arena index lookup based on allocation addresses via mallctl.
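
For context, the lookup described above is reachable through jemalloc's
mallctl() interface. A minimal sketch, assuming the control name is
"arenas.lookup" and that the queried pointer is passed via newp while the
owning arena's index is read back via oldp:

    #include <stdio.h>
    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        void *p = malloc(64);   /* allocation whose arena we want to find */
        unsigned arena_ind;
        size_t sz = sizeof(arena_ind);

        /* Assumed semantics: the pointer goes in via newp and the arena
         * index comes back via oldp. */
        if (mallctl("arenas.lookup", &arena_ind, &sz, &p, sizeof(p)) == 0)
            printf("%p belongs to arena %u\n", p, arena_ind);

        free(p);
        return 0;
    }
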
167 allocation. (@interwq, @davidtgoldblatt)
226 - Improve the fit for aligned allocation. (@interwq, @edwinsmith)
273 only allocation activity is to call free() after TLS destructors have been
373 - Improve reentrant allocation support, such that deadlock is less likely if
385 - Unify the allocation paths, and merge most fast-path branching decisions.
478 a single allocation's size exceeds the interval. (@jasone)
498 - Fix DSS (sbrk(2)-based) allocation. This regression was first released in
507 - Fix huge-aligned allocation. This regression was first released in 4.4.0.
527 is higher priority than address, so that the allocation policy prefers older
567 - Fix large allocation to search starting in the optimal size class heap,
574 - Fix over-sized allocation of radix tree leaf nodes. (@mjp41, @ogaun,
576 - Fix over-sized allocation of arena_t (plus associated stats) data
586 allocation. (@kspinka, @Whissi, @jasone)
592 - Fix TSD fetches to avoid (recursive) allocation. This is relevant to
600 - Fix bootstrapping issues for configurations that require allocation during
681 numerical overflow, and all allocation functions are guaranteed to indicate
710 - Move retained memory allocation out of the default chunk allocation
712 a custom chunk allocation function. This resolves a virtual memory leak.
786 allocation/deallocation within the application's thread-specific data
799 + Make one call to prof_active_get_unlocked() per allocation event, and use
800 the result throughout the relevant functions that handle an allocation
802 allocation events against concurrent prof_active changes.
854 - Refactor huge allocation to be managed by arenas, so that arenas now
860 mallctls provide high level per arena huge allocation statistics.
872 allocation versus deallocation.
903 reduces the cost of repeated huge allocation/deallocation, because it
911 - Randomly distribute large allocation base pointer alignment relative to page
918 - Implement in-place huge allocation growing and shrinking.
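
Applications can exercise this in-place resizing through jemalloc's
non-standard *allocx() API. A sketch, assuming a build that exports
mallocx()/xallocx()/dallocx(); xallocx() never moves the allocation and
returns the resulting usable size, which stays at the old value when the
extent cannot be grown:

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        /* A large allocation that we try to grow without moving it. */
        void *p = mallocx(4 << 20, 0);
        if (p == NULL)
            return 1;

        /* Attempt an in-place grow to 8 MiB; the return value is the
         * usable size actually obtained. */
        size_t usable = xallocx(p, 8 << 20, 0, 0);
        printf("usable size after xallocx: %zu bytes\n", usable);

        dallocx(p, 0);
        return 0;
    }
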
922 allocation, because a global data structure is critical for determining
985 small/large allocation if chunk allocation failed. In the absence of this
986 bug, chunk allocation failure would result in allocation failure, e.g. NULL
992 - Use dss allocation precedence for huge allocations as well as small/large
1050 allocation, not just deallocation.
1051 - Fix a data race for large allocation stats counters.
1084 - Preserve errno during the first allocation. A readlink(2) call during
1086 set during the first allocation prior to this fix.
1150 allocation and dirty page purging algorithms in order to better control
1157 - Fix dss/mmap allocation precedence code to use recyclable mmap memory only
1158 after primary dss allocation fails.
1340 - Fix deadlocks on OS X that were due to memory allocation in
1348 + Fix error detection bugs for aligned memory allocation.
1426 - Add per thread allocation counters that can be accessed via the
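
A hedged sketch of reading such per-thread counters through mallctl(),
assuming the names "thread.allocated" and "thread.deallocated", each a
cumulative uint64_t byte count for the calling thread:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        uint64_t alloc_bytes, dealloc_bytes;
        size_t sz = sizeof(uint64_t);

        free(malloc(100));  /* some allocation activity on this thread */

        /* Assumed mallctl names for the per-thread counters. */
        mallctl("thread.allocated", &alloc_bytes, &sz, NULL, 0);
        mallctl("thread.deallocated", &dealloc_bytes, &sz, NULL, 0);
        printf("allocated %" PRIu64 " B, deallocated %" PRIu64 " B\n",
               alloc_bytes, dealloc_bytes);
        return 0;
    }
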
1443 - Fix an allocation bug for small allocations that could be triggered if
1496 - Add support for allocation backed by one or more swap files, and allow the
1498 - Implement allocation profiling and leak checking.
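
On builds configured with --enable-prof, this profiling and leak checking
is driven by option strings rather than code changes. A sketch, assuming an
unprefixed build that reads the application-provided malloc_conf symbol and
supports the prof, prof_leak, and prof_final options:

    #include <stdlib.h>

    /* Assumption: the build reads this symbol at startup. prof:true
     * enables sampled allocation profiling, prof_leak:true reports
     * unfreed sampled objects, and prof_final:true dumps a profile when
     * the process exits. */
    const char *malloc_conf = "prof:true,prof_leak:true,prof_final:true";

    int main(void) {
        void *leaked = malloc(4096);  /* intentionally never freed */
        (void)leaked;
        return 0;  /* leak summary and final profile written at exit */
    }
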
1505 - Modify chunk allocation to work when address space layout randomization
1508 - Handle 0-size allocation requests in posix_memalign().
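
posix_memalign() reports failure through its return value rather than
errno, and a zero-size request is valid: per POSIX the result may be NULL
or a unique pointer, either of which can be handed to free(). A small
sketch:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        void *p = NULL;
        /* Zero-size request: this must succeed, with p set to either
         * NULL or a unique, freeable pointer. */
        int err = posix_memalign(&p, 64, 0);
        if (err != 0) {
            /* EINVAL/ENOMEM come back as the return value. */
            fprintf(stderr, "posix_memalign failed: %d\n", err);
            return 1;
        }
        free(p);
        return 0;
    }
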