Following are change highlights associated with official releases.  Important
bug fixes are all mentioned, but some internal enhancements are omitted here
for brevity.  Much more detail can be found in the git revision history:

    https://github.com/jemalloc/jemalloc

* 4.2.1 (June 8, 2016)

  Bug fixes:
  - Fix bootstrapping issues for configurations that require allocation during
    tsd initialization (e.g. --disable-tls).  (@cferris1000, @jasone)
  - Fix gettimeofday() version of nstime_update().  (@ronawho)
  - Fix Valgrind regressions in calloc() and chunk_alloc_wrapper().  (@ronawho)
  - Fix potential VM map fragmentation regression.  (@jasone)
  - Fix opt_zero-triggered in-place huge reallocation zeroing.  (@jasone)
  - Fix heap profiling context leaks in reallocation edge cases.  (@jasone)

* 4.2.0 (May 12, 2016)

  New features:
  - Add the arena.<i>.reset mallctl, which makes it possible to discard all of
    an arena's allocations in a single operation; see the sketch following
    this release's notes.  (@jasone)
  - Add the stats.retained and stats.arenas.<i>.retained statistics.  (@jasone)
  - Add the --with-version configure option.  (@jasone)
  - Support --with-lg-page values larger than the actual page size.  (@jasone)

  Optimizations:
  - Use pairing heaps rather than red-black trees for various hot data
    structures.  (@djwatson, @jasone)
  - Streamline fast paths of rtree operations.  (@jasone)
  - Optimize the fast paths of calloc() and [m,d,sd]allocx().  (@jasone)
  - Decommit unused virtual memory if the OS does not overcommit.  (@jasone)
  - Specify MAP_NORESERVE on Linux if [heuristic] overcommit is active, in
    order to avoid unfortunate interactions during fork(2).  (@jasone)

  Bug fixes:
  - Fix chunk accounting related to triggering gdump profiles.  (@jasone)
  - Link against librt for clock_gettime(2) if glibc < 2.17.  (@jasone)
  - Scale leak report summary according to sampling probability.  (@jasone)
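
  The arena.<i>.reset mallctl added in 4.2.0 can be exercised roughly as
  follows.  This is a hedged sketch, not text from the release notes: the use
  of an explicitly created arena ("arenas.extend"), the allocation size, and
  the error handling are illustrative assumptions.

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          unsigned arena_ind;
          size_t sz = sizeof(arena_ind);
          char cmd[64];

          /* Create an arena whose allocations we intend to discard. */
          if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
              return 1;

          /* Bypass the tcache so the arena owns the object directly. */
          void *p = mallocx(4096,
              MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE);
          if (p == NULL)
              return 1;

          /* Discard everything the arena owns in a single operation; p must
           * not be used or freed afterward. */
          snprintf(cmd, sizeof(cmd), "arena.%u.reset", arena_ind);
          mallctl(cmd, NULL, NULL, NULL, 0);
          (void)p;

          /* stats.retained (also new in 4.2.0) reports retained virtual
           * memory; advance the "epoch" mallctl first for fresh values. */
          size_t retained, retsz = sizeof(retained);
          if (mallctl("stats.retained", &retained, &retsz, NULL, 0) == 0)
              printf("retained: %zu bytes\n", retained);
          return 0;
      }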

* 4.1.1 (May 3, 2016)

  This bugfix release resolves a variety of mostly minor issues, though the
  bitmap fix is critical for 64-bit Windows.

  Bug fixes:
  - Fix the linear scan version of bitmap_sfu() to shift by the proper amount
    even when sizeof(long) is not the same as sizeof(void *), as on 64-bit
    Windows.  (@jasone)
  - Fix hashing functions to avoid unaligned memory accesses (and resulting
    crashes).  This is relevant at least to some ARM-based platforms.
    (@rkmisra)
  - Fix fork()-related lock rank ordering reversals.  These reversals were
    unlikely to cause deadlocks in practice except when heap profiling was
    enabled and active.  (@jasone)
  - Fix various chunk leaks in OOM code paths.  (@jasone)
  - Fix malloc_stats_print() to print opt.narenas correctly.  (@jasone)
  - Fix MSVC-specific build/test issues.  (@rustyx, @yuslepukhin)
  - Fix a variety of test failures that were due to test fragility rather than
    core bugs.  (@jasone)

* 4.1.0 (February 28, 2016)

  This release is primarily about optimizations, but it also incorporates a
  lot of portability-motivated refactoring and enhancements.  So much of the
  work was contributed by so many people that, starting with this release,
  changes are annotated with author credits to help reflect the collaborative
  effort involved (minor changes, and the people who reported and diagnosed
  issues, are omitted here; see the git revision history for details).

  New features:
  - Implement decay-based unused dirty page purging, a major optimization with
    mallctl API impact.  This is an alternative to the existing ratio-based
    unused dirty page purging, and is intended to eventually become the sole
    purging mechanism; see the sketch following this release's notes.  New
    mallctls:
    + opt.purge
    + opt.decay_time
    + arena.<i>.decay
    + arena.<i>.decay_time
    + arenas.decay_time
    + stats.arenas.<i>.decay_time
    (@jasone, @cevans87)
  - Add --with-malloc-conf, which makes it possible to embed a default
    options string during configuration.  This was motivated by the desire to
    specify --with-malloc-conf=purge:decay , since the default must remain
    purge:ratio until the 5.0.0 release.  (@jasone)
  - Add MS Visual Studio 2015 support.  (@rustyx, @yuslepukhin)
  - Make *allocx() size class overflow behavior defined.  The maximum size
    class is now less than PTRDIFF_MAX to protect applications against
    numerical overflow, and all allocation functions are guaranteed to
    indicate errors rather than potentially crashing if the request size
    exceeds the maximum size class.  (@jasone)
  - jeprof:
    + Add raw heap profile support.  (@jasone)
    + Add --retain and --exclude for backtrace symbol filtering.  (@jasone)

  Optimizations:
  - Optimize the fast path to combine various bootstrapping and configuration
    checks and execute more streamlined code in the common case.  (@interwq)
  - Use linear scan for small bitmaps (used for small object tracking).  In
    addition to speeding up bitmap operations on 64-bit systems, this reduces
    allocator metadata overhead by approximately 0.2%.  (@djwatson)
  - Separate arena_avail trees, which substantially speeds up run tree
    operations.  (@djwatson)
  - Use memoization (boot-time-computed table) for run quantization.  Separate
    arena_avail trees reduced the importance of this optimization.  (@jasone)
  - Attempt mmap-based in-place huge reallocation.  This can dramatically
    speed up incremental huge reallocation.  (@jasone)

  Incompatible changes:
  - Make opt.narenas unsigned rather than size_t.  (@jasone)

  Bug fixes:
  - Fix stats.cactive accounting regression.  (@rustyx, @jasone)
  - Handle unaligned keys in hash().  This caused problems for some ARM
    systems.  (@jasone, @cferris1000)
  - Refactor the arenas array.  In addition to fixing a fork-related deadlock,
    this makes arena lookups faster and simpler.  (@jasone)
  - Move retained memory allocation out of the default chunk allocation
    function, to a location that gets executed even if the application
    installs a custom chunk allocation function.  This resolves a virtual
    memory leak.  (@buchgr)
  - Fix a potential tsd cleanup leak.  (@cferris1000, @jasone)
  - Fix run quantization.  In practice this bug had no impact unless
    applications requested memory with alignment exceeding one page.
    (@jasone, @djwatson)
  - Fix LinuxThreads-specific bootstrapping deadlock.  (Cosmin Paraschiv)
  - jeprof:
    + Don't discard curl options if timeout is not defined.  (@djwatson)
    + Detect failed profile fetches.  (@djwatson)
  - Fix stats.arenas.<i>.{dss,lg_dirty_mult,decay_time,pactive,pdirty} for
    the --disable-stats case.  (@jasone)
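
  The decay-based purging added in 4.1.0 is driven through mallctl.  The
  following is a hedged sketch rather than part of the release notes: it
  assumes decay purging was activated (e.g. MALLOC_CONF=purge:decay or
  --with-malloc-conf=purge:decay), and the 10 second value is arbitrary.

      #include <stdio.h>
      #include <sys/types.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          const char *purge;
          size_t sz = sizeof(purge);

          /* Which purging mode is active: "ratio" (default) or "decay". */
          if (mallctl("opt.purge", &purge, &sz, NULL, 0) == 0)
              printf("opt.purge: %s\n", purge);

          /* Default decay time (seconds) used to initialize arenas created
           * from now on; -1 disables purging, 0 purges immediately. */
          ssize_t decay_time = 10;
          mallctl("arenas.decay_time", NULL, NULL, &decay_time,
              sizeof(decay_time));
          return 0;
      }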

* 4.0.4 (October 24, 2015)

  This bugfix release fixes another xallocx() regression.  No other
  regressions have come to light in over a month, so this is likely a good
  starting point for people who prefer to wait for "dot one" releases with all
  the major issues shaken out.

  Bug fixes:
  - Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of
    large allocations that have been randomly assigned an offset of 0 when the
    --enable-cache-oblivious configure option is enabled.

* 4.0.3 (September 24, 2015)

  This bugfix release continues the trend of xallocx() and heap profiling
  fixes.

  Bug fixes:
  - Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large
    allocations when the --enable-cache-oblivious configure option is enabled.
  - Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
    when resizing from/to a size class that is not a multiple of the chunk
    size.
  - Fix prof_tctx_dump_iter() to filter out nodes that were created after heap
    profile dumping started.
  - Work around a potentially bad thread-specific data initialization
    interaction with NPTL (glibc's pthreads implementation).

* 4.0.2 (September 21, 2015)

  This bugfix release addresses a few bugs specific to heap profiling.

  Bug fixes:
  - Fix ixallocx_prof_sample() to never modify nor create sampled small
    allocations.  xallocx() is in general incapable of moving small
    allocations, so this fix removes buggy code without loss of generality.
  - Fix irallocx_prof_sample() to always allocate large regions, even when
    alignment is non-zero.
  - Fix prof_alloc_rollback() to read tdata from thread-specific data rather
    than dereferencing a potentially invalid tctx.

* 4.0.1 (September 15, 2015)

  This is a bugfix release that is somewhat high risk due to the amount of
  refactoring required to address deep xallocx() problems.  As a side effect
  of these fixes, xallocx() now tries harder to partially fulfill requests for
  optional extra space.  Note that a couple of minor heap profiling
  optimizations are included, but these are better thought of as performance
  fixes that were integral to discovering most of the other bugs.

  Optimizations:
  - Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the
    fast path when heap profiling is enabled.  Additionally, split a special
    case out into arena_prof_tctx_reset(), which also avoids chunk metadata
    reads.
  - Optimize irallocx_prof() to optimistically update the sampler state.  The
    prior implementation appears to have been a holdover from when
    rallocx()/xallocx() functionality was combined as rallocm().

  Bug fixes:
  - Fix TLS configuration such that it is enabled by default for platforms on
    which it works correctly.
  - Fix arenas_cache_cleanup() and arena_get_hard() to handle
    allocation/deallocation within the application's thread-specific data
    cleanup functions even after arenas_cache is torn down.
  - Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.
  - Fix chunk purge hook calls for in-place huge shrinking reallocation to
    specify the old chunk size rather than the new chunk size.  This bug
    caused no correctness issues for the default chunk purge function, but was
    visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
  - Fix heap profiling bugs:
    + Fix heap profiling to distinguish among otherwise identical sample sites
      with interposed resets (triggered via the "prof.reset" mallctl).  This
      bug could cause data structure corruption that would most likely result
      in a segfault.
    + Fix irealloc_prof() to call prof_alloc_rollback() on OOM.
    + Make one call to prof_active_get_unlocked() per allocation event, and
      use the result throughout the relevant functions that handle an
      allocation event.  Also add a missing check in prof_realloc().  These
      fixes protect allocation events against concurrent prof_active changes.
    + Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
      in the correct order.
    + Fix prof_realloc() to call prof_free_sampled_object() after calling
      prof_malloc_sample_object().  Prior to this fix, if tctx and old_tctx
      were the same, the tctx could have been prematurely destroyed.
  - Fix portability bugs:
    + Don't bitshift by negative amounts when encoding/decoding run sizes in
      chunk header maps.  This affected systems with page sizes greater than
      8 KiB.
    + Rename index_t to szind_t to avoid an existing type on Solaris.
    + Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
      match glibc and avoid compilation errors when including both
      jemalloc/jemalloc.h and malloc.h in C++ code.
    + Don't assume that /bin/sh is appropriate when running size_classes.sh
      during configuration.
    + Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.
    + Link tests to librt if it contains clock_gettime(2).

* 4.0.0 (August 17, 2015)

  This version contains many speed and space optimizations, both minor and
  major.  The major themes are generalization, unification, and
  simplification.  Although many of these optimizations cause no visible
  behavior change, their cumulative effect is substantial.

  New features:
  - Normalize size class spacing to be consistent across the complete size
    range.  By default there are four size classes per size doubling, but
    this is now configurable via the --with-lg-size-class-group option.  Also
    add the --with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and
    --with-lg-tiny-min options, which can be used to tweak page and size class
    settings.  Impacts:
    + Worst case performance for incrementally growing/shrinking reallocation
      is improved because there are far fewer size classes, and therefore
      copying happens less often.
    + Internal fragmentation is limited to 20% for all but the smallest size
      classes (those less than four times the quantum).  (1B + 4 KiB) and
      (1B + 4 MiB) previously suffered nearly 50% internal fragmentation.
    + Chunk fragmentation tends to be lower because there are fewer distinct
      run sizes to pack.
  - Add support for explicit tcaches.  The "tcache.create", "tcache.flush",
    and "tcache.destroy" mallctls control tcache lifetime and flushing, and
    the MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API
    control which tcache is used for each operation.
  - Implement per thread heap profiling, as well as the ability to
    enable/disable heap profiling on a per thread basis.  Add the
    "prof.reset", "prof.lg_sample", "thread.prof.name", "thread.prof.active",
    "opt.prof_thread_active_init", "prof.thread_active_init", and
    "thread.prof.active" mallctls.
  - Add support for per arena application-specified chunk allocators,
    configured via the "arena.<i>.chunk_hooks" mallctl.
  - Refactor huge allocation to be managed by arenas, so that arenas now
    function as general purpose independent allocators.  This is important in
    the context of user-specified chunk allocators, aside from the scalability
    benefits.
    Related new statistics:
    + The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
      "stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
      mallctls provide high level per arena huge allocation statistics.
    + The "arenas.nhchunks", "arenas.hchunk.<i>.size",
      "stats.arenas.<i>.hchunks.<j>.nmalloc",
      "stats.arenas.<i>.hchunks.<j>.ndalloc",
      "stats.arenas.<i>.hchunks.<j>.nrequests", and
      "stats.arenas.<i>.hchunks.<j>.curhchunks" mallctls provide per size
      class statistics.
  - Add the 'util' column to malloc_stats_print() output, which reports the
    proportion of available regions that are currently in use for each small
    size class.
  - Add "alloc" and "free" modes for junk filling (see the "opt.junk"
    mallctl), so that it is possible to separately enable junk filling for
    allocation versus deallocation.
  - Add the jemalloc-config script, which provides information about how
    jemalloc was configured, and how to integrate it into application builds.
  - Add metadata statistics, which are accessible via the "stats.metadata",
    "stats.arenas.<i>.metadata.mapped", and
    "stats.arenas.<i>.metadata.allocated" mallctls.
  - Add the "stats.resident" mallctl, which reports the upper limit of
    physically resident memory mapped by the allocator.
  - Add per arena control over unused dirty page purging, via the
    "arenas.lg_dirty_mult", "arena.<i>.lg_dirty_mult", and
    "stats.arenas.<i>.lg_dirty_mult" mallctls.
  - Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump
    feature on/off during program execution.
  - Add sdallocx(), which implements sized deallocation.  The primary
    optimization over dallocx() is the removal of a metadata read, which often
    suffers an L1 cache miss.  (A sketch pairing sdallocx() with an explicit
    tcache follows this feature list.)
  - Add missing header includes in jemalloc/jemalloc.h, so that applications
    only have to #include <jemalloc/jemalloc.h>.
  - Add support for additional platforms:
    + Bitrig
    + Cygwin
    + DragonFlyBSD
    + iOS
    + OpenBSD
    + OpenRISC/or1k
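
  The following hedged sketch (not part of the 4.0.0 notes) combines two of
  the additions above: an explicit tcache created via the "tcache.create"
  mallctl and used through MALLOCX_TCACHE(), and sized deallocation via
  sdallocx().  The 128-byte size is an arbitrary illustration.

      #include <jemalloc/jemalloc.h>

      int main(void) {
          unsigned tc;
          size_t sz = sizeof(tc);

          if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
              return 1;

          /* Cache this allocation in the explicit tcache rather than the
           * thread's automatic one. */
          void *p = mallocx(128, MALLOCX_TCACHE(tc));
          if (p == NULL)
              return 1;

          /* Sized deallocation avoids a metadata read; the size passed here
           * must correspond to the original request. */
          sdallocx(p, 128, MALLOCX_TCACHE(tc));

          /* Flush the explicit tcache and release its identifier. */
          mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
          return 0;
      }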

  Optimizations:
  - Maintain dirty runs in per arena LRUs rather than in per arena trees of
    dirty-run-containing chunks.  In practice this change significantly
    reduces dirty page purging volume.
  - Integrate whole chunks into the unused dirty page purging machinery.  This
    reduces the cost of repeated huge allocation/deallocation, because it
    effectively introduces a cache of chunks.
  - Split the arena chunk map into two separate arrays, in order to increase
    cache locality for the frequently accessed bits.
  - Move small run metadata out of runs, into arena chunk headers.  This
    reduces run fragmentation, smaller runs reduce external fragmentation for
    small size classes, and packed (less uniformly aligned) metadata layout
    improves CPU cache set distribution.
  - Randomly distribute large allocation base pointer alignment relative to
    page boundaries in order to more uniformly utilize CPU cache sets.  This
    can be disabled via the --disable-cache-oblivious configure option, and
    queried via the "config.cache_oblivious" mallctl.
  - Micro-optimize the fast paths for the public API functions.
  - Refactor thread-specific data to reside in a single structure.  This
    assures that only a single TLS read is necessary per call into the public
    API.
  - Implement in-place huge allocation growing and shrinking.
  - Refactor rtree (radix tree for chunk lookups) to be lock-free, and make
    additional optimizations that reduce maximum lookup depth to one or two
    levels.  This resolves what was a concurrency bottleneck for per arena
    huge allocation, because a global data structure is critical for
    determining which arenas own which huge allocations.

  Incompatible changes:
  - Replace --enable-cc-silence with --disable-cc-silence to suppress spurious
    warnings by default.
  - Assure that the constness of malloc_usable_size()'s return type matches
    that of the system implementation.
  - Change the heap profile dump format to support per thread heap profiling,
    rename pprof to jeprof, and enhance it with the --thread=<n> option.  As a
    result, the bundled jeprof must now be used rather than the upstream
    (gperftools) pprof.
  - Disable "opt.prof_final" by default, in order to avoid atexit(3), which
    can internally deadlock on some platforms.
  - Change the "arenas.nlruns" mallctl type from size_t to unsigned.
  - Replace the "stats.arenas.<i>.bins.<j>.allocated" mallctl with
    "stats.arenas.<i>.bins.<j>.curregs".
  - Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
  - Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the
    MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.

  Removed features:
  - Remove the *allocm() API, which is superseded by the *allocx() API.
  - Remove the --enable-dss options, and make dss non-optional on all
    platforms which support sbrk(2).
  - Remove the "arenas.purge" mallctl, which was obsoleted by the
    "arena.<i>.purge" mallctl in 3.1.0.
  - Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically
    detects whether it is running inside Valgrind.
  - Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
    "stats.huge.ndalloc" mallctls.
  - Remove the --enable-mremap option.
  - Remove the "stats.chunks.current", "stats.chunks.total", and
    "stats.chunks.high" mallctls.

  Bug fixes:
  - Fix the cactive statistic to decrease (rather than increase) when active
    memory decreases.  This regression was first released in 3.5.0.
  - Fix OOM handling in memalign() and valloc().  A variant of this bug
    existed in all releases since 2.0.0, which introduced these functions.
  - Fix an OOM-related regression in arena_tcache_fill_small(), which could
    cause cache corruption on OOM.  This regression was present in all
    releases from 2.2.0 through 3.6.0.
  - Fix size class overflow handling for malloc(), posix_memalign(),
    memalign(), calloc(), and realloc() when profiling is enabled.
  - Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
    "secondary" precedence is specified, but sbrk(2) is not supported.
  - Fix fallback lg_floor() implementations to handle extremely large inputs.
  - Ensure the default purgeable zone is after the default zone on OS X.
  - Fix latent bugs in atomic_*().
  - Fix the "arena.<i>.dss" mallctl to handle read-only calls.
  - Fix tls_model configuration to enable the initial-exec model when
    possible.
  - Mark malloc_conf as a weak symbol so that the application can override it.
  - Correctly detect glibc's adaptive pthread mutexes.
  - Fix the --without-export configure option.

* 3.6.0 (March 31, 2014)

  This version contains a critical bug fix for a regression present in 3.5.0
  and 3.5.1.

  Bug fixes:
  - Fix a regression in arena_chunk_alloc() that caused crashes during
    small/large allocation if chunk allocation failed.  In the absence of this
    bug, chunk allocation failure would result in allocation failure, e.g.
    NULL return from malloc().  This regression was introduced in 3.5.0.
  - Fix gcc intrinsics-based backtracing by specifying -fno-omit-frame-pointer
    to gcc.  Note that the application (and all the libraries it links to)
    must also be compiled with this option for backtracing to be reliable.
  - Use dss allocation precedence for huge allocations as well as small/large
    allocations.
  - Fix test assertion failure message formatting.  This bug did not manifest
    on x86_64 systems because of implementation subtleties in va_list.
  - Fix inconsequential test failures for hash and SFMT code.

  New features:
  - Support heap profiling on FreeBSD.  This feature depends on the proc
    filesystem being mounted during heap profile dumping.

* 3.5.1 (February 25, 2014)

  This version primarily addresses minor bugs in test code.

  Bug fixes:
  - Configure Solaris/Illumos to use MADV_FREE.
  - Fix junk filling for mremap(2)-based huge reallocation.  This is only
    relevant if configuring with the --enable-mremap option specified.
  - Avoid compilation failure if the 'restrict' C99 keyword is not supported
    by the compiler.
  - Add a configure test for SSE2 rather than assuming it is usable on i686
    systems.  This fixes test compilation errors, especially on 32-bit Linux
    systems.
  - Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats
    unit test.
  - Fix/remove flawed alignment-related overflow tests.
  - Prevent compiler optimizations that could change backtraces in the
    prof_accum unit test.

* 3.5.0 (January 22, 2014)

  This version focuses on refactoring and automated testing, though it also
  includes some non-trivial heap profiling optimizations not mentioned below.

  New features:
  - Add the *allocx() API, which is a successor to the experimental *allocm()
    API.  The *allocx() functions are slightly simpler to use because they
    have fewer parameters, they directly return the results of primary
    interest, and mallocx()/rallocx() avoid the strict aliasing pitfall that
    allocm()/rallocm() share with posix_memalign().  Note that *allocm() is
    slated for removal in the next non-bugfix release.  (A brief sketch of the
    new calls follows this list.)
  - Add support for LinuxThreads.
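
  A brief, hedged sketch of the *allocx() calls described above (sizes, the
  extra amount, and flags are arbitrary illustrations, not part of the release
  notes):

      #include <jemalloc/jemalloc.h>

      int main(void) {
          /* Zeroed 100-byte allocation; returns NULL on failure. */
          void *p = mallocx(100, MALLOCX_ZERO);
          if (p == NULL)
              return 1;

          /* Try to grow in place to 200 bytes, accepting up to 56 optional
           * extra bytes; the resulting usable size is returned directly. */
          size_t usize = xallocx(p, 200, 56, 0);

          /* Move-if-necessary reallocation, then a size query. */
          void *q = rallocx(p, 400, 0);
          if (q != NULL)
              p = q;
          usize = sallocx(p, 0);
          (void)usize;

          dallocx(p, 0);
          return 0;
      }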

  Bug fixes:
  - Unless heap profiling is enabled, disable floating point code and don't
    link with libm.  This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on
    x64 systems, makes it possible to completely disable floating point
    register use.  Some versions of glibc neglect to save/restore caller-saved
    floating point registers during dynamic lazy symbol loading, and the
    symbol loading code uses whatever malloc the application happens to have
    linked/loaded with, the result being potential floating point register
    corruption.
  - Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling
    backtrace creation in imemalign().  This bug impacted posix_memalign() and
    aligned_alloc().
  - Fix a file descriptor leak in a prof_dump_maps() error path.
  - Fix prof_dump() to close the dump file descriptor for all relevant error
    paths.
  - Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for
    allocation, not just deallocation.
  - Fix a data race for large allocation stats counters.
  - Fix a potential infinite loop during thread exit.  This bug occurred on
    Solaris, and could affect other platforms with similar pthreads TSD
    implementations.
  - Don't junk-fill reallocations unless usable size changes.  This fixes a
    violation of the *allocx()/*allocm() semantics.
  - Fix growing large reallocation to junk fill new space.
  - Fix huge deallocation to junk fill when munmap is disabled.
  - Change the default private namespace prefix from empty to je_, and change
    --with-private-namespace-prefix so that it prepends an additional prefix
    rather than replacing je_.  This reduces the likelihood of applications
    which statically link jemalloc experiencing symbol name collisions.
  - Add missing private namespace mangling (relevant when
    --with-private-namespace is specified).
  - Add and use JEMALLOC_INLINE_C so that static inline functions are marked
    as static even for debug builds.
  - Add a missing mutex unlock in a malloc_init_hard() error path.  In
    practice this error path is never executed.
  - Fix numerous bugs in malloc_strtoumax() error handling/reporting.  These
    bugs had no impact except for malformed inputs.
  - Fix numerous bugs in malloc_snprintf().  These bugs were not exercised by
    existing calls, so they had no impact.

* 3.4.1 (October 20, 2013)

  Bug fixes:
  - Fix a race in the "arenas.extend" mallctl that could cause memory
    corruption of internal data structures and subsequent crashes.
  - Fix Valgrind integration flaws that caused Valgrind warnings about reads
    of uninitialized memory in:
    + arena chunk headers
    + internal zero-initialized data structures (relevant to tcache and prof
      code)
  - Preserve errno during the first allocation.  A readlink(2) call during
    initialization fails unless /etc/malloc.conf exists, so errno was
    typically set during the first allocation prior to this fix.
  - Fix compilation warnings reported by gcc 4.8.1.

* 3.4.0 (June 2, 2013)

  This version is essentially a small bugfix release, but the addition of
  aarch64 support requires that the minor version be incremented.

  Bug fixes:
  - Fix race-triggered deadlocks in chunk_record().  These deadlocks were
    typically triggered by multiple threads concurrently deallocating huge
    objects.

  New features:
  - Add support for the aarch64 architecture.

* 3.3.1 (March 6, 2013)

  This version fixes bugs that are typically encountered only when utilizing
  custom run-time options.

  Bug fixes:
  - Fix a locking order bug that could cause deadlock during fork if heap
    profiling were enabled.
  - Fix a chunk recycling bug that could cause the allocator to lose track of
    whether a chunk was zeroed.  On FreeBSD, NetBSD, and OS X, it could cause
    corruption if allocating via sbrk(2) (unlikely unless running with the
    "dss:primary" option specified).  This was completely harmless on Linux
    unless using mlockall(2) (and unlikely even then, unless the
    --disable-munmap configure option or the "dss:primary" option was
    specified).  This regression was introduced in 3.1.0 by the
    mlockall(2)/madvise(2) interaction fix.
  - Fix TLS-related memory corruption that could occur during thread exit if
    the thread never allocated memory.
    Only the quarantine and prof facilities were susceptible.
  - Fix two quarantine bugs:
    + Internal reallocation of the quarantined object array leaked the old
      array.
    + Reallocation failure for internal reallocation of the quarantined object
      array (very unlikely) resulted in memory corruption.
  - Fix Valgrind integration to annotate all internally allocated memory in a
    way that keeps Valgrind happy about internal data structure access.
  - Fix building for s390 systems.

* 3.3.0 (January 23, 2013)

  This version includes a few minor performance improvements in addition to
  the listed new features and bug fixes.

  New features:
  - Add clipping support to lg_chunk option processing.
  - Add the --enable-ivsalloc option.
  - Add the --without-export option.
  - Add the --disable-zone-allocator option.

  Bug fixes:
  - Fix the "arenas.extend" mallctl to output the number of arenas.
  - Fix chunk_recycle() to unconditionally inform Valgrind that returned
    memory is undefined.
  - Fix build break on FreeBSD related to alloca.h.

* 3.2.0 (November 9, 2012)

  In addition to a couple of bug fixes, this version modifies page run
  allocation and dirty page purging algorithms in order to better control
  page-level virtual memory fragmentation.

  Incompatible changes:
  - Change the "opt.lg_dirty_mult" default from 5 to 3 (32:1 to 8:1).

  Bug fixes:
  - Fix dss/mmap allocation precedence code to use recyclable mmap memory only
    after primary dss allocation fails.
  - Fix deadlock in the "arenas.purge" mallctl.  This regression was
    introduced in 3.1.0 by the addition of the "arena.<i>.purge" mallctl.

* 3.1.0 (October 16, 2012)

  New features:
  - Auto-detect whether running inside Valgrind, thus removing the need to
    manually specify MALLOC_CONF=valgrind:true.
  - Add the "arenas.extend" mallctl, which allows applications to create
    manually managed arenas; see the sketch following this release's notes.
  - Add the ALLOCM_ARENA() flag for {,r,d}allocm().
  - Add the "opt.dss", "arena.<i>.dss", and "stats.arenas.<i>.dss" mallctls,
    which provide control over dss/mmap precedence.
  - Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
  - Define LG_QUANTUM for hppa.

  Incompatible changes:
  - Disable tcache by default if running inside Valgrind, in order to avoid
    making unallocated objects appear reachable to Valgrind.
  - Drop const from malloc_usable_size() argument on Linux.

  Bug fixes:
  - Fix heap profiling crash if sampled object is freed via realloc(p, 0).
  - Remove const from __*_hook variable declarations, so that glibc can modify
    them during process forking.
  - Fix mlockall(2)/madvise(2) interaction.
  - Fix fork(2)-related deadlocks.
  - Fix error return value for "thread.tcache.enabled" mallctl.
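
  A hedged sketch of using the new "arenas.extend" mallctl (the thread binding
  via the pre-existing "thread.arena" mallctl and the allocation below are
  illustrative assumptions; ALLOCM_ARENA() offers per-call control through the
  experimental *allocm() API instead):

      #include <stdlib.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          unsigned arena_ind;
          size_t sz = sizeof(arena_ind);

          /* Create a manually managed arena; its index is returned. */
          if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
              return 1;

          /* Route this thread's subsequent allocations to that arena. */
          if (mallctl("thread.arena", NULL, NULL, &arena_ind,
              sizeof(arena_ind)) != 0)
              return 1;

          void *p = malloc(4096);
          free(p);
          return 0;
      }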

* 3.0.0 (May 11, 2012)

  Although this version adds some major new features, the primary focus is on
  internal code cleanup that facilitates maintainability and portability, most
  of which is not reflected in the ChangeLog.  This is the first release to
  incorporate substantial contributions from numerous other developers, and
  the result is a more broadly useful allocator (see the git revision history
  for contribution details).  Note that the license has been unified, thanks
  to Facebook granting a license under the same terms as the other copyright
  holders (see COPYING).

  New features:
  - Implement Valgrind support, redzones, and quarantine.
  - Add support for additional platforms:
    + FreeBSD
    + Mac OS X Lion
    + MinGW
    + Windows (no support yet for replacing the system malloc)
  - Add support for additional architectures:
    + MIPS
    + SH4
    + Tilera
  - Add support for cross compiling.
  - Add nallocm(), which rounds a request size up to the nearest size class
    without actually allocating.
  - Implement aligned_alloc() (blame C11).
  - Add the "thread.tcache.enabled" mallctl.
  - Add the "opt.prof_final" mallctl.
  - Update pprof (from gperftools 2.0).
  - Add the --with-mangling option.
  - Add the --disable-experimental option.
  - Add the --disable-munmap option, and make it the default on Linux.
  - Add the --enable-mremap option, which disables use of mremap(2) by
    default.

  Incompatible changes:
  - Enable stats by default.
  - Enable fill by default.
  - Disable lazy locking by default.
  - Rename the "tcache.flush" mallctl to "thread.tcache.flush".
  - Rename the "arenas.pagesize" mallctl to "arenas.page".
  - Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
  - Change the "opt.prof_accum" default from true to false.

  Removed features:
  - Remove the swap feature, including the "config.swap", "swap.avail",
    "swap.prezeroed", "swap.nfds", and "swap.fds" mallctls.
  - Remove highruns statistics, including the
    "stats.arenas.<i>.bins.<j>.highruns" and
    "stats.arenas.<i>.lruns.<j>.highruns" mallctls.
  - As part of small size class refactoring, remove the "opt.lg_[qc]space_max",
    "arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and
    "arenas.[tqcs]bins" mallctls.
  - Remove the "arenas.chunksize" mallctl.
  - Remove the "opt.lg_prof_tcmax" option.
  - Remove the "opt.lg_prof_bt_max" option.
  - Remove the "opt.lg_tcache_gc_sweep" option.
  - Remove the --disable-tiny option, including the "config.tiny" mallctl.
  - Remove the --enable-dynamic-page-shift configure option.
  - Remove the --enable-sysv configure option.

  Bug fixes:
  - Fix a statistics-related bug in the "thread.arena" mallctl that could
    cause invalid statistics and crashes.
  - Work around TLS deallocation via free() on Linux.  This bug could cause
    write-after-free memory corruption.
  - Fix a potential deadlock that could occur during interval- and
    growth-triggered heap profile dumps.
  - Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags.
  - Fix chunk_alloc_dss() to stop claiming memory is zeroed.  This bug could
    cause memory corruption and crashes with --enable-dss specified.
  - Fix fork-related bugs that could cause deadlock in children between fork
    and exec.
  - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.
  - Fix realloc(p, 0) to act like free(p).
  - Do not enforce minimum alignment in memalign().
  - Check for NULL pointer in malloc_usable_size().
  - Fix an off-by-one heap profile statistics bug that could be observed in
    interval- and growth-triggered heap profiles.
  - Fix the "epoch" mallctl to update cached stats even if the passed in epoch
    is 0.
  - Fix bin->runcur management to fix a layout policy bug.  This bug did not
    affect correctness.
  - Fix a bug in choose_arena_hard() that potentially caused more arenas to be
    initialized than necessary.
  - Add missing "opt.lg_tcache_max" mallctl implementation.
  - Use glibc allocator hooks to make mixed allocator usage less likely.
  - Fix build issues for --disable-tcache.
  - Don't mangle pthread_create() when --with-private-namespace is specified.

* 2.2.5 (November 14, 2011)

  Bug fixes:
  - Fix huge_ralloc() race when using mremap(2).  This is a serious bug that
    could cause memory corruption and/or crashes.
  - Fix huge_ralloc() to maintain chunk statistics.
  - Fix malloc_stats_print(..., "a") output.

* 2.2.4 (November 5, 2011)

  Bug fixes:
  - Initialize arenas_tsd before using it.  This bug existed for 2.2.[0-3], as
    well as for --disable-tls builds in earlier releases.
  - Do not assume a 4 KiB page size in test/rallocm.c.

* 2.2.3 (August 31, 2011)

  This version fixes numerous bugs related to heap profiling.

  Bug fixes:
  - Fix a prof-related race condition.  This bug could cause memory
    corruption, but only occurred in non-default configurations
    (prof_accum:false).
  - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is
    excluded from backtraces).
  - Fix a prof-related bug in realloc() (only triggered by OOM errors).
  - Fix prof-related bugs in allocm() and rallocm().
  - Fix prof_tdata_cleanup() for --disable-tls builds.
  - Fix a relative include path, to fix objdir builds.

* 2.2.2 (July 30, 2011)

  Bug fixes:
  - Fix a build error for --disable-tcache.
  - Fix assertions in arena_purge() (for real this time).
  - Add the --with-private-namespace option.  This is a workaround for symbol
    conflicts that can inadvertently arise when using static libraries.

* 2.2.1 (March 30, 2011)

  Bug fixes:
  - Implement atomic operations for x86/x64.  This fixes compilation failures
    for versions of gcc that are still in wide use.
  - Fix an assertion in arena_purge().

* 2.2.0 (March 22, 2011)

  This version incorporates several improvements to algorithms and data
  structures that tend to reduce fragmentation and increase speed.

  New features:
  - Add the "stats.cactive" mallctl.
  - Update pprof (from google-perftools 1.7).
  - Improve backtracing-related configuration logic, and add the
    --disable-prof-libgcc option.

  Bug fixes:
  - Change default symbol visibility from "internal" to "hidden", which
    decreases the overhead of library-internal function calls.
  - Fix symbol visibility so that it is also set on OS X.
  - Fix a build dependency regression caused by the introduction of the .pic.o
    suffix for PIC object files.
  - Add missing checks for mutex initialization failures.
  - Don't use libgcc-based backtracing except on x64, where it is known to
    work.
  - Fix deadlocks on OS X that were due to memory allocation in
    pthread_mutex_lock().
  - Heap profiling-specific fixes:
    + Fix memory corruption due to integer overflow in small region index
      computation, when using a small enough sample interval that profiling
      context pointers are stored in small run headers.
    + Fix a bootstrap ordering bug that only occurred with TLS disabled.
    + Fix a rallocm() rsize bug.
    + Fix error detection bugs for aligned memory allocation.

* 2.1.3 (March 14, 2011)

  Bug fixes:
  - Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl
    fix for OS X in 2.1.2).
  - Fix a "thread.arena" mallctl bug.
  - Fix a thread cache stats merging bug.

* 2.1.2 (March 2, 2011)

  Bug fixes:
  - Fix "thread.{de,}allocatedp" mallctl for OS X.
  - Add missing jemalloc.a to build system.

* 2.1.1 (January 31, 2011)

  Bug fixes:
  - Fix aligned huge reallocation (affected allocm()).
  - Fix the ALLOCM_LG_ALIGN macro definition.
  - Fix a heap dumping deadlock.
  - Fix a "thread.arena" mallctl bug.

* 2.1.0 (December 3, 2010)

  This version incorporates some optimizations that can't quite be considered
  bug fixes.

  New features:
  - Use Linux's mremap(2) for huge object reallocation when possible.
  - Avoid locking in mallctl*() when possible.
  - Add the "thread.[de]allocatedp" mallctls.
  - Convert the manual page source from roff to DocBook, and generate both
    roff and HTML manuals.

  Bug fixes:
  - Fix a crash due to incorrect bootstrap ordering.  This only impacted
    --enable-debug --enable-dss configurations.
  - Fix a minor statistics bug for mallctl("swap.avail", ...).

* 2.0.1 (October 29, 2010)

  Bug fixes:
  - Fix a race condition in heap profiling that could cause undefined behavior
    if "opt.prof_accum" were disabled.
  - Add missing mutex unlocks for some OOM error paths in the heap profiling
    code.
  - Fix a compilation error for non-C99 builds.

* 2.0.0 (October 24, 2010)

  This version focuses on the experimental *allocm() API, and on improved
  run-time configuration/introspection.  Nonetheless, numerous performance
  improvements are also included.

  New features:
  - Implement the experimental {,r,s,d}allocm() API, which provides a superset
    of the functionality available via malloc(), calloc(), posix_memalign(),
    realloc(), malloc_usable_size(), and free().  These functions can be used
    to allocate/reallocate aligned zeroed memory, ask for optional extra
    memory during reallocation, prevent object movement during reallocation,
    etc.
  - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is
    more human-readable, and more flexible.  For example:
      JEMALLOC_OPTIONS=AJP
    is now:
      MALLOC_CONF=abort:true,fill:true,stats_print:true
  - Port to Apple OS X.  Sponsored by Mozilla.
  - Make it possible for the application to control thread-->arena mappings
    via the "thread.arena" mallctl.
  - Add compile-time support for all TLS-related functionality via pthreads
    TSD.  This is mainly of interest for OS X, which does not support TLS, but
    has a TSD implementation with similar performance.
  - Override memalign() and valloc() if they are provided by the system.
  - Add the "arenas.purge" mallctl, which can be used to synchronously purge
    all dirty unused pages.
  - Make cumulative heap profiling data optional, so that it is possible to
    limit the amount of memory consumed by heap profiling data structures.
  - Add per thread allocation counters that can be accessed via the
    "thread.allocated" and "thread.deallocated" mallctls; see the sketch
    following this list.
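
  The per thread counters added above can be read via mallctl.  A hedged
  sketch (it assumes a build with statistics enabled; the fprintf reporting is
  an illustrative assumption):

      #include <stdio.h>
      #include <stdint.h>
      #include <stdlib.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          void *p = malloc(1024);

          /* Cumulative bytes allocated/deallocated by the calling thread. */
          uint64_t alloc, dealloc;
          size_t sz = sizeof(uint64_t);
          mallctl("thread.allocated", &alloc, &sz, NULL, 0);
          mallctl("thread.deallocated", &dealloc, &sz, NULL, 0);
          fprintf(stderr, "allocated: %llu, deallocated: %llu\n",
              (unsigned long long)alloc, (unsigned long long)dealloc);

          free(p);
          return 0;
      }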

  Incompatible changes:
  - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above).
  - Increase the default backtrace depth from 4 to 128 for heap profiling.
  - Disable interval-based profile dumps by default.

  Bug fixes:
  - Remove bad assertions in fork handler functions.  These assertions could
    cause aborts for some combinations of configure settings.
  - Fix strerror_r() usage to deal with non-standard semantics in GNU libc.
  - Fix leak context reporting.  This bug tended to cause the number of
    contexts to be underreported (though the reported number of objects and
    bytes were correct).
  - Fix a realloc() bug for large in-place growing reallocation.  This bug
    could cause memory corruption, but it was hard to trigger.
  - Fix an allocation bug for small allocations that could be triggered if
    multiple threads raced to create a new run of backing pages.
  - Enhance the heap profiler to trigger samples based on usable size, rather
    than request size.
  - Fix a heap profiling bug due to sometimes losing track of requested object
    size for sampled objects.

* 1.0.3 (August 12, 2010)

  Bug fixes:
  - Fix the libunwind-based implementation of stack backtracing (used for heap
    profiling).  This bug could cause zero-length backtraces to be reported.
  - Add a missing mutex unlock in library initialization code.  If multiple
    threads raced to initialize malloc, some of them could end up permanently
    blocked.

* 1.0.2 (May 11, 2010)

  Bug fixes:
  - Fix junk filling of large objects, which could cause memory corruption.
  - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual
    memory limits could cause swap file configuration to fail.  Contributed by
    Jordan DeLong.

* 1.0.1 (April 14, 2010)

  Bug fixes:
  - Fix compilation when --enable-fill is specified.
  - Fix threads-related profiling bugs that affected accuracy and caused
    memory to be leaked during thread exit.
  - Fix dirty page purging race conditions that could cause crashes.
  - Fix crash in tcache flushing code during thread destruction.

* 1.0.0 (April 11, 2010)

  This release focuses on speed and run-time introspection.  Numerous
  algorithmic improvements make this release substantially faster than its
  predecessors.

  New features:
  - Implement autoconf-based configuration system.
  - Add mallctl*(), for the purposes of introspection and run-time
    configuration.
  - Make it possible for the application to manually flush a thread's cache,
    via the "tcache.flush" mallctl.
  - Base maximum dirty page count on proportion of active memory.
  - Compute various additional run-time statistics, including per size class
    statistics for large objects.
  - Expose malloc_stats_print(), which can be called repeatedly by the
    application (see the sketch following this release's notes).
  - Simplify the malloc_message() signature to only take one string argument,
    and incorporate an opaque data pointer argument for use by the application
    in combination with malloc_stats_print().
  - Add support for allocation backed by one or more swap files, and allow the
    application to disable over-commit if swap files are in use.
  - Implement allocation profiling and leak checking.

  Removed features:
  - Remove the dynamic arena rebalancing code, since thread-specific caching
    reduces its utility.

  Bug fixes:
  - Modify chunk allocation to work when address space layout randomization
    (ASLR) is in use.
  - Fix thread cleanup bugs related to TLS destruction.
  - Handle 0-size allocation requests in posix_memalign().
  - Fix a chunk leak.  The leaked chunks were never touched, so this impacted
    virtual memory usage, but not physical memory usage.
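
  The callback/opaque-pointer interface described in the 1.0.0 notes can be
  used roughly as follows.  This is a hedged sketch; writing the report to a
  file named "jemalloc-stats.txt" is an illustrative assumption, and the opts
  string is left NULL for the default report.

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      /* Application-supplied writer; the opaque pointer passed to
       * malloc_stats_print() is forwarded to each invocation. */
      static void write_cb(void *opaque, const char *msg) {
          fputs(msg, (FILE *)opaque);
      }

      int main(void) {
          /* Default form: the report goes through malloc_message(). */
          malloc_stats_print(NULL, NULL, NULL);

          /* Redirect the report to an application-chosen stream. */
          FILE *f = fopen("jemalloc-stats.txt", "w");
          if (f != NULL) {
              malloc_stats_print(write_cb, f, NULL);
              fclose(f);
          }
          return 0;
      }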

* linux_2008082[78]a (August 27/28, 2008)

  These snapshot releases are the simple result of incorporating
  Linux-specific support into the FreeBSD malloc sources.

--------------------------------------------------------------------------------
vim:filetype=text:textwidth=80