This document summarizes the common approaches for performance fine tuning with
jemalloc (as of 5.1.0).  The default configuration of jemalloc tends to work
reasonably well in practice, and most applications should not have to tune any
options.  However, in order to cover a wide range of applications and avoid
pathological cases, the default settings are sometimes kept conservative and
suboptimal, even for many common workloads.  When jemalloc is properly tuned for
a specific application / workload, it is common to improve system-level metrics
by a few percent, or make favorable trade-offs.

## Notable runtime options for performance tuning

Runtime options can be set via
[malloc_conf](http://jemalloc.net/jemalloc.3.html#tuning).

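For example, an application can compile the option string in by defining the
`malloc_conf` symbol, or supply the same string through the `MALLOC_CONF`
environment variable.  A minimal sketch in C; the option values here are
illustrative, not recommendations:

```c
#include <stdlib.h>

/*
 * Option string read by jemalloc during its initialization.  Equivalently,
 * the same string can be supplied at run time, e.g.:
 *   MALLOC_CONF="background_thread:true,metadata_thp:auto" ./app
 */
const char *malloc_conf = "background_thread:true,metadata_thp:auto";

int main(void) {
    /* The first allocation triggers jemalloc initialization (if it has not
     * already happened), at which point malloc_conf is parsed. */
    void *p = malloc(4096);
    free(p);
    return 0;
}
```
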
* [background_thread](http://jemalloc.net/jemalloc.3.html#background_thread)

    Enabling jemalloc background threads generally improves the tail latency for
    application threads, since unused memory purging is shifted to the dedicated
    background threads.  In addition, unintended purging delay caused by
    application inactivity is avoided with background threads.

    Suggested: `background_thread:true` when jemalloc managed threads can be
    allowed (the option can also be enabled at run time; see the `mallctl()`
    sketch after this list).

* [metadata_thp](http://jemalloc.net/jemalloc.3.html#opt.metadata_thp)

    Allowing jemalloc to utilize transparent huge pages for its internal
    metadata usually reduces TLB misses significantly, especially for programs
    with a large memory footprint and frequent allocation / deallocation
    activity.  Metadata memory usage may increase due to the use of huge
    pages.

    Suggested for allocation-intensive programs: `metadata_thp:auto` or
    `metadata_thp:always`, which is expected to improve CPU utilization at a
    small memory cost.

* [dirty_decay_ms](http://jemalloc.net/jemalloc.3.html#opt.dirty_decay_ms) and
  [muzzy_decay_ms](http://jemalloc.net/jemalloc.3.html#opt.muzzy_decay_ms)

    Decay time determines how fast jemalloc returns unused pages back to the
    operating system, and therefore provides a fairly straightforward trade-off
    between CPU and memory usage.  A shorter decay time purges unused pages
    faster and reduces memory usage (usually at the cost of more CPU cycles
    spent on purging), and vice versa.

    Suggested: tune the values based on the desired trade-offs (the sketch
    after this list shows adjusting decay time at run time).

* [narenas](http://jemalloc.net/jemalloc.3.html#opt.narenas)

    By default jemalloc uses multiple arenas to reduce internal lock contention.
    However, a high arena count may also increase overall memory fragmentation,
    since arenas manage memory independently.  When a high degree of parallelism
    is not expected at the allocator level, a lower arena count often improves
    memory usage.

    Suggested: if low parallelism is expected, try a lower arena count while
    monitoring CPU and memory usage.

* [percpu_arena](http://jemalloc.net/jemalloc.3.html#opt.percpu_arena)

    Enables dynamic thread-to-arena association based on the CPU a thread is
    running on.  This has the potential to improve locality, e.g. when
    thread-to-CPU affinity is present.

    Suggested: try `percpu_arena:percpu` or `percpu_arena:phycpu` if
    thread migration between processors is expected to be infrequent.

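Several of these options can also be adjusted after startup through
[mallctl()](http://jemalloc.net/jemalloc.3.html#mallctl).  The sketch below
(referenced from the `background_thread` and decay entries above) enables
background threads and relaxes dirty page decay at run time.  It is a minimal
example: error handling is omitted, and it assumes the stand-alone
`<jemalloc/jemalloc.h>` header (on FreeBSD, most of the same interfaces are
declared in `<malloc_np.h>`).

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>
#include <jemalloc/jemalloc.h>  /* On FreeBSD: #include <malloc_np.h> */

int main(void) {
    /* Enable background purging threads at run time. */
    bool enable = true;
    mallctl("background_thread", NULL, NULL, &enable, sizeof(enable));

    /* Relax dirty page decay to 30 seconds for all existing arenas. */
    ssize_t decay_ms = 30000;
    char name[64];
    snprintf(name, sizeof(name), "arena.%u.dirty_decay_ms",
        (unsigned)MALLCTL_ARENAS_ALL);
    mallctl(name, NULL, NULL, &decay_ms, sizeof(decay_ms));

    return 0;
}
```

`mallctl()` returns 0 on success and an errno-style value on failure, which
the sketch ignores for brevity.
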
Examples:

* High resource consumption application, prioritizing CPU utilization:

    `background_thread:true,metadata_thp:auto` combined with relaxed decay time
    (increased `dirty_decay_ms` and / or `muzzy_decay_ms`,
    e.g. `dirty_decay_ms:30000,muzzy_decay_ms:30000`).

* High resource consumption application, prioritizing memory usage:

    `background_thread:true` combined with shorter decay time (decreased
    `dirty_decay_ms` and / or `muzzy_decay_ms`,
    e.g. `dirty_decay_ms:5000,muzzy_decay_ms:5000`), and lower arena count
    (e.g. number of CPUs).

* Low resource consumption application:

    `narenas:1,lg_tcache_max:13` combined with shorter decay time (decreased
    `dirty_decay_ms` and / or `muzzy_decay_ms`, e.g.
    `dirty_decay_ms:1000,muzzy_decay_ms:0`).

* Extremely conservative -- minimize memory usage at all costs, only suitable
  when allocation activity is very rare:

    `narenas:1,tcache:false,dirty_decay_ms:0,muzzy_decay_ms:0`

Note that it is recommended to combine the options with `abort_conf:true`,
which aborts immediately on invalid options.
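As an illustrative sketch (not a recommendation for any particular workload),
the "prioritizing memory usage" example above could be compiled in as follows;
`narenas:8` is an assumption standing in for "number of CPUs":

```c
/*
 * Illustrative only: the "prioritizing memory usage" example from above,
 * with abort_conf:true added so that an invalid or misspelled option makes
 * the process abort at startup instead of being silently ignored.
 * narenas:8 assumes an 8-CPU machine.
 */
const char *malloc_conf =
    "abort_conf:true,background_thread:true,"
    "dirty_decay_ms:5000,muzzy_decay_ms:5000,narenas:8";
```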

## Beyond runtime options

In addition to the runtime options, there are a number of programmatic ways to
improve application performance with jemalloc.

* [Explicit arenas](http://jemalloc.net/jemalloc.3.html#arenas.create)

    Manually created arenas can help performance in various ways, e.g. by
    managing locality and contention for specific usages.  For example,
    applications can explicitly allocate frequently accessed objects from a
    dedicated arena with
    [mallocx()](http://jemalloc.net/jemalloc.3.html#MALLOCX_ARENA) to improve
    locality.  In addition, explicit arenas often benefit from individually
    tuned options, e.g. relaxed [decay
    time](http://jemalloc.net/jemalloc.3.html#arena.i.dirty_decay_ms) if
    frequent reuse is expected.  (A combined sketch follows this list.)

* [Extent hooks](http://jemalloc.net/jemalloc.3.html#arena.i.extent_hooks)

    Extent hooks allow customization for managing underlying memory.  One use
    case for performance purposes is to utilize huge pages -- for example,
    [HHVM](https://github.com/facebook/hhvm/blob/master/hphp/util/alloc.cpp)
    uses explicit arenas with customized extent hooks to manage 1GB huge pages
    for frequently accessed data, which reduces TLB misses significantly.

* [Explicit thread-to-arena
  binding](http://jemalloc.net/jemalloc.3.html#thread.arena)

    It is common for some threads in an application to have different memory
    access / allocation patterns.  Threads with heavy workloads often benefit
    from explicit binding, e.g. binding very active threads to dedicated arenas
    may reduce contention at the allocator level (see the sketch after this
    list).
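
To make the explicit arena and thread binding items above concrete, the sketch
below creates a dedicated arena, gives it an individually tuned decay time,
allocates from it explicitly, and binds the calling thread to it.  As before,
error handling is omitted, the sizes and decay value are illustrative, and the
header assumption matches the earlier `mallctl()` sketch.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <jemalloc/jemalloc.h>  /* On FreeBSD: #include <malloc_np.h> */

int main(void) {
    /* Create an explicit arena (default extent hooks). */
    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);
    mallctl("arenas.create", &arena_ind, &sz, NULL, 0);

    /* Give the new arena a relaxed, individually tuned dirty decay time. */
    ssize_t decay_ms = 60000;
    char name[64];
    snprintf(name, sizeof(name), "arena.%u.dirty_decay_ms", arena_ind);
    mallctl(name, NULL, NULL, &decay_ms, sizeof(decay_ms));

    /* Allocate a frequently accessed object from the dedicated arena. */
    void *hot = mallocx(1 << 20, MALLOCX_ARENA(arena_ind));

    /* Bind the calling thread so its regular allocations use the arena. */
    mallctl("thread.arena", NULL, NULL, &arena_ind, sizeof(arena_ind));

    dallocx(hot, 0);
    return 0;
}
```

Whether a dedicated arena actually pays off is workload dependent;
`malloc_stats_print()` can be used to inspect per-arena behavior.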