.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
.\" Use is subject to license terms.
.\"
.\" Copyright (c) 2012, 2015 by Delphix. All rights reserved.
.\" Copyright (c) 2012, Joyent, Inc. All rights reserved.
.\"
.\" The text of this manual page is derived from section 1 of the big theory
.\" statement in usr/src/uts/common/os/vmem.c, the traditional location of this
.\" text.  The two should largely be updated in tandem.
.Dd Jan 18, 2017
.Dt VMEM 9
.Os
.Sh NAME
.Nm vmem
.Nd virtual memory allocator
.Sh DESCRIPTION
.Ss Overview
An address space is divided into a number of logically distinct pieces, or
.Em arenas :
text, data, heap, stack, and so on.
Within these
arenas we often subdivide further; for example, we use heap addresses
not only for the kernel heap
.Po
.Fn kmem_alloc
space
.Pc ,
but also for DVMA,
.Fn bp_mapin ,
.Pa /dev/kmem ,
and even some device mappings.
.Pp
The kernel address space, therefore, is most accurately described as
a tree of arenas in which each node of the tree
.Em imports
some subset of its parent.
The virtual memory allocator manages these arenas
and supports their natural hierarchical structure.
.Ss Arenas
An arena is nothing more than a set of integers.
These integers most commonly represent virtual addresses, but in fact they can
represent anything at all.
For example, we could use an arena containing the integers minpid through maxpid
to allocate process IDs.
For uses of this nature, prefer
.Xr id_space 9F
instead.
.Pp
.Fn vmem_create
and
.Fn vmem_destroy
create and destroy vmem arenas.
In order to differentiate between arenas used for addresses and arenas used for
identifiers, the
.Dv VMC_IDENTIFIER
flag is passed to
.Fn vmem_create .
This prevents identifier exhaustion from being diagnosed as general memory
failure.
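.Pp
As a hypothetical sketch (the arena name, the
.Li minpid
and
.Li maxpid
bounds, and the flag choices are illustrative, not taken from any existing
subsystem), an identifier arena for process IDs might be created and used as
follows:
.Bd -literal -offset indent
vmem_t *pid_arena;
pid_t pid;

/*
 * Identifier arena covering [minpid, maxpid]: quantum 1, no
 * import functions, no source arena, no quantum caching.
 * VMC_IDENTIFIER distinguishes identifier exhaustion from a
 * general memory shortage.
 */
pid_arena = vmem_create("pid", (void *)(uintptr_t)minpid,
    maxpid - minpid + 1, 1, NULL, NULL, NULL, 0,
    VM_SLEEP | VMC_IDENTIFIER);

/* Allocate one identifier, and later release it. */
pid = (pid_t)(uintptr_t)vmem_alloc(pid_arena, 1, VM_SLEEP);
vmem_free(pid_arena, (void *)(uintptr_t)pid, 1);
.Ed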
.Ss Spans
We represent the integers in an arena as a collection of
.Em spans ,
or contiguous ranges of integers.
For example, the kernel heap consists of just one span:
.Li "[kernelheap, ekernelheap)" .
Spans can be added to an arena in two ways: explicitly, by
.Fn vmem_add ;
or implicitly, by importing, as described in
.Sx Imported Memory
below.
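.Pp
Explicit span addition might look like the following sketch, where
.Li arena ,
.Li addr ,
and
.Li size
are assumed to be supplied by the caller:
.Bd -literal -offset indent
/* Make the integers [addr, addr + size) available in arena. */
(void) vmem_add(arena, (void *)addr, size, VM_SLEEP);
.Ed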
.Ss Segments
Spans are subdivided into
.Em segments ,
each of which is either allocated or free.
A segment, like a span, is a contiguous range of integers.
Each allocated segment
.Li "[addr, addr + size)"
represents exactly one
.Li "vmem_alloc(size)"
that returned
.Sy addr .
Free segments represent the space between allocated segments.
If two free segments are adjacent, we coalesce them into one larger segment;
that is, if segments
.Li "[a, b)"
and
.Li "[b, c)"
are both free, we merge them into a single segment
.Li "[a, c)" .
The segments within a span are linked together in increasing\-address
order so we can easily determine whether coalescing is possible.
.Pp
Segments never cross span boundaries.
When all segments within an imported span become free, we return the span to its
source.
.Ss Imported Memory
As mentioned in the overview, some arenas are logical subsets of other arenas.
For example,
.Sy kmem_va_arena
(a virtual address cache
that satisfies most
.Fn kmem_slab_create
requests) is just a subset of
.Sy heap_arena
(the kernel heap) that provides caching for the most common slab sizes.
When
.Sy kmem_va_arena
runs out of virtual memory, it
.Em imports
more from the heap; we say that
.Sy heap_arena
is the
.Em "vmem source"
for
.Sy kmem_va_arena .
.Fn vmem_create
allows you to specify any existing vmem arena as the source for your new arena.
Topologically, since every arena is a child of at most one source, the set of
all arenas forms a collection of trees.
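.Pp
Creating a sub\-arena that imports from an existing arena is a matter of
naming the source in the
.Fn vmem_create
call.
The sketch below is modeled on how address sub\-arenas are commonly wired up;
the arena name is hypothetical:
.Bd -literal -offset indent
vmem_t *sub_arena;

/*
 * Start empty (base NULL, size 0) and import PAGESIZE-sized
 * spans from heap_arena on demand, using vmem_alloc and
 * vmem_free as the import and release functions.
 */
sub_arena = vmem_create("sub_arena", NULL, 0, PAGESIZE,
    vmem_alloc, vmem_free, heap_arena, 0, VM_SLEEP);
.Ed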
.Ss Constrained Allocations
Some vmem clients are quite picky about the kind of address they want.
For example, the DVMA code may need an address that is at a particular
phase with respect to some alignment (to get good cache coloring), or
that lies within certain limits (the addressable range of a device),
or that doesn't cross some boundary (a DMA counter restriction) \(em
or all of the above.
.Fn vmem_xalloc
allows the client to specify any or all of these constraints.
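.Pp
For instance, a constrained allocation might be sketched as follows; the
alignment, phase, boundary, and limit values here are illustrative only:
.Bd -literal -offset indent
void *addr;

/*
 * Allocate size bytes that sit at offset phase into a
 * 64K-aligned run, do not cross a 1M boundary, and lie
 * within [minaddr, maxaddr).
 */
addr = vmem_xalloc(arena, size, 0x10000, phase, 0x100000,
    minaddr, maxaddr, VM_NOSLEEP);
.Ed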
.Ss The Vmem Quantum
Every arena has a notion of
.Sq quantum ,
specified at
.Fn vmem_create
time, that defines the arena's minimum unit of currency.
Most commonly the quantum is either 1 or
.Dv PAGESIZE ,
but any power of 2 is legal.
All vmem allocations are guaranteed to be quantum\-aligned.
.Ss Relationship to the Kernel Memory Allocator
Every kmem cache has a vmem arena as its slab supplier.
The kernel memory allocator uses
.Fn vmem_alloc
and
.Fn vmem_free
to create and destroy slabs.
.Sh SEE ALSO
.Xr id_space 9F ,
.Xr vmem_add 9F ,
.Xr vmem_alloc 9F ,
.Xr vmem_contains 9F ,
.Xr vmem_create 9F ,
.Xr vmem_walk 9F
.Pp
.Rs
.%A Jeff Bonwick
.%A Jonathan Adams
.%T Magazines and vmem: Extending the Slab Allocator to Many CPUs and Arbitrary Resources.
.%J Proceedings of the 2001 Usenix Conference
.%U http://www.usenix.org/event/usenix01/bonwick.html
.Re
181.Re
182