.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
.\" Use is subject to license terms.
.\"
.\" Copyright (c) 2012, 2015 by Delphix. All rights reserved.
.\" Copyright (c) 2012, Joyent, Inc. All rights reserved.
.\"
.\" The text of this is derived from section 1 of the big theory statement in
.\" usr/src/uts/common/os/vmem.c, the traditional location of this text.  They
.\" should largely be updated in tandem.
.Dd January 18, 2017
.Dt VMEM 9
.Os
.Sh NAME
.Nm vmem
.Nd virtual memory allocator
.Sh DESCRIPTION
.Ss Overview
An address space is divided into a number of logically distinct pieces, or
.Em arenas :
text, data, heap, stack, and so on.
Within these
arenas we often subdivide further; for example, we use heap addresses
not only for the kernel heap
.Po
.Fn kmem_alloc
space
.Pc ,
but also for DVMA,
.Fn bp_mapin ,
.Pa /dev/kmem ,
and even some device mappings.
.Pp
The kernel address space, therefore, is most accurately described as
a tree of arenas in which each node of the tree
.Em imports
some subset of its parent.
The virtual memory allocator manages these arenas
and supports their natural hierarchical structure.
.Ss Arenas
An arena is nothing more than a set of integers.  These integers most
commonly represent virtual addresses, but in fact they can represent
anything at all.  For example, we could use an arena containing the
integers minpid through maxpid to allocate process IDs.  For uses of this
nature, prefer
.Xr id_space 9F
instead.
.Pp
.Fn vmem_create
and
.Fn vmem_destroy
create and destroy vmem arenas.  In order to differentiate between arenas used
for addresses and arenas used for identifiers, the
.Dv VMC_IDENTIFIER
flag is passed to
.Fn vmem_create .
This prevents identifier exhaustion from being diagnosed as general memory
failure.
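.Pp
For example, an identifier arena for process IDs might be created and used
as follows (a sketch; the arena name and the bounds are illustrative, not
taken from the kernel):
.Bd -literal -offset indent
vmem_t *pid_arena;
void *id;

/* Arena of the integers [100, 30000); quantum 1, no source arena. */
pid_arena = vmem_create("pid", (void *)100, 30000 - 100, 1,
    NULL, NULL, NULL, 0, VM_SLEEP | VMC_IDENTIFIER);

/* Allocate one identifier, then return it. */
id = vmem_alloc(pid_arena, 1, VM_SLEEP);
vmem_free(pid_arena, id, 1);
.Ed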
.Ss Spans
We represent the integers in an arena as a collection of
.Em spans ,
or contiguous ranges of integers.  For example, the kernel heap consists of
just one span:
.Li "[kernelheap, ekernelheap)" .
Spans can be added to an arena in two ways: explicitly, by
.Fn vmem_add ;
or implicitly, by importing, as described in
.Sx Imported Memory
below.
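.Pp
As a sketch (the arena, base, and size are illustrative), a span is added
explicitly like this:
.Bd -literal -offset indent
/* Donate the integers [base, base + size) to the arena. */
(void) vmem_add(arena, base, size, VM_SLEEP);
.Ed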
.Ss Segments
Spans are subdivided into
.Em segments ,
each of which is either allocated or free.  A segment, like a span, is a
contiguous range of integers.  Each allocated segment
.Li "[addr, addr + size)"
represents exactly one
.Li "vmem_alloc(size)"
that returned
.Sy addr .
Free segments represent the space between allocated segments.  If two free
segments are adjacent, we coalesce them into one larger segment; that is, if
segments
.Li "[a, b)"
and
.Li "[b, c)"
are both free, we merge them into a single segment
.Li "[a, c)" .
The segments within a span are linked together in increasing\-address
order so we can easily determine whether coalescing is possible.
.Pp
Segments never cross span boundaries.  When all segments within an imported
span become free, we return the span to its source.
.Ss Imported Memory
As mentioned in the overview, some arenas are logical subsets of
other arenas.  For example,
.Sy kmem_va_arena
(a virtual address cache
that satisfies most
.Fn kmem_slab_create
requests) is just a subset of
.Sy heap_arena
(the kernel heap) that provides caching for the most common slab sizes.  When
.Sy kmem_va_arena
runs out of virtual memory, it
.Em imports
more from the heap; we say that
.Sy heap_arena
is the
.Em "vmem source"
for
.Sy kmem_va_arena .
.Fn vmem_create
allows you to specify any existing vmem arena as the source for your new
arena.  Topologically, since every arena is a child of at most one source, the
set of all arenas forms a collection of trees.
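.Pp
An importing arena can be sketched as follows (the arena name is
illustrative; the import and release functions and the source arena are the
final three resource arguments to
.Fn vmem_create ) :
.Bd -literal -offset indent
/*
 * Create an arena with no initial span that imports page-sized
 * chunks from heap_arena on demand and releases them when free.
 */
vmem_t *sub_arena = vmem_create("sub_arena", NULL, 0, PAGESIZE,
    vmem_alloc, vmem_free, heap_arena, 0, VM_SLEEP);
.Ed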
.Ss Constrained Allocations
Some vmem clients are quite picky about the kind of address they want.
For example, the DVMA code may need an address that is at a particular
phase with respect to some alignment (to get good cache coloring), or
that lies within certain limits (the addressable range of a device),
or that doesn't cross some boundary (a DMA counter restriction) \(em
or all of the above.
.Fn vmem_xalloc
allows the client to specify any or all of these constraints.
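.Pp
As a sketch (the arena and every constraint value are illustrative), one
call can combine all three kinds of constraint:
.Bd -literal -offset indent
/*
 * 8K of space at offset 0x2000 from a 64K alignment, below 16M,
 * never crossing a 1M boundary.
 */
void *addr = vmem_xalloc(arena, 8192,
    0x10000,            /* align: 64K alignment */
    0x2000,             /* phase: offset from that alignment */
    0x100000,           /* nocross: never cross a 1M boundary */
    NULL,               /* minaddr: no lower bound */
    (void *)0x1000000,  /* maxaddr: stay below 16M */
    VM_SLEEP);
.Ed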
.Ss The Vmem Quantum
Every arena has a notion of
.Sq quantum ,
specified at
.Fn vmem_create
time, that defines the arena's minimum unit of currency.  Most commonly the
quantum is either 1 or
.Dv PAGESIZE ,
but any power of 2 is legal.  All vmem allocations are guaranteed to be
quantum\-aligned.
.Ss Relationship to the Kernel Memory Allocator
Every kmem cache has a vmem arena as its slab supplier.  The kernel memory
allocator uses
.Fn vmem_alloc
and
.Fn vmem_free
to create and destroy slabs.
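.Pp
This pairing can be sketched as follows (the names describe the pattern,
not the allocator's internals):
.Bd -literal -offset indent
/* Obtain virtual space for a slab; return it when the slab dies. */
void *slab = vmem_alloc(vmp, slabsize, VM_SLEEP);
/* ... carve the slab into buffers, use them, drain the cache ... */
vmem_free(vmp, slab, slabsize);
.Ed
.Pp
Note that
.Fn vmem_free
must be passed the same size that was given to the matching
.Fn vmem_alloc .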
.Sh SEE ALSO
.Xr id_space 9F ,
.Xr vmem_add 9F ,
.Xr vmem_alloc 9F ,
.Xr vmem_contains 9F ,
.Xr vmem_create 9F ,
.Xr vmem_walk 9F
.Pp
.Rs
.%A Jeff Bonwick
.%A Jonathan Adams
.%T Magazines and vmem: Extending the Slab Allocator to Many CPUs and Arbitrary Resources
.%J Proceedings of the 2001 USENIX Annual Technical Conference
.%U http://www.usenix.org/event/usenix01/bonwick.html
.Re
177