/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License, Version 1.0 only
 * (the "License").  You may not use this file except in compliance
 * with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

#pragma ident	"%Z%%M%	%I%	%E% SMI"

/*
 * Big Theory Statement for the virtual memory allocator.
 *
 * For a more complete description of the main ideas, see:
 *
 *	Jeff Bonwick and Jonathan Adams,
 *
 *	Magazines and vmem: Extending the Slab Allocator to Many CPUs and
 *	Arbitrary Resources.
 *
 *	Proceedings of the 2001 Usenix Conference.
 *	Available as http://www.usenix.org/event/usenix01/bonwick.html
 *
 *
 * 1. General Concepts
 * -------------------
 *
 * 1.1 Overview
 * ------------
 * We divide the kernel address space into a number of logically distinct
 * pieces, or *arenas*: text, data, heap, stack, and so on.  Within these
 * arenas we often subdivide further; for example, we use heap addresses
 * not only for the kernel heap (kmem_alloc() space), but also for DVMA,
 * bp_mapin(), /dev/kmem, and even some device mappings like the TOD chip.
 * The kernel address space, therefore, is most accurately described as
 * a tree of arenas in which each node of the tree *imports* some subset
 * of its parent.  The virtual memory allocator manages these arenas and
 * supports their natural hierarchical structure.
 *
 * 1.2 Arenas
 * ----------
 * An arena is nothing more than a set of integers.  These integers most
 * commonly represent virtual addresses, but in fact they can represent
 * anything at all.  For example, we could use an arena containing the
 * integers minpid through maxpid to allocate process IDs.  vmem_create()
 * and vmem_destroy() create and destroy vmem arenas.  In order to
 * differentiate between arenas used for addresses and arenas used for
 * identifiers, the VMC_IDENTIFIER flag is passed to vmem_create().  This
 * prevents identifier exhaustion from being diagnosed as general memory
 * failure.
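 *
 * As a sketch only (minpid and maxpid are placeholders here, and process
 * IDs are not actually allocated this way by this file), an identifier
 * arena could be created and used like this:
 *
 *	vmem_t *pid_arena = vmem_create("pid", (void *)minpid,
 *	    maxpid - minpid + 1, 1, NULL, NULL, NULL, 0,
 *	    VM_SLEEP | VMC_IDENTIFIER);
 *	pid_t pid = (pid_t)(uintptr_t)vmem_alloc(pid_arena, 1, VM_SLEEP);
 *	...
 *	vmem_free(pid_arena, (void *)(uintptr_t)pid, 1);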
 *
 * 1.3 Spans
 * ---------
 * We represent the integers in an arena as a collection of *spans*, or
 * contiguous ranges of integers.  For example, the kernel heap consists
 * of just one span: [kernelheap, ekernelheap).  Spans can be added to an
 * arena in two ways: explicitly, by vmem_add(), or implicitly, by
 * importing, as described in Section 1.5 below.
 *
 * 1.4 Segments
 * ------------
 * Spans are subdivided into *segments*, each of which is either allocated
 * or free.  A segment, like a span, is a contiguous range of integers.
 * Each allocated segment [addr, addr + size) represents exactly one
 * vmem_alloc(size) that returned addr.  Free segments represent the space
 * between allocated segments.  If two free segments are adjacent, we
 * coalesce them into one larger segment; that is, if segments [a, b) and
 * [b, c) are both free, we merge them into a single segment [a, c).
 * The segments within a span are linked together in increasing-address
 * order so we can easily determine whether coalescing is possible.
 *
 * Segments never cross span boundaries.  When all segments within
 * an imported span become free, we return the span to its source.
 *
 * 1.5 Imported Memory
 * -------------------
 * As mentioned in the overview, some arenas are logical subsets of
 * other arenas.  For example, kmem_va_arena (a virtual address cache
 * that satisfies most kmem_slab_create() requests) is just a subset
 * of heap_arena (the kernel heap) that provides caching for the most
 * common slab sizes.  When kmem_va_arena runs out of virtual memory,
 * it *imports* more from the heap; we say that heap_arena is the
 * *vmem source* for kmem_va_arena.  vmem_create() allows you to
 * specify any existing vmem arena as the source for your new arena.
 * Topologically, since every arena is a child of at most one source,
 * the set of all arenas forms a collection of trees.
 *
 * 1.6 Constrained Allocations
 * ---------------------------
 * Some vmem clients are quite picky about the kind of address they want.
 * For example, the DVMA code may need an address that is at a particular
 * phase with respect to some alignment (to get good cache coloring), or
 * that lies within certain limits (the addressable range of a device),
 * or that doesn't cross some boundary (a DMA counter restriction) --
 * or all of the above.  vmem_xalloc() allows the client to specify any
 * or all of these constraints.
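 *
 * As a sketch of Section 1.6 (the arena name and device limits here are
 * hypothetical), a driver could ask for 8K that is page-aligned, lies
 * below 16MB, and does not cross a 64K DMA boundary:
 *
 *	void *dvma = vmem_xalloc(dvma_arena, 8192, PAGESIZE, 0,
 *	    64 * 1024, NULL, (void *)(16 * 1024 * 1024), VM_SLEEP);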
 *
 * 1.7 The Vmem Quantum
 * --------------------
 * Every arena has a notion of 'quantum', specified at vmem_create() time,
 * that defines the arena's minimum unit of currency.  Most commonly the
 * quantum is either 1 or PAGESIZE, but any power of 2 is legal.
 * All vmem allocations are guaranteed to be quantum-aligned.
 *
 * 1.8 Quantum Caching
 * -------------------
 * A vmem arena may be so hot (frequently used) that the scalability of vmem
 * allocation is a significant concern.  We address this by allowing the most
 * common allocation sizes to be serviced by the kernel memory allocator,
 * which provides low-latency per-cpu caching.  The qcache_max argument to
 * vmem_create() specifies the largest allocation size to cache.
 *
 * 1.9 Relationship to Kernel Memory Allocator
 * -------------------------------------------
 * Every kmem cache has a vmem arena as its slab supplier.  The kernel memory
 * allocator uses vmem_alloc() and vmem_free() to create and destroy slabs.
 *
 *
 * 2. Implementation
 * -----------------
 *
 * 2.1 Segment lists and markers
 * -----------------------------
 * The segment structure (vmem_seg_t) contains two doubly-linked lists.
 *
 * The arena list (vs_anext/vs_aprev) links all segments in the arena.
 * In addition to the allocated and free segments, the arena contains
 * special marker segments at span boundaries.  Span markers simplify
 * coalescing and importing logic by making it easy to tell both when
 * we're at a span boundary (so we don't coalesce across it), and when
 * a span is completely free (its neighbors will both be span markers).
 *
 * Imported spans will have vs_import set.
 *
 * The next-of-kin list (vs_knext/vs_kprev) links segments of the same type:
 * (1) for allocated segments, vs_knext is the hash chain linkage;
 * (2) for free segments, vs_knext is the freelist linkage;
 * (3) for span marker segments, vs_knext is the next span marker.
 *
 * 2.2 Allocation hashing
 * ----------------------
 * We maintain a hash table of all allocated segments, hashed by address.
 * This allows vmem_free() to discover the target segment in constant time.
 * vmem_update() periodically resizes hash tables to keep hash chains short.
 *
 * 2.3 Freelist management
 * -----------------------
 * We maintain power-of-2 freelists for free segments, i.e. free segments
 * of size >= 2^n reside in vmp->vm_freelist[n].  To ensure constant-time
 * allocation, vmem_xalloc() looks not in the first freelist that *might*
 * satisfy the allocation, but in the first freelist that *definitely*
 * satisfies the allocation (unless VM_BESTFIT is specified, or all larger
 * freelists are empty).  For example, a 1000-byte allocation will be
 * satisfied not from the 512..1023-byte freelist, whose members *might*
 * contain a 1000-byte segment, but from a 1024-byte or larger freelist,
 * the first member of which will *definitely* satisfy the allocation.
 * This ensures that vmem_xalloc() works in constant time.
 *
 * We maintain a bit map to determine quickly which freelists are non-empty.
 * vmp->vm_freemap & (1 << n) is non-zero iff vmp->vm_freelist[n] is non-empty.
 *
 * The different freelists are linked together into one large freelist,
 * with the freelist heads serving as markers.  Freelist markers simplify
 * the maintenance of vm_freemap by making it easy to tell when we're taking
 * the last member of a freelist (both of its neighbors will be markers).
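 *
 * As a worked example of the instant-fit lookup (a sketch; the real code
 * appears in vmem_xalloc() below): for a 1000-byte request,
 * highbit(1000) == 10, so the bitmap computation
 *
 *	flist = lowbit(P2ALIGN(vmp->vm_freemap, 1UL << 10));
 *
 * finds, in constant time, the first non-empty freelist at or above
 * vm_freelist[10] (segments of 1024..2047 bytes), every member of which
 * is large enough.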
 *
 * 2.4 Vmem Locking
 * ----------------
 * For simplicity, all arena state is protected by a per-arena lock.
 * For very hot arenas, use quantum caching for scalability.
 *
 * 2.5 Vmem Population
 * -------------------
 * Any internal vmem routine that might need to allocate new segment
 * structures must prepare in advance by calling vmem_populate(), which
 * will preallocate enough vmem_seg_t's to get it through the entire
 * operation without dropping the arena lock.
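 *
 * The calling convention, mirrored by vmem_nextfit_alloc() and
 * vmem_xalloc() below, looks like this:
 *
 *	mutex_enter(&vmp->vm_lock);
 *	if (vmp->vm_nsegfree < VMEM_MINFREE && !vmem_populate(vmp, vmflag)) {
 *		mutex_exit(&vmp->vm_lock);
 *		return (NULL);
 *	}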
 *
 * 2.6 Auditing
 * ------------
 * If KMF_AUDIT is set in kmem_flags, we audit vmem allocations as well.
 * Since virtual addresses cannot be scribbled on, there is no equivalent
 * in vmem to redzone checking, deadbeef, or other kmem debugging features.
 * Moreover, we do not audit frees because segment coalescing destroys the
 * association between an address and its segment structure.  Auditing is
 * thus intended primarily to keep track of who's consuming the arena.
 * Debugging support could certainly be extended in the future if it proves
 * necessary, but we do so much live checking via the allocation hash table
 * that even non-DEBUG systems get quite a bit of sanity checking already.
 */

#include <sys/vmem_impl.h>
#include <sys/kmem.h>
#include <sys/kstat.h>
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/atomic.h>
#include <sys/bitmap.h>
#include <sys/sysmacros.h>
#include <sys/cmn_err.h>
#include <sys/debug.h>
#include <sys/panic.h>

#define	VMEM_INITIAL		10	/* early vmem arenas */
#define	VMEM_SEG_INITIAL	200	/* early segments */

/*
 * Adding a new span to an arena requires two segment structures: one to
 * represent the span, and one to represent the free segment it contains.
 */
#define	VMEM_SEGS_PER_SPAN_CREATE	2

/*
 * Allocating a piece of an existing segment requires 0-2 segment structures
 * depending on how much of the segment we're allocating.
 *
 * To allocate the entire segment, no new segment structures are needed; we
 * simply move the existing segment structure from the freelist to the
 * allocation hash table.
 *
 * To allocate a piece from the left or right end of the segment, we must
 * split the segment into two pieces (allocated part and remainder), so we
 * need one new segment structure to represent the remainder.
 *
 * To allocate from the middle of a segment, we need two new segment
 * structures to represent the remainders on either side of the allocated
 * part.
 */
#define	VMEM_SEGS_PER_EXACT_ALLOC	0
#define	VMEM_SEGS_PER_LEFT_ALLOC	1
#define	VMEM_SEGS_PER_RIGHT_ALLOC	1
#define	VMEM_SEGS_PER_MIDDLE_ALLOC	2

/*
 * vmem_populate() preallocates segment structures for vmem to do its work.
 * It must preallocate enough for the worst case, which is when we must import
 * a new span and then allocate from the middle of it.
 */
#define	VMEM_SEGS_PER_ALLOC_MAX		\
	(VMEM_SEGS_PER_SPAN_CREATE + VMEM_SEGS_PER_MIDDLE_ALLOC)
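
/*
 * Worked example (illustration only): importing a fresh span [a, d) and
 * then carving [b, c) out of its middle hits the maximum: two structures
 * for the span (the span marker and its single free segment), plus two
 * more for the remainders [a, b) and [c, d), for a total of
 * VMEM_SEGS_PER_SPAN_CREATE + VMEM_SEGS_PER_MIDDLE_ALLOC == 4.
 */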

/*
 * The segment structures themselves are allocated from vmem_seg_arena, so
 * we have a recursion problem when vmem_seg_arena needs to populate itself.
 * We address this by working out the maximum number of segment structures
 * this act will require, and multiplying by the maximum number of threads
 * that we'll allow to do it simultaneously.
 *
 * The worst-case segment consumption to populate vmem_seg_arena is as
 * follows (depicted as a stack trace to indicate why events are occurring):
 *
 * (In order to lower the fragmentation in the heap_arena, we specify a
 * minimum import size for the vmem_metadata_arena which is the same size
 * as the kmem_va quantum cache allocations.  This causes the worst-case
 * allocation from the vmem_metadata_arena to be 3 segments.)
 *
 * vmem_alloc(vmem_seg_arena)		-> 2 segs (span create + exact alloc)
 *  segkmem_alloc(vmem_metadata_arena)
 *   vmem_alloc(vmem_metadata_arena)	-> 3 segs (span create + left alloc)
 *    vmem_alloc(heap_arena)		-> 1 seg (left alloc)
 *   page_create()
 *   hat_memload()
 *    kmem_cache_alloc()
 *     kmem_slab_create()
 *	vmem_alloc(hat_memload_arena)	-> 2 segs (span create + exact alloc)
 *	 segkmem_alloc(heap_arena)
 *	  vmem_alloc(heap_arena)	-> 1 seg (left alloc)
 *	 page_create()
 *	 hat_memload()	-> (hat layer won't recurse further)
 *
 * The worst-case consumption for each arena is 3 segment structures.
 * Of course, a 3-seg reserve could easily be blown by multiple threads.
 * Therefore, we serialize all allocations from vmem_seg_arena (which is OK
 * because they're rare).  We cannot allow a non-blocking allocation to get
 * tied up behind a blocking allocation, however, so we use separate locks
 * for VM_SLEEP and VM_NOSLEEP allocations.  In addition, if the system is
 * panicking then we must keep enough resources for panic_thread to do its
 * work.  Thus we have at most three threads trying to allocate from
 * vmem_seg_arena, and each thread consumes at most three segment structures,
 * so we must maintain a 9-seg reserve.
 */
#define	VMEM_POPULATE_RESERVE	9

/*
 * vmem_populate() ensures that each arena has VMEM_MINFREE seg structures
 * so that it can satisfy the worst-case allocation *and* participate in
 * worst-case allocation from vmem_seg_arena.
 */
#define	VMEM_MINFREE	(VMEM_POPULATE_RESERVE + VMEM_SEGS_PER_ALLOC_MAX)

static vmem_t vmem0[VMEM_INITIAL];
static vmem_t *vmem_populator[VMEM_INITIAL];
static uint32_t vmem_id;
static uint32_t vmem_populators;
static vmem_seg_t vmem_seg0[VMEM_SEG_INITIAL];
static vmem_seg_t *vmem_segfree;
static kmutex_t vmem_list_lock;
static kmutex_t vmem_segfree_lock;
static kmutex_t vmem_sleep_lock;
static kmutex_t vmem_nosleep_lock;
static kmutex_t vmem_panic_lock;
static vmem_t *vmem_list;
static vmem_t *vmem_metadata_arena;
static vmem_t *vmem_seg_arena;
static vmem_t *vmem_hash_arena;
static vmem_t *vmem_vmem_arena;
static long vmem_update_interval = 15;	/* vmem_update() every 15 seconds */
uint32_t vmem_mtbf;	/* mean time between failures [default: off] */
size_t vmem_seg_size = sizeof (vmem_seg_t);

static vmem_kstat_t vmem_kstat_template = {
	{ "mem_inuse",		KSTAT_DATA_UINT64 },
	{ "mem_import",		KSTAT_DATA_UINT64 },
	{ "mem_total",		KSTAT_DATA_UINT64 },
	{ "vmem_source",	KSTAT_DATA_UINT32 },
	{ "alloc",		KSTAT_DATA_UINT64 },
	{ "free",		KSTAT_DATA_UINT64 },
	{ "wait",		KSTAT_DATA_UINT64 },
	{ "fail",		KSTAT_DATA_UINT64 },
	{ "lookup",		KSTAT_DATA_UINT64 },
	{ "search",		KSTAT_DATA_UINT64 },
	{ "populate_wait",	KSTAT_DATA_UINT64 },
	{ "populate_fail",	KSTAT_DATA_UINT64 },
	{ "contains",		KSTAT_DATA_UINT64 },
	{ "contains_search",	KSTAT_DATA_UINT64 },
};

/*
 * Insert/delete from arena list (type 'a') or next-of-kin list (type 'k').
 */
#define	VMEM_INSERT(vprev, vsp, type)					\
{									\
	vmem_seg_t *vnext = (vprev)->vs_##type##next;			\
	(vsp)->vs_##type##next = (vnext);				\
	(vsp)->vs_##type##prev = (vprev);				\
	(vprev)->vs_##type##next = (vsp);				\
	(vnext)->vs_##type##prev = (vsp);				\
}

#define	VMEM_DELETE(vsp, type)						\
{									\
	vmem_seg_t *vprev = (vsp)->vs_##type##prev;			\
	vmem_seg_t *vnext = (vsp)->vs_##type##next;			\
	(vprev)->vs_##type##next = (vnext);				\
	(vnext)->vs_##type##prev = (vprev);				\
}
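
/*
 * Expansion sketch (illustration only): with type 'k', token pasting turns
 * VMEM_INSERT(vprev, vsp, k) into the four pointer assignments that splice
 * vsp into the kin list right after vprev:
 *
 *	vmem_seg_t *vnext = vprev->vs_knext;
 *	vsp->vs_knext = vnext;
 *	vsp->vs_kprev = vprev;
 *	vprev->vs_knext = vsp;
 *	vnext->vs_kprev = vsp;
 */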

/*
 * Get a vmem_seg_t from the global segfree list.
 */
static vmem_seg_t *
vmem_getseg_global(void)
{
	vmem_seg_t *vsp;

	mutex_enter(&vmem_segfree_lock);
	if ((vsp = vmem_segfree) != NULL)
		vmem_segfree = vsp->vs_knext;
	mutex_exit(&vmem_segfree_lock);

	return (vsp);
}

/*
 * Put a vmem_seg_t on the global segfree list.
 */
static void
vmem_putseg_global(vmem_seg_t *vsp)
{
	mutex_enter(&vmem_segfree_lock);
	vsp->vs_knext = vmem_segfree;
	vmem_segfree = vsp;
	mutex_exit(&vmem_segfree_lock);
}

/*
 * Get a vmem_seg_t from vmp's segfree list.
 */
static vmem_seg_t *
vmem_getseg(vmem_t *vmp)
{
	vmem_seg_t *vsp;

	ASSERT(vmp->vm_nsegfree > 0);

	vsp = vmp->vm_segfree;
	vmp->vm_segfree = vsp->vs_knext;
	vmp->vm_nsegfree--;

	return (vsp);
}

/*
 * Put a vmem_seg_t on vmp's segfree list.
 */
static void
vmem_putseg(vmem_t *vmp, vmem_seg_t *vsp)
{
	vsp->vs_knext = vmp->vm_segfree;
	vmp->vm_segfree = vsp;
	vmp->vm_nsegfree++;
}

/*
 * Add vsp to the appropriate freelist.
 */
static void
vmem_freelist_insert(vmem_t *vmp, vmem_seg_t *vsp)
{
	vmem_seg_t *vprev;

	ASSERT(*VMEM_HASH(vmp, vsp->vs_start) != vsp);

	vprev = (vmem_seg_t *)&vmp->vm_freelist[highbit(VS_SIZE(vsp)) - 1];
	vsp->vs_type = VMEM_FREE;
	vmp->vm_freemap |= VS_SIZE(vprev);
	VMEM_INSERT(vprev, vsp, k);

	cv_broadcast(&vmp->vm_cv);
}

/*
 * Take vsp from the freelist.
 */
static void
vmem_freelist_delete(vmem_t *vmp, vmem_seg_t *vsp)
{
	ASSERT(*VMEM_HASH(vmp, vsp->vs_start) != vsp);
	ASSERT(vsp->vs_type == VMEM_FREE);

	if (vsp->vs_knext->vs_start == 0 && vsp->vs_kprev->vs_start == 0) {
		/*
		 * The segments on both sides of 'vsp' are freelist heads,
		 * so taking vsp leaves the freelist at vsp->vs_kprev empty.
		 */
		ASSERT(vmp->vm_freemap & VS_SIZE(vsp->vs_kprev));
		vmp->vm_freemap ^= VS_SIZE(vsp->vs_kprev);
	}
	VMEM_DELETE(vsp, k);
}
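
/*
 * A note on the head trick above (the initialization happens in
 * vmem_create(), later in this file): each freelist head is set up with
 * vs_start == 0 and vs_end == (1UL << n), so VS_SIZE() of a head is
 * exactly the vm_freemap bit for freelist n.  That is why
 * vmem_freelist_insert() can set the bit with vm_freemap |= VS_SIZE(vprev),
 * and why a head is recognizable by vs_start == 0 above.
 */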

/*
 * Add vsp to the allocated-segment hash table and update kstats.
 */
static void
vmem_hash_insert(vmem_t *vmp, vmem_seg_t *vsp)
{
	vmem_seg_t **bucket;

	vsp->vs_type = VMEM_ALLOC;
	bucket = VMEM_HASH(vmp, vsp->vs_start);
	vsp->vs_knext = *bucket;
	*bucket = vsp;

	if (vmem_seg_size == sizeof (vmem_seg_t)) {
		vsp->vs_depth = (uint8_t)getpcstack(vsp->vs_stack,
		    VMEM_STACK_DEPTH);
		vsp->vs_thread = curthread;
		vsp->vs_timestamp = gethrtime();
	} else {
		vsp->vs_depth = 0;
	}

	vmp->vm_kstat.vk_alloc.value.ui64++;
	vmp->vm_kstat.vk_mem_inuse.value.ui64 += VS_SIZE(vsp);
}

/*
 * Remove vsp from the allocated-segment hash table and update kstats.
 */
static vmem_seg_t *
vmem_hash_delete(vmem_t *vmp, uintptr_t addr, size_t size)
{
	vmem_seg_t *vsp, **prev_vspp;

	prev_vspp = VMEM_HASH(vmp, addr);
	while ((vsp = *prev_vspp) != NULL) {
		if (vsp->vs_start == addr) {
			*prev_vspp = vsp->vs_knext;
			break;
		}
		vmp->vm_kstat.vk_lookup.value.ui64++;
		prev_vspp = &vsp->vs_knext;
	}

	if (vsp == NULL)
		panic("vmem_hash_delete(%p, %lx, %lu): bad free",
		    vmp, addr, size);
	if (VS_SIZE(vsp) != size)
		panic("vmem_hash_delete(%p, %lx, %lu): wrong size (expect %lu)",
		    vmp, addr, size, VS_SIZE(vsp));

	vmp->vm_kstat.vk_free.value.ui64++;
	vmp->vm_kstat.vk_mem_inuse.value.ui64 -= size;

	return (vsp);
}

/*
 * Create a segment spanning the range [start, end) and add it to the arena.
 */
static vmem_seg_t *
vmem_seg_create(vmem_t *vmp, vmem_seg_t *vprev, uintptr_t start, uintptr_t end)
{
	vmem_seg_t *newseg = vmem_getseg(vmp);

	newseg->vs_start = start;
	newseg->vs_end = end;
	newseg->vs_type = 0;
	newseg->vs_import = 0;

	VMEM_INSERT(vprev, newseg, a);

	return (newseg);
}

/*
 * Remove segment vsp from the arena.
 */
static void
vmem_seg_destroy(vmem_t *vmp, vmem_seg_t *vsp)
{
	ASSERT(vsp->vs_type != VMEM_ROTOR);
	VMEM_DELETE(vsp, a);

	vmem_putseg(vmp, vsp);
}

/*
 * Add the span [vaddr, vaddr + size) to vmp and update kstats.
 */
static vmem_seg_t *
vmem_span_create(vmem_t *vmp, void *vaddr, size_t size, uint8_t import)
{
	vmem_seg_t *newseg, *span;
	uintptr_t start = (uintptr_t)vaddr;
	uintptr_t end = start + size;

	ASSERT(MUTEX_HELD(&vmp->vm_lock));

	if ((start | end) & (vmp->vm_quantum - 1))
		panic("vmem_span_create(%p, %p, %lu): misaligned",
		    vmp, vaddr, size);

	span = vmem_seg_create(vmp, vmp->vm_seg0.vs_aprev, start, end);
	span->vs_type = VMEM_SPAN;
	span->vs_import = import;
	VMEM_INSERT(vmp->vm_seg0.vs_kprev, span, k);

	newseg = vmem_seg_create(vmp, span, start, end);
	vmem_freelist_insert(vmp, newseg);

	if (import)
		vmp->vm_kstat.vk_mem_import.value.ui64 += size;
	vmp->vm_kstat.vk_mem_total.value.ui64 += size;

	return (newseg);
}

/*
 * Remove span vsp from vmp and update kstats.
 */
static void
vmem_span_destroy(vmem_t *vmp, vmem_seg_t *vsp)
{
	vmem_seg_t *span = vsp->vs_aprev;
	size_t size = VS_SIZE(vsp);

	ASSERT(MUTEX_HELD(&vmp->vm_lock));
	ASSERT(span->vs_type == VMEM_SPAN);

	if (span->vs_import)
		vmp->vm_kstat.vk_mem_import.value.ui64 -= size;
	vmp->vm_kstat.vk_mem_total.value.ui64 -= size;

	VMEM_DELETE(span, k);

	vmem_seg_destroy(vmp, vsp);
	vmem_seg_destroy(vmp, span);
}

/*
 * Allocate the subrange [addr, addr + size) from segment vsp.
 * If there are leftovers on either side, place them on the freelist.
 * Returns a pointer to the segment representing [addr, addr + size).
 */
static vmem_seg_t *
vmem_seg_alloc(vmem_t *vmp, vmem_seg_t *vsp, uintptr_t addr, size_t size)
{
	uintptr_t vs_start = vsp->vs_start;
	uintptr_t vs_end = vsp->vs_end;
	size_t vs_size = vs_end - vs_start;
	size_t realsize = P2ROUNDUP(size, vmp->vm_quantum);
	uintptr_t addr_end = addr + realsize;

	ASSERT(P2PHASE(vs_start, vmp->vm_quantum) == 0);
	ASSERT(P2PHASE(addr, vmp->vm_quantum) == 0);
	ASSERT(vsp->vs_type == VMEM_FREE);
	ASSERT(addr >= vs_start && addr_end - 1 <= vs_end - 1);
	ASSERT(addr - 1 <= addr_end - 1);

	/*
	 * If we're allocating from the start of the segment, and the
	 * remainder will be on the same freelist, we can save quite
	 * a bit of work.
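	 * For example, carving 1K off a 7K free segment leaves 6K, and
	 * both 7K and 6K live on the same power-of-2 freelist (4K..8K-1),
	 * so the victim segment can stay where it is; only its vs_start
	 * moves.  (Illustrative numbers, assuming they are multiples of
	 * the quantum.)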
	 */
	if (P2SAMEHIGHBIT(vs_size, vs_size - realsize) && addr == vs_start) {
		ASSERT(highbit(vs_size) == highbit(vs_size - realsize));
		vsp->vs_start = addr_end;
		vsp = vmem_seg_create(vmp, vsp->vs_aprev, addr, addr + size);
		vmem_hash_insert(vmp, vsp);
		return (vsp);
	}

	vmem_freelist_delete(vmp, vsp);

	if (vs_end != addr_end)
		vmem_freelist_insert(vmp,
		    vmem_seg_create(vmp, vsp, addr_end, vs_end));

	if (vs_start != addr)
		vmem_freelist_insert(vmp,
		    vmem_seg_create(vmp, vsp->vs_aprev, vs_start, addr));

	vsp->vs_start = addr;
	vsp->vs_end = addr + size;

	vmem_hash_insert(vmp, vsp);
	return (vsp);
}

/*
 * Returns 1 if we are populating, 0 otherwise.
 * Call it if we want to prevent recursion from HAT.
 */
int
vmem_is_populator()
{
	return (mutex_owner(&vmem_sleep_lock) == curthread ||
	    mutex_owner(&vmem_nosleep_lock) == curthread ||
	    mutex_owner(&vmem_panic_lock) == curthread);
}

/*
 * Populate vmp's segfree list with VMEM_MINFREE vmem_seg_t structures.
 */
static int
vmem_populate(vmem_t *vmp, int vmflag)
{
	char *p;
	vmem_seg_t *vsp;
	ssize_t nseg;
	size_t size;
	kmutex_t *lp;
	int i;

	while (vmp->vm_nsegfree < VMEM_MINFREE &&
	    (vsp = vmem_getseg_global()) != NULL)
		vmem_putseg(vmp, vsp);

	if (vmp->vm_nsegfree >= VMEM_MINFREE)
		return (1);

	/*
	 * If we're already populating, tap the reserve.
	 */
	if (vmem_is_populator()) {
		ASSERT(vmp->vm_cflags & VMC_POPULATOR);
		return (1);
	}

	mutex_exit(&vmp->vm_lock);

	if (panic_thread == curthread)
		lp = &vmem_panic_lock;
	else if (vmflag & VM_NOSLEEP)
		lp = &vmem_nosleep_lock;
	else
		lp = &vmem_sleep_lock;

	mutex_enter(lp);

	nseg = VMEM_MINFREE + vmem_populators * VMEM_POPULATE_RESERVE;
	size = P2ROUNDUP(nseg * vmem_seg_size, vmem_seg_arena->vm_quantum);
	nseg = size / vmem_seg_size;

	/*
	 * The following vmem_alloc() may need to populate vmem_seg_arena
	 * and all the things it imports from.  When doing so, it will tap
	 * each arena's reserve to prevent recursion (see the block comment
	 * above the definition of VMEM_POPULATE_RESERVE).
	 */
	p = vmem_alloc(vmem_seg_arena, size, vmflag & VM_KMFLAGS);
	if (p == NULL) {
		mutex_exit(lp);
		mutex_enter(&vmp->vm_lock);
		vmp->vm_kstat.vk_populate_fail.value.ui64++;
		return (0);
	}

	/*
	 * Restock the arenas that may have been depleted during population.
	 */
	for (i = 0; i < vmem_populators; i++) {
		mutex_enter(&vmem_populator[i]->vm_lock);
		while (vmem_populator[i]->vm_nsegfree < VMEM_POPULATE_RESERVE)
			vmem_putseg(vmem_populator[i],
			    (vmem_seg_t *)(p + --nseg * vmem_seg_size));
		mutex_exit(&vmem_populator[i]->vm_lock);
	}

	mutex_exit(lp);
	mutex_enter(&vmp->vm_lock);

	/*
	 * Now take our own segments.
	 */
	ASSERT(nseg >= VMEM_MINFREE);
	while (vmp->vm_nsegfree < VMEM_MINFREE)
		vmem_putseg(vmp, (vmem_seg_t *)(p + --nseg * vmem_seg_size));

	/*
	 * Give the remainder to charity.
	 */
	while (nseg > 0)
		vmem_putseg_global((vmem_seg_t *)(p + --nseg * vmem_seg_size));

	return (1);
}

/*
 * Advance a walker from its previous position to 'afterme'.
 * Note: may drop and reacquire vmp->vm_lock.
 */
static void
vmem_advance(vmem_t *vmp, vmem_seg_t *walker, vmem_seg_t *afterme)
{
	vmem_seg_t *vprev = walker->vs_aprev;
	vmem_seg_t *vnext = walker->vs_anext;
	vmem_seg_t *vsp = NULL;

	VMEM_DELETE(walker, a);

	if (afterme != NULL)
		VMEM_INSERT(afterme, walker, a);

	/*
	 * The walker segment's presence may have prevented its neighbors
	 * from coalescing.  If so, coalesce them now.
	 */
	if (vprev->vs_type == VMEM_FREE) {
		if (vnext->vs_type == VMEM_FREE) {
			ASSERT(vprev->vs_end == vnext->vs_start);
			vmem_freelist_delete(vmp, vnext);
			vmem_freelist_delete(vmp, vprev);
			vprev->vs_end = vnext->vs_end;
			vmem_freelist_insert(vmp, vprev);
			vmem_seg_destroy(vmp, vnext);
		}
		vsp = vprev;
	} else if (vnext->vs_type == VMEM_FREE) {
		vsp = vnext;
	}

	/*
	 * vsp could represent a complete imported span,
	 * in which case we must return it to the source.
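	 * (vsp covers its entire span exactly when both of its arena-list
	 * neighbors are span markers, which is what the VMEM_SPAN checks
	 * below detect.)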
	 */
	if (vsp != NULL && vsp->vs_aprev->vs_import &&
	    vmp->vm_source_free != NULL &&
	    vsp->vs_aprev->vs_type == VMEM_SPAN &&
	    vsp->vs_anext->vs_type == VMEM_SPAN) {
		void *vaddr = (void *)vsp->vs_start;
		size_t size = VS_SIZE(vsp);
		ASSERT(size == VS_SIZE(vsp->vs_aprev));
		vmem_freelist_delete(vmp, vsp);
		vmem_span_destroy(vmp, vsp);
		mutex_exit(&vmp->vm_lock);
		vmp->vm_source_free(vmp->vm_source, vaddr, size);
		mutex_enter(&vmp->vm_lock);
	}
}

/*
 * VM_NEXTFIT allocations deliberately cycle through all virtual addresses
 * in an arena, so that we avoid reusing addresses for as long as possible.
 * This helps to catch use-after-free bugs.  It's also the perfect policy
 * for allocating things like process IDs, where we want to cycle through
 * all values in order.
 */
static void *
vmem_nextfit_alloc(vmem_t *vmp, size_t size, int vmflag)
{
	vmem_seg_t *vsp, *rotor;
	uintptr_t addr;
	size_t realsize = P2ROUNDUP(size, vmp->vm_quantum);
	size_t vs_size;

	mutex_enter(&vmp->vm_lock);

	if (vmp->vm_nsegfree < VMEM_MINFREE && !vmem_populate(vmp, vmflag)) {
		mutex_exit(&vmp->vm_lock);
		return (NULL);
	}

	/*
	 * The common case is that the segment right after the rotor is free,
	 * and large enough that extracting 'size' bytes won't change which
	 * freelist it's on.  In this case we can avoid a *lot* of work.
	 * Instead of the normal vmem_seg_alloc(), we just advance the start
	 * address of the victim segment.  Instead of moving the rotor, we
	 * create the new segment structure *behind the rotor*, which has
	 * the same effect.  And finally, we know we don't have to coalesce
	 * the rotor's neighbors because the new segment lies between them.
	 */
	rotor = &vmp->vm_rotor;
	vsp = rotor->vs_anext;
	if (vsp->vs_type == VMEM_FREE && (vs_size = VS_SIZE(vsp)) > realsize &&
	    P2SAMEHIGHBIT(vs_size, vs_size - realsize)) {
		ASSERT(highbit(vs_size) == highbit(vs_size - realsize));
		addr = vsp->vs_start;
		vsp->vs_start = addr + realsize;
		vmem_hash_insert(vmp,
		    vmem_seg_create(vmp, rotor->vs_aprev, addr, addr + size));
		mutex_exit(&vmp->vm_lock);
		return ((void *)addr);
	}

	/*
	 * Starting at the rotor, look for a segment large enough to
	 * satisfy the allocation.
	 */
	for (;;) {
		vmp->vm_kstat.vk_search.value.ui64++;
		if (vsp->vs_type == VMEM_FREE && VS_SIZE(vsp) >= size)
			break;
		vsp = vsp->vs_anext;
		if (vsp == rotor) {
			/*
			 * We've come full circle.  One possibility is that
			 * there's actually enough space, but the rotor itself
			 * is preventing the allocation from succeeding because
			 * it's sitting between two free segments.  Therefore,
			 * we advance the rotor and see if that liberates a
			 * suitable segment.
			 */
			vmem_advance(vmp, rotor, rotor->vs_anext);
			vsp = rotor->vs_aprev;
			if (vsp->vs_type == VMEM_FREE && VS_SIZE(vsp) >= size)
				break;
			/*
			 * If there's a lower arena we can import from, or it's
			 * a VM_NOSLEEP allocation, let vmem_xalloc() handle it.
			 * Otherwise, wait until another thread frees something.
			 */
			if (vmp->vm_source_alloc != NULL ||
			    (vmflag & VM_NOSLEEP)) {
				mutex_exit(&vmp->vm_lock);
				return (vmem_xalloc(vmp, size, vmp->vm_quantum,
				    0, 0, NULL, NULL, vmflag & VM_KMFLAGS));
			}
			vmp->vm_kstat.vk_wait.value.ui64++;
			cv_wait(&vmp->vm_cv, &vmp->vm_lock);
			vsp = rotor->vs_anext;
		}
	}

	/*
	 * We found a segment.  Extract enough space to satisfy the allocation.
	 */
	addr = vsp->vs_start;
	vsp = vmem_seg_alloc(vmp, vsp, addr, size);
	ASSERT(vsp->vs_type == VMEM_ALLOC &&
	    vsp->vs_start == addr && vsp->vs_end == addr + size);

	/*
	 * Advance the rotor to right after the newly-allocated segment.
	 * That's where the next VM_NEXTFIT allocation will begin searching.
	 */
	vmem_advance(vmp, rotor, vsp);
	mutex_exit(&vmp->vm_lock);
	return ((void *)addr);
}

/*
 * Checks if vmp is guaranteed to have a size-byte buffer somewhere on its
 * freelist.  If size is not a power of 2, it can return a false negative.
 *
 * Used to decide if a newly imported span is superfluous after re-acquiring
 * the arena lock.
 */
static int
vmem_canalloc(vmem_t *vmp, size_t size)
{
	int hb;
	int flist = 0;
	ASSERT(MUTEX_HELD(&vmp->vm_lock));

	if ((size & (size - 1)) == 0)
		flist = lowbit(P2ALIGN(vmp->vm_freemap, size));
	else if ((hb = highbit(size)) < VMEM_FREELISTS)
		flist = lowbit(P2ALIGN(vmp->vm_freemap, 1UL << hb));

	return (flist);
}
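
/*
 * Worked example of the false negative (illustration only): if the arena's
 * only free segment is 6K, vm_freemap has just the bit for the 4K..8K-1
 * freelist set.  For a 5K request, highbit(5K) selects the 8K list, no
 * bit at or above it is set, and vmem_canalloc() returns 0 even though
 * the 6K segment could satisfy the request.
 */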
/*
 * Allocate size bytes at offset phase from an align boundary such that the
 * resulting segment [addr, addr + size) is a subset of [minaddr, maxaddr)
 * that does not straddle a nocross-aligned boundary.
 */
void *
vmem_xalloc(vmem_t *vmp, size_t size, size_t align_arg, size_t phase,
    size_t nocross, void *minaddr, void *maxaddr, int vmflag)
{
	vmem_seg_t *vsp;
	vmem_seg_t *vbest = NULL;
	uintptr_t addr, taddr, start, end;
	uintptr_t align = (align_arg != 0) ? align_arg : vmp->vm_quantum;
	void *vaddr, *xvaddr = NULL;
	size_t xsize;
	int hb, flist, resv;
	uint32_t mtbf;

	if ((align | phase | nocross) & (vmp->vm_quantum - 1))
		panic("vmem_xalloc(%p, %lu, %lu, %lu, %lu, %p, %p, %x): "
		    "parameters not vm_quantum aligned",
		    (void *)vmp, size, align_arg, phase, nocross,
		    minaddr, maxaddr, vmflag);

	if (nocross != 0 &&
	    (align > nocross || P2ROUNDUP(phase + size, align) > nocross))
		panic("vmem_xalloc(%p, %lu, %lu, %lu, %lu, %p, %p, %x): "
		    "overconstrained allocation",
		    (void *)vmp, size, align_arg, phase, nocross,
		    minaddr, maxaddr, vmflag);

	if (phase >= align || (align & (align - 1)) != 0 ||
	    (nocross & (nocross - 1)) != 0)
		panic("vmem_xalloc(%p, %lu, %lu, %lu, %lu, %p, %p, %x): "
		    "parameters inconsistent or invalid",
		    (void *)vmp, size, align_arg, phase, nocross,
		    minaddr, maxaddr, vmflag);

	if ((mtbf = vmem_mtbf | vmp->vm_mtbf) != 0 && gethrtime() % mtbf == 0 &&
	    (vmflag & (VM_NOSLEEP | VM_PANIC)) == VM_NOSLEEP)
		return (NULL);

	mutex_enter(&vmp->vm_lock);
	for (;;) {
		if (vmp->vm_nsegfree < VMEM_MINFREE &&
		    !vmem_populate(vmp, vmflag))
			break;
do_alloc:
		/*
		 * highbit() returns the highest bit + 1, which is exactly
		 * what we want: we want to search the first freelist whose
		 * members are *definitely* large enough to satisfy our
		 * allocation.  However, there are certain cases in which we
		 * want to look at the next-smallest freelist (which *might*
		 * be able to satisfy the allocation):
		 *
		 * (1)	The size is exactly a power of 2, in which case
		 *	the smaller freelist is always big enough;
		 *
		 * (2)	All other freelists are empty;
		 *
		 * (3)	We're in the highest possible freelist, which is
		 *	always empty (e.g. the 4GB freelist on 32-bit systems);
		 *
		 * (4)	We're doing a best-fit or first-fit allocation.
		 */
		if ((size & (size - 1)) == 0) {
			flist = lowbit(P2ALIGN(vmp->vm_freemap, size));
		} else {
			hb = highbit(size);
			if ((vmp->vm_freemap >> hb) == 0 ||
			    hb == VMEM_FREELISTS ||
			    (vmflag & (VM_BESTFIT | VM_FIRSTFIT)))
				hb--;
			flist = lowbit(P2ALIGN(vmp->vm_freemap, 1UL << hb));
		}
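		/*
		 * Worked example (illustrative, not part of the original
		 * source): for a hypothetical size of 3000 bytes,
		 * highbit(3000) is 12, so the instant-fit policy above
		 * starts at freelist 12, whose segments all lie in [4K, 8K)
		 * and are therefore *definitely* big enough.  With
		 * VM_BESTFIT set, hb is decremented to 11, so the [2K, 4K)
		 * freelist -- whose segments only *might* be big enough --
		 * is searched as well.
		 */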
		for (vbest = NULL, vsp = (flist == 0) ? NULL :
		    vmp->vm_freelist[flist - 1].vs_knext;
		    vsp != NULL; vsp = vsp->vs_knext) {
			vmp->vm_kstat.vk_search.value.ui64++;
			if (vsp->vs_start == 0) {
				/*
				 * We're moving up to a larger freelist,
				 * so if we've already found a candidate,
				 * the fit can't possibly get any better.
				 */
				if (vbest != NULL)
					break;
				/*
				 * Find the next non-empty freelist.
				 */
				flist = lowbit(P2ALIGN(vmp->vm_freemap,
				    VS_SIZE(vsp)));
				if (flist-- == 0)
					break;
				vsp = (vmem_seg_t *)&vmp->vm_freelist[flist];
				ASSERT(vsp->vs_knext->vs_type == VMEM_FREE);
				continue;
			}
			if (vsp->vs_end - 1 < (uintptr_t)minaddr)
				continue;
			if (vsp->vs_start > (uintptr_t)maxaddr - 1)
				continue;
			start = MAX(vsp->vs_start, (uintptr_t)minaddr);
			end = MIN(vsp->vs_end - 1, (uintptr_t)maxaddr - 1) + 1;
			taddr = P2PHASEUP(start, align, phase);
			if (P2CROSS(taddr, taddr + size - 1, nocross))
				taddr +=
				    P2ROUNDUP(P2NPHASE(taddr, nocross), align);
			if ((taddr - start) + size > end - start ||
			    (vbest != NULL && VS_SIZE(vsp) >= VS_SIZE(vbest)))
				continue;
			vbest = vsp;
			addr = taddr;
			if (!(vmflag & VM_BESTFIT) || VS_SIZE(vbest) == size)
				break;
		}
		if (vbest != NULL)
			break;
		ASSERT(xvaddr == NULL);
		if (size == 0)
			panic("vmem_xalloc(): size == 0");
		if (vmp->vm_source_alloc != NULL && nocross == 0 &&
		    minaddr == NULL && maxaddr == NULL) {
			size_t aneeded, asize;
			size_t aquantum = MAX(vmp->vm_quantum,
			    vmp->vm_source->vm_quantum);
			size_t aphase = phase;
			if (align > aquantum) {
				aphase = (P2PHASE(phase, aquantum) != 0) ?
				    align - vmp->vm_quantum : align - aquantum;
				ASSERT(aphase >= phase);
			}
			aneeded = MAX(size + aphase, vmp->vm_min_import);
			asize = P2ROUNDUP(aneeded, aquantum);

			/*
			 * Determine how many segment structures we'll consume.
			 * The calculation must be precise because if we're
			 * here on behalf of vmem_populate(), we are taking
			 * segments from a very limited reserve.
			 */
			if (size == asize && !(vmp->vm_cflags & VMC_XALLOC))
				resv = VMEM_SEGS_PER_SPAN_CREATE +
				    VMEM_SEGS_PER_EXACT_ALLOC;
			else if (phase == 0 &&
			    align <= vmp->vm_source->vm_quantum)
				resv = VMEM_SEGS_PER_SPAN_CREATE +
				    VMEM_SEGS_PER_LEFT_ALLOC;
			else
				resv = VMEM_SEGS_PER_ALLOC_MAX;

			ASSERT(vmp->vm_nsegfree >= resv);
			vmp->vm_nsegfree -= resv;	/* reserve our segs */
			mutex_exit(&vmp->vm_lock);
			if (vmp->vm_cflags & VMC_XALLOC) {
				size_t oasize = asize;
				vaddr = ((vmem_ximport_t *)
				    vmp->vm_source_alloc)(vmp->vm_source,
				    &asize, vmflag & VM_KMFLAGS);
				ASSERT(asize >= oasize);
				ASSERT(P2PHASE(asize,
				    vmp->vm_source->vm_quantum) == 0);
			} else {
				vaddr = vmp->vm_source_alloc(vmp->vm_source,
				    asize, vmflag & VM_KMFLAGS);
			}
			mutex_enter(&vmp->vm_lock);
			vmp->vm_nsegfree += resv;	/* claim reservation */
			aneeded = size + align - vmp->vm_quantum;
			aneeded = P2ROUNDUP(aneeded, vmp->vm_quantum);
			if (vaddr != NULL) {
				/*
				 * Since we dropped the vmem lock while
				 * calling the import function, other
				 * threads could have imported space
				 * and made our import unnecessary.  In
				 * order to save space, we return
				 * excess imports immediately.
				 */
				if (asize > aneeded &&
				    vmp->vm_source_free != NULL &&
				    vmem_canalloc(vmp, aneeded)) {
					ASSERT(resv >=
					    VMEM_SEGS_PER_MIDDLE_ALLOC);
					xvaddr = vaddr;
					xsize = asize;
					goto do_alloc;
				}
				vbest = vmem_span_create(vmp, vaddr, asize, 1);
				addr = P2PHASEUP(vbest->vs_start, align, phase);
				break;
			} else if (vmem_canalloc(vmp, aneeded)) {
				/*
				 * Our import failed, but another thread
				 * added sufficient free memory to the arena
				 * to satisfy our request.  Go back and
				 * grab it.
				 */
				ASSERT(resv >= VMEM_SEGS_PER_MIDDLE_ALLOC);
				goto do_alloc;
			}
		}

		/*
		 * If the requestor chooses to fail the allocation attempt
		 * rather than reap, wait, and retry -- get out of the loop.
		 */
		if (vmflag & VM_ABORT)
			break;
		mutex_exit(&vmp->vm_lock);
		if (vmp->vm_cflags & VMC_IDENTIFIER)
			kmem_reap_idspace();
		else
			kmem_reap();
		mutex_enter(&vmp->vm_lock);
		if (vmflag & VM_NOSLEEP)
			break;
		vmp->vm_kstat.vk_wait.value.ui64++;
		cv_wait(&vmp->vm_cv, &vmp->vm_lock);
	}
	if (vbest != NULL) {
		ASSERT(vbest->vs_type == VMEM_FREE);
		ASSERT(vbest->vs_knext != vbest);
		(void) vmem_seg_alloc(vmp, vbest, addr, size);
		mutex_exit(&vmp->vm_lock);
		if (xvaddr)
			vmp->vm_source_free(vmp->vm_source, xvaddr, xsize);
		ASSERT(P2PHASE(addr, align) == phase);
		ASSERT(!P2CROSS(addr, addr + size - 1, nocross));
		ASSERT(addr >= (uintptr_t)minaddr);
		ASSERT(addr + size - 1 <= (uintptr_t)maxaddr - 1);
		return ((void *)addr);
	}
	vmp->vm_kstat.vk_fail.value.ui64++;
	mutex_exit(&vmp->vm_lock);
	if (vmflag & VM_PANIC)
		panic("vmem_xalloc(%p, %lu, %lu, %lu, %lu, %p, %p, %x): "
		    "cannot satisfy mandatory allocation",
		    (void *)vmp, size, align_arg, phase, nocross,
		    minaddr, maxaddr, vmflag);
	ASSERT(xvaddr == NULL);
	return (NULL);
}

/*
 * Free the segment [vaddr, vaddr + size), where vaddr was a constrained
 * allocation.  vmem_xalloc() and vmem_xfree() must always be paired because
 * both routines bypass the quantum caches.
 */
void
vmem_xfree(vmem_t *vmp, void *vaddr, size_t size)
{
	vmem_seg_t *vsp, *vnext, *vprev;

	mutex_enter(&vmp->vm_lock);

	vsp = vmem_hash_delete(vmp, (uintptr_t)vaddr, size);
	vsp->vs_end = P2ROUNDUP(vsp->vs_end, vmp->vm_quantum);

	/*
	 * Attempt to coalesce with the next segment.
	 */
	vnext = vsp->vs_anext;
	if (vnext->vs_type == VMEM_FREE) {
		ASSERT(vsp->vs_end == vnext->vs_start);
		vmem_freelist_delete(vmp, vnext);
		vsp->vs_end = vnext->vs_end;
		vmem_seg_destroy(vmp, vnext);
	}

	/*
	 * Attempt to coalesce with the previous segment.
	 */
	vprev = vsp->vs_aprev;
	if (vprev->vs_type == VMEM_FREE) {
		ASSERT(vprev->vs_end == vsp->vs_start);
		vmem_freelist_delete(vmp, vprev);
		vprev->vs_end = vsp->vs_end;
		vmem_seg_destroy(vmp, vsp);
		vsp = vprev;
	}

	/*
	 * If the entire span is free, return it to the source.
	 */
	if (vsp->vs_aprev->vs_import && vmp->vm_source_free != NULL &&
	    vsp->vs_aprev->vs_type == VMEM_SPAN &&
	    vsp->vs_anext->vs_type == VMEM_SPAN) {
		vaddr = (void *)vsp->vs_start;
		size = VS_SIZE(vsp);
		ASSERT(size == VS_SIZE(vsp->vs_aprev));
		vmem_span_destroy(vmp, vsp);
		mutex_exit(&vmp->vm_lock);
		vmp->vm_source_free(vmp->vm_source, vaddr, size);
	} else {
		vmem_freelist_insert(vmp, vsp);
		mutex_exit(&vmp->vm_lock);
	}
}
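/*
 * Usage sketch (illustrative only; "foo_arena" and the constraint values
 * are hypothetical): a caller needing a 64K buffer that is 64K-aligned and
 * must not cross a 1M boundary might write:
 *
 *	void *buf = vmem_xalloc(foo_arena, 65536, 65536, 0, 1048576,
 *	    NULL, NULL, VM_SLEEP);
 *	...
 *	vmem_xfree(foo_arena, buf, 65536);
 *
 * The release must go through vmem_xfree() rather than vmem_free(),
 * since constrained allocations bypass the quantum caches on both the
 * allocation and the free path.
 */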
/*
 * Allocate size bytes from arena vmp.  Returns the allocated address
 * on success, NULL on failure.  vmflag specifies VM_SLEEP or VM_NOSLEEP,
 * and may also specify best-fit, first-fit, or next-fit allocation policy
 * instead of the default instant-fit policy.  VM_SLEEP allocations are
 * guaranteed to succeed.
 */
void *
vmem_alloc(vmem_t *vmp, size_t size, int vmflag)
{
	vmem_seg_t *vsp;
	uintptr_t addr;
	int hb;
	int flist = 0;
	uint32_t mtbf;

	if (size - 1 < vmp->vm_qcache_max)
		return (kmem_cache_alloc(vmp->vm_qcache[(size - 1) >>
		    vmp->vm_qshift], vmflag & VM_KMFLAGS));

	if ((mtbf = vmem_mtbf | vmp->vm_mtbf) != 0 && gethrtime() % mtbf == 0 &&
	    (vmflag & (VM_NOSLEEP | VM_PANIC)) == VM_NOSLEEP)
		return (NULL);

	if (vmflag & VM_NEXTFIT)
		return (vmem_nextfit_alloc(vmp, size, vmflag));

	if (vmflag & (VM_BESTFIT | VM_FIRSTFIT))
		return (vmem_xalloc(vmp, size, vmp->vm_quantum, 0, 0,
		    NULL, NULL, vmflag));

	/*
	 * Unconstrained instant-fit allocation from the segment list.
	 */
	mutex_enter(&vmp->vm_lock);

	if (vmp->vm_nsegfree >= VMEM_MINFREE || vmem_populate(vmp, vmflag)) {
		if ((size & (size - 1)) == 0)
			flist = lowbit(P2ALIGN(vmp->vm_freemap, size));
		else if ((hb = highbit(size)) < VMEM_FREELISTS)
			flist = lowbit(P2ALIGN(vmp->vm_freemap, 1UL << hb));
	}

	if (flist-- == 0) {
		mutex_exit(&vmp->vm_lock);
		return (vmem_xalloc(vmp, size, vmp->vm_quantum,
		    0, 0, NULL, NULL, vmflag));
	}

	ASSERT(size <= (1UL << flist));
	vsp = vmp->vm_freelist[flist].vs_knext;
	addr = vsp->vs_start;
	(void) vmem_seg_alloc(vmp, vsp, addr, size);
	mutex_exit(&vmp->vm_lock);
	return ((void *)addr);
}
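/*
 * Usage sketch (illustrative only; "foo_arena" is hypothetical): small
 * requests are satisfied from the quantum caches, and the fit policy is
 * chosen per call:
 *
 *	void *a = vmem_alloc(foo_arena, 8192, VM_SLEEP);	// instant fit
 *	void *b = vmem_alloc(foo_arena, 8192, VM_NOSLEEP | VM_BESTFIT);
 *
 *	if (b != NULL)
 *		vmem_free(foo_arena, b, 8192);
 *	vmem_free(foo_arena, a, 8192);
 *
 * Note that the caller must remember the size: vmem looks segments up by
 * (address, size) and does not record the size on the caller's behalf.
 */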
/*
 * Free the segment [vaddr, vaddr + size).
 */
void
vmem_free(vmem_t *vmp, void *vaddr, size_t size)
{
	if (size - 1 < vmp->vm_qcache_max)
		kmem_cache_free(vmp->vm_qcache[(size - 1) >> vmp->vm_qshift],
		    vaddr);
	else
		vmem_xfree(vmp, vaddr, size);
}

/*
 * Determine whether arena vmp contains the segment [vaddr, vaddr + size).
 */
int
vmem_contains(vmem_t *vmp, void *vaddr, size_t size)
{
	uintptr_t start = (uintptr_t)vaddr;
	uintptr_t end = start + size;
	vmem_seg_t *vsp;
	vmem_seg_t *seg0 = &vmp->vm_seg0;

	mutex_enter(&vmp->vm_lock);
	vmp->vm_kstat.vk_contains.value.ui64++;
	for (vsp = seg0->vs_knext; vsp != seg0; vsp = vsp->vs_knext) {
		vmp->vm_kstat.vk_contains_search.value.ui64++;
		ASSERT(vsp->vs_type == VMEM_SPAN);
		if (start >= vsp->vs_start && end - 1 <= vsp->vs_end - 1)
			break;
	}
	mutex_exit(&vmp->vm_lock);
	return (vsp != seg0);
}

/*
 * Add the span [vaddr, vaddr + size) to arena vmp.
 */
void *
vmem_add(vmem_t *vmp, void *vaddr, size_t size, int vmflag)
{
	if (vaddr == NULL || size == 0)
		panic("vmem_add(%p, %p, %lu): bad arguments",
		    vmp, vaddr, size);

	ASSERT(!vmem_contains(vmp, vaddr, size));

	mutex_enter(&vmp->vm_lock);
	if (vmem_populate(vmp, vmflag))
		(void) vmem_span_create(vmp, vaddr, size, 0);
	else
		vaddr = NULL;
	mutex_exit(&vmp->vm_lock);
	return (vaddr);
}
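/*
 * Usage sketch (illustrative only; the arena and addresses are
 * hypothetical): spans need not be contiguous, so an arena can grow
 * piecewise as resources are discovered:
 *
 *	(void) vmem_add(foo_arena, (void *)0x10000, 0x8000, VM_SLEEP);
 *	(void) vmem_add(foo_arena, (void *)0x30000, 0x4000, VM_SLEEP);
 *	ASSERT(vmem_contains(foo_arena, (void *)0x10000, 0x8000));
 *
 * vmem_add() returns NULL only when segment metadata cannot be populated,
 * which should not happen for VM_SLEEP callers.
 */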
/*
 * Walk the vmp arena, applying func to each segment matching typemask.
 * If VMEM_REENTRANT is specified, the arena lock is dropped across each
 * call to func(); otherwise, it is held for the duration of vmem_walk()
 * to ensure a consistent snapshot.  Note that VMEM_REENTRANT callbacks
 * are *not* necessarily consistent, so they may only be used when a hint
 * is adequate.
 */
void
vmem_walk(vmem_t *vmp, int typemask,
    void (*func)(void *, void *, size_t), void *arg)
{
	vmem_seg_t *vsp;
	vmem_seg_t *seg0 = &vmp->vm_seg0;
	vmem_seg_t walker;

	if (typemask & VMEM_WALKER)
		return;

	bzero(&walker, sizeof (walker));
	walker.vs_type = VMEM_WALKER;

	mutex_enter(&vmp->vm_lock);
	VMEM_INSERT(seg0, &walker, a);
	for (vsp = seg0->vs_anext; vsp != seg0; vsp = vsp->vs_anext) {
		if (vsp->vs_type & typemask) {
			void *start = (void *)vsp->vs_start;
			size_t size = VS_SIZE(vsp);
			if (typemask & VMEM_REENTRANT) {
				vmem_advance(vmp, &walker, vsp);
				mutex_exit(&vmp->vm_lock);
				func(arg, start, size);
				mutex_enter(&vmp->vm_lock);
				vsp = &walker;
			} else {
				func(arg, start, size);
			}
		}
	}
	vmem_advance(vmp, &walker, NULL);
	mutex_exit(&vmp->vm_lock);
}

/*
 * Return the total amount of memory whose type matches typemask.  Thus:
 *
 *	typemask VMEM_ALLOC yields total memory allocated (in use).
 *	typemask VMEM_FREE yields total memory free (available).
 *	typemask (VMEM_ALLOC | VMEM_FREE) yields total arena size.
 */
size_t
vmem_size(vmem_t *vmp, int typemask)
{
	uint64_t size = 0;

	if (typemask & VMEM_ALLOC)
		size += vmp->vm_kstat.vk_mem_inuse.value.ui64;
	if (typemask & VMEM_FREE)
		size += vmp->vm_kstat.vk_mem_total.value.ui64 -
		    vmp->vm_kstat.vk_mem_inuse.value.ui64;
	return ((size_t)size);
}
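/*
 * Usage sketch (illustrative only; foo_arena and foo_count_seg are
 * hypothetical): a caller can total the free segments in an arena by
 * walking it with a simple callback:
 *
 *	static void
 *	foo_count_seg(void *arg, void *start, size_t size)
 *	{
 *		*(size_t *)arg += size;
 *	}
 *
 *	size_t free_bytes = 0;
 *	vmem_walk(foo_arena, VMEM_FREE, foo_count_seg, &free_bytes);
 *	// free_bytes should agree with vmem_size(foo_arena, VMEM_FREE)
 *
 * Because the lock is held for the whole walk here (no VMEM_REENTRANT),
 * the total is a consistent snapshot.
 */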
/*
 * Create an arena called name whose initial span is [base, base + size).
 * The arena's natural unit of currency is quantum, so vmem_alloc()
 * guarantees quantum-aligned results.  The arena may import new spans
 * by invoking afunc() on source, and may return those spans by invoking
 * ffunc() on source.  To make small allocations fast and scalable,
 * the arena offers high-performance caching for each integer multiple
 * of quantum up to qcache_max.
 */
static vmem_t *
vmem_create_common(const char *name, void *base, size_t size, size_t quantum,
    void *(*afunc)(vmem_t *, size_t, int),
    void (*ffunc)(vmem_t *, void *, size_t),
    vmem_t *source, size_t qcache_max, int vmflag)
{
	int i;
	size_t nqcache;
	vmem_t *vmp, *cur, **vmpp;
	vmem_seg_t *vsp;
	vmem_freelist_t *vfp;
	uint32_t id = atomic_add_32_nv(&vmem_id, 1);

	if (vmem_vmem_arena != NULL) {
		vmp = vmem_alloc(vmem_vmem_arena, sizeof (vmem_t),
		    vmflag & VM_KMFLAGS);
	} else {
		ASSERT(id <= VMEM_INITIAL);
		vmp = &vmem0[id - 1];
	}

	/* An identifier arena must inherit from another identifier arena. */
	ASSERT(source == NULL || ((source->vm_cflags & VMC_IDENTIFIER) ==
	    (vmflag & VMC_IDENTIFIER)));

	if (vmp == NULL)
		return (NULL);
	bzero(vmp, sizeof (vmem_t));

	(void) snprintf(vmp->vm_name, VMEM_NAMELEN, "%s", name);
	mutex_init(&vmp->vm_lock, NULL, MUTEX_DEFAULT, NULL);
	cv_init(&vmp->vm_cv, NULL, CV_DEFAULT, NULL);
	vmp->vm_cflags = vmflag;
	vmflag &= VM_KMFLAGS;

	vmp->vm_quantum = quantum;
	vmp->vm_qshift = highbit(quantum) - 1;
	nqcache = MIN(qcache_max >> vmp->vm_qshift, VMEM_NQCACHE_MAX);

	for (i = 0; i <= VMEM_FREELISTS; i++) {
		vfp = &vmp->vm_freelist[i];
		vfp->vs_end = 1UL << i;
		vfp->vs_knext = (vmem_seg_t *)(vfp + 1);
		vfp->vs_kprev = (vmem_seg_t *)(vfp - 1);
	}

	vmp->vm_freelist[0].vs_kprev = NULL;
	vmp->vm_freelist[VMEM_FREELISTS].vs_knext = NULL;
	vmp->vm_freelist[VMEM_FREELISTS].vs_end = 0;
	vmp->vm_hash_table = vmp->vm_hash0;
	vmp->vm_hash_mask = VMEM_HASH_INITIAL - 1;
	vmp->vm_hash_shift = highbit(vmp->vm_hash_mask);

	vsp = &vmp->vm_seg0;
	vsp->vs_anext = vsp;
	vsp->vs_aprev = vsp;
	vsp->vs_knext = vsp;
	vsp->vs_kprev = vsp;
	vsp->vs_type = VMEM_SPAN;

	vsp = &vmp->vm_rotor;
	vsp->vs_type = VMEM_ROTOR;
	VMEM_INSERT(&vmp->vm_seg0, vsp, a);

	bcopy(&vmem_kstat_template, &vmp->vm_kstat, sizeof (vmem_kstat_t));

	vmp->vm_id = id;
	if (source != NULL)
		vmp->vm_kstat.vk_source_id.value.ui32 = source->vm_id;
	vmp->vm_source = source;
	vmp->vm_source_alloc = afunc;
	vmp->vm_source_free = ffunc;

	/*
	 * Some arenas (like vmem_metadata and kmem_metadata) cannot
	 * use quantum caching to lower fragmentation.  Instead, we
	 * increase their imports, giving a similar effect.
	 */
	if (vmp->vm_cflags & VMC_NO_QCACHE) {
		vmp->vm_min_import =
		    VMEM_QCACHE_SLABSIZE(nqcache << vmp->vm_qshift);
		nqcache = 0;
	}

	if (nqcache != 0) {
		ASSERT(!(vmflag & VM_NOSLEEP));
		vmp->vm_qcache_max = nqcache << vmp->vm_qshift;
		for (i = 0; i < nqcache; i++) {
			char buf[VMEM_NAMELEN + 21];
			(void) sprintf(buf, "%s_%lu", vmp->vm_name,
			    (i + 1) * quantum);
			vmp->vm_qcache[i] = kmem_cache_create(buf,
			    (i + 1) * quantum, quantum, NULL, NULL, NULL,
			    NULL, vmp, KMC_QCACHE | KMC_NOTOUCH);
		}
	}

	if ((vmp->vm_ksp = kstat_create("vmem", vmp->vm_id, vmp->vm_name,
	    "vmem", KSTAT_TYPE_NAMED, sizeof (vmem_kstat_t) /
	    sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL)) != NULL) {
		vmp->vm_ksp->ks_data = &vmp->vm_kstat;
		kstat_install(vmp->vm_ksp);
	}

	mutex_enter(&vmem_list_lock);
	vmpp = &vmem_list;
	while ((cur = *vmpp) != NULL)
		vmpp = &cur->vm_next;
	*vmpp = vmp;
	mutex_exit(&vmem_list_lock);

	if (vmp->vm_cflags & VMC_POPULATOR) {
		ASSERT(vmem_populators < VMEM_INITIAL);
		vmem_populator[atomic_add_32_nv(&vmem_populators, 1) - 1] =
		    vmp;
		mutex_enter(&vmp->vm_lock);
		(void) vmem_populate(vmp, vmflag | VM_PANIC);
		mutex_exit(&vmp->vm_lock);
	}

	if ((base || size) && vmem_add(vmp, base, size, vmflag) == NULL) {
		vmem_destroy(vmp);
		return (NULL);
	}

	return (vmp);
}

vmem_t *
vmem_xcreate(const char *name, void *base, size_t size, size_t quantum,
    vmem_ximport_t *afunc, vmem_free_t *ffunc, vmem_t *source,
    size_t qcache_max, int vmflag)
{
	ASSERT(!(vmflag & (VMC_POPULATOR | VMC_XALLOC)));
	vmflag &= ~(VMC_POPULATOR | VMC_XALLOC);

	return (vmem_create_common(name, base, size, quantum,
	    (vmem_alloc_t *)afunc, ffunc, source, qcache_max,
	    vmflag | VMC_XALLOC));
}

vmem_t *
vmem_create(const char *name, void *base, size_t size, size_t quantum,
    vmem_alloc_t *afunc, vmem_free_t *ffunc, vmem_t *source,
    size_t qcache_max, int vmflag)
{
	ASSERT(!(vmflag & VMC_XALLOC));
	vmflag &= ~VMC_XALLOC;

	return (vmem_create_common(name, base, size, quantum,
	    afunc, ffunc, source, qcache_max, vmflag));
}
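/*
 * Usage sketch (illustrative only; the names and numbers are hypothetical):
 * create a page-granular arena over [64M, 64M + 16M) with quantum caching
 * for allocations up to 8 pages, then carve a sub-arena that imports from
 * it:
 *
 *	vmem_t *foo = vmem_create("foo", (void *)0x4000000,
 *	    16 * 1024 * 1024, PAGESIZE, NULL, NULL, NULL,
 *	    8 * PAGESIZE, VM_SLEEP);
 *
 *	vmem_t *foo_sub = vmem_create("foo_sub", NULL, 0, PAGESIZE,
 *	    vmem_alloc, vmem_free, foo, 0, VM_SLEEP);
 *
 * foo_sub starts empty and pulls spans from foo on demand through its
 * afunc/ffunc pair, returning them when they become entirely free.
 */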
/*
 * Destroy arena vmp.
 */
void
vmem_destroy(vmem_t *vmp)
{
	vmem_t *cur, **vmpp;
	vmem_seg_t *seg0 = &vmp->vm_seg0;
	vmem_seg_t *vsp;
	size_t leaked;
	int i;

	mutex_enter(&vmem_list_lock);
	vmpp = &vmem_list;
	while ((cur = *vmpp) != vmp)
		vmpp = &cur->vm_next;
	*vmpp = vmp->vm_next;
	mutex_exit(&vmem_list_lock);

	for (i = 0; i < VMEM_NQCACHE_MAX; i++)
		if (vmp->vm_qcache[i])
			kmem_cache_destroy(vmp->vm_qcache[i]);

	leaked = vmem_size(vmp, VMEM_ALLOC);
	if (leaked != 0)
		cmn_err(CE_WARN, "vmem_destroy('%s'): leaked %lu %s",
		    vmp->vm_name, leaked, (vmp->vm_cflags & VMC_IDENTIFIER) ?
		    "identifiers" : "bytes");

	if (vmp->vm_hash_table != vmp->vm_hash0)
		vmem_free(vmem_hash_arena, vmp->vm_hash_table,
		    (vmp->vm_hash_mask + 1) * sizeof (void *));

	/*
	 * Give back the segment structures for anything that's left in the
	 * arena, e.g. the primary spans and their free segments.
	 */
	VMEM_DELETE(&vmp->vm_rotor, a);
	for (vsp = seg0->vs_anext; vsp != seg0; vsp = vsp->vs_anext)
		vmem_putseg_global(vsp);

	while (vmp->vm_nsegfree > 0)
		vmem_putseg_global(vmem_getseg(vmp));

	kstat_delete(vmp->vm_ksp);

	mutex_destroy(&vmp->vm_lock);
	cv_destroy(&vmp->vm_cv);
	vmem_free(vmem_vmem_arena, vmp, sizeof (vmem_t));
}
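/*
 * Usage note (illustrative; foo_arena is hypothetical): all allocations
 * should be returned before the arena is destroyed, or vmem_destroy()
 * will warn about the leak:
 *
 *	void *p = vmem_alloc(foo_arena, PAGESIZE, VM_SLEEP);
 *	vmem_free(foo_arena, p, PAGESIZE);
 *	vmem_destroy(foo_arena);	// no "leaked ... bytes" warning
 *
 * Since importing arenas hold spans belonging to their source, a source
 * arena should outlive its importers; tear down the arena tree leaf-first.
 */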
/*
 * Resize vmp's hash table to keep the average lookup depth near 1.0.
 */
static void
vmem_hash_rescale(vmem_t *vmp)
{
	vmem_seg_t **old_table, **new_table, *vsp;
	size_t old_size, new_size, h, nseg;

	nseg = (size_t)(vmp->vm_kstat.vk_alloc.value.ui64 -
	    vmp->vm_kstat.vk_free.value.ui64);

	new_size = MAX(VMEM_HASH_INITIAL, 1 << (highbit(3 * nseg + 4) - 2));
	old_size = vmp->vm_hash_mask + 1;

	if ((old_size >> 1) <= new_size && new_size <= (old_size << 1))
		return;

	new_table = vmem_alloc(vmem_hash_arena, new_size * sizeof (void *),
	    VM_NOSLEEP);
	if (new_table == NULL)
		return;
	bzero(new_table, new_size * sizeof (void *));

	mutex_enter(&vmp->vm_lock);

	old_size = vmp->vm_hash_mask + 1;
	old_table = vmp->vm_hash_table;

	vmp->vm_hash_mask = new_size - 1;
	vmp->vm_hash_table = new_table;
	vmp->vm_hash_shift = highbit(vmp->vm_hash_mask);

	for (h = 0; h < old_size; h++) {
		vsp = old_table[h];
		while (vsp != NULL) {
			uintptr_t addr = vsp->vs_start;
			vmem_seg_t *next_vsp = vsp->vs_knext;
			vmem_seg_t **hash_bucket = VMEM_HASH(vmp, addr);
			vsp->vs_knext = *hash_bucket;
			*hash_bucket = vsp;
			vsp = next_vsp;
		}
	}

	mutex_exit(&vmp->vm_lock);

	if (old_table != vmp->vm_hash0)
		vmem_free(vmem_hash_arena, old_table,
		    old_size * sizeof (void *));
}
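/*
 * Worked example of the sizing formula above (illustrative): new_size is
 * the largest power of 2 not exceeding (3 * nseg + 4) / 2, i.e. between
 * roughly 0.75 and 1.5 buckets per live segment.  For nseg = 1000,
 * 3 * nseg + 4 = 3004 and highbit(3004) = 12, so new_size becomes
 * 1 << 10 = 1024 buckets -- about one per segment, which keeps the
 * expected hash chain length (and hence lookup depth) near 1.0.  The
 * factor-of-2 tolerance check above prevents resize thrashing as nseg
 * drifts up and down.
 */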
/*
 * Perform periodic maintenance on all vmem arenas.
 */
void
vmem_update(void *dummy)
{
	vmem_t *vmp;

	mutex_enter(&vmem_list_lock);
	for (vmp = vmem_list; vmp != NULL; vmp = vmp->vm_next) {
		/*
		 * If threads are waiting for resources, wake them up
		 * periodically so they can issue another kmem_reap()
		 * to reclaim resources cached by the slab allocator.
		 */
		cv_broadcast(&vmp->vm_cv);

		/*
		 * Rescale the hash table to keep the hash chains short.
		 */
		vmem_hash_rescale(vmp);
	}
	mutex_exit(&vmem_list_lock);

	(void) timeout(vmem_update, dummy, vmem_update_interval * hz);
}

/*
 * Prepare vmem for use.
 */
vmem_t *
vmem_init(const char *heap_name,
    void *heap_start, size_t heap_size, size_t heap_quantum,
    void *(*heap_alloc)(vmem_t *, size_t, int),
    void (*heap_free)(vmem_t *, void *, size_t))
{
	uint32_t id;
	int nseg = VMEM_SEG_INITIAL;
	vmem_t *heap;

	while (--nseg >= 0)
		vmem_putseg_global(&vmem_seg0[nseg]);

	heap = vmem_create(heap_name,
	    heap_start, heap_size, heap_quantum,
	    NULL, NULL, NULL, 0,
	    VM_SLEEP | VMC_POPULATOR);

	vmem_metadata_arena = vmem_create("vmem_metadata",
	    NULL, 0, heap_quantum,
	    vmem_alloc, vmem_free, heap, 8 * heap_quantum,
	    VM_SLEEP | VMC_POPULATOR | VMC_NO_QCACHE);

	vmem_seg_arena = vmem_create("vmem_seg",
	    NULL, 0, heap_quantum,
	    heap_alloc, heap_free, vmem_metadata_arena, 0,
	    VM_SLEEP | VMC_POPULATOR);

	vmem_hash_arena = vmem_create("vmem_hash",
	    NULL, 0, 8,
	    heap_alloc, heap_free, vmem_metadata_arena, 0,
	    VM_SLEEP);

	vmem_vmem_arena = vmem_create("vmem_vmem",
	    vmem0, sizeof (vmem0), 1,
	    heap_alloc, heap_free, vmem_metadata_arena, 0,
	    VM_SLEEP);

	for (id = 0; id < vmem_id; id++)
		(void) vmem_xalloc(vmem_vmem_arena, sizeof (vmem_t),
		    1, 0, 0, &vmem0[id], &vmem0[id + 1],
		    VM_NOSLEEP | VM_BESTFIT | VM_PANIC);

	return (heap);
}
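/*
 * Usage sketch (illustrative only; the arguments shown are an assumption
 * about a typical platform, not a requirement of this file): startup code
 * seeds the allocator once, handing it the initial heap span and the
 * functions that back metadata allocation:
 *
 *	heap_arena = vmem_init("heap",
 *	    kernelheap, ekernelheap - kernelheap, PAGESIZE,
 *	    segkmem_alloc, segkmem_free);
 *
 * After this call the bootstrap arenas (vmem_metadata, vmem_seg,
 * vmem_hash, vmem_vmem) exist, and further arenas can be built on top
 * of the returned heap arena with vmem_create().
 */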