/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */

#include <sys/types.h>
#include <sys/cmn_err.h>
#include <sys/vm.h>
#include <sys/mman.h>
#include <vm/vm_dep.h>
#include <vm/seg_kmem.h>
#include <vm/seg_kpm.h>
#include <sys/mem_config.h>
#include <sys/sysmacros.h>

extern pgcnt_t pp_dummy_npages;
extern pfn_t *pp_dummy_pfn;	/* Array of dummy pfns. */

extern kmutex_t memseg_lists_lock;
extern struct memseg *memseg_va_avail;
extern struct memseg *memseg_alloc();

extern page_t *ppvm_base;
extern pgcnt_t ppvm_size;

static vnode_t pp_vn, rsv_vn;
static pgcnt_t rsv_metapgs;
static int meta_rsv_enable;
static int sun4v_memseg_debug;

extern struct memseg *memseg_reuse(pgcnt_t);
extern void remap_to_dummy(caddr_t, pgcnt_t);

/*
 * The page_t memory for incoming pages is allocated from existing memory,
 * which can create a potential situation where memory addition fails
 * because of a shortage of existing memory. To mitigate this situation,
 * some memory is always reserved ahead of time for page_t allocation.
 * Each 4MB of reserved page_t's guarantees a 256MB (x64) addition without
 * page_t allocation. The added 256MB of memory could theoretically
 * allow an addition of 16GB.
 */
#define	RSV_SIZE	0x40000000	/* add size covered by reserved page_t's (1G) */
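/*
 * Illustrative arithmetic (assuming an 8K PAGESIZE, the sun4v base page
 * size): the x64 ratio above means sizeof (page_t) == PAGESIZE / 64,
 * i.e. 128 bytes. Covering a full RSV_SIZE (1G) add then takes
 * btop(1G) == 131072 page_t's, which is 16MB (2048 pages) of reserved
 * metadata, matching four of the 4MB-per-256MB units described above.
 */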

#ifdef DEBUG
#define	MEMSEG_DEBUG(args...)	if (sun4v_memseg_debug) printf(args)
#else
#define	MEMSEG_DEBUG(...)
#endif

/*
 * The page_t's for the incoming memory are allocated from
 * existing pages.
 */
/*ARGSUSED*/
int
memseg_alloc_meta(pfn_t base, pgcnt_t npgs, void **ptp, pgcnt_t *metap)
{
	page_t *pp, *opp, *epp, *pgpp;
	pgcnt_t metapgs;
	int i, rsv;
	struct seg kseg;
	caddr_t vaddr;
	u_offset_t off;

	/*
	 * Verify incoming memory is within supported DR range.
	 */
	if ((base + npgs) * sizeof (page_t) > ppvm_size)
		return (KPHYSM_ENOTSUP);

	opp = pp = ppvm_base + base;
	epp = pp + npgs;
	metapgs = btopr(npgs * sizeof (page_t));

	if (!IS_P2ALIGNED((uint64_t)pp, PAGESIZE) &&
	    page_find(&pp_vn, (u_offset_t)pp)) {
		/*
		 * Another memseg has page_t's in the same
		 * page in which 'pp' resides. This would happen
		 * if PAGESIZE is not an integral multiple of
		 * sizeof (page_t) and therefore 'pp'
		 * does not start on a page boundary.
		 *
		 * Since the other memseg's page_t's still
		 * map valid pages, skip allocation of this page.
		 * Advance 'pp' to the next page, which should
		 * belong only to the incoming memseg.
		 *
		 * If the last page_t in the current page
		 * crosses a page boundary, this should still
		 * work. The first part of the page_t is
		 * already allocated. The second part of
		 * the page_t will be allocated below.
		 */
		ASSERT(PAGESIZE % sizeof (page_t));
		pp = (page_t *)P2ROUNDUP((uint64_t)pp, PAGESIZE);
		metapgs--;
	}
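	/*
	 * Illustration (not to scale) of the unaligned cases handled
	 * above and below. When sizeof (page_t) does not evenly divide
	 * PAGESIZE, the page_t arrays of adjacent memsegs can share a
	 * backing page:
	 *
	 *	page_t[]:  ...existing memseg's ][ incoming memseg's...
	 *	backing:   ...----+----------------+----...
	 *	                  |  shared page   |
	 *
	 * A shared page is already allocated and mapped on behalf of
	 * the neighboring memseg, so it is skipped here; only pages
	 * wholly owned by the incoming memseg are allocated below.
	 */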
	if (!IS_P2ALIGNED((uint64_t)epp, PAGESIZE) &&
	    page_find(&pp_vn, (u_offset_t)epp)) {
		/*
		 * Another memseg has page_t's in the same
		 * page in which 'epp' resides. This would happen
		 * if PAGESIZE is not an integral multiple of
		 * sizeof (page_t) and therefore 'epp'
		 * does not start on a page boundary.
		 *
		 * Since the other memseg's page_t's still
		 * map valid pages, skip allocation of this page.
		 */
		ASSERT(PAGESIZE % sizeof (page_t));
		metapgs--;
	}

	ASSERT(IS_P2ALIGNED((uint64_t)pp, PAGESIZE));

	/*
	 * Back metadata space with physical pages.
	 */
	kseg.s_as = &kas;
	vaddr = (caddr_t)pp;

	for (i = 0; i < metapgs; i++)
		if (page_find(&pp_vn, (u_offset_t)(vaddr + i * PAGESIZE)))
			panic("page_find(0x%p, %p)\n",
			    (void *)&pp_vn, (void *)(vaddr + i * PAGESIZE));

	/*
	 * Allocate the metadata pages; these are the pages that will
	 * contain the page_t's for the incoming memory.
	 *
	 * If a normal allocation fails, use the reserved metapgs for
	 * a small allocation; otherwise retry with PG_WAIT.
	 */
	rsv = off = 0;
	if (metapgs <= rsv_metapgs) {
		MEMSEG_DEBUG("memseg_get: use rsv 0x%lx metapgs", metapgs);
		ASSERT(meta_rsv_enable);
		rsv = 1;
	} else if ((pgpp = page_create_va(&pp_vn, (u_offset_t)pp, ptob(metapgs),
	    PG_NORELOC | PG_EXCL, &kseg, vaddr)) == NULL) {
		cmn_err(CE_WARN, "memseg_get: can't get 0x%ld metapgs",
		    metapgs);
		return (KPHYSM_ERESOURCE);
	}
	if (rsv) {
		/*
		 * The reserve pages must be hashed out of the reserve vnode
		 * and rehashed by <pp_vn, vaddr>. The reserved pages also
		 * must be replenished immediately at the end of the add
		 * processing.
		 */
		for (i = 0; i < metapgs; i++) {
			pgpp = page_find(&rsv_vn, off);
			ASSERT(pgpp);
			page_hashout(pgpp, 0);
			hat_devload(kas.a_hat, vaddr, PAGESIZE,
			    page_pptonum(pgpp), PROT_READ | PROT_WRITE,
			    HAT_LOAD | HAT_LOAD_REMAP | HAT_LOAD_NOCONSIST);
			ASSERT(!page_find(&pp_vn, (u_offset_t)vaddr));
			if (!page_hashin(pgpp, &pp_vn, (u_offset_t)vaddr, 0))
				panic("memseg_get: page_hashin(0x%p, 0x%p)",
				    (void *)pgpp, (void *)vaddr);
			off += PAGESIZE;
			vaddr += PAGESIZE;
			rsv_metapgs--;
		}
	} else {
		for (i = 0; i < metapgs; i++) {
			hat_devload(kas.a_hat, vaddr, PAGESIZE,
			    page_pptonum(pgpp), PROT_READ | PROT_WRITE,
			    HAT_LOAD | HAT_LOAD_REMAP | HAT_LOAD_NOCONSIST);
			pgpp = pgpp->p_next;
			vaddr += PAGESIZE;
		}
	}

	ASSERT(ptp);
	ASSERT(metap);

	*ptp = (void *)opp;
	*metap = metapgs;

	return (KPHYSM_OK);
}

void
memseg_free_meta(void *ptp, pgcnt_t metapgs)
{
	int i;
	page_t *pp;
	u_offset_t off;

	if (!metapgs)
		return;

	off = (u_offset_t)ptp;

	ASSERT(off);
	ASSERT(IS_P2ALIGNED((uint64_t)off, PAGESIZE));

	MEMSEG_DEBUG("memseg_free_meta: off=0x%lx metapgs=0x%lx\n",
	    (uint64_t)off, metapgs);
	/*
	 * Free pages allocated during add.
	 */
	for (i = 0; i < metapgs; i++) {
		pp = page_find(&pp_vn, off);
		ASSERT(pp);
		ASSERT(pp->p_szc == 0);
		page_io_unlock(pp);
		page_destroy(pp, 0);
		off += PAGESIZE;
	}
}

pfn_t
memseg_get_metapfn(void *ptp, pgcnt_t metapg)
{
	page_t *pp;
	u_offset_t off;

	off = (u_offset_t)ptp + ptob(metapg);

	ASSERT(off);
	ASSERT(IS_P2ALIGNED((uint64_t)off, PAGESIZE));

	pp = page_find(&pp_vn, off);
	ASSERT(pp);
	ASSERT(pp->p_szc == 0);
	ASSERT(pp->p_pagenum != PFN_INVALID);

	return (pp->p_pagenum);
}
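/*
 * A rough sketch of how the three hooks above are expected to pair up
 * during a DR add; the real call sites live in the common memory-DR
 * code, and the names 'metabase'/'metapgs' below are illustrative only:
 *
 *	if (memseg_alloc_meta(base, npgs, &metabase, &metapgs) != KPHYSM_OK)
 *		fail the add request;
 *	for (i = 0; i < metapgs; i++)
 *		pfn = memseg_get_metapfn(metabase, i);	(map metadata)
 *	...
 *	memseg_free_meta(metabase, metapgs);	(on failure or delete)
 */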
/*
 * Remap a memseg's page_t's to dummy pages. Skip the low/high
 * ends of the range if they are already in use.
 */
void
memseg_remap_meta(struct memseg *seg)
{
	int i;
	u_offset_t off;
	page_t *pp;
#if 0
	page_t *epp;
#endif
	pgcnt_t metapgs;

	metapgs = btopr(MSEG_NPAGES(seg) * sizeof (page_t));
	ASSERT(metapgs);
	pp = seg->pages;
	seg->pages_end = seg->pages_base;
#if 0
	epp = seg->epages;

	/*
	 * This code cannot be tested as the kernel does not compile
	 * when the page_t size is changed. It is left here as a starting
	 * point if the unaligned page_t size needs to be supported.
	 */

	if (!IS_P2ALIGNED((uint64_t)pp, PAGESIZE) &&
	    page_find(&pp_vn, (u_offset_t)(pp - 1)) && !page_deleted(pp - 1)) {
		/*
		 * Another memseg has page_t's in the same
		 * page in which 'pp' resides. This would happen
		 * if PAGESIZE is not an integral multiple of
		 * sizeof (page_t) and therefore 'seg->pages'
		 * does not start on a page boundary.
		 *
		 * Since the other memseg's page_t's still
		 * map valid pages, skip remap of this page.
		 * Advance 'pp' to the next page, which should
		 * belong only to the outgoing memseg.
		 *
		 * If the last page_t in the current page
		 * crosses a page boundary, this should still
		 * work. The first part of the page_t is
		 * valid since memseg_lock_delete_all() has
		 * been called. The second part of the page_t
		 * will be remapped to the corresponding
		 * dummy page below.
		 */
		ASSERT(PAGESIZE % sizeof (page_t));
		pp = (page_t *)P2ROUNDUP((uint64_t)pp, PAGESIZE);
		metapgs--;
	}

	if (!IS_P2ALIGNED((uint64_t)epp, PAGESIZE) &&
	    page_find(&pp_vn, (u_offset_t)epp) && !page_deleted(epp)) {
		/*
		 * Another memseg has page_t's in the same
		 * page in which 'epp' resides. This would happen
		 * if PAGESIZE is not an integral multiple of
		 * sizeof (page_t) and therefore 'seg->epages'
		 * does not start on a page boundary.
		 *
		 * Since the other memseg's page_t's still
		 * map valid pages, skip remap of this page.
		 */
		ASSERT(PAGESIZE % sizeof (page_t));
		metapgs--;
	}
#endif
	ASSERT(IS_P2ALIGNED((uint64_t)pp, PAGESIZE));

	remap_to_dummy((caddr_t)pp, metapgs);

	off = (u_offset_t)pp;

	MEMSEG_DEBUG("memseg_remap: off=0x%lx metapgs=0x%lx\n", (uint64_t)off,
	    metapgs);
	/*
	 * Free pages allocated during add.
	 */
	for (i = 0; i < metapgs; i++) {
		pp = page_find(&pp_vn, off);
		ASSERT(pp);
		ASSERT(pp->p_szc == 0);
		page_io_unlock(pp);
		page_destroy(pp, 0);
		off += PAGESIZE;
	}
}

static void
rsv_alloc()
{
	int i;
	page_t *pp;
	pgcnt_t metapgs;
	u_offset_t off;
	struct seg kseg;

	kseg.s_as = &kas;

	/*
	 * Reserve enough page_t pages for an add request of
	 * RSV_SIZE bytes.
	 */
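	/*
	 * This is a top-up rather than a fresh allocation: only the
	 * shortfall below the full reserve target is created. With the
	 * illustrative 8K-page/128-byte-page_t numbers from the RSV_SIZE
	 * comment, the target btopr(btop(RSV_SIZE) * sizeof (page_t))
	 * works out to 2048 pages.
	 */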
	metapgs = btopr(btop(RSV_SIZE) * sizeof (page_t)) - rsv_metapgs;

	for (i = off = 0; i < metapgs; i++, off += PAGESIZE) {
		(void) page_create_va(&rsv_vn, off, PAGESIZE,
		    PG_NORELOC | PG_WAIT, &kseg, 0);
		pp = page_find(&rsv_vn, off);
		ASSERT(pp);
		ASSERT(PAGE_EXCL(pp));
		page_iolock_init(pp);
		rsv_metapgs++;
	}
}

void
i_dr_mem_init(size_t *hint)
{
	if (meta_rsv_enable) {
		rsv_alloc();
		if (hint)
			*hint = RSV_SIZE;
	}
}

void
i_dr_mem_fini()
{
	int i;
	page_t *pp;
	u_offset_t off;

	for (i = off = 0; i < rsv_metapgs; i++, off += PAGESIZE) {
		if (pp = page_find(&rsv_vn, off)) {
			ASSERT(PAGE_EXCL(pp));
			page_destroy(pp, 0);
		}
		ASSERT(!page_find(&rsv_vn, off));
	}
	rsv_metapgs = 0;
}

void
i_dr_mem_update()
{
	rsv_alloc();
}
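/*
 * The reserve machinery is disabled by default (meta_rsv_enable is 0),
 * so i_dr_mem_init() neither allocates the reserve nor reports a hint.
 * Assuming the symbol is reachable as a tunable (e.g. via /etc/system):
 *
 *	set meta_rsv_enable = 1
 *
 * would make i_dr_mem_init() pre-allocate the RSV_SIZE reserve and
 * i_dr_mem_update() replenish it, per the replenishment requirement
 * noted in memseg_alloc_meta().
 */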