/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Portions Copyright 2010 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */

/*
 * immu_regs.c - operations on an IMMU unit's registers
 */
#include <sys/dditypes.h>
#include <sys/ddi.h>
#include <sys/archsystm.h>
#include <sys/x86_archext.h>
#include <sys/spl.h>
#include <sys/immu.h>

#define	get_reg32(immu, offset)	ddi_get32((immu)->immu_regs_handle, \
	(uint32_t *)(immu->immu_regs_addr + (offset)))
#define	get_reg64(immu, offset)	ddi_get64((immu)->immu_regs_handle, \
	(uint64_t *)(immu->immu_regs_addr + (offset)))
#define	put_reg32(immu, offset, val)	ddi_put32\
	((immu)->immu_regs_handle, \
	(uint32_t *)(immu->immu_regs_addr + (offset)), val)
#define	put_reg64(immu, offset, val)	ddi_put64\
	((immu)->immu_regs_handle, \
	(uint64_t *)(immu->immu_regs_addr + (offset)), val)

/*
 * wait max 60s for the hardware completion
 */
#define	IMMU_MAX_WAIT_TIME	60000000
#define	wait_completion(immu, offset, getf, completion, status) \
{ \
	clock_t stick = ddi_get_lbolt(); \
	clock_t ntick; \
	_NOTE(CONSTCOND) \
	while (1) { \
		status = getf(immu, offset); \
		ntick = ddi_get_lbolt(); \
		if (completion) { \
			break; \
		} \
		if (ntick - stick >= drv_usectohz(IMMU_MAX_WAIT_TIME)) { \
			ddi_err(DER_PANIC, NULL, \
			    "immu wait completion time out"); \
			/*NOTREACHED*/ \
		} else { \
			iommu_cpu_nop();\
		}\
	}\
}

static ddi_device_acc_attr_t immu_regs_attr = {
	DDI_DEVICE_ATTR_V0,
	DDI_NEVERSWAP_ACC,
	DDI_STRICTORDER_ACC,
};

/*
 * iotlb_flush()
 *	flush the iotlb cache
 */
static void
iotlb_flush(immu_t *immu, uint_t domain_id,
    uint64_t addr, uint_t am, uint_t hint, immu_iotlb_inv_t type)
{
	uint64_t command = 0, iva = 0;
	uint_t iva_offset, iotlb_offset;
	uint64_t status = 0;

	ASSERT(MUTEX_HELD(&(immu->immu_regs_lock)));

	/* no lock needed since cap and excap fields are RDONLY */
	iva_offset = IMMU_ECAP_GET_IRO(immu->immu_regs_excap);
	iotlb_offset = iva_offset + 8;

	/*
	 * prepare drain read/write command
	 */
	if (IMMU_CAP_GET_DWD(immu->immu_regs_cap)) {
		command |= TLB_INV_DRAIN_WRITE;
	}

	if (IMMU_CAP_GET_DRD(immu->immu_regs_cap)) {
		command |= TLB_INV_DRAIN_READ;
	}

	/*
	 * if the hardware doesn't support page selective invalidation,
	 * fall back to domain type invalidation
	 */
	switch (type) {
	case IOTLB_PSI:
		if (!IMMU_CAP_GET_PSI(immu->immu_regs_cap) ||
		    (am > IMMU_CAP_GET_MAMV(immu->immu_regs_cap)) ||
		    (addr & IMMU_PAGEOFFSET)) {
			goto ignore_psi;
		}
		command |= TLB_INV_PAGE | TLB_INV_IVT |
		    TLB_INV_DID(domain_id);
		iva = addr | am | TLB_IVA_HINT(hint);
		break;
ignore_psi:
	case IOTLB_DSI:
		command |= TLB_INV_DOMAIN | TLB_INV_IVT |
		    TLB_INV_DID(domain_id);
		break;
	case IOTLB_GLOBAL:
		command |= TLB_INV_GLOBAL | TLB_INV_IVT;
		break;
	default:
		ddi_err(DER_MODE, NULL, "%s: incorrect iotlb flush type",
		    immu->immu_name);
		return;
	}

	/* verify there is no pending command */
	wait_completion(immu, iotlb_offset, get_reg64,
	    (!(status & TLB_INV_IVT)), status);
	if (iva)
		put_reg64(immu, iva_offset, iva);
	put_reg64(immu, iotlb_offset, command);
	wait_completion(immu, iotlb_offset, get_reg64,
	    (!(status & TLB_INV_IVT)), status);
}

/*
 * iotlb_psi()
 *	iotlb page specific invalidation
 */
static void
iotlb_psi(immu_t *immu, uint_t domain_id,
    uint64_t dvma, uint_t count, uint_t hint)
{
	uint_t am = 0;
	uint_t max_am = 0;
	uint64_t align = 0;
	uint64_t dvma_pg = 0;
	uint_t used_count = 0;

	mutex_enter(&(immu->immu_regs_lock));

	/* choose page specific invalidation */
	if (IMMU_CAP_GET_PSI(immu->immu_regs_cap)) {
		/* MAMV is valid only if PSI is set */
		max_am = IMMU_CAP_GET_MAMV(immu->immu_regs_cap);
		while (count != 0) {
			/* First calculate alignment of DVMA */
			dvma_pg = IMMU_BTOP(dvma);
			ASSERT(dvma_pg != NULL);
			ASSERT(count >= 1);
			for (align = 1; (dvma_pg & align) == 0; align <<= 1)
				;
			/* truncate count to the nearest power of 2 */
			for (used_count = 1, am = 0; count >> used_count != 0;
			    used_count <<= 1, am++)
				;
			if (am > max_am) {
				am = max_am;
				used_count = 1 << am;
			}
			if (align >= used_count) {
				iotlb_flush(immu, domain_id,
				    dvma, am, hint, IOTLB_PSI);
			} else {
				/* align < used_count */
				used_count = align;
				for (am = 0; (1 << am) != used_count; am++)
					;
				iotlb_flush(immu, domain_id,
				    dvma, am, hint, IOTLB_PSI);
			}
			count -= used_count;
			dvma = (dvma_pg + used_count) << IMMU_PAGESHIFT;
		}
	} else {
		/* choose domain invalidation */
		iotlb_flush(immu, domain_id, dvma, 0, 0, IOTLB_DSI);
	}

	mutex_exit(&(immu->immu_regs_lock));
}

/*
 * iotlb_dsi()
 *	domain specific invalidation
 */
static void
iotlb_dsi(immu_t *immu, uint_t domain_id)
{
	mutex_enter(&(immu->immu_regs_lock));
	iotlb_flush(immu, domain_id, 0, 0, 0, IOTLB_DSI);
	mutex_exit(&(immu->immu_regs_lock));
}

/*
 * iotlb_global()
 *	global iotlb invalidation
 */
static void
iotlb_global(immu_t *immu)
{
	mutex_enter(&(immu->immu_regs_lock));
	iotlb_flush(immu, 0, 0, 0, 0, IOTLB_GLOBAL);
	mutex_exit(&(immu->immu_regs_lock));
}


static int
gaw2agaw(int gaw)
{
	int r, agaw;

	r = (gaw - 12) % 9;

	if (r == 0)
		agaw = gaw;
	else
		agaw = gaw + 9 - r;

	if (agaw > 64)
		agaw = 64;

	return (agaw);
}

/*
 * set_agaw()
 *	calculate agaw for an IOMMU unit
 */
static int
set_agaw(immu_t *immu)
{
	int mgaw, magaw, agaw;
	uint_t bitpos;
	int max_sagaw_mask, sagaw_mask, mask;
	int nlevels;

	/*
	 * mgaw is the maximum guest address width.
	 * Addresses above this value will be
	 * blocked by the IOMMU unit.
	 * sagaw is a bitmask that lists all the
	 * AGAWs supported by this IOMMU unit.
	 */
	mgaw = IMMU_CAP_MGAW(immu->immu_regs_cap);
	sagaw_mask = IMMU_CAP_SAGAW(immu->immu_regs_cap);

	magaw = gaw2agaw(mgaw);

	/*
	 * Get bitpos corresponding to
	 * magaw
	 */

	/*
	 * Maximum SAGAW is specified by
	 * the VT-d spec.
	 */
	max_sagaw_mask = ((1 << 5) - 1);

	if (sagaw_mask > max_sagaw_mask) {
		ddi_err(DER_WARN, NULL, "%s: SAGAW bitmask (%x) "
		    "is larger than maximum SAGAW bitmask "
		    "(%x) specified by Intel VT-d spec",
		    immu->immu_name, sagaw_mask, max_sagaw_mask);
		return (DDI_FAILURE);
	}

	/*
	 * Find a supported AGAW <= magaw
	 *
	 *	sagaw_mask	bitpos	AGAW (bits)	nlevels
	 *	==============================================
	 *	0 0 0 0 1	0	30		2
	 *	0 0 0 1 0	1	39		3
	 *	0 0 1 0 0	2	48		4
	 *	0 1 0 0 0	3	57		5
	 *	1 0 0 0 0	4	64(66)		6
	 */
	mask = 1;
	nlevels = 0;
	agaw = 0;
	for (mask = 1, bitpos = 0; bitpos < 5;
	    bitpos++, mask <<= 1) {
		if (mask & sagaw_mask) {
			nlevels = bitpos + 2;
			agaw = 30 + (bitpos * 9);
		}
	}

	/* calculated agaw can be > 64 */
	agaw = (agaw > 64) ? 64 : agaw;

	if (agaw < 30 || agaw > magaw) {
		ddi_err(DER_WARN, NULL, "%s: Calculated AGAW (%d) "
		    "is outside valid limits [30,%d] specified by VT-d spec "
		    "and magaw", immu->immu_name, agaw, magaw);
		return (DDI_FAILURE);
	}

	if (nlevels < 2 || nlevels > 6) {
		ddi_err(DER_WARN, NULL, "%s: Calculated pagetable "
		    "level (%d) is outside valid limits [2,6]",
		    immu->immu_name, nlevels);
		return (DDI_FAILURE);
	}

	ddi_err(DER_LOG, NULL, "Calculated pagetable "
	    "level (%d), agaw = %d", nlevels, agaw);

	immu->immu_dvma_nlevels = nlevels;
	immu->immu_dvma_agaw = agaw;

	return (DDI_SUCCESS);
}

static int
setup_regs(immu_t *immu)
{
	int error;

	ASSERT(immu);
	ASSERT(immu->immu_name);

	/*
	 * This lock may be acquired by the IOMMU interrupt handler
	 */
	mutex_init(&(immu->immu_regs_lock), NULL, MUTEX_DRIVER,
	    (void *)ipltospl(IMMU_INTR_IPL));

	/*
	 * map the register address space
	 */
	error = ddi_regs_map_setup(immu->immu_dip, 0,
	    (caddr_t *)&(immu->immu_regs_addr), (offset_t)0,
	    (offset_t)IMMU_REGSZ, &immu_regs_attr,
	    &(immu->immu_regs_handle));

	if (error == DDI_FAILURE) {
		ddi_err(DER_WARN, NULL, "%s: Intel IOMMU register map failed",
		    immu->immu_name);
		mutex_destroy(&(immu->immu_regs_lock));
		return (DDI_FAILURE);
	}

	/*
	 * get the register value
	 */
	immu->immu_regs_cap = get_reg64(immu, IMMU_REG_CAP);
	immu->immu_regs_excap = get_reg64(immu, IMMU_REG_EXCAP);

	/*
	 * if the hardware access is non-coherent, we need clflush
	 */
	if (IMMU_ECAP_GET_C(immu->immu_regs_excap)) {
		immu->immu_dvma_coherent = B_TRUE;
	} else {
		immu->immu_dvma_coherent = B_FALSE;
		if (!(x86_feature & X86_CLFSH)) {
			ddi_err(DER_WARN, NULL,
			    "immu unit %s can't be enabled due to "
			    "missing clflush functionality", immu->immu_name);
			ddi_regs_map_free(&(immu->immu_regs_handle));
			mutex_destroy(&(immu->immu_regs_lock));
			return (DDI_FAILURE);
		}
	}

	/*
	 * Check for Mobile 4 series chipset
	 */
	if (immu_quirk_mobile4 == B_TRUE &&
	    !IMMU_CAP_GET_RWBF(immu->immu_regs_cap)) {
		ddi_err(DER_LOG, NULL,
		    "IMMU: Mobile 4 chipset quirk detected. "
		    "Force-setting RWBF");
		IMMU_CAP_SET_RWBF(immu->immu_regs_cap);
		ASSERT(IMMU_CAP_GET_RWBF(immu->immu_regs_cap));
	}

	/*
	 * retrieve the maximum number of domains
	 */
	immu->immu_max_domains = IMMU_CAP_ND(immu->immu_regs_cap);

	/*
	 * calculate the agaw
	 */
	if (set_agaw(immu) != DDI_SUCCESS) {
		ddi_regs_map_free(&(immu->immu_regs_handle));
		mutex_destroy(&(immu->immu_regs_lock));
		return (DDI_FAILURE);
	}
	immu->immu_regs_cmdval = 0;

	return (DDI_SUCCESS);
}

/* ############### Functions exported ################## */

/*
 * immu_regs_setup()
 *	Set up mappings to an IMMU unit's registers
 *	so that they can be read/written
 */
void
immu_regs_setup(list_t *listp)
{
	int i;
	immu_t *immu;

	for (i = 0; i < IMMU_MAXSEG; i++) {
		immu = list_head(listp);
		for (; immu; immu = list_next(listp, immu)) {
			/* do your best, continue on error */
			if (setup_regs(immu) != DDI_SUCCESS) {
				immu->immu_regs_setup = B_FALSE;
			} else {
				immu->immu_regs_setup = B_TRUE;
			}
		}
	}
}

/*
 * immu_regs_resume()
 */
int
immu_regs_resume(immu_t *immu)
{
	int error;

	/*
	 * remap the register address space
	 */
	error = ddi_regs_map_setup(immu->immu_dip, 0,
	    (caddr_t *)&(immu->immu_regs_addr), (offset_t)0,
	    (offset_t)IMMU_REGSZ, &immu_regs_attr,
	    &(immu->immu_regs_handle));
	if (error != DDI_SUCCESS) {
		return (DDI_FAILURE);
	}

	immu_regs_set_root_table(immu);

	immu_regs_intr_enable(immu, immu->immu_regs_intr_msi_addr,
	    immu->immu_regs_intr_msi_data, immu->immu_regs_intr_uaddr);

	(void) immu_intr_handler(immu);

	immu_regs_intrmap_enable(immu, immu->immu_intrmap_irta_reg);

	immu_regs_qinv_enable(immu, immu->immu_qinv_reg_value);


	return (error);
}

/*
 * immu_regs_suspend()
 */
void
immu_regs_suspend(immu_t *immu)
{

	immu->immu_intrmap_running = B_FALSE;

	/* Finally, unmap the regs */
	ddi_regs_map_free(&(immu->immu_regs_handle));
}

/*
 * immu_regs_startup()
 *	set an IMMU unit's registers to start up the unit
 */
void
immu_regs_startup(immu_t *immu)
{
	uint32_t status;

	if (immu->immu_regs_setup == B_FALSE) {
		return;
	}

	ASSERT(immu->immu_regs_running == B_FALSE);

	ASSERT(MUTEX_HELD(&(immu->immu_lock)));

	mutex_enter(&(immu->immu_regs_lock));
	put_reg32(immu, IMMU_REG_GLOBAL_CMD,
	    immu->immu_regs_cmdval | IMMU_GCMD_TE);
	wait_completion(immu, IMMU_REG_GLOBAL_STS,
	    get_reg32, (status & IMMU_GSTS_TES), status);
	immu->immu_regs_cmdval |= IMMU_GCMD_TE;
	immu->immu_regs_running = B_TRUE;
	mutex_exit(&(immu->immu_regs_lock));

	ddi_err(DER_NOTE, NULL, "IMMU %s running", immu->immu_name);
}

/*
 * immu_regs_shutdown()
 *	shutdown a unit
 */
void
immu_regs_shutdown(immu_t *immu)
{
	uint32_t status;

	if (immu->immu_regs_running == B_FALSE) {
		return;
	}

	ASSERT(immu->immu_regs_setup == B_TRUE);

	ASSERT(MUTEX_HELD(&(immu->immu_lock)));

	mutex_enter(&(immu->immu_regs_lock));
	immu->immu_regs_cmdval &= ~IMMU_GCMD_TE;
	put_reg32(immu, IMMU_REG_GLOBAL_CMD,
	    immu->immu_regs_cmdval);
	wait_completion(immu, IMMU_REG_GLOBAL_STS,
	    get_reg32, !(status & IMMU_GSTS_TES), status);
	immu->immu_regs_running = B_FALSE;
	mutex_exit(&(immu->immu_regs_lock));

	ddi_err(DER_NOTE, NULL, "IOMMU %s stopped", immu->immu_name);
}

/*
 * immu_regs_intr_enable()
 *	Set an IMMU unit's regs to set up the unit's
 *	interrupt handler
 */
void
immu_regs_intr_enable(immu_t *immu, uint32_t msi_addr, uint32_t msi_data,
    uint32_t uaddr)
{
	mutex_enter(&(immu->immu_regs_lock));
	immu->immu_regs_intr_msi_addr = msi_addr;
	immu->immu_regs_intr_uaddr = uaddr;
	immu->immu_regs_intr_msi_data = msi_data;
	put_reg32(immu, IMMU_REG_FEVNT_ADDR, msi_addr);
	put_reg32(immu, IMMU_REG_FEVNT_UADDR, uaddr);
	put_reg32(immu, IMMU_REG_FEVNT_DATA, msi_data);
	put_reg32(immu, IMMU_REG_FEVNT_CON, 0);
	mutex_exit(&(immu->immu_regs_lock));
}

/*
 * immu_regs_passthru_supported()
 *	Returns B_TRUE if passthru is supported
 */
boolean_t
immu_regs_passthru_supported(immu_t *immu)
{
	if (IMMU_ECAP_GET_PT(immu->immu_regs_excap)) {
		return (B_TRUE);
	}

	ddi_err(DER_WARN, NULL, "Passthru not supported");
	return (B_FALSE);
}

/*
 * immu_regs_is_TM_reserved()
 *	Returns B_TRUE if TM field is reserved
 */
boolean_t
immu_regs_is_TM_reserved(immu_t *immu)
{
	if (IMMU_ECAP_GET_DI(immu->immu_regs_excap) ||
	    IMMU_ECAP_GET_CH(immu->immu_regs_excap)) {
		return (B_FALSE);
	}
	return (B_TRUE);
}

/*
 * immu_regs_is_SNP_reserved()
 *	Returns B_TRUE if SNP field is reserved
 */
boolean_t
immu_regs_is_SNP_reserved(immu_t *immu)
{

	return (IMMU_ECAP_GET_SC(immu->immu_regs_excap) ? B_FALSE : B_TRUE);
}

/*
 * immu_regs_wbf_flush()
 *	If required and supported, write to IMMU
 *	unit's regs to flush DMA write buffer(s)
 */
void
immu_regs_wbf_flush(immu_t *immu)
{
	uint32_t status;

	if (!IMMU_CAP_GET_RWBF(immu->immu_regs_cap)) {
		return;
	}

	mutex_enter(&(immu->immu_regs_lock));
	put_reg32(immu, IMMU_REG_GLOBAL_CMD,
	    immu->immu_regs_cmdval | IMMU_GCMD_WBF);
	wait_completion(immu, IMMU_REG_GLOBAL_STS,
	    get_reg32, (!(status & IMMU_GSTS_WBFS)), status);
	mutex_exit(&(immu->immu_regs_lock));
}

/*
 * immu_regs_cpu_flush()
 *	flush the cpu cache line after CPU memory writes, so
 *	IOMMU can see the writes
 */
void
immu_regs_cpu_flush(immu_t *immu, caddr_t addr, uint_t size)
{
	uint_t i;

	ASSERT(immu);

	if (immu->immu_dvma_coherent == B_TRUE)
		return;

	for (i = 0; i < size; i += x86_clflush_size) {
		clflush_insn(addr + i);
	}

	mfence_insn();
}

void
immu_regs_iotlb_flush(immu_t *immu, uint_t domainid, uint64_t dvma,
    uint64_t count, uint_t hint, immu_iotlb_inv_t type)
{
	ASSERT(immu);

	switch (type) {
	case IOTLB_PSI:
		ASSERT(domainid > 0);
		ASSERT(dvma > 0);
		ASSERT(count > 0);
		iotlb_psi(immu, domainid, dvma, count, hint);
		break;
	case IOTLB_DSI:
		ASSERT(domainid > 0);
		ASSERT(dvma == 0);
		ASSERT(count == 0);
		ASSERT(hint == 0);
		iotlb_dsi(immu, domainid);
		break;
	case IOTLB_GLOBAL:
		ASSERT(domainid == 0);
		ASSERT(dvma == 0);
		ASSERT(count == 0);
		ASSERT(hint == 0);
		iotlb_global(immu);
		break;
	default:
		ddi_err(DER_PANIC, NULL, "invalid IOTLB invalidation type: %d",
		    type);
		/*NOTREACHED*/
	}
}

/*
 * immu_regs_context_flush()
 *	flush the context cache
 */
void
immu_regs_context_flush(immu_t *immu, uint8_t function_mask,
    uint16_t sid, uint_t did, immu_context_inv_t type)
{
	uint64_t command = 0;
	uint64_t status;

	ASSERT(immu);
	ASSERT(rw_write_held(&(immu->immu_ctx_rwlock)));

	/*
	 * define the command
	 */
	switch (type) {
	case CONTEXT_FSI:
		command |= CCMD_INV_ICC | CCMD_INV_DEVICE
		    | CCMD_INV_DID(did)
		    | CCMD_INV_SID(sid) | CCMD_INV_FM(function_mask);
		break;
	case CONTEXT_DSI:
		ASSERT(function_mask == 0);
		ASSERT(sid == 0);
		command |= CCMD_INV_ICC | CCMD_INV_DOMAIN
		    | CCMD_INV_DID(did);
		break;
	case CONTEXT_GLOBAL:
		ASSERT(function_mask == 0);
		ASSERT(sid == 0);
		ASSERT(did == 0);
		command |= CCMD_INV_ICC | CCMD_INV_GLOBAL;
		break;
	default:
		ddi_err(DER_PANIC, NULL,
		    "%s: incorrect context cache flush type",
		    immu->immu_name);
		/*NOTREACHED*/
	}

	mutex_enter(&(immu->immu_regs_lock));
	/* verify there is no pending command */
	wait_completion(immu, IMMU_REG_CONTEXT_CMD, get_reg64,
	    (!(status & CCMD_INV_ICC)), status);
	put_reg64(immu, IMMU_REG_CONTEXT_CMD, command);
	wait_completion(immu, IMMU_REG_CONTEXT_CMD, get_reg64,
	    (!(status & CCMD_INV_ICC)), status);
	mutex_exit(&(immu->immu_regs_lock));
}

void
immu_regs_set_root_table(immu_t *immu)
{
	uint32_t status;

	mutex_enter(&(immu->immu_regs_lock));
	put_reg64(immu, IMMU_REG_ROOTENTRY,
	    immu->immu_ctx_root->hwpg_paddr);
	put_reg32(immu, IMMU_REG_GLOBAL_CMD,
	    immu->immu_regs_cmdval | IMMU_GCMD_SRTP);
	wait_completion(immu, IMMU_REG_GLOBAL_STS,
	    get_reg32, (status & IMMU_GSTS_RTPS), status);
	mutex_exit(&(immu->immu_regs_lock));
}


/* enable queued invalidation interface */
void
immu_regs_qinv_enable(immu_t *immu, uint64_t qinv_reg_value)
{
	uint32_t status;

	if (immu_qinv_enable == B_FALSE)
		return;

	mutex_enter(&immu->immu_regs_lock);
	immu->immu_qinv_reg_value = qinv_reg_value;
	/* Initialize the Invalidation Queue Tail register to zero */
	put_reg64(immu, IMMU_REG_INVAL_QT, 0);

	/* set invalidation queue base address register */
	put_reg64(immu, IMMU_REG_INVAL_QAR, qinv_reg_value);

	/* enable queued invalidation interface */
	put_reg32(immu, IMMU_REG_GLOBAL_CMD,
immu->immu_regs_cmdval | IMMU_GCMD_QIE); 776*3a634bfcSVikram Hegde wait_completion(immu, IMMU_REG_GLOBAL_STS, 777*3a634bfcSVikram Hegde get_reg32, (status & IMMU_GSTS_QIES), status); 778*3a634bfcSVikram Hegde mutex_exit(&immu->immu_regs_lock); 779*3a634bfcSVikram Hegde 780*3a634bfcSVikram Hegde immu->immu_regs_cmdval |= IMMU_GCMD_QIE; 781*3a634bfcSVikram Hegde immu->immu_qinv_running = B_TRUE; 782*3a634bfcSVikram Hegde 783*3a634bfcSVikram Hegde } 784*3a634bfcSVikram Hegde 785*3a634bfcSVikram Hegde /* enable interrupt remapping hardware unit */ 786*3a634bfcSVikram Hegde void 787*3a634bfcSVikram Hegde immu_regs_intrmap_enable(immu_t *immu, uint64_t irta_reg) 788*3a634bfcSVikram Hegde { 789*3a634bfcSVikram Hegde uint32_t status; 790*3a634bfcSVikram Hegde 791*3a634bfcSVikram Hegde if (immu_intrmap_enable == B_FALSE) 792*3a634bfcSVikram Hegde return; 793*3a634bfcSVikram Hegde 794*3a634bfcSVikram Hegde /* set interrupt remap table pointer */ 795*3a634bfcSVikram Hegde mutex_enter(&(immu->immu_regs_lock)); 796*3a634bfcSVikram Hegde immu->immu_intrmap_irta_reg = irta_reg; 797*3a634bfcSVikram Hegde put_reg64(immu, IMMU_REG_IRTAR, irta_reg); 798*3a634bfcSVikram Hegde put_reg32(immu, IMMU_REG_GLOBAL_CMD, 799*3a634bfcSVikram Hegde immu->immu_regs_cmdval | IMMU_GCMD_SIRTP); 800*3a634bfcSVikram Hegde wait_completion(immu, IMMU_REG_GLOBAL_STS, 801*3a634bfcSVikram Hegde get_reg32, (status & IMMU_GSTS_IRTPS), status); 802*3a634bfcSVikram Hegde mutex_exit(&(immu->immu_regs_lock)); 803*3a634bfcSVikram Hegde 804*3a634bfcSVikram Hegde /* global flush intr entry cache */ 805*3a634bfcSVikram Hegde if (immu_qinv_enable == B_TRUE) 806*3a634bfcSVikram Hegde immu_qinv_intr_global(immu); 807*3a634bfcSVikram Hegde 808*3a634bfcSVikram Hegde /* enable interrupt remapping */ 809*3a634bfcSVikram Hegde mutex_enter(&(immu->immu_regs_lock)); 810*3a634bfcSVikram Hegde put_reg32(immu, IMMU_REG_GLOBAL_CMD, 811*3a634bfcSVikram Hegde immu->immu_regs_cmdval | IMMU_GCMD_IRE); 812*3a634bfcSVikram Hegde 
wait_completion(immu, IMMU_REG_GLOBAL_STS, 813*3a634bfcSVikram Hegde get_reg32, (status & IMMU_GSTS_IRES), 814*3a634bfcSVikram Hegde status); 815*3a634bfcSVikram Hegde immu->immu_regs_cmdval |= IMMU_GCMD_IRE; 816*3a634bfcSVikram Hegde 817*3a634bfcSVikram Hegde /* set compatible mode */ 818*3a634bfcSVikram Hegde put_reg32(immu, IMMU_REG_GLOBAL_CMD, 819*3a634bfcSVikram Hegde immu->immu_regs_cmdval | IMMU_GCMD_CFI); 820*3a634bfcSVikram Hegde wait_completion(immu, IMMU_REG_GLOBAL_STS, 821*3a634bfcSVikram Hegde get_reg32, (status & IMMU_GSTS_CFIS), 822*3a634bfcSVikram Hegde status); 823*3a634bfcSVikram Hegde immu->immu_regs_cmdval |= IMMU_GCMD_CFI; 824*3a634bfcSVikram Hegde mutex_exit(&(immu->immu_regs_lock)); 825*3a634bfcSVikram Hegde 826*3a634bfcSVikram Hegde immu->immu_intrmap_running = B_TRUE; 827*3a634bfcSVikram Hegde } 828*3a634bfcSVikram Hegde 829*3a634bfcSVikram Hegde uint64_t 830*3a634bfcSVikram Hegde immu_regs_get64(immu_t *immu, uint_t reg) 831*3a634bfcSVikram Hegde { 832*3a634bfcSVikram Hegde return (get_reg64(immu, reg)); 833*3a634bfcSVikram Hegde } 834*3a634bfcSVikram Hegde 835*3a634bfcSVikram Hegde uint32_t 836*3a634bfcSVikram Hegde immu_regs_get32(immu_t *immu, uint_t reg) 837*3a634bfcSVikram Hegde { 838*3a634bfcSVikram Hegde return (get_reg32(immu, reg)); 839*3a634bfcSVikram Hegde } 840*3a634bfcSVikram Hegde 841*3a634bfcSVikram Hegde void 842*3a634bfcSVikram Hegde immu_regs_put64(immu_t *immu, uint_t reg, uint64_t val) 843*3a634bfcSVikram Hegde { 844*3a634bfcSVikram Hegde put_reg64(immu, reg, val); 845*3a634bfcSVikram Hegde } 846*3a634bfcSVikram Hegde 847*3a634bfcSVikram Hegde void 848*3a634bfcSVikram Hegde immu_regs_put32(immu_t *immu, uint_t reg, uint32_t val) 849*3a634bfcSVikram Hegde { 850*3a634bfcSVikram Hegde put_reg32(immu, reg, val); 851*3a634bfcSVikram Hegde } 852