Lines Matching +full:en +full:- +full:usb

55  *    multi-threading (SMT), etc.
57 * ------------------------
59 * ------------------------
81 * AMD adds non-Intel compatible
105 * various extensions. For example, AMD-
123 * Some leaves are broken down into sub-leaves. This means that the value
125 * example, Intel uses the value in %ecx on leaf 7 to indicate a sub-leaf to get
131 * program is in 64-bit mode. When executing in 64-bit mode, the upper
135 * ----------------------
137 * ----------------------
169 * defines the processor family's non-architectural features. In general, we'll
199 * the processor, are dealing with processor-specific features such as CPU
222 * which itself may have multiple revisions. In general, non-architectural
230 * presently used and available only for AMD and AMD-like processors.
232 * ------------
234 * ------------
239 * certain Cyrix processors) but those were all 32-bit processors which are no
242 * well-defined execution ordering enforced by the definition of cpuid_pass_t in
253 * 64-bit processors do. This would also be the place to
260 * so that machine-dependent code can control the features
263 * machine-dependent code that needs basic identity will
267 * this pass, machine-dependent boot code is responsible for
291 * 3. Determining the set of CPU security-specific features
314 * by the run-time link-editor (RTLD), though userland
328 * ---------
330 * ---------
335 * as possible during the boot process -- right after the IDENT pass.
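That ordering is enforced at runtime: each pass records its completion in cpi_pass, and a later pass refuses to run until its predecessor has finished. A minimal sketch of the idea, with abridged pass names and a hypothetical handler table (see cpuid_pass_t and the real dispatch code for the authoritative definitions):

    typedef enum cpuid_pass {
            CPUID_PASS_NONE = 0,
            CPUID_PASS_IDENT,
            CPUID_PASS_BASIC,
            CPUID_PASS_EXTENDED,
            CPUID_PASS_DYNAMIC,
            CPUID_PASS_RESOLVE
    } cpuid_pass_t;

    void
    cpuid_execpass(cpu_t *cpu, cpuid_pass_t pass, void *arg)
    {
            struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;

            /* A pass may run only once, and only after its predecessor. */
            ASSERT3U(pass, ==, cpi->cpi_pass + 1);
            cpuid_pass_handlers[pass](cpu, arg);    /* hypothetical table */
            cpi->cpi_pass = pass;
    }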
355 * ------------------
357 * ------------------
411 * processor with more than one core is often referred to as 'multi-core'.
413 * that has 'multi-core' processors.
433 * simultaneous multi-threading (SMT). When Intel introduced this in their
434 * processors they called it hyper-threading (HT). When multiple threads
453 * definition of multi-chip module (because it is in the name) and use the
454 * term 'die' when we want the more general, potential sub-component
464 * MULTI-CHIP MODULE
466 * A multi-chip module (MCM) refers to putting multiple distinct chips that
467 * are connected together in the same package. When a multi-chip design is
515 * NUMA, or non-uniform memory access, describes a way that systems are
518 * multi-chip module, some of that memory is physically closer and some of
522 * +--------+ +--------+
523 * | DIMM A | +----------+ +----------+ | DIMM D |
524 * +--------+-+ | | | | +-+------+-+
526 * +--------+-+ | | | | +-+------+-+
527 * | DIMM C | +----------+ +----------+ | DIMM F |
528 * +--------+ +--------+
530 * In this example, Socket 0 is closer to DIMMs A-C while Socket 1 is
531 * closer to DIMMs D-F. This means that it is cheaper for socket 0 to
532 * access DIMMs A-C and more expensive to access D-F as it has to go
534 * D-F are cheaper than A-C. While the socket form is the most common, when
535 * using multi-chip modules, this can also sometimes occur. For another
540 * --------------
552 * +-----------------------------------------------------------------------+
554 * | +-------------------+ +-------------------+ +-------------------+ |
556 * | | +--------+ +---+ | | +--------+ +---+ | | +--------+ +---+ | |
558 * | | +--------+ | 1 | | | +--------+ | 1 | | | +--------+ | 1 | | |
559 * | | +--------+ | | | | +--------+ | | | | +--------+ | | | |
561 * | | +--------+ +---+ | | +--------+ +---+ | | +--------+ +---+ | |
562 * | | +--------------+ | | +--------------+ | | +--------------+ | |
564 * | | +--------------+ | | +--------------+ | | +--------------+ | |
565 * | +-------------------+ +-------------------+ +-------------------+ |
566 * | +-------------------------------------------------------------------+ |
568 * | +-------------------------------------------------------------------+ |
569 * | +-------------------------------------------------------------------+ |
571 * | +-------------------------------------------------------------------+ |
572 * +-----------------------------------------------------------------------+
582 * increased the number of addressable logical CPUs from 8 bits to 32 bits), an
586 * ------------
606 * node this indicates a multi-chip module. Usually each node has its own
608 * different from the corresponding Intel Nehalem-Skylake+ processors. As a
628 * The Zen family (0x17) uses a multi-chip module (MCM) design, the module
648 * +--------------------------------------------------------+
650 * | +-------------------+ +-------------------+ +---+ |
651 * | | Core +----+ | | Core +----+ | | | |
652 * | | +--------+ | L2 | | | +--------+ | L2 | | | | |
653 * | | | Thread | +----+ | | | Thread | +----+ | | | |
654 * | | +--------+-+ +--+ | | +--------+-+ +--+ | | L | |
656 * | | +--------+ +--+ | | +--------+ +--+ | | | |
657 * | +-------------------+ +-------------------+ | C | |
658 * | +-------------------+ +-------------------+ | a | |
659 * | | Core +----+ | | Core +----+ | | c | |
660 * | | +--------+ | L2 | | | +--------+ | L2 | | | h | |
661 * | | | Thread | +----+ | | | Thread | +----+ | | e | |
662 * | | +--------+-+ +--+ | | +--------+-+ +--+ | | | |
664 * | | +--------+ +--+ | | +--------+ +--+ | | | |
665 * | +-------------------+ +-------------------+ +---+ |
667 * +--------------------------------------------------------+
673 * +--------------------------------------------------------+
675 * | +--------------------------------------------------+ |
676 * | | I/O Units (PCIe, SATA, USB, etc.) | |
677 * | +--------------------------------------------------+ |
679 * | +-----------+ HH +-----------+ |
684 * | +-----------+ HH +-----------+ |
686 * | +--------------------------------------------------+ |
688 * | +--------------------------------------------------+ |
690 * +--------------------------------------------------------+
700 * +----------PP---------------------PP---------+
702 * | +-----------+ +-----------+ |
707 * | +-----------+ooo ...+-----------+ |
712 * | +-----------+... ooo+-----------+ |
717 * | +-----------+ +-----------+ |
719 * +----------PP---------------------PP---------+
738 * +--------------------------------------------------------+
741 * | +-----------+ HH +-----------+ |
746 * | +-----------+ HH +-----------+ |
749 * +--------------------------------------------------------+
758 * +---------------------PP----PP---------------------+
760 * | +-----------+ PP PP +-----------+ |
762 * | | Zen 2 | +-PP----PP-+ | Zen 2 | |
765 * | +-----------+ | | +-----------+ |
773 * | +-----------+ | | +-----------+ |
775 * | | Zen 2 -| +-PP----PP-+ |- Zen 2 | |
778 * | +-----------+ PP PP +-----------+ |
780 * +---------------------PP----PP---------------------+
809 * +-------------------------------------------------+
811 * | +-------------------+ +-------------------+ |
812 * | | Core +----+ | | Core +----+ | |
813 * | | +--------+ | L2 | | | +--------+ | L2 | | |
814 * | | | Thread | +----+ | | | Thread | +----+ | |
815 * | | +--------+-+ +--+ | | +--------+-+ +--+ | |
817 * | | +--------+ +--+ | | +--------+ +--+ | |
818 * | +-------------------+ +-------------------+ |
819 * | +-------------------+ +-------------------+ |
820 * | | Core +----+ | | Core +----+ | |
821 * | | +--------+ | L2 | | | +--------+ | L2 | | |
822 * | | | Thread | +----+ | | | Thread | +----+ | |
823 * | | +--------+-+ +--+ | | +--------+-+ +--+ | |
825 * | | +--------+ +--+ | | +--------+ +--+ | |
826 * | +-------------------+ +-------------------+ |
828 * | +--------------------------------------------+ |
830 * | +--------------------------------------------+ |
832 * | +-------------------+ +-------------------+ |
833 * | | Core +----+ | | Core +----+ | |
834 * | | +--------+ | L2 | | | +--------+ | L2 | | |
835 * | | | Thread | +----+ | | | Thread | +----+ | |
836 * | | +--------+-+ +--+ | | +--------+-+ +--+ | |
838 * | | +--------+ +--+ | | +--------+ +--+ | |
839 * | +-------------------+ +-------------------+ |
840 * | +-------------------+ +-------------------+ |
841 * | | Core +----+ | | Core +----+ | |
842 * | | +--------+ | L2 | | | +--------+ | L2 | | |
843 * | | | Thread | +----+ | | | Thread | +----+ | |
844 * | | +--------+-+ +--+ | | +--------+-+ +--+ | |
846 * | | +--------+ +--+ | | +--------+ +--+ | |
847 * | +-------------------+ +-------------------+ |
848 * +-------------------------------------------------+
872 * %eax The APIC ID. The entire register is defined to have a 32-bit
876 * %ebx On Bulldozer-era systems this contains information about the
878 * resources). It also contains a per-package compute unit ID that
881 * On Zen-era systems this instead contains the number of threads
894 * ----------------
912 * This is the value of the CPU's APIC id. This should be the full 32-bit
913 * ID if the CPU is using the x2apic. Otherwise, it should be the 8-bit
970 * determine if simultaneous multi-threading (SMT) is enabled. When
981 * When processors are actually a multi-chip module, this represents the
1005 * processors without AMD Bulldozer-style compute units this should be set
1041 * -----------
1043 * -----------
1056 * checks scattered about fields being non-zero before we assume we can use
1064 * use an actual vendor, then that usually turns into multiple one-core CPUs
1068 * --------------------
1070 * --------------------
1075 * with the is_x86_feature() function. This is queried by x86-specific functions
1080 * mitigations, to various x86-specific drivers. General purpose or
1089 * instruction sets are. Programs use this information to make run-time
1090 * decisions about what features they should use. As an example, the run-time
1091 * link-editor (rtld) can relocate different functions depending on the hardware
1103 * -----------------------------------------------
1105 * -----------------------------------------------
1114 * - Spectre v1
1115 * - swapgs (Spectre v1 variant)
1116 * - Spectre v2
1117 * - Branch History Injection (BHI).
1118 * - Meltdown (Spectre v3)
1119 * - Rogue Register Read (Spectre v3a)
1120 * - Speculative Store Bypass (Spectre v4)
1121 * - ret2spec, SpectreRSB
1122 * - L1 Terminal Fault (L1TF)
1123 * - Microarchitectural Data Sampling (MDS)
1124 * - Register File Data Sampling (RFDS)
1128 * from non-kernel executing environments such as user processes and hardware
1165 * 2. A no-op version
1168 * AMD-specific optimized retpoline variant that was based around using a
1198 * - Switching between two different user processes
1199 * - Going between user land and the kernel
1200 * - Returning to the kernel from a hardware virtual machine
1210 * such as a non-root VMX context attacking the kernel we first look to
1220 * indicating no need for post-barrier RSB protections, either in one place
1236 * in the 'host' context when SEV-SNP is enabled.
1266 * non-root/guest to root mode). The attacker can then exploit certain
1267 * compiler-generated code-sequences ("gadgets") to disclose information from
1268 * other contexts or domains. Recent (late-2023/early-2024) research in
1286 * SMEP and eIBRS are a continuing defense-in-depth measure protecting the
1297 * can generally affect any branch-dependent code. The swapgs issue is one
1308 * If an attacker can cause a mis-speculation of the branch here, we could skip
1309 * the needed swapgs, and use the /user/ %gsbase as the base of the %gs-based
1315 * space, we could mis-speculate and swapgs the user %gsbase back in prior to
1321 * Note that we don't enable user-space "wrgsbase" via CR4_FSGSBASE, making it
1322 * harder for user-space to actually set a useful %gsbase value: although it's
1333 * this we use per-CPU page tables and switch between the user and kernel
1338 * - uts/i86pc/ml/kpti_trampolines.s
1339 * - uts/i86pc/vm/hat_i86.c
1363 * For the non-hardware virtualized case, this is relatively easy to deal with.
1426 * a no-op.
1430 * thread executing on a core. In the case where you have hyper-threading
1435 * would have to issue an inter-processor interrupt (IPI) to the other thread.
1436 * Rather than implement this, we recommend that one disables hyper-threading
1437 * through the use of psradm -aS.
1441 * TSX Asynchronous Abort (TAA) is another side-channel vulnerability that
1465 * Another MDS-variant in a few select Intel Atom CPUs is Register File Data
1505 * - Spectre v1: Not currently mitigated
1506 * - swapgs: lfences after swapgs paths
1507 * - Spectre v2: Retpolines/RSB Stuffing or eIBRS/AIBRS if HW support
1508 * - Meltdown: Kernel Page Table Isolation
1509 * - Spectre v3a: Updated CPU microcode
1510 * - Spectre v4: Not currently mitigated
1511 * - SpectreRSB: SMEP and RSB Stuffing
1512 * - L1TF: spec_uarch_flush, SMT exclusion, requires microcode
1513 * - MDS: x86_md_clear, requires microcode, disabling SMT
1514 * - TAA: x86_md_clear and disabling SMT OR microcode and disabling TSX
1515 * - RFDS: microcode with x86_md_clear if RFDS_CLEAR set and RFDS_NO not.
1516 * - BHI: software sequence, and use of BHI_DIS_S if microcode has it.
1521 * - RDCL_NO: Meltdown, L1TF, MSBDS subset of MDS
1522 * - MDS_NO: All forms of MDS
1523 * - TAA_NO: TAA
1524 * - RFDS_NO: RFDS
1525 * - BHI_NO: BHI
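These enumeration bits are consumed roughly as in the following sketch. The MSR number and bit positions come from Intel's IA32_ARCH_CAPABILITIES definition (RDCL_NO is bit 0, MDS_NO bit 5, TAA_NO bit 8), and the X86FSET_* names mirror this file's add_x86_feature() convention; treat the details as illustrative rather than authoritative:

    #define MSR_IA32_ARCH_CAPABILITIES      0x10a
    #define IA32_ARCH_CAP_RDCL_NO           (1ULL << 0)
    #define IA32_ARCH_CAP_MDS_NO            (1ULL << 5)
    #define IA32_ARCH_CAP_TAA_NO            (1ULL << 8)

    /* Only trust the MSR if CPUID.(EAX=7,ECX=0):EDX enumerates it. */
    if (cpi->cpi_std[7].cp_edx & CPUID_INTC_EDX_7_0_ARCH_CAPS) {
            uint64_t caps = rdmsr(MSR_IA32_ARCH_CAPABILITIES);

            if (caps & IA32_ARCH_CAP_RDCL_NO)
                    add_x86_feature(featureset, X86FSET_RDCL_NO);
            if (caps & IA32_ARCH_CAP_MDS_NO)
                    add_x86_feature(featureset, X86FSET_MDS_NO);
            if (caps & IA32_ARCH_CAP_TAA_NO)
                    add_x86_feature(featureset, X86FSET_TAA_NO);
    }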
1569 int x86_use_pcid = -1;
1570 int x86_use_invpcid = -1;
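Both variables follow the usual illumos tunable convention: -1 means the kernel decides from CPUID enumeration, and an administrator can pin a value from /etc/system before boot, for example (illustrative fragment):

    * Force-disable PCID and INVPCID use regardless of hardware support.
    set x86_use_pcid = 0
    set x86_use_invpcid = 0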
1586 * X86_TAA_NOTHING -- no mitigation available for TAA side-channels
1587 * X86_TAA_DISABLED -- mitigation disabled via x86_disable_taa
1588 * X86_TAA_MD_CLEAR -- MDS mitigation also suffices for TAA
1589 * X86_TAA_TSX_FORCE_ABORT -- transactions are forced to abort
1590 * X86_TAA_TSX_DISABLE -- force abort transactions and hide from CPUID
1591 * X86_TAA_HW_MITIGATED -- TSX potentially active but H/W not TAA-vulnerable
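Taken together these values imply a rough priority order when picking a TAA mitigation. This is an illustrative ladder only, not the exact logic of the real selection code, and the X86FSET_TSX_CTRL name is an assumption:

    if (x86_disable_taa)
            x86_taa_mitigation = X86_TAA_DISABLED;
    else if (is_x86_feature(fset, X86FSET_TAA_NO))
            x86_taa_mitigation = X86_TAA_HW_MITIGATED;
    else if (is_x86_feature(fset, X86FSET_TSX_CTRL))    /* assumed name */
            x86_taa_mitigation = X86_TAA_TSX_DISABLE;
    else if (is_x86_feature(fset, X86FSET_MD_CLEAR))
            x86_taa_mitigation = X86_TAA_MD_CLEAR;
    else
            x86_taa_mitigation = X86_TAA_NOTHING;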
1782 static int platform_type = -1;
1798 * processor cache-line alignment, but this is not guaranteed in the future.
1814 * per-lwp xsave area is dynamically allocated based on xsav_max_size. The
1870 uint8_t cpi_cacheinfo[16]; /* fn 2: intel-style cache desc */
1878 struct cpuid_regs cpi_sub7[2]; /* Leaf 7, sub-leaves 1-2 */
1891 uint_t cpi_ncore_per_chip; /* AMD: fn 0x80000008: %ecx[7-0] */
1892 /* Intel: fn 4: %eax[31-26] */
1940 * These bit fields are defined by the Intel Application Note AP-485
1943 #define CPI_FAMILY_XTD(cpi) BITX((cpi)->cpi_std[1].cp_eax, 27, 20)
1944 #define CPI_MODEL_XTD(cpi) BITX((cpi)->cpi_std[1].cp_eax, 19, 16)
1945 #define CPI_TYPE(cpi) BITX((cpi)->cpi_std[1].cp_eax, 13, 12)
1946 #define CPI_FAMILY(cpi) BITX((cpi)->cpi_std[1].cp_eax, 11, 8)
1947 #define CPI_STEP(cpi) BITX((cpi)->cpi_std[1].cp_eax, 3, 0)
1948 #define CPI_MODEL(cpi) BITX((cpi)->cpi_std[1].cp_eax, 7, 4)
1950 #define CPI_FEATURES_EDX(cpi) ((cpi)->cpi_std[1].cp_edx)
1951 #define CPI_FEATURES_ECX(cpi) ((cpi)->cpi_std[1].cp_ecx)
1952 #define CPI_FEATURES_XTD_EDX(cpi) ((cpi)->cpi_extd[1].cp_edx)
1953 #define CPI_FEATURES_XTD_ECX(cpi) ((cpi)->cpi_extd[1].cp_ecx)
1954 #define CPI_FEATURES_7_0_EBX(cpi) ((cpi)->cpi_std[7].cp_ebx)
1955 #define CPI_FEATURES_7_0_ECX(cpi) ((cpi)->cpi_std[7].cp_ecx)
1956 #define CPI_FEATURES_7_0_EDX(cpi) ((cpi)->cpi_std[7].cp_edx)
1957 #define CPI_FEATURES_7_1_EAX(cpi) ((cpi)->cpi_sub7[0].cp_eax)
1958 #define CPI_FEATURES_7_2_EDX(cpi) ((cpi)->cpi_sub7[1].cp_edx)
1960 #define CPI_BRANDID(cpi) BITX((cpi)->cpi_std[1].cp_ebx, 7, 0)
1961 #define CPI_CHUNKS(cpi) BITX((cpi)->cpi_std[1].cp_ebx, 15, 8)
1962 #define CPI_CPU_COUNT(cpi) BITX((cpi)->cpi_std[1].cp_ebx, 23, 16)
1963 #define CPI_APIC_ID(cpi) BITX((cpi)->cpi_std[1].cp_ebx, 31, 24)
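These fields fold into the "display" family and model the same way the IDENT pass does later in this file: the extended family is added only when the base family is 0xf, and (on Intel) the extended model applies to family 0x6 and 0xf parts:

    uint_t family = CPI_FAMILY(cpi);
    uint_t model = CPI_MODEL(cpi);

    if (family == 0xf)
            family += CPI_FAMILY_XTD(cpi);
    if (family == 0x6 || family >= 0xf)
            model += CPI_MODEL_XTD(cpi) << 4;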
1972 * Defined by Intel Application Note AP-485
1974 #define CPI_NUM_CORES(regs) BITX((regs)->cp_eax, 31, 26)
1975 #define CPI_NTHR_SHR_CACHE(regs) BITX((regs)->cp_eax, 25, 14)
1976 #define CPI_FULL_ASSOC_CACHE(regs) BITX((regs)->cp_eax, 9, 9)
1977 #define CPI_SELF_INIT_CACHE(regs) BITX((regs)->cp_eax, 8, 8)
1978 #define CPI_CACHE_LVL(regs) BITX((regs)->cp_eax, 7, 5)
1979 #define CPI_CACHE_TYPE(regs) BITX((regs)->cp_eax, 4, 0)
1984 #define CPI_CPU_LEVEL_TYPE(regs) BITX((regs)->cp_ecx, 15, 8)
1986 #define CPI_CACHE_WAYS(regs) BITX((regs)->cp_ebx, 31, 22)
1987 #define CPI_CACHE_PARTS(regs) BITX((regs)->cp_ebx, 21, 12)
1988 #define CPI_CACHE_COH_LN_SZ(regs) BITX((regs)->cp_ebx, 11, 0)
1990 #define CPI_CACHE_SETS(regs) BITX((regs)->cp_ecx, 31, 0)
1992 #define CPI_PREFCH_STRIDE(regs) BITX((regs)->cp_edx, 9, 0)
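Each of these leaf 4 values is reported biased by one, and a cache's total size is ways x partitions x line size x sets. A small hypothetical helper shows the arithmetic (standard Intel deterministic cache parameters, not code from this file):

    static size_t
    cache_size_bytes(const struct cpuid_regs *regs)
    {
            return ((size_t)(CPI_CACHE_WAYS(regs) + 1) *
                (CPI_CACHE_PARTS(regs) + 1) *
                (CPI_CACHE_COH_LN_SZ(regs) + 1) *
                (CPI_CACHE_SETS(regs) + 1));
    }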
1996 * A couple of shorthand macros to identify "later" P6-family chips
1997 * like the Pentium M and Core. First, the "older" P6-based stuff
1998 * (loosely defined as "pre-Pentium-4"):
2002 cpi->cpi_family == 6 && \
2003 (cpi->cpi_model == 1 || \
2004 cpi->cpi_model == 3 || \
2005 cpi->cpi_model == 5 || \
2006 cpi->cpi_model == 6 || \
2007 cpi->cpi_model == 7 || \
2008 cpi->cpi_model == 8 || \
2009 cpi->cpi_model == 0xA || \
2010 cpi->cpi_model == 0xB) \
2014 #define IS_NEW_F6(cpi) ((cpi->cpi_family == 6) && !IS_LEGACY_P6(cpi))
2017 #define IS_EXTENDED_MODEL_INTEL(cpi) (cpi->cpi_family == 0x6 || \
2018 cpi->cpi_family >= 0xf)
2023 * See cpuid section of "Intel 64 and IA-32 Architectures Software Developer's
2024 * Manual Volume 2A: Instruction Set Reference, A-M" #25366-022US, November
2032 #define MWAIT_SUPPORTED(cpi) ((cpi)->cpi_std[1].cp_ecx & CPUID_INTC_ECX_MON)
2033 #define MWAIT_INT_ENABLE(cpi) ((cpi)->cpi_std[5].cp_ecx & 0x2)
2034 #define MWAIT_EXTENSION(cpi) ((cpi)->cpi_std[5].cp_ecx & 0x1)
2035 #define MWAIT_SIZE_MIN(cpi) BITX((cpi)->cpi_std[5].cp_eax, 15, 0)
2036 #define MWAIT_SIZE_MAX(cpi) BITX((cpi)->cpi_std[5].cp_ebx, 15, 0)
2038 * Number of sub-cstates for a given c-state.
2041 BITX((cpi)->cpi_std[5].cp_edx, c_state + 3, c_state)
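Note that the c_state argument is a bit offset rather than a C-state number: C-state n's count occupies %edx bits [4n+3:4n]. A short usage sketch:

    uint_t c1_substates = MWAIT_NUM_SUBSTATES(cpi, 4);  /* %edx[7:4] */
    uint_t c2_substates = MWAIT_NUM_SUBSTATES(cpi, 8);  /* %edx[11:8] */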
2071 * Apply various platform-dependent restrictions where the
2083 cp->cp_edx &= in platform_cpuid_mangle()
2095 cp->cp_edx &= in platform_cpuid_mangle()
2102 cp->cp_ecx &= ~CPUID_AMD_ECX_CMP_LGCY; in platform_cpuid_mangle()
2113 * Zero out the (ncores-per-chip - 1) field in platform_cpuid_mangle()
2115 cp->cp_eax &= 0x03ffffff; in platform_cpuid_mangle()
2126 cp->cp_ecx &= ~CPUID_AMD_ECX_CR8D; in platform_cpuid_mangle()
2131 * Zero out the (ncores-per-chip - 1) field in platform_cpuid_mangle()
2133 cp->cp_ecx &= 0xffffff00; in platform_cpuid_mangle()
2150 * we don't currently support. Could be set to non-zero values
2160 * Allocate space for mcpu_cpi in the machcpu structure for all non-boot CPUs.
2170 ASSERT(cpu->cpu_id != 0); in cpuid_alloc_space()
2171 ASSERT(cpu->cpu_m.mcpu_cpi == NULL); in cpuid_alloc_space()
2172 cpu->cpu_m.mcpu_cpi = in cpuid_alloc_space()
2173 kmem_zalloc(sizeof (*cpu->cpu_m.mcpu_cpi), KM_SLEEP); in cpuid_alloc_space()
2179 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_free_space()
2189 for (i = 1; i < cpi->cpi_cache_leaf_size; i++) in cpuid_free_space()
2190 kmem_free(cpi->cpi_cache_leaves[i], sizeof (struct cpuid_regs)); in cpuid_free_space()
2191 if (cpi->cpi_cache_leaf_size > 0) in cpuid_free_space()
2192 kmem_free(cpi->cpi_cache_leaves, in cpuid_free_space()
2193 cpi->cpi_cache_leaf_size * sizeof (struct cpuid_regs *)); in cpuid_free_space()
2196 cpu->cpu_m.mcpu_cpi = NULL; in cpuid_free_space()
2216 ASSERT(platform_type == -1); in determine_platform()
2290 * Xen's pseudo-cpuid function returns a string representing the in determine_platform()
2294 * hypervisor might use a different one depending on whether Hyper-V in determine_platform()
2316 ASSERT(platform_type != -1); in get_hwenv()
2351 for (i = 0; i < ARRAY_SIZE(cpi->cpi_topo); i++) { in cpuid_gather_ext_topo_leaf()
2352 struct cpuid_regs *regs = &cpi->cpi_topo[i]; in cpuid_gather_ext_topo_leaf()
2355 regs->cp_eax = leaf; in cpuid_gather_ext_topo_leaf()
2356 regs->cp_ecx = i; in cpuid_gather_ext_topo_leaf()
2359 if (CPUID_AMD_8X26_ECX_TYPE(regs->cp_ecx) == in cpuid_gather_ext_topo_leaf()
2365 cpi->cpi_topo_nleaves = i; in cpuid_gather_ext_topo_leaf()
2376 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_gather_amd_topology_leaves()
2378 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_gather_amd_topology_leaves()
2381 cp = &cpi->cpi_extd[8]; in cpuid_gather_amd_topology_leaves()
2382 cp->cp_eax = CPUID_LEAF_EXT_8; in cpuid_gather_amd_topology_leaves()
2384 platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8, cp); in cpuid_gather_amd_topology_leaves()
2388 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_gather_amd_topology_leaves()
2391 cp = &cpi->cpi_extd[0x1e]; in cpuid_gather_amd_topology_leaves()
2392 cp->cp_eax = CPUID_LEAF_EXT_1e; in cpuid_gather_amd_topology_leaves()
2396 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_26) { in cpuid_gather_amd_topology_leaves()
2414 if (cpi->cpi_maxeax >= 0xB) { in cpuid_gather_apicid()
2419 cp->cp_eax = 0xB; in cpuid_gather_apicid()
2420 cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0; in cpuid_gather_apicid()
2423 if (cp->cp_ebx != 0) { in cpuid_gather_apicid()
2424 return (cp->cp_edx); in cpuid_gather_apicid()
2428 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_gather_apicid()
2429 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_gather_apicid()
2431 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_gather_apicid()
2432 return (cpi->cpi_extd[0x1e].cp_eax); in cpuid_gather_apicid()
2463 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_amd_ncores()
2464 nthreads = BITX(cpi->cpi_extd[8].cp_ecx, 7, 0) + 1; in cpuid_amd_ncores()
2465 } else if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) { in cpuid_amd_ncores()
2474 if (cpi->cpi_family >= 0x17 && in cpuid_amd_ncores()
2476 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_amd_ncores()
2477 nthread_per_core = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1; in cpuid_amd_ncores()
2498 if (cpi->cpi_maxeax >= 4) { in cpuid_intel_ncores()
2499 *ncores = BITX(cpi->cpi_std[4].cp_eax, 31, 26) + 1; in cpuid_intel_ncores()
2504 if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) { in cpuid_intel_ncores()
2514 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_leafB_getids()
2518 if (cpi->cpi_maxeax < 0xB) in cpuid_leafB_getids()
2522 cp->cp_eax = 0xB; in cpuid_leafB_getids()
2523 cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0; in cpuid_leafB_getids()
2528 * Check that CPUID.(EAX=0BH, ECX=0H):EBX is non-zero, which in cpuid_leafB_getids()
2532 if (cp->cp_ebx != 0) { in cpuid_leafB_getids()
2542 cp->cp_eax = 0xB; in cpuid_leafB_getids()
2543 cp->cp_ecx = i; in cpuid_leafB_getids()
2549 x2apic_id = cp->cp_edx; in cpuid_leafB_getids()
2550 coreid_shift = BITX(cp->cp_eax, 4, 0); in cpuid_leafB_getids()
2551 ncpu_per_core = BITX(cp->cp_ebx, 15, 0); in cpuid_leafB_getids()
2553 x2apic_id = cp->cp_edx; in cpuid_leafB_getids()
2554 chipid_shift = BITX(cp->cp_eax, 4, 0); in cpuid_leafB_getids()
2555 ncpu_per_chip = BITX(cp->cp_ebx, 15, 0); in cpuid_leafB_getids()
2562 cpi->cpi_ncpu_per_chip = ncpu_per_chip; in cpuid_leafB_getids()
2563 cpi->cpi_ncore_per_chip = ncpu_per_chip / in cpuid_leafB_getids()
2565 cpi->cpi_chipid = x2apic_id >> chipid_shift; in cpuid_leafB_getids()
2566 cpi->cpi_clogid = x2apic_id & ((1 << chipid_shift) - 1); in cpuid_leafB_getids()
2567 cpi->cpi_coreid = x2apic_id >> coreid_shift; in cpuid_leafB_getids()
2568 cpi->cpi_pkgcoreid = cpi->cpi_clogid >> coreid_shift; in cpuid_leafB_getids()
2569 cpi->cpi_procnodeid = cpi->cpi_chipid; in cpuid_leafB_getids()
2570 cpi->cpi_compunitid = cpi->cpi_coreid; in cpuid_leafB_getids()
2573 cpi->cpi_nthread_bits = coreid_shift; in cpuid_leafB_getids()
2574 cpi->cpi_ncore_bits = chipid_shift - coreid_shift; in cpuid_leafB_getids()
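A worked example with hypothetical values: with coreid_shift = 1 (SMT2) and chipid_shift = 4 (16 logical CPUs per package), an x2APIC ID of 0x1d decomposes as:

    /*
     * x2apic_id = 0x1d (0b11101)
     * chipid    = 0x1d >> 4  = 1
     * clogid    = 0x1d & 0xf = 0xd
     * coreid    = 0x1d >> 1  = 0xe
     * pkgcoreid = 0xd >> 1   = 6
     */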
2589 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_intel_getids()
2595 cpi->cpi_procnodes_per_pkg = 1; in cpuid_intel_getids()
2596 cpi->cpi_cores_per_compunit = 1; in cpuid_intel_getids()
2613 cpi->cpi_nthread_bits = ddi_fls(cpi->cpi_ncpu_per_chip); in cpuid_intel_getids()
2614 cpi->cpi_ncore_bits = ddi_fls(cpi->cpi_ncore_per_chip); in cpuid_intel_getids()
2616 for (i = 1; i < cpi->cpi_ncpu_per_chip; i <<= 1) in cpuid_intel_getids()
2619 cpi->cpi_chipid = cpi->cpi_apicid >> chipid_shift; in cpuid_intel_getids()
2620 cpi->cpi_clogid = cpi->cpi_apicid & ((1 << chipid_shift) - 1); in cpuid_intel_getids()
2624 * Multi-core (and possibly multi-threaded) in cpuid_intel_getids()
2629 if (cpi->cpi_ncore_per_chip == 1) in cpuid_intel_getids()
2630 ncpu_per_core = cpi->cpi_ncpu_per_chip; in cpuid_intel_getids()
2631 else if (cpi->cpi_ncore_per_chip > 1) in cpuid_intel_getids()
2632 ncpu_per_core = cpi->cpi_ncpu_per_chip / in cpuid_intel_getids()
2633 cpi->cpi_ncore_per_chip; in cpuid_intel_getids()
2638 * +-----------------------+------+------+ in cpuid_intel_getids()
2640 * +-----------------------+------+------+ in cpuid_intel_getids()
2641 * <------- chipid --------> in cpuid_intel_getids()
2642 * <------- coreid ---------------> in cpuid_intel_getids()
2643 * <--- clogid --> in cpuid_intel_getids()
2644 * <------> in cpuid_intel_getids()
2650 * store the value of cpi->cpi_ncpu_per_chip. in cpuid_intel_getids()
2653 * cpi->cpi_ncore_per_chip. in cpuid_intel_getids()
2657 cpi->cpi_coreid = cpi->cpi_apicid >> coreid_shift; in cpuid_intel_getids()
2658 cpi->cpi_pkgcoreid = cpi->cpi_clogid >> coreid_shift; in cpuid_intel_getids()
2661 * Single-core multi-threaded processors. in cpuid_intel_getids()
2663 cpi->cpi_coreid = cpi->cpi_chipid; in cpuid_intel_getids()
2664 cpi->cpi_pkgcoreid = 0; in cpuid_intel_getids()
2667 * Single-core single-thread processors. in cpuid_intel_getids()
2669 cpi->cpi_coreid = cpu->cpu_id; in cpuid_intel_getids()
2670 cpi->cpi_pkgcoreid = 0; in cpuid_intel_getids()
2672 cpi->cpi_procnodeid = cpi->cpi_chipid; in cpuid_intel_getids()
2673 cpi->cpi_compunitid = cpi->cpi_coreid; in cpuid_intel_getids()
2692 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_amd_get_coreid()
2694 if (cpi->cpi_family >= 0x17 && in cpuid_amd_get_coreid()
2696 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_amd_get_coreid()
2697 uint_t nthreads = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1; in cpuid_amd_get_coreid()
2700 return (cpi->cpi_apicid >> 1); in cpuid_amd_get_coreid()
2704 return (cpu->cpu_id); in cpuid_amd_get_coreid()
2713 * synthesize this case by using cpu->cpu_id. This scheme does not,
2715 * coreids starting at a multiple of the number of cores per chip - that is
2732 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_amd_getids()
2742 cpi->cpi_coreid = cpuid_amd_get_coreid(cpu); in cpuid_amd_getids()
2743 cpi->cpi_compunitid = cpi->cpi_coreid; in cpuid_amd_getids()
2744 cpi->cpi_cores_per_compunit = 1; in cpuid_amd_getids()
2745 cpi->cpi_procnodes_per_pkg = 1; in cpuid_amd_getids()
2751 * then we assume it's one. This should be present on all 64-bit AMD in cpuid_amd_getids()
2754 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_amd_getids()
2755 coreidsz = BITX((cpi)->cpi_extd[8].cp_ecx, 15, 12); in cpuid_amd_getids()
2763 for (i = 1; i < cpi->cpi_ncore_per_chip; i <<= 1) in cpuid_amd_getids()
2769 /* Assume single-core part */ in cpuid_amd_getids()
2772 cpi->cpi_clogid = cpi->cpi_apicid & ((1 << coreidsz) - 1); in cpuid_amd_getids()
2777 * this value is the core id in the given node. For non-virtualized in cpuid_amd_getids()
2785 if (cpi->cpi_family >= 0x17 && in cpuid_amd_getids()
2787 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e && in cpuid_amd_getids()
2788 cpi->cpi_extd[0x1e].cp_ebx != 0) { in cpuid_amd_getids()
2789 uint_t nthreads = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1; in cpuid_amd_getids()
2792 cpi->cpi_pkgcoreid = cpi->cpi_clogid >> 1; in cpuid_amd_getids()
2794 cpi->cpi_pkgcoreid = cpi->cpi_clogid; in cpuid_amd_getids()
2797 cpi->cpi_pkgcoreid = cpi->cpi_clogid; in cpuid_amd_getids()
2806 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_amd_getids()
2807 cp = &cpi->cpi_extd[0x1e]; in cpuid_amd_getids()
2809 cpi->cpi_procnodes_per_pkg = BITX(cp->cp_ecx, 10, 8) + 1; in cpuid_amd_getids()
2810 cpi->cpi_procnodeid = BITX(cp->cp_ecx, 7, 0); in cpuid_amd_getids()
2813 * For Bulldozer-era CPUs, recalculate the compute unit in cpuid_amd_getids()
2816 if (cpi->cpi_family >= 0x15 && cpi->cpi_family < 0x17) { in cpuid_amd_getids()
2817 cpi->cpi_cores_per_compunit = in cpuid_amd_getids()
2818 BITX(cp->cp_ebx, 15, 8) + 1; in cpuid_amd_getids()
2819 cpi->cpi_compunitid = BITX(cp->cp_ebx, 7, 0) + in cpuid_amd_getids()
2820 (cpi->cpi_ncore_per_chip / in cpuid_amd_getids()
2821 cpi->cpi_cores_per_compunit) * in cpuid_amd_getids()
2822 (cpi->cpi_procnodeid / in cpuid_amd_getids()
2823 cpi->cpi_procnodes_per_pkg); in cpuid_amd_getids()
2825 } else if (cpi->cpi_family == 0xf || cpi->cpi_family >= 0x11) { in cpuid_amd_getids()
2826 cpi->cpi_procnodeid = (cpi->cpi_apicid >> coreidsz) & 7; in cpuid_amd_getids()
2827 } else if (cpi->cpi_family == 0x10) { in cpuid_amd_getids()
2829 * See if we are a multi-node processor. in cpuid_amd_getids()
2833 if ((cpi->cpi_model < 8) || BITX(nb_caps_reg, 29, 29) == 0) { in cpuid_amd_getids()
2834 /* Single-node */ in cpuid_amd_getids()
2835 cpi->cpi_procnodeid = BITX(cpi->cpi_apicid, 5, in cpuid_amd_getids()
2840 * Multi-node revision D (2 nodes per package in cpuid_amd_getids()
2843 cpi->cpi_procnodes_per_pkg = 2; in cpuid_amd_getids()
2845 first_half = (cpi->cpi_pkgcoreid <= in cpuid_amd_getids()
2846 (cpi->cpi_ncore_per_chip/2 - 1)); in cpuid_amd_getids()
2848 if (cpi->cpi_apicid == cpi->cpi_pkgcoreid) { in cpuid_amd_getids()
2850 cpi->cpi_procnodeid = (first_half ? 0 : 1); in cpuid_amd_getids()
2855 node2_1 = BITX(cpi->cpi_apicid, 5, 4) << 1; in cpuid_amd_getids()
2862 * always 0 on dual-node processors) in cpuid_amd_getids()
2865 cpi->cpi_procnodeid = node2_1 + in cpuid_amd_getids()
2868 cpi->cpi_procnodeid = node2_1 + in cpuid_amd_getids()
2873 cpi->cpi_procnodeid = 0; in cpuid_amd_getids()
2876 cpi->cpi_chipid = in cpuid_amd_getids()
2877 cpi->cpi_procnodeid / cpi->cpi_procnodes_per_pkg; in cpuid_amd_getids()
2879 cpi->cpi_ncore_bits = coreidsz; in cpuid_amd_getids()
2880 cpi->cpi_nthread_bits = ddi_fls(cpi->cpi_ncpu_per_chip / in cpuid_amd_getids()
2881 cpi->cpi_ncore_per_chip); in cpuid_amd_getids()
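A worked example with hypothetical values for a Zen-style SMT2 part with coreidsz = 4: an APIC ID of 0x9 decomposes as:

    /*
     * cpi_apicid    = 0x9
     * cpi_clogid    = 0x9 & 0xf = 0x9
     * cpi_pkgcoreid = 0x9 >> 1  = 4   (two threads per core)
     * cpi_coreid    = 0x9 >> 1  = 4   (via cpuid_amd_get_coreid())
     */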
2891 * MDS-related micro-architectural state that would normally happen by calling
2902 * micro-architectural state on the processor. This flush is used to mitigate
2906 * - A noop which is done because we either are vulnerable, but do not have
2910 * - spec_uarch_flush_msr which will issue an L1D flush and if microcode to
2912 * however, it only flushes the MDS related micro-architectural state on the
2915 * - x86_md_clear which will flush the MDS related state. This is done when we
2925 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_update_md_clear()
2927 /* Non-Intel doesn't concern us here. */ in cpuid_update_md_clear()
2928 if (cpi->cpi_vendor != X86_VENDOR_Intel) in cpuid_update_md_clear()
2958 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_update_l1d_flush()
2966 if (cpi->cpi_vendor != X86_VENDOR_Intel || in cpuid_update_l1d_flush()
3032 * to not have an RSB (pre-eIBRS).
3035 X86_BHI_TOO_OLD_OR_DISABLED, /* Pre-eIBRS or disabled */
3057 * enough to fix (which includes non-Intel CPUs), or the CPU has an explicit
3058 * disable-Branch-History control.
3064 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_learn_and_patch_bhi()
3068 ASSERT0(cpu->cpu_id); in cpuid_learn_and_patch_bhi()
3078 * or if it's non-Intel, in which case this mitigation mechanism in cpuid_learn_and_patch_bhi()
3081 if (cpi->cpi_vendor != X86_VENDOR_Intel || in cpuid_learn_and_patch_bhi()
3112 * post-barrier RSB (PBRSB) guessing suggests we should enable Intel RSB
3119 * +-------+------------+-----------------+--------+
3121 * +-------+------------+-----------------+--------+
3125 * +-------+------------+-----------------+--------+
3133 * needed, but context-switch stuffing isn't.
3138 …l.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/…
3166 * The Intel document on Post-Barrier RSB says that processors in cpuid_patch_rsb()
3177 * both vmexit and context-switching require the software in cpuid_patch_rsb()
3256 * AMD Zen 5 processors have a bug where the 16- and 32-bit forms of the
3258 * (CF=1) - See AMD-SB-7055 / CVE-2025-62626.
3263 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_evaluate_amd_rdseed()
3264 struct cpuid_regs *ecp = &cpi->cpi_std[7]; in cpuid_evaluate_amd_rdseed()
3265 uint32_t rev = cpu->cpu_m.mcpu_ucode_info->cui_rev; in cpuid_evaluate_amd_rdseed()
3268 ASSERT3U(cpi->cpi_vendor, ==, X86_VENDOR_AMD); in cpuid_evaluate_amd_rdseed()
3269 ASSERT(ecp->cp_ebx & CPUID_INTC_EBX_7_0_RDSEED); in cpuid_evaluate_amd_rdseed()
3272 if (uarchrev_uarch(cpi->cpi_uarchrev) != X86_UARCH_AMD_ZEN5) in cpuid_evaluate_amd_rdseed()
3276 * AMD-SB-7055 specifies microcode versions that mitigate this issue on in cpuid_evaluate_amd_rdseed()
3277 * BRH-C1 and BRHD-B0. If we're on one of those chips and the microcode in cpuid_evaluate_amd_rdseed()
3280 if (chiprev_matches(cpi->cpi_chiprev, X86_CHIPREV_AMD_TURIN_C1) && in cpuid_evaluate_amd_rdseed()
3284 if (chiprev_matches(cpi->cpi_chiprev, X86_CHIPREV_AMD_DENSE_TURIN_B0) && in cpuid_evaluate_amd_rdseed()
3296 if (cpu->cpu_id == 0) in cpuid_evaluate_amd_rdseed()
3300 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_RDSEED; in cpuid_evaluate_amd_rdseed()
3317 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_update_tsx()
3319 VERIFY(cpu->cpu_id == 0); in cpuid_update_tsx()
3321 if (cpi->cpi_vendor != X86_VENDOR_Intel) { in cpuid_update_tsx()
3336 * we want cross-CPU-thread protection. in cpuid_update_tsx()
3362 * mitigation. TSX-using code will always take the fallback path. in cpuid_update_tsx()
3364 if (cpi->cpi_pass < 4) { in cpuid_update_tsx()
3406 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_scan_security()
3409 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_scan_security()
3410 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_scan_security()
3411 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_scan_security()
3412 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_IBPB) in cpuid_scan_security()
3414 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_IBRS) in cpuid_scan_security()
3416 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_STIBP) in cpuid_scan_security()
3418 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_STIBP_ALL) in cpuid_scan_security()
3420 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_SSBD) in cpuid_scan_security()
3422 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_VIRT_SSBD) in cpuid_scan_security()
3424 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_SSB_NO) in cpuid_scan_security()
3432 if (cpi->cpi_vendor == X86_VENDOR_AMD && in cpuid_scan_security()
3433 (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_PREFER_IBRS) && in cpuid_scan_security()
3434 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_21 && in cpuid_scan_security()
3435 (cpi->cpi_extd[0x21].cp_eax & CPUID_AMD_8X21_EAX_AIBRS)) { in cpuid_scan_security()
3439 } else if (cpi->cpi_vendor == X86_VENDOR_Intel && in cpuid_scan_security()
3440 cpi->cpi_maxeax >= 7) { in cpuid_scan_security()
3442 ecp = &cpi->cpi_std[7]; in cpuid_scan_security()
3444 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_MD_CLEAR) { in cpuid_scan_security()
3448 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_SPEC_CTRL) { in cpuid_scan_security()
3453 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_STIBP) { in cpuid_scan_security()
3470 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_ARCH_CAPS) { in cpuid_scan_security()
3533 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_SSBD) in cpuid_scan_security()
3536 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_FLUSH_CMD) in cpuid_scan_security()
3541 * Take care of certain mitigations on the non-boot CPU. The boot CPU in cpuid_scan_security()
3543 * do. This gives us a hook for per-HW thread mitigations such as in cpuid_scan_security()
3546 if (cpu->cpu_id != 0) { in cpuid_scan_security()
3603 * If any of these are present, then we need to flush u-arch state at in cpuid_scan_security()
3607 * MDS, the L1D flush also clears the other u-arch state that the in cpuid_scan_security()
3660 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_basic_topology()
3662 if (cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_basic_topology()
3663 cpi->cpi_vendor == X86_VENDOR_HYGON) { in cpuid_basic_topology()
3667 cpi->cpi_apicid = cpuid_gather_apicid(cpi); in cpuid_basic_topology()
3673 switch (cpi->cpi_vendor) { in cpuid_basic_topology()
3675 cpuid_intel_ncores(cpi, &cpi->cpi_ncpu_per_chip, in cpuid_basic_topology()
3676 &cpi->cpi_ncore_per_chip); in cpuid_basic_topology()
3680 cpuid_amd_ncores(cpi, &cpi->cpi_ncpu_per_chip, in cpuid_basic_topology()
3681 &cpi->cpi_ncore_per_chip); in cpuid_basic_topology()
3687 * today, though there are also 64-bit VIA chips. Assume that in cpuid_basic_topology()
3690 if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) { in cpuid_basic_topology()
3691 cpi->cpi_ncore_per_chip = 1; in cpuid_basic_topology()
3692 cpi->cpi_ncpu_per_chip = CPI_CPU_COUNT(cpi); in cpuid_basic_topology()
3701 if (cpi->cpi_ncore_per_chip > 1) { in cpuid_basic_topology()
3705 if (cpi->cpi_ncpu_per_chip > 1 && in cpuid_basic_topology()
3706 cpi->cpi_ncpu_per_chip != cpi->cpi_ncore_per_chip) { in cpuid_basic_topology()
3720 * This is a single-core, single-threaded processor. in cpuid_basic_topology()
3722 cpi->cpi_procnodes_per_pkg = 1; in cpuid_basic_topology()
3723 cpi->cpi_cores_per_compunit = 1; in cpuid_basic_topology()
3724 cpi->cpi_compunitid = 0; in cpuid_basic_topology()
3725 cpi->cpi_chipid = -1; in cpuid_basic_topology()
3726 cpi->cpi_clogid = 0; in cpuid_basic_topology()
3727 cpi->cpi_coreid = cpu->cpu_id; in cpuid_basic_topology()
3728 cpi->cpi_pkgcoreid = 0; in cpuid_basic_topology()
3729 if (cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_basic_topology()
3730 cpi->cpi_vendor == X86_VENDOR_HYGON) { in cpuid_basic_topology()
3731 cpi->cpi_procnodeid = BITX(cpi->cpi_apicid, 3, 0); in cpuid_basic_topology()
3733 cpi->cpi_procnodeid = cpi->cpi_chipid; in cpuid_basic_topology()
3736 switch (cpi->cpi_vendor) { in cpuid_basic_topology()
3757 cpi->cpi_procnodes_per_pkg = 1; in cpuid_basic_topology()
3758 cpi->cpi_cores_per_compunit = 1; in cpuid_basic_topology()
3759 cpi->cpi_chipid = 0; in cpuid_basic_topology()
3760 cpi->cpi_coreid = cpu->cpu_id; in cpuid_basic_topology()
3761 cpi->cpi_clogid = cpu->cpu_id; in cpuid_basic_topology()
3762 cpi->cpi_pkgcoreid = cpu->cpu_id; in cpuid_basic_topology()
3763 cpi->cpi_procnodeid = cpi->cpi_chipid; in cpuid_basic_topology()
3764 cpi->cpi_compunitid = cpi->cpi_coreid; in cpuid_basic_topology()
3780 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_basic_thermal()
3782 if (cpi->cpi_maxeax < 6) { in cpuid_basic_thermal()
3786 cp = &cpi->cpi_std[6]; in cpuid_basic_thermal()
3787 cp->cp_eax = 6; in cpuid_basic_thermal()
3788 cp->cp_ebx = cp->cp_ecx = cp->cp_edx = 0; in cpuid_basic_thermal()
3790 platform_cpuid_mangle(cpi->cpi_vendor, 6, cp); in cpuid_basic_thermal()
3792 if (cpi->cpi_vendor != X86_VENDOR_Intel) { in cpuid_basic_thermal()
3796 if ((cp->cp_eax & CPUID_INTC_EAX_DTS) != 0) { in cpuid_basic_thermal()
3800 if ((cp->cp_eax & CPUID_INTC_EAX_PTM) != 0) { in cpuid_basic_thermal()
3812 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_basic_avx()
3817 if ((cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_AVX) == 0) in cpuid_basic_avx()
3826 if (cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_F16C) in cpuid_basic_avx()
3829 if (cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_FMA) in cpuid_basic_avx()
3832 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_BMI1) in cpuid_basic_avx()
3835 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_BMI2) in cpuid_basic_avx()
3838 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX2) in cpuid_basic_avx()
3841 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_VAES) in cpuid_basic_avx()
3844 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_VPCLMULQDQ) in cpuid_basic_avx()
3851 if ((cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512F) == 0) in cpuid_basic_avx()
3855 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512DQ) in cpuid_basic_avx()
3858 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512IFMA) in cpuid_basic_avx()
3861 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512PF) in cpuid_basic_avx()
3864 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512ER) in cpuid_basic_avx()
3867 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512CD) in cpuid_basic_avx()
3870 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512BW) in cpuid_basic_avx()
3873 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512VL) in cpuid_basic_avx()
3876 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VBMI) in cpuid_basic_avx()
3879 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VBMI2) in cpuid_basic_avx()
3882 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VNNI) in cpuid_basic_avx()
3885 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512BITALG) in cpuid_basic_avx()
3888 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VPOPCDQ) in cpuid_basic_avx()
3891 if (cpi->cpi_std[7].cp_edx & CPUID_INTC_EDX_7_0_AVX5124NNIW) in cpuid_basic_avx()
3894 if (cpi->cpi_std[7].cp_edx & CPUID_INTC_EDX_7_0_AVX5124FMAPS) in cpuid_basic_avx()
3901 if (cpi->cpi_std[7].cp_eax < 1) in cpuid_basic_avx()
3904 if (cpi->cpi_sub7[0].cp_eax & CPUID_INTC_EAX_7_1_AVX512_BF16) in cpuid_basic_avx()
3918 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_basic_ppin()
3920 switch (cpi->cpi_vendor) { in cpuid_basic_ppin()
3926 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_basic_ppin()
3927 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_PPIN) { in cpuid_basic_ppin()
3933 if (cpi->cpi_family != 6) in cpuid_basic_ppin()
3935 switch (cpi->cpi_model) { in cpuid_basic_ppin()
3987 ASSERT3S(platform_type, !=, -1); in cpuid_pass_ident()
3990 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_pass_ident()
3993 cp = &cpi->cpi_std[0]; in cpuid_pass_ident()
3994 cp->cp_eax = 0; in cpuid_pass_ident()
3995 cpi->cpi_maxeax = __cpuid_insn(cp); in cpuid_pass_ident()
3997 uint32_t *iptr = (uint32_t *)cpi->cpi_vendorstr; in cpuid_pass_ident()
3998 *iptr++ = cp->cp_ebx; in cpuid_pass_ident()
3999 *iptr++ = cp->cp_edx; in cpuid_pass_ident()
4000 *iptr++ = cp->cp_ecx; in cpuid_pass_ident()
4001 *(char *)&cpi->cpi_vendorstr[12] = '\0'; in cpuid_pass_ident()
4004 cpi->cpi_vendor = _cpuid_vendorstr_to_vendorcode(cpi->cpi_vendorstr); in cpuid_pass_ident()
4005 x86_vendor = cpi->cpi_vendor; /* for compatibility */ in cpuid_pass_ident()
4010 if (cpi->cpi_maxeax > CPI_MAXEAX_MAX) in cpuid_pass_ident()
4011 cpi->cpi_maxeax = CPI_MAXEAX_MAX; in cpuid_pass_ident()
4012 if (cpi->cpi_maxeax < 1) in cpuid_pass_ident()
4015 cp = &cpi->cpi_std[1]; in cpuid_pass_ident()
4016 cp->cp_eax = 1; in cpuid_pass_ident()
4022 cpi->cpi_model = CPI_MODEL(cpi); in cpuid_pass_ident()
4023 cpi->cpi_family = CPI_FAMILY(cpi); in cpuid_pass_ident()
4025 if (cpi->cpi_family == 0xf) in cpuid_pass_ident()
4026 cpi->cpi_family += CPI_FAMILY_XTD(cpi); in cpuid_pass_ident()
4034 switch (cpi->cpi_vendor) { in cpuid_pass_ident()
4037 cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4; in cpuid_pass_ident()
4041 cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4; in cpuid_pass_ident()
4044 cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4; in cpuid_pass_ident()
4047 if (cpi->cpi_model == 0xf) in cpuid_pass_ident()
4048 cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4; in cpuid_pass_ident()
4052 cpi->cpi_step = CPI_STEP(cpi); in cpuid_pass_ident()
4053 cpi->cpi_brandid = CPI_BRANDID(cpi); in cpuid_pass_ident()
4058 cpi->cpi_chiprev = _cpuid_chiprev(cpi->cpi_vendor, cpi->cpi_family, in cpuid_pass_ident()
4059 cpi->cpi_model, cpi->cpi_step); in cpuid_pass_ident()
4060 cpi->cpi_chiprevstr = _cpuid_chiprevstr(cpi->cpi_vendor, in cpuid_pass_ident()
4061 cpi->cpi_family, cpi->cpi_model, cpi->cpi_step); in cpuid_pass_ident()
4062 cpi->cpi_socket = _cpuid_skt(cpi->cpi_vendor, cpi->cpi_family, in cpuid_pass_ident()
4063 cpi->cpi_model, cpi->cpi_step); in cpuid_pass_ident()
4064 cpi->cpi_uarchrev = _cpuid_uarchrev(cpi->cpi_vendor, cpi->cpi_family, in cpuid_pass_ident()
4065 cpi->cpi_model, cpi->cpi_step); in cpuid_pass_ident()
4080 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_pass_basic()
4083 if (cpi->cpi_maxeax < 1) in cpuid_pass_basic()
4089 cp = &cpi->cpi_std[1]; in cpuid_pass_basic()
4093 * - believe %edx feature word in cpuid_pass_basic()
4094 * - ignore %ecx feature word in cpuid_pass_basic()
4095 * - 32-bit virtual and physical addressing in cpuid_pass_basic()
4100 cpi->cpi_pabits = cpi->cpi_vabits = 32; in cpuid_pass_basic()
4102 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4104 if (cpi->cpi_family == 5) in cpuid_pass_basic()
4112 if (cpi->cpi_model < 3 && cpi->cpi_step < 3) in cpuid_pass_basic()
4113 cp->cp_edx &= ~CPUID_INTC_EDX_SEP; in cpuid_pass_basic()
4114 } else if (IS_NEW_F6(cpi) || cpi->cpi_family == 0xf) { in cpuid_pass_basic()
4123 } else if (cpi->cpi_family > 0xf) in cpuid_pass_basic()
4129 if (cpi->cpi_maxeax < 5) in cpuid_pass_basic()
4137 if (cpi->cpi_family == 0xf && cpi->cpi_model == 0xe) { in cpuid_pass_basic()
4138 cp->cp_eax = (0xf0f & cp->cp_eax) | 0xc0; in cpuid_pass_basic()
4139 cpi->cpi_model = 0xc; in cpuid_pass_basic()
4142 if (cpi->cpi_family == 5) { in cpuid_pass_basic()
4155 if (cpi->cpi_model == 0) { in cpuid_pass_basic()
4156 if (cp->cp_edx & 0x200) { in cpuid_pass_basic()
4157 cp->cp_edx &= ~0x200; in cpuid_pass_basic()
4158 cp->cp_edx |= CPUID_INTC_EDX_PGE; in cpuid_pass_basic()
4165 if (cpi->cpi_model < 6) in cpuid_pass_basic()
4173 if (cpi->cpi_family >= 0xf) in cpuid_pass_basic()
4179 if (cpi->cpi_maxeax < 5) in cpuid_pass_basic()
4185 * Pre-family-10h Opterons do not have the MWAIT instruction. We in cpuid_pass_basic()
4187 * is preferred. Families in-between are less certain. in cpuid_pass_basic()
4189 if (cpi->cpi_family < 0x17) { in cpuid_pass_basic()
4203 if (cpi->cpi_family == 5 && cpi->cpi_model == 4 && in cpuid_pass_basic()
4204 (cpi->cpi_step == 2 || cpi->cpi_step == 3)) in cpuid_pass_basic()
4205 cp->cp_edx |= CPUID_INTC_EDX_CX8; in cpuid_pass_basic()
4211 if (cpi->cpi_family == 6) in cpuid_pass_basic()
4212 cp->cp_edx |= CPUID_INTC_EDX_CX8; in cpuid_pass_basic()
4292 cp->cp_edx &= mask_edx; in cpuid_pass_basic()
4293 cp->cp_ecx &= mask_ecx; in cpuid_pass_basic()
4300 platform_cpuid_mangle(cpi->cpi_vendor, 1, cp); in cpuid_pass_basic()
4305 * 7 has sub-leaves determined by ecx. in cpuid_pass_basic()
4307 if (cpi->cpi_maxeax >= 7) { in cpuid_pass_basic()
4309 ecp = &cpi->cpi_std[7]; in cpuid_pass_basic()
4310 ecp->cp_eax = 7; in cpuid_pass_basic()
4311 ecp->cp_ecx = 0; in cpuid_pass_basic()
4316 * extended-save-area dependent flags here. By removing most of in cpuid_pass_basic()
4317 * the leaf 7, sub-leaf 0 flags, that will ensure that we don't in cpuid_pass_basic()
4322 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_BMI1; in cpuid_pass_basic()
4323 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_BMI2; in cpuid_pass_basic()
4324 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_AVX2; in cpuid_pass_basic()
4325 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_MPX; in cpuid_pass_basic()
4326 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_ALL_AVX512; in cpuid_pass_basic()
4327 ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_ALL_AVX512; in cpuid_pass_basic()
4328 ecp->cp_edx &= ~CPUID_INTC_EDX_7_0_ALL_AVX512; in cpuid_pass_basic()
4329 ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_VAES; in cpuid_pass_basic()
4330 ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_VPCLMULQDQ; in cpuid_pass_basic()
4331 ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_GFNI; in cpuid_pass_basic()
4334 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_SMEP) in cpuid_pass_basic()
4343 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_SMAP && in cpuid_pass_basic()
4347 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_RDSEED) { in cpuid_pass_basic()
4349 if (cpi->cpi_vendor == X86_VENDOR_AMD) in cpuid_pass_basic()
4353 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_ADX) in cpuid_pass_basic()
4356 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_FSGSBASE) in cpuid_pass_basic()
4359 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_CLFLUSHOPT) in cpuid_pass_basic()
4362 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_INVPCID) in cpuid_pass_basic()
4365 if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_UMIP) in cpuid_pass_basic()
4367 if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_PKU) in cpuid_pass_basic()
4369 if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_OSPKE) in cpuid_pass_basic()
4371 if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_GFNI) in cpuid_pass_basic()
4374 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_CLWB) in cpuid_pass_basic()
4377 if (cpi->cpi_vendor == X86_VENDOR_Intel) { in cpuid_pass_basic()
4378 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_MPX) in cpuid_pass_basic()
4386 if (ecp->cp_eax >= 1) { in cpuid_pass_basic()
4388 c71 = &cpi->cpi_sub7[0]; in cpuid_pass_basic()
4389 c71->cp_eax = 7; in cpuid_pass_basic()
4390 c71->cp_ecx = 1; in cpuid_pass_basic()
4395 if (ecp->cp_eax >= 2) { in cpuid_pass_basic()
4397 c72 = &cpi->cpi_sub7[1]; in cpuid_pass_basic()
4398 c72->cp_eax = 7; in cpuid_pass_basic()
4399 c72->cp_ecx = 2; in cpuid_pass_basic()
4407 cp->cp_edx |= cpuid_feature_edx_include; in cpuid_pass_basic()
4408 cp->cp_edx &= ~cpuid_feature_edx_exclude; in cpuid_pass_basic()
4410 cp->cp_ecx |= cpuid_feature_ecx_include; in cpuid_pass_basic()
4411 cp->cp_ecx &= ~cpuid_feature_ecx_exclude; in cpuid_pass_basic()
4413 if (cp->cp_edx & CPUID_INTC_EDX_PSE) { in cpuid_pass_basic()
4416 if (cp->cp_edx & CPUID_INTC_EDX_TSC) { in cpuid_pass_basic()
4419 if (cp->cp_edx & CPUID_INTC_EDX_MSR) { in cpuid_pass_basic()
4422 if (cp->cp_edx & CPUID_INTC_EDX_MTRR) { in cpuid_pass_basic()
4425 if (cp->cp_edx & CPUID_INTC_EDX_PGE) { in cpuid_pass_basic()
4428 if (cp->cp_edx & CPUID_INTC_EDX_CMOV) { in cpuid_pass_basic()
4431 if (cp->cp_edx & CPUID_INTC_EDX_MMX) { in cpuid_pass_basic()
4434 if ((cp->cp_edx & CPUID_INTC_EDX_MCE) != 0 && in cpuid_pass_basic()
4435 (cp->cp_edx & CPUID_INTC_EDX_MCA) != 0) { in cpuid_pass_basic()
4438 if (cp->cp_edx & CPUID_INTC_EDX_PAE) { in cpuid_pass_basic()
4441 if (cp->cp_edx & CPUID_INTC_EDX_CX8) { in cpuid_pass_basic()
4444 if (cp->cp_ecx & CPUID_INTC_ECX_CX16) { in cpuid_pass_basic()
4447 if (cp->cp_edx & CPUID_INTC_EDX_PAT) { in cpuid_pass_basic()
4450 if (cp->cp_edx & CPUID_INTC_EDX_SEP) { in cpuid_pass_basic()
4453 if (cp->cp_edx & CPUID_INTC_EDX_FXSR) { in cpuid_pass_basic()
4459 if (cp->cp_edx & CPUID_INTC_EDX_SSE) { in cpuid_pass_basic()
4462 if (cp->cp_edx & CPUID_INTC_EDX_SSE2) { in cpuid_pass_basic()
4465 if (cp->cp_ecx & CPUID_INTC_ECX_SSE3) { in cpuid_pass_basic()
4468 if (cp->cp_ecx & CPUID_INTC_ECX_SSSE3) { in cpuid_pass_basic()
4471 if (cp->cp_ecx & CPUID_INTC_ECX_SSE4_1) { in cpuid_pass_basic()
4474 if (cp->cp_ecx & CPUID_INTC_ECX_SSE4_2) { in cpuid_pass_basic()
4477 if (cp->cp_ecx & CPUID_INTC_ECX_AES) { in cpuid_pass_basic()
4480 if (cp->cp_ecx & CPUID_INTC_ECX_PCLMULQDQ) { in cpuid_pass_basic()
4484 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_SHA) in cpuid_pass_basic()
4487 if (cp->cp_ecx & CPUID_INTC_ECX_XSAVE) { in cpuid_pass_basic()
4495 if (cp->cp_ecx & CPUID_INTC_ECX_PCID) { in cpuid_pass_basic()
4499 if (cp->cp_ecx & CPUID_INTC_ECX_X2APIC) { in cpuid_pass_basic()
4502 if (cp->cp_edx & CPUID_INTC_EDX_DE) { in cpuid_pass_basic()
4506 if (cp->cp_ecx & CPUID_INTC_ECX_MON) { in cpuid_pass_basic()
4512 if (cp->cp_edx & CPUID_INTC_EDX_CLFSH) { in cpuid_pass_basic()
4513 cpi->cpi_mwait.support |= MWAIT_SUPPORT; in cpuid_pass_basic()
4523 ASSERT((cp->cp_ecx & CPUID_INTC_ECX_MON) && in cpuid_pass_basic()
4524 (cp->cp_edx & CPUID_INTC_EDX_CLFSH)); in cpuid_pass_basic()
4530 if (cp->cp_ecx & CPUID_INTC_ECX_VMX) { in cpuid_pass_basic()
4534 if (cp->cp_ecx & CPUID_INTC_ECX_RDRAND) in cpuid_pass_basic()
4541 if (cp->cp_edx & CPUID_INTC_EDX_CLFSH) { in cpuid_pass_basic()
4543 x86_clflush_size = (BITX(cp->cp_ebx, 15, 8) * 8); in cpuid_pass_basic()
4546 cpi->cpi_pabits = 36; in cpuid_pass_basic()
4548 if (cpi->cpi_maxeax >= 0xD && !xsave_force_disable) { in cpuid_pass_basic()
4552 ecp->cp_eax = 0xD; in cpuid_pass_basic()
4553 ecp->cp_ecx = 1; in cpuid_pass_basic()
4554 ecp->cp_edx = ecp->cp_ebx = 0; in cpuid_pass_basic()
4557 if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVEOPT) in cpuid_pass_basic()
4559 if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVEC) in cpuid_pass_basic()
4561 if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVES) in cpuid_pass_basic()
4575 if (cpi->cpi_vendor == X86_VENDOR_AMD && in cpuid_pass_basic()
4576 uarchrev_uarch(cpi->cpi_uarchrev) <= X86_UARCH_AMD_ZEN2) { in cpuid_pass_basic()
4586 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4592 if (IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf || in cpuid_pass_basic()
4593 (get_hwenv() == HW_KVM && cpi->cpi_family == 6 && in cpuid_pass_basic()
4594 (cpi->cpi_model == 6 || cpi->cpi_model == 2))) in cpuid_pass_basic()
4598 if (cpi->cpi_family > 5 || in cpuid_pass_basic()
4599 (cpi->cpi_family == 5 && cpi->cpi_model >= 1)) in cpuid_pass_basic()
4604 * Only these Cyrix CPUs are -known- to support in cpuid_pass_basic()
4620 cp = &cpi->cpi_extd[0]; in cpuid_pass_basic()
4621 cp->cp_eax = CPUID_LEAF_EXT_0; in cpuid_pass_basic()
4622 cpi->cpi_xmaxeax = __cpuid_insn(cp); in cpuid_pass_basic()
4625 if (cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) { in cpuid_pass_basic()
4627 if (cpi->cpi_xmaxeax > CPI_XMAXEAX_MAX) in cpuid_pass_basic()
4628 cpi->cpi_xmaxeax = CPI_XMAXEAX_MAX; in cpuid_pass_basic()
4630 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4634 if (cpi->cpi_xmaxeax < 0x80000001) in cpuid_pass_basic()
4636 cp = &cpi->cpi_extd[1]; in cpuid_pass_basic()
4637 cp->cp_eax = 0x80000001; in cpuid_pass_basic()
4640 if (cpi->cpi_vendor == X86_VENDOR_AMD && in cpuid_pass_basic()
4641 cpi->cpi_family == 5 && in cpuid_pass_basic()
4642 cpi->cpi_model == 6 && in cpuid_pass_basic()
4643 cpi->cpi_step == 6) { in cpuid_pass_basic()
4648 if (cp->cp_edx & 0x400) { in cpuid_pass_basic()
4649 cp->cp_edx &= ~0x400; in cpuid_pass_basic()
4650 cp->cp_edx |= CPUID_AMD_EDX_SYSC; in cpuid_pass_basic()
4654 platform_cpuid_mangle(cpi->cpi_vendor, 0x80000001, cp); in cpuid_pass_basic()
4659 if (cp->cp_edx & CPUID_AMD_EDX_NX) { in cpuid_pass_basic()
4664 * Regardless of whether or not we boot 64-bit, in cpuid_pass_basic()
4666 * the CPU is capable of running 64-bit. in cpuid_pass_basic()
4668 if (cp->cp_edx & CPUID_AMD_EDX_LM) { in cpuid_pass_basic()
4672 /* 1 GB large page - enable only for 64 bit kernel */ in cpuid_pass_basic()
4673 if (cp->cp_edx & CPUID_AMD_EDX_1GPG) { in cpuid_pass_basic()
4677 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_basic()
4678 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_pass_basic()
4679 (cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_FXSR) && in cpuid_pass_basic()
4680 (cp->cp_ecx & CPUID_AMD_ECX_SSE4A)) { in cpuid_pass_basic()
4687 * instead. In the amd64 kernel, things are -way- in cpuid_pass_basic()
4690 if (cp->cp_edx & CPUID_AMD_EDX_SYSC) { in cpuid_pass_basic()
4704 if (cp->cp_edx & CPUID_AMD_EDX_TSCP) { in cpuid_pass_basic()
4708 if (cp->cp_ecx & CPUID_AMD_ECX_SVM) { in cpuid_pass_basic()
4712 if (cp->cp_ecx & CPUID_AMD_ECX_TOPOEXT) { in cpuid_pass_basic()
4716 if (cp->cp_ecx & CPUID_AMD_ECX_PCEC) { in cpuid_pass_basic()
4720 if (cp->cp_ecx & CPUID_AMD_ECX_XOP) { in cpuid_pass_basic()
4724 if (cp->cp_ecx & CPUID_AMD_ECX_FMA4) { in cpuid_pass_basic()
4728 if (cp->cp_ecx & CPUID_AMD_ECX_TBM) { in cpuid_pass_basic()
4732 if (cp->cp_ecx & CPUID_AMD_ECX_MONITORX) { in cpuid_pass_basic()
4743 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4745 if (cpi->cpi_maxeax >= 4) { in cpuid_pass_basic()
4746 cp = &cpi->cpi_std[4]; in cpuid_pass_basic()
4747 cp->cp_eax = 4; in cpuid_pass_basic()
4748 cp->cp_ecx = 0; in cpuid_pass_basic()
4750 platform_cpuid_mangle(cpi->cpi_vendor, 4, cp); in cpuid_pass_basic()
4755 if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8) in cpuid_pass_basic()
4757 cp = &cpi->cpi_extd[8]; in cpuid_pass_basic()
4758 cp->cp_eax = CPUID_LEAF_EXT_8; in cpuid_pass_basic()
4760 platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8, in cpuid_pass_basic()
4766 if (cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_basic()
4767 cpi->cpi_vendor == X86_VENDOR_HYGON) { in cpuid_pass_basic()
4774 if (cp->cp_ebx & CPUID_AMD_EBX_ERR_PTR_ZERO) { in cpuid_pass_basic()
4775 cpi->cpi_fp_amd_save = 0; in cpuid_pass_basic()
4777 cpi->cpi_fp_amd_save = 1; in cpuid_pass_basic()
4780 if (cp->cp_ebx & CPUID_AMD_EBX_CLZERO) { in cpuid_pass_basic()
4790 cpi->cpi_pabits = BITX(cp->cp_eax, 7, 0); in cpuid_pass_basic()
4791 cpi->cpi_vabits = BITX(cp->cp_eax, 15, 8); in cpuid_pass_basic()
4798 * Get CPUID data about TSC Invariance in Deep C-State. in cpuid_pass_basic()
4800 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4804 if (cpi->cpi_xmaxeax >= 0x80000007) { in cpuid_pass_basic()
4805 cp = &cpi->cpi_extd[7]; in cpuid_pass_basic()
4806 cp->cp_eax = 0x80000007; in cpuid_pass_basic()
4807 cp->cp_ecx = 0; in cpuid_pass_basic()
4826 if (cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_basic()
4827 cpi->cpi_vendor == X86_VENDOR_HYGON) { in cpuid_pass_basic()
4828 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8 && in cpuid_pass_basic()
4829 cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_ERR_PTR_ZERO) { in cpuid_pass_basic()
4831 cpi->cpi_fp_amd_save = 0; in cpuid_pass_basic()
4833 cpi->cpi_fp_amd_save = 1; in cpuid_pass_basic()
4841 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_basic()
4842 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_pass_basic()
4854 if (cpi->cpi_family == 0xf || cpi->cpi_family == 0x11) { in cpuid_pass_basic()
4856 } else if (cpi->cpi_family >= 0x10) { in cpuid_pass_basic()
4882 } else if (cpi->cpi_vendor == X86_VENDOR_Intel && in cpuid_pass_basic()
4894 * any additional processor-specific leaves that we may not have yet. in cpuid_pass_basic()
4896 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4899 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_21) { in cpuid_pass_basic()
4900 cp = &cpi->cpi_extd[0x21]; in cpuid_pass_basic()
4901 cp->cp_eax = CPUID_LEAF_EXT_21; in cpuid_pass_basic()
4902 cp->cp_ecx = 0; in cpuid_pass_basic()
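/*
 * Illustrative aside (not part of the kernel source): the long-mode check
 * above tests CPUID_AMD_EDX_LM, which is bit 29 of %edx in extended leaf
 * 0x80000001. A minimal userland sketch of the same test, using GCC/Clang's
 * <cpuid.h> instead of the kernel's __cpuid_insn()/struct cpuid_regs
 * machinery (__get_cpuid() itself verifies the leaf is within the reported
 * maximum):
 */
#include <cpuid.h>

static int
cpu_supports_long_mode(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (__get_cpuid(0x80000001u, &eax, &ebx, &ecx, &edx) == 0)
		return (0);		/* extended leaf not available */
	return ((edx & (1u << 29)) != 0);	/* LM ("long mode") bit */
}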
/* excerpt: cpuid_pass_extended() */
	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;

	if (cpi->cpi_maxeax < 1)

	if ((nmax = cpi->cpi_maxeax + 1) > NMAX_CPI_STD)

	for (n = 2, cp = &cpi->cpi_std[2]; n < nmax; n++, cp++) {
		cp->cp_eax = n;
		cp->cp_ecx = 0;
		platform_cpuid_mangle(cpi->cpi_vendor, n, cp);

			cpi->cpi_ncache = sizeof (*cp) *
			    BITX(cp->cp_eax, 7, 0);
			if (cpi->cpi_ncache == 0)
			cpi->cpi_ncache--;	/* skip count byte */

			if (cpi->cpi_ncache > (sizeof (*cp) - 1))
				cpi->cpi_ncache = sizeof (*cp) - 1;

			dp = cpi->cpi_cacheinfo;
			if (BITX(cp->cp_eax, 31, 31) == 0) {
				uint8_t *p = (void *)&cp->cp_eax;
			if (BITX(cp->cp_ebx, 31, 31) == 0) {
				uint8_t *p = (void *)&cp->cp_ebx;
			if (BITX(cp->cp_ecx, 31, 31) == 0) {
				uint8_t *p = (void *)&cp->cp_ecx;
			if (BITX(cp->cp_edx, 31, 31) == 0) {
				uint8_t *p = (void *)&cp->cp_edx;

			if (!(cpi->cpi_mwait.support & MWAIT_SUPPORT))
			    "size %ld", cpu->cpu_id, (long)mwait_size);

			cpi->cpi_mwait.mon_min = (size_t)MWAIT_SIZE_MIN(cpi);
			cpi->cpi_mwait.mon_max = mwait_size;
				cpi->cpi_mwait.support |= MWAIT_EXTENSIONS;
				cpi->cpi_mwait.support |=

	if (cpi->cpi_maxeax >= 0xD) {
		cp->cp_eax = 0xD;
		cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0;

		if ((cp->cp_eax & XFEATURE_LEGACY_FP) == 0 ||
		    (cp->cp_eax & XFEATURE_SSE) == 0) {

		cpi->cpi_xsave.xsav_hw_features_low = cp->cp_eax;
		cpi->cpi_xsave.xsav_hw_features_high = cp->cp_edx;
		cpi->cpi_xsave.xsav_max_size = cp->cp_ecx;

		if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_AVX) {
			cp->cp_eax = 0xD;
			cp->cp_ecx = 2;
			cp->cp_edx = cp->cp_ebx = 0;

			if (cp->cp_ebx != CPUID_LEAFD_2_YMM_OFFSET ||
			    cp->cp_eax != CPUID_LEAFD_2_YMM_SIZE) {

			cpi->cpi_xsave.ymm_size = cp->cp_eax;
			cpi->cpi_xsave.ymm_offset = cp->cp_ebx;

		if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_MPX) {
			cp->cp_eax = 0xD;
			cp->cp_ecx = 3;
			cp->cp_edx = cp->cp_ebx = 0;

			cpi->cpi_xsave.bndregs_size = cp->cp_eax;
			cpi->cpi_xsave.bndregs_offset = cp->cp_ebx;

			cp->cp_eax = 0xD;
			cp->cp_ecx = 4;
			cp->cp_edx = cp->cp_ebx = 0;

			cpi->cpi_xsave.bndcsr_size = cp->cp_eax;
			cpi->cpi_xsave.bndcsr_offset = cp->cp_ebx;

		if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_AVX512) {
			cp->cp_eax = 0xD;
			cp->cp_ecx = 5;
			cp->cp_edx = cp->cp_ebx = 0;

			cpi->cpi_xsave.opmask_size = cp->cp_eax;
			cpi->cpi_xsave.opmask_offset = cp->cp_ebx;

			cp->cp_eax = 0xD;
			cp->cp_ecx = 6;
			cp->cp_edx = cp->cp_ebx = 0;

			cpi->cpi_xsave.zmmlo_size = cp->cp_eax;
			cpi->cpi_xsave.zmmlo_offset = cp->cp_ebx;

			cp->cp_eax = 0xD;
			cp->cp_ecx = 7;
			cp->cp_edx = cp->cp_ebx = 0;

			cpi->cpi_xsave.zmmhi_size = cp->cp_eax;
			cpi->cpi_xsave.zmmhi_offset = cp->cp_ebx;

		if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_PKRU) {
			cp->cp_eax = 0xD;
			cp->cp_ecx = 9;
			cp->cp_edx = cp->cp_ebx = 0;

			cpi->cpi_xsave.pkru_size = cp->cp_eax;
			cpi->cpi_xsave.pkru_offset = cp->cp_ebx;

			xsave_state_size = cpi->cpi_xsave.xsav_max_size;

		    cpu->cpu_id, cpi->cpi_xsave.xsav_hw_features_low,
		    cpi->cpi_xsave.xsav_hw_features_high,
		    (int)cpi->cpi_xsave.xsav_max_size,
		    (int)cpi->cpi_xsave.ymm_size,
		    (int)cpi->cpi_xsave.ymm_offset);

			/*
			 * This must be a non-boot CPU. We cannot
			 * ...
			 */
			ASSERT(cpu->cpu_id != 0);
			    "continue.", cpu->cpu_id);

			/*
			 * ... non-boot CPUs. When we're here on a boot CPU
			 * we should disable the feature, on a non-boot
			 * CPU we must panic.
			 */
			if (cpu->cpu_id == 0) {

	if ((cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) == 0)

	if ((nmax = cpi->cpi_xmaxeax - CPUID_LEAF_EXT_0 + 1) > NMAX_CPI_EXTD)

	iptr = (void *)cpi->cpi_brandstr;
	for (n = 2, cp = &cpi->cpi_extd[2]; n < nmax; cp++, n++) {
		cp->cp_eax = CPUID_LEAF_EXT_0 + n;
		platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_0 + n,

			*iptr++ = cp->cp_eax;
			*iptr++ = cp->cp_ebx;
			*iptr++ = cp->cp_ecx;
			*iptr++ = cp->cp_edx;

			switch (cpi->cpi_vendor) {
				if (cpi->cpi_family < 6 ||
				    (cpi->cpi_family == 6 &&
				    cpi->cpi_model < 1))
					cp->cp_eax = 0;

			switch (cpi->cpi_vendor) {
				if (cpi->cpi_family < 6 ||
				    (cpi->cpi_family == 6 &&
				    cpi->cpi_model < 1))
					cp->cp_eax = cp->cp_ebx = 0;

				if (cpi->cpi_family == 6 &&
				    cpi->cpi_model == 3 &&
				    cpi->cpi_step == 0) {
					cp->cp_ecx &= 0xffff;
					cp->cp_ecx |= 0x400000;

				if (cpi->cpi_family != 6)

				if (cpi->cpi_model == 7 ||
				    cpi->cpi_model == 8)
					cp->cp_ecx =
					    BITX(cp->cp_ecx, 31, 24) << 16 |
					    BITX(cp->cp_ecx, 23, 16) << 12 |
					    BITX(cp->cp_ecx, 15, 8) << 8 |
					    BITX(cp->cp_ecx, 7, 0);

				if (cpi->cpi_model == 9 && cpi->cpi_step == 1)
					cp->cp_ecx |= 8 << 12;
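/*
 * Illustrative aside (not part of the kernel source): the leaf 0xD sub-leaf
 * protocol walked above, as a minimal userland sketch. Sub-leaf 0 reports
 * the supported XSAVE feature mask (%edx:%eax) and the maximum save-area
 * size (%ecx); each supported component i >= 2 then reports its size (%eax)
 * and offset (%ebx) in sub-leaf i. Uses GCC/Clang's __get_cpuid_count().
 */
#include <cpuid.h>
#include <stdio.h>

static void
print_xsave_layout(void)
{
	unsigned int eax, ebx, ecx, edx, comp;
	unsigned long long mask;

	if (__get_cpuid_count(0xD, 0, &eax, &ebx, &ecx, &edx) == 0)
		return;			/* leaf 0xD not available */
	mask = ((unsigned long long)edx << 32) | eax;
	printf("max save area: %u bytes\n", ecx);

	/* Components 0 (x87) and 1 (SSE) live in the fixed legacy area. */
	for (comp = 2; comp < 63; comp++) {
		if ((mask & (1ull << comp)) == 0)
			continue;
		if (__get_cpuid_count(0xD, comp, &eax, &ebx, &ecx, &edx) == 0)
			continue;
		printf("component %u: %u bytes at offset %u\n",
		    comp, eax, ebx);
	}
}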
/* excerpt: intel_cpubrand() */
	switch (cpi->cpi_family) {
		switch (cpi->cpi_model) {

		cp = &cpi->cpi_std[2];	/* cache info */

			tmp = (cp->cp_eax >> (8 * i)) & 0xff;
			tmp = (cp->cp_ebx >> (8 * i)) & 0xff;
			tmp = (cp->cp_ecx >> (8 * i)) & 0xff;
			tmp = (cp->cp_edx >> (8 * i)) & 0xff;

		return (cpi->cpi_model == 5 ?
		return (cpi->cpi_model == 5 ?

	if (cpi->cpi_brandid != 0) {

		sgn = (cpi->cpi_family << 8) |
		    (cpi->cpi_model << 4) | cpi->cpi_step;

			if (brand_tbl[i].bt_bid == cpi->cpi_brandid)

		if (sgn == 0x6b1 && cpi->cpi_brandid == 3)
		if (sgn < 0xf13 && cpi->cpi_brandid == 0xb)
		if (sgn < 0xf13 && cpi->cpi_brandid == 0xe)
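/*
 * Worked example of the signature packing above: family 6, model 0xb,
 * stepping 1 packs to sgn = (6 << 8) | (0xb << 4) | 1 = 0x6b1. That is why
 * the special case tests sgn == 0x6b1 together with brand id 3: per Intel's
 * brand-index errata, that combination historically denotes a Celeron
 * rather than the default brand-table entry.
 */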
/* excerpt: amd_cpubrand() */
	switch (cpi->cpi_family) {
		switch (cpi->cpi_model) {
			return ("AMD-K5(r)");
			return ("AMD-K6(r)");
			return ("AMD-K6(r)-2");
			return ("AMD-K6(r)-III");
		switch (cpi->cpi_model) {
			return ("AMD-K7(tm)");

			return ((cpi->cpi_extd[6].cp_ecx >> 16) >= 256 ?

	if (cpi->cpi_family == 0xf && cpi->cpi_model == 5 &&
	    cpi->cpi_brandid != 0) {
		switch (BITX(cpi->cpi_brandid, 7, 5)) {

/* excerpt: cyrix_cpubrand() */
	if (cpi->cpi_family == 4 && cpi->cpi_model == 9)
	else if (cpi->cpi_family == 5) {
		switch (cpi->cpi_model) {
	} else if (cpi->cpi_family == 6) {
		switch (cpi->cpi_model) {

/* excerpt: fabricate_brandstr() */
	switch (cpi->cpi_vendor) {
		if (cpi->cpi_family == 5 && cpi->cpi_model == 0)
		if (cpi->cpi_family == 5)
			switch (cpi->cpi_model) {
		if (cpi->cpi_family == 5 &&
		    (cpi->cpi_model == 0 || cpi->cpi_model == 2))
		if (cpi->cpi_family == 5 && cpi->cpi_model == 0)
		if (cpi->cpi_family == 5 && cpi->cpi_model == 4)

		(void) strcpy((char *)cpi->cpi_brandstr, brand);

	(void) snprintf(cpi->cpi_brandstr, sizeof (cpi->cpi_brandstr),
	    "%s %d.%d.%d", cpi->cpi_vendorstr, cpi->cpi_family,
	    cpi->cpi_model, cpi->cpi_step);
/* excerpt: cpuid_pass_dynamic() */
	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;

	cpi->cpi_ncpu_shr_last_cache = 1;
	cpi->cpi_last_lvl_cacheid = cpu->cpu_id;

	if ((cpi->cpi_maxeax >= 4 && cpi->cpi_vendor == X86_VENDOR_Intel) ||
	    ((cpi->cpi_vendor == X86_VENDOR_AMD ||
	    cpi->cpi_vendor == X86_VENDOR_HYGON) &&
	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1d &&

		if (cpi->cpi_vendor == X86_VENDOR_Intel) {

			cp->cp_eax = leaf;
			cp->cp_ecx = i;

				cpi->cpi_ncpu_shr_last_cache =

		cpi->cpi_cache_leaf_size = size = i;

			cpi->cpi_cache_leaves =
			if (cpi->cpi_vendor == X86_VENDOR_Intel) {
				cpi->cpi_cache_leaves[0] = &cpi->cpi_std[4];
				cpi->cpi_cache_leaves[0] = &cpi->cpi_extd[0x1d];

				cp = cpi->cpi_cache_leaves[i] =
				cp->cp_eax = leaf;
				cp->cp_ecx = i;

		for (i = 1; i < cpi->cpi_ncpu_shr_last_cache; i <<= 1)
		cpi->cpi_last_lvl_cacheid = cpi->cpi_apicid >> shft;

	if ((cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) == 0) {

	if (cpi->cpi_brandstr[0]) {
		size_t maxlen = sizeof (cpi->cpi_brandstr);

		dst = src = (char *)cpi->cpi_brandstr;
		src[maxlen - 1] = '\0';

		/*
		 * Now do an in-place copy.
		 * Map (R) to (r) and (TM) to (tm).
		 * The era of teletypes is long gone, and there's
		 * -really- no need to shout.
		 */

		while (--dst > cpi->cpi_brandstr)
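/*
 * Illustrative aside (not part of the kernel source): the brand-string
 * cleanup above does an in-place copy that squeezes runs of blanks. A
 * minimal standalone sketch of that technique:
 */
static void
squeeze_blanks(char *s)
{
	char *dst, *src;

	for (dst = src = s; *src != '\0'; src++) {
		/* map multiple blanks to a single blank */
		if (*src == ' ' && dst > s && dst[-1] == ' ')
			continue;
		*dst++ = *src;
	}
	*dst = '\0';
}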
/* excerpt: cpuid_pass_resolve() */
	cpi = cpu->cpu_m.mcpu_cpi;

	if (cpi->cpi_maxeax >= 1) {
		uint32_t *edx = &cpi->cpi_support[STD_EDX_FEATURES];
		uint32_t *ecx = &cpi->cpi_support[STD_ECX_FEATURES];

	if (cpi->cpi_xmaxeax < 0x80000001)

	switch (cpi->cpi_vendor) {
		/*
		 * ... here to make the initial crop of 64-bit OS's work.
		 */

		edx = &cpi->cpi_support[AMD_EDX_FEATURES];
		ecx = &cpi->cpi_support[AMD_ECX_FEATURES];

	switch (cpi->cpi_vendor) {

/* excerpt: cpuid_insn() */
	cpi = cpu->cpu_m.mcpu_cpi;

	if (cp->cp_eax <= cpi->cpi_maxeax && cp->cp_eax < NMAX_CPI_STD) {
		xcp = &cpi->cpi_std[cp->cp_eax];
	} else if (cp->cp_eax >= CPUID_LEAF_EXT_0 &&
	    cp->cp_eax <= cpi->cpi_xmaxeax &&
	    cp->cp_eax < CPUID_LEAF_EXT_0 + NMAX_CPI_EXTD) {
		xcp = &cpi->cpi_extd[cp->cp_eax - CPUID_LEAF_EXT_0];

	cp->cp_eax = xcp->cp_eax;
	cp->cp_ebx = xcp->cp_ebx;
	cp->cp_ecx = xcp->cp_ecx;
	cp->cp_edx = xcp->cp_edx;
	return (cp->cp_eax);
/* excerpt: cpuid_checkpass() */
	return (cpu != NULL && cpu->cpu_m.mcpu_cpi != NULL &&
	    cpu->cpu_m.mcpu_cpi->cpi_pass >= pass);

/* excerpt: cpuid_getbrandstr() */
	return (snprintf(s, n, "%s", cpu->cpu_m.mcpu_cpi->cpi_brandstr));

/* excerpt: cpuid_is_cmt() */
	return (cpu->cpu_m.mcpu_cpi->cpi_chipid >= 0);

/*
 * AMD and Intel both implement the 64-bit variant of the syscall
 * instruction (syscallq), so if there's -any- support for syscall,
 * cpuid currently says "yes, we support this".
 *
 * However, Intel decided to -not- implement the 32-bit variant of the
 * syscall instruction, so we provide a predicate to allow our caller
 * to test that subtlety here.
 *
 * XXPV	Currently, 32-bit syscall instructions don't work via the hypervisor,
 *	even in the case where the CPU supports it.
 */
/* excerpt: cpuid_syscall32_insn() */
	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;

	if ((cpi->cpi_vendor == X86_VENDOR_AMD ||
	    cpi->cpi_vendor == X86_VENDOR_HYGON) &&
	    cpi->cpi_xmaxeax >= 0x80000001 &&

/* excerpt: cpuid_getidstr() */
	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;

	if (cpuid_is_cmt(cpu))
		return (snprintf(s, n, fmt_ht, cpi->cpi_chipid,
		    cpi->cpi_vendorstr, cpi->cpi_std[1].cp_eax,
		    cpi->cpi_family, cpi->cpi_model,
		    cpi->cpi_step, cpu->cpu_type_info.pi_clock));
	return (snprintf(s, n, fmt,
	    cpi->cpi_vendorstr, cpi->cpi_std[1].cp_eax,
	    cpi->cpi_family, cpi->cpi_model,
	    cpi->cpi_step, cpu->cpu_type_info.pi_clock));

/* one-line accessors; the source function is noted beside each return */
	return ((const char *)cpu->cpu_m.mcpu_cpi->cpi_vendorstr); /* cpuid_getvendorstr() */
	return (cpu->cpu_m.mcpu_cpi->cpi_vendor);	/* cpuid_getvendor() */
	return (cpu->cpu_m.mcpu_cpi->cpi_family);	/* cpuid_getfamily() */
	return (cpu->cpu_m.mcpu_cpi->cpi_model);	/* cpuid_getmodel() */
	return (cpu->cpu_m.mcpu_cpi->cpi_ncpu_per_chip); /* cpuid_get_ncpu_per_chip() */
	return (cpu->cpu_m.mcpu_cpi->cpi_ncore_per_chip); /* cpuid_get_ncore_per_chip() */
	return (cpu->cpu_m.mcpu_cpi->cpi_ncpu_shr_last_cache); /* cpuid_get_ncpu_sharing_last_cache() */
	return (cpu->cpu_m.mcpu_cpi->cpi_last_lvl_cacheid); /* cpuid_get_last_lvl_cacheid() */
	return (cpu->cpu_m.mcpu_cpi->cpi_step);		/* cpuid_getstep() */
	return (cpu->cpu_m.mcpu_cpi->cpi_std[1].cp_eax); /* cpuid_getsig() */
	return (cpu->cpu_m.mcpu_cpi->cpi_chiprev);	/* cpuid_getchiprev() */
	return (cpu->cpu_m.mcpu_cpi->cpi_chiprevstr);	/* cpuid_getchiprevstr() */
	return (cpu->cpu_m.mcpu_cpi->cpi_socket);	/* cpuid_getsockettype() */

/* excerpt: cpuid_getsocketstr() */
	cpi = cpu->cpu_m.mcpu_cpi;

	socketstr = _cpuid_sktstr(cpi->cpi_vendor, cpi->cpi_family,
	    cpi->cpi_model, cpi->cpi_step);

	return (cpu->cpu_m.mcpu_cpi->cpi_uarchrev);	/* cpuid_getuarchrev() */

/* excerpt: cpuid_get_chipid() */
		return (cpu->cpu_m.mcpu_cpi->cpi_chipid);
	return (cpu->cpu_id);

	return (cpu->cpu_m.mcpu_cpi->cpi_coreid);	/* cpuid_get_coreid() */
	return (cpu->cpu_m.mcpu_cpi->cpi_pkgcoreid);	/* cpuid_get_pkgcoreid() */
	return (cpu->cpu_m.mcpu_cpi->cpi_clogid);	/* cpuid_get_clogid() */
	return (cpu->cpu_m.mcpu_cpi->cpi_last_lvl_cacheid); /* cpuid_get_cacheid() */
	return (cpu->cpu_m.mcpu_cpi->cpi_procnodeid);	/* cpuid_get_procnodeid() */
	return (cpu->cpu_m.mcpu_cpi->cpi_procnodes_per_pkg); /* cpuid_get_procnodes_per_pkg() */
	return (cpu->cpu_m.mcpu_cpi->cpi_compunitid);	/* cpuid_get_compunitid() */
	return (cpu->cpu_m.mcpu_cpi->cpi_cores_per_compunit); /* cpuid_get_cores_per_compunit() */

/* excerpt: cpuid_get_apicid() */
	if (cpu->cpu_m.mcpu_cpi->cpi_maxeax < 1) {
	return (cpu->cpu_m.mcpu_cpi->cpi_apicid);

/* excerpt: cpuid_get_addrsize() */
	cpi = cpu->cpu_m.mcpu_cpi;

		*pabits = cpi->cpi_pabits;
		*vabits = cpi->cpi_vabits;
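/*
 * Illustrative aside (not part of the kernel source): cpi_pabits/cpi_vabits
 * come from extended leaf 0x80000008, where %eax[7:0] is the physical and
 * %eax[15:8] the linear (virtual) address width in bits (the same BITX()
 * extractions done in cpuid_pass_basic() above). A userland sketch:
 */
#include <cpuid.h>

static int
get_addr_widths(unsigned int *pabits, unsigned int *vabits)
{
	unsigned int eax, ebx, ecx, edx;

	if (__get_cpuid(0x80000008u, &eax, &ebx, &ecx, &edx) == 0)
		return (-1);		/* leaf not available */
	*pabits = eax & 0xff;		/* physical address bits */
	*vabits = (eax >> 8) & 0xff;	/* virtual address bits */
	return (0);
}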
/*
 * Use our supported-features indicators (xsave_bv_all) to return the XSAVE
 * size of our supported-features that need saving. Some CPUs' maximum save
 * area size may include unsupported-by-us features (e.g. Intel AMX) which we
 * MAY be able to safely ignore.
 */

/* excerpt: cpuid_get_dtlb_nent() */
	cpi = cpu->cpu_m.mcpu_cpi;

	/* Check the L2 TLB info first. */
	if (cpi->cpi_xmaxeax >= 0x80000006) {
		struct cpuid_regs *cp = &cpi->cpi_extd[6];

			/* 4K pages (%ebx) */
			if ((cp->cp_ebx & 0xffff0000) == 0)
				dtlb_nent = cp->cp_ebx & 0x0000ffff;
			else
				dtlb_nent = BITX(cp->cp_ebx, 27, 16);

			/* 2M/4M pages (%eax) */
			if ((cp->cp_eax & 0xffff0000) == 0)
				dtlb_nent = cp->cp_eax & 0x0000ffff;
			else
				dtlb_nent = BITX(cp->cp_eax, 27, 16);

	/* No L2 TLB information; fall back to the L1 TLB leaf. */
	if (cpi->cpi_xmaxeax >= 0x80000005) {
		struct cpuid_regs *cp = &cpi->cpi_extd[5];

			dtlb_nent = BITX(cp->cp_ebx, 23, 16);
			dtlb_nent = BITX(cp->cp_eax, 23, 16);
			panic("unknown L1 d-TLB pagesize");
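/*
 * Illustrative aside (not part of the kernel source): the 4K L2 D-TLB decode
 * above, as a userland sketch. In extended leaf 0x80000006, %ebx describes
 * the L2 TLB for 4K pages, with the entry count in bits 27:16; an all-zero
 * upper half signals the older unified encoding with the count in the low
 * 16 bits, which the kernel code also checks for.
 */
#include <cpuid.h>

static unsigned int
l2_dtlb_4k_entries(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (__get_cpuid(0x80000006u, &eax, &ebx, &ecx, &edx) == 0)
		return (0);
	if ((ebx & 0xffff0000u) == 0)		/* legacy unified encoding */
		return (ebx & 0xffffu);
	return ((ebx >> 16) & 0xfffu);		/* BITX(ebx, 27, 16) */
}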
/* excerpt: cpuid_opteron_erratum() */
	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;

	/*
	 * Bail out if this CPU isn't an AMD CPU, or if it's
	 * a legacy (32-bit) AMD CPU.
	 */
	if (cpi->cpi_vendor != X86_VENDOR_AMD ||
	    cpi->cpi_family == 4 || cpi->cpi_family == 5 ||
	    cpi->cpi_family == 6) {

	eax = cpi->cpi_std[1].cp_eax;

		return (cpi->cpi_family < 0x10);
		return (cpi->cpi_family <= 0x11);
		return (cpi->cpi_family <= 0x11);
		return (cpi->cpi_family < 0x10);
		return (cpi->cpi_family <= 0x11);
		return (cpi->cpi_family < 0x10);
		return (cpi->cpi_family < 0x10);
		return (cpi->cpi_family < 0x10 || cpi->cpi_family == 0x11);
		return (cpi->cpi_family < 0x10);

		/*
		 * Our current fix for this is to disable the C1-Clock ramping.
		 */
		if (cpi->cpi_family >= 0x12 || get_hwenv() != HW_NATIVE) {

		if (cpi->cpi_family >= 0x10) {

		/*
		 * check for processors (pre-Shanghai) that do not provide
		 * ...
		 */
		return (cpi->cpi_family == 0x10 && cpi->cpi_model < 4);

		return (cpi->cpi_family == 0x10 || cpi->cpi_family == 0x12);

	return (-1);

/*
 * Return 1 if erratum is present, 0 if not present and -1 if indeterminate.
 */
/* excerpt: osvw_opteron_erratum() */
	static int osvwfeature = -1;

	cpi = cpu->cpu_m.mcpu_cpi;

	if (osvwfeature == -1) {
		osvwfeature = cpi->cpi_extd[1].cp_ecx & CPUID_AMD_ECX_OSVW;
	} else {
		/* assert that osvw feature setting is consistent on all cpus */
		ASSERT(osvwfeature ==
		    (cpi->cpi_extd[1].cp_ecx & CPUID_AMD_ECX_OSVW));
	}
		return (-1);

		return (-1);

	/*
	 * The state of the erratum, as reported by OSVW:
	 * 0 - fixed by HW
	 * 1 - BIOS has applied the workaround when BIOS
	 * ...
	 */
		return (-1);
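/*
 * Illustrative aside (not part of the kernel source): how an OSVW lookup is
 * structured, assuming a privileged rdmsr() helper. OSVW_ID_Length (MSR
 * 0xC0010140) bounds the valid erratum ids; erratum id N is then bit
 * (N % 64) of OSVW status MSR 0xC0010141 + (N / 64). A set bit means the
 * erratum applies (and may already have been worked around by BIOS).
 */
static int
osvw_erratum_present(unsigned int osvwid)
{
	unsigned long long len, status;

	len = rdmsr(0xC0010140);	/* OSVW_ID_Length */
	if (osvwid >= len)
		return (-1);		/* id not covered: indeterminate */
	status = rdmsr(0xC0010141 + (osvwid >> 6));
	return ((int)((status >> (osvwid & 0x3f)) & 1));
}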
static const char line_str[] = "line-size";

/* excerpt: add_cache_prop() */
	if (snprintf(buf, sizeof (buf), "%s-%s", label, type) < sizeof (buf))

/*
 * Intel-style cache/tlb description
 */
static const char l1_icache_str[] = "l1-icache";
static const char l1_dcache_str[] = "l1-dcache";
static const char l2_cache_str[] = "l2-cache";
static const char l3_cache_str[] = "l3-cache";
static const char itlb4k_str[] = "itlb-4K";
static const char dtlb4k_str[] = "dtlb-4K";
static const char itlb2M_str[] = "itlb-2M";
static const char itlb4M_str[] = "itlb-4M";
static const char dtlb4M_str[] = "dtlb-4M";
static const char dtlb24_str[] = "dtlb0-2M-4M";
static const char itlb424_str[] = "itlb-4K-2M-4M";
static const char itlb24_str[] = "itlb-2M-4M";
static const char dtlb44_str[] = "dtlb-4K-4M";
static const char sl1_dcache_str[] = "sectored-l1-dcache";
static const char sl2_cache_str[] = "sectored-l2-cache";
static const char itrace_str[] = "itrace-cache";
static const char sl3_cache_str[] = "sectored-l3-cache";
static const char sh_l2_tlb4k_str[] = "shared-l2-tlb-4k";

/*
 * Codes ignored - Reason
 * ----------------------
 * 40H     - intel_cpuid_4_cache_info() disambiguates l2/l3 cache
 * f0H/f1H - Currently we do not interpret prefetch size by design
 */

/* cache descriptor table entries (excerpt) */
	{ 0x70, 4, 0, 32, "tlb-4K" },
	{ 0x80, 4, 16, 16*1024, "l1-cache" },

/* excerpt: find_cacheent() */
	for (; ct->ct_code != 0; ct++)
		if (ct->ct_code <= code)
			break;
	if (ct->ct_code == code)
/*
 * Populate cachetab entry with L2 or L3 cache-information using
 * cpuid function 4.
 */
/* excerpt: intel_cpuid_4_cache_info() */
	for (i = 0; i < cpi->cpi_cache_leaf_size; i++) {
		level = CPI_CACHE_LVL(cpi->cpi_cache_leaves[i]);

			ct->ct_assoc =
			    CPI_CACHE_WAYS(cpi->cpi_cache_leaves[i]) + 1;
			ct->ct_line_size =
			    CPI_CACHE_COH_LN_SZ(cpi->cpi_cache_leaves[i]) + 1;
			ct->ct_size = ct->ct_assoc *
			    (CPI_CACHE_PARTS(cpi->cpi_cache_leaves[i]) + 1) *
			    ct->ct_line_size *
			    (cpi->cpi_cache_leaves[i]->cp_ecx + 1);

			if (level == 2) {
				ct->ct_label = l2_cache_str;
			} else if (level == 3) {
				ct->ct_label = l3_cache_str;
			}
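/*
 * Worked example of the size computation above: a leaf-4-style cache leaf
 * reporting 16 ways (CPI_CACHE_WAYS + 1), one partition, a 64-byte coherency
 * line, and 4096 sets (%ecx + 1) yields
 *
 *	16 * 1 * 64 * 4096 = 4194304 bytes,
 *
 * i.e. a 4 MB cache.
 */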
/*
 * ... The walk is terminated if the walker returns non-zero.
 */
/* excerpt: intel_walk_cacheinfo() */
	if ((dp = cpi->cpi_cacheinfo) == NULL)
	for (i = 0; i < cpi->cpi_ncache; i++, dp++) {
		/*
		 * Descriptor 0x49 is ambiguous; when leaf 4 is available it
		 * is used to disambiguate l2 from l3 cache.
		 */
		if (*dp == 0x49 && cpi->cpi_maxeax >= 0x4 &&

/* excerpt: cyrix_walk_cacheinfo() */
	if ((dp = cpi->cpi_cacheinfo) == NULL)
	for (i = 0; i < cpi->cpi_ncache; i++, dp++) {
		/*
		 * Search Cyrix-specific descriptor table first ..
		 */

/*
 * A cacheinfo walker that adds associativity, line-size, and size properties
 * ...
 */
/* excerpt: add_cacheent_props() */
	add_cache_prop(devi, ct->ct_label, assoc_str, ct->ct_assoc);
	if (ct->ct_line_size != 0)
		add_cache_prop(devi, ct->ct_label, line_str,
		    ct->ct_line_size);
	add_cache_prop(devi, ct->ct_label, size_str, ct->ct_size);

static const char fully_assoc[] = "fully-associative?";

/* excerpt: add_amd_cache() */
	/*
	 * ... associated with a tag. For example, the AMD K6-III has a sector
	 * ...
	 */
	add_cache_prop(devi, label, "lines-per-tag", lines_per_tag);

/* excerpt: add_amd_l2_cache() */
	add_cache_prop(devi, label, "lines-per-tag", lines_per_tag);
/* excerpt: amd_cache_info() */
	if (cpi->cpi_xmaxeax < 0x80000005)
		return;
	cp = &cpi->cpi_extd[5];

	/* 2M/4M page TLBs (leaf 0x80000005 %eax) */
	add_amd_tlb(devi, "dtlb-2M",
	    BITX(cp->cp_eax, 31, 24), BITX(cp->cp_eax, 23, 16));
	add_amd_tlb(devi, "itlb-2M",
	    BITX(cp->cp_eax, 15, 8), BITX(cp->cp_eax, 7, 0));

	switch (cpi->cpi_vendor) {
		if (cpi->cpi_family >= 5) {

		if ((nentries = BITX(cp->cp_ebx, 23, 16)) == 255)

		add_amd_tlb(devi, "tlb-4K", BITX(cp->cp_ebx, 31, 24),

		    BITX(cp->cp_ebx, 31, 24), BITX(cp->cp_ebx, 23, 16));
		    BITX(cp->cp_ebx, 15, 8), BITX(cp->cp_ebx, 7, 0));

	    BITX(cp->cp_ecx, 31, 24), BITX(cp->cp_ecx, 23, 16),
	    BITX(cp->cp_ecx, 15, 8), BITX(cp->cp_ecx, 7, 0));

	    BITX(cp->cp_edx, 31, 24), BITX(cp->cp_edx, 23, 16),
	    BITX(cp->cp_edx, 15, 8), BITX(cp->cp_edx, 7, 0));

	if (cpi->cpi_xmaxeax < 0x80000006)
		return;
	cp = &cpi->cpi_extd[6];

	/* 2M/4M L2 TLB (leaf 0x80000006 %eax) */
	if (BITX(cp->cp_eax, 31, 16) == 0)
		add_amd_l2_tlb(devi, "l2-tlb-2M",
		    BITX(cp->cp_eax, 15, 12), BITX(cp->cp_eax, 11, 0));

		add_amd_l2_tlb(devi, "l2-dtlb-2M",
		    BITX(cp->cp_eax, 31, 28), BITX(cp->cp_eax, 27, 16));
		add_amd_l2_tlb(devi, "l2-itlb-2M",
		    BITX(cp->cp_eax, 15, 12), BITX(cp->cp_eax, 11, 0));

	/* 4K L2 TLB (leaf 0x80000006 %ebx) */
	if (BITX(cp->cp_ebx, 31, 16) == 0) {
		add_amd_l2_tlb(devi, "l2-tlb-4K",
		    BITX(cp->cp_ebx, 15, 12), BITX(cp->cp_ebx, 11, 0));
	} else {
		add_amd_l2_tlb(devi, "l2-dtlb-4K",
		    BITX(cp->cp_ebx, 31, 28), BITX(cp->cp_ebx, 27, 16));
		add_amd_l2_tlb(devi, "l2-itlb-4K",
		    BITX(cp->cp_ebx, 15, 12), BITX(cp->cp_ebx, 11, 0));
	}

	    BITX(cp->cp_ecx, 31, 16), BITX(cp->cp_ecx, 15, 12),
	    BITX(cp->cp_ecx, 11, 8), BITX(cp->cp_ecx, 7, 0));

/*
 * There are two basic ways that the x86 world describes its cache
 * and tlb architecture - Intel's way and AMD's way.
 */
/* excerpt: x86_which_cacheinfo() */
	switch (cpi->cpi_vendor) {
		if (cpi->cpi_maxeax >= 2)

		if (cpi->cpi_family > 5 ||
		    (cpi->cpi_family == 5 && cpi->cpi_model >= 1))

		if (cpi->cpi_family >= 5)

		/*
		 * If they have extended CPU data for 0x80000005
		 * then we assume they have AMD-format cache
		 * and tlb information.
		 *
		 * If not, and the vendor happens to be Cyrix,
		 * then try our-Cyrix specific handler.
		 *
		 * If we're not Cyrix, then assume we're using Intel's
		 * table-driven format instead.
		 */
		if (cpi->cpi_xmaxeax >= 0x80000005)
		else if (cpi->cpi_vendor == X86_VENDOR_Cyrix)
		else if (cpi->cpi_maxeax >= 2)

	return (-1);
/* excerpt: cpuid_set_cpu_properties() */
	/* cpu-mhz, and clock-frequency */
	    "cpu-mhz", cpu_freq);
	    "clock-frequency", (int)mul);

	/* vendor-id */
	    "vendor-id", cpi->cpi_vendorstr);

	if (cpi->cpi_maxeax == 0) {

	    "cpu-model", CPI_MODEL(cpi));
	    "stepping-id", CPI_STEP(cpi));

	switch (cpi->cpi_vendor) {

	/* ext-family */
	switch (cpi->cpi_vendor) {
		create = cpi->cpi_family >= 0xf;
	    "ext-family", CPI_FAMILY_XTD(cpi));

	/* ext-model */
	switch (cpi->cpi_vendor) {
	    "ext-model", CPI_MODEL_XTD(cpi));

	/* generation */
	switch (cpi->cpi_vendor) {
		create = cpi->cpi_xmaxeax >= 0x80000001;
	    "generation", BITX((cpi)->cpi_extd[1].cp_eax, 11, 8));

	/* brand-id */
	switch (cpi->cpi_vendor) {
		create = cpi->cpi_family > 6 ||
		    (cpi->cpi_family == 6 && cpi->cpi_model >= 8);
		create = cpi->cpi_family >= 0xf;
	if (create && cpi->cpi_brandid != 0) {
		    "brand-id", cpi->cpi_brandid);

	/* chunks, and apic-id */
	switch (cpi->cpi_vendor) {
		create = IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf;
		create = cpi->cpi_family >= 0xf;
		    "apic-id", cpi->cpi_apicid);
		if (cpi->cpi_chipid >= 0) {
			    "chip#", cpi->cpi_chipid);
			    "clog#", cpi->cpi_clogid);

	/* cpuid-features */
	    "cpuid-features", CPI_FEATURES_EDX(cpi));

	/* cpuid-features-ecx */
	switch (cpi->cpi_vendor) {
		create = IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf;
		create = cpi->cpi_family >= 0xf;
	    "cpuid-features-ecx", CPI_FEATURES_ECX(cpi));

	/* ext-cpuid-features */
	switch (cpi->cpi_vendor) {
		create = cpi->cpi_xmaxeax >= 0x80000001;
	    "ext-cpuid-features", CPI_FEATURES_XTD_EDX(cpi));
	    "ext-cpuid-features-ecx", CPI_FEATURES_XTD_ECX(cpi));

	/*
	 * ... same -something- about the processor, however lame.
	 */
	    "brand-string", cpi->cpi_brandstr);
/*
 * A cacheinfo walker that fetches the size, line-size and associativity
 * of the L2 cache.
 */
/* excerpt: intel_l2cinfo() */
	if (ct->ct_label != l2_cache_str &&
	    ct->ct_label != sl2_cache_str)
		return (0);	/* not an L2 -- keep walking */

	if ((ip = l2i->l2i_csz) != NULL)
		*ip = ct->ct_size;
	if ((ip = l2i->l2i_lsz) != NULL)
		*ip = ct->ct_line_size;
	if ((ip = l2i->l2i_assoc) != NULL)
		*ip = ct->ct_assoc;
	l2i->l2i_ret = ct->ct_size;
	return (1);	/* was an L2 -- terminate walk */

/*
 * AMD associativity-field decode table (declarator elided in this excerpt):
 * -1 is undefined. 0 is fully associative.
 */
	{-1, 1, 2, -1, 4, -1, 8, -1, 16, -1, 32, 48, 64, 96, 128, 0};

/* excerpt: amd_l2cacheinfo() */
	if (cpi->cpi_xmaxeax < 0x80000006)
		return;
	cp = &cpi->cpi_extd[6];

	if ((i = BITX(cp->cp_ecx, 15, 12)) != 0 &&
	    (size = BITX(cp->cp_ecx, 31, 16)) != 0) {

		ASSERT(assoc != -1);

		if ((ip = l2i->l2i_csz) != NULL)
		if ((ip = l2i->l2i_lsz) != NULL)
			*ip = BITX(cp->cp_ecx, 7, 0);
		if ((ip = l2i->l2i_assoc) != NULL)
		l2i->l2i_ret = cachesz;
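/*
 * Illustrative aside (not part of the kernel source): the L2 description
 * decoded above. In extended leaf 0x80000006, %ecx[31:16] is the L2 size in
 * KB, %ecx[15:12] indexes the associativity-decode table, and %ecx[7:0] is
 * the line size in bytes. A userland sketch:
 */
#include <cpuid.h>

static unsigned int
l2_cache_kbytes(unsigned int *line_size, unsigned int *assoc_field)
{
	unsigned int eax, ebx, ecx, edx;

	if (__get_cpuid(0x80000006u, &eax, &ebx, &ecx, &edx) == 0)
		return (0);
	*line_size = ecx & 0xff;		/* BITX(ecx, 7, 0) */
	*assoc_field = (ecx >> 12) & 0xf;	/* BITX(ecx, 15, 12) */
	return ((ecx >> 16) & 0xffff);		/* BITX(ecx, 31, 16), in KB */
}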
/* excerpt: getl2cacheinfo() */
	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;

	l2i->l2i_csz = csz;
	l2i->l2i_lsz = lsz;
	l2i->l2i_assoc = assoc;
	l2i->l2i_ret = -1;

	return (l2i->l2i_ret);

/* excerpt: cpuid_mwait_alloc() */
	mwait_size = CPU->cpu_m.mcpu_cpi->cpi_mwait.mon_max;

		cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = ret;
		cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = mwait_size;

		cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = ret;
		cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = mwait_size * 2;

/* excerpt: cpuid_mwait_free() */
	if (cpu->cpu_m.mcpu_cpi == NULL) {

	if (cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual != NULL &&
	    cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual > 0) {
		kmem_free(cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual,
		    cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual);
	}

	cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = NULL;
	cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = 0;

/* excerpt: patch_tsc_read() */
		cnt = &_no_rdtsc_end - &_no_rdtsc_start;
		cnt = &_tsc_lfence_end - &_tsc_lfence_start;
		cnt = &_tscp_end - &_tscp_start;

/* excerpt: cpuid_deep_cstates_supported() */
	cpi = CPU->cpu_m.mcpu_cpi;

	switch (cpi->cpi_vendor) {
		if (cpi->cpi_xmaxeax < 0x80000007)

		/*
		 * Does TSC run at a constant rate in all C-states?
		 */

/* excerpt: enable_pcid() */
	if (x86_use_pcid == -1)
	if (x86_use_invpcid == -1) {
/*
 * ... Called when both of the following hold:
 * - cpuid_pass_basic() is done, so that X86 features are known.
 * - fpu_probe() is done, so that fp_save_mech is chosen.
 */
/* excerpt: xsave_setup_msr() */
	cpu->cpu_m.mcpu_cpi->cpi_std[1].cp_ecx |= CPUID_INTC_ECX_OSXSAVE;

/*
 * ... APIC timer will continue running in all C-states,
 * including the deepest C-states.
 */
/* excerpt: cpuid_arat_supported() */
	cpi = CPU->cpu_m.mcpu_cpi;

	switch (cpi->cpi_vendor) {
		/*
		 * Always-running Local APIC Timer is
		 * indicated by CPUID.6.EAX[2].
		 */
		if (cpi->cpi_maxeax >= 6) {

/* excerpt: cpuid_iepb_supported() */
	struct cpuid_info *cpi = cp->cpu_m.mcpu_cpi;

	if ((cpi->cpi_vendor != X86_VENDOR_Intel) || (cpi->cpi_maxeax < 6))

/* excerpt: cpuid_deadline_tsc_supported() */
	struct cpuid_info *cpi = CPU->cpu_m.mcpu_cpi;

	switch (cpi->cpi_vendor) {
		if (cpi->cpi_maxeax >= 1) {

/* excerpt: patch_memops() */
	cnt = &bcopy_patch_end - &bcopy_patch_start;

/* excerpt: cpuid_get_ext_topo() */
	cpi = cpu->cpu_m.mcpu_cpi;

	if (cpi->cpi_ncore_bits > *core_nbits) {
		*core_nbits = cpi->cpi_ncore_bits;
	}
	if (cpi->cpi_nthread_bits > *strand_nbits) {
		*strand_nbits = cpi->cpi_nthread_bits;
	}

/* excerpt: cpuid_pass_ucode() */
	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;

	switch (cpi->cpi_vendor) {
		if (cpi->cpi_maxeax < 7) {
		cpi->cpi_maxeax = __cpuid_insn(&cp);
		if (cpi->cpi_maxeax < 7)

		cpi->cpi_std[7] = cp;

		if (cpi->cpi_family < 5 ||
		    (cpi->cpi_family == 5 && cpi->cpi_model < 1))

		if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8) {
		cpi->cpi_xmaxeax = __cpuid_insn(&cp);
		if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8)

		platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8, &cp);
		cpi->cpi_extd[8] = cp;

		if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_21)

		platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_21, &cp);
		cpi->cpi_extd[0x21] = cp;
/* excerpt: cpuid_post_ucodeadm_xc() */
	fset = (uchar_t *)(arg0 + sizeof (x86_featureset) * CPU->cpu_id);
	if (first_pass && CPU->cpu_id != 0)
	if (!first_pass && CPU->cpu_id == 0)

/* excerpt: cpuid_post_ucodeadm() */
	rev = cpu->cpu_m.mcpu_ucode_info->cui_rev;
		if (cpu->cpu_m.mcpu_ucode_info->cui_rev != rev) {
			    i, cpu->cpu_m.mcpu_ucode_info->cui_rev, rev);

		cmn_err(CE_CONT, "?post-ucode x86_feature: %s\n",

/* excerpt: cpuid_execpass() */
	if (cp->cpu_id == 0 && cp->cpu_m.mcpu_cpi == NULL)
		cp->cpu_m.mcpu_cpi = &cpuid_info0;

	ASSERT(cpuid_checkpass(cp, pass - 1));

	cp->cpu_m.mcpu_cpi->cpi_pass = pass;

	    pass, cp->cpu_id);

/*
 * ... may bitwise-OR together chiprevs of the same vendor and family to form
 * the revision to check against.
 */
/* excerpt: cpuid_cache_topo_sup() */
	switch (cpi->cpi_vendor) {
		if (cpi->cpi_maxeax >= 4) {
		if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1d &&

/* excerpt: cpuid_getncaches() */
	cpi = cpu->cpu_m.mcpu_cpi;

	*ncache = cpi->cpi_cache_leaf_size;

/* excerpt: cpuid_getcache() */
	cpi = cpu->cpu_m.mcpu_cpi;

	if (cno >= cpi->cpi_cache_leaf_size) {

	cp = cpi->cpi_cache_leaves[cno];

		cache->xc_type = X86_CACHE_TYPE_DATA;
		cache->xc_type = X86_CACHE_TYPE_INST;
		cache->xc_type = X86_CACHE_TYPE_UNIFIED;

	cache->xc_level = CPI_CACHE_LVL(cp);
		cache->xc_flags |= X86_CACHE_F_FULL_ASSOC;
	cache->xc_nparts = CPI_CACHE_PARTS(cp) + 1;

	/* AMD reports fully-associative caches specially; use a single set. */
	if (cpi->cpi_vendor == X86_VENDOR_AMD &&
		cache->xc_nsets = 1;
		cache->xc_nsets = CPI_CACHE_SETS(cp) + 1;
	cache->xc_nways = CPI_CACHE_WAYS(cp) + 1;
	cache->xc_line_size = CPI_CACHE_COH_LN_SZ(cp) + 1;
	cache->xc_size = cache->xc_nparts * cache->xc_nsets * cache->xc_nways *
	    cache->xc_line_size;

	/*
	 * ... are being shared. Normally this would be the value - 1, but the
	 * CPUID ...
	 */
	cache->xc_apic_shift = highbit(CPI_NTHR_SHR_CACHE(cp));

	cache->xc_id = (uint64_t)cache->xc_level << 40;
	cache->xc_id |= (uint64_t)cache->xc_type << 32;
	cache->xc_id |= (uint64_t)cpi->cpi_apicid >> cache->xc_apic_shift;
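/*
 * Worked example of the id composition above, taking the type encoding to
 * be 3 purely for illustration: a level-2 cache shared by the APIC-id group
 * containing id 0x20 with an xc_apic_shift of 3 yields
 *
 *	xc_id = (2 << 40) | (3 << 32) | (0x20 >> 3) = 0x20300000004,
 *
 * so caches of the same level/type in different sharing groups get distinct,
 * stable ids derived only from CPUID-visible data.
 */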