Lines Matching +full:multi +full:- +full:subsystems
54 * multi-processing (SMT), etc.
56 * ------------------------
58 * ------------------------
80 * AMD adds non-Intel compatible
104 * various extensions. For example, AMD-
122 * Some leaves are broken down into sub-leaves. This means that the value
124 * example, Intel uses the value in %ecx on leaf 7 to indicate a sub-leaf to get
130 * program is in 64-bit mode. When executing in 64-bit mode, the upper
134 * ----------------------
136 * ----------------------
168 * defines the processor family's non-architectural features. In general, we'll
198 * the processor, are dealing with processor-specific features such as CPU
221 * which itself may have multiple revisions. In general, non-architectural
229 * present used and available only for AMD and AMD-like processors.
231 * ------------
233 * ------------
238 * certain Cyrix processors) but those were all 32-bit processors which are no
241 * well-defined execution ordering enforced by the definition of cpuid_pass_t in
252 * 64-bit processors do. This would also be the place to
259 * so that machine-dependent code can control the features
262 * machine-dependent code that needs basic identity will
266 * this pass, machine-dependent boot code is responsible for
290 * 3. Determining the set of CPU security-specific features
313 * by the run-time link-editor (RTLD), though userland
327 * ---------
329 * ---------
334 * as possible during the boot process -- right after the IDENT pass.
354 * ------------------
356 * ------------------
410 * processor with more than one core is often referred to as 'multi-core'.
412 * that has 'multi-core' processors.
432 * simultaneous multi-threading (SMT). When Intel introduced this in their
433 * processors they called it hyper-threading (HT). When multiple threads
452 * definition of multi-chip module (because it is in the name) and use the
453 * term 'die' when we want the more general, potential sub-component
463 * MULTI-CHIP MODULE
465 * A multi-chip module (MCM) refers to putting multiple distinct chips that
466 * are connected together in the same package. When a multi-chip design is
514 * NUMA or non-uniform memory access, describes a way that systems are
517 * multi-chip module, some of that memory is physically closer and some of
521-527 * [diagram: Socket 0 and Socket 1 side by side, with DIMMs A-C attached to Socket 0 and DIMMs D-F attached to Socket 1]
529 * In this example, Socket 0 is closer to DIMMs A-C while Socket 1 is
530 * closer to DIMMs D-F. This means that it is cheaper for socket 0 to
531 * access DIMMs A-C and more expensive to access D-F as it has to go
533 * D-F are cheaper than A-C. While the socket form is the most common, when
534 * using multi-chip modules, this can also sometimes occur. For another
539 * --------------
581 * increased the number of addressable logical CPUs from 8-bits to 32-bits), an
585 * ------------
605 * node this indicates a multi-chip module. Usually each node has its own
607 * different from the corresponding Intel Nehalem-Skylake+ processors. As a
627 * The Zen family (0x17) uses a multi-chip module (MCM) design, the module
647-666 * [diagram: Core blocks, each containing Thread and L2 boxes, alongside a shared cache drawn with a vertical label]
757-779 * [diagram: multiple 'Zen 2' dies in one package, linked through a central block]
808-847 * [diagram: Core blocks, each with Thread and L2 boxes, arranged above and below a shared full-width block]
871 * %eax The APIC ID. The entire register is defined to have a 32-bit
875 * %ebx On Bulldozer-era systems this contains information about the
877 * resources). It also contains a per-package compute unit ID that
880 * On Zen-era systems this instead contains the number of threads
893 * ----------------
911 * This is the value of the CPU's APIC id. This should be the full 32-bit
912 * ID if the CPU is using the x2apic. Otherwise, it should be the 8-bit
969 * determine if simultaneous multi-threading (SMT) is enabled. When
980 * When processors are actually a multi-chip module, this represents the
1004 * processors without AMD Bulldozer-style compute units this should be set
1040 * -----------
1042 * -----------
1055 * checks scattered about fields being non-zero before we assume we can use
1063 * use an actual vendor, then that usually turns into multiple one-core CPUs
1067 * --------------------
1069 * --------------------
1074 * with the is_x86_feature() function. This is queried by x86-specific functions
1079 * mitigations, to various x86-specific drivers. General purpose or
1088 * instruction sets are. Programs use this information to make run-time
1089 * decisions about what features they should use. As an example, the run-time
1090 * link-editor (rtld) can relocate different functions depending on the hardware
1094 * form cpuid_get*. This is used by a number of different subsystems in the
1096 * topology information, etc. Some of these subsystems include processor groups
1102 * -----------------------------------------------
1104 * -----------------------------------------------
1113 * - Spectre v1
1114 * - swapgs (Spectre v1 variant)
1115 * - Spectre v2
1116 * - Branch History Injection (BHI).
1117 * - Meltdown (Spectre v3)
1118 * - Rogue Register Read (Spectre v3a)
1119 * - Speculative Store Bypass (Spectre v4)
1120 * - ret2spec, SpectreRSB
1121 * - L1 Terminal Fault (L1TF)
1122 * - Microarchitectural Data Sampling (MDS)
1123 * - Register File Data Sampling (RFDS)
1127 * from non-kernel executing environments such as user processes and hardware
1164 * 2. A no-op version
1167 * AMD-specific optimized retpoline variant that was based around using a
1197 * - Switching between two different user processes
1198 * - Going between user land and the kernel
1199 * - Returning to the kernel from a hardware virtual machine
1209 * such as a non-root VMX context attacking the kernel we first look to
1219 * indicating no need for post-barrier RSB protections, either in one place
1235 * in the 'host' context when SEV-SNP is enabled.
1265 * non-root/guest to root mode). The attacker can then exploit certain
1266 * compiler-generated code-sequences ("gadgets") to disclose information from
1267 * other contexts or domains. Recent (late-2023/early-2024) research in
1285 * SMEP and eIBRS are a continuing defense-in-depth measure protecting the
1296 * can generally affect any branch-dependent code. The swapgs issue is one
1307 * If an attacker can cause a mis-speculation of the branch here, we could skip
1308 * the needed swapgs, and use the /user/ %gsbase as the base of the %gs-based
1314 * space, we could mis-speculate and swapgs the user %gsbase back in prior to
1320 * Note that we don't enable user-space "wrgsbase" via CR4_FSGSBASE, making it
1321 * harder for user-space to actually set a useful %gsbase value: although it's
1332 * this we use per-CPU page tables and switch between the user and kernel
1337 * - uts/i86pc/ml/kpti_trampolines.s
1338 * - uts/i86pc/vm/hat_i86.c
1362 * For the non-hardware virtualized case, this is relatively easy to deal with.
1425 * a no-op.
1429 * thread executing on a core. In the case where you have hyper-threading
1434 * would have to issue an inter-processor interrupt (IPI) to the other thread.
1435 * Rather than implement this, we recommend that one disables hyper-threading
1436 * through the use of psradm -aS.
1440 * TSX Asynchronous Abort (TAA) is another side-channel vulnerability that
1464 * Another MDS-variant in a few select Intel Atom CPUs is Register File Data
1504 * - Spectre v1: Not currently mitigated
1505 * - swapgs: lfences after swapgs paths
1506 * - Spectre v2: Retpolines/RSB Stuffing or eIBRS/AIBRS if HW support
1507 * - Meltdown: Kernel Page Table Isolation
1508 * - Spectre v3a: Updated CPU microcode
1509 * - Spectre v4: Not currently mitigated
1510 * - SpectreRSB: SMEP and RSB Stuffing
1511 * - L1TF: spec_uarch_flush, SMT exclusion, requires microcode
1512 * - MDS: x86_md_clear, requires microcode, disabling SMT
1513 * - TAA: x86_md_clear and disabling SMT OR microcode and disabling TSX
1514 * - RFDS: microcode with x86_md_clear if RFDS_CLEAR set and RFDS_NO not.
1515 * - BHI: software sequence, and use of BHI_DIS_S if microcode has it.
1520 * - RDCL_NO: Meltdown, L1TF, MSBDS subset of MDS
1521 * - MDS_NO: All forms of MDS
1522 * - TAA_NO: TAA
1523 * - RFDS_NO: RFDS
1524 * - BHI_NO: BHI
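The hardware "NO" bits above live in the IA32_ARCH_CAPABILITIES MSR, whose presence is advertised by CPUID.(EAX=7,ECX=0):%edx. A minimal sketch of reading it follows; it is illustrative only and not part of cpuid.c, the helper name is made up, and the bit positions cited (RDCL_NO = 0, MDS_NO = 5, TAA_NO = 8) come from Intel's SDM rather than this file.

static uint64_t
arch_caps_value(const struct cpuid_regs *leaf7_0)
{
	/* Leaf 7 sub-leaf 0 %edx advertises the MSR's presence. */
	if ((leaf7_0->cp_edx & CPUID_INTC_EDX_7_0_ARCH_CAPS) == 0)
		return (0);
	return (rdmsr(0x10a));		/* IA32_ARCH_CAPABILITIES */
}

/* e.g. Meltdown is reported inapplicable when bit 0 (RDCL_NO) is set. */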
1567 int x86_use_pcid = -1;
1568 int x86_use_invpcid = -1;
1584 * X86_TAA_NOTHING -- no mitigation available for TAA side-channels
1585 * X86_TAA_DISABLED -- mitigation disabled via x86_disable_taa
1586 * X86_TAA_MD_CLEAR -- MDS mitigation also suffices for TAA
1587 * X86_TAA_TSX_FORCE_ABORT -- transactions are forced to abort
1588 * X86_TAA_TSX_DISABLE -- force abort transactions and hide from CPUID
1589 * X86_TAA_HW_MITIGATED -- TSX potentially active but H/W not TAA-vulnerable
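The six states above describe the members of the TAA mitigation enum; a sketch of what that declaration looks like, reconstructed only from the names listed in the comment (the typedef name itself is an assumption):

typedef enum {
	X86_TAA_NOTHING,		/* no mitigation available */
	X86_TAA_DISABLED,		/* disabled via x86_disable_taa */
	X86_TAA_MD_CLEAR,		/* MDS mitigation also covers TAA */
	X86_TAA_TSX_FORCE_ABORT,	/* transactions forced to abort */
	X86_TAA_TSX_DISABLE,		/* TSX disabled, hidden from CPUID */
	X86_TAA_HW_MITIGATED		/* hardware is not TAA-vulnerable */
} x86_taa_mitigation_t;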
1780 static int platform_type = -1;
1796 * processor cache-line alignment, but this is not guaranteed in the future.
1812 * per-lwp xsave area is dynamically allocated based on xsav_max_size. The
1866 uint8_t cpi_cacheinfo[16]; /* fn 2: intel-style cache desc */
1874 struct cpuid_regs cpi_sub7[2]; /* Leaf 7, sub-leaves 1-2 */
1887 uint_t cpi_ncore_per_chip; /* AMD: fn 0x80000008: %ecx[7-0] */
1888 /* Intel: fn 4: %eax[31-26] */
1936 * These bit fields are defined by the Intel Application Note AP-485
1939 #define CPI_FAMILY_XTD(cpi) BITX((cpi)->cpi_std[1].cp_eax, 27, 20)
1940 #define CPI_MODEL_XTD(cpi) BITX((cpi)->cpi_std[1].cp_eax, 19, 16)
1941 #define CPI_TYPE(cpi) BITX((cpi)->cpi_std[1].cp_eax, 13, 12)
1942 #define CPI_FAMILY(cpi) BITX((cpi)->cpi_std[1].cp_eax, 11, 8)
1943 #define CPI_STEP(cpi) BITX((cpi)->cpi_std[1].cp_eax, 3, 0)
1944 #define CPI_MODEL(cpi) BITX((cpi)->cpi_std[1].cp_eax, 7, 4)
1946 #define CPI_FEATURES_EDX(cpi) ((cpi)->cpi_std[1].cp_edx)
1947 #define CPI_FEATURES_ECX(cpi) ((cpi)->cpi_std[1].cp_ecx)
1948 #define CPI_FEATURES_XTD_EDX(cpi) ((cpi)->cpi_extd[1].cp_edx)
1949 #define CPI_FEATURES_XTD_ECX(cpi) ((cpi)->cpi_extd[1].cp_ecx)
1950 #define CPI_FEATURES_7_0_EBX(cpi) ((cpi)->cpi_std[7].cp_ebx)
1951 #define CPI_FEATURES_7_0_ECX(cpi) ((cpi)->cpi_std[7].cp_ecx)
1952 #define CPI_FEATURES_7_0_EDX(cpi) ((cpi)->cpi_std[7].cp_edx)
1953 #define CPI_FEATURES_7_1_EAX(cpi) ((cpi)->cpi_sub7[0].cp_eax)
1954 #define CPI_FEATURES_7_2_EDX(cpi) ((cpi)->cpi_sub7[1].cp_edx)
1956 #define CPI_BRANDID(cpi) BITX((cpi)->cpi_std[1].cp_ebx, 7, 0)
1957 #define CPI_CHUNKS(cpi) BITX((cpi)->cpi_std[1].cp_ebx, 15, 7)
1958 #define CPI_CPU_COUNT(cpi) BITX((cpi)->cpi_std[1].cp_ebx, 23, 16)
1959 #define CPI_APIC_ID(cpi) BITX((cpi)->cpi_std[1].cp_ebx, 31, 24)
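As a worked example of how the macros above combine (illustrative only, not from the file): a leaf 1 %eax of 0x000806ec has base family 0x6, base model 0xe, extended model 0x8 and stepping 0xc; because the family is 0x6, the extended model is folded in, mirroring the logic cpuid_pass_ident() applies further down.

uint_t family = CPI_FAMILY(cpi);		/* 0x6 */
uint_t model = CPI_MODEL(cpi);			/* 0xe */

if (family == 0xf)
	family += CPI_FAMILY_XTD(cpi);		/* not taken for 0x6 */
if (family == 0x6 || family >= 0xf)
	model += CPI_MODEL_XTD(cpi) << 4;	/* 0xe + (0x8 << 4) = 0x8e */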
1968 * Defined by Intel Application Note AP-485
1970 #define CPI_NUM_CORES(regs) BITX((regs)->cp_eax, 31, 26)
1971 #define CPI_NTHR_SHR_CACHE(regs) BITX((regs)->cp_eax, 25, 14)
1972 #define CPI_FULL_ASSOC_CACHE(regs) BITX((regs)->cp_eax, 9, 9)
1973 #define CPI_SELF_INIT_CACHE(regs) BITX((regs)->cp_eax, 8, 8)
1974 #define CPI_CACHE_LVL(regs) BITX((regs)->cp_eax, 7, 5)
1975 #define CPI_CACHE_TYPE(regs) BITX((regs)->cp_eax, 4, 0)
1980 #define CPI_CPU_LEVEL_TYPE(regs) BITX((regs)->cp_ecx, 15, 8)
1982 #define CPI_CACHE_WAYS(regs) BITX((regs)->cp_ebx, 31, 22)
1983 #define CPI_CACHE_PARTS(regs) BITX((regs)->cp_ebx, 21, 12)
1984 #define CPI_CACHE_COH_LN_SZ(regs) BITX((regs)->cp_ebx, 11, 0)
1986 #define CPI_CACHE_SETS(regs) BITX((regs)->cp_ecx, 31, 0)
1988 #define CPI_PREFCH_STRIDE(regs) BITX((regs)->cp_edx, 9, 0)
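A common use of the leaf 4 macros above is computing a cache's total size; each field is reported as (value - 1), hence the + 1 terms. A minimal sketch (not part of cpuid.c; the helper name is made up):

static uint64_t
cache_size_bytes(const struct cpuid_regs *regs)
{
	/* size = ways * partitions * line size * sets */
	return ((uint64_t)(CPI_CACHE_WAYS(regs) + 1) *
	    (CPI_CACHE_PARTS(regs) + 1) *
	    (CPI_CACHE_COH_LN_SZ(regs) + 1) *
	    (CPI_CACHE_SETS(regs) + 1));
}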
1992 * A couple of shorthand macros to identify "later" P6-family chips
1993 * like the Pentium M and Core. First, the "older" P6-based stuff
1994 * (loosely defined as "pre-Pentium-4"):
1998 cpi->cpi_family == 6 && \
1999 (cpi->cpi_model == 1 || \
2000 cpi->cpi_model == 3 || \
2001 cpi->cpi_model == 5 || \
2002 cpi->cpi_model == 6 || \
2003 cpi->cpi_model == 7 || \
2004 cpi->cpi_model == 8 || \
2005 cpi->cpi_model == 0xA || \
2006 cpi->cpi_model == 0xB) \
2010 #define IS_NEW_F6(cpi) ((cpi->cpi_family == 6) && !IS_LEGACY_P6(cpi))
2013 #define IS_EXTENDED_MODEL_INTEL(cpi) (cpi->cpi_family == 0x6 || \
2014 cpi->cpi_family >= 0xf)
2019 * See cpuid section of "Intel 64 and IA-32 Architectures Software Developer's
2020 * Manual Volume 2A: Instruction Set Reference, A-M" #25366-022US, November
2028 #define MWAIT_SUPPORTED(cpi) ((cpi)->cpi_std[1].cp_ecx & CPUID_INTC_ECX_MON)
2029 #define MWAIT_INT_ENABLE(cpi) ((cpi)->cpi_std[5].cp_ecx & 0x2)
2030 #define MWAIT_EXTENSION(cpi) ((cpi)->cpi_std[5].cp_ecx & 0x1)
2031 #define MWAIT_SIZE_MIN(cpi) BITX((cpi)->cpi_std[5].cp_eax, 15, 0)
2032 #define MWAIT_SIZE_MAX(cpi) BITX((cpi)->cpi_std[5].cp_ebx, 15, 0)
2034 * Number of sub-cstates for a given c-state.
2037 BITX((cpi)->cpi_std[5].cp_edx, c_state + 3, c_state)
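Leaf 5 %edx packs one 4-bit sub-C-state count per C-state, so the field for C-state n starts at bit 4 * n. An illustrative sketch (not part of cpuid.c; the helper name is made up) of walking those fields with the macro above:

static void
mwait_print_substates(const struct cpuid_info *cpi)
{
	uint_t n;

	for (n = 0; n < 8; n++) {
		cmn_err(CE_CONT, "?C%u: %u MWAIT sub-states\n", n,
		    (uint_t)MWAIT_NUM_SUBSTATES(cpi, 4 * n));
	}
}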
2067 * Apply various platform-dependent restrictions where the
2079 cp->cp_edx &= in platform_cpuid_mangle()
2091 cp->cp_edx &= in platform_cpuid_mangle()
2098 cp->cp_ecx &= ~CPUID_AMD_ECX_CMP_LGCY; in platform_cpuid_mangle()
2109 * Zero out the (ncores-per-chip - 1) field in platform_cpuid_mangle()
2111 cp->cp_eax &= 0x03fffffff; in platform_cpuid_mangle()
2122 cp->cp_ecx &= ~CPUID_AMD_ECX_CR8D; in platform_cpuid_mangle()
2127 * Zero out the (ncores-per-chip - 1) field in platform_cpuid_mangle()
2129 cp->cp_ecx &= 0xffffff00; in platform_cpuid_mangle()
2146 * we don't currently support. Could be set to non-zero values
2156 * Allocate space for mcpu_cpi in the machcpu structure for all non-boot CPUs.
2166 ASSERT(cpu->cpu_id != 0); in cpuid_alloc_space()
2167 ASSERT(cpu->cpu_m.mcpu_cpi == NULL); in cpuid_alloc_space()
2168 cpu->cpu_m.mcpu_cpi = in cpuid_alloc_space()
2169 kmem_zalloc(sizeof (*cpu->cpu_m.mcpu_cpi), KM_SLEEP); in cpuid_alloc_space()
2175 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_free_space()
2185 for (i = 1; i < cpi->cpi_cache_leaf_size; i++) in cpuid_free_space()
2186 kmem_free(cpi->cpi_cache_leaves[i], sizeof (struct cpuid_regs)); in cpuid_free_space()
2187 if (cpi->cpi_cache_leaf_size > 0) in cpuid_free_space()
2188 kmem_free(cpi->cpi_cache_leaves, in cpuid_free_space()
2189 cpi->cpi_cache_leaf_size * sizeof (struct cpuid_regs *)); in cpuid_free_space()
2192 cpu->cpu_m.mcpu_cpi = NULL; in cpuid_free_space()
2198 * initialization of various subsystems (e.g. TSC). determine_platform() must
2212 ASSERT(platform_type == -1); in determine_platform()
2286 * Xen's pseudo-cpuid function returns a string representing the in determine_platform()
2290 * hypervisor might use a different one depending on whether Hyper-V in determine_platform()
2312 ASSERT(platform_type != -1); in get_hwenv()
2347 for (i = 0; i < ARRAY_SIZE(cpi->cpi_topo); i++) { in cpuid_gather_ext_topo_leaf()
2348 struct cpuid_regs *regs = &cpi->cpi_topo[i]; in cpuid_gather_ext_topo_leaf()
2351 regs->cp_eax = leaf; in cpuid_gather_ext_topo_leaf()
2352 regs->cp_ecx = i; in cpuid_gather_ext_topo_leaf()
2355 if (CPUID_AMD_8X26_ECX_TYPE(regs->cp_ecx) == in cpuid_gather_ext_topo_leaf()
2361 cpi->cpi_topo_nleaves = i; in cpuid_gather_ext_topo_leaf()
2372 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_gather_amd_topology_leaves()
2374 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_gather_amd_topology_leaves()
2377 cp = &cpi->cpi_extd[8]; in cpuid_gather_amd_topology_leaves()
2378 cp->cp_eax = CPUID_LEAF_EXT_8; in cpuid_gather_amd_topology_leaves()
2380 platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8, cp); in cpuid_gather_amd_topology_leaves()
2384 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_gather_amd_topology_leaves()
2387 cp = &cpi->cpi_extd[0x1e]; in cpuid_gather_amd_topology_leaves()
2388 cp->cp_eax = CPUID_LEAF_EXT_1e; in cpuid_gather_amd_topology_leaves()
2392 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_26) { in cpuid_gather_amd_topology_leaves()
2410 if (cpi->cpi_maxeax >= 0xB) { in cpuid_gather_apicid()
2415 cp->cp_eax = 0xB; in cpuid_gather_apicid()
2416 cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0; in cpuid_gather_apicid()
2419 if (cp->cp_ebx != 0) { in cpuid_gather_apicid()
2420 return (cp->cp_edx); in cpuid_gather_apicid()
2424 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_gather_apicid()
2425 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_gather_apicid()
2427 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_gather_apicid()
2428 return (cpi->cpi_extd[0x1e].cp_eax); in cpuid_gather_apicid()
2459 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_amd_ncores()
2460 nthreads = BITX(cpi->cpi_extd[8].cp_ecx, 7, 0) + 1; in cpuid_amd_ncores()
2461 } else if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) { in cpuid_amd_ncores()
2470 if (cpi->cpi_family >= 0x17 && in cpuid_amd_ncores()
2472 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_amd_ncores()
2473 nthread_per_core = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1; in cpuid_amd_ncores()
2494 if (cpi->cpi_maxeax >= 4) { in cpuid_intel_ncores()
2495 *ncores = BITX(cpi->cpi_std[4].cp_eax, 31, 26) + 1; in cpuid_intel_ncores()
2500 if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) { in cpuid_intel_ncores()
2510 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_leafB_getids()
2514 if (cpi->cpi_maxeax < 0xB) in cpuid_leafB_getids()
2518 cp->cp_eax = 0xB; in cpuid_leafB_getids()
2519 cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0; in cpuid_leafB_getids()
2524 * Check CPUID.EAX=0BH, ECX=0H:EBX is non-zero, which in cpuid_leafB_getids()
2528 if (cp->cp_ebx != 0) { in cpuid_leafB_getids()
2538 cp->cp_eax = 0xB; in cpuid_leafB_getids()
2539 cp->cp_ecx = i; in cpuid_leafB_getids()
2545 x2apic_id = cp->cp_edx; in cpuid_leafB_getids()
2546 coreid_shift = BITX(cp->cp_eax, 4, 0); in cpuid_leafB_getids()
2547 ncpu_per_core = BITX(cp->cp_ebx, 15, 0); in cpuid_leafB_getids()
2549 x2apic_id = cp->cp_edx; in cpuid_leafB_getids()
2550 chipid_shift = BITX(cp->cp_eax, 4, 0); in cpuid_leafB_getids()
2551 ncpu_per_chip = BITX(cp->cp_ebx, 15, 0); in cpuid_leafB_getids()
2558 cpi->cpi_ncpu_per_chip = ncpu_per_chip; in cpuid_leafB_getids()
2559 cpi->cpi_ncore_per_chip = ncpu_per_chip / in cpuid_leafB_getids()
2561 cpi->cpi_chipid = x2apic_id >> chipid_shift; in cpuid_leafB_getids()
2562 cpi->cpi_clogid = x2apic_id & ((1 << chipid_shift) - 1); in cpuid_leafB_getids()
2563 cpi->cpi_coreid = x2apic_id >> coreid_shift; in cpuid_leafB_getids()
2564 cpi->cpi_pkgcoreid = cpi->cpi_clogid >> coreid_shift; in cpuid_leafB_getids()
2565 cpi->cpi_procnodeid = cpi->cpi_chipid; in cpuid_leafB_getids()
2566 cpi->cpi_compunitid = cpi->cpi_coreid; in cpuid_leafB_getids()
2569 cpi->cpi_nthread_bits = coreid_shift; in cpuid_leafB_getids()
2570 cpi->cpi_ncore_bits = chipid_shift - coreid_shift; in cpuid_leafB_getids()
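A worked example of the leaf 0xB derivation above, using made-up register values (illustrative only): suppose the SMT level reports coreid_shift == 1, the core level reports chipid_shift == 4, and this CPU's x2APIC ID is 0x21. The assignments above then yield:

/*
 *	cpi_chipid	= 0x21 >> 4			= 0x2
 *	cpi_clogid	= 0x21 & ((1 << 4) - 1)		= 0x1
 *	cpi_coreid	= 0x21 >> 1			= 0x10
 *	cpi_pkgcoreid	= 0x1 >> 1			= 0x0
 *
 * i.e. the second hardware thread of the first core on chip 2.
 */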
2585 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_intel_getids()
2591 cpi->cpi_procnodes_per_pkg = 1; in cpuid_intel_getids()
2592 cpi->cpi_cores_per_compunit = 1; in cpuid_intel_getids()
2609 cpi->cpi_nthread_bits = ddi_fls(cpi->cpi_ncpu_per_chip); in cpuid_intel_getids()
2610 cpi->cpi_ncore_bits = ddi_fls(cpi->cpi_ncore_per_chip); in cpuid_intel_getids()
2612 for (i = 1; i < cpi->cpi_ncpu_per_chip; i <<= 1) in cpuid_intel_getids()
2615 cpi->cpi_chipid = cpi->cpi_apicid >> chipid_shift; in cpuid_intel_getids()
2616 cpi->cpi_clogid = cpi->cpi_apicid & ((1 << chipid_shift) - 1); in cpuid_intel_getids()
2620 * Multi-core (and possibly multi-threaded) in cpuid_intel_getids()
2625 if (cpi->cpi_ncore_per_chip == 1) in cpuid_intel_getids()
2626 ncpu_per_core = cpi->cpi_ncpu_per_chip; in cpuid_intel_getids()
2627 else if (cpi->cpi_ncore_per_chip > 1) in cpuid_intel_getids()
2628 ncpu_per_core = cpi->cpi_ncpu_per_chip / in cpuid_intel_getids()
2629 cpi->cpi_ncore_per_chip; in cpuid_intel_getids()
2634 * +-----------------------+------+------+ in cpuid_intel_getids()
2636 * +-----------------------+------+------+ in cpuid_intel_getids()
2637 * <------- chipid --------> in cpuid_intel_getids()
2638 * <------- coreid ---------------> in cpuid_intel_getids()
2639 * <--- clogid --> in cpuid_intel_getids()
2640 * <------> in cpuid_intel_getids()
2646 * store the value of cpi->cpi_ncpu_per_chip. in cpuid_intel_getids()
2649 * cpi->cpi_ncore_per_chip. in cpuid_intel_getids()
2653 cpi->cpi_coreid = cpi->cpi_apicid >> coreid_shift; in cpuid_intel_getids()
2654 cpi->cpi_pkgcoreid = cpi->cpi_clogid >> coreid_shift; in cpuid_intel_getids()
2657 * Single-core multi-threaded processors. in cpuid_intel_getids()
2659 cpi->cpi_coreid = cpi->cpi_chipid; in cpuid_intel_getids()
2660 cpi->cpi_pkgcoreid = 0; in cpuid_intel_getids()
2663 * Single-core single-thread processors. in cpuid_intel_getids()
2665 cpi->cpi_coreid = cpu->cpu_id; in cpuid_intel_getids()
2666 cpi->cpi_pkgcoreid = 0; in cpuid_intel_getids()
2668 cpi->cpi_procnodeid = cpi->cpi_chipid; in cpuid_intel_getids()
2669 cpi->cpi_compunitid = cpi->cpi_coreid; in cpuid_intel_getids()
2688 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_amd_get_coreid()
2690 if (cpi->cpi_family >= 0x17 && in cpuid_amd_get_coreid()
2692 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_amd_get_coreid()
2693 uint_t nthreads = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1; in cpuid_amd_get_coreid()
2696 return (cpi->cpi_apicid >> 1); in cpuid_amd_get_coreid()
2700 return (cpu->cpu_id); in cpuid_amd_get_coreid()
2709 * synthesize this case by using cpu->cpu_id. This scheme does not,
2711 * coreids starting at a multiple of the number of cores per chip - that is
2728 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_amd_getids()
2738 cpi->cpi_coreid = cpuid_amd_get_coreid(cpu); in cpuid_amd_getids()
2739 cpi->cpi_compunitid = cpi->cpi_coreid; in cpuid_amd_getids()
2740 cpi->cpi_cores_per_compunit = 1; in cpuid_amd_getids()
2741 cpi->cpi_procnodes_per_pkg = 1; in cpuid_amd_getids()
2747 * then we assume it's one. This should be present on all 64-bit AMD in cpuid_amd_getids()
2750 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_amd_getids()
2751 coreidsz = BITX((cpi)->cpi_extd[8].cp_ecx, 15, 12); in cpuid_amd_getids()
2759 for (i = 1; i < cpi->cpi_ncore_per_chip; i <<= 1) in cpuid_amd_getids()
2765 /* Assume single-core part */ in cpuid_amd_getids()
2768 cpi->cpi_clogid = cpi->cpi_apicid & ((1 << coreidsz) - 1); in cpuid_amd_getids()
2773 * this value is the core id in the given node. For non-virtualized in cpuid_amd_getids()
2781 if (cpi->cpi_family >= 0x17 && in cpuid_amd_getids()
2783 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e && in cpuid_amd_getids()
2784 cpi->cpi_extd[0x1e].cp_ebx != 0) { in cpuid_amd_getids()
2785 uint_t nthreads = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1; in cpuid_amd_getids()
2788 cpi->cpi_pkgcoreid = cpi->cpi_clogid >> 1; in cpuid_amd_getids()
2790 cpi->cpi_pkgcoreid = cpi->cpi_clogid; in cpuid_amd_getids()
2793 cpi->cpi_pkgcoreid = cpi->cpi_clogid; in cpuid_amd_getids()
2802 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) { in cpuid_amd_getids()
2803 cp = &cpi->cpi_extd[0x1e]; in cpuid_amd_getids()
2805 cpi->cpi_procnodes_per_pkg = BITX(cp->cp_ecx, 10, 8) + 1; in cpuid_amd_getids()
2806 cpi->cpi_procnodeid = BITX(cp->cp_ecx, 7, 0); in cpuid_amd_getids()
2809 * For Bulldozer-era CPUs, recalculate the compute unit in cpuid_amd_getids()
2812 if (cpi->cpi_family >= 0x15 && cpi->cpi_family < 0x17) { in cpuid_amd_getids()
2813 cpi->cpi_cores_per_compunit = in cpuid_amd_getids()
2814 BITX(cp->cp_ebx, 15, 8) + 1; in cpuid_amd_getids()
2815 cpi->cpi_compunitid = BITX(cp->cp_ebx, 7, 0) + in cpuid_amd_getids()
2816 (cpi->cpi_ncore_per_chip / in cpuid_amd_getids()
2817 cpi->cpi_cores_per_compunit) * in cpuid_amd_getids()
2818 (cpi->cpi_procnodeid / in cpuid_amd_getids()
2819 cpi->cpi_procnodes_per_pkg); in cpuid_amd_getids()
2821 } else if (cpi->cpi_family == 0xf || cpi->cpi_family >= 0x11) { in cpuid_amd_getids()
2822 cpi->cpi_procnodeid = (cpi->cpi_apicid >> coreidsz) & 7; in cpuid_amd_getids()
2823 } else if (cpi->cpi_family == 0x10) { in cpuid_amd_getids()
2825 * See if we are a multi-node processor. in cpuid_amd_getids()
2829 if ((cpi->cpi_model < 8) || BITX(nb_caps_reg, 29, 29) == 0) { in cpuid_amd_getids()
2830 /* Single-node */ in cpuid_amd_getids()
2831 cpi->cpi_procnodeid = BITX(cpi->cpi_apicid, 5, in cpuid_amd_getids()
2836 * Multi-node revision D (2 nodes per package in cpuid_amd_getids()
2839 cpi->cpi_procnodes_per_pkg = 2; in cpuid_amd_getids()
2841 first_half = (cpi->cpi_pkgcoreid <= in cpuid_amd_getids()
2842 (cpi->cpi_ncore_per_chip/2 - 1)); in cpuid_amd_getids()
2844 if (cpi->cpi_apicid == cpi->cpi_pkgcoreid) { in cpuid_amd_getids()
2846 cpi->cpi_procnodeid = (first_half ? 0 : 1); in cpuid_amd_getids()
2851 node2_1 = BITX(cpi->cpi_apicid, 5, 4) << 1; in cpuid_amd_getids()
2858 * always 0 on dual-node processors) in cpuid_amd_getids()
2861 cpi->cpi_procnodeid = node2_1 + in cpuid_amd_getids()
2864 cpi->cpi_procnodeid = node2_1 + in cpuid_amd_getids()
2869 cpi->cpi_procnodeid = 0; in cpuid_amd_getids()
2872 cpi->cpi_chipid = in cpuid_amd_getids()
2873 cpi->cpi_procnodeid / cpi->cpi_procnodes_per_pkg; in cpuid_amd_getids()
2875 cpi->cpi_ncore_bits = coreidsz; in cpuid_amd_getids()
2876 cpi->cpi_nthread_bits = ddi_fls(cpi->cpi_ncpu_per_chip / in cpuid_amd_getids()
2877 cpi->cpi_ncore_per_chip); in cpuid_amd_getids()
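A worked example of the Bulldozer-era compute unit calculation above, with made-up values (illustrative only): ncore_per_chip == 8, cores_per_compunit == 2, procnodes_per_pkg == 2, and a CPU on procnodeid 3 whose leaf 0x8000001e %ebx[7:0] is 2.

/*
 *	cpi_compunitid	= 2 + (8 / 2) * (3 / 2)
 *			= 2 + 4 * 1
 *			= 6
 */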
2887 * MDS-related micro-architectural state that would normally happen by calling
2898 * micro-architectural state on the processor. This flush is used to mitigate
2902 * - A noop which is done because we either are vulnerable, but do not have
2906 * - spec_uarch_flush_msr which will issue an L1D flush and if microcode to
2908 * however, it only flushes the MDS related micro-architectural state on the
2911 * - x86_md_clear which will flush the MDS related state. This is done when we
2921 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_update_md_clear()
2923 /* Non-Intel doesn't concern us here. */ in cpuid_update_md_clear()
2924 if (cpi->cpi_vendor != X86_VENDOR_Intel) in cpuid_update_md_clear()
2954 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_update_l1d_flush()
2962 if (cpi->cpi_vendor != X86_VENDOR_Intel || in cpuid_update_l1d_flush()
3028 * to not have an RSB (pre-eIBRS).
3031 X86_BHI_TOO_OLD_OR_DISABLED, /* Pre-eIBRS or disabled */
3053 * enough to fix (which includes non-Intel CPUs), or the CPU has an explicit
3054 * disable-Branch-History control.
3060 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_learn_and_patch_bhi()
3064 ASSERT0(cpu->cpu_id); in cpuid_learn_and_patch_bhi()
3074 * or if it's non-Intel, in which case this mitigation mechanism in cpuid_learn_and_patch_bhi()
3077 if (cpi->cpi_vendor != X86_VENDOR_Intel || in cpuid_learn_and_patch_bhi()
3108 * post-barrier RSB (PBRSB) guessing suggests we should enable Intel RSB
3115-3121 * [table: whether RSB stuffing is needed at context switch and at VM exit, depending on eIBRS and the PBRSB hardware fix]
3129 * needed, but context-switch stuffing isn't.
3134 …/en/developer/articles/technical/software-security-guidance/advisory-guidance/post-barrier-return-…
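The decision the table above encodes can be summarized as follows; this is a reconstruction from the surrounding prose and Intel's post-barrier RSB guidance, not code from cpuid.c, and the flag and function names are made up.

static void
rsb_stuffing_needed(boolean_t eibrs, boolean_t pbrsb_no,
    boolean_t *ctxsw_stuff, boolean_t *vmexit_stuff)
{
	if (eibrs && pbrsb_no) {
		/* Hardware covers both transitions. */
		*ctxsw_stuff = *vmexit_stuff = B_FALSE;
	} else if (eibrs) {
		/* eIBRS alone: only the guest-to-host transition needs it. */
		*ctxsw_stuff = B_FALSE;
		*vmexit_stuff = B_TRUE;
	} else {
		/* No eIBRS (e.g. retpolines): stuff in both places. */
		*ctxsw_stuff = *vmexit_stuff = B_TRUE;
	}
}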
3162 * The Intel document on Post-Barrier RSB says that processors in cpuid_patch_rsb()
3173 * both vmexit and context-switching require the software in cpuid_patch_rsb()
3261 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_update_tsx()
3263 VERIFY(cpu->cpu_id == 0); in cpuid_update_tsx()
3265 if (cpi->cpi_vendor != X86_VENDOR_Intel) { in cpuid_update_tsx()
3280 * we want to cross-CPU-thread protection. in cpuid_update_tsx()
3306 * mitigation. TSX-using code will always take the fallback path. in cpuid_update_tsx()
3308 if (cpi->cpi_pass < 4) { in cpuid_update_tsx()
3350 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_scan_security()
3353 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_scan_security()
3354 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_scan_security()
3355 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_scan_security()
3356 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_IBPB) in cpuid_scan_security()
3358 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_IBRS) in cpuid_scan_security()
3360 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_STIBP) in cpuid_scan_security()
3362 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_STIBP_ALL) in cpuid_scan_security()
3364 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_SSBD) in cpuid_scan_security()
3366 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_VIRT_SSBD) in cpuid_scan_security()
3368 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_SSB_NO) in cpuid_scan_security()
3376 if (cpi->cpi_vendor == X86_VENDOR_AMD && in cpuid_scan_security()
3377 (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_PREFER_IBRS) && in cpuid_scan_security()
3378 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_21 && in cpuid_scan_security()
3379 (cpi->cpi_extd[0x21].cp_eax & CPUID_AMD_8X21_EAX_AIBRS)) { in cpuid_scan_security()
3383 } else if (cpi->cpi_vendor == X86_VENDOR_Intel && in cpuid_scan_security()
3384 cpi->cpi_maxeax >= 7) { in cpuid_scan_security()
3386 ecp = &cpi->cpi_std[7]; in cpuid_scan_security()
3388 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_MD_CLEAR) { in cpuid_scan_security()
3392 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_SPEC_CTRL) { in cpuid_scan_security()
3397 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_STIBP) { in cpuid_scan_security()
3414 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_ARCH_CAPS) { in cpuid_scan_security()
3477 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_SSBD) in cpuid_scan_security()
3480 if (ecp->cp_edx & CPUID_INTC_EDX_7_0_FLUSH_CMD) in cpuid_scan_security()
3485 * Take care of certain mitigations on the non-boot CPU. The boot CPU in cpuid_scan_security()
3487 * do. This gives us a hook for per-HW thread mitigations such as in cpuid_scan_security()
3490 if (cpu->cpu_id != 0) { in cpuid_scan_security()
3547 * If any of these are present, then we need to flush u-arch state at in cpuid_scan_security()
3551 * MDS, the L1D flush also clears the other u-arch state that the in cpuid_scan_security()
3604 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_basic_topology()
3606 if (cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_basic_topology()
3607 cpi->cpi_vendor == X86_VENDOR_HYGON) { in cpuid_basic_topology()
3611 cpi->cpi_apicid = cpuid_gather_apicid(cpi); in cpuid_basic_topology()
3617 switch (cpi->cpi_vendor) { in cpuid_basic_topology()
3619 cpuid_intel_ncores(cpi, &cpi->cpi_ncpu_per_chip, in cpuid_basic_topology()
3620 &cpi->cpi_ncore_per_chip); in cpuid_basic_topology()
3624 cpuid_amd_ncores(cpi, &cpi->cpi_ncpu_per_chip, in cpuid_basic_topology()
3625 &cpi->cpi_ncore_per_chip); in cpuid_basic_topology()
3631 * today, though there are also 64-bit VIA chips. Assume that in cpuid_basic_topology()
3634 if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) { in cpuid_basic_topology()
3635 cpi->cpi_ncore_per_chip = 1; in cpuid_basic_topology()
3636 cpi->cpi_ncpu_per_chip = CPI_CPU_COUNT(cpi); in cpuid_basic_topology()
3645 if (cpi->cpi_ncore_per_chip > 1) { in cpuid_basic_topology()
3649 if (cpi->cpi_ncpu_per_chip > 1 && in cpuid_basic_topology()
3650 cpi->cpi_ncpu_per_chip != cpi->cpi_ncore_per_chip) { in cpuid_basic_topology()
3664 * This is a single core, single-threaded processor. in cpuid_basic_topology()
3666 cpi->cpi_procnodes_per_pkg = 1; in cpuid_basic_topology()
3667 cpi->cpi_cores_per_compunit = 1; in cpuid_basic_topology()
3668 cpi->cpi_compunitid = 0; in cpuid_basic_topology()
3669 cpi->cpi_chipid = -1; in cpuid_basic_topology()
3670 cpi->cpi_clogid = 0; in cpuid_basic_topology()
3671 cpi->cpi_coreid = cpu->cpu_id; in cpuid_basic_topology()
3672 cpi->cpi_pkgcoreid = 0; in cpuid_basic_topology()
3673 if (cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_basic_topology()
3674 cpi->cpi_vendor == X86_VENDOR_HYGON) { in cpuid_basic_topology()
3675 cpi->cpi_procnodeid = BITX(cpi->cpi_apicid, 3, 0); in cpuid_basic_topology()
3677 cpi->cpi_procnodeid = cpi->cpi_chipid; in cpuid_basic_topology()
3680 switch (cpi->cpi_vendor) { in cpuid_basic_topology()
3701 cpi->cpi_procnodes_per_pkg = 1; in cpuid_basic_topology()
3702 cpi->cpi_cores_per_compunit = 1; in cpuid_basic_topology()
3703 cpi->cpi_chipid = 0; in cpuid_basic_topology()
3704 cpi->cpi_coreid = cpu->cpu_id; in cpuid_basic_topology()
3705 cpi->cpi_clogid = cpu->cpu_id; in cpuid_basic_topology()
3706 cpi->cpi_pkgcoreid = cpu->cpu_id; in cpuid_basic_topology()
3707 cpi->cpi_procnodeid = cpi->cpi_chipid; in cpuid_basic_topology()
3708 cpi->cpi_compunitid = cpi->cpi_coreid; in cpuid_basic_topology()
3724 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_basic_thermal()
3726 if (cpi->cpi_maxeax < 6) { in cpuid_basic_thermal()
3730 cp = &cpi->cpi_std[6]; in cpuid_basic_thermal()
3731 cp->cp_eax = 6; in cpuid_basic_thermal()
3732 cp->cp_ebx = cp->cp_ecx = cp->cp_edx = 0; in cpuid_basic_thermal()
3734 platform_cpuid_mangle(cpi->cpi_vendor, 6, cp); in cpuid_basic_thermal()
3736 if (cpi->cpi_vendor != X86_VENDOR_Intel) { in cpuid_basic_thermal()
3740 if ((cp->cp_eax & CPUID_INTC_EAX_DTS) != 0) { in cpuid_basic_thermal()
3744 if ((cp->cp_eax & CPUID_INTC_EAX_PTM) != 0) { in cpuid_basic_thermal()
3756 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_basic_avx()
3761 if ((cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_AVX) == 0) in cpuid_basic_avx()
3770 if (cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_F16C) in cpuid_basic_avx()
3773 if (cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_FMA) in cpuid_basic_avx()
3776 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_BMI1) in cpuid_basic_avx()
3779 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_BMI2) in cpuid_basic_avx()
3782 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX2) in cpuid_basic_avx()
3785 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_VAES) in cpuid_basic_avx()
3788 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_VPCLMULQDQ) in cpuid_basic_avx()
3795 if ((cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512F) == 0) in cpuid_basic_avx()
3799 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512DQ) in cpuid_basic_avx()
3802 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512IFMA) in cpuid_basic_avx()
3805 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512PF) in cpuid_basic_avx()
3808 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512ER) in cpuid_basic_avx()
3811 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512CD) in cpuid_basic_avx()
3814 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512BW) in cpuid_basic_avx()
3817 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512VL) in cpuid_basic_avx()
3820 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VBMI) in cpuid_basic_avx()
3823 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VBMI2) in cpuid_basic_avx()
3826 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VNNI) in cpuid_basic_avx()
3829 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512BITALG) in cpuid_basic_avx()
3832 if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VPOPCDQ) in cpuid_basic_avx()
3835 if (cpi->cpi_std[7].cp_edx & CPUID_INTC_EDX_7_0_AVX5124NNIW) in cpuid_basic_avx()
3838 if (cpi->cpi_std[7].cp_edx & CPUID_INTC_EDX_7_0_AVX5124FMAPS) in cpuid_basic_avx()
3845 if (cpi->cpi_std[7].cp_eax < 1) in cpuid_basic_avx()
3848 if (cpi->cpi_sub7[0].cp_eax & CPUID_INTC_EAX_7_1_AVX512_BF16) in cpuid_basic_avx()
3862 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_basic_ppin()
3864 switch (cpi->cpi_vendor) { in cpuid_basic_ppin()
3870 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) { in cpuid_basic_ppin()
3871 if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_PPIN) { in cpuid_basic_ppin()
3877 if (cpi->cpi_family != 6) in cpuid_basic_ppin()
3879 switch (cpi->cpi_model) { in cpuid_basic_ppin()
3931 ASSERT3S(platform_type, !=, -1); in cpuid_pass_ident()
3934 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_pass_ident()
3937 cp = &cpi->cpi_std[0]; in cpuid_pass_ident()
3938 cp->cp_eax = 0; in cpuid_pass_ident()
3939 cpi->cpi_maxeax = __cpuid_insn(cp); in cpuid_pass_ident()
3941 uint32_t *iptr = (uint32_t *)cpi->cpi_vendorstr; in cpuid_pass_ident()
3942 *iptr++ = cp->cp_ebx; in cpuid_pass_ident()
3943 *iptr++ = cp->cp_edx; in cpuid_pass_ident()
3944 *iptr++ = cp->cp_ecx; in cpuid_pass_ident()
3945 *(char *)&cpi->cpi_vendorstr[12] = '\0'; in cpuid_pass_ident()
3948 cpi->cpi_vendor = _cpuid_vendorstr_to_vendorcode(cpi->cpi_vendorstr); in cpuid_pass_ident()
3949 x86_vendor = cpi->cpi_vendor; /* for compatibility */ in cpuid_pass_ident()
3954 if (cpi->cpi_maxeax > CPI_MAXEAX_MAX) in cpuid_pass_ident()
3955 cpi->cpi_maxeax = CPI_MAXEAX_MAX; in cpuid_pass_ident()
3956 if (cpi->cpi_maxeax < 1) in cpuid_pass_ident()
3959 cp = &cpi->cpi_std[1]; in cpuid_pass_ident()
3960 cp->cp_eax = 1; in cpuid_pass_ident()
3966 cpi->cpi_model = CPI_MODEL(cpi); in cpuid_pass_ident()
3967 cpi->cpi_family = CPI_FAMILY(cpi); in cpuid_pass_ident()
3969 if (cpi->cpi_family == 0xf) in cpuid_pass_ident()
3970 cpi->cpi_family += CPI_FAMILY_XTD(cpi); in cpuid_pass_ident()
3978 switch (cpi->cpi_vendor) { in cpuid_pass_ident()
3981 cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4; in cpuid_pass_ident()
3985 cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4; in cpuid_pass_ident()
3988 cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4; in cpuid_pass_ident()
3991 if (cpi->cpi_model == 0xf) in cpuid_pass_ident()
3992 cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4; in cpuid_pass_ident()
3996 cpi->cpi_step = CPI_STEP(cpi); in cpuid_pass_ident()
3997 cpi->cpi_brandid = CPI_BRANDID(cpi); in cpuid_pass_ident()
4002 cpi->cpi_chiprev = _cpuid_chiprev(cpi->cpi_vendor, cpi->cpi_family, in cpuid_pass_ident()
4003 cpi->cpi_model, cpi->cpi_step); in cpuid_pass_ident()
4004 cpi->cpi_chiprevstr = _cpuid_chiprevstr(cpi->cpi_vendor, in cpuid_pass_ident()
4005 cpi->cpi_family, cpi->cpi_model, cpi->cpi_step); in cpuid_pass_ident()
4006 cpi->cpi_socket = _cpuid_skt(cpi->cpi_vendor, cpi->cpi_family, in cpuid_pass_ident()
4007 cpi->cpi_model, cpi->cpi_step); in cpuid_pass_ident()
4008 cpi->cpi_uarchrev = _cpuid_uarchrev(cpi->cpi_vendor, cpi->cpi_family, in cpuid_pass_ident()
4009 cpi->cpi_model, cpi->cpi_step); in cpuid_pass_ident()
4024 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_pass_basic()
4027 if (cpi->cpi_maxeax < 1) in cpuid_pass_basic()
4033 cp = &cpi->cpi_std[1]; in cpuid_pass_basic()
4037 * - believe %edx feature word in cpuid_pass_basic()
4038 * - ignore %ecx feature word in cpuid_pass_basic()
4039 * - 32-bit virtual and physical addressing in cpuid_pass_basic()
4044 cpi->cpi_pabits = cpi->cpi_vabits = 32; in cpuid_pass_basic()
4046 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4048 if (cpi->cpi_family == 5) in cpuid_pass_basic()
4056 if (cpi->cpi_model < 3 && cpi->cpi_step < 3) in cpuid_pass_basic()
4057 cp->cp_edx &= ~CPUID_INTC_EDX_SEP; in cpuid_pass_basic()
4058 } else if (IS_NEW_F6(cpi) || cpi->cpi_family == 0xf) { in cpuid_pass_basic()
4067 } else if (cpi->cpi_family > 0xf) in cpuid_pass_basic()
4073 if (cpi->cpi_maxeax < 5) in cpuid_pass_basic()
4081 if (cpi->cpi_family == 0xf && cpi->cpi_model == 0xe) { in cpuid_pass_basic()
4082 cp->cp_eax = (0xf0f & cp->cp_eax) | 0xc0; in cpuid_pass_basic()
4083 cpi->cpi_model = 0xc; in cpuid_pass_basic()
4086 if (cpi->cpi_family == 5) { in cpuid_pass_basic()
4099 if (cpi->cpi_model == 0) { in cpuid_pass_basic()
4100 if (cp->cp_edx & 0x200) { in cpuid_pass_basic()
4101 cp->cp_edx &= ~0x200; in cpuid_pass_basic()
4102 cp->cp_edx |= CPUID_INTC_EDX_PGE; in cpuid_pass_basic()
4109 if (cpi->cpi_model < 6) in cpuid_pass_basic()
4117 if (cpi->cpi_family >= 0xf) in cpuid_pass_basic()
4123 if (cpi->cpi_maxeax < 5) in cpuid_pass_basic()
4129 * Pre-family-10h Opterons do not have the MWAIT instruction. We in cpuid_pass_basic()
4131 * is preferred. Families in-between are less certain. in cpuid_pass_basic()
4133 if (cpi->cpi_family < 0x17) { in cpuid_pass_basic()
4147 if (cpi->cpi_family == 5 && cpi->cpi_model == 4 && in cpuid_pass_basic()
4148 (cpi->cpi_step == 2 || cpi->cpi_step == 3)) in cpuid_pass_basic()
4149 cp->cp_edx |= CPUID_INTC_EDX_CX8; in cpuid_pass_basic()
4155 if (cpi->cpi_family == 6) in cpuid_pass_basic()
4156 cp->cp_edx |= CPUID_INTC_EDX_CX8; in cpuid_pass_basic()
4236 cp->cp_edx &= mask_edx; in cpuid_pass_basic()
4237 cp->cp_ecx &= mask_ecx; in cpuid_pass_basic()
4244 platform_cpuid_mangle(cpi->cpi_vendor, 1, cp); in cpuid_pass_basic()
4249 * 7 has sub-leaves determined by ecx. in cpuid_pass_basic()
4251 if (cpi->cpi_maxeax >= 7) { in cpuid_pass_basic()
4253 ecp = &cpi->cpi_std[7]; in cpuid_pass_basic()
4254 ecp->cp_eax = 7; in cpuid_pass_basic()
4255 ecp->cp_ecx = 0; in cpuid_pass_basic()
4260 * extended-save-area dependent flags here. By removing most of in cpuid_pass_basic()
4261 * the leaf 7, sub-leaf 0 flags, that will ensure that we don't in cpuid_pass_basic()
4266 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_BMI1; in cpuid_pass_basic()
4267 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_BMI2; in cpuid_pass_basic()
4268 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_AVX2; in cpuid_pass_basic()
4269 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_MPX; in cpuid_pass_basic()
4270 ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_ALL_AVX512; in cpuid_pass_basic()
4271 ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_ALL_AVX512; in cpuid_pass_basic()
4272 ecp->cp_edx &= ~CPUID_INTC_EDX_7_0_ALL_AVX512; in cpuid_pass_basic()
4273 ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_VAES; in cpuid_pass_basic()
4274 ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_VPCLMULQDQ; in cpuid_pass_basic()
4275 ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_GFNI; in cpuid_pass_basic()
4278 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_SMEP) in cpuid_pass_basic()
4287 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_SMAP && in cpuid_pass_basic()
4291 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_RDSEED) in cpuid_pass_basic()
4294 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_ADX) in cpuid_pass_basic()
4297 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_FSGSBASE) in cpuid_pass_basic()
4300 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_CLFLUSHOPT) in cpuid_pass_basic()
4303 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_INVPCID) in cpuid_pass_basic()
4306 if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_UMIP) in cpuid_pass_basic()
4308 if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_PKU) in cpuid_pass_basic()
4310 if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_OSPKE) in cpuid_pass_basic()
4312 if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_GFNI) in cpuid_pass_basic()
4315 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_CLWB) in cpuid_pass_basic()
4318 if (cpi->cpi_vendor == X86_VENDOR_Intel) { in cpuid_pass_basic()
4319 if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_MPX) in cpuid_pass_basic()
4327 if (ecp->cp_eax >= 1) { in cpuid_pass_basic()
4329 c71 = &cpi->cpi_sub7[0]; in cpuid_pass_basic()
4330 c71->cp_eax = 7; in cpuid_pass_basic()
4331 c71->cp_ecx = 1; in cpuid_pass_basic()
4336 if (ecp->cp_eax >= 2) { in cpuid_pass_basic()
4338 c72 = &cpi->cpi_sub7[1]; in cpuid_pass_basic()
4339 c72->cp_eax = 7; in cpuid_pass_basic()
4340 c72->cp_ecx = 2; in cpuid_pass_basic()
4348 cp->cp_edx |= cpuid_feature_edx_include; in cpuid_pass_basic()
4349 cp->cp_edx &= ~cpuid_feature_edx_exclude; in cpuid_pass_basic()
4351 cp->cp_ecx |= cpuid_feature_ecx_include; in cpuid_pass_basic()
4352 cp->cp_ecx &= ~cpuid_feature_ecx_exclude; in cpuid_pass_basic()
4354 if (cp->cp_edx & CPUID_INTC_EDX_PSE) { in cpuid_pass_basic()
4357 if (cp->cp_edx & CPUID_INTC_EDX_TSC) { in cpuid_pass_basic()
4360 if (cp->cp_edx & CPUID_INTC_EDX_MSR) { in cpuid_pass_basic()
4363 if (cp->cp_edx & CPUID_INTC_EDX_MTRR) { in cpuid_pass_basic()
4366 if (cp->cp_edx & CPUID_INTC_EDX_PGE) { in cpuid_pass_basic()
4369 if (cp->cp_edx & CPUID_INTC_EDX_CMOV) { in cpuid_pass_basic()
4372 if (cp->cp_edx & CPUID_INTC_EDX_MMX) { in cpuid_pass_basic()
4375 if ((cp->cp_edx & CPUID_INTC_EDX_MCE) != 0 && in cpuid_pass_basic()
4376 (cp->cp_edx & CPUID_INTC_EDX_MCA) != 0) { in cpuid_pass_basic()
4379 if (cp->cp_edx & CPUID_INTC_EDX_PAE) { in cpuid_pass_basic()
4382 if (cp->cp_edx & CPUID_INTC_EDX_CX8) { in cpuid_pass_basic()
4385 if (cp->cp_ecx & CPUID_INTC_ECX_CX16) { in cpuid_pass_basic()
4388 if (cp->cp_edx & CPUID_INTC_EDX_PAT) { in cpuid_pass_basic()
4391 if (cp->cp_edx & CPUID_INTC_EDX_SEP) { in cpuid_pass_basic()
4394 if (cp->cp_edx & CPUID_INTC_EDX_FXSR) { in cpuid_pass_basic()
4400 if (cp->cp_edx & CPUID_INTC_EDX_SSE) { in cpuid_pass_basic()
4403 if (cp->cp_edx & CPUID_INTC_EDX_SSE2) { in cpuid_pass_basic()
4406 if (cp->cp_ecx & CPUID_INTC_ECX_SSE3) { in cpuid_pass_basic()
4409 if (cp->cp_ecx & CPUID_INTC_ECX_SSSE3) { in cpuid_pass_basic()
4412 if (cp->cp_ecx & CPUID_INTC_ECX_SSE4_1) { in cpuid_pass_basic()
4415 if (cp->cp_ecx & CPUID_INTC_ECX_SSE4_2) { in cpuid_pass_basic()
4418 if (cp->cp_ecx & CPUID_INTC_ECX_AES) { in cpuid_pass_basic()
4421 if (cp->cp_ecx & CPUID_INTC_ECX_PCLMULQDQ) { in cpuid_pass_basic()
4425 if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_SHA) in cpuid_pass_basic()
4428 if (cp->cp_ecx & CPUID_INTC_ECX_XSAVE) { in cpuid_pass_basic()
4436 if (cp->cp_ecx & CPUID_INTC_ECX_PCID) { in cpuid_pass_basic()
4440 if (cp->cp_ecx & CPUID_INTC_ECX_X2APIC) { in cpuid_pass_basic()
4443 if (cp->cp_edx & CPUID_INTC_EDX_DE) { in cpuid_pass_basic()
4447 if (cp->cp_ecx & CPUID_INTC_ECX_MON) { in cpuid_pass_basic()
4453 if (cp->cp_edx & CPUID_INTC_EDX_CLFSH) { in cpuid_pass_basic()
4454 cpi->cpi_mwait.support |= MWAIT_SUPPORT; in cpuid_pass_basic()
4464 ASSERT((cp->cp_ecx & CPUID_INTC_ECX_MON) && in cpuid_pass_basic()
4465 (cp->cp_edx & CPUID_INTC_EDX_CLFSH)); in cpuid_pass_basic()
4471 if (cp->cp_ecx & CPUID_INTC_ECX_VMX) { in cpuid_pass_basic()
4475 if (cp->cp_ecx & CPUID_INTC_ECX_RDRAND) in cpuid_pass_basic()
4482 if (cp->cp_edx & CPUID_INTC_EDX_CLFSH) { in cpuid_pass_basic()
4484 x86_clflush_size = (BITX(cp->cp_ebx, 15, 8) * 8); in cpuid_pass_basic()
4487 cpi->cpi_pabits = 36; in cpuid_pass_basic()
4489 if (cpi->cpi_maxeax >= 0xD && !xsave_force_disable) { in cpuid_pass_basic()
4493 ecp->cp_eax = 0xD; in cpuid_pass_basic()
4494 ecp->cp_ecx = 1; in cpuid_pass_basic()
4495 ecp->cp_edx = ecp->cp_ebx = 0; in cpuid_pass_basic()
4498 if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVEOPT) in cpuid_pass_basic()
4500 if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVEC) in cpuid_pass_basic()
4502 if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVES) in cpuid_pass_basic()
4516 if (cpi->cpi_vendor == X86_VENDOR_AMD && in cpuid_pass_basic()
4517 uarchrev_uarch(cpi->cpi_uarchrev) <= X86_UARCH_AMD_ZEN2) { in cpuid_pass_basic()
4527 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4533 if (IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf || in cpuid_pass_basic()
4534 (get_hwenv() == HW_KVM && cpi->cpi_family == 6 && in cpuid_pass_basic()
4535 (cpi->cpi_model == 6 || cpi->cpi_model == 2))) in cpuid_pass_basic()
4539 if (cpi->cpi_family > 5 || in cpuid_pass_basic()
4540 (cpi->cpi_family == 5 && cpi->cpi_model >= 1)) in cpuid_pass_basic()
4545 * Only these Cyrix CPUs are -known- to support in cpuid_pass_basic()
4561 cp = &cpi->cpi_extd[0]; in cpuid_pass_basic()
4562 cp->cp_eax = CPUID_LEAF_EXT_0; in cpuid_pass_basic()
4563 cpi->cpi_xmaxeax = __cpuid_insn(cp); in cpuid_pass_basic()
4566 if (cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) { in cpuid_pass_basic()
4568 if (cpi->cpi_xmaxeax > CPI_XMAXEAX_MAX) in cpuid_pass_basic()
4569 cpi->cpi_xmaxeax = CPI_XMAXEAX_MAX; in cpuid_pass_basic()
4571 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4575 if (cpi->cpi_xmaxeax < 0x80000001) in cpuid_pass_basic()
4577 cp = &cpi->cpi_extd[1]; in cpuid_pass_basic()
4578 cp->cp_eax = 0x80000001; in cpuid_pass_basic()
4581 if (cpi->cpi_vendor == X86_VENDOR_AMD && in cpuid_pass_basic()
4582 cpi->cpi_family == 5 && in cpuid_pass_basic()
4583 cpi->cpi_model == 6 && in cpuid_pass_basic()
4584 cpi->cpi_step == 6) { in cpuid_pass_basic()
4589 if (cp->cp_edx & 0x400) { in cpuid_pass_basic()
4590 cp->cp_edx &= ~0x400; in cpuid_pass_basic()
4591 cp->cp_edx |= CPUID_AMD_EDX_SYSC; in cpuid_pass_basic()
4595 platform_cpuid_mangle(cpi->cpi_vendor, 0x80000001, cp); in cpuid_pass_basic()
4600 if (cp->cp_edx & CPUID_AMD_EDX_NX) { in cpuid_pass_basic()
4605 * Regardless of whether or not we boot 64-bit, in cpuid_pass_basic()
4607 * the CPU is capable of running 64-bit. in cpuid_pass_basic()
4609 if (cp->cp_edx & CPUID_AMD_EDX_LM) { in cpuid_pass_basic()
4613 /* 1 GB large page - enable only for 64 bit kernel */ in cpuid_pass_basic()
4614 if (cp->cp_edx & CPUID_AMD_EDX_1GPG) { in cpuid_pass_basic()
4618 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_basic()
4619 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_pass_basic()
4620 (cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_FXSR) && in cpuid_pass_basic()
4621 (cp->cp_ecx & CPUID_AMD_ECX_SSE4A)) { in cpuid_pass_basic()
4628 * instead. In the amd64 kernel, things are -way- in cpuid_pass_basic()
4631 if (cp->cp_edx & CPUID_AMD_EDX_SYSC) { in cpuid_pass_basic()
4645 if (cp->cp_edx & CPUID_AMD_EDX_TSCP) { in cpuid_pass_basic()
4649 if (cp->cp_ecx & CPUID_AMD_ECX_SVM) { in cpuid_pass_basic()
4653 if (cp->cp_ecx & CPUID_AMD_ECX_TOPOEXT) { in cpuid_pass_basic()
4657 if (cp->cp_ecx & CPUID_AMD_ECX_PCEC) { in cpuid_pass_basic()
4661 if (cp->cp_ecx & CPUID_AMD_ECX_XOP) { in cpuid_pass_basic()
4665 if (cp->cp_ecx & CPUID_AMD_ECX_FMA4) { in cpuid_pass_basic()
4669 if (cp->cp_ecx & CPUID_AMD_ECX_TBM) { in cpuid_pass_basic()
4673 if (cp->cp_ecx & CPUID_AMD_ECX_MONITORX) { in cpuid_pass_basic()
4684 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4686 if (cpi->cpi_maxeax >= 4) { in cpuid_pass_basic()
4687 cp = &cpi->cpi_std[4]; in cpuid_pass_basic()
4688 cp->cp_eax = 4; in cpuid_pass_basic()
4689 cp->cp_ecx = 0; in cpuid_pass_basic()
4691 platform_cpuid_mangle(cpi->cpi_vendor, 4, cp); in cpuid_pass_basic()
4696 if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8) in cpuid_pass_basic()
4698 cp = &cpi->cpi_extd[8]; in cpuid_pass_basic()
4699 cp->cp_eax = CPUID_LEAF_EXT_8; in cpuid_pass_basic()
4701 platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8, in cpuid_pass_basic()
4707 if (cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_basic()
4708 cpi->cpi_vendor == X86_VENDOR_HYGON) { in cpuid_pass_basic()
4715 if (cp->cp_ebx & CPUID_AMD_EBX_ERR_PTR_ZERO) { in cpuid_pass_basic()
4716 cpi->cpi_fp_amd_save = 0; in cpuid_pass_basic()
4718 cpi->cpi_fp_amd_save = 1; in cpuid_pass_basic()
4721 if (cp->cp_ebx & CPUID_AMD_EBX_CLZERO) { in cpuid_pass_basic()
4731 cpi->cpi_pabits = BITX(cp->cp_eax, 7, 0); in cpuid_pass_basic()
4732 cpi->cpi_vabits = BITX(cp->cp_eax, 15, 8); in cpuid_pass_basic()
4739 * Get CPUID data about TSC Invariance in Deep C-State. in cpuid_pass_basic()
4741 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4745 if (cpi->cpi_maxeax >= 7) { in cpuid_pass_basic()
4746 cp = &cpi->cpi_extd[7]; in cpuid_pass_basic()
4747 cp->cp_eax = 0x80000007; in cpuid_pass_basic()
4748 cp->cp_ecx = 0; in cpuid_pass_basic()
4767 if (cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_basic()
4768 cpi->cpi_vendor == X86_VENDOR_HYGON) { in cpuid_pass_basic()
4769 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8 && in cpuid_pass_basic()
4770 cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_ERR_PTR_ZERO) { in cpuid_pass_basic()
4772 cpi->cpi_fp_amd_save = 0; in cpuid_pass_basic()
4774 cpi->cpi_fp_amd_save = 1; in cpuid_pass_basic()
4782 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_basic()
4783 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_pass_basic()
4795 if (cpi->cpi_family == 0xf || cpi->cpi_family == 0x11) { in cpuid_pass_basic()
4797 } else if (cpi->cpi_family >= 0x10) { in cpuid_pass_basic()
4823 } else if (cpi->cpi_vendor == X86_VENDOR_Intel && in cpuid_pass_basic()
4835 * any additional processor-specific leaves that we may not have yet. in cpuid_pass_basic()
4837 switch (cpi->cpi_vendor) { in cpuid_pass_basic()
4840 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_21) { in cpuid_pass_basic()
4841 cp = &cpi->cpi_extd[0x21]; in cpuid_pass_basic()
4842 cp->cp_eax = CPUID_LEAF_EXT_21; in cpuid_pass_basic()
4843 cp->cp_ecx = 0; in cpuid_pass_basic()
4870 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_pass_extended()
4872 if (cpi->cpi_maxeax < 1) in cpuid_pass_extended()
4875 if ((nmax = cpi->cpi_maxeax + 1) > NMAX_CPI_STD) in cpuid_pass_extended()
4880 for (n = 2, cp = &cpi->cpi_std[2]; n < nmax; n++, cp++) { in cpuid_pass_extended()
4887 cp->cp_eax = n; in cpuid_pass_extended()
4905 cp->cp_ecx = 0; in cpuid_pass_extended()
4908 platform_cpuid_mangle(cpi->cpi_vendor, n, cp); in cpuid_pass_extended()
4920 cpi->cpi_ncache = sizeof (*cp) * in cpuid_pass_extended()
4921 BITX(cp->cp_eax, 7, 0); in cpuid_pass_extended()
4922 if (cpi->cpi_ncache == 0) in cpuid_pass_extended()
4924 cpi->cpi_ncache--; /* skip count byte */ in cpuid_pass_extended()
4931 if (cpi->cpi_ncache > (sizeof (*cp) - 1)) in cpuid_pass_extended()
4932 cpi->cpi_ncache = sizeof (*cp) - 1; in cpuid_pass_extended()
4934 dp = cpi->cpi_cacheinfo; in cpuid_pass_extended()
4935 if (BITX(cp->cp_eax, 31, 31) == 0) { in cpuid_pass_extended()
4936 uint8_t *p = (void *)&cp->cp_eax; in cpuid_pass_extended()
4941 if (BITX(cp->cp_ebx, 31, 31) == 0) { in cpuid_pass_extended()
4942 uint8_t *p = (void *)&cp->cp_ebx; in cpuid_pass_extended()
4947 if (BITX(cp->cp_ecx, 31, 31) == 0) { in cpuid_pass_extended()
4948 uint8_t *p = (void *)&cp->cp_ecx; in cpuid_pass_extended()
4953 if (BITX(cp->cp_edx, 31, 31) == 0) { in cpuid_pass_extended()
4954 uint8_t *p = (void *)&cp->cp_edx; in cpuid_pass_extended()
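The leaf 2 handling above packs the descriptor bytes of %eax..%edx into cpi_cacheinfo, skipping any register whose bit 31 is set and the low byte of %eax, which is a repeat count rather than a descriptor. A stand-alone sketch of that harvest, again assuming user-space <cpuid.h>:

#include <cpuid.h>
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	unsigned int r[4];

	if (!__get_cpuid(2, &r[0], &r[1], &r[2], &r[3]))
		return (1);

	for (int i = 0; i < 4; i++) {
		if (r[i] & 0x80000000U)		/* bit 31 set: no descriptors */
			continue;
		for (int byte = 0; byte < 4; byte++) {
			uint8_t desc = (r[i] >> (byte * 8)) & 0xff;

			if (i == 0 && byte == 0)	/* %al is the count byte */
				continue;
			if (desc != 0)
				(void) printf("descriptor 0x%02x\n", desc);
		}
	}
	return (0);
}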
4975 if (!(cpi->cpi_mwait.support & MWAIT_SUPPORT)) in cpuid_pass_extended()
4987 "size %ld", cpu->cpu_id, (long)mwait_size); in cpuid_pass_extended()
4992 cpi->cpi_mwait.mon_min = (size_t)MWAIT_SIZE_MIN(cpi); in cpuid_pass_extended()
4993 cpi->cpi_mwait.mon_max = mwait_size; in cpuid_pass_extended()
4995 cpi->cpi_mwait.support |= MWAIT_EXTENSIONS; in cpuid_pass_extended()
4997 cpi->cpi_mwait.support |= in cpuid_pass_extended()
5010 if (cpi->cpi_maxeax >= 0xD) { in cpuid_pass_extended()
5015 cp->cp_eax = 0xD; in cpuid_pass_extended()
5016 cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0; in cpuid_pass_extended()
5023 if ((cp->cp_eax & XFEATURE_LEGACY_FP) == 0 || in cpuid_pass_extended()
5024 (cp->cp_eax & XFEATURE_SSE) == 0) { in cpuid_pass_extended()
5028 cpi->cpi_xsave.xsav_hw_features_low = cp->cp_eax; in cpuid_pass_extended()
5029 cpi->cpi_xsave.xsav_hw_features_high = cp->cp_edx; in cpuid_pass_extended()
5030 cpi->cpi_xsave.xsav_max_size = cp->cp_ecx; in cpuid_pass_extended()
5036 if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_AVX) { in cpuid_pass_extended()
5037 cp->cp_eax = 0xD; in cpuid_pass_extended()
5038 cp->cp_ecx = 2; in cpuid_pass_extended()
5039 cp->cp_edx = cp->cp_ebx = 0; in cpuid_pass_extended()
5043 if (cp->cp_ebx != CPUID_LEAFD_2_YMM_OFFSET || in cpuid_pass_extended()
5044 cp->cp_eax != CPUID_LEAFD_2_YMM_SIZE) { in cpuid_pass_extended()
5048 cpi->cpi_xsave.ymm_size = cp->cp_eax; in cpuid_pass_extended()
5049 cpi->cpi_xsave.ymm_offset = cp->cp_ebx; in cpuid_pass_extended()
5056 if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_MPX) { in cpuid_pass_extended()
5057 cp->cp_eax = 0xD; in cpuid_pass_extended()
5058 cp->cp_ecx = 3; in cpuid_pass_extended()
5059 cp->cp_edx = cp->cp_ebx = 0; in cpuid_pass_extended()
5063 cpi->cpi_xsave.bndregs_size = cp->cp_eax; in cpuid_pass_extended()
5064 cpi->cpi_xsave.bndregs_offset = cp->cp_ebx; in cpuid_pass_extended()
5066 cp->cp_eax = 0xD; in cpuid_pass_extended()
5067 cp->cp_ecx = 4; in cpuid_pass_extended()
5068 cp->cp_edx = cp->cp_ebx = 0; in cpuid_pass_extended()
5072 cpi->cpi_xsave.bndcsr_size = cp->cp_eax; in cpuid_pass_extended()
5073 cpi->cpi_xsave.bndcsr_offset = cp->cp_ebx; in cpuid_pass_extended()
5080 if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_AVX512) { in cpuid_pass_extended()
5081 cp->cp_eax = 0xD; in cpuid_pass_extended()
5082 cp->cp_ecx = 5; in cpuid_pass_extended()
5083 cp->cp_edx = cp->cp_ebx = 0; in cpuid_pass_extended()
5087 cpi->cpi_xsave.opmask_size = cp->cp_eax; in cpuid_pass_extended()
5088 cpi->cpi_xsave.opmask_offset = cp->cp_ebx; in cpuid_pass_extended()
5090 cp->cp_eax = 0xD; in cpuid_pass_extended()
5091 cp->cp_ecx = 6; in cpuid_pass_extended()
5092 cp->cp_edx = cp->cp_ebx = 0; in cpuid_pass_extended()
5096 cpi->cpi_xsave.zmmlo_size = cp->cp_eax; in cpuid_pass_extended()
5097 cpi->cpi_xsave.zmmlo_offset = cp->cp_ebx; in cpuid_pass_extended()
5099 cp->cp_eax = 0xD; in cpuid_pass_extended()
5100 cp->cp_ecx = 7; in cpuid_pass_extended()
5101 cp->cp_edx = cp->cp_ebx = 0; in cpuid_pass_extended()
5105 cpi->cpi_xsave.zmmhi_size = cp->cp_eax; in cpuid_pass_extended()
5106 cpi->cpi_xsave.zmmhi_offset = cp->cp_ebx; in cpuid_pass_extended()
5112 xsave_state_size = cpi->cpi_xsave.xsav_max_size; in cpuid_pass_extended()
5118 cpu->cpu_id, cpi->cpi_xsave.xsav_hw_features_low, in cpuid_pass_extended()
5119 cpi->cpi_xsave.xsav_hw_features_high, in cpuid_pass_extended()
5120 (int)cpi->cpi_xsave.xsav_max_size, in cpuid_pass_extended()
5121 (int)cpi->cpi_xsave.ymm_size, in cpuid_pass_extended()
5122 (int)cpi->cpi_xsave.ymm_offset); in cpuid_pass_extended()
5126 * This must be a non-boot CPU. We cannot in cpuid_pass_extended()
5130 ASSERT(cpu->cpu_id != 0); in cpuid_pass_extended()
5133 "continue.", cpu->cpu_id); in cpuid_pass_extended()
5138 * non-boot CPUs. When we're here on a boot CPU in cpuid_pass_extended()
5139 * we should disable the feature, on a non-boot in cpuid_pass_extended()
5142 if (cpu->cpu_id == 0) { in cpuid_pass_extended()
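Sub-leaves 2 through 7 of leaf 0xD, queried above, report the save-area size (%eax) and offset (%ebx) of the AVX, MPX and AVX-512 state components whose bits are set in the sub-leaf 0 feature mask. A hedged user-space sketch of that enumeration, assuming <cpuid.h>:

#include <cpuid.h>
#include <stdio.h>

int
main(void)
{
	unsigned int a, b, c, d;
	unsigned int xfeat_low;

	/* Sub-leaf 0 %eax: low 32 bits of the supported XFEATURE mask. */
	if (!__get_cpuid_count(0xD, 0, &xfeat_low, &b, &c, &d))
		return (1);

	for (unsigned int comp = 2; comp <= 7; comp++) {
		if ((xfeat_low & (1U << comp)) == 0)
			continue;
		if (!__get_cpuid_count(0xD, comp, &a, &b, &c, &d))
			continue;
		(void) printf("xsave component %u: size %u offset %u\n",
		    comp, a, b);
	}
	return (0);
}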
5210 if ((cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) == 0) in cpuid_pass_extended()
5213 if ((nmax = cpi->cpi_xmaxeax - CPUID_LEAF_EXT_0 + 1) > NMAX_CPI_EXTD) in cpuid_pass_extended()
5220 iptr = (void *)cpi->cpi_brandstr; in cpuid_pass_extended()
5221 for (n = 2, cp = &cpi->cpi_extd[2]; n < nmax; cp++, n++) { in cpuid_pass_extended()
5222 cp->cp_eax = CPUID_LEAF_EXT_0 + n; in cpuid_pass_extended()
5224 platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_0 + n, in cpuid_pass_extended()
5233 *iptr++ = cp->cp_eax; in cpuid_pass_extended()
5234 *iptr++ = cp->cp_ebx; in cpuid_pass_extended()
5235 *iptr++ = cp->cp_ecx; in cpuid_pass_extended()
5236 *iptr++ = cp->cp_edx; in cpuid_pass_extended()
5239 switch (cpi->cpi_vendor) { in cpuid_pass_extended()
5247 if (cpi->cpi_family < 6 || in cpuid_pass_extended()
5248 (cpi->cpi_family == 6 && in cpuid_pass_extended()
5249 cpi->cpi_model < 1)) in cpuid_pass_extended()
5250 cp->cp_eax = 0; in cpuid_pass_extended()
5257 switch (cpi->cpi_vendor) { in cpuid_pass_extended()
5264 if (cpi->cpi_family < 6 || in cpuid_pass_extended()
5265 (cpi->cpi_family == 6 && in cpuid_pass_extended()
5266 cpi->cpi_model < 1)) in cpuid_pass_extended()
5267 cp->cp_eax = cp->cp_ebx = 0; in cpuid_pass_extended()
5273 if (cpi->cpi_family == 6 && in cpuid_pass_extended()
5274 cpi->cpi_model == 3 && in cpuid_pass_extended()
5275 cpi->cpi_step == 0) { in cpuid_pass_extended()
5276 cp->cp_ecx &= 0xffff; in cpuid_pass_extended()
5277 cp->cp_ecx |= 0x400000; in cpuid_pass_extended()
5285 if (cpi->cpi_family != 6) in cpuid_pass_extended()
5292 if (cpi->cpi_model == 7 || in cpuid_pass_extended()
5293 cpi->cpi_model == 8) in cpuid_pass_extended()
5294 cp->cp_ecx = in cpuid_pass_extended()
5295 BITX(cp->cp_ecx, 31, 24) << 16 | in cpuid_pass_extended()
5296 BITX(cp->cp_ecx, 23, 16) << 12 | in cpuid_pass_extended()
5297 BITX(cp->cp_ecx, 15, 8) << 8 | in cpuid_pass_extended()
5298 BITX(cp->cp_ecx, 7, 0); in cpuid_pass_extended()
5302 if (cpi->cpi_model == 9 && cpi->cpi_step == 1) in cpuid_pass_extended()
5303 cp->cp_ecx |= 8 << 12; in cpuid_pass_extended()
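The loop above copies %eax..%edx of extended leaves 0x80000002 through 0x80000004 straight into cpi_brandstr; together they hold the processor brand string, up to 48 bytes. A self-contained user-space equivalent, assuming <cpuid.h>:

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	unsigned int regs[12];
	unsigned int xmax, b, c, d;
	char brand[49];

	if (!__get_cpuid(0x80000000U, &xmax, &b, &c, &d) ||
	    xmax < 0x80000004U)
		return (1);

	/* Each of the three leaves contributes 16 bytes of the string. */
	for (unsigned int i = 0; i < 3; i++) {
		(void) __get_cpuid(0x80000002U + i, &regs[i * 4 + 0],
		    &regs[i * 4 + 1], &regs[i * 4 + 2], &regs[i * 4 + 3]);
	}
	(void) memcpy(brand, regs, sizeof (regs));
	brand[48] = '\0';
	(void) printf("brand string: %s\n", brand);
	return (0);
}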
5327 switch (cpi->cpi_family) { in intel_cpubrand()
5331 switch (cpi->cpi_model) { in intel_cpubrand()
5346 cp = &cpi->cpi_std[2]; /* cache info */ in intel_cpubrand()
5351 tmp = (cp->cp_eax >> (8 * i)) & 0xff; in intel_cpubrand()
5361 tmp = (cp->cp_ebx >> (8 * i)) & 0xff; in intel_cpubrand()
5371 tmp = (cp->cp_ecx >> (8 * i)) & 0xff; in intel_cpubrand()
5381 tmp = (cp->cp_edx >> (8 * i)) & 0xff; in intel_cpubrand()
5391 return (cpi->cpi_model == 5 ? in intel_cpubrand()
5394 return (cpi->cpi_model == 5 ? in intel_cpubrand()
5405 if (cpi->cpi_brandid != 0) { in intel_cpubrand()
5434 sgn = (cpi->cpi_family << 8) | in intel_cpubrand()
5435 (cpi->cpi_model << 4) | cpi->cpi_step; in intel_cpubrand()
5438 if (brand_tbl[i].bt_bid == cpi->cpi_brandid) in intel_cpubrand()
5441 if (sgn == 0x6b1 && cpi->cpi_brandid == 3) in intel_cpubrand()
5443 if (sgn < 0xf13 && cpi->cpi_brandid == 0xb) in intel_cpubrand()
5445 if (sgn < 0xf13 && cpi->cpi_brandid == 0xe) in intel_cpubrand()
5459 switch (cpi->cpi_family) { in amd_cpubrand()
5461 switch (cpi->cpi_model) { in amd_cpubrand()
5468 return ("AMD-K5(r)"); in amd_cpubrand()
5471 return ("AMD-K6(r)"); in amd_cpubrand()
5473 return ("AMD-K6(r)-2"); in amd_cpubrand()
5475 return ("AMD-K6(r)-III"); in amd_cpubrand()
5480 switch (cpi->cpi_model) { in amd_cpubrand()
5482 return ("AMD-K7(tm)"); in amd_cpubrand()
5496 return ((cpi->cpi_extd[6].cp_ecx >> 16) >= 256 ? in amd_cpubrand()
5505 if (cpi->cpi_family == 0xf && cpi->cpi_model == 5 && in amd_cpubrand()
5506 cpi->cpi_brandid != 0) { in amd_cpubrand()
5507 switch (BITX(cpi->cpi_brandid, 7, 5)) { in amd_cpubrand()
5546 if (cpi->cpi_family == 4 && cpi->cpi_model == 9) in cyrix_cpubrand()
5548 else if (cpi->cpi_family == 5) { in cyrix_cpubrand()
5549 switch (cpi->cpi_model) { in cyrix_cpubrand()
5557 } else if (cpi->cpi_family == 6) { in cyrix_cpubrand()
5558 switch (cpi->cpi_model) { in cyrix_cpubrand()
5586 switch (cpi->cpi_vendor) { in fabricate_brandstr()
5597 if (cpi->cpi_family == 5 && cpi->cpi_model == 0) in fabricate_brandstr()
5601 if (cpi->cpi_family == 5) in fabricate_brandstr()
5602 switch (cpi->cpi_model) { in fabricate_brandstr()
5617 if (cpi->cpi_family == 5 && in fabricate_brandstr()
5618 (cpi->cpi_model == 0 || cpi->cpi_model == 2)) in fabricate_brandstr()
5622 if (cpi->cpi_family == 5 && cpi->cpi_model == 0) in fabricate_brandstr()
5626 if (cpi->cpi_family == 5 && cpi->cpi_model == 4) in fabricate_brandstr()
5635 (void) strcpy((char *)cpi->cpi_brandstr, brand); in fabricate_brandstr()
5642 (void) snprintf(cpi->cpi_brandstr, sizeof (cpi->cpi_brandstr), in fabricate_brandstr()
5643 "%s %d.%d.%d", cpi->cpi_vendorstr, cpi->cpi_family, in fabricate_brandstr()
5644 cpi->cpi_model, cpi->cpi_step); in fabricate_brandstr()
5662 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_pass_dynamic()
5678 cpi->cpi_ncpu_shr_last_cache = 1; in cpuid_pass_dynamic()
5679 cpi->cpi_last_lvl_cacheid = cpu->cpu_id; in cpuid_pass_dynamic()
5681 if ((cpi->cpi_maxeax >= 4 && cpi->cpi_vendor == X86_VENDOR_Intel) || in cpuid_pass_dynamic()
5682 ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_pass_dynamic()
5683 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_pass_dynamic()
5684 cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1d && in cpuid_pass_dynamic()
5688 if (cpi->cpi_vendor == X86_VENDOR_Intel) { in cpuid_pass_dynamic()
5701 cp->cp_eax = leaf; in cpuid_pass_dynamic()
5702 cp->cp_ecx = i; in cpuid_pass_dynamic()
5711 cpi->cpi_ncpu_shr_last_cache = in cpuid_pass_dynamic()
5715 cpi->cpi_cache_leaf_size = size = i; in cpuid_pass_dynamic()
5723 cpi->cpi_cache_leaves = in cpuid_pass_dynamic()
5725 if (cpi->cpi_vendor == X86_VENDOR_Intel) { in cpuid_pass_dynamic()
5726 cpi->cpi_cache_leaves[0] = &cpi->cpi_std[4]; in cpuid_pass_dynamic()
5728 cpi->cpi_cache_leaves[0] = &cpi->cpi_extd[0x1d]; in cpuid_pass_dynamic()
5739 cp = cpi->cpi_cache_leaves[i] = in cpuid_pass_dynamic()
5741 cp->cp_eax = leaf; in cpuid_pass_dynamic()
5742 cp->cp_ecx = i; in cpuid_pass_dynamic()
5755 for (i = 1; i < cpi->cpi_ncpu_shr_last_cache; i <<= 1) in cpuid_pass_dynamic()
5757 cpi->cpi_last_lvl_cacheid = cpi->cpi_apicid >> shft; in cpuid_pass_dynamic()
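The last-level cache id above is derived by rounding the number of sharing CPUs up to a power of two and shifting the APIC id right by that many bits, so every sharer collapses onto the same id. A stand-alone illustration of the same computation:

#include <stdio.h>
#include <stdint.h>

static uint32_t
last_lvl_cacheid(uint32_t apicid, uint32_t ncpu_sharing)
{
	uint32_t shft = 0;

	/* Round the sharer count up to a power of two. */
	for (uint32_t i = 1; i < ncpu_sharing; i <<= 1)
		shft++;
	return (apicid >> shft);
}

int
main(void)
{
	/* 12 sharers round up to 16, so 4 APIC-id bits are dropped. */
	(void) printf("cacheid = %u\n", last_lvl_cacheid(0x35, 12));
	return (0);
}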
5763 if ((cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) == 0) { in cpuid_pass_dynamic()
5772 if (cpi->cpi_brandstr[0]) { in cpuid_pass_dynamic()
5773 size_t maxlen = sizeof (cpi->cpi_brandstr); in cpuid_pass_dynamic()
5776 dst = src = (char *)cpi->cpi_brandstr; in cpuid_pass_dynamic()
5777 src[maxlen - 1] = '\0'; in cpuid_pass_dynamic()
5792 * Now do an in-place copy. in cpuid_pass_dynamic()
5795 * -really- no need to shout. in cpuid_pass_dynamic()
5819 while (--dst > cpi->cpi_brandstr) in cpuid_pass_dynamic()
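The in-place copy above removes leading blanks and collapses interior runs of whitespace in the brand string without a scratch buffer. An illustrative re-implementation of that cleanup (not the kernel routine itself):

#include <ctype.h>
#include <stdio.h>

static void
squeeze_spaces(char *s)
{
	char *src = s, *dst = s;

	while (isspace((unsigned char)*src))	/* drop leading blanks */
		src++;
	while (*src != '\0') {
		if (isspace((unsigned char)*src)) {
			while (isspace((unsigned char)*src))
				src++;
			if (*src != '\0')	/* trailing blanks vanish */
				*dst++ = ' ';
		} else {
			*dst++ = *src++;
		}
	}
	*dst = '\0';
}

int
main(void)
{
	char buf[] = "  Intel(R)   Xeon(R)  CPU   ";

	squeeze_spaces(buf);
	(void) printf("[%s]\n", buf);
	return (0);
}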
5925 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_pass_resolve()
5953 if (cpi->cpi_maxeax >= 1) { in cpuid_pass_resolve()
5954 uint32_t *edx = &cpi->cpi_support[STD_EDX_FEATURES]; in cpuid_pass_resolve()
5955 uint32_t *ecx = &cpi->cpi_support[STD_ECX_FEATURES]; in cpuid_pass_resolve()
5986 if (cpi->cpi_xmaxeax < 0x80000001) in cpuid_pass_resolve()
5989 switch (cpi->cpi_vendor) { in cpuid_pass_resolve()
5995 * here to make the initial crop of 64-bit OS's work. in cpuid_pass_resolve()
6003 edx = &cpi->cpi_support[AMD_EDX_FEATURES]; in cpuid_pass_resolve()
6004 ecx = &cpi->cpi_support[AMD_ECX_FEATURES]; in cpuid_pass_resolve()
6028 switch (cpi->cpi_vendor) { in cpuid_pass_resolve()
6078 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_insn()
6086 if (cp->cp_eax <= cpi->cpi_maxeax && cp->cp_eax < NMAX_CPI_STD) { in cpuid_insn()
6087 xcp = &cpi->cpi_std[cp->cp_eax]; in cpuid_insn()
6088 } else if (cp->cp_eax >= CPUID_LEAF_EXT_0 && in cpuid_insn()
6089 cp->cp_eax <= cpi->cpi_xmaxeax && in cpuid_insn()
6090 cp->cp_eax < CPUID_LEAF_EXT_0 + NMAX_CPI_EXTD) { in cpuid_insn()
6091 xcp = &cpi->cpi_extd[cp->cp_eax - CPUID_LEAF_EXT_0]; in cpuid_insn()
6101 cp->cp_eax = xcp->cp_eax; in cpuid_insn()
6102 cp->cp_ebx = xcp->cp_ebx; in cpuid_insn()
6103 cp->cp_ecx = xcp->cp_ecx; in cpuid_insn()
6104 cp->cp_edx = xcp->cp_edx; in cpuid_insn()
6105 return (cp->cp_eax); in cpuid_insn()
6111 return (cpu != NULL && cpu->cpu_m.mcpu_cpi != NULL && in cpuid_checkpass()
6112 cpu->cpu_m.mcpu_cpi->cpi_pass >= pass); in cpuid_checkpass()
6120 return (snprintf(s, n, "%s", cpu->cpu_m.mcpu_cpi->cpi_brandstr)); in cpuid_getbrandstr()
6131 return (cpu->cpu_m.mcpu_cpi->cpi_chipid >= 0); in cpuid_is_cmt()
6135 * AMD and Intel both implement the 64-bit variant of the syscall
6136 * instruction (syscallq), so if there's -any- support for syscall,
6139 * However, Intel decided to -not- implement the 32-bit variant of the
6143 * XXPV Currently, 32-bit syscall instructions don't work via the hypervisor,
6158 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_syscall32_insn()
6160 if ((cpi->cpi_vendor == X86_VENDOR_AMD || in cpuid_syscall32_insn()
6161 cpi->cpi_vendor == X86_VENDOR_HYGON) && in cpuid_syscall32_insn()
6162 cpi->cpi_xmaxeax >= 0x80000001 && in cpuid_syscall32_insn()
6173 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_getidstr()
6183 return (snprintf(s, n, fmt_ht, cpi->cpi_chipid, in cpuid_getidstr()
6184 cpi->cpi_vendorstr, cpi->cpi_std[1].cp_eax, in cpuid_getidstr()
6185 cpi->cpi_family, cpi->cpi_model, in cpuid_getidstr()
6186 cpi->cpi_step, cpu->cpu_type_info.pi_clock)); in cpuid_getidstr()
6188 cpi->cpi_vendorstr, cpi->cpi_std[1].cp_eax, in cpuid_getidstr()
6189 cpi->cpi_family, cpi->cpi_model, in cpuid_getidstr()
6190 cpi->cpi_step, cpu->cpu_type_info.pi_clock)); in cpuid_getidstr()
6197 return ((const char *)cpu->cpu_m.mcpu_cpi->cpi_vendorstr); in cpuid_getvendorstr()
6204 return (cpu->cpu_m.mcpu_cpi->cpi_vendor); in cpuid_getvendor()
6211 return (cpu->cpu_m.mcpu_cpi->cpi_family); in cpuid_getfamily()
6218 return (cpu->cpu_m.mcpu_cpi->cpi_model); in cpuid_getmodel()
6225 return (cpu->cpu_m.mcpu_cpi->cpi_ncpu_per_chip); in cpuid_get_ncpu_per_chip()
6232 return (cpu->cpu_m.mcpu_cpi->cpi_ncore_per_chip); in cpuid_get_ncore_per_chip()
6239 return (cpu->cpu_m.mcpu_cpi->cpi_ncpu_shr_last_cache); in cpuid_get_ncpu_sharing_last_cache()
6246 return (cpu->cpu_m.mcpu_cpi->cpi_last_lvl_cacheid); in cpuid_get_last_lvl_cacheid()
6253 return (cpu->cpu_m.mcpu_cpi->cpi_step); in cpuid_getstep()
6260 return (cpu->cpu_m.mcpu_cpi->cpi_std[1].cp_eax); in cpuid_getsig()
6267 return (cpu->cpu_m.mcpu_cpi->cpi_chiprev); in cpuid_getchiprev()
6274 return (cpu->cpu_m.mcpu_cpi->cpi_chiprevstr); in cpuid_getchiprevstr()
6281 return (cpu->cpu_m.mcpu_cpi->cpi_socket); in cpuid_getsockettype()
6291 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_getsocketstr()
6295 socketstr = _cpuid_sktstr(cpi->cpi_vendor, cpi->cpi_family, in cpuid_getsocketstr()
6296 cpi->cpi_model, cpi->cpi_step); in cpuid_getsocketstr()
6305 return (cpu->cpu_m.mcpu_cpi->cpi_uarchrev); in cpuid_getuarchrev()
6314 return (cpu->cpu_m.mcpu_cpi->cpi_chipid); in cpuid_get_chipid()
6315 return (cpu->cpu_id); in cpuid_get_chipid()
6322 return (cpu->cpu_m.mcpu_cpi->cpi_coreid); in cpuid_get_coreid()
6329 return (cpu->cpu_m.mcpu_cpi->cpi_pkgcoreid); in cpuid_get_pkgcoreid()
6336 return (cpu->cpu_m.mcpu_cpi->cpi_clogid); in cpuid_get_clogid()
6343 return (cpu->cpu_m.mcpu_cpi->cpi_last_lvl_cacheid); in cpuid_get_cacheid()
6350 return (cpu->cpu_m.mcpu_cpi->cpi_procnodeid); in cpuid_get_procnodeid()
6357 return (cpu->cpu_m.mcpu_cpi->cpi_procnodes_per_pkg); in cpuid_get_procnodes_per_pkg()
6364 return (cpu->cpu_m.mcpu_cpi->cpi_compunitid); in cpuid_get_compunitid()
6371 return (cpu->cpu_m.mcpu_cpi->cpi_cores_per_compunit); in cpuid_get_cores_per_compunit()
6378 if (cpu->cpu_m.mcpu_cpi->cpi_maxeax < 1) { in cpuid_get_apicid()
6381 return (cpu->cpu_m.mcpu_cpi->cpi_apicid); in cpuid_get_apicid()
6392 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_get_addrsize()
6397 *pabits = cpi->cpi_pabits; in cpuid_get_addrsize()
6399 *vabits = cpi->cpi_vabits; in cpuid_get_addrsize()
6481 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_get_dtlb_nent()
6488 if (cpi->cpi_xmaxeax >= 0x80000006) { in cpuid_get_dtlb_nent()
6489 struct cpuid_regs *cp = &cpi->cpi_extd[6]; in cpuid_get_dtlb_nent()
6498 if ((cp->cp_ebx & 0xffff0000) == 0) in cpuid_get_dtlb_nent()
6499 dtlb_nent = cp->cp_ebx & 0x0000ffff; in cpuid_get_dtlb_nent()
6501 dtlb_nent = BITX(cp->cp_ebx, 27, 16); in cpuid_get_dtlb_nent()
6505 if ((cp->cp_eax & 0xffff0000) == 0) in cpuid_get_dtlb_nent()
6506 dtlb_nent = cp->cp_eax & 0x0000ffff; in cpuid_get_dtlb_nent()
6508 dtlb_nent = BITX(cp->cp_eax, 27, 16); in cpuid_get_dtlb_nent()
6523 if (cpi->cpi_xmaxeax >= 0x80000005) { in cpuid_get_dtlb_nent()
6524 struct cpuid_regs *cp = &cpi->cpi_extd[5]; in cpuid_get_dtlb_nent()
6528 dtlb_nent = BITX(cp->cp_ebx, 23, 16); in cpuid_get_dtlb_nent()
6531 dtlb_nent = BITX(cp->cp_eax, 23, 16); in cpuid_get_dtlb_nent()
6534 panic("unknown L1 d-TLB pagesize"); in cpuid_get_dtlb_nent()
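The BITX() extractions used throughout pull an inclusive bit field [high:low] out of a 32-bit CPUID register. A small stand-alone equivalent (the kernel macro is assumed to behave the same way; fields narrower than 32 bits only):

#include <stdio.h>
#include <stdint.h>

static uint32_t
bitx(uint32_t u, int high, int low)
{
	return ((u >> low) & ((1U << (high - low + 1)) - 1));
}

int
main(void)
{
	/* e.g. leaf 0x80000006 %ebx holds 4K-page D-TLB entries in 27:16 */
	uint32_t ebx = 0x12345678;

	(void) printf("bits 27:16 = 0x%x\n", bitx(ebx, 27, 16));
	return (0);
}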
6552 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_opteron_erratum()
6557 * a legacy (32-bit) AMD CPU. in cpuid_opteron_erratum()
6559 if (cpi->cpi_vendor != X86_VENDOR_AMD || in cpuid_opteron_erratum()
6560 cpi->cpi_family == 4 || cpi->cpi_family == 5 || in cpuid_opteron_erratum()
6561 cpi->cpi_family == 6) { in cpuid_opteron_erratum()
6565 eax = cpi->cpi_std[1].cp_eax; in cpuid_opteron_erratum()
6605 return (cpi->cpi_family < 0x10); in cpuid_opteron_erratum()
6611 return (cpi->cpi_family <= 0x11); in cpuid_opteron_erratum()
6615 return (cpi->cpi_family <= 0x11); in cpuid_opteron_erratum()
6632 return (cpi->cpi_family < 0x10); in cpuid_opteron_erratum()
6636 return (cpi->cpi_family <= 0x11); in cpuid_opteron_erratum()
6648 return (cpi->cpi_family < 0x10); in cpuid_opteron_erratum()
6654 return (cpi->cpi_family < 0x10); in cpuid_opteron_erratum()
6710 return (cpi->cpi_family < 0x10 || cpi->cpi_family == 0x11); in cpuid_opteron_erratum()
6714 return (cpi->cpi_family < 0x10); in cpuid_opteron_erratum()
6723 * Our current fix for this is to disable the C1-Clock ramping. in cpuid_opteron_erratum()
6727 if (cpi->cpi_family >= 0x12 || get_hwenv() != HW_NATIVE) { in cpuid_opteron_erratum()
6742 if (cpi->cpi_family >= 0x10) { in cpuid_opteron_erratum()
6750 * check for processors (pre-Shanghai) that do not provide in cpuid_opteron_erratum()
6753 return (cpi->cpi_family == 0x10 && cpi->cpi_model < 4); in cpuid_opteron_erratum()
6760 return (cpi->cpi_family == 0x10 || cpi->cpi_family == 0x12); in cpuid_opteron_erratum()
6763 return (-1); in cpuid_opteron_erratum()
6770 * Return 1 if erratum is present, 0 if not present and -1 if indeterminate.
6777 static int osvwfeature = -1; in osvw_opteron_erratum()
6781 cpi = cpu->cpu_m.mcpu_cpi; in osvw_opteron_erratum()
6784 if (osvwfeature == -1) { in osvw_opteron_erratum()
6785 osvwfeature = cpi->cpi_extd[1].cp_ecx & CPUID_AMD_ECX_OSVW; in osvw_opteron_erratum()
6789 (cpi->cpi_extd[1].cp_ecx & CPUID_AMD_ECX_OSVW)); in osvw_opteron_erratum()
6792 return (-1); in osvw_opteron_erratum()
6801 return (-1); in osvw_opteron_erratum()
6807 * 0 - fixed by HW in osvw_opteron_erratum()
6808 * 1 - BIOS has applied the workaround when BIOS in osvw_opteron_erratum()
6827 return (-1); in osvw_opteron_erratum()
6832 static const char line_str[] = "line-size";
6845 if (snprintf(buf, sizeof (buf), "%s-%s", label, type) < sizeof (buf)) in add_cache_prop()
6850 * Intel-style cache/tlb description
6857 static const char l1_icache_str[] = "l1-icache";
6858 static const char l1_dcache_str[] = "l1-dcache";
6859 static const char l2_cache_str[] = "l2-cache";
6860 static const char l3_cache_str[] = "l3-cache";
6861 static const char itlb4k_str[] = "itlb-4K";
6862 static const char dtlb4k_str[] = "dtlb-4K";
6863 static const char itlb2M_str[] = "itlb-2M";
6864 static const char itlb4M_str[] = "itlb-4M";
6865 static const char dtlb4M_str[] = "dtlb-4M";
6866 static const char dtlb24_str[] = "dtlb0-2M-4M";
6867 static const char itlb424_str[] = "itlb-4K-2M-4M";
6868 static const char itlb24_str[] = "itlb-2M-4M";
6869 static const char dtlb44_str[] = "dtlb-4K-4M";
6870 static const char sl1_dcache_str[] = "sectored-l1-dcache";
6871 static const char sl2_cache_str[] = "sectored-l2-cache";
6872 static const char itrace_str[] = "itrace-cache";
6873 static const char sl3_cache_str[] = "sectored-l3-cache";
6874 static const char sh_l2_tlb4k_str[] = "shared-l2-tlb-4k";
6886 * Codes ignored - Reason
6887 * ----------------------
6888 * 40H - intel_cpuid_4_cache_info() disambiguates l2/l3 cache
6889 * f0H/f1H - Currently we do not interpret prefetch size by design
6986 { 0x70, 4, 0, 32, "tlb-4K" },
6987 { 0x80, 4, 16, 16*1024, "l1-cache" },
6998 for (; ct->ct_code != 0; ct++) in find_cacheent()
6999 if (ct->ct_code <= code) in find_cacheent()
7001 if (ct->ct_code == code) in find_cacheent()
7008 * Populate cachetab entry with L2 or L3 cache-information using
7019 for (i = 0; i < cpi->cpi_cache_leaf_size; i++) { in intel_cpuid_4_cache_info()
7020 level = CPI_CACHE_LVL(cpi->cpi_cache_leaves[i]); in intel_cpuid_4_cache_info()
7023 ct->ct_assoc = in intel_cpuid_4_cache_info()
7024 CPI_CACHE_WAYS(cpi->cpi_cache_leaves[i]) + 1; in intel_cpuid_4_cache_info()
7025 ct->ct_line_size = in intel_cpuid_4_cache_info()
7026 CPI_CACHE_COH_LN_SZ(cpi->cpi_cache_leaves[i]) + 1; in intel_cpuid_4_cache_info()
7027 ct->ct_size = ct->ct_assoc * in intel_cpuid_4_cache_info()
7028 (CPI_CACHE_PARTS(cpi->cpi_cache_leaves[i]) + 1) * in intel_cpuid_4_cache_info()
7029 ct->ct_line_size * in intel_cpuid_4_cache_info()
7030 (cpi->cpi_cache_leaves[i]->cp_ecx + 1); in intel_cpuid_4_cache_info()
7033 ct->ct_label = l2_cache_str; in intel_cpuid_4_cache_info()
7035 ct->ct_label = l3_cache_str; in intel_cpuid_4_cache_info()
7046 * The walk is terminated if the walker returns non-zero.
7057 if ((dp = cpi->cpi_cacheinfo) == NULL) in intel_walk_cacheinfo()
7059 for (i = 0; i < cpi->cpi_ncache; i++, dp++) { in intel_walk_cacheinfo()
7067 if (*dp == 0x49 && cpi->cpi_maxeax >= 0x4 && in intel_walk_cacheinfo()
7105 if ((dp = cpi->cpi_cacheinfo) == NULL) in cyrix_walk_cacheinfo()
7107 for (i = 0; i < cpi->cpi_ncache; i++, dp++) { in cyrix_walk_cacheinfo()
7109 * Search Cyrix-specific descriptor table first .. in cyrix_walk_cacheinfo()
7128 * A cacheinfo walker that adds associativity, line-size, and size properties
7136 add_cache_prop(devi, ct->ct_label, assoc_str, ct->ct_assoc); in add_cacheent_props()
7137 if (ct->ct_line_size != 0) in add_cacheent_props()
7138 add_cache_prop(devi, ct->ct_label, line_str, in add_cacheent_props()
7139 ct->ct_line_size); in add_cacheent_props()
7140 add_cache_prop(devi, ct->ct_label, size_str, ct->ct_size); in add_cacheent_props()
7145 static const char fully_assoc[] = "fully-associative?";
7187 * associated with a tag. For example, the AMD K6-III has a sector in add_amd_cache()
7191 add_cache_prop(devi, label, "lines-per-tag", lines_per_tag); in add_amd_cache()
7238 add_cache_prop(devi, label, "lines-per-tag", lines_per_tag); in add_amd_l2_cache()
7248 if (cpi->cpi_xmaxeax < 0x80000005) in amd_cache_info()
7250 cp = &cpi->cpi_extd[5]; in amd_cache_info()
7258 add_amd_tlb(devi, "dtlb-2M", in amd_cache_info()
7259 BITX(cp->cp_eax, 31, 24), BITX(cp->cp_eax, 23, 16)); in amd_cache_info()
7260 add_amd_tlb(devi, "itlb-2M", in amd_cache_info()
7261 BITX(cp->cp_eax, 15, 8), BITX(cp->cp_eax, 7, 0)); in amd_cache_info()
7267 switch (cpi->cpi_vendor) { in amd_cache_info()
7270 if (cpi->cpi_family >= 5) { in amd_cache_info()
7276 if ((nentries = BITX(cp->cp_ebx, 23, 16)) == 255) in amd_cache_info()
7281 add_amd_tlb(devi, "tlb-4K", BITX(cp->cp_ebx, 31, 24), in amd_cache_info()
7288 BITX(cp->cp_ebx, 31, 24), BITX(cp->cp_ebx, 23, 16)); in amd_cache_info()
7290 BITX(cp->cp_ebx, 15, 8), BITX(cp->cp_ebx, 7, 0)); in amd_cache_info()
7299 BITX(cp->cp_ecx, 31, 24), BITX(cp->cp_ecx, 23, 16), in amd_cache_info()
7300 BITX(cp->cp_ecx, 15, 8), BITX(cp->cp_ecx, 7, 0)); in amd_cache_info()
7307 BITX(cp->cp_edx, 31, 24), BITX(cp->cp_edx, 23, 16), in amd_cache_info()
7308 BITX(cp->cp_edx, 15, 8), BITX(cp->cp_edx, 7, 0)); in amd_cache_info()
7310 if (cpi->cpi_xmaxeax < 0x80000006) in amd_cache_info()
7312 cp = &cpi->cpi_extd[6]; in amd_cache_info()
7316 if (BITX(cp->cp_eax, 31, 16) == 0) in amd_cache_info()
7317 add_amd_l2_tlb(devi, "l2-tlb-2M", in amd_cache_info()
7318 BITX(cp->cp_eax, 15, 12), BITX(cp->cp_eax, 11, 0)); in amd_cache_info()
7320 add_amd_l2_tlb(devi, "l2-dtlb-2M", in amd_cache_info()
7321 BITX(cp->cp_eax, 31, 28), BITX(cp->cp_eax, 27, 16)); in amd_cache_info()
7322 add_amd_l2_tlb(devi, "l2-itlb-2M", in amd_cache_info()
7323 BITX(cp->cp_eax, 15, 12), BITX(cp->cp_eax, 11, 0)); in amd_cache_info()
7328 if (BITX(cp->cp_ebx, 31, 16) == 0) { in amd_cache_info()
7329 add_amd_l2_tlb(devi, "l2-tlb-4K", in amd_cache_info()
7330 BITX(cp->cp_eax, 15, 12), BITX(cp->cp_eax, 11, 0)); in amd_cache_info()
7332 add_amd_l2_tlb(devi, "l2-dtlb-4K", in amd_cache_info()
7333 BITX(cp->cp_eax, 31, 28), BITX(cp->cp_eax, 27, 16)); in amd_cache_info()
7334 add_amd_l2_tlb(devi, "l2-itlb-4K", in amd_cache_info()
7335 BITX(cp->cp_eax, 15, 12), BITX(cp->cp_eax, 11, 0)); in amd_cache_info()
7339 BITX(cp->cp_ecx, 31, 16), BITX(cp->cp_ecx, 15, 12), in amd_cache_info()
7340 BITX(cp->cp_ecx, 11, 8), BITX(cp->cp_ecx, 7, 0)); in amd_cache_info()
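amd_cache_info() above decodes the L1 descriptors from leaf 0x80000005: %ecx describes the L1 D-cache and %edx the L1 I-cache, each as size in KB (31:24), associativity (23:16), lines per tag (15:8) and line size in bytes (7:0). A hedged user-space sketch of the D-cache decode, assuming <cpuid.h>:

#include <cpuid.h>
#include <stdio.h>

int
main(void)
{
	unsigned int a, b, c, d, xmax;

	if (!__get_cpuid(0x80000000U, &xmax, &b, &c, &d) ||
	    xmax < 0x80000005U ||
	    !__get_cpuid(0x80000005U, &a, &b, &c, &d))
		return (1);

	(void) printf("L1D: %u KB, %u-way, %u lines/tag, %u byte lines\n",
	    (c >> 24) & 0xff, (c >> 16) & 0xff, (c >> 8) & 0xff, c & 0xff);
	return (0);
}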
7345 * and tlb architecture - Intel's way and AMD's way.
7352 switch (cpi->cpi_vendor) { in x86_which_cacheinfo()
7354 if (cpi->cpi_maxeax >= 2) in x86_which_cacheinfo()
7362 if (cpi->cpi_family > 5 || in x86_which_cacheinfo()
7363 (cpi->cpi_family == 5 && cpi->cpi_model >= 1)) in x86_which_cacheinfo()
7369 if (cpi->cpi_family >= 5) in x86_which_cacheinfo()
7375 * then we assume they have AMD-format cache in x86_which_cacheinfo()
7379 * then try our Cyrix-specific handler. in x86_which_cacheinfo()
7382 * table-driven format instead. in x86_which_cacheinfo()
7384 if (cpi->cpi_xmaxeax >= 0x80000005) in x86_which_cacheinfo()
7386 else if (cpi->cpi_vendor == X86_VENDOR_Cyrix) in x86_which_cacheinfo()
7388 else if (cpi->cpi_maxeax >= 2) in x86_which_cacheinfo()
7392 return (-1); in x86_which_cacheinfo()
7412 /* cpu-mhz, and clock-frequency */ in cpuid_set_cpu_properties()
7417 "cpu-mhz", cpu_freq); in cpuid_set_cpu_properties()
7420 "clock-frequency", (int)mul); in cpuid_set_cpu_properties()
7425 /* vendor-id */ in cpuid_set_cpu_properties()
7427 "vendor-id", cpi->cpi_vendorstr); in cpuid_set_cpu_properties()
7429 if (cpi->cpi_maxeax == 0) { in cpuid_set_cpu_properties()
7439 "cpu-model", CPI_MODEL(cpi)); in cpuid_set_cpu_properties()
7441 "stepping-id", CPI_STEP(cpi)); in cpuid_set_cpu_properties()
7444 switch (cpi->cpi_vendor) { in cpuid_set_cpu_properties()
7456 /* ext-family */ in cpuid_set_cpu_properties()
7457 switch (cpi->cpi_vendor) { in cpuid_set_cpu_properties()
7460 create = cpi->cpi_family >= 0xf; in cpuid_set_cpu_properties()
7471 "ext-family", CPI_FAMILY_XTD(cpi)); in cpuid_set_cpu_properties()
7473 /* ext-model */ in cpuid_set_cpu_properties()
7474 switch (cpi->cpi_vendor) { in cpuid_set_cpu_properties()
7490 "ext-model", CPI_MODEL_XTD(cpi)); in cpuid_set_cpu_properties()
7493 switch (cpi->cpi_vendor) { in cpuid_set_cpu_properties()
7499 create = cpi->cpi_xmaxeax >= 0x80000001; in cpuid_set_cpu_properties()
7507 "generation", BITX((cpi)->cpi_extd[1].cp_eax, 11, 8)); in cpuid_set_cpu_properties()
7509 /* brand-id */ in cpuid_set_cpu_properties()
7510 switch (cpi->cpi_vendor) { in cpuid_set_cpu_properties()
7516 create = cpi->cpi_family > 6 || in cpuid_set_cpu_properties()
7517 (cpi->cpi_family == 6 && cpi->cpi_model >= 8); in cpuid_set_cpu_properties()
7520 create = cpi->cpi_family >= 0xf; in cpuid_set_cpu_properties()
7529 if (create && cpi->cpi_brandid != 0) { in cpuid_set_cpu_properties()
7531 "brand-id", cpi->cpi_brandid); in cpuid_set_cpu_properties()
7534 /* chunks, and apic-id */ in cpuid_set_cpu_properties()
7535 switch (cpi->cpi_vendor) { in cpuid_set_cpu_properties()
7540 create = IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf; in cpuid_set_cpu_properties()
7543 create = cpi->cpi_family >= 0xf; in cpuid_set_cpu_properties()
7556 "apic-id", cpi->cpi_apicid); in cpuid_set_cpu_properties()
7557 if (cpi->cpi_chipid >= 0) { in cpuid_set_cpu_properties()
7559 "chip#", cpi->cpi_chipid); in cpuid_set_cpu_properties()
7561 "clog#", cpi->cpi_clogid); in cpuid_set_cpu_properties()
7565 /* cpuid-features */ in cpuid_set_cpu_properties()
7567 "cpuid-features", CPI_FEATURES_EDX(cpi)); in cpuid_set_cpu_properties()
7570 /* cpuid-features-ecx */ in cpuid_set_cpu_properties()
7571 switch (cpi->cpi_vendor) { in cpuid_set_cpu_properties()
7573 create = IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf; in cpuid_set_cpu_properties()
7576 create = cpi->cpi_family >= 0xf; in cpuid_set_cpu_properties()
7587 "cpuid-features-ecx", CPI_FEATURES_ECX(cpi)); in cpuid_set_cpu_properties()
7589 /* ext-cpuid-features */ in cpuid_set_cpu_properties()
7590 switch (cpi->cpi_vendor) { in cpuid_set_cpu_properties()
7597 create = cpi->cpi_xmaxeax >= 0x80000001; in cpuid_set_cpu_properties()
7605 "ext-cpuid-features", CPI_FEATURES_XTD_EDX(cpi)); in cpuid_set_cpu_properties()
7607 "ext-cpuid-features-ecx", CPI_FEATURES_XTD_ECX(cpi)); in cpuid_set_cpu_properties()
7614 * say -something- about the processor, however lame. in cpuid_set_cpu_properties()
7617 "brand-string", cpi->cpi_brandstr); in cpuid_set_cpu_properties()
7645 * A cacheinfo walker that fetches the size, line-size and associativity
7654 if (ct->ct_label != l2_cache_str && in intel_l2cinfo()
7655 ct->ct_label != sl2_cache_str) in intel_l2cinfo()
7656 return (0); /* not an L2 -- keep walking */ in intel_l2cinfo()
7658 if ((ip = l2i->l2i_csz) != NULL) in intel_l2cinfo()
7659 *ip = ct->ct_size; in intel_l2cinfo()
7660 if ((ip = l2i->l2i_lsz) != NULL) in intel_l2cinfo()
7661 *ip = ct->ct_line_size; in intel_l2cinfo()
7662 if ((ip = l2i->l2i_assoc) != NULL) in intel_l2cinfo()
7663 *ip = ct->ct_assoc; in intel_l2cinfo()
7664 l2i->l2i_ret = ct->ct_size; in intel_l2cinfo()
7665 return (1); /* was an L2 -- terminate walk */ in intel_l2cinfo()
7675 * -1 is undefined. 0 is fully associative.
7679 {-1, 1, 2, -1, 4, -1, 8, -1, 16, -1, 32, 48, 64, 96, 128, 0};
7689 if (cpi->cpi_xmaxeax < 0x80000006) in amd_l2cacheinfo()
7691 cp = &cpi->cpi_extd[6]; in amd_l2cacheinfo()
7693 if ((i = BITX(cp->cp_ecx, 15, 12)) != 0 && in amd_l2cacheinfo()
7694 (size = BITX(cp->cp_ecx, 31, 16)) != 0) { in amd_l2cacheinfo()
7698 ASSERT(assoc != -1); in amd_l2cacheinfo()
7700 if ((ip = l2i->l2i_csz) != NULL) in amd_l2cacheinfo()
7702 if ((ip = l2i->l2i_lsz) != NULL) in amd_l2cacheinfo()
7703 *ip = BITX(cp->cp_ecx, 7, 0); in amd_l2cacheinfo()
7704 if ((ip = l2i->l2i_assoc) != NULL) in amd_l2cacheinfo()
7706 l2i->l2i_ret = cachesz; in amd_l2cacheinfo()
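amd_l2cacheinfo() above reads leaf 0x80000006 %ecx: the L2 size in KB in bits 31:16, an encoded associativity in bits 15:12 that indexes the table shown earlier (-1 meaning undefined, 0 meaning fully associative), and the line size in bytes in bits 7:0. A user-space sketch of the same decode, assuming <cpuid.h>:

#include <cpuid.h>
#include <stdio.h>

static const int amd_l2_assoc[] = {
	-1, 1, 2, -1, 4, -1, 8, -1, 16, -1, 32, 48, 64, 96, 128, 0
};

int
main(void)
{
	unsigned int a, b, c, d, xmax;

	if (!__get_cpuid(0x80000000U, &xmax, &b, &c, &d) ||
	    xmax < 0x80000006U ||
	    !__get_cpuid(0x80000006U, &a, &b, &c, &d))
		return (1);

	(void) printf("L2: %u KB, %d ways (0 = fully assoc, -1 = undefined), "
	    "%u byte lines\n",
	    (c >> 16) & 0xffff, amd_l2_assoc[(c >> 12) & 0xf], c & 0xff);
	return (0);
}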
7713 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in getl2cacheinfo()
7716 l2i->l2i_csz = csz; in getl2cacheinfo()
7717 l2i->l2i_lsz = lsz; in getl2cacheinfo()
7718 l2i->l2i_assoc = assoc; in getl2cacheinfo()
7719 l2i->l2i_ret = -1; in getl2cacheinfo()
7734 return (l2i->l2i_ret); in getl2cacheinfo()
7747 mwait_size = CPU->cpu_m.mcpu_cpi->cpi_mwait.mon_max; in cpuid_mwait_alloc()
7766 cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = ret; in cpuid_mwait_alloc()
7767 cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = mwait_size; in cpuid_mwait_alloc()
7773 cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = ret; in cpuid_mwait_alloc()
7774 cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = mwait_size * 2; in cpuid_mwait_alloc()
7784 if (cpu->cpu_m.mcpu_cpi == NULL) { in cpuid_mwait_free()
7788 if (cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual != NULL && in cpuid_mwait_free()
7789 cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual > 0) { in cpuid_mwait_free()
7790 kmem_free(cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual, in cpuid_mwait_free()
7791 cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual); in cpuid_mwait_free()
7794 cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = NULL; in cpuid_mwait_free()
7795 cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = 0; in cpuid_mwait_free()
7805 cnt = &_no_rdtsc_end - &_no_rdtsc_start; in patch_tsc_read()
7809 cnt = &_tsc_lfence_end - &_tsc_lfence_start; in patch_tsc_read()
7814 cnt = &_tscp_end - &_tscp_start; in patch_tsc_read()
7834 cpi = CPU->cpu_m.mcpu_cpi; in cpuid_deep_cstates_supported()
7836 switch (cpi->cpi_vendor) { in cpuid_deep_cstates_supported()
7840 if (cpi->cpi_xmaxeax < 0x80000007) in cpuid_deep_cstates_supported()
7844 * Does TSC run at a constant rate in all C-states? in cpuid_deep_cstates_supported()
7888 if (x86_use_pcid == -1) in enable_pcid()
7891 if (x86_use_invpcid == -1) { in enable_pcid()
7914 * - cpuid_pass_basic() is done, so that X86 features are known.
7915 * - fpu_probe() is done, so that fp_save_mech is chosen.
7930 cpu->cpu_m.mcpu_cpi->cpi_std[1].cp_ecx |= CPUID_INTC_ECX_OSXSAVE; in xsave_setup_msr()
7936 * APIC timer will continue running in all C-states,
7937 * including the deepest C-states.
7948 cpi = CPU->cpu_m.mcpu_cpi; in cpuid_arat_supported()
7950 switch (cpi->cpi_vendor) { in cpuid_arat_supported()
7955 * Always-running Local APIC Timer is in cpuid_arat_supported()
7958 if (cpi->cpi_maxeax >= 6) { in cpuid_arat_supported()
7976 struct cpuid_info *cpi = cp->cpu_m.mcpu_cpi; in cpuid_iepb_supported()
7990 if ((cpi->cpi_vendor != X86_VENDOR_Intel) || (cpi->cpi_maxeax < 6)) in cpuid_iepb_supported()
8010 struct cpuid_info *cpi = CPU->cpu_m.mcpu_cpi; in cpuid_deadline_tsc_supported()
8016 switch (cpi->cpi_vendor) { in cpuid_deadline_tsc_supported()
8018 if (cpi->cpi_maxeax >= 1) { in cpuid_deadline_tsc_supported()
8043 cnt = &bcopy_patch_end - &bcopy_patch_start; in patch_memops()
8067 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_get_ext_topo()
8069 if (cpi->cpi_ncore_bits > *core_nbits) { in cpuid_get_ext_topo()
8070 *core_nbits = cpi->cpi_ncore_bits; in cpuid_get_ext_topo()
8073 if (cpi->cpi_nthread_bits > *strand_nbits) { in cpuid_get_ext_topo()
8074 *strand_nbits = cpi->cpi_nthread_bits; in cpuid_get_ext_topo()
8081 struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi; in cpuid_pass_ucode()
8088 switch (cpi->cpi_vendor) { in cpuid_pass_ucode()
8093 if (cpi->cpi_maxeax < 7) { in cpuid_pass_ucode()
8096 cpi->cpi_maxeax = __cpuid_insn(&cp); in cpuid_pass_ucode()
8097 if (cpi->cpi_maxeax < 7) in cpuid_pass_ucode()
8105 cpi->cpi_std[7] = cp; in cpuid_pass_ucode()
8111 if (cpi->cpi_family < 5 || in cpuid_pass_ucode()
8112 (cpi->cpi_family == 5 && cpi->cpi_model < 1)) in cpuid_pass_ucode()
8115 if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8) { in cpuid_pass_ucode()
8118 cpi->cpi_xmaxeax = __cpuid_insn(&cp); in cpuid_pass_ucode()
8119 if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8) in cpuid_pass_ucode()
8130 platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8, &cp); in cpuid_pass_ucode()
8131 cpi->cpi_extd[8] = cp; in cpuid_pass_ucode()
8133 if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_21) in cpuid_pass_ucode()
8139 platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_21, &cp); in cpuid_pass_ucode()
8140 cpi->cpi_extd[0x21] = cp; in cpuid_pass_ucode()
8161 fset = (uchar_t *)(arg0 + sizeof (x86_featureset) * CPU->cpu_id); in cpuid_post_ucodeadm_xc()
8162 if (first_pass && CPU->cpu_id != 0) in cpuid_post_ucodeadm_xc()
8164 if (!first_pass && CPU->cpu_id == 0) in cpuid_post_ucodeadm_xc()
8194 rev = cpu->cpu_m.mcpu_ucode_info->cui_rev; in cpuid_post_ucodeadm()
8200 if (cpu->cpu_m.mcpu_ucode_info->cui_rev != rev) { in cpuid_post_ucodeadm()
8203 i, cpu->cpu_m.mcpu_ucode_info->cui_rev, rev); in cpuid_post_ucodeadm()
8245 cmn_err(CE_CONT, "?post-ucode x86_feature: %s\n", in cpuid_post_ucodeadm()
8285 if (cp->cpu_id == 0 && cp->cpu_m.mcpu_cpi == NULL) in cpuid_execpass()
8286 cp->cpu_m.mcpu_cpi = &cpuid_info0; in cpuid_execpass()
8288 ASSERT(cpuid_checkpass(cp, pass - 1)); in cpuid_execpass()
8293 cp->cpu_m.mcpu_cpi->cpi_pass = pass; in cpuid_execpass()
8299 pass, cp->cpu_id); in cpuid_execpass()
8315 * may bitwise-OR together chiprevs of the same vendor and family to form the
8384 switch (cpi->cpi_vendor) { in cpuid_cache_topo_sup()
8386 if (cpi->cpi_maxeax >= 4) { in cpuid_cache_topo_sup()
8392 if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1d && in cpuid_cache_topo_sup()
8410 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_getncaches()
8416 *ncache = cpi->cpi_cache_leaf_size; in cpuid_getncaches()
8427 cpi = cpu->cpu_m.mcpu_cpi; in cpuid_getcache()
8433 if (cno >= cpi->cpi_cache_leaf_size) { in cpuid_getcache()
8438 cp = cpi->cpi_cache_leaves[cno]; in cpuid_getcache()
8441 cache->xc_type = X86_CACHE_TYPE_DATA; in cpuid_getcache()
8444 cache->xc_type = X86_CACHE_TYPE_INST; in cpuid_getcache()
8447 cache->xc_type = X86_CACHE_TYPE_UNIFIED; in cpuid_getcache()
8453 cache->xc_level = CPI_CACHE_LVL(cp); in cpuid_getcache()
8455 cache->xc_flags |= X86_CACHE_F_FULL_ASSOC; in cpuid_getcache()
8457 cache->xc_nparts = CPI_CACHE_PARTS(cp) + 1; in cpuid_getcache()
8462 if (cpi->cpi_vendor == X86_VENDOR_AMD && in cpuid_getcache()
8464 cache->xc_nsets = 1; in cpuid_getcache()
8466 cache->xc_nsets = CPI_CACHE_SETS(cp) + 1; in cpuid_getcache()
8468 cache->xc_nways = CPI_CACHE_WAYS(cp) + 1; in cpuid_getcache()
8469 cache->xc_line_size = CPI_CACHE_COH_LN_SZ(cp) + 1; in cpuid_getcache()
8470 cache->xc_size = cache->xc_nparts * cache->xc_nsets * cache->xc_nways * in cpuid_getcache()
8471 cache->xc_line_size; in cpuid_getcache()
8474 * are being shared. Normally this would be the value - 1, but the CPUID in cpuid_getcache()
8478 cache->xc_apic_shift = highbit(CPI_NTHR_SHR_CACHE(cp)); in cpuid_getcache()
8497 cache->xc_id = (uint64_t)cache->xc_level << 40; in cpuid_getcache()
8498 cache->xc_id |= (uint64_t)cache->xc_type << 32; in cpuid_getcache()
8499 cache->xc_id |= (uint64_t)cpi->cpi_apicid >> cache->xc_apic_shift; in cpuid_getcache()
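cpuid_getcache() above computes the cache size as the product of partitions, sets, ways and line size, and forms a system-wide cache id by packing the level and type above the APIC id shifted down by the sharing width. A stand-alone illustration of both computations; the structure and the numeric type tag below are stand-ins, not the kernel's x86_cache definitions:

#include <stdio.h>
#include <stdint.h>

struct cache_example {
	uint32_t level, type, nparts, nsets, nways, line_size, apicid, shift;
};

int
main(void)
{
	struct cache_example c = {
		.level = 2, .type = 3, .nparts = 1, .nsets = 1024,
		.nways = 8, .line_size = 64, .apicid = 0x15, .shift = 1
	};
	uint64_t size = (uint64_t)c.nparts * c.nsets * c.nways * c.line_size;
	uint64_t id = ((uint64_t)c.level << 40) | ((uint64_t)c.type << 32) |
	    ((uint64_t)c.apicid >> c.shift);

	/* 1 * 1024 * 8 * 64 = 512 KiB */
	(void) printf("size %llu bytes, id 0x%llx\n",
	    (unsigned long long)size, (unsigned long long)id);
	return (0);
}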