# SPDX-License-Identifier: GPL-2.0-only
menuconfig CXL_BUS
	tristate "CXL (Compute Express Link) Devices Support"
	depends on PCI
	select FW_LOADER
	select FW_UPLOAD
	select PCI_DOE
	select FIRMWARE_TABLE
	select NUMA_KEEP_MEMINFO if (NUMA && X86)
	help
	  CXL is a bus that is electrically compatible with PCI Express, but
	  layers three protocols on that signalling (CXL.io, CXL.cache, and
	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
	  locally, the CXL.mem protocol allows devices to be fully coherent
	  memory targets, and the CXL.io protocol is equivalent to PCI Express.
	  Say 'y' to enable support for the configuration and management of
	  devices supporting these protocols.

if CXL_BUS

config CXL_PCI
	tristate "PCI manageability"
	default CXL_BUS
	help
	  The CXL specification defines a "CXL memory device" sub-class in the
	  PCI "memory controller" base class of devices. Devices identified by
	  this class code provide support for volatile and / or persistent
	  memory to be mapped into the system address map (Host-managed Device
	  Memory (HDM)).

	  Say 'y/m' to enable a driver that will attach to CXL memory expander
	  devices enumerated by the memory device class code for configuration
	  and management primarily via the mailbox interface. See Chapter 2.3
	  Type 3 CXL Device in the CXL 2.0 specification for more details.

	  If unsure say 'm'.

config CXL_MEM_RAW_COMMANDS
	bool "RAW Command Interface for Memory Devices"
	depends on CXL_PCI
	help
	  Enable CXL RAW command interface.

	  The CXL driver ioctl interface may assign a kernel ioctl command
	  number for each specification defined opcode. At any given point in
	  time the number of opcodes that the specification defines, and that a
	  device may implement, may exceed the kernel's set of associated ioctl
	  function numbers. The mismatch is either by omission (the
	  specification is newer than the driver) or by design. When
	  prototyping new hardware, or developing / debugging the driver, it is
	  useful to be able to submit any possible command to the hardware,
	  even commands that may crash the kernel due to their potential impact
	  on memory currently in use by the kernel.

	  If developing CXL hardware or the driver say Y, otherwise say N.

config CXL_ACPI
	tristate "CXL ACPI: Platform Support"
	depends on ACPI
	depends on ACPI_NUMA
	default CXL_BUS
	select ACPI_TABLE_LIB
	select ACPI_HMAT
	help
	  Enable support for host managed device memory (HDM) resources
	  published by a platform's ACPI CXL memory layout description. See
	  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
	  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
	  (https://www.computeexpresslink.org/spec-landing). The CXL core
	  consumes these resources to publish the root of a cxl_port decode
	  hierarchy to map regions that represent System RAM, or Persistent
	  Memory regions to be managed by LIBNVDIMM.

	  If unsure say 'm'.

config CXL_PMEM
	tristate "CXL PMEM: Persistent Memory Support"
	depends on LIBNVDIMM
	default CXL_BUS
	help
	  In addition to typical memory resources a platform may also advertise
	  support for persistent memory attached via CXL. This support is
	  managed via a bridge driver from CXL to the LIBNVDIMM subsystem.
	  Say 'y/m' to enable support for enumerating and provisioning the
	  persistent memory capacity of CXL memory expanders.

	  If unsure say 'm'.

config CXL_MEM
	tristate "CXL: Memory Expansion"
	depends on CXL_PCI
	default CXL_BUS
	help
	  The CXL.mem protocol allows a device to act as a provider of "System
	  RAM" and/or "Persistent Memory" that is fully coherent as if the
	  memory were attached to the typical CPU memory controller. This is
	  known as Host-managed Device Memory (HDM).

	  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
	  memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
	  specification for a detailed description of HDM.

	  If unsure say 'm'.

config CXL_PORT
	default CXL_BUS
	tristate

config CXL_SUSPEND
	def_bool y
	depends on SUSPEND && CXL_MEM

config CXL_REGION
	bool "CXL: Region Support"
	default CXL_BUS
	# For MAX_PHYSMEM_BITS
	depends on SPARSEMEM
	select MEMREGION
	select GET_FREE_REGION
	help
	  Enable the CXL core to enumerate and provision CXL regions. A CXL
	  region is defined by one or more CXL expanders that decode a given
	  system-physical address range. For CXL regions established by
	  platform firmware, this option enables memory error handling to
	  identify the devices participating in a given interleaved memory
	  range. Otherwise, platform-firmware managed CXL is enabled simply by
	  being placed in the system address map and does not need a driver.

	  If unsure say 'y'.

config CXL_REGION_INVALIDATION_TEST
	bool "CXL: Region Cache Management Bypass (TEST)"
	depends on CXL_REGION
	help
	  CXL Region management and security operations potentially invalidate
	  the content of CPU caches without notifying those caches to
	  invalidate the affected cachelines. The CXL Region driver attempts
	  to invalidate caches when those events occur. If that invalidation
	  fails the region will fail to enable. Cache invalidation can fail
	  when the CPU does not provide a cache invalidation mechanism; for
	  example, use of wbinvd is restricted to bare metal x86. For testing
	  purposes, toggling this option disables that data integrity safety
	  and proceeds with enabling regions when there might be conflicting
	  contents in the CPU cache.

	  If unsure, or if this kernel is meant for production environments,
	  say N.

endif
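
# An illustrative .config fragment (an assumption for illustration, not part
# of this Kconfig file) showing one way the options above can be combined:
# the bus core and endpoint drivers built as modules, with region support
# built in. Values for CXL_SUSPEND and CXL_MEM_RAW_COMMANDS follow from
# their defaults and are omitted here.
#
#   CONFIG_CXL_BUS=m
#   CONFIG_CXL_PCI=m
#   CONFIG_CXL_ACPI=m
#   CONFIG_CXL_PMEM=m
#   CONFIG_CXL_MEM=m
#   CONFIG_CXL_PORT=m
#   CONFIG_CXL_REGION=y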