# SPDX-License-Identifier: GPL-2.0-only
menuconfig CXL_BUS
	tristate "CXL (Compute Express Link) Devices Support"
	depends on PCI
	select FW_LOADER
	select FW_UPLOAD
	select PCI_DOE
	select FIRMWARE_TABLE
	help
	  CXL is a bus that is electrically compatible with PCI Express, but
	  layers three protocols on that signalling (CXL.io, CXL.cache, and
	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
	  locally, the CXL.mem protocol allows devices to be fully coherent
	  memory targets, the CXL.io protocol is equivalent to PCI Express.
	  Say 'y' to enable support for the configuration and management of
	  devices supporting these protocols.

if CXL_BUS

config CXL_PCI
	tristate "PCI manageability"
	default CXL_BUS
	help
	  The CXL specification defines a "CXL memory device" sub-class in the
	  PCI "memory controller" base class of devices. Devices identified by
	  this class code provide support for volatile and / or persistent
	  memory to be mapped into the system address map (Host-managed Device
	  Memory (HDM)).

	  Say 'y/m' to enable a driver that will attach to CXL memory expander
	  devices enumerated by the memory device class code for configuration
	  and management primarily via the mailbox interface. See Chapter 2.3
	  Type 3 CXL Device in the CXL 2.0 specification for more details.

	  If unsure say 'm'.

config CXL_MEM_RAW_COMMANDS
	bool "RAW Command Interface for Memory Devices"
	depends on CXL_PCI
	help
	  Enable CXL RAW command interface.

	  The CXL driver ioctl interface may assign a kernel ioctl command
	  number for each specification defined opcode. At any given point in
	  time the number of opcodes that the specification defines, and that
	  a device may implement, may exceed the kernel's set of associated
	  ioctl function numbers. The mismatch is either by omission (the
	  specification is too new) or by design. When prototyping new
	  hardware, or developing / debugging the driver, it is useful to be
	  able to submit any possible command to the hardware, even commands
	  that may crash the kernel due to their potential impact to memory
	  currently in use by the kernel.

	  If developing CXL hardware or the driver say Y, otherwise say N.

config CXL_ACPI
	tristate "CXL ACPI: Platform Support"
	depends on ACPI
	depends on ACPI_NUMA
	default CXL_BUS
	select ACPI_TABLE_LIB
	select ACPI_HMAT
	help
	  Enable support for host managed device memory (HDM) resources
	  published by a platform's ACPI CXL memory layout description.  See
	  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
	  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
	  (https://www.computeexpresslink.org/spec-landing). The CXL core
	  consumes these resources to publish the root of a cxl_port decode
	  hierarchy to map regions that represent System RAM, or Persistent
	  Memory regions to be managed by LIBNVDIMM.

	  If unsure say 'm'.

config CXL_PMEM
	tristate "CXL PMEM: Persistent Memory Support"
	depends on LIBNVDIMM
	default CXL_BUS
	help
	  In addition to typical memory resources a platform may also advertise
	  support for persistent memory attached via CXL. This support is
	  managed via a bridge driver from CXL to the LIBNVDIMM system
	  subsystem. Say 'y/m' to enable support for enumerating and
	  provisioning the persistent memory capacity of CXL memory expanders.

	  If unsure say 'm'.

config CXL_MEM
	tristate "CXL: Memory Expansion"
	depends on CXL_PCI
	default CXL_BUS
	help
	  The CXL.mem protocol allows a device to act as a provider of "System
	  RAM" and/or "Persistent Memory" that is fully coherent as if the
	  memory were attached to the typical CPU memory controller. This is
	  known as HDM "Host-managed Device Memory".

	  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
	  memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
	  specification for a detailed description of HDM.

	  If unsure say 'm'.

config CXL_PORT
	default CXL_BUS
	tristate

config CXL_SUSPEND
	def_bool y
	depends on SUSPEND && CXL_MEM

config CXL_REGION
	bool "CXL: Region Support"
	default CXL_BUS
	# For MAX_PHYSMEM_BITS
	depends on SPARSEMEM
	select MEMREGION
	select GET_FREE_REGION
	help
	  Enable the CXL core to enumerate and provision CXL regions. A CXL
	  region is defined by one or more CXL expanders that decode a given
	  system-physical address range. For CXL regions established by
	  platform-firmware this option enables memory error handling to
	  identify the devices participating in a given interleaved memory
	  range. Otherwise, platform-firmware managed CXL is enabled by being
	  placed in the system address map and does not need a driver.

	  If unsure say 'y'.

config CXL_REGION_INVALIDATION_TEST
	bool "CXL: Region Cache Management Bypass (TEST)"
	depends on CXL_REGION
	help
	  CXL Region management and security operations potentially invalidate
	  the content of CPU caches without notifying those caches to
	  invalidate the affected cachelines. The CXL Region driver attempts
	  to invalidate caches when those events occur.  If that invalidation
	  fails the region will fail to enable.  Cache invalidation can fail
	  when the CPU does not provide a cache invalidation mechanism; for
	  example, use of wbinvd is restricted to bare metal x86. However, for
	  testing purposes, enabling this option disables that data integrity
	  safety and proceeds with enabling regions when there might be
	  conflicting contents in the CPU cache.

	  If unsure, or if this kernel is meant for production environments,
	  say N.

endif
