# SPDX-License-Identifier: GPL-2.0-only
menuconfig CXL_BUS
	tristate "CXL (Compute Express Link) Devices Support"
	depends on PCI
	select FW_LOADER
	select FW_UPLOAD
	select PCI_DOE
	help
	  CXL is a bus that is electrically compatible with PCI Express, but
	  layers three protocols on that signalling (CXL.io, CXL.cache, and
	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
	  locally, the CXL.mem protocol allows devices to be fully coherent
	  memory targets, and the CXL.io protocol is equivalent to PCI Express.
	  Say 'y' to enable support for the configuration and management of
	  devices supporting these protocols.

if CXL_BUS

config CXL_PCI
	tristate "PCI manageability"
	default CXL_BUS
	help
	  The CXL specification defines a "CXL memory device" sub-class in the
	  PCI "memory controller" base class of devices. Devices identified by
	  this class code provide support for volatile and/or persistent
	  memory to be mapped into the system address map (Host-managed Device
	  Memory (HDM)).

	  Say 'y/m' to enable a driver that will attach to CXL memory expander
	  devices enumerated by the memory device class code for configuration
	  and management primarily via the mailbox interface. See Chapter 2.3
	  Type 3 CXL Device in the CXL 2.0 specification for more details.

	  If unsure say 'm'.

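# The class code described above can also be matched from userspace. A
# minimal sketch (not part of this Kconfig, for illustration only): it walks
# /sys/bus/pci/devices and compares each device's class against the PCI
# "memory controller" base class / CXL memory device sub-class (0x0502,
# mirroring PCI_CLASS_MEMORY_CXL in include/linux/pci_ids.h).
#
#   #include <dirent.h>
#   #include <stdio.h>
#
#   int main(void)
#   {
#   	DIR *dir = opendir("/sys/bus/pci/devices");
#   	struct dirent *de;
#   	char path[512];
#   	unsigned int class;
#   	FILE *f;
#
#   	if (!dir)
#   		return 1;
#   	while ((de = readdir(dir)) != NULL) {
#   		if (de->d_name[0] == '.')
#   			continue;
#   		snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/class",
#   			 de->d_name);
#   		f = fopen(path, "r");
#   		if (!f)
#   			continue;
#   		/* sysfs reports the 24-bit class code, e.g. 0x050210 */
#   		if (fscanf(f, "%x", &class) == 1 && (class >> 8) == 0x0502)
#   			printf("%s: CXL memory device class\n", de->d_name);
#   		fclose(f);
#   	}
#   	closedir(dir);
#   	return 0;
#   }
#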
config CXL_MEM_RAW_COMMANDS
	bool "RAW Command Interface for Memory Devices"
	depends on CXL_PCI
	help
	  Enable CXL RAW command interface.

	  The CXL driver ioctl interface may assign a kernel ioctl command
	  number for each specification-defined opcode. At any given point in
	  time the set of opcodes that the specification defines, and that a
	  device may implement, may exceed the kernel's set of associated ioctl
	  function numbers. The mismatch is either by omission (the
	  specification is newer than the driver) or by design. When prototyping
	  new hardware, or developing / debugging the driver, it is useful to be
	  able to submit any possible command to the hardware, even commands
	  that may crash the kernel due to their potential impact on memory
	  currently in use by the kernel.

	  If developing CXL hardware or the driver say Y, otherwise say N.

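# As a rough illustration of the RAW escape hatch, the sketch below submits a
# raw opcode through the memdev ioctl interface. It assumes the uAPI in
# include/uapi/linux/cxl_mem.h (CXL_MEM_SEND_COMMAND, CXL_MEM_COMMAND_ID_RAW)
# and a memory device at /dev/cxl/mem0; the opcode value is a hypothetical
# placeholder, not a recommendation, and such commands can crash the kernel
# or corrupt in-use memory as the help text above warns.
#
#   #include <fcntl.h>
#   #include <stdio.h>
#   #include <string.h>
#   #include <sys/ioctl.h>
#   #include <unistd.h>
#   #include <linux/cxl_mem.h>
#
#   int main(void)
#   {
#   	struct cxl_send_command cmd;
#   	int fd = open("/dev/cxl/mem0", O_RDWR);
#
#   	if (fd < 0)
#   		return 1;
#
#   	memset(&cmd, 0, sizeof(cmd));
#   	cmd.id = CXL_MEM_COMMAND_ID_RAW;
#   	cmd.raw.opcode = 0xfb00;	/* hypothetical opcode under test */
#   	/* no input payload; a zero out.size requests no output buffer */
#
#   	if (ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd) == 0)
#   		printf("mailbox return code %u, %d output bytes\n",
#   		       cmd.retval, cmd.out.size);
#
#   	close(fd);
#   	return 0;
#   }
#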
config CXL_ACPI
	tristate "CXL ACPI: Platform Support"
	depends on ACPI
	default CXL_BUS
	select ACPI_TABLE_LIB
	help
	  Enable support for host managed device memory (HDM) resources
	  published by a platform's ACPI CXL memory layout description.  See
	  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
	  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
	  (https://www.computeexpresslink.org/spec-landing). The CXL core
	  consumes these resources to publish the root of a cxl_port decode
	  hierarchy that maps regions representing System RAM, or Persistent
	  Memory regions to be managed by LIBNVDIMM.

	  If unsure say 'm'.

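# A quick way to confirm that platform firmware publishes the CEDT this
# driver consumes is to read the table exported through the usual ACPI sysfs
# path. A short sketch (not part of this Kconfig; assumes the standard
# /sys/firmware/acpi/tables layout and typically needs root):
#
#   #include <stdio.h>
#   #include <stdint.h>
#
#   int main(void)
#   {
#   	/* standard ACPI table header: 4-byte signature, then 32-bit length */
#   	unsigned char hdr[8];
#   	FILE *f = fopen("/sys/firmware/acpi/tables/CEDT", "rb");
#
#   	if (!f) {
#   		printf("no CEDT published by platform firmware\n");
#   		return 1;
#   	}
#   	if (fread(hdr, 1, sizeof(hdr), f) == sizeof(hdr)) {
#   		uint32_t len = hdr[4] | hdr[5] << 8 | hdr[6] << 16 |
#   			       (uint32_t)hdr[7] << 24;
#
#   		printf("%.4s table, %u bytes of CXL early discovery data\n",
#   		       (const char *)hdr, len);
#   	}
#   	fclose(f);
#   	return 0;
#   }
#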
config CXL_PMEM
	tristate "CXL PMEM: Persistent Memory Support"
	depends on LIBNVDIMM
	default CXL_BUS
	help
	  In addition to typical memory resources a platform may also advertise
	  support for persistent memory attached via CXL. This support is
	  managed via a bridge driver from CXL to the LIBNVDIMM subsystem.
	  Say 'y/m' to enable support for enumerating and provisioning the
	  persistent memory capacity of CXL memory expanders.

	  If unsure say 'm'.

config CXL_MEM
	tristate "CXL: Memory Expansion"
	depends on CXL_PCI
	default CXL_BUS
	help
	  The CXL.mem protocol allows a device to act as a provider of "System
	  RAM" and/or "Persistent Memory" that is fully coherent as if the
	  memory were attached to the typical CPU memory controller. This is
	  known as Host-managed Device Memory (HDM).

	  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
	  memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
	  specification for a detailed description of HDM.

	  If unsure say 'm'.

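# Once HDM is mapped into the system address map it shows up in the iomem
# resource tree. A tiny sketch (not part of this Kconfig) that filters
# /proc/iomem for CXL entries; it assumes the "CXL Window" naming that the
# ACPI driver applies to CFMWS-described ranges, and needs root to see
# addresses:
#
#   #include <stdio.h>
#   #include <string.h>
#
#   int main(void)
#   {
#   	char line[256];
#   	FILE *f = fopen("/proc/iomem", "r");
#
#   	if (!f)
#   		return 1;
#   	while (fgets(line, sizeof(line), f))
#   		if (strstr(line, "CXL"))	/* e.g. "CXL Window 0" */
#   			fputs(line, stdout);
#   	fclose(f);
#   	return 0;
#   }
#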
config CXL_PORT
	default CXL_BUS
	tristate

config CXL_SUSPEND
	def_bool y
	depends on SUSPEND && CXL_MEM

config CXL_REGION
	bool "CXL: Region Support"
	default CXL_BUS
	# For MAX_PHYSMEM_BITS
	depends on SPARSEMEM
	select MEMREGION
	select GET_FREE_REGION
	help
	  Enable the CXL core to enumerate and provision CXL regions. A CXL
	  region is defined by one or more CXL expanders that decode a given
	  system-physical address range. For CXL regions established by
	  platform firmware, this option enables memory error handling to
	  identify the devices participating in a given interleaved memory
	  range. Otherwise, platform-firmware managed CXL is enabled by being
	  placed in the system address map and does not need a driver.

	  If unsure say 'y'.
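# Regions that the CXL core enumerates, or that are assembled via sysfs, show
# up as devices on the cxl bus. A throwaway sketch (not part of this Kconfig)
# that lists them, assuming regions are registered as "regionN" entries under
# /sys/bus/cxl/devices:
#
#   #include <dirent.h>
#   #include <stdio.h>
#   #include <string.h>
#
#   int main(void)
#   {
#   	DIR *dir = opendir("/sys/bus/cxl/devices");
#   	struct dirent *de;
#
#   	if (!dir) {
#   		printf("no cxl bus registered\n");
#   		return 1;
#   	}
#   	while ((de = readdir(dir)) != NULL)
#   		if (!strncmp(de->d_name, "region", 6))
#   			printf("CXL region device: %s\n", de->d_name);
#   	closedir(dir);
#   	return 0;
#   }
#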
config CXL_REGION_INVALIDATION_TEST
	bool "CXL: Region Cache Management Bypass (TEST)"
	depends on CXL_REGION
	help
	  CXL Region management and security operations potentially invalidate
	  the content of CPU caches without notifying those caches to
	  invalidate the affected cachelines. The CXL Region driver attempts
	  to invalidate caches when those events occur.  If that invalidation
	  fails, the region will fail to enable.  Cache invalidation can fail
	  when the CPU does not provide a cache invalidation mechanism; for
	  example, usage of wbinvd is restricted to bare metal x86. However,
	  for testing purposes, enabling this option disables that data
	  integrity safety and proceeds with enabling regions even when there
	  might be conflicting contents in the CPU cache.

	  If unsure, or if this kernel is meant for production environments,
	  say N.

config CXL_PMU
	tristate "CXL Performance Monitoring Unit"
	default CXL_BUS
	depends on PERF_EVENTS
	help
	  Support performance monitoring as defined in CXL rev 3.0
	  section 13.2: Performance Monitoring. CXL components may have
	  one or more CXL Performance Monitoring Units (CPMUs).

	  Say 'y/m' to enable a driver that will attach to performance
	  monitoring units and provide standard perf-based interfaces.

	  If unsure say 'm'.
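
# CPMUs registered by this driver appear alongside other PMUs under perf's
# event_source bus. A short sketch (not part of this Kconfig) that looks for
# them; the "cxl" substring match is an assumption about the driver's PMU
# naming (e.g. cxl_pmu_mem0.0), not something stated here:
#
#   #include <dirent.h>
#   #include <stdio.h>
#   #include <string.h>
#
#   int main(void)
#   {
#   	DIR *dir = opendir("/sys/bus/event_source/devices");
#   	struct dirent *de;
#
#   	if (!dir)
#   		return 1;
#   	while ((de = readdir(dir)) != NULL)
#   		if (strstr(de->d_name, "cxl"))
#   			printf("CXL PMU perf device: %s\n", de->d_name);
#   	closedir(dir);
#   	return 0;
#   }
#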
endif