# SPDX-License-Identifier: GPL-2.0-only
config NVME_CORE
	tristate
	select BLK_DEV_INTEGRITY_T10 if BLK_DEV_INTEGRITY

config BLK_DEV_NVME
	tristate "NVM Express block device"
	depends on PCI && BLOCK
	select NVME_CORE
	help
	  The NVM Express driver is for solid state drives directly
	  connected to the PCI or PCI Express bus. If you know you
	  don't have one of these, it is safe to answer N.

	  To compile this driver as a module, choose M here: the
	  module will be called nvme.

config NVME_MULTIPATH
	bool "NVMe multipath support"
	depends on NVME_CORE
	help
	  This option enables support for multipath access to NVMe
	  subsystems. If this option is enabled, only a single
	  /dev/nvmeXnY device will show up for each NVMe namespace,
	  even if it is accessible through multiple controllers.

config NVME_VERBOSE_ERRORS
	bool "NVMe verbose error reporting"
	depends on NVME_CORE
	help
	  This option enables verbose reporting for NVMe errors. The
	  error translation table will grow the kernel image size by
	  about 4 KB.

config NVME_HWMON
	bool "NVMe hardware monitoring"
	depends on (NVME_CORE=y && HWMON=y) || (NVME_CORE=m && HWMON)
	help
	  This provides support for NVMe hardware monitoring. If enabled,
	  a hardware monitoring device will be created for each NVMe drive
	  in the system.

config NVME_FABRICS
	select NVME_CORE
	tristate

config NVME_RDMA
	tristate "NVM Express over Fabrics RDMA host driver"
	depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK
	select NVME_FABRICS
	select SG_POOL
	help
	  This provides support for the NVMe over Fabrics protocol using
	  the RDMA (InfiniBand, RoCE, iWARP) transport. This allows you
	  to use remote block devices exported using the NVMe protocol set.

	  To configure an NVMe over Fabrics controller, use the nvme-cli tool
	  from https://github.com/linux-nvme/nvme-cli.

	  If unsure, say N.

config NVME_FC
	tristate "NVM Express over Fabrics FC host driver"
	depends on BLOCK
	depends on HAS_DMA
	select NVME_FABRICS
	select SG_POOL
	help
	  This provides support for the NVMe over Fabrics protocol using
	  the FC transport. This allows you to use remote block devices
	  exported using the NVMe protocol set.

	  To configure an NVMe over Fabrics controller, use the nvme-cli tool
	  from https://github.com/linux-nvme/nvme-cli.

	  If unsure, say N.

config NVME_TCP
	tristate "NVM Express over Fabrics TCP host driver"
	depends on INET
	depends on BLOCK
	select NVME_FABRICS
	select CRYPTO
	select CRYPTO_CRC32C
	help
	  This provides support for the NVMe over Fabrics protocol using
	  the TCP transport. This allows you to use remote block devices
	  exported using the NVMe protocol set.

	  To configure an NVMe over Fabrics controller, use the nvme-cli tool
	  from https://github.com/linux-nvme/nvme-cli.

	  If unsure, say N.

config NVME_APPLE
	tristate "Apple ANS2 NVM Express host driver"
	depends on OF && BLOCK
	depends on APPLE_RTKIT && APPLE_SART
	depends on ARCH_APPLE || COMPILE_TEST
	select NVME_CORE
	help
	  This provides support for the NVMe controller embedded in Apple SoCs
	  such as the M1.

	  To compile this driver as a module, choose M here: the
	  module will be called nvme-apple.
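
# A minimal sketch, kept as a comment, of how the symbols above might be
# combined in a .config for a host with local PCIe NVMe drives, native
# multipath, and hardware monitoring. The values chosen here are an
# illustration, not a required or recommended configuration; note that
# NVME_HWMON additionally needs HWMON enabled, per its dependency above:
#
#   CONFIG_NVME_CORE=m
#   CONFIG_BLK_DEV_NVME=m
#   CONFIG_NVME_MULTIPATH=y
#   CONFIG_NVME_VERBOSE_ERRORS=y
#   CONFIG_HWMON=y
#   CONFIG_NVME_HWMON=y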
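
# The fabrics help texts above point at nvme-cli for controller setup. As a
# rough, hedged illustration only (the address, port, and subsystem NQN below
# are placeholders, not values taken from this file), connecting to a remote
# NVMe/TCP subsystem typically looks like:
#
#   modprobe nvme-tcp
#   nvme discover -t tcp -a 192.168.1.10 -s 4420
#   nvme connect -t tcp -a 192.168.1.10 -s 4420 -n nqn.2014-08.org.example:subsys1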