1# SPDX-License-Identifier: GPL-2.0-only 2# 3# Block device driver configuration 4# 5 6menuconfig MD 7 bool "Multiple devices driver support (RAID and LVM)" 8 depends on BLOCK 9 select SRCU 10 help 11 Support multiple physical spindles through a single logical device. 12 Required for RAID and logical volume management. 13 14if MD 15 16config BLK_DEV_MD 17 tristate "RAID support" 18 select BLOCK_HOLDER_DEPRECATED if SYSFS 19 help 20 This driver lets you combine several hard disk partitions into one 21 logical block device. This can be used to simply append one 22 partition to another one or to combine several redundant hard disks 23 into a RAID1/4/5 device so as to provide protection against hard 24 disk failures. This is called "Software RAID" since the combining of 25 the partitions is done by the kernel. "Hardware RAID" means that the 26 combining is done by a dedicated controller; if you have such a 27 controller, you do not need to say Y here. 28 29 More information about Software RAID on Linux is contained in the 30 Software RAID mini-HOWTO, available from 31 <https://www.tldp.org/docs.html#howto>. There you will also learn 32 where to get the supporting user space utilities raidtools. 33 34 If unsure, say N. 35 36config MD_AUTODETECT 37 bool "Autodetect RAID arrays during kernel boot" 38 depends on BLK_DEV_MD=y 39 default y 40 help 41 If you say Y here, then the kernel will try to autodetect raid 42 arrays as part of its boot process. 43 44 If you don't use raid and say Y, this autodetection can cause 45 a several-second delay in the boot time due to various 46 synchronisation steps that are part of this step. 47 48 If unsure, say Y. 49 50config MD_LINEAR 51 tristate "Linear (append) mode (deprecated)" 52 depends on BLK_DEV_MD 53 help 54 If you say Y here, then your multiple devices driver will be able to 55 use the so-called linear mode, i.e. it will combine the hard disk 56 partitions by simply appending one to the other. 57 58 To compile this as a module, choose M here: the module 59 will be called linear. 60 61 If unsure, say Y. 62 63config MD_RAID0 64 tristate "RAID-0 (striping) mode" 65 depends on BLK_DEV_MD 66 help 67 If you say Y here, then your multiple devices driver will be able to 68 use the so-called raid0 mode, i.e. it will combine the hard disk 69 partitions into one logical device in such a fashion as to fill them 70 up evenly, one chunk here and one chunk there. This will increase 71 the throughput rate if the partitions reside on distinct disks. 72 73 Information about Software RAID on Linux is contained in the 74 Software-RAID mini-HOWTO, available from 75 <https://www.tldp.org/docs.html#howto>. There you will also 76 learn where to get the supporting user space utilities raidtools. 77 78 To compile this as a module, choose M here: the module 79 will be called raid0. 80 81 If unsure, say Y. 82 83config MD_RAID1 84 tristate "RAID-1 (mirroring) mode" 85 depends on BLK_DEV_MD 86 help 87 A RAID-1 set consists of several disk drives which are exact copies 88 of each other. In the event of a mirror failure, the RAID driver 89 will continue to use the operational mirrors in the set, providing 90 an error free MD (multiple device) to the higher levels of the 91 kernel. In a set with N drives, the available space is the capacity 92 of a single drive, and the set protects against a failure of (N - 1) 93 drives. 94 95 Information about Software RAID on Linux is contained in the 96 Software-RAID mini-HOWTO, available from 97 <https://www.tldp.org/docs.html#howto>. There you will also 98 learn where to get the supporting user space utilities raidtools. 99 100 If you want to use such a RAID-1 set, say Y. To compile this code 101 as a module, choose M here: the module will be called raid1. 102 103 If unsure, say Y. 104 105config MD_RAID10 106 tristate "RAID-10 (mirrored striping) mode" 107 depends on BLK_DEV_MD 108 help 109 RAID-10 provides a combination of striping (RAID-0) and 110 mirroring (RAID-1) with easier configuration and more flexible 111 layout. 112 Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to 113 be the same size (or at least, only as much as the smallest device 114 will be used). 115 RAID-10 provides a variety of layouts that provide different levels 116 of redundancy and performance. 117 118 RAID-10 requires mdadm-1.7.0 or later, available at: 119 120 https://www.kernel.org/pub/linux/utils/raid/mdadm/ 121 122 If unsure, say Y. 123 124config MD_RAID456 125 tristate "RAID-4/RAID-5/RAID-6 mode" 126 depends on BLK_DEV_MD 127 select RAID6_PQ 128 select LIBCRC32C 129 select ASYNC_MEMCPY 130 select ASYNC_XOR 131 select ASYNC_PQ 132 select ASYNC_RAID6_RECOV 133 help 134 A RAID-5 set of N drives with a capacity of C MB per drive provides 135 the capacity of C * (N - 1) MB, and protects against a failure 136 of a single drive. For a given sector (row) number, (N - 1) drives 137 contain data sectors, and one drive contains the parity protection. 138 For a RAID-4 set, the parity blocks are present on a single drive, 139 while a RAID-5 set distributes the parity across the drives in one 140 of the available parity distribution methods. 141 142 A RAID-6 set of N drives with a capacity of C MB per drive 143 provides the capacity of C * (N - 2) MB, and protects 144 against a failure of any two drives. For a given sector 145 (row) number, (N - 2) drives contain data sectors, and two 146 drives contains two independent redundancy syndromes. Like 147 RAID-5, RAID-6 distributes the syndromes across the drives 148 in one of the available parity distribution methods. 149 150 Information about Software RAID on Linux is contained in the 151 Software-RAID mini-HOWTO, available from 152 <https://www.tldp.org/docs.html#howto>. There you will also 153 learn where to get the supporting user space utilities raidtools. 154 155 If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To 156 compile this code as a module, choose M here: the module 157 will be called raid456. 158 159 If unsure, say Y. 160 161config MD_MULTIPATH 162 tristate "Multipath I/O support (deprecated)" 163 depends on BLK_DEV_MD 164 help 165 MD_MULTIPATH provides a simple multi-path personality for use 166 the MD framework. It is not under active development. New 167 projects should consider using DM_MULTIPATH which has more 168 features and more testing. 169 170 If unsure, say N. 171 172config MD_FAULTY 173 tristate "Faulty test module for MD (deprecated)" 174 depends on BLK_DEV_MD 175 help 176 The "faulty" module allows for a block device that occasionally returns 177 read or write errors. It is useful for testing. 178 179 In unsure, say N. 180 181 182config MD_CLUSTER 183 tristate "Cluster Support for MD" 184 depends on BLK_DEV_MD 185 depends on DLM 186 default n 187 help 188 Clustering support for MD devices. This enables locking and 189 synchronization across multiple systems on the cluster, so all 190 nodes in the cluster can access the MD devices simultaneously. 191 192 This brings the redundancy (and uptime) of RAID levels across the 193 nodes of the cluster. Currently, it can work with raid1 and raid10 194 (limited support). 195 196 If unsure, say N. 197 198source "drivers/md/bcache/Kconfig" 199 200config BLK_DEV_DM_BUILTIN 201 bool 202 203config BLK_DEV_DM 204 tristate "Device mapper support" 205 select BLOCK_HOLDER_DEPRECATED if SYSFS 206 select BLK_DEV_DM_BUILTIN 207 select BLK_MQ_STACKING 208 depends on DAX || DAX=n 209 help 210 Device-mapper is a low level volume manager. It works by allowing 211 people to specify mappings for ranges of logical sectors. Various 212 mapping types are available, in addition people may write their own 213 modules containing custom mappings if they wish. 214 215 Higher level volume managers such as LVM2 use this driver. 216 217 To compile this as a module, choose M here: the module will be 218 called dm-mod. 219 220 If unsure, say N. 221 222config DM_DEBUG 223 bool "Device mapper debugging support" 224 depends on BLK_DEV_DM 225 help 226 Enable this for messages that may help debug device-mapper problems. 227 228 If unsure, say N. 229 230config DM_BUFIO 231 tristate 232 depends on BLK_DEV_DM 233 help 234 This interface allows you to do buffered I/O on a device and acts 235 as a cache, holding recently-read blocks in memory and performing 236 delayed writes. 237 238config DM_DEBUG_BLOCK_MANAGER_LOCKING 239 bool "Block manager locking" 240 depends on DM_BUFIO 241 help 242 Block manager locking can catch various metadata corruption issues. 243 244 If unsure, say N. 245 246config DM_DEBUG_BLOCK_STACK_TRACING 247 bool "Keep stack trace of persistent data block lock holders" 248 depends on STACKTRACE_SUPPORT && DM_DEBUG_BLOCK_MANAGER_LOCKING 249 select STACKTRACE 250 help 251 Enable this for messages that may help debug problems with the 252 block manager locking used by thin provisioning and caching. 253 254 If unsure, say N. 255 256config DM_BIO_PRISON 257 tristate 258 depends on BLK_DEV_DM 259 help 260 Some bio locking schemes used by other device-mapper targets 261 including thin provisioning. 262 263source "drivers/md/persistent-data/Kconfig" 264 265config DM_UNSTRIPED 266 tristate "Unstriped target" 267 depends on BLK_DEV_DM 268 help 269 Unstripes I/O so it is issued solely on a single drive in a HW 270 RAID0 or dm-striped target. 271 272config DM_CRYPT 273 tristate "Crypt target support" 274 depends on BLK_DEV_DM 275 depends on (ENCRYPTED_KEYS || ENCRYPTED_KEYS=n) 276 depends on (TRUSTED_KEYS || TRUSTED_KEYS=n) 277 select CRYPTO 278 select CRYPTO_CBC 279 select CRYPTO_ESSIV 280 help 281 This device-mapper target allows you to create a device that 282 transparently encrypts the data on it. You'll need to activate 283 the ciphers you're going to use in the cryptoapi configuration. 284 285 For further information on dm-crypt and userspace tools see: 286 <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt> 287 288 To compile this code as a module, choose M here: the module will 289 be called dm-crypt. 290 291 If unsure, say N. 292 293config DM_SNAPSHOT 294 tristate "Snapshot target" 295 depends on BLK_DEV_DM 296 select DM_BUFIO 297 help 298 Allow volume managers to take writable snapshots of a device. 299 300config DM_THIN_PROVISIONING 301 tristate "Thin provisioning target" 302 depends on BLK_DEV_DM 303 select DM_PERSISTENT_DATA 304 select DM_BIO_PRISON 305 help 306 Provides thin provisioning and snapshots that share a data store. 307 308config DM_CACHE 309 tristate "Cache target (EXPERIMENTAL)" 310 depends on BLK_DEV_DM 311 default n 312 select DM_PERSISTENT_DATA 313 select DM_BIO_PRISON 314 help 315 dm-cache attempts to improve performance of a block device by 316 moving frequently used data to a smaller, higher performance 317 device. Different 'policy' plugins can be used to change the 318 algorithms used to select which blocks are promoted, demoted, 319 cleaned etc. It supports writeback and writethrough modes. 320 321config DM_CACHE_SMQ 322 tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)" 323 depends on DM_CACHE 324 default y 325 help 326 A cache policy that uses a multiqueue ordered by recent hits 327 to select which blocks should be promoted and demoted. 328 This is meant to be a general purpose policy. It prioritises 329 reads over writes. This SMQ policy (vs MQ) offers the promise 330 of less memory utilization, improved performance and increased 331 adaptability in the face of changing workloads. 332 333config DM_WRITECACHE 334 tristate "Writecache target" 335 depends on BLK_DEV_DM 336 help 337 The writecache target caches writes on persistent memory or SSD. 338 It is intended for databases or other programs that need extremely 339 low commit latency. 340 341 The writecache target doesn't cache reads because reads are supposed 342 to be cached in standard RAM. 343 344config DM_EBS 345 tristate "Emulated block size target (EXPERIMENTAL)" 346 depends on BLK_DEV_DM && !HIGHMEM 347 select DM_BUFIO 348 help 349 dm-ebs emulates smaller logical block size on backing devices 350 with larger ones (e.g. 512 byte sectors on 4K native disks). 351 352config DM_ERA 353 tristate "Era target (EXPERIMENTAL)" 354 depends on BLK_DEV_DM 355 default n 356 select DM_PERSISTENT_DATA 357 select DM_BIO_PRISON 358 help 359 dm-era tracks which parts of a block device are written to 360 over time. Useful for maintaining cache coherency when using 361 vendor snapshots. 362 363config DM_CLONE 364 tristate "Clone target (EXPERIMENTAL)" 365 depends on BLK_DEV_DM 366 default n 367 select DM_PERSISTENT_DATA 368 help 369 dm-clone produces a one-to-one copy of an existing, read-only source 370 device into a writable destination device. The cloned device is 371 visible/mountable immediately and the copy of the source device to the 372 destination device happens in the background, in parallel with user 373 I/O. 374 375 If unsure, say N. 376 377config DM_MIRROR 378 tristate "Mirror target" 379 depends on BLK_DEV_DM 380 help 381 Allow volume managers to mirror logical volumes, also 382 needed for live data migration tools such as 'pvmove'. 383 384config DM_LOG_USERSPACE 385 tristate "Mirror userspace logging" 386 depends on DM_MIRROR && NET 387 select CONNECTOR 388 help 389 The userspace logging module provides a mechanism for 390 relaying the dm-dirty-log API to userspace. Log designs 391 which are more suited to userspace implementation (e.g. 392 shared storage logs) or experimental logs can be implemented 393 by leveraging this framework. 394 395config DM_RAID 396 tristate "RAID 1/4/5/6/10 target" 397 depends on BLK_DEV_DM 398 select MD_RAID0 399 select MD_RAID1 400 select MD_RAID10 401 select MD_RAID456 402 select BLK_DEV_MD 403 help 404 A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6 mappings 405 406 A RAID-5 set of N drives with a capacity of C MB per drive provides 407 the capacity of C * (N - 1) MB, and protects against a failure 408 of a single drive. For a given sector (row) number, (N - 1) drives 409 contain data sectors, and one drive contains the parity protection. 410 For a RAID-4 set, the parity blocks are present on a single drive, 411 while a RAID-5 set distributes the parity across the drives in one 412 of the available parity distribution methods. 413 414 A RAID-6 set of N drives with a capacity of C MB per drive 415 provides the capacity of C * (N - 2) MB, and protects 416 against a failure of any two drives. For a given sector 417 (row) number, (N - 2) drives contain data sectors, and two 418 drives contains two independent redundancy syndromes. Like 419 RAID-5, RAID-6 distributes the syndromes across the drives 420 in one of the available parity distribution methods. 421 422config DM_ZERO 423 tristate "Zero target" 424 depends on BLK_DEV_DM 425 help 426 A target that discards writes, and returns all zeroes for 427 reads. Useful in some recovery situations. 428 429config DM_MULTIPATH 430 tristate "Multipath target" 431 depends on BLK_DEV_DM 432 # nasty syntax but means make DM_MULTIPATH independent 433 # of SCSI_DH if the latter isn't defined but if 434 # it is, DM_MULTIPATH must depend on it. We get a build 435 # error if SCSI_DH=m and DM_MULTIPATH=y 436 depends on !SCSI_DH || SCSI 437 help 438 Allow volume managers to support multipath hardware. 439 440config DM_MULTIPATH_QL 441 tristate "I/O Path Selector based on the number of in-flight I/Os" 442 depends on DM_MULTIPATH 443 help 444 This path selector is a dynamic load balancer which selects 445 the path with the least number of in-flight I/Os. 446 447 If unsure, say N. 448 449config DM_MULTIPATH_ST 450 tristate "I/O Path Selector based on the service time" 451 depends on DM_MULTIPATH 452 help 453 This path selector is a dynamic load balancer which selects 454 the path expected to complete the incoming I/O in the shortest 455 time. 456 457 If unsure, say N. 458 459config DM_MULTIPATH_HST 460 tristate "I/O Path Selector based on historical service time" 461 depends on DM_MULTIPATH 462 help 463 This path selector is a dynamic load balancer which selects 464 the path expected to complete the incoming I/O in the shortest 465 time by comparing estimated service time (based on historical 466 service time). 467 468 If unsure, say N. 469 470config DM_MULTIPATH_IOA 471 tristate "I/O Path Selector based on CPU submission" 472 depends on DM_MULTIPATH 473 help 474 This path selector selects the path based on the CPU the IO is 475 executed on and the CPU to path mapping setup at path addition time. 476 477 If unsure, say N. 478 479config DM_DELAY 480 tristate "I/O delaying target" 481 depends on BLK_DEV_DM 482 help 483 A target that delays reads and/or writes and can send 484 them to different devices. Useful for testing. 485 486 If unsure, say N. 487 488config DM_DUST 489 tristate "Bad sector simulation target" 490 depends on BLK_DEV_DM 491 help 492 A target that simulates bad sector behavior. 493 Useful for testing. 494 495 If unsure, say N. 496 497config DM_INIT 498 bool "DM \"dm-mod.create=\" parameter support" 499 depends on BLK_DEV_DM=y 500 help 501 Enable "dm-mod.create=" parameter to create mapped devices at init time. 502 This option is useful to allow mounting rootfs without requiring an 503 initramfs. 504 See Documentation/admin-guide/device-mapper/dm-init.rst for dm-mod.create="..." 505 format. 506 507 If unsure, say N. 508 509config DM_UEVENT 510 bool "DM uevents" 511 depends on BLK_DEV_DM 512 help 513 Generate udev events for DM events. 514 515config DM_FLAKEY 516 tristate "Flakey target" 517 depends on BLK_DEV_DM 518 help 519 A target that intermittently fails I/O for debugging purposes. 520 521config DM_VERITY 522 tristate "Verity target support" 523 depends on BLK_DEV_DM 524 select CRYPTO 525 select CRYPTO_HASH 526 select DM_BUFIO 527 help 528 This device-mapper target creates a read-only device that 529 transparently validates the data on one underlying device against 530 a pre-generated tree of cryptographic checksums stored on a second 531 device. 532 533 You'll need to activate the digests you're going to use in the 534 cryptoapi configuration. 535 536 To compile this code as a module, choose M here: the module will 537 be called dm-verity. 538 539 If unsure, say N. 540 541config DM_VERITY_VERIFY_ROOTHASH_SIG 542 def_bool n 543 bool "Verity data device root hash signature verification support" 544 depends on DM_VERITY 545 select SYSTEM_DATA_VERIFICATION 546 help 547 Add ability for dm-verity device to be validated if the 548 pre-generated tree of cryptographic checksums passed has a pkcs#7 549 signature file that can validate the roothash of the tree. 550 551 By default, rely on the builtin trusted keyring. 552 553 If unsure, say N. 554 555config DM_VERITY_VERIFY_ROOTHASH_SIG_SECONDARY_KEYRING 556 bool "Verity data device root hash signature verification with secondary keyring" 557 depends on DM_VERITY_VERIFY_ROOTHASH_SIG 558 depends on SECONDARY_TRUSTED_KEYRING 559 help 560 Rely on the secondary trusted keyring to verify dm-verity signatures. 561 562 If unsure, say N. 563 564config DM_VERITY_FEC 565 bool "Verity forward error correction support" 566 depends on DM_VERITY 567 select REED_SOLOMON 568 select REED_SOLOMON_DEC8 569 help 570 Add forward error correction support to dm-verity. This option 571 makes it possible to use pre-generated error correction data to 572 recover from corrupted blocks. 573 574 If unsure, say N. 575 576config DM_SWITCH 577 tristate "Switch target support (EXPERIMENTAL)" 578 depends on BLK_DEV_DM 579 help 580 This device-mapper target creates a device that supports an arbitrary 581 mapping of fixed-size regions of I/O across a fixed set of paths. 582 The path used for any specific region can be switched dynamically 583 by sending the target a message. 584 585 To compile this code as a module, choose M here: the module will 586 be called dm-switch. 587 588 If unsure, say N. 589 590config DM_LOG_WRITES 591 tristate "Log writes target support" 592 depends on BLK_DEV_DM 593 help 594 This device-mapper target takes two devices, one device to use 595 normally, one to log all write operations done to the first device. 596 This is for use by file system developers wishing to verify that 597 their fs is writing a consistent file system at all times by allowing 598 them to replay the log in a variety of ways and to check the 599 contents. 600 601 To compile this code as a module, choose M here: the module will 602 be called dm-log-writes. 603 604 If unsure, say N. 605 606config DM_INTEGRITY 607 tristate "Integrity target support" 608 depends on BLK_DEV_DM 609 select BLK_DEV_INTEGRITY 610 select DM_BUFIO 611 select CRYPTO 612 select CRYPTO_SKCIPHER 613 select ASYNC_XOR 614 select DM_AUDIT if AUDIT 615 help 616 This device-mapper target emulates a block device that has 617 additional per-sector tags that can be used for storing 618 integrity information. 619 620 This integrity target is used with the dm-crypt target to 621 provide authenticated disk encryption or it can be used 622 standalone. 623 624 To compile this code as a module, choose M here: the module will 625 be called dm-integrity. 626 627config DM_ZONED 628 tristate "Drive-managed zoned block device target support" 629 depends on BLK_DEV_DM 630 depends on BLK_DEV_ZONED 631 select CRC32 632 help 633 This device-mapper target takes a host-managed or host-aware zoned 634 block device and exposes most of its capacity as a regular block 635 device (drive-managed zoned block device) without any write 636 constraints. This is mainly intended for use with file systems that 637 do not natively support zoned block devices but still want to 638 benefit from the increased capacity offered by SMR disks. Other uses 639 by applications using raw block devices (for example object stores) 640 are also possible. 641 642 To compile this code as a module, choose M here: the module will 643 be called dm-zoned. 644 645 If unsure, say N. 646 647config DM_AUDIT 648 bool "DM audit events" 649 depends on AUDIT 650 help 651 Generate audit events for device-mapper. 652 653 Enables audit logging of several security relevant events in the 654 particular device-mapper targets, especially the integrity target. 655 656endif # MD 657