1.\" SPDX-License-Identifier: CDDL-1.0 2.\" 3.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved. 4.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved. 5.\" Copyright (c) 2019 Datto Inc. 6.\" Copyright (c) 2023, 2024, 2025, Klara, Inc. 7.\" The contents of this file are subject to the terms of the Common Development 8.\" and Distribution License (the "License"). You may not use this file except 9.\" in compliance with the License. You can obtain a copy of the license at 10.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0. 11.\" 12.\" See the License for the specific language governing permissions and 13.\" limitations under the License. When distributing Covered Code, include this 14.\" CDDL HEADER in each file and include the License file at 15.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this 16.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your 17.\" own identifying information: 18.\" Portions Copyright [yyyy] [name of copyright owner] 19.\" 20.Dd May 29, 2025 21.Dt ZFS 4 22.Os 23. 24.Sh NAME 25.Nm zfs 26.Nd tuning of the ZFS kernel module 27. 28.Sh DESCRIPTION 29The ZFS module supports these parameters: 30.Bl -tag -width Ds 31.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 32Maximum size in bytes of the dbuf cache. 33The target size is determined by the MIN versus 34.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd 35of the target ARC size. 36The behavior of the dbuf cache and its associated settings 37can be observed via the 38.Pa /proc/spl/kstat/zfs/dbufstats 39kstat. 40. 41.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 42Maximum size in bytes of the metadata dbuf cache. 43The target size is determined by the MIN versus 44.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th 45of the target ARC size. 46The behavior of the metadata dbuf cache and its associated settings 47can be observed via the 48.Pa /proc/spl/kstat/zfs/dbufstats 49kstat. 50. 51.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint 52The percentage over 53.Sy dbuf_cache_max_bytes 54when dbufs must be evicted directly. 55. 56.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint 57The percentage below 58.Sy dbuf_cache_max_bytes 59when the evict thread stops evicting dbufs. 60. 61.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint 62Set the size of the dbuf cache 63.Pq Sy dbuf_cache_max_bytes 64to a log2 fraction of the target ARC size. 65. 66.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint 67Set the size of the dbuf metadata cache 68.Pq Sy dbuf_metadata_cache_max_bytes 69to a log2 fraction of the target ARC size. 70. 71.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint 72Set the size of the mutex array for the dbuf cache. 73When set to 74.Sy 0 75the array is dynamically sized based on total system memory. 76. 77.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint 78dnode slots allocated in a single operation as a power of 2. 79The default value minimizes lock contention for the bulk operation performed. 80. 81.It Sy dmu_ddt_copies Ns = Ns Sy 3 Pq uint 82Controls the number of copies stored for DeDup Table 83.Pq DDT 84objects. 85Reducing the number of copies to 1 from the previous default of 3 86can reduce the write inflation caused by deduplication. 87This assumes redundancy for this data is provided by the vdev layer. 88If the DDT is damaged, space may be leaked 89.Pq not freed 90when the DDT can not report the correct reference count. 91. 
.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Limit the amount we can prefetch with one call to this amount in bytes.
This helps to limit the amount of memory that can be used by prefetching.
.
.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
Turbo L2ARC warm-up.
When the L2ARC is cold the fill interval will be set as fast as possible.
.
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and only applicable in related situations.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 8 Pq u64
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Ns | Ns 2 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is 0,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature (setting it to 0), some MRU buffers will
still be present in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Setting it to 1 means to L2 cache only MFU data and metadata.
.Pp
Setting it to 2 means to L2 cache all metadata (MRU+MFU) but
only MFU data (i.e. MRU data are not cached).
This can be the right setting
to cache as much metadata as possible even when having high data turnover.
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
Max write bytes per interval.
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq u64
Metaslab group's per child vdev allocation granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each child
of a top-level vdev before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab groups biasing based on their over- or under-utilization
relative to the metaslab class average.
If disabled, each metaslab group will receive allocations proportional to its
capacity.
.
.It Sy metaslab_perf_bias Ns = Ns Sy 1 Ns | Ns 0 Ns | Ns 2 Pq int
Controls metaslab groups biasing based on their write performance.
Setting to 0 makes all metaslab groups receive fixed amounts of allocations.
Setting to 2 allows faster metaslab groups to allocate more.
Setting to 1 behaves like 2 if the pool is write-bound and like 0 otherwise.
That is, if the pool is limited by write throughput, then allocate more from
faster metaslab groups, but if not, try to evenly distribute the allocations.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy metaslab_force_ganging_pct Ns = Ns Sy 3 Ns % Pq uint
For blocks that could be forced to be a gang block (due to
.Sy metaslab_force_ganging ) ,
force this many of them to be gang blocks.
.
.It Sy brt_zap_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Controls prefetching BRT records for blocks which are going to be cloned.
.
.It Sy brt_zap_default_bs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
Default BRT ZAP data block size as a power of 2.
Note that changing this after creating a BRT on the pool will not affect
existing BRTs, only newly created ones.
.
.It Sy brt_zap_default_ibs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
Default BRT ZAP indirect block size as a power of 2.
Note that changing this after creating a BRT on the pool will not affect
existing BRTs, only newly created ones.
.
.It Sy ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP data block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP indirect block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
Default dnode block size as a power of 2.
.
.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
Default dnode indirect block size as a power of 2.
.
.It Sy zfs_dio_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable Direct I/O.
If this setting is 0, then all I/O requests will be directed through the ARC
acting as though the dataset property
.Sy direct
was set to
.Sy disabled .
.
.It Sy zfs_dio_strict Ns = Ns Sy 0 Ns | Ns 1 Pq int
Strictly enforce alignment for Direct I/O requests, returning
.Sy EINVAL
if not page-aligned instead of silently falling back to uncached I/O.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
Default lower limit for metaslab size.
.
.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
Default upper limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
Maximum ashift used when optimizing for logical \[->] physical sector size on
new top-level vdevs.
May be increased up to
.Sy ASHIFT_MAX Po 16 Pc ,
but this may negatively impact pool space efficiency.
.
.It Sy zfs_vdev_direct_write_verify Ns = Ns Sy Linux 1 | FreeBSD 0 Pq uint
If non-zero, then a Direct I/O write's checksum will be verified every
time the write is issued and before it is committed to the block pointer.
In the event the checksum is not valid then the I/O operation will return EIO.
This module parameter can be used to detect if the
contents of the user's buffer have changed in the process of doing a Direct I/O
write.
It can also help to identify if reported checksum errors are tied to Direct I/O
writes.
Each verify error causes a
.Sy dio_verify_wr
zevent.
Direct Write I/O checksum verify errors can be seen with
.Nm zpool Cm status Fl d .
The default value for this is 1 on Linux, but is 0 for
.Fx
because user pages can be placed under write protection in
.Fx
before the Direct I/O write is issued.
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_preload_limit Ns = Ns Sy 10 Pq uint
Maximum number of metaslabs per group to preload.
.
.It Sy metaslab_preload_pct Ns = Ns Sy 50 Pq uint
Percentage of CPUs to run a metaslab preload taskq.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
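.Pp
As a minimal illustration of the rule described for
.Sy metaslab_unload_delay No and Sy metaslab_unload_delay_ms
above (both thresholds must pass before a metaslab is unloaded), a Python
sketch; it is illustrative only and the sample since-last-use values are
assumptions:
.Bd -literal -offset Ds
# Illustrative only: a metaslab becomes eligible for unloading only after
# BOTH the TXG delay and the millisecond delay have elapsed since last use.
metaslab_unload_delay = 32         # TXGs (default)
metaslab_unload_delay_ms = 600000  # 10 minutes (default)

def may_unload(txgs_since_use, ms_since_use):
    return (txgs_since_use > metaslab_unload_delay and
            ms_since_use > metaslab_unload_delay_ms)

print(may_unload(txgs_since_use=100, ms_since_use=30000))   # False: only 30 s
print(may_unload(txgs_since_use=100, ms_since_use=700000))  # True: both passed
.Ed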
.It Sy raidz_expand_max_copy_bytes Ns = Ns Sy 160MB Pq ulong
Max amount of memory to use for RAID-Z expansion I/O.
This limits how much I/O can be outstanding at once.
.
.It Sy raidz_expand_max_reflow_bytes Ns = Ns Sy 0 Pq ulong
For testing, pause RAID-Z expansion when reflow amount reaches this value.
.
.It Sy raidz_io_aggregate_rows Ns = Ns Sy 4 Pq ulong
For expanded RAID-Z, aggregate reads that have more rows than this.
.
.It Sy reference_history Ns = Ns Sy 3 Pq int
Maximum reference holders being tracked when reference_tracking_enable is
active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool
import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
.
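.Pp
A worked Python example of the slop-space fraction described for
.Sy spa_slop_shift
above.
It is illustrative arithmetic only; the example pool size is an assumption,
and the sketch ignores any additional limits the implementation may apply.
.Bd -literal -offset Ds
# Illustrative only: the reserved "slop" fraction is 1 / 2**spa_slop_shift
# of the pool, i.e. about 3.2% with the default shift of 5.
spa_slop_shift = 5
pool_size = 10 * 2**40            # assumed 10 TiB pool, example only

slop_fraction = 1 / (1 << spa_slop_shift)
slop_bytes = pool_size >> spa_slop_shift

print(f"{slop_fraction:.3%}")                     # 3.125% (the ~3.2% above)
print(f"{slop_bytes / 2**30:.0f} GiB reserved")   # 320 GiB reserved
.Ed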
.It Sy spa_num_allocators Ns = Ns Sy 4 Pq int
Determines the number of block allocators to use per spa instance.
Capped by the number of actual CPUs in the system via
.Sy spa_cpus_per_allocator .
.Pp
Note that setting this value too high could result in performance
degradation and/or excess fragmentation.
The set value only applies to pools imported or created after the change.
.
.It Sy spa_cpus_per_allocator Ns = Ns Sy 4 Pq int
Determines the minimum number of CPUs required in the system for each block
allocator in a spa instance.
The set value only applies to pools imported or created after the change.
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
Maximum micro ZAP size.
A "micro" ZAP is upgraded to a "fat" ZAP once it grows beyond the specified
size.
Sizes higher than 128 KiB will be clamped to 128 KiB unless the
.Sy large_microzap
feature is enabled.
.
.It Sy zap_shrink_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, adjacent empty ZAP blocks will be collapsed, reducing disk space.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetch
issued since the last time has not completed in time to satisfy the demand
request, i.e. the prefetch depth did not cover the read latency or the pool
got saturated.
An illustrative sketch of this growth follows the
.Sy zfetch_max_reorder
entry below.
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_reorder Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Requests within this byte distance from the current prefetch stream position
are considered parts of the stream, reordered due to parallel processing.
Such requests do not advance the stream position immediately unless the
.Sy zfetch_hole_shift
fill threshold is reached, but are saved to fill holes in the stream later.
.
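.Pp
The following Python sketch illustrates the prefetch-distance growth described
for
.Sy zfetch_min_distance No and Sy zfetch_max_distance
above.
It is a simplification for illustration, not the actual zfetch code; the
starting demand access size and the decision to keep growing are assumptions.
.Bd -literal -offset Ds
# Illustrative only: the prefetch distance starts at the demand access size,
# doubles on each hit up to zfetch_min_distance, and may then grow by 1/8
# per hit (when prior prefetches did not complete in time), capped at
# zfetch_max_distance.
zfetch_min_distance = 4 * 2**20    # 4 MiB (default)
zfetch_max_distance = 64 * 2**20   # 64 MiB (default)

def next_distance(distance, prefetch_was_late):
    if distance < zfetch_min_distance:
        distance *= 2                       # fast ramp-up phase
    elif prefetch_was_late:
        distance += distance // 8           # slow growth phase
    return min(distance, zfetch_max_distance)

d = 128 * 1024                              # assumed 128 KiB demand access
for _ in range(8):
    d = next_distance(d, prefetch_was_late=True)
print(d)   # past zfetch_min_distance, still well below zfetch_max_distance
.Ed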
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Controls whether the ARC may use scatter/gather lists for its buffers;
when disabled, all allocations are made linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that the limit is instead computed as
.Sy zfs_arc_dnode_limit_percent
of the ARC meta buffers that may be used for dnodes.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage that can be consumed by dnodes of ARC meta buffers.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another
sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_evict_threads Ns = Ns Sy 0 Pq int
Sets the number of ARC eviction threads to be used.
.Pp
If set greater than 0, ZFS will dedicate up to that many threads to ARC
eviction.
Each thread will process one sub-list at a time,
until the eviction target is reached or all sub-lists have been processed.
When set to 0, ZFS will compute a reasonable number of eviction threads based
on the number of CPUs.
.TS
box;
lb l l .
	CPUs	Threads
_
	1-4	1
	5-8	2
	9-15	3
	16-31	4
	32-63	6
	64-95	8
	96-127	9
	128-160	11
	160-191	12
	192-223	13
	224-255	14
	256+	16
.TE
.Pp
More threads may improve the responsiveness of ZFS to memory pressure.
This can be important for performance when eviction from the ARC becomes
a bottleneck for reads and writes.
.Pp
This parameter can only be set at module load time.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a non-zero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
The larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
.
.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
Balance between metadata and data on ghost hits.
Values above 100 increase metadata caching by proportionally reducing effect
of ghost data hits on target data/metadata rate.
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to number of CPUs,
but that was not proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts the ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
Once started, the reclamation process continues until the ARC size returns
below the target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_ACTIVE_FILE
+
.Sy NR_INACTIVE_FILE ) ,
where that percent may exceed
.Sy 100 .
This only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 0 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
To reduce OOM risk, this limit is applied for kswapd reclaims only.
.Pp
For example, a value of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_shrinker_seeks Ns = Ns Sy 2 Pq int
Relative cost of ARC eviction on Linux, AKA number of seeks needed to
restore an evicted page.
Bigger values make ARC more precious and evictions smaller, compared to
other kernel subsystems.
Value of 4 means parity with page cache.
.
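.Pp
A worked Python example of the figure quoted for
.Sy zfs_arc_shrinker_limit
above; illustrative arithmetic only, not kernel code.
.Bd -literal -offset Ds
# Illustrative only: with a limit of 10000 pages and 4 KiB pages, and the
# kernel shrinker asking for up to about four times the limit per allocation
# attempt, the worst case per attempt is roughly the ~160 MiB quoted above.
zfs_arc_shrinker_limit = 10000     # pages, the example value from the text
page_size = 4096                   # 4 KiB pages
shrinker_multiplier = 4            # "up to about four times this"

worst_case_bytes = zfs_arc_shrinker_limit * page_size * shrinker_multiplier
print(f"{worst_case_bytes / 2**20:.0f} MiB")   # 156 MiB (~160 MiB)
.Ed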
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512 KiB No and Sy all_system_memory/64 .
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 10 Ns % Pq uint
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
Minimum percent of obsolete bytes in vdev mapping required to attempt to
condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete
mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_events_per_second Ns = Ns Sy 1 Ns /s Pq int
Rate limit deadman zevents (which report hung I/O operations) to this many per
second.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_dedup_log_flush_min_time_ms Ns = Ns Sy 1000 Ns Pq uint
Minimum time to spend on dedup log flush each transaction.
.Pp
At least this long will be spent flushing dedup log entries each transaction,
up to
.Sy zfs_txg_timeout .
This occurs even if doing so would delay the transaction, that is, even when
all other I/O completes in less than this time.
.
.It Sy zfs_dedup_log_flush_entries_min Ns = Ns Sy 100 Ns Pq uint
Flush at least this many entries each transaction.
.Pp
OpenZFS will flush a fraction of the log every TXG, to keep the size
proportional to the ingest rate (see
.Sy zfs_dedup_log_flush_txgs ) .
This sets the minimum for that estimate, which prevents the backlog from
completely draining if the ingest rate falls.
Raising it can force OpenZFS to flush more aggressively, reducing the backlog
to zero more quickly, but can make it less able to back off if log
flushing would compete with other IO too much.
.
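.Pp
The following Python sketch illustrates the per-TXG flush estimate described
for
.Sy zfs_dedup_log_flush_entries_min
above and for the related
.Sy zfs_dedup_log_flush_txgs No and Sy zfs_dedup_log_flush_entries_max
parameters below.
It is a simplified model, not the actual flushing code; the backlog sizes are
assumptions.
.Bd -literal -offset Ds
# Illustrative only: a simplified model of the per-TXG dedup log flush
# target: roughly backlog / zfs_dedup_log_flush_txgs entries per TXG,
# clamped between the configured minimum and maximum entry counts.
zfs_dedup_log_flush_txgs = 100               # default
zfs_dedup_log_flush_entries_min = 100        # default
zfs_dedup_log_flush_entries_max = 2**32 - 1  # default (UINT_MAX)

def flush_target(backlog_entries):
    estimate = backlog_entries // zfs_dedup_log_flush_txgs
    return max(zfs_dedup_log_flush_entries_min,
               min(estimate, zfs_dedup_log_flush_entries_max))

print(flush_target(1_000_000))   # 10000 entries this TXG
print(flush_target(5_000))       # 100: the minimum keeps the backlog draining
.Ed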
.It Sy zfs_dedup_log_flush_entries_max Ns = Ns Sy UINT_MAX Ns Pq uint
Flush at most this many entries each transaction.
.Pp
Mostly used for debugging purposes.
.
.It Sy zfs_dedup_log_flush_txgs Ns = Ns Sy 100 Ns Pq uint
Target number of TXGs to process the whole dedup log.
.Pp
Every TXG, OpenZFS will process the inverse of this number times the size
of the DDT backlog.
This will keep the backlog at a size roughly equal to the ingest rate
times this value.
This offers a balance between a more efficient DDT log, with better
aggregation, and shorter import times, which increase as the size of the
DDT log increases.
Increasing this value will result in a more efficient DDT log, but longer
import times.
.
.It Sy zfs_dedup_log_cap Ns = Ns Sy UINT_MAX Ns Pq uint
Soft cap for the size of the current dedup log.
.Pp
If the log is larger than this size, we increase the aggressiveness of
the flushing to try to bring it back down to the soft cap.
Setting it will reduce import times, but will reduce the efficiency of
the DDT log, increasing the expected number of IOs required to flush the same
amount of data.
.
.It Sy zfs_dedup_log_hard_cap Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Whether to treat the log cap as a firm cap or not.
.Pp
When set to 0 (the default), the
.Sy zfs_dedup_log_cap
will increase the maximum number of log entries we flush in a given txg.
This will bring the backlog size down towards the cap, but not at the expense
of making TXG syncs take longer.
If this is set to 1, the cap acts more like a hard cap than a soft cap; it will
also increase the minimum number of log entries we flush per TXG.
Enabling it will reduce worst-case import times, at the cost of increased TXG
sync times.
.
.It Sy zfs_dedup_log_flush_flow_rate_txgs Ns = Ns Sy 10 Ns Pq uint
Number of transactions to use to compute the flow rate.
.Pp
OpenZFS will estimate the number of entries changed (ingest rate), the number
of entries flushed (flush rate), and the time spent flushing (flush time rate),
and combine these into an overall "flow rate".
It will use an exponential weighted moving average over some number of recent
transactions to compute these rates.
This sets the number of transactions to compute these averages over.
Setting it higher can help to smooth out the flow rate in the face of spiky
workloads, but will take longer for the flow rate to adjust to a sustained
change in the ingress rate.
.
.It Sy zfs_dedup_log_txg_max Ns = Ns Sy 8 Ns Pq uint
Max transactions to accumulate before starting to flush dedup logs.
.Pp
OpenZFS maintains two dedup logs, one receiving new changes, one flushing.
If there is nothing to flush, it will accumulate changes for no more than this
many transactions before switching the logs and starting to flush entries out.
.
.It Sy zfs_dedup_log_mem_max Ns = Ns Sy 0 Ns Pq u64
Max memory to use for dedup logs.
.Pp
OpenZFS will spend no more than this much memory on maintaining the in-memory
dedup log.
Flushing will begin when around half this amount is being spent on logs.
The default value of
.Sy 0
will cause it to be set by
.Sy zfs_dedup_log_mem_max_percent
instead.
.
.It Sy zfs_dedup_log_mem_max_percent Ns = Ns Sy 1 Ns % Pq uint
Max memory to use for dedup logs, as a percentage of total memory.
.Pp
If
.Sy zfs_dedup_log_mem_max
is not set, it will be initialized as a percentage of the total memory in the
system.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
.
.It Sy zfs_dio_write_verify_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit Direct I/O write verify events to this many per second.
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay zevents (which report slow I/O operations) to this many per
second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more the amount of flushing increases, destroying
log blocks more quickly as they become obsolete faster, which leaves fewer
blocks to be read during import after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation that we are flushing all our metaslabs every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
It effectively limits maximum number of unflushed per-TXG spacemap logs
that need to be read after unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted
synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is
available.
This only applies on Linux.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_max_max Ns = Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
This parameter takes precedence over
.Sy zfs_dirty_data_max_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy min(physical_ram/4, 4GiB) ,
or
.Sy min(physical_ram/4, 1GiB)
for 32-bit systems.
.
.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed as a percentage of physical RAM.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
The parameter
.Sy zfs_dirty_data_max_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
Determines the dirty space limit, expressed as a percentage of all memory.
Once this limit is exceeded, new writes are halted until space frees up.
The parameter
.Sy zfs_dirty_data_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Subject to
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
Start syncing out a transaction group if there's at least this much dirty data
.Pq as a percentage of Sy zfs_dirty_data_max .
This should be less than
.Sy zfs_vdev_async_write_active_min_dirty_percent .
.
.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
Write operations are throttled when approaching the limit until log data is
cleared out after transaction group sync.
Because of some overhead, it should be set to at least 2 times the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput .
It also should be smaller than the size of the slog device if slog is present.
.Pp
Defaults to
.Sy zfs_dirty_data_max*2 .
.
.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
preallocated for a file in order to guarantee that later writes will not
run out of space.
Instead,
.Xr fallocate 2
space preallocation only checks that sufficient space is currently available
in the pool or the user's project quota allocation,
and then creates a sparse file of the requested size.
The requested space is multiplied by
.Sy zfs_fallocate_reserve_percent
to allow additional space for indirect blocks and other internal metadata.
Setting this to
.Sy 0
disables support for
.Xr fallocate 2
and causes it to return
.Sy EOPNOTSUPP .
.
.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
Select a fletcher 4 implementation.
.Pp
Supported selectors are:
.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
.No and Sy aarch64_neon .
All except
.Sy fastest No and Sy scalar
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of fletcher 4 are available, the
.Sy fastest
will be chosen using a micro benchmark.
Selecting
.Sy scalar
results in the original CPU-based calculation being used.
Selecting any option other than
.Sy fastest No or Sy scalar
results in vector instructions
from the respective CPU instruction set being used.
.
.It Sy zfs_bclone_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables access to the block cloning feature.
If this setting is 0, then even if feature@block_cloning is enabled,
using functions and system calls that attempt to clone blocks will act as
though the feature is disabled.
.
.It Sy zfs_bclone_wait_dirty Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set to 1 the FICLONE and FICLONERANGE ioctls wait for dirty data to be
written to disk.
This allows the clone operation to reliably succeed when a file is
modified and then immediately cloned.
1407For small files this may be slower than making a copy of the file. 1408Therefore, this setting defaults to 0 which causes a clone operation to 1409immediately fail when encountering a dirty block. 1410. 1411.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string 1412Select a BLAKE3 implementation. 1413.Pp 1414Supported selectors are: 1415.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 . 1416All except 1417.Sy cycle , fastest No and Sy generic 1418require instruction set extensions to be available, 1419and will only appear if ZFS detects that they are present at runtime. 1420If multiple implementations of BLAKE3 are available, the 1421.Sy fastest will be chosen using a micro benchmark. You can see the 1422benchmark results by reading this kstat file: 1423.Pa /proc/spl/kstat/zfs/chksum_bench . 1424. 1425.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 1426Enable/disable the processing of the free_bpobj object. 1427. 1428.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64 1429Maximum number of blocks freed in a single TXG. 1430. 1431.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64 1432Maximum number of dedup blocks freed in a single TXG. 1433. 1434.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint 1435Maximum asynchronous read I/O operations active to each device. 1436.No See Sx ZFS I/O SCHEDULER . 1437. 1438.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint 1439Minimum asynchronous read I/O operation active to each device. 1440.No See Sx ZFS I/O SCHEDULER . 1441. 1442.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint 1443When the pool has more than this much dirty data, use 1444.Sy zfs_vdev_async_write_max_active 1445to limit active async writes. 1446If the dirty data is between the minimum and maximum, 1447the active I/O limit is linearly interpolated. 1448.No See Sx ZFS I/O SCHEDULER . 1449. 1450.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint 1451When the pool has less than this much dirty data, use 1452.Sy zfs_vdev_async_write_min_active 1453to limit active async writes. 1454If the dirty data is between the minimum and maximum, 1455the active I/O limit is linearly 1456interpolated. 1457.No See Sx ZFS I/O SCHEDULER . 1458. 1459.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint 1460Maximum asynchronous write I/O operations active to each device. 1461.No See Sx ZFS I/O SCHEDULER . 1462. 1463.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint 1464Minimum asynchronous write I/O operations active to each device. 1465.No See Sx ZFS I/O SCHEDULER . 1466.Pp 1467Lower values are associated with better latency on rotational media but poorer 1468resilver performance. 1469The default value of 1470.Sy 2 1471was chosen as a compromise. 1472A value of 1473.Sy 3 1474has been shown to improve resilver performance further at a cost of 1475further increasing latency. 1476. 1477.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint 1478Maximum initializing I/O operations active to each device. 1479.No See Sx ZFS I/O SCHEDULER . 1480. 1481.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint 1482Minimum initializing I/O operations active to each device. 1483.No See Sx ZFS I/O SCHEDULER . 1484. 1485.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint 1486The maximum number of I/O operations active to each device. 1487Ideally, this will be at least the sum of each queue's 1488.Sy max_active . 1489.No See Sx ZFS I/O SCHEDULER . 1490. 
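.Pp
As a rough illustration only, and not an interface provided by the module,
the sum of the per-queue maxima can be compared against this aggregate
maximum.
The following sketch assumes the Linux
.Pa /sys/module/zfs/parameters
layout and the five I/O classes described in
.Sx ZFS I/O SCHEDULER :
.Bd -literal
#!/usr/bin/env python3
# Rough sanity check (assumption: Linux sysfs layout, zfs module loaded):
# compare zfs_vdev_max_active with the sum of the per-queue maxima.
from pathlib import Path

PARAMS = Path("/sys/module/zfs/parameters")
QUEUES = [
    "zfs_vdev_sync_read_max_active",
    "zfs_vdev_sync_write_max_active",
    "zfs_vdev_async_read_max_active",
    "zfs_vdev_async_write_max_active",
    "zfs_vdev_scrub_max_active",
]

def read_param(name):
    return int((PARAMS / name).read_text())

total = sum(read_param(q) for q in QUEUES)
aggregate = read_param("zfs_vdev_max_active")
print("sum of per-queue maxima:", total)
print("zfs_vdev_max_active:", aggregate)
if aggregate < total:
    print("note: aggregate maximum is below the sum of per-queue maxima")
.Ed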
1491.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint 1492Timeout value to wait before determining a device is missing 1493during import. 1494This is helpful for transient missing paths due 1495to links being briefly removed and recreated in response to 1496udev events. 1497. 1498.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint 1499Maximum sequential resilver I/O operations active to each device. 1500.No See Sx ZFS I/O SCHEDULER . 1501. 1502.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint 1503Minimum sequential resilver I/O operations active to each device. 1504.No See Sx ZFS I/O SCHEDULER . 1505. 1506.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint 1507Maximum removal I/O operations active to each device. 1508.No See Sx ZFS I/O SCHEDULER . 1509. 1510.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint 1511Minimum removal I/O operations active to each device. 1512.No See Sx ZFS I/O SCHEDULER . 1513. 1514.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint 1515Maximum scrub I/O operations active to each device. 1516.No See Sx ZFS I/O SCHEDULER . 1517. 1518.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint 1519Minimum scrub I/O operations active to each device. 1520.No See Sx ZFS I/O SCHEDULER . 1521. 1522.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint 1523Maximum synchronous read I/O operations active to each device. 1524.No See Sx ZFS I/O SCHEDULER . 1525. 1526.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint 1527Minimum synchronous read I/O operations active to each device. 1528.No See Sx ZFS I/O SCHEDULER . 1529. 1530.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint 1531Maximum synchronous write I/O operations active to each device. 1532.No See Sx ZFS I/O SCHEDULER . 1533. 1534.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint 1535Minimum synchronous write I/O operations active to each device. 1536.No See Sx ZFS I/O SCHEDULER . 1537. 1538.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint 1539Maximum trim/discard I/O operations active to each device. 1540.No See Sx ZFS I/O SCHEDULER . 1541. 1542.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint 1543Minimum trim/discard I/O operations active to each device. 1544.No See Sx ZFS I/O SCHEDULER . 1545. 1546.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint 1547For non-interactive I/O (scrub, resilver, removal, initialize and rebuild), 1548the number of concurrently-active I/O operations is limited to 1549.Sy zfs_*_min_active , 1550unless the vdev is "idle". 1551When there are no interactive I/O operations active (synchronous or otherwise), 1552and 1553.Sy zfs_vdev_nia_delay 1554operations have completed since the last interactive operation, 1555then the vdev is considered to be "idle", 1556and the number of concurrently-active non-interactive operations is increased to 1557.Sy zfs_*_max_active . 1558.No See Sx ZFS I/O SCHEDULER . 1559. 1560.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint 1561Some HDDs tend to prioritize sequential I/O so strongly, that concurrent 1562random I/O latency reaches several seconds. 1563On some HDDs this happens even if sequential I/O operations 1564are submitted one at a time, and so setting 1565.Sy zfs_*_max_active Ns = Sy 1 1566does not help. 1567To prevent non-interactive I/O, like scrub, 1568from monopolizing the device, no more than 1569.Sy zfs_vdev_nia_credit operations can be sent 1570while there are outstanding incomplete interactive operations. 
1571This enforced wait ensures the HDD services the interactive I/O 1572within a reasonable amount of time. 1573.No See Sx ZFS I/O SCHEDULER . 1574. 1575.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint 1576Defines if the driver should retire on a given error type. 1577The following options may be bitwise-ored together: 1578.TS 1579box; 1580lbz r l l . 1581 Value Name Description 1582_ 1583 1 Device No driver retries on device errors 1584 2 Transport No driver retries on transport errors. 1585 4 Driver No driver retries on driver errors. 1586.TE 1587. 1588.It Sy zfs_vdev_disk_max_segs Ns = Ns Sy 0 Pq uint 1589Maximum number of segments to add to a BIO (min 4). 1590If this is higher than the maximum allowed by the device queue or the kernel 1591itself, it will be clamped. 1592Setting it to zero will cause the kernel's ideal size to be used. 1593This parameter only applies on Linux. 1594. 1595.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int 1596Time before expiring 1597.Pa .zfs/snapshot . 1598. 1599.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int 1600Allow the creation, removal, or renaming of entries in the 1601.Sy .zfs/snapshot 1602directory to cause the creation, destruction, or renaming of snapshots. 1603When enabled, this functionality works both locally and over NFS exports 1604which have the 1605.Em no_root_squash 1606option set. 1607. 1608.It Sy zfs_snapshot_no_setuid Ns = Ns Sy 0 Ns | Ns 1 Pq int 1609Whether to disable 1610.Em setuid/setgid 1611support for snapshot mounts triggered by access to the 1612.Sy .zfs/snapshot 1613directory by setting the 1614.Em nosuid 1615mount option. 1616. 1617.It Sy zfs_flags Ns = Ns Sy 0 Pq int 1618Set additional debugging flags. 1619The following flags may be bitwise-ored together: 1620.TS 1621box; 1622lbz r l l . 1623 Value Name Description 1624_ 1625 1 ZFS_DEBUG_DPRINTF Enable dprintf entries in the debug log. 1626* 2 ZFS_DEBUG_DBUF_VERIFY Enable extra dbuf verifications. 1627* 4 ZFS_DEBUG_DNODE_VERIFY Enable extra dnode verifications. 1628 8 ZFS_DEBUG_SNAPNAMES Enable snapshot name verification. 1629* 16 ZFS_DEBUG_MODIFY Check for illegally modified ARC buffers. 1630 64 ZFS_DEBUG_ZIO_FREE Enable verification of block frees. 1631 128 ZFS_DEBUG_HISTOGRAM_VERIFY Enable extra spacemap histogram verifications. 1632 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP. 1633 512 ZFS_DEBUG_SET_ERROR Enable \fBSET_ERROR\fP and dprintf entries in the debug log. 1634 1024 ZFS_DEBUG_INDIRECT_REMAP Verify split blocks created by device removal. 1635 2048 ZFS_DEBUG_TRIM Verify TRIM ranges are always within the allocatable range tree. 1636 4096 ZFS_DEBUG_LOG_SPACEMAP Verify that the log summary is consistent with the spacemap log 1637 and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing. 1638 8192 ZFS_DEBUG_METASLAB_ALLOC Enable debugging messages when allocations fail. 1639 16384 ZFS_DEBUG_BRT Enable BRT-related debugging messages. 1640 32768 ZFS_DEBUG_RAIDZ_RECONSTRUCT Enabled debugging messages for raidz reconstruction. 1641 65536 ZFS_DEBUG_DDT Enable DDT-related debugging messages. 1642.TE 1643.Sy \& * No Requires debug build . 1644. 1645.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint 1646Enables btree verification. 1647The following settings are cumulative: 1648.TS 1649box; 1650lbz r l l . 1651 Value Description 1652 1653 1 Verify height. 1654 2 Verify pointers from children to parent. 1655 3 Verify element counts. 1656 4 Verify element order. 
(expensive) 1657* 5 Verify unused memory is poisoned. (expensive) 1658.TE 1659.Sy \& * No Requires debug build . 1660. 1661.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int 1662If destroy encounters an 1663.Sy EIO 1664while reading metadata (e.g. indirect blocks), 1665space referenced by the missing metadata can not be freed. 1666Normally this causes the background destroy to become "stalled", 1667as it is unable to make forward progress. 1668While in this stalled state, all remaining space to free 1669from the error-encountering filesystem is "temporarily leaked". 1670Set this flag to cause it to ignore the 1671.Sy EIO , 1672permanently leak the space from indirect blocks that can not be read, 1673and continue to free everything else that it can. 1674.Pp 1675The default "stalling" behavior is useful if the storage partially 1676fails (i.e. some but not all I/O operations fail), and then later recovers. 1677In this case, we will be able to continue pool operations while it is 1678partially failed, and when it recovers, we can continue to free the 1679space, with no leaks. 1680Note, however, that this case is actually fairly rare. 1681.Pp 1682Typically pools either 1683.Bl -enum -compact -offset 4n -width "1." 1684.It 1685fail completely (but perhaps temporarily, 1686e.g. due to a top-level vdev going offline), or 1687.It 1688have localized, permanent errors (e.g. disk returns the wrong data 1689due to bit flip or firmware bug). 1690.El 1691In the former case, this setting does not matter because the 1692pool will be suspended and the sync thread will not be able to make 1693forward progress regardless. 1694In the latter, because the error is permanent, the best we can do 1695is leak the minimum amount of space, 1696which is what setting this flag will do. 1697It is therefore reasonable for this flag to normally be set, 1698but we chose the more conservative approach of not setting it, 1699so that there is no possibility of 1700leaking space in the "partial temporary" failure case. 1701. 1702.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint 1703During a 1704.Nm zfs Cm destroy 1705operation using the 1706.Sy async_destroy 1707feature, 1708a minimum of this much time will be spent working on freeing blocks per TXG. 1709. 1710.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint 1711Similar to 1712.Sy zfs_free_min_time_ms , 1713but for cleanup of old indirection records for removed vdevs. 1714. 1715.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64 1716Largest data block to write to the ZIL. 1717Larger blocks will be treated as if the dataset being written to had the 1718.Sy logbias Ns = Ns Sy throughput 1719property set. 1720. 1721.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64 1722Pattern written to vdev free space by 1723.Xr zpool-initialize 8 . 1724. 1725.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 1726Size of writes used by 1727.Xr zpool-initialize 8 . 1728This option is used by the test suite. 1729. 1730.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64 1731The threshold size (in block pointers) at which we create a new sub-livelist. 1732Larger sublists are more costly from a memory perspective but the fewer 1733sublists there are, the lower the cost of insertion. 1734. 
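.Pp
As a rough worked example, assuming the 128-byte size of an on-disk block
pointer and ignoring in-memory bookkeeping overhead, the default threshold
corresponds to approximately:
.Bd -literal
500,000 block pointers \(mu 128 B = 64,000,000 B
(roughly 61 MiB of block pointers per sub-livelist)
.Ed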
.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
If the amount of shared space between a snapshot and its clone drops below
this threshold, the clone turns off the livelist and reverts to the old
deletion method.
This is in place because livelists no longer give us a benefit
once a clone has been overwritten enough.
.
.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
Incremented each time an extra ALLOC blkptr is added to a livelist entry while
it is being condensed.
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
Incremented each time livelist condensing is canceled while in
.Fn spa_livelist_condense_sync .
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set, the livelist condense process pauses indefinitely before
executing the synctask \(em
.Fn spa_livelist_condense_sync .
This option is used by the test suite to trigger race conditions.
.
.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
Incremented each time livelist condensing is canceled while in
.Fn spa_livelist_condense_cb .
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set, the livelist condense process pauses indefinitely before
executing the open context condensing work in
.Fn spa_livelist_condense_cb .
This option is used by the test suite to trigger race conditions.
.
.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
The maximum execution time limit that can be set for a ZFS channel program,
specified as a number of Lua instructions.
.
.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
The maximum memory limit that can be set for a ZFS channel program, specified
in bytes.
.
.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
The maximum depth of nested datasets.
This value can be tuned temporarily to
fix existing datasets that exceed the predefined limit.
.
.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64
The number of past TXGs that the flushing algorithm of the log spacemap
feature uses to estimate incoming log blocks.
.
.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64
Maximum number of rows allowed in the summary of the spacemap log.
.
.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint
We currently support block sizes from
.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc .
The benefits of larger blocks, and thus larger I/O,
need to be weighed against the cost of COWing a giant block to modify one byte.
Additionally, very large blocks can have an impact on I/O latency,
and also potentially on the memory allocator.
Therefore, we formerly forbade creating blocks larger than 1M.
Larger blocks can be created by raising this tunable,
and pools with larger blocks can always be imported and used,
regardless of this setting.
.Pp
Note that it is still limited by default to
.Ar 1 MiB
on x86_32, because Linux's
3/1 memory split doesn't leave much room for 16M chunks.
.
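.Pp
As an illustration only, raising this limit at runtime on Linux can be done
through the module parameter file, after which a larger
.Sy recordsize
may be set on datasets whose pool has the
.Sy large_blocks
feature enabled.
The sketch below assumes the
.Pa /sys/module/zfs/parameters
path and root privileges:
.Bd -literal
#!/usr/bin/env python3
# Illustrative sketch: raise zfs_max_recordsize to 4 MiB so that
# "zfs set recordsize=4M <dataset>" becomes permissible.
from pathlib import Path

param = Path("/sys/module/zfs/parameters/zfs_max_recordsize")
print("current limit:", param.read_text().strip(), "bytes")
param.write_text(str(4 * 1024 * 1024))  # 4 MiB
print("new limit:", param.read_text().strip(), "bytes")
.Ed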
1806.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int 1807Allow datasets received with redacted send/receive to be mounted. 1808Normally disabled because these datasets may be missing key data. 1809. 1810.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64 1811Minimum number of metaslabs to flush per dirty TXG. 1812. 1813.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 77 Ns % Pq uint 1814Allow metaslabs to keep their active state as long as their fragmentation 1815percentage is no more than this value. 1816An active metaslab that exceeds this threshold 1817will no longer keep its active status allowing better metaslabs to be selected. 1818. 1819.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint 1820Metaslab groups are considered eligible for allocations if their 1821fragmentation metric (measured as a percentage) is less than or equal to 1822this value. 1823If a metaslab group exceeds this threshold then it will be 1824skipped unless all metaslab groups within the metaslab class have also 1825crossed this threshold. 1826. 1827.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint 1828Defines a threshold at which metaslab groups should be eligible for allocations. 1829The value is expressed as a percentage of free space 1830beyond which a metaslab group is always eligible for allocations. 1831If a metaslab group's free space is less than or equal to the 1832threshold, the allocator will avoid allocating to that group 1833unless all groups in the pool have reached the threshold. 1834Once all groups have reached the threshold, all groups are allowed to accept 1835allocations. 1836The default value of 1837.Sy 0 1838disables the feature and causes all metaslab groups to be eligible for 1839allocations. 1840.Pp 1841This parameter allows one to deal with pools having heavily imbalanced 1842vdevs such as would be the case when a new vdev has been added. 1843Setting the threshold to a non-zero percentage will stop allocations 1844from being made to vdevs that aren't filled to the specified percentage 1845and allow lesser filled vdevs to acquire more allocations than they 1846otherwise would under the old 1847.Sy zfs_mg_alloc_failures 1848facility. 1849. 1850.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int 1851If enabled, ZFS will place DDT data into the special allocation class. 1852. 1853.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int 1854If enabled, ZFS will place user data indirect blocks 1855into the special allocation class. 1856. 1857.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint 1858Historical statistics for this many latest multihost updates will be available 1859in 1860.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost . 1861. 1862.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64 1863Used to control the frequency of multihost writes which are performed when the 1864.Sy multihost 1865pool property is on. 1866This is one of the factors used to determine the 1867length of the activity check during import. 1868.Pp 1869The multihost write period is 1870.Sy zfs_multihost_interval No / Sy leaf-vdevs . 1871On average a multihost write will be issued for each leaf vdev 1872every 1873.Sy zfs_multihost_interval 1874milliseconds. 1875In practice, the observed period can vary with the I/O load 1876and this observed value is the delay which is stored in the uberblock. 1877. 1878.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint 1879Used to control the duration of the activity test on import. 
1880Smaller values of 1881.Sy zfs_multihost_import_intervals 1882will reduce the import time but increase 1883the risk of failing to detect an active pool. 1884The total activity check time is never allowed to drop below one second. 1885.Pp 1886On import the activity check waits a minimum amount of time determined by 1887.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals , 1888or the same product computed on the host which last had the pool imported, 1889whichever is greater. 1890The activity check time may be further extended if the value of MMP 1891delay found in the best uberblock indicates actual multihost updates happened 1892at longer intervals than 1893.Sy zfs_multihost_interval . 1894A minimum of 1895.Em 100 ms 1896is enforced. 1897.Pp 1898.Sy 0 No is equivalent to Sy 1 . 1899. 1900.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint 1901Controls the behavior of the pool when multihost write failures or delays are 1902detected. 1903.Pp 1904When 1905.Sy 0 , 1906multihost write failures or delays are ignored. 1907The failures will still be reported to the ZED which depending on 1908its configuration may take action such as suspending the pool or offlining a 1909device. 1910.Pp 1911Otherwise, the pool will be suspended if 1912.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval 1913milliseconds pass without a successful MMP write. 1914This guarantees the activity test will see MMP writes if the pool is imported. 1915.Sy 1 No is equivalent to Sy 2 ; 1916this is necessary to prevent the pool from being suspended 1917due to normal, small I/O latency variations. 1918. 1919.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int 1920Set to disable scrub I/O. 1921This results in scrubs not actually scrubbing data and 1922simply doing a metadata crawl of the pool instead. 1923. 1924.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int 1925Set to disable block prefetching for scrubs. 1926. 1927.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int 1928Disable cache flush operations on disks when writing. 1929Setting this will cause pool corruption on power loss 1930if a volatile out-of-order write cache is enabled. 1931. 1932.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 1933Allow no-operation writes. 1934The occurrence of nopwrites will further depend on other pool properties 1935.Pq i.a. the checksumming and compression algorithms . 1936. 1937.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int 1938Enable forcing TXG sync to find holes. 1939When enabled forces ZFS to sync data when 1940.Sy SEEK_HOLE No or Sy SEEK_DATA 1941flags are used allowing holes in a file to be accurately reported. 1942When disabled holes will not be reported in recently dirtied files. 1943. 1944.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int 1945The number of bytes which should be prefetched during a pool traversal, like 1946.Nm zfs Cm send 1947or other data crawling operations. 1948. 1949.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint 1950The number of blocks pointed by indirect (non-L0) block which should be 1951prefetched during a pool traversal, like 1952.Nm zfs Cm send 1953or other data crawling operations. 1954. 1955.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64 1956Control percentage of dirtied indirect blocks from frees allowed into one TXG. 1957After this threshold is crossed, additional frees will wait until the next TXG. 1958.Sy 0 No disables this throttle . 1959. 
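.Pp
As a rough worked example, assuming the percentage is taken of
.Sy zfs_dirty_data_max
and that limit is 4 GiB, the default of 30% lets frees dirty roughly
1.2 GiB of indirect blocks in one TXG before further frees are pushed to the
next TXG:
.Bd -literal
0.30 \(mu 4 GiB = 1.2 GiB per TXG
.Ed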
1960.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1961Disable predictive prefetch. 1962Note that it leaves "prescient" prefetch 1963.Pq for, e.g., Nm zfs Cm send 1964intact. 1965Unlike predictive prefetch, prescient prefetch never issues I/O 1966that ends up not being needed, so it can't hurt performance. 1967. 1968.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1969Disable QAT hardware acceleration for SHA256 checksums. 1970May be unset after the ZFS modules have been loaded to initialize the QAT 1971hardware as long as support is compiled in and the QAT driver is present. 1972. 1973.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1974Disable QAT hardware acceleration for gzip compression. 1975May be unset after the ZFS modules have been loaded to initialize the QAT 1976hardware as long as support is compiled in and the QAT driver is present. 1977. 1978.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1979Disable QAT hardware acceleration for AES-GCM encryption. 1980May be unset after the ZFS modules have been loaded to initialize the QAT 1981hardware as long as support is compiled in and the QAT driver is present. 1982. 1983.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64 1984Bytes to read per chunk. 1985. 1986.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint 1987Historical statistics for this many latest reads will be available in 1988.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads . 1989. 1990.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int 1991Include cache hits in read history 1992. 1993.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 1994Maximum read segment size to issue when sequentially resilvering a 1995top-level vdev. 1996. 1997.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 1998Automatically start a pool scrub when the last active sequential resilver 1999completes in order to verify the checksums of all blocks which have been 2000resilvered. 2001This is enabled by default and strongly recommended. 2002. 2003.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64 2004Maximum amount of I/O that can be concurrently issued for a sequential 2005resilver per leaf device, given in bytes. 2006. 2007.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int 2008If an indirect split block contains more than this many possible unique 2009combinations when being reconstructed, consider it too computationally 2010expensive to check them all. 2011Instead, try at most this many randomly selected 2012combinations each time the block is accessed. 2013This allows all segment copies to participate fairly 2014in the reconstruction when all combinations 2015cannot be checked and prevents repeated use of one bad copy. 2016. 2017.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int 2018Set to attempt to recover from fatal errors. 2019This should only be used as a last resort, 2020as it typically results in leaked space, or worse. 2021. 2022.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int 2023Ignore hard I/O errors during device removal. 2024When set, if a device encounters a hard I/O error during the removal process 2025the removal will not be canceled. 2026This can result in a normally recoverable block becoming permanently damaged 2027and is hence not recommended. 2028This should only be used as a last resort when the 2029pool cannot be returned to a healthy state prior to removing the device. 2030. 
2031.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2032This is used by the test suite so that it can ensure that certain actions 2033happen while in the middle of a removal. 2034. 2035.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint 2036The largest contiguous segment that we will attempt to allocate when removing 2037a device. 2038If there is a performance problem with attempting to allocate large blocks, 2039consider decreasing this. 2040The default value is also the maximum. 2041. 2042.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int 2043Ignore the 2044.Sy resilver_defer 2045feature, causing an operation that would start a resilver to 2046immediately restart the one in progress. 2047. 2048.It Sy zfs_resilver_defer_percent Ns = Ns Sy 10 Ns % Pq uint 2049If the ongoing resilver progress is below this threshold, a new resilver will 2050restart from scratch instead of being deferred after the current one finishes, 2051even if the 2052.Sy resilver_defer 2053feature is enabled. 2054. 2055.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint 2056Resilvers are processed by the sync thread. 2057While resilvering, it will spend at least this much time 2058working on a resilver between TXG flushes. 2059. 2060.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int 2061If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub), 2062even if there were unrepairable errors. 2063Intended to be used during pool repair or recovery to 2064stop resilvering when the pool is next imported. 2065. 2066.It Sy zfs_scrub_after_expand Ns = Ns Sy 1 Ns | Ns 0 Pq int 2067Automatically start a pool scrub after a RAIDZ expansion completes 2068in order to verify the checksums of all blocks which have been 2069copied during the expansion. 2070This is enabled by default and strongly recommended. 2071. 2072.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint 2073Scrubs are processed by the sync thread. 2074While scrubbing, it will spend at least this much time 2075working on a scrub between TXG flushes. 2076. 2077.It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint 2078Error blocks to be scrubbed in one txg. 2079. 2080.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint 2081To preserve progress across reboots, the sequential scan algorithm periodically 2082needs to stop metadata scanning and issue all the verification I/O to disk. 2083The frequency of this flushing is determined by this tunable. 2084. 2085.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint 2086This tunable affects how scrub and resilver I/O segments are ordered. 2087A higher number indicates that we care more about how filled in a segment is, 2088while a lower number indicates we care more about the size of the extent without 2089considering the gaps within a segment. 2090This value is only tunable upon module insertion. 2091Changing the value afterwards will have no effect on scrub or resilver 2092performance. 2093. 2094.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint 2095Determines the order that data will be verified while scrubbing or resilvering: 2096.Bl -tag -compact -offset 4n -width "a" 2097.It Sy 1 2098Data will be verified as sequentially as possible, given the 2099amount of memory reserved for scrubbing 2100.Pq see Sy zfs_scan_mem_lim_fact . 2101This may improve scrub performance if the pool's data is very fragmented. 
.It Sy 2
The largest mostly-contiguous chunk of found data will be verified first.
By deferring scrubbing of small segments, we may later find adjacent data
to coalesce and increase the segment size.
.It Sy 0
.No Use strategy Sy 1 No during normal verification
.No and strategy Sy 2 No while taking a checkpoint .
.El
.
.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
If unset, indicates that scrubs and resilvers will gather metadata in
memory before issuing sequential I/O.
Otherwise indicates that the legacy algorithm will be used,
where I/O is initiated as soon as it is discovered.
Unsetting will not affect scrubs or resilvers that are already in progress.
.
.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
Sets the largest gap in bytes between scrub/resilver I/O operations
that will still be considered sequential for sorting purposes.
Changing this value will not
affect scrubs or resilvers that are already in progress.
.
.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
This tunable determines the hard limit for I/O sorting memory usage.
When the hard limit is reached we stop scanning metadata and start issuing
data verification I/O.
This is done until we get below the soft limit.
.
.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
The fraction of the hard limit used to determine the soft limit for I/O sorting
by the sequential scan algorithm.
When we cross this limit from below, no action is taken.
When we cross this limit from above, it is because we are issuing verification
I/O.
In this case (unless the metadata scan is done) we stop issuing verification I/O
and start scanning metadata again until we get to the hard limit.
.
.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When reporting resilver throughput and estimated completion time, use the
performance observed over roughly the last
.Sy zfs_scan_report_txgs
TXGs.
When set to zero, performance is calculated over the time between checkpoints.
.
.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enforce tight memory limits on pool scans when a sequential scan is in progress.
When disabled, the memory limit may be exceeded by fast disks.
.
.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
Freezes a scrub/resilver in progress without actually pausing it.
Intended for testing/debugging.
.
.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum amount of data that can be concurrently issued for scrubs and
resilvers per leaf device, given in bytes.
.
.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow sending of corrupt data (ignore read/checksum errors when sending).
.
.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
Include unmodified spill blocks in the send stream.
Under certain circumstances, previous versions of ZFS could incorrectly
remove the spill block from an existing object.
Including unmodified copies of the spill blocks creates a backwards-compatible
stream which will recreate a spill block if it was incorrectly removed.
.
.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm send
internal queues.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
The maximum number of bytes allowed in
.Nm zfs Cm send Ns 's
internal queues.
.
.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm send
prefetch queue.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes that will be prefetched by
.Nm zfs Cm send .
This value must be at least twice the maximum block size in use.
.
.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm receive
queue.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes allowed in the
.Nm zfs Cm receive
queue.
This value must be at least twice the maximum block size in use.
.
.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
The maximum amount of data, in bytes, that
.Nm zfs Cm receive
will write in one DMU transaction.
This is the uncompressed size, even when receiving a compressed send stream.
This setting will not reduce the write size below a single block.
Capped at a maximum of
.Sy 32 MiB .
.
.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
When this variable is set to non-zero, a corrective receive:
.Bl -enum -compact -offset 4n -width "1."
.It
Does not enforce the restriction of source & destination snapshot GUIDs
matching.
.It
If there is an error during healing, the healing receive is not
terminated; instead, it moves on to the next record.
.El
.
.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Setting this variable overrides the default logic for estimating block
sizes when doing a
.Nm zfs Cm send .
The default heuristic is that the average block size
will be the current recordsize.
Override this value if most data in your dataset is not of that size
and you require accurate zfs send size estimates.
.
.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
Flushing of data to disk is done in passes.
Defer frees starting in this pass.
.
.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum memory used for prefetching a checkpoint's space map on each
vdev while discarding the checkpoint.
.
.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint
Only allow small data blocks to be allocated on the special and dedup vdev
types when the available free space percentage on these vdevs exceeds this
value.
This ensures reserved space is available for pool metadata as the
special vdevs approach capacity.
.
.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint
Starting in this sync pass, disable compression (including of metadata).
With the default setting, in practice, we don't have this many sync passes,
so this has no effect.
.Pp
The original intent was that disabling compression would help the sync passes
to converge.
However, in practice, disabling compression increases
the average number of sync passes, because when we turn compression off,
many blocks' sizes will change, and thus we have to re-allocate
(not overwrite) them.
It also increases the number of
.Em 128 KiB
allocations (e.g. for indirect blocks and spacemaps)
because these will not be compressed.
The
.Em 128 KiB
allocations are especially detrimental to performance
on highly fragmented systems, which may have very few free segments of this
size,
and may need to load new metaslabs to satisfy these allocations.
.
.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint
Rewrite new block pointers starting in this pass.
.
.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Maximum size of TRIM command.
Larger ranges will be split into chunks no larger than this value before
issuing.
.
.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
Minimum size of TRIM commands.
TRIM ranges smaller than this will be skipped,
unless they're part of a larger range which was chunked.
This is done because it's common for these small TRIMs
to negatively impact overall performance.
.
.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Skip uninitialized metaslabs during the TRIM process.
This option is useful for pools constructed from large thinly-provisioned
devices
where TRIM operations are slow.
As a pool ages, an increasing fraction of the pool's metaslabs
will be initialized, progressively degrading the usefulness of this option.
This setting is stored when starting a manual TRIM and will
persist for the duration of the requested TRIM.
.
.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
Maximum number of queued TRIMs outstanding per leaf vdev.
The number of concurrent TRIM commands issued to the device is controlled by
.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
.
.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
The number of transaction groups' worth of frees which should be aggregated
before TRIM operations are issued to the device.
This setting represents a trade-off between issuing larger,
more efficient TRIM operations and the delay
before the recently trimmed space is available for use by the device.
.Pp
Increasing this value will allow frees to be aggregated for a longer time.
This will result in larger TRIM operations and potentially increased memory
usage.
Decreasing this value will have the opposite effect.
The default of
.Sy 32
was determined to be a reasonable compromise.
.
.It Sy zfs_txg_history Ns = Ns Sy 100 Pq uint
Historical statistics for this many latest TXGs will be available in
.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
.
.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
Flush dirty data to disk at least every this many seconds (maximum TXG
duration).
.
.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
Max vdev I/O aggregation size.
.
.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
Max vdev I/O aggregation size for non-rotating media.
.
2328.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int 2329A number by which the balancing algorithm increments the load calculation for 2330the purpose of selecting the least busy mirror member when an I/O operation 2331immediately follows its predecessor on rotational vdevs 2332for the purpose of making decisions based on load. 2333. 2334.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int 2335A number by which the balancing algorithm increments the load calculation for 2336the purpose of selecting the least busy mirror member when an I/O operation 2337lacks locality as defined by 2338.Sy zfs_vdev_mirror_rotating_seek_offset . 2339Operations within this that are not immediately following the previous operation 2340are incremented by half. 2341. 2342.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int 2343The maximum distance for the last queued I/O operation in which 2344the balancing algorithm considers an operation to have locality. 2345.No See Sx ZFS I/O SCHEDULER . 2346. 2347.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int 2348A number by which the balancing algorithm increments the load calculation for 2349the purpose of selecting the least busy mirror member on non-rotational vdevs 2350when I/O operations do not immediately follow one another. 2351. 2352.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int 2353A number by which the balancing algorithm increments the load calculation for 2354the purpose of selecting the least busy mirror member when an I/O operation 2355lacks 2356locality as defined by the 2357.Sy zfs_vdev_mirror_rotating_seek_offset . 2358Operations within this that are not immediately following the previous operation 2359are incremented by half. 2360. 2361.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint 2362Aggregate read I/O operations if the on-disk gap between them is within this 2363threshold. 2364. 2365.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint 2366Aggregate write I/O operations if the on-disk gap between them is within this 2367threshold. 2368. 2369.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string 2370Select the raidz parity implementation to use. 2371.Pp 2372Variants that don't depend on CPU-specific features 2373may be selected on module load, as they are supported on all systems. 2374The remaining options may only be set after the module is loaded, 2375as they are available only if the implementations are compiled in 2376and supported on the running system. 2377.Pp 2378Once the module is loaded, 2379.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl 2380will show the available options, 2381with the currently selected one enclosed in square brackets. 2382.Pp 2383.TS 2384lb l l . 2385fastest selected by built-in benchmark 2386original original implementation 2387scalar scalar implementation 2388sse2 SSE2 instruction set 64-bit x86 2389ssse3 SSSE3 instruction set 64-bit x86 2390avx2 AVX2 instruction set 64-bit x86 2391avx512f AVX512F instruction set 64-bit x86 2392avx512bw AVX512F & AVX512BW instruction sets 64-bit x86 2393aarch64_neon NEON Aarch64/64-bit ARMv8 2394aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8 2395powerpc_altivec Altivec PowerPC 2396.TE 2397. 2398.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint 2399Max event queue length. 2400Events in the queue can be viewed with 2401.Xr zpool-events 8 . 2402. 
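.Pp
As an aside to the
.Sy zfs_vdev_raidz_impl
entry above, the bracketed-selection format it describes can be parsed with
a short sketch like the following, assuming the Linux
.Pa /sys/module/zfs/parameters
layout:
.Bd -literal
#!/usr/bin/env python3
# Illustrative sketch: report the available and currently selected raidz
# implementations, e.g. from a value such as "[fastest] original scalar sse2".
from pathlib import Path

raw = Path("/sys/module/zfs/parameters/zfs_vdev_raidz_impl").read_text().split()
available = [word.strip("[]") for word in raw]
selected = next(word.strip("[]") for word in raw if word.startswith("["))
print("available:", ", ".join(available))
print("selected:", selected)
.Ed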
2403.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int 2404Maximum recent zevent records to retain for duplicate checking. 2405Setting this to 2406.Sy 0 2407disables duplicate detection. 2408. 2409.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int 2410Lifespan for a recent ereport that was retained for duplicate checking. 2411. 2412.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int 2413The maximum number of taskq entries that are allowed to be cached. 2414When this limit is exceeded transaction records (itxs) 2415will be cleaned synchronously. 2416. 2417.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int 2418The number of taskq entries that are pre-populated when the taskq is first 2419created and are immediately available for use. 2420. 2421.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int 2422This controls the number of threads used by 2423.Sy dp_zil_clean_taskq . 2424The default value of 2425.Sy 100% 2426will create a maximum of one thread per CPU. 2427. 2428.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint 2429This sets the maximum block size used by the ZIL. 2430On very fragmented pools, lowering this 2431.Pq typically to Sy 36 KiB 2432can improve performance. 2433. 2434.It Sy zil_maxcopied Ns = Ns Sy 7680 Ns B Po 7.5 KiB Pc Pq uint 2435This sets the maximum number of write bytes logged via WR_COPIED. 2436It tunes a tradeoff between additional memory copy and possibly worse log 2437space efficiency vs additional range lock/unlock. 2438. 2439.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int 2440Disable the cache flush commands that are normally sent to disk by 2441the ZIL after an LWB write has completed. 2442Setting this will cause ZIL corruption on power loss 2443if a volatile out-of-order write cache is enabled. 2444. 2445.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 2446Disable intent logging replay. 2447Can be disabled for recovery from corrupted ZIL. 2448. 2449.It Sy zil_slog_bulk Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64 2450Limit SLOG write size per commit executed with synchronous priority. 2451Any writes above that will be executed with lower (asynchronous) priority 2452to limit potential SLOG device abuse by single active ZIL writer. 2453. 2454.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int 2455Setting this tunable to zero disables ZIL logging of new 2456.Sy xattr Ns = Ns Sy sa 2457records if the 2458.Sy org.openzfs:zilsaxattr 2459feature is enabled on the pool. 2460This would only be necessary to work around bugs in the ZIL logging or replay 2461code for this record type. 2462The tunable has no effect if the feature is disabled. 2463. 2464.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint 2465Usually, one metaslab from each normal-class vdev is dedicated for use by 2466the ZIL to log synchronous writes. 2467However, if there are fewer than 2468.Sy zfs_embedded_slog_min_ms 2469metaslabs in the vdev, this functionality is disabled. 2470This ensures that we don't set aside an unreasonable amount of space for the 2471ZIL. 2472. 2473.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint 2474Whether heuristic for detection of incompressible data with zstd levels >= 3 2475using LZ4 and zstd-1 passes is enabled. 2476. 2477.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint 2478Minimal uncompressed size (inclusive) of a record before the early abort 2479heuristic will be attempted. 2480. 
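.Pp
A rough sketch of the idea behind these two tunables, using hypothetical
placeholder compressors in place of the in-kernel LZ4 and zstd code:
.Bd -literal
# Illustrative only; the real logic lives in the kernel zstd wrapper.
ZSTD_ABORT_SIZE = 128 * 1024   # mirrors the zstd_abort_size default

def compress_record(data, level, lz4_compress, zstd_compress):
    # Early abort: for zstd levels >= 3 and records at least
    # ZSTD_ABORT_SIZE bytes long, try the cheap passes first.  If neither
    # LZ4 nor zstd-1 can shrink the data, treat it as incompressible and
    # skip the expensive high-level zstd pass.
    if level >= 3 and len(data) >= ZSTD_ABORT_SIZE:
        if len(lz4_compress(data)) >= len(data):
            if len(zstd_compress(data, 1)) >= len(data):
                return data    # store uncompressed
    return zstd_compress(data, level)
.Ed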
.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
If non-zero, the zio deadman will produce debugging messages
.Pq see Sy zfs_dbgmsg_enable
for all zios, rather than only for leaf zios possessing a vdev.
This is meant to be used by developers to gain
diagnostic information for hang conditions which don't involve a mutex
or other locking primitive: typically conditions in which a thread in
the zio pipeline is looping indefinitely.
.
.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
When an I/O operation takes more than this much time to complete,
it's marked as slow.
Each slow operation causes a delay zevent.
Slow I/O counters can be seen with
.Nm zpool Cm status Fl s .
.
.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Throttle block allocations in the I/O pipeline.
This allows for dynamic allocation distribution based on device performance.
.
.It Sy zfs_xattr_compat Ns = Ns Sy 0 Ns | Ns 1 Pq int
Control the naming scheme used when setting new xattrs in the user namespace.
If
.Sy 0
.Pq the default on Linux ,
user namespace xattr names are prefixed with the namespace, to be backwards
compatible with previous versions of ZFS on Linux.
If
.Sy 1
.Pq the default on Fx ,
user namespace xattr names are not prefixed, to be backwards compatible with
previous versions of ZFS on illumos and
.Fx .
.Pp
Either naming scheme can be read on this and future versions of ZFS, regardless
of this tunable, but legacy ZFS on illumos or
.Fx
is unable to read user namespace xattrs written in the Linux format, and
legacy versions of ZFS on Linux are unable to read user namespace xattrs written
in the legacy ZFS format.
.Pp
An existing xattr with the alternate naming scheme is removed when overwriting
the xattr so as to not accumulate duplicates.
.
.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prioritize requeued I/O.
.
.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
Percentage of online CPUs which will run a worker thread for I/O.
These workers are responsible for I/O work such as compression, encryption,
checksum and parity calculations.
A fractional number of CPUs will be rounded down.
.Pp
The default value of
.Sy 80%
was chosen to avoid using all CPUs, which can result in
latency issues and inconsistent application performance,
especially when slower compression and/or checksumming is enabled.
The set value only applies to pools imported or created afterwards.
.
.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
Number of worker threads per taskq.
Higher values improve I/O ordering and CPU utilization,
while lower values reduce lock contention.
The set value only applies to pools imported or created afterwards.
.Pp
If
.Sy 0 ,
generate a system-dependent value close to 6 threads per taskq.
The set value only applies to pools imported or created afterwards.
.
.It Sy zio_taskq_write_tpq Ns = Ns Sy 16 Pq uint
Determines the minimum number of threads per write issue taskq.
Higher values improve CPU utilization on high throughput,
while lower values reduce taskq lock contention on high IOPS.
The set value only applies to pools imported or created afterwards.
.
.It Sy zio_taskq_read Ns = Ns Sy fixed,1,8 null scale null Pq charp
Set the queue and thread configuration for the IO read queues.
This is an advanced debugging parameter.
Don't change this unless you understand what it does.
Set values only apply to pools imported or created afterwards.
.
.It Sy zio_taskq_write Ns = Ns Sy sync null scale null Pq charp
Set the queue and thread configuration for the IO write queues.
This is an advanced debugging parameter.
Don't change this unless you understand what it does.
Set values only apply to pools imported or created afterwards.
.
.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Do not create zvol device nodes.
This may slightly improve startup time on
systems with a very large number of zvols.
.
.It Sy zvol_major Ns = Ns Sy 230 Pq uint
Major number for zvol block devices.
.
.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
Discard (TRIM) operations done on zvols will be done in batches of this
many blocks, where block size is determined by the
.Sy volblocksize
property of a zvol.
.
.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
When adding a zvol to the system, prefetch this many bytes
from the start and end of the volume.
Prefetching these regions of the volume is desirable,
because they are likely to be accessed immediately by
.Xr blkid 8
or the kernel partitioner.
.
.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When processing I/O requests for a zvol, submit them synchronously.
This effectively limits the queue depth to
.Em 1
for each I/O submitter.
When unset, requests are handled asynchronously by a thread pool.
The number of requests which can be handled concurrently is controlled by
.Sy zvol_threads .
.Sy zvol_request_sync
is ignored when running on a kernel that supports block multiqueue
.Pq Li blk-mq .
.
.It Sy zvol_num_taskqs Ns = Ns Sy 0 Pq uint
Number of zvol taskqs.
If
.Sy 0
(the default) then scaling is done internally to prefer 6 threads per taskq.
This only applies on Linux.
.
.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
The number of system-wide threads to use for processing zvol block IOs.
If
.Sy 0
(the default) then internally set
.Sy zvol_threads
to the number of CPUs present or 32 (whichever is greater).
.
.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
The number of threads per zvol to use for queuing IO requests.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only read and assigned to a zvol at zvol load time.
If
.Sy 0
(the default) then internally set
.Sy zvol_blk_mq_threads
to the number of CPUs present.
.
.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Set to
.Sy 1
to use the
.Li blk-mq
API for zvols.
Set to
.Sy 0
(the default) to use the legacy zvol APIs.
This setting can give better or worse zvol performance depending on
the workload.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only read and assigned to a zvol at zvol load time.
.
.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
If
.Sy zvol_use_blk_mq
is enabled, then process this number of
.Sy volblocksize Ns -sized blocks per zvol thread.
This tunable can be used to favor better performance for zvol reads (lower
values) or writes (higher values).
2652If set to 2653.Sy 0 , 2654then the zvol layer will process the maximum number of blocks 2655per thread that it can. 2656This parameter will only appear if your kernel supports 2657.Li blk-mq 2658and is only applied at each zvol's load time. 2659. 2660.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint 2661The queue_depth value for the zvol 2662.Li blk-mq 2663interface. 2664This parameter will only appear if your kernel supports 2665.Li blk-mq 2666and is only applied at each zvol's load time. 2667If 2668.Sy 0 2669(the default) then use the kernel's default queue depth. 2670Values are clamped to the kernel's 2671.Dv BLKDEV_MIN_RQ 2672and 2673.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ 2674limits. 2675. 2676.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint 2677Defines zvol block devices behavior when 2678.Sy volmode Ns = Ns Sy default : 2679.Bl -tag -compact -offset 4n -width "a" 2680.It Sy 1 2681.No equivalent to Sy full 2682.It Sy 2 2683.No equivalent to Sy dev 2684.It Sy 3 2685.No equivalent to Sy none 2686.El 2687. 2688.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2689Enable strict ZVOL quota enforcement. 2690The strict quota enforcement may have a performance impact. 2691.El 2692. 2693.Sh ZFS I/O SCHEDULER 2694ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O operations. 2695The scheduler determines when and in what order those operations are issued. 2696The scheduler divides operations into five I/O classes, 2697prioritized in the following order: sync read, sync write, async read, 2698async write, and scrub/resilver. 2699Each queue defines the minimum and maximum number of concurrent operations 2700that may be issued to the device. 2701In addition, the device has an aggregate maximum, 2702.Sy zfs_vdev_max_active . 2703Note that the sum of the per-queue minima must not exceed the aggregate maximum. 2704If the sum of the per-queue maxima exceeds the aggregate maximum, 2705then the number of active operations may reach 2706.Sy zfs_vdev_max_active , 2707in which case no further operations will be issued, 2708regardless of whether all per-queue minima have been met. 2709.Pp 2710For many physical devices, throughput increases with the number of 2711concurrent operations, but latency typically suffers. 2712Furthermore, physical devices typically have a limit 2713at which more concurrent operations have no 2714effect on throughput or can actually cause it to decrease. 2715.Pp 2716The scheduler selects the next operation to issue by first looking for an 2717I/O class whose minimum has not been satisfied. 2718Once all are satisfied and the aggregate maximum has not been hit, 2719the scheduler looks for classes whose maximum has not been satisfied. 2720Iteration through the I/O classes is done in the order specified above. 2721No further operations are issued 2722if the aggregate maximum number of concurrent operations has been hit, 2723or if there are no operations queued for an I/O class that has not hit its 2724maximum. 2725Every time an I/O operation is queued or an operation completes, 2726the scheduler looks for new operations to issue. 2727.Pp 2728In general, smaller 2729.Sy max_active Ns s 2730will lead to lower latency of synchronous operations. 2731Larger 2732.Sy max_active Ns s 2733may lead to higher overall throughput, depending on underlying storage. 2734.Pp 2735The ratio of the queues' 2736.Sy max_active Ns s 2737determines the balance of performance between reads, writes, and scrubs. 
.Pp
In general, smaller
.Sy max_active Ns s
will lead to lower latency of synchronous operations.
Larger
.Sy max_active Ns s
may lead to higher overall throughput, depending on underlying storage.
.Pp
The ratio of the queues'
.Sy max_active Ns s
determines the balance of performance between reads, writes, and scrubs.
For example, increasing
.Sy zfs_vdev_scrub_max_active
will cause the scrub or resilver to complete more quickly,
but reads and writes to have higher latency and lower throughput.
.Pp
All I/O classes have a fixed maximum number of outstanding operations,
except for the async write class.
Asynchronous writes represent the data that is committed to stable storage
during the syncing stage for transaction groups.
Transaction groups enter the syncing state periodically,
so the number of queued async writes will quickly burst up
and then bleed down to zero.
Rather than servicing them as quickly as possible,
the I/O scheduler changes the maximum number of active async write operations
according to the amount of dirty data in the pool.
Since both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of simultaneous operations also stabilizes the
response time of operations from other queues, in particular synchronous ones.
In broad strokes, the I/O scheduler will issue more concurrent operations
from the async write queue as there is more dirty data in the pool.
.
.Ss Async Writes
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points:
.Bd -literal
       |              o---------| <-- \fBzfs_vdev_async_write_max_active\fP
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- \fBzfs_vdev_async_write_min_active\fP
      0|_______^______|_________|
       0%      |      |           100% of \fBzfs_dirty_data_max\fP
               |      |
               |      `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
               `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
.Ed
.Pp
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum.
As that threshold is crossed, the number of concurrent operations issued
increases linearly to the maximum at the specified maximum percentage
of the dirty data allowed in the pool.
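.Pp
The following compilable sketch evaluates that piece-wise linear function.
It is illustrative only and not the module's code; the minimum and maximum of
1 and 10 operations and the 30%/60% break points are example values chosen
for the sketch, not documented defaults.
.Bd -literal
/* Illustrative only: not the OpenZFS implementation. */
#include <stdio.h>

/*
 * Scale the number of concurrent async writes between min_active and
 * max_active as dirty data grows from the min_pct to the max_pct
 * break points (percentages of dirty_max).
 */
static long long
async_write_active(long long dirty, long long dirty_max,
    long long min_active, long long max_active, int min_pct, int max_pct)
{
        long long lo = dirty_max * min_pct / 100;
        long long hi = dirty_max * max_pct / 100;

        if (dirty <= lo)
                return (min_active);
        if (dirty >= hi)
                return (max_active);
        /* Linear interpolation between the two break points. */
        return (min_active +
            (max_active - min_active) * (dirty - lo) / (hi - lo));
}

int
main(void)
{
        long long dirty_max = 1000000000;  /* example zfs_dirty_data_max */

        for (int pct = 0; pct <= 100; pct += 10) {
                printf("%3d%% dirty -> %lld concurrent async writes",
                    pct, async_write_active(dirty_max * pct / 100,
                    dirty_max, 1, 10, 30, 60));
                puts("");
        }
        return (0);
}
.Ed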
.Pp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between
.Sy zfs_vdev_async_write_active_min_dirty_percent
and
.Sy zfs_vdev_async_write_active_max_dirty_percent .
If it exceeds the maximum percentage,
this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle.
In this case, we must further throttle incoming writes,
as described in the next section.
.
.Sh ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.Pp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting.
This way the calculated delay time
is independent of the number of threads concurrently executing transactions.
.Pp
If we are the only waiter, wait relative to when the transaction started,
rather than the current time.
This credits the transaction for "time already served",
e.g. reading indirect blocks.
.Pp
The minimum time for a transaction to take is calculated as
.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
.Pp
The delay has two degrees of freedom that can be adjusted via tunables.
The percentage of dirty data at which we start to delay is defined by
.Sy zfs_delay_min_dirty_percent .
This should typically be at or above
.Sy zfs_vdev_async_write_active_max_dirty_percent ,
so that we only start to delay after writing at full speed
has failed to keep up with the incoming write rate.
The scale of the curve is defined by
.Sy zfs_delay_scale .
Roughly speaking, this variable determines the amount of delay at the midpoint
of the curve.
.Bd -literal
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                            * +
      |                                                            * |
  4ms +                                                            * +
      |                                                            * |
  3ms +                                                            * +
      |                                                            * |
  2ms +                                              (midpoint)   *  +
      |                                              |           **  |
  1ms +                                              v         ***   +
      |             \fBzfs_delay_scale\fP ---------->      ********        |
    0 +-------------------------------------*********----------------+
      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
.Ed
.Pp
Note that, since the delay is added to the outstanding time remaining on the
most recent transaction, it is effectively the inverse of IOPS.
Here, the midpoint of
.Em 500 us
translates to
.Em 2000 IOPS .
The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first three quarters of the curve yield relatively small differences
in the amount of delay.
.Pp
The effects can be easier to understand when the amount of delay is
represented on a logarithmic scale:
.Bd -literal
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                           ** +
      |                                              (midpoint)  **  |
      +                                              |        **     +
  1ms +                                              v    ****       +
      +             \fBzfs_delay_scale\fP ---------->    *****            +
      |                                        ****                  |
      +                                    ****                      +
100us +                                  **                          +
      +                                 *                            +
      |                                *                             |
      +                               *                              +
 10us +                              *                               +
      +                                                              +
      |                                                              |
      +                                                              +
      +--------------------------------------------------------------+
      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
.Ed
.Pp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly.
The goal of a properly tuned system should be to keep the amount of dirty data
out of that range by first ensuring that the appropriate limits are set
for the I/O scheduler to reach optimal throughput on the back-end storage,
and then by changing the value of
.Sy zfs_delay_scale
to increase the steepness of the curve.
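.Pp
As a worked example, the following compilable sketch evaluates the min_time
formula above across a range of dirty data levels.
It is illustrative only and not the module's code; the 500000 ns scale,
the 60% delay threshold, and the 1 GB
.Sy zfs_dirty_data_max
are assumed purely for illustration.
At the midpoint of that range the computed delay is 500 us, which corresponds
to roughly 2000 IOPS per waiting thread.
.Bd -literal
/* Illustrative only: not the OpenZFS implementation. */
#include <stdio.h>

#define NS_100MS        100000000LL

/* min_time = min(scale * (dirty - min) / (max - dirty), 100 ms) */
static long long
tx_delay_ns(long long dirty, long long min_bytes, long long max_bytes,
    long long scale_ns)
{
        long long d;

        if (dirty <= min_bytes)
                return (0);
        if (dirty >= max_bytes)
                return (NS_100MS);
        d = scale_ns * (dirty - min_bytes) / (max_bytes - dirty);
        return (d < NS_100MS ? d : NS_100MS);
}

int
main(void)
{
        long long max = 1000000000;     /* example zfs_dirty_data_max: 1 GB */
        long long min = max * 60 / 100; /* start delaying at 60% dirty */

        for (int pct = 60; pct <= 95; pct += 5) {
                long long ns = tx_delay_ns(max * pct / 100, min, max, 500000);

                printf("%3d%% dirty -> %9lld ns delay", pct, ns);
                if (ns > 0)
                        printf(" (~%lld IOPS per waiter)", 1000000000LL / ns);
                puts("");
        }
        return (0);
}
.Ed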