1.\" SPDX-License-Identifier: CDDL-1.0
2.\"
3.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
4.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
5.\" Copyright (c) 2019 Datto Inc.
6.\" Copyright (c) 2023, 2024, 2025, Klara, Inc.
7.\" The contents of this file are subject to the terms of the Common Development
8.\" and Distribution License (the "License").  You may not use this file except
9.\" in compliance with the License. You can obtain a copy of the license at
10.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
11.\"
12.\" See the License for the specific language governing permissions and
13.\" limitations under the License. When distributing Covered Code, include this
14.\" CDDL HEADER in each file and include the License file at
15.\" usr/src/OPENSOLARIS.LICENSE.  If applicable, add the following below this
16.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
17.\" own identifying information:
18.\" Portions Copyright [yyyy] [name of copyright owner]
19.\"
20.Dd May 24, 2025
21.Dt ZFS 4
22.Os
23.
24.Sh NAME
25.Nm zfs
26.Nd tuning of the ZFS kernel module
27.
28.Sh DESCRIPTION
29The ZFS module supports these parameters:
30.Bl -tag -width Ds
31.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
32Maximum size in bytes of the dbuf cache.
33The target size is determined by the MIN versus
34.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
35of the target ARC size.
36The behavior of the dbuf cache and its associated settings
37can be observed via the
38.Pa /proc/spl/kstat/zfs/dbufstats
39kstat.
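.Pp
As an illustrative sketch, with a target ARC size of 4 GiB and the default
.Sy dbuf_cache_shift Ns = Ns Sy 5 ,
the dbuf cache target works out to 4 GiB / 2^5 = 128 MiB.
The live values can be inspected with:
.Bd -literal -compact
# cat /proc/spl/kstat/zfs/dbufstats
.Ed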
.
.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
Maximum size in bytes of the metadata dbuf cache.
The target size is determined by the MIN versus
.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
of the target ARC size.
The behavior of the metadata dbuf cache and its associated settings
can be observed via the
.Pa /proc/spl/kstat/zfs/dbufstats
kstat.
.
.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint
The percentage over
.Sy dbuf_cache_max_bytes
when dbufs must be evicted directly.
.
.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint
The percentage below
.Sy dbuf_cache_max_bytes
when the evict thread stops evicting dbufs.
.
.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint
Set the size of the dbuf cache
.Pq Sy dbuf_cache_max_bytes
to a log2 fraction of the target ARC size.
.
.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint
Set the size of the dbuf metadata cache
.Pq Sy dbuf_metadata_cache_max_bytes
to a log2 fraction of the target ARC size.
.
.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint
Set the size of the mutex array for the dbuf cache.
When set to
.Sy 0
the array is dynamically sized based on total system memory.
.
.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint
Dnode slots allocated in a single operation, as a power of 2.
The default value minimizes lock contention for the bulk operation performed.
.
.It Sy dmu_ddt_copies Ns = Ns Sy 3 Pq uint
Controls the number of copies stored for DeDup Table
.Pq DDT
objects.
Reducing the number of copies to 1 from the previous default of 3
can reduce the write inflation caused by deduplication.
This assumes redundancy for this data is provided by the vdev layer.
If the DDT is damaged, space may be leaked
.Pq not freed
when the DDT can not report the correct reference count.
.
.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Limit the amount we can prefetch with one call to this amount in bytes.
This helps to limit the amount of memory that can be used by prefetching.
.
.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
Turbo L2ARC warm-up.
When the L2ARC is cold the fill interval will be set as fast as possible.
.
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and only applicable in related situations.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 8 Pq u64
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Ns | Ns 2 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is 0,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature (setting it to 0), some MRU buffers will
still be present in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Setting it to 1 means to L2 cache only MFU data and metadata.
.Pp
Setting it to 2 means to L2 cache all metadata (MRU+MFU) but
only MFU data (i.e. MRU data are not cached).
This can be the right setting to cache as much metadata as possible,
even with high data turnover.
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
Max write bytes per interval.
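.Pp
As a rough sketch of the defaults, a cold cache device may receive up to
.Sy l2arc_write_max No + Sy l2arc_write_boost
per feed interval:
.Bd -literal -compact
32 MiB + 32 MiB = 64 MiB per l2arc_feed_secs (1 s) while cold,
32 MiB per interval once warm.
.Ed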
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq u64
Metaslab group's per child vdev allocation granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each child
of a top-level vdev before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab groups biasing based on their over- or under-utilization
relative to the metaslab class average.
If disabled, each metaslab group will receive allocations proportional to its
capacity.
.
.It Sy metaslab_perf_bias Ns = Ns Sy 1 Ns | Ns 0 Ns | Ns 2 Pq int
Controls metaslab groups biasing based on their write performance.
Setting to 0 makes all metaslab groups receive fixed amounts of allocations.
Setting to 2 allows faster metaslab groups to allocate more.
Setting to 1 behaves as 2 if the pool is write-bound, or as 0 otherwise.
That is, if the pool is limited by write throughput, then allocate more from
faster metaslab groups, but if not, try to evenly distribute the allocations.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy metaslab_force_ganging_pct Ns = Ns Sy 3 Ns % Pq uint
For blocks that could be forced to be a gang block (due to
.Sy metaslab_force_ganging ) ,
force this many of them to be gang blocks.
.
.It Sy brt_zap_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Controls prefetching BRT records for blocks which are going to be cloned.
.
.It Sy brt_zap_default_bs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
Default BRT ZAP data block size as a power of 2.
Note that changing this after creating a BRT on the pool will not affect
existing BRTs, only newly created ones.
.
.It Sy brt_zap_default_ibs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
Default BRT ZAP indirect block size as a power of 2.
Note that changing this after creating a BRT on the pool will not affect
existing BRTs, only newly created ones.
.
.It Sy ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP data block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP indirect block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
Default dnode block size as a power of 2.
.
.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
Default dnode indirect block size as a power of 2.
.
.It Sy zfs_dio_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable Direct I/O.
If this setting is 0, then all I/O requests will be directed through the ARC
acting as though the dataset property
.Sy direct
was set to
.Sy disabled .
.
.It Sy zfs_dio_strict Ns = Ns Sy 0 Ns | Ns 1 Pq int
Strictly enforce alignment for Direct I/O requests, returning
.Sy EINVAL
if not page-aligned instead of silently falling back to uncached I/O.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
Default lower limit for metaslab size.
.
.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
Default upper limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
Maximum ashift used when optimizing for logical \[->] physical sector size on
new top-level vdevs.
May be increased up to
.Sy ASHIFT_MAX Po 16 Pc ,
but this may negatively impact pool space efficiency.
.
.It Sy zfs_vdev_direct_write_verify Ns = Ns Sy Linux 1 | FreeBSD 0 Pq uint
If non-zero, then a Direct I/O write's checksum will be verified every
time the write is issued and before it is committed to the block pointer.
In the event the checksum is not valid then the I/O operation will return EIO.
This module parameter can be used to detect if the contents of the user's
buffer have changed in the process of doing a Direct I/O write.
It can also help to identify if reported checksum errors are tied to Direct I/O
writes.
Each verify error causes a
.Sy dio_verify_wr
zevent.
Direct Write I/O checksum verify errors can be seen with
.Nm zpool Cm status Fl d .
The default value for this is 1 on Linux, but is 0 for
.Fx
because user pages can be placed under write protection in
.Fx
before the Direct I/O write is issued.
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_preload_limit Ns = Ns Sy 10 Pq uint
Maximum number of metaslabs per group to preload.
.
.It Sy metaslab_preload_pct Ns = Ns Sy 50 Pq uint
Percentage of CPUs to run a metaslab preload taskq.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy reference_history Ns = Ns Sy 3 Pq uint
Maximum reference holders being tracked when reference_tracking_enable is
active.
.
.It Sy raidz_expand_max_copy_bytes Ns = Ns Sy 160MB Pq ulong
Max amount of memory to use for RAID-Z expansion I/O.
This limits how much I/O can be outstanding at once.
.
.It Sy raidz_expand_max_reflow_bytes Ns = Ns Sy 0 Pq ulong
For testing, pause RAID-Z expansion when reflow amount reaches this value.
.
.It Sy raidz_io_aggregate_rows Ns = Ns Sy 4 Pq ulong
For expanded RAID-Z, aggregate reads that have more rows than this.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool
import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
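.Pp
As a worked example with the default shift of 5, a 1 TiB pool reserves
roughly:
.Bd -literal -compact
1 TiB / 2^5 = 32 GiB of slop space (about 3.2%),
.Ed
leaving about 992 GiB consumable before ZPL operations return ENOSPC.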
.
.It Sy spa_num_allocators Ns = Ns Sy 4 Pq int
Determines the number of block allocators to use per spa instance.
Capped by the number of actual CPUs in the system via
.Sy spa_cpus_per_allocator .
.Pp
Note that setting this value too high could result in performance
degradation and/or excess fragmentation.
The set value only applies to pools imported or created afterwards.
.
.It Sy spa_cpus_per_allocator Ns = Ns Sy 4 Pq int
Determines the minimum number of CPUs in the system required for each block
allocator of a spa instance.
The set value only applies to pools imported or created afterwards.
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
Maximum micro ZAP size.
A "micro" ZAP is upgraded to a "fat" ZAP once it grows beyond the specified
size.
Sizes higher than 128KiB will be clamped to 128KiB unless the
.Sy large_microzap
feature is enabled.
.
.It Sy zap_shrink_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, adjacent empty ZAP blocks will be collapsed, reducing disk space.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
since last time have not completed in time to satisfy the demand request, i.e.
the prefetch depth did not cover the read latency or the pool got saturated.
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_reorder Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Requests within this byte distance from the current prefetch stream position
are considered parts of the stream, reordered due to parallel processing.
Such requests do not advance the stream position immediately unless the
.Sy zfetch_hole_shift
fill threshold is reached, but are saved to fill holes in the stream later.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables the ARC to use scatter/gather lists;
when disabled, all allocations are forced to be linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that a percentage of the ARC meta buffers, based on
.Sy zfs_arc_dnode_limit_percent ,
may be used for dnodes.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage that can be consumed by dnodes of ARC meta buffers.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
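.Pp
The stated footprint follows from simple arithmetic; as a sketch:
.Bd -literal -compact
1 GiB memory / 8 KiB average block = 131072 buffers
131072 hash table entries * 8 B per pointer = 1 MiB per GiB
.Ed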
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another
sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_evict_threads Ns = Ns Sy 0 Pq int
Sets the number of ARC eviction threads to be used.
.Pp
If set greater than 0, ZFS will dedicate up to that many threads to ARC
eviction.
Each thread will process one sub-list at a time,
until the eviction target is reached or all sub-lists have been processed.
When set to 0, ZFS will compute a reasonable number of eviction threads based
on the number of CPUs.
.TS
box;
lb l l .
	CPUs	Threads
_
	1-4	1
	5-8	2
	9-15	3
	16-31	4
	32-63	6
	64-95	8
	96-127	9
	128-159	11
	160-191	12
	192-223	13
	224-255	14
	256+	16
.TE
.Pp
More threads may improve the responsiveness of ZFS to memory pressure.
This can be important for performance when eviction from the ARC becomes
a bottleneck for reads and writes.
.Pp
This parameter can only be set at module load time.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a non-zero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
The larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
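.Pp
For example, to cap the ARC at 4 GiB at runtime (the
.Fx
sysctl name below is given as an assumption; check
.Xr sysctl 8
on your release):
.Bd -literal -compact
# echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max    (Linux)
# sysctl vfs.zfs.arc_max=4294967296                           (FreeBSD)
.Ed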
.
.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
Balance between metadata and data on ghost hits.
Values above 100 increase metadata caching by proportionally reducing effect
of ghost data hits on target data/metadata rate.
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to number of CPUs,
but that was not proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
Once started, the reclamation process continues until the ARC size returns
below the target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_ACTIVE_FILE
+
.Sy NR_INACTIVE_FILE ) ,
where that percent may exceed
.Sy 100 .
This
only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 0 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
To reduce OOM risk, this limit is applied for kswapd reclaims only.
.Pp
For example a value of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_shrinker_seeks Ns = Ns Sy 2 Pq int
Relative cost of ARC eviction on Linux, AKA the number of seeks needed to
restore an evicted page.
Bigger values make the ARC more precious and evictions smaller, compared to
other kernel subsystems.
A value of 4 means parity with the page cache.
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512 KiB No and Sy all_system_memory/64 .
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 10 Ns % Pq uint
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
Minimum percent of obsolete bytes in vdev mapping required to attempt to
condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete
mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_events_per_second Ns = Ns Sy 1 Ns /s Pq int
Rate limit deadman zevents (which report hung I/O operations) to this many per
second.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_dedup_log_flush_min_time_ms Ns = Ns Sy 1000 Pq uint
Minimum time to spend on dedup log flush each transaction.
.Pp
At least this long will be spent flushing dedup log entries each transaction,
up to
.Sy zfs_txg_timeout .
This occurs even if doing so would delay the transaction, that is, even if
other I/O completes within this time.
.
.It Sy zfs_dedup_log_flush_entries_min Ns = Ns Sy 100 Pq uint
Flush at least this many entries each transaction.
.Pp
OpenZFS will flush a fraction of the log every TXG, to keep the size
proportional to the ingest rate (see
.Sy zfs_dedup_log_flush_txgs ) .
This sets the minimum for that estimate, which prevents the backlog from
completely draining if the ingest rate falls.
Raising it can force OpenZFS to flush more aggressively, reducing the backlog
to zero more quickly, but can make it less able to back off if log
flushing would compete with other IO too much.
.
.It Sy zfs_dedup_log_flush_entries_max Ns = Ns Sy UINT_MAX Pq uint
Flush at most this many entries each transaction.
.Pp
Mostly used for debugging purposes.
.
.It Sy zfs_dedup_log_flush_txgs Ns = Ns Sy 100 Pq uint
Target number of TXGs to process the whole dedup log.
.Pp
Every TXG, OpenZFS will process the inverse of this number times the size
of the DDT backlog.
This will keep the backlog at a size roughly equal to the ingest rate
times this value.
This offers a balance between a more efficient DDT log, with better
aggregation, and shorter import times, which increase as the size of the
DDT log increases.
Increasing this value will result in a more efficient DDT log, but longer
import times.
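.Pp
As a worked sketch of the default value of 100: with a backlog of 1000000
entries, each TXG flushes about
.Bd -literal -compact
1000000 / 100 = 10000 entries,
.Ed
keeping the backlog near 100 TXGs' worth of ingest.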
.
.It Sy zfs_dedup_log_cap Ns = Ns Sy UINT_MAX Pq uint
Soft cap for the size of the current dedup log.
.Pp
If the log is larger than this size, we increase the aggressiveness of
the flushing to try to bring it back down to the soft cap.
Setting it will reduce import times, but will reduce the efficiency of
the DDT log, increasing the expected number of IOs required to flush the same
amount of data.
.
.It Sy zfs_dedup_log_hard_cap Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Whether to treat the log cap as a firm cap or not.
.Pp
When set to 0 (the default), the
.Sy zfs_dedup_log_cap
will increase the maximum number of log entries we flush in a given txg.
This will bring the backlog size down towards the cap, but not at the expense
of making TXG syncs take longer.
If this is set to 1, the cap acts more like a hard cap than a soft cap; it will
also increase the minimum number of log entries we flush per TXG.
Enabling it will reduce worst-case import times, at the cost of increased TXG
sync times.
.
.It Sy zfs_dedup_log_flush_flow_rate_txgs Ns = Ns Sy 10 Pq uint
Number of transactions to use to compute the flow rate.
.Pp
OpenZFS will estimate the number of entries changed (ingest rate), the number
of entries flushed (flush rate) and the time spent flushing (flush time rate),
and combine these into an overall "flow rate".
It will use an exponential weighted moving average over some number of recent
transactions to compute these rates.
This sets the number of transactions to compute these averages over.
Setting it higher can help to smooth out the flow rate in the face of spiky
workloads, but will take longer for the flow rate to adjust to a sustained
change in the ingest rate.
.
.It Sy zfs_dedup_log_txg_max Ns = Ns Sy 8 Pq uint
Max transactions to accumulate before starting to flush dedup logs.
.Pp
OpenZFS maintains two dedup logs, one receiving new changes, one flushing.
If there is nothing to flush, it will accumulate changes for no more than this
many transactions before switching the logs and starting to flush entries out.
.
.It Sy zfs_dedup_log_mem_max Ns = Ns Sy 0 Pq u64
Max memory to use for dedup logs.
.Pp
OpenZFS will spend no more than this much memory on maintaining the in-memory
dedup log.
Flushing will begin when around half this amount is being spent on logs.
The default value of
.Sy 0
will cause it to be set by
.Sy zfs_dedup_log_mem_max_percent
instead.
.
.It Sy zfs_dedup_log_mem_max_percent Ns = Ns Sy 1 Ns % Pq uint
Max memory to use for dedup logs, as a percentage of total memory.
.Pp
If
.Sy zfs_dedup_log_mem_max
is not set, it will be initialized as a percentage of the total memory in the
system.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
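.Pp
As a worked example, the default of 500000 corresponds to a pool that can
sustain about 2000 operations per second:
.Bd -literal -compact
zfs_delay_scale = 1000000000 / 2000 = 500000
.Ed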
.
.It Sy zfs_dio_write_verify_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit Direct I/O write verify events to this many per second.
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay zevents (which report slow I/O operations) to this many per
second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more the amount of flushing increases, destroying
log blocks quicker as they become obsolete faster, which leaves fewer blocks
to be read during import time after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation that we are flushing all our metaslabs every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
It effectively limits maximum number of unflushed per-TXG spacemap logs
that need to be read after unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted
synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is
available.
This only applies on Linux.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
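.Pp
As a worked example of the defaults on a machine with 64 GiB of RAM:
.Bd -literal -compact
physical_ram / 10 = 6.4 GiB, capped at
zfs_dirty_data_max_max = min(64 GiB / 4, 4 GiB) = 4 GiB
.Ed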
1297.
1298.It Sy zfs_dirty_data_max_max Ns = Pq int
1299Maximum allowable value of
1300.Sy zfs_dirty_data_max ,
1301expressed in bytes.
1302This limit is only enforced at module load time, and will be ignored if
1303.Sy zfs_dirty_data_max
1304is later changed.
1305This parameter takes precedence over
1306.Sy zfs_dirty_data_max_max_percent .
1307.No See Sx ZFS TRANSACTION DELAY .
1308.Pp
1309Defaults to
1310.Sy min(physical_ram/4, 4GiB) ,
1311or
1312.Sy min(physical_ram/4, 1GiB)
1313for 32-bit systems.
1314.
1315.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
1316Maximum allowable value of
1317.Sy zfs_dirty_data_max ,
1318expressed as a percentage of physical RAM.
1319This limit is only enforced at module load time, and will be ignored if
1320.Sy zfs_dirty_data_max
1321is later changed.
1322The parameter
1323.Sy zfs_dirty_data_max_max
1324takes precedence over this one.
1325.No See Sx ZFS TRANSACTION DELAY .
1326.
1327.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
1328Determines the dirty space limit, expressed as a percentage of all memory.
1329Once this limit is exceeded, new writes are halted until space frees up.
1330The parameter
1331.Sy zfs_dirty_data_max
1332takes precedence over this one.
1333.No See Sx ZFS TRANSACTION DELAY .
1334.Pp
1335Subject to
1336.Sy zfs_dirty_data_max_max .
1337.
1338.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
1339Start syncing out a transaction group if there's at least this much dirty data
1340.Pq as a percentage of Sy zfs_dirty_data_max .
1341This should be less than
1342.Sy zfs_vdev_async_write_active_min_dirty_percent .
1343.
1344.It Sy zfs_wrlog_data_max Ns = Pq int
1345The upper limit of write-transaction ZIL log data size in bytes.
1346Write operations are throttled when approaching the limit until log data is
1347cleared out after transaction group sync.
Because of some overhead, it should be set to at least twice the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput .
1351It also should be smaller than the size of the slog device if slog is present.
1352.Pp
1353Defaults to
.Sy zfs_dirty_data_max*2 .
1355.
1356.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
1357Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1358preallocated for a file in order to guarantee that later writes will not
1359run out of space.
1360Instead,
1361.Xr fallocate 2
1362space preallocation only checks that sufficient space is currently available
1363in the pool or the user's project quota allocation,
1364and then creates a sparse file of the requested size.
1365The requested space is multiplied by
1366.Sy zfs_fallocate_reserve_percent
1367to allow additional space for indirect blocks and other internal metadata.
1368Setting this to
1369.Sy 0
1370disables support for
1371.Xr fallocate 2
1372and causes it to return
1373.Sy EOPNOTSUPP .
1374.
1375.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
1376Select a fletcher 4 implementation.
1377.Pp
1378Supported selectors are:
1379.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
1380.No and Sy aarch64_neon .
1381All except
1382.Sy fastest No and Sy scalar
1383require instruction set extensions to be available,
1384and will only appear if ZFS detects that they are present at runtime.
1385If multiple implementations of fletcher 4 are available, the
1386.Sy fastest
1387will be chosen using a micro benchmark.
1388Selecting
1389.Sy scalar
1390results in the original CPU-based calculation being used.
1391Selecting any option other than
1392.Sy fastest No or Sy scalar
1393results in vector instructions
1394from the respective CPU instruction set being used.
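.Pp
For example, to inspect the available implementations and force the portable
one at runtime
(an illustrative sketch; the Linux module-parameter path is assumed, and the
listed options vary with hardware):
.Bd -literal -compact
# cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
[fastest] scalar sse2 ssse3 avx2
# echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl
.Ed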
1395.
1396.It Sy zfs_bclone_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1397Enables access to the block cloning feature.
1398If this setting is 0, then even if feature@block_cloning is enabled,
1399using functions and system calls that attempt to clone blocks will act as
1400though the feature is disabled.
1401.
1402.It Sy zfs_bclone_wait_dirty Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set to 1, the FICLONE and FICLONERANGE ioctls wait for dirty data to be
1404written to disk.
1405This allows the clone operation to reliably succeed when a file is
1406modified and then immediately cloned.
1407For small files this may be slower than making a copy of the file.
1408Therefore, this setting defaults to 0 which causes a clone operation to
1409immediately fail when encountering a dirty block.
1410.
1411.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
1412Select a BLAKE3 implementation.
1413.Pp
1414Supported selectors are:
1415.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
1416All except
1417.Sy cycle , fastest No and Sy generic
1418require instruction set extensions to be available,
1419and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of BLAKE3 are available, the
.Sy fastest
will be chosen using a micro benchmark.
You can see the benchmark results by reading this kstat file:
1423.Pa /proc/spl/kstat/zfs/chksum_bench .
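.Pp
For example
(illustrative; the available implementations vary with hardware):
.Bd -literal -compact
# echo sse41 > /sys/module/zfs/parameters/zfs_blake3_impl
# cat /proc/spl/kstat/zfs/chksum_bench
.Ed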
1424.
1425.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1426Enable/disable the processing of the free_bpobj object.
1427.
1428.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
1429Maximum number of blocks freed in a single TXG.
1430.
1431.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
1432Maximum number of dedup blocks freed in a single TXG.
1433.
1434.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
1435Maximum asynchronous read I/O operations active to each device.
1436.No See Sx ZFS I/O SCHEDULER .
1437.
1438.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
Minimum asynchronous read I/O operations active to each device.
1440.No See Sx ZFS I/O SCHEDULER .
1441.
1442.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
1443When the pool has more than this much dirty data, use
1444.Sy zfs_vdev_async_write_max_active
1445to limit active async writes.
1446If the dirty data is between the minimum and maximum,
1447the active I/O limit is linearly interpolated.
1448.No See Sx ZFS I/O SCHEDULER .
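.Pp
For example, with the default thresholds of 30% and 60% and the default
limits of 2 and 10 active writes, a pool that is 45% dirty sits halfway
between the thresholds, so roughly (2 + 10)/2 = 6 asynchronous writes may be
active per device.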
1449.
1450.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
1451When the pool has less than this much dirty data, use
1452.Sy zfs_vdev_async_write_min_active
1453to limit active async writes.
1454If the dirty data is between the minimum and maximum,
1455the active I/O limit is linearly
1456interpolated.
1457.No See Sx ZFS I/O SCHEDULER .
1458.
1459.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
1460Maximum asynchronous write I/O operations active to each device.
1461.No See Sx ZFS I/O SCHEDULER .
1462.
1463.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
1464Minimum asynchronous write I/O operations active to each device.
1465.No See Sx ZFS I/O SCHEDULER .
1466.Pp
1467Lower values are associated with better latency on rotational media but poorer
1468resilver performance.
1469The default value of
1470.Sy 2
1471was chosen as a compromise.
1472A value of
1473.Sy 3
1474has been shown to improve resilver performance further at a cost of
1475further increasing latency.
1476.
1477.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
1478Maximum initializing I/O operations active to each device.
1479.No See Sx ZFS I/O SCHEDULER .
1480.
1481.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
1482Minimum initializing I/O operations active to each device.
1483.No See Sx ZFS I/O SCHEDULER .
1484.
1485.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
1486The maximum number of I/O operations active to each device.
1487Ideally, this will be at least the sum of each queue's
1488.Sy max_active .
1489.No See Sx ZFS I/O SCHEDULER .
1490.
1491.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
1492Timeout value to wait before determining a device is missing
1493during import.
1494This is helpful for transient missing paths due
1495to links being briefly removed and recreated in response to
1496udev events.
1497.
1498.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
1499Maximum sequential resilver I/O operations active to each device.
1500.No See Sx ZFS I/O SCHEDULER .
1501.
1502.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
1503Minimum sequential resilver I/O operations active to each device.
1504.No See Sx ZFS I/O SCHEDULER .
1505.
1506.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
1507Maximum removal I/O operations active to each device.
1508.No See Sx ZFS I/O SCHEDULER .
1509.
1510.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
1511Minimum removal I/O operations active to each device.
1512.No See Sx ZFS I/O SCHEDULER .
1513.
1514.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
1515Maximum scrub I/O operations active to each device.
1516.No See Sx ZFS I/O SCHEDULER .
1517.
1518.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
1519Minimum scrub I/O operations active to each device.
1520.No See Sx ZFS I/O SCHEDULER .
1521.
1522.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
1523Maximum synchronous read I/O operations active to each device.
1524.No See Sx ZFS I/O SCHEDULER .
1525.
1526.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
1527Minimum synchronous read I/O operations active to each device.
1528.No See Sx ZFS I/O SCHEDULER .
1529.
1530.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
1531Maximum synchronous write I/O operations active to each device.
1532.No See Sx ZFS I/O SCHEDULER .
1533.
1534.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
1535Minimum synchronous write I/O operations active to each device.
1536.No See Sx ZFS I/O SCHEDULER .
1537.
1538.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
1539Maximum trim/discard I/O operations active to each device.
1540.No See Sx ZFS I/O SCHEDULER .
1541.
1542.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
1543Minimum trim/discard I/O operations active to each device.
1544.No See Sx ZFS I/O SCHEDULER .
1545.
1546.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
1547For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1548the number of concurrently-active I/O operations is limited to
1549.Sy zfs_*_min_active ,
1550unless the vdev is "idle".
1551When there are no interactive I/O operations active (synchronous or otherwise),
1552and
1553.Sy zfs_vdev_nia_delay
1554operations have completed since the last interactive operation,
1555then the vdev is considered to be "idle",
1556and the number of concurrently-active non-interactive operations is increased to
1557.Sy zfs_*_max_active .
1558.No See Sx ZFS I/O SCHEDULER .
1559.
1560.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
Some HDDs tend to prioritize sequential I/O so strongly that concurrent
1562random I/O latency reaches several seconds.
1563On some HDDs this happens even if sequential I/O operations
1564are submitted one at a time, and so setting
1565.Sy zfs_*_max_active Ns = Sy 1
1566does not help.
1567To prevent non-interactive I/O, like scrub,
1568from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit
operations can be sent
1570while there are outstanding incomplete interactive operations.
1571This enforced wait ensures the HDD services the interactive I/O
1572within a reasonable amount of time.
1573.No See Sx ZFS I/O SCHEDULER .
1574.
1575.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
1576Defines if the driver should retire on a given error type.
1577The following options may be bitwise-ored together:
1578.TS
1579box;
1580lbz r l l .
1581	Value	Name	Description
1582_
	1	Device	No driver retries on device errors.
1584	2	Transport	No driver retries on transport errors.
1585	4	Driver	No driver retries on driver errors.
1586.TE
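.Pp
For example, to disable driver retries on both device and transport errors,
bitwise-or the values (1 | 2 = 3); the Linux module-parameter path shown is
illustrative:
.Bd -literal -compact
# echo 3 > /sys/module/zfs/parameters/zfs_vdev_failfast_mask
.Ed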
1587.
1588.It Sy zfs_vdev_disk_max_segs Ns = Ns Sy 0 Pq uint
1589Maximum number of segments to add to a BIO (min 4).
1590If this is higher than the maximum allowed by the device queue or the kernel
1591itself, it will be clamped.
1592Setting it to zero will cause the kernel's ideal size to be used.
1593This parameter only applies on Linux.
1594This parameter is ignored if
1595.Sy zfs_vdev_disk_classic Ns = Ns Sy 1 .
1596.
1597.It Sy zfs_vdev_disk_classic Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1598If set to 1, OpenZFS will submit IO to Linux using the method it used in 2.2
1599and earlier.
1600This "classic" method has known issues with highly fragmented IO requests and
1601is slower on many workloads, but it has been in use for many years and is known
1602to be very stable.
If you set this parameter, please also open a bug report describing why you
did so, including the workload involved and any error messages.
1605.Pp
1606This parameter and the classic submission method will be removed once we have
1607total confidence in the new method.
1608.Pp
1609This parameter only applies on Linux, and can only be set at module load time.
1610.
1611.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
1612Time before expiring
1613.Pa .zfs/snapshot .
1614.
1615.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
1616Allow the creation, removal, or renaming of entries in the
1617.Sy .zfs/snapshot
1618directory to cause the creation, destruction, or renaming of snapshots.
1619When enabled, this functionality works both locally and over NFS exports
1620which have the
1621.Em no_root_squash
1622option set.
1623.
1624.It Sy zfs_snapshot_no_setuid Ns = Ns Sy 0 Ns | Ns 1 Pq int
1625Whether to disable
1626.Em setuid/setgid
1627support for snapshot mounts triggered by access to the
1628.Sy .zfs/snapshot
1629directory by setting the
1630.Em nosuid
1631mount option.
1632.
1633.It Sy zfs_flags Ns = Ns Sy 0 Pq int
1634Set additional debugging flags.
1635The following flags may be bitwise-ored together:
1636.TS
1637box;
1638lbz r l l .
1639	Value	Name	Description
1640_
1641	1	ZFS_DEBUG_DPRINTF	Enable dprintf entries in the debug log.
1642*	2	ZFS_DEBUG_DBUF_VERIFY	Enable extra dbuf verifications.
1643*	4	ZFS_DEBUG_DNODE_VERIFY	Enable extra dnode verifications.
1644	8	ZFS_DEBUG_SNAPNAMES	Enable snapshot name verification.
1645*	16	ZFS_DEBUG_MODIFY	Check for illegally modified ARC buffers.
1646	64	ZFS_DEBUG_ZIO_FREE	Enable verification of block frees.
1647	128	ZFS_DEBUG_HISTOGRAM_VERIFY	Enable extra spacemap histogram verifications.
1648	256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1649	512	ZFS_DEBUG_SET_ERROR	Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
1650	1024	ZFS_DEBUG_INDIRECT_REMAP	Verify split blocks created by device removal.
1651	2048	ZFS_DEBUG_TRIM	Verify TRIM ranges are always within the allocatable range tree.
1652	4096	ZFS_DEBUG_LOG_SPACEMAP	Verify that the log summary is consistent with the spacemap log
1653			       and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
1654	8192	ZFS_DEBUG_METASLAB_ALLOC	Enable debugging messages when allocations fail.
1655	16384	ZFS_DEBUG_BRT	Enable BRT-related debugging messages.
	32768	ZFS_DEBUG_RAIDZ_RECONSTRUCT	Enable debugging messages for raidz reconstruction.
1657	65536	ZFS_DEBUG_DDT	Enable DDT-related debugging messages.
1658.TE
1659.Sy \& * No Requires debug build .
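.Pp
For example, to enable both dprintf entries (1) and SET_ERROR entries (512),
bitwise-or the values (1 | 512 = 513); the Linux module-parameter path shown
is illustrative:
.Bd -literal -compact
# echo 513 > /sys/module/zfs/parameters/zfs_flags
.Ed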
1660.
1661.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
1662Enables btree verification.
1663The following settings are cumulative:
1664.TS
1665box;
1666lbz r l l .
1667	Value	Description
1668
1669	1	Verify height.
1670	2	Verify pointers from children to parent.
1671	3	Verify element counts.
1672	4	Verify element order. (expensive)
1673*	5	Verify unused memory is poisoned. (expensive)
1674.TE
1675.Sy \& * No Requires debug build .
1676.
1677.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
1678If destroy encounters an
1679.Sy EIO
1680while reading metadata (e.g. indirect blocks),
1681space referenced by the missing metadata can not be freed.
1682Normally this causes the background destroy to become "stalled",
1683as it is unable to make forward progress.
1684While in this stalled state, all remaining space to free
1685from the error-encountering filesystem is "temporarily leaked".
1686Set this flag to cause it to ignore the
1687.Sy EIO ,
1688permanently leak the space from indirect blocks that can not be read,
1689and continue to free everything else that it can.
1690.Pp
1691The default "stalling" behavior is useful if the storage partially
1692fails (i.e. some but not all I/O operations fail), and then later recovers.
1693In this case, we will be able to continue pool operations while it is
1694partially failed, and when it recovers, we can continue to free the
1695space, with no leaks.
1696Note, however, that this case is actually fairly rare.
1697.Pp
1698Typically pools either
1699.Bl -enum -compact -offset 4n -width "1."
1700.It
1701fail completely (but perhaps temporarily,
1702e.g. due to a top-level vdev going offline), or
1703.It
1704have localized, permanent errors (e.g. disk returns the wrong data
1705due to bit flip or firmware bug).
1706.El
1707In the former case, this setting does not matter because the
1708pool will be suspended and the sync thread will not be able to make
1709forward progress regardless.
1710In the latter, because the error is permanent, the best we can do
1711is leak the minimum amount of space,
1712which is what setting this flag will do.
1713It is therefore reasonable for this flag to normally be set,
1714but we chose the more conservative approach of not setting it,
1715so that there is no possibility of
1716leaking space in the "partial temporary" failure case.
1717.
1718.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
1719During a
1720.Nm zfs Cm destroy
1721operation using the
1722.Sy async_destroy
1723feature,
1724a minimum of this much time will be spent working on freeing blocks per TXG.
1725.
1726.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
1727Similar to
1728.Sy zfs_free_min_time_ms ,
1729but for cleanup of old indirection records for removed vdevs.
1730.
1731.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
1732Largest data block to write to the ZIL.
1733Larger blocks will be treated as if the dataset being written to had the
1734.Sy logbias Ns = Ns Sy throughput
1735property set.
1736.
1737.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
1738Pattern written to vdev free space by
1739.Xr zpool-initialize 8 .
1740.
1741.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
1742Size of writes used by
1743.Xr zpool-initialize 8 .
1744This option is used by the test suite.
1745.
1746.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
1747The threshold size (in block pointers) at which we create a new sub-livelist.
1748Larger sublists are more costly from a memory perspective but the fewer
1749sublists there are, the lower the cost of insertion.
1750.
1751.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
1752If the amount of shared space between a snapshot and its clone drops below
1753this threshold, the clone turns off the livelist and reverts to the old
1754deletion method.
This is in place because livelists no longer give us a benefit
once a clone has been overwritten enough.
1757.
1758.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
1759Incremented each time an extra ALLOC blkptr is added to a livelist entry while
1760it is being condensed.
1761This option is used by the test suite to track race conditions.
1762.
1763.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
1764Incremented each time livelist condensing is canceled while in
1765.Fn spa_livelist_condense_sync .
1766This option is used by the test suite to track race conditions.
1767.
1768.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1769When set, the livelist condense process pauses indefinitely before
1770executing the synctask \(em
1771.Fn spa_livelist_condense_sync .
1772This option is used by the test suite to trigger race conditions.
1773.
1774.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
1775Incremented each time livelist condensing is canceled while in
1776.Fn spa_livelist_condense_cb .
1777This option is used by the test suite to track race conditions.
1778.
1779.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1780When set, the livelist condense process pauses indefinitely before
1781executing the open context condensing work in
1782.Fn spa_livelist_condense_cb .
1783This option is used by the test suite to trigger race conditions.
1784.
1785.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
1786The maximum execution time limit that can be set for a ZFS channel program,
1787specified as a number of Lua instructions.
1788.
1789.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
1790The maximum memory limit that can be set for a ZFS channel program, specified
1791in bytes.
1792.
1793.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
1794The maximum depth of nested datasets.
1795This value can be tuned temporarily to
1796fix existing datasets that exceed the predefined limit.
1797.
1798.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64
1799The number of past TXGs that the flushing algorithm of the log spacemap
1800feature uses to estimate incoming log blocks.
1801.
1802.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64
1803Maximum number of rows allowed in the summary of the spacemap log.
1804.
1805.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint
1806We currently support block sizes from
1807.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc .
1808The benefits of larger blocks, and thus larger I/O,
1809need to be weighed against the cost of COWing a giant block to modify one byte.
1810Additionally, very large blocks can have an impact on I/O latency,
1811and also potentially on the memory allocator.
1812Therefore, we formerly forbade creating blocks larger than 1M.
Larger blocks can be created by changing this tunable,
1814and pools with larger blocks can always be imported and used,
1815regardless of this setting.
1816.Pp
1817Note that it is still limited by default to
1818.Ar 1 MiB
1819on x86_32, because Linux's
18203/1 memory split doesn't leave much room for 16M chunks.
1821.
1822.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int
1823Allow datasets received with redacted send/receive to be mounted.
1824Normally disabled because these datasets may be missing key data.
1825.
1826.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64
1827Minimum number of metaslabs to flush per dirty TXG.
1828.
1829.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 77 Ns % Pq uint
1830Allow metaslabs to keep their active state as long as their fragmentation
1831percentage is no more than this value.
1832An active metaslab that exceeds this threshold
1833will no longer keep its active status allowing better metaslabs to be selected.
1834.
1835.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint
1836Metaslab groups are considered eligible for allocations if their
1837fragmentation metric (measured as a percentage) is less than or equal to
1838this value.
1839If a metaslab group exceeds this threshold then it will be
1840skipped unless all metaslab groups within the metaslab class have also
1841crossed this threshold.
1842.
1843.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint
1844Defines a threshold at which metaslab groups should be eligible for allocations.
1845The value is expressed as a percentage of free space
1846beyond which a metaslab group is always eligible for allocations.
1847If a metaslab group's free space is less than or equal to the
1848threshold, the allocator will avoid allocating to that group
1849unless all groups in the pool have reached the threshold.
1850Once all groups have reached the threshold, all groups are allowed to accept
1851allocations.
1852The default value of
1853.Sy 0
1854disables the feature and causes all metaslab groups to be eligible for
1855allocations.
1856.Pp
1857This parameter allows one to deal with pools having heavily imbalanced
1858vdevs such as would be the case when a new vdev has been added.
1859Setting the threshold to a non-zero percentage will stop allocations
1860from being made to vdevs that aren't filled to the specified percentage
1861and allow lesser filled vdevs to acquire more allocations than they
1862otherwise would under the old
1863.Sy zfs_mg_alloc_failures
1864facility.
1865.
1866.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1867If enabled, ZFS will place DDT data into the special allocation class.
1868.
1869.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1870If enabled, ZFS will place user data indirect blocks
1871into the special allocation class.
1872.
1873.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint
1874Historical statistics for this many latest multihost updates will be available
1875in
1876.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
1877.
1878.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
1879Used to control the frequency of multihost writes which are performed when the
1880.Sy multihost
1881pool property is on.
1882This is one of the factors used to determine the
1883length of the activity check during import.
1884.Pp
1885The multihost write period is
1886.Sy zfs_multihost_interval No / Sy leaf-vdevs .
1887On average a multihost write will be issued for each leaf vdev
1888every
1889.Sy zfs_multihost_interval
1890milliseconds.
1891In practice, the observed period can vary with the I/O load
1892and this observed value is the delay which is stored in the uberblock.
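.Pp
For example, with the default interval of 1000 ms and a pool containing 10
leaf vdevs, a multihost write is issued on average every 1000/10 = 100 ms,
while each individual leaf vdev is still written once per second.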
1893.
1894.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint
1895Used to control the duration of the activity test on import.
1896Smaller values of
1897.Sy zfs_multihost_import_intervals
1898will reduce the import time but increase
1899the risk of failing to detect an active pool.
1900The total activity check time is never allowed to drop below one second.
1901.Pp
1902On import the activity check waits a minimum amount of time determined by
1903.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals ,
1904or the same product computed on the host which last had the pool imported,
1905whichever is greater.
1906The activity check time may be further extended if the value of MMP
1907delay found in the best uberblock indicates actual multihost updates happened
1908at longer intervals than
1909.Sy zfs_multihost_interval .
1910A minimum of
1911.Em 100 ms
1912is enforced.
1913.Pp
1914.Sy 0 No is equivalent to Sy 1 .
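.Pp
With the defaults, the activity check therefore waits at least
1000 ms \(mu 20 = 20 s before the import can proceed.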
1915.
1916.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
1917Controls the behavior of the pool when multihost write failures or delays are
1918detected.
1919.Pp
1920When
1921.Sy 0 ,
1922multihost write failures or delays are ignored.
The failures will still be reported to the ZED which, depending on
its configuration, may take action such as suspending the pool or
offlining a device.
1926.Pp
1927Otherwise, the pool will be suspended if
1928.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval
1929milliseconds pass without a successful MMP write.
1930This guarantees the activity test will see MMP writes if the pool is imported.
1931.Sy 1 No is equivalent to Sy 2 ;
1932this is necessary to prevent the pool from being suspended
1933due to normal, small I/O latency variations.
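.Pp
With the defaults, this means the pool is suspended after
10 \(mu 1000 ms = 10 s without a successful MMP write.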
1934.
1935.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
1936Set to disable scrub I/O.
1937This results in scrubs not actually scrubbing data and
1938simply doing a metadata crawl of the pool instead.
1939.
1940.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
1941Set to disable block prefetching for scrubs.
1942.
1943.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
1944Disable cache flush operations on disks when writing.
1945Setting this will cause pool corruption on power loss
1946if a volatile out-of-order write cache is enabled.
1947.
1948.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1949Allow no-operation writes.
1950The occurrence of nopwrites will further depend on other pool properties
1951.Pq i.a. the checksumming and compression algorithms .
1952.
1953.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int
1954Enable forcing TXG sync to find holes.
When enabled, this forces ZFS to sync data when
.Sy SEEK_HOLE No or Sy SEEK_DATA
flags are used, allowing holes in a file to be accurately reported.
When disabled, holes will not be reported in recently dirtied files.
1959.
1960.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int
1961The number of bytes which should be prefetched during a pool traversal, like
1962.Nm zfs Cm send
1963or other data crawling operations.
1964.
1965.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint
The number of blocks pointed to by an indirect (non-L0) block which should be
prefetched during a pool traversal, like
1968.Nm zfs Cm send
1969or other data crawling operations.
1970.
1971.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64
1972Control percentage of dirtied indirect blocks from frees allowed into one TXG.
1973After this threshold is crossed, additional frees will wait until the next TXG.
1974.Sy 0 No disables this throttle .
1975.
1976.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1977Disable predictive prefetch.
1978Note that it leaves "prescient" prefetch
1979.Pq for, e.g., Nm zfs Cm send
1980intact.
1981Unlike predictive prefetch, prescient prefetch never issues I/O
1982that ends up not being needed, so it can't hurt performance.
1983.
1984.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1985Disable QAT hardware acceleration for SHA256 checksums.
1986May be unset after the ZFS modules have been loaded to initialize the QAT
1987hardware as long as support is compiled in and the QAT driver is present.
1988.
1989.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1990Disable QAT hardware acceleration for gzip compression.
1991May be unset after the ZFS modules have been loaded to initialize the QAT
1992hardware as long as support is compiled in and the QAT driver is present.
1993.
1994.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1995Disable QAT hardware acceleration for AES-GCM encryption.
1996May be unset after the ZFS modules have been loaded to initialize the QAT
1997hardware as long as support is compiled in and the QAT driver is present.
1998.
1999.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
2000Bytes to read per chunk.
2001.
2002.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint
2003Historical statistics for this many latest reads will be available in
2004.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads .
2005.
2006.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
Include cache hits in read history.
2008.
2009.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
2010Maximum read segment size to issue when sequentially resilvering a
2011top-level vdev.
2012.
2013.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
2014Automatically start a pool scrub when the last active sequential resilver
2015completes in order to verify the checksums of all blocks which have been
2016resilvered.
2017This is enabled by default and strongly recommended.
2018.
2019.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
2020Maximum amount of I/O that can be concurrently issued for a sequential
2021resilver per leaf device, given in bytes.
2022.
2023.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int
2024If an indirect split block contains more than this many possible unique
2025combinations when being reconstructed, consider it too computationally
2026expensive to check them all.
2027Instead, try at most this many randomly selected
2028combinations each time the block is accessed.
2029This allows all segment copies to participate fairly
2030in the reconstruction when all combinations
2031cannot be checked and prevents repeated use of one bad copy.
2032.
2033.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int
2034Set to attempt to recover from fatal errors.
2035This should only be used as a last resort,
2036as it typically results in leaked space, or worse.
2037.
2038.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
2039Ignore hard I/O errors during device removal.
2040When set, if a device encounters a hard I/O error during the removal process
2041the removal will not be canceled.
2042This can result in a normally recoverable block becoming permanently damaged
2043and is hence not recommended.
2044This should only be used as a last resort when the
2045pool cannot be returned to a healthy state prior to removing the device.
2046.
2047.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2048This is used by the test suite so that it can ensure that certain actions
2049happen while in the middle of a removal.
2050.
2051.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
2052The largest contiguous segment that we will attempt to allocate when removing
2053a device.
2054If there is a performance problem with attempting to allocate large blocks,
2055consider decreasing this.
2056The default value is also the maximum.
2057.
2058.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int
2059Ignore the
2060.Sy resilver_defer
2061feature, causing an operation that would start a resilver to
2062immediately restart the one in progress.
2063.
2064.It Sy zfs_resilver_defer_percent Ns = Ns Sy 10 Ns % Pq uint
2065If the ongoing resilver progress is below this threshold, a new resilver will
2066restart from scratch instead of being deferred after the current one finishes,
2067even if the
2068.Sy resilver_defer
2069feature is enabled.
2070.
2071.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint
2072Resilvers are processed by the sync thread.
2073While resilvering, it will spend at least this much time
2074working on a resilver between TXG flushes.
2075.
2076.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
2077If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
2078even if there were unrepairable errors.
2079Intended to be used during pool repair or recovery to
2080stop resilvering when the pool is next imported.
2081.
2082.It Sy zfs_scrub_after_expand Ns = Ns Sy 1 Ns | Ns 0 Pq int
2083Automatically start a pool scrub after a RAIDZ expansion completes
2084in order to verify the checksums of all blocks which have been
2085copied during the expansion.
2086This is enabled by default and strongly recommended.
2087.
2088.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint
2089Scrubs are processed by the sync thread.
2090While scrubbing, it will spend at least this much time
2091working on a scrub between TXG flushes.
2092.
2093.It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint
Error blocks to be scrubbed in one TXG.
2095.
2096.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint
2097To preserve progress across reboots, the sequential scan algorithm periodically
2098needs to stop metadata scanning and issue all the verification I/O to disk.
2099The frequency of this flushing is determined by this tunable.
2100.
2101.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint
2102This tunable affects how scrub and resilver I/O segments are ordered.
2103A higher number indicates that we care more about how filled in a segment is,
2104while a lower number indicates we care more about the size of the extent without
2105considering the gaps within a segment.
2106This value is only tunable upon module insertion.
2107Changing the value afterwards will have no effect on scrub or resilver
2108performance.
2109.
2110.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint
2111Determines the order that data will be verified while scrubbing or resilvering:
2112.Bl -tag -compact -offset 4n -width "a"
2113.It Sy 1
2114Data will be verified as sequentially as possible, given the
2115amount of memory reserved for scrubbing
2116.Pq see Sy zfs_scan_mem_lim_fact .
2117This may improve scrub performance if the pool's data is very fragmented.
2118.It Sy 2
2119The largest mostly-contiguous chunk of found data will be verified first.
2120By deferring scrubbing of small segments, we may later find adjacent data
2121to coalesce and increase the segment size.
2122.It Sy 0
2123.No Use strategy Sy 1 No during normal verification
2124.No and strategy Sy 2 No while taking a checkpoint .
2125.El
2126.
2127.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
2128If unset, indicates that scrubs and resilvers will gather metadata in
2129memory before issuing sequential I/O.
2130Otherwise indicates that the legacy algorithm will be used,
2131where I/O is initiated as soon as it is discovered.
2132Unsetting will not affect scrubs or resilvers that are already in progress.
2133.
2134.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
2135Sets the largest gap in bytes between scrub/resilver I/O operations
2136that will still be considered sequential for sorting purposes.
2137Changing this value will not
2138affect scrubs or resilvers that are already in progress.
2139.
2140.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
2141Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
2142This tunable determines the hard limit for I/O sorting memory usage.
2143When the hard limit is reached we stop scanning metadata and start issuing
2144data verification I/O.
2145This is done until we get below the soft limit.
2146.
2147.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
The fraction of the hard limit used to determine the soft limit for I/O sorting
2149by the sequential scan algorithm.
2150When we cross this limit from below no action is taken.
2151When we cross this limit from above it is because we are issuing verification
2152I/O.
2153In this case (unless the metadata scan is done) we stop issuing verification I/O
2154and start scanning metadata again until we get to the hard limit.
2155.
2156.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2157When reporting resilver throughput and estimated completion time use the
2158performance observed over roughly the last
2159.Sy zfs_scan_report_txgs
2160TXGs.
2161When set to zero performance is calculated over the time between checkpoints.
2162.
2163.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
2164Enforce tight memory limits on pool scans when a sequential scan is in progress.
2165When disabled, the memory limit may be exceeded by fast disks.
2166.
2167.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
2168Freezes a scrub/resilver in progress without actually pausing it.
2169Intended for testing/debugging.
2170.
2171.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
2172Maximum amount of data that can be concurrently issued at once for scrubs and
2173resilvers per leaf device, given in bytes.
2174.
2175.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
2176Allow sending of corrupt data (ignore read/checksum errors when sending).
2177.
2178.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
2179Include unmodified spill blocks in the send stream.
2180Under certain circumstances, previous versions of ZFS could incorrectly
2181remove the spill block from an existing object.
2182Including unmodified copies of the spill blocks creates a backwards-compatible
2183stream which will recreate a spill block if it was incorrectly removed.
2184.
2185.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2186The fill fraction of the
2187.Nm zfs Cm send
2188internal queues.
2189The fill fraction controls the timing with which internal threads are woken up.
2190.
2191.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2192The maximum number of bytes allowed in
2193.Nm zfs Cm send Ns 's
2194internal queues.
2195.
2196.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2197The fill fraction of the
2198.Nm zfs Cm send
2199prefetch queue.
2200The fill fraction controls the timing with which internal threads are woken up.
2201.
2202.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes that will be prefetched by
2204.Nm zfs Cm send .
2205This value must be at least twice the maximum block size in use.
2206.
2207.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2208The fill fraction of the
2209.Nm zfs Cm receive
2210queue.
2211The fill fraction controls the timing with which internal threads are woken up.
2212.
2213.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
2214The maximum number of bytes allowed in the
2215.Nm zfs Cm receive
2216queue.
2217This value must be at least twice the maximum block size in use.
2218.
2219.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2220The maximum amount of data, in bytes, that
2221.Nm zfs Cm receive
2222will write in one DMU transaction.
2223This is the uncompressed size, even when receiving a compressed send stream.
2224This setting will not reduce the write size below a single block.
2225Capped at a maximum of
2226.Sy 32 MiB .
2227.
2228.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
When this variable is set to non-zero, a corrective receive:
2230.Bl -enum -compact -offset 4n -width "1."
2231.It
2232Does not enforce the restriction of source & destination snapshot GUIDs
2233matching.
2234.It
If there is an error during healing, the healing receive is not
terminated; instead it moves on to the next record.
2237.El
2238.
2239.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2240Setting this variable overrides the default logic for estimating block
2241sizes when doing a
2242.Nm zfs Cm send .
2243The default heuristic is that the average block size
2244will be the current recordsize.
2245Override this value if most data in your dataset is not of that size
2246and you require accurate zfs send size estimates.
2247.
2248.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
2249Flushing of data to disk is done in passes.
2250Defer frees starting in this pass.
2251.
2252.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
2253Maximum memory used for prefetching a checkpoint's space map on each
2254vdev while discarding the checkpoint.
2255.
2256.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint
2257Only allow small data blocks to be allocated on the special and dedup vdev
2258types when the available free space percentage on these vdevs exceeds this
2259value.
2260This ensures reserved space is available for pool metadata as the
2261special vdevs approach capacity.
2262.
2263.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint
2264Starting in this sync pass, disable compression (including of metadata).
2265With the default setting, in practice, we don't have this many sync passes,
2266so this has no effect.
2267.Pp
2268The original intent was that disabling compression would help the sync passes
2269to converge.
2270However, in practice, disabling compression increases
the average number of sync passes, because when we turn compression off,
2272many blocks' size will change, and thus we have to re-allocate
2273(not overwrite) them.
2274It also increases the number of
2275.Em 128 KiB
2276allocations (e.g. for indirect blocks and spacemaps)
2277because these will not be compressed.
2278The
2279.Em 128 KiB
2280allocations are especially detrimental to performance
2281on highly fragmented systems, which may have very few free segments of this
2282size,
2283and may need to load new metaslabs to satisfy these allocations.
2284.
2285.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint
2286Rewrite new block pointers starting in this pass.
2287.
2288.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Maximum size of a TRIM command.
2290Larger ranges will be split into chunks no larger than this value before
2291issuing.
2292.
2293.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
2294Minimum size of TRIM commands.
2295TRIM ranges smaller than this will be skipped,
2296unless they're part of a larger range which was chunked.
2297This is done because it's common for these small TRIMs
2298to negatively impact overall performance.
2299.
2300.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2301Skip uninitialized metaslabs during the TRIM process.
2302This option is useful for pools constructed from large thinly-provisioned
2303devices
2304where TRIM operations are slow.
2305As a pool ages, an increasing fraction of the pool's metaslabs
2306will be initialized, progressively degrading the usefulness of this option.
2307This setting is stored when starting a manual TRIM and will
2308persist for the duration of the requested TRIM.
2309.
2310.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
2311Maximum number of queued TRIMs outstanding per leaf vdev.
2312The number of concurrent TRIM commands issued to the device is controlled by
2313.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
2314.
2315.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
2316The number of transaction groups' worth of frees which should be aggregated
2317before TRIM operations are issued to the device.
2318This setting represents a trade-off between issuing larger,
2319more efficient TRIM operations and the delay
2320before the recently trimmed space is available for use by the device.
2321.Pp
2322Increasing this value will allow frees to be aggregated for a longer time.
This will result in larger TRIM operations and potentially increased memory
2324usage.
2325Decreasing this value will have the opposite effect.
2326The default of
2327.Sy 32
2328was determined to be a reasonable compromise.
2329.
2330.It Sy zfs_txg_history Ns = Ns Sy 100 Pq uint
2331Historical statistics for this many latest TXGs will be available in
2332.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
2333.
2334.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
2335Flush dirty data to disk at least every this many seconds (maximum TXG
2336duration).
2337.
2338.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2339Max vdev I/O aggregation size.
2340.
2341.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2342Max vdev I/O aggregation size for non-rotating media.
2343.
2344.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
immediately follows its predecessor on rotational vdevs.
2349.
2350.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
2351A number by which the balancing algorithm increments the load calculation for
2352the purpose of selecting the least busy mirror member when an I/O operation
2353lacks locality as defined by
2354.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this distance that do not immediately follow the previous
operation are incremented by half.
2357.
2358.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
2359The maximum distance for the last queued I/O operation in which
2360the balancing algorithm considers an operation to have locality.
2361.No See Sx ZFS I/O SCHEDULER .
2362.
2363.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
2364A number by which the balancing algorithm increments the load calculation for
2365the purpose of selecting the least busy mirror member on non-rotational vdevs
2366when I/O operations do not immediately follow one another.
2367.
2368.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
2369A number by which the balancing algorithm increments the load calculation for
2370the purpose of selecting the least busy mirror member when an I/O operation
lacks locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this distance that do not immediately follow the previous
operation are incremented by half.
2376.
2377.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
2378Aggregate read I/O operations if the on-disk gap between them is within this
2379threshold.
2380.
2381.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint
2382Aggregate write I/O operations if the on-disk gap between them is within this
2383threshold.
2384.
2385.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
2386Select the raidz parity implementation to use.
2387.Pp
2388Variants that don't depend on CPU-specific features
2389may be selected on module load, as they are supported on all systems.
2390The remaining options may only be set after the module is loaded,
2391as they are available only if the implementations are compiled in
2392and supported on the running system.
2393.Pp
2394Once the module is loaded,
2395.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
2396will show the available options,
2397with the currently selected one enclosed in square brackets.
2398.Pp
2399.TS
2400lb l l .
2401fastest	selected by built-in benchmark
2402original	original implementation
2403scalar	scalar implementation
2404sse2	SSE2 instruction set	64-bit x86
2405ssse3	SSSE3 instruction set	64-bit x86
2406avx2	AVX2 instruction set	64-bit x86
2407avx512f	AVX512F instruction set	64-bit x86
2408avx512bw	AVX512F & AVX512BW instruction sets	64-bit x86
2409aarch64_neon	NEON	Aarch64/64-bit ARMv8
2410aarch64_neonx2	NEON with more unrolling	Aarch64/64-bit ARMv8
2411powerpc_altivec	Altivec	PowerPC
2412.TE
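.Pp
For example
(illustrative; the output is abbreviated and varies with hardware):
.Bd -literal -compact
# cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
[fastest] original scalar sse2 ssse3 avx2
# echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.Ed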
2413.
2414.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint
2415Max event queue length.
2416Events in the queue can be viewed with
2417.Xr zpool-events 8 .
2418.
2419.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
2420Maximum recent zevent records to retain for duplicate checking.
2421Setting this to
2422.Sy 0
2423disables duplicate detection.
2424.
2425.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
2426Lifespan for a recent ereport that was retained for duplicate checking.
2427.
2428.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
2429The maximum number of taskq entries that are allowed to be cached.
2430When this limit is exceeded transaction records (itxs)
2431will be cleaned synchronously.
2432.
2433.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
2434The number of taskq entries that are pre-populated when the taskq is first
2435created and are immediately available for use.
2436.
2437.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
2438This controls the number of threads used by
2439.Sy dp_zil_clean_taskq .
2440The default value of
2441.Sy 100%
2442will create a maximum of one thread per CPU.
2443.
2444.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2445This sets the maximum block size used by the ZIL.
2446On very fragmented pools, lowering this
2447.Pq typically to Sy 36 KiB
2448can improve performance.
2449.
2450.It Sy zil_maxcopied Ns = Ns Sy 7680 Ns B Po 7.5 KiB Pc Pq uint
2451This sets the maximum number of write bytes logged via WR_COPIED.
2452It tunes a tradeoff between additional memory copy and possibly worse log
2453space efficiency vs additional range lock/unlock.
2454.
2455.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
2456Disable the cache flush commands that are normally sent to disk by
2457the ZIL after an LWB write has completed.
2458Setting this will cause ZIL corruption on power loss
2459if a volatile out-of-order write cache is enabled.
2460.
2461.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
2462Disable intent logging replay.
2463Can be disabled for recovery from corrupted ZIL.
2464.
2465.It Sy zil_slog_bulk Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
2466Limit SLOG write size per commit executed with synchronous priority.
Any writes above that will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
2469.
2470.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int
2471Setting this tunable to zero disables ZIL logging of new
2472.Sy xattr Ns = Ns Sy sa
2473records if the
2474.Sy org.openzfs:zilsaxattr
2475feature is enabled on the pool.
2476This would only be necessary to work around bugs in the ZIL logging or replay
2477code for this record type.
2478The tunable has no effect if the feature is disabled.
2479.
2480.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint
2481Usually, one metaslab from each normal-class vdev is dedicated for use by
2482the ZIL to log synchronous writes.
2483However, if there are fewer than
2484.Sy zfs_embedded_slog_min_ms
2485metaslabs in the vdev, this functionality is disabled.
2486This ensures that we don't set aside an unreasonable amount of space for the
2487ZIL.
2488.
2489.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint
Whether the heuristic for detection of incompressible data with zstd levels
>= 3 using LZ4 and zstd-1 passes is enabled.
2492.
2493.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint
2494Minimal uncompressed size (inclusive) of a record before the early abort
2495heuristic will be attempted.
2496.
2497.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
2498If non-zero, the zio deadman will produce debugging messages
2499.Pq see Sy zfs_dbgmsg_enable
2500for all zios, rather than only for leaf zios possessing a vdev.
2501This is meant to be used by developers to gain
2502diagnostic information for hang conditions which don't involve a mutex
2503or other locking primitive: typically conditions in which a thread in
2504the zio pipeline is looping indefinitely.
2505.
2506.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
2507When an I/O operation takes more than this much time to complete,
2508it's marked as slow.
2509Each slow operation causes a delay zevent.
2510Slow I/O counters can be seen with
2511.Nm zpool Cm status Fl s .
2512.
2513.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
2514Throttle block allocations in the I/O pipeline.
2515This allows for dynamic allocation distribution based on device performance.
2516.
2517.It Sy zfs_xattr_compat Ns = Ns 0 Ns | Ns 1 Pq int
2518Control the naming scheme used when setting new xattrs in the user namespace.
2519If
2520.Sy 0
2521.Pq the default on Linux ,
2522user namespace xattr names are prefixed with the namespace, to be backwards
2523compatible with previous versions of ZFS on Linux.
2524If
2525.Sy 1
2526.Pq the default on Fx ,
2527user namespace xattr names are not prefixed, to be backwards compatible with
2528previous versions of ZFS on illumos and
2529.Fx .
2530.Pp
2531Either naming scheme can be read on this and future versions of ZFS, regardless
2532of this tunable, but legacy ZFS on illumos or
2533.Fx
2534are unable to read user namespace xattrs written in the Linux format, and
2535legacy versions of ZFS on Linux are unable to read user namespace xattrs written
2536in the legacy ZFS format.
2537.Pp
2538An existing xattr with the alternate naming scheme is removed when overwriting
2539the xattr so as to not accumulate duplicates.
2540.
2541.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
2542Prioritize requeued I/O.
2543.
2544.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
2545Percentage of online CPUs which will run a worker thread for I/O.
2546These workers are responsible for I/O work such as compression, encryption,
2547checksum and parity calculations.
2548Fractional number of CPUs will be rounded down.
2549.Pp
2550The default value of
2551.Sy 80%
2552was chosen to avoid using all CPUs which can result in
2553latency issues and inconsistent application performance,
2554especially when slower compression and/or checksumming is enabled.
2555Set value only applies to pools imported/created after that.
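.Pp
For example, on a system with 6 online CPUs the default of 80% yields
6 \(mu 0.8 = 4.8, which is rounded down to 4 worker threads.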
2556.
2557.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
2558Number of worker threads per taskq.
Higher values improve I/O ordering and CPU utilization,
while lower values reduce lock contention.
2561Set value only applies to pools imported/created after that.
2562.Pp
2563If
2564.Sy 0 ,
2565generate a system-dependent value close to 6 threads per taskq.
2567.
2568.It Sy zio_taskq_write_tpq Ns = Ns Sy 16 Pq uint
2569Determines the minimum number of threads per write issue taskq.
Higher values improve CPU utilization on high throughput,
while lower values reduce taskq lock contention on high IOPS.
2572Set value only applies to pools imported/created after that.
2573.
2574.It Sy zio_taskq_read Ns = Ns Sy fixed,1,8 null scale null Pq charp
2575Set the queue and thread configuration for the IO read queues.
2576This is an advanced debugging parameter.
2577Don't change this unless you understand what it does.
2578Set values only apply to pools imported/created after that.
2579.
2580.It Sy zio_taskq_write Ns = Ns Sy sync null scale null Pq charp
2581Set the queue and thread configuration for the IO write queues.
2582This is an advanced debugging parameter.
2583Don't change this unless you understand what it does.
2584Set values only apply to pools imported/created after that.
2585.
2586.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2587Do not create zvol device nodes.
2588This may slightly improve startup time on
2589systems with a very large number of zvols.
2590.
2591.It Sy zvol_major Ns = Ns Sy 230 Pq uint
2592Major number for zvol block devices.
2593.
2594.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
2595Discard (TRIM) operations done on zvols will be done in batches of this
2596many blocks, where block size is determined by the
2597.Sy volblocksize
2598property of a zvol.
2599.
.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
When adding a zvol to the system, prefetch this many bytes
from the start and end of the volume.
Prefetching these regions of the volume is desirable,
because they are likely to be accessed immediately by
.Xr blkid 8
or the kernel partitioner.
.
.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When processing I/O requests for a zvol, submit them synchronously.
This effectively limits the queue depth to
.Em 1
for each I/O submitter.
When unset, requests are handled asynchronously by a thread pool.
The number of requests which can be handled concurrently is controlled by
.Sy zvol_threads .
.Sy zvol_request_sync
is ignored when running on a kernel that supports block multiqueue
.Pq Li blk-mq .
.
.It Sy zvol_num_taskqs Ns = Ns Sy 0 Pq uint
Number of zvol taskqs.
If
.Sy 0
(the default), the number is scaled internally to prefer 6 threads per taskq.
This only applies on Linux.
.
.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
The number of system-wide threads to use for processing zvol block I/Os.
If
.Sy 0
(the default), then
.Sy zvol_threads
is set internally to the number of CPUs present or 32, whichever is greater.
.
.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
The number of threads per zvol to use for queuing I/O requests.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only read and assigned to a zvol at zvol load time.
If
.Sy 0
(the default), then
.Sy zvol_blk_mq_threads
is set internally to the number of CPUs present.
.
.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Set to
.Sy 1
to use the
.Li blk-mq
API for zvols.
Set to
.Sy 0
(the default) to use the legacy zvol APIs.
This setting can give better or worse zvol performance, depending on
the workload.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only read and assigned to a zvol at zvol load time.
.
.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
If
.Sy zvol_use_blk_mq
is enabled, then process this number of
.Sy volblocksize Ns -sized blocks per zvol thread.
This tunable can be used to favor better performance for zvol reads (lower
values) or writes (higher values).
If set to
.Sy 0 ,
then the zvol layer will process the maximum number of blocks
per thread that it can.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only applied at each zvol's load time.
.
.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
The queue_depth value for the zvol
.Li blk-mq
interface.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only applied at each zvol's load time.
If
.Sy 0
(the default), then use the kernel's default queue depth.
Values are clamped to the kernel's
.Dv BLKDEV_MIN_RQ
and
.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
limits.
.
.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
Defines zvol block device behavior when
.Sy volmode Ns = Ns Sy default :
.Bl -tag -compact -offset 4n -width "a"
.It Sy 1
.No equivalent to Sy full
.It Sy 2
.No equivalent to Sy dev
.It Sy 3
.No equivalent to Sy none
.El
.
.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Enable strict ZVOL quota enforcement.
Strict quota enforcement may have a performance impact.
.El
.
.Sh ZFS I/O SCHEDULER
ZFS satisfies and completes I/O requests by issuing operations to leaf vdevs.
The scheduler determines when and in what order those operations are issued.
The scheduler divides operations into five I/O classes,
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver.
Each queue defines the minimum and maximum number of concurrent operations
that may be issued to the device.
In addition, the device has an aggregate maximum,
.Sy zfs_vdev_max_active .
Note that the sum of the per-queue minima must not exceed the aggregate
maximum.
If the sum of the per-queue maxima exceeds the aggregate maximum,
then the number of active operations may reach
.Sy zfs_vdev_max_active ,
in which case no further operations will be issued,
regardless of whether all per-queue minima have been met.
.Pp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers.
Furthermore, physical devices typically have a limit
at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.Pp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied.
Once all minima are satisfied and the aggregate maximum has not been hit,
the scheduler looks for classes whose maximum has not been satisfied.
Iteration through the I/O classes is done in the order specified above.
No further operations are issued
if the aggregate maximum number of concurrent operations has been hit,
or if there are no operations queued for an I/O class that has not hit its
maximum.
Every time an I/O operation is queued or an operation completes,
the scheduler looks for new operations to issue.
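.Pp
As a rough illustration of the selection logic, consider the following C
sketch; it is a simplification for exposition, not the actual OpenZFS
implementation, and all identifiers in it are hypothetical:
.Bd -literal
/*
 * Sketch of the two-pass I/O class selection described above.
 * Classes are listed in priority order; all names are illustrative.
 */
enum io_class {
	SYNC_READ, SYNC_WRITE, ASYNC_READ, ASYNC_WRITE, SCRUB,
	NCLASSES
};

static int
pick_next_class(const int active[NCLASSES], const int queued[NCLASSES],
    const int min_active[NCLASSES], const int max_active[NCLASSES],
    int total_active, int aggregate_max)
{
	int c;

	/* Nothing may be issued once the aggregate maximum is reached. */
	if (total_active >= aggregate_max)
		return (-1);

	/* First pass: classes whose minimum has not yet been satisfied. */
	for (c = 0; c < NCLASSES; c++)
		if (queued[c] > 0 && active[c] < min_active[c])
			return (c);

	/* Second pass: classes still below their per-class maximum. */
	for (c = 0; c < NCLASSES; c++)
		if (queued[c] > 0 && active[c] < max_active[c])
			return (c);

	return (-1);	/* No eligible class; wait for a completion. */
}
.Ed
.Pp
Iterating the classes in priority order in both passes is what gives
synchronous I/O its precedence whenever minima or maxima are still unmet.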
.Pp
In general, smaller
.Sy max_active Ns s
will lead to lower latency of synchronous operations.
Larger
.Sy max_active Ns s
may lead to higher overall throughput, depending on underlying storage.
.Pp
The ratio of the queues'
.Sy max_active Ns s
determines the balance of performance between reads, writes, and scrubs.
For example, increasing
.Sy zfs_vdev_scrub_max_active
will cause the scrub or resilver to complete more quickly,
but reads and writes to have higher latency and lower throughput.
.Pp
All I/O classes have a fixed maximum number of outstanding operations,
except for the async write class.
Asynchronous writes represent the data that is committed to stable storage
during the syncing stage for transaction groups.
Transaction groups enter the syncing state periodically,
so the number of queued async writes will quickly burst up
and then bleed down to zero.
Rather than servicing them as quickly as possible,
the I/O scheduler changes the maximum number of active async write operations
according to the amount of dirty data in the pool.
Since both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of simultaneous operations also stabilizes the
response time of operations from other queues, in particular synchronous ones.
In broad strokes, the I/O scheduler will issue more concurrent operations
from the async write queue as there is more dirty data in the pool.
.
.Ss Async Writes
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points:
.Bd -literal
       |              o---------| <-- \fBzfs_vdev_async_write_max_active\fP
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- \fBzfs_vdev_async_write_min_active\fP
      0|_______^______|_________|
       0%      |      |       100% of \fBzfs_dirty_data_max\fP
               |      |
               |      `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
               `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
.Ed
.Pp
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum.
As that threshold is crossed, the number of concurrent operations issued
increases linearly to the maximum at the specified maximum percentage
of the dirty data allowed in the pool.
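.Pp
The scaling can be sketched in C as follows; a minimal illustration of the
piece-wise linear function above, not the actual OpenZFS code, and all
identifiers are hypothetical:
.Bd -literal
#include <stdint.h>

/*
 * Piece-wise linear scaling of the async write max_active,
 * as described above.  Illustrative only.
 */
static int
async_write_max_active(uint64_t dirty, uint64_t dirty_max,
    int min_pct, int max_pct, int min_active, int max_active)
{
	uint64_t pct = dirty * 100 / dirty_max;

	if (pct <= (uint64_t)min_pct)
		return (min_active);
	if (pct >= (uint64_t)max_pct)
		return (max_active);

	/* Linear interpolation between the two break points. */
	return (min_active + (int)((pct - min_pct) *
	    (max_active - min_active) / (max_pct - min_pct)));
}
.Ed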
.Pp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between
.Sy zfs_vdev_async_write_active_min_dirty_percent
and
.Sy zfs_vdev_async_write_active_max_dirty_percent .
If it exceeds the maximum percentage,
this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle.
In this case, we must further throttle incoming writes,
as described in the next section.
.
.Sh ZFS TRANSACTION DELAY
We delay transactions when we have determined that the backend storage
is not able to accommodate the rate of incoming writes.
.Pp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting.
This way the calculated delay time
is independent of the number of threads concurrently executing transactions.
.Pp
If we are the only waiter, wait relative to when the transaction started,
rather than the current time.
This credits the transaction for "time already served",
e.g. reading indirect blocks.
.Pp
The minimum time for a transaction to take is calculated as
.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
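.Pp
As an illustration with made-up values: if
.Sy zfs_delay_scale
is
.Sy 500000
.Pq 500 us ,
the delay threshold
.Pq Sy min
is 2.4 GiB, the limit
.Pq Sy max , No i.e. Sy zfs_dirty_data_max
is 4 GiB, and 3.5 GiB is dirty, then
min_time = 500 us \(mu (3.5 \- 2.4) / (4 \- 3.5) = 1.1 ms.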
.Pp
The delay has two degrees of freedom that can be adjusted via tunables.
The percentage of dirty data at which we start to delay is defined by
.Sy zfs_delay_min_dirty_percent .
This should typically be at or above
.Sy zfs_vdev_async_write_active_max_dirty_percent ,
so that we only start to delay after writing at full speed
has failed to keep up with the incoming write rate.
The scale of the curve is defined by
.Sy zfs_delay_scale .
Roughly speaking, this variable determines the amount of delay at the midpoint
of the curve.
.Bd -literal
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                           *  +
      |                                                           *  |
  4ms +                                                           *  +
      |                                                           *  |
  3ms +                                                          *   +
      |                                                          *   |
  2ms +                                              (midpoint) *    +
      |                                                  |    **     |
  1ms +                                                  v ***       +
      |             \fBzfs_delay_scale\fP ---------->     ********         |
    0 +-------------------------------------*********----------------+
      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
.Ed
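.Pp
In code form, the delay computation might look like the following C sketch;
an illustrative simplification under the formula above, not the actual
OpenZFS transaction code, and all identifiers are hypothetical:
.Bd -literal
#include <stdint.h>

/*
 * Sketch of the transaction delay formula above.  "dirty" and
 * "dirty_max" are in bytes; "delay_scale" and the result are in
 * nanoseconds.  Illustrative only; ignores overflow concerns.
 */
static uint64_t
tx_delay_ns(uint64_t dirty, uint64_t dirty_max,
    int delay_min_dirty_percent, uint64_t delay_scale)
{
	uint64_t min_bytes = dirty_max * delay_min_dirty_percent / 100;
	uint64_t delay, cap = 100000000;	/* 100 ms */

	if (dirty <= min_bytes)
		return (0);		/* Below the threshold: no delay. */
	if (dirty >= dirty_max)
		return (cap);		/* Saturated: clamp at 100 ms. */

	delay = delay_scale * (dirty - min_bytes) / (dirty_max - dirty);
	return (delay < cap ? delay : cap);
}
.Ed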
.Pp
Note that, since the delay is added to the outstanding time remaining on the
most recent transaction, it is effectively the inverse of IOPS.
Here, the midpoint of
.Em 500 us
translates to
.Em 2000 IOPS .
The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first three quarters of the curve yield relatively small differences
in the amount of delay.
.Pp
The effects can be easier to understand when the amount of delay is
represented on a logarithmic scale:
.Bd -literal
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                           ** +
      |                                              (midpoint)  **  |
      +                                                  |     **    +
  1ms +                                                  v ****      +
      +             \fBzfs_delay_scale\fP ---------->        *****         +
      |                                             ****             |
      +                                          ****                +
100us +                                        **                    +
      +                                       *                      +
      |                                      *                       |
      +                                     *                        +
 10us +                                     *                        +
      +                                                              +
      |                                                              |
      +                                                              +
      +--------------------------------------------------------------+
      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
.Ed
.Pp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly.
The goal of a properly tuned system should be to keep the amount of dirty data
out of that range by first ensuring that the appropriate limits are set
for the I/O scheduler to reach optimal throughput on the back-end storage,
and then by changing the value of
.Sy zfs_delay_scale
to increase the steepness of the curve.
2914