1.\"
2.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
4.\" Copyright (c) 2019 Datto Inc.
5.\" The contents of this file are subject to the terms of the Common Development
6.\" and Distribution License (the "License").  You may not use this file except
7.\" in compliance with the License. You can obtain a copy of the license at
8.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
9.\"
10.\" See the License for the specific language governing permissions and
11.\" limitations under the License. When distributing Covered Code, include this
12.\" CDDL HEADER in each file and include the License file at
13.\" usr/src/OPENSOLARIS.LICENSE.  If applicable, add the following below this
14.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
15.\" own identifying information:
16.\" Portions Copyright [yyyy] [name of copyright owner]
17.\"
18.Dd July 21, 2023
19.Dt ZFS 4
20.Os
21.
22.Sh NAME
23.Nm zfs
24.Nd tuning of the ZFS kernel module
25.
26.Sh DESCRIPTION
27The ZFS module supports these parameters:
28.Bl -tag -width Ds
29.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
30Maximum size in bytes of the dbuf cache.
The target size is determined as the minimum of this value and
.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
of the target ARC size.
34The behavior of the dbuf cache and its associated settings
35can be observed via the
36.Pa /proc/spl/kstat/zfs/dbufstats
37kstat.
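.Pp
For example, on Linux the current behavior of the dbuf cache can be inspected
by reading that kstat directly (illustrative only; the exact set of fields may
vary between releases):
.Bd -literal -compact -offset indent
# cat /proc/spl/kstat/zfs/dbufstats
.Ed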
38.
39.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
40Maximum size in bytes of the metadata dbuf cache.
The target size is determined as the minimum of this value and
.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
of the target ARC size.
44The behavior of the metadata dbuf cache and its associated settings
45can be observed via the
46.Pa /proc/spl/kstat/zfs/dbufstats
47kstat.
48.
49.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint
50The percentage over
51.Sy dbuf_cache_max_bytes
52when dbufs must be evicted directly.
53.
54.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint
55The percentage below
56.Sy dbuf_cache_max_bytes
57when the evict thread stops evicting dbufs.
58.
59.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint
60Set the size of the dbuf cache
61.Pq Sy dbuf_cache_max_bytes
62to a log2 fraction of the target ARC size.
63.
64.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint
65Set the size of the dbuf metadata cache
66.Pq Sy dbuf_metadata_cache_max_bytes
67to a log2 fraction of the target ARC size.
68.
69.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint
70Set the size of the mutex array for the dbuf cache.
71When set to
72.Sy 0
73the array is dynamically sized based on total system memory.
74.
75.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint
76dnode slots allocated in a single operation as a power of 2.
77The default value minimizes lock contention for the bulk operation performed.
78.
79.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
80Limit the amount we can prefetch with one call to this amount in bytes.
81This helps to limit the amount of memory that can be used by prefetching.
82.
83.It Sy ignore_hole_birth Pq int
84Alias for
85.Sy send_holes_without_birth_time .
86.
87.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
88Turbo L2ARC warm-up.
89When the L2ARC is cold the fill interval will be set as fast as possible.
90.
91.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
92Min feed interval in milliseconds.
93Requires
94.Sy l2arc_feed_again Ns = Ns Ar 1
and only applies while the L2ARC is warming up (the accelerated feed described above).
96.
97.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
98Seconds between L2ARC writing.
99.
100.It Sy l2arc_headroom Ns = Ns Sy 2 Pq u64
101How far through the ARC lists to search for L2ARC cacheable content,
102expressed as a multiplier of
103.Sy l2arc_write_max .
104ARC persistence across reboots can be achieved with persistent L2ARC
105by setting this parameter to
106.Sy 0 ,
107allowing the full length of ARC lists to be searched for cacheable content.
108.
109.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
110Scales
111.Sy l2arc_headroom
112by this percentage when L2ARC contents are being successfully compressed
113before writing.
114A value of
115.Sy 100
116disables this feature.
117.
118.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
119Controls whether buffers present on special vdevs are eligible for caching
120into L2ARC.
121If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
122.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Pq int
124Controls whether only MFU metadata and data are cached from ARC into L2ARC.
125This may be desired to avoid wasting space on L2ARC when reading/writing large
126amounts of data that are not expected to be accessed more than once.
127.Pp
128The default is off,
129meaning both MRU and MFU data and metadata are cached.
When this feature is disabled, some MRU buffers will still be present
131in ARC and eventually cached on L2ARC.
132.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
133some prefetched buffers will be cached to L2ARC, and those might later
134transition to MRU, in which case the
135.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
136.Pp
137Regardless of
138.Sy l2arc_noprefetch ,
139some MFU buffers might be evicted from ARC,
140accessed later on as prefetches and transition to MRU as prefetches.
141If accessed again they are counted as MRU and the
142.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
143.Pp
144The ARC status of L2ARC buffers when they were first cached in
145L2ARC can be seen in the
146.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
147arcstats when importing the pool or onlining a cache
148device if persistent L2ARC is enabled.
149.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account whether this option is enabled,
so the information provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
157.
158.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
159Percent of ARC size allowed for L2ARC-only headers.
160Since L2ARC buffers are not evicted on memory pressure,
161too many headers on a system with an irrationally large L2ARC
162can render it slow or unusable.
163This parameter limits L2ARC writes and rebuilds to achieve the target.
164.
165.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
166Trims ahead of the current write size
167.Pq Sy l2arc_write_max
168on L2ARC devices by this percentage of write size if we have filled the device.
169If set to
170.Sy 100
171we TRIM twice the space required to accommodate upcoming writes.
172A minimum of
173.Sy 64 MiB
174will be trimmed.
175It also enables TRIM of the whole L2ARC device upon creation
176or addition to an existing pool or if the header of the device is
177invalid upon importing a pool or onlining a cache device.
178A value of
179.Sy 0
180disables TRIM on L2ARC altogether and is the default as it can put significant
181stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
183.
184.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
185Do not write buffers to L2ARC if they were prefetched but not used by
186applications.
187In case there are prefetched buffers in L2ARC and this option
188is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
191This may be beneficial in case the L2ARC device is significantly faster
192in sequential reads than the disks of the pool.
193.Pp
194Use
195.Sy 1
196to disable and
197.Sy 0
198to enable caching/reading prefetches to/from L2ARC.
199.
200.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
201No reads during writes.
202.
203.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
204Cold L2ARC devices will have
205.Sy l2arc_write_max
206increased by this amount while they remain cold.
207.
208.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
209Max write bytes per interval.
210.
211.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
212Rebuild the L2ARC when importing a pool (persistent L2ARC).
213This can be disabled if there are problems importing a pool
214or attaching an L2ARC device (e.g. the L2ARC device is slow
215in reading stored log metadata, or the metadata
216has become somehow fragmented/unusable).
217.
218.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
220The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
221.Pp
222For L2ARC devices less than 1 GiB, the amount of data
223.Fn l2arc_evict
224evicts is significant compared to the amount of restored L2ARC data.
225In this case, do not write log blocks in L2ARC in order not to waste space.
226.
227.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
228Metaslab granularity, in bytes.
229This is roughly similar to what would be referred to as the "stripe size"
230in traditional RAID arrays.
231In normal operation, ZFS will try to write this amount of data to each disk
232before moving on to the next top-level vdev.
233.
234.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
235Enable metaslab group biasing based on their vdevs' over- or under-utilization
236relative to the pool.
237.
238.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
239Make some blocks above a certain size be gang blocks.
240This option is used by the test suite to facilitate testing.
241.
242.It Sy metaslab_force_ganging_pct Ns = Ns Sy 3 Ns % Pq uint
243For blocks that could be forced to be a gang block (due to
244.Sy metaslab_force_ganging ) ,
force this percentage of them to be gang blocks.
246.
247.It Sy zfs_ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
248Default DDT ZAP data block size as a power of 2. Note that changing this after
249creating a DDT on the pool will not affect existing DDTs, only newly created
250ones.
251.
252.It Sy zfs_ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
253Default DDT ZAP indirect block size as a power of 2. Note that changing this
254after creating a DDT on the pool will not affect existing DDTs, only newly
255created ones.
256.
257.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
258Default dnode block size as a power of 2.
259.
260.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
261Default dnode indirect block size as a power of 2.
262.
263.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
264When attempting to log an output nvlist of an ioctl in the on-disk history,
265the output will not be stored if it is larger than this size (in bytes).
266This must be less than
267.Sy DMU_MAX_ACCESS Pq 64 MiB .
268This applies primarily to
269.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
270.
271.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
272Prevent log spacemaps from being destroyed during pool exports and destroys.
273.
274.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
275Enable/disable segment-based metaslab selection.
276.
277.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
278When using segment-based metaslab selection, continue allocating
279from the active metaslab until this option's
280worth of buckets have been exhausted.
281.
282.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
283Load all metaslabs during pool import.
284.
285.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
286Prevent metaslabs from being unloaded.
287.
288.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
289Enable use of the fragmentation metric in computing metaslab weights.
290.
291.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
292Maximum distance to search forward from the last offset.
293Without this limit, fragmented pools can see
294.Em >100`000
295iterations and
296.Fn metaslab_block_picker
297becomes the performance limiting factor on high-performance storage.
298.Pp
299With the default setting of
300.Sy 16 MiB ,
301we typically see less than
302.Em 500
303iterations, even with very fragmented
304.Sy ashift Ns = Ns Sy 9
305pools.
306The maximum number of iterations possible is
307.Sy metaslab_df_max_search / 2^(ashift+1) .
308With the default setting of
309.Sy 16 MiB
310this is
311.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
312or
313.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
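.Pp
As a worked sketch of the iteration bound above:
.Bd -literal -compact -offset indent
max iterations = metaslab_df_max_search / 2^(ashift+1)
               = 16777216 / 2^10 = 16384  (ashift=9)
               = 16777216 / 2^13 =  2048  (ashift=12)
.Ed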
314.
315.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
316If not searching forward (due to
317.Sy metaslab_df_max_search , metaslab_df_free_pct ,
318.No or Sy metaslab_df_alloc_threshold ) ,
319this tunable controls which segment is used.
320If set, we will use the largest free segment.
321If unset, we will use a segment of at least the requested size.
322.
323.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
324When we unload a metaslab, we cache the size of the largest free chunk.
325We use that cached size to determine whether or not to load a metaslab
326for a given allocation.
327As more frees accumulate in that metaslab while it's unloaded,
328the cached max size becomes less and less accurate.
329After a number of seconds controlled by this tunable,
330we stop considering the cached max size and start
331considering only the histogram instead.
332.
333.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
334When we are loading a new metaslab, we check the amount of memory being used
335to store metaslab range trees.
336If it is over a threshold, we attempt to unload the least recently used metaslab
337to prevent the system from clogging all of its memory with range trees.
338This tunable sets the percentage of total system memory that is the threshold.
339.
340.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
341.Bl -item -compact
342.It
343If unset, we will first try normal allocation.
344.It
345If that fails then we will do a gang allocation.
346.It
347If that fails then we will do a "try hard" gang allocation.
348.It
349If that fails then we will have a multi-layer gang block.
350.El
351.Pp
352.Bl -item -compact
353.It
354If set, we will first try normal allocation.
355.It
356If that fails then we will do a "try hard" allocation.
357.It
358If that fails we will do a gang allocation.
359.It
360If that fails we will do a "try hard" gang allocation.
361.It
362If that fails then we will have a multi-layer gang block.
363.El
364.
365.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
366When not trying hard, we only consider this number of the best metaslabs.
367This improves performance, especially when there are many metaslabs per vdev
368and the allocation can't actually be satisfied
369(so we would otherwise iterate all metaslabs).
370.
371.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
372When a vdev is added, target this number of metaslabs per top-level vdev.
373.
374.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
375Default lower limit for metaslab size.
376.
377.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
378Default upper limit for metaslab size.
379.
380.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
381Maximum ashift used when optimizing for logical \[->] physical sector size on
382new
383top-level vdevs.
384May be increased up to
385.Sy ASHIFT_MAX Po 16 Pc ,
386but this may negatively impact pool space efficiency.
387.
388.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
389Minimum ashift used when creating new top-level vdevs.
390.
391.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
392Minimum number of metaslabs to create in a top-level vdev.
393.
394.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
395Skip label validation steps during pool import.
396Changing is not recommended unless you know what you're doing
397and are recovering a damaged label.
398.
399.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
400Practical upper limit of total metaslabs per top-level vdev.
401.
402.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
403Enable metaslab group preloading.
404.
405.It Sy metaslab_preload_limit Ns = Ns Sy 10 Pq uint
Maximum number of metaslabs per group to preload.
.
.It Sy metaslab_preload_pct Ns = Ns Sy 50 Pq uint
Percentage of CPUs to run a metaslab preload taskq.
410.
411.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
412Give more weight to metaslabs with lower LBAs,
413assuming they have greater bandwidth,
414as is typically the case on a modern constant angular velocity disk drive.
415.
416.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
417After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
418reduce unnecessary reloading.
419Note that both this many TXGs and
420.Sy metaslab_unload_delay_ms
421milliseconds must pass before unloading will occur.
422.
423.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
424After a metaslab is used, we keep it loaded for this many milliseconds,
425to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
427.Sy metaslab_unload_delay
428TXGs must pass before unloading will occur.
429.
430.It Sy reference_history Ns = Ns Sy 3 Pq uint
Maximum reference holders being tracked when
.Sy reference_tracking_enable
is active.
433.
434.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
435Track reference holders to
436.Sy refcount_t
437objects (debug builds only).
438.
439.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
440When set, the
441.Sy hole_birth
442optimization will not be used, and all holes will always be sent during a
443.Nm zfs Cm send .
444This is useful if you suspect your datasets are affected by a bug in
445.Sy hole_birth .
446.
447.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
448SPA config file.
449.
450.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
451Multiplication factor used to estimate actual disk consumption from the
452size of data being written.
453The default value is a worst case estimate,
454but lower values may be valid for a given pool depending on its configuration.
455Pool administrators who understand the factors involved
456may wish to specify a more realistic inflation factor,
457particularly if they operate close to quota or capacity limits.
458.
459.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
460Whether to print the vdev tree in the debugging message buffer during pool
461import.
462.
463.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
464Whether to traverse data blocks during an "extreme rewind"
465.Pq Fl X
466import.
467.Pp
468An extreme rewind import normally performs a full traversal of all
469blocks in the pool for verification.
470If this parameter is unset, the traversal skips non-metadata blocks.
471It can be toggled once the
472import has started to stop or start the traversal of non-metadata blocks.
473.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
475Whether to traverse blocks during an "extreme rewind"
476.Pq Fl X
477pool import.
478.Pp
479An extreme rewind import normally performs a full traversal of all
480blocks in the pool for verification.
481If this parameter is unset, the traversal is not performed.
482It can be toggled once the import has started to stop or start the traversal.
483.
484.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
485Sets the maximum number of bytes to consume during pool import to the log2
486fraction of the target ARC size.
487.
488.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
489Normally, we don't allow the last
490.Sy 3.2% Pq Sy 1/2^spa_slop_shift
491of space in the pool to be consumed.
492This ensures that we don't run the pool completely out of space,
493due to unaccounted changes (e.g. to the MOS).
494It also limits the worst-case time to allocate space.
495If we have less than this amount of free space,
496most ZPL operations (e.g. write, create) will return
497.Sy ENOSPC .
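.Pp
As a worked sketch of the reserved slop fraction:
.Bd -literal -compact -offset indent
slop = 1/2^spa_slop_shift = 1/32 (about 3.2%)   # default shift of 5
     = 1/2^6              = 1/64 (about 1.6%)   # spa_slop_shift=6
.Ed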
498.
499.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
500Limits the number of on-disk error log entries that will be converted to the
501new format when enabling the
502.Sy head_errlog
503feature.
504The default is to convert all log entries.
505.
506.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
507During top-level vdev removal, chunks of data are copied from the vdev
508which may include free space in order to trade bandwidth for IOPS.
509This parameter determines the maximum span of free space, in bytes,
510which will be included as "unnecessary" data in a chunk of copied data.
511.Pp
512The default value here was chosen to align with
513.Sy zfs_vdev_read_gap_limit ,
514which is a similar concept when doing
515regular reads (but there's no reason it has to be the same).
516.
517.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
518Logical ashift for file-based devices.
519.
520.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
521Physical ashift for file-based devices.
522.
523.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
524If set, when we start iterating over a ZAP object,
525prefetch the entire object (all leaf blocks).
526However, this is limited by
527.Sy dmu_prefetch_max .
528.
529.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
530Maximum micro ZAP size.
531A micro ZAP is upgraded to a fat ZAP, once it grows beyond the specified size.
532.
533.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
534Min bytes to prefetch per stream.
535Prefetch distance starts from the demand access size and quickly grows to
536this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
since last time haven't completed in time to satisfy the demand request, i.e.
the prefetch depth didn't cover the read latency or the pool got saturated.
540.
541.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
542Max bytes to prefetch per stream.
543.
544.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
545Max bytes to prefetch indirects for per stream.
546.
547.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
548Max number of streams per zfetch (prefetch streams per file).
549.
550.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Minimum time in seconds before an inactive prefetch stream can be reclaimed.
552.
553.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Maximum time in seconds before an inactive prefetch stream can be deleted.
555.
556.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable the ARC's use of scatter/gather lists;
when disabled, all allocations are forced to be linear in kernel memory.
559Disabling can improve performance in some code paths
560at the expense of fragmented kernel memory.
561.
562.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
563Maximum number of consecutive memory pages allocated in a single block for
564scatter/gather lists.
565.Pp
566The value of
567.Sy MAX_ORDER
568depends on kernel configuration.
569.
570.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
571This is the minimum allocation size that will use scatter (page-based) ABDs.
572Smaller allocations will use linear ABDs.
573.
574.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
575When the number of bytes consumed by dnodes in the ARC exceeds this number of
576bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that a percentage of the ARC meta buffers, determined by
.Sy zfs_arc_dnode_limit_percent ,
may be used for dnodes.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
583Percentage that can be consumed by dnodes of ARC meta buffers.
584.Pp
585See also
586.Sy zfs_arc_dnode_limit ,
587which serves a similar purpose but has a higher priority if nonzero.
588.
589.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
590Percentage of ARC dnodes to try to scan in response to demand for non-metadata
591when the number of bytes consumed by dnodes exceeds
592.Sy zfs_arc_dnode_limit .
593.
594.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
595The ARC's buffer hash table is sized based on the assumption of an average
596block size of this value.
597This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
598with 8-byte pointers.
599For configurations with a known larger average block size,
600this value can be increased to reduce the memory footprint.
601.
602.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
603When
604.Fn arc_is_overflowing ,
605.Fn arc_get_data_impl
606waits for this percent of the requested amount of data to be evicted.
607For example, by default, for every
608.Em 2 KiB
609that's evicted,
610.Em 1 KiB
611of it may be "reused" by a new allocation.
612Since this is above
613.Sy 100 Ns % ,
614it ensures that progress is made towards getting
615.Sy arc_size No under Sy arc_c .
616Since this is finite, it ensures that allocations can still happen,
617even during the potentially long time that
618.Sy arc_size No is more than Sy arc_c .
619.
620.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
622This batch-style operation prevents entire sub-lists from being evicted at once
623but comes at a cost of additional unlocking and locking.
624.
625.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a non-zero value, it will replace the
627.Sy arc_grow_retry
628value with this value.
629The
630.Sy arc_grow_retry
631.No value Pq default Sy 5 Ns s
632is the number of seconds the ARC will wait before
633trying to resume growth after a memory pressure event.
634.
635.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
636Throttle I/O when free system memory drops below this percentage of total
637system memory.
638Setting this value to
639.Sy 0
640will disable the throttle.
641.
642.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
643Max size of ARC in bytes.
644If
645.Sy 0 ,
646then the max size of ARC is determined by the amount of system memory installed.
647The larger of
648.Sy all_system_memory No \- Sy 1 GiB
649and
650.Sy 5/8 No \(mu Sy all_system_memory
651will be used as the limit.
652This value must be at least
653.Sy 67108864 Ns B Pq 64 MiB .
654.Pp
655This value can be changed dynamically, with some caveats.
656It cannot be set back to
657.Sy 0
658while running, and reducing it below the current ARC size will not cause
659the ARC to shrink without memory pressure to induce shrinking.
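.Pp
For example, on Linux the limit can typically be inspected and adjusted at
runtime through the module parameter interface (an illustrative sketch; the
value chosen here, 4 GiB, is arbitrary):
.Bd -literal -compact -offset indent
# cat /sys/module/zfs/parameters/zfs_arc_max
# echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
.Ed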
660.
661.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
662Balance between metadata and data on ghost hits.
Values above 100 increase metadata caching by proportionally reducing the
effect of ghost data hits on the target data/metadata rate.
665.
666.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
667Min size of ARC in bytes.
668.No If set to Sy 0 , arc_c_min
669will default to consuming the larger of
670.Sy 32 MiB
671and
672.Sy all_system_memory No / Sy 32 .
673.
674.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
675Minimum time prefetched blocks are locked in the ARC.
676.
677.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
678Minimum time "prescient prefetched" blocks are locked in the ARC.
679These blocks are meant to be prefetched fairly aggressively ahead of
680the code that may use them.
681.
682.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
683Number of arc_prune threads.
684.Fx
685does not need more than one.
686Linux may theoretically use one per mount point up to number of CPUs,
687but that was not proven to be useful.
688.
689.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
690Number of missing top-level vdevs which will be allowed during
691pool import (only in read-only mode).
692.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
694Maximum size in bytes allowed to be passed as
695.Sy zc_nvlist_src_size
696for ioctls on
697.Pa /dev/zfs .
698This prevents a user from causing the kernel to allocate
699an excessive amount of memory.
700When the limit is exceeded, the ioctl fails with
701.Sy EINVAL
702and a description of the error is sent to the
703.Pa zfs-dbgmsg
704log.
705This parameter should not need to be touched under normal circumstances.
706If
707.Sy 0 ,
708equivalent to a quarter of the user-wired memory limit under
709.Fx
710and to
711.Sy 134217728 Ns B Pq 128 MiB
712under Linux.
713.
714.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
715To allow more fine-grained locking, each ARC state contains a series
716of lists for both data and metadata objects.
717Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
719and also applies to other uses of the multilist data structure.
720.Pp
721If
722.Sy 0 ,
723equivalent to the greater of the number of online CPUs and
724.Sy 4 .
725.
726.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
727The ARC size is considered to be overflowing if it exceeds the current
728ARC target size
729.Pq Sy arc_c
730by thresholds determined by this parameter.
731Exceeding by
732.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
733starts ARC reclamation process.
734If that appears insufficient, exceeding by
735.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
736blocks new buffer allocation until the reclaim thread catches up.
Once started, the reclamation process continues until the ARC size returns
below the target size.
739.Pp
740The default value of
741.Sy 8
742causes the ARC to start reclamation if it exceeds the target size by
743.Em 0.2%
744of the target size, and block allocations by
745.Em 0.6% .
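.Pp
As a worked sketch of those default thresholds:
.Bd -literal -compact -offset indent
arc_c >> 8                        = arc_c/256 (about 0.4% of arc_c)
reclaim starts at (arc_c >> 8)/2       (about 0.2% over arc_c)
allocations block at (arc_c >> 8)*1.5  (about 0.6% over arc_c)
.Ed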
746.
747.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
748If nonzero, this will update
749.Sy arc_shrink_shift Pq default Sy 7
750with the new value.
751.
752.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
753Percent of pagecache to reclaim ARC to.
754.Pp
755This tunable allows the ZFS ARC to play more nicely
756with the kernel's LRU pagecache.
757It can guarantee that the ARC size won't collapse under scanning
758pressure on the pagecache, yet still allows the ARC to be reclaimed down to
759.Sy zfs_arc_min
760if necessary.
761This value is specified as percent of pagecache size (as measured by
762.Sy NR_FILE_PAGES ) ,
763where that percent may exceed
764.Sy 100 .
765This
766only operates during memory pressure/reclaim.
767.
768.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
769This is a limit on how many pages the ARC shrinker makes available for
770eviction in response to one page allocation attempt.
771Note that in practice, the kernel's shrinker can ask us to evict
772up to about four times this for one allocation attempt.
773.Pp
774The default limit of
775.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
776limits the amount of time spent attempting to reclaim ARC memory to
777less than 100 ms per allocation attempt,
778even with a small average compressed block size of ~8 KiB.
779.Pp
780The parameter can be set to 0 (zero) to disable the limit,
781and only applies on Linux.
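.Pp
As a sketch of the figure quoted above, assuming 4 KiB pages:
.Bd -literal -compact -offset indent
10000 pages * 4 KiB         = about 40 MiB per shrinker call
* about 4 calls per attempt = about 160 MiB per allocation attempt
.Ed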
782.
783.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
784The target number of bytes the ARC should leave as free memory on the system.
785If zero, equivalent to the bigger of
786.Sy 512 KiB No and Sy all_system_memory/64 .
787.
788.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
789Disable pool import at module load by ignoring the cache file
790.Pq Sy spa_config_path .
791.
792.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
793Rate limit checksum events to this many per second.
794Note that this should not be set below the ZED thresholds
795(currently 10 checksums over 10 seconds)
796or else the daemon may not trigger any action.
797.
798.It Sy zfs_commit_timeout_pct Ns = Ns Sy 10 Ns % Pq uint
799This controls the amount of time that a ZIL block (lwb) will remain "open"
800when it isn't "full", and it has a thread waiting for it to be committed to
801stable storage.
802The timeout is scaled based on a percentage of the last lwb
803latency to avoid significantly impacting the latency of each individual
804transaction record (itx).
805.
806.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
807Vdev indirection layer (used for device removal) sleeps for this many
808milliseconds during mapping generation.
809Intended for use with the test suite to throttle vdev removal speed.
810.
811.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
812Minimum percent of obsolete bytes in vdev mapping required to attempt to
813condense
814.Pq see Sy zfs_condense_indirect_vdevs_enable .
815Intended for use with the test suite
816to facilitate triggering condensing as needed.
817.
818.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
819Enable condensing indirect vdev mappings.
820When set, attempt to condense indirect vdev mappings
821if the mapping uses more than
822.Sy zfs_condense_min_mapping_bytes
823bytes of memory and if the obsolete space map object uses more than
824.Sy zfs_condense_max_obsolete_bytes
825bytes on-disk.
826The condensing process is an attempt to save memory by removing obsolete
827mappings.
828.
829.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
830Only attempt to condense indirect vdev mappings if the on-disk size
831of the obsolete space map object is greater than this number of bytes
832.Pq see Sy zfs_condense_indirect_vdevs_enable .
833.
834.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
835Minimum size vdev mapping to attempt to condense
836.Pq see Sy zfs_condense_indirect_vdevs_enable .
837.
838.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
839Internally ZFS keeps a small log to facilitate debugging.
840The log is enabled by default, and can be disabled by unsetting this option.
841The contents of the log can be accessed by reading
842.Pa /proc/spl/kstat/zfs/dbgmsg .
843Writing
844.Sy 0
845to the file clears the log.
846.Pp
847This setting does not influence debug prints due to
848.Sy zfs_flags .
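.Pp
For example, on Linux the log can be read and cleared using the path above:
.Bd -literal -compact -offset indent
# cat /proc/spl/kstat/zfs/dbgmsg
# echo 0 > /proc/spl/kstat/zfs/dbgmsg
.Ed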
849.
850.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
851Maximum size of the internal ZFS debug log.
852.
853.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
854Historically used for controlling what reporting was available under
855.Pa /proc/spl/kstat/zfs .
856No effect.
857.
858.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
859When a pool sync operation takes longer than
860.Sy zfs_deadman_synctime_ms ,
861or when an individual I/O operation takes longer than
862.Sy zfs_deadman_ziotime_ms ,
863then the operation is considered to be "hung".
864If
865.Sy zfs_deadman_enabled
866is set, then the deadman behavior is invoked as described by
867.Sy zfs_deadman_failmode .
868By default, the deadman is enabled and set to
869.Sy wait
870which results in "hung" I/O operations only being logged.
871The deadman is automatically disabled when a pool gets suspended.
872.
873.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
874Controls the failure behavior when the deadman detects a "hung" I/O operation.
875Valid values are:
876.Bl -tag -compact -offset 4n -width "continue"
877.It Sy wait
878Wait for a "hung" operation to complete.
879For each "hung" operation a "deadman" event will be posted
880describing that operation.
881.It Sy continue
882Attempt to recover from a "hung" operation by re-dispatching it
883to the I/O pipeline if possible.
884.It Sy panic
885Panic the system.
886This can be used to facilitate automatic fail-over
887to a properly configured fail-over partner.
888.El
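.Pp
For example, on Linux the failure mode can typically be changed through the
module parameter interface (an illustrative sketch):
.Bd -literal -compact -offset indent
# echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode
.Ed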
889.
890.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
891Check time in milliseconds.
892This defines the frequency at which we check for hung I/O requests
893and potentially invoke the
894.Sy zfs_deadman_failmode
895behavior.
896.
897.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
898Interval in milliseconds after which the deadman is triggered and also
899the interval after which a pool sync operation is considered to be "hung".
900Once this limit is exceeded the deadman will be invoked every
901.Sy zfs_deadman_checktime_ms
902milliseconds until the pool sync completes.
903.
904.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
905Interval in milliseconds after which the deadman is triggered and an
906individual I/O operation is considered to be "hung".
907As long as the operation remains "hung",
908the deadman will be invoked every
909.Sy zfs_deadman_checktime_ms
910milliseconds until the operation completes.
911.
912.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
913Enable prefetching dedup-ed blocks which are going to be freed.
914.
915.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
916Start to delay each transaction once there is this amount of dirty data,
917expressed as a percentage of
918.Sy zfs_dirty_data_max .
919This value should be at least
920.Sy zfs_vdev_async_write_active_max_dirty_percent .
921.No See Sx ZFS TRANSACTION DELAY .
922.
923.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
924This controls how quickly the transaction delay approaches infinity.
925Larger values cause longer delays for a given amount of dirty data.
926.Pp
927For the smoothest delay, this value should be about 1 billion divided
928by the maximum number of operations per second.
This will smoothly handle operation rates between a tenth and ten times this number.
930.No See Sx ZFS TRANSACTION DELAY .
931.Pp
932.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
933.
934.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
935Disables requirement for IVset GUIDs to be present and match when doing a raw
936receive of encrypted datasets.
937Intended for users whose pools were created with
938OpenZFS pre-release versions and now have compatibility issues.
939.
940.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
941Maximum number of uses of a single salt value before generating a new one for
942encrypted datasets.
943The default value is also the maximum.
944.
945.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
946Size of the znode hashtable used for holds.
947.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
may cause objects to wait on each other even when there is no actual contention
on the same object.
952.
953.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
954Rate limit delay and deadman zevents (which report slow I/O operations) to this
955many per
956second.
957.
958.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
959Upper-bound limit for unflushed metadata changes to be held by the
960log spacemap in memory, in bytes.
961.
962.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
963Part of overall system memory that ZFS allows to be used
964for unflushed metadata changes by the log spacemap, in millionths.
965.
966.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
967Describes the maximum number of log spacemap blocks allowed for each pool.
968The default value means that the space in all the log spacemaps
969can add up to no more than
970.Sy 131072
971blocks (which means
972.Em 16 GiB
973of logical space before compression and ditto blocks,
974assuming that blocksize is
975.Em 128 KiB ) .
976.Pp
977This tunable is important because it involves a trade-off between import
978time after an unclean export and the frequency of flushing metaslabs.
979The higher this number is, the more log blocks we allow when the pool is
980active which means that we flush metaslabs less often and thus decrease
981the number of I/O operations for spacemap updates per TXG.
982At the same time though, that means that in the event of an unclean export,
983there will be more log spacemap blocks for us to read, inducing overhead
984in the import time of the pool.
The lower the number, the more frequently metaslabs are flushed, destroying log
blocks more quickly as they become obsolete, which leaves fewer blocks
to be read during import time after a crash.
988.Pp
989Each log spacemap block existing during pool import leads to approximately
990one extra logical I/O issued.
991This is the reason why this tunable is exposed in terms of blocks rather
992than space used.
993.
994.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
995If the number of metaslabs is small and our incoming rate is high,
we could get into a situation in which we are flushing all our metaslabs every TXG.
997Thus we always allow at least this many log blocks.
998.
999.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
1000Tunable used to determine the number of blocks that can be used for
1001the spacemap log, expressed as a percentage of the total number of
1002unflushed metaslabs in the pool.
1003.
1004.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
1005Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
It effectively limits the maximum number of unflushed per-TXG spacemap logs
1007that need to be read after unclean pool export.
1008.
1009.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1010When enabled, files will not be asynchronously removed from the list of pending
1011unlinks and the space they consume will be leaked.
1012Once this option has been disabled and the dataset is remounted,
1013the pending unlinks will be processed and the freed space returned to the pool.
1014This option is used by the test suite.
1015.
1016.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted
synchronously.
1021Decreasing this value will reduce the time spent in an
1022.Xr unlink 2
1023system call, at the expense of a longer delay before the freed space is
1024available.
1025This only applies on Linux.
1026.
1027.It Sy zfs_dirty_data_max Ns = Pq int
1028Determines the dirty space limit in bytes.
1029Once this limit is exceeded, new writes are halted until space frees up.
1030This parameter takes precedence over
1031.Sy zfs_dirty_data_max_percent .
1032.No See Sx ZFS TRANSACTION DELAY .
1033.Pp
1034Defaults to
1035.Sy physical_ram/10 ,
1036capped at
1037.Sy zfs_dirty_data_max_max .
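.Pp
As a worked sketch of the default on a machine with 64 GiB of physical RAM:
.Bd -literal -compact -offset indent
physical_ram/10            = 6.4 GiB
zfs_dirty_data_max_max     = min(physical_ram/4, 4 GiB) = 4 GiB
zfs_dirty_data_max default = min(6.4 GiB, 4 GiB) = 4 GiB
.Ed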
1038.
1039.It Sy zfs_dirty_data_max_max Ns = Pq int
1040Maximum allowable value of
1041.Sy zfs_dirty_data_max ,
1042expressed in bytes.
1043This limit is only enforced at module load time, and will be ignored if
1044.Sy zfs_dirty_data_max
1045is later changed.
1046This parameter takes precedence over
1047.Sy zfs_dirty_data_max_max_percent .
1048.No See Sx ZFS TRANSACTION DELAY .
1049.Pp
1050Defaults to
1051.Sy min(physical_ram/4, 4GiB) ,
1052or
1053.Sy min(physical_ram/4, 1GiB)
1054for 32-bit systems.
1055.
1056.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
1057Maximum allowable value of
1058.Sy zfs_dirty_data_max ,
1059expressed as a percentage of physical RAM.
1060This limit is only enforced at module load time, and will be ignored if
1061.Sy zfs_dirty_data_max
1062is later changed.
1063The parameter
1064.Sy zfs_dirty_data_max_max
1065takes precedence over this one.
1066.No See Sx ZFS TRANSACTION DELAY .
1067.
1068.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
1069Determines the dirty space limit, expressed as a percentage of all memory.
1070Once this limit is exceeded, new writes are halted until space frees up.
1071The parameter
1072.Sy zfs_dirty_data_max
1073takes precedence over this one.
1074.No See Sx ZFS TRANSACTION DELAY .
1075.Pp
1076Subject to
1077.Sy zfs_dirty_data_max_max .
1078.
1079.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
1080Start syncing out a transaction group if there's at least this much dirty data
1081.Pq as a percentage of Sy zfs_dirty_data_max .
1082This should be less than
1083.Sy zfs_vdev_async_write_active_min_dirty_percent .
1084.
1085.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
Write operations are throttled when approaching the limit until log data is
cleared out after transaction group sync.
Because of some overhead, it should be set to at least 2 times the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput .
1092It also should be smaller than the size of the slog device if slog is present.
1093.Pp
1094Defaults to
.Sy zfs_dirty_data_max*2 .
1096.
1097.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
1098Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1099preallocated for a file in order to guarantee that later writes will not
1100run out of space.
1101Instead,
1102.Xr fallocate 2
1103space preallocation only checks that sufficient space is currently available
1104in the pool or the user's project quota allocation,
1105and then creates a sparse file of the requested size.
1106The requested space is multiplied by
1107.Sy zfs_fallocate_reserve_percent
1108to allow additional space for indirect blocks and other internal metadata.
1109Setting this to
1110.Sy 0
1111disables support for
1112.Xr fallocate 2
1113and causes it to return
1114.Sy EOPNOTSUPP .
1115.
1116.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
1117Select a fletcher 4 implementation.
1118.Pp
1119Supported selectors are:
1120.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
1121.No and Sy aarch64_neon .
1122All except
1123.Sy fastest No and Sy scalar
1124require instruction set extensions to be available,
1125and will only appear if ZFS detects that they are present at runtime.
1126If multiple implementations of fletcher 4 are available, the
1127.Sy fastest
1128will be chosen using a micro benchmark.
1129Selecting
1130.Sy scalar
1131results in the original CPU-based calculation being used.
1132Selecting any option other than
1133.Sy fastest No or Sy scalar
1134results in vector instructions
1135from the respective CPU instruction set being used.
1136.
1137.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
1138Select a BLAKE3 implementation.
1139.Pp
1140Supported selectors are:
1141.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
1142All except
1143.Sy cycle , fastest No and Sy generic
1144require instruction set extensions to be available,
1145and will only appear if ZFS detects that they are present at runtime.
1146If multiple implementations of BLAKE3 are available, the
.Sy fastest
will be chosen using a micro benchmark.
You can see the benchmark results by reading this kstat file:
1149.Pa /proc/spl/kstat/zfs/chksum_bench .
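.Pp
For example, on Linux the benchmark results can be read and an implementation
selected through the module parameter interface (an illustrative sketch):
.Bd -literal -compact -offset indent
# cat /proc/spl/kstat/zfs/chksum_bench
# echo sse41 > /sys/module/zfs/parameters/zfs_blake3_impl
.Ed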
1150.
1151.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1152Enable/disable the processing of the free_bpobj object.
1153.
1154.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
1155Maximum number of blocks freed in a single TXG.
1156.
1157.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
1158Maximum number of dedup blocks freed in a single TXG.
1159.
1160.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
1161Maximum asynchronous read I/O operations active to each device.
1162.No See Sx ZFS I/O SCHEDULER .
1163.
1164.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
1165Minimum asynchronous read I/O operation active to each device.
1166.No See Sx ZFS I/O SCHEDULER .
1167.
1168.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
1169When the pool has more than this much dirty data, use
1170.Sy zfs_vdev_async_write_max_active
1171to limit active async writes.
1172If the dirty data is between the minimum and maximum,
1173the active I/O limit is linearly interpolated.
1174.No See Sx ZFS I/O SCHEDULER .
1175.
1176.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
1177When the pool has less than this much dirty data, use
1178.Sy zfs_vdev_async_write_min_active
1179to limit active async writes.
1180If the dirty data is between the minimum and maximum,
1181the active I/O limit is linearly
1182interpolated.
1183.No See Sx ZFS I/O SCHEDULER .
1184.
1185.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
1186Maximum asynchronous write I/O operations active to each device.
1187.No See Sx ZFS I/O SCHEDULER .
1188.
1189.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
1190Minimum asynchronous write I/O operations active to each device.
1191.No See Sx ZFS I/O SCHEDULER .
1192.Pp
1193Lower values are associated with better latency on rotational media but poorer
1194resilver performance.
1195The default value of
1196.Sy 2
1197was chosen as a compromise.
1198A value of
1199.Sy 3
1200has been shown to improve resilver performance further at a cost of
1201further increasing latency.
1202.
1203.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
1204Maximum initializing I/O operations active to each device.
1205.No See Sx ZFS I/O SCHEDULER .
1206.
1207.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
1208Minimum initializing I/O operations active to each device.
1209.No See Sx ZFS I/O SCHEDULER .
1210.
1211.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
1212The maximum number of I/O operations active to each device.
1213Ideally, this will be at least the sum of each queue's
1214.Sy max_active .
1215.No See Sx ZFS I/O SCHEDULER .
1216.
1217.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
1218Timeout value to wait before determining a device is missing
1219during import.
1220This is helpful for transient missing paths due
1221to links being briefly removed and recreated in response to
1222udev events.
1223.
1224.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
1225Maximum sequential resilver I/O operations active to each device.
1226.No See Sx ZFS I/O SCHEDULER .
1227.
1228.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
1229Minimum sequential resilver I/O operations active to each device.
1230.No See Sx ZFS I/O SCHEDULER .
1231.
1232.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
1233Maximum removal I/O operations active to each device.
1234.No See Sx ZFS I/O SCHEDULER .
1235.
1236.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
1237Minimum removal I/O operations active to each device.
1238.No See Sx ZFS I/O SCHEDULER .
1239.
1240.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
1241Maximum scrub I/O operations active to each device.
1242.No See Sx ZFS I/O SCHEDULER .
1243.
1244.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
1245Minimum scrub I/O operations active to each device.
1246.No See Sx ZFS I/O SCHEDULER .
1247.
1248.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
1249Maximum synchronous read I/O operations active to each device.
1250.No See Sx ZFS I/O SCHEDULER .
1251.
1252.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
1253Minimum synchronous read I/O operations active to each device.
1254.No See Sx ZFS I/O SCHEDULER .
1255.
1256.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
1257Maximum synchronous write I/O operations active to each device.
1258.No See Sx ZFS I/O SCHEDULER .
1259.
1260.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
1261Minimum synchronous write I/O operations active to each device.
1262.No See Sx ZFS I/O SCHEDULER .
1263.
1264.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
1265Maximum trim/discard I/O operations active to each device.
1266.No See Sx ZFS I/O SCHEDULER .
1267.
1268.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
1269Minimum trim/discard I/O operations active to each device.
1270.No See Sx ZFS I/O SCHEDULER .
1271.
1272.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
1273For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1274the number of concurrently-active I/O operations is limited to
1275.Sy zfs_*_min_active ,
1276unless the vdev is "idle".
1277When there are no interactive I/O operations active (synchronous or otherwise),
1278and
1279.Sy zfs_vdev_nia_delay
1280operations have completed since the last interactive operation,
1281then the vdev is considered to be "idle",
1282and the number of concurrently-active non-interactive operations is increased to
1283.Sy zfs_*_max_active .
1284.No See Sx ZFS I/O SCHEDULER .
1285.
1286.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
1287Some HDDs tend to prioritize sequential I/O so strongly, that concurrent
1288random I/O latency reaches several seconds.
1289On some HDDs this happens even if sequential I/O operations
1290are submitted one at a time, and so setting
1291.Sy zfs_*_max_active Ns = Sy 1
1292does not help.
1293To prevent non-interactive I/O, like scrub,
1294from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit
operations can be sent
1296while there are outstanding incomplete interactive operations.
1297This enforced wait ensures the HDD services the interactive I/O
1298within a reasonable amount of time.
1299.No See Sx ZFS I/O SCHEDULER .
1300.
1301.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq uint
1302Maximum number of queued allocations per top-level vdev expressed as
1303a percentage of
1304.Sy zfs_vdev_async_write_max_active ,
1305which allows the system to detect devices that are more capable
1306of handling allocations and to allocate more blocks to those devices.
1307This allows for dynamic allocation distribution when devices are imbalanced,
1308as fuller devices will tend to be slower than empty devices.
1309.Pp
1310Also see
1311.Sy zio_dva_throttle_enabled .
1312.
1313.It Sy zfs_vdev_def_queue_depth Ns = Ns Sy 32 Pq uint
1314Default queue depth for each vdev IO allocator.
1315Higher values allow for better coalescing of sequential writes before sending
1316them to the disk, but can increase transaction commit times.
1317.
1318.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
1319Defines if the driver should retire on a given error type.
1320The following options may be bitwise-ored together:
1321.TS
1322box;
1323lbz r l l .
1324	Value	Name	Description
1325_
1326	1	Device	No driver retries on device errors
1327	2	Transport	No driver retries on transport errors.
1328	4	Driver	No driver retries on driver errors.
1329.TE
1330.
1331.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
1332Time before expiring
1333.Pa .zfs/snapshot .
1334.
1335.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
1336Allow the creation, removal, or renaming of entries in the
1337.Sy .zfs/snapshot
1338directory to cause the creation, destruction, or renaming of snapshots.
1339When enabled, this functionality works both locally and over NFS exports
1340which have the
1341.Em no_root_squash
1342option set.
1343.
1344.It Sy zfs_flags Ns = Ns Sy 0 Pq int
1345Set additional debugging flags.
1346The following flags may be bitwise-ored together:
1347.TS
1348box;
1349lbz r l l .
1350	Value	Name	Description
1351_
1352	1	ZFS_DEBUG_DPRINTF	Enable dprintf entries in the debug log.
1353*	2	ZFS_DEBUG_DBUF_VERIFY	Enable extra dbuf verifications.
1354*	4	ZFS_DEBUG_DNODE_VERIFY	Enable extra dnode verifications.
1355	8	ZFS_DEBUG_SNAPNAMES	Enable snapshot name verification.
1356*	16	ZFS_DEBUG_MODIFY	Check for illegally modified ARC buffers.
1357	64	ZFS_DEBUG_ZIO_FREE	Enable verification of block frees.
1358	128	ZFS_DEBUG_HISTOGRAM_VERIFY	Enable extra spacemap histogram verifications.
1359	256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1360	512	ZFS_DEBUG_SET_ERROR	Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
1361	1024	ZFS_DEBUG_INDIRECT_REMAP	Verify split blocks created by device removal.
1362	2048	ZFS_DEBUG_TRIM	Verify TRIM ranges are always within the allocatable range tree.
1363	4096	ZFS_DEBUG_LOG_SPACEMAP	Verify that the log summary is consistent with the spacemap log
1364			       and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
1365.TE
1366.Sy \& * No Requires debug build .
1367.
1368.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
1369Enables btree verification.
1370The following settings are cumulative:
1371.TS
1372box;
1373lbz r l l .
1374	Value	Description
1375_
1376	1	Verify height.
1377	2	Verify pointers from children to parent.
1378	3	Verify element counts.
1379	4	Verify element order. (expensive)
1380*	5	Verify unused memory is poisoned. (expensive)
1381.TE
1382.Sy \& * No Requires debug build .
1383.
1384.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
1385If destroy encounters an
1386.Sy EIO
1387while reading metadata (e.g. indirect blocks),
1388space referenced by the missing metadata cannot be freed.
1389Normally this causes the background destroy to become "stalled",
1390as it is unable to make forward progress.
1391While in this stalled state, all remaining space to free
1392from the error-encountering filesystem is "temporarily leaked".
1393Set this flag to cause it to ignore the
1394.Sy EIO ,
1395permanently leak the space from indirect blocks that cannot be read,
1396and continue to free everything else that it can.
1397.Pp
1398The default "stalling" behavior is useful if the storage partially
1399fails (i.e. some but not all I/O operations fail), and then later recovers.
1400In this case, we will be able to continue pool operations while it is
1401partially failed, and when it recovers, we can continue to free the
1402space, with no leaks.
1403Note, however, that this case is actually fairly rare.
1404.Pp
1405Typically pools either
1406.Bl -enum -compact -offset 4n -width "1."
1407.It
1408fail completely (but perhaps temporarily,
1409e.g. due to a top-level vdev going offline), or
1410.It
1411have localized, permanent errors (e.g. disk returns the wrong data
1412due to bit flip or firmware bug).
1413.El
1414In the former case, this setting does not matter because the
1415pool will be suspended and the sync thread will not be able to make
1416forward progress regardless.
1417In the latter, because the error is permanent, the best we can do
1418is leak the minimum amount of space,
1419which is what setting this flag will do.
1420It is therefore reasonable for this flag to normally be set,
1421but we chose the more conservative approach of not setting it,
1422so that there is no possibility of
1423leaking space in the "partial temporary" failure case.
1424.
1425.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
1426During a
1427.Nm zfs Cm destroy
1428operation using the
1429.Sy async_destroy
1430feature,
1431a minimum of this much time will be spent working on freeing blocks per TXG.
1432.
1433.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
1434Similar to
1435.Sy zfs_free_min_time_ms ,
1436but for cleanup of old indirection records for removed vdevs.
1437.
1438.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
1439Largest data block to write to the ZIL.
1440Larger blocks will be treated as if the dataset being written to had the
1441.Sy logbias Ns = Ns Sy throughput
1442property set.
1443.
1444.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
1445Pattern written to vdev free space by
1446.Xr zpool-initialize 8 .
1447.
1448.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
1449Size of writes used by
1450.Xr zpool-initialize 8 .
1451This option is used by the test suite.
1452.
1453.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
1454The threshold size (in block pointers) at which we create a new sub-livelist.
1455Larger sublists are more costly from a memory perspective but the fewer
1456sublists there are, the lower the cost of insertion.
1457.
1458.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
1459If the amount of shared space between a snapshot and its clone drops below
1460this threshold, the clone turns off the livelist and reverts to the old
1461deletion method.
1462This is in place because livelists no longer give us a benefit
1463once a clone has been overwritten enough.
1464.
1465.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
1466Incremented each time an extra ALLOC blkptr is added to a livelist entry while
1467it is being condensed.
1468This option is used by the test suite to track race conditions.
1469.
1470.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
1471Incremented each time livelist condensing is canceled while in
1472.Fn spa_livelist_condense_sync .
1473This option is used by the test suite to track race conditions.
1474.
1475.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1476When set, the livelist condense process pauses indefinitely before
1477executing the synctask \(em
1478.Fn spa_livelist_condense_sync .
1479This option is used by the test suite to trigger race conditions.
1480.
1481.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
1482Incremented each time livelist condensing is canceled while in
1483.Fn spa_livelist_condense_cb .
1484This option is used by the test suite to track race conditions.
1485.
1486.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1487When set, the livelist condense process pauses indefinitely before
1488executing the open context condensing work in
1489.Fn spa_livelist_condense_cb .
1490This option is used by the test suite to trigger race conditions.
1491.
1492.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
1493The maximum execution time limit that can be set for a ZFS channel program,
1494specified as a number of Lua instructions.
1495.
1496.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
1497The maximum memory limit that can be set for a ZFS channel program, specified
1498in bytes.
1499.
1500.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
1501The maximum depth of nested datasets.
1502This value can be tuned temporarily to
1503fix existing datasets that exceed the predefined limit.
1504.
1505.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64
1506The number of past TXGs that the flushing algorithm of the log spacemap
1507feature uses to estimate incoming log blocks.
1508.
1509.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64
1510Maximum number of rows allowed in the summary of the spacemap log.
1511.
1512.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint
1513We currently support block sizes from
1514.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc .
1515The benefits of larger blocks, and thus larger I/O,
1516need to be weighed against the cost of COWing a giant block to modify one byte.
1517Additionally, very large blocks can have an impact on I/O latency,
1518and also potentially on the memory allocator.
1519Therefore, we formerly forbade creating blocks larger than 1 MiB.
1520Larger blocks could be created by changing this tunable,
1521and pools with larger blocks can always be imported and used,
1522regardless of this setting.
1523.
1524.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int
1525Allow datasets received with redacted send/receive to be mounted.
1526Normally disabled because these datasets may be missing key data.
1527.
1528.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64
1529Minimum number of metaslabs to flush per dirty TXG.
1530.
1531.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint
1532Allow metaslabs to keep their active state as long as their fragmentation
1533percentage is no more than this value.
1534An active metaslab that exceeds this threshold
1535will no longer keep its active status allowing better metaslabs to be selected.
1536.
1537.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint
1538Metaslab groups are considered eligible for allocations if their
1539fragmentation metric (measured as a percentage) is less than or equal to
1540this value.
1541If a metaslab group exceeds this threshold then it will be
1542skipped unless all metaslab groups within the metaslab class have also
1543crossed this threshold.
1544.
1545.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint
1546Defines a threshold at which metaslab groups should be eligible for allocations.
1547The value is expressed as a percentage of free space
1548beyond which a metaslab group is always eligible for allocations.
1549If a metaslab group's free space is less than or equal to the
1550threshold, the allocator will avoid allocating to that group
1551unless all groups in the pool have reached the threshold.
1552Once all groups have reached the threshold, all groups are allowed to accept
1553allocations.
1554The default value of
1555.Sy 0
1556disables the feature and causes all metaslab groups to be eligible for
1557allocations.
1558.Pp
1559This parameter allows one to deal with pools having heavily imbalanced
1560vdevs such as would be the case when a new vdev has been added.
1561Setting the threshold to a non-zero percentage will stop allocations
1562from being made to vdevs that aren't filled to the specified percentage
1563and allow lesser filled vdevs to acquire more allocations than they
1564otherwise would under the old
1565.Sy zfs_mg_alloc_failures
1566facility.
1567.
1568.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1569If enabled, ZFS will place DDT data into the special allocation class.
1570.
1571.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1572If enabled, ZFS will place user data indirect blocks
1573into the special allocation class.
1574.
1575.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint
1576Historical statistics for this many latest multihost updates will be available
1577in
1578.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
1579.
1580.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
1581Used to control the frequency of multihost writes which are performed when the
1582.Sy multihost
1583pool property is on.
1584This is one of the factors used to determine the
1585length of the activity check during import.
1586.Pp
1587The multihost write period is
1588.Sy zfs_multihost_interval No / Sy leaf-vdevs .
1589On average a multihost write will be issued for each leaf vdev
1590every
1591.Sy zfs_multihost_interval
1592milliseconds.
1593In practice, the observed period can vary with the I/O load
1594and this observed value is the delay which is stored in the uberblock.
1595.
1596.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint
1597Used to control the duration of the activity test on import.
1598Smaller values of
1599.Sy zfs_multihost_import_intervals
1600will reduce the import time but increase
1601the risk of failing to detect an active pool.
1602The total activity check time is never allowed to drop below one second.
1603.Pp
1604On import the activity check waits a minimum amount of time determined by
1605.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals ,
1606or the same product computed on the host which last had the pool imported,
1607whichever is greater.
1608The activity check time may be further extended if the value of MMP
1609delay found in the best uberblock indicates actual multihost updates happened
1610at longer intervals than
1611.Sy zfs_multihost_interval .
1612A minimum of
1613.Em 100 ms
1614is enforced.
1615.Pp
1616.Sy 0 No is equivalent to Sy 1 .
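.Pp
As an illustration only (this is not the kernel code), the minimum
activity-check wait implied by the defaults can be computed as follows;
it ignores the value recorded by the previous importing host and the
MMP-delay extension described above:
.Bd -literal -offset indent
# Illustrative Python sketch of the minimum activity-check wait.
zfs_multihost_interval_ms = 1000        # default
zfs_multihost_import_intervals = 20     # default

wait_ms = max(zfs_multihost_interval_ms *
              zfs_multihost_import_intervals, 100)
print(wait_ms)                          # 20000 ms (20 s) with the defaults
.Ed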
1617.
1618.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
1619Controls the behavior of the pool when multihost write failures or delays are
1620detected.
1621.Pp
1622When
1623.Sy 0 ,
1624multihost write failures or delays are ignored.
1625The failures will still be reported to the ZED which, depending on
1626its configuration, may take action such as suspending the pool or offlining a
1627device.
1628.Pp
1629Otherwise, the pool will be suspended if
1630.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval
1631milliseconds pass without a successful MMP write.
1632This guarantees the activity test will see MMP writes if the pool is imported.
1633.Sy 1 No is equivalent to Sy 2 ;
1634this is necessary to prevent the pool from being suspended
1635due to normal, small I/O latency variations.
1636.
1637.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
1638Set to disable scrub I/O.
1639This results in scrubs not actually scrubbing data and
1640simply doing a metadata crawl of the pool instead.
1641.
1642.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
1643Set to disable block prefetching for scrubs.
1644.
1645.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
1646Disable cache flush operations on disks when writing.
1647Setting this will cause pool corruption on power loss
1648if a volatile out-of-order write cache is enabled.
1649.
1650.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1651Allow no-operation writes.
1652The occurrence of nopwrites will further depend on other pool properties
1653.Pq i.a. the checksumming and compression algorithms .
1654.
1655.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int
1656Enable forcing TXG sync to find holes.
1657When enabled, this forces ZFS to sync data when
1658.Sy SEEK_HOLE No or Sy SEEK_DATA
1659flags are used, allowing holes in a file to be accurately reported.
1660When disabled, holes will not be reported in recently dirtied files.
1661.
1662.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int
1663The number of bytes which should be prefetched during a pool traversal, like
1664.Nm zfs Cm send
1665or other data crawling operations.
1666.
1667.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint
1668The number of blocks pointed to by an indirect (non-L0) block which should be
1669prefetched during a pool traversal, like
1670.Nm zfs Cm send
1671or other data crawling operations.
1672.
1673.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64
1674Control percentage of dirtied indirect blocks from frees allowed into one TXG.
1675After this threshold is crossed, additional frees will wait until the next TXG.
1676.Sy 0 No disables this throttle .
1677.
1678.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1679Disable predictive prefetch.
1680Note that it leaves "prescient" prefetch
1681.Pq for, e.g., Nm zfs Cm send
1682intact.
1683Unlike predictive prefetch, prescient prefetch never issues I/O
1684that ends up not being needed, so it can't hurt performance.
1685.
1686.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1687Disable QAT hardware acceleration for SHA256 checksums.
1688May be unset after the ZFS modules have been loaded to initialize the QAT
1689hardware as long as support is compiled in and the QAT driver is present.
1690.
1691.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1692Disable QAT hardware acceleration for gzip compression.
1693May be unset after the ZFS modules have been loaded to initialize the QAT
1694hardware as long as support is compiled in and the QAT driver is present.
1695.
1696.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1697Disable QAT hardware acceleration for AES-GCM encryption.
1698May be unset after the ZFS modules have been loaded to initialize the QAT
1699hardware as long as support is compiled in and the QAT driver is present.
1700.
1701.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
1702Bytes to read per chunk.
1703.
1704.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint
1705Historical statistics for this many latest reads will be available in
1706.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads .
1707.
1708.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
1709Include cache hits in read history.
1710.
1711.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
1712Maximum read segment size to issue when sequentially resilvering a
1713top-level vdev.
1714.
1715.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1716Automatically start a pool scrub when the last active sequential resilver
1717completes in order to verify the checksums of all blocks which have been
1718resilvered.
1719This is enabled by default and strongly recommended.
1720.
1721.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
1722Maximum amount of I/O that can be concurrently issued for a sequential
1723resilver per leaf device, given in bytes.
1724.
1725.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int
1726If an indirect split block contains more than this many possible unique
1727combinations when being reconstructed, consider it too computationally
1728expensive to check them all.
1729Instead, try at most this many randomly selected
1730combinations each time the block is accessed.
1731This allows all segment copies to participate fairly
1732in the reconstruction when all combinations
1733cannot be checked and prevents repeated use of one bad copy.
1734.
1735.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int
1736Set to attempt to recover from fatal errors.
1737This should only be used as a last resort,
1738as it typically results in leaked space, or worse.
1739.
1740.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1741Ignore hard I/O errors during device removal.
1742When set, if a device encounters a hard I/O error during the removal process,
1743the removal will not be canceled.
1744This can result in a normally recoverable block becoming permanently damaged
1745and is hence not recommended.
1746This should only be used as a last resort when the
1747pool cannot be returned to a healthy state prior to removing the device.
1748.
1749.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1750This is used by the test suite so that it can ensure that certain actions
1751happen while in the middle of a removal.
1752.
1753.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
1754The largest contiguous segment that we will attempt to allocate when removing
1755a device.
1756If there is a performance problem with attempting to allocate large blocks,
1757consider decreasing this.
1758The default value is also the maximum.
1759.
1760.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int
1761Ignore the
1762.Sy resilver_defer
1763feature, causing an operation that would start a resilver to
1764immediately restart the one in progress.
1765.
1766.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint
1767Resilvers are processed by the sync thread.
1768While resilvering, it will spend at least this much time
1769working on a resilver between TXG flushes.
1770.
1771.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1772If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
1773even if there were unrepairable errors.
1774Intended to be used during pool repair or recovery to
1775stop resilvering when the pool is next imported.
1776.
1777.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint
1778Scrubs are processed by the sync thread.
1779While scrubbing, it will spend at least this much time
1780working on a scrub between TXG flushes.
1781.
1782.It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint
1783Error blocks to be scrubbed in one txg.
1784.
1785.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint
1786To preserve progress across reboots, the sequential scan algorithm periodically
1787needs to stop metadata scanning and issue all the verification I/O to disk.
1788The frequency of this flushing is determined by this tunable.
1789.
1790.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint
1791This tunable affects how scrub and resilver I/O segments are ordered.
1792A higher number indicates that we care more about how filled-in a segment is,
1793while a lower number indicates we care more about the size of the extent without
1794considering the gaps within a segment.
1795This value is only tunable upon module insertion.
1796Changing the value afterwards will have no effect on scrub or resilver
1797performance.
1798.
1799.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint
1800Determines the order that data will be verified while scrubbing or resilvering:
1801.Bl -tag -compact -offset 4n -width "a"
1802.It Sy 1
1803Data will be verified as sequentially as possible, given the
1804amount of memory reserved for scrubbing
1805.Pq see Sy zfs_scan_mem_lim_fact .
1806This may improve scrub performance if the pool's data is very fragmented.
1807.It Sy 2
1808The largest mostly-contiguous chunk of found data will be verified first.
1809By deferring scrubbing of small segments, we may later find adjacent data
1810to coalesce and increase the segment size.
1811.It Sy 0
1812.No Use strategy Sy 1 No during normal verification
1813.No and strategy Sy 2 No while taking a checkpoint .
1814.El
1815.
1816.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
1817If unset, indicates that scrubs and resilvers will gather metadata in
1818memory before issuing sequential I/O.
1819Otherwise indicates that the legacy algorithm will be used,
1820where I/O is initiated as soon as it is discovered.
1821Unsetting will not affect scrubs or resilvers that are already in progress.
1822.
1823.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
1824Sets the largest gap in bytes between scrub/resilver I/O operations
1825that will still be considered sequential for sorting purposes.
1826Changing this value will not
1827affect scrubs or resilvers that are already in progress.
1828.
1829.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
1830Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
1831This tunable determines the hard limit for I/O sorting memory usage.
1832When the hard limit is reached we stop scanning metadata and start issuing
1833data verification I/O.
1834This is done until we get below the soft limit.
1835.
1836.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
1837The fraction of the hard limit used to determine the soft limit for I/O sorting
1838by the sequential scan algorithm.
1839When we cross this limit from below, no action is taken.
1840When we cross this limit from above, it is because we are issuing verification
1841I/O.
1842In this case (unless the metadata scan is done) we stop issuing verification I/O
1843and start scanning metadata again until we get to the hard limit.
1844.
1845.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1846When reporting resilver throughput and estimated completion time, use the
1847performance observed over roughly the last
1848.Sy zfs_scan_report_txgs
1849TXGs.
1850When set to zero, performance is calculated over the time between checkpoints.
1851.
1852.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
1853Enforce tight memory limits on pool scans when a sequential scan is in progress.
1854When disabled, the memory limit may be exceeded by fast disks.
1855.
1856.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
1857Freezes a scrub/resilver in progress without actually pausing it.
1858Intended for testing/debugging.
1859.
1860.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
1861Maximum amount of data that can be concurrently issued for scrubs and
1862resilvers per leaf device, given in bytes.
1863.
1864.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
1865Allow sending of corrupt data (ignore read/checksum errors when sending).
1866.
1867.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
1868Include unmodified spill blocks in the send stream.
1869Under certain circumstances, previous versions of ZFS could incorrectly
1870remove the spill block from an existing object.
1871Including unmodified copies of the spill blocks creates a backwards-compatible
1872stream which will recreate a spill block if it was incorrectly removed.
1873.
1874.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
1875The fill fraction of the
1876.Nm zfs Cm send
1877internal queues.
1878The fill fraction controls the timing with which internal threads are woken up.
1879.
1880.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
1881The maximum number of bytes allowed in
1882.Nm zfs Cm send Ns 's
1883internal queues.
1884.
1885.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
1886The fill fraction of the
1887.Nm zfs Cm send
1888prefetch queue.
1889The fill fraction controls the timing with which internal threads are woken up.
1890.
1891.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
1892The maximum number of bytes allowed that will be prefetched by
1893.Nm zfs Cm send .
1894This value must be at least twice the maximum block size in use.
1895.
1896.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
1897The fill fraction of the
1898.Nm zfs Cm receive
1899queue.
1900The fill fraction controls the timing with which internal threads are woken up.
1901.
1902.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
1903The maximum number of bytes allowed in the
1904.Nm zfs Cm receive
1905queue.
1906This value must be at least twice the maximum block size in use.
1907.
1908.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
1909The maximum amount of data, in bytes, that
1910.Nm zfs Cm receive
1911will write in one DMU transaction.
1912This is the uncompressed size, even when receiving a compressed send stream.
1913This setting will not reduce the write size below a single block.
1914Capped at a maximum of
1915.Sy 32 MiB .
1916.
1917.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
1918When this variable is set to non-zero, a corrective receive:
1919.Bl -enum -compact -offset 4n -width "1."
1920.It
1921Does not enforce the restriction of source & destination snapshot GUIDs
1922matching.
1923.It
1924If there is an error during healing, the healing receive is not
1925terminated; instead, it moves on to the next record.
1926.El
1927.
1928.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1929Setting this variable overrides the default logic for estimating block
1930sizes when doing a
1931.Nm zfs Cm send .
1932The default heuristic is that the average block size
1933will be the current recordsize.
1934Override this value if most data in your dataset is not of that size
1935and you require accurate zfs send size estimates.
1936.
1937.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
1938Flushing of data to disk is done in passes.
1939Defer frees starting in this pass.
1940.
1941.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
1942Maximum memory used for prefetching a checkpoint's space map on each
1943vdev while discarding the checkpoint.
1944.
1945.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint
1946Only allow small data blocks to be allocated on the special and dedup vdev
1947types when the available free space percentage on these vdevs exceeds this
1948value.
1949This ensures reserved space is available for pool metadata as the
1950special vdevs approach capacity.
1951.
1952.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint
1953Starting in this sync pass, disable compression (including of metadata).
1954With the default setting, in practice, we don't have this many sync passes,
1955so this has no effect.
1956.Pp
1957The original intent was that disabling compression would help the sync passes
1958to converge.
1959However, in practice, disabling compression increases
1960the average number of sync passes; because when we turn compression off,
1961many blocks' size will change, and thus we have to re-allocate
1962(not overwrite) them.
1963It also increases the number of
1964.Em 128 KiB
1965allocations (e.g. for indirect blocks and spacemaps)
1966because these will not be compressed.
1967The
1968.Em 128 KiB
1969allocations are especially detrimental to performance
1970on highly fragmented systems, which may have very few free segments of this
1971size,
1972and may need to load new metaslabs to satisfy these allocations.
1973.
1974.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint
1975Rewrite new block pointers starting in this pass.
1976.
1977.It Sy zfs_sync_taskq_batch_pct Ns = Ns Sy 75 Ns % Pq int
1978This controls the number of threads used by
1979.Sy dp_sync_taskq .
1980The default value of
1981.Sy 75%
1982will create a maximum of one thread per CPU.
1983.
1984.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
1985Maximum size of TRIM command.
1986Larger ranges will be split into chunks no larger than this value before
1987issuing.
1988.
1989.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
1990Minimum size of TRIM commands.
1991TRIM ranges smaller than this will be skipped,
1992unless they're part of a larger range which was chunked.
1993This is done because it's common for these small TRIMs
1994to negatively impact overall performance.
1995.
1996.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1997Skip uninitialized metaslabs during the TRIM process.
1998This option is useful for pools constructed from large thinly-provisioned
1999devices
2000where TRIM operations are slow.
2001As a pool ages, an increasing fraction of the pool's metaslabs
2002will be initialized, progressively degrading the usefulness of this option.
2003This setting is stored when starting a manual TRIM and will
2004persist for the duration of the requested TRIM.
2005.
2006.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
2007Maximum number of queued TRIMs outstanding per leaf vdev.
2008The number of concurrent TRIM commands issued to the device is controlled by
2009.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
2010.
2011.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
2012The number of transaction groups' worth of frees which should be aggregated
2013before TRIM operations are issued to the device.
2014This setting represents a trade-off between issuing larger,
2015more efficient TRIM operations and the delay
2016before the recently trimmed space is available for use by the device.
2017.Pp
2018Increasing this value will allow frees to be aggregated for a longer time.
2019This will result in larger TRIM operations and potentially increased memory
2020usage.
2021Decreasing this value will have the opposite effect.
2022The default of
2023.Sy 32
2024was determined to be a reasonable compromise.
2025.
2026.It Sy zfs_txg_history Ns = Ns Sy 0 Pq uint
2027Historical statistics for this many latest TXGs will be available in
2028.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
2029.
2030.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
2031Flush dirty data to disk at least once every this many seconds
2032(maximum TXG duration).
2033.
2034.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2035Max vdev I/O aggregation size.
2036.
2037.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2038Max vdev I/O aggregation size for non-rotating media.
2039.
2040.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
2041A number by which the balancing algorithm increments the load calculation for
2042the purpose of selecting the least busy mirror member when an I/O operation
2043immediately follows its predecessor on rotational vdevs.
2045.
2046.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
2047A number by which the balancing algorithm increments the load calculation for
2048the purpose of selecting the least busy mirror member when an I/O operation
2049lacks locality as defined by
2050.Sy zfs_vdev_mirror_rotating_seek_offset .
2051Operations within this offset that do not immediately follow the previous
2052operation increment the load calculation by half of this value.
2053.
2054.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
2055The maximum distance from the last queued I/O operation within which
2056the balancing algorithm considers an operation to have locality.
2057.No See Sx ZFS I/O SCHEDULER .
2058.
2059.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
2060A number by which the balancing algorithm increments the load calculation for
2061the purpose of selecting the least busy mirror member on non-rotational vdevs
2062when I/O operations do not immediately follow one another.
2063.
2064.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
2065A number by which the balancing algorithm increments the load calculation for
2066the purpose of selecting the least busy mirror member when an I/O operation
2067lacks
2068locality as defined by the
2069.Sy zfs_vdev_mirror_rotating_seek_offset .
2070Operations within this offset that do not immediately follow the previous
2071operation increment the load calculation by half of this value.
2072.
2073.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
2074Aggregate read I/O operations if the on-disk gap between them is within this
2075threshold.
2076.
2077.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint
2078Aggregate write I/O operations if the on-disk gap between them is within this
2079threshold.
2080.
2081.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
2082Select the raidz parity implementation to use.
2083.Pp
2084Variants that don't depend on CPU-specific features
2085may be selected on module load, as they are supported on all systems.
2086The remaining options may only be set after the module is loaded,
2087as they are available only if the implementations are compiled in
2088and supported on the running system.
2089.Pp
2090Once the module is loaded,
2091.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
2092will show the available options,
2093with the currently selected one enclosed in square brackets.
2094.Pp
2095.TS
2096lb l l .
2097fastest	selected by built-in benchmark
2098original	original implementation
2099scalar	scalar implementation
2100sse2	SSE2 instruction set	64-bit x86
2101ssse3	SSSE3 instruction set	64-bit x86
2102avx2	AVX2 instruction set	64-bit x86
2103avx512f	AVX512F instruction set	64-bit x86
2104avx512bw	AVX512F & AVX512BW instruction sets	64-bit x86
2105aarch64_neon	NEON	Aarch64/64-bit ARMv8
2106aarch64_neonx2	NEON with more unrolling	Aarch64/64-bit ARMv8
2107powerpc_altivec	Altivec	PowerPC
2108.TE
2109.
2110.It Sy zfs_vdev_scheduler Pq charp
2111.Sy DEPRECATED .
2112Prints warning to kernel log for compatibility.
2113.
2114.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint
2115Max event queue length.
2116Events in the queue can be viewed with
2117.Xr zpool-events 8 .
2118.
2119.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
2120Maximum recent zevent records to retain for duplicate checking.
2121Setting this to
2122.Sy 0
2123disables duplicate detection.
2124.
2125.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
2126Lifespan for a recent ereport that was retained for duplicate checking.
2127.
2128.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
2129The maximum number of taskq entries that are allowed to be cached.
2130When this limit is exceeded, transaction records (itxs)
2131will be cleaned synchronously.
2132.
2133.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
2134The number of taskq entries that are pre-populated when the taskq is first
2135created and are immediately available for use.
2136.
2137.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
2138This controls the number of threads used by
2139.Sy dp_zil_clean_taskq .
2140The default value of
2141.Sy 100%
2142will create a maximum of one thread per CPU.
2143.
2144.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2145This sets the maximum block size used by the ZIL.
2146On very fragmented pools, lowering this
2147.Pq typically to Sy 36 KiB
2148can improve performance.
2149.
2150.It Sy zil_maxcopied Ns = Ns Sy 7680 Ns B Po 7.5 KiB Pc Pq uint
2151This sets the maximum number of write bytes logged via WR_COPIED.
2152It tunes the tradeoff between an additional memory copy and possibly worse
2153log space efficiency versus additional range lock/unlock operations.
2154.
2155.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
2156Disable the cache flush commands that are normally sent to disk by
2157the ZIL after an LWB write has completed.
2158Setting this will cause ZIL corruption on power loss
2159if a volatile out-of-order write cache is enabled.
2160.
2161.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
2162Disable intent logging replay.
2163Can be disabled for recovery from corrupted ZIL.
2164.
2165.It Sy zil_slog_bulk Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
2166Limit SLOG write size per commit executed with synchronous priority.
2167Any writes above that will be executed with lower (asynchronous) priority
2168to limit potential SLOG device abuse by a single active ZIL writer.
2169.
2170.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int
2171Setting this tunable to zero disables ZIL logging of new
2172.Sy xattr Ns = Ns Sy sa
2173records if the
2174.Sy org.openzfs:zilsaxattr
2175feature is enabled on the pool.
2176This would only be necessary to work around bugs in the ZIL logging or replay
2177code for this record type.
2178The tunable has no effect if the feature is disabled.
2179.
2180.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint
2181Usually, one metaslab from each normal-class vdev is dedicated for use by
2182the ZIL to log synchronous writes.
2183However, if there are fewer than
2184.Sy zfs_embedded_slog_min_ms
2185metaslabs in the vdev, this functionality is disabled.
2186This ensures that we don't set aside an unreasonable amount of space for the
2187ZIL.
2188.
2189.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint
2190Whether the heuristic for detecting incompressible data with zstd levels >= 3,
2191using LZ4 and zstd-1 passes, is enabled.
2192.
2193.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint
2194Minimum uncompressed size (inclusive) of a record before the early abort
2195heuristic will be attempted.
2196.
2197.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
2198If non-zero, the zio deadman will produce debugging messages
2199.Pq see Sy zfs_dbgmsg_enable
2200for all zios, rather than only for leaf zios possessing a vdev.
2201This is meant to be used by developers to gain
2202diagnostic information for hang conditions which don't involve a mutex
2203or other locking primitive: typically conditions in which a thread in
2204the zio pipeline is looping indefinitely.
2205.
2206.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
2207When an I/O operation takes more than this much time to complete,
2208it's marked as slow.
2209Each slow operation causes a delay zevent.
2210Slow I/O counters can be seen with
2211.Nm zpool Cm status Fl s .
2212.
2213.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
2214Throttle block allocations in the I/O pipeline.
2215This allows for dynamic allocation distribution when devices are imbalanced.
2216When enabled, the maximum number of pending allocations per top-level vdev
2217is limited by
2218.Sy zfs_vdev_queue_depth_pct .
2219.
2220.It Sy zfs_xattr_compat Ns = Ns 0 Ns | Ns 1 Pq int
2221Control the naming scheme used when setting new xattrs in the user namespace.
2222If
2223.Sy 0
2224.Pq the default on Linux ,
2225user namespace xattr names are prefixed with the namespace, to be backwards
2226compatible with previous versions of ZFS on Linux.
2227If
2228.Sy 1
2229.Pq the default on Fx ,
2230user namespace xattr names are not prefixed, to be backwards compatible with
2231previous versions of ZFS on illumos and
2232.Fx .
2233.Pp
2234Either naming scheme can be read on this and future versions of ZFS, regardless
2235of this tunable, but legacy ZFS on illumos or
2236.Fx
2237is unable to read user namespace xattrs written in the Linux format, and
2238legacy versions of ZFS on Linux are unable to read user namespace xattrs written
2239in the legacy ZFS format.
2240.Pp
2241An existing xattr with the alternate naming scheme is removed when overwriting
2242the xattr so as to not accumulate duplicates.
2243.
2244.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
2245Prioritize requeued I/O.
2246.
2247.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
2248Percentage of online CPUs which will run a worker thread for I/O.
2249These workers are responsible for I/O work such as compression and
2250checksum calculations.
2251Fractional number of CPUs will be rounded down.
2252.Pp
2253The default value of
2254.Sy 80%
2255was chosen to avoid using all CPUs which can result in
2256latency issues and inconsistent application performance,
2257especially when slower compression and/or checksumming is enabled.
2258.
2259.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
2260Number of worker threads per taskq.
2261Lower values improve I/O ordering and CPU utilization,
2262while higher values reduce lock contention.
2263.Pp
2264If
2265.Sy 0 ,
2266generate a system-dependent value close to 6 threads per taskq.
2267.
2268.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2269Do not create zvol device nodes.
2270This may slightly improve startup time on
2271systems with a very large number of zvols.
2272.
2273.It Sy zvol_major Ns = Ns Sy 230 Pq uint
2274Major number for zvol block devices.
2275.
2276.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
2277Discard (TRIM) operations done on zvols will be done in batches of this
2278many blocks, where block size is determined by the
2279.Sy volblocksize
2280property of a zvol.
2281.
2282.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2283When adding a zvol to the system, prefetch this many bytes
2284from the start and end of the volume.
2285Prefetching these regions of the volume is desirable,
2286because they are likely to be accessed immediately by
2287.Xr blkid 8
2288or the kernel partitioner.
2289.
2290.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2291When processing I/O requests for a zvol, submit them synchronously.
2292This effectively limits the queue depth to
2293.Em 1
2294for each I/O submitter.
2295When unset, requests are handled asynchronously by a thread pool.
2296The number of requests which can be handled concurrently is controlled by
2297.Sy zvol_threads .
2298.Sy zvol_request_sync
2299is ignored when running on a kernel that supports block multiqueue
2300.Pq Li blk-mq .
2301.
2302.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
2303The number of system-wide threads to use for processing zvol block I/O.
2304If
2305.Sy 0
2306(the default) then internally set
2307.Sy zvol_threads
2308to the number of CPUs present or 32 (whichever is greater).
2309.
2310.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
2311The number of threads per zvol to use for queuing I/O requests.
2312This parameter will only appear if your kernel supports
2313.Li blk-mq
2314and is only read and assigned to a zvol at zvol load time.
2315If
2316.Sy 0
2317(the default) then internally set
2318.Sy zvol_blk_mq_threads
2319to the number of CPUs present.
2320.
2321.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2322Set to
2323.Sy 1
2324to use the
2325.Li blk-mq
2326API for zvols.
2327Set to
2328.Sy 0
2329(the default) to use the legacy zvol APIs.
2330This setting can give better or worse zvol performance depending on
2331the workload.
2332This parameter will only appear if your kernel supports
2333.Li blk-mq
2334and is only read and assigned to a zvol at zvol load time.
2335.
2336.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
2337If
2338.Sy zvol_use_blk_mq
2339is enabled, then process this number of
2340.Sy volblocksize Ns -sized blocks per zvol thread.
2341This tunable can be used to favor better performance for zvol reads (lower
2342values) or writes (higher values).
2343If set to
2344.Sy 0 ,
2345then the zvol layer will process the maximum number of blocks
2346per thread that it can.
2347This parameter will only appear if your kernel supports
2348.Li blk-mq
2349and is only applied at each zvol's load time.
2350.
2351.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
2352The queue_depth value for the zvol
2353.Li blk-mq
2354interface.
2355This parameter will only appear if your kernel supports
2356.Li blk-mq
2357and is only applied at each zvol's load time.
2358If
2359.Sy 0
2360(the default) then use the kernel's default queue depth.
2361Values are clamped to the kernel's
2362.Dv BLKDEV_MIN_RQ
2363and
2364.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
2365limits.
2366.
2367.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
2368Defines zvol block device behavior when
2369.Sy volmode Ns = Ns Sy default :
2370.Bl -tag -compact -offset 4n -width "a"
2371.It Sy 1
2372.No equivalent to Sy full
2373.It Sy 2
2374.No equivalent to Sy dev
2375.It Sy 3
2376.No equivalent to Sy none
2377.El
2378.
2379.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2380Enable strict ZVOL quota enforcement.
2381The strict quota enforcement may have a performance impact.
2382.El
2383.
2384.Sh ZFS I/O SCHEDULER
2385ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O operations.
2386The scheduler determines when and in what order those operations are issued.
2387The scheduler divides operations into five I/O classes,
2388prioritized in the following order: sync read, sync write, async read,
2389async write, and scrub/resilver.
2390Each queue defines the minimum and maximum number of concurrent operations
2391that may be issued to the device.
2392In addition, the device has an aggregate maximum,
2393.Sy zfs_vdev_max_active .
2394Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2395If the sum of the per-queue maxima exceeds the aggregate maximum,
2396then the number of active operations may reach
2397.Sy zfs_vdev_max_active ,
2398in which case no further operations will be issued,
2399regardless of whether all per-queue minima have been met.
2400.Pp
2401For many physical devices, throughput increases with the number of
2402concurrent operations, but latency typically suffers.
2403Furthermore, physical devices typically have a limit
2404at which more concurrent operations have no
2405effect on throughput or can actually cause it to decrease.
2406.Pp
2407The scheduler selects the next operation to issue by first looking for an
2408I/O class whose minimum has not been satisfied.
2409Once all are satisfied and the aggregate maximum has not been hit,
2410the scheduler looks for classes whose maximum has not been satisfied.
2411Iteration through the I/O classes is done in the order specified above.
2412No further operations are issued
2413if the aggregate maximum number of concurrent operations has been hit,
2414or if there are no operations queued for an I/O class that has not hit its
2415maximum.
2416Every time an I/O operation is queued or an operation completes,
2417the scheduler looks for new operations to issue.
2418.Pp
2419In general, smaller
2420.Sy max_active Ns s
2421will lead to lower latency of synchronous operations.
2422Larger
2423.Sy max_active Ns s
2424may lead to higher overall throughput, depending on underlying storage.
2425.Pp
2426The ratio of the queues'
2427.Sy max_active Ns s
2428determines the balance of performance between reads, writes, and scrubs.
2429For example, increasing
2430.Sy zfs_vdev_scrub_max_active
2431will cause the scrub or resilver to complete more quickly,
2432but reads and writes to have higher latency and lower throughput.
2433.Pp
2434All I/O classes have a fixed maximum number of outstanding operations,
2435except for the async write class.
2436Asynchronous writes represent the data that is committed to stable storage
2437during the syncing stage for transaction groups.
2438Transaction groups enter the syncing state periodically,
2439so the number of queued async writes will quickly burst up
2440and then bleed down to zero.
2441Rather than servicing them as quickly as possible,
2442the I/O scheduler changes the maximum number of active async write operations
2443according to the amount of dirty data in the pool.
2444Since both throughput and latency typically increase with the number of
2445concurrent operations issued to physical devices, reducing the
2446burstiness in the number of simultaneous operations also stabilizes the
2447response time of operations from other queues, in particular synchronous ones.
2448In broad strokes, the I/O scheduler will issue more concurrent operations
2449from the async write queue as there is more dirty data in the pool.
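.Pp
The selection logic described above can be summarized with the following
Python sketch.
It is a simplified illustration rather than the in-kernel implementation,
and the class fields used here are hypothetical:
.Bd -literal -offset indent
# Classes in priority order: sync read, sync write, async read,
# async write, scrub/resilver.  Each class c tracks c.pending
# (queued operations), c.active, c.min, and c.max.
def pick_class(classes, total_active, aggregate_max):
    if total_active >= aggregate_max:
        return None                  # aggregate limit reached
    # First satisfy each class's minimum, in priority order.
    for c in classes:
        if c.pending and c.active < c.min:
            return c
    # Then issue from any class still below its maximum.
    for c in classes:
        if c.pending and c.active < c.max:
            return c
    return None                      # nothing eligible to issue
.Ed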
2450.
2451.Ss Async Writes
2452The number of concurrent operations issued for the async write I/O class
2453follows a piece-wise linear function defined by a few adjustable points:
2454.Bd -literal
2455       |              o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2456  ^    |             /^         |
2457  |    |            / |         |
2458active |           /  |         |
2459 I/O   |          /   |         |
2460count  |         /    |         |
2461       |        /     |         |
2462       |-------o      |         | <-- \fBzfs_vdev_async_write_min_active\fP
2463      0|_______^______|_________|
2464       0%      |      |       100% of \fBzfs_dirty_data_max\fP
2465               |      |
2466               |      `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2467               `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
2468.Ed
2469.Pp
2470Until the amount of dirty data exceeds a minimum percentage of the dirty
2471data allowed in the pool, the I/O scheduler will limit the number of
2472concurrent operations to the minimum.
2473As that threshold is crossed, the number of concurrent operations issued
2474increases linearly to the maximum at the specified maximum percentage
2475of the dirty data allowed in the pool.
2476.Pp
2477Ideally, the amount of dirty data on a busy pool will stay in the sloped
2478part of the function between
2479.Sy zfs_vdev_async_write_active_min_dirty_percent
2480and
2481.Sy zfs_vdev_async_write_active_max_dirty_percent .
2482If it exceeds the maximum percentage,
2483this indicates that the rate of incoming data is
2484greater than the rate that the backend storage can handle.
2485In this case, we must further throttle incoming writes,
2486as described in the next section.
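.Pp
A minimal Python sketch of this piece-wise linear function follows.
It is illustrative only, not the kernel implementation, and the numeric
values are placeholders rather than the current defaults:
.Bd -literal -offset indent
def async_write_max_active(dirty, dirty_max,
                           min_active=1, max_active=10,
                           min_pct=30, max_pct=60):
    # Below min_pct of zfs_dirty_data_max use the minimum; above
    # max_pct use the maximum; scale linearly in between.
    pct = 100.0 * dirty / dirty_max
    if pct <= min_pct:
        return min_active
    if pct >= max_pct:
        return max_active
    frac = (pct - min_pct) / (max_pct - min_pct)
    return round(min_active + frac * (max_active - min_active))
.Ed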
2487.
2488.Sh ZFS TRANSACTION DELAY
2489We delay transactions when we've determined that the backend storage
2490isn't able to accommodate the rate of incoming writes.
2491.Pp
2492If there is already a transaction waiting, we delay relative to when
2493that transaction will finish waiting.
2494This way the calculated delay time
2495is independent of the number of threads concurrently executing transactions.
2496.Pp
2497If we are the only waiter, wait relative to when the transaction started,
2498rather than the current time.
2499This credits the transaction for "time already served",
2500e.g. reading indirect blocks.
2501.Pp
2502The minimum time for a transaction to take is calculated as
2503.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
2504.Pp
2505The delay has two degrees of freedom that can be adjusted via tunables.
2506The percentage of dirty data at which we start to delay is defined by
2507.Sy zfs_delay_min_dirty_percent .
2508This should typically be at or above
2509.Sy zfs_vdev_async_write_active_max_dirty_percent ,
2510so that we only start to delay after writing at full speed
2511has failed to keep up with the incoming write rate.
2512The scale of the curve is defined by
2513.Sy zfs_delay_scale .
2514Roughly speaking, this variable determines the amount of delay at the midpoint
2515of the curve.
2516.Bd -literal
2517delay
2518 10ms +-------------------------------------------------------------*+
2519      |                                                             *|
2520  9ms +                                                             *+
2521      |                                                             *|
2522  8ms +                                                             *+
2523      |                                                            * |
2524  7ms +                                                            * +
2525      |                                                            * |
2526  6ms +                                                            * +
2527      |                                                            * |
2528  5ms +                                                           *  +
2529      |                                                           *  |
2530  4ms +                                                           *  +
2531      |                                                           *  |
2532  3ms +                                                          *   +
2533      |                                                          *   |
2534  2ms +                                              (midpoint) *    +
2535      |                                                  |    **     |
2536  1ms +                                                  v ***       +
2537      |             \fBzfs_delay_scale\fP ---------->     ********         |
2538    0 +-------------------------------------*********----------------+
2539      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
2540.Ed
2541.Pp
2542Note that, since the delay is added to the outstanding time remaining on the
2543most recent transaction, it's effectively the inverse of IOPS.
2544Here, the midpoint of
2545.Em 500 us
2546translates to
2547.Em 2000 IOPS .
2548The shape of the curve
2549was chosen such that small changes in the amount of accumulated dirty data
2550in the first three quarters of the curve yield relatively small differences
2551in the amount of delay.
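.Pp
As an illustration only (not the kernel code), the following Python sketch
evaluates the formula above, assuming
.Sy zfs_delay_scale
is 500000 (500 us) and the delay starts at 60% of
.Sy zfs_dirty_data_max :
.Bd -literal -offset indent
# Illustrative sketch of the transaction delay formula.
zfs_delay_scale_ns = 500000           # assumed value (500 us)
delay_min_pct = 60                    # assumed zfs_delay_min_dirty_percent
dirty_max_pct = 100                   # work in percent of zfs_dirty_data_max

def tx_delay_ns(dirty_pct):
    if dirty_pct <= delay_min_pct:
        return 0
    d = (zfs_delay_scale_ns * (dirty_pct - delay_min_pct)
         / (dirty_max_pct - dirty_pct))
    return min(d, 100000000)          # capped at 100 ms

mid = (delay_min_pct + dirty_max_pct) / 2
print(tx_delay_ns(mid))               # 500000.0 ns (500 us) at the midpoint
.Ed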
2552.Pp
2553The effects can be easier to understand when the amount of delay is
2554represented on a logarithmic scale:
2555.Bd -literal
2556delay
2557100ms +-------------------------------------------------------------++
2558      +                                                              +
2559      |                                                              |
2560      +                                                             *+
2561 10ms +                                                             *+
2562      +                                                           ** +
2563      |                                              (midpoint)  **  |
2564      +                                                  |     **    +
2565  1ms +                                                  v ****      +
2566      +             \fBzfs_delay_scale\fP ---------->        *****         +
2567      |                                             ****             |
2568      +                                          ****                +
2569100us +                                        **                    +
2570      +                                       *                      +
2571      |                                      *                       |
2572      +                                     *                        +
2573 10us +                                     *                        +
2574      +                                                              +
2575      |                                                              |
2576      +                                                              +
2577      +--------------------------------------------------------------+
2578      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
2579.Ed
2580.Pp
2581Note here that only as the amount of dirty data approaches its limit does
2582the delay start to increase rapidly.
2583The goal of a properly tuned system should be to keep the amount of dirty data
2584out of that range by first ensuring that the appropriate limits are set
2585for the I/O scheduler to reach optimal throughput on the back-end storage,
2586and then by changing the value of
2587.Sy zfs_delay_scale
2588to increase the steepness of the curve.
2589