Lines Matching +full:scan +full:- +full:interval +full:- +full:ms
1 .\" SPDX-License-Identifier: CDDL-1.0
10 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
30 .Bl -tag -width Ds
97 Turbo L2ARC warm-up.
98 When the L2ARC is cold, the fill interval will be set as fast as possible.
101 Min feed interval in milliseconds.
174 Percent of ARC size allowed for L2ARC-only headers.
224 Max write bytes per interval.
247 of a top-level vdev before moving on to the next top-level vdev.
250 Enable metaslab group biasing based on their over- or under-utilization
259 A setting of 1 behaves as 2 if the pool is write-bound, or as 0 otherwise.
312 if not page-aligned instead of silently falling back to uncached I/O.
315 When attempting to log an output nvlist of an ioctl in the on-disk history,
320 .Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
326 Enable/disable segment-based metaslab selection.
329 When using segment-based metaslab selection, continue allocating
348 becomes the performance limiting factor on high-performance storage.
392 .Bl -item -compact
400 If that fails then we will have a multi-layer gang block.
403 .Bl -item -compact
413 If that fails then we will have a multi-layer gang block.
423 When a vdev is added, target this number of metaslabs per top-level vdev.
432 Maximum ashift used when optimizing for logical \[->] physical sector size on
434 top-level vdevs.
440 If non-zero, then a Direct I/O write's checksum will be verified every
460 Minimum ashift used when creating new top-level vdevs.
463 Minimum number of metaslabs to create in a top-level vdev.
471 Practical upper limit of total metaslabs per top-level vdev.
494 .It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
505 Max amount of memory to use for RAID-Z expansion I/O.
509 For testing, pause RAID-Z expansion when reflow amount reaches this value.
512 For expanded RAID-Z, aggregate reads that have more rows than this.
554 If this parameter is unset, the traversal skips non-metadata blocks.
556 import has started to stop or start the traversal of non-metadata blocks.
578 It also limits the worst-case time to allocate space.
598 Limits the number of on-disk error log entries that will be converted to the
605 During top-level vdev removal, chunks of data are copied from the vdev
616 Logical ashift for file-based devices.
619 Physical ashift for file-based devices.
674 .It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
683 This is the minimum allocation size that will use scatter (page-based) ABDs.
688 bytes, try to unpin some of it in response to demand for non-metadata.
702 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
710 with 8-byte pointers.
733 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
734 This batch-style operation prevents entire sub-lists from being evicted at once
742 Each thread will process one sub-list at a time,
743 until the eviction target is reached or all sub-lists have been processed.
751 1-4 1
752 5-8 2
753 9-15 3
754 16-31 4
755 32-63 6
756 64-95 8
757 96-127 9
758 128-160 11
759 160-191 12
760 192-223 13
761 224-255 14
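.Pp
For illustration, the CPU-count to eviction-thread mapping tabulated above
can be read as a simple lookup.
The following C sketch is an assumption for clarity only (hypothetical names,
not the module's code) and leaves behaviour above 255 CPUs unspecified,
since the table does not cover it:
.Bd -literal
/*
 * Illustrative only: map an online CPU count to the number of ARC
 * eviction threads, following the table above.  Names are hypothetical.
 */
static const struct {
	unsigned int max_cpus;
	unsigned int threads;
} example_evict_map[] = {
	{ 4, 1 }, { 8, 2 }, { 15, 3 }, { 31, 4 }, { 63, 6 },
	{ 95, 8 }, { 127, 9 }, { 160, 11 }, { 191, 12 },
	{ 223, 13 }, { 255, 14 },
};

static unsigned int
example_arc_evict_threads(unsigned int ncpus)
{
	for (unsigned int i = 0;
	    i < sizeof (example_evict_map) / sizeof (example_evict_map[0]);
	    i++) {
		if (ncpus <= example_evict_map[i].max_cpus)
			return (example_evict_map[i].threads);
	}
	/* Above the tabulated range; keep the last tabulated value. */
	return (14);
}
.Ed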
794 .Sy all_system_memory No \- Sy 1 GiB
820 .It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
823 .It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
836 Number of missing top-level vdevs which will be allowed during
837 pool import (only in read-only mode).
849 .Pa zfs-dbgmsg
854 equivalent to a quarter of the user-wired memory limit under
861 To allow more fine-grained locking, each ARC state contains a series
863 Locking is performed at the level of these "sub-lists".
864 This parameter controls the number of sub-lists per ARC state,
926 less than 100 ms per allocation attempt,
962 .It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
981 bytes on-disk.
986 Only attempt to condense indirect vdev mappings if the on-disk size
1014 .It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
1043 .Bl -tag -compact -offset 4n -width "continue"
1049 Attempt to recover from a "hung" operation by re-dispatching it
1053 This can be used to facilitate automatic fail-over
1054 to a properly configured fail-over partner.
1057 .It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
1058 Interval in milliseconds after which the deadman is triggered and also
1059 the interval after which a pool sync operation is considered to be "hung".
1064 .It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
1065 Interval in milliseconds after which the deadman is triggered and an
1073 Enable prefetching of deduplicated blocks which are going to be freed.
1130 Enabling it will reduce worst-case import times, at the cost of increased TXG
1155 OpenZFS will spend no more than this much memory on maintaining the in-memory
1198 OpenZFS pre-release versions and now have compatibility issues.
1209 are not created per-object and instead a hashtable is used where collisions
1218 Upper-bound limit for unflushed metadata changes to be held by the
1236 This tunable is important because it involves a trade-off between import
1265 It effectively limits the maximum number of unflushed per-TXG spacemap logs
1313 for 32-bit systems.
1345 The upper limit of write-transaction ZIL log data size in bytes.
1357 Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1390 results in the original CPU-based calculation being used.
1547 For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1548 the number of concurrently-active I/O operations is limited to
1556 and the number of concurrently-active non-interactive operations is increased to
1567 To prevent non-interactive I/O, like scrub,
1577 The following options may be bitwise-ored together:
1619 The following flags may be bitwise-ored together:
1632 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1639 16384 ZFS_DEBUG_BRT Enable BRT-related debugging messages.
1641 65536 ZFS_DEBUG_DDT Enable DDT-related debugging messages.
1669 from the error-encountering filesystem is "temporarily leaked".
1683 .Bl -enum -compact -offset 4n -width "1."
1686 e.g. due to a top-level vdev going offline), or
1702 .It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
1710 .It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
1723 .Xr zpool-initialize 8 .
1727 .Xr zpool-initialize 8 .
1731 The threshold size (in block pointers) at which we create a new sub-livelist.
1843 Setting the threshold to a non-zero percentage will stop allocations
1862 .It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
1870 .Sy zfs_multihost_interval No / Sy leaf-vdevs .
1895 .Em 100 ms
1930 if a volatile out-of-order write cache is enabled.
1933 Allow no-operation writes.
1950 The number of blocks pointed to by an indirect (non-L0) block which should be
1979 Disable QAT hardware acceleration for AES-GCM encryption.
1995 top-level vdev.
2055 .It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint
2061 If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
2072 .It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint
2081 To preserve progress across reboots, the sequential scan algorithm periodically
2096 .Bl -tag -compact -offset 4n -width "a"
2103 The largest mostly-contiguous chunk of found data will be verified first.
2124 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2125 Maximum fraction of RAM used for I/O sorting by the sequential scan algorithm.
2131 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2133 by the sequential scan algorithm.
2137 In this case (unless the metadata scan is done) we stop issuing verification I/O
2148 Enforce tight memory limits on pool scans when a sequential scan is in progress.
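.Pp
To make the two 20^\-1 factors above concrete: the hard limit for scan
sorting memory is physical memory divided by
.Sy zfs_scan_mem_lim_fact ,
and the soft limit is that hard limit divided by
.Sy zfs_scan_mem_lim_soft_fact .
The C sketch below is illustrative only (hypothetical helper name):
.Bd -literal
#include <stdint.h>

/* Illustrative only: derive the scan sorting-memory limits. */
static void
example_scan_mem_limits(uint64_t physmem, uint64_t lim_fact,
    uint64_t lim_soft_fact, uint64_t *hard, uint64_t *soft)
{
	*hard = physmem / lim_fact;	/* default: 1/20 of RAM */
	*soft = *hard / lim_soft_fact;	/* default: 1/20 of the hard limit */
}
.Ed
With 64 GiB of RAM and both factors at their default of 20, this yields a
hard limit of about 3.2 GiB and a soft limit of about 164 MiB.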
2166 Including unmodified copies of the spill blocks creates a backwards-compatible
2169 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2180 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2191 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2213 When this variable is set to non-zero, a corrective receive:
2214 .Bl -enum -compact -offset 4n -width "1."
2256 many blocks' size will change, and thus we have to re-allocate
2286 This option is useful for pools constructed from large thinly-provisioned
2302 This setting represents a trade-off between issuing larger,
2326 Max vdev I/O aggregation size for non-rotating media.
2349 the purpose of selecting the least busy mirror member on non-rotational vdevs
2362 Aggregate read I/O operations if the on-disk gap between them is within this
2366 Aggregate write I/O operations if the on-disk gap between them is within this
2372 Variants that don't depend on CPU-specific features
2385 fastest selected by built-in benchmark
2388 sse2 SSE2 instruction set 64-bit x86
2389 ssse3 SSSE3 instruction set 64-bit x86
2390 avx2 AVX2 instruction set 64-bit x86
2391 avx512f AVX512F instruction set 64-bit x86
2392 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2393 aarch64_neon NEON Aarch64/64-bit ARMv8
2394 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2401 .Xr zpool-events 8 .
2418 The number of taskq entries that are pre-populated when the taskq is first
2443 if a volatile out-of-order write cache is enabled.
2465 Usually, one metaslab from each normal-class vdev is dedicated for use by
2475 using LZ4 and zstd-1 passes is enabled.
2482 If non-zero, the zio deadman will produce debugging messages
2490 .It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
2549 generate a system-dependent value close to 6 threads per taskq.
2602 .Pq Li blk-mq .
2622 .Li blk-mq
2634 .Li blk-mq
2642 .Li blk-mq
2649 .Sy volblocksize Ns -sized blocks per zvol thread.
2657 .Li blk-mq
2662 .Li blk-mq
2665 .Li blk-mq
2679 .Bl -tag -compact -offset 4n -width "a"
2703 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2704 If the sum of the per-queue maxima exceeds the aggregate maximum,
2708 regardless of whether all per-queue minima have been met.
2762 follows a piece-wise linear function defined by a few adjustable points:
2763 .Bd -literal
2764 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2771 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP
2775 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2776 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
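.Pp
Read as code, the piece-wise linear function sketched above clamps at
.Sy zfs_vdev_async_write_min_active
below the lower dirty-data percentage, clamps at
.Sy zfs_vdev_async_write_max_active
above the upper percentage, and interpolates linearly in between.
The following C sketch is an illustration of that shape only, not the
module's implementation:
.Bd -literal
/* Illustrative only: linear interpolation implied by the figure above. */
static unsigned int
example_async_write_active(unsigned int dirty_pct,
    unsigned int min_dirty_pct, unsigned int max_dirty_pct,
    unsigned int min_active, unsigned int max_active)
{
	if (dirty_pct <= min_dirty_pct)
		return (min_active);
	if (dirty_pct >= max_dirty_pct)
		return (max_active);
	return (min_active + (max_active - min_active) *
	    (dirty_pct - min_dirty_pct) /
	    (max_dirty_pct - min_dirty_pct));
}
.Ed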
2812 … min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 10…
2825 .Bd -literal
2827 10ms +-------------------------------------------------------------*+
2829 9ms + *+
2831 8ms + *+
2833 7ms + * +
2835 6ms + * +
2837 5ms + * +
2839 4ms + * +
2841 3ms + * +
2843 2ms + (midpoint) * +
2845 1ms + v *** +
2846 | \fBzfs_delay_scale\fP ----------> ******** |
2847 0 +-------------------------------------*********----------------+
2848 0% <- \fBzfs_dirty_data_max\fP -> 100%
2864 .Bd -literal
2866 100ms +-------------------------------------------------------------++
2870 10ms + *+
2874 1ms + v **** +
2875 + \fBzfs_delay_scale\fP ----------> ***** +
2886 +--------------------------------------------------------------+
2887 0% <- \fBzfs_dirty_data_max\fP -> 100%
2894 for the I/O scheduler to reach optimal throughput on the back-end storage,
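.Pp
Combining the formula and the two delay curves above, the per-write delay
can be sketched as follows; the helper name and the nanosecond units are
assumptions, and the 100 ms cap is taken from the upper bound of the second
curve:
.Bd -literal
#include <stdint.h>

#define	EXAMPLE_DELAY_MAX_NS	(100ULL * 1000 * 1000)	/* 100 ms cap */

/*
 * Illustrative only:
 *   delay = min(zfs_delay_scale * (dirty - min) / (max - dirty), 100 ms)
 * where "min" is zfs_delay_min_dirty_percent of zfs_dirty_data_max and
 * "max" is zfs_dirty_data_max itself.
 */
static uint64_t
example_txg_delay_ns(uint64_t dirty, uint64_t dirty_data_max,
    uint64_t delay_min_dirty_percent, uint64_t delay_scale)
{
	uint64_t min = dirty_data_max * delay_min_dirty_percent / 100;

	if (dirty <= min)
		return (0);	/* below the threshold: no delay */
	if (dirty >= dirty_data_max)
		return (EXAMPLE_DELAY_MAX_NS);	/* avoid division by zero */

	uint64_t delay = delay_scale * (dirty - min) /
	    (dirty_data_max - dirty);
	return (delay < EXAMPLE_DELAY_MAX_NS ? delay : EXAMPLE_DELAY_MAX_NS);
}
.Ed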