Lines Matching +full:per +full:- +full:device

1 .\" SPDX-License-Identifier: CDDL-1.0
10 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
30 .Bl -tag -width Ds
97 Turbo L2ARC warm-up.
163 device if persistent L2ARC is enabled.
174 Percent of ARC size allowed for L2ARC-only headers.
183 on L2ARC devices by this percentage of write size if we have filled the device.
190 It also enables TRIM of the whole L2ARC device upon creation
191 or addition to an existing pool or if the header of the device is
192 invalid upon importing a pool or onlining a cache device.
197 This will vary depending on how well the specific device handles these commands.
206 This may be beneficial when the L2ARC device is significantly faster
224 Max write bytes per interval.
229 or attaching an L2ARC device (e.g. the L2ARC device is slow
234 Minimum size of an L2ARC device required in order to write log blocks to it.
243 Metaslab group's per child vdev allocation granularity, in bytes.
247 of a top-level vdev before moving on to the next top-level vdev.
250 Enable metaslab group biasing based on their over- or under-utilization
259 A setting of 1 behaves as 2 if the pool is write-bound, or as 0 otherwise.
312 if not page-aligned instead of silently falling back to uncached I/O.
315 When attempting to log an output nvlist of an ioctl in the on-disk history,
320 .Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
326 Enable/disable segment-based metaslab selection.
329 When using segment-based metaslab selection, continue allocating
348 becomes the performance limiting factor on high-performance storage.
392 .Bl -item -compact
400 If that fails then we will have a multi-layer gang block.
403 .Bl -item -compact
413 If that fails then we will have a multi-layer gang block.
418 This improves performance, especially when there are many metaslabs per vdev
423 When a vdev is added, target this number of metaslabs per top-level vdev.
432 Maximum ashift used when optimizing for logical \[->] physical sector size on
434 top-level vdevs.
440 If non-zero, then a Direct I/O write's checksum will be verified every
460 Minimum ashift used when creating new top-level vdevs.
463 Minimum number of metaslabs to create in a top-level vdev.
471 Practical upper limit of total metaslabs per top-level vdev.
477 Maximum number of metaslabs per group to preload
505 Max amount of memory to use for RAID-Z expansion I/O.
509 For testing, pause RAID-Z expansion when reflow amount reaches this value.
512 For expanded RAID-Z, aggregate reads that have more rows than this.
554 If this parameter is unset, the traversal skips non-metadata blocks.
556 import has started to stop or start the traversal of non-metadata blocks.
578 It also limits the worst-case time to allocate space.
584 Determines the number of block allocators to use per spa instance.
598 Limits the number of on-disk error log entries that will be converted to the
605 During top-level vdev removal, chunks of data are copied from the vdev
616 Logical ashift for file-based devices.
619 Physical ashift for file-based devices.
639 Min bytes to prefetch per stream.
642 After that it may grow further by 1/8 per hit, but only if some prefetch
647 Max bytes to prefetch per stream.
650 Max bytes to prefetch indirects for per stream.
660 Max number of streams per zfetch (prefetch streams per file).
674 .It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
683 This is the minimum allocation size that will use scatter (page-based) ABDs.
688 bytes, try to unpin some of it in response to demand for non-metadata.
702 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
709 This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
710 with 8-byte pointers.
733 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
734 This batch-style operation prevents entire sub-lists from being evicted at once
742 Each thread will process one sub-list at a time,
743 until the eviction target is reached or all sub-lists have been processed.
751 1-4 1
752 5-8 2
753 9-15 3
754 16-31 4
755 32-63 6
756 64-95 8
757 96-127 9
758 128-160 11
759 160-191 12
760 192-223 13
761 224-255 14
794 .Sy all_system_memory No \- Sy 1 GiB
832 Linux may theoretically use one per mount point up to the number of CPUs,
836 Number of missing top-level vdevs which will be allowed during
837 pool import (only in read-only mode).
849 .Pa zfs-dbgmsg
854 equivalent to a quarter of the user-wired memory limit under
861 To allow more fine-grained locking, each ARC state contains a series
863 Locking is performed at the level of these "sub-lists".
864 This parameter controls the number of sub-lists per ARC state,
924 .Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
926 less than 100 ms per allocation attempt,
949 Rate limit checksum events to this many per second.
963 Vdev indirection layer (used for device removal) sleeps for this many
981 bytes on-disk.
986 Only attempt to condense indirect vdev mappings if the on-disk size
1037 Rate limit deadman zevents (which report hung I/O operations) to this many per
1043 .Bl -tag -compact -offset 4n -width "continue"
1049 Attempt to recover from a "hung" operation by re-dispatching it
1053 This can be used to facilitate automatic fail-over
1054 to a properly configured fail-over partner.
1073 Enable prefetching dedup-ed blocks which are going to be freed.
1129 also increase the minimum number of log entries we flush per TXG.
1130 Enabling it will reduce worst-case import times, at the cost of increased TXG
1155 OpenZFS will spend no more than this much memory on maintaining the in-memory
1185 by the maximum number of operations per second.
1192 Rate limit Direct I/O write verify events to this many per second.
1198 OpenZFS pre-release versions and now have compatibility issues.
1209 are not created per-object and instead a hashtable is used where collisions
1214 Rate limit delay zevents (which report slow I/O operations) to this many per
1218 Upper-bound limit for unflushed metadata changes to be held by the
1236 This tunable is important because it involves a trade-off between import
1240 the number of I/O operations for spacemap updates per TXG.
1265 It effectively limits the maximum number of unflushed per-TXG spacemap logs
1313 for 32-bit systems.
1345 The upper limit of write-transaction ZIL log data size in bytes.
1351 It also should be smaller than the size of the slog device if slog is present.
1357 Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1390 results in the original CPU-based calculation being used.
1435 Maximum asynchronous read I/O operations active to each device.
1439 Minimum asynchronous read I/O operations active to each device.
1460 Maximum asynchronous write I/O operations active to each device.
1464 Minimum asynchronous write I/O operations active to each device.
1478 Maximum initializing I/O operations active to each device.
1482 Minimum initializing I/O operations active to each device.
1486 The maximum number of I/O operations active to each device.
1492 Timeout value to wait before determining a device is missing
1499 Maximum sequential resilver I/O operations active to each device.
1503 Minimum sequential resilver I/O operations active to each device.
1507 Maximum removal I/O operations active to each device.
1511 Minimum removal I/O operations active to each device.
1515 Maximum scrub I/O operations active to each device.
1519 Minimum scrub I/O operations active to each device.
1523 Maximum synchronous read I/O operations active to each device.
1527 Minimum synchronous read I/O operations active to each device.
1531 Maximum synchronous write I/O operations active to each device.
1535 Minimum synchronous write I/O operations active to each device.
1539 Maximum trim/discard I/O operations active to each device.
1543 Minimum trim/discard I/O operations active to each device.
1547 For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1548 the number of concurrently-active I/O operations is limited to
1556 and the number of concurrently-active non-interactive operations is increased to
1567 To prevent non-interactive I/O, like scrub,
1568 from monopolizing the device, no more than
1577 The following options may be bitwise-ored together:
1583 1 Device No driver retries on device errors
1590 If this is higher than the maximum allowed by the device queue or the kernel
1619 The following flags may be bitwise-ored together:
1632 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1634 1024 ZFS_DEBUG_INDIRECT_REMAP Verify split blocks created by device removal.
1639 16384 ZFS_DEBUG_BRT Enable BRT-related debugging messages.
1641 65536 ZFS_DEBUG_DDT Enable DDT-related debugging messages.
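As a quick, hypothetical illustration of combining the flag values listed above with a bitwise OR (the names and values come from the table; everything else is assumed):
.Bd -literal
# Minimal sketch: enable two debug checks at once by OR-ing their values.
ZFS_DEBUG_METASLAB_VERIFY = 256
ZFS_DEBUG_INDIRECT_REMAP = 1024

zfs_flags = ZFS_DEBUG_METASLAB_VERIFY | ZFS_DEBUG_INDIRECT_REMAP
print(zfs_flags)  # 1280, the combined value for the debug-flags tunable
.Ed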
1669 from the error-encountering filesystem is "temporarily leaked".
1683 .Bl -enum -compact -offset 4n -width "1."
1686 e.g. due to a top-level vdev going offline), or
1708 a minimum of this much time will be spent working on freeing blocks per TXG.
1723 .Xr zpool-initialize 8 .
1727 .Xr zpool-initialize 8 .
1731 The threshold size (in block pointers) at which we create a new sub-livelist.
1811 Minimum number of metaslabs to flush per dirty TXG.
1843 Setting the threshold to a non-zero percentage will stop allocations
1870 .Sy zfs_multihost_interval No / Sy leaf-vdevs .
1909 device.
1930 if a volatile out-of-order write cache is enabled.
1933 Allow no-operation writes.
1950 The number of blocks pointed to by an indirect (non-L0) block which should be
1979 Disable QAT hardware acceleration for AES-GCM encryption.
1984 Bytes to read per chunk.
1995 top-level vdev.
2005 resilver per leaf device, given in bytes.
2023 Ignore hard I/O errors during device removal.
2024 When set, if a device encounters a hard I/O error during the removal process
2029 pool cannot be returned to a healthy state prior to removing the device.
2037 a device.
2096 .Bl -tag -compact -offset 4n -width "a"
2103 The largest mostly-contiguous chunk of found data will be verified first.
2124 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2131 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2157 resilvers per leaf device, given in bytes.
2166 Including unmodified copies of the spill blocks creates a backwards-compatible
2169 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2180 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2191 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2213 When this variable is set to non-zero, a corrective receive:
2214 .Bl -enum -compact -offset 4n -width "1."
2256 many blocks' size will change, and thus we have to re-allocate
2286 This option is useful for pools constructed from large thinly-provisioned
2295 Maximum number of queued TRIMs outstanding per leaf vdev.
2296 The number of concurrent TRIM commands issued to the device is controlled by
2301 before TRIM operations are issued to the device.
2302 This setting represents a trade-off between issuing larger,
2304 before the recently trimmed space is available for use by the device.
2326 Max vdev I/O aggregation size for non-rotating media.
2349 the purpose of selecting the least busy mirror member on non-rotational vdevs
2362 Aggregate read I/O operations if the on-disk gap between them is within this
2366 Aggregate write I/O operations if the on-disk gap between them is within this
2372 Variants that don't depend on CPU-specific features
2385 fastest selected by built-in benchmark
2388 sse2 SSE2 instruction set 64-bit x86
2389 ssse3 SSSE3 instruction set 64-bit x86
2390 avx2 AVX2 instruction set 64-bit x86
2391 avx512f AVX512F instruction set 64-bit x86
2392 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2393 aarch64_neon NEON Aarch64/64-bit ARMv8
2394 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2401 .Xr zpool-events 8 .
2418 The number of taskq entries that are pre-populated when the taskq is first
2426 will create a maximum of one thread per CPU.
2443 if a volatile out-of-order write cache is enabled.
2450 Limit SLOG write size per commit executed with synchronous priority.
2452 to limit potential SLOG device abuse by a single active ZIL writer.
2465 Usually, one metaslab from each normal-class vdev is dedicated for use by
2475 using LZ4 and zstd-1 passes is enabled.
2482 If non-zero, the zio deadman will produce debugging messages
2499 This allows for dynamic allocation distribution based on device performance.
2542 Number of worker threads per taskq.
2549 generate a system-dependent value close to 6 threads per taskq.
2553 Determines the minimum number of threads per write issue taskq.
2571 Do not create zvol device nodes.
2602 .Pq Li blk-mq .
2608 (the default) then scaling is done internally to prefer 6 threads per taskq.
2620 The number of threads per zvol to use for queuing I/O requests.
2622 .Li blk-mq
2634 .Li blk-mq
2642 .Li blk-mq
2649 .Sy volblocksize Ns -sized blocks per zvol thread.
2657 .Li blk-mq
2662 .Li blk-mq
2665 .Li blk-mq
2679 .Bl -tag -compact -offset 4n -width "a"
2700 that may be issued to the device.
2701 In addition, the device has an aggregate maximum,
2703 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2704 If the sum of the per-queue maxima exceeds the aggregate maximum,
2708 regardless of whether all per-queue minima have been met.
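A minimal sketch of the constraint described above, using hypothetical per-queue settings (queue names mirror the zfs_vdev_*_active tunables; the numbers are illustrative, not the defaults):
.Bd -literal
# Illustrative values only, not the shipped defaults.
queue_limits = {
    "sync_read":   {"min": 10, "max": 10},
    "sync_write":  {"min": 10, "max": 10},
    "async_read":  {"min": 1,  "max": 3},
    "async_write": {"min": 2,  "max": 10},
    "scrub":       {"min": 1,  "max": 3},
}
aggregate_max = 1000  # the device-wide aggregate maximum

# The sum of the per-queue minima must not exceed the aggregate maximum.
assert sum(q["min"] for q in queue_limits.values()) <= aggregate_max

# If the sum of the per-queue maxima exceeds the aggregate maximum,
# the queues cannot all reach their maxima at the same time.
if sum(q["max"] for q in queue_limits.values()) > aggregate_max:
    print("per-queue maxima cannot all be satisfied simultaneously")
.Ed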
2762 follows a piece-wise linear function defined by a few adjustable points:
2763 .Bd -literal
2764 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2771 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP
2775 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2776 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
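A minimal sketch of the piece-wise linear ramp drawn above, parameterized by the four tunables labeled in the diagram (the default-looking values below are assumptions for illustration):
.Bd -literal
def async_write_active(dirty_pct,
                       min_active=2,       # zfs_vdev_async_write_min_active (assumed)
                       max_active=10,      # zfs_vdev_async_write_max_active (assumed)
                       min_dirty_pct=30,   # ..._active_min_dirty_percent (assumed)
                       max_dirty_pct=60):  # ..._active_max_dirty_percent (assumed)
    """Active async writes as a function of dirty data (percent of the dirty limit)."""
    if dirty_pct <= min_dirty_pct:
        return min_active
    if dirty_pct >= max_dirty_pct:
        return max_active
    # Linear interpolation between the two adjustable points.
    frac = (dirty_pct - min_dirty_pct) / (max_dirty_pct - min_dirty_pct)
    return round(min_active + frac * (max_active - min_active))
.Ed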
2812 .D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy …
2825 .Bd -literal
2827 10ms +-------------------------------------------------------------*+
2846 | \fBzfs_delay_scale\fP ----------> ******** |
2847 0 +-------------------------------------*********----------------+
2848 0% <- \fBzfs_dirty_data_max\fP -> 100%
2864 .Bd -literal
2866 100ms +-------------------------------------------------------------++
2875 + \fBzfs_delay_scale\fP ----------> ***** +
2886 +--------------------------------------------------------------+
2887 0% <- \fBzfs_dirty_data_max\fP -> 100%
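A minimal sketch of the delay calculation behind the curves above, following the min_time formula quoted earlier (the parameter values and the 100 ms cap used here are assumptions for illustration):
.Bd -literal
def tx_delay_ns(dirty,                      # bytes of dirty data
                dirty_max=4 << 30,          # zfs_dirty_data_max (assumed 4 GiB)
                min_dirty_percent=60,       # zfs_delay_min_dirty_percent (assumed)
                delay_scale=500_000):       # zfs_delay_scale (assumed, nanoseconds)
    """Minimum per-operation delay, in nanoseconds."""
    dirty_min = dirty_max * min_dirty_percent // 100
    if dirty <= dirty_min:
        return 0
    if dirty >= dirty_max:
        return 100_000_000          # fully dirty: apply the cap
    # min_time = zfs_delay_scale * (dirty - min) / (max - dirty),
    # rising steeply as dirty data approaches zfs_dirty_data_max.
    delay = delay_scale * (dirty - dirty_min) // (dirty_max - dirty)
    return min(delay, 100_000_000)  # cap suggested by the 100 ms curve above
.Ed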
2894 for the I/O scheduler to reach optimal throughput on the back-end storage,