Lines Matching +full:trim +full:- +full:data +full:- +full:valid

1 .\" SPDX-License-Identifier: CDDL-1.0
11 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
31 .Bl -tag -width Ds
88 This assumes redundancy for this data is provided by the vdev layer.
98 Turbo L2ARC warm-up.
134 Controls whether only MFU metadata and data are cached from ARC into L2ARC.
136 amounts of data that are not expected to be accessed more than once.
139 meaning both MRU and MFU data and metadata are cached.
147 Setting it to 1 means that only MFU data and metadata are cached in L2ARC.
150 only MFU data (i.e. MRU data are not cached). This can be the right setting
151 to cache as much metadata as possible even under high data turnover.
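As a rough illustration of the three documented values (0, 1, 2), the sketch below models the cache-eligibility rule in Python; the helper name and its arguments are invented for illustration and are not OpenZFS code.
.Bd -literal
# Illustrative sketch of the documented l2arc_mfuonly policy values;
# is_l2_eligible() is hypothetical, not part of OpenZFS.
def is_l2_eligible(mfuonly, state, is_metadata):
    # state is "mru" or "mfu"; mfuonly is the tunable value 0, 1 or 2
    if mfuonly == 0:          # cache both MRU and MFU data and metadata
        return True
    if mfuonly == 1:          # cache only MFU data and metadata
        return state == "mfu"
    if mfuonly == 2:          # all metadata, but only MFU data
        return is_metadata or state == "mfu"
    raise ValueError("l2arc_mfuonly must be 0, 1 or 2")

assert is_l2_eligible(2, "mru", is_metadata=True)    # MRU metadata still cached
assert not is_l2_eligible(2, "mru", is_metadata=False)
.Ed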
175 Percent of ARC size allowed for L2ARC-only headers.
187 we TRIM twice the space required to accommodate upcoming writes.
191 It also enables TRIM of the whole L2ARC device upon creation
196 disables TRIM on L2ARC altogether and is the default as it can put significant
238 For L2ARC devices less than 1 GiB, the amount of data
240 evicts is significant compared to the amount of restored L2ARC data.
247 In normal operation, ZFS will try to write this amount of data to each child
248 of a top-level vdev before moving on to the next top-level vdev.
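A toy model of this round-robin behaviour is sketched below in Python; the 1 MiB aliquot and the three-vdev pool are arbitrary example values, not defaults.
.Bd -literal
# Toy model: write an "aliquot" of data to each top-level vdev in turn.
ALIQUOT = 1 << 20          # bytes written to one vdev before rotating

def assign_writes(total_bytes, n_top_level_vdevs, aliquot=ALIQUOT):
    per_vdev = [0] * n_top_level_vdevs
    vdev, remaining = 0, total_bytes
    while remaining > 0:
        chunk = min(aliquot, remaining)
        per_vdev[vdev] += chunk
        remaining -= chunk
        vdev = (vdev + 1) % n_top_level_vdevs   # move on to the next vdev
    return per_vdev

print(assign_writes(10 * ALIQUOT, 3))   # [4194304, 3145728, 3145728]
.Ed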
251 Enable metaslab groups biasing based on their over- or under-utilization
260 A setting of 1 behaves as 2 if the pool is write-bound, or as 0 otherwise.
277 Default BRT ZAP data block size as a power of 2. Note that changing this after
287 Default DDT ZAP data block size as a power of 2. Note that changing this after
313 if not page-aligned instead of silently falling back to uncached I/O.
316 When attempting to log an output nvlist of an ioctl in the on-disk history,
321 .Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
327 Enable/disable segment-based metaslab selection.
330 When using segment-based metaslab selection, continue allocating
349 becomes the performance limiting factor on high-performance storage.
393 .Bl -item -compact
401 If that fails then we will have a multi-layer gang block.
404 .Bl -item -compact
414 If that fails then we will have a multi-layer gang block.
424 When a vdev is added, target this number of metaslabs per top-level vdev.
433 Maximum ashift used when optimizing for logical \[->] physical sector size on
435 top-level vdevs.
441 If non-zero, then a Direct I/O write's checksum will be verified every
443 If the checksum is not valid, the I/O operation will return EIO.
461 Minimum ashift used when creating new top-level vdevs.
464 Minimum number of metaslabs to create in a top-level vdev.
472 Practical upper limit of total metaslabs per top-level vdev.
506 Max amount of memory to use for RAID-Z expansion I/O.
510 For testing, pause RAID-Z expansion when reflow amount reaches this value.
513 For expanded RAID-Z, aggregate reads that have more rows than this.
537 size of data being written.
539 but lower values may be valid for a given pool depending on its configuration.
549 Whether to traverse data blocks during an "extreme rewind"
555 If this parameter is unset, the traversal skips non-metadata blocks.
557 import has started to stop or start the traversal of non-metadata blocks.
579 It also limits the worst-case time to allocate space.
599 Limits the number of on-disk error log entries that will be converted to the
608 data will be reconstructed as needed from parity.
610 A number of disks in a RAID-Z or dRAID vdev may sit out at the same time, up
614 Defaults to 600 seconds and a value of zero disables disk sit-outs in general,
618 How often each RAID-Z and dRAID vdev will check for slow disk outliers.
626 When performing slow outlier checks for RAID-Z and dRAID vdevs, this value is
632 problems, but may significantly increase the rate of spurious sit-outs.
635 of the inter-quartile range (IQR) that is being used in a Tukey's Fence
637 This is much higher than a normal Tukey's Fence k-value, because the
638 distribution under consideration is probably an extreme-value distribution,
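A sketch of the Tukey's-fence idea described above is shown below in Python; the latencies and the k value are made up for illustration and are not the defaults used by the outlier check.
.Bd -literal
# Flag "slow" disks whose latency exceeds Q3 + k * IQR (Tukey's fence).
import statistics

def slow_outliers(latencies_ms, k):
    q1, _, q3 = statistics.quantiles(latencies_ms, n=4)
    fence = q3 + k * (q3 - q1)          # upper Tukey fence
    return [(i, l) for i, l in enumerate(latencies_ms) if l > fence]

lat = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 48.0]   # one disk is much slower
print(slow_outliers(lat, k=3))                # [(6, 48.0)]
.Ed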
642 During top-level vdev removal, chunks of data are copied from the vdev
645 which will be included as "unnecessary" data in a chunk of copied data.
653 Logical ashift for file-based devices.
656 Physical ashift for file-based devices.
711 .It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
720 This is the minimum allocation size that will use scatter (page-based) ABDs.
725 bytes, try to unpin some of it in response to demand for non-metadata.
739 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
747 with 8-byte pointers.
755 waits for this percent of the requested amount of data to be evicted.
770 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
771 This batch-style operation prevents entire sub-lists from being evicted at once
779 Each thread will process one sub-list at a time,
780 until the eviction target is reached or all sub-lists have been processed.
788 1-4 1
789 5-8 2
790 9-15 3
791 16-31 4
792 32-63 6
793 64-95 8
794 96-127 9
795 128-160 11
796 160-191 12
797 192-223 13
798 224-255 14
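Assuming the two columns above map a CPU-count range to an eviction thread count (only the rows matching this search are shown), a lookup over those rows could be sketched as follows; the code is illustrative only.
.Bd -literal
# Hypothetical lookup over the CPU-range -> thread-count rows shown above.
RANGES = [((1, 4), 1), ((5, 8), 2), ((9, 15), 3), ((16, 31), 4),
          ((32, 63), 6), ((64, 95), 8), ((96, 127), 9), ((128, 160), 11),
          ((160, 191), 12), ((192, 223), 13), ((224, 255), 14)]

def evict_threads(ncpus):
    for (lo, hi), threads in RANGES:
        if lo <= ncpus <= hi:
            return threads
    return None   # CPU counts outside the rows shown above are not listed here

print(evict_threads(12))   # 3
.Ed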
831 .Sy all_system_memory No \- Sy 1 GiB
845 Balance between metadata and data on ghost hits.
847 of ghost data hits on target data/metadata rate.
873 Number of missing top-level vdevs which will be allowed during
874 pool import (only in read-only mode).
886 .Pa zfs-dbgmsg
891 equivalent to a quarter of the user-wired memory limit under
898 To allow more fine-grained locking, each ARC state contains a series
899 of lists for both data and metadata objects.
900 Locking is performed at the level of these "sub-lists".
901 This parameter controls the number of sub-lists per ARC state,
902 and also applies to other uses of the multilist data structure.
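A minimal sketch of the multilist idea (one lock per sub-list, objects distributed across sub-lists) is given below in Python; the class and method names are illustrative, not the kernel's.
.Bd -literal
# Toy multilist: one lock and one list per sub-list, selected by hash,
# so threads contend only on the sub-list they touch.
import threading

class Multilist:
    def __init__(self, num_sublists):
        self.locks = [threading.Lock() for _ in range(num_sublists)]
        self.sublists = [[] for _ in range(num_sublists)]

    def insert(self, obj):
        idx = hash(obj) % len(self.sublists)
        with self.locks[idx]:               # only this sub-list is locked
            self.sublists[idx].append(obj)

ml = Multilist(num_sublists=4)
for name in ("hdr_a", "hdr_b", "hdr_c"):
    ml.insert(name)
print(sum(len(s) for s in ml.sublists))     # 3
.Ed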
1014 bytes on-disk.
1019 Only attempt to condense indirect vdev mappings if the on-disk size
1075 Valid values are:
1076 .Bl -tag -compact -offset 4n -width "continue"
1082 Attempt to recover from a "hung" operation by re-dispatching it
1086 This can be used to facilitate automatic fail-over
1087 to a properly configured fail-over partner.
1106 Enable prefetching of deduplicated blocks which are going to be freed.
1152 amount of data.
1163 Enabling it will reduce worst-case import times, at the cost of increased TXG
1188 OpenZFS will spend no more than this much memory on maintaining the in-memory
1206 Start to delay each transaction once there is this amount of dirty data,
1215 Larger values cause longer delays for a given amount of dirty data.
1231 OpenZFS pre-release versions and now have compatibility issues.
1242 are not created per-object and instead a hashtable is used where collisions
1251 Upper bound for unflushed metadata changes to be held by the
1269 This tunable is important because it involves a trade-off between import
1298 It effectively limits the maximum number of unflushed per-TXG spacemap logs
1346 for 32-bit systems.
1372 Start syncing out a transaction group if there's at least this much dirty data
1378 The upper limit of write-transaction ZIL log data size in bytes.
1379 Write operations are throttled when approaching the limit until log data is
1390 Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1423 results in the original CPU-based calculation being used.
1437 data to be written to disk before proceeding.
1477 When the pool has more than this much dirty data, use
1480 If the dirty data is between the minimum and maximum,
1485 When the pool has less than this much dirty data, use
1488 If the dirty data is between the minimum and maximum,
1573 Maximum trim/discard I/O operations active to each device.
1577 Minimum trim/discard I/O operations active to each device.
1581 For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1582 the number of concurrently-active I/O operations is limited to
1590 and the number of concurrently-active non-interactive operations is increased to
1601 To prevent non-interactive I/O, like scrub,
1611 The following options may be bitwise-ored together:
1653 The following flags may be bitwise-ored together:
1666 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1669 2048 ZFS_DEBUG_TRIM Verify TRIM ranges are always within the allocatable range tree.
1673 16384 ZFS_DEBUG_BRT Enable BRT-related debugging messages.
1675 65536 ZFS_DEBUG_DDT Enable DDT-related debugging messages.
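Since the flags are bitwise-ored together, a combined value can be computed directly from the entries shown above; the small Python sketch below uses only flag names and values taken from those rows.
.Bd -literal
# Combine a few of the zfs_flags debug bits listed above.
ZFS_DEBUG_METASLAB_VERIFY = 256
ZFS_DEBUG_TRIM            = 2048
ZFS_DEBUG_BRT             = 16384
ZFS_DEBUG_DDT             = 65536

zfs_flags = (ZFS_DEBUG_METASLAB_VERIFY | ZFS_DEBUG_TRIM |
             ZFS_DEBUG_BRT | ZFS_DEBUG_DDT)
print(zfs_flags)   # 84224
.Ed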
1703 from the error-encountering filesystem is "temporarily leaked".
1717 .Bl -enum -compact -offset 4n -width "1."
1720 e.g. due to a top-level vdev going offline), or
1722 have localized, permanent errors (e.g. disk returns the wrong data
1750 Largest write size to store the data directly into the ZIL if
1755 storing all written data in the ZIL so as not to depend on regular vdev latency.
1766 .Xr zpool-initialize 8 .
1770 .Xr zpool-initialize 8 .
1774 The threshold size (in block pointers) at which we create a new sub-livelist.
1851 Normally disabled because these datasets may be missing key data.
1886 Setting the threshold to a non-zero percentage will stop allocations
1894 If enabled, ZFS will place DDT data into the special allocation class.
1897 If enabled, ZFS will place user data indirect blocks
1913 .Sy zfs_multihost_interval No / Sy leaf-vdevs .
1964 This results in scrubs not actually scrubbing data and
1973 if a volatile out-of-order write cache is enabled.
1976 Allow no-operation writes.
1982 When enabled, forces ZFS to sync data when
1990 or other data crawling operations.
1993 The number of blocks pointed to by an indirect (non-L0) block which should be
1996 or other data crawling operations.
2022 Disable QAT hardware acceleration for AES-GCM encryption.
2038 top-level vdev.
2138 Determines the order that data will be verified while scrubbing or resilvering:
2139 .Bl -tag -compact -offset 4n -width "a"
2141 Data will be verified as sequentially as possible, given the
2144 This may improve scrub performance if the pool's data is very fragmented.
2146 The largest mostly-contiguous chunk of found data will be verified first.
2147 By deferring scrubbing of small segments, we may later find adjacent data
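A brief sketch contrasting the two orderings described above is given below in Python, using a made-up list of (offset, size) segments; it is illustrative only.
.Bd -literal
# Order found segments either by offset ("sequential") or by size,
# largest mostly-contiguous chunk first.
segments = [(0, 8), (100, 512), (700, 64), (4096, 2048)]  # (offset, size)

sequential = sorted(segments, key=lambda s: s[0])
largest_first = sorted(segments, key=lambda s: s[1], reverse=True)

print(sequential[0], largest_first[0])   # (0, 8) (4096, 2048)
.Ed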
2167 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2171 data verification I/O.
2174 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2199 Maximum amount of data that can be concurrently issued at once for scrubs and
2203 Allow sending of corrupt data (ignore read/checksum errors when sending).
2209 Including unmodified copies of the spill blocks creates a backwards-compatible
2212 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2223 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2234 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2247 The maximum amount of data, in bytes, that
2256 When this variable is set to non-zero, a corrective receive:
2257 .Bl -enum -compact -offset 4n -width "1."
2272 Override this value if most data in your dataset is not of that size
2276 Flushing of data to disk is done in passes.
2294 It ensures that the data is actually written to persistent storage, which helps
2299 Only allow small data blocks to be allocated on the special and dedup vdev
2314 many blocks' size will change, and thus we have to re-allocate
2331 Maximum size of TRIM command.
2336 Minimum size of TRIM commands.
2337 TRIM ranges smaller than this will be skipped,
2343 Skip uninitialized metaslabs during the TRIM process.
2344 This option is useful for pools constructed from large thinly-provisioned
2346 where TRIM operations are slow.
2349 This setting is stored when starting a manual TRIM and will
2350 persist for the duration of the requested TRIM.
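A sketch of how a free range might be split into TRIM commands under these two size limits follows, in Python; the byte values are arbitrary examples, not the defaults.
.Bd -literal
# Split a free extent into TRIM commands no larger than extent_max,
# skipping any remaining piece smaller than extent_min.
def trim_commands(offset, length, extent_min, extent_max):
    cmds = []
    while length >= extent_min:
        size = min(length, extent_max)
        cmds.append((offset, size))
        offset += size
        length -= size
    return cmds            # a trailing range < extent_min is skipped

print(trim_commands(0, 10_500_000, extent_min=1_000_000, extent_max=4_000_000))
# [(0, 4000000), (4000000, 4000000), (8000000, 2500000)]
.Ed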
2354 The number of concurrent TRIM commands issued to the device is controlled by
2359 before TRIM operations are issued to the device.
2360 This setting represents a trade-off between issuing larger,
2361 more efficient TRIM operations and the delay
2365 This will result in larger TRIM operations and potentially increased memory
2377 Flush dirty data to disk at least once every this many seconds (maximum TXG
2384 Max vdev I/O aggregation size for non-rotating media.
2407 the purpose of selecting the least busy mirror member on non-rotational vdevs
2420 Aggregate read I/O operations if the on-disk gap between them is within this
2424 Aggregate write I/O operations if the on-disk gap between them is within this
2430 Variants that don't depend on CPU-specific features
2443 fastest selected by built-in benchmark
2446 sse2 SSE2 instruction set 64-bit x86
2447 ssse3 SSSE3 instruction set 64-bit x86
2448 avx2 AVX2 instruction set 64-bit x86
2449 avx512f AVX512F instruction set 64-bit x86
2450 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2451 aarch64_neon NEON Aarch64/64-bit ARMv8
2452 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2459 .Xr zpool-events 8 .
2476 The number of taskq entries that are pre-populated when the taskq is first
2501 if a volatile out-of-order write cache is enabled.
2532 Whether the heuristic for detection of incompressible data with zstd levels >= 3
2533 using LZ4 and zstd-1 passes is enabled.
2540 If non-zero, the zio deadman will produce debugging messages
2593 infrequently-accessed files, until the kernel's memory pressure mechanism
2650 generate a system-dependent value close to 6 threads per taskq.
2663 Each of the four values corresponds to the issue, issue high-priority,
2664 interrupt, and interrupt high-priority queues.
2665 Valid values are
2679 Each of the four values corresponds to the issue, issue high-priority,
2680 interrupt, and interrupt high-priority queues.
2681 Valid values are
2695 Each of the four values corresponds to the issue, issue high-priority,
2696 interrupt, and interrupt high-priority queues.
2697 Valid values are
2718 Discard (TRIM) operations on zvols will be issued in batches of this
2741 .Pq Li blk-mq .
2761 .Li blk-mq
2773 .Li blk-mq
2781 .Li blk-mq
2788 .Sy volblocksize Ns -sized blocks per zvol thread.
2796 .Li blk-mq
2801 .Li blk-mq
2804 .Li blk-mq
2818 .Bl -tag -compact -offset 4n -width "a"
2842 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2843 If the sum of the per-queue maxima exceeds the aggregate maximum,
2847 regardless of whether all per-queue minima have been met.
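A small Python check of the constraint described above is sketched below; the queue names are ZFS I/O classes, but the numeric limits are arbitrary examples rather than defaults.
.Bd -literal
# Validate that per-queue minima fit under the aggregate maximum.
aggregate_max = 1000
queue_min = {"sync_read": 10, "sync_write": 10, "async_read": 1,
             "async_write": 1, "scrub": 1}
queue_max = {"sync_read": 10, "sync_write": 10, "async_read": 3,
             "async_write": 10, "scrub": 2}

assert sum(queue_min.values()) <= aggregate_max, \
    "sum of per-queue minima must not exceed the aggregate maximum"

# If the per-queue maxima sum past the aggregate limit, the aggregate
# limit is what actually caps concurrent I/O.
effective_cap = min(sum(queue_max.values()), aggregate_max)
print(effective_cap)   # 35
.Ed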
2884 Asynchronous writes represent the data that is committed to stable storage
2891 according to the amount of dirty data in the pool.
2897 from the async write queue as there is more dirty data in the pool.
2901 follows a piece-wise linear function defined by a few adjustable points:
2902 .Bd -literal
2903 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2910 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP
2914 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2915 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
2918 Until the amount of dirty data exceeds a minimum percentage of the dirty
2919 data allowed in the pool, the I/O scheduler will limit the number of
2923 of the dirty data allowed in the pool.
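A sketch of the piecewise-linear scaling shown in the figure above follows, in Python, assuming straightforward linear interpolation between the two dirty-percent points; the example arguments are illustrative, not necessarily the defaults.
.Bd -literal
# Scale async write active I/Os between min_active and max_active as the
# pool's dirty percentage moves between the two tunable points.
def async_write_active(dirty_pct, min_pct, max_pct, min_active, max_active):
    if dirty_pct <= min_pct:
        return min_active
    if dirty_pct >= max_pct:
        return max_active
    slope = (max_active - min_active) / (max_pct - min_pct)
    return int(min_active + slope * (dirty_pct - min_pct))

# Example with illustrative values.
print(async_write_active(45, 30, 60, 1, 10))   # 5
.Ed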
2925 Ideally, the amount of dirty data on a busy pool will stay in the sloped
2931 this indicates that the rate of incoming data is
2951 .D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy …
2954 The percentage of dirty data at which we start to delay is defined by
2964 .Bd -literal
2966 10ms +-------------------------------------------------------------*+
2985 | \fBzfs_delay_scale\fP ----------> ******** |
2986 0 +-------------------------------------*********----------------+
2987 0% <- \fBzfs_dirty_data_max\fP -> 100%
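A worked sketch of a delay curve of this shape is given below in Python, assuming the usual hockey-stick form zfs_delay_scale * (dirty - min) / (max - dirty); the 4 GiB dirty maximum, the threshold percentage, and the scale are example inputs, and the variable names are stand-ins rather than the exact tunable names.
.Bd -literal
# Sketch of the delay curve described above: the per-transaction delay
# grows smoothly, then sharply, as dirty data approaches the dirty maximum.
def tx_delay(dirty, dirty_max, delay_min_pct, delay_scale):
    delay_min = dirty_max * delay_min_pct // 100
    if dirty <= delay_min:
        return 0                            # below the threshold: no delay
    return delay_scale * (dirty - delay_min) // (dirty_max - dirty)

GiB = 1 << 30
dirty_max = 4 * GiB
for pct in (55, 70, 90, 99):                # percent of the dirty maximum
    print(pct, tx_delay(pct * dirty_max // 100, dirty_max, 60, 500_000))
.Ed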
2997 was chosen such that small changes in the amount of accumulated dirty data
3003 .Bd -literal
3005 100ms +-------------------------------------------------------------++
3014 + \fBzfs_delay_scale\fP ----------> ***** +
3025 +--------------------------------------------------------------+
3026 0% <- \fBzfs_dirty_data_max\fP -> 100%
3029 Note here that only as the amount of dirty data approaches its limit does
3031 The goal of a properly tuned system should be to keep the amount of dirty data
3033 for the I/O scheduler to reach optimal throughput on the back-end storage,