
1 .\" SPDX-License-Identifier: CDDL-1.0
10 .\" or https://opensource.org/licenses/CDDL-1.0.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.Bl -tag -width "special"
.Em N No disks of size Em X No can hold Em X No bytes and can withstand Em N-1
devices failing without losing data.
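.Pp
For example, a minimal sketch (pool and disk names are hypothetical) of
creating a two-way mirror, which survives the loss of either disk:
.Dl # Nm zpool Cm create Ar tank Sy mirror Ar sda sdb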
A distributed-parity layout, similar to RAID-5/6, with improved distribution of
parity, and which does not suffer from the RAID-5/6
vdev type specifies a single-parity raidz group; the
vdev type specifies a double-parity raidz group; and the
vdev type specifies a triple-parity raidz group.
.Em (N-P)*X No bytes and can withstand Em P No devices failing without losing data .
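.Pp
As a sketch (pool and disk names are hypothetical), a six-disk double-parity
group holds approximately 4*X bytes and survives any two disks failing:
.Dl # Nm zpool Cm create Ar tank Sy raidz2 Ar sda sdb sdc sdd sde sdf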
.Sy floor((N-S)/(D+P))*single_drive_IOPS .
Like raidz, a dRAID can have single-, double-, or triple-parity.
.Em (N-S)*(D/(D+P))*X No bytes and can withstand Em P
devices failing without losing data.
A non-default dRAID configuration can be specified by appending one or more
.Bl -tag -compact -width "children"
The parity level (1-3).
.Em 8 , No unless Em N-P-S No is less than Em 8 .
Useful as a cross-check when listing a large number of devices.
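.Pp
As a sketch (pool and disk names are hypothetical), an 11-disk dRAID2 with 4
data disks per redundancy group and 1 distributed spare holds approximately
(11-1)*(4/(4+2))*X bytes and, by the formula above, delivers
floor((11-1)/(4+2)) = 1 times the random read IOPS of a single drive:
.Dl # Nm zpool Cm create Ar tank Sy draid2:4d:11c:1s Ar sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk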
A pseudo-vdev which keeps track of available hot spares for a pool.
If more than one log device is specified, then writes are load-balanced between
devices.
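.Pp
A minimal sketch (pool and disk names are hypothetical) of adding a mirrored
log device so the intent log itself is redundant:
.Dl # Nm zpool Cm add Ar tank Sy log mirror Ar sda sdb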
allocations are load-balanced between those devices.
allocations are load-balanced between those devices.
Data is dynamically distributed across all top-level devices to balance data
While ZFS supports running in a non-redundant configuration, where each root
The health of the top-level vdev, such as a mirror or raidz device,
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
One or more top-level vdevs are in the degraded state because one or more
.Bl -bullet -compact
The number of checksum errors or slow I/Os exceeds acceptable levels and the
One or more top-level vdevs are in the faulted state because one or more
.Bl -bullet -compact
Device removal detection is hardware-dependent and may not be supported on all
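.Pp
The state of every vdev can be inspected with
.Nm zpool Cm status ;
for example (pool name hypothetical), to report only pools that are not
healthy:
.Dl # Nm zpool Cm status Fl x Ar tank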
If a device is removed and later re-attached to the system,
Device attachment detection is hardware-dependent
Once a spare replacement is initiated, a new
An in-progress spare replacement can be canceled by detaching the hot spare.
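.Pp
For example (pool and disk names are hypothetical), a hot spare is added to
an existing pool with:
.Dl # Nm zpool Cm add Ar tank Sy spare Ar sdc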
.Po Sy draid1 Ns - Ns Ar 2 Ns - Ns Ar 3 No specifies spare Ar 3 No of vdev Ar 2 ,
Mirrored devices can be removed by specifying the top-level mirror vdev.
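.Pp
For example (names hypothetical), where
.Sy mirror-1
is a top-level vdev name as reported by
.Nm zpool Cm status :
.Dl # Nm zpool Cm remove Ar tank Sy mirror-1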
For read-heavy workloads, where the working set size is much larger than what
read workloads of mostly static content.
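.Pp
A minimal sketch (pool and disk names are hypothetical) of adding a cache
device to an existing pool:
.Dl # Nm zpool Cm add Ar tank Sy cache Ar sdc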
will result in scanning the full-length ARC lists for cacheable content to be
The user can offline and online the cache device when there is less memory
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
.Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar pool
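.Pp
For example (pool name hypothetical), a checkpoint is created, and later
discarded, with:
.Dl # Nm zpool Cm checkpoint Ar tank
.Dl # Nm zpool Cm checkpoint Fl d Ar tank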
.Pq non- Ns Sy dedup Ns /- Ns Sy special
Inclusion of small file or zvol blocks in the special class is opt-in.
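.Pp
As a sketch (pool, disk, and dataset names are hypothetical), a special vdev
is added and small file blocks up to 32K are opted in for one dataset via the
.Sy special_small_blocks
property:
.Dl # Nm zpool Cm add Ar tank Sy special mirror Ar sdc sdd
.Dl # Nm zfs Cm set Sy special_small_blocks Ns = Ns Ar 32K Ar tank/fs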