Lines Matching +full:part +full:- +full:number
9 .\" or https://opensource.org/licenses/CDDL-1.0.
27 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
42 .Bl -tag -width "special"
68 .Em N No disks of size Em X No can hold Em X No bytes and can withstand Em N-1
71 A distributed-parity layout, similar to RAID-5/6, with improved distribution of
72 parity, and which does not suffer from the RAID-5/6
83 vdev type specifies a single-parity raidz group; the
85 vdev type specifies a double-parity raidz group; and the
87 vdev type specifies a triple-parity raidz group.
95 .Em (N-P)*X No bytes and can withstand Em P No devices failing without losing data .
96 The minimum number of devices in a raidz group is one more than the number of
98 The recommended number is between 3 and 9 to help increase performance.
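For illustration, a minimal sketch with placeholder device names: a double-parity raidz group of five disks, which per the formula above can hold approximately (5-2)*X bytes and withstand two devices failing:
.Dl # Nm zpool Cm create Ar tank Sy raidz2 Ar sda sdb sdc sdd sde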
125 .Sy floor((N-S)/(D+P))*single_drive_IOPS .
127 Like raidz, a dRAID can have single-, double-, or triple-parity.
142 .Em (N-S)*(D/(D+P))*X No bytes and can withstand Em P
145 A non-default dRAID configuration can be specified by appending one or more
149 .Bl -tag -compact -width "children"
151 The parity level (1-3).
153 The number of data devices per redundancy group.
158 .Em 8 , No unless Em N-P-S No is less than Em 8 .
160 The expected number of children.
161 Useful as a cross-check when listing a large number of devices.
162 An error is returned when the provided number of children differs.
164 The number of distributed hot spares.
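As a sketch with placeholder names, the parameters above combine into a single vdev specification in the order parity, data, children, spares; for example, double parity, 4 data disks per group, 11 children, and 1 distributed spare:
.Dl # Nm zpool Cm create Ar tank Sy draid2:4d:11c:1s Ar sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk
Here N=11, S=1, D=4, and P=2, so the vdev can hold approximately (11-1)*(4/(4+2))*X bytes.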
168 A pseudo-vdev which keeps track of available hot spares for a pool.
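For example, with placeholder device names, a hot spare can be designated at pool creation or added afterwards:
.Dl # Nm zpool Cm create Ar tank Sy mirror Ar sda sdb Sy spare Ar sdc
.Dl # Nm zpool Cm add Ar tank Sy spare Ar sdd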
174 If more than one log device is specified, then writes are load-balanced between
186 allocations are load-balanced between those devices.
193 allocations are load-balanced between those devices.
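As a sketch with placeholder names, log, dedup, and special vdevs can be added to an existing pool; giving a class more than one device enables the load-balancing described above:
.Dl # Nm zpool Cm add Ar tank Sy log Ar sde sdf
.Dl # Nm zpool Cm add Ar tank Sy special Sy mirror Ar sdg sdh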
210 A pool can have any number of virtual devices at the top of the configuration
214 Data is dynamically distributed across all top-level devices to balance data
236 While ZFS supports running in a non-redundant configuration, where each root
248 The health of the top-level vdev, such as a mirror or raidz device,
251 A top-level vdev or component device is in one of the following states:
252 .Bl -tag -width "DEGRADED"
254 One or more top-level vdevs are in the degraded state because one or more
261 .Bl -bullet -compact
263 The number of checksum errors or slow I/Os exceeds acceptable levels and the
267 The number of I/O errors exceeds acceptable levels.
272 One or more top-level vdevs are in the faulted state because one or more
279 .Bl -bullet -compact
283 The number of I/O errors exceeds acceptable levels and the device is faulted to
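The current state of every top-level vdev and component device can be inspected, and transient error counts cleared, with commands such as (pool name is a placeholder):
.Dl # Nm zpool Cm status Fl x Ar tank
.Dl # Nm zpool Cm clear Ar tank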
294 Device removal detection is hardware-dependent and may not be supported on all
320 If a device is removed and later re-attached to the system,
322 Device attachment detection is hardware-dependent
333 vdev with any number of devices.
358 An in-progress spare replacement can be cancelled by detaching the hot spare.
366 These hot spares are named after the dRAID vdev they are a part of
367 .Po Sy draid1 Ns - Ns Ar 2 Ns - Ns Ar 3 No specifies spare Ar 3 No of vdev Ar 2 ,
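For example, using the naming scheme above, a failed child of a dRAID vdev might be rebuilt onto one of its distributed spares (pool and device names are placeholders):
.Dl # Nm zpool Cm replace Ar tank sdc Sy draid1-2-3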
394 In addition, log devices are imported and exported as part of the pool
396 Mirrored devices can be removed by specifying the top-level mirror vdev.
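As a sketch, a mirrored log vdev is removed by naming the top-level mirror as reported by zpool status (the index is a placeholder):
.Dl # Nm zpool Cm remove Ar tank Sy mirror-1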
403 For read-heavy workloads, where the working set size is much larger than what
407 read workloads of mostly static content.
411 vdev with any number of devices.
415 Cache devices cannot be mirrored or part of a raidz configuration.
417 the original storage pool device, which might be part of a mirrored or raidz
435 will result in scanning the full-length ARC lists for cacheable content to be
440 restored in L2ARC, even if the device was previously part of the pool.
446 The user can offline and online the cache device when there is less memory
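For example, with a placeholder device name, a cache device can be added, taken offline, and brought back online without otherwise disturbing the pool:
.Dl # Nm zpool Cm add Ar tank Sy cache Ar sdc
.Dl # Nm zpool Cm offline Ar tank Ar sdc
.Dl # Nm zpool Cm online Ar tank Ar sdc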
457 A pool checkpoint can be thought of as a pool-wide snapshot and should be used
458 with care as it contains every part of the pool's state, from properties to vdev
474 .Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar pool
483 Finally, data that is part of the checkpoint but has been freed in the
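A checkpoint is created, and later discarded to reclaim that space, with (pool name is a placeholder):
.Dl # Nm zpool Cm checkpoint Ar pool
.Dl # Nm zpool Cm checkpoint Fl d Ar pool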
493 .Pq non- Ns Sy dedup Ns /- Ns Sy special
505 Inclusion of small file blocks in the special class is opt-in.
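For example, a dataset opts its small file blocks into the special class through the special_small_blocks property (dataset name and threshold are placeholders); blocks of the given size or smaller are then allocated on the special vdev:
.Dl # Nm zfs Cm set Sy special_small_blocks Ns = Ns Ar 32K Ar tank/fs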