Lines Matching +full:main +full:- +full:storage

9 .\" or https://opensource.org/licenses/CDDL-1.0.
27 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
35 .Nd overview of ZFS storage pools
42 .Bl -tag -width "special"
68 .Em N No disks of size Em X No can hold Em X No bytes and can withstand Em N-1
A distributed-parity layout, similar to RAID-5/6, with improved distribution of
parity, and which does not suffer from the RAID-5/6
vdev type specifies a single-parity raidz group; the
vdev type specifies a double-parity raidz group; and the
vdev type specifies a triple-parity raidz group.
.Em (N-P)*X No bytes and can withstand Em P No devices failing without losing data .
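.Pp
As a worked example of these capacity rules (pool and disk names below are
placeholders, not taken from this page): a mirror of three 4 TiB disks holds
4 TiB and survives any two disk failures, while a raidz2 group of six 4 TiB
disks (N = 6, P = 2) holds approximately (6-2)*4 = 16 TiB and likewise
survives any two disk failures.
A sketch of creating the latter:
.Dl # Nm zpool Cm create Ar tank Sy raidz2 Ar sda sdb sdc sdd sde sdf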
.Sy floor((N-S)/(D+P))*single_drive_IOPS .
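.Pp
As a worked example of this formula: a dRAID with N=14 children, P=2 parity
devices, D=4 data devices per group, and S=2 distributed spares delivers
roughly floor((14-2)/(4+2)) = 2 times the random-read IOPS of a single drive.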
Like raidz, a dRAID can have single-, double-, or triple-parity.
.Em (N-S)*(D/(D+P))*X No bytes and can withstand Em P
A non-default dRAID configuration can be specified by appending one or more
.Bl -tag -compact -width "children"
The parity level (1-3).
.Em 8 , No unless Em N-P-S No is less than Em 8 .
Useful as a cross-check when listing a large number of devices.
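.Pp
Putting these parameters together, here is a sketch (device names are
placeholders) of a double-parity dRAID with 4 data devices per group, 14
children, and 2 distributed spares; by the capacity formula above it holds
roughly (14-2)*(4/(4+2))*X = 8X bytes for disks of size X:
.Dl # Nm zpool Cm create Ar tank Sy draid2:4d:14c:2s Ar sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn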
A pseudo-vdev which keeps track of available hot spares for a pool.
If more than one log device is specified, then writes are load-balanced between
allocations are load-balanced between those devices.
allocations are load-balanced between those devices.
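.Pp
As an illustration (pool and device names are placeholders), a mirrored
special allocation class vdev could be added to an existing pool with:
.Dl # Nm zpool Cm add Ar tank Sy special mirror Ar sdc sdd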
A device used to cache storage pool data.
Data is dynamically distributed across all top-level devices to balance data
While ZFS supports running in a non-redundant configuration, where each root
The health of the top-level vdev, such as a mirror or raidz device,
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
One or more top-level vdevs are in the degraded state because one or more
.Bl -bullet -compact
One or more top-level vdevs are in the faulted state because one or more
.Bl -bullet -compact
Device removal detection is hardware-dependent and may not be supported on all
If a device is removed and later re-attached to the system,
Device attachment detection is hardware-dependent
An in-progress spare replacement can be cancelled by detaching the hot spare.
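.Pp
For example, if a hot spare named
.Ar sdf
(a placeholder) had taken over for a failed disk, the in-progress
replacement could be cancelled with:
.Dl # Nm zpool Cm detach Ar tank sdf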
.Po Sy draid1 Ns - Ns Ar 2 Ns - Ns Ar 3 No specifies spare Ar 3 No of vdev Ar 2 ,
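.Pp
As a sketch using that naming scheme (the failed device name is a
placeholder), a failed child could be rebuilt onto that distributed spare
with:
.Dl # Nm zpool Cm replace Ar tank sde draid1-2-3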
For instance, databases often require their transactions to be on stable storage
By default, the intent log is allocated from blocks within the main pool.
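.Pp
A dedicated low-latency log device (or a mirror of them) can be used
instead; a sketch, with placeholder device names:
.Dl # Nm zpool Cm add Ar tank Sy log mirror Ar nvme0n1 nvme1n1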
Mirrored devices can be removed by specifying the top-level mirror vdev.
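.Pp
For example, assuming a pool whose second top-level vdev is named
.Sy mirror-1 ,
that mirror could be evacuated and removed with:
.Dl # Nm zpool Cm remove Ar tank mirror-1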
Devices can be added to a storage pool as
These devices provide an additional layer of caching between main memory and
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
read workloads of mostly static content.
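.Pp
A sketch of adding cache devices to an existing pool (device names are
placeholders):
.Dl # Nm zpool Cm add Ar tank Sy cache Ar sdc sdd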
the original storage pool device, which might be part of a mirrored or raidz
will result in scanning the full-length ARC lists for cacheable content to be
The user can take the cache device offline and bring it back online when there is less memory pressure.
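.Pp
For instance (with a placeholder device name):
.Dl # Nm zpool Cm offline Ar tank sdc
.Dl # Nm zpool Cm online Ar tank sdc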
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
.Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar pool
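.Pp
The checkpoint being rewound to would have been created earlier with:
.Dl # Nm zpool Cm checkpoint Ar pool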
.Pq non- Ns Sy dedup Ns /- Ns Sy special
Inclusion of small file blocks in the special class is opt-in.
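.Pp
As a sketch, a dataset (name is a placeholder) could opt its small blocks
into the special class via the
.Sy special_small_blocks
property:
.Dl # Nm zfs Cm set Sy special_small_blocks Ns = Ns Ar 32K tank/fs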