1.\" Copyright (c) 2010 Alexander Motin <mav@FreeBSD.org> 2.\" All rights reserved. 3.\" 4.\" Redistribution and use in source and binary forms, with or without 5.\" modification, are permitted provided that the following conditions 6.\" are met: 7.\" 1. Redistributions of source code must retain the above copyright 8.\" notice, this list of conditions and the following disclaimer. 9.\" 2. Redistributions in binary form must reproduce the above copyright 10.\" notice, this list of conditions and the following disclaimer in the 11.\" documentation and/or other materials provided with the distribution. 12.\" 13.\" THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND 14.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 15.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 16.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE 17.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 18.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 19.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 20.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 21.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 22.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 23.\" SUCH DAMAGE. 24.\" 25.\" $FreeBSD$ 26.\" 27.Dd April 4, 2013 28.Dt GRAID 8 29.Os 30.Sh NAME 31.Nm graid 32.Nd "control utility for software RAID devices" 33.Sh SYNOPSIS 34.Nm 35.Cm label 36.Op Fl f 37.Op Fl o Ar fmtopt 38.Op Fl S Ar size 39.Op Fl s Ar strip 40.Ar format 41.Ar label 42.Ar level 43.Ar prov ... 44.Nm 45.Cm add 46.Op Fl f 47.Op Fl S Ar size 48.Op Fl s Ar strip 49.Ar name 50.Ar label 51.Ar level 52.Nm 53.Cm delete 54.Op Fl f 55.Ar name 56.Op Ar label | Ar num 57.Nm 58.Cm insert 59.Ar name 60.Ar prov ... 61.Nm 62.Cm remove 63.Ar name 64.Ar prov ... 65.Nm 66.Cm fail 67.Ar name 68.Ar prov ... 69.Nm 70.Cm stop 71.Op Fl fv 72.Ar name ... 73.Nm 74.Cm list 75.Nm 76.Cm status 77.Nm 78.Cm load 79.Nm 80.Cm unload 81.Sh DESCRIPTION 82The 83.Nm 84utility is used to manage software RAID configurations, supported by the 85GEOM RAID class. 86GEOM RAID class uses on-disk metadata to provide access to software-RAID 87volumes defined by different RAID BIOSes. 88Depending on RAID BIOS type and its metadata format, different subsets of 89configurations and features are supported. 90To allow booting from RAID volume, the metadata format should match the 91RAID BIOS type and its capabilities. 92To guarantee that these match, it is recommended to create volumes via the 93RAID BIOS interface, while experienced users are free to do it using this 94utility. 95.Pp 96The first argument to 97.Nm 98indicates an action to be performed: 99.Bl -tag -width ".Cm destroy" 100.It Cm label 101Create an array with single volume. 102The 103.Ar format 104argument specifies the on-disk metadata format to use for this array, 105such as "Intel". 106The 107.Ar label 108argument specifies the label of the created volume. 109The 110.Ar level 111argument specifies the RAID level of the created volume, such as: 112"RAID0", "RAID1", etc. 113The subsequent list enumerates providers to use as array components. 114The special name "NONE" can be used to reserve space for absent disks. 115The order of components can be important, depending on specific RAID level 116and metadata format. 
.Pp
Additional options include:
.Bl -tag -width ".Fl s Ar strip"
.It Fl f
Enforce creation of the specified configuration if it is officially
unsupported but technically possible.
.It Fl o Ar fmtopt
Specifies metadata format options.
.It Fl S Ar size
Use
.Ar size
bytes on each component for this volume.
Should be used if several volumes per array are planned, or if smaller
components are going to be inserted later.
Defaults to the size of the smallest component.
.It Fl s Ar strip
Specifies the strip size in bytes.
Defaults to 131072.
.El
.It Cm add
Create another volume on the existing array.
The
.Ar name
argument is the name of the existing array, as reported by the label command.
The remaining arguments are the same as for the label command.
.It Cm delete
Delete volume(s) from the existing array.
When the last volume is deleted, the array is also deleted and its metadata
erased.
The
.Ar name
argument is the name of the existing array.
The optional
.Ar label
or
.Ar num
argument specifies the volume to delete.
.Pp
Additional options include:
.Bl -tag -width ".Fl f"
.It Fl f
Delete volume(s) even if they are still open.
.El
.It Cm insert
Insert the specified provider(s) into the specified array in place of the
first missing or failed components.
If there are no such components, mark the disk(s) as spare.
.It Cm remove
Remove the specified provider(s) from the specified array and erase their
metadata.
If there are spare disks present, the removed disk(s) will be replaced by
spares.
.It Cm fail
Mark the given disk(s) as failed, removing them from active use unless
absolutely necessary due to exhausted redundancy.
If there are spare disks present, the failed disk(s) will be replaced with one
of them.
.It Cm stop
Stop the given array.
The metadata will not be erased.
.Pp
Additional options include:
.Bl -tag -width ".Fl f"
.It Fl f
Stop the given array even if some of its volumes are open.
.El
.It Cm list
See
.Xr geom 8 .
.It Cm status
See
.Xr geom 8 .
.It Cm load
See
.Xr geom 8 .
.It Cm unload
See
.Xr geom 8 .
.El
.Pp
Additional options include:
.Bl -tag -width ".Fl v"
.It Fl v
Be more verbose.
.El
.Sh SUPPORTED METADATA FORMATS
The GEOM RAID class follows a modular design, allowing different metadata
formats to be used.
Support is currently implemented for the following formats:
.Bl -tag -width "Intel"
.It DDF
The format defined by the SNIA Common RAID Disk Data Format v2.0 specification.
Used by some Adaptec RAID BIOSes and some hardware RAID controllers.
Because of the format's high flexibility, different implementations support
different sets of features and have different on-disk metadata layouts.
To provide compatibility, the GEOM RAID class mimics the capabilities
of the first detected DDF array.
Depending on that array, it may support a different number of disks per
volume, volumes per array, partitions per disk, etc.
The following configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks),
RAID1E (3+ disks), RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+ disks),
RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks), RAID6 (4+ disks),
RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).
.Pp
The format supports two options, "BE" and "LE", selecting the big-endian byte
order defined by the specification (the default) or the little-endian byte
order used by some Adaptec controllers.
.It Intel
The format used by Intel RAID BIOS.
Supports up to two volumes per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks).
Configurations not supported by Intel RAID BIOS, but enforceable at your own
risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).
.It JMicron
The format used by JMicron RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID10 (4 disks), CONCAT (2+ disks).
Configurations not supported by JMicron RAID BIOS, but enforceable at your own
risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+ disks).
.It NVIDIA
The format used by NVIDIA MediaShield RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by NVIDIA MediaShield RAID BIOS, but enforceable
at your own risk: RAID1 (3+ disks).
.It Promise
The format used by Promise and AMD/ATI RAID BIOSes.
Supports multiple volumes per array.
Each disk can be split to be used by up to two arbitrary volumes.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by these RAID BIOSes, but enforceable at your
own risk: RAID1 (3+ disks), RAID10 (6+ disks).
.It SiI
The format used by SiliconImage RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by SiliconImage RAID BIOS, but enforceable at your
own risk: RAID1 (3+ disks), RAID10 (6+ disks).
.El
.Sh SUPPORTED RAID LEVELS
The GEOM RAID class follows a modular design, allowing different RAID levels
to be used.
Full support for the following RAID levels is currently implemented:
RAID0, RAID1, RAID1E, RAID10, SINGLE, CONCAT.
The following RAID levels are supported as read-only for volumes in an optimal
state (without using redundancy): RAID4, RAID5, RAID5E, RAID5EE, RAID5R,
RAID6, RAIDMDF.
.Sh RAID LEVEL MIGRATION
The GEOM RAID class has no support for RAID level migration, which some
metadata formats allow.
If a migration was started from the BIOS or in some other way, make sure to
complete it there.
Do not run the GEOM RAID class on migrating volumes; doing so risks data
corruption!
.Sh 2TiB BARRIERS
The NVIDIA metadata format does not support volumes larger than 2 TiB.
.Sh SYSCTL VARIABLES
The following
.Xr sysctl 8
variables can be used to control the behavior of the
.Nm RAID
GEOM class.
.Bl -tag -width indent
.It Va kern.geom.raid.aggressive_spare : No 0
Use any disks without metadata that are connected to controllers of the vendor
matching the volume metadata format as spares.
Use this with great care to avoid losing data when connecting an unrelated
disk!
.It Va kern.geom.raid.clean_time : No 5
Mark the volume as clean when it has been idle for the specified number of
seconds.
.It Va kern.geom.raid.debug : No 0
Debug level of the
.Nm RAID
GEOM class.
.It Va kern.geom.raid.enable : No 1
Enable on-disk metadata taste.
.It Va kern.geom.raid.idle_threshold : No 1000000
Time in microseconds to consider a volume idle for rebuild purposes.
.It Va kern.geom.raid.name_format : No 0
Provider name format: 0 -- raid/r{num}, 1 -- raid/{label}.
.It Va kern.geom.raid.read_err_thresh : No 10
Number of read errors after which a disk is considered failed.
Write errors are always treated as disk failures.
.It Va kern.geom.raid.start_timeout : No 30
Time to wait for missing array components on startup.
.It Va kern.geom.raid. Ns Ar X Ns Va .enable : No 1
Enable taste for the specified metadata or transformation module.
.El
.Sh EXIT STATUS
Exit status is 0 on success, and non-zero if the command fails.
.Sh SEE ALSO
.Xr geom 4 ,
.Xr geom 8 ,
.Xr gvinum 8
.Sh HISTORY
The
.Nm
utility appeared in
.Fx 9.0 .
.Sh AUTHORS
.An Alexander Motin Aq Mt mav@FreeBSD.org
.An M. Warner Losh Aq Mt imp@FreeBSD.org
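.Sh EXAMPLES
The following examples are illustrative sketches.
The disk names (such as
.Pa ada0 )
and the array name (shown here as
.Dq Intel-123456 )
are placeholders that will differ on a real system; the actual array name is
reported by the
.Cm label
command when the array is created and is shown by
.Cm list .
.Pp
Create a RAID1 volume labeled
.Dq data
from two disks, using the Intel metadata format:
.Dl graid label Intel data RAID1 ada0 ada1
.Pp
Create the same volume with only one disk present, reserving space for a
second disk to be inserted later:
.Dl graid label Intel data RAID1 ada0 NONE
.Pp
Insert a new disk into the array, replacing the first missing or failed
component, or becoming a spare if none is missing:
.Dl graid insert Intel-123456 ada1
.Pp
Mark a disk as failed and then remove it from the array, erasing its metadata:
.Dl graid fail Intel-123456 ada0
.Dl graid remove Intel-123456 ada0
.Pp
Stop the array without erasing its metadata:
.Dl graid stop Intel-123456
.Pp
Name provider devices after volume labels (e.g.\& raid/data) instead of
numbers:
.Dl sysctl kern.geom.raid.name_format=1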