1.\" 2.\" CDDL HEADER START 3.\" 4.\" The contents of this file are subject to the terms of the 5.\" Common Development and Distribution License (the "License"). 6.\" You may not use this file except in compliance with the License. 7.\" 8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9.\" or http://www.opensolaris.org/os/licensing. 10.\" See the License for the specific language governing permissions 11.\" and limitations under the License. 12.\" 13.\" When distributing Covered Code, include this CDDL HEADER in each 14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. 15.\" If applicable, add the following below this CDDL HEADER, with the 16.\" fields enclosed by brackets "[]" replaced with your own identifying 17.\" information: Portions Copyright [yyyy] [name of copyright owner] 18.\" 19.\" CDDL HEADER END 20.\" 21.\" 22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. 23.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. 24.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. 25.\" Copyright (c) 2017 Datto Inc. 26.\" Copyright (c) 2018 George Melikov. All Rights Reserved. 27.\" Copyright 2017 Nexenta Systems, Inc. 28.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. 29.\" 30.Dd August 9, 2019 31.Dt ZPOOL-CREATE 8 32.Os 33.Sh NAME 34.Nm zpool Ns Pf - Cm create 35.Nd Creates a new ZFS storage pool 36.Sh SYNOPSIS 37.Nm 38.Cm create 39.Op Fl dfn 40.Op Fl m Ar mountpoint 41.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ... 42.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc 43.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ... 44.Op Fl R Ar root 45.Ar pool vdev Ns ... 46.Sh DESCRIPTION 47.Bl -tag -width Ds 48.It Xo 49.Nm 50.Cm create 51.Op Fl dfn 52.Op Fl m Ar mountpoint 53.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ... 54.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ... 55.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ... 56.Op Fl R Ar root 57.Op Fl t Ar tname 58.Ar pool vdev Ns ... 59.Xc 60Creates a new storage pool containing the virtual devices specified on the 61command line. 62The pool name must begin with a letter, and can only contain 63alphanumeric characters as well as underscore 64.Pq Qq Sy _ , 65dash 66.Pq Qq Sy \&- , 67colon 68.Pq Qq Sy \&: , 69space 70.Pq Qq Sy \&\ , 71and period 72.Pq Qq Sy \&. . 73The pool names 74.Sy mirror , 75.Sy raidz , 76.Sy spare 77and 78.Sy log 79are reserved, as are names beginning with 80.Sy mirror , 81.Sy raidz , 82.Sy spare , 83and the pattern 84.Sy c[0-9] . 85The 86.Ar vdev 87specification is described in the 88.Em Virtual Devices 89section of 90.Xr zpoolconcepts. 91.Pp 92The command attempts to verify that each device specified is accessible and not 93currently in use by another subsystem. However this check is not robust enough 94to detect simultaneous attempts to use a new device in different pools, even if 95.Sy multihost 96is 97.Sy enabled. 98The 99administrator must ensure that simultaneous invocations of any combination of 100.Sy zpool replace , 101.Sy zpool create , 102.Sy zpool add , 103or 104.Sy zpool labelclear , 105do not refer to the same device. Using the same device in two pools will 106result in pool corruption. 107.Pp 108There are some uses, such as being currently mounted, or specified as the 109dedicated dump device, that prevents a device from ever being used by ZFS. 110Other uses, such as having a preexisting UFS file system, can be overridden with 111the 112.Fl f 113option. 
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default, all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Xr zpoolprops 8
manual page for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See the
.Xr zpool-features 5
manual page for a list of valid features that can be set.
The value can be either
.Sy disabled
or
.Sy enabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Xr zfsprops 8
manual page for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Ar tname
while the on-disk name will be the name specified as the pool name
.Ar pool .
This will set the default cachefile property to
.Sy none .
This is intended to handle name space collisions when creating pools for other
systems, such as virtual machines or physical machines whose pools live on
network block devices.
.El
.El
.Sh SEE ALSO
.Xr zpool-destroy 8 ,
.Xr zpool-export 8 ,
.Xr zpool-import 8
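.Sh EXAMPLES
The following commands are illustrative sketches of the options described
above; the pool, device, and temporary names are hypothetical and the property
values shown are not recommendations.
.Pp
Create a raidz pool with an alternate mount point for the root dataset and a
file system property set on it:
.Bd -literal
# zpool create -m /export/tank -O compression=lz4 tank raidz sda sdb sdc
.Ed
.Pp
Create a pool under a temporary in-core name, for instance while preparing
storage for another system:
.Bd -literal
# zpool create -t tmptank tank mirror sdc sdd
.Ed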