1.\" 2.\" Copyright (c) 2012-2016 Intel Corporation 3.\" All rights reserved. 4.\" 5.\" Redistribution and use in source and binary forms, with or without 6.\" modification, are permitted provided that the following conditions 7.\" are met: 8.\" 1. Redistributions of source code must retain the above copyright 9.\" notice, this list of conditions, and the following disclaimer, 10.\" without modification. 11.\" 2. Redistributions in binary form must reproduce at minimum a disclaimer 12.\" substantially similar to the "NO WARRANTY" disclaimer below 13.\" ("Disclaimer") and any redistribution must be conditioned upon 14.\" including a substantially similar Disclaimer requirement for further 15.\" binary redistribution. 16.\" 17.\" NO WARRANTY 18.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 19.\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 20.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 21.\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 22.\" HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 23.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 24.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 25.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 26.\" STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 27.\" IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 28.\" POSSIBILITY OF SUCH DAMAGES. 29.\" 30.\" nvme driver man page. 31.\" 32.\" Author: Jim Harris <jimharris@FreeBSD.org> 33.\" 34.\" $FreeBSD$ 35.\" 36.Dd May 18, 2019 37.Dt NVME 4 38.Os 39.Sh NAME 40.Nm nvme 41.Nd NVM Express core driver 42.Sh SYNOPSIS 43To compile this driver into your kernel, 44place the following line in your kernel configuration file: 45.Bd -ragged -offset indent 46.Cd "device nvme" 47.Ed 48.Pp 49Or, to load the driver as a module at boot, place the following line in 50.Xr loader.conf 5 : 51.Bd -literal -offset indent 52nvme_load="YES" 53.Ed 54.Pp 55Most users will also want to enable 56.Xr nvd 4 57to expose NVM Express namespaces as disk devices which can be 58partitioned. 59Note that in NVM Express terms, a namespace is roughly equivalent to a 60SCSI LUN. 61.Sh DESCRIPTION 62The 63.Nm 64driver provides support for NVM Express (NVMe) controllers, such as: 65.Bl -bullet 66.It 67Hardware initialization 68.It 69Per-CPU IO queue pairs 70.It 71API for registering NVMe namespace consumers such as 72.Xr nvd 4 73or 74.Xr nda 4 75.It 76API for submitting NVM commands to namespaces 77.It 78Ioctls for controller and namespace configuration and management 79.El 80.Pp 81The 82.Nm 83driver creates controller device nodes in the format 84.Pa /dev/nvmeX 85and namespace device nodes in 86the format 87.Pa /dev/nvmeXnsY . 88Note that the NVM Express specification starts numbering namespaces at 1, 89not 0, and this driver follows that convention. 90.Sh CONFIGURATION 91By default, 92.Nm 93will create an I/O queue pair for each CPU, provided enough MSI-X vectors 94and NVMe queue pairs can be allocated. 95If not enough vectors or queue 96pairs are available, nvme(4) will use a smaller number of queue pairs and 97assign multiple CPUs per queue pair. 
.Pp
To force a single I/O queue pair shared by all CPUs, set the following
tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.per_cpu_io_queues=0
.Ed
.Pp
To assign more than one CPU per I/O queue pair, thereby reducing the number
of MSI-X vectors consumed by the device, set the following tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.min_cpus_per_ioq=X
.Ed
.Pp
To force legacy interrupts for all
.Nm
driver instances, set the following tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.force_intx=1
.Ed
.Pp
Note that use of INTx implies disabling of per-CPU I/O queue pairs.
.Pp
The
.Xr nvd 4
driver is used to provide a disk driver to the system by default.
The
.Xr nda 4
driver can also be used instead.
The
.Xr nvd 4
driver performs better with smaller transactions and few TRIM
commands.
It sends all commands directly to the drive immediately.
The
.Xr nda 4
driver performs better with larger transactions and also collapses
TRIM commands, giving better performance.
It can queue commands to the drive; combine
.Dv BIO_DELETE
commands into a single trip; and
use the CAM I/O scheduler to bias one type of operation over another.
To select the
.Xr nda 4
driver, set the following tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.use_nvd=0
.Ed
.Pp
This value may also be set in the kernel config file with
.Bd -literal -offset indent
.Cd options NVME_USE_NVD=0
.Ed
.Pp
When there is an error,
.Nm
prints only the most relevant information about the command by default.
To enable dumping of all information about the command, set the following
tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.verbose_cmd_dump=1
.Ed
.Sh SYSCTL VARIABLES
The following controller-level sysctls are currently implemented:
.Bl -tag -width indent
.It Va dev.nvme.0.num_cpus_per_ioq
(R) Number of CPUs associated with each I/O queue pair.
.It Va dev.nvme.0.int_coal_time
(R/W) Interrupt coalescing timer period in microseconds.
Set to 0 to disable.
.It Va dev.nvme.0.int_coal_threshold
(R/W) Interrupt coalescing threshold in number of command completions.
Set to 0 to disable.
.El
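.Pp
For example, the interrupt coalescing settings above can be changed at
runtime with
.Xr sysctl 8 ;
the values shown here are illustrative only:
.Bd -literal -offset indent
# sysctl dev.nvme.0.int_coal_time=100
# sysctl dev.nvme.0.int_coal_threshold=0
.Ed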
.Pp
The following queue pair-level sysctls are currently implemented.
Admin queue sysctls take the format of
.Va dev.nvme.0.adminq
and I/O queue sysctls take the format of
.Va dev.nvme.0.ioq0 .
.Bl -tag -width indent
.It Va dev.nvme.0.ioq0.num_entries
(R) Number of entries in this queue pair's command and completion queue.
.It Va dev.nvme.0.ioq0.num_tr
(R) Number of nvme_tracker structures currently allocated for this queue pair.
.It Va dev.nvme.0.ioq0.num_prp_list
(R) Number of nvme_prp_list structures currently allocated for this queue pair.
.It Va dev.nvme.0.ioq0.sq_head
(R) Current location of the submission queue head pointer as observed by
the driver.
The head pointer is incremented by the controller as it takes commands off
of the submission queue.
.It Va dev.nvme.0.ioq0.sq_tail
(R) Current location of the submission queue tail pointer as observed by
the driver.
The driver increments the tail pointer after writing a command
into the submission queue to signal that a new command is ready to be
processed.
.It Va dev.nvme.0.ioq0.cq_head
(R) Current location of the completion queue head pointer as observed by
the driver.
The driver increments the head pointer after finishing
with a completion entry that was posted by the controller.
.It Va dev.nvme.0.ioq0.num_cmds
(R) Number of commands that have been submitted on this queue pair.
.It Va dev.nvme.0.ioq0.dump_debug
(W) Writing 1 to this sysctl will dump the full contents of the submission
and completion queues to the console.
.El
.Sh SEE ALSO
.Xr nda 4 ,
.Xr nvd 4 ,
.Xr pci 4 ,
.Xr nvmecontrol 8 ,
.Xr disk 9
.Sh HISTORY
The
.Nm
driver first appeared in
.Fx 9.2 .
.Sh AUTHORS
.An -nosplit
The
.Nm
driver was developed by Intel and originally written by
.An Jim Harris Aq Mt jimharris@FreeBSD.org ,
with contributions from
.An Joe Golio
at EMC.
.Pp
This man page was written by
.An Jim Harris Aq Mt jimharris@FreeBSD.org .