1.\"
2.\" Copyright (c) 2012-2016 Intel Corporation
3.\" All rights reserved.
4.\"
5.\" Redistribution and use in source and binary forms, with or without
6.\" modification, are permitted provided that the following conditions
7.\" are met:
8.\" 1. Redistributions of source code must retain the above copyright
9.\"    notice, this list of conditions, and the following disclaimer,
10.\"    without modification.
11.\" 2. Redistributions in binary form must reproduce at minimum a disclaimer
12.\"    substantially similar to the "NO WARRANTY" disclaimer below
13.\"    ("Disclaimer") and any redistribution must be conditioned upon
14.\"    including a substantially similar Disclaimer requirement for further
15.\"    binary redistribution.
16.\"
17.\" NO WARRANTY
18.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
19.\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
20.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
21.\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
22.\" HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
23.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
24.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
25.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
26.\" STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
27.\" IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
28.\" POSSIBILITY OF SUCH DAMAGES.
29.\"
30.\" nvme driver man page.
31.\"
32.\" Author: Jim Harris <jimharris@FreeBSD.org>
33.\"
34.\" $FreeBSD$
35.\"
.Dd May 18, 2019
.Dt NVME 4
.Os
.Sh NAME
.Nm nvme
.Nd NVM Express core driver
.Sh SYNOPSIS
To compile this driver into your kernel,
place the following line in your kernel configuration file:
.Bd -ragged -offset indent
.Cd "device nvme"
.Ed
.Pp
Or, to load the driver as a module at boot, place the following line in
.Xr loader.conf 5 :
.Bd -literal -offset indent
nvme_load="YES"
.Ed
.Pp
Most users will also want to enable
.Xr nvd 4
to expose NVM Express namespaces as disk devices which can be
partitioned.
Note that in NVM Express terms, a namespace is roughly equivalent to a
SCSI LUN.
.Sh DESCRIPTION
The
.Nm
driver provides support for NVM Express (NVMe) controllers, including:
.Bl -bullet
.It
Hardware initialization
.It
Per-CPU IO queue pairs
.It
API for registering NVMe namespace consumers such as
.Xr nvd 4
or
.Xr nda 4
.It
API for submitting NVM commands to namespaces
.It
Ioctls for controller and namespace configuration and management
.El
.Pp
The
.Nm
driver creates controller device nodes in the format
.Pa /dev/nvmeX
and namespace device nodes in the format
.Pa /dev/nvmeXnsY .
Note that the NVM Express specification starts numbering namespaces at 1,
not 0, and this driver follows that convention.
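.Pp
For example, the first namespace on the first controller appears as follows
(device names shown are illustrative):
.Bd -literal -offset indent
/dev/nvme0      controller device node
/dev/nvme0ns1   first namespace on that controller
.Ed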
.Sh CONFIGURATION
By default,
.Nm
will create an I/O queue pair for each CPU, provided enough MSI-X vectors
and NVMe queue pairs can be allocated.
If not enough vectors or queue pairs are available,
.Nm
will use a smaller number of queue pairs and assign multiple CPUs per
queue pair.
.Pp
To force a single I/O queue pair shared by all CPUs, set the following
tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.per_cpu_io_queues=0
.Ed
.Pp
To assign more than one CPU per I/O queue pair, thereby reducing the number
of MSI-X vectors consumed by the device, set the following tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.min_cpus_per_ioq=X
.Ed
.Pp
To force legacy interrupts for all
.Nm
driver instances, set the following tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.force_intx=1
.Ed
.Pp
Note that use of INTx implies disabling of per-CPU I/O queue pairs.
.Pp
By default, the
.Xr nvd 4
driver provides the disk devices for NVM Express namespaces.
The
.Xr nda 4
driver can be used instead.
The
.Xr nvd 4
driver performs better with smaller transactions and few TRIM
commands.
It sends all commands directly to the drive immediately.
The
.Xr nda 4
driver performs better with larger transactions and also collapses
TRIM commands, giving better performance.
It can queue commands to the drive; combine
.Dv BIO_DELETE
commands into a single trip; and
use the CAM I/O scheduler to bias one type of operation over another.
To select the
.Xr nda 4
driver, set the following tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.use_nvd=0
.Ed
.Pp
This value may also be set in the kernel config file with
.Bd -literal -offset indent
.Cd options NVME_USE_NVD=0
.Ed
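.Pp
As a quick, illustrative check of which disk driver attached after boot,
the device node names differ between the two (unit numbers shown here are
examples):
.Bd -literal -offset indent
ls /dev/nvd*    # namespaces exposed by nvd(4), e.g. /dev/nvd0
ls /dev/nda*    # namespaces exposed by nda(4), e.g. /dev/nda0
.Ed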
.Sh SYSCTL VARIABLES
The following controller-level sysctls are currently implemented:
.Bl -tag -width indent
.It Va dev.nvme.0.num_cpus_per_ioq
(R) Number of CPUs associated with each I/O queue pair.
.It Va dev.nvme.0.int_coal_time
(R/W) Interrupt coalescing timer period in microseconds.
Set to 0 to disable.
.It Va dev.nvme.0.int_coal_threshold
(R/W) Interrupt coalescing threshold in number of command completions.
Set to 0 to disable.
.El
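.Pp
For example, the read/write values above can be inspected or changed at
runtime with
.Xr sysctl 8
(controller unit 0 is used here for illustration):
.Bd -literal -offset indent
sysctl dev.nvme.0.int_coal_time dev.nvme.0.int_coal_threshold
sysctl dev.nvme.0.int_coal_time=1000
.Ed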
.Pp
The following queue pair-level sysctls are currently implemented.
Admin queue sysctls take the format of dev.nvme.0.adminq and I/O queue sysctls
take the format of dev.nvme.0.ioq0.
.Bl -tag -width indent
.It Va dev.nvme.0.ioq0.num_entries
(R) Number of entries in this queue pair's command and completion queue.
.It Va dev.nvme.0.ioq0.num_tr
(R) Number of nvme_tracker structures currently allocated for this queue pair.
.It Va dev.nvme.0.ioq0.num_prp_list
(R) Number of nvme_prp_list structures currently allocated for this queue pair.
.It Va dev.nvme.0.ioq0.sq_head
(R) Current location of the submission queue head pointer as observed by
the driver.
The head pointer is incremented by the controller as it takes commands off
of the submission queue.
.It Va dev.nvme.0.ioq0.sq_tail
(R) Current location of the submission queue tail pointer as observed by
the driver.
The driver increments the tail pointer after writing a command
into the submission queue to signal that a new command is ready to be
processed.
.It Va dev.nvme.0.ioq0.cq_head
(R) Current location of the completion queue head pointer as observed by
the driver.
The driver increments the head pointer after finishing
with a completion entry that was posted by the controller.
.It Va dev.nvme.0.ioq0.num_cmds
(R) Number of commands that have been submitted on this queue pair.
.It Va dev.nvme.0.ioq0.dump_debug
(W) Writing 1 to this sysctl will dump the full contents of the submission
and completion queues to the console.
.El
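.Pp
For example, the state of the admin queue and the first I/O queue pair of
controller 0 can be inspected, and the latter's contents dumped to the
console, with
.Xr sysctl 8
(unit and queue numbers are illustrative):
.Bd -literal -offset indent
sysctl dev.nvme.0.adminq
sysctl dev.nvme.0.ioq0
sysctl dev.nvme.0.ioq0.dump_debug=1
.Ed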
.Sh SEE ALSO
.Xr nda 4 ,
.Xr nvd 4 ,
.Xr pci 4 ,
.Xr nvmecontrol 8 ,
.Xr disk 9
.Sh HISTORY
The
.Nm
driver first appeared in
.Fx 9.2 .
.Sh AUTHORS
.An -nosplit
The
.Nm
driver was developed by Intel and originally written by
.An Jim Harris Aq Mt jimharris@FreeBSD.org ,
with contributions from
.An Joe Golio
at EMC.
.Pp
This man page was written by
.An Jim Harris Aq Mt jimharris@FreeBSD.org .