# 365b89e8 | 30-Dec-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: Switch several ioctls to using nvlists
For requests that hand off queues from userspace to the kernel, as well as the request to fetch reconnect parameters from the kernel, switch from using flat structures to nvlists. In particular, this will permit adding support for additional transports in the future without breaking the ABI of the structures.
Note that this is an ABI break for the ioctls used by nvmf(4) and nvmft(4). Since this is only present in main, I did not bother implementing compatibility shims.
Inspired by: imp (suggestion on a different review)
Reviewed by: imp
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D48230
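To illustrate the nvlist approach, here is a minimal userland sketch that packs connection parameters into an nvlist with the standard nv(9) API and serializes it for delivery through an ioctl. The key names ("transport", "address", "qsize") are illustrative assumptions, not necessarily the keys nvmf(4) actually uses.

    #include <sys/nv.h>
    #include <err.h>
    #include <stdlib.h>

    int
    main(void)
    {
        nvlist_t *nvl;
        void *buf;
        size_t len;

        /* Values are looked up by key name, so a new transport can add
         * its own keys later without disturbing existing consumers. */
        nvl = nvlist_create(0);
        nvlist_add_string(nvl, "transport", "tcp");     /* illustrative key */
        nvlist_add_string(nvl, "address", "192.0.2.1"); /* illustrative key */
        nvlist_add_number(nvl, "qsize", 128);           /* illustrative key */
        if (nvlist_error(nvl) != 0)
            errc(1, nvlist_error(nvl), "nvlist_add");

        /* Serialize; the resulting buffer and length are what an
         * nvlist-based ioctl carries across the user/kernel boundary. */
        buf = nvlist_pack(nvl, &len);
        if (buf == NULL)
            err(1, "nvlist_pack");

        free(buf);
        nvlist_destroy(nvl);
        return (0);
    }

Because the consumer looks up keys by name rather than by struct offset, adding a transport-specific key later is not an ABI break, which is the motivation given above.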
Revision tags: release/14.2.0
# 4d3b659f | 11-Nov-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: Track SQ flow control
This isn't really needed since the host driver never submits more commands to a queue than it can hold, but I noticed that the recently added SQ head and tail sysctl nodes were not updating. This fixes that and also uses these values to assert that we never submit a command while a queue pair is full.
Sponsored by: Chelsio Communications
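The assertion mentioned above reduces to a circular-buffer full check on the submission queue: the head advances as the controller reports consumed entries (the SQHD field of each completion), and the tail advances as the host submits commands. A minimal sketch with illustrative names, not the driver's actual code:

    #include <stdbool.h>
    #include <stdint.h>

    /* The SQ is full when advancing the tail by one slot (modulo the
     * queue size) would make it collide with the head. */
    static bool
    sq_full(uint16_t sqhd, uint16_t sqtail, uint16_t qsize)
    {
        return ((uint16_t)((sqtail + 1) % qsize) == sqhd);
    }

A driver could then assert !sq_full(...) (e.g. with KASSERT) before queueing each command, which is the check this commit describes adding.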
# 931dd5fe | 02-Nov-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: Add sysctl nodes for each queue pair
These report the queue size, queue head, queue tail, and the number of commands submitted.
Sponsored by: Chelsio Communications
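Nodes like these are normally created with the kernel's sysctl_ctx / SYSCTL_ADD_* interface. The following sketch shows the general shape; the struct layout and node names are assumptions for illustration, not the driver's actual code:

    #include <sys/param.h>
    #include <sys/sysctl.h>

    /* Illustrative per-queue-pair counters; the real layout differs. */
    struct qp_stats {
        u_int    qsize;
        u_int    head;
        u_int    tail;
        uint64_t num_cmds;
    };

    static void
    qp_sysctl_init(struct sysctl_ctx_list *ctx, struct sysctl_oid *parent,
        const char *name, struct qp_stats *st)
    {
        struct sysctl_oid *oid;
        struct sysctl_oid_list *children;

        /* One read-only node per queue pair. */
        oid = SYSCTL_ADD_NODE(ctx, SYSCTL_CHILDREN(parent), OID_AUTO, name,
            CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "queue pair");
        children = SYSCTL_CHILDREN(oid);

        SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "qsize", CTLFLAG_RD,
            &st->qsize, 0, "Number of entries in the queue");
        SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "head", CTLFLAG_RD,
            &st->head, 0, "Current queue head");
        SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "tail", CTLFLAG_RD,
            &st->tail, 0, "Current queue tail");
        SYSCTL_ADD_UQUAD(ctx, children, OID_AUTO, "num_cmds", CTLFLAG_RD,
            &st->num_cmds, "Commands submitted on this queue");
    }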
Revision tags: release/13.4.0, release/14.1.0
# a1eda741 | 03-May-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: The in-kernel NVMe over Fabrics host
This is the client (initiator in SCSI terms) for NVMe over Fabrics. Userland is responsible for creating a set of queue pairs and then handing them off via an ioctl to this driver, e.g. via the 'connect' command from nvmecontrol(8). An nvmeX new-bus device is created at the top level to represent the remote controller, similar to PCI nvmeX devices for PCI-express controllers.
As with nvme(4), namespace devices named /dev/nvmeXnsY are created, and pass-through commands can be submitted to either the namespace devices or the controller device. For example, 'nvmecontrol identify nvmeX' works for a remote Fabrics controller the same as for a PCI-express controller.
nvmf exports remote namespaces via nda(4) devices using the new NVMF CAM transport. nvmf does not support nvd(4), only nda(4).
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D44714
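To make the pass-through point concrete: the same nvme(4) pass-through ioctl used against PCI-express controllers works against a Fabrics nvmeX device. Below is a hedged sketch of an identify-controller command; the struct usage follows <dev/nvme/nvme.h> as best I can reconstruct it and should be checked against the real header:

    #include <sys/param.h>
    #include <sys/endian.h>
    #include <sys/ioctl.h>
    #include <dev/nvme/nvme.h>
    #include <err.h>
    #include <fcntl.h>
    #include <string.h>

    int
    main(void)
    {
        struct nvme_pt_command pt;
        char buf[4096];
        int fd;

        /* The device node looks the same whether nvme0 is a local PCIe
         * controller or a remote Fabrics association. */
        fd = open("/dev/nvme0", O_RDWR);
        if (fd < 0)
            err(1, "open");

        memset(&pt, 0, sizeof(pt));
        pt.cmd.opc = NVME_OPC_IDENTIFY;
        pt.cmd.cdw10 = htole32(1);  /* CNS 1: identify controller */
        pt.buf = buf;
        pt.len = sizeof(buf);
        pt.is_read = 1;             /* controller-to-host data transfer */

        if (ioctl(fd, NVME_PASSTHROUGH_CMD, &pt) < 0)
            err(1, "ioctl");
        return (0);
    }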