f6d434f1 | 05-Jun-2024 | John Baldwin <jhb@FreeBSD.org>

nvmf: Rescan all namespaces if the changed NS log page is too large

Previously this just punted with a warning message.

Reviewed by:            imp
Sponsored by:           Chelsio Communications
Differential Revision:  https://reviews.freebsd.org/D45460
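The fallback this commit adds maps to a spec-level behavior: the NVMe Changed Namespace List log page (Log Identifier 04h) holds at most 1024 namespace IDs, and a controller with more changed namespaces than that reports a single entry of 0xFFFFFFFF, telling the host to rescan everything. The sketch below only illustrates that decision; rescan_namespace() and rescan_all_namespaces() are hypothetical stand-ins, not the driver's actual rescan code, and the driver's exact condition may differ.

```c
/*
 * Sketch of the spec-level fallback described above.  The Changed
 * Namespace List log page holds up to 1024 NSIDs; a first entry of
 * 0xFFFFFFFF means "more changed than fit", so rescan everything.
 * rescan_namespace()/rescan_all_namespaces() are hypothetical stubs.
 */
#include <stdint.h>
#include <stdio.h>

#define	CHANGED_NS_LIST_ENTRIES	1024		/* fixed size of the log page */
#define	CHANGED_NS_ALL		0xffffffffu	/* overflow marker per the spec */

static void
rescan_namespace(uint32_t nsid)
{
	printf("rescan namespace %u\n", nsid);
}

static void
rescan_all_namespaces(void)
{
	printf("rescan all namespaces\n");
}

static void
handle_changed_ns_log(const uint32_t nsids[CHANGED_NS_LIST_ENTRIES])
{
	/* List overflowed: fall back to a full rescan instead of punting. */
	if (nsids[0] == CHANGED_NS_ALL) {
		rescan_all_namespaces();
		return;
	}
	/* Otherwise rescan only the namespaces listed (zero-terminated). */
	for (int i = 0; i < CHANGED_NS_LIST_ENTRIES && nsids[i] != 0; i++)
		rescan_namespace(nsids[i]);
}

int
main(void)
{
	static uint32_t log[CHANGED_NS_LIST_ENTRIES] = { 1, 3, 7 };

	handle_changed_ns_log(log);	/* rescans namespaces 1, 3, 7 */
	log[0] = CHANGED_NS_ALL;
	handle_changed_ns_log(log);	/* falls back to a full rescan */
	return (0);
}
```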
a1eda741 | 03-May-2024 | John Baldwin <jhb@FreeBSD.org>

nvmf: The in-kernel NVMe over Fabrics host

This is the client (initiator in SCSI terms) for NVMe over Fabrics.
Userland is responsible for creating a set of queue pairs and then
handing them off via an ioctl to this driver, e.g. via the 'connect'
command from nvmecontrol(8). An nvmeX new-bus device is created at the
top level to represent the remote controller, similar to PCI nvmeX
devices for PCI-express controllers.

As with nvme(4), namespace devices named /dev/nvmeXnsY are created, and
pass-through commands can be submitted to either the namespace devices
or the controller device. For example, 'nvmecontrol identify nvmeX'
works for a remote Fabrics controller the same as for a PCI-express
controller.

nvmf exports remote namespaces via nda(4) devices using the new NVMF
CAM transport. nvmf does not support nvd(4), only nda(4).

Sponsored by:           Chelsio Communications
Differential Revision:  https://reviews.freebsd.org/D44714
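The division of labor this commit describes — userland negotiates the fabric queue pairs, then a single ioctl hands them to the kernel, which creates the nvmeX new-bus device and namespace nodes — is normally driven by nvmecontrol(8)'s 'connect' command. The sketch below only illustrates that shape; /dev/nvmf, NVMF_HANDOFF, struct nvmf_handoff_params, and the connect_*_queue() helpers are hypothetical placeholders, not the driver's real handoff interface.

```c
/*
 * Illustrative sketch only.  /dev/nvmf, NVMF_HANDOFF,
 * struct nvmf_handoff_params, and the connect_*_queue() helpers are
 * hypothetical placeholders; the real handoff ioctl and structures are
 * defined by the nvmf driver and exercised by nvmecontrol(8) 'connect'.
 */
#include <sys/types.h>
#include <sys/ioctl.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

#define	NUM_IO_QUEUES	4

struct nvmf_handoff_params {
	int	admin_qp_fd;			/* connected admin queue pair */
	int	io_qp_fds[NUM_IO_QUEUES];	/* connected I/O queue pairs */
	u_int	num_io_queues;
};

#define	NVMF_HANDOFF	_IOW('f', 1, struct nvmf_handoff_params)

/*
 * Stand-ins for the fabric negotiation userland performs; they would
 * return a descriptor representing an established queue pair.
 */
static int
connect_admin_queue(const char *addr, const char *subnqn)
{
	(void)addr; (void)subnqn;
	return (-1);	/* placeholder */
}

static int
connect_io_queue(const char *addr, const char *subnqn, u_int qid)
{
	(void)addr; (void)subnqn; (void)qid;
	return (-1);	/* placeholder */
}

int
main(void)
{
	struct nvmf_handoff_params hp = { .num_io_queues = NUM_IO_QUEUES };
	int fd;

	/* 1. Userland creates the admin and I/O queue pairs over the fabric. */
	hp.admin_qp_fd = connect_admin_queue("192.0.2.1:4420",
	    "nqn.2016-06.io.example:target0");
	for (u_int i = 0; i < NUM_IO_QUEUES; i++)
		hp.io_qp_fds[i] = connect_io_queue("192.0.2.1:4420",
		    "nqn.2016-06.io.example:target0", i + 1);

	/*
	 * 2. Hand the connected queues to the kernel driver, which then
	 *    creates the nvmeX new-bus device and /dev/nvmeXnsY nodes.
	 */
	fd = open("/dev/nvmf", O_RDWR);
	if (fd == -1)
		err(1, "open(/dev/nvmf)");
	if (ioctl(fd, NVMF_HANDOFF, &hp) == -1)
		err(1, "NVMF_HANDOFF");
	close(fd);
	return (0);
}
```

In practice all of this is hidden behind 'nvmecontrol connect'; once the handoff completes, 'nvmecontrol identify nvmeX' and the nda(4) peripherals work as the commit message describes.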