ef052adf | 26-Sep-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: Narrow scope of sim lock in nvmf_sim_io
nvmf_submit_request() handles races with concurrent queue pair destruction (or the queue pair being destroyed between nvmf_allocate_request and nvmf_submit_request), so the lock is not needed here. This avoids holding the lock across transport-specific logic such as queueing mbufs for PDUs to a socket buffer, etc.
Holding the lock across nvmf_allocate_request() ensures that the queue pair pointers in the softc are still valid as shutdown attempts will block on the lock before destroying the queue pairs.
Sponsored by: Chelsio Communications
aec2ae8b | 26-Sep-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: Always use xpt_done instead of xpt_done_direct
The last reference on a pending I/O request might be held by an mbuf in the socket buffer. When this mbuf is freed, the I/O request is completed which triggers completion of the CCB. However, this can occur with locks held (e.g. with so_snd locked when the mbuf is freed by sbdrop()) raising a LOR between so_snd and the CAM device lock. Instead, defer CCB completion processing to a thread where locks are not held.
Sponsored by: Chelsio Communications
Revision tags: release/13.4.0
f46d4971 | 05-Jun-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: Handle shutdowns more gracefully
If an association is disconnected during a clean shutdown, abort all pending and future I/O requests with an error to avoid hangs either due to filesystem unmounts or a stuck GEOM event.
If an association is connected during a clean shutdown, gracefully disconnect from the remote controller and close the open queues.
Reviewed by: imp
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D45462
aacaeeee | 05-Jun-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: Permit failing I/O requests while disconnected
Add a kern.nvmf.fail_on_disconnection sysctl similar to the kern.iscsi.fail_on_disconnection sysctl. This causes pending I/O requests to fail with an error if an association is disconnected instead of requeueing to be retried once the association is reconnected. As with iSCSI, the default is to queue and retry operations.
Reviewed by: imp
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D45308
Revision tags: release/14.1.0
1f029b86 | 10-May-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: Use strlcpy instead of strncpy to ensure termination
Reported by: Coverity Scan
CID: 1545054
Sponsored by: Chelsio Communications
a1eda741 | 03-May-2024 | John Baldwin <jhb@FreeBSD.org>
nvmf: The in-kernel NVMe over Fabrics host
This is the client (initiator in SCSI terms) for NVMe over Fabrics. Userland is responsible for creating a set of queue pairs and then handing them off via an ioctl to this driver, e.g. via the 'connect' command from nvmecontrol(8). An nvmeX new-bus device is created at the top-level to represent the remote controller similar to PCI nvmeX devices for PCI-express controllers.
As with nvme(4), namespace devices named /dev/nvmeXnsY are created and pass through commands can be submitted to either the namespace devices or the controller device. For example, 'nvmecontrol identify nvmeX' works for a remote Fabrics controller the same as for a PCI-express controller.
nvmf exports remote namespaces via nda(4) devices using the new NVMF CAM transport. nvmf does not support nvd(4), only nda(4).
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D44714