History log of /linux/drivers/nvme/host/fabrics.c (Results 976 – 989 of 989)
Revision  Date  Author  Comments
# 9b0dd49e 05-Sep-2016 Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Merge 4.8-rc5 into usb-testing

We want the USB fixes in here for testing and merge issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# fbc1ec2e 05-Sep-2016 Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Merge 4.8-rc5 into char-misc-next

We want the fixes in here for merging and testing.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 0141af18 03-Sep-2016 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'for-linus' of git://git.kernel.dk/linux-block

Pull block fixes from Jens Axboe:
"A collection of fixes for the nvme over fabrics code"

* 'for-linus' of git://git.kernel.dk/linux-block:
nvme-rdma: Get rid of redundant defines
nvme-rdma: Get rid of duplicate variable
nvme: fabrics drivers don't need the nvme-pci driver
nvme-fabrics: get a reference when reusing a nvme_host structure
nvme-fabrics: change NQN UUID to big-endian format
nvme-loop: set sqsize to 0-based value, per spec
nvme-rdma: fix sqsize/hsqsize per spec
fabrics: define admin sqsize min default, per spec
nvmet-rdma: +1 to *queue_size from hsqsize/hrqsize
nvmet-rdma: Fix use after free
nvme-rdma: initialize ret to zero to avoid returning garbage



# d8d8d9d7 29-Aug-2016 Jens Axboe <axboe@fb.com>

Merge branch 'nvmf-4.8-rc' of git://git.infradead.org/nvme-fabrics into for-linus

Sagi writes:

Mostly stability fixes and cleanups:
- NQN endianness fix from Daniel
- possible use-after-free fix from Vincent
- nvme-rdma connect semantics fixes from Jay
- Remove redundant variables in rdma driver
- Kbuild fix from Christoph
- nvmf_host referencing fix from Christoph
- uninit variable fix from Colin



# 98096d8a 18-Aug-2016 Christoph Hellwig <hch@lst.de>

nvme-fabrics: get a reference when reusing a nvme_host structure

Without this we'll get a use-after-free after connecting two controllers
using the same hostnqn and then disconnecting one of them.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jay Freyensee <james_p_freyensee@linux.intel.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
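
A minimal userspace sketch of the lookup-or-create pattern this fix restores
(simplified stand-ins, not the kernel's nvmf_host/kref code): reusing a cached
entry must also take a reference, otherwise the first disconnect frees memory
the second controller still points at. Error handling is omitted for brevity.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct example_host {
        char nqn[64];
        int  refcount;
    };

    static struct example_host *cached;    /* one-slot "registry" for brevity */

    static struct example_host *host_add(const char *nqn)
    {
        if (cached && strcmp(cached->nqn, nqn) == 0) {
            cached->refcount++;            /* the reference this fix adds */
            return cached;
        }
        cached = calloc(1, sizeof(*cached));
        snprintf(cached->nqn, sizeof(cached->nqn), "%s", nqn);
        cached->refcount = 1;
        return cached;
    }

    static void host_put(struct example_host *host)
    {
        if (--host->refcount == 0) {
            if (cached == host)
                cached = NULL;
            free(host);
        }
    }

    int main(void)
    {
        struct example_host *a = host_add("hostnqn-1");
        struct example_host *b = host_add("hostnqn-1");  /* reused entry */

        host_put(a);    /* without the refcount bump, b would now dangle */
        printf("still valid: %s (refcount %d)\n", b->nqn, b->refcount);
        host_put(b);
        return 0;
    }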



Revision tags: v4.7, v4.7-rc7, v4.7-rc6
# 7a665d2f 28-Jun-2016 Daniel Verkamp <daniel.verkamp@intel.com>

nvme-fabrics: change NQN UUID to big-endian format

NVM Express 1.2.1 section 7.9, NVMe Qualified Names, specifies that the
UUID format of NQN uses a UUID based on RFC 4122.

RFC 4122 specifies that the UUID is encoded in big-endian byte order.

Switch the NVMe over Fabrics host ID field from little-endian UUID to
big-endian UUID to match the specification.

Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jay Freyensee <james_p_freyensee@linux.intel.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
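
For illustration only (not the kernel change itself), a small userspace program
showing why the byte order matters: the same UUID fields produce different
16-byte encodings under the RFC 4122 big-endian layout versus the little-endian
layout, so the host ID seen by the target changes. Field values are arbitrary.

    #include <arpa/inet.h>    /* htonl, htons */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Arbitrary example UUID fields */
        uint32_t time_low = 0x12345678;
        uint16_t time_mid = 0x9abc, time_hi = 0xdef0;
        uint8_t  node[10] = { 0x11, 0x22, 0x33, 0x44, 0x55,
                              0x66, 0x77, 0x88, 0x99, 0xaa };
        uint8_t  be[16], le[16];
        uint32_t l = htonl(time_low);
        uint16_t m = htons(time_mid), h = htons(time_hi);

        /* RFC 4122: every field stored big-endian */
        memcpy(be, &l, 4);
        memcpy(be + 4, &m, 2);
        memcpy(be + 6, &h, 2);
        memcpy(be + 8, node, 10);

        /* little-endian UUID: first three fields in native (x86) byte order */
        memcpy(le, &time_low, 4);
        memcpy(le + 4, &time_mid, 2);
        memcpy(le + 6, &time_hi, 2);
        memcpy(le + 8, node, 10);

        for (int i = 0; i < 16; i++)
            printf("%02x", be[i]);
        printf("  <- big-endian (RFC 4122)\n");
        for (int i = 0; i < 16; i++)
            printf("%02x", le[i]);
        printf("  <- little-endian\n");
        return 0;
    }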



# f994d9dc 18-Aug-2016 Jay Freyensee <james_p_freyensee@linux.intel.com>

fabrics: define admin sqsize min default, per spec

Upon admin queue connect(), the rdma qp was being
set based on NVMF_AQ_DEPTH. However, the fabrics layer was
using the sqsize field value set for I/O queues for the admin
queue, which threw the nvme layer and rdma layer off-whack:

root@fedora23-fabrics-host1 nvmf]# dmesg
[ 3507.798642] nvme_fabrics: nvmf_connect_admin_queue():admin sqsize
being sent is: 128
[ 3507.798858] nvme nvme0: creating 16 I/O queues.
[ 3507.896407] nvme nvme0: new ctrl: NQN "nullside-nqn", addr
192.168.1.3:4420

Thus, to have a different admin queue value, we use
NVMF_AQ_DEPTH for connect() and RDMA private data
as the minimum depth specified in the NVMe-over-Fabrics 1.0 spec
(and in that RDMA private data we treat hrqsize as a 1's-based
value, per current understanding of the fabrics spec).

Reported-by: Daniel Verkamp <daniel.verkamp@intel.com>
Signed-off-by: Jay Freyensee <james_p_freyensee@linux.intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
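
A tiny sketch of the 0's-based convention the sqsize fixes above keep referring
to: the wire value is one less than the actual number of queue entries, so the
admin queue and the usually deeper I/O queues must each be encoded from their
own depth rather than sharing one value. The depths here are arbitrary example
numbers, not the kernel's NVMF_AQ_DEPTH definition.

    #include <stdint.h>
    #include <stdio.h>

    /* Example depths only */
    #define EXAMPLE_ADMIN_QUEUE_ENTRIES 32u
    #define EXAMPLE_IO_QUEUE_ENTRIES    128u

    static uint32_t to_sqsize(uint32_t entries)
    {
        return entries - 1;    /* 0's-based wire encoding */
    }

    int main(void)
    {
        printf("admin queue: %u entries -> sqsize %u\n",
               EXAMPLE_ADMIN_QUEUE_ENTRIES,
               to_sqsize(EXAMPLE_ADMIN_QUEUE_ENTRIES));
        printf("I/O queue:   %u entries -> sqsize %u\n",
               EXAMPLE_IO_QUEUE_ENTRIES,
               to_sqsize(EXAMPLE_IO_QUEUE_ENTRIES));
        return 0;
    }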



# cc926387 15-Aug-2016 Daniel Vetter <daniel.vetter@ffwll.ch>

Merge remote-tracking branch 'airlied/drm-next' into drm-intel-next-queued

Backmerge because too many conflicts, and also we need to get at the
latest struct fence patches from Gustavo. Requested by Chris Wilson.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>



# a2071cd7 10-Aug-2016 Ingo Molnar <mingo@kernel.org>

Merge branch 'linus' into locking/urgent, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 3fc9d690 27-Jul-2016 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'for-4.8/drivers' of git://git.kernel.dk/linux-block

Pull block driver updates from Jens Axboe:
"This branch also contains core changes. I've come to the conclusion
that from 4.9 and forward, I'll be doing just a single branch. We
often have dependencies between core and drivers, and it's hard to
always split them up appropriately without pulling core into drivers
when that happens.

That said, this contains:

- separate secure erase type for the core block layer, from
Christoph.

- set of discard fixes, from Christoph.

- bio shrinking fixes from Christoph, as a followup to the
op/flags change in the core branch.

- map and append request fixes from Christoph.

- NVMeF (NVMe over Fabrics) code from Christoph. This is pretty
exciting!

- nvme-loop fixes from Arnd.

- removal of ->driverfs_dev from Dan, after providing a
device_add_disk() helper.

- bcache fixes from Bhaktipriya and Yijing.

- cdrom subchannel read fix from Vchannaiah.

- set of lightnvm updates from Wenwei, Matias, Johannes, and Javier.

- set of drbd updates and fixes from Fabian, Lars, and Philipp.

- mg_disk error path fix from Bart.

- user notification for failed device add for loop, from Minfei.

- NVMe in general:
+ NVMe delay quirk from Guilherme.
+ SR-IOV support and command retry limits from Keith.
+ fix for memory-less NUMA node from Masayoshi.
+ use UINT_MAX for discard sectors, from Minfei.
+ cancel IO fixes from Ming.
+ don't allocate unused major, from Neil.
+ error code fixup from Dan.
+ use constants for PSDT/FUSE from James.
+ variable init fix from Jay.
+ fabrics fixes from Ming, Sagi, and Wei.
+ various fixes"

* 'for-4.8/drivers' of git://git.kernel.dk/linux-block: (115 commits)
nvme/pci: Provide SR-IOV support
nvme: initialize variable before logical OR'ing it
block: unexport various bio mapping helpers
scsi/osd: open code blk_make_request
target: stop using blk_make_request
block: simplify and export blk_rq_append_bio
block: ensure bios return from blk_get_request are properly initialized
virtio_blk: use blk_rq_map_kern
memstick: don't allow REQ_TYPE_BLOCK_PC requests
block: shrink bio size again
block: simplify and cleanup bvec pool handling
block: get rid of bio_rw and READA
block: don't ignore -EOPNOTSUPP blkdev_issue_write_same
block: introduce BLKDEV_DISCARD_ZERO to fix zeroout
NVMe: don't allocate unused nvme_major
nvme: avoid crashes when node 0 is memoryless node.
nvme: Limit command retries
loop: Make user notify for adding loop device failed
nvme-loop: fix nvme-loop Kconfig dependencies
nvmet: fix return value check in nvmet_subsys_alloc()
...



# e76debd9 01-Jul-2016 Ming Lin <mlin@kernel.org>

nvme-fabrics: add-remove ctrl repeat fix

Repeatedly adding then removing the same NVMe-over-Fabrics controller
over and over again (shown below) can cause a kernel crash (also shown
below). This patch fixes that.

[nvmf]# ./setup_nvme_connections.sh
traddr=192.168.1.100,transport=rdma,trsvcid=4420,nqn=darkside
-nqn,hostnqn=evil-wins-nqn,nr_io_queues=16 > /dev/nvme-fabrics
traddr=192.168.1.100,transport=rdma,trsvcid=4420,nqn=lightside
-nqn,hostnqn=good-wins-nqn > /dev/nvme-fabrics
[nvmf]# ./remove_nvme_connections.sh 2
echo 1 > /sys/class/nvme/nvme0/delete_controller
echo 1 > /sys/class/nvme/nvme1/delete_controller
[nvmf]# ./setup_nvme_connections.sh
traddr=192.168.1.100,transport=rdma,trsvcid=4420,nqn=darkside
-nqn,hostnqn=evil-wins-nqn,nr_io_queues=16 > /dev/nvme-fabrics
Killed

[nvmf]# dmesg
[ 313.416908] nvme nvme0: creating 16 I/O queues.
[ 313.523908] nvme nvme0: new ctrl: NQN "darkside-nqn", addr
192.168.1.100:4420
[ 313.524857] BUG: unable to handle kernel NULL pointer dereference at
0000000000000010
[ 313.525262] IP: [<ffffffff8136c60e>] strcmp+0xe/0x30
[ 313.525490] PGD 0
[ 313.525726] Oops: 0000 [#1] SMP
[ 313.525900] Modules linked in: nvme_rdma nvme_fabrics nvme_core
ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm mlx4_en
mlx4_ib ib_core mlx4_core
[ 313.527085] CPU: 15 PID: 5856 Comm: setup_nvme_conn Not tainted
4.7.0-rc2+ #2
[ 313.527259] Hardware name: Supermicro X9DRT-F/IBQF/IBFF/X9DRT
-F/IBQF/IBFF, BIOS 1.0a 10/09/2012
[ 313.527551] task: ffff88027646cd40 ti: ffff88025b980000 task.ti:
ffff88025b980000
[ 313.527879] RIP: 0010:[<ffffffff8136c60e>] [<ffffffff8136c60e>]
strcmp+0xe/0x30
[ 313.528232] RSP: 0018:ffff88025b983db0 EFLAGS: 00010206
[ 313.528403] RAX: 0000000000000000 RBX: ffff880471879880 RCX:
fffffffffffffff1
[ 313.528594] RDX: 0000000000000000 RSI: ffff880474afa860 RDI:
0000000000000011
[ 313.528778] RBP: ffff88025b983db0 R08: ffff880474afa860 R09:
ffff880471879058
[ 313.528956] R10: 000000000000002c R11: ffff88047f415000 R12:
ffff880471879800
[ 313.529129] R13: ffff880471879000 R14: ffff880474afa860 R15:
fffffffffffffff8
[ 313.529303] FS: 00007f778f510700(0000) GS:ffff88047fbc0000(0000)
knlGS:0000000000000000
[ 313.529629] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 313.529817] CR2: 0000000000000010 CR3: 0000000274174000 CR4:
00000000000406e0
[ 313.529989] Stack:
[ 313.530154] ffff88025b983e48 ffffffffa0171c74 0000000000000001
0000000000000059
[ 313.530621] ffff880476f32400 ffff88047e8add80 0000010074b33aa0
ffff880471879059
[ 313.531162] ffff88047187904b ffff880471879058 0000000000000000
ffff88047736e000
[ 313.531629] Call Trace:
[ 313.531797] [<ffffffffa0171c74>] nvmf_dev_write+0x674/0x840
[nvme_fabrics]
[ 313.531974] [<ffffffff81180b53>] __vfs_write+0x23/0x120
[ 313.532146] [<ffffffff8119daff>] ? __fd_install+0x1f/0xc0
[ 313.532316] [<ffffffff8119d97a>] ? __alloc_fd+0x3a/0x170
[ 313.532487] [<ffffffff811811f3>] vfs_write+0xb3/0x1b0
[ 313.532658] [<ffffffff8117e321>] ? filp_close+0x51/0x70
[ 313.532845] [<ffffffff811824e1>] SyS_write+0x41/0xa0
[ 313.533016] [<ffffffff8183055b>]
entry_SYSCALL_64_fastpath+0x13/0x8f
[ 313.533188] Code: 80 3a 00 75 f7 48 83 c6 01 0f b6 4e ff 48 83 c2 01
84 c9 88 4a ff 75 ed 5d c3 0f 1f 00 55 48 89 e5 eb 04 84 c0 74 18 48 83
c7 01 <0f> b6 47 ff 48 83 c6 01 3a 46 ff 74 eb 19 c0 83 c8 01 5d c3 31
[ 313.536563] RIP [<ffffffff8136c60e>] strcmp+0xe/0x30
[ 313.536815] RSP <ffff88025b983db0>
[ 313.536981] CR2: 0000000000000010
[ 313.537151] ---[ end trace 3d952e590e7bc2d5 ]---

Reported-and-tested-by: Jay Freyensee <james.p.freyensee@intel.com>
Signed-off-by: Ming Lin <mlin@kernel.org>
Signed-off-by: Jay Freyensee <james.p.freyensee@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>



Revision tags: v4.7-rc5
# 6a92967c 22-Jun-2016 Sagi Grimberg <sagi@grimberg.me>

nvme-fabrics: Remove tl_retry_count

The timeout before error recovery logic kicks in is
dictated by the nvme keep-alive, so we don't really need
a transport layer retry count. Transports can retry
as much as they like.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@fb.com>



Revision tags: v4.7-rc4
# 038bd4cb 13-Jun-2016 Sagi Grimberg <sagi@grimberg.me>

nvme: add keep-alive support

Periodic keep-alive is a mandatory feature in NVMe over Fabrics, and
optional in NVMe 1.2.1 for PCIe. This patch adds periodic keep-alive
sent from the host to verify that the controller is still responsive
and vice-versa. The keep-alive timeout is user-defined (with
keep_alive_tmo connection parameter) and defaults to 5 seconds.

In order to avoid a race condition where the host sends a keep-alive
competing with the target side keep-alive timeout expiration, the host
adds a grace period of 10 seconds when publishing the keep-alive timeout
to the target.

In case a keep-alive failed (or timed out), a transport specific error
recovery kicks in.

For now only NVMe over Fabrics is wired up to support keep alive, but
we can add PCIe support easily once controllers actually supporting it
become available.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Steve Wise <swise@chelsio.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
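
A small sketch of the grace-period arithmetic described above, assuming the
timeout published to the controller is expressed in milliseconds: the host
advertises its keep-alive timeout plus the 10-second grace so the target does
not tear down a connection the host still considers healthy. Constant names
are illustrative, not the kernel's.

    #include <stdint.h>
    #include <stdio.h>

    #define EXAMPLE_KATO_SEC       5u    /* keep_alive_tmo default, per above */
    #define EXAMPLE_KATO_GRACE_SEC 10u   /* grace period, per above */

    /* Timeout the host would publish to the target at connect time. */
    static uint32_t published_kato_ms(uint32_t kato_sec)
    {
        return (kato_sec + EXAMPLE_KATO_GRACE_SEC) * 1000u;
    }

    int main(void)
    {
        printf("host keep-alive interval:    %u s\n", EXAMPLE_KATO_SEC);
        printf("timeout published to target: %u ms\n",
               published_kato_ms(EXAMPLE_KATO_SEC));
        return 0;
    }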



# 07bfcd09 13-Jun-2016 Christoph Hellwig <hch@lst.de>

nvme-fabrics: add a generic NVMe over Fabrics library

The NVMe over Fabrics library provides an interface for both transports
and the nvme core to handle fabrics specific commands and attributes
independent of the underlying transport.

In addition, the fabrics library adds a misc device interface that allows
actually creating a fabrics controller, as we can't just autodiscover
it like in the PCI case. The nvme-cli utility has been enhanced to use
this interface to support fabric connect and discovery.

Signed-off-by: Armen Baloyan <armenx.baloyan@intel.com>,
Signed-off-by: Jay Freyensee <james.p.freyensee@intel.com>,
Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
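
A hedged userspace sketch of how a tool such as nvme-cli can drive this misc
device: the connect parameters are written to /dev/nvme-fabrics as one
comma-separated string, in the same option syntax as the shell examples
earlier in this log. The address and NQNs are placeholders and error handling
is minimal.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder connect options in key=value,... form */
        const char *opts =
            "transport=rdma,traddr=192.168.1.100,trsvcid=4420,"
            "nqn=example-target-nqn,hostnqn=example-host-nqn";
        int fd = open("/dev/nvme-fabrics", O_RDWR);

        if (fd < 0) {
            perror("open /dev/nvme-fabrics");
            return 1;
        }
        if (write(fd, opts, strlen(opts)) < 0)
            perror("connect");
        close(fd);
        return 0;
    }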


