.. SPDX-License-Identifier: GPL-2.0

====================
Linux NVMe multipath
====================

This document describes NVMe multipath and its path selection policies supported
by the Linux NVMe host driver.


Introduction
============

The NVMe multipath feature in Linux integrates namespaces with the same
identifier into a single block device. Using multipath enhances the reliability
and stability of I/O access while improving bandwidth performance. When a user
sends I/O to this merged block device, the multipath mechanism selects one of
the underlying block devices (paths) according to the configured policy.
Different policies result in different path selections.


Policies
========
All policies follow the ANA (Asymmetric Namespace Access) mechanism, meaning
that when an optimized path is available, it will be chosen over a non-optimized
one. Currently, the NVMe multipath policies include numa (default), round-robin,
and queue-depth.

To set the desired policy (e.g., round-robin), use one of the following methods:
   1. echo -n "round-robin" > /sys/module/nvme_core/parameters/iopolicy
   2. add "nvme_core.iopolicy=round-robin" to the kernel command line.
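To confirm which policy is active, the same sysfs parameter can be read back.
The snippet below is a minimal sketch for a POSIX shell; it degrades gracefully
on systems where the nvme-core module is not loaded:

```shell
# Read back the active NVMe multipath I/O policy. The sysfs file only
# exists when nvme-core is loaded with multipath enabled.
iopolicy_file=/sys/module/nvme_core/parameters/iopolicy
if [ -r "$iopolicy_file" ]; then
    active_policy=$(cat "$iopolicy_file")
else
    active_policy="unavailable (nvme_core not loaded or multipath disabled)"
fi
echo "Active NVMe I/O policy: $active_policy"
```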


NUMA
----

The NUMA policy selects the path closest to the NUMA node of the current CPU for
I/O distribution. This policy maintains the nearest paths to each NUMA node
based on network interface connections.

When to use the NUMA policy:
  1. Multi-core Systems: Optimizes memory access in multi-core and
     multi-processor systems, especially under NUMA architecture.
  2. High Affinity Workloads: Binds I/O processing to the CPU to reduce
     communication and data transfer delays across nodes.
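To judge whether the numa policy can find a path local to a given CPU, you can
check which NUMA node each controller is attached to. The sketch below assumes
PCIe-attached controllers (whose PCI device exposes the standard
``device/numa_node`` sysfs attribute); controller names such as nvme0 vary per
system, and the loop simply reports nothing if no controllers are present:

```shell
# List the NUMA node each NVMe controller is attached to (-1 or
# "unknown" means no NUMA affinity information is available).
for ctrl in /sys/class/nvme/nvme*; do
    [ -d "$ctrl" ] || continue
    node=$(cat "$ctrl/device/numa_node" 2>/dev/null || echo "unknown")
    echo "${ctrl##*/}: NUMA node $node"
done
scan_done=1
```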


Round-Robin
-----------

The round-robin policy distributes I/O requests evenly across all paths to
enhance throughput and resource utilization. Each I/O operation is sent to the
next path in sequence.

When to use the round-robin policy:
  1. Balanced Workloads: Effective for balanced and predictable workloads with
     similar I/O size and type.
  2. Homogeneous Path Performance: Utilizes all paths efficiently when
     performance characteristics (e.g., latency, bandwidth) are similar.
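The selection order can be illustrated with a short, self-contained sketch
(this is not kernel code, and the path names are hypothetical): requests cycle
through the available paths one at a time, wrapping around at the end.

```shell
# Illustrative round-robin path selection (not the kernel implementation).
paths="nvme0c0n1 nvme1c1n1 nvme2c2n1"
set -- $paths
i=0
for req in 1 2 3 4 5; do
    # Pick the next path in sequence, wrapping around at the end.
    idx=$(( i % $# ))
    j=0
    for p in "$@"; do
        [ "$j" -eq "$idx" ] && chosen=$p
        j=$((j + 1))
    done
    echo "request $req -> $chosen"
    i=$((i + 1))
done
```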


Queue-Depth
-----------

The queue-depth policy manages I/O requests based on the current queue depth
of each path, selecting the path with the least number of in-flight I/Os.

When to use the queue-depth policy:
  1. High load with small I/Os: Effectively balances load across paths when
     the load is high, and I/O operations consist of small, relatively
     fixed-sized requests.

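The selection rule can be illustrated with a short, self-contained sketch (not
kernel code; the path names and in-flight depth values below are made up for
demonstration): among all candidate paths, the one with the smallest number of
outstanding I/Os wins.

```shell
# Illustrative queue-depth path selection (not the kernel implementation).
# Each entry is "path:in-flight-count"; the least-loaded path is chosen.
best_path=""
best_depth=999999
for entry in "pathA:12" "pathB:3" "pathC:7"; do
    path=${entry%%:*}
    depth=${entry##*:}
    if [ "$depth" -lt "$best_depth" ]; then
        best_depth=$depth
        best_path=$path
    fi
done
echo "selected $best_path (in-flight depth $best_depth)"
```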
73