.. SPDX-License-Identifier: GPL-2.0

=====================================
Generic System Interconnect Subsystem
=====================================

Introduction
------------

This framework is designed to provide a standard kernel interface to control
the settings of the interconnects on an SoC. These settings can be throughput,
latency and priority between multiple interconnected devices or functional
blocks. This can be controlled dynamically in order to save power or provide
maximum performance.

The interconnect bus is hardware with configurable parameters, which can be
set on a data path according to the requests received from various drivers.
Examples of interconnect buses are the interconnects between various
components or functional blocks in chipsets. There can be multiple
interconnects on an SoC and they can be multi-tiered.

Below is a simplified diagram of a real-world SoC interconnect bus topology.

::

 +----------------+    +----------------+
 | HW Accelerator |--->|      M NoC     |<---------------+
 +----------------+    +----------------+                |
                         |      |                    +------------+
  +-----+  +-------------+      V       +------+     |            |
  | DDR |  |                +--------+  | PCIe |     |            |
  +-----+  |                | Slaves |  +------+     |            |
    ^ ^    |                +--------+     |         |   C NoC    |
    | |    V                               V         |            |
 +------------------+   +------------------------+   |            |   +-----+
 |                  |-->|                        |-->|            |-->| CPU |
 |                  |-->|                        |<--|            |   +-----+
 |     Mem NoC      |   |         S NoC          |   +------------+
 |                  |<--|                        |---------+    |
 |                  |<--|                        |<------+ |    |   +--------+
 +------------------+   +------------------------+       | |    +-->| Slaves |
   ^  ^    ^    ^          ^                             | |        +--------+
   |  |    |    |          |                             | V
 +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
 | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
 +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
           |
       +-------+
       | Modem |
       +-------+

Terminology
-----------

An interconnect provider is the software definition of the interconnect
hardware. The interconnect providers in the above diagram are M NoC, S NoC,
C NoC, P NoC and Mem NoC.

An interconnect node is the software definition of an interconnect hardware
port. Each interconnect provider consists of multiple interconnect nodes,
which are connected to other SoC components including other interconnect
providers. The point on the diagram where the CPUs connect to the memory is
called an interconnect node, which belongs to the Mem NoC interconnect
provider.

Interconnect endpoints are the first or the last element of a path. Every
endpoint is a node, but not every node is an endpoint.

An interconnect path is everything between two endpoints, including all the
nodes that have to be traversed to get from the source to the destination
node. It may include multiple master-slave pairs across several interconnect
providers.

Interconnect consumers are the entities which make use of the data paths
exposed by the providers. The consumers send requests to providers, asking
for various throughput, latency and priority. Usually the consumers are
device drivers that send requests based on their needs. An example of a
consumer is a video decoder that supports various formats and image sizes.

Interconnect providers
----------------------

An interconnect provider is an entity that implements methods to initialize
and configure the interconnect bus hardware. Interconnect provider drivers
should be registered with the interconnect provider core.

.. kernel-doc:: include/linux/interconnect-provider.h

.. kernel-doc:: drivers/interconnect/core.c
   :functions: icc_provider_init icc_provider_register icc_provider_deregister
               icc_node_create icc_node_create_dyn icc_node_destroy
               icc_node_add icc_node_del icc_nodes_remove icc_node_set_name
               icc_link_create icc_link_nodes

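As a rough illustration of the registration flow described above, a provider
driver's probe routine might look like the following sketch. All names
(``example_*``, the node IDs, the node name) are hypothetical, error handling
is minimal, and only one node of the topology is shown:

```c
/*
 * Hypothetical provider driver sketch, not a real platform driver.
 * Node IDs, names and callback bodies are illustrative only.
 */
#include <linux/interconnect-provider.h>
#include <linux/platform_device.h>

#define MASTER_EXAMPLE	0	/* hypothetical node IDs */
#define SLAVE_EXAMPLE	1

/*
 * Called by the framework when the aggregated bandwidth on a path
 * changes; this is where the hardware would be reprogrammed.
 */
static int example_set(struct icc_node *src, struct icc_node *dst)
{
	return 0;
}

/* Sum the average bandwidth requests and take the maximum peak request. */
static int example_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
			     u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
{
	*agg_avg += avg_bw;
	*agg_peak = max(*agg_peak, peak_bw);
	return 0;
}

static int example_probe(struct platform_device *pdev)
{
	struct icc_provider *provider;
	struct icc_node *node;

	provider = devm_kzalloc(&pdev->dev, sizeof(*provider), GFP_KERNEL);
	if (!provider)
		return -ENOMEM;

	provider->dev = &pdev->dev;
	provider->set = example_set;
	provider->aggregate = example_aggregate;
	icc_provider_init(provider);

	/* Create a node, attach it to the provider and link it onward. */
	node = icc_node_create(MASTER_EXAMPLE);
	if (IS_ERR(node))
		return PTR_ERR(node);
	node->name = "example_master";
	icc_node_add(node, provider);
	icc_link_create(node, SLAVE_EXAMPLE);

	/* ... create and link the remaining nodes of the topology ... */

	return icc_provider_register(provider);
}
```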
Interconnect consumers
----------------------

Interconnect consumers are the clients which use the interconnect APIs to
get paths between endpoints and set their bandwidth/latency/QoS requirements
for these interconnect paths.

.. kernel-doc:: drivers/interconnect/core.c
   :functions: devm_of_icc_get of_icc_get_by_index of_icc_get icc_get
               icc_put icc_enable icc_disable icc_set_bw icc_set_tag
               icc_get_name

.. kernel-doc:: drivers/interconnect/bulk.c
Interconnect debugfs interfaces
-------------------------------

Like several other subsystems, interconnect creates some files in debugfs for
debugging and introspection. Files in debugfs are not considered ABI, so
application software shouldn't rely on format details, which may change
between kernel versions.

``/sys/kernel/debug/interconnect/interconnect_summary``:

Shows all interconnect nodes in the system together with their aggregated
bandwidth requests. The bandwidth requests from each device are shown
indented under the corresponding node.

``/sys/kernel/debug/interconnect/interconnect_graph``:

Shows the interconnect graph in the graphviz dot format. It contains all
interconnect nodes and links in the system, and groups nodes from the same
provider together as subgraphs. The format is human-readable and can also be
piped through dot to generate diagrams in many graphical formats::

        $ cat /sys/kernel/debug/interconnect/interconnect_graph | \
                dot -Tsvg > interconnect_graph.svg

The ``test-client`` directory provides interfaces for issuing bandwidth
requests on any arbitrary path. Note that, for safety reasons, this feature
is disabled by default and there is no Kconfig option to enable it. Enabling
it requires a code change to ``#define INTERCONNECT_ALLOW_WRITE_DEBUGFS``.
Example usage::

        cd /sys/kernel/debug/interconnect/test-client/

        # Configure node endpoints for the path from CPU to DDR on
        # qcom/sm8550.
        echo chm_apps > src_node
        echo ebi > dst_node

        # Get the path between src_node and dst_node. This is only
        # necessary after updating the node endpoints.
        echo 1 > get

        # Set the desired bandwidth to 1 GBps average and 2 GBps peak.
        echo 1000000 > avg_bw
        echo 2000000 > peak_bw

        # Vote for avg_bw and peak_bw on the latest path from "get".
        # Voting for multiple paths is possible by repeating this
        # process with different node endpoints.
        echo 1 > commit