perf-c2c(1)
===========

NAME
----
perf-c2c - Shared Data C2C/HITM Analyzer.

SYNOPSIS
--------
[verse]
'perf c2c record' [<options>] <command>
'perf c2c record' [<options>] -- [<record command options>] <command>
'perf c2c report' [<options>]

DESCRIPTION
-----------
C2C stands for Cache To Cache.

The perf c2c tool provides means for Shared Data C2C/HITM analysis. It allows
you to track down cacheline contention.

The tool is based on x86's load latency and precise store facility events
provided by Intel CPUs. These events provide:
  - memory address of the access
  - type of the access (load and store details)
  - latency (in cycles) of the load access

The c2c tool provides means to record this data and report back access details
for the cachelines with the highest contention - that is, the highest number
of HITM accesses.

The basic workflow with this tool follows the standard record/report phases.
The user runs the record command to record event data and the report command
to display it, as shown in the example below.
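
For example, a minimal session could look as follows (the '-a' option and the
'sleep 10' workload are only illustrative - any workload or duration will do):

  $ perf c2c record -- -a sleep 10
  $ perf c2c report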

RECORD OPTIONS
--------------
-e::
--event=::
	Select the PMU event. Use 'perf mem record -e list'
	to list available events.

-v::
--verbose::
	Be more verbose (show counter open errors, etc).

-l::
--ldlat::
	Configure the mem-loads latency threshold (see the example at the
	end of this section).

-k::
--all-kernel::
	Configure all used events to run in kernel space.

-u::
--all-user::
	Configure all used events to run in user space.
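
For example, to raise the load latency threshold to 50 cycles and restrict
sampling to user space accesses (both values are only illustrative):

  $ perf c2c record -l 50 -u -- -a sleep 5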

REPORT OPTIONS
--------------
-k::
--vmlinux=<file>::
	vmlinux pathname

-v::
--verbose::
	Be more verbose (show counter open errors, etc).

-i::
--input::
	Specify the input file to process.

-N::
--node-info::
	Show extra node info in report (see NODE INFO section)

-c::
--coalesce::
	Specify sorting fields for single cacheline display.
	The following fields are available: tid,pid,iaddr,dso
	(see COALESCE)

-g::
--call-graph::
	Set up callchain parameters.
	Please refer to the perf-report man page for details.

--stdio::
	Force the stdio output (see STDIO OUTPUT)

--stats::
	Display only statistic tables and force stdio mode.

--full-symbols::
	Display full length of symbols.

--no-source::
	Do not display Source:Line column.

--show-all::
	Show all captured HITM lines, regardless of the 0.0005 HITM
	percentage display limit.
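
For example, to display only the statistic tables for an existing perf.data
file, or to force a stdio report coalesced by pid and code address:

  $ perf c2c report --stats -i perf.data
  $ perf c2c report --stdio -c pid,iaddr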

C2C RECORD
----------
The perf c2c record command sets up options related to HITM cacheline analysis
and calls the standard perf record command.

The following perf record options are configured by default:
(check the perf record man page for details)

  -W,-d,--sample-cpu

Unless specified otherwise with the '-e' option, the following events are
monitored by default:

  cpu/mem-loads,ldlat=30/P
  cpu/mem-stores/P

The user can pass any 'perf record' option behind the '--' mark, for example
(to enable callchains and system wide monitoring):

  $ perf c2c record -- -g -a

Please check the RECORD OPTIONS section for specific c2c record options.
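
To see which events can be selected with the '-e' option on a given machine,
list them via perf mem:

  $ perf mem record -e list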

C2C REPORT
----------
The perf c2c report command displays shared data analysis. It comes in two
display modes: stdio and tui (default).

The report command workflow is as follows:
  - sort all the data based on the cacheline address
  - store access details for each cacheline
  - sort all cachelines based on user settings
  - display data

In general the perf report output consists of 2 basic views:
  1) most expensive cachelines list
  2) offsets details for each cacheline

For each cacheline in the 1) list we display the following data:
(Both stdio and TUI modes follow the same fields output)

  Index
  - zero based index to identify the cacheline

  Cacheline
  - cacheline address (hex number)

  Total records
  - sum of all cacheline accesses

  Rmt/Lcl Hitm
  - cacheline percentage of all Remote/Local HITM accesses

  LLC Load Hitm - Total, Lcl, Rmt
  - count of Total/Local/Remote load HITMs

  Store Reference - Total, L1Hit, L1Miss
    Total  - all store accesses
    L1Hit  - store accesses that hit L1
    L1Miss - store accesses that missed L1

  Load Dram
  - count of local and remote DRAM accesses

  LLC Ld Miss
  - count of all accesses that missed LLC

  Total Loads
  - sum of all load accesses

  Core Load Hit - FB, L1, L2
  - count of load hits in FB (Fill Buffer), L1 and L2 cache

  LLC Load Hit - Llc, Rmt
  - count of LLC and Remote load hits

For each offset in the 2) list we display the following data:

  HITM - Rmt, Lcl
  - % of Remote/Local HITM accesses for given offset within cacheline

  Store Refs - L1 Hit, L1 Miss
  - % of store accesses that hit/missed L1 for given offset within cacheline

  Data address - Offset
  - offset address

  Pid
  - pid of the process responsible for the accesses

  Tid
  - tid of the process responsible for the accesses

  Code address
  - code address responsible for the accesses

  cycles - rmt hitm, lcl hitm, load
    - sum of cycles for given accesses - Remote/Local HITM and generic load

  cpu cnt
    - number of cpus that participated in the access

  Symbol
    - code symbol related to the 'Code address' value

  Shared Object
    - shared object name related to the 'Code address' value

  Source:Line
    - source information related to the 'Code address' value

  Node
    - nodes participating in the access (see NODE INFO section)

NODE INFO
---------
The 'Node' field displays the nodes that accessed the given cacheline
offset. Its output comes in 3 flavors:
  - node IDs separated by ','
  - node IDs with stats for each ID, in the following format:
      Node{cpus %hitms %stores}
  - node IDs with the list of affected CPUs, in the following format:
      Node{cpu list}

The user can switch between the above flavors with the -N option or
use the 'n' key to interactively switch in TUI mode.
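
For example, a stdio report using the second flavor could be requested as
follows (assuming the -N option takes a numeric index selecting one of the
above flavors, with the plain ',' separated list being 0):

  $ perf c2c report -N 1 --stdio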

COALESCE
--------
The user can specify how to sort offsets for a cacheline.

The following fields are available and govern the final
set of output fields for the cacheline offsets output:

  tid   - coalesced by process TIDs
  pid   - coalesced by process PIDs
  iaddr - coalesced by code address; the following fields are displayed:
             Code address, Code symbol, Shared Object, Source line
  dso   - coalesced by shared object

By default the coalescing is set up with 'pid,tid,iaddr'.
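
For example, to coalesce the cacheline offsets by shared object only:

  $ perf c2c report -c dso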

STDIO OUTPUT
------------
The stdio output displays data on standard output.

The following tables are displayed:
  Trace Event Information
  - overall statistics of memory accesses

  Global Shared Cache Line Event Information
  - overall statistics on shared cachelines

  Shared Data Cache Line Table
  - list of most expensive cachelines

  Shared Cache Line Distribution Pareto
  - list of all accessed offsets for each cacheline

TUI OUTPUT
----------
The TUI output provides an interactive interface to navigate
through the cachelines list and to display offset details.

For details please refer to the help window by pressing the '?' key.

CREDITS
-------
Although Don Zickus, Dick Fowles and Joe Mario worked together
to get this implemented, we got lots of early help from Arnaldo
Carvalho de Melo, Stephane Eranian, Jiri Olsa and Andi Kleen.

C2C BLOG
--------
Check Joe's blog on the c2c tool for a detailed use case explanation:
  https://joemario.github.io/blog/2016/09/01/c2c-blog/

SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-mem[1]