perf-stat(1)
============

NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.

OPTIONS
-------
<command>...::
        Any command you can specify in a shell.

record::
        See STAT RECORD.

report::
        See STAT REPORT.

-e::
--event=::
        Select the PMU event. Selection can be:

        - a symbolic event name (use 'perf list' to list all events)

        - a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
          hexadecimal event descriptor.

        - a symbolically formed event like 'pmu/param1=0x3,param2/' where
          param1 and param2 are defined as formats for the PMU in
          /sys/bus/event_source/devices/<pmu>/format/*

          'percore' is an event qualifier that sums up the event counts for both
          hardware threads in a core. For example:
          perf stat -A -a -e cpu/event,percore=1/,otherevent ...

        - a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
          where M, N, K are numbers (in decimal, hex, octal format).
          Acceptable values for each of 'config', 'config1' and 'config2'
          parameters are defined by corresponding entries in
          /sys/bus/event_source/devices/<pmu>/format/*

        Note that the last two syntaxes support prefix and glob matching in
        the PMU name to simplify creation of events across multiple instances
        of the same type of PMU in large systems (e.g. memory controller PMUs).
        Multiple PMU instances are typical for uncore PMUs, so the prefix
        'uncore_' is also ignored when performing this match.

-i::
--no-inherit::
        Child tasks do not inherit counters.

-p::
--pid=<pid>::
        Stat events on existing process id (comma separated list).

-t::
--tid=<tid>::
        Stat events on existing thread id (comma separated list).

-a::
--all-cpus::
        System-wide collection from all CPUs (default if no target is specified).

--no-scale::
        Don't scale/normalize counter values.

-d::
--detailed::
        Print more detailed statistics; can be specified up to 3 times:

           -d:          detailed events, L1 and LLC data cache
           -d -d:       more detailed events, dTLB and iTLB events
           -d -d -d:    very detailed events, adding prefetch events

-r::
--repeat=<n>::
        Repeat command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::
        Print large numbers with thousands' separators according to locale.

-C::
--cpu=::
Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

-n::
--null::
        Null run - don't start any counters.

-v::
--verbose::
        Be more verbose (show counter open errors, etc).

-x SEP::
--field-separator SEP::
Print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.
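
For example, to produce semicolon-separated output that is easy to
post-process (the events and the workload here are only illustrative):

  perf stat -x \; -e task-clock,cycles -- sleep 2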

--table:: Display time for each run (-r option), in a table format, e.g.:

  $ perf stat --null -r 5 --table perf bench sched pipe

   Performance counter stats for 'perf bench sched pipe' (5 runs):

             # Table of individual measurements:
             5.189 (-0.293) #
             5.189 (-0.294) #
             5.186 (-0.296) #
             5.663 (+0.181) ##
             6.186 (+0.703) ####

             # Final result:
             5.483 +- 0.198 seconds time elapsed ( +- 3.62% )

-G name::
--cgroup name::
Monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

To monitor, say, 'cycles' for a cgroup and also system wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.

-o file::
--output file::
Print the output into the designated file.

--append::
Append to the output file designated with the -o option. Ignored if -o is not specified.

--log-fd::
Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive
with it. --append may be used here. Examples:

  3>results  perf stat --log-fd 3          -- $cmd
  3>>results perf stat --log-fd 3 --append -- $cmd

--pre::
--post::
        Pre and post measurement hooks, e.g.:

perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs::
--interval-print msecs::
Print count deltas every N milliseconds (minimum: 1ms).
The overhead percentage could be high in some cases, for instance with small, sub 100ms intervals. Use with caution.
        example: 'perf stat -I 1000 -e cycles -a sleep 5'

If a metric exists, it is calculated from the counts generated in this interval and the metric is printed after '#'.

--interval-count times::
Print count deltas for a fixed number of times.
This option should be used together with the "-I" option.
        example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

--interval-clear::
Clear the screen before the next interval.

--timeout msecs::
Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
        example: 'perf stat --timeout 2000 -e cycles -a'

--metric-only::
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.
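
For example, cycle counts can be broken down per socket across the whole
system (the event and workload duration here are only illustrative):

  perf stat --per-socket -a -e cycles -- sleep 5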

--per-die::
Aggregate counts per processor die for system-wide mode measurements. This
is a useful mode to detect imbalance between dies. To enable this mode,
use --per-die in addition to -a (system-wide). The output includes the
die number and the number of online processors on that die. This is
useful to gauge the amount of aggregation.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.

--per-thread::
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).

--per-node::
Aggregate counts per NUMA node for system-wide mode measurements. This
is a useful mode to detect imbalance between NUMA nodes. To enable this
mode, use --per-node in addition to -a (system-wide).

-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring. This is useful to
filter out the startup phase of the program, which is often very different.

-T::
--transaction::
Print statistics of transactional execution if supported.

STAT RECORD
-----------
Stores stat data into perf data file.

-o file::
--output file::
Output file name.

STAT REPORT
-----------
Reads and reports stat data from perf data file.

-i file::
--input file::
Input file name.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.

--per-die::
Aggregate counts per processor die for system-wide mode measurements.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.

-M::
--metrics::
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

--topdown::
Print top down level 1 metrics if supported by the CPU. This allows one to
determine bottlenecks in the CPU pipeline for CPU bound workloads,
by breaking the cycles consumed down into frontend bound, backend bound,
bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad Speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.
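
For example, a system-wide top down measurement can be combined with
interval mode (the workload and duration here are only illustrative):

  perf stat --topdown -a -I 1000 sleep 10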

The top down metrics are collected per core instead of per
CPU thread. Per core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid = -1.

Topdown uses the full Performance Monitoring Unit, and needs
disabling of the NMI watchdog (as root):
echo 0 > /proc/sys/kernel/nmi_watchdog
for best results. Otherwise the bottlenecks may be inconsistent
on workloads with changing phases.

This enables --metric-only, unless overridden with --no-metric-only.

To interpret the results it is usually necessary to know on which
CPUs the workload runs. If needed, the CPUs can be forced using
taskset.

--no-merge::
Do not merge results from the same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:
1. Prefix or glob matching is used for the PMU name.
2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.

--smi-cost::
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured as (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for performance
oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, which equals (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.

--all-kernel::
Configure all used events to run in kernel space.

--all-user::
Configure all used events to run in user space.

--percore-show-thread::
The event modifier "percore" sums up the event counts for all hardware
threads in a core and shows the counts per core.

With this option, events with the "percore" modifier are still summed up
across all hardware threads in a core, but the per-core sum is shown for
each hardware thread. This is essentially a replacement for the any bit and
convenient for post processing.
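
For example, a per-thread view of the per-core sums might be obtained with a
command along these lines (this assumes a 'cpu' PMU that exposes an
'instructions' event and supports the 'percore' term):

  perf stat -A -a -e cpu/instructions,percore=1/ --percore-show-thread -- sleep 1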

EXAMPLES
--------

$ perf stat -- make

   Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

        83.409183620 seconds time elapsed

        74.684747000 seconds user
         8.739217000 seconds sys

TIMINGS
-------
As displayed in the example above we can display 3 types of timings.
We always display the time the counters were enabled/alive:

        83.409183620 seconds time elapsed

For workload sessions we also display the time the workloads spent in
user/system lands:

        74.684747000 seconds user
         8.739217000 seconds sys

Those times are the very same as displayed by the 'time' tool.

CSV FORMAT
----------

With -x, perf stat is able to output a not-quite-CSV format; commas in the
output are not put into "". To make it easy to parse, it is recommended to
use a different character like -x \;

The fields are in this order:

        - optional usec time stamp in fractions of second (with -I xxx)
        - optional CPU, core, or socket identifier
        - optional number of logical CPUs aggregated
        - counter value
        - unit of the counter value or empty
        - event name
        - run time of counter
        - percentage of measurement time the counter was running
        - optional variance if multiple values are collected with -r
        - optional metric value
        - optional unit of metric

Additional metrics may be printed with all earlier fields being empty.

SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]