History log of /linux/tools/perf/bench/bench.h (Results 126 – 150 of 426)
Revision  Date  Author  Comments
# ead5d1f4 01-Sep-2020 Jiri Kosina <jkosina@suse.cz>

Merge branch 'master' into for-next

Sync with Linus' branch in order to be able to apply fixups
of more recent patches.


Revision tags: v5.9-rc3
# 3bec5b6a 25-Aug-2020 Mark Brown <broonie@kernel.org>

Merge tag 'v5.9-rc2' into regulator-5.9

Linux 5.9-rc2


# 1959ba4e 25-Aug-2020 Mark Brown <broonie@kernel.org>

Merge tag 'v5.9-rc2' into asoc-5.9

Linux 5.9-rc2


# 2d9ad4cf 25-Aug-2020 Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

Merge tag 'v5.9-rc2' into drm-misc-fixes

Backmerge requested by Tomi for a fix to an omap inconsistent
locking state issue, and because we need at least v5.9-rc2 now.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>


Revision tags: v5.9-rc2
# d85ddd13 18-Aug-2020 Maxime Ripard <maxime@cerno.tech>

Merge v5.9-rc1 into drm-misc-next

Sam needs 5.9-rc1, which has dev_err_probe, in order to merge some patches.

Signed-off-by: Maxime Ripard <maxime@cerno.tech>


Revision tags: v5.9-rc1
# 00e4db51 11-Aug-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'perf-tools-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux

Pull perf tools updates from Arnaldo Carvalho de Melo:
"New features:

- Introduce controlling how 'perf stat' and 'perf record' works via a
control file descriptor, allowing starting with events configured
but disabled until commands are received via the control file
descriptor. This allows, for instance for tools such as Intel VTune
to make further use of perf as its Linux platform driver.

- Improve 'perf record' to register in a perf.data file header the
clockid used to help later correlate things like syslog files and
perf events recorded.

- Add basic syscall and find_next_bit benchmarks to 'perf bench'.

- Allow using computed metrics in calculating other metrics. For
instance:

{
.metric_expr = "l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit",
.metric_name = "DCache_L2_All_Hits",
},
{
.metric_expr = "max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss",
.metric_name = "DCache_L2_All_Miss",
},
{
.metric_expr = "dcache_l2_all_hits + dcache_l2_all_miss",
.metric_name = "DCache_L2_All",
}

- Add support for 'd_ratio', '>' and '<' operators to the expression
resolver used in calculating metrics in 'perf stat'.

Support for new kernel features:

- Support TEXT_POKE and KSYMBOL_TYPE_OOL perf metadata events to cope
with things like ftrace and trampolines, i.e. changes in the kernel
text that get in the way of properly decoding Intel PT hardware
traces, for instance.

Intel PT:

- Add various knobs to reduce the volume of Intel PT traces by
reducing the level of detail, such as decoding just some types of
packets (e.g., FUP/TIP, PSB+), and also filtering by time range.

- Add new itrace options (log flags to the 'd' option, error flags to
the 'e' one, etc.), controlling how Intel PT is transformed into
perf events, and document some missing options (e.g., how to
synthesize callchains).

BPF:

- Properly report BPF errors when parsing events.

- Do not setup side-band events if LIBBPF is not linked, fixing a
segfault.

Libraries:

- Improvements to the libtraceevent plugin mechanism.

- Improve libtraceevent support for KVM trace events' SVM exit reasons.

- Add libtraceevent plugins for decoding syscalls/sys_enter_futex
and for tlb_flush.

- Ensure sample_period is set for libpfm4 events in 'perf test'.

- Fixup libperf namespacing, to make sure what is in libperf has the
perf_ namespace while what is now only in tools/perf/ doesn't use
that prefix.

Arch specific:

- Improve the testing of vendor events and metrics in 'perf test'.

- Allow no ARM CoreSight hardware tracer sink to be specified on
command line.

- Fix arm_spe_x recording when mixed with other perf events.

- Add s390 idle functions 'psw_idle' and 'psw_idle_exit' to list of
idle symbols.

- List kernel supplied event aliases for arm64 in 'perf list'.

- Add support for extended register capability in PowerPC 9 and 10.

- Add nest IMC power9 metric events.

Miscellaneous:

- No need to setup sample_regs_intr/sample_regs_user for dummy
events.

- Update various copies of kernel headers, some causing perf to
handle new syscalls, MSRs, etc.

- Improve usage of flex and yacc, enabling warnings and addressing
the fallout.

- Add missing '--output' option to 'perf kmem' so that it can pass it
along to 'perf record'.

- 'perf probe' fixes related to adding multiple probes on the same
address for the same event.

- Make 'perf probe' warn if the target function is a GNU indirect
function.

- Remove //anon mmap events from 'perf inject jit' to fix support
for both the ELF-files-for-generated-functions and the perf-PID.map
approaches"

* tag 'perf-tools-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (144 commits)
perf record: Skip side-band event setup if HAVE_LIBBPF_SUPPORT is not set
perf tools powerpc: Add support for extended regs in power10
perf tools powerpc: Add support for extended register capability
tools headers UAPI: Sync drm/i915_drm.h with the kernel sources
tools arch x86: Sync asm/cpufeatures.h with the kernel sources
tools arch x86: Sync the msr-index.h copy with the kernel sources
tools headers UAPI: update linux/in.h copy
tools headers API: Update close_range affected files
perf script: Add 'tod' field to display time of day
perf script: Change the 'enum perf_output_field' enumerators to be 64 bits
perf data: Add support to store time of day in CTF data conversion
perf tools: Move clockid_res_ns under clock struct
perf header: Store clock references for -k/--clockid option
perf tools: Add clockid_name function
perf clockid: Move parse_clockid() to new clockid object
tools lib traceevent: Handle possible strdup() error in tep_add_plugin_path() API
libtraceevent: Fixed description of tep_add_plugin_path() API
libtraceevent: Fixed type in PRINT_FMT_STING
libtraceevent: Fixed broken indentation in parse_ip4_print_args()
libtraceevent: Improve error handling of tep_plugin_add_option() API
...


# 3b5d1afd 03-Aug-2020 Takashi Iwai <tiwai@suse.de>

Merge branch 'for-next' into for-linus


Revision tags: v5.8
# 7c43b0c1 30-Jul-2020 Ian Rogers <irogers@google.com>

perf bench: Add benchmark of find_next_bit

for_each_set_bit, or similar functions like for_each_cpu, may be hot
within the kernel. If many bits were set then one could imagine on Intel
a "bt" instruction with every bit may be faster than the function call
and word length find_next_bit logic. Add a benchmark to measure this.

This benchmark on AMD rome and Intel skylakex shows "bt" is not a good
option except for very small bitmaps.
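
To make the comparison concrete, below is a minimal userspace sketch of
the two traversal styles being timed. This is an illustration only, not
the perf bench source: test_bit() and find_next_bit() here are
simplified stand-ins, and the kernel's find_next_bit() additionally
skips whole zero words rather than probing bit by bit.

#include <stdint.h>
#include <stdio.h>

#define BITS  128
#define WORDS (BITS / 64)

/* simplified stand-ins for the kernel bitmap helpers */
static int test_bit(const uint64_t *map, unsigned int bit)
{
        return (map[bit / 64] >> (bit % 64)) & 1;
}

static unsigned int find_next_bit(const uint64_t *map, unsigned int size,
                                  unsigned int start)
{
        for (; start < size; start++)
                if (test_bit(map, start))
                        return start;
        return size;
}

int main(void)
{
        uint64_t map[WORDS] = { 0x5 };  /* bits 0 and 2 set */
        unsigned int bit, skip_visits = 0, probe_hits = 0;

        /* for_each_set_bit() style: jump from set bit to set bit */
        for (bit = find_next_bit(map, BITS, 0); bit < BITS;
             bit = find_next_bit(map, BITS, bit + 1))
                skip_visits++;

        /* test_bit() loop style: probe every position ("bt" per bit) */
        for (bit = 0; bit < BITS; bit++)
                if (test_bit(map, bit))
                        probe_hits++;

        printf("set bits: %u (skip style) vs %u (probe style over %u positions)\n",
               skip_visits, probe_hits, BITS);
        return 0;
}

The benchmark times many iterations of each style across the bitmap
sizes and set-bit densities shown in the results below.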

Committer testing:

# perf bench
Usage:
perf bench [<common options>] <collection> <benchmark> [<options>]

# List of all available benchmark collections:

sched: Scheduler and IPC benchmarks
syscall: System call benchmarks
mem: Memory access benchmarks
numa: NUMA scheduling and MM benchmarks
futex: Futex stressing benchmarks
epoll: Epoll stressing benchmarks
internals: Perf-internals benchmarks
all: All benchmarks

# perf bench mem

# List of available benchmarks for collection 'mem':

memcpy: Benchmark for memcpy() functions
memset: Benchmark for memset() functions
find_bit: Benchmark for find_bit() functions
all: Run all memory access benchmarks

# perf bench mem find_bit
# Running 'mem/find_bit' benchmark:
100000 operations 1 bits set of 1 bits
Average for_each_set_bit took: 730.200 usec (+- 6.468 usec)
Average test_bit loop took: 366.200 usec (+- 4.652 usec)
100000 operations 1 bits set of 2 bits
Average for_each_set_bit took: 781.000 usec (+- 24.247 usec)
Average test_bit loop took: 550.200 usec (+- 4.152 usec)
100000 operations 2 bits set of 2 bits
Average for_each_set_bit took: 1113.400 usec (+- 112.340 usec)
Average test_bit loop took: 1098.500 usec (+- 182.834 usec)
100000 operations 1 bits set of 4 bits
Average for_each_set_bit took: 843.800 usec (+- 8.772 usec)
Average test_bit loop took: 948.800 usec (+- 10.278 usec)
100000 operations 2 bits set of 4 bits
Average for_each_set_bit took: 1185.800 usec (+- 114.345 usec)
Average test_bit loop took: 1473.200 usec (+- 175.498 usec)
100000 operations 4 bits set of 4 bits
Average for_each_set_bit took: 1769.667 usec (+- 233.177 usec)
Average test_bit loop took: 1864.933 usec (+- 187.470 usec)
100000 operations 1 bits set of 8 bits
Average for_each_set_bit took: 898.000 usec (+- 21.755 usec)
Average test_bit loop took: 1768.400 usec (+- 23.672 usec)
100000 operations 2 bits set of 8 bits
Average for_each_set_bit took: 1244.900 usec (+- 116.396 usec)
Average test_bit loop took: 2201.800 usec (+- 145.398 usec)
100000 operations 4 bits set of 8 bits
Average for_each_set_bit took: 1822.533 usec (+- 231.554 usec)
Average test_bit loop took: 2569.467 usec (+- 168.453 usec)
100000 operations 8 bits set of 8 bits
Average for_each_set_bit took: 2845.100 usec (+- 441.365 usec)
Average test_bit loop took: 3023.300 usec (+- 219.575 usec)
100000 operations 1 bits set of 16 bits
Average for_each_set_bit took: 923.400 usec (+- 17.560 usec)
Average test_bit loop took: 3240.000 usec (+- 16.492 usec)
100000 operations 2 bits set of 16 bits
Average for_each_set_bit took: 1264.300 usec (+- 114.034 usec)
Average test_bit loop took: 3714.400 usec (+- 158.898 usec)
100000 operations 4 bits set of 16 bits
Average for_each_set_bit took: 1817.867 usec (+- 222.199 usec)
Average test_bit loop took: 4015.333 usec (+- 154.162 usec)
100000 operations 8 bits set of 16 bits
Average for_each_set_bit took: 2826.350 usec (+- 433.457 usec)
Average test_bit loop took: 4460.350 usec (+- 210.762 usec)
100000 operations 16 bits set of 16 bits
Average for_each_set_bit took: 4615.600 usec (+- 809.350 usec)
Average test_bit loop took: 5129.960 usec (+- 320.821 usec)
100000 operations 1 bits set of 32 bits
Average for_each_set_bit took: 904.400 usec (+- 14.250 usec)
Average test_bit loop took: 6194.000 usec (+- 29.254 usec)
100000 operations 2 bits set of 32 bits
Average for_each_set_bit took: 1252.700 usec (+- 116.432 usec)
Average test_bit loop took: 6652.400 usec (+- 154.352 usec)
100000 operations 4 bits set of 32 bits
Average for_each_set_bit took: 1824.200 usec (+- 229.133 usec)
Average test_bit loop took: 6961.733 usec (+- 154.682 usec)
100000 operations 8 bits set of 32 bits
Average for_each_set_bit took: 2823.950 usec (+- 432.296 usec)
Average test_bit loop took: 7351.900 usec (+- 193.626 usec)
100000 operations 16 bits set of 32 bits
Average for_each_set_bit took: 4552.560 usec (+- 785.141 usec)
Average test_bit loop took: 7998.360 usec (+- 305.629 usec)
100000 operations 32 bits set of 32 bits
Average for_each_set_bit took: 7557.067 usec (+- 1407.702 usec)
Average test_bit loop took: 9072.400 usec (+- 513.209 usec)
100000 operations 1 bits set of 64 bits
Average for_each_set_bit took: 896.800 usec (+- 14.389 usec)
Average test_bit loop took: 11927.200 usec (+- 68.862 usec)
100000 operations 2 bits set of 64 bits
Average for_each_set_bit took: 1230.400 usec (+- 111.731 usec)
Average test_bit loop took: 12478.600 usec (+- 189.382 usec)
100000 operations 4 bits set of 64 bits
Average for_each_set_bit took: 1844.733 usec (+- 244.826 usec)
Average test_bit loop took: 12911.467 usec (+- 206.246 usec)
100000 operations 8 bits set of 64 bits
Average for_each_set_bit took: 2779.300 usec (+- 413.612 usec)
Average test_bit loop took: 13372.650 usec (+- 239.623 usec)
100000 operations 16 bits set of 64 bits
Average for_each_set_bit took: 4423.920 usec (+- 748.240 usec)
Average test_bit loop took: 13995.800 usec (+- 318.427 usec)
100000 operations 32 bits set of 64 bits
Average for_each_set_bit took: 7580.600 usec (+- 1462.407 usec)
Average test_bit loop took: 15063.067 usec (+- 516.477 usec)
100000 operations 64 bits set of 64 bits
Average for_each_set_bit took: 13391.514 usec (+- 2765.371 usec)
Average test_bit loop took: 16974.914 usec (+- 916.936 usec)
100000 operations 1 bits set of 128 bits
Average for_each_set_bit took: 1153.800 usec (+- 124.245 usec)
Average test_bit loop took: 26959.000 usec (+- 714.047 usec)
100000 operations 2 bits set of 128 bits
Average for_each_set_bit took: 1445.200 usec (+- 113.587 usec)
Average test_bit loop took: 25798.800 usec (+- 512.908 usec)
100000 operations 4 bits set of 128 bits
Average for_each_set_bit took: 1990.933 usec (+- 219.362 usec)
Average test_bit loop took: 25589.400 usec (+- 348.288 usec)
100000 operations 8 bits set of 128 bits
Average for_each_set_bit took: 2963.000 usec (+- 419.487 usec)
Average test_bit loop took: 25690.050 usec (+- 262.025 usec)
100000 operations 16 bits set of 128 bits
Average for_each_set_bit took: 4585.200 usec (+- 741.734 usec)
Average test_bit loop took: 26125.040 usec (+- 274.127 usec)
100000 operations 32 bits set of 128 bits
Average for_each_set_bit took: 7626.200 usec (+- 1404.950 usec)
Average test_bit loop took: 27038.867 usec (+- 442.554 usec)
100000 operations 64 bits set of 128 bits
Average for_each_set_bit took: 13343.371 usec (+- 2686.460 usec)
Average test_bit loop took: 28936.543 usec (+- 883.257 usec)
100000 operations 128 bits set of 128 bits
Average for_each_set_bit took: 23442.950 usec (+- 4880.541 usec)
Average test_bit loop took: 32484.125 usec (+- 1691.931 usec)
100000 operations 1 bits set of 256 bits
Average for_each_set_bit took: 1183.000 usec (+- 32.073 usec)
Average test_bit loop took: 50114.600 usec (+- 198.880 usec)
100000 operations 2 bits set of 256 bits
Average for_each_set_bit took: 1550.000 usec (+- 124.550 usec)
Average test_bit loop took: 50334.200 usec (+- 128.425 usec)
100000 operations 4 bits set of 256 bits
Average for_each_set_bit took: 2164.333 usec (+- 246.359 usec)
Average test_bit loop took: 49959.867 usec (+- 188.035 usec)
100000 operations 8 bits set of 256 bits
Average for_each_set_bit took: 3211.200 usec (+- 454.829 usec)
Average test_bit loop took: 50140.850 usec (+- 176.046 usec)
100000 operations 16 bits set of 256 bits
Average for_each_set_bit took: 5181.640 usec (+- 882.726 usec)
Average test_bit loop took: 51003.160 usec (+- 419.601 usec)
100000 operations 32 bits set of 256 bits
Average for_each_set_bit took: 8369.333 usec (+- 1513.150 usec)
Average test_bit loop took: 52096.700 usec (+- 573.022 usec)
100000 operations 64 bits set of 256 bits
Average for_each_set_bit took: 13866.857 usec (+- 2649.393 usec)
Average test_bit loop took: 53989.600 usec (+- 938.808 usec)
100000 operations 128 bits set of 256 bits
Average for_each_set_bit took: 23588.350 usec (+- 4724.222 usec)
Average test_bit loop took: 57300.625 usec (+- 1625.962 usec)
100000 operations 256 bits set of 256 bits
Average for_each_set_bit took: 42752.200 usec (+- 9202.084 usec)
Average test_bit loop took: 64426.933 usec (+- 3402.326 usec)
100000 operations 1 bits set of 512 bits
Average for_each_set_bit took: 1632.000 usec (+- 229.954 usec)
Average test_bit loop took: 98090.000 usec (+- 1120.435 usec)
100000 operations 2 bits set of 512 bits
Average for_each_set_bit took: 1937.700 usec (+- 148.902 usec)
Average test_bit loop took: 100364.100 usec (+- 1433.219 usec)
100000 operations 4 bits set of 512 bits
Average for_each_set_bit took: 2528.000 usec (+- 243.654 usec)
Average test_bit loop took: 99932.067 usec (+- 955.868 usec)
100000 operations 8 bits set of 512 bits
Average for_each_set_bit took: 3734.100 usec (+- 512.359 usec)
Average test_bit loop took: 98944.750 usec (+- 812.070 usec)
100000 operations 16 bits set of 512 bits
Average for_each_set_bit took: 5551.400 usec (+- 846.605 usec)
Average test_bit loop took: 98691.600 usec (+- 654.753 usec)
100000 operations 32 bits set of 512 bits
Average for_each_set_bit took: 8594.500 usec (+- 1446.072 usec)
Average test_bit loop took: 99176.867 usec (+- 579.990 usec)
100000 operations 64 bits set of 512 bits
Average for_each_set_bit took: 13840.743 usec (+- 2527.055 usec)
Average test_bit loop took: 100758.743 usec (+- 833.865 usec)
100000 operations 128 bits set of 512 bits
Average for_each_set_bit took: 23185.925 usec (+- 4532.910 usec)
Average test_bit loop took: 103786.700 usec (+- 1475.276 usec)
100000 operations 256 bits set of 512 bits
Average for_each_set_bit took: 40322.400 usec (+- 8341.802 usec)
Average test_bit loop took: 109433.378 usec (+- 2742.615 usec)
100000 operations 512 bits set of 512 bits
Average for_each_set_bit took: 71804.540 usec (+- 15436.546 usec)
Average test_bit loop took: 120255.440 usec (+- 5252.777 usec)
100000 operations 1 bits set of 1024 bits
Average for_each_set_bit took: 1859.600 usec (+- 27.969 usec)
Average test_bit loop took: 187676.000 usec (+- 1337.770 usec)
100000 operations 2 bits set of 1024 bits
Average for_each_set_bit took: 2273.600 usec (+- 139.420 usec)
Average test_bit loop took: 188176.000 usec (+- 684.357 usec)
100000 operations 4 bits set of 1024 bits
Average for_each_set_bit took: 2940.400 usec (+- 268.213 usec)
Average test_bit loop took: 189172.600 usec (+- 593.295 usec)
100000 operations 8 bits set of 1024 bits
Average for_each_set_bit took: 4224.200 usec (+- 547.933 usec)
Average test_bit loop took: 190257.250 usec (+- 621.021 usec)
100000 operations 16 bits set of 1024 bits
Average for_each_set_bit took: 6090.560 usec (+- 877.975 usec)
Average test_bit loop took: 190143.880 usec (+- 503.753 usec)
100000 operations 32 bits set of 1024 bits
Average for_each_set_bit took: 9178.800 usec (+- 1475.136 usec)
Average test_bit loop took: 190757.100 usec (+- 494.757 usec)
100000 operations 64 bits set of 1024 bits
Average for_each_set_bit took: 14441.457 usec (+- 2545.497 usec)
Average test_bit loop took: 192299.486 usec (+- 795.251 usec)
100000 operations 128 bits set of 1024 bits
Average for_each_set_bit took: 23623.825 usec (+- 4481.182 usec)
Average test_bit loop took: 194885.550 usec (+- 1300.817 usec)
100000 operations 256 bits set of 1024 bits
Average for_each_set_bit took: 40194.956 usec (+- 8109.056 usec)
Average test_bit loop took: 200259.311 usec (+- 2566.085 usec)
100000 operations 512 bits set of 1024 bits
Average for_each_set_bit took: 70983.560 usec (+- 15074.982 usec)
Average test_bit loop took: 210527.460 usec (+- 4968.980 usec)
100000 operations 1024 bits set of 1024 bits
Average for_each_set_bit took: 136530.345 usec (+- 31584.400 usec)
Average test_bit loop took: 233329.691 usec (+- 10814.036 usec)
100000 operations 1 bits set of 2048 bits
Average for_each_set_bit took: 3077.600 usec (+- 76.376 usec)
Average test_bit loop took: 402154.400 usec (+- 518.571 usec)
100000 operations 2 bits set of 2048 bits
Average for_each_set_bit took: 3508.600 usec (+- 148.350 usec)
Average test_bit loop took: 403814.500 usec (+- 1133.027 usec)
100000 operations 4 bits set of 2048 bits
Average for_each_set_bit took: 4219.333 usec (+- 285.844 usec)
Average test_bit loop took: 404312.533 usec (+- 985.751 usec)
100000 operations 8 bits set of 2048 bits
Average for_each_set_bit took: 5670.550 usec (+- 615.238 usec)
Average test_bit loop took: 405321.800 usec (+- 1038.487 usec)
100000 operations 16 bits set of 2048 bits
Average for_each_set_bit took: 7785.080 usec (+- 992.522 usec)
Average test_bit loop took: 406746.160 usec (+- 1015.478 usec)
100000 operations 32 bits set of 2048 bits
Average for_each_set_bit took: 11163.800 usec (+- 1627.320 usec)
Average test_bit loop took: 406124.267 usec (+- 898.785 usec)
100000 operations 64 bits set of 2048 bits
Average for_each_set_bit took: 16964.629 usec (+- 2806.130 usec)
Average test_bit loop took: 406618.514 usec (+- 798.356 usec)
100000 operations 128 bits set of 2048 bits
Average for_each_set_bit took: 27219.625 usec (+- 4988.458 usec)
Average test_bit loop took: 410149.325 usec (+- 1705.641 usec)
100000 operations 256 bits set of 2048 bits
Average for_each_set_bit took: 45138.578 usec (+- 8831.021 usec)
Average test_bit loop took: 415462.467 usec (+- 2725.418 usec)
100000 operations 512 bits set of 2048 bits
Average for_each_set_bit took: 77450.540 usec (+- 15962.238 usec)
Average test_bit loop took: 426089.180 usec (+- 5171.788 usec)
100000 operations 1024 bits set of 2048 bits
Average for_each_set_bit took: 138023.636 usec (+- 29826.959 usec)
Average test_bit loop took: 446346.636 usec (+- 9904.417 usec)
100000 operations 2048 bits set of 2048 bits
Average for_each_set_bit took: 251072.600 usec (+- 55947.692 usec)
Average test_bit loop took: 484855.983 usec (+- 18970.431 usec)
#

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lore.kernel.org/lkml/20200729220034.1337168-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>


Revision tags: v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1
# c2a08203 08-Mar-2019 Davidlohr Bueso <dave@stgolabs.net>

perf bench: Add basic syscall benchmark

The usefulness of having a standard way of testing syscall performance
has come up from time to time[0]. Furthermore, some of our testing
machinery (such as 'mmtests') already makes use of a simplified version
of the microbenchmark. This patch mainly takes the same idea to measure
syscall throughput compatible with 'perf-bench' via getppid(2), yet
without any of the additional template stuff from Ingo's version (based
on numa.c). The code is identical to what mmtests uses.

[0] https://lore.kernel.org/lkml/20160201074156.GA27156@gmail.com/
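
As a rough sketch of the measurement described above (an approximation
in plain C, not the actual bench/syscall.c code), the benchmark boils
down to timing a tight loop around a cheap, never-failing system call:

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define LOOPS 10000000

int main(void)
{
        struct timeval start, stop;
        double elapsed;
        int i;

        gettimeofday(&start, NULL);
        for (i = 0; i < LOOPS; i++)
                getppid();      /* cheap, never-failing syscall; result ignored */
        gettimeofday(&stop, NULL);

        elapsed = (stop.tv_sec - start.tv_sec) +
                  (stop.tv_usec - start.tv_usec) / 1e6;

        printf("# Executed %d getppid() calls\n", LOOPS);
        printf(" Total time: %.3f [sec]\n\n", elapsed);
        printf(" %f usecs/op\n", elapsed * 1e6 / LOOPS);
        printf(" %.0f ops/sec\n", LOOPS / elapsed);
        return 0;
}

The real implementation wires such a loop into the 'perf bench' option
handling and output formats, as the committer testing below shows.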

Committer notes:

Add missing stdlib.h and unistd.h to get the prototypes for exit() and
getppid().

Committer testing:

$ perf bench
Usage:
perf bench [<common options>] <collection> <benchmark> [<options>]

# List of all available benchmark collections:

sched: Scheduler and IPC benchmarks
syscall: System call benchmarks
mem: Memory access benchmarks
numa: NUMA scheduling and MM benchmarks
futex: Futex stressing benchmarks
epoll: Epoll stressing benchmarks
internals: Perf-internals benchmarks
all: All benchmarks

$
$ perf bench syscall

# List of available benchmarks for collection 'syscall':

basic: Benchmark for basic getppid(2) calls
all: Run all syscall benchmarks

$ perf bench syscall basic
# Running 'syscall/basic' benchmark:
# Executed 10000000 getppid() calls
Total time: 3.679 [sec]

0.367957 usecs/op
2717708 ops/sec
$ perf bench syscall all
# Running syscall/basic benchmark...
# Executed 10000000 getppid() calls
Total time: 3.644 [sec]

0.364456 usecs/op
2743815 ops/sec

$

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lore.kernel.org/lkml/20190308181747.l36zqz2avtivrr3c@linux-r8p5
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>


# 98817a84 30-Jun-2020 Thomas Gleixner <tglx@linutronix.de>

Merge tag 'irqchip-fixes-5.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/urgent

Pull irqchip fixes from Marc Zyngier:

- Fix atomicity of affinity update in the GIC driver
- Don't sleep in atomic when waiting for a GICv4.1 RD to respond
- Fix a couple of typos in user-visible messages


# 77346a70 30-Jun-2020 Joerg Roedel <jroedel@suse.de>

Merge tag 'v5.8-rc3' into arm/qcom

Linux 5.8-rc3


# 60e9eabf 29-Jun-2020 Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

Backmerge remote-tracking branch 'drm/drm-next' into drm-misc-next

Some conflicts with ttm_bo->offset removal, but drm-misc-next needs updating to v5.8.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>


# 0f69403d 25-Jun-2020 Jani Nikula <jani.nikula@intel.com>

Merge drm/drm-next into drm-intel-next-queued

Catch up with upstream, in particular to get c1e8d7c6a7a6 ("mmap locking
API: convert mmap_sem comments").

Signed-off-by: Jani Nikula <jani.nikula@intel.com>


# 6870112c 17-Jun-2020 Mark Brown <broonie@kernel.org>

Merge tag 'v5.8-rc1' into regulator-5.8

Linux 5.8-rc1


# 07c7b547 16-Jun-2020 Tony Lindgren <tony@atomide.com>

Merge tag 'v5.8-rc1' into fixes

Linux 5.8-rc1


# 4b3c1f1b 16-Jun-2020 Thomas Zimmermann <tzimmermann@suse.de>

Merge v5.8-rc1 into drm-misc-fixes

Beginning a new release cycle for what will become v5.8. Updating
drm-misc-fixes accordingly.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>


# 8440d4a7 12-Jun-2020 Rob Herring <robh@kernel.org>

Merge branch 'dt/schema-cleanups' into dt/linus


# f77d26a9 11-Jun-2020 Thomas Gleixner <tglx@linutronix.de>

Merge branch 'x86/entry' into ras/core

to fixup conflicts in arch/x86/kernel/cpu/mce/core.c so MCE specific follow
up patches can be applied without creating a horrible merge conflict
afterwards.


# 8dd06ef3 06-Jun-2020 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge branch 'next' into for-linus

Prepare input updates for 5.8 merge window.


# a7092c82 01-Jun-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'perf-core-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf updates from Ingo Molnar:
"Kernel side changes:

- Add AMD Fam17h RAPL support

- Introduce CAP_PERFMON to kernel and user space

- Add Zhaoxin CPU support

- Misc fixes and cleanups

Tooling changes:

- perf record:

Introduce '--switch-output-event' to use arbitrary events to be
set up and read from a side band thread; when they take place, a
signal is sent to the main 'perf record' thread, reusing the core
of '--switch-output' to take perf.data snapshots from the ring
buffer used for '--overwrite', e.g.:

# perf record --overwrite -e sched:* \
--switch-output-event syscalls:*connect* \
workload

will take perf.data.YYYYMMDDHHMMSS snapshots up to around the
connect syscalls.

Add '--num-synthesize-threads' option to control degree of
parallelism of the synthesize_mmap() code which is scanning
/proc/PID/task/PID/maps and can be time consuming. This mimics
pre-existing behaviour in 'perf top'.

- perf bench:

Add a multi-threaded synthesize benchmark and kallsyms parsing
benchmark.

- Intel PT support:

Stitch LBR records from multiple samples to get deeper backtraces,
there are caveats, see the csets for details.

Allow using Intel PT to synthesize callchains for regular events.

Add support for synthesizing branch stacks for regular events
(cycles, instructions, etc) from Intel PT data.

Misc changes:

- Updated perf vendor events for power9 and Coresight.

- Add flamegraph.py script via 'perf flamegraph'

- Misc other changes, fixes and cleanups - see the Git log for details

Also, since over the last couple of years perf tooling has matured and
decoupled from the kernel perf changes to a large degree, going
forward Arnaldo is going to send perf tooling changes via direct pull
requests"

* tag 'perf-core-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (163 commits)
perf/x86/rapl: Add AMD Fam17h RAPL support
perf/x86/rapl: Make perf_probe_msr() more robust and flexible
perf/x86/rapl: Flip logic on default events visibility
perf/x86/rapl: Refactor to share the RAPL code between Intel and AMD CPUs
perf/x86/rapl: Move RAPL support to common x86 code
perf/core: Replace zero-length array with flexible-array
perf/x86: Replace zero-length array with flexible-array
perf/x86/intel: Add more available bits for OFFCORE_RESPONSE of Intel Tremont
perf/x86/rapl: Add Ice Lake RAPL support
perf flamegraph: Use /bin/bash for report and record scripts
perf cs-etm: Move definition of 'traceid_list' global variable from header file
libsymbols kallsyms: Move hex2u64 out of header
libsymbols kallsyms: Parse using io api
perf bench: Add kallsyms parsing
perf: cs-etm: Update to build with latest opencsd version.
perf symbol: Fix kernel symbol address display
perf inject: Rename perf_evsel__*() operating on 'struct evsel *' to evsel__*()
perf annotate: Rename perf_evsel__*() operating on 'struct evsel *' to evsel__*()
perf trace: Rename perf_evsel__*() operating on 'struct evsel *' to evsel__*()
perf script: Rename perf_evsel__*() operating on 'struct evsel *' to evsel__*()
...


# d053cf0d 01-Jun-2020 Petr Mladek <pmladek@suse.com>

Merge branch 'for-5.8' into for-linus


# 1f422417 23-May-2020 Daniel Lezcano <daniel.lezcano@linaro.org>

Merge branch 'timers/drivers/timer-ti' into timers/drivers/next


# 0fdc50df 12-May-2020 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge tag 'v5.6' into next

Sync up with mainline to get device tree and other changes.


# 059c6d68 08-May-2020 Thomas Gleixner <tglx@linutronix.de>

Merge tag 'perf-core-for-mingo-5.8-20200506' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf updates from Arnaldo:

perf/core improvements and fixes:

perf record:

- Introduce --switch-output-event to use arbitrary events to be set up
and read from a side band thread; when they take place, a signal
is sent to the main 'perf record' thread, reusing the --switch-output
code to take perf.data snapshots from the --overwrite ring buffer, e.g.:

# perf record --overwrite -e sched:* \
--switch-output-event syscalls:*connect* \
workload

will take perf.data.YYYYMMDDHHMMSS snapshots up to around the
connect syscalls.

Stephane Eranian:

- Add --num-synthesize-threads option to control degree of parallelism of the
synthesize_mmap() code which is scanning /proc/PID/task/PID/maps and can be
time consuming. This mimics pre-existing behaviour in 'perf top'.

Intel PT:

Adrian Hunter:

- Add support for synthesizing branch stacks for regular events (cycles,
instructions, etc) from Intel PT data.

perf bench:

Ian Rogers:

- Add a multi-threaded synthesize benchmark.

- Add kallsyms parsing benchmark.

Tommi Rantala:

- Fix div-by-zero if runtime is zero.

perf synthetic events:

- Remove use of sscanf from /proc reading when parsing pre-existing
threads to generate synthetic PERF_RECORD_{FORK,MMAP,COMM,etc} events.

tools api:

- Add a lightweight buffered reading API.

libsymbols:

- Parse kallsyms using new lightweight buffered reading io API.

perf parse-events:

- Fix memory leaks found on parse_events.

perf mem2node:

- Avoid double free related to realloc().

perf stat:

Jin Yao:

- Zero all the 'ena' and 'run' array slot stats for interval mode.

- Improve runtime stat for interval mode

Kajol Jain:

- Enable Hz/hz printing for --metric-only option

- Enhance JSON/metric infrastructure to handle "?".

perf tests:

Kajol Jain:

- Added test for runtime param in metric expression.

Tommi Rantala:

- Fix data path in the session topology test.

perf vendor events power9:

Kajol Jain:

- Add hv_24x7 socket/chip level metric events

Coresight:

Leo Yan:

- Move definition of 'traceid_list' global variable from header file.

Mike Leach:

- Update to build with latest opencsd version.

perf pmu:

Shaokun Zhang:

- Fix function name in comment: it's get_cpuid_str(), not get_cpustr()

Stephane Eranian:

- Add perf_pmu__find_by_type() helper

perf script:

Stephane Eranian:

- Remove extraneous newline in perf_sample__fprintf_regs().

Ian Rogers:

- Avoid NULL dereference on symbol.

tools feature:

Stephane Eranian:

- Add support for detecting libpfm4.

perf symbol:

Thomas Richter:

- Fix kernel symbol address display in TUI verbose mode.

perf cgroup:

Tommi Rantala:

- Avoid needless closing of unopened fd

libperf:

He Zhe:

- Add NULL pointer check for cpu_map iteration and NULL
assignment for all_cpus.

Ian Rogers:

- Fix a refcount leak in evlist method.

Arnaldo Carvalho de Melo:

- Rename the code in tools/perf/util, i.e. perf tooling specific, that
operates on 'struct evsel' to evsel__, leaving the perf_evsel__
namespace for the routines in tools/lib/perf/ that operate on
'struct perf_evsel'.

tools/perf specific libraries:

Konstantin Khlebnikov:

- Fix reading new topology attribute "core_cpus"

- Simplify checking if SMT is active.

perf flamegraph:

Arnaldo Carvalho de Melo:

- Use /bin/bash for report and record scripts, just like all other
such scripts, fixing a package dependency bug in a Linaro
OpenEmbedded build checker.

perf evlist:

Jagadeesh Pagadala:

- Remove duplicate headers.

Miscellaneous:

Zou Wei:

- Remove unneeded semicolon in libtraceevent, 'perf c2c' and others.

- Fix warning assignment of 0/1 to bool variable in 'perf report'

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>


# 51876bd4 02-May-2020 Ian Rogers <irogers@google.com>

perf bench: Add kallsyms parsing

Add a benchmark for kallsyms parsing. Example output:

Running 'internals/kallsyms-parse' benchmark:
Average kallsyms__parse took: 103.971 ms (+- 0.121 ms)
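
For context on the work being timed, the sketch below is a rough
stand-in using plain stdio, not the tools/lib/symbol kallsyms__parse()
code that the benchmark actually exercises: it makes one pass over
/proc/kallsyms, whose lines have the form "address type name [module]",
and reports how long the parse took. The benchmark repeats such a pass
and averages the times.

#include <stdio.h>
#include <time.h>

int main(void)
{
        char line[512], name[256], type;
        unsigned long long addr, symbols = 0;
        struct timespec t0, t1;
        FILE *fp = fopen("/proc/kallsyms", "r");

        if (!fp) {
                perror("/proc/kallsyms");
                return 1;
        }

        clock_gettime(CLOCK_MONOTONIC, &t0);
        while (fgets(line, sizeof(line), fp))
                if (sscanf(line, "%llx %c %255s", &addr, &type, name) == 3)
                        symbols++;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        fclose(fp);

        printf("parsed %llu symbols in %.3f ms\n", symbols,
               (t1.tv_sec - t0.tv_sec) * 1e3 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6);
        return 0;
}

(Addresses read as zero for unprivileged users when kptr_restrict is
set, but the parsing cost being measured is the same.)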

Committer testing:

Test Machine: AMD Ryzen 5 3600X 6-Core Processor

[root@five ~]# perf bench internals kallsyms-parse
# Running 'internals/kallsyms-parse' benchmark:
Average kallsyms__parse took: 79.692 ms (+- 0.101 ms)
[root@five ~]# perf stat -r5 perf bench internals kallsyms-parse
# Running 'internals/kallsyms-parse' benchmark:
Average kallsyms__parse took: 80.563 ms (+- 0.079 ms)
# Running 'internals/kallsyms-parse' benchmark:
Average kallsyms__parse took: 81.046 ms (+- 0.155 ms)
# Running 'internals/kallsyms-parse' benchmark:
Average kallsyms__parse took: 80.874 ms (+- 0.104 ms)
# Running 'internals/kallsyms-parse' benchmark:
Average kallsyms__parse took: 81.173 ms (+- 0.133 ms)
# Running 'internals/kallsyms-parse' benchmark:
Average kallsyms__parse took: 81.169 ms (+- 0.074 ms)

Performance counter stats for 'perf bench internals kallsyms-parse' (5 runs):

8,093.54 msec task-clock # 0.999 CPUs utilized ( +- 0.14% )
3,165 context-switches # 0.391 K/sec ( +- 0.18% )
10 cpu-migrations # 0.001 K/sec ( +- 23.13% )
744 page-faults # 0.092 K/sec ( +- 0.21% )
34,551,564,954 cycles # 4.269 GHz ( +- 0.05% ) (83.33%)
1,160,584,308 stalled-cycles-frontend # 3.36% frontend cycles idle ( +- 1.60% ) (83.33%)
14,974,323,985 stalled-cycles-backend # 43.34% backend cycles idle ( +- 0.24% ) (83.33%)
58,712,905,705 instructions # 1.70 insn per cycle
# 0.26 stalled cycles per insn ( +- 0.01% ) (83.34%)
14,136,433,778 branches # 1746.632 M/sec ( +- 0.01% ) (83.33%)
141,943,217 branch-misses # 1.00% of all branches ( +- 0.04% ) (83.33%)

8.1040 +- 0.0115 seconds time elapsed ( +- 0.14% )

[root@five ~]#

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lore.kernel.org/lkml/20200501221315.54715-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
