# SPDX-License-Identifier: GPL-2.0-only
#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
        bool

config NOP_TRACER
        bool

config HAVE_RETHOOK
        bool

config RETHOOK
        bool
        depends on HAVE_RETHOOK
        help
          Enable the generic return hooking feature. This is an internal
          API, which will be used by other function-entry hooking
          features like fprobe and kprobes.

config HAVE_FUNCTION_TRACER
        bool
        help
          See Documentation/trace/ftrace-design.rst

config HAVE_FUNCTION_GRAPH_TRACER
        bool
        help
          See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE
        bool
        help
          See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE_WITH_REGS
        bool

config HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
        bool

config HAVE_DYNAMIC_FTRACE_WITH_ARGS
        bool
        help
          If this is set, then arguments and the stack can be found from
          the pt_regs passed into the function callback regs parameter
          by default, even without setting the REGS flag in the ftrace_ops.
          This allows for use of regs_get_kernel_argument() and
          kernel_stack_pointer().

config HAVE_FTRACE_MCOUNT_RECORD
        bool
        help
          See Documentation/trace/ftrace-design.rst

config HAVE_SYSCALL_TRACEPOINTS
        bool
        help
          See Documentation/trace/ftrace-design.rst

config HAVE_FENTRY
        bool
        help
          Arch supports the gcc options -pg with -mfentry

config HAVE_NOP_MCOUNT
        bool
        help
          Arch supports the gcc options -pg with -mrecord-mcount and -nop-mcount

config HAVE_OBJTOOL_MCOUNT
        bool
        help
          Arch supports objtool --mcount

config HAVE_C_RECORDMCOUNT
        bool
        help
          C version of recordmcount available?

config HAVE_BUILDTIME_MCOUNT_SORT
        bool
        help
          An architecture selects this if it sorts the mcount_loc section
          at build time.

config BUILDTIME_MCOUNT_SORT
        bool
        default y
        depends on HAVE_BUILDTIME_MCOUNT_SORT && DYNAMIC_FTRACE
        help
          Sort the mcount_loc section at build time.

config TRACER_MAX_TRACE
        bool

config TRACE_CLOCK
        bool

config RING_BUFFER
        bool
        select TRACE_CLOCK
        select IRQ_WORK

config EVENT_TRACING
        select CONTEXT_SWITCH_TRACER
        select GLOB
        bool

config CONTEXT_SWITCH_TRACER
        bool

config RING_BUFFER_ALLOW_SWAP
        bool
        help
          Allow the use of ring_buffer_swap_cpu.
          Adds a very slight overhead to tracing when enabled.

config PREEMPTIRQ_TRACEPOINTS
        bool
        depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
        select TRACING
        default y
        help
          Create preempt/irq toggle tracepoints if needed, so that other parts
          of the kernel can use them to generate or add hooks to them.

# All tracer options should select GENERIC_TRACER. The options that are
# enabled by all tracers (the context switch and event tracers) select TRACING
# instead. This allows those options to appear when no other tracer is
# selected, but to stay hidden when something else selects them. The two
# options GENERIC_TRACER and TRACING are needed to avoid circular dependencies
# while accomplishing this hiding of the automatic options.
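#
# As a hedged sketch of the convention described above (the option name below
# is hypothetical and not part of this file), a new tracer option would be
# declared as:
#
#	config MY_EXAMPLE_TRACER
#		bool "Example tracer"
#		select GENERIC_TRACER
#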
config TRACING
        bool
        select RING_BUFFER
        select STACKTRACE if STACKTRACE_SUPPORT
        select TRACEPOINTS
        select NOP_TRACER
        select BINARY_PRINTF
        select EVENT_TRACING
        select TRACE_CLOCK

config GENERIC_TRACER
        bool
        select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
        bool
        depends on TRACE_IRQFLAGS_SUPPORT
        depends on STACKTRACE_SUPPORT
        default y

menuconfig FTRACE
        bool "Tracers"
        depends on TRACING_SUPPORT
        default y if DEBUG_KERNEL
        help
          Enable the kernel tracing infrastructure.

if FTRACE

config BOOTTIME_TRACING
        bool "Boot-time Tracing support"
        depends on TRACING
        select BOOT_CONFIG
        help
          Enable developers to set up the ftrace subsystem via a supplemental
          kernel cmdline at boot time, for debugging (tracing) driver
          initialization and the boot process.

config FUNCTION_TRACER
        bool "Kernel Function Tracer"
        depends on HAVE_FUNCTION_TRACER
        select KALLSYMS
        select GENERIC_TRACER
        select CONTEXT_SWITCH_TRACER
        select GLOB
        select TASKS_RCU if PREEMPTION
        select TASKS_RUDE_RCU
        help
          Enable the kernel to trace every kernel function. This is done
          by using a compiler feature to insert a small, 5-byte No-Operation
          instruction at the beginning of every kernel function. This NOP
          sequence is then dynamically patched into a tracer call when
          tracing is enabled by the administrator. If it is disabled at
          runtime (the bootup default), the overhead of the instructions is
          very small and not measurable even in micro-benchmarks.

config FUNCTION_GRAPH_TRACER
        bool "Kernel Function Graph Tracer"
        depends on HAVE_FUNCTION_GRAPH_TRACER
        depends on FUNCTION_TRACER
        depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
        default y
        help
          Enable the kernel to trace a function at both its return
          and its entry.
          Its first purpose is to trace the duration of functions and
          draw a call graph for each thread with some information like
          the return value. This is done by saving the current return
          address in a stack of calls on the current task structure.

config DYNAMIC_FTRACE
        bool "enable/disable function tracing dynamically"
        depends on FUNCTION_TRACER
        depends on HAVE_DYNAMIC_FTRACE
        default y
        help
          This option will modify all the calls to function tracing
          dynamically (it will patch them out of the binary image and
          replace them with a No-Op instruction) on boot up. During
          compile time, a table is made of all the locations that ftrace
          can function trace, and this table is linked into the kernel
          image. When this is enabled, functions can be individually
          enabled, and the functions not enabled will not affect
          the performance of the system.

          See the files in /sys/kernel/debug/tracing:

            available_filter_functions
            set_ftrace_filter
            set_ftrace_notrace

          This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
          otherwise has native performance as long as no tracing is active.
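#
# A usage sketch for the filter files named above (assuming tracefs is
# mounted at /sys/kernel/debug/tracing; the 'vfs_*' pattern is only an
# illustration):
#
#	echo 'vfs_*' > /sys/kernel/debug/tracing/set_ftrace_filter
#	echo function > /sys/kernel/debug/tracing/current_tracer
#	cat /sys/kernel/debug/tracing/trace
#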
config DYNAMIC_FTRACE_WITH_REGS
        def_bool y
        depends on DYNAMIC_FTRACE
        depends on HAVE_DYNAMIC_FTRACE_WITH_REGS

config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
        def_bool y
        depends on DYNAMIC_FTRACE_WITH_REGS
        depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS

config DYNAMIC_FTRACE_WITH_ARGS
        def_bool y
        depends on DYNAMIC_FTRACE
        depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS

config FPROBE
        bool "Kernel Function Probe (fprobe)"
        depends on FUNCTION_TRACER
        depends on DYNAMIC_FTRACE_WITH_REGS
        depends on HAVE_RETHOOK
        select RETHOOK
        default n
        help
          This option enables the kernel function probe (fprobe) based on
          ftrace. fprobe is similar to kprobes, but probes only kernel
          function entries and exits. A single fprobe can also probe
          multiple functions.

          If unsure, say N.

config FUNCTION_PROFILER
        bool "Kernel function profiler"
        depends on FUNCTION_TRACER
        default n
        help
          This option enables the kernel function profiler. A file is created
          in debugfs called function_profile_enabled which defaults to zero.
          When a 1 is echoed into this file profiling begins, and when a
          zero is entered, profiling stops. A "functions" file is created in
          the trace_stat directory; this file shows the list of functions that
          have been hit and their counters.

          If in doubt, say N.

config STACK_TRACER
        bool "Trace max stack"
        depends on HAVE_FUNCTION_TRACER
        select FUNCTION_TRACER
        select STACKTRACE
        select KALLSYMS
        help
          This special tracer records the maximum stack footprint of the
          kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

          This tracer works by hooking into every function call that the
          kernel executes, and keeping a maximum stack depth value and
          stack-trace saved. If this is configured with DYNAMIC_FTRACE
          then it will not have any overhead while the stack tracer
          is disabled.

          To enable the stack tracer on bootup, pass in 'stacktrace'
          on the kernel command line.

          The stack tracer can also be enabled or disabled via the
          sysctl kernel.stack_tracer_enabled

          Say N if unsure.

config TRACE_PREEMPT_TOGGLE
        bool
        help
          Enables hooks which will be called when preemption is first disabled,
          and last enabled.

config IRQSOFF_TRACER
        bool "Interrupts-off Latency Tracer"
        default n
        depends on TRACE_IRQFLAGS_SUPPORT
        select TRACE_IRQFLAGS
        select GENERIC_TRACER
        select TRACER_MAX_TRACE
        select RING_BUFFER_ALLOW_SWAP
        select TRACER_SNAPSHOT
        select TRACER_SNAPSHOT_PER_CPU_SWAP
        help
          This option measures the time spent in irqs-off critical
          sections, with microsecond accuracy.

          The default measurement method is a maximum search, which is
          disabled by default and can be runtime (re-)started
          via:

              echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

          (Note that kernel size and overhead increase with this option
          enabled. This option and the preempt-off timing option can be
          used together or separately.)
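#
# A usage sketch for the maximum-latency search described above (paths assume
# tracefs mounted at /sys/kernel/debug/tracing):
#
#	echo irqsoff > /sys/kernel/debug/tracing/current_tracer
#	echo 0 > /sys/kernel/debug/tracing/tracing_max_latency
#	cat /sys/kernel/debug/tracing/tracing_max_latency
#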
config PREEMPT_TRACER
        bool "Preemption-off Latency Tracer"
        default n
        depends on PREEMPTION
        select GENERIC_TRACER
        select TRACER_MAX_TRACE
        select RING_BUFFER_ALLOW_SWAP
        select TRACER_SNAPSHOT
        select TRACER_SNAPSHOT_PER_CPU_SWAP
        select TRACE_PREEMPT_TOGGLE
        help
          This option measures the time spent in preemption-off critical
          sections, with microsecond accuracy.

          The default measurement method is a maximum search, which is
          disabled by default and can be runtime (re-)started
          via:

              echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

          (Note that kernel size and overhead increase with this option
          enabled. This option and the irqs-off timing option can be
          used together or separately.)

config SCHED_TRACER
        bool "Scheduling Latency Tracer"
        select GENERIC_TRACER
        select CONTEXT_SWITCH_TRACER
        select TRACER_MAX_TRACE
        select TRACER_SNAPSHOT
        help
          This tracer tracks the latency of the highest priority task
          to be scheduled in, starting from the point it has woken up.

config HWLAT_TRACER
        bool "Tracer to detect hardware latencies (like SMIs)"
        select GENERIC_TRACER
        help
          When enabled, this tracer will create one or more kernel threads,
          depending on what the cpumask file is set to, with each thread
          spinning in a loop looking for interruptions caused by
          something other than the kernel. For example, if a
          System Management Interrupt (SMI) takes a noticeable amount of
          time, this tracer will detect it. This is useful for testing
          whether a system is reliable for Real Time tasks.

          Some files are created in the tracing directory when this
          is enabled:

            hwlat_detector/width - time in usecs for how long to spin for
            hwlat_detector/window - time in usecs between the start of each
                                    iteration

          A kernel thread is created that will spin with interrupts disabled
          for "width" microseconds in every "window" cycle. It will not spin
          for "window - width" microseconds, during which the system can
          continue to operate.

          The output will appear in the trace and trace_pipe files.

          When the tracer is not running, it has no effect on the system,
          but when it is running, it can cause the system to be
          periodically non-responsive. Do not run this tracer on a
          production system.

          To enable this tracer, echo "hwlat" into the current_tracer
          file. Every time a latency is greater than tracing_thresh, it will
          be recorded into the ring buffer.

config OSNOISE_TRACER
        bool "OS Noise tracer"
        select GENERIC_TRACER
        help
          In the context of high-performance computing (HPC), Operating
          System Noise (osnoise) refers to the interference experienced by an
          application due to activities inside the operating system. In the
          context of Linux, NMIs, IRQs, SoftIRQs, and any other system thread
          can cause noise to the system. Moreover, hardware-related jobs can
          also cause noise, for example, via SMIs.

          The osnoise tracer leverages the hwlat_detector by running a similar
          loop with preemption, SoftIRQs and IRQs enabled, thus allowing all
          the sources of osnoise during its execution. The osnoise tracer takes
          note of the entry and exit point of any source of interference,
          increasing a per-cpu interference counter. It saves an interference
          counter for each source of interference.
          The interference counter for NMI, IRQs, SoftIRQs, and threads is
          increased anytime the tool observes these interferences' entry
          events. When noise happens without any interference from the
          operating system level, the hardware noise counter increases,
          pointing to a hardware-related noise. In this way, osnoise can
          account for any source of interference. At the end of the period,
          the osnoise tracer prints the sum of all noise, the max single
          noise, the percentage of CPU available for the thread, and the
          counters for the noise sources.

          In addition to the tracer, a set of tracepoints was added to
          facilitate the identification of the osnoise source.

          The output will appear in the trace and trace_pipe files.

          To enable this tracer, echo "osnoise" into the current_tracer
          file.

config TIMERLAT_TRACER
        bool "Timerlat tracer"
        select OSNOISE_TRACER
        select GENERIC_TRACER
        help
          The timerlat tracer aims to help preemptive kernel developers
          find sources of wakeup latencies of real-time threads.

          The tracer creates a per-cpu kernel thread with real-time priority.
          The tracer thread sets a periodic timer to wake itself up, and goes
          to sleep waiting for the timer to fire. At the wakeup, the thread
          then computes a wakeup latency value as the difference between
          the current time and the absolute time that the timer was set
          to expire.

          The tracer prints two lines at every activation. The first is the
          timer latency observed at the hardirq context before the
          activation of the thread. The second is the timer latency observed
          by the thread, which is the same level that cyclictest reports. The
          ACTIVATION ID field serves to relate the irq execution to its
          respective thread execution.

          The tracer is built on top of the osnoise tracer, and the osnoise:
          events can be used to trace the source of interference from NMI,
          IRQs and other threads. It also enables the capture of the
          stacktrace at the IRQ context, which helps to identify the code
          path that can cause thread delay.

config MMIOTRACE
        bool "Memory mapped IO tracing"
        depends on HAVE_MMIOTRACE_SUPPORT && PCI
        select GENERIC_TRACER
        help
          Mmiotrace traces Memory Mapped I/O access and is meant for
          debugging and reverse engineering. It is called from the ioremap
          implementation and works via page faults. Tracing is disabled by
          default and can be enabled at run-time.

          See Documentation/trace/mmiotrace.rst.
          If you are not helping to develop drivers, say N.

config ENABLE_DEFAULT_TRACERS
        bool "Trace process context switches and events"
        depends on !GENERIC_TRACER
        select TRACING
        help
          This tracer hooks to various trace points in the kernel,
          allowing the user to pick and choose which trace point they
          want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
        bool "Trace syscalls"
        depends on HAVE_SYSCALL_TRACEPOINTS
        select GENERIC_TRACER
        select KALLSYMS
        help
          Basic tracer to catch the syscall entry and exit events.
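#
# A usage sketch for syscall tracing via the syscalls: event group (openat is
# only an illustrative example; paths assume tracefs mounted at
# /sys/kernel/debug/tracing):
#
#	echo 1 > /sys/kernel/debug/tracing/events/syscalls/sys_enter_openat/enable
#	cat /sys/kernel/debug/tracing/trace_pipe
#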
config TRACER_SNAPSHOT
        bool "Create a snapshot trace buffer"
        select TRACER_MAX_TRACE
        help
          Allow tracing users to take a snapshot of the current buffer using
          the ftrace interface, e.g.:

            echo 1 > /sys/kernel/debug/tracing/snapshot
            cat snapshot

config TRACER_SNAPSHOT_PER_CPU_SWAP
        bool "Allow snapshot to swap per CPU"
        depends on TRACER_SNAPSHOT
        select RING_BUFFER_ALLOW_SWAP
        help
          Allow doing a snapshot of a single CPU buffer instead of a
          full swap (all buffers). If this is set, then the following is
          allowed:

            echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot

          After this, only the tracing buffer for CPU 2 is swapped with
          the main tracing buffer, and the other CPU buffers remain the same.

          When this is enabled, it adds a little more overhead to
          trace recording, as it needs some checks to synchronize
          recording with swaps. But this does not affect the performance
          of the overall system. This is enabled by default when the preempt
          or irq latency tracers are enabled, as those need to swap as well
          and already add the overhead (plus a lot more).

config TRACE_BRANCH_PROFILING
        bool
        select GENERIC_TRACER

choice
        prompt "Branch Profiling"
        default BRANCH_PROFILE_NONE
        help
          Branch profiling is a software profiler. It will add hooks
          into the C conditionals to test which path a branch takes.

          The likely/unlikely profiler only looks at the conditions that
          are annotated with a likely or unlikely macro.

          The "all branch" profiler will profile every if-statement in the
          kernel. This profiler will also enable the likely/unlikely
          profiler.

          Either of the above profilers adds a bit of overhead to the system.
          If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
        bool "No branch profiling"
        help
          No branch profiling. Branch profiling adds a bit of overhead.
          Only enable it if you want to analyse the branching behavior.
          Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
        bool "Trace likely/unlikely profiler"
        select TRACE_BRANCH_PROFILING
        help
          This tracer profiles all likely and unlikely macros
          in the kernel. It will display the results in:

            /sys/kernel/debug/tracing/trace_stat/branch_annotated

          Note: this will add a significant overhead; only turn this
          on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
        bool "Profile all if conditionals" if !FORTIFY_SOURCE
        select TRACE_BRANCH_PROFILING
        help
          This tracer profiles all branch conditions. Every if ()
          taken in the kernel is recorded whether it hit or missed.
          The results will be displayed in:

            /sys/kernel/debug/tracing/trace_stat/branch_all

          This option also enables the likely/unlikely profiler.

          This configuration, when enabled, will impose a great overhead
          on the system. This should only be enabled when the system
          is to be analyzed in much detail.
endchoice

config TRACING_BRANCHES
        bool
        help
          Selected by tracers that will trace the likely and unlikely
          conditions. This prevents the tracers themselves from being
          profiled. Profiling the tracing infrastructure can only happen
          when the likelys and unlikelys are not being traced.
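#
# A usage sketch for reading the branch-profiler results named in the help
# texts above (assuming tracefs is mounted at /sys/kernel/debug/tracing):
#
#	cat /sys/kernel/debug/tracing/trace_stat/branch_annotated
#	cat /sys/kernel/debug/tracing/trace_stat/branch_all
#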
config BRANCH_TRACER
        bool "Trace likely/unlikely instances"
        depends on TRACE_BRANCH_PROFILING
        select TRACING_BRANCHES
        help
          This traces the events of likely and unlikely condition
          calls in the kernel. The difference between this and the
          "Trace likely/unlikely profiler" is that this is not a
          histogram of the callers, but actually places the calling
          events into a running trace buffer to see when and where the
          events happened, as well as their results.

          Say N if unsure.

config BLK_DEV_IO_TRACE
        bool "Support for tracing block IO actions"
        depends on SYSFS
        depends on BLOCK
        select RELAY
        select DEBUG_FS
        select TRACEPOINTS
        select GENERIC_TRACER
        select STACKTRACE
        help
          Say Y here if you want to be able to trace the block layer actions
          on a given queue. Tracing allows you to see any traffic happening
          on a block device queue. For more information (and the userspace
          support tools needed), fetch the blktrace tools from:

            git://git.kernel.dk/blktrace.git

          Tracing is also possible using the ftrace interface, e.g.:

            echo 1 > /sys/block/sda/sda1/trace/enable
            echo blk > /sys/kernel/debug/tracing/current_tracer
            cat /sys/kernel/debug/tracing/trace_pipe

          If unsure, say N.

config KPROBE_EVENTS
        depends on KPROBES
        depends on HAVE_REGS_AND_STACK_ACCESS_API
        bool "Enable kprobes-based dynamic events"
        select TRACING
        select PROBE_EVENTS
        select DYNAMIC_EVENTS
        default y
        help
          This allows the user to add tracing events (similar to tracepoints)
          on the fly via the ftrace interface. See
          Documentation/trace/kprobetrace.rst for more details.

          Those events can be inserted wherever kprobes can probe, and record
          various register and memory values.

          This option is also required by the perf-probe subcommand of perf
          tools. If you want to use perf tools, this option is strongly
          recommended.

config KPROBE_EVENTS_ON_NOTRACE
        bool "Do NOT protect notrace functions from kprobe events"
        depends on KPROBE_EVENTS
        depends on DYNAMIC_FTRACE
        default n
        help
          This is only for developers who want to debug ftrace itself
          using kprobe events.

          If kprobes can use ftrace instead of a breakpoint, ftrace-related
          functions are protected from kprobe events to prevent infinite
          recursion or any unexpected execution path which leads to a kernel
          crash.

          This option disables such protection and allows you to put kprobe
          events on ftrace functions for debugging ftrace by itself.
          Note that this might let you shoot yourself in the foot.

          If unsure, say N.

config UPROBE_EVENTS
        bool "Enable uprobes-based dynamic events"
        depends on ARCH_SUPPORTS_UPROBES
        depends on MMU
        depends on PERF_EVENTS
        select UPROBES
        select PROBE_EVENTS
        select DYNAMIC_EVENTS
        select TRACING
        default y
        help
          This allows the user to add tracing events on top of userspace
          dynamic events (similar to tracepoints) on the fly via the trace
          events interface. Those events can be inserted wherever uprobes
          can probe, and record various registers.
          This option is required if you plan to use the perf-probe
          subcommand of perf tools on user space applications.
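#
# A usage sketch for defining a kprobe event on the fly (the probe name,
# probed symbol, and fetch argument below are only illustrative; see
# Documentation/trace/kprobetrace.rst for the syntax):
#
#	echo 'p:myopen do_sys_openat2 dfd=%di' >> /sys/kernel/debug/tracing/kprobe_events
#	echo 1 > /sys/kernel/debug/tracing/events/kprobes/myopen/enable
#	cat /sys/kernel/debug/tracing/trace_pipe
#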
config BPF_EVENTS
        depends on BPF_SYSCALL
        depends on (KPROBE_EVENTS || UPROBE_EVENTS) && PERF_EVENTS
        bool
        default y
        help
          This allows the user to attach BPF programs to kprobe, uprobe, and
          tracepoint events.

config DYNAMIC_EVENTS
        def_bool n

config PROBE_EVENTS
        def_bool n

config BPF_KPROBE_OVERRIDE
        bool "Enable BPF programs to override a kprobed function"
        depends on BPF_EVENTS
        depends on FUNCTION_ERROR_INJECTION
        default n
        help
          Allows BPF to override the execution of a probed function and
          set a different return value. This is used for error injection.

config FTRACE_MCOUNT_RECORD
        def_bool y
        depends on DYNAMIC_FTRACE
        depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
        bool
        depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_CC
        def_bool y
        depends on $(cc-option,-mrecord-mcount)
        depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
        depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_OBJTOOL
        def_bool y
        depends on HAVE_OBJTOOL_MCOUNT
        depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
        depends on !FTRACE_MCOUNT_USE_CC
        depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_RECORDMCOUNT
        def_bool y
        depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
        depends on !FTRACE_MCOUNT_USE_CC
        depends on !FTRACE_MCOUNT_USE_OBJTOOL
        depends on FTRACE_MCOUNT_RECORD

config TRACING_MAP
        bool
        depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
        help
          tracing_map is a special-purpose lock-free map for tracing,
          separated out as a stand-alone facility in order to allow it
          to be shared between multiple tracers. It isn't meant to be
          generally used outside of that context, and is normally
          selected by tracers that use it.

config SYNTH_EVENTS
        bool "Synthetic trace events"
        select TRACING
        select DYNAMIC_EVENTS
        default n
        help
          Synthetic events are user-defined trace events that can be
          used to combine data from other trace events or in fact any
          data source. Synthetic events can be generated indirectly
          via the trace() action of histogram triggers or directly
          by way of an in-kernel API.

          See Documentation/trace/events.rst or
          Documentation/trace/histogram.rst for details and examples.

          If in doubt, say N.

config USER_EVENTS
        bool "User trace events"
        select TRACING
        select DYNAMIC_EVENTS
        depends on BROKEN || COMPILE_TEST # API needs to be straightened out
        help
          User trace events are user-defined trace events that
          can be used like an existing kernel trace event. User trace
          events are generated by writing to a tracefs file. User
          processes can determine if their tracing events should be
          generated by memory mapping a tracefs file and checking for
          an associated byte being non-zero.

          If in doubt, say N.

config HIST_TRIGGERS
        bool "Histogram triggers"
        depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
        select TRACING_MAP
        select TRACING
        select DYNAMIC_EVENTS
        select SYNTH_EVENTS
        default n
        help
          Hist triggers allow one or more arbitrary trace event fields
          to be aggregated into hash tables and dumped to stdout by
          reading a debugfs/tracefs file. They're useful for
          gathering quick and dirty (though precise) summaries of
          event activity as an initial guide for further investigation
          using more advanced tools.
          Inter-event tracing of quantities such as latencies is also
          supported using hist triggers under this option.

          See Documentation/trace/histogram.rst.
          If in doubt, say N.

config TRACE_EVENT_INJECT
        bool "Trace event injection"
        depends on TRACING
        help
          Allow user-space to inject a specific trace event into the ring
          buffer. This is mainly used for testing purposes.

          If unsure, say N.

config TRACEPOINT_BENCHMARK
        bool "Add tracepoint that benchmarks tracepoints"
        help
          This option creates the tracepoint "benchmark:benchmark_event".
          When the tracepoint is enabled, it kicks off a kernel thread that
          goes into an infinite loop (calling cond_resched() to let other tasks
          run), and calls the tracepoint. Each iteration will record the time
          it took to write to the tracepoint, and in the next iteration that
          data will be passed to the tracepoint itself. That is, the tracepoint
          will report the time it took to do the previous tracepoint.
          The string written to the tracepoint is a static string of 128 bytes
          to keep the time the same. The initial string is simply a write of
          "START". The second string records the cold cache time of the first
          write, which is not added to the rest of the calculations.

          As it is a tight loop, it benchmarks as hot cache. That's fine because
          we care most about hot paths that are probably in cache already.

          An example of the output:

            START
            first=3672 [COLD CACHED]
            last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712
            last=278 first=3672 max=632 min=278 avg=303 std=316 std^2=100337
            last=277 first=3672 max=632 min=277 avg=296 std=258 std^2=67064
            last=273 first=3672 max=632 min=273 avg=292 std=224 std^2=50411
            last=273 first=3672 max=632 min=273 avg=288 std=200 std^2=40389
            last=281 first=3672 max=632 min=273 avg=287 std=183 std^2=33666

config RING_BUFFER_BENCHMARK
        tristate "Ring buffer benchmark stress tester"
        depends on RING_BUFFER
        help
          This option creates a test to stress the ring buffer and benchmark it.
          It creates its own ring buffer such that it will not interfere with
          any other users of the ring buffer (such as ftrace). It then creates
          a producer and consumer that will run for 10 seconds and sleep for
          10 seconds. Each interval it will print out the number of events
          it recorded and give a rough estimate of how long each iteration took.

          It does not disable interrupts or raise its priority, so it may be
          affected by processes that are running.

          If unsure, say N.

config TRACE_EVAL_MAP_FILE
        bool "Show eval mappings for trace events"
        depends on TRACING
        help
          The "print fmt" of the trace events will show the enum/sizeof names
          instead of their values. This can cause problems for user space tools
          that use this string to parse the raw data, as user space does not
          know how to convert the string to its value.

          To fix this, there's a special macro in the kernel that can be used
          to convert an enum/sizeof into its value. If this macro is used, then
          the print fmt strings will be converted to their values.

          If something does not get converted properly, this option can be
          used to show what enums/sizeofs the kernel tried to convert.

          This option is for debugging the conversions.
          A file is created in the tracing directory called "eval_map" that
          will show the names matched with their values and what trace event
          system they belong to.

          Normally, the mapping of the strings to values will be freed after
          boot up or module load. With this option, they will not be freed, as
          they are needed for the "eval_map" file. Enabling this option will
          increase the memory footprint of the running kernel.

          If unsure, say N.

config FTRACE_RECORD_RECURSION
        bool "Record functions that recurse in function tracing"
        depends on FUNCTION_TRACER
        help
          All callbacks that attach to the function tracing have some sort
          of protection against recursion. Even though the protection exists,
          it adds overhead. This option will create a file in the tracefs
          file system called "recursed_functions" that will list the functions
          that triggered a recursion.

          This will add more overhead to cases that have recursion.

          If unsure, say N.

config FTRACE_RECORD_RECURSION_SIZE
        int "Max number of recursed functions to record"
        default 128
        depends on FTRACE_RECORD_RECURSION
        help
          This defines the limit on the number of functions that can be
          listed in the "recursed_functions" file, which lists all
          the functions that caused a recursion to happen.
          This file can be reset, but the limit cannot be changed
          at runtime.

config RING_BUFFER_RECORD_RECURSION
        bool "Record functions that recurse in the ring buffer"
        depends on FTRACE_RECORD_RECURSION
        # default y, because it is coupled with FTRACE_RECORD_RECURSION
        default y
        help
          The ring buffer has its own internal recursion. Although recursion
          does not cause harm because of the protection, it does cause
          unwanted overhead. Enabling this option will record the places where
          recursion was detected in the ftrace "recursed_functions" file.

          This will add more overhead to cases that have recursion.

config GCOV_PROFILE_FTRACE
        bool "Enable GCOV profiling on ftrace subsystem"
        depends on GCOV_KERNEL
        help
          Enable GCOV profiling on the ftrace subsystem for checking
          which functions/lines are tested.

          If unsure, say N.

          Note that on a kernel compiled with this config, ftrace will
          run significantly slower.

config FTRACE_SELFTEST
        bool

config FTRACE_STARTUP_TEST
        bool "Perform a startup test on ftrace"
        depends on GENERIC_TRACER
        select FTRACE_SELFTEST
        help
          This option performs a series of startup tests on ftrace. On bootup
          a series of tests are made to verify that the tracer is
          functioning properly. It will do tests on all the configured
          tracers of ftrace.

config EVENT_TRACE_STARTUP_TEST
        bool "Run selftest on trace events"
        depends on FTRACE_STARTUP_TEST
        default y
        help
          This option performs a test on all trace events in the system.
          It basically just enables each event and runs some code that
          will trigger events (not necessarily the event it enables).
          This may take some time to run, as there are a lot of events.

config EVENT_TRACE_TEST_SYSCALLS
        bool "Run selftest on syscall events"
        depends on EVENT_TRACE_STARTUP_TEST
        help
          This option will also enable testing every syscall event.
          It only enables the event, runs various loads with the event
          enabled, and then disables it. This adds a bit more time to kernel
          boot-up, since it does this for every system call defined.
          TBD - enable a way to actually call the syscalls as we test their
          events

config FTRACE_SORT_STARTUP_TEST
        bool "Verify compile time sorting of ftrace functions"
        depends on DYNAMIC_FTRACE
        depends on BUILDTIME_MCOUNT_SORT
        help
          Sorting of the mcount_loc section, which is used to find where
          ftrace needs to patch functions for tracing and other callbacks,
          is done at compile time. But if the sort is not done correctly,
          it will cause non-deterministic failures. When this is set, the
          sorted sections will be verified to be indeed sorted, and a
          warning will be issued if they are not.

          If unsure, say N.

config RING_BUFFER_STARTUP_TEST
        bool "Ring buffer startup self test"
        depends on RING_BUFFER
        help
          Run a simple self test on the ring buffer on boot up. Late in the
          kernel boot sequence, a test will start that kicks off
          a thread per CPU. Each thread will write various size events
          into the ring buffer. Another thread is created to send IPIs
          to each of the threads, where the IPI handler will also write
          to the ring buffer, to test/stress the nesting ability.
          If any anomalies are discovered, a warning will be displayed
          and all ring buffers will be disabled.

          The test runs for 10 seconds. This will slow your boot time
          by at least 10 more seconds.

          At the end of the test, statistics and more checks are done.
          It will output the stats of each per-CPU buffer: what
          was written, the sizes, what was read, what was lost, and
          other similar details.

          If unsure, say N.

config RING_BUFFER_VALIDATE_TIME_DELTAS
        bool "Verify ring buffer time stamp deltas"
        depends on RING_BUFFER
        help
          This will audit the time stamps on the ring buffer sub
          buffer to make sure that all the time deltas for the
          events on a sub buffer match the current time stamp.
          This audit is performed for every event that is not
          interrupted, or interrupting another event. A check
          is also made when traversing sub buffers to make sure
          that all the deltas on the previous sub buffer do not
          add up to be greater than the current time stamp.

          NOTE: This adds significant overhead to the recording of events,
          and should only be used to test the logic of the ring buffer.
          Do not use it on production systems.

          Only say Y if you understand what this does, and you
          still want it enabled. Otherwise say N.

config MMIOTRACE_TEST
        tristate "Test module for mmiotrace"
        depends on MMIOTRACE && m
        help
          This is a dumb module for testing mmiotrace. It is very dangerous
          as it will write garbage to IO memory starting at a given address.
          However, it should be safe to use on e.g. an unused portion of VRAM.

          Say N, unless you absolutely know what you are doing.

config PREEMPTIRQ_DELAY_TEST
        tristate "Test module to create a preempt / IRQ disable delay thread to test latency tracers"
        depends on m
        help
          Select this option to build a test module that can help test latency
          tracers by executing a preempt or irq disable section with a user
          configurable delay. The module busy waits for the duration of the
          critical section.
          For example, the following invocation generates a burst of three
          irq-disabled critical sections for 500us:

            modprobe preemptirq_delay_test test_mode=irq delay=500 burst_size=3

          In addition, if you want to attach the test to the CPU on which the
          latency tracer is running, specify cpu_affinity=cpu_num at the end
          of the command.

          If unsure, say N.

config SYNTH_EVENT_GEN_TEST
        tristate "Test module for in-kernel synthetic event generation"
        depends on SYNTH_EVENTS
        help
          This option creates a test module to check the base
          functionality of in-kernel synthetic event definition and
          generation.

          To test, insert the module, and then check the trace buffer
          for the generated sample events.

          If unsure, say N.

config KPROBE_EVENT_GEN_TEST
        tristate "Test module for in-kernel kprobe event generation"
        depends on KPROBE_EVENTS
        help
          This option creates a test module to check the base
          functionality of in-kernel kprobe event definition.

          To test, insert the module, and then check the trace buffer
          for the generated kprobe events.

          If unsure, say N.

config HIST_TRIGGERS_DEBUG
        bool "Hist trigger debug support"
        depends on HIST_TRIGGERS
        help
          Add a "hist_debug" file for each event, which when read will
          dump out a bunch of internal details about the hist triggers
          defined on that event.

          The hist_debug file serves a couple of purposes:

            - Helps developers verify that nothing is broken.

            - Provides educational information to support the details
              of the hist trigger internals as described by
              Documentation/trace/histogram-design.rst.

          The hist_debug output only covers the data structures
          related to the histogram definitions themselves and doesn't
          display the internals of map buckets or variable values of
          running histograms.

          If unsure, say N.

endif # FTRACE