1========================
2ftrace - Function Tracer
3========================
4
5Copyright 2008 Red Hat Inc.
6
7:Author:   Steven Rostedt <srostedt@redhat.com>
8:License:  The GNU Free Documentation License, Version 1.2
9          (dual licensed under the GPL v2)
10:Original Reviewers:  Elias Oltmanns, Randy Dunlap, Andrew Morton,
11		      John Kacur, and David Teigland.
12
13- Written for: 2.6.28-rc2
14- Updated for: 3.10
15- Updated for: 4.13 - Copyright 2017 VMware Inc. Steven Rostedt
16- Converted to rst format - Changbin Du <changbin.du@intel.com>
17
18Introduction
19------------
20
21Ftrace is an internal tracer designed to help out developers and
22designers of systems to find what is going on inside the kernel.
23It can be used for debugging or analyzing latencies and
24performance issues that take place outside of user-space.
25
26Although ftrace is typically considered the function tracer, it
27is really a framework of several assorted tracing utilities.
There's latency tracing to examine what occurs between the time
interrupts are disabled and enabled, as well as for preemption,
and from the time a task is woken to the time it is actually
scheduled in.
31
One of the most common uses of ftrace is event tracing.
Throughout the kernel are hundreds of static event points that
34can be enabled via the tracefs file system to see what is
35going on in certain parts of the kernel.
36
37See events.rst for more information.
38
39
40Implementation Details
41----------------------
42
43See Documentation/trace/ftrace-design.rst for details for arch porters and such.
44
45
46The File System
47---------------
48
49Ftrace uses the tracefs file system to hold the control files as
50well as the files to display output.
51
52When tracefs is configured into the kernel (which selecting any ftrace
53option will do) the directory /sys/kernel/tracing will be created. To mount
54this directory, you can add to your /etc/fstab file::
55
56 tracefs       /sys/kernel/tracing       tracefs defaults        0       0
57
58Or you can mount it at run time with::
59
60 mount -t tracefs nodev /sys/kernel/tracing
61
62For quicker access to that directory you may want to make a soft link to
63it::
64
65 ln -s /sys/kernel/tracing /tracing
66
67.. attention::
68
69  Before 4.1, all ftrace tracing control files were within the debugfs
70  file system, which is typically located at /sys/kernel/debug/tracing.
71  For backward compatibility, when mounting the debugfs file system,
72  the tracefs file system will be automatically mounted at:
73
74  /sys/kernel/debug/tracing
75
76  All files located in the tracefs file system will be located in that
77  debugfs file system directory as well.
78
79.. attention::
80
81  Any selected ftrace option will also create the tracefs file system.
82  The rest of the document will assume that you are in the ftrace directory
83  (cd /sys/kernel/tracing) and will only concentrate on the files within that
84  directory and not distract from the content with the extended
85  "/sys/kernel/tracing" path name.
86
87That's it! (assuming that you have ftrace configured into your kernel)
88
89After mounting tracefs you will have access to the control and output files
90of ftrace. Here is a list of some of the key files:
91
92
93 Note: all time values are in microseconds.
94
95  current_tracer:
96
97	This is used to set or display the current tracer
98	that is configured. Changing the current tracer clears
99	the ring buffer content as well as the "snapshot" buffer.
100
101  available_tracers:
102
103	This holds the different types of tracers that
104	have been compiled into the kernel. The
105	tracers listed here can be configured by
106	echoing their name into current_tracer.
107
108  tracing_on:
109
110	This sets or displays whether writing to the trace
111	ring buffer is enabled. Echo 0 into this file to disable
112	the tracer or 1 to enable it. Note, this only disables
113	writing to the ring buffer, the tracing overhead may
114	still be occurring.
115
116	The kernel function tracing_off() can be used within the
117	kernel to disable writing to the ring buffer, which will
118	set this file to "0". User space can re-enable tracing by
119	echoing "1" into the file.
120
121	Note, the function and event trigger "traceoff" will also
122	set this file to zero and stop tracing. Which can also
123	be re-enabled by user space using this file.
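
	For example, one common pattern is to enable the function tracer,
	let it run briefly, and then freeze the buffer for inspection (a
	minimal sketch; the one second sleep is arbitrary)::

	  # echo function > current_tracer
	  # echo 1 > tracing_on
	  # sleep 1
	  # echo 0 > tracing_on
	  # cat trace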
124
125  trace:
126
127	This file holds the output of the trace in a human
128	readable format (described below). Opening this file for
129	writing with the O_TRUNC flag clears the ring buffer content.
	Note, this file is not a consumer. If tracing is off
	(no tracer running, or tracing_on is zero), it will produce
	the same output each time it is read. When tracing is on,
	it may produce inconsistent results as it tries to read
	the entire buffer without consuming it.
135
136  trace_pipe:
137
138	The output is the same as the "trace" file but this
139	file is meant to be streamed with live tracing.
140	Reads from this file will block until new data is
141	retrieved.  Unlike the "trace" file, this file is a
142	consumer. This means reading from this file causes
143	sequential reads to display more current data. Once
144	data is read from this file, it is consumed, and
145	will not be read again with a sequential read. The
146	"trace" file is static, and if the tracer is not
147	adding more data, it will display the same
148	information every time it is read.
149
150  trace_options:
151
152	This file lets the user control the amount of data
153	that is displayed in one of the above output
154	files. Options also exist to modify how a tracer
155	or events work (stack traces, timestamps, etc).
156
157  options:
158
159	This is a directory that has a file for every available
160	trace option (also in trace_options). Options may also be set
161	or cleared by writing a "1" or "0" respectively into the
162	corresponding file with the option name.
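
	For example, the following two commands should be equivalent
	ways of enabling the "sym-offset" option::

	  # echo sym-offset > trace_options
	  # echo 1 > options/sym-offset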
163
164  tracing_max_latency:
165
166	Some of the tracers record the max latency.
167	For example, the maximum time that interrupts are disabled.
168	The maximum time is saved in this file. The max trace will also be
	stored, and displayed by "trace". A new max trace will only be
	recorded if the latency is greater than the value in this file
	(in microseconds).

	By echoing a time into this file, no latency will be recorded
	unless it is greater than that time.
175
176  tracing_thresh:
177
178	Some latency tracers will record a trace whenever the
179	latency is greater than the number in this file.
180	Only active when the file contains a number greater than 0.
181	(in microseconds)
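
	For example, to only record a wakeup trace when the measured
	latency exceeds 100 microseconds (the threshold here is just an
	illustration)::

	  # echo 100 > tracing_thresh
	  # echo wakeup > current_tracer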
182
183  buffer_percent:
184
185	This is the watermark for how much the ring buffer needs to be filled
186	before a waiter is woken up. That is, if an application calls a
187	blocking read syscall on one of the per_cpu trace_pipe_raw files, it
188	will block until the given amount of data specified by buffer_percent
189	is in the ring buffer before it wakes the reader up. This also
190	controls how the splice system calls are blocked on this file::
191
192	  0   - means to wake up as soon as there is any data in the ring buffer.
193	  50  - means to wake up when roughly half of the ring buffer sub-buffers
194	        are full.
195	  100 - means to block until the ring buffer is totally full and is
196	        about to start overwriting the older data.
197
198  buffer_size_kb:
199
200	This sets or displays the number of kilobytes each CPU
201	buffer holds. By default, the trace buffers are the same size
202	for each CPU. The displayed number is the size of the
203	CPU buffer and not total size of all buffers. The
204	trace buffers are allocated in pages (blocks of memory
205	that the kernel uses for allocation, usually 4 KB in size).
206	A few extra pages may be allocated to accommodate buffer management
207	meta-data. If the last page allocated has room for more bytes
208	than requested, the rest of the page will be used,
209	making the actual allocation bigger than requested or shown.
210	( Note, the size may not be a multiple of the page size
211	due to buffer management meta-data. )
212
213	Buffer sizes for individual CPUs may vary
214	(see "per_cpu/cpu0/buffer_size_kb" below), and if they do
215	this file will show "X".
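
	For example, to give each CPU buffer roughly 4 MB of space (the
	value is only an illustration, and the size actually allocated
	may be rounded up as described above)::

	  # echo 4096 > buffer_size_kb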
216
217  buffer_total_size_kb:
218
219	This displays the total combined size of all the trace buffers.
220
221  buffer_subbuf_size_kb:
222
223	This sets or displays the sub buffer size. The ring buffer is broken up
224	into several same size "sub buffers". An event can not be bigger than
225	the size of the sub buffer. Normally, the sub buffer is the size of the
226	architecture's page (4K on x86). The sub buffer also contains meta data
227	at the start which also limits the size of an event.  That means when
228	the sub buffer is a page size, no event can be larger than the page
229	size minus the sub buffer meta data.
230
231	Note, the buffer_subbuf_size_kb is a way for the user to specify the
232	minimum size of the subbuffer. The kernel may make it bigger due to the
233	implementation details, or simply fail the operation if the kernel can
234	not handle the request.
235
236	Changing the sub buffer size allows for events to be larger than the
237	page size.
238
239	Note: When changing the sub-buffer size, tracing is stopped and any
240	data in the ring buffer and the snapshot buffer will be discarded.
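
	For example, to request 8 KB sub buffers (as noted above, the
	kernel may round this up or reject the request)::

	  # echo 8 > buffer_subbuf_size_kb
	  # cat buffer_subbuf_size_kb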
241
242  free_buffer:
243
	If a process is performing tracing, and the ring buffer should be
	shrunk "freed" when the process is finished, even if it were to be
	killed by a signal, this file can be used for that purpose. On close
	of this file, the ring buffer will be resized to its minimum size.
	If a process that is tracing also keeps this file open, then when
	that process exits, its file descriptor for this file will be closed,
	and in doing so, the ring buffer will be "freed".
251
252	It may also stop tracing if disable_on_free option is set.
253
254  tracing_cpumask:
255
256	This is a mask that lets the user only trace on specified CPUs.
257	The format is a hex string representing the CPUs.
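
	For example, to limit tracing to CPUs 0 and 1 (hex mask 3)::

	  # echo 3 > tracing_cpumask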
258
259  set_ftrace_filter:
260
261	When dynamic ftrace is configured in (see the
262	section below "dynamic ftrace"), the code is dynamically
263	modified (code text rewrite) to disable calling of the
264	function profiler (mcount). This lets tracing be configured
265	in with practically no overhead in performance.  This also
266	has a side effect of enabling or disabling specific functions
267	to be traced. Echoing names of functions into this file
268	will limit the trace to only those functions.
269	This influences the tracers "function" and "function_graph"
270	and thus also function profiling (see "function_profile_enabled").
271
272	The functions listed in "available_filter_functions" are what
273	can be written into this file.
274
275	This interface also allows for commands to be used. See the
276	"Filter commands" section for more details.
277
	As a speed up, since processing strings can be quite expensive
	and requires a check of all functions registered to tracing, an
	index can be written into this file instead. A number (starting
	with "1") will select the function at the corresponding line
	position of the "available_filter_functions" file.
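
	For example, the filter can be set by name, extended with a glob,
	or cleared (the function names below are only an illustration and
	must appear in "available_filter_functions" on the running
	kernel)::

	  # echo try_to_wake_up > set_ftrace_filter
	  # echo 'wake_up*' >> set_ftrace_filter
	  # cat set_ftrace_filter
	  # echo > set_ftrace_filter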
283
284  set_ftrace_notrace:
285
286	This has an effect opposite to that of
287	set_ftrace_filter. Any function that is added here will not
288	be traced. If a function exists in both set_ftrace_filter
	and set_ftrace_notrace, the function will _not_ be traced.
290
291  set_ftrace_pid:
292
	Have the function tracer only trace the threads whose PIDs are
	listed in this file.
295
296	If the "function-fork" option is set, then when a task whose
297	PID is listed in this file forks, the child's PID will
298	automatically be added to this file, and the child will be
299	traced by the function tracer as well. This option will also
300	cause PIDs of tasks that exit to be removed from the file.
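
	For example, to trace only the current shell (and, with the
	"function-fork" option, any children it spawns)::

	  # echo $$ > set_ftrace_pid
	  # echo 1 > options/function-fork
	  # echo function > current_tracer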
301
302  set_ftrace_notrace_pid:
303
	Have the function tracer ignore threads whose PIDs are listed in
	this file.

	If the "function-fork" option is set, then when a task whose
	PID is listed in this file forks, the child's PID will
	automatically be added to this file, and the child will not be
	traced by the function tracer either. This option will also
	cause PIDs of tasks that exit to be removed from the file.

	If a PID is in both this file and "set_ftrace_pid", then this
	file takes precedence, and the thread will not be traced.
315
316  set_event_pid:
317
318	Have the events only trace a task with a PID listed in this file.
	Note, sched_switch and sched_wakeup will also trace events
	for the tasks listed in this file.
321
322	To have the PIDs of children of tasks with their PID in this file
323	added on fork, enable the "event-fork" option. That option will also
324	cause the PIDs of tasks to be removed from this file when the task
325	exits.
326
327  set_event_notrace_pid:
328
	Have the events not trace a task with a PID listed in this file.
	Note, sched_switch and sched_wakeup events will still be traced
	even if a thread's PID is in this file, when the sched_switch or
	sched_wakeup event also involves a thread that should be traced.
334
335	To have the PIDs of children of tasks with their PID in this file
336	added on fork, enable the "event-fork" option. That option will also
337	cause the PIDs of tasks to be removed from this file when the task
338	exits.
339
340  set_graph_function:
341
342	Functions listed in this file will cause the function graph
343	tracer to only trace these functions and the functions that
344	they call. (See the section "dynamic ftrace" for more details).
	Note, set_ftrace_filter and set_ftrace_notrace still affect
	what functions are being traced.
347
348  set_graph_notrace:
349
350	Similar to set_graph_function, but will disable function graph
351	tracing when the function is hit until it exits the function.
352	This makes it possible to ignore tracing functions that are called
353	by a specific function.
354
355  available_filter_functions:
356
357	This lists the functions that ftrace has processed and can trace.
358	These are the function names that you can pass to
359	"set_ftrace_filter", "set_ftrace_notrace",
360	"set_graph_function", or "set_graph_notrace".
361	(See the section "dynamic ftrace" below for more details.)
362
363  available_filter_functions_addrs:
364
365	Similar to available_filter_functions, but with address displayed
366	for each function. The displayed address is the patch-site address
367	and can differ from /proc/kallsyms address.
368
369  dyn_ftrace_total_info:
370
	This file is for debugging purposes. It shows the number of
	functions that have been converted to nops and are available
	to be traced.
373
374  enabled_functions:
375
376	This file is more for debugging ftrace, but can also be useful
377	in seeing if any function has a callback attached to it.
	Not only does the trace infrastructure use the ftrace function
	tracing utility, but other subsystems might too. This file
380	displays all functions that have a callback attached to them
381	as well as the number of callbacks that have been attached.
382	Note, a callback may also call multiple functions which will
383	not be listed in this count.
384
385	If the callback registered to be traced by a function with
386	the "save regs" attribute (thus even more overhead), a 'R'
387	will be displayed on the same line as the function that
388	is returning registers.
389
390	If the callback registered to be traced by a function with
391	the "ip modify" attribute (thus the regs->ip can be changed),
392	an 'I' will be displayed on the same line as the function that
393	can be overridden.
394
	If a non-ftrace trampoline is attached (BPF) a 'D' will be displayed.
396	Note, normal ftrace trampolines can also be attached, but only one
397	"direct" trampoline can be attached to a given function at a time.
398
399	Some architectures can not call direct trampolines, but instead have
400	the ftrace ops function located above the function entry point. In
401	such cases an 'O' will be displayed.
402
403	If a function had either the "ip modify" or a "direct" call attached to
404	it in the past, a 'M' will be shown. This flag is never cleared. It is
	used to know if a function was ever modified by the ftrace infrastructure,
406	and can be used for debugging.
407
408	If the architecture supports it, it will also show what callback
409	is being directly called by the function. If the count is greater
410	than 1 it most likely will be ftrace_ops_list_func().
411
412	If the callback of a function jumps to a trampoline that is
413	specific to the callback and which is not the standard trampoline,
414	its address will be printed as well as the function that the
415	trampoline calls.
416
417  touched_functions:
418
	This file contains all the functions that ever had a function callback
	attached to them via the ftrace infrastructure. It has the same format
	as enabled_functions but shows all functions that have ever been
	traced.

	To see any function that has ever been modified by "ip modify" or a
	direct trampoline, one can perform the following command::

	  grep ' M ' /sys/kernel/tracing/touched_functions
428
429  function_profile_enabled:
430
	When set, it will enable profiling of all functions with either
	the function tracer, or if configured, the function graph tracer.
	It will keep a histogram of the number of times functions were
	called and, if the function graph tracer was configured, it will
	also keep track of the time spent in those functions. The
	histogram content can be displayed in the files:

	trace_stat/function<cpu> ( function0, function1, etc).
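
	For example, a rough sketch of profiling for a few seconds and
	then looking at one per CPU histogram (the output columns depend
	on the kernel configuration)::

	  # echo 1 > function_profile_enabled
	  # sleep 5
	  # echo 0 > function_profile_enabled
	  # head trace_stat/function0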
439
440  trace_stat:
441
442	A directory that holds different tracing stats.
443
444  kprobe_events:
445
446	Enable dynamic trace points. See kprobetrace.rst.
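
	For example, a kprobe event can be added and removed like this
	(the probed symbol "do_sys_open" is only an illustration and must
	exist on the running kernel; see kprobetrace.rst for the full
	syntax)::

	  # echo 'p:myprobe do_sys_open' >> kprobe_events
	  # echo '-:myprobe' >> kprobe_events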
447
448  kprobe_profile:
449
450	Dynamic trace points stats. See kprobetrace.rst.
451
452  max_graph_depth:
453
454	Used with the function graph tracer. This is the max depth
455	it will trace into a function. Setting this to a value of
456	one will show only the first kernel function that is called
457	from user space.
458
459  printk_formats:
460
461	This is for tools that read the raw format files. If an event in
462	the ring buffer references a string, only a pointer to the string
463	is recorded into the buffer and not the string itself. This prevents
	tools from knowing what that string was. This file displays each
	string and its address, allowing tools to map the pointers to what
	the strings were.
467
468  saved_cmdlines:
469
470	Only the pid of the task is recorded in a trace event unless
471	the event specifically saves the task comm as well. Ftrace
472	makes a cache of pid mappings to comms to try to display
473	comms for events. If a pid for a comm is not listed, then
474	"<...>" is displayed in the output.
475
476	If the option "record-cmd" is set to "0", then comms of tasks
477	will not be saved during recording. By default, it is enabled.
478
479  saved_cmdlines_size:
480
481	By default, 128 comms are saved (see "saved_cmdlines" above). To
482	increase or decrease the amount of comms that are cached, echo
483	the number of comms to cache into this file.
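
	For example, to cache up to 1024 comms (the value is arbitrary)::

	  # echo 1024 > saved_cmdlines_size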
484
485  saved_tgids:
486
487	If the option "record-tgid" is set, on each scheduling context switch
488	the Task Group ID of a task is saved in a table mapping the PID of
489	the thread to its TGID. By default, the "record-tgid" option is
490	disabled.
491
492  snapshot:
493
494	This displays the "snapshot" buffer and also lets the user
495	take a snapshot of the current running trace.
496	See the "Snapshot" section below for more details.
497
498  stack_max_size:
499
500	When the stack tracer is activated, this will display the
501	maximum stack size it has encountered.
502	See the "Stack Trace" section below.
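
	For example, assuming the stack tracer was configured in
	(CONFIG_STACK_TRACER), it is enabled via a sysctl rather than a
	tracefs file::

	  # echo 1 > /proc/sys/kernel/stack_tracer_enabled
	  # cat stack_max_size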
503
504  stack_trace:
505
506	This displays the stack back trace of the largest stack
507	that was encountered when the stack tracer is activated.
508	See the "Stack Trace" section below.
509
510  stack_trace_filter:
511
512	This is similar to "set_ftrace_filter" but it limits what
513	functions the stack tracer will check.
514
515  trace_clock:
516
517	Whenever an event is recorded into the ring buffer, a
518	"timestamp" is added. This stamp comes from a specified
519	clock. By default, ftrace uses the "local" clock. This
520	clock is very fast and strictly per cpu, but on some
521	systems it may not be monotonic with respect to other
522	CPUs. In other words, the local clocks may not be in sync
523	with local clocks on other CPUs.
524
525	Usual clocks for tracing::
526
527	  # cat trace_clock
528	  [local] global counter x86-tsc
529
530	The clock with the square brackets around it is the one in effect.
531
532	local:
533		Default clock, but may not be in sync across CPUs
534
535	global:
536		This clock is in sync with all CPUs but may
537		be a bit slower than the local clock.
538
539	counter:
540		This is not a clock at all, but literally an atomic
541		counter. It counts up one by one, but is in sync
542		with all CPUs. This is useful when you need to
543		know exactly the order events occurred with respect to
544		each other on different CPUs.
545
546	uptime:
547		This uses the jiffies counter and the time stamp
548		is relative to the time since boot up.
549
550	perf:
551		This makes ftrace use the same clock that perf uses.
552		Eventually perf will be able to read ftrace buffers
553		and this will help out in interleaving the data.
554
555	x86-tsc:
556		Architectures may define their own clocks. For
557		example, x86 uses its own TSC cycle clock here.
558
559	ppc-tb:
560		This uses the powerpc timebase register value.
561		This is in sync across CPUs and can also be used
562		to correlate events across hypervisor/guest if
563		tb_offset is known.
564
565	mono:
566		This uses the fast monotonic clock (CLOCK_MONOTONIC)
567		which is monotonic and is subject to NTP rate adjustments.
568
569	mono_raw:
570		This is the raw monotonic clock (CLOCK_MONOTONIC_RAW)
571		which is monotonic but is not subject to any rate adjustments
572		and ticks at the same rate as the hardware clocksource.
573
574	boot:
575		This is the boot clock (CLOCK_BOOTTIME) and is based on the
576		fast monotonic clock, but also accounts for time spent in
577		suspend. Since the clock access is designed for use in
578		tracing in the suspend path, some side effects are possible
579		if clock is accessed after the suspend time is accounted before
580		the fast mono clock is updated. In this case, the clock update
581		appears to happen slightly sooner than it normally would have.
582		Also on 32-bit systems, it's possible that the 64-bit boot offset
583		sees a partial update. These effects are rare and post
584		processing should be able to handle them. See comments in the
585		ktime_get_boot_fast_ns() function for more information.
586
587	tai:
588		This is the tai clock (CLOCK_TAI) and is derived from the wall-
589		clock time. However, this clock does not experience
590		discontinuities and backwards jumps caused by NTP inserting leap
591		seconds. Since the clock access is designed for use in tracing,
592		side effects are possible. The clock access may yield wrong
593		readouts in case the internal TAI offset is updated e.g., caused
594		by setting the system time or using adjtimex() with an offset.
595		These effects are rare and post processing should be able to
596		handle them. See comments in the ktime_get_tai_fast_ns()
597		function for more information.
598
599	To set a clock, simply echo the clock name into this file::
600
601	  # echo global > trace_clock
602
603	Setting a clock clears the ring buffer content as well as the
604	"snapshot" buffer.
605
606  trace_marker:
607
	This is a very useful file for synchronizing user space
	with events happening in the kernel. Strings written into
	this file are written into the ftrace buffer.
611
612	It is useful in applications to open this file at the start
613	of the application and just reference the file descriptor
614	for the file::
615
		int trace_fd = -1;

		void trace_write(const char *fmt, ...)
		{
			va_list ap;
			char buf[256];
			int n;

			if (trace_fd < 0)
				return;

			va_start(ap, fmt);
			n = vsnprintf(buf, 256, fmt, ap);
			va_end(ap);

			/*
			 * vsnprintf() returns the length the full string
			 * would have had; cap it to what fits in buf.
			 */
			if (n >= 256)
				n = 255;

			write(trace_fd, buf, n);
		}
631
632	start::
633
634		trace_fd = open("trace_marker", O_WRONLY);
635
	Note: Writing into the trace_marker file can also initiate triggers
	      that are written into /sys/kernel/tracing/events/ftrace/print/trigger.
	      See "Event triggers" in Documentation/trace/events.rst and an
	      example in Documentation/trace/histogram.rst (Section 3.)
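
	From a shell, a marker can also simply be echoed in (the string
	itself is arbitrary)::

	  # echo hello world > trace_marker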
640
641  trace_marker_raw:
642
643	This is similar to trace_marker above, but is meant for binary data
644	to be written to it, where a tool can be used to parse the data
645	from trace_pipe_raw.
646
647  uprobe_events:
648
649	Add dynamic tracepoints in programs.
650	See uprobetracer.rst
651
652  uprobe_profile:
653
	Uprobe statistics. See uprobetracer.rst
655
656  instances:
657
658	This is a way to make multiple trace buffers where different
659	events can be recorded in different buffers.
660	See "Instances" section below.
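
	For example, a new buffer is created and removed with mkdir and
	rmdir (the instance name "foo" is arbitrary)::

	  # mkdir instances/foo
	  # rmdir instances/foo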
661
662  events:
663
664	This is the trace event directory. It holds event tracepoints
665	(also known as static tracepoints) that have been compiled
666	into the kernel. It shows what event tracepoints exist
667	and how they are grouped by system. There are "enable"
668	files at various levels that can enable the tracepoints
669	when a "1" is written to them.
670
671	See events.rst for more information.
672
673  set_event:
674
	Echoing an event name into this file will enable that event.
676
677	See events.rst for more information.
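
	For example, to enable all scheduler events plus a single irq
	event (the available event names depend on the kernel
	configuration; see available_events)::

	  # echo 'sched:*' > set_event
	  # echo irq:irq_handler_entry >> set_event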
678
679  available_events:
680
681	A list of events that can be enabled in tracing.
682
683	See events.rst for more information.
684
685  timestamp_mode:
686
687	Certain tracers may change the timestamp mode used when
688	logging trace events into the event buffer.  Events with
689	different modes can coexist within a buffer but the mode in
690	effect when an event is logged determines which timestamp mode
691	is used for that event.  The default timestamp mode is
692	'delta'.
693
694	Usual timestamp modes for tracing:
695
696	  # cat timestamp_mode
697	  [delta] absolute
698
699	  The timestamp mode with the square brackets around it is the
700	  one in effect.
701
702	  delta: Default timestamp mode - timestamp is a delta against
703	         a per-buffer timestamp.
704
705	  absolute: The timestamp is a full timestamp, not a delta
706                 against some other value.  As such it takes up more
707                 space and is less efficient.
708
709  hwlat_detector:
710
711	Directory for the Hardware Latency Detector.
712	See "Hardware Latency Detector" section below.
713
714  per_cpu:
715
716	This is a directory that contains the trace per_cpu information.
717
718  per_cpu/cpu0/buffer_size_kb:
719
720	The ftrace buffer is defined per_cpu. That is, there's a separate
721	buffer for each CPU to allow writes to be done atomically,
	and free from cache bouncing. These buffers may have different
	sizes. This file is similar to the buffer_size_kb
	file, but it only displays or sets the buffer size for the
	specific CPU (here cpu0).
726
727  per_cpu/cpu0/trace:
728
729	This is similar to the "trace" file, but it will only display
730	the data specific for the CPU. If written to, it only clears
731	the specific CPU buffer.
732
  per_cpu/cpu0/trace_pipe:
734
735	This is similar to the "trace_pipe" file, and is a consuming
736	read, but it will only display (and consume) the data specific
737	for the CPU.
738
  per_cpu/cpu0/trace_pipe_raw:
740
741	For tools that can parse the ftrace ring buffer binary format,
742	the trace_pipe_raw file can be used to extract the data
743	from the ring buffer directly. With the use of the splice()
744	system call, the buffer data can be quickly transferred to
745	a file or to the network where a server is collecting the
746	data.
747
748	Like trace_pipe, this is a consuming reader, where multiple
749	reads will always produce different data.
750
751  per_cpu/cpu0/snapshot:
752
753	This is similar to the main "snapshot" file, but will only
754	snapshot the current CPU (if supported). It only displays
755	the content of the snapshot for a given CPU, and if
756	written to, only clears this CPU buffer.
757
758  per_cpu/cpu0/snapshot_raw:
759
760	Similar to the trace_pipe_raw, but will read the binary format
761	from the snapshot buffer for the given CPU.
762
763  per_cpu/cpu0/stats:
764
765	This displays certain stats about the ring buffer:
766
767	entries:
768		The number of events that are still in the buffer.
769
770	overrun:
771		The number of lost events due to overwriting when
772		the buffer was full.
773
774	commit overrun:
775		Should always be zero.
776		This gets set if so many events happened within a nested
777		event (ring buffer is re-entrant), that it fills the
778		buffer and starts dropping events.
779
780	bytes:
781		Bytes actually read (not overwritten).
782
783	oldest event ts:
784		The oldest timestamp in the buffer
785
786	now ts:
787		The current timestamp
788
789	dropped events:
790		Events lost due to overwrite option being off.
791
792	read events:
793		The number of events read.
794
795The Tracers
796-----------
797
798Here is the list of current tracers that may be configured.
799
800  "function"
801
802	Function call tracer to trace all kernel functions.
803
804  "function_graph"
805
806	Similar to the function tracer except that the
807	function tracer probes the functions on their entry
808	whereas the function graph tracer traces on both entry
809	and exit of the functions. It then provides the ability
810	to draw a graph of function calls similar to C code
811	source.
812
813  "blk"
814
815	The block tracer. The tracer used by the blktrace user
816	application.
817
818  "hwlat"
819
820	The Hardware Latency tracer is used to detect if the hardware
821	produces any latency. See "Hardware Latency Detector" section
822	below.
823
824  "irqsoff"
825
826	Traces the areas that disable interrupts and saves
827	the trace with the longest max latency.
828	See tracing_max_latency. When a new max is recorded,
829	it replaces the old trace. It is best to view this
830	trace with the latency-format option enabled, which
831	happens automatically when the tracer is selected.
832
833  "preemptoff"
834
835	Similar to irqsoff but traces and records the amount of
836	time for which preemption is disabled.
837
838  "preemptirqsoff"
839
840	Similar to irqsoff and preemptoff, but traces and
841	records the largest time for which irqs and/or preemption
842	is disabled.
843
844  "wakeup"
845
846	Traces and records the max latency that it takes for
847	the highest priority task to get scheduled after
848	it has been woken up.
	Traces all tasks as an average developer would expect.
850
851  "wakeup_rt"
852
	Traces and records the max latency that it takes for just
	RT tasks (as the current "wakeup" does). This is useful
	for those interested in wake up timings of RT tasks.
856
857  "wakeup_dl"
858
859	Traces and records the max latency that it takes for
860	a SCHED_DEADLINE task to be woken (as the "wakeup" and
	"wakeup_rt" do).
862
863  "mmiotrace"
864
	A special tracer that is used to trace binary modules.
	It will trace all the calls that a module makes to the
	hardware, as well as everything it writes to and reads
	from the I/O.
869
870  "branch"
871
	This tracer can be configured when tracing likely/unlikely
	calls within the kernel. It will trace when a likely or
	unlikely branch is hit and whether its prediction was
	correct.
876
877  "nop"
878
879	This is the "trace nothing" tracer. To remove all
880	tracers from tracing simply echo "nop" into
881	current_tracer.
882
883Error conditions
884----------------
885
886  For most ftrace commands, failure modes are obvious and communicated
887  using standard return codes.
888
889  For other more involved commands, extended error information may be
890  available via the tracing/error_log file.  For the commands that
891  support it, reading the tracing/error_log file after an error will
892  display more detailed information about what went wrong, if
893  information is available.  The tracing/error_log file is a circular
894  error log displaying a small number (currently, 8) of ftrace errors
895  for the last (8) failed commands.
896
897  The extended error information and usage takes the form shown in
898  this example::
899
900    # echo xxx > /sys/kernel/tracing/events/sched/sched_wakeup/trigger
901    echo: write error: Invalid argument
902
903    # cat /sys/kernel/tracing/error_log
904    [ 5348.887237] location: error: Couldn't yyy: zzz
905      Command: xxx
906               ^
907    [ 7517.023364] location: error: Bad rrr: sss
908      Command: ppp qqq
909                   ^
910
911  To clear the error log, echo the empty string into it::
912
913    # echo > /sys/kernel/tracing/error_log
914
915Examples of using the tracer
916----------------------------
917
918Here are typical examples of using the tracers when controlling
919them only with the tracefs interface (without using any
920user-land utilities).
921
922Output format:
923--------------
924
925Here is an example of the output format of the file "trace"::
926
927  # tracer: function
928  #
929  # entries-in-buffer/entries-written: 140080/250280   #P:4
930  #
931  #                              _-----=> irqs-off
932  #                             / _----=> need-resched
933  #                            | / _---=> hardirq/softirq
934  #                            || / _--=> preempt-depth
935  #                            ||| /     delay
936  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
937  #              | |       |   ||||       |         |
938              bash-1977  [000] .... 17284.993652: sys_close <-system_call_fastpath
939              bash-1977  [000] .... 17284.993653: __close_fd <-sys_close
940              bash-1977  [000] .... 17284.993653: _raw_spin_lock <-__close_fd
941              sshd-1974  [003] .... 17284.993653: __srcu_read_unlock <-fsnotify
942              bash-1977  [000] .... 17284.993654: add_preempt_count <-_raw_spin_lock
943              bash-1977  [000] ...1 17284.993655: _raw_spin_unlock <-__close_fd
944              bash-1977  [000] ...1 17284.993656: sub_preempt_count <-_raw_spin_unlock
945              bash-1977  [000] .... 17284.993657: filp_close <-__close_fd
946              bash-1977  [000] .... 17284.993657: dnotify_flush <-filp_close
947              sshd-1974  [003] .... 17284.993658: sys_select <-system_call_fastpath
948              ....
949
950A header is printed with the tracer name that is represented by
951the trace. In this case the tracer is "function". Then it shows the
952number of events in the buffer as well as the total number of entries
953that were written. The difference is the number of entries that were
954lost due to the buffer filling up (250280 - 140080 = 110200 events
955lost).
956
957The header explains the content of the events. Task name "bash", the task
958PID "1977", the CPU that it was running on "000", the latency format
959(explained below), the timestamp in <secs>.<usecs> format, the
960function name that was traced "sys_close" and the parent function that
961called this function "system_call_fastpath". The timestamp is the time
962at which the function was entered.
963
964Latency trace format
965--------------------
966
967When the latency-format option is enabled or when one of the latency
968tracers is set, the trace file gives somewhat more information to see
969why a latency happened. Here is a typical trace::
970
971  # tracer: irqsoff
972  #
973  # irqsoff latency trace v1.1.5 on 3.8.0-test+
974  # --------------------------------------------------------------------
975  # latency: 259 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
976  #    -----------------
977  #    | task: ps-6143 (uid:0 nice:0 policy:0 rt_prio:0)
978  #    -----------------
979  #  => started at: __lock_task_sighand
980  #  => ended at:   _raw_spin_unlock_irqrestore
981  #
982  #
983  #                  _------=> CPU#
984  #                 / _-----=> irqs-off
985  #                | / _----=> need-resched
986  #                || / _---=> hardirq/softirq
987  #                ||| / _--=> preempt-depth
988  #                |||| /     delay
989  #  cmd     pid   ||||| time  |   caller
990  #     \   /      |||||  \    |   /
991        ps-6143    2d...    0us!: trace_hardirqs_off <-__lock_task_sighand
992        ps-6143    2d..1  259us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
993        ps-6143    2d..1  263us+: time_hardirqs_on <-_raw_spin_unlock_irqrestore
994        ps-6143    2d..1  306us : <stack trace>
995   => trace_hardirqs_on_caller
996   => trace_hardirqs_on
997   => _raw_spin_unlock_irqrestore
998   => do_task_stat
999   => proc_tgid_stat
1000   => proc_single_show
1001   => seq_read
1002   => vfs_read
1003   => sys_read
1004   => system_call_fastpath
1005
1006
1007This shows that the current tracer is "irqsoff" tracing the time
1008for which interrupts were disabled. It gives the trace version (which
never changes) and the version of the kernel on which this was executed
1010(3.8). Then it displays the max latency in microseconds (259 us). The number
1011of trace entries displayed and the total number (both are four: #4/4).
1012VP, KP, SP, and HP are always zero and are reserved for later use.
1013#P is the number of online CPUs (#P:4).
1014
1015The task is the process that was running when the latency
1016occurred. (ps pid: 6143).
1017
1018The start and stop (the functions in which the interrupts were
1019disabled and enabled respectively) that caused the latencies:
1020
1021  - __lock_task_sighand is where the interrupts were disabled.
1022  - _raw_spin_unlock_irqrestore is where they were enabled again.
1023
1024The next lines after the header are the trace itself. The header
1025explains which is which.
1026
1027  cmd: The name of the process in the trace.
1028
1029  pid: The PID of that process.
1030
1031  CPU#: The CPU which the process was running on.
1032
1033  irqs-off: 'd' interrupts are disabled. '.' otherwise.
1034
1035  need-resched:
	- 'B' all of TIF_NEED_RESCHED, PREEMPT_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'N' both TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED are set,
	- 'n' only TIF_NEED_RESCHED is set,
	- 'p' only PREEMPT_NEED_RESCHED is set,
	- 'L' both PREEMPT_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'b' both TIF_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'l' only TIF_RESCHED_LAZY is set,
1043	- '.' otherwise.
1044
1045  hardirq/softirq:
1046	- 'Z' - NMI occurred inside a hardirq
1047	- 'z' - NMI is running
1048	- 'H' - hard irq occurred inside a softirq.
1049	- 'h' - hard irq is running
1050	- 's' - soft irq is running
1051	- '.' - normal context.
1052
1053  preempt-depth: The level of preempt_disabled
1054
1055The above is mostly meaningful for kernel developers.
1056
1057  time:
1058	When the latency-format option is enabled, the trace file
1059	output includes a timestamp relative to the start of the
1060	trace. This differs from the output when latency-format
1061	is disabled, which includes an absolute timestamp.
1062
1063  delay:
1064	This is just to help catch your eye a bit better. And
1065	needs to be fixed to be only relative to the same CPU.
1066	The marks are determined by the difference between this
1067	current trace and the next trace.
1068
1069	  - '$' - greater than 1 second
	  - '@' - greater than 100 milliseconds
	  - '*' - greater than 10 milliseconds
	  - '#' - greater than 1000 microseconds
	  - '!' - greater than 100 microseconds
	  - '+' - greater than 10 microseconds
	  - ' ' - less than or equal to 10 microseconds.
1076
1077  The rest is the same as the 'trace' file.
1078
1079  Note, the latency tracers will usually end with a back trace
1080  to easily find where the latency occurred.
1081
1082trace_options
1083-------------
1084
1085The trace_options file (or the options directory) is used to control
1086what gets printed in the trace output, or manipulate the tracers.
1087To see what is available, simply cat the file::
1088
1089  cat trace_options
1090	print-parent
1091	nosym-offset
1092	nosym-addr
1093	noverbose
1094	noraw
1095	nohex
1096	nobin
1097	noblock
1098	nofields
1099	trace_printk
1100	annotate
1101	nouserstacktrace
1102	nosym-userobj
1103	noprintk-msg-only
1104	context-info
1105	nolatency-format
1106	record-cmd
1107	norecord-tgid
1108	overwrite
1109	nodisable_on_free
1110	irq-info
1111	markers
1112	noevent-fork
1113	function-trace
1114	nofunction-fork
1115	nodisplay-graph
1116	nostacktrace
1117	nobranch
1118
1119To disable one of the options, echo in the option prepended with
1120"no"::
1121
1122  echo noprint-parent > trace_options
1123
1124To enable an option, leave off the "no"::
1125
1126  echo sym-offset > trace_options
1127
1128Here are the available options:
1129
1130  print-parent
1131	On function traces, display the calling (parent)
1132	function as well as the function being traced.
1133	::
1134
1135	  print-parent:
1136	   bash-4000  [01]  1477.606694: simple_strtoul <-kstrtoul
1137
1138	  noprint-parent:
1139	   bash-4000  [01]  1477.606694: simple_strtoul
1140
1141
1142  sym-offset
1143	Display not only the function name, but also the
1144	offset in the function. For example, instead of
1145	seeing just "ktime_get", you will see
1146	"ktime_get+0xb/0x20".
1147	::
1148
1149	  sym-offset:
1150	   bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0
1151
1152  sym-addr
1153	This will also display the function address as well
1154	as the function name.
1155	::
1156
1157	  sym-addr:
1158	   bash-4000  [01]  1477.606694: simple_strtoul <c0339346>
1159
1160  verbose
1161	This deals with the trace file when the
	latency-format option is enabled.
1163	::
1164
1165	    bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
1166	    (+0.000ms): simple_strtoul (kstrtoul)
1167
1168  raw
1169	This will display raw numbers. This option is best for
1170	use with user applications that can translate the raw
1171	numbers better than having it done in the kernel.
1172
1173  hex
1174	Similar to raw, but the numbers will be in a hexadecimal format.
1175
1176  bin
1177	This will print out the formats in raw binary.
1178
1179  block
1180	When set, reading trace_pipe will not block when polled.
1181
1182  fields
1183	Print the fields as described by their types. This is a better
1184	option than using hex, bin or raw, as it gives a better parsing
1185	of the content of the event.
1186
1187  trace_printk
1188	Can disable trace_printk() from writing into the buffer.
1189
1190  trace_printk_dest
1191	Set to have trace_printk() and similar internal tracing functions
1192	write into this instance. Note, only one trace instance can have
1193	this set. By setting this flag, it clears the trace_printk_dest flag
1194	of the instance that had it set previously. By default, the top
1195	level trace has this set, and will get it set again if another
1196	instance has it set then clears it.
1197
1198	This flag cannot be cleared by the top level instance, as it is the
1199	default instance. The only way the top level instance has this flag
1200	cleared, is by it being set in another instance.
1201
1202  annotate
1203	It is sometimes confusing when the CPU buffers are full
1204	and one CPU buffer had a lot of events recently, thus
	a shorter time frame, where another CPU may have only had
1206	a few events, which lets it have older events. When
1207	the trace is reported, it shows the oldest events first,
1208	and it may look like only one CPU ran (the one with the
1209	oldest events). When the annotate option is set, it will
1210	display when a new CPU buffer started::
1211
1212			  <idle>-0     [001] dNs4 21169.031481: wake_up_idle_cpu <-add_timer_on
1213			  <idle>-0     [001] dNs4 21169.031482: _raw_spin_unlock_irqrestore <-add_timer_on
1214			  <idle>-0     [001] .Ns4 21169.031484: sub_preempt_count <-_raw_spin_unlock_irqrestore
1215		##### CPU 2 buffer started ####
1216			  <idle>-0     [002] .N.1 21169.031484: rcu_idle_exit <-cpu_idle
1217			  <idle>-0     [001] .Ns3 21169.031484: _raw_spin_unlock <-clocksource_watchdog
1218			  <idle>-0     [001] .Ns3 21169.031485: sub_preempt_count <-_raw_spin_unlock
1219
1220  userstacktrace
1221	This option changes the trace. It records a
1222	stacktrace of the current user space thread after
1223	each trace event.
1224
1225  sym-userobj
	When user stacktraces are enabled, look up which
	object the address belongs to, and print a
	relative address. This is especially useful when
	ASLR is on, otherwise you don't get a chance to
	resolve the address to object/file/line after
	the app is no longer running.

	The lookup is performed when you read
	trace or trace_pipe. Example::
1235
1236		  a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
1237		  x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
1238
1239
1240  printk-msg-only
1241	When set, trace_printk()s will only show the format
1242	and not their parameters (if trace_bprintk() or
1243	trace_bputs() was used to save the trace_printk()).
1244
1245  context-info
1246	Show only the event data. Hides the comm, PID,
1247	timestamp, CPU, and other useful data.
1248
1249  latency-format
1250	This option changes the trace output. When it is enabled,
1251	the trace displays additional information about the
1252	latency, as described in "Latency trace format".
1253
1254  pause-on-trace
1255	When set, opening the trace file for read, will pause
1256	writing to the ring buffer (as if tracing_on was set to zero).
1257	This simulates the original behavior of the trace file.
1258	When the file is closed, tracing will be enabled again.
1259
1260  hash-ptr
	When set, "%p" in the event printk format displays the
	hashed pointer value instead of the real address.
	This is useful if you want to find out which hashed
	value corresponds to the real value in the trace log.
1265
1266  record-cmd
1267	When any event or tracer is enabled, a hook is enabled
1268	in the sched_switch trace point to fill comm cache
1269	with mapped pids and comms. But this may cause some
1270	overhead, and if you only care about pids, and not the
1271	name of the task, disabling this option can lower the
1272	impact of tracing. See "saved_cmdlines".
1273
1274  record-tgid
1275	When any event or tracer is enabled, a hook is enabled
1276	in the sched_switch trace point to fill the cache of
1277	mapped Thread Group IDs (TGID) mapping to pids. See
1278	"saved_tgids".
1279
1280  overwrite
1281	This controls what happens when the trace buffer is
1282	full. If "1" (default), the oldest events are
1283	discarded and overwritten. If "0", then the newest
1284	events are discarded.
1285	(see per_cpu/cpu0/stats for overrun and dropped)
1286
1287  disable_on_free
1288	When the free_buffer is closed, tracing will
1289	stop (tracing_on set to 0).
1290
1291  irq-info
1292	Shows the interrupt, preempt count, need resched data.
1293	When disabled, the trace looks like::
1294
1295		# tracer: function
1296		#
1297		# entries-in-buffer/entries-written: 144405/9452052   #P:4
1298		#
1299		#           TASK-PID   CPU#      TIMESTAMP  FUNCTION
1300		#              | |       |          |         |
1301			  <idle>-0     [002]  23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
1302			  <idle>-0     [002]  23636.756054: activate_task <-ttwu_do_activate.constprop.89
1303			  <idle>-0     [002]  23636.756055: enqueue_task <-activate_task
1304
1305
1306  markers
1307	When set, the trace_marker is writable (only by root).
1308	When disabled, the trace_marker will error with EINVAL
1309	on write.
1310
1311  event-fork
1312	When set, tasks with PIDs listed in set_event_pid will have
1313	the PIDs of their children added to set_event_pid when those
1314	tasks fork. Also, when tasks with PIDs in set_event_pid exit,
1315	their PIDs will be removed from the file.
1316
	This affects PIDs listed in set_event_notrace_pid as well.
1318
1319  function-trace
1320	The latency tracers will enable function tracing
1321	if this option is enabled (default it is). When
1322	it is disabled, the latency tracers do not trace
1323	functions. This keeps the overhead of the tracer down
1324	when performing latency tests.
1325
1326  function-fork
1327	When set, tasks with PIDs listed in set_ftrace_pid will
1328	have the PIDs of their children added to set_ftrace_pid
1329	when those tasks fork. Also, when tasks with PIDs in
1330	set_ftrace_pid exit, their PIDs will be removed from the
1331	file.
1332
	This affects PIDs in set_ftrace_notrace_pid as well.
1334
1335  display-graph
1336	When set, the latency tracers (irqsoff, wakeup, etc) will
1337	use function graph tracing instead of function tracing.
1338
1339  stacktrace
1340	When set, a stack trace is recorded after any trace event
1341	is recorded.
1342
1343  branch
	Enable branch tracing with the tracer. This enables the branch
	tracer along with the currently set tracer. Enabling this
1346	with the "nop" tracer is the same as just enabling the
1347	"branch" tracer.
1348
1349.. tip:: Some tracers have their own options. They only appear in this
1350       file when the tracer is active. They always appear in the
1351       options directory.
1352
1353
1354Here are the per tracer options:
1355
1356Options for function tracer:
1357
1358  func_stack_trace
1359	When set, a stack trace is recorded after every
1360	function that is recorded. NOTE! Limit the functions
1361	that are recorded before enabling this, with
1362	"set_ftrace_filter" otherwise the system performance
1363	will be critically degraded. Remember to disable
1364	this option before clearing the function filter.
1365
1366Options for function_graph tracer:
1367
1368 Since the function_graph tracer has a slightly different output
1369 it has its own options to control what is displayed.
1370
1371  funcgraph-overrun
1372	When set, the "overrun" of the graph stack is
1373	displayed after each function traced. The
	overrun is when the stack depth of the calls
1375	is greater than what is reserved for each task.
1376	Each task has a fixed array of functions to
1377	trace in the call graph. If the depth of the
1378	calls exceeds that, the function is not traced.
1379	The overrun is the number of functions missed
1380	due to exceeding this array.
1381
1382  funcgraph-cpu
1383	When set, the CPU number of the CPU where the trace
1384	occurred is displayed.
1385
1386  funcgraph-overhead
1387	When set, if the function takes longer than
	a certain amount, then a delay marker is
1389	displayed. See "delay" above, under the
1390	header description.
1391
1392  funcgraph-proc
1393	Unlike other tracers, the process' command line
1394	is not displayed by default, but instead only
1395	when a task is traced in and out during a context
	switch. Enabling this option displays the command
	of each process at every line.
1398
1399  funcgraph-duration
1400	At the end of each function (the return)
	the time spent in the
1402	function is displayed in microseconds.
1403
1404  funcgraph-abstime
1405	When set, the timestamp is displayed at each line.
1406
1407  funcgraph-irqs
1408	When disabled, functions that happen inside an
1409	interrupt will not be traced.
1410
1411  funcgraph-tail
1412	When set, the return event will include the function
1413	that it represents. By default this is off, and
1414	only a closing curly bracket "}" is displayed for
1415	the return of a function.
1416
1417  funcgraph-retval
1418	When set, the return value of each traced function
1419	will be printed after an equal sign "=". By default
1420	this is off.
1421
1422  funcgraph-retval-hex
1423	When set, the return value will always be printed
1424	in hexadecimal format. If the option is not set and
1425	the return value is an error code, it will be printed
1426	in signed decimal format; otherwise it will also be
1427	printed in hexadecimal format. By default, this option
1428	is off.
1429
1430  sleep-time
	When running the function graph tracer, include
	the time a task is scheduled out in its function.
	When enabled, the time the task has been scheduled
	out is accounted as part of the function call.
1435
1436  graph-time
	When running the function profiler with the function graph
	tracer, include the time spent in nested function calls.
	When this is not set, the time reported for the function will
	only include the time the function itself executed for, not
	the time for functions that it called.
1442
1443Options for blk tracer:
1444
1445  blk_classic
1446	Shows a more minimalistic output.
1447
1448
1449irqsoff
1450-------
1451
1452When interrupts are disabled, the CPU can not react to any other
1453external event (besides NMIs and SMIs). This prevents the timer
1454interrupt from triggering or the mouse interrupt from letting
the kernel know of a new mouse event. The result is a latency
in the reaction time.
1457
1458The irqsoff tracer tracks the time for which interrupts are
1459disabled. When a new maximum latency is hit, the tracer saves
1460the trace leading up to that latency point so that every time a
1461new maximum is reached, the old saved trace is discarded and the
1462new trace is saved.
1463
1464To reset the maximum, echo 0 into tracing_max_latency. Here is
1465an example::
1466
1467  # echo 0 > options/function-trace
1468  # echo irqsoff > current_tracer
1469  # echo 1 > tracing_on
1470  # echo 0 > tracing_max_latency
1471  # ls -ltr
1472  [...]
1473  # echo 0 > tracing_on
1474  # cat trace
1475  # tracer: irqsoff
1476  #
1477  # irqsoff latency trace v1.1.5 on 3.8.0-test+
1478  # --------------------------------------------------------------------
1479  # latency: 16 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1480  #    -----------------
1481  #    | task: swapper/0-0 (uid:0 nice:0 policy:0 rt_prio:0)
1482  #    -----------------
1483  #  => started at: run_timer_softirq
1484  #  => ended at:   run_timer_softirq
1485  #
1486  #
1487  #                  _------=> CPU#
1488  #                 / _-----=> irqs-off
1489  #                | / _----=> need-resched
1490  #                || / _---=> hardirq/softirq
1491  #                ||| / _--=> preempt-depth
1492  #                |||| /     delay
1493  #  cmd     pid   ||||| time  |   caller
1494  #     \   /      |||||  \    |   /
1495    <idle>-0       0d.s2    0us+: _raw_spin_lock_irq <-run_timer_softirq
1496    <idle>-0       0dNs3   17us : _raw_spin_unlock_irq <-run_timer_softirq
1497    <idle>-0       0dNs3   17us+: trace_hardirqs_on <-run_timer_softirq
1498    <idle>-0       0dNs3   25us : <stack trace>
1499   => _raw_spin_unlock_irq
1500   => run_timer_softirq
1501   => __do_softirq
1502   => call_softirq
1503   => do_softirq
1504   => irq_exit
1505   => smp_apic_timer_interrupt
1506   => apic_timer_interrupt
1507   => rcu_idle_exit
1508   => cpu_idle
1509   => rest_init
1510   => start_kernel
1511   => x86_64_start_reservations
1512   => x86_64_start_kernel
1513
1514Here we see that we had a latency of 16 microseconds (which is
1515very good). The _raw_spin_lock_irq in run_timer_softirq disabled
1516interrupts. The difference between the 16 and the displayed
1517timestamp 25us occurred because the clock was incremented
1518between the time of recording the max latency and the time of
1519recording the function that had that latency.
1520
1521Note the above example had function-trace not set. If we set
1522function-trace, we get a much larger output::
1523
1524 with echo 1 > options/function-trace
1525
1526  # tracer: irqsoff
1527  #
1528  # irqsoff latency trace v1.1.5 on 3.8.0-test+
1529  # --------------------------------------------------------------------
1530  # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1531  #    -----------------
1532  #    | task: bash-2042 (uid:0 nice:0 policy:0 rt_prio:0)
1533  #    -----------------
1534  #  => started at: ata_scsi_queuecmd
1535  #  => ended at:   ata_scsi_queuecmd
1536  #
1537  #
1538  #                  _------=> CPU#
1539  #                 / _-----=> irqs-off
1540  #                | / _----=> need-resched
1541  #                || / _---=> hardirq/softirq
1542  #                ||| / _--=> preempt-depth
1543  #                |||| /     delay
1544  #  cmd     pid   ||||| time  |   caller
1545  #     \   /      |||||  \    |   /
1546      bash-2042    3d...    0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1547      bash-2042    3d...    0us : add_preempt_count <-_raw_spin_lock_irqsave
1548      bash-2042    3d..1    1us : ata_scsi_find_dev <-ata_scsi_queuecmd
1549      bash-2042    3d..1    1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1550      bash-2042    3d..1    2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1551      bash-2042    3d..1    2us : ata_qc_new_init <-__ata_scsi_queuecmd
1552      bash-2042    3d..1    3us : ata_sg_init <-__ata_scsi_queuecmd
1553      bash-2042    3d..1    4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1554      bash-2042    3d..1    4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1555  [...]
1556      bash-2042    3d..1   67us : delay_tsc <-__delay
1557      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1558      bash-2042    3d..2   67us : sub_preempt_count <-delay_tsc
1559      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1560      bash-2042    3d..2   68us : sub_preempt_count <-delay_tsc
1561      bash-2042    3d..1   68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1562      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1563      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1564      bash-2042    3d..1   72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1565      bash-2042    3d..1  120us : <stack trace>
1566   => _raw_spin_unlock_irqrestore
1567   => ata_scsi_queuecmd
1568   => scsi_dispatch_cmd
1569   => scsi_request_fn
1570   => __blk_run_queue_uncond
1571   => __blk_run_queue
1572   => blk_queue_bio
1573   => submit_bio_noacct
1574   => submit_bio
1575   => submit_bh
1576   => __ext3_get_inode_loc
1577   => ext3_iget
1578   => ext3_lookup
1579   => lookup_real
1580   => __lookup_hash
1581   => walk_component
1582   => lookup_last
1583   => path_lookupat
1584   => filename_lookup
1585   => user_path_at_empty
1586   => user_path_at
1587   => vfs_fstatat
1588   => vfs_stat
1589   => sys_newstat
1590   => system_call_fastpath
1591
1592
1593Here we traced a 71 microsecond latency. But we also see all the
1594functions that were called during that time. Note that by
1595enabling function tracing, we incur an added overhead. This
1596overhead may extend the latency times. But nevertheless, this
1597trace has provided some very helpful debugging information.
1598
If we prefer function graph output instead of function output, we
can set the display-graph option::
1601
1602 with echo 1 > options/display-graph
1603
1604  # tracer: irqsoff
1605  #
1606  # irqsoff latency trace v1.1.5 on 4.20.0-rc6+
1607  # --------------------------------------------------------------------
1608  # latency: 3751 us, #274/274, CPU#0 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
1609  #    -----------------
1610  #    | task: bash-1507 (uid:0 nice:0 policy:0 rt_prio:0)
1611  #    -----------------
1612  #  => started at: free_debug_processing
1613  #  => ended at:   return_to_handler
1614  #
1615  #
1616  #                                       _-----=> irqs-off
1617  #                                      / _----=> need-resched
1618  #                                     | / _---=> hardirq/softirq
1619  #                                     || / _--=> preempt-depth
1620  #                                     ||| /
1621  #   REL TIME      CPU  TASK/PID       ||||     DURATION                  FUNCTION CALLS
1622  #      |          |     |    |        ||||      |   |                     |   |   |   |
1623          0 us |   0)   bash-1507    |  d... |   0.000 us    |  _raw_spin_lock_irqsave();
1624          0 us |   0)   bash-1507    |  d..1 |   0.378 us    |    do_raw_spin_trylock();
1625          1 us |   0)   bash-1507    |  d..2 |               |    set_track() {
1626          2 us |   0)   bash-1507    |  d..2 |               |      save_stack_trace() {
1627          2 us |   0)   bash-1507    |  d..2 |               |        __save_stack_trace() {
1628          3 us |   0)   bash-1507    |  d..2 |               |          __unwind_start() {
1629          3 us |   0)   bash-1507    |  d..2 |               |            get_stack_info() {
1630          3 us |   0)   bash-1507    |  d..2 |   0.351 us    |              in_task_stack();
1631          4 us |   0)   bash-1507    |  d..2 |   1.107 us    |            }
1632  [...]
1633       3750 us |   0)   bash-1507    |  d..1 |   0.516 us    |      do_raw_spin_unlock();
1634       3750 us |   0)   bash-1507    |  d..1 |   0.000 us    |  _raw_spin_unlock_irqrestore();
1635       3764 us |   0)   bash-1507    |  d..1 |   0.000 us    |  tracer_hardirqs_on();
1636      bash-1507    0d..1 3792us : <stack trace>
1637   => free_debug_processing
1638   => __slab_free
1639   => kmem_cache_free
1640   => vm_area_free
1641   => remove_vma
1642   => exit_mmap
1643   => mmput
1644   => begin_new_exec
1645   => load_elf_binary
1646   => search_binary_handler
1647   => __do_execve_file.isra.32
1648   => __x64_sys_execve
1649   => do_syscall_64
1650   => entry_SYSCALL_64_after_hwframe
1651
1652preemptoff
1653----------
1654
1655When preemption is disabled, we may be able to receive
1656interrupts but the task cannot be preempted and a higher
1657priority task must wait for preemption to be enabled again
1658before it can preempt a lower priority task.
1659
1660The preemptoff tracer traces the places that disable preemption.
1661Like the irqsoff tracer, it records the maximum latency for
1662which preemption was disabled. The control of preemptoff tracer
1663is much like the irqsoff tracer.
1664::
1665
1666  # echo 0 > options/function-trace
1667  # echo preemptoff > current_tracer
1668  # echo 1 > tracing_on
1669  # echo 0 > tracing_max_latency
1670  # ls -ltr
1671  [...]
1672  # echo 0 > tracing_on
1673  # cat trace
1674  # tracer: preemptoff
1675  #
1676  # preemptoff latency trace v1.1.5 on 3.8.0-test+
1677  # --------------------------------------------------------------------
1678  # latency: 46 us, #4/4, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1679  #    -----------------
1680  #    | task: sshd-1991 (uid:0 nice:0 policy:0 rt_prio:0)
1681  #    -----------------
1682  #  => started at: do_IRQ
1683  #  => ended at:   do_IRQ
1684  #
1685  #
1686  #                  _------=> CPU#
1687  #                 / _-----=> irqs-off
1688  #                | / _----=> need-resched
1689  #                || / _---=> hardirq/softirq
1690  #                ||| / _--=> preempt-depth
1691  #                |||| /     delay
1692  #  cmd     pid   ||||| time  |   caller
1693  #     \   /      |||||  \    |   /
1694      sshd-1991    1d.h.    0us+: irq_enter <-do_IRQ
1695      sshd-1991    1d..1   46us : irq_exit <-do_IRQ
1696      sshd-1991    1d..1   47us+: trace_preempt_on <-do_IRQ
1697      sshd-1991    1d..1   52us : <stack trace>
1698   => sub_preempt_count
1699   => irq_exit
1700   => do_IRQ
1701   => ret_from_intr
1702
1703
This shows a few more details. Preemption was disabled when an
interrupt came in (notice the 'h'), and was enabled again on exit.
But we also see that interrupts were disabled when entering and
leaving the preempt-off section (the 'd'). We do not know whether
interrupts were enabled in the meantime or shortly after this
section ended.
1710::
1711
1712  # tracer: preemptoff
1713  #
1714  # preemptoff latency trace v1.1.5 on 3.8.0-test+
1715  # --------------------------------------------------------------------
1716  # latency: 83 us, #241/241, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1717  #    -----------------
1718  #    | task: bash-1994 (uid:0 nice:0 policy:0 rt_prio:0)
1719  #    -----------------
1720  #  => started at: wake_up_new_task
1721  #  => ended at:   task_rq_unlock
1722  #
1723  #
1724  #                  _------=> CPU#
1725  #                 / _-----=> irqs-off
1726  #                | / _----=> need-resched
1727  #                || / _---=> hardirq/softirq
1728  #                ||| / _--=> preempt-depth
1729  #                |||| /     delay
1730  #  cmd     pid   ||||| time  |   caller
1731  #     \   /      |||||  \    |   /
1732      bash-1994    1d..1    0us : _raw_spin_lock_irqsave <-wake_up_new_task
1733      bash-1994    1d..1    0us : select_task_rq_fair <-select_task_rq
1734      bash-1994    1d..1    1us : __rcu_read_lock <-select_task_rq_fair
1735      bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1736      bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1737  [...]
1738      bash-1994    1d..1   12us : irq_enter <-smp_apic_timer_interrupt
1739      bash-1994    1d..1   12us : rcu_irq_enter <-irq_enter
1740      bash-1994    1d..1   13us : add_preempt_count <-irq_enter
1741      bash-1994    1d.h1   13us : exit_idle <-smp_apic_timer_interrupt
1742      bash-1994    1d.h1   13us : hrtimer_interrupt <-smp_apic_timer_interrupt
1743      bash-1994    1d.h1   13us : _raw_spin_lock <-hrtimer_interrupt
1744      bash-1994    1d.h1   14us : add_preempt_count <-_raw_spin_lock
1745      bash-1994    1d.h2   14us : ktime_get_update_offsets <-hrtimer_interrupt
1746  [...]
1747      bash-1994    1d.h1   35us : lapic_next_event <-clockevents_program_event
1748      bash-1994    1d.h1   35us : irq_exit <-smp_apic_timer_interrupt
1749      bash-1994    1d.h1   36us : sub_preempt_count <-irq_exit
1750      bash-1994    1d..2   36us : do_softirq <-irq_exit
1751      bash-1994    1d..2   36us : __do_softirq <-call_softirq
1752      bash-1994    1d..2   36us : __local_bh_disable <-__do_softirq
1753      bash-1994    1d.s2   37us : add_preempt_count <-_raw_spin_lock_irq
1754      bash-1994    1d.s3   38us : _raw_spin_unlock <-run_timer_softirq
1755      bash-1994    1d.s3   39us : sub_preempt_count <-_raw_spin_unlock
1756      bash-1994    1d.s2   39us : call_timer_fn <-run_timer_softirq
1757  [...]
1758      bash-1994    1dNs2   81us : cpu_needs_another_gp <-rcu_process_callbacks
1759      bash-1994    1dNs2   82us : __local_bh_enable <-__do_softirq
1760      bash-1994    1dNs2   82us : sub_preempt_count <-__local_bh_enable
1761      bash-1994    1dN.2   82us : idle_cpu <-irq_exit
1762      bash-1994    1dN.2   83us : rcu_irq_exit <-irq_exit
1763      bash-1994    1dN.2   83us : sub_preempt_count <-irq_exit
1764      bash-1994    1.N.1   84us : _raw_spin_unlock_irqrestore <-task_rq_unlock
1765      bash-1994    1.N.1   84us+: trace_preempt_on <-task_rq_unlock
1766      bash-1994    1.N.1  104us : <stack trace>
1767   => sub_preempt_count
1768   => _raw_spin_unlock_irqrestore
1769   => task_rq_unlock
1770   => wake_up_new_task
1771   => do_fork
1772   => sys_clone
1773   => stub_clone
1774
1775
The above is an example of the preemptoff trace with
function-trace set. Here we see that interrupts were not disabled
the entire time. The irq_enter code lets us know that we entered
an interrupt 'h'. Before that, the flags of the functions being
traced still show that we are not in an interrupt, but we can see
from the functions themselves that this is not the case.
1782
1783preemptirqsoff
1784--------------
1785
Knowing the locations that have interrupts disabled or
preemption disabled for the longest times is helpful. But
sometimes we would like to know the full time for which either
preemption or interrupts (or both) are disabled.
1790
1791Consider the following code::
1792
1793    local_irq_disable();
1794    call_function_with_irqs_off();
1795    preempt_disable();
1796    call_function_with_irqs_and_preemption_off();
1797    local_irq_enable();
1798    call_function_with_preemption_off();
1799    preempt_enable();
1800
1801The irqsoff tracer will record the total length of
1802call_function_with_irqs_off() and
1803call_function_with_irqs_and_preemption_off().
1804
1805The preemptoff tracer will record the total length of
1806call_function_with_irqs_and_preemption_off() and
1807call_function_with_preemption_off().
1808
But neither will trace the entire time that interrupts and/or
preemption are disabled. This total time is the time during which
we can not schedule. To record this time, use the preemptirqsoff
tracer, which in the example above measures from local_irq_disable()
to preempt_enable(), as illustrated below.
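
To make this concrete, here is the same sequence again, annotated
with comments (added here purely for illustration; they are not part
of any kernel code) marking the span that each tracer measures::

    local_irq_disable();
    /* irqsoff and preemptirqsoff tracing starts here */
    call_function_with_irqs_off();
    preempt_disable();
    /* preemptoff tracing starts here */
    call_function_with_irqs_and_preemption_off();
    local_irq_enable();
    /* irqsoff tracing stops here */
    call_function_with_preemption_off();
    preempt_enable();
    /* preemptoff and preemptirqsoff tracing stops here */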
1813
1814Again, using this trace is much like the irqsoff and preemptoff
1815tracers.
1816::
1817
1818  # echo 0 > options/function-trace
1819  # echo preemptirqsoff > current_tracer
1820  # echo 1 > tracing_on
1821  # echo 0 > tracing_max_latency
1822  # ls -ltr
1823  [...]
1824  # echo 0 > tracing_on
1825  # cat trace
1826  # tracer: preemptirqsoff
1827  #
1828  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1829  # --------------------------------------------------------------------
1830  # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1831  #    -----------------
1832  #    | task: ls-2230 (uid:0 nice:0 policy:0 rt_prio:0)
1833  #    -----------------
1834  #  => started at: ata_scsi_queuecmd
1835  #  => ended at:   ata_scsi_queuecmd
1836  #
1837  #
1838  #                  _------=> CPU#
1839  #                 / _-----=> irqs-off
1840  #                | / _----=> need-resched
1841  #                || / _---=> hardirq/softirq
1842  #                ||| / _--=> preempt-depth
1843  #                |||| /     delay
1844  #  cmd     pid   ||||| time  |   caller
1845  #     \   /      |||||  \    |   /
1846        ls-2230    3d...    0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1847        ls-2230    3...1  100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1848        ls-2230    3...1  101us+: trace_preempt_on <-ata_scsi_queuecmd
1849        ls-2230    3...1  111us : <stack trace>
1850   => sub_preempt_count
1851   => _raw_spin_unlock_irqrestore
1852   => ata_scsi_queuecmd
1853   => scsi_dispatch_cmd
1854   => scsi_request_fn
1855   => __blk_run_queue_uncond
1856   => __blk_run_queue
1857   => blk_queue_bio
1858   => submit_bio_noacct
1859   => submit_bio
1860   => submit_bh
1861   => ext3_bread
1862   => ext3_dir_bread
1863   => htree_dirblock_to_tree
1864   => ext3_htree_fill_tree
1865   => ext3_readdir
1866   => vfs_readdir
1867   => sys_getdents
1868   => system_call_fastpath
1869
1870
The trace_hardirqs_off_thunk is called from assembly on x86 when
interrupts are disabled in the assembly code. Without function
tracing, we do not know if interrupts were enabled within the
preemption points. We do see that it started with preemption
enabled.
1876
1877Here is a trace with function-trace set::
1878
1879  # tracer: preemptirqsoff
1880  #
1881  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1882  # --------------------------------------------------------------------
1883  # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1884  #    -----------------
1885  #    | task: ls-2269 (uid:0 nice:0 policy:0 rt_prio:0)
1886  #    -----------------
1887  #  => started at: schedule
1888  #  => ended at:   mutex_unlock
1889  #
1890  #
1891  #                  _------=> CPU#
1892  #                 / _-----=> irqs-off
1893  #                | / _----=> need-resched
1894  #                || / _---=> hardirq/softirq
1895  #                ||| / _--=> preempt-depth
1896  #                |||| /     delay
1897  #  cmd     pid   ||||| time  |   caller
1898  #     \   /      |||||  \    |   /
1899  kworker/-59      3...1    0us : __schedule <-schedule
1900  kworker/-59      3d..1    0us : rcu_preempt_qs <-rcu_note_context_switch
1901  kworker/-59      3d..1    1us : add_preempt_count <-_raw_spin_lock_irq
1902  kworker/-59      3d..2    1us : deactivate_task <-__schedule
1903  kworker/-59      3d..2    1us : dequeue_task <-deactivate_task
1904  kworker/-59      3d..2    2us : update_rq_clock <-dequeue_task
1905  kworker/-59      3d..2    2us : dequeue_task_fair <-dequeue_task
1906  kworker/-59      3d..2    2us : update_curr <-dequeue_task_fair
1907  kworker/-59      3d..2    2us : update_min_vruntime <-update_curr
1908  kworker/-59      3d..2    3us : cpuacct_charge <-update_curr
1909  kworker/-59      3d..2    3us : __rcu_read_lock <-cpuacct_charge
1910  kworker/-59      3d..2    3us : __rcu_read_unlock <-cpuacct_charge
1911  kworker/-59      3d..2    3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1912  kworker/-59      3d..2    4us : clear_buddies <-dequeue_task_fair
1913  kworker/-59      3d..2    4us : account_entity_dequeue <-dequeue_task_fair
1914  kworker/-59      3d..2    4us : update_min_vruntime <-dequeue_task_fair
1915  kworker/-59      3d..2    4us : update_cfs_shares <-dequeue_task_fair
1916  kworker/-59      3d..2    5us : hrtick_update <-dequeue_task_fair
1917  kworker/-59      3d..2    5us : wq_worker_sleeping <-__schedule
1918  kworker/-59      3d..2    5us : kthread_data <-wq_worker_sleeping
1919  kworker/-59      3d..2    5us : put_prev_task_fair <-__schedule
1920  kworker/-59      3d..2    6us : pick_next_task_fair <-pick_next_task
1921  kworker/-59      3d..2    6us : clear_buddies <-pick_next_task_fair
1922  kworker/-59      3d..2    6us : set_next_entity <-pick_next_task_fair
1923  kworker/-59      3d..2    6us : update_stats_wait_end <-set_next_entity
1924        ls-2269    3d..2    7us : finish_task_switch <-__schedule
1925        ls-2269    3d..2    7us : _raw_spin_unlock_irq <-finish_task_switch
1926        ls-2269    3d..2    8us : do_IRQ <-ret_from_intr
1927        ls-2269    3d..2    8us : irq_enter <-do_IRQ
1928        ls-2269    3d..2    8us : rcu_irq_enter <-irq_enter
1929        ls-2269    3d..2    9us : add_preempt_count <-irq_enter
1930        ls-2269    3d.h2    9us : exit_idle <-do_IRQ
1931  [...]
1932        ls-2269    3d.h3   20us : sub_preempt_count <-_raw_spin_unlock
1933        ls-2269    3d.h2   20us : irq_exit <-do_IRQ
1934        ls-2269    3d.h2   21us : sub_preempt_count <-irq_exit
1935        ls-2269    3d..3   21us : do_softirq <-irq_exit
1936        ls-2269    3d..3   21us : __do_softirq <-call_softirq
1937        ls-2269    3d..3   21us+: __local_bh_disable <-__do_softirq
1938        ls-2269    3d.s4   29us : sub_preempt_count <-_local_bh_enable_ip
1939        ls-2269    3d.s5   29us : sub_preempt_count <-_local_bh_enable_ip
1940        ls-2269    3d.s5   31us : do_IRQ <-ret_from_intr
1941        ls-2269    3d.s5   31us : irq_enter <-do_IRQ
1942        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1943  [...]
1944        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1945        ls-2269    3d.s5   32us : add_preempt_count <-irq_enter
1946        ls-2269    3d.H5   32us : exit_idle <-do_IRQ
1947        ls-2269    3d.H5   32us : handle_irq <-do_IRQ
1948        ls-2269    3d.H5   32us : irq_to_desc <-handle_irq
1949        ls-2269    3d.H5   33us : handle_fasteoi_irq <-handle_irq
1950  [...]
1951        ls-2269    3d.s5  158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1952        ls-2269    3d.s3  158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1953        ls-2269    3d.s3  159us : __local_bh_enable <-__do_softirq
1954        ls-2269    3d.s3  159us : sub_preempt_count <-__local_bh_enable
1955        ls-2269    3d..3  159us : idle_cpu <-irq_exit
1956        ls-2269    3d..3  159us : rcu_irq_exit <-irq_exit
1957        ls-2269    3d..3  160us : sub_preempt_count <-irq_exit
1958        ls-2269    3d...  161us : __mutex_unlock_slowpath <-mutex_unlock
1959        ls-2269    3d...  162us+: trace_hardirqs_on <-mutex_unlock
1960        ls-2269    3d...  186us : <stack trace>
1961   => __mutex_unlock_slowpath
1962   => mutex_unlock
1963   => process_output
1964   => n_tty_write
1965   => tty_write
1966   => vfs_write
1967   => sys_write
1968   => system_call_fastpath
1969
1970This is an interesting trace. It started with kworker running and
1971scheduling out and ls taking over. But as soon as ls released the
1972rq lock and enabled interrupts (but not preemption) an interrupt
1973triggered. When the interrupt finished, it started running softirqs.
1974But while the softirq was running, another interrupt triggered.
1975When an interrupt is running inside a softirq, the annotation is 'H'.
1976
1977
1978wakeup
1979------
1980
One common case that people are interested in tracing is the
time it takes from when a task is woken to when it actually
starts running. For non Real-Time tasks, this can be arbitrary.
But tracing it can nonetheless be interesting.
1985
1986Without function tracing::
1987
1988  # echo 0 > options/function-trace
1989  # echo wakeup > current_tracer
1990  # echo 1 > tracing_on
1991  # echo 0 > tracing_max_latency
1992  # chrt -f 5 sleep 1
1993  # echo 0 > tracing_on
1994  # cat trace
1995  # tracer: wakeup
1996  #
1997  # wakeup latency trace v1.1.5 on 3.8.0-test+
1998  # --------------------------------------------------------------------
1999  # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2000  #    -----------------
2001  #    | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
2002  #    -----------------
2003  #
2004  #                  _------=> CPU#
2005  #                 / _-----=> irqs-off
2006  #                | / _----=> need-resched
2007  #                || / _---=> hardirq/softirq
2008  #                ||| / _--=> preempt-depth
2009  #                |||| /     delay
2010  #  cmd     pid   ||||| time  |   caller
2011  #     \   /      |||||  \    |   /
2012    <idle>-0       3dNs7    0us :      0:120:R   + [003]   312:100:R kworker/3:1H
2013    <idle>-0       3dNs7    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2014    <idle>-0       3d..3   15us : __schedule <-schedule
2015    <idle>-0       3d..3   15us :      0:120:R ==> [003]   312:100:R kworker/3:1H
2016
The tracer only traces the highest priority task in the system
to avoid tracing the normal, uninteresting cases. Here we see that
the kworker with a nice priority of -20 (not very nice) took
just 15 microseconds from the time it woke up to the time it
ran.
2022
2023Non Real-Time tasks are not that interesting. A more interesting
2024trace is to concentrate only on Real-Time tasks.
2025
2026wakeup_rt
2027---------
2028
In a Real-Time environment it is very important to know the time
it takes from when the highest priority task is woken up to when
it actually executes. This is also known as "schedule latency".
I stress the point that this is about RT tasks. It is also
important to know the scheduling latency of non-RT tasks, but for
non-RT tasks the average schedule latency is the more useful
measure. Tools like LatencyTop are more appropriate for such
measurements.
2037
2038Real-Time environments are interested in the worst case latency.
2039That is the longest latency it takes for something to happen,
2040and not the average. We can have a very fast scheduler that may
2041only have a large latency once in a while, but that would not
2042work well with Real-Time tasks.  The wakeup_rt tracer was designed
2043to record the worst case wakeups of RT tasks. Non-RT tasks are
2044not recorded because the tracer only records one worst case and
2045tracing non-RT tasks that are unpredictable will overwrite the
2046worst case latency of RT tasks (just run the normal wakeup
2047tracer for a while to see that effect).
2048
2049Since this tracer only deals with RT tasks, we will run this
2050slightly differently than we did with the previous tracers.
2051Instead of performing an 'ls', we will run 'sleep 1' under
2052'chrt' which changes the priority of the task.
2053::
2054
2055  # echo 0 > options/function-trace
2056  # echo wakeup_rt > current_tracer
2057  # echo 1 > tracing_on
2058  # echo 0 > tracing_max_latency
2059  # chrt -f 5 sleep 1
2060  # echo 0 > tracing_on
2061  # cat trace
2064  # tracer: wakeup_rt
2065  #
2066  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2067  # --------------------------------------------------------------------
2068  # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2069  #    -----------------
2070  #    | task: sleep-2389 (uid:0 nice:0 policy:1 rt_prio:5)
2071  #    -----------------
2072  #
2073  #                  _------=> CPU#
2074  #                 / _-----=> irqs-off
2075  #                | / _----=> need-resched
2076  #                || / _---=> hardirq/softirq
2077  #                ||| / _--=> preempt-depth
2078  #                |||| /     delay
2079  #  cmd     pid   ||||| time  |   caller
2080  #     \   /      |||||  \    |   /
2081    <idle>-0       3d.h4    0us :      0:120:R   + [003]  2389: 94:R sleep
2082    <idle>-0       3d.h4    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2083    <idle>-0       3d..3    5us : __schedule <-schedule
2084    <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
2085
2086
Running this on an idle system, we see that it only took 5 microseconds
to perform the task switch.  Note, since the trace point in the scheduler
is before the actual "switch", we stop the tracing when the recorded task
is about to schedule in. This may change if we add a new marker at the
end of the scheduler.
2092
Notice that the recorded task is 'sleep' with the PID of 2389
and it has an rt_prio of 5. This priority is the user-space
priority and not the internal kernel priority. The policy is
1 for SCHED_FIFO and 2 for SCHED_RR.

Note that the trace data shows the internal priority (99 - rtprio).
2099::
2100
2101  <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
2102
The 0:120:R means idle was running with a nice priority of 0 (120 - 120)
and in the running state 'R'. The sleep task was scheduled in with
2389: 94:R. That is, the priority is the internal kernel rtprio
(99 - 5 = 94) and it too is in the running state.
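
As a minimal illustration only (a user-space sketch, not kernel
code), assuming the mappings described above (99 - rtprio for RT
tasks, and 120 plus the nice value for normal tasks), the displayed
priorities can be derived like this::

	#include <stdio.h>

	#define MAX_RT_PRIO	100	/* matches the kernel constant */

	/* Illustration only: derive the priority shown in the trace. */
	static int displayed_prio(int policy, int rt_prio, int nice)
	{
		if (policy == 1 || policy == 2)	/* SCHED_FIFO / SCHED_RR */
			return MAX_RT_PRIO - 1 - rt_prio;
		return MAX_RT_PRIO + 20 + nice;	/* normal tasks: 120 + nice */
	}

	int main(void)
	{
		printf("%d\n", displayed_prio(1, 5, 0));  /* 94, as for sleep-2389 */
		printf("%d\n", displayed_prio(0, 0, 0));  /* 120, as for idle */
		return 0;
	}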
2107
2108Doing the same with chrt -r 5 and function-trace set.
2109::
2110
2111  echo 1 > options/function-trace
2112
2113  # tracer: wakeup_rt
2114  #
2115  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2116  # --------------------------------------------------------------------
2117  # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2118  #    -----------------
2119  #    | task: sleep-2448 (uid:0 nice:0 policy:1 rt_prio:5)
2120  #    -----------------
2121  #
2122  #                  _------=> CPU#
2123  #                 / _-----=> irqs-off
2124  #                | / _----=> need-resched
2125  #                || / _---=> hardirq/softirq
2126  #                ||| / _--=> preempt-depth
2127  #                |||| /     delay
2128  #  cmd     pid   ||||| time  |   caller
2129  #     \   /      |||||  \    |   /
2130    <idle>-0       3d.h4    1us+:      0:120:R   + [003]  2448: 94:R sleep
2131    <idle>-0       3d.h4    2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2132    <idle>-0       3d.h3    3us : check_preempt_curr <-ttwu_do_wakeup
2133    <idle>-0       3d.h3    3us : resched_curr <-check_preempt_curr
2134    <idle>-0       3dNh3    4us : task_woken_rt <-ttwu_do_wakeup
2135    <idle>-0       3dNh3    4us : _raw_spin_unlock <-try_to_wake_up
2136    <idle>-0       3dNh3    4us : sub_preempt_count <-_raw_spin_unlock
2137    <idle>-0       3dNh2    5us : ttwu_stat <-try_to_wake_up
2138    <idle>-0       3dNh2    5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
2139    <idle>-0       3dNh2    6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2140    <idle>-0       3dNh1    6us : _raw_spin_lock <-__run_hrtimer
2141    <idle>-0       3dNh1    6us : add_preempt_count <-_raw_spin_lock
2142    <idle>-0       3dNh2    7us : _raw_spin_unlock <-hrtimer_interrupt
2143    <idle>-0       3dNh2    7us : sub_preempt_count <-_raw_spin_unlock
2144    <idle>-0       3dNh1    7us : tick_program_event <-hrtimer_interrupt
2145    <idle>-0       3dNh1    7us : clockevents_program_event <-tick_program_event
2146    <idle>-0       3dNh1    8us : ktime_get <-clockevents_program_event
2147    <idle>-0       3dNh1    8us : lapic_next_event <-clockevents_program_event
2148    <idle>-0       3dNh1    8us : irq_exit <-smp_apic_timer_interrupt
2149    <idle>-0       3dNh1    9us : sub_preempt_count <-irq_exit
2150    <idle>-0       3dN.2    9us : idle_cpu <-irq_exit
2151    <idle>-0       3dN.2    9us : rcu_irq_exit <-irq_exit
2152    <idle>-0       3dN.2   10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
2153    <idle>-0       3dN.2   10us : sub_preempt_count <-irq_exit
2154    <idle>-0       3.N.1   11us : rcu_idle_exit <-cpu_idle
2155    <idle>-0       3dN.1   11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
2156    <idle>-0       3.N.1   11us : tick_nohz_idle_exit <-cpu_idle
2157    <idle>-0       3dN.1   12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
2158    <idle>-0       3dN.1   12us : ktime_get <-tick_nohz_idle_exit
2159    <idle>-0       3dN.1   12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
2160    <idle>-0       3dN.1   13us : cpu_load_update_nohz <-tick_nohz_idle_exit
2161    <idle>-0       3dN.1   13us : _raw_spin_lock <-cpu_load_update_nohz
2162    <idle>-0       3dN.1   13us : add_preempt_count <-_raw_spin_lock
2163    <idle>-0       3dN.2   13us : __cpu_load_update <-cpu_load_update_nohz
2164    <idle>-0       3dN.2   14us : sched_avg_update <-__cpu_load_update
2165    <idle>-0       3dN.2   14us : _raw_spin_unlock <-cpu_load_update_nohz
2166    <idle>-0       3dN.2   14us : sub_preempt_count <-_raw_spin_unlock
2167    <idle>-0       3dN.1   15us : calc_load_nohz_stop <-tick_nohz_idle_exit
2168    <idle>-0       3dN.1   15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
2169    <idle>-0       3dN.1   15us : hrtimer_cancel <-tick_nohz_idle_exit
2170    <idle>-0       3dN.1   15us : hrtimer_try_to_cancel <-hrtimer_cancel
2171    <idle>-0       3dN.1   16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
2172    <idle>-0       3dN.1   16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2173    <idle>-0       3dN.1   16us : add_preempt_count <-_raw_spin_lock_irqsave
2174    <idle>-0       3dN.2   17us : __remove_hrtimer <-remove_hrtimer.part.16
2175    <idle>-0       3dN.2   17us : hrtimer_force_reprogram <-__remove_hrtimer
2176    <idle>-0       3dN.2   17us : tick_program_event <-hrtimer_force_reprogram
2177    <idle>-0       3dN.2   18us : clockevents_program_event <-tick_program_event
2178    <idle>-0       3dN.2   18us : ktime_get <-clockevents_program_event
2179    <idle>-0       3dN.2   18us : lapic_next_event <-clockevents_program_event
2180    <idle>-0       3dN.2   19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
2181    <idle>-0       3dN.2   19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2182    <idle>-0       3dN.1   19us : hrtimer_forward <-tick_nohz_idle_exit
2183    <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
2184    <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
2185    <idle>-0       3dN.1   20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2186    <idle>-0       3dN.1   20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
2187    <idle>-0       3dN.1   21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
2188    <idle>-0       3dN.1   21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2189    <idle>-0       3dN.1   21us : add_preempt_count <-_raw_spin_lock_irqsave
2190    <idle>-0       3dN.2   22us : ktime_add_safe <-__hrtimer_start_range_ns
2191    <idle>-0       3dN.2   22us : enqueue_hrtimer <-__hrtimer_start_range_ns
2192    <idle>-0       3dN.2   22us : tick_program_event <-__hrtimer_start_range_ns
2193    <idle>-0       3dN.2   23us : clockevents_program_event <-tick_program_event
2194    <idle>-0       3dN.2   23us : ktime_get <-clockevents_program_event
2195    <idle>-0       3dN.2   23us : lapic_next_event <-clockevents_program_event
2196    <idle>-0       3dN.2   24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
2197    <idle>-0       3dN.2   24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2198    <idle>-0       3dN.1   24us : account_idle_ticks <-tick_nohz_idle_exit
2199    <idle>-0       3dN.1   24us : account_idle_time <-account_idle_ticks
2200    <idle>-0       3.N.1   25us : sub_preempt_count <-cpu_idle
2201    <idle>-0       3.N..   25us : schedule <-cpu_idle
2202    <idle>-0       3.N..   25us : __schedule <-preempt_schedule
2203    <idle>-0       3.N..   26us : add_preempt_count <-__schedule
2204    <idle>-0       3.N.1   26us : rcu_note_context_switch <-__schedule
2205    <idle>-0       3.N.1   26us : rcu_sched_qs <-rcu_note_context_switch
2206    <idle>-0       3dN.1   27us : rcu_preempt_qs <-rcu_note_context_switch
2207    <idle>-0       3.N.1   27us : _raw_spin_lock_irq <-__schedule
2208    <idle>-0       3dN.1   27us : add_preempt_count <-_raw_spin_lock_irq
2209    <idle>-0       3dN.2   28us : put_prev_task_idle <-__schedule
2210    <idle>-0       3dN.2   28us : pick_next_task_stop <-pick_next_task
2211    <idle>-0       3dN.2   28us : pick_next_task_rt <-pick_next_task
2212    <idle>-0       3dN.2   29us : dequeue_pushable_task <-pick_next_task_rt
2213    <idle>-0       3d..3   29us : __schedule <-preempt_schedule
2214    <idle>-0       3d..3   30us :      0:120:R ==> [003]  2448: 94:R sleep
2215
2216This isn't that big of a trace, even with function tracing enabled,
2217so I included the entire trace.
2218
The interrupt went off while the system was idle. Somewhere
before task_woken_rt() was called, the NEED_RESCHED flag was set;
this is indicated by the first occurrence of the 'N' flag.
2222
2223Latency tracing and events
2224--------------------------
Function tracing can induce a much larger latency, but without
seeing what happens within the latency it is hard to know what
caused it. There is a middle ground, and that is to enable
events.
2229::
2230
2231  # echo 0 > options/function-trace
2232  # echo wakeup_rt > current_tracer
2233  # echo 1 > events/enable
2234  # echo 1 > tracing_on
2235  # echo 0 > tracing_max_latency
2236  # chrt -f 5 sleep 1
2237  # echo 0 > tracing_on
2238  # cat trace
2239  # tracer: wakeup_rt
2240  #
2241  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2242  # --------------------------------------------------------------------
2243  # latency: 6 us, #12/12, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2244  #    -----------------
2245  #    | task: sleep-5882 (uid:0 nice:0 policy:1 rt_prio:5)
2246  #    -----------------
2247  #
2248  #                  _------=> CPU#
2249  #                 / _-----=> irqs-off
2250  #                | / _----=> need-resched
2251  #                || / _---=> hardirq/softirq
2252  #                ||| / _--=> preempt-depth
2253  #                |||| /     delay
2254  #  cmd     pid   ||||| time  |   caller
2255  #     \   /      |||||  \    |   /
2256    <idle>-0       2d.h4    0us :      0:120:R   + [002]  5882: 94:R sleep
2257    <idle>-0       2d.h4    0us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2258    <idle>-0       2d.h4    1us : sched_wakeup: comm=sleep pid=5882 prio=94 success=1 target_cpu=002
2259    <idle>-0       2dNh2    1us : hrtimer_expire_exit: hrtimer=ffff88007796feb8
2260    <idle>-0       2.N.2    2us : power_end: cpu_id=2
2261    <idle>-0       2.N.2    3us : cpu_idle: state=4294967295 cpu_id=2
2262    <idle>-0       2dN.3    4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
2263    <idle>-0       2dN.3    4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer expires=34311211000000 softexpires=34311211000000
2264    <idle>-0       2.N.2    5us : rcu_utilization: Start context switch
2265    <idle>-0       2.N.2    5us : rcu_utilization: End context switch
2266    <idle>-0       2d..3    6us : __schedule <-schedule
2267    <idle>-0       2d..3    6us :      0:120:R ==> [002]  5882: 94:R sleep
2268
2269
2270Hardware Latency Detector
2271-------------------------
2272
2273The hardware latency detector is executed by enabling the "hwlat" tracer.
2274
2275NOTE, this tracer will affect the performance of the system as it will
2276periodically make a CPU constantly busy with interrupts disabled.
2277::
2278
2279  # echo hwlat > current_tracer
2280  # sleep 100
2281  # cat trace
2282  # tracer: hwlat
2283  #
2284  # entries-in-buffer/entries-written: 13/13   #P:8
2285  #
2286  #                              _-----=> irqs-off
2287  #                             / _----=> need-resched
2288  #                            | / _---=> hardirq/softirq
2289  #                            || / _--=> preempt-depth
2290  #                            ||| /     delay
2291  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2292  #              | |       |   ||||       |         |
2293             <...>-1729  [001] d...   678.473449: #1     inner/outer(us):   11/12    ts:1581527483.343962693 count:6
2294             <...>-1729  [004] d...   689.556542: #2     inner/outer(us):   16/9     ts:1581527494.889008092 count:1
2295             <...>-1729  [005] d...   714.756290: #3     inner/outer(us):   16/16    ts:1581527519.678961629 count:5
2296             <...>-1729  [001] d...   718.788247: #4     inner/outer(us):    9/17    ts:1581527523.889012713 count:1
2297             <...>-1729  [002] d...   719.796341: #5     inner/outer(us):   13/9     ts:1581527524.912872606 count:1
2298             <...>-1729  [006] d...   844.787091: #6     inner/outer(us):    9/12    ts:1581527649.889048502 count:2
2299             <...>-1729  [003] d...   849.827033: #7     inner/outer(us):   18/9     ts:1581527654.889013793 count:1
2300             <...>-1729  [007] d...   853.859002: #8     inner/outer(us):    9/12    ts:1581527658.889065736 count:1
2301             <...>-1729  [001] d...   855.874978: #9     inner/outer(us):    9/11    ts:1581527660.861991877 count:1
2302             <...>-1729  [001] d...   863.938932: #10    inner/outer(us):    9/11    ts:1581527668.970010500 count:1 nmi-total:7 nmi-count:1
2303             <...>-1729  [007] d...   878.050780: #11    inner/outer(us):    9/12    ts:1581527683.385002600 count:1 nmi-total:5 nmi-count:1
2304             <...>-1729  [007] d...   886.114702: #12    inner/outer(us):    9/12    ts:1581527691.385001600 count:1
2305
2306
The above output has mostly the same header as the other tracers. All
events will have interrupts disabled 'd'. Under the FUNCTION title
there is:
2309
2310 #1
2311	This is the count of events recorded that were greater than the
2312	tracing_threshold (See below).
2313
 inner/outer(us):   11/12
2315
      This shows two numbers, the "inner latency" and the "outer latency".
      The test runs in a loop checking a timestamp twice. The latency
      detected between those two timestamps is the "inner latency", and
      the latency detected between the last timestamp of one loop
      iteration and the first timestamp of the next iteration is the
      "outer latency". A simplified sketch of this loop is shown after
      this list.
2321
2322 ts:1581527483.343962693
2323
2324      The absolute timestamp that the first latency was recorded in the window.
2325
2326 count:6
2327
2328      The number of times a latency was detected during the window.
2329
2330 nmi-total:7 nmi-count:1
2331
2332      On architectures that support it, if an NMI comes in during the
2333      test, the time spent in NMI is reported in "nmi-total" (in
2334      microseconds).
2335
2336      All architectures that have NMIs will show the "nmi-count" if an
2337      NMI comes in during the test.
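
The inner/outer measurement described above can be sketched in user
space like this, for illustration only (this is not the kernel's
hwlat implementation, and the values passed in main() are just
example numbers)::

	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>

	/* User-space analogue of the hwlat sampling loop. */
	static uint64_t now_us(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (uint64_t)ts.tv_sec * 1000000 + (uint64_t)ts.tv_nsec / 1000;
	}

	static void sample_width(uint64_t width_us, uint64_t threshold_us)
	{
		uint64_t start = now_us(), t1, t2, last_t2 = 0;

		while (now_us() - start < width_us) {
			t1 = now_us();
			t2 = now_us();

			/* "outer": gap since the previous iteration's last read */
			if (last_t2 && t1 - last_t2 > threshold_us)
				printf("outer: %llu us\n",
				       (unsigned long long)(t1 - last_t2));

			/* "inner": gap between the two back-to-back reads */
			if (t2 - t1 > threshold_us)
				printf("inner: %llu us\n",
				       (unsigned long long)(t2 - t1));

			last_t2 = t2;
		}
	}

	int main(void)
	{
		sample_width(500000, 10);	/* "width" of 500000 us, 10 us threshold */
		return 0;
	}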
2338
2339hwlat files:
2340
2341  tracing_threshold
2342	This gets automatically set to "10" to represent 10
2343	microseconds. This is the threshold of latency that
2344	needs to be detected before the trace will be recorded.
2345
	Note, when the hwlat tracer is finished (another tracer is
2347	written into "current_tracer"), the original value for
2348	tracing_threshold is placed back into this file.
2349
2350  hwlat_detector/width
2351	The length of time the test runs with interrupts disabled.
2352
  hwlat_detector/window
	The length of time of the window in which the test
	runs. That is, the test will run for "width"
	microseconds per "window" microseconds.
2357
  tracing_cpumask
	When the test is started, a kernel thread is created to
	run the test. This thread alternates between the CPUs
	listed in the tracing_cpumask from one period (one
	"window") to the next. To limit the test to specific
	CPUs, set the mask in this file to only the CPUs that
	the test should run on.
2365
2366function
2367--------
2368
This tracer is the function tracer. Enabling the function tracer
can be done from the tracefs file system. Make sure the
ftrace_enabled sysctl is set; otherwise this tracer is a nop.
See the "ftrace_enabled" section below.
2373::
2374
2375  # sysctl kernel.ftrace_enabled=1
2376  # echo function > current_tracer
2377  # echo 1 > tracing_on
2378  # usleep 1
2379  # echo 0 > tracing_on
2380  # cat trace
2381  # tracer: function
2382  #
2383  # entries-in-buffer/entries-written: 24799/24799   #P:4
2384  #
2385  #                              _-----=> irqs-off
2386  #                             / _----=> need-resched
2387  #                            | / _---=> hardirq/softirq
2388  #                            || / _--=> preempt-depth
2389  #                            ||| /     delay
2390  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2391  #              | |       |   ||||       |         |
2392              bash-1994  [002] ....  3082.063030: mutex_unlock <-rb_simple_write
2393              bash-1994  [002] ....  3082.063031: __mutex_unlock_slowpath <-mutex_unlock
2394              bash-1994  [002] ....  3082.063031: __fsnotify_parent <-fsnotify_modify
2395              bash-1994  [002] ....  3082.063032: fsnotify <-fsnotify_modify
2396              bash-1994  [002] ....  3082.063032: __srcu_read_lock <-fsnotify
2397              bash-1994  [002] ....  3082.063032: add_preempt_count <-__srcu_read_lock
2398              bash-1994  [002] ...1  3082.063032: sub_preempt_count <-__srcu_read_lock
2399              bash-1994  [002] ....  3082.063033: __srcu_read_unlock <-fsnotify
2400  [...]
2401
2402
Note: the function tracer uses ring buffers to store the above
entries. The newest data may overwrite the oldest data.
Sometimes using echo to stop the trace is not sufficient because
the tracing could have overwritten the data that you wanted to
record. For this reason, it is sometimes better to disable
tracing directly from a program. This allows you to stop the
tracing at the point that you hit the part that you are
interested in. To disable the tracing directly from a C program,
something like the following code snippet can be used::
2412
2413	int trace_fd;
2414	[...]
2415	int main(int argc, char *argv[]) {
2416		[...]
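		/* tracing_file() (defined in the longer example later in
		 * this document) resolves a tracefs file name to its
		 * full path. */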
2417		trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
2418		[...]
2419		if (condition_hit()) {
2420			write(trace_fd, "0", 1);
2421		}
2422		[...]
2423	}
2424
2425
2426Single thread tracing
2427---------------------
2428
2429By writing into set_ftrace_pid you can trace a
2430single thread. For example::
2431
2432  # cat set_ftrace_pid
2433  no pid
2434  # echo 3111 > set_ftrace_pid
2435  # cat set_ftrace_pid
2436  3111
2437  # echo function > current_tracer
2438  # cat trace | head
2439  # tracer: function
2440  #
2441  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
2442  #              | |       |          |         |
2443      yum-updatesd-3111  [003]  1637.254676: finish_task_switch <-thread_return
2444      yum-updatesd-3111  [003]  1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
2445      yum-updatesd-3111  [003]  1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
2446      yum-updatesd-3111  [003]  1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
2447      yum-updatesd-3111  [003]  1637.254685: fget_light <-do_sys_poll
2448      yum-updatesd-3111  [003]  1637.254686: pipe_poll <-do_sys_poll
2449  # echo > set_ftrace_pid
2450  # cat trace |head
2451  # tracer: function
2452  #
2453  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
2454  #              | |       |          |         |
2455  ##### CPU 3 buffer started ####
2456      yum-updatesd-3111  [003]  1701.957688: free_poll_entry <-poll_freewait
2457      yum-updatesd-3111  [003]  1701.957689: remove_wait_queue <-free_poll_entry
2458      yum-updatesd-3111  [003]  1701.957691: fput <-free_poll_entry
2459      yum-updatesd-3111  [003]  1701.957692: audit_syscall_exit <-sysret_audit
2460      yum-updatesd-3111  [003]  1701.957693: path_put <-audit_syscall_exit
2461
If you want to trace only the kernel functions executed on behalf
of a program you run, you could use something like this simple
program.
2464::
2465
2466	#include <stdio.h>
2467	#include <stdlib.h>
2468	#include <sys/types.h>
2469	#include <sys/stat.h>
2470	#include <fcntl.h>
2471	#include <unistd.h>
2472	#include <string.h>
2473
2474	#define _STR(x) #x
2475	#define STR(x) _STR(x)
2476	#define MAX_PATH 256
2477
2478	const char *find_tracefs(void)
2479	{
2480	       static char tracefs[MAX_PATH+1];
2481	       static int tracefs_found;
	       char type[100] = "";
2483	       FILE *fp;
2484
2485	       if (tracefs_found)
2486		       return tracefs;
2487
2488	       if ((fp = fopen("/proc/mounts","r")) == NULL) {
2489		       perror("/proc/mounts");
2490		       return NULL;
2491	       }
2492
2493	       while (fscanf(fp, "%*s %"
2494		             STR(MAX_PATH)
2495		             "s %99s %*s %*d %*d\n",
2496		             tracefs, type) == 2) {
2497		       if (strcmp(type, "tracefs") == 0)
2498		               break;
2499	       }
2500	       fclose(fp);
2501
2502	       if (strcmp(type, "tracefs") != 0) {
2503		       fprintf(stderr, "tracefs not mounted");
2504		       return NULL;
2505	       }
2506
	       /* The tracefs mount point is already the tracing directory */
	       strcat(tracefs, "/");
2508	       tracefs_found = 1;
2509
2510	       return tracefs;
2511	}
2512
2513	const char *tracing_file(const char *file_name)
2514	{
2515	       static char trace_file[MAX_PATH+1];
2516	       snprintf(trace_file, MAX_PATH, "%s/%s", find_tracefs(), file_name);
2517	       return trace_file;
2518	}
2519
2520	int main (int argc, char **argv)
2521	{
		if (argc < 2)
2523		        exit(-1);
2524
2525		if (fork() > 0) {
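		        /* Parent: register our own pid with the function
		         * tracer, then exec the target command; the exec
		         * keeps the pid, so the command's kernel-side
		         * functions are traced. */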
2526		        int fd, ffd;
2527		        char line[64];
2528		        int s;
2529
2530		        ffd = open(tracing_file("current_tracer"), O_WRONLY);
2531		        if (ffd < 0)
2532		                exit(-1);
2533		        write(ffd, "nop", 3);
2534
2535		        fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
2536		        s = sprintf(line, "%d\n", getpid());
2537		        write(fd, line, s);
2538
2539		        write(ffd, "function", 8);
2540
2541		        close(fd);
2542		        close(ffd);
2543
2544		        execvp(argv[1], argv+1);
2545		}
2546
2547		return 0;
2548	}
2549
2550Or this simple script!
2551::
2552
2553  #!/bin/bash
2554
  tracefs=`sed -ne 's/^tracefs \(.*\) tracefs.*/\1/p' /proc/mounts | head -1`
2556  echo 0 > $tracefs/tracing_on
2557  echo $$ > $tracefs/set_ftrace_pid
2558  echo function > $tracefs/current_tracer
2559  echo 1 > $tracefs/tracing_on
2560  exec "$@"
2561
2562
2563function graph tracer
2564---------------------------
2565
This tracer is similar to the function tracer except that it
probes a function on its entry and its exit. This is done by
using a dynamically allocated stack of return addresses in each
task_struct. On function entry the tracer overwrites the return
address of each traced function to set a custom probe. Thus the
original return address is stored on the stack of return
addresses in the task_struct.
2573
Probing both ends of a function leads to special features
such as:

- measuring a function's execution time
- having a reliable call stack from which to draw a graph of
  function calls
2579
This tracer is useful in several situations:

- you want to find the reason for strange kernel behavior and
  need to see in detail what happens in any area (or in specific
  ones).

- you are experiencing weird latencies but it's difficult to
  find their origin.

- you want to quickly find which path is taken by a specific
  function.

- you just want to peek inside a working kernel and see
  what happens there.
2594
2595::
2596
2597  # tracer: function_graph
2598  #
2599  # CPU  DURATION                  FUNCTION CALLS
2600  # |     |   |                     |   |   |   |
2601
2602   0)               |  sys_open() {
2603   0)               |    do_sys_open() {
2604   0)               |      getname() {
2605   0)               |        kmem_cache_alloc() {
2606   0)   1.382 us    |          __might_sleep();
2607   0)   2.478 us    |        }
2608   0)               |        strncpy_from_user() {
2609   0)               |          might_fault() {
2610   0)   1.389 us    |            __might_sleep();
2611   0)   2.553 us    |          }
2612   0)   3.807 us    |        }
2613   0)   7.876 us    |      }
2614   0)               |      alloc_fd() {
2615   0)   0.668 us    |        _spin_lock();
2616   0)   0.570 us    |        expand_files();
2617   0)   0.586 us    |        _spin_unlock();
2618
2619
There are several columns that can be dynamically
enabled/disabled. You can use any combination of options you
want, depending on your needs.

- The cpu number on which the function executed is enabled by
  default.  It is sometimes better to only trace one cpu (see
  the tracing_cpumask file), otherwise you might sometimes see
  out-of-order function calls when the tracing switches between
  CPUs.
2628
2629	- hide: echo nofuncgraph-cpu > trace_options
2630	- show: echo funcgraph-cpu > trace_options
2631
- The duration (the function's execution time) is displayed on
  the closing bracket line of a function, or on the same line
  as the function itself if it is a leaf function. It is enabled
  by default.
2636
2637	- hide: echo nofuncgraph-duration > trace_options
2638	- show: echo funcgraph-duration > trace_options
2639
- The overhead field precedes the duration field when the
  duration exceeds certain thresholds (see the flag list below).
2642
2643	- hide: echo nofuncgraph-overhead > trace_options
2644	- show: echo funcgraph-overhead > trace_options
2645	- depends on: funcgraph-duration
2646
2647  ie::
2648
2649    3) # 1837.709 us |          } /* __switch_to */
2650    3)               |          finish_task_switch() {
2651    3)   0.313 us    |            _raw_spin_unlock_irq();
2652    3)   3.177 us    |          }
2653    3) # 1889.063 us |        } /* __schedule */
2654    3) ! 140.417 us  |      } /* __schedule */
2655    3) # 2034.948 us |    } /* schedule */
2656    3) * 33998.59 us |  } /* schedule_preempt_disabled */
2657
2658    [...]
2659
2660    1)   0.260 us    |              msecs_to_jiffies();
2661    1)   0.313 us    |              __rcu_read_unlock();
2662    1) + 61.770 us   |            }
2663    1) + 64.479 us   |          }
2664    1)   0.313 us    |          rcu_bh_qs();
2665    1)   0.313 us    |          __local_bh_enable();
2666    1) ! 217.240 us  |        }
2667    1)   0.365 us    |        idle_cpu();
2668    1)               |        rcu_irq_exit() {
2669    1)   0.417 us    |          rcu_eqs_enter_common.isra.47();
2670    1)   3.125 us    |        }
2671    1) ! 227.812 us  |      }
2672    1) ! 457.395 us  |    }
2673    1) @ 119760.2 us |  }
2674
2675    [...]
2676
2677    2)               |    handle_IPI() {
2678    1)   6.979 us    |                  }
2679    2)   0.417 us    |      scheduler_ipi();
2680    1)   9.791 us    |                }
2681    1) + 12.917 us   |              }
2682    2)   3.490 us    |    }
2683    1) + 15.729 us   |            }
2684    1) + 18.542 us   |          }
2685    2) $ 3594274 us  |  }
2686
2687Flags::
2688
2689  + means that the function exceeded 10 usecs.
2690  ! means that the function exceeded 100 usecs.
2691  # means that the function exceeded 1000 usecs.
2692  * means that the function exceeded 10 msecs.
2693  @ means that the function exceeded 100 msecs.
2694  $ means that the function exceeded 1 sec.
2695
2696
- The task/pid field displays the cmdline and pid of the thread
  that executed the function. It is disabled by default.
2699
2700	- hide: echo nofuncgraph-proc > trace_options
2701	- show: echo funcgraph-proc > trace_options
2702
2703  ie::
2704
2705    # tracer: function_graph
2706    #
2707    # CPU  TASK/PID        DURATION                  FUNCTION CALLS
2708    # |    |    |           |   |                     |   |   |   |
2709    0)    sh-4802     |               |                  d_free() {
2710    0)    sh-4802     |               |                    call_rcu() {
2711    0)    sh-4802     |               |                      __call_rcu() {
2712    0)    sh-4802     |   0.616 us    |                        rcu_process_gp_end();
2713    0)    sh-4802     |   0.586 us    |                        check_for_new_grace_period();
2714    0)    sh-4802     |   2.899 us    |                      }
2715    0)    sh-4802     |   4.040 us    |                    }
2716    0)    sh-4802     |   5.151 us    |                  }
2717    0)    sh-4802     | + 49.370 us   |                }
2718
2719
- The absolute time field is an absolute timestamp given by the
  system clock since it started. A snapshot of this time is
  given on each entry/exit of functions.
2723
2724	- hide: echo nofuncgraph-abstime > trace_options
2725	- show: echo funcgraph-abstime > trace_options
2726
2727  ie::
2728
2729    #
2730    #      TIME       CPU  DURATION                  FUNCTION CALLS
2731    #       |         |     |   |                     |   |   |   |
2732    360.774522 |   1)   0.541 us    |                                          }
2733    360.774522 |   1)   4.663 us    |                                        }
2734    360.774523 |   1)   0.541 us    |                                        __wake_up_bit();
2735    360.774524 |   1)   6.796 us    |                                      }
2736    360.774524 |   1)   7.952 us    |                                    }
2737    360.774525 |   1)   9.063 us    |                                  }
2738    360.774525 |   1)   0.615 us    |                                  journal_mark_dirty();
2739    360.774527 |   1)   0.578 us    |                                  __brelse();
2740    360.774528 |   1)               |                                  reiserfs_prepare_for_journal() {
2741    360.774528 |   1)               |                                    unlock_buffer() {
2742    360.774529 |   1)               |                                      wake_up_bit() {
2743    360.774529 |   1)               |                                        bit_waitqueue() {
2744    360.774530 |   1)   0.594 us    |                                          __phys_addr();
2745
2746
2747The function name is always displayed after the closing bracket
2748for a function if the start of that function is not in the
2749trace buffer.
2750
2751Display of the function name after the closing bracket may be
2752enabled for functions whose start is in the trace buffer,
2753allowing easier searching with grep for function durations.
It is disabled by default.
2755
2756	- hide: echo nofuncgraph-tail > trace_options
2757	- show: echo funcgraph-tail > trace_options
2758
2759  Example with nofuncgraph-tail (default)::
2760
2761    0)               |      putname() {
2762    0)               |        kmem_cache_free() {
2763    0)   0.518 us    |          __phys_addr();
2764    0)   1.757 us    |        }
2765    0)   2.861 us    |      }
2766
2767  Example with funcgraph-tail::
2768
2769    0)               |      putname() {
2770    0)               |        kmem_cache_free() {
2771    0)   0.518 us    |          __phys_addr();
2772    0)   1.757 us    |        } /* kmem_cache_free() */
2773    0)   2.861 us    |      } /* putname() */
2774
The return value of each traced function can be displayed after
an equal sign "=". When debugging system call failures, this can
be very helpful for quickly locating the function that first
returns an error code.
2779
2780	- hide: echo nofuncgraph-retval > trace_options
2781	- show: echo funcgraph-retval > trace_options
2782
2783  Example with funcgraph-retval::
2784
2785    1)               |    cgroup_migrate() {
2786    1)   0.651 us    |      cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
2787    1)               |      cgroup_migrate_execute() {
2788    1)               |        cpu_cgroup_can_attach() {
2789    1)               |          cgroup_taskset_first() {
2790    1)   0.732 us    |            cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
2791    1)   1.232 us    |          } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
2792    1)   0.380 us    |          sched_rt_can_attach(); /* = 0x0 */
2793    1)   2.335 us    |        } /* cpu_cgroup_can_attach = -22 */
2794    1)   4.369 us    |      } /* cgroup_migrate_execute = -22 */
2795    1)   7.143 us    |    } /* cgroup_migrate = -22 */
2796
The above example shows that the function cpu_cgroup_can_attach
was the first to return the error code -22; we can then read the
code of this function to find the root cause.
2800
When the option funcgraph-retval-hex is not set, the return value
is displayed in a smart way. Specifically, if it is an error code,
it will be printed in signed decimal format; otherwise it will be
printed in hexadecimal format.
2805
2806	- smart: echo nofuncgraph-retval-hex > trace_options
2807	- hexadecimal: echo funcgraph-retval-hex > trace_options
2808
2809  Example with funcgraph-retval-hex::
2810
2811    1)               |      cgroup_migrate() {
2812    1)   0.651 us    |        cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
2813    1)               |        cgroup_migrate_execute() {
2814    1)               |          cpu_cgroup_can_attach() {
2815    1)               |            cgroup_taskset_first() {
2816    1)   0.732 us    |              cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
2817    1)   1.232 us    |            } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
2818    1)   0.380 us    |            sched_rt_can_attach(); /* = 0x0 */
2819    1)   2.335 us    |          } /* cpu_cgroup_can_attach = 0xffffffea */
2820    1)   4.369 us    |        } /* cgroup_migrate_execute = 0xffffffea */
2821    1)   7.143 us    |      } /* cgroup_migrate = 0xffffffea */
2822
2823At present, there are some limitations when using the funcgraph-retval
2824option, and these limitations will be eliminated in the future:
2825
2826- Even if the function return type is void, a return value will still
2827  be printed, and you can just ignore it.
2828
2829- Even if return values are stored in multiple registers, only the
2830  value contained in the first register will be recorded and printed.
2831  To illustrate, in the x86 architecture, eax and edx are used to store
2832  a 64-bit return value, with the lower 32 bits saved in eax and the
2833  upper 32 bits saved in edx. However, only the value stored in eax
2834  will be recorded and printed.
2835
2836- In certain procedure call standards, such as arm64's AAPCS64, when a
2837  type is smaller than a GPR, it is the responsibility of the consumer
2838  to perform the narrowing, and the upper bits may contain UNKNOWN values.
2839  Therefore, it is advisable to check the code for such cases. For instance,
2840  when using a u8 in a 64-bit GPR, bits [63:8] may contain arbitrary values,
2841  especially when larger types are truncated, whether explicitly or implicitly.
2842  Here are some specific cases to illustrate this point:
2843
2844  **Case One**:
2845
2846  The function narrow_to_u8 is defined as follows::
2847
2848	u8 narrow_to_u8(u64 val)
2849	{
2850		// implicitly truncated
2851		return val;
2852	}
2853
2854  It may be compiled to::
2855
2856	narrow_to_u8:
2857		< ... ftrace instrumentation ... >
2858		RET
2859
2860  If you pass 0x123456789abcdef to this function and want to narrow it,
2861  it may be recorded as 0x123456789abcdef instead of 0xef.
2862
2863  **Case Two**:
2864
2865  The function error_if_not_4g_aligned is defined as follows::
2866
2867	int error_if_not_4g_aligned(u64 val)
2868	{
2869		if (val & GENMASK(31, 0))
2870			return -EINVAL;
2871
2872		return 0;
2873	}
2874
2875  It could be compiled to::
2876
2877	error_if_not_4g_aligned:
2878		CBNZ    w0, .Lnot_aligned
2879		RET			// bits [31:0] are zero, bits
2880					// [63:32] are UNKNOWN
2881	.Lnot_aligned:
2882		MOV    x0, #-EINVAL
2883		RET
2884
2885  When passing 0x2_0000_0000 to it, the return value may be recorded as
2886  0x2_0000_0000 instead of 0.
2887
You can put comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep()::

	trace_printk("I'm a comment!\n");
2894
2895will produce::
2896
2897   1)               |             __might_sleep() {
2898   1)               |                /* I'm a comment! */
2899   1)   1.449 us    |             }
2900
2901
2902You might find other useful features for this tracer in the
2903following "dynamic ftrace" section such as tracing only specific
2904functions or tasks.
2905
2906dynamic ftrace
2907--------------
2908
If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is that the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc)
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch in the compiling of the kernel.)
2915
At compile time every C file object is run through the
recordmcount program (located in the scripts directory). This
program will parse the ELF headers in the C object to find all
the locations in the .text section that call mcount. Starting
with gcc version 4.6, the -mfentry switch has been added for x86,
which calls "__fentry__" instead of "mcount". __fentry__ is called
before the creation of the stack frame.
2923
Note, not all functions are traced. Some may be prevented by a
notrace annotation or blocked in another way, and inline functions
are never traced. Check the "available_filter_functions" file to see
which functions can be traced.
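
For example, to check whether a particular function can be traced,
simply search for it in that file (the function name below is only
an illustration)::

  # grep -w schedule available_filter_functions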
2928
2929A section called "__mcount_loc" is created that holds
2930references to all the mcount/fentry call sites in the .text section.
2931The recordmcount program re-links this section back into the
2932original object. The final linking stage of the kernel will add all these
2933references into a single table.
2934
2935On boot up, before SMP is initialized, the dynamic ftrace code
2936scans this table and updates all the locations into nops. It
2937also records the locations, which are added to the
2938available_filter_functions list.  Modules are processed as they
2939are loaded and before they are executed.  When a module is
2940unloaded, it also removes its functions from the ftrace function
2941list. This is automatic in the module unload code, and the
2942module author does not need to worry about it.
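
When a module is loaded, its traceable functions also show up in
available_filter_functions, typically annotated with the module name
in brackets, so they can be listed with a simple grep (the module
name below is only an example)::

  # grep '\[ext4\]' available_filter_functions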
2943
2944When tracing is enabled, the process of modifying the function
2945tracepoints is dependent on architecture. The old method is to use
2946kstop_machine to prevent races with the CPUs executing code being
2947modified (which can cause the CPU to do undesirable things, especially
2948if the modified code crosses cache (or page) boundaries), and the nops are
2949patched back to calls. But this time, they do not call mcount
2950(which is just a function stub). They now call into the ftrace
2951infrastructure.
2952
The new method of modifying the function tracepoints is to place
a breakpoint at the location to be modified, sync all CPUs, and
modify the rest of the instruction not covered by the breakpoint.
The CPUs are synced again, and then the breakpoint is removed,
leaving the finished version of the ftrace call site in place.
2958
2959Some archs do not even need to monkey around with the synchronization,
2960and can just slap the new code on top of the old without any
2961problems with other CPUs executing it at the same time.
2962
One special side-effect of recording the functions being
traced is that we can now selectively choose which functions we
wish to trace and which ones we want the mcount calls to remain
as nops.
2967
2968Two files are used, one for enabling and one for disabling the
2969tracing of specified functions. They are:
2970
2971  set_ftrace_filter
2972
2973and
2974
2975  set_ftrace_notrace
2976
2977A list of available functions that you can add to these files is
2978listed in:
2979
2980   available_filter_functions
2981
2982::
2983
2984  # cat available_filter_functions
2985  put_prev_task_idle
2986  kmem_cache_create
2987  pick_next_task_rt
2988  cpus_read_lock
2989  pick_next_task_fair
2990  mutex_lock
2991  [...]
2992
2993If I am only interested in sys_nanosleep and hrtimer_interrupt::
2994
2995  # echo sys_nanosleep hrtimer_interrupt > set_ftrace_filter
2996  # echo function > current_tracer
2997  # echo 1 > tracing_on
2998  # usleep 1
2999  # echo 0 > tracing_on
3000  # cat trace
3001  # tracer: function
3002  #
3003  # entries-in-buffer/entries-written: 5/5   #P:4
3004  #
3005  #                              _-----=> irqs-off
3006  #                             / _----=> need-resched
3007  #                            | / _---=> hardirq/softirq
3008  #                            || / _--=> preempt-depth
3009  #                            ||| /     delay
3010  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3011  #              | |       |   ||||       |         |
3012            usleep-2665  [001] ....  4186.475355: sys_nanosleep <-system_call_fastpath
3013            <idle>-0     [001] d.h1  4186.475409: hrtimer_interrupt <-smp_apic_timer_interrupt
3014            usleep-2665  [001] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
3015            <idle>-0     [003] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
3016            <idle>-0     [002] d.h1  4186.475427: hrtimer_interrupt <-smp_apic_timer_interrupt
3017
3018To see which functions are being traced, you can cat the file:
3019::
3020
3021  # cat set_ftrace_filter
3022  hrtimer_interrupt
3023  sys_nanosleep
3024
3025
3026Perhaps this is not enough. The filters also allow glob(7) matching.
3027
3028  ``<match>*``
3029	will match functions that begin with <match>
3030  ``*<match>``
3031	will match functions that end with <match>
3032  ``*<match>*``
3033	will match functions that have <match> in it
3034  ``<match1>*<match2>``
3035	will match functions that begin with <match1> and end with <match2>
3036
3037.. note::
3038      It is better to use quotes to enclose the wild cards,
3039      otherwise the shell may expand the parameters into names
3040      of files in the local directory.
3041
3042::
3043
3044  # echo 'hrtimer_*' > set_ftrace_filter
3045
3046Produces::
3047
3048  # tracer: function
3049  #
3050  # entries-in-buffer/entries-written: 897/897   #P:4
3051  #
3052  #                              _-----=> irqs-off
3053  #                             / _----=> need-resched
3054  #                            | / _---=> hardirq/softirq
3055  #                            || / _--=> preempt-depth
3056  #                            ||| /     delay
3057  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3058  #              | |       |   ||||       |         |
3059            <idle>-0     [003] dN.1  4228.547803: hrtimer_cancel <-tick_nohz_idle_exit
3060            <idle>-0     [003] dN.1  4228.547804: hrtimer_try_to_cancel <-hrtimer_cancel
3061            <idle>-0     [003] dN.2  4228.547805: hrtimer_force_reprogram <-__remove_hrtimer
3062            <idle>-0     [003] dN.1  4228.547805: hrtimer_forward <-tick_nohz_idle_exit
3063            <idle>-0     [003] dN.1  4228.547805: hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
3064            <idle>-0     [003] d..1  4228.547858: hrtimer_get_next_event <-get_next_timer_interrupt
3065            <idle>-0     [003] d..1  4228.547859: hrtimer_start <-__tick_nohz_idle_enter
3066            <idle>-0     [003] d..2  4228.547860: hrtimer_force_reprogram <-__rem
3067
3068Notice that we lost the sys_nanosleep.
3069::
3070
3071  # cat set_ftrace_filter
3072  hrtimer_run_queues
3073  hrtimer_run_pending
3074  hrtimer_init
3075  hrtimer_cancel
3076  hrtimer_try_to_cancel
3077  hrtimer_forward
3078  hrtimer_start
3079  hrtimer_reprogram
3080  hrtimer_force_reprogram
3081  hrtimer_get_next_event
3082  hrtimer_interrupt
3083  hrtimer_nanosleep
3084  hrtimer_wakeup
3085  hrtimer_get_remaining
3086  hrtimer_get_res
3087  hrtimer_init_sleeper
3088
3089
This is because the '>' and '>>' act just like they do in bash.
To rewrite the filters, use '>'.
To append to the filters, use '>>'.
3093
3094To clear out a filter so that all functions will be recorded
3095again::
3096
3097 # echo > set_ftrace_filter
3098 # cat set_ftrace_filter
3099 #
3100
Now, let's do it again, but this time append to the filter.
3102
3103::
3104
3105  # echo sys_nanosleep > set_ftrace_filter
3106  # cat set_ftrace_filter
3107  sys_nanosleep
3108  # echo 'hrtimer_*' >> set_ftrace_filter
3109  # cat set_ftrace_filter
3110  hrtimer_run_queues
3111  hrtimer_run_pending
3112  hrtimer_init
3113  hrtimer_cancel
3114  hrtimer_try_to_cancel
3115  hrtimer_forward
3116  hrtimer_start
3117  hrtimer_reprogram
3118  hrtimer_force_reprogram
3119  hrtimer_get_next_event
3120  hrtimer_interrupt
3121  sys_nanosleep
3122  hrtimer_nanosleep
3123  hrtimer_wakeup
3124  hrtimer_get_remaining
3125  hrtimer_get_res
3126  hrtimer_init_sleeper
3127
3128
3129The set_ftrace_notrace prevents those functions from being
3130traced.
3131::
3132
3133  # echo '*preempt*' '*lock*' > set_ftrace_notrace
3134
3135Produces::
3136
3137  # tracer: function
3138  #
3139  # entries-in-buffer/entries-written: 39608/39608   #P:4
3140  #
3141  #                              _-----=> irqs-off
3142  #                             / _----=> need-resched
3143  #                            | / _---=> hardirq/softirq
3144  #                            || / _--=> preempt-depth
3145  #                            ||| /     delay
3146  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3147  #              | |       |   ||||       |         |
3148              bash-1994  [000] ....  4342.324896: file_ra_state_init <-do_dentry_open
3149              bash-1994  [000] ....  4342.324897: open_check_o_direct <-do_last
3150              bash-1994  [000] ....  4342.324897: ima_file_check <-do_last
3151              bash-1994  [000] ....  4342.324898: process_measurement <-ima_file_check
3152              bash-1994  [000] ....  4342.324898: ima_get_action <-process_measurement
3153              bash-1994  [000] ....  4342.324898: ima_match_policy <-ima_get_action
3154              bash-1994  [000] ....  4342.324899: do_truncate <-do_last
3155              bash-1994  [000] ....  4342.324899: setattr_should_drop_suidgid <-do_truncate
3156              bash-1994  [000] ....  4342.324899: notify_change <-do_truncate
3157              bash-1994  [000] ....  4342.324900: current_fs_time <-notify_change
3158              bash-1994  [000] ....  4342.324900: current_kernel_time <-current_fs_time
3159              bash-1994  [000] ....  4342.324900: timespec_trunc <-current_fs_time
3160
3161We can see that there's no more lock or preempt tracing.
3162
3163Selecting function filters via index
3164------------------------------------
3165
3166Because processing of strings is expensive (the address of the function
3167needs to be looked up before comparing to the string being passed in),
3168an index can be used as well to enable functions. This is useful in the
3169case of setting thousands of specific functions at a time. By passing
3170in a list of numbers, no string processing will occur. Instead, the function
3171at the specific location in the internal array (which corresponds to the
3172functions in the "available_filter_functions" file), is selected.
3173
3174::
3175
3176  # echo 1 > set_ftrace_filter
3177
3178Will select the first function listed in "available_filter_functions"
3179
3180::
3181
3182  # head -1 available_filter_functions
3183  trace_initcall_finish_cb
3184
3185  # cat set_ftrace_filter
3186  trace_initcall_finish_cb
3187
3188  # head -50 available_filter_functions | tail -1
3189  x86_pmu_commit_txn
3190
3191  # echo 1 50 > set_ftrace_filter
3192  # cat set_ftrace_filter
3193  trace_initcall_finish_cb
3194  x86_pmu_commit_txn
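
Since the index is simply the position of the function in the
"available_filter_functions" listing, a quick way to find it is with
grep -n (the function name and the resulting line number below are
only an illustration)::

  # grep -n '^schedule$' available_filter_functions
  1243:schedule

  # echo 1243 > set_ftrace_filter
  # cat set_ftrace_filter
  schedule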
3195
3196Dynamic ftrace with the function graph tracer
3197---------------------------------------------
3198
3199Although what has been explained above concerns both the
3200function tracer and the function-graph-tracer, there are some
3201special features only available in the function-graph tracer.
3202
3203If you want to trace only one function and all of its children,
3204you just have to echo its name into set_graph_function::
3205
3206 echo __do_fault > set_graph_function
3207
3208will produce the following "expanded" trace of the __do_fault()
3209function::
3210
3211   0)               |  __do_fault() {
3212   0)               |    filemap_fault() {
3213   0)               |      find_lock_page() {
3214   0)   0.804 us    |        find_get_page();
3215   0)               |        __might_sleep() {
3216   0)   1.329 us    |        }
3217   0)   3.904 us    |      }
3218   0)   4.979 us    |    }
3219   0)   0.653 us    |    _spin_lock();
3220   0)   0.578 us    |    page_add_file_rmap();
3221   0)   0.525 us    |    native_set_pte_at();
3222   0)   0.585 us    |    _spin_unlock();
3223   0)               |    unlock_page() {
3224   0)   0.541 us    |      page_waitqueue();
3225   0)   0.639 us    |      __wake_up_bit();
3226   0)   2.786 us    |    }
3227   0) + 14.237 us   |  }
3228   0)               |  __do_fault() {
3229   0)               |    filemap_fault() {
3230   0)               |      find_lock_page() {
3231   0)   0.698 us    |        find_get_page();
3232   0)               |        __might_sleep() {
3233   0)   1.412 us    |        }
3234   0)   3.950 us    |      }
3235   0)   5.098 us    |    }
3236   0)   0.631 us    |    _spin_lock();
3237   0)   0.571 us    |    page_add_file_rmap();
3238   0)   0.526 us    |    native_set_pte_at();
3239   0)   0.586 us    |    _spin_unlock();
3240   0)               |    unlock_page() {
3241   0)   0.533 us    |      page_waitqueue();
3242   0)   0.638 us    |      __wake_up_bit();
3243   0)   2.793 us    |    }
3244   0) + 14.012 us   |  }
3245
3246You can also expand several functions at once::
3247
3248 echo sys_open > set_graph_function
3249 echo sys_close >> set_graph_function
3250
3251Now if you want to go back to trace all functions you can clear
3252this special filter via::
3253
3254 echo > set_graph_function
3255
3256
3257ftrace_enabled
3258--------------
3259
Note, the proc sysctl ftrace_enabled is a big on/off switch for the
function tracer. By default it is enabled (when function tracing is
enabled in the kernel). If it is disabled, all function tracing is
disabled. This includes not only the function tracers for ftrace, but
also any other uses (perf, kprobes, stack tracing, profiling, etc). It
cannot be disabled if a callback with FTRACE_OPS_FL_PERMANENT set is
registered.
3267
3268Please disable this with care.
3269
This can be disabled (and enabled) with::

  sysctl kernel.ftrace_enabled=0
  sysctl kernel.ftrace_enabled=1

or::

  echo 0 > /proc/sys/kernel/ftrace_enabled
  echo 1 > /proc/sys/kernel/ftrace_enabled
3279
3280
3281Filter commands
3282---------------
3283
3284A few commands are supported by the set_ftrace_filter interface.
3285Trace commands have the following format::
3286
3287  <function>:<command>:<parameter>
3288
3289The following commands are supported:
3290
3291- mod:
3292  This command enables function filtering per module. The
3293  parameter defines the module. For example, if only the write*
  functions in the ext3 module are desired, run::
3295
3296   echo 'write*:mod:ext3' > set_ftrace_filter
3297
3298  This command interacts with the filter in the same way as
3299  filtering based on function names. Thus, adding more functions
3300  in a different module is accomplished by appending (>>) to the
3301  filter file. Remove specific module functions by prepending
3302  '!'::
3303
3304   echo '!writeback*:mod:ext3' >> set_ftrace_filter
3305
  The mod command supports module globbing. Disable tracing for all
  functions except those in a specific module::
3308
3309   echo '!*:mod:!ext3' >> set_ftrace_filter
3310
3311  Disable tracing for all modules, but still trace kernel::
3312
3313   echo '!*:mod:*' >> set_ftrace_filter
3314
3315  Enable filter only for kernel::
3316
3317   echo '*write*:mod:!*' >> set_ftrace_filter
3318
3319  Enable filter for module globbing::
3320
3321   echo '*write*:mod:*snd*' >> set_ftrace_filter
3322
3323- traceon/traceoff:
3324  These commands turn tracing on and off when the specified
3325  functions are hit. The parameter determines how many times the
3326  tracing system is turned on and off. If unspecified, there is
3327  no limit. For example, to disable tracing when a schedule bug
3328  is hit the first 5 times, run::
3329
3330   echo '__schedule_bug:traceoff:5' > set_ftrace_filter
3331
3332  To always disable tracing when __schedule_bug is hit::
3333
3334   echo '__schedule_bug:traceoff' > set_ftrace_filter
3335
  These commands are cumulative whether or not they are appended
  to set_ftrace_filter. To remove a command, prepend it with '!'
  and drop the parameter::
3339
3340   echo '!__schedule_bug:traceoff:0' > set_ftrace_filter
3341
  The above removes the traceoff command for __schedule_bug
  that has a counter. To remove commands without counters::
3344
3345   echo '!__schedule_bug:traceoff' > set_ftrace_filter
3346
3347- snapshot:
3348  Will cause a snapshot to be triggered when the function is hit.
3349  ::
3350
3351   echo 'native_flush_tlb_others:snapshot' > set_ftrace_filter
3352
3353  To only snapshot once:
3354  ::
3355
3356   echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter
3357
3358  To remove the above commands::
3359
3360   echo '!native_flush_tlb_others:snapshot' > set_ftrace_filter
3361   echo '!native_flush_tlb_others:snapshot:0' > set_ftrace_filter
3362
3363- enable_event/disable_event:
3364  These commands can enable or disable a trace event. Note, because
3365  function tracing callbacks are very sensitive, when these commands
3366  are registered, the trace point is activated, but disabled in
3367  a "soft" mode. That is, the tracepoint will be called, but
3368  just will not be traced. The event tracepoint stays in this mode
3369  as long as there's a command that triggers it.
3370  ::
3371
3372   echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > \
3373   	 set_ftrace_filter
3374
3375  The format is::
3376
3377    <function>:enable_event:<system>:<event>[:count]
3378    <function>:disable_event:<system>:<event>[:count]
3379
3380  To remove the events commands::
3381
3382   echo '!try_to_wake_up:enable_event:sched:sched_switch:0' > \
3383   	 set_ftrace_filter
3384   echo '!schedule:disable_event:sched:sched_switch' > \
3385   	 set_ftrace_filter
3386
3387- dump:
3388  When the function is hit, it will dump the contents of the ftrace
3389  ring buffer to the console. This is useful if you need to debug
3390  something, and want to dump the trace when a certain function
3391  is hit. Perhaps it's a function that is called before a triple
3392  fault happens and does not allow you to get a regular dump.
3393
3394- cpudump:
3395  When the function is hit, it will dump the contents of the ftrace
3396  ring buffer for the current CPU to the console. Unlike the "dump"
3397  command, it only prints out the contents of the ring buffer for the
3398  CPU that executed the function that triggered the dump.
3399
3400- stacktrace:
3401  When the function is hit, a stack trace is recorded.
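
  For example, to record a stack trace every time a particular
  function is hit (reusing __schedule_bug from above purely as an
  illustration), and later to remove that trigger again, the same
  syntax as for the other commands applies::

   echo '__schedule_bug:stacktrace' > set_ftrace_filter
   echo '!__schedule_bug:stacktrace' > set_ftrace_filter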
3402
3403trace_pipe
3404----------
3405
3406The trace_pipe outputs the same content as the trace file, but
3407the effect on the tracing is different. Every read from
3408trace_pipe is consumed. This means that subsequent reads will be
3409different. The trace is live.
3410::
3411
3412  # echo function > current_tracer
3413  # cat trace_pipe > /tmp/trace.out &
3414  [1] 4153
3415  # echo 1 > tracing_on
3416  # usleep 1
3417  # echo 0 > tracing_on
3418  # cat trace
3419  # tracer: function
3420  #
3421  # entries-in-buffer/entries-written: 0/0   #P:4
3422  #
3423  #                              _-----=> irqs-off
3424  #                             / _----=> need-resched
3425  #                            | / _---=> hardirq/softirq
3426  #                            || / _--=> preempt-depth
3427  #                            ||| /     delay
3428  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3429  #              | |       |   ||||       |         |
3430
3431  #
3432  # cat /tmp/trace.out
3433             bash-1994  [000] ....  5281.568961: mutex_unlock <-rb_simple_write
3434             bash-1994  [000] ....  5281.568963: __mutex_unlock_slowpath <-mutex_unlock
3435             bash-1994  [000] ....  5281.568963: __fsnotify_parent <-fsnotify_modify
3436             bash-1994  [000] ....  5281.568964: fsnotify <-fsnotify_modify
3437             bash-1994  [000] ....  5281.568964: __srcu_read_lock <-fsnotify
3438             bash-1994  [000] ....  5281.568964: add_preempt_count <-__srcu_read_lock
3439             bash-1994  [000] ...1  5281.568965: sub_preempt_count <-__srcu_read_lock
3440             bash-1994  [000] ....  5281.568965: __srcu_read_unlock <-fsnotify
3441             bash-1994  [000] ....  5281.568967: sys_dup2 <-system_call_fastpath
3442
3443
Note, reading the trace_pipe file will block until more input is
added. This is different from the trace file. If any process opened
the trace file for reading, it will actually disable tracing and
prevent new entries from being added. The trace_pipe file does
not have this limitation.
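
Since trace_pipe blocks and its contents are consumed as they are
read, it is often convenient to read only a bounded amount when
looking at it interactively, for example::

  # head -20 trace_pipe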
3449
3450trace entries
3451-------------
3452
Having too much or not enough data can be troublesome when
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the amount of buffer space (in kilobytes) that
can be used per CPU. To know the full size, multiply the number
of possible CPUs by that number. For example, with 4 CPUs and
1408 kilobytes per CPU, the total is 4 * 1408 = 5632 kilobytes,
which matches the buffer_total_size_kb output below.
3459::
3460
3461  # cat buffer_size_kb
3462  1408 (units kilobytes)
3463
3464Or simply read buffer_total_size_kb
3465::
3466
3467  # cat buffer_total_size_kb
3468  5632
3469
To modify the buffer, simply echo in a number (in 1024 byte segments).
3471::
3472
3473  # echo 10000 > buffer_size_kb
3474  # cat buffer_size_kb
3475  10000 (units kilobytes)
3476
It will try to allocate as much as possible. If you allocate too
much, it can trigger the Out-Of-Memory killer.
3479::
3480
3481  # echo 1000000000000 > buffer_size_kb
3482  -bash: echo: write error: Cannot allocate memory
3483  # cat buffer_size_kb
3484  85
3485
3486The per_cpu buffers can be changed individually as well:
3487::
3488
3489  # echo 10000 > per_cpu/cpu0/buffer_size_kb
3490  # echo 100 > per_cpu/cpu1/buffer_size_kb
3491
When the per_cpu buffers are not the same, the buffer_size_kb
at the top level will just show an X.
3494::
3495
3496  # cat buffer_size_kb
3497  X
3498
3499This is where the buffer_total_size_kb is useful:
3500::
3501
3502  # cat buffer_total_size_kb
3503  12916
3504
3505Writing to the top level buffer_size_kb will reset all the buffers
3506to be the same again.
3507
3508Snapshot
3509--------
CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
available to all non-latency tracers. (Latency tracers which
3512record max latency, such as "irqsoff" or "wakeup", can't use
3513this feature, since those are already using the snapshot
3514mechanism internally.)
3515
3516Snapshot preserves a current trace buffer at a particular point
3517in time without stopping tracing. Ftrace swaps the current
3518buffer with a spare buffer, and tracing continues in the new
3519current (=previous spare) buffer.
3520
3521The following tracefs files in "tracing" are related to this
3522feature:
3523
3524  snapshot:
3525
3526	This is used to take a snapshot and to read the output
3527	of the snapshot. Echo 1 into this file to allocate a
3528	spare buffer and to take a snapshot (swap), then read
3529	the snapshot from this file in the same format as
3530	"trace" (described above in the section "The File
	System"). Reading the snapshot and tracing can be done
	in parallel. When the spare buffer is allocated, echoing
	0 frees it, and echoing other (positive) values clears the
	snapshot contents.
3535	More details are shown in the table below.
3536
3537	+--------------+------------+------------+------------+
3538	|status\\input |     0      |     1      |    else    |
3539	+==============+============+============+============+
3540	|not allocated |(do nothing)| alloc+swap |(do nothing)|
3541	+--------------+------------+------------+------------+
3542	|allocated     |    free    |    swap    |   clear    |
3543	+--------------+------------+------------+------------+
3544
3545Here is an example of using the snapshot feature.
3546::
3547
3548  # echo 1 > events/sched/enable
3549  # echo 1 > snapshot
3550  # cat snapshot
3551  # tracer: nop
3552  #
3553  # entries-in-buffer/entries-written: 71/71   #P:8
3554  #
3555  #                              _-----=> irqs-off
3556  #                             / _----=> need-resched
3557  #                            | / _---=> hardirq/softirq
3558  #                            || / _--=> preempt-depth
3559  #                            ||| /     delay
3560  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3561  #              | |       |   ||||       |         |
3562            <idle>-0     [005] d...  2440.603828: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120   prev_state=R ==> next_comm=snapshot-test-2 next_pid=2242 next_prio=120
3563             sleep-2242  [005] d...  2440.603846: sched_switch: prev_comm=snapshot-test-2 prev_pid=2242 prev_prio=120   prev_state=R ==> next_comm=kworker/5:1 next_pid=60 next_prio=120
3564  [...]
3565          <idle>-0     [002] d...  2440.707230: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2229 next_prio=120
3566
3567  # cat trace
3568  # tracer: nop
3569  #
3570  # entries-in-buffer/entries-written: 77/77   #P:8
3571  #
3572  #                              _-----=> irqs-off
3573  #                             / _----=> need-resched
3574  #                            | / _---=> hardirq/softirq
3575  #                            || / _--=> preempt-depth
3576  #                            ||| /     delay
3577  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3578  #              | |       |   ||||       |         |
3579            <idle>-0     [007] d...  2440.707395: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2243 next_prio=120
3580   snapshot-test-2-2229  [002] d...  2440.707438: sched_switch: prev_comm=snapshot-test-2 prev_pid=2229 prev_prio=120 prev_state=S ==> next_comm=swapper/2 next_pid=0 next_prio=120
3581  [...]
3582
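Once you are done with the snapshot, echoing any value other than 0
or 1 clears the snapshot contents, and echoing 0 frees the spare
buffer again, as described in the table above::

  # echo 2 > snapshot
  # echo 0 > snapshot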
3583
3584If you try to use this snapshot feature when current tracer is
3585one of the latency tracers, you will get the following results.
3586::
3587
3588  # echo wakeup > current_tracer
3589  # echo 1 > snapshot
3590  bash: echo: write error: Device or resource busy
3591  # cat snapshot
3592  cat: snapshot: Device or resource busy
3593
3594
3595Instances
3596---------
In the tracefs tracing directory, there is a directory called "instances".
New directories can be created inside of it using mkdir, and
removed with rmdir. A directory created here will already contain
files and other directories after it is created.
3602::
3603
3604  # mkdir instances/foo
3605  # ls instances/foo
3606  buffer_size_kb  buffer_total_size_kb  events  free_buffer  per_cpu
3607  set_event  snapshot  trace  trace_clock  trace_marker  trace_options
3608  trace_pipe  tracing_on
3609
As you can see, the new directory looks similar to the tracing directory
itself. In fact, it is very similar, except that the buffer and
events are independent of the main directory and of any other
instances that are created.
3614
3615The files in the new directory work just like the files with the
3616same name in the tracing directory except the buffer that is used
3617is a separate and new buffer. The files affect that buffer but do not
3618affect the main buffer with the exception of trace_options. Currently,
3619the trace_options affect all instances and the top level buffer
3620the same, but this may change in future releases. That is, options
3621may become specific to the instance they reside in.
3622
Notice that none of the function tracer files are there, nor are
current_tracer and available_tracers. This is because the buffers
can currently only have events enabled for them.
3626::
3627
3628  # mkdir instances/foo
3629  # mkdir instances/bar
3630  # mkdir instances/zoot
3631  # echo 100000 > buffer_size_kb
3632  # echo 1000 > instances/foo/buffer_size_kb
3633  # echo 5000 > instances/bar/per_cpu/cpu1/buffer_size_kb
  # echo function > current_tracer
3635  # echo 1 > instances/foo/events/sched/sched_wakeup/enable
3636  # echo 1 > instances/foo/events/sched/sched_wakeup_new/enable
3637  # echo 1 > instances/foo/events/sched/sched_switch/enable
3638  # echo 1 > instances/bar/events/irq/enable
3639  # echo 1 > instances/zoot/events/syscalls/enable
3640  # cat trace_pipe
3641  CPU:2 [LOST 11745 EVENTS]
3642              bash-2044  [002] .... 10594.481032: _raw_spin_lock_irqsave <-get_page_from_freelist
3643              bash-2044  [002] d... 10594.481032: add_preempt_count <-_raw_spin_lock_irqsave
3644              bash-2044  [002] d..1 10594.481032: __rmqueue <-get_page_from_freelist
3645              bash-2044  [002] d..1 10594.481033: _raw_spin_unlock <-get_page_from_freelist
3646              bash-2044  [002] d..1 10594.481033: sub_preempt_count <-_raw_spin_unlock
3647              bash-2044  [002] d... 10594.481033: get_pageblock_flags_group <-get_pageblock_migratetype
3648              bash-2044  [002] d... 10594.481034: __mod_zone_page_state <-get_page_from_freelist
3649              bash-2044  [002] d... 10594.481034: zone_statistics <-get_page_from_freelist
3650              bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
3651              bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
3652              bash-2044  [002] .... 10594.481035: arch_dup_task_struct <-copy_process
3653  [...]
3654
3655  # cat instances/foo/trace_pipe
3656              bash-1998  [000] d..4   136.676759: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
3657              bash-1998  [000] dN.4   136.676760: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
3658            <idle>-0     [003] d.h3   136.676906: sched_wakeup: comm=rcu_preempt pid=9 prio=120 success=1 target_cpu=003
3659            <idle>-0     [003] d..3   136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_preempt next_pid=9 next_prio=120
3660       rcu_preempt-9     [003] d..3   136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120
3661              bash-1998  [000] d..4   136.677014: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
3662              bash-1998  [000] dN.4   136.677016: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
3663              bash-1998  [000] d..3   136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_state=R+ ==> next_comm=kworker/0:1 next_pid=59 next_prio=120
3664       kworker/0:1-59    [000] d..4   136.677022: sched_wakeup: comm=sshd pid=1995 prio=120 success=1 target_cpu=001
3665       kworker/0:1-59    [000] d..3   136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_prio=120 prev_state=S ==> next_comm=bash next_pid=1998 next_prio=120
3666  [...]
3667
3668  # cat instances/bar/trace_pipe
3669       migration/1-14    [001] d.h3   138.732674: softirq_raise: vec=3 [action=NET_RX]
3670            <idle>-0     [001] dNh3   138.732725: softirq_raise: vec=3 [action=NET_RX]
3671              bash-1998  [000] d.h1   138.733101: softirq_raise: vec=1 [action=TIMER]
3672              bash-1998  [000] d.h1   138.733102: softirq_raise: vec=9 [action=RCU]
3673              bash-1998  [000] ..s2   138.733105: softirq_entry: vec=1 [action=TIMER]
3674              bash-1998  [000] ..s2   138.733106: softirq_exit: vec=1 [action=TIMER]
3675              bash-1998  [000] ..s2   138.733106: softirq_entry: vec=9 [action=RCU]
3676              bash-1998  [000] ..s2   138.733109: softirq_exit: vec=9 [action=RCU]
3677              sshd-1995  [001] d.h1   138.733278: irq_handler_entry: irq=21 name=uhci_hcd:usb4
3678              sshd-1995  [001] d.h1   138.733280: irq_handler_exit: irq=21 ret=unhandled
3679              sshd-1995  [001] d.h1   138.733281: irq_handler_entry: irq=21 name=eth0
3680              sshd-1995  [001] d.h1   138.733283: irq_handler_exit: irq=21 ret=handled
3681  [...]
3682
3683  # cat instances/zoot/trace
3684  # tracer: nop
3685  #
3686  # entries-in-buffer/entries-written: 18996/18996   #P:4
3687  #
3688  #                              _-----=> irqs-off
3689  #                             / _----=> need-resched
3690  #                            | / _---=> hardirq/softirq
3691  #                            || / _--=> preempt-depth
3692  #                            ||| /     delay
3693  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3694  #              | |       |   ||||       |         |
3695              bash-1998  [000] d...   140.733501: sys_write -> 0x2
3696              bash-1998  [000] d...   140.733504: sys_dup2(oldfd: a, newfd: 1)
3697              bash-1998  [000] d...   140.733506: sys_dup2 -> 0x1
3698              bash-1998  [000] d...   140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
3699              bash-1998  [000] d...   140.733509: sys_fcntl -> 0x1
3700              bash-1998  [000] d...   140.733510: sys_close(fd: a)
3701              bash-1998  [000] d...   140.733510: sys_close -> 0x0
3702              bash-1998  [000] d...   140.733514: sys_rt_sigprocmask(how: 0, nset: 0, oset: 6e2768, sigsetsize: 8)
3703              bash-1998  [000] d...   140.733515: sys_rt_sigprocmask -> 0x0
3704              bash-1998  [000] d...   140.733516: sys_rt_sigaction(sig: 2, act: 7fff718846f0, oact: 7fff71884650, sigsetsize: 8)
3705              bash-1998  [000] d...   140.733516: sys_rt_sigaction -> 0x0
3706
You can see that the trace of the topmost trace buffer shows only
the function tracing. The foo instance displays wakeups and task
switches.
3710
3711To remove the instances, simply delete their directories:
3712::
3713
3714  # rmdir instances/foo
3715  # rmdir instances/bar
3716  # rmdir instances/zoot
3717
3718Note, if a process has a trace file open in one of the instance
3719directories, the rmdir will fail with EBUSY.
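
For example (the job number and the exact error message below are
just an illustration of what this looks like)::

  # cat instances/foo/trace_pipe &
  [1] 2725
  # rmdir instances/foo
  rmdir: failed to remove 'instances/foo': Device or resource busy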
3720
3721
3722Stack trace
3723-----------
Since the kernel has a fixed-size stack, it is important not to
3725waste it in functions. A kernel developer must be conscious of
3726what they allocate on the stack. If they add too much, the system
3727can be in danger of a stack overflow, and corruption will occur,
3728usually leading to a system panic.
3729
There are some tools that check this, usually with interrupts
periodically checking usage. But if you can perform a check
at every function call, that becomes very useful. As ftrace provides
a function tracer, it is convenient to use it to check the stack size
at every function call. This is enabled via the stack tracer.
3735
3736CONFIG_STACK_TRACER enables the ftrace stack tracing functionality.
3737To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
3738::
3739
3740 # echo 1 > /proc/sys/kernel/stack_tracer_enabled
3741
You can also enable it from the kernel command line to trace
the stack size of the kernel during boot up, by adding "stacktrace"
to the kernel command line.
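
For example, a boot loader entry might pass something like the
following on the kernel command line (the kernel image and root
device below are placeholders for whatever your system uses)::

  linux /boot/vmlinuz root=/dev/sda1 ro stacktrace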
3745
3746After running it for a few minutes, the output looks like:
3747::
3748
3749  # cat stack_max_size
3750  2928
3751
3752  # cat stack_trace
3753          Depth    Size   Location    (18 entries)
3754          -----    ----   --------
3755    0)     2928     224   update_sd_lb_stats+0xbc/0x4ac
3756    1)     2704     160   find_busiest_group+0x31/0x1f1
3757    2)     2544     256   load_balance+0xd9/0x662
3758    3)     2288      80   idle_balance+0xbb/0x130
3759    4)     2208     128   __schedule+0x26e/0x5b9
3760    5)     2080      16   schedule+0x64/0x66
3761    6)     2064     128   schedule_timeout+0x34/0xe0
3762    7)     1936     112   wait_for_common+0x97/0xf1
3763    8)     1824      16   wait_for_completion+0x1d/0x1f
3764    9)     1808     128   flush_work+0xfe/0x119
3765   10)     1680      16   tty_flush_to_ldisc+0x1e/0x20
3766   11)     1664      48   input_available_p+0x1d/0x5c
3767   12)     1616      48   n_tty_poll+0x6d/0x134
3768   13)     1568      64   tty_poll+0x64/0x7f
3769   14)     1504     880   do_select+0x31e/0x511
3770   15)      624     400   core_sys_select+0x177/0x216
3771   16)      224      96   sys_select+0x91/0xb9
3772   17)      128     128   system_call_fastpath+0x16/0x1b
3773
3774Note, if -mfentry is being used by gcc, functions get traced before
3775they set up the stack frame. This means that leaf level functions
3776are not tested by the stack tracer when -mfentry is used.
3777
3778Currently, -mfentry is used by gcc 4.6.0 and above on x86 only.
3779
3780More
3781----
3782More details can be found in the source code, in the `kernel/trace/*.c` files.
3783