1========================
2ftrace - Function Tracer
3========================
4
5Copyright 2008 Red Hat Inc.
6
7:Author:   Steven Rostedt <srostedt@redhat.com>
8:License:  The GNU Free Documentation License, Version 1.2
9          (dual licensed under the GPL v2)
10:Original Reviewers:  Elias Oltmanns, Randy Dunlap, Andrew Morton,
11		      John Kacur, and David Teigland.
12
13- Written for: 2.6.28-rc2
14- Updated for: 3.10
15- Updated for: 4.13 - Copyright 2017 VMware Inc. Steven Rostedt
16- Converted to rst format - Changbin Du <changbin.du@intel.com>
17
18Introduction
19------------
20
21Ftrace is an internal tracer designed to help out developers and
22designers of systems to find what is going on inside the kernel.
23It can be used for debugging or analyzing latencies and
24performance issues that take place outside of user-space.
25
26Although ftrace is typically considered the function tracer, it
27is really a framework of several assorted tracing utilities.
There's latency tracing to examine what occurs between interrupts
being disabled and enabled, as well as for preemption, and from the
time a task is woken to the time it is actually scheduled in.
31
One of the most common uses of ftrace is event tracing.
33Throughout the kernel are hundreds of static event points that
34can be enabled via the tracefs file system to see what is
35going on in certain parts of the kernel.
36
37See events.rst for more information.
38
39
40Implementation Details
41----------------------
42
43See Documentation/trace/ftrace-design.rst for details for arch porters and such.
44
45
46The File System
47---------------
48
49Ftrace uses the tracefs file system to hold the control files as
50well as the files to display output.
51
When tracefs is configured into the kernel (which selecting any ftrace
option will do), the directory /sys/kernel/tracing will be created. To mount
this directory, you can add the following to your /etc/fstab file::
55
56 tracefs       /sys/kernel/tracing       tracefs defaults        0       0
57
58Or you can mount it at run time with::
59
60 mount -t tracefs nodev /sys/kernel/tracing
61
62For quicker access to that directory you may want to make a soft link to
63it::
64
65 ln -s /sys/kernel/tracing /tracing
66
67.. attention::
68
69  Before 4.1, all ftrace tracing control files were within the debugfs
70  file system, which is typically located at /sys/kernel/debug/tracing.
71  For backward compatibility, when mounting the debugfs file system,
72  the tracefs file system will be automatically mounted at:
73
74  /sys/kernel/debug/tracing
75
76  All files located in the tracefs file system will be located in that
77  debugfs file system directory as well.
78
79.. attention::
80
81  Any selected ftrace option will also create the tracefs file system.
82  The rest of the document will assume that you are in the ftrace directory
83  (cd /sys/kernel/tracing) and will only concentrate on the files within that
84  directory and not distract from the content with the extended
85  "/sys/kernel/tracing" path name.
86
87That's it! (assuming that you have ftrace configured into your kernel)
88
89After mounting tracefs you will have access to the control and output files
90of ftrace. Here is a list of some of the key files:
91
92
93 Note: all time values are in microseconds.
94
95  current_tracer:
96
97	This is used to set or display the current tracer
98	that is configured. Changing the current tracer clears
99	the ring buffer content as well as the "snapshot" buffer.
100
101  available_tracers:
102
103	This holds the different types of tracers that
104	have been compiled into the kernel. The
105	tracers listed here can be configured by
106	echoing their name into current_tracer.
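
	For example, to see which tracers are available and select the
	function tracer (a minimal sketch; the tracers actually listed
	depend on the kernel configuration)::

	  # cat available_tracers
	  blk function_graph wakeup irqsoff function nop
	  # echo function > current_tracer
	  # cat current_tracer
	  function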
107
108  tracing_on:
109
110	This sets or displays whether writing to the trace
111	ring buffer is enabled. Echo 0 into this file to disable
112	the tracer or 1 to enable it. Note, this only disables
113	writing to the ring buffer, the tracing overhead may
114	still be occurring.
115
116	The kernel function tracing_off() can be used within the
117	kernel to disable writing to the ring buffer, which will
118	set this file to "0". User space can re-enable tracing by
119	echoing "1" into the file.
120
	Note, the function and event trigger "traceoff" will also
	set this file to zero and stop tracing, which can then
	be re-enabled by user space using this file.
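
	For example, to stop recording around a region of interest and
	then resume (a minimal sketch; this does not change the current
	tracer)::

	  # echo 0 > tracing_on
	  # cat trace > /tmp/trace.txt
	  # echo 1 > tracing_on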
124
125  trace:
126
127	This file holds the output of the trace in a human
128	readable format (described below). Opening this file for
129	writing with the O_TRUNC flag clears the ring buffer content.
130        Note, this file is not a consumer. If tracing is off
131        (no tracer running, or tracing_on is zero), it will produce
132        the same output each time it is read. When tracing is on,
133        it may produce inconsistent results as it tries to read
134        the entire buffer without consuming it.
135
136  trace_pipe:
137
138	The output is the same as the "trace" file but this
139	file is meant to be streamed with live tracing.
140	Reads from this file will block until new data is
141	retrieved.  Unlike the "trace" file, this file is a
142	consumer. This means reading from this file causes
143	sequential reads to display more current data. Once
144	data is read from this file, it is consumed, and
145	will not be read again with a sequential read. The
146	"trace" file is static, and if the tracer is not
147	adding more data, it will display the same
148	information every time it is read.
149
150  trace_options:
151
152	This file lets the user control the amount of data
153	that is displayed in one of the above output
154	files. Options also exist to modify how a tracer
155	or events work (stack traces, timestamps, etc).
156
157  options:
158
159	This is a directory that has a file for every available
160	trace option (also in trace_options). Options may also be set
161	or cleared by writing a "1" or "0" respectively into the
162	corresponding file with the option name.
163
164  tracing_max_latency:
165
166	Some of the tracers record the max latency.
167	For example, the maximum time that interrupts are disabled.
168	The maximum time is saved in this file. The max trace will also be
	stored, and displayed by "trace". A new max trace will only be
170	recorded if the latency is greater than the value in this file
171	(in microseconds).
172
	By echoing a time into this file, no latency will be recorded
174	unless it is greater than the time in this file.
175
176  tracing_thresh:
177
178	Some latency tracers will record a trace whenever the
179	latency is greater than the number in this file.
180	Only active when the file contains a number greater than 0.
181	(in microseconds)
182
183  buffer_percent:
184
185	This is the watermark for how much the ring buffer needs to be filled
186	before a waiter is woken up. That is, if an application calls a
187	blocking read syscall on one of the per_cpu trace_pipe_raw files, it
188	will block until the given amount of data specified by buffer_percent
189	is in the ring buffer before it wakes the reader up. This also
190	controls how the splice system calls are blocked on this file::
191
192	  0   - means to wake up as soon as there is any data in the ring buffer.
193	  50  - means to wake up when roughly half of the ring buffer sub-buffers
194	        are full.
195	  100 - means to block until the ring buffer is totally full and is
196	        about to start overwriting the older data.
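
	For example, to wake up readers of the per_cpu trace_pipe_raw
	files only when the ring buffer is roughly half full (a minimal
	sketch)::

	  # echo 50 > buffer_percent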
197
198  buffer_size_kb:
199
200	This sets or displays the number of kilobytes each CPU
201	buffer holds. By default, the trace buffers are the same size
202	for each CPU. The displayed number is the size of the
203	CPU buffer and not total size of all buffers. The
204	trace buffers are allocated in pages (blocks of memory
205	that the kernel uses for allocation, usually 4 KB in size).
206	A few extra pages may be allocated to accommodate buffer management
207	meta-data. If the last page allocated has room for more bytes
208	than requested, the rest of the page will be used,
209	making the actual allocation bigger than requested or shown.
210	( Note, the size may not be a multiple of the page size
211	due to buffer management meta-data. )
212
213	Buffer sizes for individual CPUs may vary
214	(see "per_cpu/cpu0/buffer_size_kb" below), and if they do
215	this file will show "X".
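
	For example, to resize each per-CPU buffer to roughly 4 MB (a
	sketch; the kernel may round the value up as described above)::

	  # echo 4096 > buffer_size_kb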
216
217  buffer_total_size_kb:
218
219	This displays the total combined size of all the trace buffers.
220
221  buffer_subbuf_size_kb:
222
223	This sets or displays the sub buffer size. The ring buffer is broken up
224	into several same size "sub buffers". An event can not be bigger than
225	the size of the sub buffer. Normally, the sub buffer is the size of the
226	architecture's page (4K on x86). The sub buffer also contains meta data
227	at the start which also limits the size of an event.  That means when
228	the sub buffer is a page size, no event can be larger than the page
229	size minus the sub buffer meta data.
230
231	Note, the buffer_subbuf_size_kb is a way for the user to specify the
232	minimum size of the subbuffer. The kernel may make it bigger due to the
233	implementation details, or simply fail the operation if the kernel can
234	not handle the request.
235
236	Changing the sub buffer size allows for events to be larger than the
237	page size.
238
239	Note: When changing the sub-buffer size, tracing is stopped and any
240	data in the ring buffer and the snapshot buffer will be discarded.
241
242  free_buffer:
243
	If a process is performing tracing, and the ring buffer should be
	shrunk "freed" when the process is finished, even if it were to be
	killed by a signal, this file can be used for that purpose. On close
	of this file, the ring buffer will be resized to its minimum size.
	If a process that is tracing also has this file open, then when the
	process exits, its file descriptor for this file will be closed,
	and in doing so, the ring buffer will be "freed".
251
	It may also stop tracing if the disable_on_free option is set.
253
254  tracing_cpumask:
255
256	This is a mask that lets the user only trace on specified CPUs.
257	The format is a hex string representing the CPUs.
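
	For example, to limit tracing to CPUs 0 and 1 (a minimal sketch;
	the mask is in hex, so 3 is binary 11)::

	  # echo 3 > tracing_cpumask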
258
259  set_ftrace_filter:
260
261	When dynamic ftrace is configured in (see the
262	section below "dynamic ftrace"), the code is dynamically
263	modified (code text rewrite) to disable calling of the
264	function profiler (mcount). This lets tracing be configured
265	in with practically no overhead in performance.  This also
266	has a side effect of enabling or disabling specific functions
267	to be traced. Echoing names of functions into this file
268	will limit the trace to only those functions.
269	This influences the tracers "function" and "function_graph"
270	and thus also function profiling (see "function_profile_enabled").
271
272	The functions listed in "available_filter_functions" are what
273	can be written into this file.
274
275	This interface also allows for commands to be used. See the
276	"Filter commands" section for more details.
277
	As a speed up, since processing strings can be quite expensive
	and requires a check of all functions registered to tracing,
	an index can be written into this file instead. A number (starting
	with "1") will select the function at the corresponding line
	position of the "available_filter_functions" file.
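
	For example (a sketch; "schedule" is only an illustration, any
	name listed in "available_filter_functions" may be used)::

	  # echo schedule > set_ftrace_filter

	or, using an index instead of a name::

	  # echo 1 > set_ftrace_filter

	which selects whatever function appears on line 1 of
	"available_filter_functions".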
283
284  set_ftrace_notrace:
285
286	This has an effect opposite to that of
287	set_ftrace_filter. Any function that is added here will not
288	be traced. If a function exists in both set_ftrace_filter
289	and set_ftrace_notrace,	the function will _not_ be traced.
290
291  set_ftrace_pid:
292
	Have the function tracer only trace the threads whose PIDs are
294	listed in this file.
295
296	If the "function-fork" option is set, then when a task whose
297	PID is listed in this file forks, the child's PID will
298	automatically be added to this file, and the child will be
299	traced by the function tracer as well. This option will also
300	cause PIDs of tasks that exit to be removed from the file.
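
	For example, to trace only the current shell and, with the
	"function-fork" option, any tasks it spawns (a minimal sketch)::

	  # echo 1 > options/function-fork
	  # echo $$ > set_ftrace_pid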
301
302  set_ftrace_notrace_pid:
303
	Have the function tracer ignore threads whose PIDs are listed in
	this file.

	If the "function-fork" option is set, then when a task whose
	PID is listed in this file forks, the child's PID will
	automatically be added to this file, and the child will not be
	traced by the function tracer either. This option will also
	cause PIDs of tasks that exit to be removed from the file.

	If a PID is in both this file and "set_ftrace_pid", then this
	file takes precedence, and the thread will not be traced.
315
316  set_event_pid:
317
318	Have the events only trace a task with a PID listed in this file.
	Note, sched_switch and sched_wakeup will also trace events
	listed in this file.
321
322	To have the PIDs of children of tasks with their PID in this file
323	added on fork, enable the "event-fork" option. That option will also
324	cause the PIDs of tasks to be removed from this file when the task
325	exits.
326
327  set_event_notrace_pid:
328
	Have the events not trace a task with a PID listed in this file.
	Note, sched_switch and sched_wakeup will still be traced even if
	a thread's PID is in this file, if the sched_switch or sched_wakeup
	events also involve a thread that should be traced.
334
335	To have the PIDs of children of tasks with their PID in this file
336	added on fork, enable the "event-fork" option. That option will also
337	cause the PIDs of tasks to be removed from this file when the task
338	exits.
339
340  set_graph_function:
341
342	Functions listed in this file will cause the function graph
343	tracer to only trace these functions and the functions that
344	they call. (See the section "dynamic ftrace" for more details).
345	Note, set_ftrace_filter and set_ftrace_notrace still affects
346	what functions are being traced.
347
348  set_graph_notrace:
349
350	Similar to set_graph_function, but will disable function graph
351	tracing when the function is hit until it exits the function.
352	This makes it possible to ignore tracing functions that are called
353	by a specific function.
354
355  available_filter_functions:
356
357	This lists the functions that ftrace has processed and can trace.
358	These are the function names that you can pass to
359	"set_ftrace_filter", "set_ftrace_notrace",
360	"set_graph_function", or "set_graph_notrace".
361	(See the section "dynamic ftrace" below for more details.)
362
363  available_filter_functions_addrs:
364
365	Similar to available_filter_functions, but with address displayed
366	for each function. The displayed address is the patch-site address
367	and can differ from /proc/kallsyms address.
368
369  syscall_user_buf_size:
370
371	Some system call trace events will record the data from a user
372	space address that one of the parameters point to. The amount of
373	data per event is limited. This file holds the max number of bytes
374	that will be recorded into the ring buffer to hold this data.
375	The max value is currently 165.
376
377  dyn_ftrace_total_info:
378
	This file is for debugging purposes. It shows the number of
	functions that have been converted to nops and are available
	to be traced.
381
382  enabled_functions:
383
384	This file is more for debugging ftrace, but can also be useful
385	in seeing if any function has a callback attached to it.
	Not only does the trace infrastructure use the ftrace function
	tracing utility, but other subsystems might too. This file
388	displays all functions that have a callback attached to them
389	as well as the number of callbacks that have been attached.
390	Note, a callback may also call multiple functions which will
391	not be listed in this count.
392
	If the callback registered to the function requested the
	"save regs" attribute (thus even more overhead), an 'R'
395	will be displayed on the same line as the function that
396	is returning registers.
397
	If the callback registered to the function requested the
	"ip modify" attribute (thus the regs->ip can be changed),
400	an 'I' will be displayed on the same line as the function that
401	can be overridden.
402
403	If a non-ftrace trampoline is attached (BPF) a 'D' will be displayed.
404	Note, normal ftrace trampolines can also be attached, but only one
405	"direct" trampoline can be attached to a given function at a time.
406
407	Some architectures can not call direct trampolines, but instead have
408	the ftrace ops function located above the function entry point. In
409	such cases an 'O' will be displayed.
410
411	If a function had either the "ip modify" or a "direct" call attached to
	it in the past, an 'M' will be shown. This flag is never cleared. It is
413	used to know if a function was ever modified by the ftrace infrastructure,
414	and can be used for debugging.
415
416	If the architecture supports it, it will also show what callback
417	is being directly called by the function. If the count is greater
418	than 1 it most likely will be ftrace_ops_list_func().
419
420	If the callback of a function jumps to a trampoline that is
421	specific to the callback and which is not the standard trampoline,
422	its address will be printed as well as the function that the
423	trampoline calls.
424
425  touched_functions:
426
427	This file contains all the functions that ever had a function callback
428	to it via the ftrace infrastructure. It has the same format as
429	enabled_functions but shows all functions that have ever been
430	traced.
431
	To see any function that has ever been modified by "ip modify" or a
	direct trampoline, one can perform the following command::

	  grep ' M ' /sys/kernel/tracing/touched_functions
436
437  function_profile_enabled:
438
	When set, it will enable profiling of all functions with either the
	function tracer, or if configured, the function graph tracer. It will
	keep a histogram of the number of times each function was called
	and, if the function graph tracer was configured, it will also keep
	track of the time spent in those functions. The histogram
	content can be displayed in the files:

	trace_stat/function<cpu> ( function0, function1, etc).
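
	A minimal profiling session might look like the following (a
	sketch; the stat files describe their own columns in a header)::

	  # echo 1 > function_profile_enabled
	  # sleep 1
	  # echo 0 > function_profile_enabled
	  # head trace_stat/function0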
447
448  trace_stat:
449
450	A directory that holds different tracing stats.
451
452  kprobe_events:
453
454	Enable dynamic trace points. See kprobetrace.rst.
455
456  kprobe_profile:
457
458	Dynamic trace points stats. See kprobetrace.rst.
459
460  max_graph_depth:
461
462	Used with the function graph tracer. This is the max depth
463	it will trace into a function. Setting this to a value of
464	one will show only the first kernel function that is called
465	from user space.
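
	For example, to see only the first kernel function entered from
	user space (a minimal sketch)::

	  # echo function_graph > current_tracer
	  # echo 1 > max_graph_depth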
466
467  printk_formats:
468
469	This is for tools that read the raw format files. If an event in
470	the ring buffer references a string, only a pointer to the string
471	is recorded into the buffer and not the string itself. This prevents
	tools from knowing what that string was. This file displays each
	string and its address, allowing tools to map the pointers back
	to the strings they represent.
475
476  saved_cmdlines:
477
478	Only the pid of the task is recorded in a trace event unless
479	the event specifically saves the task comm as well. Ftrace
480	makes a cache of pid mappings to comms to try to display
481	comms for events. If a pid for a comm is not listed, then
482	"<...>" is displayed in the output.
483
484	If the option "record-cmd" is set to "0", then comms of tasks
485	will not be saved during recording. By default, it is enabled.
486
487  saved_cmdlines_size:
488
489	By default, 128 comms are saved (see "saved_cmdlines" above). To
490	increase or decrease the amount of comms that are cached, echo
491	the number of comms to cache into this file.
492
493  saved_tgids:
494
495	If the option "record-tgid" is set, on each scheduling context switch
496	the Task Group ID of a task is saved in a table mapping the PID of
497	the thread to its TGID. By default, the "record-tgid" option is
498	disabled.
499
500  snapshot:
501
502	This displays the "snapshot" buffer and also lets the user
503	take a snapshot of the current running trace.
504	See the "Snapshot" section below for more details.
505
506  stack_max_size:
507
508	When the stack tracer is activated, this will display the
509	maximum stack size it has encountered.
510	See the "Stack Trace" section below.
511
512  stack_trace:
513
514	This displays the stack back trace of the largest stack
515	that was encountered when the stack tracer is activated.
516	See the "Stack Trace" section below.
517
518  stack_trace_filter:
519
520	This is similar to "set_ftrace_filter" but it limits what
521	functions the stack tracer will check.
522
523  trace_clock:
524
525	Whenever an event is recorded into the ring buffer, a
526	"timestamp" is added. This stamp comes from a specified
527	clock. By default, ftrace uses the "local" clock. This
528	clock is very fast and strictly per CPU, but on some
529	systems it may not be monotonic with respect to other
530	CPUs. In other words, the local clocks may not be in sync
531	with local clocks on other CPUs.
532
533	Usual clocks for tracing::
534
535	  # cat trace_clock
536	  [local] global counter x86-tsc
537
538	The clock with the square brackets around it is the one in effect.
539
540	local:
541		Default clock, but may not be in sync across CPUs
542
543	global:
544		This clock is in sync with all CPUs but may
545		be a bit slower than the local clock.
546
547	counter:
548		This is not a clock at all, but literally an atomic
549		counter. It counts up one by one, but is in sync
550		with all CPUs. This is useful when you need to
551		know exactly the order events occurred with respect to
552		each other on different CPUs.
553
554	uptime:
555		This uses the jiffies counter and the time stamp
556		is relative to the time since boot up.
557
558	perf:
559		This makes ftrace use the same clock that perf uses.
560		Eventually perf will be able to read ftrace buffers
561		and this will help out in interleaving the data.
562
563	x86-tsc:
564		Architectures may define their own clocks. For
565		example, x86 uses its own TSC cycle clock here.
566
567	ppc-tb:
568		This uses the powerpc timebase register value.
569		This is in sync across CPUs and can also be used
570		to correlate events across hypervisor/guest if
571		tb_offset is known.
572
573	mono:
574		This uses the fast monotonic clock (CLOCK_MONOTONIC)
575		which is monotonic and is subject to NTP rate adjustments.
576
577	mono_raw:
578		This is the raw monotonic clock (CLOCK_MONOTONIC_RAW)
579		which is monotonic but is not subject to any rate adjustments
580		and ticks at the same rate as the hardware clocksource.
581
582	boot:
583		This is the boot clock (CLOCK_BOOTTIME) and is based on the
584		fast monotonic clock, but also accounts for time spent in
585		suspend. Since the clock access is designed for use in
586		tracing in the suspend path, some side effects are possible
587		if clock is accessed after the suspend time is accounted before
588		the fast mono clock is updated. In this case, the clock update
589		appears to happen slightly sooner than it normally would have.
590		Also on 32-bit systems, it's possible that the 64-bit boot offset
591		sees a partial update. These effects are rare and post
592		processing should be able to handle them. See comments in the
593		ktime_get_boot_fast_ns() function for more information.
594
595	tai:
596		This is the tai clock (CLOCK_TAI) and is derived from the wall-
597		clock time. However, this clock does not experience
598		discontinuities and backwards jumps caused by NTP inserting leap
599		seconds. Since the clock access is designed for use in tracing,
600		side effects are possible. The clock access may yield wrong
601		readouts in case the internal TAI offset is updated e.g., caused
602		by setting the system time or using adjtimex() with an offset.
603		These effects are rare and post processing should be able to
604		handle them. See comments in the ktime_get_tai_fast_ns()
605		function for more information.
606
607	To set a clock, simply echo the clock name into this file::
608
609	  # echo global > trace_clock
610
611	Setting a clock clears the ring buffer content as well as the
612	"snapshot" buffer.
613
614  trace_marker:
615
616	This is a very useful file for synchronizing user space
617	with events happening in the kernel. Writing strings into
618	this file will be written into the ftrace buffer.
619
620	It is useful in applications to open this file at the start
621	of the application and just reference the file descriptor
622	for the file::
623
624		void trace_write(const char *fmt, ...)
625		{
626			va_list ap;
627			char buf[256];
628			int n;
629
630			if (trace_fd < 0)
631				return;
632
633			va_start(ap, fmt);
634			n = vsnprintf(buf, 256, fmt, ap);
635			va_end(ap);
636
637			write(trace_fd, buf, n);
638		}
639
640	start::
641
642		trace_fd = open("trace_marker", O_WRONLY);
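
	From a shell, a marker can also be written directly (a trivial
	example)::

	  # echo "hello world" > trace_marker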
643
644	Note: Writing into the trace_marker file can also initiate triggers
645	      that are written into /sys/kernel/tracing/events/ftrace/print/trigger
646	      See "Event triggers" in Documentation/trace/events.rst and an
647              example in Documentation/trace/histogram.rst (Section 3.)
648
649  trace_marker_raw:
650
651	This is similar to trace_marker above, but is meant for binary data
652	to be written to it, where a tool can be used to parse the data
653	from trace_pipe_raw.
654
655  uprobe_events:
656
657	Add dynamic tracepoints in programs.
658	See uprobetracer.rst
659
660  uprobe_profile:
661
	Uprobe statistics. See uprobetracer.rst
663
664  instances:
665
666	This is a way to make multiple trace buffers where different
667	events can be recorded in different buffers.
668	See "Instances" section below.
669
670  events:
671
672	This is the trace event directory. It holds event tracepoints
673	(also known as static tracepoints) that have been compiled
674	into the kernel. It shows what event tracepoints exist
675	and how they are grouped by system. There are "enable"
676	files at various levels that can enable the tracepoints
677	when a "1" is written to them.
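
	For example, to enable all scheduler events, or just a single one
	(a sketch; the "sched" system and "sched_switch" event are present
	in typical configurations)::

	  # echo 1 > events/sched/enable
	  # echo 1 > events/sched/sched_switch/enable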
678
679	See events.rst for more information.
680
681  set_event:
682
	Echoing the name of an event into this file will enable that event.
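
	For example (a sketch; any name listed in "available_events" may
	be used)::

	  # echo sched_switch > set_event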
684
685	See events.rst for more information.
686
687  available_events:
688
689	A list of events that can be enabled in tracing.
690
691	See events.rst for more information.
692
693  timestamp_mode:
694
695	Certain tracers may change the timestamp mode used when
696	logging trace events into the event buffer.  Events with
697	different modes can coexist within a buffer but the mode in
698	effect when an event is logged determines which timestamp mode
699	is used for that event.  The default timestamp mode is
700	'delta'.
701
702	Usual timestamp modes for tracing:
703
704	  # cat timestamp_mode
705	  [delta] absolute
706
707	  The timestamp mode with the square brackets around it is the
708	  one in effect.
709
710	  delta: Default timestamp mode - timestamp is a delta against
711	         a per-buffer timestamp.
712
713	  absolute: The timestamp is a full timestamp, not a delta
714                 against some other value.  As such it takes up more
715                 space and is less efficient.
716
717  hwlat_detector:
718
719	Directory for the Hardware Latency Detector.
720	See "Hardware Latency Detector" section below.
721
722  per_cpu:
723
724	This is a directory that contains the trace per_cpu information.
725
726  per_cpu/cpu0/buffer_size_kb:
727
728	The ftrace buffer is defined per_cpu. That is, there's a separate
729	buffer for each CPU to allow writes to be done atomically,
	and free from cache bouncing. These buffers may be sized
	differently per CPU. This file is similar to the buffer_size_kb
732	file, but it only displays or sets the buffer size for the
733	specific CPU. (here cpu0).
734
735  per_cpu/cpu0/trace:
736
737	This is similar to the "trace" file, but it will only display
738	the data specific for the CPU. If written to, it only clears
739	the specific CPU buffer.
740
741  per_cpu/cpu0/trace_pipe
742
743	This is similar to the "trace_pipe" file, and is a consuming
744	read, but it will only display (and consume) the data specific
745	for the CPU.
746
747  per_cpu/cpu0/trace_pipe_raw
748
749	For tools that can parse the ftrace ring buffer binary format,
750	the trace_pipe_raw file can be used to extract the data
751	from the ring buffer directly. With the use of the splice()
752	system call, the buffer data can be quickly transferred to
753	a file or to the network where a server is collecting the
754	data.
755
756	Like trace_pipe, this is a consuming reader, where multiple
757	reads will always produce different data.
758
759  per_cpu/cpu0/snapshot:
760
761	This is similar to the main "snapshot" file, but will only
762	snapshot the current CPU (if supported). It only displays
763	the content of the snapshot for a given CPU, and if
764	written to, only clears this CPU buffer.
765
766  per_cpu/cpu0/snapshot_raw:
767
768	Similar to the trace_pipe_raw, but will read the binary format
769	from the snapshot buffer for the given CPU.
770
771  per_cpu/cpu0/stats:
772
773	This displays certain stats about the ring buffer:
774
775	entries:
776		The number of events that are still in the buffer.
777
778	overrun:
779		The number of lost events due to overwriting when
780		the buffer was full.
781
782	commit overrun:
783		Should always be zero.
784		This gets set if so many events happened within a nested
785		event (ring buffer is re-entrant), that it fills the
786		buffer and starts dropping events.
787
788	bytes:
789		Bytes actually read (not overwritten).
790
791	oldest event ts:
792		The oldest timestamp in the buffer
793
794	now ts:
795		The current timestamp
796
797	dropped events:
798		Events lost due to overwrite option being off.
799
800	read events:
801		The number of events read.
802
803The Tracers
804-----------
805
806Here is the list of current tracers that may be configured.
807
808  "function"
809
810	Function call tracer to trace all kernel functions.
811
812  "function_graph"
813
814	Similar to the function tracer except that the
815	function tracer probes the functions on their entry
816	whereas the function graph tracer traces on both entry
817	and exit of the functions. It then provides the ability
818	to draw a graph of function calls similar to C code
819	source.
820
821	Note that the function graph calculates the timings of when the
822	function starts and returns internally and for each instance. If
	there are two instances that run the function graph tracer and trace
	the same functions, the reported timings may be slightly off, as
	each reads the timestamp separately and not at the same time.
826
827  "blk"
828
829	The block tracer. The tracer used by the blktrace user
830	application.
831
832  "hwlat"
833
834	The Hardware Latency tracer is used to detect if the hardware
835	produces any latency. See "Hardware Latency Detector" section
836	below.
837
838  "irqsoff"
839
840	Traces the areas that disable interrupts and saves
841	the trace with the longest max latency.
842	See tracing_max_latency. When a new max is recorded,
843	it replaces the old trace. It is best to view this
844	trace with the latency-format option enabled, which
845	happens automatically when the tracer is selected.
846
847  "preemptoff"
848
849	Similar to irqsoff but traces and records the amount of
850	time for which preemption is disabled.
851
852  "preemptirqsoff"
853
854	Similar to irqsoff and preemptoff, but traces and
855	records the largest time for which irqs and/or preemption
856	is disabled.
857
858  "wakeup"
859
860	Traces and records the max latency that it takes for
861	the highest priority task to get scheduled after
862	it has been woken up.
863        Traces all tasks as an average developer would expect.
864
865  "wakeup_rt"
866
867        Traces and records the max latency that it takes for just
868        RT tasks (as the current "wakeup" does). This is useful
869        for those interested in wake up timings of RT tasks.
870
871  "wakeup_dl"
872
873	Traces and records the max latency that it takes for
874	a SCHED_DEADLINE task to be woken (as the "wakeup" and
875	"wakeup_rt" does).
876
877  "mmiotrace"
878
879	A special tracer that is used to trace binary modules.
880	It will trace all the calls that a module makes to the
	hardware, as well as everything it writes to and reads
	from the I/O space.
883
884  "branch"
885
886	This tracer can be configured when tracing likely/unlikely
	calls within the kernel. It will trace when a likely or
	unlikely branch is hit and whether the prediction was
	correct.
890
891  "nop"
892
893	This is the "trace nothing" tracer. To remove all
894	tracers from tracing simply echo "nop" into
895	current_tracer.
896
897Error conditions
898----------------
899
900  For most ftrace commands, failure modes are obvious and communicated
901  using standard return codes.
902
903  For other more involved commands, extended error information may be
904  available via the tracing/error_log file.  For the commands that
905  support it, reading the tracing/error_log file after an error will
906  display more detailed information about what went wrong, if
907  information is available.  The tracing/error_log file is a circular
908  error log displaying a small number (currently, 8) of ftrace errors
909  for the last (8) failed commands.
910
911  The extended error information and usage takes the form shown in
912  this example::
913
914    # echo xxx > /sys/kernel/tracing/events/sched/sched_wakeup/trigger
915    echo: write error: Invalid argument
916
917    # cat /sys/kernel/tracing/error_log
918    [ 5348.887237] location: error: Couldn't yyy: zzz
919      Command: xxx
920               ^
921    [ 7517.023364] location: error: Bad rrr: sss
922      Command: ppp qqq
923                   ^
924
925  To clear the error log, echo the empty string into it::
926
927    # echo > /sys/kernel/tracing/error_log
928
929Examples of using the tracer
930----------------------------
931
932Here are typical examples of using the tracers when controlling
933them only with the tracefs interface (without using any
934user-land utilities).
935
936Output format:
937--------------
938
939Here is an example of the output format of the file "trace"::
940
941  # tracer: function
942  #
943  # entries-in-buffer/entries-written: 140080/250280   #P:4
944  #
945  #                              _-----=> irqs-off
946  #                             / _----=> need-resched
947  #                            | / _---=> hardirq/softirq
948  #                            || / _--=> preempt-depth
949  #                            ||| /     delay
950  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
951  #              | |       |   ||||       |         |
952              bash-1977  [000] .... 17284.993652: sys_close <-system_call_fastpath
953              bash-1977  [000] .... 17284.993653: __close_fd <-sys_close
954              bash-1977  [000] .... 17284.993653: _raw_spin_lock <-__close_fd
955              sshd-1974  [003] .... 17284.993653: __srcu_read_unlock <-fsnotify
956              bash-1977  [000] .... 17284.993654: add_preempt_count <-_raw_spin_lock
957              bash-1977  [000] ...1 17284.993655: _raw_spin_unlock <-__close_fd
958              bash-1977  [000] ...1 17284.993656: sub_preempt_count <-_raw_spin_unlock
959              bash-1977  [000] .... 17284.993657: filp_close <-__close_fd
960              bash-1977  [000] .... 17284.993657: dnotify_flush <-filp_close
961              sshd-1974  [003] .... 17284.993658: sys_select <-system_call_fastpath
962              ....
963
964A header is printed with the tracer name that is represented by
965the trace. In this case the tracer is "function". Then it shows the
966number of events in the buffer as well as the total number of entries
967that were written. The difference is the number of entries that were
968lost due to the buffer filling up (250280 - 140080 = 110200 events
969lost).
970
971The header explains the content of the events. Task name "bash", the task
972PID "1977", the CPU that it was running on "000", the latency format
973(explained below), the timestamp in <secs>.<usecs> format, the
974function name that was traced "sys_close" and the parent function that
975called this function "system_call_fastpath". The timestamp is the time
976at which the function was entered.
977
978Latency trace format
979--------------------
980
981When the latency-format option is enabled or when one of the latency
982tracers is set, the trace file gives somewhat more information to see
983why a latency happened. Here is a typical trace::
984
985  # tracer: irqsoff
986  #
987  # irqsoff latency trace v1.1.5 on 3.8.0-test+
988  # --------------------------------------------------------------------
989  # latency: 259 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
990  #    -----------------
991  #    | task: ps-6143 (uid:0 nice:0 policy:0 rt_prio:0)
992  #    -----------------
993  #  => started at: __lock_task_sighand
994  #  => ended at:   _raw_spin_unlock_irqrestore
995  #
996  #
997  #                  _------=> CPU#
998  #                 / _-----=> irqs-off
999  #                | / _----=> need-resched
1000  #                || / _---=> hardirq/softirq
1001  #                ||| / _--=> preempt-depth
1002  #                |||| /     delay
1003  #  cmd     pid   ||||| time  |   caller
1004  #     \   /      |||||  \    |   /
1005        ps-6143    2d...    0us!: trace_hardirqs_off <-__lock_task_sighand
1006        ps-6143    2d..1  259us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
1007        ps-6143    2d..1  263us+: time_hardirqs_on <-_raw_spin_unlock_irqrestore
1008        ps-6143    2d..1  306us : <stack trace>
1009   => trace_hardirqs_on_caller
1010   => trace_hardirqs_on
1011   => _raw_spin_unlock_irqrestore
1012   => do_task_stat
1013   => proc_tgid_stat
1014   => proc_single_show
1015   => seq_read
1016   => vfs_read
1017   => sys_read
1018   => system_call_fastpath
1019
1020
1021This shows that the current tracer is "irqsoff" tracing the time
1022for which interrupts were disabled. It gives the trace version (which
  never changes) and the version of the kernel upon which this was executed
1024(3.8). Then it displays the max latency in microseconds (259 us). The number
1025of trace entries displayed and the total number (both are four: #4/4).
1026VP, KP, SP, and HP are always zero and are reserved for later use.
1027#P is the number of online CPUs (#P:4).
1028
1029The task is the process that was running when the latency
1030occurred. (ps pid: 6143).
1031
1032The start and stop (the functions in which the interrupts were
1033disabled and enabled respectively) that caused the latencies:
1034
1035  - __lock_task_sighand is where the interrupts were disabled.
1036  - _raw_spin_unlock_irqrestore is where they were enabled again.
1037
1038The next lines after the header are the trace itself. The header
1039explains which is which.
1040
1041  cmd: The name of the process in the trace.
1042
1043  pid: The PID of that process.
1044
1045  CPU#: The CPU which the process was running on.
1046
1047  irqs-off: 'd' interrupts are disabled. '.' otherwise.
1048
1049  need-resched:
	- 'B' all of TIF_NEED_RESCHED, PREEMPT_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'N' both TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED are set,
	- 'n' only TIF_NEED_RESCHED is set,
	- 'p' only PREEMPT_NEED_RESCHED is set,
	- 'L' both PREEMPT_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'b' both TIF_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'l' only TIF_RESCHED_LAZY is set,
	- '.' otherwise.
1058
1059  hardirq/softirq:
1060	- 'Z' - NMI occurred inside a hardirq
1061	- 'z' - NMI is running
1062	- 'H' - hard irq occurred inside a softirq.
1063	- 'h' - hard irq is running
1064	- 's' - soft irq is running
1065	- '.' - normal context.
1066
1067  preempt-depth: The level of preempt_disabled
1068
1069The above is mostly meaningful for kernel developers.
1070
1071  time:
1072	When the latency-format option is enabled, the trace file
1073	output includes a timestamp relative to the start of the
1074	trace. This differs from the output when latency-format
1075	is disabled, which includes an absolute timestamp.
1076
1077  delay:
1078	This is just to help catch your eye a bit better. And
1079	needs to be fixed to be only relative to the same CPU.
1080	The marks are determined by the difference between this
1081	current trace and the next trace.
1082
1083	  - '$' - greater than 1 second
1084	  - '@' - greater than 100 millisecond
1085	  - '*' - greater than 10 millisecond
1086	  - '#' - greater than 1000 microsecond
1087	  - '!' - greater than 100 microsecond
1088	  - '+' - greater than 10 microsecond
1089	  - ' ' - less than or equal to 10 microsecond.
1090
1091  The rest is the same as the 'trace' file.
1092
1093  Note, the latency tracers will usually end with a back trace
1094  to easily find where the latency occurred.
1095
1096trace_options
1097-------------
1098
1099The trace_options file (or the options directory) is used to control
1100what gets printed in the trace output, or manipulate the tracers.
1101To see what is available, simply cat the file::
1102
1103  cat trace_options
1104	print-parent
1105	nosym-offset
1106	nosym-addr
1107	noverbose
1108	noraw
1109	nohex
1110	nobin
1111	noblock
1112	nofields
1113	trace_printk
1114	annotate
1115	nouserstacktrace
1116	nosym-userobj
1117	noprintk-msg-only
1118	context-info
1119	nolatency-format
1120	record-cmd
1121	norecord-tgid
1122	overwrite
1123	nodisable_on_free
1124	irq-info
1125	markers
1126	noevent-fork
1127	function-trace
1128	nofunction-fork
1129	nodisplay-graph
1130	nostacktrace
1131	nobranch
1132
1133To disable one of the options, echo in the option prepended with
1134"no"::
1135
1136  echo noprint-parent > trace_options
1137
1138To enable an option, leave off the "no"::
1139
1140  echo sym-offset > trace_options
1141
1142Here are the available options:
1143
1144  print-parent
1145	On function traces, display the calling (parent)
1146	function as well as the function being traced.
1147	::
1148
1149	  print-parent:
1150	   bash-4000  [01]  1477.606694: simple_strtoul <-kstrtoul
1151
1152	  noprint-parent:
1153	   bash-4000  [01]  1477.606694: simple_strtoul
1154
1155
1156  sym-offset
1157	Display not only the function name, but also the
1158	offset in the function. For example, instead of
1159	seeing just "ktime_get", you will see
1160	"ktime_get+0xb/0x20".
1161	::
1162
1163	  sym-offset:
1164	   bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0
1165
1166  sym-addr
1167	This will also display the function address as well
1168	as the function name.
1169	::
1170
1171	  sym-addr:
1172	   bash-4000  [01]  1477.606694: simple_strtoul <c0339346>
1173
1174  verbose
1175	This deals with the trace file when the
1176        latency-format option is enabled.
1177	::
1178
1179	    bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
1180	    (+0.000ms): simple_strtoul (kstrtoul)
1181
1182  raw
1183	This will display raw numbers. This option is best for
1184	use with user applications that can translate the raw
1185	numbers better than having it done in the kernel.
1186
1187  hex
1188	Similar to raw, but the numbers will be in a hexadecimal format.
1189
1190  bin
1191	This will print out the formats in raw binary.
1192
1193  block
1194	When set, reading trace_pipe will not block when polled.
1195
1196  fields
1197	Print the fields as described by their types. This is a better
1198	option than using hex, bin or raw, as it gives a better parsing
1199	of the content of the event.
1200
1201  trace_printk
1202	Can disable trace_printk() from writing into the buffer.
1203
1204  trace_printk_dest
1205	Set to have trace_printk() and similar internal tracing functions
1206	write into this instance. Note, only one trace instance can have
1207	this set. By setting this flag, it clears the trace_printk_dest flag
1208	of the instance that had it set previously. By default, the top
1209	level trace has this set, and will get it set again if another
1210	instance has it set then clears it.
1211
1212	This flag cannot be cleared by the top level instance, as it is the
1213	default instance. The only way the top level instance has this flag
1214	cleared, is by it being set in another instance.
1215
1216  copy_trace_marker
1217	If there are applications that hard code writing into the top level
1218	trace_marker file (/sys/kernel/tracing/trace_marker or trace_marker_raw),
1219	and the tooling would like it to go into an instance, this option can
1220	be used. Create an instance and set this option, and then all writes
1221	into the top level trace_marker file will also be redirected into this
1222	instance.
1223
1224	Note, by default this option is set for the top level instance. If it
1225	is disabled, then writes to the trace_marker or trace_marker_raw files
1226	will not be written into the top level file. If no instance has this
1227	option set, then a write will error with the errno of ENODEV.
1228
1229  annotate
1230	It is sometimes confusing when the CPU buffers are full
1231	and one CPU buffer had a lot of events recently, thus
	a shorter time frame, where another CPU may have only had
1233	a few events, which lets it have older events. When
1234	the trace is reported, it shows the oldest events first,
1235	and it may look like only one CPU ran (the one with the
1236	oldest events). When the annotate option is set, it will
1237	display when a new CPU buffer started::
1238
1239			  <idle>-0     [001] dNs4 21169.031481: wake_up_idle_cpu <-add_timer_on
1240			  <idle>-0     [001] dNs4 21169.031482: _raw_spin_unlock_irqrestore <-add_timer_on
1241			  <idle>-0     [001] .Ns4 21169.031484: sub_preempt_count <-_raw_spin_unlock_irqrestore
1242		##### CPU 2 buffer started ####
1243			  <idle>-0     [002] .N.1 21169.031484: rcu_idle_exit <-cpu_idle
1244			  <idle>-0     [001] .Ns3 21169.031484: _raw_spin_unlock <-clocksource_watchdog
1245			  <idle>-0     [001] .Ns3 21169.031485: sub_preempt_count <-_raw_spin_unlock
1246
1247  userstacktrace
1248	This option changes the trace. It records a
1249	stacktrace of the current user space thread after
1250	each trace event.
1251
1252  sym-userobj
	When user stacktraces are enabled, look up which
	object the address belongs to, and print a
	relative address. This is especially useful when
	ASLR is on, otherwise you don't get a chance to
	resolve the address to object/file/line after
	the app is no longer running.
1259
1260	The lookup is performed when you read
	trace, trace_pipe. Example::
1262
1263		  a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
1264		  x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
1265
1266
1267  printk-msg-only
1268	When set, trace_printk()s will only show the format
1269	and not their parameters (if trace_bprintk() or
1270	trace_bputs() was used to save the trace_printk()).
1271
1272  context-info
	Show the context of the event: the comm, PID, timestamp,
	CPU, and similar fields. When disabled, only the event
	data itself is shown.
1275
1276  latency-format
1277	This option changes the trace output. When it is enabled,
1278	the trace displays additional information about the
1279	latency, as described in "Latency trace format".
1280
1281  pause-on-trace
	When set, opening the trace file for read will pause
1283	writing to the ring buffer (as if tracing_on was set to zero).
1284	This simulates the original behavior of the trace file.
1285	When the file is closed, tracing will be enabled again.
1286
1287  hash-ptr
1288        When set, "%p" in the event printk format displays the
        hashed pointer value instead of the real address.
        This is useful if you want to find out which hashed
        value corresponds to which real value in the trace log.
1292
1293  record-cmd
1294	When any event or tracer is enabled, a hook is enabled
1295	in the sched_switch trace point to fill comm cache
1296	with mapped pids and comms. But this may cause some
1297	overhead, and if you only care about pids, and not the
1298	name of the task, disabling this option can lower the
1299	impact of tracing. See "saved_cmdlines".
1300
1301  record-tgid
1302	When any event or tracer is enabled, a hook is enabled
	in the sched_switch trace point to fill the cache that
	maps Thread Group IDs (TGIDs) to pids. See
1305	"saved_tgids".
1306
1307  overwrite
1308	This controls what happens when the trace buffer is
1309	full. If "1" (default), the oldest events are
1310	discarded and overwritten. If "0", then the newest
1311	events are discarded.
1312	(see per_cpu/cpu0/stats for overrun and dropped)
1313
1314  disable_on_free
1315	When the free_buffer is closed, tracing will
1316	stop (tracing_on set to 0).
1317
1318  irq-info
1319	Shows the interrupt, preempt count, need resched data.
1320	When disabled, the trace looks like::
1321
1322		# tracer: function
1323		#
1324		# entries-in-buffer/entries-written: 144405/9452052   #P:4
1325		#
1326		#           TASK-PID   CPU#      TIMESTAMP  FUNCTION
1327		#              | |       |          |         |
1328			  <idle>-0     [002]  23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
1329			  <idle>-0     [002]  23636.756054: activate_task <-ttwu_do_activate.constprop.89
1330			  <idle>-0     [002]  23636.756055: enqueue_task <-activate_task
1331
1332
1333  markers
1334	When set, the trace_marker is writable (only by root).
1335	When disabled, the trace_marker will error with EINVAL
1336	on write.
1337
1338  event-fork
1339	When set, tasks with PIDs listed in set_event_pid will have
1340	the PIDs of their children added to set_event_pid when those
1341	tasks fork. Also, when tasks with PIDs in set_event_pid exit,
1342	their PIDs will be removed from the file.
1343
1344        This affects PIDs listed in set_event_notrace_pid as well.
1345
1346  function-trace
1347	The latency tracers will enable function tracing
1348	if this option is enabled (default it is). When
1349	it is disabled, the latency tracers do not trace
1350	functions. This keeps the overhead of the tracer down
1351	when performing latency tests.
1352
1353  function-fork
1354	When set, tasks with PIDs listed in set_ftrace_pid will
1355	have the PIDs of their children added to set_ftrace_pid
1356	when those tasks fork. Also, when tasks with PIDs in
1357	set_ftrace_pid exit, their PIDs will be removed from the
1358	file.
1359
1360        This affects PIDs in set_ftrace_notrace_pid as well.
1361
1362  display-graph
1363	When set, the latency tracers (irqsoff, wakeup, etc) will
1364	use function graph tracing instead of function tracing.
1365
1366  stacktrace
1367	When set, a stack trace is recorded after any trace event
1368	is recorded.
1369
1370  branch
	Enable branch tracing with the tracer. This enables the branch
	tracer along with the currently set tracer. Enabling this
1373	with the "nop" tracer is the same as just enabling the
1374	"branch" tracer.
1375
1376.. tip:: Some tracers have their own options. They only appear in this
1377       file when the tracer is active. They always appear in the
1378       options directory.
1379
1380
1381Here are the per tracer options:
1382
1383Options for function tracer:
1384
1385  func_stack_trace
1386	When set, a stack trace is recorded after every
1387	function that is recorded. NOTE! Limit the functions
1388	that are recorded before enabling this, with
1389	"set_ftrace_filter" otherwise the system performance
1390	will be critically degraded. Remember to disable
1391	this option before clearing the function filter.
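
	For example, a safe way to use this option (a sketch; pick a
	rarely called function instead of tracing everything)::

	  # echo schedule > set_ftrace_filter
	  # echo function > current_tracer
	  # echo 1 > options/func_stack_trace
	  # cat trace
	  [...]
	  # echo 0 > options/func_stack_trace
	  # echo > set_ftrace_filter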
1392
1393Options for function_graph tracer:
1394
1395 Since the function_graph tracer has a slightly different output
1396 it has its own options to control what is displayed.
1397
1398  funcgraph-overrun
1399	When set, the "overrun" of the graph stack is
1400	displayed after each function traced. The
	overrun is when the stack depth of the calls
1402	is greater than what is reserved for each task.
1403	Each task has a fixed array of functions to
1404	trace in the call graph. If the depth of the
1405	calls exceeds that, the function is not traced.
1406	The overrun is the number of functions missed
1407	due to exceeding this array.
1408
1409  funcgraph-cpu
1410	When set, the CPU number of the CPU where the trace
1411	occurred is displayed.
1412
1413  funcgraph-overhead
1414	When set, if the function takes longer than
	a certain amount, then a delay marker is
1416	displayed. See "delay" above, under the
1417	header description.
1418
1419  funcgraph-proc
1420	Unlike other tracers, the process' command line
1421	is not displayed by default, but instead only
1422	when a task is traced in and out during a context
	switch. Enabling this option has the command
1424	of each process displayed at every line.
1425
1426  funcgraph-duration
1427	At the end of each function (the return)
1428	the duration of the amount of time in the
1429	function is displayed in microseconds.
1430
1431  funcgraph-abstime
1432	When set, the timestamp is displayed at each line.
1433
1434  funcgraph-irqs
1435	When disabled, functions that happen inside an
1436	interrupt will not be traced.
1437
1438  funcgraph-tail
1439	When set, the return event will include the function
1440	that it represents. By default this is off, and
1441	only a closing curly bracket "}" is displayed for
1442	the return of a function.
1443
1444  funcgraph-retval
1445	When set, the return value of each traced function
1446	will be printed after an equal sign "=". By default
1447	this is off.
1448
1449  funcgraph-retval-hex
1450	When set, the return value will always be printed
1451	in hexadecimal format. If the option is not set and
1452	the return value is an error code, it will be printed
1453	in signed decimal format; otherwise it will also be
1454	printed in hexadecimal format. By default, this option
1455	is off.
1456
1457  sleep-time
	When running the function graph tracer, include
	the time a task schedules out in its function.
	When enabled, it will account the time the task has been
	scheduled out as part of the function call.
1462
1463  graph-time
	When running the function profiler with the function graph tracer,
	include the time spent in nested function calls. When this is
1466	not set, the time reported for the function will only
1467	include the time the function itself executed for, not the
1468	time for functions that it called.
1469
1470Options for blk tracer:
1471
1472  blk_classic
1473	Shows a more minimalistic output.
1474
1475
1476irqsoff
1477-------
1478
1479When interrupts are disabled, the CPU can not react to any other
1480external event (besides NMIs and SMIs). This prevents the timer
1481interrupt from triggering or the mouse interrupt from letting
1482the kernel know of a new mouse event. The result is a latency
1483with the reaction time.
1484
1485The irqsoff tracer tracks the time for which interrupts are
1486disabled. When a new maximum latency is hit, the tracer saves
1487the trace leading up to that latency point so that every time a
1488new maximum is reached, the old saved trace is discarded and the
1489new trace is saved.
1490
1491To reset the maximum, echo 0 into tracing_max_latency. Here is
1492an example::
1493
1494  # echo 0 > options/function-trace
1495  # echo irqsoff > current_tracer
1496  # echo 1 > tracing_on
1497  # echo 0 > tracing_max_latency
1498  # ls -ltr
1499  [...]
1500  # echo 0 > tracing_on
1501  # cat trace
1502  # tracer: irqsoff
1503  #
1504  # irqsoff latency trace v1.1.5 on 3.8.0-test+
1505  # --------------------------------------------------------------------
1506  # latency: 16 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1507  #    -----------------
1508  #    | task: swapper/0-0 (uid:0 nice:0 policy:0 rt_prio:0)
1509  #    -----------------
1510  #  => started at: run_timer_softirq
1511  #  => ended at:   run_timer_softirq
1512  #
1513  #
1514  #                  _------=> CPU#
1515  #                 / _-----=> irqs-off
1516  #                | / _----=> need-resched
1517  #                || / _---=> hardirq/softirq
1518  #                ||| / _--=> preempt-depth
1519  #                |||| /     delay
1520  #  cmd     pid   ||||| time  |   caller
1521  #     \   /      |||||  \    |   /
1522    <idle>-0       0d.s2    0us+: _raw_spin_lock_irq <-run_timer_softirq
1523    <idle>-0       0dNs3   17us : _raw_spin_unlock_irq <-run_timer_softirq
1524    <idle>-0       0dNs3   17us+: trace_hardirqs_on <-run_timer_softirq
1525    <idle>-0       0dNs3   25us : <stack trace>
1526   => _raw_spin_unlock_irq
1527   => run_timer_softirq
1528   => __do_softirq
1529   => call_softirq
1530   => do_softirq
1531   => irq_exit
1532   => smp_apic_timer_interrupt
1533   => apic_timer_interrupt
1534   => rcu_idle_exit
1535   => cpu_idle
1536   => rest_init
1537   => start_kernel
1538   => x86_64_start_reservations
1539   => x86_64_start_kernel
1540
1541Here we see that we had a latency of 16 microseconds (which is
1542very good). The _raw_spin_lock_irq in run_timer_softirq disabled
1543interrupts. The difference between the 16 and the displayed
1544timestamp 25us occurred because the clock was incremented
1545between the time of recording the max latency and the time of
1546recording the function that had that latency.
1547
1548Note the above example had function-trace not set. If we set
1549function-trace, we get a much larger output::
1550
1551 with echo 1 > options/function-trace
1552
1553  # tracer: irqsoff
1554  #
1555  # irqsoff latency trace v1.1.5 on 3.8.0-test+
1556  # --------------------------------------------------------------------
1557  # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1558  #    -----------------
1559  #    | task: bash-2042 (uid:0 nice:0 policy:0 rt_prio:0)
1560  #    -----------------
1561  #  => started at: ata_scsi_queuecmd
1562  #  => ended at:   ata_scsi_queuecmd
1563  #
1564  #
1565  #                  _------=> CPU#
1566  #                 / _-----=> irqs-off
1567  #                | / _----=> need-resched
1568  #                || / _---=> hardirq/softirq
1569  #                ||| / _--=> preempt-depth
1570  #                |||| /     delay
1571  #  cmd     pid   ||||| time  |   caller
1572  #     \   /      |||||  \    |   /
1573      bash-2042    3d...    0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1574      bash-2042    3d...    0us : add_preempt_count <-_raw_spin_lock_irqsave
1575      bash-2042    3d..1    1us : ata_scsi_find_dev <-ata_scsi_queuecmd
1576      bash-2042    3d..1    1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1577      bash-2042    3d..1    2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1578      bash-2042    3d..1    2us : ata_qc_new_init <-__ata_scsi_queuecmd
1579      bash-2042    3d..1    3us : ata_sg_init <-__ata_scsi_queuecmd
1580      bash-2042    3d..1    4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1581      bash-2042    3d..1    4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1582  [...]
1583      bash-2042    3d..1   67us : delay_tsc <-__delay
1584      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1585      bash-2042    3d..2   67us : sub_preempt_count <-delay_tsc
1586      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1587      bash-2042    3d..2   68us : sub_preempt_count <-delay_tsc
1588      bash-2042    3d..1   68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1589      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1590      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1591      bash-2042    3d..1   72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1592      bash-2042    3d..1  120us : <stack trace>
1593   => _raw_spin_unlock_irqrestore
1594   => ata_scsi_queuecmd
1595   => scsi_dispatch_cmd
1596   => scsi_request_fn
1597   => __blk_run_queue_uncond
1598   => __blk_run_queue
1599   => blk_queue_bio
1600   => submit_bio_noacct
1601   => submit_bio
1602   => submit_bh
1603   => __ext3_get_inode_loc
1604   => ext3_iget
1605   => ext3_lookup
1606   => lookup_real
1607   => __lookup_hash
1608   => walk_component
1609   => lookup_last
1610   => path_lookupat
1611   => filename_lookup
1612   => user_path_at_empty
1613   => user_path_at
1614   => vfs_fstatat
1615   => vfs_stat
1616   => sys_newstat
1617   => system_call_fastpath
1618
1619
1620Here we traced a 71 microsecond latency. But we also see all the
1621functions that were called during that time. Note that by
1622enabling function tracing, we incur an added overhead. This
1623overhead may extend the latency times. But nevertheless, this
1624trace has provided some very helpful debugging information.
1625
1626If we prefer function graph output instead of function output, we
1627can set the display-graph option::
1628
1629 with echo 1 > options/display-graph
1630
1631  # tracer: irqsoff
1632  #
1633  # irqsoff latency trace v1.1.5 on 4.20.0-rc6+
1634  # --------------------------------------------------------------------
1635  # latency: 3751 us, #274/274, CPU#0 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
1636  #    -----------------
1637  #    | task: bash-1507 (uid:0 nice:0 policy:0 rt_prio:0)
1638  #    -----------------
1639  #  => started at: free_debug_processing
1640  #  => ended at:   return_to_handler
1641  #
1642  #
1643  #                                       _-----=> irqs-off
1644  #                                      / _----=> need-resched
1645  #                                     | / _---=> hardirq/softirq
1646  #                                     || / _--=> preempt-depth
1647  #                                     ||| /
1648  #   REL TIME      CPU  TASK/PID       ||||     DURATION                  FUNCTION CALLS
1649  #      |          |     |    |        ||||      |   |                     |   |   |   |
1650          0 us |   0)   bash-1507    |  d... |   0.000 us    |  _raw_spin_lock_irqsave();
1651          0 us |   0)   bash-1507    |  d..1 |   0.378 us    |    do_raw_spin_trylock();
1652          1 us |   0)   bash-1507    |  d..2 |               |    set_track() {
1653          2 us |   0)   bash-1507    |  d..2 |               |      save_stack_trace() {
1654          2 us |   0)   bash-1507    |  d..2 |               |        __save_stack_trace() {
1655          3 us |   0)   bash-1507    |  d..2 |               |          __unwind_start() {
1656          3 us |   0)   bash-1507    |  d..2 |               |            get_stack_info() {
1657          3 us |   0)   bash-1507    |  d..2 |   0.351 us    |              in_task_stack();
1658          4 us |   0)   bash-1507    |  d..2 |   1.107 us    |            }
1659  [...]
1660       3750 us |   0)   bash-1507    |  d..1 |   0.516 us    |      do_raw_spin_unlock();
1661       3750 us |   0)   bash-1507    |  d..1 |   0.000 us    |  _raw_spin_unlock_irqrestore();
1662       3764 us |   0)   bash-1507    |  d..1 |   0.000 us    |  tracer_hardirqs_on();
1663      bash-1507    0d..1 3792us : <stack trace>
1664   => free_debug_processing
1665   => __slab_free
1666   => kmem_cache_free
1667   => vm_area_free
1668   => remove_vma
1669   => exit_mmap
1670   => mmput
1671   => begin_new_exec
1672   => load_elf_binary
1673   => search_binary_handler
1674   => __do_execve_file.isra.32
1675   => __x64_sys_execve
1676   => do_syscall_64
1677   => entry_SYSCALL_64_after_hwframe
1678
1679preemptoff
1680----------
1681
1682When preemption is disabled, we may be able to receive
1683interrupts but the task cannot be preempted and a higher
1684priority task must wait for preemption to be enabled again
1685before it can preempt a lower priority task.
1686
1687The preemptoff tracer traces the places that disable preemption.
1688Like the irqsoff tracer, it records the maximum latency for
1689which preemption was disabled. The control of preemptoff tracer
1690is much like the irqsoff tracer.
1691::
1692
1693  # echo 0 > options/function-trace
1694  # echo preemptoff > current_tracer
1695  # echo 1 > tracing_on
1696  # echo 0 > tracing_max_latency
1697  # ls -ltr
1698  [...]
1699  # echo 0 > tracing_on
1700  # cat trace
1701  # tracer: preemptoff
1702  #
1703  # preemptoff latency trace v1.1.5 on 3.8.0-test+
1704  # --------------------------------------------------------------------
1705  # latency: 46 us, #4/4, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1706  #    -----------------
1707  #    | task: sshd-1991 (uid:0 nice:0 policy:0 rt_prio:0)
1708  #    -----------------
1709  #  => started at: do_IRQ
1710  #  => ended at:   do_IRQ
1711  #
1712  #
1713  #                  _------=> CPU#
1714  #                 / _-----=> irqs-off
1715  #                | / _----=> need-resched
1716  #                || / _---=> hardirq/softirq
1717  #                ||| / _--=> preempt-depth
1718  #                |||| /     delay
1719  #  cmd     pid   ||||| time  |   caller
1720  #     \   /      |||||  \    |   /
1721      sshd-1991    1d.h.    0us+: irq_enter <-do_IRQ
1722      sshd-1991    1d..1   46us : irq_exit <-do_IRQ
1723      sshd-1991    1d..1   47us+: trace_preempt_on <-do_IRQ
1724      sshd-1991    1d..1   52us : <stack trace>
1725   => sub_preempt_count
1726   => irq_exit
1727   => do_IRQ
1728   => ret_from_intr
1729
1730
1731This trace shows a few more details. Preemption was disabled when an
1732interrupt came in (notice the 'h'), and was enabled on exit.
1733But we also see that interrupts have been disabled when entering
1734the preempt off section and leaving it (the 'd'). We do not know if
1735interrupts were enabled in the meantime or shortly after this
1736was over.
1737::
1738
1739  # tracer: preemptoff
1740  #
1741  # preemptoff latency trace v1.1.5 on 3.8.0-test+
1742  # --------------------------------------------------------------------
1743  # latency: 83 us, #241/241, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1744  #    -----------------
1745  #    | task: bash-1994 (uid:0 nice:0 policy:0 rt_prio:0)
1746  #    -----------------
1747  #  => started at: wake_up_new_task
1748  #  => ended at:   task_rq_unlock
1749  #
1750  #
1751  #                  _------=> CPU#
1752  #                 / _-----=> irqs-off
1753  #                | / _----=> need-resched
1754  #                || / _---=> hardirq/softirq
1755  #                ||| / _--=> preempt-depth
1756  #                |||| /     delay
1757  #  cmd     pid   ||||| time  |   caller
1758  #     \   /      |||||  \    |   /
1759      bash-1994    1d..1    0us : _raw_spin_lock_irqsave <-wake_up_new_task
1760      bash-1994    1d..1    0us : select_task_rq_fair <-select_task_rq
1761      bash-1994    1d..1    1us : __rcu_read_lock <-select_task_rq_fair
1762      bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1763      bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1764  [...]
1765      bash-1994    1d..1   12us : irq_enter <-smp_apic_timer_interrupt
1766      bash-1994    1d..1   12us : rcu_irq_enter <-irq_enter
1767      bash-1994    1d..1   13us : add_preempt_count <-irq_enter
1768      bash-1994    1d.h1   13us : exit_idle <-smp_apic_timer_interrupt
1769      bash-1994    1d.h1   13us : hrtimer_interrupt <-smp_apic_timer_interrupt
1770      bash-1994    1d.h1   13us : _raw_spin_lock <-hrtimer_interrupt
1771      bash-1994    1d.h1   14us : add_preempt_count <-_raw_spin_lock
1772      bash-1994    1d.h2   14us : ktime_get_update_offsets <-hrtimer_interrupt
1773  [...]
1774      bash-1994    1d.h1   35us : lapic_next_event <-clockevents_program_event
1775      bash-1994    1d.h1   35us : irq_exit <-smp_apic_timer_interrupt
1776      bash-1994    1d.h1   36us : sub_preempt_count <-irq_exit
1777      bash-1994    1d..2   36us : do_softirq <-irq_exit
1778      bash-1994    1d..2   36us : __do_softirq <-call_softirq
1779      bash-1994    1d..2   36us : __local_bh_disable <-__do_softirq
1780      bash-1994    1d.s2   37us : add_preempt_count <-_raw_spin_lock_irq
1781      bash-1994    1d.s3   38us : _raw_spin_unlock <-run_timer_softirq
1782      bash-1994    1d.s3   39us : sub_preempt_count <-_raw_spin_unlock
1783      bash-1994    1d.s2   39us : call_timer_fn <-run_timer_softirq
1784  [...]
1785      bash-1994    1dNs2   81us : cpu_needs_another_gp <-rcu_process_callbacks
1786      bash-1994    1dNs2   82us : __local_bh_enable <-__do_softirq
1787      bash-1994    1dNs2   82us : sub_preempt_count <-__local_bh_enable
1788      bash-1994    1dN.2   82us : idle_cpu <-irq_exit
1789      bash-1994    1dN.2   83us : rcu_irq_exit <-irq_exit
1790      bash-1994    1dN.2   83us : sub_preempt_count <-irq_exit
1791      bash-1994    1.N.1   84us : _raw_spin_unlock_irqrestore <-task_rq_unlock
1792      bash-1994    1.N.1   84us+: trace_preempt_on <-task_rq_unlock
1793      bash-1994    1.N.1  104us : <stack trace>
1794   => sub_preempt_count
1795   => _raw_spin_unlock_irqrestore
1796   => task_rq_unlock
1797   => wake_up_new_task
1798   => do_fork
1799   => sys_clone
1800   => stub_clone
1801
1802
1803The above is an example of the preemptoff trace with
1804function-trace set. Here we see that interrupts were not disabled
1805the entire time. The irq_enter code lets us know that we entered
1806an interrupt (the 'h'). Before that, the flags still show that we
1807are not in an interrupt, but the functions being traced (such as
1808smp_apic_timer_interrupt) show that this is not the case.
1809
1810preemptirqsoff
1811--------------
1812
1813Knowing the locations that have interrupts disabled or
1814preemption disabled for the longest times is helpful. But
1815sometimes we would like to know the longest time for which
1816either preemption and/or interrupts are disabled.
1817
1818Consider the following code::
1819
1820    local_irq_disable();
1821    call_function_with_irqs_off();
1822    preempt_disable();
1823    call_function_with_irqs_and_preemption_off();
1824    local_irq_enable();
1825    call_function_with_preemption_off();
1826    preempt_enable();
1827
1828The irqsoff tracer will record the total length of
1829call_function_with_irqs_off() and
1830call_function_with_irqs_and_preemption_off().
1831
1832The preemptoff tracer will record the total length of
1833call_function_with_irqs_and_preemption_off() and
1834call_function_with_preemption_off().
1835
1836But neither will trace the whole time that interrupts and/or
1837preemption is disabled. This total time is the time that we
1838cannot schedule. To record this time, use the preemptirqsoff
1839tracer.
1840
1841Again, using this trace is much like the irqsoff and preemptoff
1842tracers.
1843::
1844
1845  # echo 0 > options/function-trace
1846  # echo preemptirqsoff > current_tracer
1847  # echo 1 > tracing_on
1848  # echo 0 > tracing_max_latency
1849  # ls -ltr
1850  [...]
1851  # echo 0 > tracing_on
1852  # cat trace
1853  # tracer: preemptirqsoff
1854  #
1855  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1856  # --------------------------------------------------------------------
1857  # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1858  #    -----------------
1859  #    | task: ls-2230 (uid:0 nice:0 policy:0 rt_prio:0)
1860  #    -----------------
1861  #  => started at: ata_scsi_queuecmd
1862  #  => ended at:   ata_scsi_queuecmd
1863  #
1864  #
1865  #                  _------=> CPU#
1866  #                 / _-----=> irqs-off
1867  #                | / _----=> need-resched
1868  #                || / _---=> hardirq/softirq
1869  #                ||| / _--=> preempt-depth
1870  #                |||| /     delay
1871  #  cmd     pid   ||||| time  |   caller
1872  #     \   /      |||||  \    |   /
1873        ls-2230    3d...    0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1874        ls-2230    3...1  100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1875        ls-2230    3...1  101us+: trace_preempt_on <-ata_scsi_queuecmd
1876        ls-2230    3...1  111us : <stack trace>
1877   => sub_preempt_count
1878   => _raw_spin_unlock_irqrestore
1879   => ata_scsi_queuecmd
1880   => scsi_dispatch_cmd
1881   => scsi_request_fn
1882   => __blk_run_queue_uncond
1883   => __blk_run_queue
1884   => blk_queue_bio
1885   => submit_bio_noacct
1886   => submit_bio
1887   => submit_bh
1888   => ext3_bread
1889   => ext3_dir_bread
1890   => htree_dirblock_to_tree
1891   => ext3_htree_fill_tree
1892   => ext3_readdir
1893   => vfs_readdir
1894   => sys_getdents
1895   => system_call_fastpath
1896
1897
1898The trace_hardirqs_off_thunk is called from assembly on x86 when
1899interrupts are disabled in the assembly code. Without the
1900function tracing, we do not know if interrupts were enabled
1901within the preemption points. We do see that it started with
1902preemption enabled.
1903
1904Here is a trace with function-trace set::
1905
1906  # tracer: preemptirqsoff
1907  #
1908  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1909  # --------------------------------------------------------------------
1910  # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1911  #    -----------------
1912  #    | task: ls-2269 (uid:0 nice:0 policy:0 rt_prio:0)
1913  #    -----------------
1914  #  => started at: schedule
1915  #  => ended at:   mutex_unlock
1916  #
1917  #
1918  #                  _------=> CPU#
1919  #                 / _-----=> irqs-off
1920  #                | / _----=> need-resched
1921  #                || / _---=> hardirq/softirq
1922  #                ||| / _--=> preempt-depth
1923  #                |||| /     delay
1924  #  cmd     pid   ||||| time  |   caller
1925  #     \   /      |||||  \    |   /
1926  kworker/-59      3...1    0us : __schedule <-schedule
1927  kworker/-59      3d..1    0us : rcu_preempt_qs <-rcu_note_context_switch
1928  kworker/-59      3d..1    1us : add_preempt_count <-_raw_spin_lock_irq
1929  kworker/-59      3d..2    1us : deactivate_task <-__schedule
1930  kworker/-59      3d..2    1us : dequeue_task <-deactivate_task
1931  kworker/-59      3d..2    2us : update_rq_clock <-dequeue_task
1932  kworker/-59      3d..2    2us : dequeue_task_fair <-dequeue_task
1933  kworker/-59      3d..2    2us : update_curr <-dequeue_task_fair
1934  kworker/-59      3d..2    2us : update_min_vruntime <-update_curr
1935  kworker/-59      3d..2    3us : cpuacct_charge <-update_curr
1936  kworker/-59      3d..2    3us : __rcu_read_lock <-cpuacct_charge
1937  kworker/-59      3d..2    3us : __rcu_read_unlock <-cpuacct_charge
1938  kworker/-59      3d..2    3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1939  kworker/-59      3d..2    4us : clear_buddies <-dequeue_task_fair
1940  kworker/-59      3d..2    4us : account_entity_dequeue <-dequeue_task_fair
1941  kworker/-59      3d..2    4us : update_min_vruntime <-dequeue_task_fair
1942  kworker/-59      3d..2    4us : update_cfs_shares <-dequeue_task_fair
1943  kworker/-59      3d..2    5us : hrtick_update <-dequeue_task_fair
1944  kworker/-59      3d..2    5us : wq_worker_sleeping <-__schedule
1945  kworker/-59      3d..2    5us : kthread_data <-wq_worker_sleeping
1946  kworker/-59      3d..2    5us : put_prev_task_fair <-__schedule
1947  kworker/-59      3d..2    6us : pick_next_task_fair <-pick_next_task
1948  kworker/-59      3d..2    6us : clear_buddies <-pick_next_task_fair
1949  kworker/-59      3d..2    6us : set_next_entity <-pick_next_task_fair
1950  kworker/-59      3d..2    6us : update_stats_wait_end <-set_next_entity
1951        ls-2269    3d..2    7us : finish_task_switch <-__schedule
1952        ls-2269    3d..2    7us : _raw_spin_unlock_irq <-finish_task_switch
1953        ls-2269    3d..2    8us : do_IRQ <-ret_from_intr
1954        ls-2269    3d..2    8us : irq_enter <-do_IRQ
1955        ls-2269    3d..2    8us : rcu_irq_enter <-irq_enter
1956        ls-2269    3d..2    9us : add_preempt_count <-irq_enter
1957        ls-2269    3d.h2    9us : exit_idle <-do_IRQ
1958  [...]
1959        ls-2269    3d.h3   20us : sub_preempt_count <-_raw_spin_unlock
1960        ls-2269    3d.h2   20us : irq_exit <-do_IRQ
1961        ls-2269    3d.h2   21us : sub_preempt_count <-irq_exit
1962        ls-2269    3d..3   21us : do_softirq <-irq_exit
1963        ls-2269    3d..3   21us : __do_softirq <-call_softirq
1964        ls-2269    3d..3   21us+: __local_bh_disable <-__do_softirq
1965        ls-2269    3d.s4   29us : sub_preempt_count <-_local_bh_enable_ip
1966        ls-2269    3d.s5   29us : sub_preempt_count <-_local_bh_enable_ip
1967        ls-2269    3d.s5   31us : do_IRQ <-ret_from_intr
1968        ls-2269    3d.s5   31us : irq_enter <-do_IRQ
1969        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1970  [...]
1971        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1972        ls-2269    3d.s5   32us : add_preempt_count <-irq_enter
1973        ls-2269    3d.H5   32us : exit_idle <-do_IRQ
1974        ls-2269    3d.H5   32us : handle_irq <-do_IRQ
1975        ls-2269    3d.H5   32us : irq_to_desc <-handle_irq
1976        ls-2269    3d.H5   33us : handle_fasteoi_irq <-handle_irq
1977  [...]
1978        ls-2269    3d.s5  158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1979        ls-2269    3d.s3  158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1980        ls-2269    3d.s3  159us : __local_bh_enable <-__do_softirq
1981        ls-2269    3d.s3  159us : sub_preempt_count <-__local_bh_enable
1982        ls-2269    3d..3  159us : idle_cpu <-irq_exit
1983        ls-2269    3d..3  159us : rcu_irq_exit <-irq_exit
1984        ls-2269    3d..3  160us : sub_preempt_count <-irq_exit
1985        ls-2269    3d...  161us : __mutex_unlock_slowpath <-mutex_unlock
1986        ls-2269    3d...  162us+: trace_hardirqs_on <-mutex_unlock
1987        ls-2269    3d...  186us : <stack trace>
1988   => __mutex_unlock_slowpath
1989   => mutex_unlock
1990   => process_output
1991   => n_tty_write
1992   => tty_write
1993   => vfs_write
1994   => sys_write
1995   => system_call_fastpath
1996
1997This is an interesting trace. It started with kworker running and
1998scheduling out and ls taking over. But as soon as ls released the
1999rq lock and enabled interrupts (but not preemption) an interrupt
2000triggered. When the interrupt finished, it started running softirqs.
2001But while the softirq was running, another interrupt triggered.
2002When an interrupt is running inside a softirq, the annotation is 'H'.
2003
2004
2005wakeup
2006------
2007
2008One common case that people are interested in tracing is the
2009time it takes for a task that is woken to actually wake up.
2010Now for non Real-Time tasks, this can be arbitrary. But tracing
2011it nonetheless can be interesting.
2012
2013Without function tracing::
2014
2015  # echo 0 > options/function-trace
2016  # echo wakeup > current_tracer
2017  # echo 1 > tracing_on
2018  # echo 0 > tracing_max_latency
2019  # chrt -f 5 sleep 1
2020  # echo 0 > tracing_on
2021  # cat trace
2022  # tracer: wakeup
2023  #
2024  # wakeup latency trace v1.1.5 on 3.8.0-test+
2025  # --------------------------------------------------------------------
2026  # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2027  #    -----------------
2028  #    | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
2029  #    -----------------
2030  #
2031  #                  _------=> CPU#
2032  #                 / _-----=> irqs-off
2033  #                | / _----=> need-resched
2034  #                || / _---=> hardirq/softirq
2035  #                ||| / _--=> preempt-depth
2036  #                |||| /     delay
2037  #  cmd     pid   ||||| time  |   caller
2038  #     \   /      |||||  \    |   /
2039    <idle>-0       3dNs7    0us :      0:120:R   + [003]   312:100:R kworker/3:1H
2040    <idle>-0       3dNs7    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2041    <idle>-0       3d..3   15us : __schedule <-schedule
2042    <idle>-0       3d..3   15us :      0:120:R ==> [003]   312:100:R kworker/3:1H
2043
2044The tracer only traces the highest priority task in the system
2045to avoid tracing the normal circumstances. Here we see that
2046the kworker with a nice priority of -20 (not very nice) took
2047just 15 microseconds from the time it woke up to the time it
2048ran.
2049
2050Non Real-Time tasks are not that interesting. A more interesting
2051trace is to concentrate only on Real-Time tasks.
2052
2053wakeup_rt
2054---------
2055
2056In a Real-Time environment it is very important to know the
2057time from when the highest priority task is woken up to the
2058time that it executes. This is also known as "schedule
2059latency". I stress the point that this is about RT tasks. It is
2060also important to know the scheduling latency of non-RT tasks,
2061but the average schedule latency is better for non-RT tasks.
2062Tools like LatencyTop are more appropriate for such
2063measurements.
2064
2065Real-Time environments are interested in the worst case latency.
2066That is the longest latency it takes for something to happen,
2067and not the average. We can have a very fast scheduler that may
2068only have a large latency once in a while, but that would not
2069work well with Real-Time tasks.  The wakeup_rt tracer was designed
2070to record the worst case wakeups of RT tasks. Non-RT tasks are
2071not recorded because the tracer only records one worst case and
2072tracing non-RT tasks that are unpredictable will overwrite the
2073worst case latency of RT tasks (just run the normal wakeup
2074tracer for a while to see that effect).
2075
2076Since this tracer only deals with RT tasks, we will run this
2077slightly differently than we did with the previous tracers.
2078Instead of performing an 'ls', we will run 'sleep 1' under
2079'chrt' which changes the priority of the task.
2080::
2081
2082  # echo 0 > options/function-trace
2083  # echo wakeup_rt > current_tracer
2084  # echo 1 > tracing_on
2085  # echo 0 > tracing_max_latency
2086  # chrt -f 5 sleep 1
2087  # echo 0 > tracing_on
2088  # cat trace
2091  # tracer: wakeup_rt
2092  #
2093  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2094  # --------------------------------------------------------------------
2095  # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2096  #    -----------------
2097  #    | task: sleep-2389 (uid:0 nice:0 policy:1 rt_prio:5)
2098  #    -----------------
2099  #
2100  #                  _------=> CPU#
2101  #                 / _-----=> irqs-off
2102  #                | / _----=> need-resched
2103  #                || / _---=> hardirq/softirq
2104  #                ||| / _--=> preempt-depth
2105  #                |||| /     delay
2106  #  cmd     pid   ||||| time  |   caller
2107  #     \   /      |||||  \    |   /
2108    <idle>-0       3d.h4    0us :      0:120:R   + [003]  2389: 94:R sleep
2109    <idle>-0       3d.h4    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2110    <idle>-0       3d..3    5us : __schedule <-schedule
2111    <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
2112
2113
2114Running this on an idle system, we see that it only took 5 microseconds
2115to perform the task switch.  Note, since the trace point in the schedule
2116is before the actual "switch", we stop the tracing when the recorded task
2117is about to schedule in. This may change if we add a new marker at the
2118end of the scheduler.
2119
2120Notice that the recorded task is 'sleep' with the PID of 2389
2121and it has an rt_prio of 5. This priority is user-space priority
2122and not the internal kernel priority. The policy is 1 for
2123SCHED_FIFO and 2 for SCHED_RR.
2124
2125Note that the trace data shows the internal kernel priority (99 - rtprio).
2126::
2127
2128  <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
2129
2130The 0:120:R means idle was running with a nice priority of 0 (120 - 120)
2131and in the running state 'R'. The sleep task was scheduled in with
21322389: 94:R. That is, the priority is the kernel rtprio (99 - 5 = 94)
2133and it too is in the running state.
2134
2135Doing the same with chrt -r 5 and function-trace set.
2136::
2137
2138  echo 1 > options/function-trace
2139
2140  # tracer: wakeup_rt
2141  #
2142  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2143  # --------------------------------------------------------------------
2144  # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2145  #    -----------------
2146  #    | task: sleep-2448 (uid:0 nice:0 policy:1 rt_prio:5)
2147  #    -----------------
2148  #
2149  #                  _------=> CPU#
2150  #                 / _-----=> irqs-off
2151  #                | / _----=> need-resched
2152  #                || / _---=> hardirq/softirq
2153  #                ||| / _--=> preempt-depth
2154  #                |||| /     delay
2155  #  cmd     pid   ||||| time  |   caller
2156  #     \   /      |||||  \    |   /
2157    <idle>-0       3d.h4    1us+:      0:120:R   + [003]  2448: 94:R sleep
2158    <idle>-0       3d.h4    2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2159    <idle>-0       3d.h3    3us : check_preempt_curr <-ttwu_do_wakeup
2160    <idle>-0       3d.h3    3us : resched_curr <-check_preempt_curr
2161    <idle>-0       3dNh3    4us : task_woken_rt <-ttwu_do_wakeup
2162    <idle>-0       3dNh3    4us : _raw_spin_unlock <-try_to_wake_up
2163    <idle>-0       3dNh3    4us : sub_preempt_count <-_raw_spin_unlock
2164    <idle>-0       3dNh2    5us : ttwu_stat <-try_to_wake_up
2165    <idle>-0       3dNh2    5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
2166    <idle>-0       3dNh2    6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2167    <idle>-0       3dNh1    6us : _raw_spin_lock <-__run_hrtimer
2168    <idle>-0       3dNh1    6us : add_preempt_count <-_raw_spin_lock
2169    <idle>-0       3dNh2    7us : _raw_spin_unlock <-hrtimer_interrupt
2170    <idle>-0       3dNh2    7us : sub_preempt_count <-_raw_spin_unlock
2171    <idle>-0       3dNh1    7us : tick_program_event <-hrtimer_interrupt
2172    <idle>-0       3dNh1    7us : clockevents_program_event <-tick_program_event
2173    <idle>-0       3dNh1    8us : ktime_get <-clockevents_program_event
2174    <idle>-0       3dNh1    8us : lapic_next_event <-clockevents_program_event
2175    <idle>-0       3dNh1    8us : irq_exit <-smp_apic_timer_interrupt
2176    <idle>-0       3dNh1    9us : sub_preempt_count <-irq_exit
2177    <idle>-0       3dN.2    9us : idle_cpu <-irq_exit
2178    <idle>-0       3dN.2    9us : rcu_irq_exit <-irq_exit
2179    <idle>-0       3dN.2   10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
2180    <idle>-0       3dN.2   10us : sub_preempt_count <-irq_exit
2181    <idle>-0       3.N.1   11us : rcu_idle_exit <-cpu_idle
2182    <idle>-0       3dN.1   11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
2183    <idle>-0       3.N.1   11us : tick_nohz_idle_exit <-cpu_idle
2184    <idle>-0       3dN.1   12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
2185    <idle>-0       3dN.1   12us : ktime_get <-tick_nohz_idle_exit
2186    <idle>-0       3dN.1   12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
2187    <idle>-0       3dN.1   13us : cpu_load_update_nohz <-tick_nohz_idle_exit
2188    <idle>-0       3dN.1   13us : _raw_spin_lock <-cpu_load_update_nohz
2189    <idle>-0       3dN.1   13us : add_preempt_count <-_raw_spin_lock
2190    <idle>-0       3dN.2   13us : __cpu_load_update <-cpu_load_update_nohz
2191    <idle>-0       3dN.2   14us : sched_avg_update <-__cpu_load_update
2192    <idle>-0       3dN.2   14us : _raw_spin_unlock <-cpu_load_update_nohz
2193    <idle>-0       3dN.2   14us : sub_preempt_count <-_raw_spin_unlock
2194    <idle>-0       3dN.1   15us : calc_load_nohz_stop <-tick_nohz_idle_exit
2195    <idle>-0       3dN.1   15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
2196    <idle>-0       3dN.1   15us : hrtimer_cancel <-tick_nohz_idle_exit
2197    <idle>-0       3dN.1   15us : hrtimer_try_to_cancel <-hrtimer_cancel
2198    <idle>-0       3dN.1   16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
2199    <idle>-0       3dN.1   16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2200    <idle>-0       3dN.1   16us : add_preempt_count <-_raw_spin_lock_irqsave
2201    <idle>-0       3dN.2   17us : __remove_hrtimer <-remove_hrtimer.part.16
2202    <idle>-0       3dN.2   17us : hrtimer_force_reprogram <-__remove_hrtimer
2203    <idle>-0       3dN.2   17us : tick_program_event <-hrtimer_force_reprogram
2204    <idle>-0       3dN.2   18us : clockevents_program_event <-tick_program_event
2205    <idle>-0       3dN.2   18us : ktime_get <-clockevents_program_event
2206    <idle>-0       3dN.2   18us : lapic_next_event <-clockevents_program_event
2207    <idle>-0       3dN.2   19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
2208    <idle>-0       3dN.2   19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2209    <idle>-0       3dN.1   19us : hrtimer_forward <-tick_nohz_idle_exit
2210    <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
2211    <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
2212    <idle>-0       3dN.1   20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2213    <idle>-0       3dN.1   20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
2214    <idle>-0       3dN.1   21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
2215    <idle>-0       3dN.1   21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2216    <idle>-0       3dN.1   21us : add_preempt_count <-_raw_spin_lock_irqsave
2217    <idle>-0       3dN.2   22us : ktime_add_safe <-__hrtimer_start_range_ns
2218    <idle>-0       3dN.2   22us : enqueue_hrtimer <-__hrtimer_start_range_ns
2219    <idle>-0       3dN.2   22us : tick_program_event <-__hrtimer_start_range_ns
2220    <idle>-0       3dN.2   23us : clockevents_program_event <-tick_program_event
2221    <idle>-0       3dN.2   23us : ktime_get <-clockevents_program_event
2222    <idle>-0       3dN.2   23us : lapic_next_event <-clockevents_program_event
2223    <idle>-0       3dN.2   24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
2224    <idle>-0       3dN.2   24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2225    <idle>-0       3dN.1   24us : account_idle_ticks <-tick_nohz_idle_exit
2226    <idle>-0       3dN.1   24us : account_idle_time <-account_idle_ticks
2227    <idle>-0       3.N.1   25us : sub_preempt_count <-cpu_idle
2228    <idle>-0       3.N..   25us : schedule <-cpu_idle
2229    <idle>-0       3.N..   25us : __schedule <-preempt_schedule
2230    <idle>-0       3.N..   26us : add_preempt_count <-__schedule
2231    <idle>-0       3.N.1   26us : rcu_note_context_switch <-__schedule
2232    <idle>-0       3.N.1   26us : rcu_sched_qs <-rcu_note_context_switch
2233    <idle>-0       3dN.1   27us : rcu_preempt_qs <-rcu_note_context_switch
2234    <idle>-0       3.N.1   27us : _raw_spin_lock_irq <-__schedule
2235    <idle>-0       3dN.1   27us : add_preempt_count <-_raw_spin_lock_irq
2236    <idle>-0       3dN.2   28us : put_prev_task_idle <-__schedule
2237    <idle>-0       3dN.2   28us : pick_next_task_stop <-pick_next_task
2238    <idle>-0       3dN.2   28us : pick_next_task_rt <-pick_next_task
2239    <idle>-0       3dN.2   29us : dequeue_pushable_task <-pick_next_task_rt
2240    <idle>-0       3d..3   29us : __schedule <-preempt_schedule
2241    <idle>-0       3d..3   30us :      0:120:R ==> [003]  2448: 94:R sleep
2242
2243This isn't that big of a trace, even with function tracing enabled,
2244so I included the entire trace.
2245
2246The interrupt went off while the system was idle. Somewhere
2247before task_woken_rt() was called, the NEED_RESCHED flag was set;
2248this is indicated by the first occurrence of the 'N' flag.
2249
2250Latency tracing and events
2251--------------------------
2252Function tracing can induce a much larger latency, but without
2253seeing what happens within the latency it is hard to know what
2254caused it. There is a middle ground, and that is with enabling
2255events.
2256::
2257
2258  # echo 0 > options/function-trace
2259  # echo wakeup_rt > current_tracer
2260  # echo 1 > events/enable
2261  # echo 1 > tracing_on
2262  # echo 0 > tracing_max_latency
2263  # chrt -f 5 sleep 1
2264  # echo 0 > tracing_on
2265  # cat trace
2266  # tracer: wakeup_rt
2267  #
2268  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2269  # --------------------------------------------------------------------
2270  # latency: 6 us, #12/12, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2271  #    -----------------
2272  #    | task: sleep-5882 (uid:0 nice:0 policy:1 rt_prio:5)
2273  #    -----------------
2274  #
2275  #                  _------=> CPU#
2276  #                 / _-----=> irqs-off
2277  #                | / _----=> need-resched
2278  #                || / _---=> hardirq/softirq
2279  #                ||| / _--=> preempt-depth
2280  #                |||| /     delay
2281  #  cmd     pid   ||||| time  |   caller
2282  #     \   /      |||||  \    |   /
2283    <idle>-0       2d.h4    0us :      0:120:R   + [002]  5882: 94:R sleep
2284    <idle>-0       2d.h4    0us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2285    <idle>-0       2d.h4    1us : sched_wakeup: comm=sleep pid=5882 prio=94 success=1 target_cpu=002
2286    <idle>-0       2dNh2    1us : hrtimer_expire_exit: hrtimer=ffff88007796feb8
2287    <idle>-0       2.N.2    2us : power_end: cpu_id=2
2288    <idle>-0       2.N.2    3us : cpu_idle: state=4294967295 cpu_id=2
2289    <idle>-0       2dN.3    4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
2290    <idle>-0       2dN.3    4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer expires=34311211000000 softexpires=34311211000000
2291    <idle>-0       2.N.2    5us : rcu_utilization: Start context switch
2292    <idle>-0       2.N.2    5us : rcu_utilization: End context switch
2293    <idle>-0       2d..3    6us : __schedule <-schedule
2294    <idle>-0       2d..3    6us :      0:120:R ==> [002]  5882: 94:R sleep
2295
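Instead of enabling all events, the trace can be restricted to a subset
of them, for example only the scheduler and timer events (shown purely
as an illustration; any of the directories under events/ can be used)::

  # echo 0 > events/enable
  # echo 1 > events/sched/enable
  # echo 1 > events/timer/enable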
2296
2297Hardware Latency Detector
2298-------------------------
2299
2300The hardware latency detector is executed by enabling the "hwlat" tracer.
2301
2302NOTE, this tracer will affect the performance of the system as it will
2303periodically make a CPU constantly busy with interrupts disabled.
2304::
2305
2306  # echo hwlat > current_tracer
2307  # sleep 100
2308  # cat trace
2309  # tracer: hwlat
2310  #
2311  # entries-in-buffer/entries-written: 13/13   #P:8
2312  #
2313  #                              _-----=> irqs-off
2314  #                             / _----=> need-resched
2315  #                            | / _---=> hardirq/softirq
2316  #                            || / _--=> preempt-depth
2317  #                            ||| /     delay
2318  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2319  #              | |       |   ||||       |         |
2320             <...>-1729  [001] d...   678.473449: #1     inner/outer(us):   11/12    ts:1581527483.343962693 count:6
2321             <...>-1729  [004] d...   689.556542: #2     inner/outer(us):   16/9     ts:1581527494.889008092 count:1
2322             <...>-1729  [005] d...   714.756290: #3     inner/outer(us):   16/16    ts:1581527519.678961629 count:5
2323             <...>-1729  [001] d...   718.788247: #4     inner/outer(us):    9/17    ts:1581527523.889012713 count:1
2324             <...>-1729  [002] d...   719.796341: #5     inner/outer(us):   13/9     ts:1581527524.912872606 count:1
2325             <...>-1729  [006] d...   844.787091: #6     inner/outer(us):    9/12    ts:1581527649.889048502 count:2
2326             <...>-1729  [003] d...   849.827033: #7     inner/outer(us):   18/9     ts:1581527654.889013793 count:1
2327             <...>-1729  [007] d...   853.859002: #8     inner/outer(us):    9/12    ts:1581527658.889065736 count:1
2328             <...>-1729  [001] d...   855.874978: #9     inner/outer(us):    9/11    ts:1581527660.861991877 count:1
2329             <...>-1729  [001] d...   863.938932: #10    inner/outer(us):    9/11    ts:1581527668.970010500 count:1 nmi-total:7 nmi-count:1
2330             <...>-1729  [007] d...   878.050780: #11    inner/outer(us):    9/12    ts:1581527683.385002600 count:1 nmi-total:5 nmi-count:1
2331             <...>-1729  [007] d...   886.114702: #12    inner/outer(us):    9/12    ts:1581527691.385001600 count:1
2332
2333
2334The header of the above output is similar to that of other tracers. All
2335events will have interrupts disabled ('d'). Under the FUNCTION title there is:
2336
2337 #1
2338	This is the count of events recorded that were greater than the
2339	tracing_threshold (See below).
2340
2341 inner/outer(us):   11/12
2342
2343      This shows two numbers: the "inner latency" and the "outer latency". The
2344      test runs in a loop checking a timestamp twice. The latency detected
2345      between the two timestamps is the "inner latency", and the latency
2346      detected between the previous timestamp and the next timestamp of the
2347      loop is the "outer latency".
2348
2349 ts:1581527483.343962693
2350
2351      The absolute timestamp at which the first latency was recorded in the window.
2352
2353 count:6
2354
2355      The number of times a latency was detected during the window.
2356
2357 nmi-total:7 nmi-count:1
2358
2359      On architectures that support it, if an NMI comes in during the
2360      test, the time spent in NMI is reported in "nmi-total" (in
2361      microseconds).
2362
2363      All architectures that have NMIs will show the "nmi-count" if an
2364      NMI comes in during the test.
2365
2366hwlat files:
2367
2368  tracing_threshold
2369	This gets automatically set to "10" to represent 10
2370	microseconds. This is the threshold of latency that
2371	needs to be detected before the trace will be recorded.
2372
2373	Note, when the hwlat tracer is finished (another tracer is
2374	written into "current_tracer"), the original value for
2375	tracing_threshold is placed back into this file.
2376
2377  hwlat_detector/width
2378	The length of time the test runs with interrupts disabled.
2379
2380  hwlat_detector/window
2381	The length of time of the window in which the test
2382	runs. That is, the test will run for "width"
2383	microseconds per "window" microseconds.
2384
2385  tracing_cpumask
2386	When the test is started, a kernel thread is created to
2387	run the test. This thread will alternate between CPUs
2388	listed in the tracing_cpumask between each period
2389	(one "window"). To limit the test to specific CPUs,
2390	set the mask in this file to only the CPUs that the test
2391	should run on.
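
As an illustration only (the values below are arbitrary), the detector
could be set up to report latencies above 20 microseconds, spinning for
half of every one-second window and restricted to CPUs 0 and 1::

  # echo hwlat > current_tracer
  # echo 20 > tracing_threshold
  # echo 500000 > hwlat_detector/width
  # echo 1000000 > hwlat_detector/window
  # echo 3 > tracing_cpumask
  # sleep 60
  # cat trace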
2392
2393function
2394--------
2395
2396This tracer is the function tracer. Enabling the function tracer
2397can be done from the tracefs file system. Make sure
2398ftrace_enabled is set; otherwise this tracer is a nop.
2399See the "ftrace_enabled" section below.
2400::
2401
2402  # sysctl kernel.ftrace_enabled=1
2403  # echo function > current_tracer
2404  # echo 1 > tracing_on
2405  # usleep 1
2406  # echo 0 > tracing_on
2407  # cat trace
2408  # tracer: function
2409  #
2410  # entries-in-buffer/entries-written: 24799/24799   #P:4
2411  #
2412  #                              _-----=> irqs-off
2413  #                             / _----=> need-resched
2414  #                            | / _---=> hardirq/softirq
2415  #                            || / _--=> preempt-depth
2416  #                            ||| /     delay
2417  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2418  #              | |       |   ||||       |         |
2419              bash-1994  [002] ....  3082.063030: mutex_unlock <-rb_simple_write
2420              bash-1994  [002] ....  3082.063031: __mutex_unlock_slowpath <-mutex_unlock
2421              bash-1994  [002] ....  3082.063031: __fsnotify_parent <-fsnotify_modify
2422              bash-1994  [002] ....  3082.063032: fsnotify <-fsnotify_modify
2423              bash-1994  [002] ....  3082.063032: __srcu_read_lock <-fsnotify
2424              bash-1994  [002] ....  3082.063032: add_preempt_count <-__srcu_read_lock
2425              bash-1994  [002] ...1  3082.063032: sub_preempt_count <-__srcu_read_lock
2426              bash-1994  [002] ....  3082.063033: __srcu_read_unlock <-fsnotify
2427  [...]
2428
2429
2430Note: function tracer uses ring buffers to store the above
2431entries. The newest data may overwrite the oldest data.
2432Sometimes using echo to stop the trace is not sufficient because
2433the tracing could have overwritten the data that you wanted to
2434record. For this reason, it is sometimes better to disable
2435tracing directly from a program. This allows you to stop the
2436tracing at the point that you hit the part that you are
2437interested in. To disable the tracing directly from a C program,
2438something like the following code snippet can be used::
2439
2440	int trace_fd;
2441	[...]
2442	int main(int argc, char *argv[]) {
2443		[...]
2444		trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
2445		[...]
2446		if (condition_hit()) {
2447			write(trace_fd, "0", 1);
2448		}
2449		[...]
2450	}
2451
2452
2453Single thread tracing
2454---------------------
2455
2456By writing into set_ftrace_pid you can trace a
2457single thread. For example::
2458
2459  # cat set_ftrace_pid
2460  no pid
2461  # echo 3111 > set_ftrace_pid
2462  # cat set_ftrace_pid
2463  3111
2464  # echo function > current_tracer
2465  # cat trace | head
2466  # tracer: function
2467  #
2468  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
2469  #              | |       |          |         |
2470      yum-updatesd-3111  [003]  1637.254676: finish_task_switch <-thread_return
2471      yum-updatesd-3111  [003]  1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
2472      yum-updatesd-3111  [003]  1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
2473      yum-updatesd-3111  [003]  1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
2474      yum-updatesd-3111  [003]  1637.254685: fget_light <-do_sys_poll
2475      yum-updatesd-3111  [003]  1637.254686: pipe_poll <-do_sys_poll
2476  # echo > set_ftrace_pid
2477  # cat trace | head
2478  # tracer: function
2479  #
2480  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
2481  #              | |       |          |         |
2482  ##### CPU 3 buffer started ####
2483      yum-updatesd-3111  [003]  1701.957688: free_poll_entry <-poll_freewait
2484      yum-updatesd-3111  [003]  1701.957689: remove_wait_queue <-free_poll_entry
2485      yum-updatesd-3111  [003]  1701.957691: fput <-free_poll_entry
2486      yum-updatesd-3111  [003]  1701.957692: audit_syscall_exit <-sysret_audit
2487      yum-updatesd-3111  [003]  1701.957693: path_put <-audit_syscall_exit
2488
2489If you want to trace a command from the moment it starts executing,
2490you could use something like this simple program.
2491::
2492
2493	#include <stdio.h>
2494	#include <stdlib.h>
2495	#include <sys/types.h>
2496	#include <sys/stat.h>
2497	#include <fcntl.h>
2498	#include <unistd.h>
2499	#include <string.h>
2500
2501	#define _STR(x) #x
2502	#define STR(x) _STR(x)
2503	#define MAX_PATH 256
2504
2505	const char *find_tracefs(void)
2506	{
2507	       static char tracefs[MAX_PATH+1];
2508	       static int tracefs_found;
2509	       char type[100];
2510	       FILE *fp;
2511
2512	       if (tracefs_found)
2513		       return tracefs;
2514
2515	       if ((fp = fopen("/proc/mounts","r")) == NULL) {
2516		       perror("/proc/mounts");
2517		       return NULL;
2518	       }
2519
2520	       while (fscanf(fp, "%*s %"
2521		             STR(MAX_PATH)
2522		             "s %99s %*s %*d %*d\n",
2523		             tracefs, type) == 2) {
2524		       if (strcmp(type, "tracefs") == 0)
2525		               break;
2526	       }
2527	       fclose(fp);
2528
2529	       if (strcmp(type, "tracefs") != 0) {
2530		       fprintf(stderr, "tracefs not mounted");
2531		       return NULL;
2532	       }
2533
2534	       /* the tracefs mount point itself is the tracing directory */
2535	       tracefs_found = 1;
2536
2537	       return tracefs;
2538	}
2539
2540	const char *tracing_file(const char *file_name)
2541	{
2542	       static char trace_file[MAX_PATH+1];
2543	       snprintf(trace_file, MAX_PATH, "%s/%s", find_tracefs(), file_name);
2544	       return trace_file;
2545	}
2546
2547	int main (int argc, char **argv)
2548	{
2549		if (argc < 2)
2550		        exit(-1);
2551
2552		if (fork() > 0) {
2553		        int fd, ffd;
2554		        char line[64];
2555		        int s;
2556
2557		        ffd = open(tracing_file("current_tracer"), O_WRONLY);
2558		        if (ffd < 0)
2559		                exit(-1);
2560		        write(ffd, "nop", 3);
2561
2562		        fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
2563		        s = sprintf(line, "%d\n", getpid());
2564		        write(fd, line, s);
2565
2566		        write(ffd, "function", 8);
2567
2568		        close(fd);
2569		        close(ffd);
2570
2571		        execvp(argv[1], argv+1);
2572		}
2573
2574		return 0;
2575	}
2576
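Assuming the program above is saved as ftrace-exec.c (both the file name
and the binary name here are just examples), it could be built and used
like this::

  # gcc -o ftrace-exec ftrace-exec.c
  # ./ftrace-exec ls -l
  # cat trace
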
2577Or this simple script!
2578::
2579
2580  #!/bin/bash
2581
2582  tracefs=`sed -ne 's/^tracefs \(.*\) tracefs.*/\1/p' /proc/mounts`
2583  echo 0 > $tracefs/tracing_on
2584  echo $$ > $tracefs/set_ftrace_pid
2585  echo function > $tracefs/current_tracer
2586  echo 1 > $tracefs/tracing_on
2587  exec "$@"
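
Saved as, say, ftrace-it.sh and made executable (the name is only an
example), the script wraps any command in the same way::

  # ./ftrace-it.sh ls -l
  # cat trace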
2588
2589
2590function graph tracer
2591---------------------------
2592
2593This tracer is similar to the function tracer except that it
2594probes a function on its entry and its exit. This is done by
2595using a dynamically allocated stack of return addresses in each
2596task_struct. On function entry the tracer overwrites the return
2597address of each function traced to set a custom probe. Thus the
2598original return address is stored on the stack of return addresses
2599in the task_struct.
2600
2601Probing on both ends of a function leads to special features
2602such as:
2603
2604- measuring a function's execution time
2605- having a reliable call stack to draw a graph of function calls
2606
2607This tracer is useful in several situations:
2608
2609- you want to find the reason for strange kernel behavior and
2610  need to see in detail what happens in any area (or a specific
2611  one).
2612
2613- you are experiencing weird latencies but it's difficult to
2614  find their origin.
2615
2616- you want to quickly find which path is taken by a specific
2617  function
2618
2619- you just want to peek inside a working kernel and want to see
2620  what happens there.
2621
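The tracer is selected like any other, by writing its name into
current_tracer (a minimal sketch of a short capture)::

  # echo function_graph > current_tracer
  # echo 1 > tracing_on
  # usleep 1
  # echo 0 > tracing_on
  # cat trace

A trace captured this way looks like the following:
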
2622::
2623
2624  # tracer: function_graph
2625  #
2626  # CPU  DURATION                  FUNCTION CALLS
2627  # |     |   |                     |   |   |   |
2628
2629   0)               |  sys_open() {
2630   0)               |    do_sys_open() {
2631   0)               |      getname() {
2632   0)               |        kmem_cache_alloc() {
2633   0)   1.382 us    |          __might_sleep();
2634   0)   2.478 us    |        }
2635   0)               |        strncpy_from_user() {
2636   0)               |          might_fault() {
2637   0)   1.389 us    |            __might_sleep();
2638   0)   2.553 us    |          }
2639   0)   3.807 us    |        }
2640   0)   7.876 us    |      }
2641   0)               |      alloc_fd() {
2642   0)   0.668 us    |        _spin_lock();
2643   0)   0.570 us    |        expand_files();
2644   0)   0.586 us    |        _spin_unlock();
2645
2646
2647There are several columns that can be dynamically
2648enabled/disabled. You can use any combination of options you
2649want, depending on your needs.
2650
2651- The cpu number on which the function executed is default
2652  enabled.  It is sometimes better to only trace one cpu (see
2653  tracing_cpumask file), otherwise you might sometimes see unordered
2654  function calls when the trace switches between CPUs.
2655
2656	- hide: echo nofuncgraph-cpu > trace_options
2657	- show: echo funcgraph-cpu > trace_options
2658
2659- The duration (function's time of execution) is displayed on
2660  the closing bracket line of a function, or on the same line
2661  as the function itself in the case of a leaf function. It is default
2662  enabled.
2663
2664	- hide: echo nofuncgraph-duration > trace_options
2665	- show: echo funcgraph-duration > trace_options
2666
2667- The overhead field precedes the duration field when duration
2668  thresholds are reached.
2669
2670	- hide: echo nofuncgraph-overhead > trace_options
2671	- show: echo funcgraph-overhead > trace_options
2672	- depends on: funcgraph-duration
2673
2674  ie::
2675
2676    3) # 1837.709 us |          } /* __switch_to */
2677    3)               |          finish_task_switch() {
2678    3)   0.313 us    |            _raw_spin_unlock_irq();
2679    3)   3.177 us    |          }
2680    3) # 1889.063 us |        } /* __schedule */
2681    3) ! 140.417 us  |      } /* __schedule */
2682    3) # 2034.948 us |    } /* schedule */
2683    3) * 33998.59 us |  } /* schedule_preempt_disabled */
2684
2685    [...]
2686
2687    1)   0.260 us    |              msecs_to_jiffies();
2688    1)   0.313 us    |              __rcu_read_unlock();
2689    1) + 61.770 us   |            }
2690    1) + 64.479 us   |          }
2691    1)   0.313 us    |          rcu_bh_qs();
2692    1)   0.313 us    |          __local_bh_enable();
2693    1) ! 217.240 us  |        }
2694    1)   0.365 us    |        idle_cpu();
2695    1)               |        rcu_irq_exit() {
2696    1)   0.417 us    |          rcu_eqs_enter_common.isra.47();
2697    1)   3.125 us    |        }
2698    1) ! 227.812 us  |      }
2699    1) ! 457.395 us  |    }
2700    1) @ 119760.2 us |  }
2701
2702    [...]
2703
2704    2)               |    handle_IPI() {
2705    1)   6.979 us    |                  }
2706    2)   0.417 us    |      scheduler_ipi();
2707    1)   9.791 us    |                }
2708    1) + 12.917 us   |              }
2709    2)   3.490 us    |    }
2710    1) + 15.729 us   |            }
2711    1) + 18.542 us   |          }
2712    2) $ 3594274 us  |  }
2713
2714Flags::
2715
2716  + means that the function exceeded 10 usecs.
2717  ! means that the function exceeded 100 usecs.
2718  # means that the function exceeded 1000 usecs.
2719  * means that the function exceeded 10 msecs.
2720  @ means that the function exceeded 100 msecs.
2721  $ means that the function exceeded 1 sec.
2722
2723
2724- The task/pid field displays the thread cmdline and pid which
2725  executed the function. It is default disabled.
2726
2727	- hide: echo nofuncgraph-proc > trace_options
2728	- show: echo funcgraph-proc > trace_options
2729
2730  ie::
2731
2732    # tracer: function_graph
2733    #
2734    # CPU  TASK/PID        DURATION                  FUNCTION CALLS
2735    # |    |    |           |   |                     |   |   |   |
2736    0)    sh-4802     |               |                  d_free() {
2737    0)    sh-4802     |               |                    call_rcu() {
2738    0)    sh-4802     |               |                      __call_rcu() {
2739    0)    sh-4802     |   0.616 us    |                        rcu_process_gp_end();
2740    0)    sh-4802     |   0.586 us    |                        check_for_new_grace_period();
2741    0)    sh-4802     |   2.899 us    |                      }
2742    0)    sh-4802     |   4.040 us    |                    }
2743    0)    sh-4802     |   5.151 us    |                  }
2744    0)    sh-4802     | + 49.370 us   |                }
2745
2746
2747- The absolute time field is an absolute timestamp given by the
2748  system clock since it started. A snapshot of this time is
2749  given on each entry/exit of functions.
2750
2751	- hide: echo nofuncgraph-abstime > trace_options
2752	- show: echo funcgraph-abstime > trace_options
2753
2754  ie::
2755
2756    #
2757    #      TIME       CPU  DURATION                  FUNCTION CALLS
2758    #       |         |     |   |                     |   |   |   |
2759    360.774522 |   1)   0.541 us    |                                          }
2760    360.774522 |   1)   4.663 us    |                                        }
2761    360.774523 |   1)   0.541 us    |                                        __wake_up_bit();
2762    360.774524 |   1)   6.796 us    |                                      }
2763    360.774524 |   1)   7.952 us    |                                    }
2764    360.774525 |   1)   9.063 us    |                                  }
2765    360.774525 |   1)   0.615 us    |                                  journal_mark_dirty();
2766    360.774527 |   1)   0.578 us    |                                  __brelse();
2767    360.774528 |   1)               |                                  reiserfs_prepare_for_journal() {
2768    360.774528 |   1)               |                                    unlock_buffer() {
2769    360.774529 |   1)               |                                      wake_up_bit() {
2770    360.774529 |   1)               |                                        bit_waitqueue() {
2771    360.774530 |   1)   0.594 us    |                                          __phys_addr();
2772
2773
2774The function name is always displayed after the closing bracket
2775for a function if the start of that function is not in the
2776trace buffer.
2777
Display of the function name after the closing bracket may be
enabled for functions whose start is in the trace buffer,
allowing easier searching with grep for function durations.
It is disabled by default.
2782
2783	- hide: echo nofuncgraph-tail > trace_options
2784	- show: echo funcgraph-tail > trace_options
2785
2786  Example with nofuncgraph-tail (default)::
2787
2788    0)               |      putname() {
2789    0)               |        kmem_cache_free() {
2790    0)   0.518 us    |          __phys_addr();
2791    0)   1.757 us    |        }
2792    0)   2.861 us    |      }
2793
2794  Example with funcgraph-tail::
2795
2796    0)               |      putname() {
2797    0)               |        kmem_cache_free() {
2798    0)   0.518 us    |          __phys_addr();
2799    0)   1.757 us    |        } /* kmem_cache_free() */
2800    0)   2.861 us    |      } /* putname() */
2801
The return value of each traced function can be displayed after
an equal sign "=". When investigating system call failures, this
makes it easy to quickly locate the function that first returns
an error code.
2806
2807	- hide: echo nofuncgraph-retval > trace_options
2808	- show: echo funcgraph-retval > trace_options
2809
2810  Example with funcgraph-retval::
2811
2812    1)               |    cgroup_migrate() {
2813    1)   0.651 us    |      cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
2814    1)               |      cgroup_migrate_execute() {
2815    1)               |        cpu_cgroup_can_attach() {
2816    1)               |          cgroup_taskset_first() {
2817    1)   0.732 us    |            cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
2818    1)   1.232 us    |          } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
2819    1)   0.380 us    |          sched_rt_can_attach(); /* = 0x0 */
2820    1)   2.335 us    |        } /* cpu_cgroup_can_attach = -22 */
2821    1)   4.369 us    |      } /* cgroup_migrate_execute = -22 */
2822    1)   7.143 us    |    } /* cgroup_migrate = -22 */
2823
The above example shows that the function cpu_cgroup_can_attach
was the first to return the error code -22; reading the code of
this function then reveals the root cause.
2827
When the option funcgraph-retval-hex is not set, the return value is
displayed in a smart way. Specifically, if it is an error code,
it will be printed in signed decimal format; otherwise it will be
printed in hexadecimal format.
2832
2833	- smart: echo nofuncgraph-retval-hex > trace_options
2834	- hexadecimal: echo funcgraph-retval-hex > trace_options
2835
2836  Example with funcgraph-retval-hex::
2837
2838    1)               |      cgroup_migrate() {
2839    1)   0.651 us    |        cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
2840    1)               |        cgroup_migrate_execute() {
2841    1)               |          cpu_cgroup_can_attach() {
2842    1)               |            cgroup_taskset_first() {
2843    1)   0.732 us    |              cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
2844    1)   1.232 us    |            } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
2845    1)   0.380 us    |            sched_rt_can_attach(); /* = 0x0 */
2846    1)   2.335 us    |          } /* cpu_cgroup_can_attach = 0xffffffea */
2847    1)   4.369 us    |        } /* cgroup_migrate_execute = 0xffffffea */
2848    1)   7.143 us    |      } /* cgroup_migrate = 0xffffffea */
2849
2850At present, there are some limitations when using the funcgraph-retval
2851option, and these limitations will be eliminated in the future:
2852
2853- Even if the function return type is void, a return value will still
2854  be printed, and you can just ignore it.
2855
2856- Even if return values are stored in multiple registers, only the
2857  value contained in the first register will be recorded and printed.
2858  To illustrate, in the x86 architecture, eax and edx are used to store
2859  a 64-bit return value, with the lower 32 bits saved in eax and the
2860  upper 32 bits saved in edx. However, only the value stored in eax
2861  will be recorded and printed.
2862
2863- In certain procedure call standards, such as arm64's AAPCS64, when a
2864  type is smaller than a GPR, it is the responsibility of the consumer
2865  to perform the narrowing, and the upper bits may contain UNKNOWN values.
2866  Therefore, it is advisable to check the code for such cases. For instance,
2867  when using a u8 in a 64-bit GPR, bits [63:8] may contain arbitrary values,
2868  especially when larger types are truncated, whether explicitly or implicitly.
2869  Here are some specific cases to illustrate this point:
2870
2871  **Case One**:
2872
2873  The function narrow_to_u8 is defined as follows::
2874
2875	u8 narrow_to_u8(u64 val)
2876	{
2877		// implicitly truncated
2878		return val;
2879	}
2880
2881  It may be compiled to::
2882
2883	narrow_to_u8:
2884		< ... ftrace instrumentation ... >
2885		RET
2886
2887  If you pass 0x123456789abcdef to this function and want to narrow it,
2888  it may be recorded as 0x123456789abcdef instead of 0xef.
2889
2890  **Case Two**:
2891
2892  The function error_if_not_4g_aligned is defined as follows::
2893
2894	int error_if_not_4g_aligned(u64 val)
2895	{
2896		if (val & GENMASK(31, 0))
2897			return -EINVAL;
2898
2899		return 0;
2900	}
2901
2902  It could be compiled to::
2903
2904	error_if_not_4g_aligned:
2905		CBNZ    w0, .Lnot_aligned
2906		RET			// bits [31:0] are zero, bits
2907					// [63:32] are UNKNOWN
2908	.Lnot_aligned:
2909		MOV    x0, #-EINVAL
2910		RET
2911
2912  When passing 0x2_0000_0000 to it, the return value may be recorded as
2913  0x2_0000_0000 instead of 0.
2914
You can put comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep()::
2919
2920	trace_printk("I'm a comment!\n")
2921
2922will produce::
2923
2924   1)               |             __might_sleep() {
2925   1)               |                /* I'm a comment! */
2926   1)   1.449 us    |             }
2927
2928
2929You might find other useful features for this tracer in the
2930following "dynamic ftrace" section such as tracing only specific
2931functions or tasks.
2932
2933dynamic ftrace
2934--------------
2935
If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is that the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc)
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch when compiling the kernel.)
2942
At compile time every C file object is run through the
recordmcount program (located in the scripts directory). This
program will parse the ELF headers in the C object to find all
the locations in the .text section that call mcount. Starting
with gcc version 4.6, the -mfentry option has been added for x86;
it makes the compiler emit a call to "__fentry__" instead of
"mcount", placed before the creation of the stack frame.
2950
Note, not all functions are traced. They may be prevented by either
a notrace annotation or blocked in another way, and inline functions
are never traced. Check the "available_filter_functions" file to see
which functions can be traced.
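
For example, to check whether a particular function can be traced,
you can grep this file (illustrative; schedule is just an example
function)::

  # grep -w schedule available_filter_functions
  schedule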
2955
2956A section called "__mcount_loc" is created that holds
2957references to all the mcount/fentry call sites in the .text section.
2958The recordmcount program re-links this section back into the
2959original object. The final linking stage of the kernel will add all these
2960references into a single table.
2961
2962On boot up, before SMP is initialized, the dynamic ftrace code
2963scans this table and updates all the locations into nops. It
2964also records the locations, which are added to the
2965available_filter_functions list.  Modules are processed as they
2966are loaded and before they are executed.  When a module is
2967unloaded, it also removes its functions from the ftrace function
2968list. This is automatic in the module unload code, and the
2969module author does not need to worry about it.
2970
2971When tracing is enabled, the process of modifying the function
2972tracepoints is dependent on architecture. The old method is to use
2973kstop_machine to prevent races with the CPUs executing code being
2974modified (which can cause the CPU to do undesirable things, especially
2975if the modified code crosses cache (or page) boundaries), and the nops are
2976patched back to calls. But this time, they do not call mcount
2977(which is just a function stub). They now call into the ftrace
2978infrastructure.
2979
The new method of modifying the function tracepoints is to place
a breakpoint at the location to be modified, sync all CPUs, and
modify the rest of the instruction not covered by the breakpoint.
Then sync all CPUs again, and remove the breakpoint by writing
the finished version of the ftrace call site.
2985
2986Some archs do not even need to monkey around with the synchronization,
2987and can just slap the new code on top of the old without any
2988problems with other CPUs executing it at the same time.
2989
One special side-effect of recording the functions being
traced is that we can now selectively choose which functions we
wish to trace and which ones we want the mcount calls to remain
as nops.
2994
2995Two files are used, one for enabling and one for disabling the
2996tracing of specified functions. They are:
2997
2998  set_ftrace_filter
2999
3000and
3001
3002  set_ftrace_notrace
3003
3004A list of available functions that you can add to these files is
3005listed in:
3006
3007   available_filter_functions
3008
3009::
3010
3011  # cat available_filter_functions
3012  put_prev_task_idle
3013  kmem_cache_create
3014  pick_next_task_rt
3015  cpus_read_lock
3016  pick_next_task_fair
3017  mutex_lock
3018  [...]
3019
3020If I am only interested in sys_nanosleep and hrtimer_interrupt::
3021
3022  # echo sys_nanosleep hrtimer_interrupt > set_ftrace_filter
3023  # echo function > current_tracer
3024  # echo 1 > tracing_on
3025  # usleep 1
3026  # echo 0 > tracing_on
3027  # cat trace
3028  # tracer: function
3029  #
3030  # entries-in-buffer/entries-written: 5/5   #P:4
3031  #
3032  #                              _-----=> irqs-off
3033  #                             / _----=> need-resched
3034  #                            | / _---=> hardirq/softirq
3035  #                            || / _--=> preempt-depth
3036  #                            ||| /     delay
3037  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3038  #              | |       |   ||||       |         |
3039            usleep-2665  [001] ....  4186.475355: sys_nanosleep <-system_call_fastpath
3040            <idle>-0     [001] d.h1  4186.475409: hrtimer_interrupt <-smp_apic_timer_interrupt
3041            usleep-2665  [001] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
3042            <idle>-0     [003] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
3043            <idle>-0     [002] d.h1  4186.475427: hrtimer_interrupt <-smp_apic_timer_interrupt
3044
3045To see which functions are being traced, you can cat the file:
3046::
3047
3048  # cat set_ftrace_filter
3049  hrtimer_interrupt
3050  sys_nanosleep
3051
3052
3053Perhaps this is not enough. The filters also allow glob(7) matching.
3054
3055  ``<match>*``
3056	will match functions that begin with <match>
3057  ``*<match>``
3058	will match functions that end with <match>
3059  ``*<match>*``
3060	will match functions that have <match> in it
3061  ``<match1>*<match2>``
3062	will match functions that begin with <match1> and end with <match2>
3063
3064.. note::
3065      It is better to use quotes to enclose the wild cards,
3066      otherwise the shell may expand the parameters into names
3067      of files in the local directory.
3068
3069::
3070
3071  # echo 'hrtimer_*' > set_ftrace_filter
3072
3073Produces::
3074
3075  # tracer: function
3076  #
3077  # entries-in-buffer/entries-written: 897/897   #P:4
3078  #
3079  #                              _-----=> irqs-off
3080  #                             / _----=> need-resched
3081  #                            | / _---=> hardirq/softirq
3082  #                            || / _--=> preempt-depth
3083  #                            ||| /     delay
3084  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3085  #              | |       |   ||||       |         |
3086            <idle>-0     [003] dN.1  4228.547803: hrtimer_cancel <-tick_nohz_idle_exit
3087            <idle>-0     [003] dN.1  4228.547804: hrtimer_try_to_cancel <-hrtimer_cancel
3088            <idle>-0     [003] dN.2  4228.547805: hrtimer_force_reprogram <-__remove_hrtimer
3089            <idle>-0     [003] dN.1  4228.547805: hrtimer_forward <-tick_nohz_idle_exit
3090            <idle>-0     [003] dN.1  4228.547805: hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
3091            <idle>-0     [003] d..1  4228.547858: hrtimer_get_next_event <-get_next_timer_interrupt
3092            <idle>-0     [003] d..1  4228.547859: hrtimer_start <-__tick_nohz_idle_enter
3093            <idle>-0     [003] d..2  4228.547860: hrtimer_force_reprogram <-__rem
3094
3095Notice that we lost the sys_nanosleep.
3096::
3097
3098  # cat set_ftrace_filter
3099  hrtimer_run_queues
3100  hrtimer_run_pending
3101  hrtimer_setup
3102  hrtimer_cancel
3103  hrtimer_try_to_cancel
3104  hrtimer_forward
3105  hrtimer_start
3106  hrtimer_reprogram
3107  hrtimer_force_reprogram
3108  hrtimer_get_next_event
3109  hrtimer_interrupt
3110  hrtimer_nanosleep
3111  hrtimer_wakeup
3112  hrtimer_get_remaining
3113  hrtimer_get_res
3114  hrtimer_init_sleeper
3115
3116
This is because the '>' and '>>' act just like they do in bash.
To rewrite the filters, use '>'.
To append to the filters, use '>>'.
3120
3121To clear out a filter so that all functions will be recorded
3122again::
3123
3124 # echo > set_ftrace_filter
3125 # cat set_ftrace_filter
3126 #
3127
3128Again, now we want to append.
3129
3130::
3131
3132  # echo sys_nanosleep > set_ftrace_filter
3133  # cat set_ftrace_filter
3134  sys_nanosleep
3135  # echo 'hrtimer_*' >> set_ftrace_filter
3136  # cat set_ftrace_filter
3137  hrtimer_run_queues
3138  hrtimer_run_pending
3139  hrtimer_setup
3140  hrtimer_cancel
3141  hrtimer_try_to_cancel
3142  hrtimer_forward
3143  hrtimer_start
3144  hrtimer_reprogram
3145  hrtimer_force_reprogram
3146  hrtimer_get_next_event
3147  hrtimer_interrupt
3148  sys_nanosleep
3149  hrtimer_nanosleep
3150  hrtimer_wakeup
3151  hrtimer_get_remaining
3152  hrtimer_get_res
3153  hrtimer_init_sleeper
3154
3155
3156The set_ftrace_notrace prevents those functions from being
3157traced.
3158::
3159
3160  # echo '*preempt*' '*lock*' > set_ftrace_notrace
3161
3162Produces::
3163
3164  # tracer: function
3165  #
3166  # entries-in-buffer/entries-written: 39608/39608   #P:4
3167  #
3168  #                              _-----=> irqs-off
3169  #                             / _----=> need-resched
3170  #                            | / _---=> hardirq/softirq
3171  #                            || / _--=> preempt-depth
3172  #                            ||| /     delay
3173  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3174  #              | |       |   ||||       |         |
3175              bash-1994  [000] ....  4342.324896: file_ra_state_init <-do_dentry_open
3176              bash-1994  [000] ....  4342.324897: open_check_o_direct <-do_last
3177              bash-1994  [000] ....  4342.324897: ima_file_check <-do_last
3178              bash-1994  [000] ....  4342.324898: process_measurement <-ima_file_check
3179              bash-1994  [000] ....  4342.324898: ima_get_action <-process_measurement
3180              bash-1994  [000] ....  4342.324898: ima_match_policy <-ima_get_action
3181              bash-1994  [000] ....  4342.324899: do_truncate <-do_last
3182              bash-1994  [000] ....  4342.324899: setattr_should_drop_suidgid <-do_truncate
3183              bash-1994  [000] ....  4342.324899: notify_change <-do_truncate
3184              bash-1994  [000] ....  4342.324900: current_fs_time <-notify_change
3185              bash-1994  [000] ....  4342.324900: current_kernel_time <-current_fs_time
3186              bash-1994  [000] ....  4342.324900: timespec_trunc <-current_fs_time
3187
3188We can see that there's no more lock or preempt tracing.
3189
3190Selecting function filters via index
3191------------------------------------
3192
Because processing of strings is expensive (the address of the function
needs to be looked up before comparing to the string being passed in),
an index can be used as well to enable functions. This is useful in the
case of setting thousands of specific functions at a time. By passing
in a list of numbers, no string processing will occur. Instead, the function
at the specific location in the internal array (which corresponds to the
functions in the "available_filter_functions" file) is selected.
3200
3201::
3202
3203  # echo 1 > set_ftrace_filter
3204
3205Will select the first function listed in "available_filter_functions"
3206
3207::
3208
3209  # head -1 available_filter_functions
3210  trace_initcall_finish_cb
3211
3212  # cat set_ftrace_filter
3213  trace_initcall_finish_cb
3214
3215  # head -50 available_filter_functions | tail -1
3216  x86_pmu_commit_txn
3217
3218  # echo 1 50 > set_ftrace_filter
3219  # cat set_ftrace_filter
3220  trace_initcall_finish_cb
3221  x86_pmu_commit_txn
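
Since an index is simply the line number of the function in
"available_filter_functions", grep -n can be used to look up the
indexes of interest (a minimal sketch; the function name and the
index shown are illustrative and will differ between kernels)::

  # grep -nw schedule available_filter_functions
  2145:schedule
  # echo 2145 > set_ftrace_filter
  # cat set_ftrace_filter
  schedule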
3222
3223Dynamic ftrace with the function graph tracer
3224---------------------------------------------
3225
3226Although what has been explained above concerns both the
3227function tracer and the function-graph-tracer, there are some
3228special features only available in the function-graph tracer.
3229
3230If you want to trace only one function and all of its children,
3231you just have to echo its name into set_graph_function::
3232
3233 echo __do_fault > set_graph_function
3234
3235will produce the following "expanded" trace of the __do_fault()
3236function::
3237
3238   0)               |  __do_fault() {
3239   0)               |    filemap_fault() {
3240   0)               |      find_lock_page() {
3241   0)   0.804 us    |        find_get_page();
3242   0)               |        __might_sleep() {
3243   0)   1.329 us    |        }
3244   0)   3.904 us    |      }
3245   0)   4.979 us    |    }
3246   0)   0.653 us    |    _spin_lock();
3247   0)   0.578 us    |    page_add_file_rmap();
3248   0)   0.525 us    |    native_set_pte_at();
3249   0)   0.585 us    |    _spin_unlock();
3250   0)               |    unlock_page() {
3251   0)   0.541 us    |      page_waitqueue();
3252   0)   0.639 us    |      __wake_up_bit();
3253   0)   2.786 us    |    }
3254   0) + 14.237 us   |  }
3255   0)               |  __do_fault() {
3256   0)               |    filemap_fault() {
3257   0)               |      find_lock_page() {
3258   0)   0.698 us    |        find_get_page();
3259   0)               |        __might_sleep() {
3260   0)   1.412 us    |        }
3261   0)   3.950 us    |      }
3262   0)   5.098 us    |    }
3263   0)   0.631 us    |    _spin_lock();
3264   0)   0.571 us    |    page_add_file_rmap();
3265   0)   0.526 us    |    native_set_pte_at();
3266   0)   0.586 us    |    _spin_unlock();
3267   0)               |    unlock_page() {
3268   0)   0.533 us    |      page_waitqueue();
3269   0)   0.638 us    |      __wake_up_bit();
3270   0)   2.793 us    |    }
3271   0) + 14.012 us   |  }
3272
3273You can also expand several functions at once::
3274
3275 echo sys_open > set_graph_function
3276 echo sys_close >> set_graph_function
3277
3278Now if you want to go back to trace all functions you can clear
3279this special filter via::
3280
3281 echo > set_graph_function
3282
3283
3284ftrace_enabled
3285--------------
3286
Note, the proc sysctl ftrace_enabled is a big on/off switch for the
function tracer. By default it is enabled (when function tracing is
enabled in the kernel). If it is disabled, all function tracing is
disabled. This includes not only the function tracers for ftrace, but
also for any other uses (perf, kprobes, stack tracing, profiling, etc.). It
cannot be disabled if there is a callback with FTRACE_OPS_FL_PERMANENT set
registered.
3294
3295Please disable this with care.
3296
This can be disabled (and enabled) with::

  sysctl kernel.ftrace_enabled=0
  sysctl kernel.ftrace_enabled=1

or::

  echo 0 > /proc/sys/kernel/ftrace_enabled
  echo 1 > /proc/sys/kernel/ftrace_enabled
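
The current value can be read back as well::

  # cat /proc/sys/kernel/ftrace_enabled
  1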
3306
3307
3308Filter commands
3309---------------
3310
3311A few commands are supported by the set_ftrace_filter interface.
3312Trace commands have the following format::
3313
3314  <function>:<command>:<parameter>
3315
3316The following commands are supported:
3317
3318- mod:
3319  This command enables function filtering per module. The
3320  parameter defines the module. For example, if only the write*
  functions in the ext3 module are desired, run::
3322
3323   echo 'write*:mod:ext3' > set_ftrace_filter
3324
3325  This command interacts with the filter in the same way as
3326  filtering based on function names. Thus, adding more functions
3327  in a different module is accomplished by appending (>>) to the
3328  filter file. Remove specific module functions by prepending
3329  '!'::
3330
3331   echo '!writeback*:mod:ext3' >> set_ftrace_filter
3332
  The mod command supports module globbing. To disable tracing for all
  functions except those in a specific module::
3335
3336   echo '!*:mod:!ext3' >> set_ftrace_filter
3337
  Disable tracing for all modules, but still trace the kernel::
3339
3340   echo '!*:mod:*' >> set_ftrace_filter
3341
  Enable the filter only for the kernel::
3343
3344   echo '*write*:mod:!*' >> set_ftrace_filter
3345
  Enable the filter for module globbing::
3347
3348   echo '*write*:mod:*snd*' >> set_ftrace_filter
3349
- traceon/traceoff:
  These commands turn tracing on and off when the specified
  functions are hit. The parameter determines how many times the
  tracing system is turned on and off. If unspecified, there is
  no limit. For example, to disable tracing the first 5 times a
  schedule bug is hit, run::
3356
3357   echo '__schedule_bug:traceoff:5' > set_ftrace_filter
3358
3359  To always disable tracing when __schedule_bug is hit::
3360
3361   echo '__schedule_bug:traceoff' > set_ftrace_filter
3362
3363  These commands are cumulative whether or not they are appended
3364  to set_ftrace_filter. To remove a command, prepend it by '!'
3365  and drop the parameter::
3366
3367   echo '!__schedule_bug:traceoff:0' > set_ftrace_filter
3368
3369  The above removes the traceoff command for __schedule_bug
  that has a counter. To remove commands without counters::
3371
3372   echo '!__schedule_bug:traceoff' > set_ftrace_filter
3373
3374- snapshot:
3375  Will cause a snapshot to be triggered when the function is hit.
3376  ::
3377
3378   echo 'native_flush_tlb_others:snapshot' > set_ftrace_filter
3379
3380  To only snapshot once:
3381  ::
3382
3383   echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter
3384
3385  To remove the above commands::
3386
3387   echo '!native_flush_tlb_others:snapshot' > set_ftrace_filter
3388   echo '!native_flush_tlb_others:snapshot:0' > set_ftrace_filter
3389
3390- enable_event/disable_event:
3391  These commands can enable or disable a trace event. Note, because
3392  function tracing callbacks are very sensitive, when these commands
3393  are registered, the trace point is activated, but disabled in
3394  a "soft" mode. That is, the tracepoint will be called, but
3395  just will not be traced. The event tracepoint stays in this mode
3396  as long as there's a command that triggers it.
3397  ::
3398
3399   echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > \
3400   	 set_ftrace_filter
3401
3402  The format is::
3403
3404    <function>:enable_event:<system>:<event>[:count]
3405    <function>:disable_event:<system>:<event>[:count]
3406
  To remove the event commands::
3408
3409   echo '!try_to_wake_up:enable_event:sched:sched_switch:0' > \
3410   	 set_ftrace_filter
3411   echo '!schedule:disable_event:sched:sched_switch' > \
3412   	 set_ftrace_filter
3413
3414- dump:
3415  When the function is hit, it will dump the contents of the ftrace
3416  ring buffer to the console. This is useful if you need to debug
3417  something, and want to dump the trace when a certain function
3418  is hit. Perhaps it's a function that is called before a triple
3419  fault happens and does not allow you to get a regular dump.
3420
3421- cpudump:
3422  When the function is hit, it will dump the contents of the ftrace
3423  ring buffer for the current CPU to the console. Unlike the "dump"
3424  command, it only prints out the contents of the ring buffer for the
3425  CPU that executed the function that triggered the dump.
3426
3427- stacktrace:
3428  When the function is hit, a stack trace is recorded.
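
  For example, to record a stack trace only the first time a
  particular function is hit (a minimal sketch; kfree is just an
  example function)::

   echo 'kfree:stacktrace:1' > set_ftrace_filter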
3429
3430trace_pipe
3431----------
3432
3433The trace_pipe outputs the same content as the trace file, but
3434the effect on the tracing is different. Every read from
3435trace_pipe is consumed. This means that subsequent reads will be
3436different. The trace is live.
3437::
3438
3439  # echo function > current_tracer
3440  # cat trace_pipe > /tmp/trace.out &
3441  [1] 4153
3442  # echo 1 > tracing_on
3443  # usleep 1
3444  # echo 0 > tracing_on
3445  # cat trace
3446  # tracer: function
3447  #
3448  # entries-in-buffer/entries-written: 0/0   #P:4
3449  #
3450  #                              _-----=> irqs-off
3451  #                             / _----=> need-resched
3452  #                            | / _---=> hardirq/softirq
3453  #                            || / _--=> preempt-depth
3454  #                            ||| /     delay
3455  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3456  #              | |       |   ||||       |         |
3457
3458  #
3459  # cat /tmp/trace.out
3460             bash-1994  [000] ....  5281.568961: mutex_unlock <-rb_simple_write
3461             bash-1994  [000] ....  5281.568963: __mutex_unlock_slowpath <-mutex_unlock
3462             bash-1994  [000] ....  5281.568963: __fsnotify_parent <-fsnotify_modify
3463             bash-1994  [000] ....  5281.568964: fsnotify <-fsnotify_modify
3464             bash-1994  [000] ....  5281.568964: __srcu_read_lock <-fsnotify
3465             bash-1994  [000] ....  5281.568964: add_preempt_count <-__srcu_read_lock
3466             bash-1994  [000] ...1  5281.568965: sub_preempt_count <-__srcu_read_lock
3467             bash-1994  [000] ....  5281.568965: __srcu_read_unlock <-fsnotify
3468             bash-1994  [000] ....  5281.568967: sys_dup2 <-system_call_fastpath
3469
3470
3471Note, reading the trace_pipe file will block until more input is
3472added. This is contrary to the trace file. If any process opened
3473the trace file for reading, it will actually disable tracing and
3474prevent new entries from being added. The trace_pipe file does
3475not have this limitation.
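
Because reads from trace_pipe block, it can be convenient to bound
them when experimenting interactively, for example with timeout(1)
(illustrative)::

  # timeout 5 cat trace_pipe > /tmp/trace.out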
3476
3477trace entries
3478-------------
3479
Having too much or not enough data can be troublesome in
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the number of kilobytes that can be recorded per
CPU. To know the full size, multiply the number of possible CPUs
by this number.
3486::
3487
3488  # cat buffer_size_kb
3489  1408 (units kilobytes)
3490
3491Or simply read buffer_total_size_kb
3492::
3493
3494  # cat buffer_total_size_kb
3495  5632
3496
To modify the buffer, simply echo in a number (in 1024 byte segments).
3498::
3499
3500  # echo 10000 > buffer_size_kb
3501  # cat buffer_size_kb
3502  10000 (units kilobytes)
3503
It will try to allocate as much as possible. If you request too
much, it can cause the Out-Of-Memory killer to trigger.
3506::
3507
3508  # echo 1000000000000 > buffer_size_kb
3509  -bash: echo: write error: Cannot allocate memory
3510  # cat buffer_size_kb
3511  85
3512
3513The per_cpu buffers can be changed individually as well:
3514::
3515
3516  # echo 10000 > per_cpu/cpu0/buffer_size_kb
3517  # echo 100 > per_cpu/cpu1/buffer_size_kb
3518
3519When the per_cpu buffers are not the same, the buffer_size_kb
3520at the top level will just show an X
3521::
3522
3523  # cat buffer_size_kb
3524  X
3525
3526This is where the buffer_total_size_kb is useful:
3527::
3528
3529  # cat buffer_total_size_kb
3530  12916
3531
3532Writing to the top level buffer_size_kb will reset all the buffers
3533to be the same again.
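
For example (the value used here is only illustrative)::

  # echo 1408 > buffer_size_kb
  # cat per_cpu/cpu0/buffer_size_kb
  1408
  # cat per_cpu/cpu1/buffer_size_kb
  1408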
3534
3535Snapshot
3536--------
3537CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
available to all non-latency tracers. (Latency tracers which
3539record max latency, such as "irqsoff" or "wakeup", can't use
3540this feature, since those are already using the snapshot
3541mechanism internally.)
3542
3543Snapshot preserves a current trace buffer at a particular point
3544in time without stopping tracing. Ftrace swaps the current
3545buffer with a spare buffer, and tracing continues in the new
3546current (=previous spare) buffer.
3547
3548The following tracefs files in "tracing" are related to this
3549feature:
3550
3551  snapshot:
3552
3553	This is used to take a snapshot and to read the output
3554	of the snapshot. Echo 1 into this file to allocate a
3555	spare buffer and to take a snapshot (swap), then read
3556	the snapshot from this file in the same format as
3557	"trace" (described above in the section "The File
	System"). Reading the snapshot and tracing can be done in
	parallel. When the spare buffer is allocated, echoing
	0 frees it, and echoing other (positive) values clears the
	snapshot contents.
3562	More details are shown in the table below.
3563
3564	+--------------+------------+------------+------------+
3565	|status\\input |     0      |     1      |    else    |
3566	+==============+============+============+============+
3567	|not allocated |(do nothing)| alloc+swap |(do nothing)|
3568	+--------------+------------+------------+------------+
3569	|allocated     |    free    |    swap    |   clear    |
3570	+--------------+------------+------------+------------+
3571
3572Here is an example of using the snapshot feature.
3573::
3574
3575  # echo 1 > events/sched/enable
3576  # echo 1 > snapshot
3577  # cat snapshot
3578  # tracer: nop
3579  #
3580  # entries-in-buffer/entries-written: 71/71   #P:8
3581  #
3582  #                              _-----=> irqs-off
3583  #                             / _----=> need-resched
3584  #                            | / _---=> hardirq/softirq
3585  #                            || / _--=> preempt-depth
3586  #                            ||| /     delay
3587  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3588  #              | |       |   ||||       |         |
3589            <idle>-0     [005] d...  2440.603828: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120   prev_state=R ==> next_comm=snapshot-test-2 next_pid=2242 next_prio=120
3590             sleep-2242  [005] d...  2440.603846: sched_switch: prev_comm=snapshot-test-2 prev_pid=2242 prev_prio=120   prev_state=R ==> next_comm=kworker/5:1 next_pid=60 next_prio=120
3591  [...]
3592          <idle>-0     [002] d...  2440.707230: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2229 next_prio=120
3593
3594  # cat trace
3595  # tracer: nop
3596  #
3597  # entries-in-buffer/entries-written: 77/77   #P:8
3598  #
3599  #                              _-----=> irqs-off
3600  #                             / _----=> need-resched
3601  #                            | / _---=> hardirq/softirq
3602  #                            || / _--=> preempt-depth
3603  #                            ||| /     delay
3604  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3605  #              | |       |   ||||       |         |
3606            <idle>-0     [007] d...  2440.707395: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2243 next_prio=120
3607   snapshot-test-2-2229  [002] d...  2440.707438: sched_switch: prev_comm=snapshot-test-2 prev_pid=2229 prev_prio=120 prev_state=S ==> next_comm=swapper/2 next_pid=0 next_prio=120
3608  [...]
3609
3610
If you try to use this snapshot feature when the current tracer is
one of the latency tracers, you will get the following results.
3613::
3614
3615  # echo wakeup > current_tracer
3616  # echo 1 > snapshot
3617  bash: echo: write error: Device or resource busy
3618  # cat snapshot
3619  cat: snapshot: Device or resource busy
3620
3621
3622Instances
3623---------
3624In the tracefs tracing directory, there is a directory called "instances".
New directories can be created inside it using mkdir and removed
with rmdir. A directory created here with mkdir will already
contain files and other directories after it is created.
3629::
3630
3631  # mkdir instances/foo
3632  # ls instances/foo
3633  buffer_size_kb  buffer_total_size_kb  events  free_buffer  per_cpu
3634  set_event  snapshot  trace  trace_clock  trace_marker  trace_options
3635  trace_pipe  tracing_on
3636
3637As you can see, the new directory looks similar to the tracing directory
itself. In fact, it is very similar, except that the buffer and
events are independent of the main directory and of any other
instances that are created.
3641
The files in the new directory work just like the files with the
same name in the tracing directory except that the buffer used
is a separate and new buffer. The files affect that buffer but do not
3645affect the main buffer with the exception of trace_options. Currently,
3646the trace_options affect all instances and the top level buffer
3647the same, but this may change in future releases. That is, options
3648may become specific to the instance they reside in.
3649
Notice that none of the function tracer files are there, nor are
current_tracer and available_tracers. This is because the buffers
can currently only have events enabled for them.
3653::
3654
3655  # mkdir instances/foo
3656  # mkdir instances/bar
3657  # mkdir instances/zoot
3658  # echo 100000 > buffer_size_kb
3659  # echo 1000 > instances/foo/buffer_size_kb
3660  # echo 5000 > instances/bar/per_cpu/cpu1/buffer_size_kb
  # echo function > current_tracer
3662  # echo 1 > instances/foo/events/sched/sched_wakeup/enable
3663  # echo 1 > instances/foo/events/sched/sched_wakeup_new/enable
3664  # echo 1 > instances/foo/events/sched/sched_switch/enable
3665  # echo 1 > instances/bar/events/irq/enable
3666  # echo 1 > instances/zoot/events/syscalls/enable
3667  # cat trace_pipe
3668  CPU:2 [LOST 11745 EVENTS]
3669              bash-2044  [002] .... 10594.481032: _raw_spin_lock_irqsave <-get_page_from_freelist
3670              bash-2044  [002] d... 10594.481032: add_preempt_count <-_raw_spin_lock_irqsave
3671              bash-2044  [002] d..1 10594.481032: __rmqueue <-get_page_from_freelist
3672              bash-2044  [002] d..1 10594.481033: _raw_spin_unlock <-get_page_from_freelist
3673              bash-2044  [002] d..1 10594.481033: sub_preempt_count <-_raw_spin_unlock
3674              bash-2044  [002] d... 10594.481033: get_pageblock_flags_group <-get_pageblock_migratetype
3675              bash-2044  [002] d... 10594.481034: __mod_zone_page_state <-get_page_from_freelist
3676              bash-2044  [002] d... 10594.481034: zone_statistics <-get_page_from_freelist
3677              bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
3678              bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
3679              bash-2044  [002] .... 10594.481035: arch_dup_task_struct <-copy_process
3680  [...]
3681
3682  # cat instances/foo/trace_pipe
3683              bash-1998  [000] d..4   136.676759: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
3684              bash-1998  [000] dN.4   136.676760: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
3685            <idle>-0     [003] d.h3   136.676906: sched_wakeup: comm=rcu_preempt pid=9 prio=120 success=1 target_cpu=003
3686            <idle>-0     [003] d..3   136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_preempt next_pid=9 next_prio=120
3687       rcu_preempt-9     [003] d..3   136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120
3688              bash-1998  [000] d..4   136.677014: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
3689              bash-1998  [000] dN.4   136.677016: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
3690              bash-1998  [000] d..3   136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_state=R+ ==> next_comm=kworker/0:1 next_pid=59 next_prio=120
3691       kworker/0:1-59    [000] d..4   136.677022: sched_wakeup: comm=sshd pid=1995 prio=120 success=1 target_cpu=001
3692       kworker/0:1-59    [000] d..3   136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_prio=120 prev_state=S ==> next_comm=bash next_pid=1998 next_prio=120
3693  [...]
3694
3695  # cat instances/bar/trace_pipe
3696       migration/1-14    [001] d.h3   138.732674: softirq_raise: vec=3 [action=NET_RX]
3697            <idle>-0     [001] dNh3   138.732725: softirq_raise: vec=3 [action=NET_RX]
3698              bash-1998  [000] d.h1   138.733101: softirq_raise: vec=1 [action=TIMER]
3699              bash-1998  [000] d.h1   138.733102: softirq_raise: vec=9 [action=RCU]
3700              bash-1998  [000] ..s2   138.733105: softirq_entry: vec=1 [action=TIMER]
3701              bash-1998  [000] ..s2   138.733106: softirq_exit: vec=1 [action=TIMER]
3702              bash-1998  [000] ..s2   138.733106: softirq_entry: vec=9 [action=RCU]
3703              bash-1998  [000] ..s2   138.733109: softirq_exit: vec=9 [action=RCU]
3704              sshd-1995  [001] d.h1   138.733278: irq_handler_entry: irq=21 name=uhci_hcd:usb4
3705              sshd-1995  [001] d.h1   138.733280: irq_handler_exit: irq=21 ret=unhandled
3706              sshd-1995  [001] d.h1   138.733281: irq_handler_entry: irq=21 name=eth0
3707              sshd-1995  [001] d.h1   138.733283: irq_handler_exit: irq=21 ret=handled
3708  [...]
3709
3710  # cat instances/zoot/trace
3711  # tracer: nop
3712  #
3713  # entries-in-buffer/entries-written: 18996/18996   #P:4
3714  #
3715  #                              _-----=> irqs-off
3716  #                             / _----=> need-resched
3717  #                            | / _---=> hardirq/softirq
3718  #                            || / _--=> preempt-depth
3719  #                            ||| /     delay
3720  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3721  #              | |       |   ||||       |         |
3722              bash-1998  [000] d...   140.733501: sys_write -> 0x2
3723              bash-1998  [000] d...   140.733504: sys_dup2(oldfd: a, newfd: 1)
3724              bash-1998  [000] d...   140.733506: sys_dup2 -> 0x1
3725              bash-1998  [000] d...   140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
3726              bash-1998  [000] d...   140.733509: sys_fcntl -> 0x1
3727              bash-1998  [000] d...   140.733510: sys_close(fd: a)
3728              bash-1998  [000] d...   140.733510: sys_close -> 0x0
3729              bash-1998  [000] d...   140.733514: sys_rt_sigprocmask(how: 0, nset: 0, oset: 6e2768, sigsetsize: 8)
3730              bash-1998  [000] d...   140.733515: sys_rt_sigprocmask -> 0x0
3731              bash-1998  [000] d...   140.733516: sys_rt_sigaction(sig: 2, act: 7fff718846f0, oact: 7fff71884650, sigsetsize: 8)
3732              bash-1998  [000] d...   140.733516: sys_rt_sigaction -> 0x0
3733
You can see that the trace of the topmost trace buffer shows only
the function tracing. The foo instance displays wakeups and task
switches.
3737
3738To remove the instances, simply delete their directories:
3739::
3740
3741  # rmdir instances/foo
3742  # rmdir instances/bar
3743  # rmdir instances/zoot
3744
3745Note, if a process has a trace file open in one of the instance
3746directories, the rmdir will fail with EBUSY.
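
For example, if a reader still has a trace file of the instance open
(illustrative)::

  # cat instances/foo/trace_pipe &
  # rmdir instances/foo
  rmdir: failed to remove 'instances/foo': Device or resource busy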
3747
3748
3749Stack trace
3750-----------
Since the kernel has a fixed-size stack, it is important not to
waste it in functions. A kernel developer must be conscious of
what they allocate on the stack. If they add too much, the system
is in danger of a stack overflow, and corruption will occur,
usually leading to a system panic.
3756
There are some tools that check this, usually with interrupts
periodically checking usage. But performing a check at every
function call is much more useful. As ftrace provides a function
tracer, it is convenient to check the stack size at every function
call. This is enabled via the stack tracer.
3762
3763CONFIG_STACK_TRACER enables the ftrace stack tracing functionality.
3764To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
3765::
3766
3767 # echo 1 > /proc/sys/kernel/stack_tracer_enabled
3768
You can also enable it from the kernel command line to trace
the stack size of the kernel during boot up, by adding "stacktrace"
to the kernel command line.
3772
3773After running it for a few minutes, the output looks like:
3774::
3775
3776  # cat stack_max_size
3777  2928
3778
3779  # cat stack_trace
3780          Depth    Size   Location    (18 entries)
3781          -----    ----   --------
3782    0)     2928     224   update_sd_lb_stats+0xbc/0x4ac
3783    1)     2704     160   find_busiest_group+0x31/0x1f1
3784    2)     2544     256   load_balance+0xd9/0x662
3785    3)     2288      80   idle_balance+0xbb/0x130
3786    4)     2208     128   __schedule+0x26e/0x5b9
3787    5)     2080      16   schedule+0x64/0x66
3788    6)     2064     128   schedule_timeout+0x34/0xe0
3789    7)     1936     112   wait_for_common+0x97/0xf1
3790    8)     1824      16   wait_for_completion+0x1d/0x1f
3791    9)     1808     128   flush_work+0xfe/0x119
3792   10)     1680      16   tty_flush_to_ldisc+0x1e/0x20
3793   11)     1664      48   input_available_p+0x1d/0x5c
3794   12)     1616      48   n_tty_poll+0x6d/0x134
3795   13)     1568      64   tty_poll+0x64/0x7f
3796   14)     1504     880   do_select+0x31e/0x511
3797   15)      624     400   core_sys_select+0x177/0x216
3798   16)      224      96   sys_select+0x91/0xb9
3799   17)      128     128   system_call_fastpath+0x16/0x1b
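
The recorded maximum can be reset by writing 0 into stack_max_size,
after which the stack tracer will start recording new maximums::

  # echo 0 > stack_max_size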
3800
3801Note, if -mfentry is being used by gcc, functions get traced before
3802they set up the stack frame. This means that leaf level functions
3803are not tested by the stack tracer when -mfentry is used.
3804
3805Currently, -mfentry is used by gcc 4.6.0 and above on x86 only.
3806
3807More
3808----
3809More details can be found in the source code, in the `kernel/trace/*.c` files.
3810