1========================
2ftrace - Function Tracer
3========================
4
5Copyright 2008 Red Hat Inc.
6
7:Author:   Steven Rostedt <srostedt@redhat.com>
8:License:  The GNU Free Documentation License, Version 1.2
9          (dual licensed under the GPL v2)
10:Original Reviewers:  Elias Oltmanns, Randy Dunlap, Andrew Morton,
11		      John Kacur, and David Teigland.
12
13- Written for: 2.6.28-rc2
14- Updated for: 3.10
15- Updated for: 4.13 - Copyright 2017 VMware Inc. Steven Rostedt
16- Converted to rst format - Changbin Du <changbin.du@intel.com>
17
18Introduction
19------------
20
Ftrace is an internal tracer designed to help developers and system
designers find what is going on inside the kernel. It can be used for
debugging or analyzing latencies and performance issues that take place
outside of user-space.
25
26Although ftrace is typically considered the function tracer, it
27is really a framework of several assorted tracing utilities.
28There's latency tracing to examine what occurs between interrupts
29disabled and enabled, as well as for preemption and from a time
30a task is woken to the task is actually scheduled in.
31
One of the most common uses of ftrace is event tracing. Hundreds of
static event points throughout the kernel can be enabled via the
tracefs file system to see what is going on in certain parts of the
kernel.
36
37See events.rst for more information.
38
39
40Implementation Details
41----------------------
42
43See Documentation/trace/ftrace-design.rst for details for arch porters and such.
44
45
46The File System
47---------------
48
49Ftrace uses the tracefs file system to hold the control files as
50well as the files to display output.
51
52When tracefs is configured into the kernel (which selecting any ftrace
option will do), the directory /sys/kernel/tracing will be created. To mount
54this directory, you can add to your /etc/fstab file::
55
56 tracefs       /sys/kernel/tracing       tracefs defaults        0       0
57
58Or you can mount it at run time with::
59
60 mount -t tracefs nodev /sys/kernel/tracing
61
62For quicker access to that directory you may want to make a soft link to
63it::
64
65 ln -s /sys/kernel/tracing /tracing
66
67.. attention::
68
69  Before 4.1, all ftrace tracing control files were within the debugfs
70  file system, which is typically located at /sys/kernel/debug/tracing.
71  For backward compatibility, when mounting the debugfs file system,
72  the tracefs file system will be automatically mounted at:
73
74  /sys/kernel/debug/tracing
75
76  All files located in the tracefs file system will be located in that
77  debugfs file system directory as well.
78
79.. attention::
80
81  Any selected ftrace option will also create the tracefs file system.
82  The rest of the document will assume that you are in the ftrace directory
83  (cd /sys/kernel/tracing) and will only concentrate on the files within that
84  directory and not distract from the content with the extended
85  "/sys/kernel/tracing" path name.
86
87That's it! (assuming that you have ftrace configured into your kernel)
88
89After mounting tracefs you will have access to the control and output files
90of ftrace. Here is a list of some of the key files:
91
92
93 Note: all time values are in microseconds.
94
95  current_tracer:
96
97	This is used to set or display the current tracer
98	that is configured. Changing the current tracer clears
99	the ring buffer content as well as the "snapshot" buffer.
100
101  available_tracers:
102
103	This holds the different types of tracers that
104	have been compiled into the kernel. The
105	tracers listed here can be configured by
106	echoing their name into current_tracer.
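
	For example, assuming the function tracer has been compiled in, it can
	be selected and verified with::

	  # echo function > current_tracer
	  # cat current_tracer
	  function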
107
108  tracing_on:
109
110	This sets or displays whether writing to the trace
111	ring buffer is enabled. Echo 0 into this file to disable
112	the tracer or 1 to enable it. Note, this only disables
113	writing to the ring buffer, the tracing overhead may
114	still be occurring.
115
116	The kernel function tracing_off() can be used within the
117	kernel to disable writing to the ring buffer, which will
118	set this file to "0". User space can re-enable tracing by
119	echoing "1" into the file.
120
121	Note, the function and event trigger "traceoff" will also
	set this file to zero and stop tracing. Tracing can then
	be re-enabled by user space using this file.
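
	For example, writing to the ring buffer can be paused around a region
	of interest and resumed afterwards, without changing any other part of
	the tracing configuration::

	  # echo 0 > tracing_on
	  # echo 1 > tracing_on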
124
125  trace:
126
127	This file holds the output of the trace in a human
128	readable format (described below). Opening this file for
129	writing with the O_TRUNC flag clears the ring buffer content.
130        Note, this file is not a consumer. If tracing is off
131        (no tracer running, or tracing_on is zero), it will produce
132        the same output each time it is read. When tracing is on,
133        it may produce inconsistent results as it tries to read
134        the entire buffer without consuming it.
135
136  trace_pipe:
137
138	The output is the same as the "trace" file but this
139	file is meant to be streamed with live tracing.
140	Reads from this file will block until new data is
141	retrieved.  Unlike the "trace" file, this file is a
142	consumer. This means reading from this file causes
143	sequential reads to display more current data. Once
144	data is read from this file, it is consumed, and
145	will not be read again with a sequential read. The
146	"trace" file is static, and if the tracer is not
147	adding more data, it will display the same
148	information every time it is read.
149
150  trace_options:
151
152	This file lets the user control the amount of data
153	that is displayed in one of the above output
154	files. Options also exist to modify how a tracer
155	or events work (stack traces, timestamps, etc).
156
157  options:
158
159	This is a directory that has a file for every available
160	trace option (also in trace_options). Options may also be set
161	or cleared by writing a "1" or "0" respectively into the
162	corresponding file with the option name.
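
	For example, the following two commands are equivalent ways of
	enabling the "sym-offset" option described later in this document::

	  # echo sym-offset > trace_options
	  # echo 1 > options/sym-offset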
163
164  tracing_max_latency:
165
166	Some of the tracers record the max latency.
167	For example, the maximum time that interrupts are disabled.
168	The maximum time is saved in this file. The max trace will also be
	stored, and displayed by "trace". A new max trace will only be
170	recorded if the latency is greater than the value in this file
171	(in microseconds).
172
	By echoing a time into this file, no latency will be recorded
	unless it is greater than the time in this file.
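
	For example, to only record a new max trace when a latency greater
	than 50 microseconds is hit::

	  # echo 50 > tracing_max_latency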
175
176  tracing_thresh:
177
178	Some latency tracers will record a trace whenever the
179	latency is greater than the number in this file.
180	Only active when the file contains a number greater than 0.
181	(in microseconds)
182
183  buffer_percent:
184
185	This is the watermark for how much the ring buffer needs to be filled
186	before a waiter is woken up. That is, if an application calls a
187	blocking read syscall on one of the per_cpu trace_pipe_raw files, it
188	will block until the given amount of data specified by buffer_percent
189	is in the ring buffer before it wakes the reader up. This also
190	controls how the splice system calls are blocked on this file::
191
192	  0   - means to wake up as soon as there is any data in the ring buffer.
193	  50  - means to wake up when roughly half of the ring buffer sub-buffers
194	        are full.
195	  100 - means to block until the ring buffer is totally full and is
196	        about to start overwriting the older data.
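
	For example, to wake up readers once the ring buffer is roughly half
	full::

	  # echo 50 > buffer_percent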
197
198  buffer_size_kb:
199
200	This sets or displays the number of kilobytes each CPU
201	buffer holds. By default, the trace buffers are the same size
202	for each CPU. The displayed number is the size of the
203	CPU buffer and not total size of all buffers. The
204	trace buffers are allocated in pages (blocks of memory
205	that the kernel uses for allocation, usually 4 KB in size).
206	A few extra pages may be allocated to accommodate buffer management
207	meta-data. If the last page allocated has room for more bytes
208	than requested, the rest of the page will be used,
209	making the actual allocation bigger than requested or shown.
210	( Note, the size may not be a multiple of the page size
211	due to buffer management meta-data. )
212
213	Buffer sizes for individual CPUs may vary
214	(see "per_cpu/cpu0/buffer_size_kb" below), and if they do
215	this file will show "X".
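
	For example, to give each per-CPU buffer roughly 4 MB of space::

	  # echo 4096 > buffer_size_kb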
216
217  buffer_total_size_kb:
218
219	This displays the total combined size of all the trace buffers.
220
221  buffer_subbuf_size_kb:
222
223	This sets or displays the sub buffer size. The ring buffer is broken up
224	into several same size "sub buffers". An event can not be bigger than
225	the size of the sub buffer. Normally, the sub buffer is the size of the
226	architecture's page (4K on x86). The sub buffer also contains meta data
227	at the start which also limits the size of an event.  That means when
228	the sub buffer is a page size, no event can be larger than the page
229	size minus the sub buffer meta data.
230
231	Note, the buffer_subbuf_size_kb is a way for the user to specify the
232	minimum size of the subbuffer. The kernel may make it bigger due to the
233	implementation details, or simply fail the operation if the kernel can
234	not handle the request.
235
236	Changing the sub buffer size allows for events to be larger than the
237	page size.
238
239	Note: When changing the sub-buffer size, tracing is stopped and any
240	data in the ring buffer and the snapshot buffer will be discarded.
241
242  free_buffer:
243
	If the ring buffer should be shrunk ("freed") when a tracing process
	is finished, even if that process were to be killed by a signal, this
	file can be used for that purpose. On close of this file, the ring
	buffer will be resized to its minimum size. If the tracing process
	also has this file open, then when the process exits, its file
	descriptor for this file will be closed, and in doing so, the ring
	buffer will be "freed".
251
252	It may also stop tracing if disable_on_free option is set.
253
254  tracing_cpumask:
255
256	This is a mask that lets the user only trace on specified CPUs.
257	The format is a hex string representing the CPUs.
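
	For example, on a machine with at least four CPUs, tracing can be
	limited to CPUs 0-3 with::

	  # echo f > tracing_cpumask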
258
259  set_ftrace_filter:
260
261	When dynamic ftrace is configured in (see the
262	section below "dynamic ftrace"), the code is dynamically
263	modified (code text rewrite) to disable calling of the
264	function profiler (mcount). This lets tracing be configured
265	in with practically no overhead in performance.  This also
266	has a side effect of enabling or disabling specific functions
267	to be traced. Echoing names of functions into this file
268	will limit the trace to only those functions.
269	This influences the tracers "function" and "function_graph"
270	and thus also function profiling (see "function_profile_enabled").
271
272	The functions listed in "available_filter_functions" are what
273	can be written into this file.
274
275	This interface also allows for commands to be used. See the
276	"Filter commands" section for more details.
277
	As a speed up, since processing strings can be quite expensive
	and requires a check of all functions registered to tracing, an
	index can be written into this file instead. A number (starting
	with "1") will select the function at the corresponding line
	position of the "available_filter_functions" file.
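
	For example (schedule and kfree are used here only as illustrative
	function names), the filter can be set by name, appended to with ">>",
	or set by line numbers of the "available_filter_functions" file::

	  # echo schedule > set_ftrace_filter
	  # echo kfree >> set_ftrace_filter
	  # echo 1 3 > set_ftrace_filter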
283
284  set_ftrace_notrace:
285
286	This has an effect opposite to that of
287	set_ftrace_filter. Any function that is added here will not
288	be traced. If a function exists in both set_ftrace_filter
	and set_ftrace_notrace, the function will _not_ be traced.
290
291  set_ftrace_pid:
	Have the function tracer only trace the threads whose PIDs are
	listed in this file.
294	listed in this file.
295
296	If the "function-fork" option is set, then when a task whose
297	PID is listed in this file forks, the child's PID will
298	automatically be added to this file, and the child will be
299	traced by the function tracer as well. This option will also
300	cause PIDs of tasks that exit to be removed from the file.
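
	For example, to limit function tracing to the current shell and, with
	the "function-fork" option set, to everything it launches::

	  # echo 1 > options/function-fork
	  # echo $$ > set_ftrace_pid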
301
302  set_ftrace_notrace_pid:
303
	Have the function tracer ignore threads whose PIDs are listed in
	this file.

	If the "function-fork" option is set, then when a task whose
	PID is listed in this file forks, the child's PID will
	automatically be added to this file, and the child will not be
	traced by the function tracer either. This option will also
	cause PIDs of tasks that exit to be removed from the file.

	If a PID is in both this file and "set_ftrace_pid", then this
	file takes precedence, and the thread will not be traced.
315
316  set_event_pid:
317
318	Have the events only trace a task with a PID listed in this file.
	Note, sched_switch and sched_wakeup will also trace the tasks
	listed in this file.
321
322	To have the PIDs of children of tasks with their PID in this file
323	added on fork, enable the "event-fork" option. That option will also
324	cause the PIDs of tasks to be removed from this file when the task
325	exits.
326
327  set_event_notrace_pid:
328
329	Have the events not trace a task with a PID listed in this file.
	Note, sched_switch and sched_wakeup will trace threads not listed
	in this file. A thread whose PID is in this file may still appear
	in those events if the sched_switch or sched_wakeup event also
	involves a thread that should be traced.
334
335	To have the PIDs of children of tasks with their PID in this file
336	added on fork, enable the "event-fork" option. That option will also
337	cause the PIDs of tasks to be removed from this file when the task
338	exits.
339
340  set_graph_function:
341
342	Functions listed in this file will cause the function graph
343	tracer to only trace these functions and the functions that
344	they call. (See the section "dynamic ftrace" for more details).
	Note, set_ftrace_filter and set_ftrace_notrace still affect
346	what functions are being traced.
347
348  set_graph_notrace:
349
	Similar to set_graph_function, but will disable function graph
	tracing when the function is hit, until that function exits.
	This makes it possible to ignore tracing the functions that are
	called by a specific function.
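
	For example (both names are only illustrative), the following graphs
	everything called from kernel_clone() while skipping anything called
	from kfree()::

	  # echo kernel_clone > set_graph_function
	  # echo kfree > set_graph_notrace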
354
355  available_filter_functions:
356
357	This lists the functions that ftrace has processed and can trace.
358	These are the function names that you can pass to
359	"set_ftrace_filter", "set_ftrace_notrace",
360	"set_graph_function", or "set_graph_notrace".
361	(See the section "dynamic ftrace" below for more details.)
362
363  available_filter_functions_addrs:
364
365	Similar to available_filter_functions, but with address displayed
366	for each function. The displayed address is the patch-site address
367	and can differ from /proc/kallsyms address.
368
369  dyn_ftrace_total_info:
370
	This file is for debugging purposes. It shows the number of functions
	that have been converted to nops and are available to be traced.
373
374  enabled_functions:
375
376	This file is more for debugging ftrace, but can also be useful
377	in seeing if any function has a callback attached to it.
378	Not only does the trace infrastructure use ftrace function
379	trace utility, but other subsystems might too. This file
380	displays all functions that have a callback attached to them
381	as well as the number of callbacks that have been attached.
382	Note, a callback may also call multiple functions which will
383	not be listed in this count.
384
	If a callback is registered to be traced by a function with
	the "save regs" attribute (thus even more overhead), an 'R'
	will be displayed on the same line as the function that
	is returning registers.
389
	If a callback is registered to be traced by a function with
	the "ip modify" attribute (thus the regs->ip can be changed),
392	an 'I' will be displayed on the same line as the function that
393	can be overridden.
394
395	If a non ftrace trampoline is attached (BPF) a 'D' will be displayed.
396	Note, normal ftrace trampolines can also be attached, but only one
397	"direct" trampoline can be attached to a given function at a time.
398
399	Some architectures can not call direct trampolines, but instead have
400	the ftrace ops function located above the function entry point. In
401	such cases an 'O' will be displayed.
402
	If a function had either the "ip modify" or a "direct" call attached to
	it in the past, an 'M' will be shown. This flag is never cleared. It is
	used to know if a function was ever modified by the ftrace infrastructure,
	and can be used for debugging.
407
408	If the architecture supports it, it will also show what callback
409	is being directly called by the function. If the count is greater
410	than 1 it most likely will be ftrace_ops_list_func().
411
412	If the callback of a function jumps to a trampoline that is
413	specific to the callback and which is not the standard trampoline,
414	its address will be printed as well as the function that the
415	trampoline calls.
416
417  touched_functions:
418
	This file contains all the functions that ever had a function callback
	attached to them via the ftrace infrastructure. It has the same format
	as enabled_functions but shows all functions that have ever been
	traced.
423
	To see any function that has ever been modified by "ip modify" or a
	direct trampoline, one can perform the following command::

	  grep ' M ' /sys/kernel/tracing/touched_functions
428
429  function_profile_enabled:
430
	When set, it will profile all functions using either the function
	tracer, or if configured, the function graph tracer. It will
	keep a histogram of the number of times each function was called
	and, if the function graph tracer was configured, it will also keep
	track of the time spent in those functions. The histogram
	content can be displayed in the files:
437
438	trace_stat/function<cpu> ( function0, function1, etc).
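
	A minimal profiling session could look like the following (the exact
	columns shown depend on whether the function graph tracer is
	configured)::

	  # echo 1 > function_profile_enabled
	  # head trace_stat/function0
	  # echo 0 > function_profile_enabled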
439
440  trace_stat:
441
442	A directory that holds different tracing stats.
443
444  kprobe_events:
445
446	Enable dynamic trace points. See kprobetrace.rst.
447
448  kprobe_profile:
449
450	Dynamic trace points stats. See kprobetrace.rst.
451
452  max_graph_depth:
453
454	Used with the function graph tracer. This is the max depth
455	it will trace into a function. Setting this to a value of
456	one will show only the first kernel function that is called
457	from user space.
458
459  printk_formats:
460
461	This is for tools that read the raw format files. If an event in
462	the ring buffer references a string, only a pointer to the string
463	is recorded into the buffer and not the string itself. This prevents
464	tools from knowing what that string was. This file displays the string
	and address for the string, allowing tools to map the pointers to what
	the strings were.
467
468  saved_cmdlines:
469
470	Only the pid of the task is recorded in a trace event unless
471	the event specifically saves the task comm as well. Ftrace
472	makes a cache of pid mappings to comms to try to display
473	comms for events. If a pid for a comm is not listed, then
474	"<...>" is displayed in the output.
475
476	If the option "record-cmd" is set to "0", then comms of tasks
477	will not be saved during recording. By default, it is enabled.
478
479  saved_cmdlines_size:
480
481	By default, 128 comms are saved (see "saved_cmdlines" above). To
482	increase or decrease the amount of comms that are cached, echo
483	the number of comms to cache into this file.
484
485  saved_tgids:
486
487	If the option "record-tgid" is set, on each scheduling context switch
488	the Task Group ID of a task is saved in a table mapping the PID of
489	the thread to its TGID. By default, the "record-tgid" option is
490	disabled.
491
492  snapshot:
493
494	This displays the "snapshot" buffer and also lets the user
495	take a snapshot of the current running trace.
496	See the "Snapshot" section below for more details.
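
	For example, a snapshot of the currently running trace can be taken
	and then read back with (see the "Snapshot" section below for the
	full meaning of the values that can be written)::

	  # echo 1 > snapshot
	  # cat snapshot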
497
498  stack_max_size:
499
500	When the stack tracer is activated, this will display the
501	maximum stack size it has encountered.
502	See the "Stack Trace" section below.
503
504  stack_trace:
505
506	This displays the stack back trace of the largest stack
507	that was encountered when the stack tracer is activated.
508	See the "Stack Trace" section below.
509
510  stack_trace_filter:
511
512	This is similar to "set_ftrace_filter" but it limits what
513	functions the stack tracer will check.
514
515  trace_clock:
516
517	Whenever an event is recorded into the ring buffer, a
518	"timestamp" is added. This stamp comes from a specified
519	clock. By default, ftrace uses the "local" clock. This
520	clock is very fast and strictly per cpu, but on some
521	systems it may not be monotonic with respect to other
522	CPUs. In other words, the local clocks may not be in sync
523	with local clocks on other CPUs.
524
525	Usual clocks for tracing::
526
527	  # cat trace_clock
528	  [local] global counter x86-tsc
529
530	The clock with the square brackets around it is the one in effect.
531
532	local:
533		Default clock, but may not be in sync across CPUs
534
535	global:
536		This clock is in sync with all CPUs but may
537		be a bit slower than the local clock.
538
539	counter:
540		This is not a clock at all, but literally an atomic
541		counter. It counts up one by one, but is in sync
542		with all CPUs. This is useful when you need to
543		know exactly the order events occurred with respect to
544		each other on different CPUs.
545
546	uptime:
547		This uses the jiffies counter and the time stamp
548		is relative to the time since boot up.
549
550	perf:
551		This makes ftrace use the same clock that perf uses.
552		Eventually perf will be able to read ftrace buffers
553		and this will help out in interleaving the data.
554
555	x86-tsc:
556		Architectures may define their own clocks. For
557		example, x86 uses its own TSC cycle clock here.
558
559	ppc-tb:
560		This uses the powerpc timebase register value.
561		This is in sync across CPUs and can also be used
562		to correlate events across hypervisor/guest if
563		tb_offset is known.
564
565	mono:
566		This uses the fast monotonic clock (CLOCK_MONOTONIC)
567		which is monotonic and is subject to NTP rate adjustments.
568
569	mono_raw:
570		This is the raw monotonic clock (CLOCK_MONOTONIC_RAW)
571		which is monotonic but is not subject to any rate adjustments
572		and ticks at the same rate as the hardware clocksource.
573
574	boot:
575		This is the boot clock (CLOCK_BOOTTIME) and is based on the
576		fast monotonic clock, but also accounts for time spent in
577		suspend. Since the clock access is designed for use in
578		tracing in the suspend path, some side effects are possible
579		if clock is accessed after the suspend time is accounted before
580		the fast mono clock is updated. In this case, the clock update
581		appears to happen slightly sooner than it normally would have.
582		Also on 32-bit systems, it's possible that the 64-bit boot offset
583		sees a partial update. These effects are rare and post
584		processing should be able to handle them. See comments in the
585		ktime_get_boot_fast_ns() function for more information.
586
587	tai:
588		This is the tai clock (CLOCK_TAI) and is derived from the wall-
589		clock time. However, this clock does not experience
590		discontinuities and backwards jumps caused by NTP inserting leap
591		seconds. Since the clock access is designed for use in tracing,
592		side effects are possible. The clock access may yield wrong
593		readouts in case the internal TAI offset is updated e.g., caused
594		by setting the system time or using adjtimex() with an offset.
595		These effects are rare and post processing should be able to
596		handle them. See comments in the ktime_get_tai_fast_ns()
597		function for more information.
598
599	To set a clock, simply echo the clock name into this file::
600
601	  # echo global > trace_clock
602
603	Setting a clock clears the ring buffer content as well as the
604	"snapshot" buffer.
605
606  trace_marker:
607
608	This is a very useful file for synchronizing user space
	with events happening in the kernel. Strings written into
	this file will be recorded into the ftrace buffer.
611
612	It is useful in applications to open this file at the start
613	of the application and just reference the file descriptor
614	for the file::
615
616		void trace_write(const char *fmt, ...)
617		{
618			va_list ap;
619			char buf[256];
620			int n;
621
622			if (trace_fd < 0)
623				return;
624
625			va_start(ap, fmt);
626			n = vsnprintf(buf, 256, fmt, ap);
627			va_end(ap);
628
629			write(trace_fd, buf, n);
630		}
631
632	start::
633
634		trace_fd = open("trace_marker", O_WRONLY);
635
	Note: Writing into the trace_marker file can also initiate triggers
	      that are written into /sys/kernel/tracing/events/ftrace/print/trigger.
	      See "Event triggers" in Documentation/trace/events.rst and an
	      example in Documentation/trace/histogram.rst (Section 3.)
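
	From the shell, a simple write is enough to drop a marker into the
	trace::

	  # echo "hello world" > trace_marker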
640
641  trace_marker_raw:
642
643	This is similar to trace_marker above, but is meant for binary data
644	to be written to it, where a tool can be used to parse the data
645	from trace_pipe_raw.
646
647  uprobe_events:
648
649	Add dynamic tracepoints in programs.
650	See uprobetracer.rst
651
652  uprobe_profile:
653
	Uprobe statistics. See uprobetracer.rst
655
656  instances:
657
658	This is a way to make multiple trace buffers where different
659	events can be recorded in different buffers.
660	See "Instances" section below.
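
	For example, a new buffer is created by making a directory under
	"instances", and events can then be enabled in it independently of
	the top level buffer (the instance is removed again with rmdir when
	it is no longer needed)::

	  # mkdir instances/foo
	  # echo 1 > instances/foo/events/sched/sched_switch/enable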
661
662  events:
663
664	This is the trace event directory. It holds event tracepoints
665	(also known as static tracepoints) that have been compiled
666	into the kernel. It shows what event tracepoints exist
667	and how they are grouped by system. There are "enable"
668	files at various levels that can enable the tracepoints
669	when a "1" is written to them.
670
671	See events.rst for more information.
672
673  set_event:
674
	Echoing an event name into this file will enable that event.
676
677	See events.rst for more information.
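
	For example, the following two commands are equivalent ways of
	enabling the sched_switch event::

	  # echo sched:sched_switch > set_event
	  # echo 1 > events/sched/sched_switch/enable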
678
679  available_events:
680
681	A list of events that can be enabled in tracing.
682
683	See events.rst for more information.
684
685  timestamp_mode:
686
687	Certain tracers may change the timestamp mode used when
688	logging trace events into the event buffer.  Events with
689	different modes can coexist within a buffer but the mode in
690	effect when an event is logged determines which timestamp mode
691	is used for that event.  The default timestamp mode is
692	'delta'.
693
694	Usual timestamp modes for tracing:
695
696	  # cat timestamp_mode
697	  [delta] absolute
698
699	  The timestamp mode with the square brackets around it is the
700	  one in effect.
701
702	  delta: Default timestamp mode - timestamp is a delta against
703	         a per-buffer timestamp.
704
705	  absolute: The timestamp is a full timestamp, not a delta
706                 against some other value.  As such it takes up more
707                 space and is less efficient.
708
709  hwlat_detector:
710
711	Directory for the Hardware Latency Detector.
712	See "Hardware Latency Detector" section below.
713
714  per_cpu:
715
716	This is a directory that contains the trace per_cpu information.
717
718  per_cpu/cpu0/buffer_size_kb:
719
720	The ftrace buffer is defined per_cpu. That is, there's a separate
721	buffer for each CPU to allow writes to be done atomically,
	and free from cache bouncing. These buffers may be sized
	differently for each CPU. This file is similar to the buffer_size_kb
	file, but it only displays or sets the buffer size for the
	specific CPU (here cpu0).
726
727  per_cpu/cpu0/trace:
728
729	This is similar to the "trace" file, but it will only display
730	the data specific for the CPU. If written to, it only clears
731	the specific CPU buffer.
732
  per_cpu/cpu0/trace_pipe:
734
735	This is similar to the "trace_pipe" file, and is a consuming
736	read, but it will only display (and consume) the data specific
737	for the CPU.
738
  per_cpu/cpu0/trace_pipe_raw:
740
741	For tools that can parse the ftrace ring buffer binary format,
742	the trace_pipe_raw file can be used to extract the data
743	from the ring buffer directly. With the use of the splice()
744	system call, the buffer data can be quickly transferred to
745	a file or to the network where a server is collecting the
746	data.
747
748	Like trace_pipe, this is a consuming reader, where multiple
749	reads will always produce different data.
750
751  per_cpu/cpu0/snapshot:
752
753	This is similar to the main "snapshot" file, but will only
754	snapshot the current CPU (if supported). It only displays
755	the content of the snapshot for a given CPU, and if
756	written to, only clears this CPU buffer.
757
758  per_cpu/cpu0/snapshot_raw:
759
760	Similar to the trace_pipe_raw, but will read the binary format
761	from the snapshot buffer for the given CPU.
762
763  per_cpu/cpu0/stats:
764
765	This displays certain stats about the ring buffer:
766
767	entries:
768		The number of events that are still in the buffer.
769
770	overrun:
771		The number of lost events due to overwriting when
772		the buffer was full.
773
774	commit overrun:
775		Should always be zero.
776		This gets set if so many events happened within a nested
777		event (ring buffer is re-entrant), that it fills the
778		buffer and starts dropping events.
779
780	bytes:
781		Bytes actually read (not overwritten).
782
783	oldest event ts:
784		The oldest timestamp in the buffer
785
786	now ts:
787		The current timestamp
788
789	dropped events:
790		Events lost due to overwrite option being off.
791
792	read events:
793		The number of events read.
794
795The Tracers
796-----------
797
798Here is the list of current tracers that may be configured.
799
800  "function"
801
802	Function call tracer to trace all kernel functions.
803
804  "function_graph"
805
806	Similar to the function tracer except that the
807	function tracer probes the functions on their entry
808	whereas the function graph tracer traces on both entry
809	and exit of the functions. It then provides the ability
810	to draw a graph of function calls similar to C code
811	source.
812
	Note that the function graph tracer calculates the timings of when the
	function starts and returns internally and for each instance. If
	there are two instances that run the function graph tracer and trace
	the same functions, the lengths of the timings may be slightly off as
	each reads the timestamp separately and not at the same time.
818
819  "blk"
820
821	The block tracer. The tracer used by the blktrace user
822	application.
823
824  "hwlat"
825
826	The Hardware Latency tracer is used to detect if the hardware
827	produces any latency. See "Hardware Latency Detector" section
828	below.
829
830  "irqsoff"
831
832	Traces the areas that disable interrupts and saves
833	the trace with the longest max latency.
834	See tracing_max_latency. When a new max is recorded,
835	it replaces the old trace. It is best to view this
836	trace with the latency-format option enabled, which
837	happens automatically when the tracer is selected.
838
839  "preemptoff"
840
841	Similar to irqsoff but traces and records the amount of
842	time for which preemption is disabled.
843
844  "preemptirqsoff"
845
846	Similar to irqsoff and preemptoff, but traces and
847	records the largest time for which irqs and/or preemption
848	is disabled.
849
850  "wakeup"
851
852	Traces and records the max latency that it takes for
853	the highest priority task to get scheduled after
854	it has been woken up.
855        Traces all tasks as an average developer would expect.
856
857  "wakeup_rt"
858
859        Traces and records the max latency that it takes for just
860        RT tasks (as the current "wakeup" does). This is useful
861        for those interested in wake up timings of RT tasks.
862
863  "wakeup_dl"
864
865	Traces and records the max latency that it takes for
866	a SCHED_DEADLINE task to be woken (as the "wakeup" and
867	"wakeup_rt" does).
868
869  "mmiotrace"
870
	A special tracer that is used to trace binary modules.
	It will trace all the calls that a module makes to the
	hardware, everything it writes to and reads from the I/O
	as well.
875
876  "branch"
877
878	This tracer can be configured when tracing likely/unlikely
	calls within the kernel. It will trace when a likely or
	unlikely branch is hit and whether its prediction
	was correct.
882
883  "nop"
884
885	This is the "trace nothing" tracer. To remove all
886	tracers from tracing simply echo "nop" into
887	current_tracer.
888
889Error conditions
890----------------
891
892  For most ftrace commands, failure modes are obvious and communicated
893  using standard return codes.
894
895  For other more involved commands, extended error information may be
896  available via the tracing/error_log file.  For the commands that
897  support it, reading the tracing/error_log file after an error will
898  display more detailed information about what went wrong, if
899  information is available.  The tracing/error_log file is a circular
900  error log displaying a small number (currently, 8) of ftrace errors
901  for the last (8) failed commands.
902
903  The extended error information and usage takes the form shown in
904  this example::
905
906    # echo xxx > /sys/kernel/tracing/events/sched/sched_wakeup/trigger
907    echo: write error: Invalid argument
908
909    # cat /sys/kernel/tracing/error_log
910    [ 5348.887237] location: error: Couldn't yyy: zzz
911      Command: xxx
912               ^
913    [ 7517.023364] location: error: Bad rrr: sss
914      Command: ppp qqq
915                   ^
916
917  To clear the error log, echo the empty string into it::
918
919    # echo > /sys/kernel/tracing/error_log
920
921Examples of using the tracer
922----------------------------
923
924Here are typical examples of using the tracers when controlling
925them only with the tracefs interface (without using any
926user-land utilities).
927
928Output format:
929--------------
930
931Here is an example of the output format of the file "trace"::
932
933  # tracer: function
934  #
935  # entries-in-buffer/entries-written: 140080/250280   #P:4
936  #
937  #                              _-----=> irqs-off
938  #                             / _----=> need-resched
939  #                            | / _---=> hardirq/softirq
940  #                            || / _--=> preempt-depth
941  #                            ||| /     delay
942  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
943  #              | |       |   ||||       |         |
944              bash-1977  [000] .... 17284.993652: sys_close <-system_call_fastpath
945              bash-1977  [000] .... 17284.993653: __close_fd <-sys_close
946              bash-1977  [000] .... 17284.993653: _raw_spin_lock <-__close_fd
947              sshd-1974  [003] .... 17284.993653: __srcu_read_unlock <-fsnotify
948              bash-1977  [000] .... 17284.993654: add_preempt_count <-_raw_spin_lock
949              bash-1977  [000] ...1 17284.993655: _raw_spin_unlock <-__close_fd
950              bash-1977  [000] ...1 17284.993656: sub_preempt_count <-_raw_spin_unlock
951              bash-1977  [000] .... 17284.993657: filp_close <-__close_fd
952              bash-1977  [000] .... 17284.993657: dnotify_flush <-filp_close
953              sshd-1974  [003] .... 17284.993658: sys_select <-system_call_fastpath
954              ....
955
956A header is printed with the tracer name that is represented by
957the trace. In this case the tracer is "function". Then it shows the
958number of events in the buffer as well as the total number of entries
959that were written. The difference is the number of entries that were
960lost due to the buffer filling up (250280 - 140080 = 110200 events
961lost).
962
963The header explains the content of the events. Task name "bash", the task
964PID "1977", the CPU that it was running on "000", the latency format
965(explained below), the timestamp in <secs>.<usecs> format, the
966function name that was traced "sys_close" and the parent function that
967called this function "system_call_fastpath". The timestamp is the time
968at which the function was entered.
969
970Latency trace format
971--------------------
972
973When the latency-format option is enabled or when one of the latency
974tracers is set, the trace file gives somewhat more information to see
975why a latency happened. Here is a typical trace::
976
977  # tracer: irqsoff
978  #
979  # irqsoff latency trace v1.1.5 on 3.8.0-test+
980  # --------------------------------------------------------------------
981  # latency: 259 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
982  #    -----------------
983  #    | task: ps-6143 (uid:0 nice:0 policy:0 rt_prio:0)
984  #    -----------------
985  #  => started at: __lock_task_sighand
986  #  => ended at:   _raw_spin_unlock_irqrestore
987  #
988  #
989  #                  _------=> CPU#
990  #                 / _-----=> irqs-off
991  #                | / _----=> need-resched
992  #                || / _---=> hardirq/softirq
993  #                ||| / _--=> preempt-depth
994  #                |||| /     delay
995  #  cmd     pid   ||||| time  |   caller
996  #     \   /      |||||  \    |   /
997        ps-6143    2d...    0us!: trace_hardirqs_off <-__lock_task_sighand
998        ps-6143    2d..1  259us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
999        ps-6143    2d..1  263us+: time_hardirqs_on <-_raw_spin_unlock_irqrestore
1000        ps-6143    2d..1  306us : <stack trace>
1001   => trace_hardirqs_on_caller
1002   => trace_hardirqs_on
1003   => _raw_spin_unlock_irqrestore
1004   => do_task_stat
1005   => proc_tgid_stat
1006   => proc_single_show
1007   => seq_read
1008   => vfs_read
1009   => sys_read
1010   => system_call_fastpath
1011
1012
1013This shows that the current tracer is "irqsoff" tracing the time
1014for which interrupts were disabled. It gives the trace version (which
never changes) and the version of the kernel this was executed on
1016(3.8). Then it displays the max latency in microseconds (259 us). The number
1017of trace entries displayed and the total number (both are four: #4/4).
1018VP, KP, SP, and HP are always zero and are reserved for later use.
1019#P is the number of online CPUs (#P:4).
1020
1021The task is the process that was running when the latency
1022occurred. (ps pid: 6143).
1023
1024The start and stop (the functions in which the interrupts were
1025disabled and enabled respectively) that caused the latencies:
1026
1027  - __lock_task_sighand is where the interrupts were disabled.
1028  - _raw_spin_unlock_irqrestore is where they were enabled again.
1029
1030The next lines after the header are the trace itself. The header
1031explains which is which.
1032
1033  cmd: The name of the process in the trace.
1034
1035  pid: The PID of that process.
1036
1037  CPU#: The CPU which the process was running on.
1038
1039  irqs-off: 'd' interrupts are disabled. '.' otherwise.
1040
1041  need-resched:
	- 'B' all of TIF_NEED_RESCHED, PREEMPT_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'N' both TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED are set,
	- 'n' only TIF_NEED_RESCHED is set,
	- 'p' only PREEMPT_NEED_RESCHED is set,
	- 'L' both PREEMPT_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'b' both TIF_NEED_RESCHED and TIF_RESCHED_LAZY are set,
	- 'l' only TIF_RESCHED_LAZY is set,
1049	- '.' otherwise.
1050
1051  hardirq/softirq:
1052	- 'Z' - NMI occurred inside a hardirq
1053	- 'z' - NMI is running
1054	- 'H' - hard irq occurred inside a softirq.
1055	- 'h' - hard irq is running
1056	- 's' - soft irq is running
1057	- '.' - normal context.
1058
1059  preempt-depth: The level of preempt_disabled
1060
1061The above is mostly meaningful for kernel developers.
1062
1063  time:
1064	When the latency-format option is enabled, the trace file
1065	output includes a timestamp relative to the start of the
1066	trace. This differs from the output when latency-format
1067	is disabled, which includes an absolute timestamp.
1068
1069  delay:
1070	This is just to help catch your eye a bit better. And
1071	needs to be fixed to be only relative to the same CPU.
1072	The marks are determined by the difference between this
1073	current trace and the next trace.
1074
1075	  - '$' - greater than 1 second
1076	  - '@' - greater than 100 millisecond
1077	  - '*' - greater than 10 millisecond
1078	  - '#' - greater than 1000 microsecond
1079	  - '!' - greater than 100 microsecond
1080	  - '+' - greater than 10 microsecond
1081	  - ' ' - less than or equal to 10 microsecond.
1082
1083  The rest is the same as the 'trace' file.
1084
1085  Note, the latency tracers will usually end with a back trace
1086  to easily find where the latency occurred.
1087
1088trace_options
1089-------------
1090
1091The trace_options file (or the options directory) is used to control
1092what gets printed in the trace output, or manipulate the tracers.
1093To see what is available, simply cat the file::
1094
1095  cat trace_options
1096	print-parent
1097	nosym-offset
1098	nosym-addr
1099	noverbose
1100	noraw
1101	nohex
1102	nobin
1103	noblock
1104	nofields
1105	trace_printk
1106	annotate
1107	nouserstacktrace
1108	nosym-userobj
1109	noprintk-msg-only
1110	context-info
1111	nolatency-format
1112	record-cmd
1113	norecord-tgid
1114	overwrite
1115	nodisable_on_free
1116	irq-info
1117	markers
1118	noevent-fork
1119	function-trace
1120	nofunction-fork
1121	nodisplay-graph
1122	nostacktrace
1123	nobranch
1124
1125To disable one of the options, echo in the option prepended with
1126"no"::
1127
1128  echo noprint-parent > trace_options
1129
1130To enable an option, leave off the "no"::
1131
1132  echo sym-offset > trace_options
1133
1134Here are the available options:
1135
1136  print-parent
1137	On function traces, display the calling (parent)
1138	function as well as the function being traced.
1139	::
1140
1141	  print-parent:
1142	   bash-4000  [01]  1477.606694: simple_strtoul <-kstrtoul
1143
1144	  noprint-parent:
1145	   bash-4000  [01]  1477.606694: simple_strtoul
1146
1147
1148  sym-offset
1149	Display not only the function name, but also the
1150	offset in the function. For example, instead of
1151	seeing just "ktime_get", you will see
1152	"ktime_get+0xb/0x20".
1153	::
1154
1155	  sym-offset:
1156	   bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0
1157
1158  sym-addr
1159	This will also display the function address as well
1160	as the function name.
1161	::
1162
1163	  sym-addr:
1164	   bash-4000  [01]  1477.606694: simple_strtoul <c0339346>
1165
1166  verbose
1167	This deals with the trace file when the
1168        latency-format option is enabled.
1169	::
1170
1171	    bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
1172	    (+0.000ms): simple_strtoul (kstrtoul)
1173
1174  raw
1175	This will display raw numbers. This option is best for
1176	use with user applications that can translate the raw
1177	numbers better than having it done in the kernel.
1178
1179  hex
1180	Similar to raw, but the numbers will be in a hexadecimal format.
1181
1182  bin
1183	This will print out the formats in raw binary.
1184
1185  block
1186	When set, reading trace_pipe will not block when polled.
1187
1188  fields
1189	Print the fields as described by their types. This is a better
1190	option than using hex, bin or raw, as it gives a better parsing
1191	of the content of the event.
1192
1193  trace_printk
1194	Can disable trace_printk() from writing into the buffer.
1195
1196  trace_printk_dest
1197	Set to have trace_printk() and similar internal tracing functions
1198	write into this instance. Note, only one trace instance can have
1199	this set. By setting this flag, it clears the trace_printk_dest flag
1200	of the instance that had it set previously. By default, the top
1201	level trace has this set, and will get it set again if another
1202	instance has it set then clears it.
1203
1204	This flag cannot be cleared by the top level instance, as it is the
1205	default instance. The only way the top level instance has this flag
1206	cleared, is by it being set in another instance.
1207
1208  copy_trace_marker
1209	If there are applications that hard code writing into the top level
1210	trace_marker file (/sys/kernel/tracing/trace_marker or trace_marker_raw),
1211	and the tooling would like it to go into an instance, this option can
1212	be used. Create an instance and set this option, and then all writes
1213	into the top level trace_marker file will also be redirected into this
1214	instance.
1215
1216	Note, by default this option is set for the top level instance. If it
1217	is disabled, then writes to the trace_marker or trace_marker_raw files
1218	will not be written into the top level file. If no instance has this
1219	option set, then a write will error with the errno of ENODEV.
1220
1221  annotate
	It is sometimes confusing when the CPU buffers are full
	and one CPU buffer had a lot of events recently, thus
	covering a shorter time frame, where another CPU may have only had
	a few events, which lets it have older events. When
	the trace is reported, it shows the oldest events first,
	and it may look like only one CPU ran (the one with the
	oldest events). When the annotate option is set, it will
	display when a new CPU buffer started::
1230
1231			  <idle>-0     [001] dNs4 21169.031481: wake_up_idle_cpu <-add_timer_on
1232			  <idle>-0     [001] dNs4 21169.031482: _raw_spin_unlock_irqrestore <-add_timer_on
1233			  <idle>-0     [001] .Ns4 21169.031484: sub_preempt_count <-_raw_spin_unlock_irqrestore
1234		##### CPU 2 buffer started ####
1235			  <idle>-0     [002] .N.1 21169.031484: rcu_idle_exit <-cpu_idle
1236			  <idle>-0     [001] .Ns3 21169.031484: _raw_spin_unlock <-clocksource_watchdog
1237			  <idle>-0     [001] .Ns3 21169.031485: sub_preempt_count <-_raw_spin_unlock
1238
1239  userstacktrace
1240	This option changes the trace. It records a
1241	stacktrace of the current user space thread after
1242	each trace event.
1243
1244  sym-userobj
	When user stacktraces are enabled, look up which
	object the address belongs to, and print a
	relative address. This is especially useful when
	ASLR is on, as otherwise you don't get a chance to
	resolve the address to object/file/line after
	the app is no longer running.

	The lookup is performed when you read
	trace or trace_pipe. Example::
1254
1255		  a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
1256		  x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
1257
1258
1259  printk-msg-only
1260	When set, trace_printk()s will only show the format
1261	and not their parameters (if trace_bprintk() or
1262	trace_bputs() was used to save the trace_printk()).
1263
1264  context-info
	Show the comm, PID, timestamp, CPU, and other context
	information with each event. When disabled, only the event
	data itself is shown.
1267
1268  latency-format
1269	This option changes the trace output. When it is enabled,
1270	the trace displays additional information about the
1271	latency, as described in "Latency trace format".
1272
1273  pause-on-trace
1274	When set, opening the trace file for read, will pause
1275	writing to the ring buffer (as if tracing_on was set to zero).
1276	This simulates the original behavior of the trace file.
1277	When the file is closed, tracing will be enabled again.
1278
1279  hash-ptr
	When set, "%p" in the event printk format displays the
	hashed pointer value instead of the real address.
	This is useful if you want to find out which hashed
	value corresponds to which real value in the trace log.
1284
1285  record-cmd
1286	When any event or tracer is enabled, a hook is enabled
1287	in the sched_switch trace point to fill comm cache
1288	with mapped pids and comms. But this may cause some
1289	overhead, and if you only care about pids, and not the
1290	name of the task, disabling this option can lower the
1291	impact of tracing. See "saved_cmdlines".
1292
1293  record-tgid
1294	When any event or tracer is enabled, a hook is enabled
1295	in the sched_switch trace point to fill the cache of
1296	mapped Thread Group IDs (TGID) mapping to pids. See
1297	"saved_tgids".
1298
1299  overwrite
1300	This controls what happens when the trace buffer is
1301	full. If "1" (default), the oldest events are
1302	discarded and overwritten. If "0", then the newest
1303	events are discarded.
1304	(see per_cpu/cpu0/stats for overrun and dropped)
1305
1306  disable_on_free
1307	When the free_buffer is closed, tracing will
1308	stop (tracing_on set to 0).
1309
1310  irq-info
1311	Shows the interrupt, preempt count, need resched data.
1312	When disabled, the trace looks like::
1313
1314		# tracer: function
1315		#
1316		# entries-in-buffer/entries-written: 144405/9452052   #P:4
1317		#
1318		#           TASK-PID   CPU#      TIMESTAMP  FUNCTION
1319		#              | |       |          |         |
1320			  <idle>-0     [002]  23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
1321			  <idle>-0     [002]  23636.756054: activate_task <-ttwu_do_activate.constprop.89
1322			  <idle>-0     [002]  23636.756055: enqueue_task <-activate_task
1323
1324
1325  markers
1326	When set, the trace_marker is writable (only by root).
1327	When disabled, the trace_marker will error with EINVAL
1328	on write.
1329
1330  event-fork
1331	When set, tasks with PIDs listed in set_event_pid will have
1332	the PIDs of their children added to set_event_pid when those
1333	tasks fork. Also, when tasks with PIDs in set_event_pid exit,
1334	their PIDs will be removed from the file.
1335
1336        This affects PIDs listed in set_event_notrace_pid as well.
1337
1338  function-trace
1339	The latency tracers will enable function tracing
1340	if this option is enabled (default it is). When
1341	it is disabled, the latency tracers do not trace
1342	functions. This keeps the overhead of the tracer down
1343	when performing latency tests.
1344
1345  function-fork
1346	When set, tasks with PIDs listed in set_ftrace_pid will
1347	have the PIDs of their children added to set_ftrace_pid
1348	when those tasks fork. Also, when tasks with PIDs in
1349	set_ftrace_pid exit, their PIDs will be removed from the
1350	file.
1351
1352        This affects PIDs in set_ftrace_notrace_pid as well.
1353
1354  display-graph
1355	When set, the latency tracers (irqsoff, wakeup, etc) will
1356	use function graph tracing instead of function tracing.
1357
1358  stacktrace
1359	When set, a stack trace is recorded after any trace event
1360	is recorded.
1361
1362  branch
1363	Enable branch tracing with the tracer. This enables branch
1364	tracer along with the currently set tracer. Enabling this
1365	with the "nop" tracer is the same as just enabling the
1366	"branch" tracer.
1367
1368.. tip:: Some tracers have their own options. They only appear in this
1369       file when the tracer is active. They always appear in the
1370       options directory.
1371
1372
1373Here are the per tracer options:
1374
1375Options for function tracer:
1376
1377  func_stack_trace
1378	When set, a stack trace is recorded after every
1379	function that is recorded. NOTE! Limit the functions
	that are recorded before enabling this with
	"set_ftrace_filter", otherwise the system performance
	will be critically degraded. Remember to disable
1383	this option before clearing the function filter.
1384
1385Options for function_graph tracer:
1386
1387 Since the function_graph tracer has a slightly different output
1388 it has its own options to control what is displayed.
1389
1390  funcgraph-overrun
1391	When set, the "overrun" of the graph stack is
1392	displayed after each function traced. The
	overrun is when the stack depth of the calls
1394	is greater than what is reserved for each task.
1395	Each task has a fixed array of functions to
1396	trace in the call graph. If the depth of the
1397	calls exceeds that, the function is not traced.
1398	The overrun is the number of functions missed
1399	due to exceeding this array.
1400
1401  funcgraph-cpu
1402	When set, the CPU number of the CPU where the trace
1403	occurred is displayed.
1404
1405  funcgraph-overhead
1406	When set, if the function takes longer than
	a certain amount, then a delay marker is
1408	displayed. See "delay" above, under the
1409	header description.
1410
1411  funcgraph-proc
1412	Unlike other tracers, the process' command line
1413	is not displayed by default, but instead only
1414	when a task is traced in and out during a context
	switch. Enabling this option displays the command
	of each process at every line.
1417
1418  funcgraph-duration
	At the end of each function (the return),
	the duration of time spent in the
	function is displayed in microseconds.
1422
1423  funcgraph-abstime
1424	When set, the timestamp is displayed at each line.
1425
1426  funcgraph-irqs
1427	When disabled, functions that happen inside an
1428	interrupt will not be traced.
1429
1430  funcgraph-tail
1431	When set, the return event will include the function
1432	that it represents. By default this is off, and
1433	only a closing curly bracket "}" is displayed for
1434	the return of a function.
1435
1436  funcgraph-retval
1437	When set, the return value of each traced function
1438	will be printed after an equal sign "=". By default
1439	this is off.
1440
1441  funcgraph-retval-hex
1442	When set, the return value will always be printed
1443	in hexadecimal format. If the option is not set and
1444	the return value is an error code, it will be printed
1445	in signed decimal format; otherwise it will also be
1446	printed in hexadecimal format. By default, this option
1447	is off.
1448
1449  sleep-time
	When running the function graph tracer, include
	the time a task schedules out in its function.
	When enabled, it will account the time the task has been
	scheduled out as part of the function call.
1454
1455  graph-time
	When running the function profiler with the function graph tracer,
	include the time to call nested functions. When this is
1458	not set, the time reported for the function will only
1459	include the time the function itself executed for, not the
1460	time for functions that it called.
1461
1462Options for blk tracer:
1463
1464  blk_classic
1465	Shows a more minimalistic output.
1466
1467
1468irqsoff
1469-------
1470
1471When interrupts are disabled, the CPU can not react to any other
1472external event (besides NMIs and SMIs). This prevents the timer
1473interrupt from triggering or the mouse interrupt from letting
the kernel know of a new mouse event. The result is added latency
in reaction time.
1476
1477The irqsoff tracer tracks the time for which interrupts are
1478disabled. When a new maximum latency is hit, the tracer saves
1479the trace leading up to that latency point so that every time a
1480new maximum is reached, the old saved trace is discarded and the
1481new trace is saved.
1482
1483To reset the maximum, echo 0 into tracing_max_latency. Here is
1484an example::
1485
1486  # echo 0 > options/function-trace
1487  # echo irqsoff > current_tracer
1488  # echo 1 > tracing_on
1489  # echo 0 > tracing_max_latency
1490  # ls -ltr
1491  [...]
1492  # echo 0 > tracing_on
1493  # cat trace
1494  # tracer: irqsoff
1495  #
1496  # irqsoff latency trace v1.1.5 on 3.8.0-test+
1497  # --------------------------------------------------------------------
1498  # latency: 16 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1499  #    -----------------
1500  #    | task: swapper/0-0 (uid:0 nice:0 policy:0 rt_prio:0)
1501  #    -----------------
1502  #  => started at: run_timer_softirq
1503  #  => ended at:   run_timer_softirq
1504  #
1505  #
1506  #                  _------=> CPU#
1507  #                 / _-----=> irqs-off
1508  #                | / _----=> need-resched
1509  #                || / _---=> hardirq/softirq
1510  #                ||| / _--=> preempt-depth
1511  #                |||| /     delay
1512  #  cmd     pid   ||||| time  |   caller
1513  #     \   /      |||||  \    |   /
1514    <idle>-0       0d.s2    0us+: _raw_spin_lock_irq <-run_timer_softirq
1515    <idle>-0       0dNs3   17us : _raw_spin_unlock_irq <-run_timer_softirq
1516    <idle>-0       0dNs3   17us+: trace_hardirqs_on <-run_timer_softirq
1517    <idle>-0       0dNs3   25us : <stack trace>
1518   => _raw_spin_unlock_irq
1519   => run_timer_softirq
1520   => __do_softirq
1521   => call_softirq
1522   => do_softirq
1523   => irq_exit
1524   => smp_apic_timer_interrupt
1525   => apic_timer_interrupt
1526   => rcu_idle_exit
1527   => cpu_idle
1528   => rest_init
1529   => start_kernel
1530   => x86_64_start_reservations
1531   => x86_64_start_kernel
1532
1533Here we see that we had a latency of 16 microseconds (which is
1534very good). The _raw_spin_lock_irq in run_timer_softirq disabled
1535interrupts. The difference between the 16 and the displayed
1536timestamp 25us occurred because the clock was incremented
1537between the time of recording the max latency and the time of
1538recording the function that had that latency.
1539
1540Note the above example had function-trace not set. If we set
1541function-trace, we get a much larger output::
1542
1543 with echo 1 > options/function-trace
1544
1545  # tracer: irqsoff
1546  #
1547  # irqsoff latency trace v1.1.5 on 3.8.0-test+
1548  # --------------------------------------------------------------------
1549  # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1550  #    -----------------
1551  #    | task: bash-2042 (uid:0 nice:0 policy:0 rt_prio:0)
1552  #    -----------------
1553  #  => started at: ata_scsi_queuecmd
1554  #  => ended at:   ata_scsi_queuecmd
1555  #
1556  #
1557  #                  _------=> CPU#
1558  #                 / _-----=> irqs-off
1559  #                | / _----=> need-resched
1560  #                || / _---=> hardirq/softirq
1561  #                ||| / _--=> preempt-depth
1562  #                |||| /     delay
1563  #  cmd     pid   ||||| time  |   caller
1564  #     \   /      |||||  \    |   /
1565      bash-2042    3d...    0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1566      bash-2042    3d...    0us : add_preempt_count <-_raw_spin_lock_irqsave
1567      bash-2042    3d..1    1us : ata_scsi_find_dev <-ata_scsi_queuecmd
1568      bash-2042    3d..1    1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1569      bash-2042    3d..1    2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1570      bash-2042    3d..1    2us : ata_qc_new_init <-__ata_scsi_queuecmd
1571      bash-2042    3d..1    3us : ata_sg_init <-__ata_scsi_queuecmd
1572      bash-2042    3d..1    4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1573      bash-2042    3d..1    4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1574  [...]
1575      bash-2042    3d..1   67us : delay_tsc <-__delay
1576      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1577      bash-2042    3d..2   67us : sub_preempt_count <-delay_tsc
1578      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1579      bash-2042    3d..2   68us : sub_preempt_count <-delay_tsc
1580      bash-2042    3d..1   68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1581      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1582      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1583      bash-2042    3d..1   72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1584      bash-2042    3d..1  120us : <stack trace>
1585   => _raw_spin_unlock_irqrestore
1586   => ata_scsi_queuecmd
1587   => scsi_dispatch_cmd
1588   => scsi_request_fn
1589   => __blk_run_queue_uncond
1590   => __blk_run_queue
1591   => blk_queue_bio
1592   => submit_bio_noacct
1593   => submit_bio
1594   => submit_bh
1595   => __ext3_get_inode_loc
1596   => ext3_iget
1597   => ext3_lookup
1598   => lookup_real
1599   => __lookup_hash
1600   => walk_component
1601   => lookup_last
1602   => path_lookupat
1603   => filename_lookup
1604   => user_path_at_empty
1605   => user_path_at
1606   => vfs_fstatat
1607   => vfs_stat
1608   => sys_newstat
1609   => system_call_fastpath
1610
1611
1612Here we traced a 71 microsecond latency. But we also see all the
1613functions that were called during that time. Note that by
1614enabling function tracing, we incur an added overhead. This
1615overhead may extend the latency times. But nevertheless, this
1616trace has provided some very helpful debugging information.
1617
1618If we prefer function graph output instead of function, we can set
1619display-graph option::
1620
1621 with echo 1 > options/display-graph
1622
1623  # tracer: irqsoff
1624  #
1625  # irqsoff latency trace v1.1.5 on 4.20.0-rc6+
1626  # --------------------------------------------------------------------
1627  # latency: 3751 us, #274/274, CPU#0 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
1628  #    -----------------
1629  #    | task: bash-1507 (uid:0 nice:0 policy:0 rt_prio:0)
1630  #    -----------------
1631  #  => started at: free_debug_processing
1632  #  => ended at:   return_to_handler
1633  #
1634  #
1635  #                                       _-----=> irqs-off
1636  #                                      / _----=> need-resched
1637  #                                     | / _---=> hardirq/softirq
1638  #                                     || / _--=> preempt-depth
1639  #                                     ||| /
1640  #   REL TIME      CPU  TASK/PID       ||||     DURATION                  FUNCTION CALLS
1641  #      |          |     |    |        ||||      |   |                     |   |   |   |
1642          0 us |   0)   bash-1507    |  d... |   0.000 us    |  _raw_spin_lock_irqsave();
1643          0 us |   0)   bash-1507    |  d..1 |   0.378 us    |    do_raw_spin_trylock();
1644          1 us |   0)   bash-1507    |  d..2 |               |    set_track() {
1645          2 us |   0)   bash-1507    |  d..2 |               |      save_stack_trace() {
1646          2 us |   0)   bash-1507    |  d..2 |               |        __save_stack_trace() {
1647          3 us |   0)   bash-1507    |  d..2 |               |          __unwind_start() {
1648          3 us |   0)   bash-1507    |  d..2 |               |            get_stack_info() {
1649          3 us |   0)   bash-1507    |  d..2 |   0.351 us    |              in_task_stack();
1650          4 us |   0)   bash-1507    |  d..2 |   1.107 us    |            }
1651  [...]
1652       3750 us |   0)   bash-1507    |  d..1 |   0.516 us    |      do_raw_spin_unlock();
1653       3750 us |   0)   bash-1507    |  d..1 |   0.000 us    |  _raw_spin_unlock_irqrestore();
1654       3764 us |   0)   bash-1507    |  d..1 |   0.000 us    |  tracer_hardirqs_on();
1655      bash-1507    0d..1 3792us : <stack trace>
1656   => free_debug_processing
1657   => __slab_free
1658   => kmem_cache_free
1659   => vm_area_free
1660   => remove_vma
1661   => exit_mmap
1662   => mmput
1663   => begin_new_exec
1664   => load_elf_binary
1665   => search_binary_handler
1666   => __do_execve_file.isra.32
1667   => __x64_sys_execve
1668   => do_syscall_64
1669   => entry_SYSCALL_64_after_hwframe
1670
1671preemptoff
1672----------
1673
1674When preemption is disabled, we may be able to receive
1675interrupts, but the task cannot be preempted; a higher
1676priority task must wait for preemption to be enabled again
1677before it can preempt a lower priority task.
1678
1679The preemptoff tracer traces the places that disable preemption.
1680Like the irqsoff tracer, it records the maximum latency for
1681which preemption was disabled. The control of the preemptoff tracer
1682is much like the irqsoff tracer.
1683::
1684
1685  # echo 0 > options/function-trace
1686  # echo preemptoff > current_tracer
1687  # echo 1 > tracing_on
1688  # echo 0 > tracing_max_latency
1689  # ls -ltr
1690  [...]
1691  # echo 0 > tracing_on
1692  # cat trace
1693  # tracer: preemptoff
1694  #
1695  # preemptoff latency trace v1.1.5 on 3.8.0-test+
1696  # --------------------------------------------------------------------
1697  # latency: 46 us, #4/4, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1698  #    -----------------
1699  #    | task: sshd-1991 (uid:0 nice:0 policy:0 rt_prio:0)
1700  #    -----------------
1701  #  => started at: do_IRQ
1702  #  => ended at:   do_IRQ
1703  #
1704  #
1705  #                  _------=> CPU#
1706  #                 / _-----=> irqs-off
1707  #                | / _----=> need-resched
1708  #                || / _---=> hardirq/softirq
1709  #                ||| / _--=> preempt-depth
1710  #                |||| /     delay
1711  #  cmd     pid   ||||| time  |   caller
1712  #     \   /      |||||  \    |   /
1713      sshd-1991    1d.h.    0us+: irq_enter <-do_IRQ
1714      sshd-1991    1d..1   46us : irq_exit <-do_IRQ
1715      sshd-1991    1d..1   47us+: trace_preempt_on <-do_IRQ
1716      sshd-1991    1d..1   52us : <stack trace>
1717   => sub_preempt_count
1718   => irq_exit
1719   => do_IRQ
1720   => ret_from_intr
1721
1722
1723This has some more changes. Preemption was disabled when an
1724interrupt came in (notice the 'h'), and was enabled on exit.
1725But we also see that interrupts have been disabled when entering
1726the preempt off section and leaving it (the 'd'). We do not know if
1727interrupts were enabled in the meantime or shortly after this
1728was over.
1729::
1730
1731  # tracer: preemptoff
1732  #
1733  # preemptoff latency trace v1.1.5 on 3.8.0-test+
1734  # --------------------------------------------------------------------
1735  # latency: 83 us, #241/241, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1736  #    -----------------
1737  #    | task: bash-1994 (uid:0 nice:0 policy:0 rt_prio:0)
1738  #    -----------------
1739  #  => started at: wake_up_new_task
1740  #  => ended at:   task_rq_unlock
1741  #
1742  #
1743  #                  _------=> CPU#
1744  #                 / _-----=> irqs-off
1745  #                | / _----=> need-resched
1746  #                || / _---=> hardirq/softirq
1747  #                ||| / _--=> preempt-depth
1748  #                |||| /     delay
1749  #  cmd     pid   ||||| time  |   caller
1750  #     \   /      |||||  \    |   /
1751      bash-1994    1d..1    0us : _raw_spin_lock_irqsave <-wake_up_new_task
1752      bash-1994    1d..1    0us : select_task_rq_fair <-select_task_rq
1753      bash-1994    1d..1    1us : __rcu_read_lock <-select_task_rq_fair
1754      bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1755      bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1756  [...]
1757      bash-1994    1d..1   12us : irq_enter <-smp_apic_timer_interrupt
1758      bash-1994    1d..1   12us : rcu_irq_enter <-irq_enter
1759      bash-1994    1d..1   13us : add_preempt_count <-irq_enter
1760      bash-1994    1d.h1   13us : exit_idle <-smp_apic_timer_interrupt
1761      bash-1994    1d.h1   13us : hrtimer_interrupt <-smp_apic_timer_interrupt
1762      bash-1994    1d.h1   13us : _raw_spin_lock <-hrtimer_interrupt
1763      bash-1994    1d.h1   14us : add_preempt_count <-_raw_spin_lock
1764      bash-1994    1d.h2   14us : ktime_get_update_offsets <-hrtimer_interrupt
1765  [...]
1766      bash-1994    1d.h1   35us : lapic_next_event <-clockevents_program_event
1767      bash-1994    1d.h1   35us : irq_exit <-smp_apic_timer_interrupt
1768      bash-1994    1d.h1   36us : sub_preempt_count <-irq_exit
1769      bash-1994    1d..2   36us : do_softirq <-irq_exit
1770      bash-1994    1d..2   36us : __do_softirq <-call_softirq
1771      bash-1994    1d..2   36us : __local_bh_disable <-__do_softirq
1772      bash-1994    1d.s2   37us : add_preempt_count <-_raw_spin_lock_irq
1773      bash-1994    1d.s3   38us : _raw_spin_unlock <-run_timer_softirq
1774      bash-1994    1d.s3   39us : sub_preempt_count <-_raw_spin_unlock
1775      bash-1994    1d.s2   39us : call_timer_fn <-run_timer_softirq
1776  [...]
1777      bash-1994    1dNs2   81us : cpu_needs_another_gp <-rcu_process_callbacks
1778      bash-1994    1dNs2   82us : __local_bh_enable <-__do_softirq
1779      bash-1994    1dNs2   82us : sub_preempt_count <-__local_bh_enable
1780      bash-1994    1dN.2   82us : idle_cpu <-irq_exit
1781      bash-1994    1dN.2   83us : rcu_irq_exit <-irq_exit
1782      bash-1994    1dN.2   83us : sub_preempt_count <-irq_exit
1783      bash-1994    1.N.1   84us : _raw_spin_unlock_irqrestore <-task_rq_unlock
1784      bash-1994    1.N.1   84us+: trace_preempt_on <-task_rq_unlock
1785      bash-1994    1.N.1  104us : <stack trace>
1786   => sub_preempt_count
1787   => _raw_spin_unlock_irqrestore
1788   => task_rq_unlock
1789   => wake_up_new_task
1790   => do_fork
1791   => sys_clone
1792   => stub_clone
1793
1794
1795The above is an example of the preemptoff trace with
1796function-trace set. Here we see that interrupts were not disabled
1797the entire time. The irq_enter code lets us know that we entered
1798an interrupt (the 'h'). Before that, the flags of the functions being
1799traced still show that we are not in an interrupt, but the function
1800names themselves show that this is not the case.
1801
1802preemptirqsoff
1803--------------
1804
1805Knowing the locations that have interrupts disabled or
1806preemption disabled for the longest times is helpful. But
1807sometimes we would like to know when either preemption and/or
1808interrupts are disabled.
1809
1810Consider the following code::
1811
1812    local_irq_disable();
1813    call_function_with_irqs_off();
1814    preempt_disable();
1815    call_function_with_irqs_and_preemption_off();
1816    local_irq_enable();
1817    call_function_with_preemption_off();
1818    preempt_enable();
1819
1820The irqsoff tracer will record the total length of
1821call_function_with_irqs_off() and
1822call_function_with_irqs_and_preemption_off().
1823
1824The preemptoff tracer will record the total length of
1825call_function_with_irqs_and_preemption_off() and
1826call_function_with_preemption_off().
1827
1828But neither will trace the time that interrupts and/or
1829preemption is disabled. This total time is the time that we can
1830not schedule. To record this time, use the preemptirqsoff
1831tracer.
1832
1833Again, using this tracer is much like using the irqsoff and preemptoff
1834tracers.
1835::
1836
1837  # echo 0 > options/function-trace
1838  # echo preemptirqsoff > current_tracer
1839  # echo 1 > tracing_on
1840  # echo 0 > tracing_max_latency
1841  # ls -ltr
1842  [...]
1843  # echo 0 > tracing_on
1844  # cat trace
1845  # tracer: preemptirqsoff
1846  #
1847  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1848  # --------------------------------------------------------------------
1849  # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1850  #    -----------------
1851  #    | task: ls-2230 (uid:0 nice:0 policy:0 rt_prio:0)
1852  #    -----------------
1853  #  => started at: ata_scsi_queuecmd
1854  #  => ended at:   ata_scsi_queuecmd
1855  #
1856  #
1857  #                  _------=> CPU#
1858  #                 / _-----=> irqs-off
1859  #                | / _----=> need-resched
1860  #                || / _---=> hardirq/softirq
1861  #                ||| / _--=> preempt-depth
1862  #                |||| /     delay
1863  #  cmd     pid   ||||| time  |   caller
1864  #     \   /      |||||  \    |   /
1865        ls-2230    3d...    0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1866        ls-2230    3...1  100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1867        ls-2230    3...1  101us+: trace_preempt_on <-ata_scsi_queuecmd
1868        ls-2230    3...1  111us : <stack trace>
1869   => sub_preempt_count
1870   => _raw_spin_unlock_irqrestore
1871   => ata_scsi_queuecmd
1872   => scsi_dispatch_cmd
1873   => scsi_request_fn
1874   => __blk_run_queue_uncond
1875   => __blk_run_queue
1876   => blk_queue_bio
1877   => submit_bio_noacct
1878   => submit_bio
1879   => submit_bh
1880   => ext3_bread
1881   => ext3_dir_bread
1882   => htree_dirblock_to_tree
1883   => ext3_htree_fill_tree
1884   => ext3_readdir
1885   => vfs_readdir
1886   => sys_getdents
1887   => system_call_fastpath
1888
1889
1890The trace_hardirqs_off_thunk is called from assembly on x86 when
1891interrupts are disabled in the assembly code. Without the
1892function tracing, we do not know if interrupts were enabled
1893within the preemption points. We do see that it started with
1894preemption enabled.
1895
1896Here is a trace with function-trace set::
1897
1898  # tracer: preemptirqsoff
1899  #
1900  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1901  # --------------------------------------------------------------------
1902  # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1903  #    -----------------
1904  #    | task: ls-2269 (uid:0 nice:0 policy:0 rt_prio:0)
1905  #    -----------------
1906  #  => started at: schedule
1907  #  => ended at:   mutex_unlock
1908  #
1909  #
1910  #                  _------=> CPU#
1911  #                 / _-----=> irqs-off
1912  #                | / _----=> need-resched
1913  #                || / _---=> hardirq/softirq
1914  #                ||| / _--=> preempt-depth
1915  #                |||| /     delay
1916  #  cmd     pid   ||||| time  |   caller
1917  #     \   /      |||||  \    |   /
1918  kworker/-59      3...1    0us : __schedule <-schedule
1919  kworker/-59      3d..1    0us : rcu_preempt_qs <-rcu_note_context_switch
1920  kworker/-59      3d..1    1us : add_preempt_count <-_raw_spin_lock_irq
1921  kworker/-59      3d..2    1us : deactivate_task <-__schedule
1922  kworker/-59      3d..2    1us : dequeue_task <-deactivate_task
1923  kworker/-59      3d..2    2us : update_rq_clock <-dequeue_task
1924  kworker/-59      3d..2    2us : dequeue_task_fair <-dequeue_task
1925  kworker/-59      3d..2    2us : update_curr <-dequeue_task_fair
1926  kworker/-59      3d..2    2us : update_min_vruntime <-update_curr
1927  kworker/-59      3d..2    3us : cpuacct_charge <-update_curr
1928  kworker/-59      3d..2    3us : __rcu_read_lock <-cpuacct_charge
1929  kworker/-59      3d..2    3us : __rcu_read_unlock <-cpuacct_charge
1930  kworker/-59      3d..2    3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1931  kworker/-59      3d..2    4us : clear_buddies <-dequeue_task_fair
1932  kworker/-59      3d..2    4us : account_entity_dequeue <-dequeue_task_fair
1933  kworker/-59      3d..2    4us : update_min_vruntime <-dequeue_task_fair
1934  kworker/-59      3d..2    4us : update_cfs_shares <-dequeue_task_fair
1935  kworker/-59      3d..2    5us : hrtick_update <-dequeue_task_fair
1936  kworker/-59      3d..2    5us : wq_worker_sleeping <-__schedule
1937  kworker/-59      3d..2    5us : kthread_data <-wq_worker_sleeping
1938  kworker/-59      3d..2    5us : put_prev_task_fair <-__schedule
1939  kworker/-59      3d..2    6us : pick_next_task_fair <-pick_next_task
1940  kworker/-59      3d..2    6us : clear_buddies <-pick_next_task_fair
1941  kworker/-59      3d..2    6us : set_next_entity <-pick_next_task_fair
1942  kworker/-59      3d..2    6us : update_stats_wait_end <-set_next_entity
1943        ls-2269    3d..2    7us : finish_task_switch <-__schedule
1944        ls-2269    3d..2    7us : _raw_spin_unlock_irq <-finish_task_switch
1945        ls-2269    3d..2    8us : do_IRQ <-ret_from_intr
1946        ls-2269    3d..2    8us : irq_enter <-do_IRQ
1947        ls-2269    3d..2    8us : rcu_irq_enter <-irq_enter
1948        ls-2269    3d..2    9us : add_preempt_count <-irq_enter
1949        ls-2269    3d.h2    9us : exit_idle <-do_IRQ
1950  [...]
1951        ls-2269    3d.h3   20us : sub_preempt_count <-_raw_spin_unlock
1952        ls-2269    3d.h2   20us : irq_exit <-do_IRQ
1953        ls-2269    3d.h2   21us : sub_preempt_count <-irq_exit
1954        ls-2269    3d..3   21us : do_softirq <-irq_exit
1955        ls-2269    3d..3   21us : __do_softirq <-call_softirq
1956        ls-2269    3d..3   21us+: __local_bh_disable <-__do_softirq
1957        ls-2269    3d.s4   29us : sub_preempt_count <-_local_bh_enable_ip
1958        ls-2269    3d.s5   29us : sub_preempt_count <-_local_bh_enable_ip
1959        ls-2269    3d.s5   31us : do_IRQ <-ret_from_intr
1960        ls-2269    3d.s5   31us : irq_enter <-do_IRQ
1961        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1962  [...]
1963        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1964        ls-2269    3d.s5   32us : add_preempt_count <-irq_enter
1965        ls-2269    3d.H5   32us : exit_idle <-do_IRQ
1966        ls-2269    3d.H5   32us : handle_irq <-do_IRQ
1967        ls-2269    3d.H5   32us : irq_to_desc <-handle_irq
1968        ls-2269    3d.H5   33us : handle_fasteoi_irq <-handle_irq
1969  [...]
1970        ls-2269    3d.s5  158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1971        ls-2269    3d.s3  158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1972        ls-2269    3d.s3  159us : __local_bh_enable <-__do_softirq
1973        ls-2269    3d.s3  159us : sub_preempt_count <-__local_bh_enable
1974        ls-2269    3d..3  159us : idle_cpu <-irq_exit
1975        ls-2269    3d..3  159us : rcu_irq_exit <-irq_exit
1976        ls-2269    3d..3  160us : sub_preempt_count <-irq_exit
1977        ls-2269    3d...  161us : __mutex_unlock_slowpath <-mutex_unlock
1978        ls-2269    3d...  162us+: trace_hardirqs_on <-mutex_unlock
1979        ls-2269    3d...  186us : <stack trace>
1980   => __mutex_unlock_slowpath
1981   => mutex_unlock
1982   => process_output
1983   => n_tty_write
1984   => tty_write
1985   => vfs_write
1986   => sys_write
1987   => system_call_fastpath
1988
1989This is an interesting trace. It started with kworker running and
1990scheduling out and ls taking over. But as soon as ls released the
1991rq lock and enabled interrupts (but not preemption) an interrupt
1992triggered. When the interrupt finished, it started running softirqs.
1993But while the softirq was running, another interrupt triggered.
1994When an interrupt is running inside a softirq, the annotation is 'H'.
1995
1996
1997wakeup
1998------
1999
2000One common case that people are interested in tracing is the
2001time it takes from when a task is woken to when it actually runs.
2002Now for non Real-Time tasks, this can be arbitrary. But tracing
2003it nonetheless can be interesting.
2004
2005Without function tracing::
2006
2007  # echo 0 > options/function-trace
2008  # echo wakeup > current_tracer
2009  # echo 1 > tracing_on
2010  # echo 0 > tracing_max_latency
2011  # chrt -f 5 sleep 1
2012  # echo 0 > tracing_on
2013  # cat trace
2014  # tracer: wakeup
2015  #
2016  # wakeup latency trace v1.1.5 on 3.8.0-test+
2017  # --------------------------------------------------------------------
2018  # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2019  #    -----------------
2020  #    | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
2021  #    -----------------
2022  #
2023  #                  _------=> CPU#
2024  #                 / _-----=> irqs-off
2025  #                | / _----=> need-resched
2026  #                || / _---=> hardirq/softirq
2027  #                ||| / _--=> preempt-depth
2028  #                |||| /     delay
2029  #  cmd     pid   ||||| time  |   caller
2030  #     \   /      |||||  \    |   /
2031    <idle>-0       3dNs7    0us :      0:120:R   + [003]   312:100:R kworker/3:1H
2032    <idle>-0       3dNs7    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2033    <idle>-0       3d..3   15us : __schedule <-schedule
2034    <idle>-0       3d..3   15us :      0:120:R ==> [003]   312:100:R kworker/3:1H
2035
2036The tracer only traces the highest priority task in the system
2037to avoid tracing the normal circumstances. Here we see that
2038the kworker with a nice priority of -20 (not very nice) took
2039just 15 microseconds from the time it woke up to the time it
2040ran.
2041
2042Non Real-Time tasks are not that interesting. A more interesting
2043trace is to concentrate only on Real-Time tasks.
2044
2045wakeup_rt
2046---------
2047
2048In a Real-Time environment it is very important to know the
2049time it takes from when the highest priority task is woken
2050up to when it executes. This is also known as "schedule
2051latency". I stress the point that this is about RT tasks. It is
2052also important to know the scheduling latency of non-RT tasks,
2053but the average schedule latency is better for non-RT tasks.
2054Tools like LatencyTop are more appropriate for such
2055measurements.
2056
2057Real-Time environments are interested in the worst case latency.
2058That is the longest latency it takes for something to happen,
2059and not the average. We can have a very fast scheduler that may
2060only have a large latency once in a while, but that would not
2061work well with Real-Time tasks.  The wakeup_rt tracer was designed
2062to record the worst case wakeups of RT tasks. Non-RT tasks are
2063not recorded because the tracer only records one worst case and
2064tracing non-RT tasks that are unpredictable will overwrite the
2065worst case latency of RT tasks (just run the normal wakeup
2066tracer for a while to see that effect).
2067
2068Since this tracer only deals with RT tasks, we will run this
2069slightly differently than we did with the previous tracers.
2070Instead of performing an 'ls', we will run 'sleep 1' under
2071'chrt' which changes the priority of the task.
2072::
2073
2074  # echo 0 > options/function-trace
2075  # echo wakeup_rt > current_tracer
2076  # echo 1 > tracing_on
2077  # echo 0 > tracing_max_latency
2078  # chrt -f 5 sleep 1
2079  # echo 0 > tracing_on
2080  # cat trace
2083  # tracer: wakeup_rt
2084  #
2085  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2086  # --------------------------------------------------------------------
2087  # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2088  #    -----------------
2089  #    | task: sleep-2389 (uid:0 nice:0 policy:1 rt_prio:5)
2090  #    -----------------
2091  #
2092  #                  _------=> CPU#
2093  #                 / _-----=> irqs-off
2094  #                | / _----=> need-resched
2095  #                || / _---=> hardirq/softirq
2096  #                ||| / _--=> preempt-depth
2097  #                |||| /     delay
2098  #  cmd     pid   ||||| time  |   caller
2099  #     \   /      |||||  \    |   /
2100    <idle>-0       3d.h4    0us :      0:120:R   + [003]  2389: 94:R sleep
2101    <idle>-0       3d.h4    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2102    <idle>-0       3d..3    5us : __schedule <-schedule
2103    <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
2104
2105
2106Running this on an idle system, we see that it only took 5 microseconds
2107to perform the task switch.  Note, since the trace point in the scheduler
2108is before the actual "switch", we stop the tracing when the recorded task
2109is about to schedule in. This may change if we add a new marker at the
2110end of the scheduler.
2111
2112Notice that the recorded task is 'sleep' with the PID of 2389
2113and it has an rt_prio of 5. This priority is user-space priority
2114and not the internal kernel priority. The policy is 1 for
2115SCHED_FIFO and 2 for SCHED_RR.
2116
2117Note that the trace data shows the internal priority (99 - rtprio).
2118::
2119
2120  <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
2121
2122The 0:120:R means idle was running with a nice priority of 0 (120 - 120)
2123and in the running state 'R'. The sleep task was scheduled in with
21242389: 94:R. That is, the priority is the internal kernel rtprio (99 - 5 = 94)
2125and it too is in the running state.
2126
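As an illustration only (the pid and the output shown here are
hypothetical, reusing the pid from the example above), the user-space
view of the same task would report the rt_prio of 5 rather than the
internal 94::

  # chrt -p 2389
  pid 2389's current scheduling policy: SCHED_FIFO
  pid 2389's current scheduling priority: 5
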
2127Doing the same with chrt -r 5 and function-trace set.
2128::
2129
2130  echo 1 > options/function-trace
2131
2132  # tracer: wakeup_rt
2133  #
2134  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2135  # --------------------------------------------------------------------
2136  # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2137  #    -----------------
2138  #    | task: sleep-2448 (uid:0 nice:0 policy:1 rt_prio:5)
2139  #    -----------------
2140  #
2141  #                  _------=> CPU#
2142  #                 / _-----=> irqs-off
2143  #                | / _----=> need-resched
2144  #                || / _---=> hardirq/softirq
2145  #                ||| / _--=> preempt-depth
2146  #                |||| /     delay
2147  #  cmd     pid   ||||| time  |   caller
2148  #     \   /      |||||  \    |   /
2149    <idle>-0       3d.h4    1us+:      0:120:R   + [003]  2448: 94:R sleep
2150    <idle>-0       3d.h4    2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2151    <idle>-0       3d.h3    3us : check_preempt_curr <-ttwu_do_wakeup
2152    <idle>-0       3d.h3    3us : resched_curr <-check_preempt_curr
2153    <idle>-0       3dNh3    4us : task_woken_rt <-ttwu_do_wakeup
2154    <idle>-0       3dNh3    4us : _raw_spin_unlock <-try_to_wake_up
2155    <idle>-0       3dNh3    4us : sub_preempt_count <-_raw_spin_unlock
2156    <idle>-0       3dNh2    5us : ttwu_stat <-try_to_wake_up
2157    <idle>-0       3dNh2    5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
2158    <idle>-0       3dNh2    6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2159    <idle>-0       3dNh1    6us : _raw_spin_lock <-__run_hrtimer
2160    <idle>-0       3dNh1    6us : add_preempt_count <-_raw_spin_lock
2161    <idle>-0       3dNh2    7us : _raw_spin_unlock <-hrtimer_interrupt
2162    <idle>-0       3dNh2    7us : sub_preempt_count <-_raw_spin_unlock
2163    <idle>-0       3dNh1    7us : tick_program_event <-hrtimer_interrupt
2164    <idle>-0       3dNh1    7us : clockevents_program_event <-tick_program_event
2165    <idle>-0       3dNh1    8us : ktime_get <-clockevents_program_event
2166    <idle>-0       3dNh1    8us : lapic_next_event <-clockevents_program_event
2167    <idle>-0       3dNh1    8us : irq_exit <-smp_apic_timer_interrupt
2168    <idle>-0       3dNh1    9us : sub_preempt_count <-irq_exit
2169    <idle>-0       3dN.2    9us : idle_cpu <-irq_exit
2170    <idle>-0       3dN.2    9us : rcu_irq_exit <-irq_exit
2171    <idle>-0       3dN.2   10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
2172    <idle>-0       3dN.2   10us : sub_preempt_count <-irq_exit
2173    <idle>-0       3.N.1   11us : rcu_idle_exit <-cpu_idle
2174    <idle>-0       3dN.1   11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
2175    <idle>-0       3.N.1   11us : tick_nohz_idle_exit <-cpu_idle
2176    <idle>-0       3dN.1   12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
2177    <idle>-0       3dN.1   12us : ktime_get <-tick_nohz_idle_exit
2178    <idle>-0       3dN.1   12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
2179    <idle>-0       3dN.1   13us : cpu_load_update_nohz <-tick_nohz_idle_exit
2180    <idle>-0       3dN.1   13us : _raw_spin_lock <-cpu_load_update_nohz
2181    <idle>-0       3dN.1   13us : add_preempt_count <-_raw_spin_lock
2182    <idle>-0       3dN.2   13us : __cpu_load_update <-cpu_load_update_nohz
2183    <idle>-0       3dN.2   14us : sched_avg_update <-__cpu_load_update
2184    <idle>-0       3dN.2   14us : _raw_spin_unlock <-cpu_load_update_nohz
2185    <idle>-0       3dN.2   14us : sub_preempt_count <-_raw_spin_unlock
2186    <idle>-0       3dN.1   15us : calc_load_nohz_stop <-tick_nohz_idle_exit
2187    <idle>-0       3dN.1   15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
2188    <idle>-0       3dN.1   15us : hrtimer_cancel <-tick_nohz_idle_exit
2189    <idle>-0       3dN.1   15us : hrtimer_try_to_cancel <-hrtimer_cancel
2190    <idle>-0       3dN.1   16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
2191    <idle>-0       3dN.1   16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2192    <idle>-0       3dN.1   16us : add_preempt_count <-_raw_spin_lock_irqsave
2193    <idle>-0       3dN.2   17us : __remove_hrtimer <-remove_hrtimer.part.16
2194    <idle>-0       3dN.2   17us : hrtimer_force_reprogram <-__remove_hrtimer
2195    <idle>-0       3dN.2   17us : tick_program_event <-hrtimer_force_reprogram
2196    <idle>-0       3dN.2   18us : clockevents_program_event <-tick_program_event
2197    <idle>-0       3dN.2   18us : ktime_get <-clockevents_program_event
2198    <idle>-0       3dN.2   18us : lapic_next_event <-clockevents_program_event
2199    <idle>-0       3dN.2   19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
2200    <idle>-0       3dN.2   19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2201    <idle>-0       3dN.1   19us : hrtimer_forward <-tick_nohz_idle_exit
2202    <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
2203    <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
2204    <idle>-0       3dN.1   20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2205    <idle>-0       3dN.1   20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
2206    <idle>-0       3dN.1   21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
2207    <idle>-0       3dN.1   21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2208    <idle>-0       3dN.1   21us : add_preempt_count <-_raw_spin_lock_irqsave
2209    <idle>-0       3dN.2   22us : ktime_add_safe <-__hrtimer_start_range_ns
2210    <idle>-0       3dN.2   22us : enqueue_hrtimer <-__hrtimer_start_range_ns
2211    <idle>-0       3dN.2   22us : tick_program_event <-__hrtimer_start_range_ns
2212    <idle>-0       3dN.2   23us : clockevents_program_event <-tick_program_event
2213    <idle>-0       3dN.2   23us : ktime_get <-clockevents_program_event
2214    <idle>-0       3dN.2   23us : lapic_next_event <-clockevents_program_event
2215    <idle>-0       3dN.2   24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
2216    <idle>-0       3dN.2   24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2217    <idle>-0       3dN.1   24us : account_idle_ticks <-tick_nohz_idle_exit
2218    <idle>-0       3dN.1   24us : account_idle_time <-account_idle_ticks
2219    <idle>-0       3.N.1   25us : sub_preempt_count <-cpu_idle
2220    <idle>-0       3.N..   25us : schedule <-cpu_idle
2221    <idle>-0       3.N..   25us : __schedule <-preempt_schedule
2222    <idle>-0       3.N..   26us : add_preempt_count <-__schedule
2223    <idle>-0       3.N.1   26us : rcu_note_context_switch <-__schedule
2224    <idle>-0       3.N.1   26us : rcu_sched_qs <-rcu_note_context_switch
2225    <idle>-0       3dN.1   27us : rcu_preempt_qs <-rcu_note_context_switch
2226    <idle>-0       3.N.1   27us : _raw_spin_lock_irq <-__schedule
2227    <idle>-0       3dN.1   27us : add_preempt_count <-_raw_spin_lock_irq
2228    <idle>-0       3dN.2   28us : put_prev_task_idle <-__schedule
2229    <idle>-0       3dN.2   28us : pick_next_task_stop <-pick_next_task
2230    <idle>-0       3dN.2   28us : pick_next_task_rt <-pick_next_task
2231    <idle>-0       3dN.2   29us : dequeue_pushable_task <-pick_next_task_rt
2232    <idle>-0       3d..3   29us : __schedule <-preempt_schedule
2233    <idle>-0       3d..3   30us :      0:120:R ==> [003]  2448: 94:R sleep
2234
2235This isn't that big of a trace, even with function tracing enabled,
2236so I included the entire trace.
2237
2238The interrupt went off while the system was idle. Somewhere
2239before task_woken_rt() was called, the NEED_RESCHED flag was set;
2240this is indicated by the first occurrence of the 'N' flag.
2241
2242Latency tracing and events
2243--------------------------
2244Function tracing can induce a much larger latency, but without
2245seeing what happens within the latency it is hard to know what
2246caused it. There is a middle ground, and that is with enabling
2247events.
2248::
2249
2250  # echo 0 > options/function-trace
2251  # echo wakeup_rt > current_tracer
2252  # echo 1 > events/enable
2253  # echo 1 > tracing_on
2254  # echo 0 > tracing_max_latency
2255  # chrt -f 5 sleep 1
2256  # echo 0 > tracing_on
2257  # cat trace
2258  # tracer: wakeup_rt
2259  #
2260  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2261  # --------------------------------------------------------------------
2262  # latency: 6 us, #12/12, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2263  #    -----------------
2264  #    | task: sleep-5882 (uid:0 nice:0 policy:1 rt_prio:5)
2265  #    -----------------
2266  #
2267  #                  _------=> CPU#
2268  #                 / _-----=> irqs-off
2269  #                | / _----=> need-resched
2270  #                || / _---=> hardirq/softirq
2271  #                ||| / _--=> preempt-depth
2272  #                |||| /     delay
2273  #  cmd     pid   ||||| time  |   caller
2274  #     \   /      |||||  \    |   /
2275    <idle>-0       2d.h4    0us :      0:120:R   + [002]  5882: 94:R sleep
2276    <idle>-0       2d.h4    0us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2277    <idle>-0       2d.h4    1us : sched_wakeup: comm=sleep pid=5882 prio=94 success=1 target_cpu=002
2278    <idle>-0       2dNh2    1us : hrtimer_expire_exit: hrtimer=ffff88007796feb8
2279    <idle>-0       2.N.2    2us : power_end: cpu_id=2
2280    <idle>-0       2.N.2    3us : cpu_idle: state=4294967295 cpu_id=2
2281    <idle>-0       2dN.3    4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
2282    <idle>-0       2dN.3    4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer expires=34311211000000 softexpires=34311211000000
2283    <idle>-0       2.N.2    5us : rcu_utilization: Start context switch
2284    <idle>-0       2.N.2    5us : rcu_utilization: End context switch
2285    <idle>-0       2d..3    6us : __schedule <-schedule
2286    <idle>-0       2d..3    6us :      0:120:R ==> [002]  5882: 94:R sleep
2287
2288
2289Hardware Latency Detector
2290-------------------------
2291
2292The hardware latency detector is executed by enabling the "hwlat" tracer.
2293
2294NOTE, this tracer will affect the performance of the system as it will
2295periodically keep a CPU busy spinning with interrupts disabled.
2296::
2297
2298  # echo hwlat > current_tracer
2299  # sleep 100
2300  # cat trace
2301  # tracer: hwlat
2302  #
2303  # entries-in-buffer/entries-written: 13/13   #P:8
2304  #
2305  #                              _-----=> irqs-off
2306  #                             / _----=> need-resched
2307  #                            | / _---=> hardirq/softirq
2308  #                            || / _--=> preempt-depth
2309  #                            ||| /     delay
2310  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2311  #              | |       |   ||||       |         |
2312             <...>-1729  [001] d...   678.473449: #1     inner/outer(us):   11/12    ts:1581527483.343962693 count:6
2313             <...>-1729  [004] d...   689.556542: #2     inner/outer(us):   16/9     ts:1581527494.889008092 count:1
2314             <...>-1729  [005] d...   714.756290: #3     inner/outer(us):   16/16    ts:1581527519.678961629 count:5
2315             <...>-1729  [001] d...   718.788247: #4     inner/outer(us):    9/17    ts:1581527523.889012713 count:1
2316             <...>-1729  [002] d...   719.796341: #5     inner/outer(us):   13/9     ts:1581527524.912872606 count:1
2317             <...>-1729  [006] d...   844.787091: #6     inner/outer(us):    9/12    ts:1581527649.889048502 count:2
2318             <...>-1729  [003] d...   849.827033: #7     inner/outer(us):   18/9     ts:1581527654.889013793 count:1
2319             <...>-1729  [007] d...   853.859002: #8     inner/outer(us):    9/12    ts:1581527658.889065736 count:1
2320             <...>-1729  [001] d...   855.874978: #9     inner/outer(us):    9/11    ts:1581527660.861991877 count:1
2321             <...>-1729  [001] d...   863.938932: #10    inner/outer(us):    9/11    ts:1581527668.970010500 count:1 nmi-total:7 nmi-count:1
2322             <...>-1729  [007] d...   878.050780: #11    inner/outer(us):    9/12    ts:1581527683.385002600 count:1 nmi-total:5 nmi-count:1
2323             <...>-1729  [007] d...   886.114702: #12    inner/outer(us):    9/12    ts:1581527691.385001600 count:1
2324
2325
2326The header of the above output is mostly the same as for other tracers. All events will have
2327interrupts disabled 'd'. Under the FUNCTION title there is:
2328
2329 #1
2330	This is the count of events recorded that were greater than the
2331	tracing_threshold (See below).
2332
2333 inner/outer(us):   11/12
2334
2335      This shows two numbers as "inner latency" and "outer latency". The test
2336      runs in a loop checking a timestamp twice. The latency detected within
2337      the two timestamps is the "inner latency" and the latency detected
2338      between the last timestamp of one loop iteration and the first
2339      timestamp of the next iteration is the "outer latency".
2340
2341 ts:1581527483.343962693
2342
2343      The absolute timestamp at which the first latency was recorded in the window.
2344
2345 count:6
2346
2347      The number of times a latency was detected during the window.
2348
2349 nmi-total:7 nmi-count:1
2350
2351      On architectures that support it, if an NMI comes in during the
2352      test, the time spent in NMI is reported in "nmi-total" (in
2353      microseconds).
2354
2355      All architectures that have NMIs will show the "nmi-count" if an
2356      NMI comes in during the test.
2357
2358hwlat files:
2359
2360  tracing_threshold
2361	This gets automatically set to "10" to represent 10
2362	microseconds. This is the threshold of latency that
2363	needs to be detected before the trace will be recorded.
2364
2365	Note, when hwlat tracer is finished (another tracer is
2366	written into "current_tracer"), the original value for
2367	tracing_threshold is placed back into this file.
2368
2369  hwlat_detector/width
2370	The length of time the test runs with interrupts disabled.
2371
2372  hwlat_detector/window
2373	The length of time of the window in which the test
2374	runs. That is, the test will run for "width"
2375	microseconds per "window" microseconds.
2376
2377  tracing_cpumask
2378	When the test is started, a kernel thread is created that
2379	runs the test. This thread will alternate between CPUs
2380	listed in the tracing_cpumask between each period
2381	(one "window"). To limit the test to specific CPUs
2382	set the mask in this file to only the CPUs that the test
2383	should run on.
2384
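Putting these files together, a hypothetical configuration (the values
are purely illustrative; width and window are in microseconds) that
spins for 100000 us out of every 500000 us on CPUs 0 and 1 and reports
anything over 20 us could look like::

  # echo 0 > tracing_on
  # echo hwlat > current_tracer
  # echo 100000 > hwlat_detector/width
  # echo 500000 > hwlat_detector/window
  # echo 3 > tracing_cpumask
  # echo 20 > tracing_threshold
  # echo 1 > tracing_on
  # sleep 60
  # echo 0 > tracing_on
  # cat trace
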
2385function
2386--------
2387
2388This tracer is the function tracer. Enabling the function tracer
2389can be done from the debug file system. Make sure the
2390ftrace_enabled is set; otherwise this tracer is a nop.
2391See the "ftrace_enabled" section below.
2392::
2393
2394  # sysctl kernel.ftrace_enabled=1
2395  # echo function > current_tracer
2396  # echo 1 > tracing_on
2397  # usleep 1
2398  # echo 0 > tracing_on
2399  # cat trace
2400  # tracer: function
2401  #
2402  # entries-in-buffer/entries-written: 24799/24799   #P:4
2403  #
2404  #                              _-----=> irqs-off
2405  #                             / _----=> need-resched
2406  #                            | / _---=> hardirq/softirq
2407  #                            || / _--=> preempt-depth
2408  #                            ||| /     delay
2409  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2410  #              | |       |   ||||       |         |
2411              bash-1994  [002] ....  3082.063030: mutex_unlock <-rb_simple_write
2412              bash-1994  [002] ....  3082.063031: __mutex_unlock_slowpath <-mutex_unlock
2413              bash-1994  [002] ....  3082.063031: __fsnotify_parent <-fsnotify_modify
2414              bash-1994  [002] ....  3082.063032: fsnotify <-fsnotify_modify
2415              bash-1994  [002] ....  3082.063032: __srcu_read_lock <-fsnotify
2416              bash-1994  [002] ....  3082.063032: add_preempt_count <-__srcu_read_lock
2417              bash-1994  [002] ...1  3082.063032: sub_preempt_count <-__srcu_read_lock
2418              bash-1994  [002] ....  3082.063033: __srcu_read_unlock <-fsnotify
2419  [...]
2420
2421
2422Note: function tracer uses ring buffers to store the above
2423entries. The newest data may overwrite the oldest data.
2424Sometimes using echo to stop the trace is not sufficient because
2425the tracing could have overwritten the data that you wanted to
2426record. For this reason, it is sometimes better to disable
2427tracing directly from a program. This allows you to stop the
2428tracing at the point that you hit the part that you are
2429interested in. To disable the tracing directly from a C program,
2430something like the following code snippet can be used::
2431
2432	int trace_fd;
2433	[...]
2434	int main(int argc, char *argv[]) {
2435		[...]
2436		trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
2437		[...]
2438		if (condition_hit()) {
2439			write(trace_fd, "0", 1);
2440		}
2441		[...]
2442	}
2443
2444
2445Single thread tracing
2446---------------------
2447
2448By writing into set_ftrace_pid you can trace a
2449single thread. For example::
2450
2451  # cat set_ftrace_pid
2452  no pid
2453  # echo 3111 > set_ftrace_pid
2454  # cat set_ftrace_pid
2455  3111
2456  # echo function > current_tracer
2457  # cat trace | head
2458  # tracer: function
2459  #
2460  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
2461  #              | |       |          |         |
2462      yum-updatesd-3111  [003]  1637.254676: finish_task_switch <-thread_return
2463      yum-updatesd-3111  [003]  1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
2464      yum-updatesd-3111  [003]  1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
2465      yum-updatesd-3111  [003]  1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
2466      yum-updatesd-3111  [003]  1637.254685: fget_light <-do_sys_poll
2467      yum-updatesd-3111  [003]  1637.254686: pipe_poll <-do_sys_poll
2468  # echo > set_ftrace_pid
2469  # cat trace |head
2470  # tracer: function
2471  #
2472  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
2473  #              | |       |          |         |
2474  ##### CPU 3 buffer started ####
2475      yum-updatesd-3111  [003]  1701.957688: free_poll_entry <-poll_freewait
2476      yum-updatesd-3111  [003]  1701.957689: remove_wait_queue <-free_poll_entry
2477      yum-updatesd-3111  [003]  1701.957691: fput <-free_poll_entry
2478      yum-updatesd-3111  [003]  1701.957692: audit_syscall_exit <-sysret_audit
2479      yum-updatesd-3111  [003]  1701.957693: path_put <-audit_syscall_exit
2480
2481If you want to trace the functions called while a program executes, you could use
2482something like this simple program.
2483::
2484
2485	#include <stdio.h>
2486	#include <stdlib.h>
2487	#include <sys/types.h>
2488	#include <sys/stat.h>
2489	#include <fcntl.h>
2490	#include <unistd.h>
2491	#include <string.h>
2492
2493	#define _STR(x) #x
2494	#define STR(x) _STR(x)
2495	#define MAX_PATH 256
2496
2497	const char *find_tracefs(void)
2498	{
2499	       static char tracefs[MAX_PATH+1];
2500	       static int tracefs_found;
2501	       char type[100];
2502	       FILE *fp;
2503
2504	       if (tracefs_found)
2505		       return tracefs;
2506
2507	       if ((fp = fopen("/proc/mounts","r")) == NULL) {
2508		       perror("/proc/mounts");
2509		       return NULL;
2510	       }
2511
2512	       while (fscanf(fp, "%*s %"
2513		             STR(MAX_PATH)
2514		             "s %99s %*s %*d %*d\n",
2515		             tracefs, type) == 2) {
2516		       if (strcmp(type, "tracefs") == 0)
2517		               break;
2518	       }
2519	       fclose(fp);
2520
2521	       if (strcmp(type, "tracefs") != 0) {
2522		       fprintf(stderr, "tracefs not mounted");
2523		       return NULL;
2524	       }
2525
2526	       strcat(tracefs, "/");  /* the mount point is the tracing directory itself */
2527	       tracefs_found = 1;
2528
2529	       return tracefs;
2530	}
2531
2532	const char *tracing_file(const char *file_name)
2533	{
2534	       static char trace_file[MAX_PATH+1];
2535	       snprintf(trace_file, MAX_PATH, "%s/%s", find_tracefs(), file_name);
2536	       return trace_file;
2537	}
2538
2539	int main (int argc, char **argv)
2540	{
2541		if (argc < 2)
2542		        exit(-1);
2543
2544		if (fork() > 0) {
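		        /* parent: point set_ftrace_pid at our own pid, then exec the target command */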
2545		        int fd, ffd;
2546		        char line[64];
2547		        int s;
2548
2549		        ffd = open(tracing_file("current_tracer"), O_WRONLY);
2550		        if (ffd < 0)
2551		                exit(-1);
2552		        write(ffd, "nop", 3);
2553
2554		        fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
2555		        s = sprintf(line, "%d\n", getpid());
2556		        write(fd, line, s);
2557
2558		        write(ffd, "function", 8);
2559
2560		        close(fd);
2561		        close(ffd);
2562
2563		        execvp(argv[1], argv+1);
2564		}
2565
2566		return 0;
2567	}
2568
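Assuming the program above is saved as ftrace-exec.c (the file and
binary names here are only placeholders), it could be built and used
like this, reading the result back from the tracing directory::

  # gcc -o ftrace-exec ftrace-exec.c
  # ./ftrace-exec ls -ltr
  # cat trace
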
2569Or this simple script!
2570::
2571
2572  #!/bin/bash
2573
2574  tracefs=`sed -ne 's/^tracefs \(.*\) tracefs.*/\1/p' /proc/mounts`
2575  echo 0 > $tracefs/tracing_on
2576  echo $$ > $tracefs/set_ftrace_pid
2577  echo function > $tracefs/current_tracer
2578  echo 1 > $tracefs/tracing_on
2579  exec "$@"
2580
2581
2582function graph tracer
2583---------------------------
2584
2585This tracer is similar to the function tracer except that it
2586probes a function on its entry and its exit. This is done by
2587using a dynamically allocated stack of return addresses in each
2588task_struct. On function entry the tracer overwrites the return
2589address of each function traced to set a custom probe. Thus the
2590original return address is stored on the stack of return addresses
2591in the task_struct.
2592
2593Probing on both ends of a function leads to special features
2594such as:
2595
2596- measuring a function's execution time
2597- having a reliable call stack to draw a graph of function calls
2598
2599This tracer is useful in several situations:
2600
2601- you want to find the reason for a strange kernel behavior and
2602  need to see in detail what happens in any area (or in specific
2603  ones).
2604
2605- you are experiencing weird latencies but it's difficult to
2606  find their origin.
2607
2608- you want to quickly find which path is taken by a specific
2609  function.
2610
2611- you just want to peek inside a working kernel and want to see
2612  what happens there.
2613
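A minimal session to produce output like the listing below (assuming
the function graph tracer is configured into the kernel) could be::

  # echo function_graph > current_tracer
  # echo 1 > tracing_on
  # usleep 1
  # echo 0 > tracing_on
  # cat trace
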
2614::
2615
2616  # tracer: function_graph
2617  #
2618  # CPU  DURATION                  FUNCTION CALLS
2619  # |     |   |                     |   |   |   |
2620
2621   0)               |  sys_open() {
2622   0)               |    do_sys_open() {
2623   0)               |      getname() {
2624   0)               |        kmem_cache_alloc() {
2625   0)   1.382 us    |          __might_sleep();
2626   0)   2.478 us    |        }
2627   0)               |        strncpy_from_user() {
2628   0)               |          might_fault() {
2629   0)   1.389 us    |            __might_sleep();
2630   0)   2.553 us    |          }
2631   0)   3.807 us    |        }
2632   0)   7.876 us    |      }
2633   0)               |      alloc_fd() {
2634   0)   0.668 us    |        _spin_lock();
2635   0)   0.570 us    |        expand_files();
2636   0)   0.586 us    |        _spin_unlock();
2637
2638
2639There are several columns that can be dynamically
2640enabled/disabled. You can use every combination of options you
2641want, depending on your needs.
2642
2643- The cpu number on which the function executed is default
2644  enabled.  It is sometimes better to only trace one cpu (see
2645  tracing_cpumask file) or you might sometimes see unordered
2646  function calls when tracing switches between CPUs.
2647
2648	- hide: echo nofuncgraph-cpu > trace_options
2649	- show: echo funcgraph-cpu > trace_options
2650
2651- The duration (function's time of execution) is displayed on
2652  the closing bracket line of a function or on the same line
2653  as the current function in the case of a leaf one. It is default
2654  enabled.
2655
2656	- hide: echo nofuncgraph-duration > trace_options
2657	- show: echo funcgraph-duration > trace_options
2658
2659- The overhead field precedes the duration field when certain
2660  duration thresholds are exceeded.
2661
2662	- hide: echo nofuncgraph-overhead > trace_options
2663	- show: echo funcgraph-overhead > trace_options
2664	- depends on: funcgraph-duration
2665
2666  ie::
2667
2668    3) # 1837.709 us |          } /* __switch_to */
2669    3)               |          finish_task_switch() {
2670    3)   0.313 us    |            _raw_spin_unlock_irq();
2671    3)   3.177 us    |          }
2672    3) # 1889.063 us |        } /* __schedule */
2673    3) ! 140.417 us  |      } /* __schedule */
2674    3) # 2034.948 us |    } /* schedule */
2675    3) * 33998.59 us |  } /* schedule_preempt_disabled */
2676
2677    [...]
2678
2679    1)   0.260 us    |              msecs_to_jiffies();
2680    1)   0.313 us    |              __rcu_read_unlock();
2681    1) + 61.770 us   |            }
2682    1) + 64.479 us   |          }
2683    1)   0.313 us    |          rcu_bh_qs();
2684    1)   0.313 us    |          __local_bh_enable();
2685    1) ! 217.240 us  |        }
2686    1)   0.365 us    |        idle_cpu();
2687    1)               |        rcu_irq_exit() {
2688    1)   0.417 us    |          rcu_eqs_enter_common.isra.47();
2689    1)   3.125 us    |        }
2690    1) ! 227.812 us  |      }
2691    1) ! 457.395 us  |    }
2692    1) @ 119760.2 us |  }
2693
2694    [...]
2695
2696    2)               |    handle_IPI() {
2697    1)   6.979 us    |                  }
2698    2)   0.417 us    |      scheduler_ipi();
2699    1)   9.791 us    |                }
2700    1) + 12.917 us   |              }
2701    2)   3.490 us    |    }
2702    1) + 15.729 us   |            }
2703    1) + 18.542 us   |          }
2704    2) $ 3594274 us  |  }
2705
2706Flags::
2707
2708  + means that the function exceeded 10 usecs.
2709  ! means that the function exceeded 100 usecs.
2710  # means that the function exceeded 1000 usecs.
2711  * means that the function exceeded 10 msecs.
2712  @ means that the function exceeded 100 msecs.
2713  $ means that the function exceeded 1 sec.
2714
2715
2716- The task/pid field displays the thread cmdline and pid which
2717  executed the function. It is default disabled.
2718
2719	- hide: echo nofuncgraph-proc > trace_options
2720	- show: echo funcgraph-proc > trace_options
2721
2722  ie::
2723
2724    # tracer: function_graph
2725    #
2726    # CPU  TASK/PID        DURATION                  FUNCTION CALLS
2727    # |    |    |           |   |                     |   |   |   |
2728    0)    sh-4802     |               |                  d_free() {
2729    0)    sh-4802     |               |                    call_rcu() {
2730    0)    sh-4802     |               |                      __call_rcu() {
2731    0)    sh-4802     |   0.616 us    |                        rcu_process_gp_end();
2732    0)    sh-4802     |   0.586 us    |                        check_for_new_grace_period();
2733    0)    sh-4802     |   2.899 us    |                      }
2734    0)    sh-4802     |   4.040 us    |                    }
2735    0)    sh-4802     |   5.151 us    |                  }
2736    0)    sh-4802     | + 49.370 us   |                }
2737
2738
2739- The absolute time field is an absolute timestamp given by the
2740  system clock since it started. A snapshot of this time is
2741  given on each entry/exit of functions.
2742
2743	- hide: echo nofuncgraph-abstime > trace_options
2744	- show: echo funcgraph-abstime > trace_options
2745
2746  ie::
2747
2748    #
2749    #      TIME       CPU  DURATION                  FUNCTION CALLS
2750    #       |         |     |   |                     |   |   |   |
2751    360.774522 |   1)   0.541 us    |                                          }
2752    360.774522 |   1)   4.663 us    |                                        }
2753    360.774523 |   1)   0.541 us    |                                        __wake_up_bit();
2754    360.774524 |   1)   6.796 us    |                                      }
2755    360.774524 |   1)   7.952 us    |                                    }
2756    360.774525 |   1)   9.063 us    |                                  }
2757    360.774525 |   1)   0.615 us    |                                  journal_mark_dirty();
2758    360.774527 |   1)   0.578 us    |                                  __brelse();
2759    360.774528 |   1)               |                                  reiserfs_prepare_for_journal() {
2760    360.774528 |   1)               |                                    unlock_buffer() {
2761    360.774529 |   1)               |                                      wake_up_bit() {
2762    360.774529 |   1)               |                                        bit_waitqueue() {
2763    360.774530 |   1)   0.594 us    |                                          __phys_addr();
2764
2765
2766The function name is always displayed after the closing bracket
2767for a function if the start of that function is not in the
2768trace buffer.
2769
2770Display of the function name after the closing bracket may be
2771enabled for functions whose start is in the trace buffer,
2772allowing easier searching with grep for function durations.
2773It is default disabled.
2774
2775	- hide: echo nofuncgraph-tail > trace_options
2776	- show: echo funcgraph-tail > trace_options
2777
2778  Example with nofuncgraph-tail (default)::
2779
2780    0)               |      putname() {
2781    0)               |        kmem_cache_free() {
2782    0)   0.518 us    |          __phys_addr();
2783    0)   1.757 us    |        }
2784    0)   2.861 us    |      }
2785
2786  Example with funcgraph-tail::
2787
2788    0)               |      putname() {
2789    0)               |        kmem_cache_free() {
2790    0)   0.518 us    |          __phys_addr();
2791    0)   1.757 us    |        } /* kmem_cache_free() */
2792    0)   2.861 us    |      } /* putname() */
2793
The return value of each traced function can be displayed after
an equal sign "=". When a system call fails, this makes it easy
to quickly locate the function that first returned an error
code.
2798
2799	- hide: echo nofuncgraph-retval > trace_options
2800	- show: echo funcgraph-retval > trace_options
2801
2802  Example with funcgraph-retval::
2803
2804    1)               |    cgroup_migrate() {
2805    1)   0.651 us    |      cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
2806    1)               |      cgroup_migrate_execute() {
2807    1)               |        cpu_cgroup_can_attach() {
2808    1)               |          cgroup_taskset_first() {
2809    1)   0.732 us    |            cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
2810    1)   1.232 us    |          } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
2811    1)   0.380 us    |          sched_rt_can_attach(); /* = 0x0 */
2812    1)   2.335 us    |        } /* cpu_cgroup_can_attach = -22 */
2813    1)   4.369 us    |      } /* cgroup_migrate_execute = -22 */
2814    1)   7.143 us    |    } /* cgroup_migrate = -22 */
2815
The above example shows that cpu_cgroup_can_attach was the first
function to return the error code -22; reading the code of that
function then points to the root cause.
2819
When the option funcgraph-retval-hex is not set, the return value is
displayed in a smart way. Specifically, if it is an error code,
it will be printed in signed decimal format, otherwise it will be
printed in hexadecimal format.
2824
2825	- smart: echo nofuncgraph-retval-hex > trace_options
2826	- hexadecimal: echo funcgraph-retval-hex > trace_options
2827
2828  Example with funcgraph-retval-hex::
2829
2830    1)               |      cgroup_migrate() {
2831    1)   0.651 us    |        cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
2832    1)               |        cgroup_migrate_execute() {
2833    1)               |          cpu_cgroup_can_attach() {
2834    1)               |            cgroup_taskset_first() {
2835    1)   0.732 us    |              cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
2836    1)   1.232 us    |            } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
2837    1)   0.380 us    |            sched_rt_can_attach(); /* = 0x0 */
2838    1)   2.335 us    |          } /* cpu_cgroup_can_attach = 0xffffffea */
2839    1)   4.369 us    |        } /* cgroup_migrate_execute = 0xffffffea */
2840    1)   7.143 us    |      } /* cgroup_migrate = 0xffffffea */
2841
2842At present, there are some limitations when using the funcgraph-retval
2843option, and these limitations will be eliminated in the future:
2844
2845- Even if the function return type is void, a return value will still
2846  be printed, and you can just ignore it.
2847
2848- Even if return values are stored in multiple registers, only the
2849  value contained in the first register will be recorded and printed.
2850  To illustrate, in the x86 architecture, eax and edx are used to store
2851  a 64-bit return value, with the lower 32 bits saved in eax and the
2852  upper 32 bits saved in edx. However, only the value stored in eax
2853  will be recorded and printed.
2854
2855- In certain procedure call standards, such as arm64's AAPCS64, when a
2856  type is smaller than a GPR, it is the responsibility of the consumer
2857  to perform the narrowing, and the upper bits may contain UNKNOWN values.
2858  Therefore, it is advisable to check the code for such cases. For instance,
2859  when using a u8 in a 64-bit GPR, bits [63:8] may contain arbitrary values,
2860  especially when larger types are truncated, whether explicitly or implicitly.
2861  Here are some specific cases to illustrate this point:
2862
2863  **Case One**:
2864
2865  The function narrow_to_u8 is defined as follows::
2866
2867	u8 narrow_to_u8(u64 val)
2868	{
2869		// implicitly truncated
2870		return val;
2871	}
2872
2873  It may be compiled to::
2874
2875	narrow_to_u8:
2876		< ... ftrace instrumentation ... >
2877		RET
2878
2879  If you pass 0x123456789abcdef to this function and want to narrow it,
2880  it may be recorded as 0x123456789abcdef instead of 0xef.
2881
2882  **Case Two**:
2883
2884  The function error_if_not_4g_aligned is defined as follows::
2885
2886	int error_if_not_4g_aligned(u64 val)
2887	{
2888		if (val & GENMASK(31, 0))
2889			return -EINVAL;
2890
2891		return 0;
2892	}
2893
2894  It could be compiled to::
2895
2896	error_if_not_4g_aligned:
2897		CBNZ    w0, .Lnot_aligned
2898		RET			// bits [31:0] are zero, bits
2899					// [63:32] are UNKNOWN
2900	.Lnot_aligned:
2901		MOV    x0, #-EINVAL
2902		RET
2903
2904  When passing 0x2_0000_0000 to it, the return value may be recorded as
2905  0x2_0000_0000 instead of 0.
2906
You can put some comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep()::
2911
	trace_printk("I'm a comment!\n");
2913
2914will produce::
2915
2916   1)               |             __might_sleep() {
2917   1)               |                /* I'm a comment! */
2918   1)   1.449 us    |             }
2919
2920
2921You might find other useful features for this tracer in the
2922following "dynamic ftrace" section such as tracing only specific
2923functions or tasks.
2924
2925dynamic ftrace
2926--------------
2927
If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc)
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch in the compiling of the kernel.)
2934
At compile time every C file object is run through the
recordmcount program (located in the scripts directory). This
program will parse the ELF headers in the C object to find all
the locations in the .text section that call mcount. Starting
with gcc version 4.6, the -mfentry option has been added for x86,
which calls "__fentry__" instead of "mcount". __fentry__ is
called before the stack frame is set up.
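
To see what this instrumentation looks like, you can compile a trivial
function with and without -mfentry. This is only a sketch: the exact
assembly, and whether "mcount" or "__fentry__" is emitted, depends on
the compiler version, options, and architecture::

  # echo 'int foo(int x) { return x + 1; }' > foo.c
  # gcc -S -pg foo.c && grep -E 'mcount|fentry' foo.s
          call    mcount
  # gcc -S -pg -mfentry foo.c && grep -E 'mcount|fentry' foo.s
          call    __fentry__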
2942
Note, not all functions are traced. Tracing may be prevented by a
notrace annotation or blocked some other way, and inline functions
are never traced. Check the "available_filter_functions" file to see
which functions can be traced.
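
For example, to check whether a particular function is available for
tracing (a sketch; the set of functions depends on your kernel and
configuration)::

  # grep '^schedule$' available_filter_functions
  schedule

If the function is inlined, annotated notrace, or otherwise excluded,
the grep prints nothing.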
2947
2948A section called "__mcount_loc" is created that holds
2949references to all the mcount/fentry call sites in the .text section.
2950The recordmcount program re-links this section back into the
2951original object. The final linking stage of the kernel will add all these
2952references into a single table.
2953
2954On boot up, before SMP is initialized, the dynamic ftrace code
2955scans this table and updates all the locations into nops. It
2956also records the locations, which are added to the
2957available_filter_functions list.  Modules are processed as they
2958are loaded and before they are executed.  When a module is
2959unloaded, it also removes its functions from the ftrace function
2960list. This is automatic in the module unload code, and the
2961module author does not need to worry about it.
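
Functions that belong to a loaded module are listed with the module
name in brackets, so they can be checked the same way. A sketch,
assuming the ext4 module is loaded (the actual output will vary)::

  # grep ' \[ext4\]$' available_filter_functions
  ext4_has_free_clusters [ext4]
  ext4_get_group_desc [ext4]
  [...]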
2962
2963When tracing is enabled, the process of modifying the function
2964tracepoints is dependent on architecture. The old method is to use
2965kstop_machine to prevent races with the CPUs executing code being
2966modified (which can cause the CPU to do undesirable things, especially
2967if the modified code crosses cache (or page) boundaries), and the nops are
2968patched back to calls. But this time, they do not call mcount
2969(which is just a function stub). They now call into the ftrace
2970infrastructure.
2971
The new method of modifying the function tracepoints is to place
a breakpoint at the location to be modified, sync all CPUs, modify
the rest of the instruction not covered by the breakpoint, sync
all CPUs again, and then remove the breakpoint, completing the
new call at the ftrace call site.
2977
2978Some archs do not even need to monkey around with the synchronization,
2979and can just slap the new code on top of the old without any
2980problems with other CPUs executing it at the same time.
2981
One special side-effect of recording the functions being
traced is that we can now selectively choose which functions we
wish to trace and which ones we want the mcount calls to remain
as nops.
2986
2987Two files are used, one for enabling and one for disabling the
2988tracing of specified functions. They are:
2989
2990  set_ftrace_filter
2991
2992and
2993
2994  set_ftrace_notrace
2995
2996A list of available functions that you can add to these files is
2997listed in:
2998
2999   available_filter_functions
3000
3001::
3002
3003  # cat available_filter_functions
3004  put_prev_task_idle
3005  kmem_cache_create
3006  pick_next_task_rt
3007  cpus_read_lock
3008  pick_next_task_fair
3009  mutex_lock
3010  [...]
3011
3012If I am only interested in sys_nanosleep and hrtimer_interrupt::
3013
3014  # echo sys_nanosleep hrtimer_interrupt > set_ftrace_filter
3015  # echo function > current_tracer
3016  # echo 1 > tracing_on
3017  # usleep 1
3018  # echo 0 > tracing_on
3019  # cat trace
3020  # tracer: function
3021  #
3022  # entries-in-buffer/entries-written: 5/5   #P:4
3023  #
3024  #                              _-----=> irqs-off
3025  #                             / _----=> need-resched
3026  #                            | / _---=> hardirq/softirq
3027  #                            || / _--=> preempt-depth
3028  #                            ||| /     delay
3029  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3030  #              | |       |   ||||       |         |
3031            usleep-2665  [001] ....  4186.475355: sys_nanosleep <-system_call_fastpath
3032            <idle>-0     [001] d.h1  4186.475409: hrtimer_interrupt <-smp_apic_timer_interrupt
3033            usleep-2665  [001] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
3034            <idle>-0     [003] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
3035            <idle>-0     [002] d.h1  4186.475427: hrtimer_interrupt <-smp_apic_timer_interrupt
3036
3037To see which functions are being traced, you can cat the file:
3038::
3039
3040  # cat set_ftrace_filter
3041  hrtimer_interrupt
3042  sys_nanosleep
3043
3044
3045Perhaps this is not enough. The filters also allow glob(7) matching.
3046
3047  ``<match>*``
3048	will match functions that begin with <match>
3049  ``*<match>``
3050	will match functions that end with <match>
3051  ``*<match>*``
3052	will match functions that have <match> in it
3053  ``<match1>*<match2>``
3054	will match functions that begin with <match1> and end with <match2>
3055
3056.. note::
3057      It is better to use quotes to enclose the wild cards,
3058      otherwise the shell may expand the parameters into names
3059      of files in the local directory.
3060
3061::
3062
3063  # echo 'hrtimer_*' > set_ftrace_filter
3064
3065Produces::
3066
3067  # tracer: function
3068  #
3069  # entries-in-buffer/entries-written: 897/897   #P:4
3070  #
3071  #                              _-----=> irqs-off
3072  #                             / _----=> need-resched
3073  #                            | / _---=> hardirq/softirq
3074  #                            || / _--=> preempt-depth
3075  #                            ||| /     delay
3076  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3077  #              | |       |   ||||       |         |
3078            <idle>-0     [003] dN.1  4228.547803: hrtimer_cancel <-tick_nohz_idle_exit
3079            <idle>-0     [003] dN.1  4228.547804: hrtimer_try_to_cancel <-hrtimer_cancel
3080            <idle>-0     [003] dN.2  4228.547805: hrtimer_force_reprogram <-__remove_hrtimer
3081            <idle>-0     [003] dN.1  4228.547805: hrtimer_forward <-tick_nohz_idle_exit
3082            <idle>-0     [003] dN.1  4228.547805: hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
3083            <idle>-0     [003] d..1  4228.547858: hrtimer_get_next_event <-get_next_timer_interrupt
3084            <idle>-0     [003] d..1  4228.547859: hrtimer_start <-__tick_nohz_idle_enter
3085            <idle>-0     [003] d..2  4228.547860: hrtimer_force_reprogram <-__rem
3086
3087Notice that we lost the sys_nanosleep.
3088::
3089
3090  # cat set_ftrace_filter
3091  hrtimer_run_queues
3092  hrtimer_run_pending
3093  hrtimer_setup
3094  hrtimer_cancel
3095  hrtimer_try_to_cancel
3096  hrtimer_forward
3097  hrtimer_start
3098  hrtimer_reprogram
3099  hrtimer_force_reprogram
3100  hrtimer_get_next_event
3101  hrtimer_interrupt
3102  hrtimer_nanosleep
3103  hrtimer_wakeup
3104  hrtimer_get_remaining
3105  hrtimer_get_res
3106  hrtimer_init_sleeper
3107
3108
This is because the '>' and '>>' act just like they do in bash.
To overwrite the filters, use '>'.
To append to the filters, use '>>'.
3112
3113To clear out a filter so that all functions will be recorded
3114again::
3115
3116 # echo > set_ftrace_filter
3117 # cat set_ftrace_filter
3118 #
3119
Now, let's use '>>' to append.
3121
3122::
3123
3124  # echo sys_nanosleep > set_ftrace_filter
3125  # cat set_ftrace_filter
3126  sys_nanosleep
3127  # echo 'hrtimer_*' >> set_ftrace_filter
3128  # cat set_ftrace_filter
3129  hrtimer_run_queues
3130  hrtimer_run_pending
3131  hrtimer_setup
3132  hrtimer_cancel
3133  hrtimer_try_to_cancel
3134  hrtimer_forward
3135  hrtimer_start
3136  hrtimer_reprogram
3137  hrtimer_force_reprogram
3138  hrtimer_get_next_event
3139  hrtimer_interrupt
3140  sys_nanosleep
3141  hrtimer_nanosleep
3142  hrtimer_wakeup
3143  hrtimer_get_remaining
3144  hrtimer_get_res
3145  hrtimer_init_sleeper
3146
3147
The set_ftrace_notrace file prevents the listed functions from being
traced.
3150::
3151
3152  # echo '*preempt*' '*lock*' > set_ftrace_notrace
3153
3154Produces::
3155
3156  # tracer: function
3157  #
3158  # entries-in-buffer/entries-written: 39608/39608   #P:4
3159  #
3160  #                              _-----=> irqs-off
3161  #                             / _----=> need-resched
3162  #                            | / _---=> hardirq/softirq
3163  #                            || / _--=> preempt-depth
3164  #                            ||| /     delay
3165  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3166  #              | |       |   ||||       |         |
3167              bash-1994  [000] ....  4342.324896: file_ra_state_init <-do_dentry_open
3168              bash-1994  [000] ....  4342.324897: open_check_o_direct <-do_last
3169              bash-1994  [000] ....  4342.324897: ima_file_check <-do_last
3170              bash-1994  [000] ....  4342.324898: process_measurement <-ima_file_check
3171              bash-1994  [000] ....  4342.324898: ima_get_action <-process_measurement
3172              bash-1994  [000] ....  4342.324898: ima_match_policy <-ima_get_action
3173              bash-1994  [000] ....  4342.324899: do_truncate <-do_last
3174              bash-1994  [000] ....  4342.324899: setattr_should_drop_suidgid <-do_truncate
3175              bash-1994  [000] ....  4342.324899: notify_change <-do_truncate
3176              bash-1994  [000] ....  4342.324900: current_fs_time <-notify_change
3177              bash-1994  [000] ....  4342.324900: current_kernel_time <-current_fs_time
3178              bash-1994  [000] ....  4342.324900: timespec_trunc <-current_fs_time
3179
3180We can see that there's no more lock or preempt tracing.
3181
3182Selecting function filters via index
3183------------------------------------
3184
Because processing of strings is expensive (the address of the function
needs to be looked up before comparing to the string being passed in),
an index can be used as well to enable functions. This is useful in the
case of setting thousands of specific functions at a time. By passing
in a list of numbers, no string processing will occur. Instead, the function
at the specific location in the internal array (which corresponds to the
functions in the "available_filter_functions" file) is selected.
3192
3193::
3194
3195  # echo 1 > set_ftrace_filter
3196
This will select the first function listed in "available_filter_functions".
3198
3199::
3200
3201  # head -1 available_filter_functions
3202  trace_initcall_finish_cb
3203
3204  # cat set_ftrace_filter
3205  trace_initcall_finish_cb
3206
3207  # head -50 available_filter_functions | tail -1
3208  x86_pmu_commit_txn
3209
3210  # echo 1 50 > set_ftrace_filter
3211  # cat set_ftrace_filter
3212  trace_initcall_finish_cb
3213  x86_pmu_commit_txn
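
Since an index is simply the line number of the function in
"available_filter_functions", a large list of indices can be generated
with ordinary text tools. A sketch (the pattern is only an example)::

  # echo $(grep -n 'hrtimer' available_filter_functions | cut -d: -f1) > set_ftrace_filter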
3214
3215Dynamic ftrace with the function graph tracer
3216---------------------------------------------
3217
Although what has been explained above concerns both the
function tracer and the function-graph tracer, there are some
special features only available in the function-graph tracer.
3221
3222If you want to trace only one function and all of its children,
3223you just have to echo its name into set_graph_function::
3224
3225 echo __do_fault > set_graph_function
3226
3227will produce the following "expanded" trace of the __do_fault()
3228function::
3229
3230   0)               |  __do_fault() {
3231   0)               |    filemap_fault() {
3232   0)               |      find_lock_page() {
3233   0)   0.804 us    |        find_get_page();
3234   0)               |        __might_sleep() {
3235   0)   1.329 us    |        }
3236   0)   3.904 us    |      }
3237   0)   4.979 us    |    }
3238   0)   0.653 us    |    _spin_lock();
3239   0)   0.578 us    |    page_add_file_rmap();
3240   0)   0.525 us    |    native_set_pte_at();
3241   0)   0.585 us    |    _spin_unlock();
3242   0)               |    unlock_page() {
3243   0)   0.541 us    |      page_waitqueue();
3244   0)   0.639 us    |      __wake_up_bit();
3245   0)   2.786 us    |    }
3246   0) + 14.237 us   |  }
3247   0)               |  __do_fault() {
3248   0)               |    filemap_fault() {
3249   0)               |      find_lock_page() {
3250   0)   0.698 us    |        find_get_page();
3251   0)               |        __might_sleep() {
3252   0)   1.412 us    |        }
3253   0)   3.950 us    |      }
3254   0)   5.098 us    |    }
3255   0)   0.631 us    |    _spin_lock();
3256   0)   0.571 us    |    page_add_file_rmap();
3257   0)   0.526 us    |    native_set_pte_at();
3258   0)   0.586 us    |    _spin_unlock();
3259   0)               |    unlock_page() {
3260   0)   0.533 us    |      page_waitqueue();
3261   0)   0.638 us    |      __wake_up_bit();
3262   0)   2.793 us    |    }
3263   0) + 14.012 us   |  }
3264
3265You can also expand several functions at once::
3266
3267 echo sys_open > set_graph_function
3268 echo sys_close >> set_graph_function
3269
3270Now if you want to go back to trace all functions you can clear
3271this special filter via::
3272
3273 echo > set_graph_function
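
Putting the above together, a complete session for capturing such an
expanded trace might look like the following sketch (run the workload
of interest between turning tracing on and off; __do_fault is only the
example function used above, and the output depends on your kernel)::

  # echo __do_fault > set_graph_function
  # echo function_graph > current_tracer
  # echo 1 > tracing_on
  # echo 0 > tracing_on
  # cat trace
  # echo > set_graph_function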
3274
3275
3276ftrace_enabled
3277--------------
3278
Note, the proc sysctl ftrace_enabled is a big on/off switch for the
function tracer. By default it is enabled (when function tracing is
enabled in the kernel). If it is disabled, all function tracing is
disabled. This includes not only the function tracers for ftrace, but
also for any other uses (perf, kprobes, stack tracing, profiling, etc.). It
cannot be disabled if there is a callback with FTRACE_OPS_FL_PERMANENT set
registered.
3286
3287Please disable this with care.
3288
This can be disabled (and enabled) with::

  sysctl kernel.ftrace_enabled=0
  sysctl kernel.ftrace_enabled=1

or::

  echo 0 > /proc/sys/kernel/ftrace_enabled
  echo 1 > /proc/sys/kernel/ftrace_enabled
3298
3299
3300Filter commands
3301---------------
3302
3303A few commands are supported by the set_ftrace_filter interface.
3304Trace commands have the following format::
3305
3306  <function>:<command>:<parameter>
3307
3308The following commands are supported:
3309
3310- mod:
3311  This command enables function filtering per module. The
3312  parameter defines the module. For example, if only the write*
  functions in the ext3 module are desired, run::
3314
3315   echo 'write*:mod:ext3' > set_ftrace_filter
3316
3317  This command interacts with the filter in the same way as
3318  filtering based on function names. Thus, adding more functions
3319  in a different module is accomplished by appending (>>) to the
3320  filter file. Remove specific module functions by prepending
3321  '!'::
3322
3323   echo '!writeback*:mod:ext3' >> set_ftrace_filter
3324
  The mod command supports module globbing. To disable tracing for all
  functions except those in a specific module::
3327
3328   echo '!*:mod:!ext3' >> set_ftrace_filter
3329
  To disable tracing for all modules, but still trace the kernel::
3331
3332   echo '!*:mod:*' >> set_ftrace_filter
3333
  To enable the filter only for the kernel::
3335
3336   echo '*write*:mod:!*' >> set_ftrace_filter
3337
  To enable the filter for modules matching a glob::
3339
3340   echo '*write*:mod:*snd*' >> set_ftrace_filter
3341
3342- traceon/traceoff:
3343  These commands turn tracing on and off when the specified
3344  functions are hit. The parameter determines how many times the
3345  tracing system is turned on and off. If unspecified, there is
3346  no limit. For example, to disable tracing when a schedule bug
3347  is hit the first 5 times, run::
3348
3349   echo '__schedule_bug:traceoff:5' > set_ftrace_filter
3350
3351  To always disable tracing when __schedule_bug is hit::
3352
3353   echo '__schedule_bug:traceoff' > set_ftrace_filter
3354
  These commands are cumulative whether or not they are appended
  to set_ftrace_filter. To remove a command, prefix it with '!'
  and drop the parameter::
3358
3359   echo '!__schedule_bug:traceoff:0' > set_ftrace_filter
3360
  The above removes the traceoff command for __schedule_bug
  that was added with a counter. To remove commands without counters::
3363
3364   echo '!__schedule_bug:traceoff' > set_ftrace_filter
3365
3366- snapshot:
3367  Will cause a snapshot to be triggered when the function is hit.
3368  ::
3369
3370   echo 'native_flush_tlb_others:snapshot' > set_ftrace_filter
3371
3372  To only snapshot once:
3373  ::
3374
3375   echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter
3376
3377  To remove the above commands::
3378
3379   echo '!native_flush_tlb_others:snapshot' > set_ftrace_filter
3380   echo '!native_flush_tlb_others:snapshot:0' > set_ftrace_filter
3381
3382- enable_event/disable_event:
3383  These commands can enable or disable a trace event. Note, because
3384  function tracing callbacks are very sensitive, when these commands
3385  are registered, the trace point is activated, but disabled in
3386  a "soft" mode. That is, the tracepoint will be called, but
3387  just will not be traced. The event tracepoint stays in this mode
3388  as long as there's a command that triggers it.
3389  ::
3390
3391   echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > \
3392   	 set_ftrace_filter
3393
3394  The format is::
3395
3396    <function>:enable_event:<system>:<event>[:count]
3397    <function>:disable_event:<system>:<event>[:count]
3398
  To remove the event commands::
3400
3401   echo '!try_to_wake_up:enable_event:sched:sched_switch:0' > \
3402   	 set_ftrace_filter
3403   echo '!schedule:disable_event:sched:sched_switch' > \
3404   	 set_ftrace_filter
3405
3406- dump:
3407  When the function is hit, it will dump the contents of the ftrace
3408  ring buffer to the console. This is useful if you need to debug
3409  something, and want to dump the trace when a certain function
3410  is hit. Perhaps it's a function that is called before a triple
3411  fault happens and does not allow you to get a regular dump.
3412
3413- cpudump:
3414  When the function is hit, it will dump the contents of the ftrace
3415  ring buffer for the current CPU to the console. Unlike the "dump"
3416  command, it only prints out the contents of the ring buffer for the
3417  CPU that executed the function that triggered the dump.
3418
- stacktrace:
  When the function is hit, a stack trace is recorded. (A short
  combined example of the dump, cpudump, and stacktrace commands
  follows below.)
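
The dump, cpudump, and stacktrace commands use the same
<function>:<command> syntax as the commands above. A minimal sketch
(the function names are only illustrative; pick real ones from
available_filter_functions)::

  # echo '__schedule_bug:dump' > set_ftrace_filter
  # echo 'native_flush_tlb_others:cpudump' >> set_ftrace_filter
  # echo 'kfree:stacktrace' >> set_ftrace_filter

As with the other commands, they are removed by prefixing the line
with '!'.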
3421
3422trace_pipe
3423----------
3424
3425The trace_pipe outputs the same content as the trace file, but
3426the effect on the tracing is different. Every read from
3427trace_pipe is consumed. This means that subsequent reads will be
3428different. The trace is live.
3429::
3430
3431  # echo function > current_tracer
3432  # cat trace_pipe > /tmp/trace.out &
3433  [1] 4153
3434  # echo 1 > tracing_on
3435  # usleep 1
3436  # echo 0 > tracing_on
3437  # cat trace
3438  # tracer: function
3439  #
3440  # entries-in-buffer/entries-written: 0/0   #P:4
3441  #
3442  #                              _-----=> irqs-off
3443  #                             / _----=> need-resched
3444  #                            | / _---=> hardirq/softirq
3445  #                            || / _--=> preempt-depth
3446  #                            ||| /     delay
3447  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3448  #              | |       |   ||||       |         |
3449
3450  #
3451  # cat /tmp/trace.out
3452             bash-1994  [000] ....  5281.568961: mutex_unlock <-rb_simple_write
3453             bash-1994  [000] ....  5281.568963: __mutex_unlock_slowpath <-mutex_unlock
3454             bash-1994  [000] ....  5281.568963: __fsnotify_parent <-fsnotify_modify
3455             bash-1994  [000] ....  5281.568964: fsnotify <-fsnotify_modify
3456             bash-1994  [000] ....  5281.568964: __srcu_read_lock <-fsnotify
3457             bash-1994  [000] ....  5281.568964: add_preempt_count <-__srcu_read_lock
3458             bash-1994  [000] ...1  5281.568965: sub_preempt_count <-__srcu_read_lock
3459             bash-1994  [000] ....  5281.568965: __srcu_read_unlock <-fsnotify
3460             bash-1994  [000] ....  5281.568967: sys_dup2 <-system_call_fastpath
3461
3462
3463Note, reading the trace_pipe file will block until more input is
3464added. This is contrary to the trace file. If any process opened
3465the trace file for reading, it will actually disable tracing and
3466prevent new entries from being added. The trace_pipe file does
3467not have this limitation.
3468
3469trace entries
3470-------------
3471
Having too much or not enough data can be troublesome in
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the size in kilobytes that can be recorded per
CPU. To know the full size, multiply the number of possible CPUs
by that number.
3478::
3479
3480  # cat buffer_size_kb
3481  1408 (units kilobytes)
3482
3483Or simply read buffer_total_size_kb
3484::
3485
3486  # cat buffer_total_size_kb
3487  5632
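
In the example above there are 4 possible CPUs (shown as #P:4 in the
trace headers), so the total is 4 * 1408 = 5632 kilobytes, which
matches buffer_total_size_kb.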
3488
To modify the buffer, simply echo in a number (in 1024 byte segments).
3490::
3491
3492  # echo 10000 > buffer_size_kb
3493  # cat buffer_size_kb
3494  10000 (units kilobytes)
3495
It will try to allocate as much as possible. If you allocate too
much, it can cause the Out-Of-Memory killer to trigger.
3498::
3499
3500  # echo 1000000000000 > buffer_size_kb
3501  -bash: echo: write error: Cannot allocate memory
3502  # cat buffer_size_kb
3503  85
3504
3505The per_cpu buffers can be changed individually as well:
3506::
3507
3508  # echo 10000 > per_cpu/cpu0/buffer_size_kb
3509  # echo 100 > per_cpu/cpu1/buffer_size_kb
3510
When the per_cpu buffers are not the same, the buffer_size_kb
at the top level will just show an X.
3513::
3514
3515  # cat buffer_size_kb
3516  X
3517
3518This is where the buffer_total_size_kb is useful:
3519::
3520
3521  # cat buffer_total_size_kb
3522  12916
3523
3524Writing to the top level buffer_size_kb will reset all the buffers
3525to be the same again.
3526
3527Snapshot
3528--------
CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
available to all non-latency tracers. (Latency tracers which
record max latency, such as "irqsoff" or "wakeup", can't use
this feature, since those are already using the snapshot
mechanism internally.)
3534
3535Snapshot preserves a current trace buffer at a particular point
3536in time without stopping tracing. Ftrace swaps the current
3537buffer with a spare buffer, and tracing continues in the new
3538current (=previous spare) buffer.
3539
3540The following tracefs files in "tracing" are related to this
3541feature:
3542
3543  snapshot:
3544
	This is used to take a snapshot and to read the output
	of the snapshot. Echo 1 into this file to allocate a
	spare buffer and to take a snapshot (swap), then read
	the snapshot from this file in the same format as
	"trace" (described above in the section "The File
	System"). Reading the snapshot and tracing can be done
	in parallel. When the spare buffer is allocated, echoing
	0 frees it, and echoing any other (positive) value clears
	the snapshot contents.
	More details are shown in the table below.
3555
3556	+--------------+------------+------------+------------+
3557	|status\\input |     0      |     1      |    else    |
3558	+==============+============+============+============+
3559	|not allocated |(do nothing)| alloc+swap |(do nothing)|
3560	+--------------+------------+------------+------------+
3561	|allocated     |    free    |    swap    |   clear    |
3562	+--------------+------------+------------+------------+
3563
3564Here is an example of using the snapshot feature.
3565::
3566
3567  # echo 1 > events/sched/enable
3568  # echo 1 > snapshot
3569  # cat snapshot
3570  # tracer: nop
3571  #
3572  # entries-in-buffer/entries-written: 71/71   #P:8
3573  #
3574  #                              _-----=> irqs-off
3575  #                             / _----=> need-resched
3576  #                            | / _---=> hardirq/softirq
3577  #                            || / _--=> preempt-depth
3578  #                            ||| /     delay
3579  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3580  #              | |       |   ||||       |         |
3581            <idle>-0     [005] d...  2440.603828: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120   prev_state=R ==> next_comm=snapshot-test-2 next_pid=2242 next_prio=120
3582             sleep-2242  [005] d...  2440.603846: sched_switch: prev_comm=snapshot-test-2 prev_pid=2242 prev_prio=120   prev_state=R ==> next_comm=kworker/5:1 next_pid=60 next_prio=120
3583  [...]
3584          <idle>-0     [002] d...  2440.707230: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2229 next_prio=120
3585
3586  # cat trace
3587  # tracer: nop
3588  #
3589  # entries-in-buffer/entries-written: 77/77   #P:8
3590  #
3591  #                              _-----=> irqs-off
3592  #                             / _----=> need-resched
3593  #                            | / _---=> hardirq/softirq
3594  #                            || / _--=> preempt-depth
3595  #                            ||| /     delay
3596  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3597  #              | |       |   ||||       |         |
3598            <idle>-0     [007] d...  2440.707395: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2243 next_prio=120
3599   snapshot-test-2-2229  [002] d...  2440.707438: sched_switch: prev_comm=snapshot-test-2 prev_pid=2229 prev_prio=120 prev_state=S ==> next_comm=swapper/2 next_pid=0 next_prio=120
3600  [...]
3601
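Following the table above, once you are done with the snapshot, its
contents can be cleared and the spare buffer freed::

  # echo 2 > snapshot
  # echo 0 > snapshot

The first write (any value other than 0 or 1) clears the snapshot
contents; the second frees the spare buffer.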
3602
If you try to use this snapshot feature when the current tracer is
one of the latency tracers, you will get the following results.
3605::
3606
3607  # echo wakeup > current_tracer
3608  # echo 1 > snapshot
3609  bash: echo: write error: Device or resource busy
3610  # cat snapshot
3611  cat: snapshot: Device or resource busy
3612
3613
3614Instances
3615---------
In the tracefs tracing directory, there is a directory called "instances".
New directories can be created inside of it using mkdir, and removed
with rmdir. A directory created with mkdir here will already contain
files and other directories after it is created.
3621::
3622
3623  # mkdir instances/foo
3624  # ls instances/foo
3625  buffer_size_kb  buffer_total_size_kb  events  free_buffer  per_cpu
3626  set_event  snapshot  trace  trace_clock  trace_marker  trace_options
3627  trace_pipe  tracing_on
3628
As you can see, the new directory looks similar to the tracing directory
itself. In fact, it is very similar, except that its buffer and
events are independent of the main directory, and of any other
instances that are created.
3633
3634The files in the new directory work just like the files with the
3635same name in the tracing directory except the buffer that is used
3636is a separate and new buffer. The files affect that buffer but do not
3637affect the main buffer with the exception of trace_options. Currently,
3638the trace_options affect all instances and the top level buffer
3639the same, but this may change in future releases. That is, options
3640may become specific to the instance they reside in.
3641
Notice that none of the function tracer files are there, nor are
current_tracer and available_tracers. This is because the buffers
can currently only have events enabled for them.
3645::
3646
3647  # mkdir instances/foo
3648  # mkdir instances/bar
3649  # mkdir instances/zoot
3650  # echo 100000 > buffer_size_kb
3651  # echo 1000 > instances/foo/buffer_size_kb
3652  # echo 5000 > instances/bar/per_cpu/cpu1/buffer_size_kb
  # echo function > current_tracer
3654  # echo 1 > instances/foo/events/sched/sched_wakeup/enable
3655  # echo 1 > instances/foo/events/sched/sched_wakeup_new/enable
3656  # echo 1 > instances/foo/events/sched/sched_switch/enable
3657  # echo 1 > instances/bar/events/irq/enable
3658  # echo 1 > instances/zoot/events/syscalls/enable
3659  # cat trace_pipe
3660  CPU:2 [LOST 11745 EVENTS]
3661              bash-2044  [002] .... 10594.481032: _raw_spin_lock_irqsave <-get_page_from_freelist
3662              bash-2044  [002] d... 10594.481032: add_preempt_count <-_raw_spin_lock_irqsave
3663              bash-2044  [002] d..1 10594.481032: __rmqueue <-get_page_from_freelist
3664              bash-2044  [002] d..1 10594.481033: _raw_spin_unlock <-get_page_from_freelist
3665              bash-2044  [002] d..1 10594.481033: sub_preempt_count <-_raw_spin_unlock
3666              bash-2044  [002] d... 10594.481033: get_pageblock_flags_group <-get_pageblock_migratetype
3667              bash-2044  [002] d... 10594.481034: __mod_zone_page_state <-get_page_from_freelist
3668              bash-2044  [002] d... 10594.481034: zone_statistics <-get_page_from_freelist
3669              bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
3670              bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
3671              bash-2044  [002] .... 10594.481035: arch_dup_task_struct <-copy_process
3672  [...]
3673
3674  # cat instances/foo/trace_pipe
3675              bash-1998  [000] d..4   136.676759: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
3676              bash-1998  [000] dN.4   136.676760: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
3677            <idle>-0     [003] d.h3   136.676906: sched_wakeup: comm=rcu_preempt pid=9 prio=120 success=1 target_cpu=003
3678            <idle>-0     [003] d..3   136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_preempt next_pid=9 next_prio=120
3679       rcu_preempt-9     [003] d..3   136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120
3680              bash-1998  [000] d..4   136.677014: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
3681              bash-1998  [000] dN.4   136.677016: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
3682              bash-1998  [000] d..3   136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_state=R+ ==> next_comm=kworker/0:1 next_pid=59 next_prio=120
3683       kworker/0:1-59    [000] d..4   136.677022: sched_wakeup: comm=sshd pid=1995 prio=120 success=1 target_cpu=001
3684       kworker/0:1-59    [000] d..3   136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_prio=120 prev_state=S ==> next_comm=bash next_pid=1998 next_prio=120
3685  [...]
3686
3687  # cat instances/bar/trace_pipe
3688       migration/1-14    [001] d.h3   138.732674: softirq_raise: vec=3 [action=NET_RX]
3689            <idle>-0     [001] dNh3   138.732725: softirq_raise: vec=3 [action=NET_RX]
3690              bash-1998  [000] d.h1   138.733101: softirq_raise: vec=1 [action=TIMER]
3691              bash-1998  [000] d.h1   138.733102: softirq_raise: vec=9 [action=RCU]
3692              bash-1998  [000] ..s2   138.733105: softirq_entry: vec=1 [action=TIMER]
3693              bash-1998  [000] ..s2   138.733106: softirq_exit: vec=1 [action=TIMER]
3694              bash-1998  [000] ..s2   138.733106: softirq_entry: vec=9 [action=RCU]
3695              bash-1998  [000] ..s2   138.733109: softirq_exit: vec=9 [action=RCU]
3696              sshd-1995  [001] d.h1   138.733278: irq_handler_entry: irq=21 name=uhci_hcd:usb4
3697              sshd-1995  [001] d.h1   138.733280: irq_handler_exit: irq=21 ret=unhandled
3698              sshd-1995  [001] d.h1   138.733281: irq_handler_entry: irq=21 name=eth0
3699              sshd-1995  [001] d.h1   138.733283: irq_handler_exit: irq=21 ret=handled
3700  [...]
3701
3702  # cat instances/zoot/trace
3703  # tracer: nop
3704  #
3705  # entries-in-buffer/entries-written: 18996/18996   #P:4
3706  #
3707  #                              _-----=> irqs-off
3708  #                             / _----=> need-resched
3709  #                            | / _---=> hardirq/softirq
3710  #                            || / _--=> preempt-depth
3711  #                            ||| /     delay
3712  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3713  #              | |       |   ||||       |         |
3714              bash-1998  [000] d...   140.733501: sys_write -> 0x2
3715              bash-1998  [000] d...   140.733504: sys_dup2(oldfd: a, newfd: 1)
3716              bash-1998  [000] d...   140.733506: sys_dup2 -> 0x1
3717              bash-1998  [000] d...   140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
3718              bash-1998  [000] d...   140.733509: sys_fcntl -> 0x1
3719              bash-1998  [000] d...   140.733510: sys_close(fd: a)
3720              bash-1998  [000] d...   140.733510: sys_close -> 0x0
3721              bash-1998  [000] d...   140.733514: sys_rt_sigprocmask(how: 0, nset: 0, oset: 6e2768, sigsetsize: 8)
3722              bash-1998  [000] d...   140.733515: sys_rt_sigprocmask -> 0x0
3723              bash-1998  [000] d...   140.733516: sys_rt_sigaction(sig: 2, act: 7fff718846f0, oact: 7fff71884650, sigsetsize: 8)
3724              bash-1998  [000] d...   140.733516: sys_rt_sigaction -> 0x0
3725
You can see that the trace of the topmost trace buffer shows only
the function tracing. The foo instance displays wakeups and task
switches.
3729
3730To remove the instances, simply delete their directories:
3731::
3732
3733  # rmdir instances/foo
3734  # rmdir instances/bar
3735  # rmdir instances/zoot
3736
3737Note, if a process has a trace file open in one of the instance
3738directories, the rmdir will fail with EBUSY.
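
For example, if another shell still has instances/foo/trace_pipe open,
the removal fails (the exact error text depends on your rmdir
implementation)::

  # rmdir instances/foo
  rmdir: failed to remove 'instances/foo': Device or resource busy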
3739
3740
3741Stack trace
3742-----------
Since the kernel has a fixed-size stack, it is important not to
waste it in functions. A kernel developer must be conscious of
what they allocate on the stack. If they add too much, the system
can be in danger of a stack overflow, and corruption will occur,
usually leading to a system panic.
3748
There are some tools that check this, usually by periodically
sampling usage from an interrupt. But being able to perform a check
at every function call is far more useful. Since ftrace provides
a function tracer, it is convenient to check the stack size
at every function call. This is enabled via the stack tracer.
3754
3755CONFIG_STACK_TRACER enables the ftrace stack tracing functionality.
3756To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
3757::
3758
3759 # echo 1 > /proc/sys/kernel/stack_tracer_enabled
3760
You can also enable it from the kernel command line to trace
the stack size of the kernel during boot up, by adding "stacktrace"
to the kernel command line.
3764
3765After running it for a few minutes, the output looks like:
3766::
3767
3768  # cat stack_max_size
3769  2928
3770
3771  # cat stack_trace
3772          Depth    Size   Location    (18 entries)
3773          -----    ----   --------
3774    0)     2928     224   update_sd_lb_stats+0xbc/0x4ac
3775    1)     2704     160   find_busiest_group+0x31/0x1f1
3776    2)     2544     256   load_balance+0xd9/0x662
3777    3)     2288      80   idle_balance+0xbb/0x130
3778    4)     2208     128   __schedule+0x26e/0x5b9
3779    5)     2080      16   schedule+0x64/0x66
3780    6)     2064     128   schedule_timeout+0x34/0xe0
3781    7)     1936     112   wait_for_common+0x97/0xf1
3782    8)     1824      16   wait_for_completion+0x1d/0x1f
3783    9)     1808     128   flush_work+0xfe/0x119
3784   10)     1680      16   tty_flush_to_ldisc+0x1e/0x20
3785   11)     1664      48   input_available_p+0x1d/0x5c
3786   12)     1616      48   n_tty_poll+0x6d/0x134
3787   13)     1568      64   tty_poll+0x64/0x7f
3788   14)     1504     880   do_select+0x31e/0x511
3789   15)      624     400   core_sys_select+0x177/0x216
3790   16)      224      96   sys_select+0x91/0xb9
3791   17)      128     128   system_call_fastpath+0x16/0x1b
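
To start a fresh measurement, the recorded maximum can be cleared and
the tracer turned off again. A sketch (that writing a 0 to
stack_max_size resets the recorded maximum is an assumption worth
verifying on your kernel)::

  # echo 0 > stack_max_size
  # echo 0 > /proc/sys/kernel/stack_tracer_enabled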
3792
3793Note, if -mfentry is being used by gcc, functions get traced before
3794they set up the stack frame. This means that leaf level functions
3795are not tested by the stack tracer when -mfentry is used.
3796
3797Currently, -mfentry is used by gcc 4.6.0 and above on x86 only.
3798
3799More
3800----
3801More details can be found in the source code, in the `kernel/trace/*.c` files.
3802