.. SPDX-License-Identifier: GPL-2.0

==============================
Using RCU's CPU Stall Detector
==============================

This document first discusses what sorts of issues RCU's CPU stall
detector can locate, and then discusses kernel parameters and Kconfig
options that can be used to fine-tune the detector's operation.  Finally,
this document explains the stall detector's "splat" format.


What Causes RCU CPU Stall Warnings?
===================================

So your kernel printed an RCU CPU stall warning.  The next question is
"What caused it?"  The following problems can result in RCU CPU stall
warnings:

-	A CPU looping in an RCU read-side critical section.

-	A CPU looping with interrupts disabled.

-	A CPU looping with preemption disabled.

-	A CPU looping with bottom halves disabled.

-	For !CONFIG_PREEMPTION kernels, a CPU looping anywhere in the
	kernel without potentially invoking schedule().  If the looping
	in the kernel is really expected and desirable behavior, you
	might need to add some calls to cond_resched(), as shown in the
	sketch following this list.

-	Booting Linux using a console connection that is too slow to
	keep up with the boot-time console-message rate.  For example,
	a 115Kbaud serial console can be *way* too slow to keep up
	with boot-time message rates, and will frequently result in
	RCU CPU stall warning messages, especially if you have added
	debug printk()s.

-	Anything that prevents RCU's grace-period kthreads from running.
	This can result in the "All QSes seen" console-log message.
	This message will include information on when the kthread last
	ran and how often it should be expected to run.  It can also
	result in the ``rcu_.*kthread starved for`` console-log message,
	which will include additional debugging information.

-	A CPU-bound real-time task in a CONFIG_PREEMPTION kernel, which might
	happen to preempt a low-priority task in the middle of an RCU
	read-side critical section.  This is especially damaging if
	that low-priority task is not permitted to run on any other CPU,
	in which case the next RCU grace period can never complete, which
	will eventually cause the system to run out of memory and hang.
	While the system is in the process of running itself out of
	memory, you might see stall-warning messages.

-	A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
	is running at a higher priority than the RCU softirq threads.
	This will prevent RCU callbacks from ever being invoked,
	and in a CONFIG_PREEMPT_RCU kernel will further prevent
	RCU grace periods from ever completing.  Either way, the
	system will eventually run out of memory and hang.  In the
	CONFIG_PREEMPT_RCU case, you might see stall-warning
	messages.

	You can use the rcutree.kthread_prio kernel boot parameter to
	increase the scheduling priority of RCU's kthreads, which can
	help avoid this problem.  However, please note that doing this
	can increase your system's context-switch rate and thus degrade
	performance.

-	A periodic interrupt whose handler takes longer than the time
	interval between successive pairs of interrupts.  This can
	prevent RCU's kthreads and softirq handlers from running.
	Note that certain high-overhead debugging options, for example
	the function_graph tracer, can result in interrupt handlers taking
	considerably longer than normal, which can in turn result in
	RCU CPU stall warnings.

-	Testing a workload on a fast system, tuning the stall-warning
	timeout down to just barely avoid RCU CPU stall warnings, and then
	running the same workload with the same stall-warning timeout on a
	slow system.  Note that thermal throttling and on-demand governors
	can cause a single system to be sometimes fast and sometimes slow!

-	A hardware or software issue shuts off the scheduler-clock
	interrupt on a CPU that is not in dyntick-idle mode.  This
	problem really has happened, and seems to be most likely to
	result in RCU CPU stall warnings for CONFIG_NO_HZ_COMMON=n kernels.

-	A hardware or software issue that prevents time-based wakeups
	from occurring.  These issues can range from misconfigured or
	buggy timer hardware through bugs in the interrupt or exception
	path (whether hardware, firmware, or software) through bugs
	in Linux's timer subsystem through bugs in the scheduler, and,
	yes, even including bugs in RCU itself.  It can also result in
	the ``rcu_.*timer wakeup didn't happen for`` console-log message,
	which will include additional debugging information.

-	A low-level kernel issue that either fails to invoke one of the
	variants of rcu_eqs_enter(true), rcu_eqs_exit(true), ct_idle_enter(),
	ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one
	hand, or that invokes one of them too many times on the other.
	Historically, the most frequent issue has been an omission
	of either irq_enter() or irq_exit(), which in turn invoke
	ct_irq_enter() or ct_irq_exit(), respectively.  Building your
	kernel with CONFIG_RCU_EQS_DEBUG=y can help track down these types
	of issues, which sometimes arise in architecture-specific code.

-	A bug in the RCU implementation.

-	A hardware failure.  This is quite unlikely on any given system,
	but is not at all uncommon in a large datacenter.  In one memorable
	case some decades back, a CPU failed in a running system, becoming
	unresponsive, but not causing an immediate crash.  This resulted
	in a series of RCU CPU stall warnings, eventually leading to the
	realization that the CPU had failed.

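Here is a minimal sketch of the cond_resched() approach mentioned in
the !CONFIG_PREEMPTION item above.  The structure and helper names are
hypothetical; the point is that a long-running kernel loop should
periodically reach a point where the scheduler (and thus RCU) can make
progress, and that this point must not be inside an RCU read-side
critical section or a region with preemption or interrupts disabled::

	#include <linux/list.h>
	#include <linux/sched.h>

	struct item {					/* Hypothetical data structure. */
		struct list_head list;
		/* ... payload ... */
	};

	static void do_expensive_work(struct item *p);	/* Hypothetical helper. */

	static void process_all_items(struct list_head *head)
	{
		struct item *p;

		list_for_each_entry(p, head, list) {
			do_expensive_work(p);
			/*
			 * Voluntary scheduling point: not within
			 * rcu_read_lock(), so RCU can make forward
			 * progress even on !CONFIG_PREEMPTION kernels.
			 */
			cond_resched();
		}
	}
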
The RCU, RCU-sched, RCU-tasks, and RCU-tasks-trace implementations have
CPU stall warnings.  Note that SRCU does *not* have CPU stall warnings.
Please note that RCU only detects CPU stalls when there is a grace period
in progress.  No grace period, no CPU stall warnings.

To diagnose the cause of the stall, inspect the stack traces.
The offending function will usually be near the top of the stack.
If you have a series of stall warnings from a single extended stall,
comparing the stack traces can often help determine where the stall
is occurring, which will usually be in the function nearest the top of
that portion of the stack which remains the same from trace to trace.
If you can reliably trigger the stall, ftrace can be quite helpful.

RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
and with RCU's event tracing.  For information on RCU's event tracing,
see include/trace/events/rcu.h.

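For example, assuming that tracefs is mounted at /sys/kernel/tracing
(as it normally is on recent kernels), RCU's event tracing can be
enabled and inspected while reproducing a stall as follows::

	echo 1 > /sys/kernel/tracing/events/rcu/enable
	cat /sys/kernel/tracing/trace
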

Fine-Tuning the RCU CPU Stall Detector
======================================

The rcupdate.rcu_cpu_stall_suppress module parameter disables RCU's
CPU stall detector, which detects conditions that unduly delay RCU grace
periods.  This module parameter enables CPU stall detection by default,
but may be overridden via boot-time parameter or at runtime via sysfs.
The stall detector's idea of what constitutes "unduly delayed" is
controlled by a set of kernel configuration variables and cpp macros:

CONFIG_RCU_CPU_STALL_TIMEOUT
----------------------------

	This kernel configuration parameter defines the period of time
	that RCU will wait from the beginning of a grace period until it
	issues an RCU CPU stall warning.  This time period is normally
	21 seconds.

	This configuration parameter may be changed at runtime via the
	/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout sysfs file;
	however, this parameter is checked only at the beginning of a
	cycle.  So if you are 10 seconds into a 40-second stall, setting
	this sysfs parameter to (say) five will shorten the timeout for
	the *next* stall, or for the following warning for the current
	stall (assuming the stall lasts long enough).  It will not affect
	the timing of the next warning for the current stall.

	Stall-warning messages may be enabled and disabled completely via
	/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.

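	For example, assuming writable module parameters, the stall
	timeout might be increased to 300 seconds, or the detector
	suppressed entirely, at runtime as follows (the same values may
	instead be supplied at boot time via the
	rcupdate.rcu_cpu_stall_timeout and rcupdate.rcu_cpu_stall_suppress
	kernel boot parameters)::

		echo 300 > /sys/module/rcupdate/parameters/rcu_cpu_stall_timeout
		echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress
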
CONFIG_RCU_EXP_CPU_STALL_TIMEOUT
--------------------------------

	Same as the CONFIG_RCU_CPU_STALL_TIMEOUT parameter but only for
	the expedited grace period.  This parameter defines the period
	of time that RCU will wait from the beginning of an expedited
	grace period until it issues an RCU CPU stall warning.  This time
	period is normally 20 milliseconds on Android devices.  A zero
	value causes the CONFIG_RCU_CPU_STALL_TIMEOUT value to be used,
	after conversion to milliseconds.

	This configuration parameter may be changed at runtime via the
	/sys/module/rcupdate/parameters/rcu_exp_cpu_stall_timeout sysfs
	file; however, this parameter is checked only at the beginning of
	a cycle.  If you are in a current stall cycle, setting it to a
	new value will change the timeout for the *next* stall.

	Stall-warning messages may be enabled and disabled completely via
	/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.

RCU_STALL_DELAY_DELTA
---------------------

	Although the lockdep facility is extremely useful, it does add
	some overhead.  Therefore, under CONFIG_PROVE_RCU, the
	RCU_STALL_DELAY_DELTA macro allows five extra seconds before
	giving an RCU CPU stall warning message.  (This is a cpp
	macro, not a kernel configuration parameter.)

RCU_STALL_RAT_DELAY
-------------------

	The CPU stall detector tries to make the offending CPU print its
	own warnings, as this often gives better-quality stack traces.
	However, if the offending CPU does not detect its own stall in
	the number of jiffies specified by RCU_STALL_RAT_DELAY, then
	some other CPU will complain.  This delay is normally set to
	two jiffies.  (This is a cpp macro, not a kernel configuration
	parameter.)

rcupdate.rcu_task_stall_timeout
-------------------------------

	This boot/sysfs parameter controls the RCU-tasks and
	RCU-tasks-trace stall warning intervals.  A value of zero or less
	suppresses RCU-tasks stall warnings.  A positive value sets the
	stall-warning interval in seconds.  An RCU-tasks stall warning
	starts with the line:

		INFO: rcu_tasks detected stalls on tasks:

	And continues with the output of sched_show_task() for each
	task stalling the current RCU-tasks grace period.

	An RCU-tasks-trace stall warning starts (and continues) similarly:

		INFO: rcu_tasks_trace detected stalls on tasks


Interpreting RCU's CPU Stall-Detector "Splats"
==============================================

For non-RCU-tasks flavors of RCU, when a CPU detects that some other
CPU is stalling, it will print a message similar to the following::

	INFO: rcu_sched detected stalls on CPUs/tasks:
	2-...: (3 GPs behind) idle=06c/0/0 softirq=1453/1455 fqs=0
	16-...: (0 ticks this GP) idle=81c/0/0 softirq=764/764 fqs=0
	(detected by 32, t=2603 jiffies, g=7075, q=625)

This message indicates that CPU 32 detected that CPUs 2 and 16 were both
causing stalls, and that the stall was affecting RCU-sched.  This message
will normally be followed by stack dumps for each CPU.  Please note that
PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, and that
the tasks will be indicated by PID, for example, "P3421".  It is even
possible for an rcu_state stall to be caused by both CPUs *and* tasks,
in which case the offending CPUs and tasks will all be called out in the list.
In some cases, CPUs will detect themselves stalling, which will result
in a self-detected stall.

CPU 2's "(3 GPs behind)" indicates that this CPU has not interacted with
the RCU core for the past three grace periods.  In contrast, CPU 16's "(0
ticks this GP)" indicates that this CPU has not taken any scheduling-clock
interrupts during the current stalled grace period.

The "idle=" portion of the message prints the dyntick-idle state.
The hex number before the first "/" is the low-order 16 bits of the
dynticks counter, which will have an even-numbered value if the CPU
is in dyntick-idle mode and an odd-numbered value otherwise.  The hex
number between the two "/"s is the value of the nesting, which will be
a small non-negative number if in the idle loop (as shown above) and a
very large positive number otherwise.  The number following the final
"/" is the NMI nesting, which will be a small non-negative number.

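As a worked example, CPU 2's "idle=06c/0/0" in the splat above reads as
follows: the low-order 16 bits of the dynticks counter are 0x6c (an even
value), the nesting value is zero (a small non-negative number, as for
the idle loop), and the NMI nesting is also zero.
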
The "softirq=" portion of the message tracks the number of RCU softirq
handlers that the stalled CPU has executed.  The number before the "/"
is the number that had executed since boot at the time that this CPU
last noted the beginning of a grace period, which might be the current
(stalled) grace period, or it might be some earlier grace period (for
example, if the CPU had been in dyntick-idle mode for an extended
time period).  The number after the "/" is the number that have executed
since boot until the current time.  If this latter number stays constant
across repeated stall-warning messages, it is possible that RCU's softirq
handlers are no longer able to execute on this CPU.  This can happen if
the stalled CPU is spinning with interrupts disabled, or, in -rt
kernels, if a high-priority process is starving RCU's softirq handler.

The "fqs=" shows the number of force-quiescent-state idle/offline
detection passes that the grace-period kthread has made across this
CPU since the last time that this CPU noted the beginning of a grace
period.

The "detected by" line indicates which CPU detected the stall (in this
case, CPU 32), how many jiffies have elapsed since the start of the grace
period (in this case 2603), the grace-period sequence number (7075), and
an estimate of the total number of RCU callbacks queued across all CPUs
(625 in this case).

If the grace period ends just as the stall warning starts printing,
there will be a spurious stall-warning message, which will include
the following::

	INFO: Stall ended before state dump start

This is rare, but does happen from time to time in real life.  It is also
possible for a zero-jiffy stall to be flagged in this case, depending
on how the stall warning and the grace-period initialization happen to
interact.  Please note that it is not possible to entirely eliminate this
sort of false positive without resorting to things like stop_machine(),
which is overkill for this sort of problem.

If all CPUs and tasks have passed through quiescent states, but the
grace period has nevertheless failed to end, the stall-warning splat
will include something like the following::

	All QSes seen, last rcu_preempt kthread activity 23807 (4297905177-4297881370), jiffies_till_next_fqs=3, root ->qsmask 0x0

The "23807" indicates that it has been more than 23 thousand jiffies
since the grace-period kthread ran.  The "jiffies_till_next_fqs"
indicates how frequently that kthread should run, giving the number
of jiffies between force-quiescent-state scans, in this case three,
which is way less than 23807.  Finally, the root rcu_node structure's
->qsmask field is printed, which will normally be zero.

If the relevant grace-period kthread has been unable to run prior to
the stall warning, as was the case in the "All QSes seen" line above,
the following additional line is printed::

	rcu_sched kthread starved for 23807 jiffies! g7075 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1 ->cpu=5
	Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.

Starving the grace-period kthreads of CPU time can of course result
in RCU CPU stall warnings even when all CPUs and tasks have passed
through the required quiescent states.  The "g" number shows the current
grace-period sequence number, the "f" precedes the ->gp_flags command
to the grace-period kthread, the "RCU_GP_WAIT_FQS" indicates that the
kthread is waiting for a short timeout, the "state" precedes the value
of the task_struct ->state field, and the "cpu" indicates that the
grace-period kthread last ran on CPU 5.

If the relevant grace-period kthread does not wake from FQS wait in a
reasonable time, then the following additional line is printed::

	kthread timer wakeup didn't happen for 23804 jiffies! g7076 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402

The "23804" indicates that the kthread's timer expired more than 23
thousand jiffies ago.  The rest of the line has meaning similar to the
kthread starvation case.

Additionally, the following line is printed::

	Possible timer handling issue on cpu=4 timer-softirq=11142

Here "cpu" indicates that the grace-period kthread last ran on CPU 4,
where it queued the fqs timer.  The number following the "timer-softirq"
is the current ``TIMER_SOFTIRQ`` count on CPU 4.  If this value does not
change on successive RCU CPU stall warnings, there is further reason to
suspect a timer problem.

These messages are usually followed by stack dumps of the CPUs and tasks
involved in the stall.  These stack traces can help you locate the cause
of the stall, keeping in mind that the CPU detecting the stall will have
an interrupt frame that is mainly devoted to detecting the stall.


Multiple Warnings From One Stall
================================

If a stall lasts long enough, multiple stall-warning messages will
be printed for it.  The second and subsequent messages are printed at
longer intervals, so that the time between (say) the first and second
message will be about three times the interval between the beginning
of the stall and the first message.  It can be helpful to compare the
stack dumps for the different messages for the same stalled grace period.


Stall Warnings for Expedited Grace Periods
==========================================

If an expedited grace period detects a stall, it will place a message
like the following in dmesg::

	INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 7-... } 21119 jiffies s: 73 root: 0x2/.

This indicates that CPU 7 has failed to respond to a reschedule IPI.
The three periods (".") following the CPU number indicate that the CPU
is online (otherwise the first period would instead have been "O"),
that the CPU was online at the beginning of the expedited grace period
(otherwise the second period would have instead been "o"), and that
the CPU has been online at least once since boot (otherwise, the third
period would instead have been "N").  The number before the "jiffies"
indicates that the expedited grace period has been going on for 21,119
jiffies.  The number following the "s:" indicates that the expedited
grace-period sequence counter is 73.  The fact that this last value is
odd indicates that an expedited grace period is in flight.  The number
following "root:" is a bitmask that indicates which children of the root
rcu_node structure correspond to CPUs and/or tasks that are blocking the
current expedited grace period.  If the tree had more than one level,
additional hex numbers would be printed for the states of the other
rcu_node structures in the tree.

As with normal grace periods, PREEMPT_RCU builds can be stalled by
tasks as well as by CPUs, and the tasks will be indicated by PID,
for example, "P3421".

It is entirely possible to see stall warnings from normal and from
expedited grace periods at about the same time during the same run.

RCU_CPU_STALL_CPUTIME
=====================

In kernels built with CONFIG_RCU_CPU_STALL_CPUTIME=y or booted with
rcupdate.rcu_cpu_stall_cputime=1, the following additional information
is supplied with each RCU CPU stall warning::

  rcu:          hardirqs   softirqs   csw/system
  rcu:  number:      624         45            0
  rcu: cputime:       69          1         2425   ==> 2500(ms)

These statistics are collected during the sampling period. The values
in row "number:" are the number of hard interrupts, number of soft
interrupts, and number of context switches on the stalled CPU. The
first three values in row "cputime:" indicate the CPU time in
milliseconds consumed by hard interrupts, soft interrupts, and tasks
on the stalled CPU.  The last number is the measurement interval, again
in milliseconds.  Because user-mode tasks normally do not cause RCU CPU
stalls, these tasks are typically kernel tasks, which is why only the
system CPU time is considered.

The sampling period is shown as follows::

  |<------------first timeout---------->|<-----second timeout----->|
  |<--half timeout-->|<--half timeout-->|                          |
  |                  |<--first period-->|                          |
  |                  |<-----------second sampling period---------->|
  |                  |                  |                          |
             snapshot time point    1st-stall                  2nd-stall

The following describes four typical scenarios:

1. A CPU looping with interrupts disabled.

   ::

     rcu:          hardirqs   softirqs   csw/system
     rcu:  number:        0          0            0
     rcu: cputime:        0          0            0   ==> 2500(ms)

   Because interrupts have been disabled throughout the measurement
   interval, there are no interrupts and no context switches.
   Furthermore, because CPU time consumption was measured using interrupt
   handlers, the system CPU consumption is misleadingly measured as zero.
   This scenario will normally also have "(0 ticks this GP)" printed on
   this CPU's summary line.

2. A CPU looping with bottom halves disabled.

   This is similar to the previous example, but with a non-zero number
   of hard interrupts and non-zero CPU time consumed by them, along with
   non-zero CPU time consumed by in-kernel execution::

     rcu:          hardirqs   softirqs   csw/system
     rcu:  number:      624          0            0
     rcu: cputime:       49          0         2446   ==> 2500(ms)

   The fact that there are zero softirqs gives a hint that these were
   disabled, perhaps via local_bh_disable().  It is of course possible
   that there were no softirqs, perhaps because all events that would
   result in softirq execution are confined to other CPUs.  In this case,
   the diagnosis should continue as shown in the next example.

3. A CPU looping with preemption disabled.

   Here, only the number of context switches is zero::

     rcu:          hardirqs   softirqs   csw/system
     rcu:  number:      624         45            0
     rcu: cputime:       69          1         2425   ==> 2500(ms)

   This situation hints that the stalled CPU was looping with preemption
   disabled.

4. No looping, but massive hard and soft interrupts.

   ::

     rcu:          hardirqs   softirqs   csw/system
     rcu:  number:       xx         xx            0
     rcu: cputime:       xx         xx            0   ==> 2500(ms)

   Here, the number and CPU time of hard interrupts are all non-zero,
   but the number of context switches and the in-kernel CPU time consumed
   are zero. The number and cputime of soft interrupts will usually be
   non-zero, but could be zero, for example, if the CPU was spinning
   within a single hard interrupt handler.

   If this type of RCU CPU stall warning can be reproduced, you can
   narrow it down by looking at /proc/interrupts or by writing code to
   trace each interrupt, for example, by referring to show_interrupts().

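   For example, a sketch using only standard utilities: repeatedly
   sampling /proc/interrupts while the stall is in progress will
   highlight any interrupt counts that are climbing abnormally fast on
   the stalled CPU::

     watch -n1 -d "cat /proc/interrupts"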