===============================================================
Softlockup detector and hardlockup detector (aka nmi_watchdog)
===============================================================

The Linux kernel can act as a watchdog to detect both soft and hard
lockups.

A 'softlockup' is defined as a bug that causes the kernel to loop in
kernel mode for more than 20 seconds (see "Implementation" below for
details), without giving other tasks a chance to run. The current
stack trace is displayed upon detection and, by default, the system
will stay locked up. Alternatively, the kernel can be configured to
panic; a sysctl, "kernel.softlockup_panic", a kernel parameter,
"softlockup_panic" (see "Documentation/admin-guide/kernel-parameters.rst" for
details), and a compile option, "BOOTPARAM_SOFTLOCKUP_PANIC", are
provided for this.
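
Run-time configuration of this behavior can be sketched as follows
(the sysctl and boot-parameter names are the ones cited above):

```shell
# Make the kernel panic when a softlockup is detected (default 0: warn only).
sysctl -w kernel.softlockup_panic=1

# The same behavior can be requested at boot time on the kernel
# command line with "softlockup_panic=1", or made the build default
# with CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC.
```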

A 'hardlockup' is defined as a bug that causes the CPU to loop in
kernel mode for several seconds (see "Implementation" below for
details), without letting other interrupts have a chance to run.
Similarly to the softlockup case, the current stack trace is displayed
upon detection and the system will stay locked up unless the default
behavior is changed, which can be done through a sysctl,
"hardlockup_panic", a compile time knob, "BOOTPARAM_HARDLOCKUP_PANIC",
and a kernel parameter, "nmi_watchdog"
(see "Documentation/admin-guide/kernel-parameters.rst" for details).
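
As a sketch, the equivalent run-time and boot-time switches for the
hardlockup case look like this (names as cited above):

```shell
# Make the kernel panic when a hardlockup is detected.
sysctl -w kernel.hardlockup_panic=1

# From the kernel command line, "nmi_watchdog=0" disables the
# hardlockup detector entirely, and "nmi_watchdog=panic" requests
# a panic on detection.
```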

The panic option can be used in combination with panic_timeout (this
timeout is set through the confusingly named "kernel.panic" sysctl),
to cause the system to reboot automatically after a specified amount
of time.
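
For example, to reboot automatically 30 seconds after a panic (a
sketch using the sysctl named above):

```shell
# Seconds to wait before rebooting after a panic; 0 (the default)
# means the system stays halted.
sysctl -w kernel.panic=30
```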

Configuration
=============

A kernel knob, the "watchdog_thresh" sysctl (default 10 seconds),
allows administrators to configure the detection threshold. The right
value for a particular environment is a trade-off between fast
response to lockups and detection overhead.
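
A minimal configuration sketch:

```shell
# Query the current threshold (in seconds).
sysctl kernel.watchdog_thresh

# Tighten detection at the cost of more frequent checks.
sysctl -w kernel.watchdog_thresh=5

# Setting the threshold to 0 disables the lockup detectors.
sysctl -w kernel.watchdog_thresh=0
```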

Implementation
==============

The soft and hard lockup detectors are built around an hrtimer.
In addition, the softlockup detector regularly schedules a job, and
the hardlockup detector may use perf/NMI events on architectures
that support them.

Frequency and Heartbeats
------------------------

The core of the detectors is an hrtimer. It serves multiple purposes:

- schedules the watchdog job for the softlockup detector
- bumps the interrupt counter for the hardlockup detectors (heartbeat)
- detects softlockups
- detects hardlockups in Buddy mode

The period of this hrtimer is 2*watchdog_thresh/5, which is 4 seconds
by default. The hrtimer therefore has two or three chances to generate
an interrupt (heartbeat) before the hardlockup detector kicks in.
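
The derived intervals can be reproduced with simple shell arithmetic
(a sketch; the threshold value of 10 mirrors the default):

```shell
# Derive the watchdog intervals from watchdog_thresh (default: 10 s).
thresh=10
hrtimer_period=$((2 * thresh / 5))   # hrtimer/heartbeat period
softlockup_window=$((2 * thresh))    # softlockup threshold
echo "hrtimer period: ${hrtimer_period}s"       # 4s with the default
echo "softlockup window: ${softlockup_window}s" # 20s with the default
```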

Softlockup Detector
-------------------

The watchdog job is scheduled by the hrtimer and runs in a per-CPU
thread of the highest ("stop") scheduling class. It updates a
timestamp every time it is scheduled. If that timestamp is not updated
for 2*watchdog_thresh seconds (the softlockup threshold), the
'softlockup detector' (coded inside the hrtimer callback function)
dumps useful debug information to the system log and then either
calls panic, if instructed to do so, or lets other kernel code resume
execution.
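
The softlockup detector has its own run-time switch, separate from
the hardlockup one (a sketch using the "kernel.soft_watchdog" sysctl):

```shell
# Query whether the softlockup detector is enabled (1 = enabled).
sysctl kernel.soft_watchdog

# Disable only the softlockup detector, leaving the hardlockup
# detector untouched.
sysctl -w kernel.soft_watchdog=0
```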

Hardlockup Detector (NMI/Perf)
------------------------------

On architectures that support NMI (Non-Maskable Interrupt) perf events,
a periodic NMI is generated every "watchdog_thresh" seconds.

If any CPU in the system does not receive any hrtimer interrupt
(heartbeat) during the "watchdog_thresh" window, the 'hardlockup
detector' (the handler for the NMI perf event) will generate a kernel
warning or call panic.
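
The hardlockup detector can likewise be toggled at run time (a sketch
using the "kernel.nmi_watchdog" sysctl):

```shell
# Query whether the hardlockup detector is enabled (1 = enabled).
sysctl kernel.nmi_watchdog

# Disable the hardlockup detector at run time.
sysctl -w kernel.nmi_watchdog=0
```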

**Detection Overhead (NMI):**

The time to detect a lockup can vary depending on when the lockup
occurs relative to the NMI check window. The examples below assume a
watchdog_thresh of 10.

* **Best Case:** The lockup occurs just before the first heartbeat is
  due. The detector will notice the missing hrtimer interrupt almost
  immediately during the next check.

  ::

    Time 100.0: cpu1 heartbeat
    Time 100.1: hardlockup_check, cpu1 stores its state
    Time 103.9: Hard Lockup on cpu1
    Time 104.0: cpu1 heartbeat never comes
    Time 110.1: hardlockup_check, cpu1 checks the state again, should be the same, declares lockup

    Time to detection: ~6 seconds

* **Worst Case:** The lockup occurs shortly after a valid interrupt
  (heartbeat) which itself happened just after the NMI check. The next
  NMI check sees that the interrupt count has changed (due to that one
  heartbeat), assumes the CPU is healthy, and resets the baseline. The
  lockup is only detected at the subsequent check.

  ::

    Time 100.0: hardlockup_check, cpu1 stores its state
    Time 100.1: cpu1 heartbeat
    Time 100.2: Hard Lockup on cpu1
    Time 110.0: hardlockup_check, cpu1 stores its state (misses lockup as state changed)
    Time 120.0: hardlockup_check, cpu1 checks the state again, should be the same, declares lockup

    Time to detection: ~20 seconds

Hardlockup Detector (Buddy)
---------------------------

On architectures or configurations where NMI perf events are not
available (or disabled), the kernel may use the "buddy" hardlockup
detector. This mechanism requires SMP (Symmetric Multi-Processing).

In this mode, each CPU is assigned a "buddy" CPU to monitor. The
monitoring CPU runs its own hrtimer (the same one used for softlockup
detection) and checks if the buddy CPU's hrtimer interrupt count has
increased.

To ensure timeliness and avoid false positives, the buddy system performs
checks at every hrtimer interval (2*watchdog_thresh/5, which is 4 seconds
by default). It uses a missed-interrupt threshold of 3. If the buddy's
interrupt count has not changed for 3 consecutive checks, it is assumed
that the buddy CPU is hardlocked (interrupts disabled). The monitoring
CPU will then trigger the hardlockup response (warning or panic).

**Detection Overhead (Buddy):**

With a default check interval of 4 seconds (watchdog_thresh = 10):

* **Best case:** Lockup occurs just before a check.
  Detected in ~8s (0s till 1st check + 4s till 2nd + 4s till 3rd).
* **Worst case:** Lockup occurs just after a check.
  Detected in ~12s (4s till 1st check + 4s till 2nd + 4s till 3rd).
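
These bounds follow directly from the check interval and the miss
threshold, as this small shell sketch shows (values mirror the
defaults):

```shell
# Buddy-detector detection bounds for the default configuration.
thresh=10
interval=$((2 * thresh / 5))  # check interval: 4 s by default
misses=3                      # consecutive missed checks before declaring lockup

best=$(( (misses - 1) * interval ))  # lockup just before a check: ~8 s
worst=$(( misses * interval ))       # lockup just after a check: ~12 s
echo "best case: ~${best}s, worst case: ~${worst}s"
```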

**Limitations of the Buddy Detector:**

1.  **All-CPU Lockup:** If all CPUs lock up simultaneously, the buddy
    detector cannot detect the condition because the monitoring CPUs
    are also frozen.
2.  **Stack Traces:** Unlike the NMI detector, the buddy detector
    cannot directly interrupt the locked CPU to grab a stack trace.
    It relies on architecture-specific mechanisms (such as NMI
    backtrace support) to try to retrieve the status of the locked
    CPU. If such support is missing, the log may only show that a
    lockup occurred without providing the locked CPU's stack.

Watchdog Core Exclusion
=======================

By default, the watchdog runs on all online cores.  However, on a
kernel configured with NO_HZ_FULL, by default the watchdog runs only
on the housekeeping cores, not the cores specified in the "nohz_full"
boot argument.  If we allowed the watchdog to run by default on
the "nohz_full" cores, we would have to run timer ticks to activate
the scheduler, which would prevent the "nohz_full" functionality
from protecting the user code on those cores from the kernel.
Of course, disabling it by default on the nohz_full cores means that
when those cores do enter the kernel, by default we will not be
able to detect if they lock up.  However, allowing the watchdog
to continue to run on the housekeeping (non-tickless) cores means
that we will continue to detect lockups properly on those cores.

In either case, the set of cores excluded from running the watchdog
may be adjusted via the kernel.watchdog_cpumask sysctl.  For
nohz_full cores, this may be useful for debugging a case where the
kernel seems to be hanging on the nohz_full cores.
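
For example (a sketch; the mask is written in the kernel's cpu-list
format):

```shell
# Run the watchdog only on cores 0-3 (e.g. the housekeeping cores).
echo 0-3 > /proc/sys/kernel/watchdog_cpumask

# Re-include a nohz_full core (say core 4) while debugging a hang there.
echo 0-4 > /proc/sys/kernel/watchdog_cpumask
```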