Lines Matching full:rcu

15  *	Documentation/RCU
18 #define pr_fmt(fmt) "rcu: " fmt
70 #include "rcu.h"
134 * RCU can assume that there is but one task, allowing RCU to (for example)
136 * is RCU_SCHEDULER_INIT, RCU must actually do all the hard work required
138 * boot-time false positives from lockdep-RCU error checking. Finally, it
139 * transitions from RCU_SCHEDULER_INIT to RCU_SCHEDULER_RUNNING after RCU
148 * is capable of creating new tasks. So RCU processing (for example,
149 * creating tasks for RCU priority boosting) must be delayed until after
151 * currently delay invocation of any RCU callbacks until after this point.
153 * It might later prove better for people registering RCU callbacks during
196 /* Retrieve RCU kthreads priority for rcutorture */
215 * Return true if an RCU grace period is in progress. The READ_ONCE()s
238 * rcu_softirq_qs - Provide a set of RCU quiescent states in softirq processing
240 * Mark a quiescent state for RCU, Tasks RCU, and Tasks Trace RCU.
245 * Note that from RCU's viewpoint, a call to rcu_softirq_qs() is
266 "Illegal rcu_softirq_qs() in RCU read-side critical section"); in rcu_softirq_qs()
291 * indicates that RCU is in an extended quiescent state.
299 * rcu_watching_snap_stopped_since() - Has RCU stopped watching a given CPU
347 * Let the RCU core know that this CPU has gone through the scheduler,
350 * memory barriers to let the RCU core know about it, regardless of what
353 * We inform the RCU core by emulating a zero-duration dyntick-idle period.
390 "RCU nesting counter underflow!"); in rcu_is_cpu_rrupt_from_idle()
397 * Non-nested idle interrupt (interrupting section where RCU in rcu_is_cpu_rrupt_from_idle()
406 "RCU nmi_nesting counter not in idle task!"); in rcu_is_cpu_rrupt_from_idle()
410 RCU_LOCKDEP_WARN(1, "RCU nmi_nesting counter underflow/zero!"); in rcu_is_cpu_rrupt_from_idle()
472 pr_info("RCU calculated value of scheduler-enlistment delay is %ld jiffies.\n", j); in adjust_jiffies_till_sched_qs()
518 * Return the number of RCU GPs completed thus far for debug & stats.
527 * Return the number of RCU expedited batches completed thus far for
594 * In these cases the late RCU wake ups aren't supported in the resched loops and our
625 "RCU nesting counter underflow/zero!"); in rcu_irq_exit_check_preempt()
628 "Bad RCU nmi_nesting counter\n"); in rcu_irq_exit_check_preempt()
630 "RCU in extended quiescent state!"); in rcu_irq_exit_check_preempt()
636 * __rcu_irq_enter_check_tick - Enable scheduler tick on CPU if RCU needs it.
640 * execution is an RCU quiescent state and the time executing in the kernel
643 * in the kernel, which can cause a number of problems, including RCU CPU
647 * in a timely manner, the RCU grace-period kthread sets that CPU's
650 * tick, which will enable RCU to detect that CPU's quiescent states,
656 * interrupt or exception. In that case, the RCU grace-period kthread
658 * controlled environments, this function allows RCU to get what it
675 // RCU doesn't need nohz_full help from this CPU, or it is in __rcu_irq_enter_check_tick()
681 // from interrupts (as opposed to NMIs). Therefore, (1) RCU is in __rcu_irq_enter_check_tick()
688 // A nohz_full CPU is in the kernel and RCU needs a in __rcu_irq_enter_check_tick()
699 * Check to see if any future non-offloaded RCU-related work will need
701 * returning 1 if so. This function is part of the RCU implementation;
702 * it is -not- an exported member of the RCU API. This is used by
706 * Just check whether or not this CPU has non-offloaded RCU callbacks
732 * rcu_is_watching - RCU read-side critical sections permitted on current CPU?
734 * Return @true if RCU is watching the running CPU and @false otherwise.
735 * A @true return means that this CPU can safely enter RCU read-side
778 * rcu_set_gpwrap_lag - Set RCU GP sequence overflow lag value.
864 * state. Either way, that CPU cannot possibly be in an RCU in rcu_watching_snap_recheck()
866 * of the current RCU grace period. in rcu_watching_snap_recheck()
875 * Complain if a CPU that is considered to be offline from RCU's in rcu_watching_snap_recheck()
880 * last task on a leaf rcu_node structure exiting its RCU read-side in rcu_watching_snap_recheck()
890 * of RCU's Requirements documentation. in rcu_watching_snap_recheck()
910 * delay RCU grace periods: (1) At age jiffies_to_sched_qs, in rcu_watching_snap_recheck()
948 * If more than halfway to RCU CPU stall-warning time, invoke in rcu_watching_snap_recheck()
1135 * ->gp_seq number while RCU is idle, but with reference to a non-root
1138 * the RCU grace-period kthread.
1159 * information requires acquiring a global lock... RCU therefore in rcu_accelerate_cbs()
1214 * Returns true if the RCU grace-period kthread needs to be awakened.
1239 * that the RCU grace-period kthread be awakened.
1402 * Handler for on_each_cpu() to invoke the target CPU's RCU core
1418 // If RCU was idle, note beginning of GP. in rcu_poll_gp_seq_start()
1656 struct llist_node *done, *rcu, *next, *head; in rcu_sr_normal_gp_cleanup_work() local
1686 llist_for_each_safe(rcu, next, head) { in rcu_sr_normal_gp_cleanup_work()
1687 if (!rcu_sr_is_wait_head(rcu)) { in rcu_sr_normal_gp_cleanup_work()
1688 rcu_sr_normal_complete(rcu); in rcu_sr_normal_gp_cleanup_work()
1692 rcu_sr_put_wait_head(rcu); in rcu_sr_normal_gp_cleanup_work()
1704 struct llist_node *wait_tail, *next = NULL, *rcu = NULL; in rcu_sr_normal_gp_cleanup() local
1718 llist_for_each_safe(rcu, next, wait_tail->next) { in rcu_sr_normal_gp_cleanup()
1719 if (rcu_sr_is_wait_head(rcu)) in rcu_sr_normal_gp_cleanup()
1722 rcu_sr_normal_complete(rcu); in rcu_sr_normal_gp_cleanup()
1845 * Documentation/RCU/Design/Requirements/Requirements.rst in the in rcu_gp_init()
1878 * wait for subsequent online CPUs, and that RCU hooks in the CPU in rcu_gp_init()
1882 * of RCU's Requirements documentation. in rcu_gp_init()
2171 * RCU grace-period initialization races by forcing the end of in rcu_gp_cleanup()
2387 * RCU grace period. The caller must hold the corresponding rnp->lock with
2506 * Tell RCU we are done (but rcu_report_qs_rdp() will be the in rcu_check_quiescent_state()
2525 * Invoke any RCU callbacks that have made it to the end of their grace
2679 * state, for example, user mode or idle loop. It also schedules RCU
2718 * blocking the current grace period, initiate RCU priority boosting.
2816 // Workqueue handler for an RCU reader for kernels enforcing struct RCU
2824 /* Perform RCU core processing work for the current CPU. */
2833 trace_rcu_utilization(TPS("Start RCU core")); in rcu_core()
2844 /* Update RCU state based on any recent quiescent states. */ in rcu_core()
2862 /* Re-invoke RCU core processing if there are callbacks remaining. */ in rcu_core()
2869 trace_rcu_utilization(TPS("End RCU core")); in rcu_core()
2871 // If strict GPs, schedule an RCU reader in a clean environment. in rcu_core()
2905 * Wake up this CPU's rcuc kthread to do RCU core processing.
2928 * Per-CPU kernel thread that invokes RCU callbacks. This replaces
2929 * the RCU softirq used in configurations of RCU that do not support RCU
2975 * Spawn per-CPU RCU core processing kthreads.
2999 * Handle any core-RCU processing required by a call_rcu() invocation.
3006 * If called from an extended quiescent state, invoke the RCU in call_rcu_core()
3007 * core in order to force a re-evaluation of RCU's idleness. in call_rcu_core()
3012 /* If interrupts were disabled or CPU offline, don't invoke RCU core. */ in call_rcu_core()
3045 * RCU callback function to leak a callback.
3054 * number of queued RCU callbacks. The caller must hold the leaf rcu_node
3071 * number of queued RCU callbacks. No locks need be held, but the
3111 * Use rcu:rcu_callback trace event to find the previous in __call_rcu_common()
3156 * call_rcu_hurry() - Queue RCU callback for invocation after grace period, and
3160 * @head: structure to be used for queueing the RCU updates.
3164 * period elapses, in other words after all pre-existing RCU read-side
3185 * call_rcu() - Queue an RCU callback for invocation after a grace period.
3190 * @head: structure to be used for queueing the RCU updates.
3194 * period elapses, in other words after all pre-existing RCU read-side
3196 * might well execute concurrently with RCU read-side critical sections
3199 * It is perfectly legal to repost an RCU callback, potentially with
3205 * RCU read-side critical sections are delimited by rcu_read_lock()
3208 * or softirqs have been disabled also serve as RCU read-side critical
3213 * all pre-existing RCU read-side critical sections. On systems with more
3216 * last RCU read-side critical section whose beginning preceded the call
3217 * to call_rcu(). It also means that each CPU executing an RCU read-side
3220 * of that RCU read-side critical section. Note that these guarantees
3225 * resulting RCU callback function "func()", then both CPU A and CPU B are
3232 * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst.
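The call_rcu() comments above describe the canonical deferred-free pattern. A hedged sketch in kernel-style C (not buildable stand-alone; "struct foo", foo_update(), and foo_get_data() are hypothetical names, while rcu_head, call_rcu(), rcu_assign_pointer(), rcu_dereference(), and the rcu_read_lock()/rcu_read_unlock() pair are the real kernel APIs):

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
	struct rcu_head rcu;		/* embedded so call_rcu() can queue us */
};

static struct foo __rcu *gbl_foo;

static void foo_reclaim(struct rcu_head *rp)
{
	/* container_of() recovers the enclosing struct from its rcu_head. */
	kfree(container_of(rp, struct foo, rcu));
}

/* Updater: publish a new version, free the old one after a grace period. */
static void foo_update(int new_data)
{
	struct foo *new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
	struct foo *old_fp;

	new_fp->data = new_data;
	old_fp = rcu_dereference_protected(gbl_foo, 1); /* update side serialized */
	rcu_assign_pointer(gbl_foo, new_fp);
	call_rcu(&old_fp->rcu, foo_reclaim);	/* deferred free, non-blocking */
}

/* Reader: a lockless read-side critical section. */
static int foo_get_data(void)
{
	int ret;

	rcu_read_lock();
	ret = rcu_dereference(gbl_foo)->data;
	rcu_read_unlock();
	return ret;
}
```

Reposting the same rcu_head from within foo_reclaim() would be legal per the comment at source line 3199, though this sketch simply frees the object.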
3307 * period has elapsed, in other words after all currently executing RCU
3310 * concurrently with new RCU read-side critical sections that began while
3313 * RCU read-side critical sections are delimited by rcu_read_lock()
3316 * or softirqs have been disabled also serve as RCU read-side critical
3323 * the end of its last RCU read-side critical section whose beginning
3325 * an RCU read-side critical section that extends beyond the return from
3328 * that RCU read-side critical section. Note that these guarantees include
3339 * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst.
3349 "Illegal synchronize_rcu() in RCU read-side critical section"); in synchronize_rcu()
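The synchronize_rcu() comments above describe the blocking counterpart of call_rcu(). A minimal hedged sketch, assuming a hypothetical "struct foo" on an RCU-protected list guarded by a hypothetical foo_lock; list_del_rcu(), synchronize_rcu(), and kfree() are the real kernel APIs:

```c
static void foo_del(struct foo *fp)
{
	spin_lock(&foo_lock);
	list_del_rcu(&fp->list);	/* readers may still hold references */
	spin_unlock(&foo_lock);

	synchronize_rcu();		/* wait out all pre-existing readers */
	kfree(fp);			/* no reader can now observe fp */
}
```

Note the lockdep warning at source line 3349: calling this from within an RCU read-side critical section would self-deadlock.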
3396 * get_state_synchronize_rcu - Snapshot current RCU state
3405 * Any prior manipulation of RCU-protected data must happen in get_state_synchronize_rcu()
3414 * get_state_synchronize_rcu_full - Snapshot RCU state, both normal and expedited
3432 * Any prior manipulation of RCU-protected data must happen in get_state_synchronize_rcu_full()
3474 * start_poll_synchronize_rcu - Snapshot and start RCU grace period
3479 * is not already slated to start, notifies RCU core of the need for that
3492 * start_poll_synchronize_rcu_full - Take a full snapshot and start RCU grace period
3500 * RCU core of the need for that grace period.
3511 * poll_state_synchronize_rcu - Has the specified RCU grace period completed?
3514 * If a full RCU grace period has elapsed since the earlier call from
3555 * poll_state_synchronize_rcu_full - Has the specified RCU grace period completed?
3558 * If a full RCU grace period has elapsed since the earlier call from
3604 * cond_synchronize_rcu - Conditionally wait for an RCU grace period
3607 * If a full RCU grace period has elapsed since the earlier call to
3629 * cond_synchronize_rcu_full - Conditionally wait for an RCU grace period
3632 * If a full RCU grace period has elapsed since the call to
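The get_state/start_poll/poll_state/cond_synchronize family above forms a polled grace-period API. A hedged sketch of one common use, assuming the object has already been unpublished and that "struct foo" with its embedded rcu_head and foo_reclaim() callback are hypothetical; the cookie functions are the real kernel APIs:

```c
static void foo_lazy_free(struct foo *fp)
{
	unsigned long cookie;

	/* fp must already be unreachable by new readers at this point. */
	cookie = get_state_synchronize_rcu();

	/* ... potentially long-running work elapses here ... */

	if (poll_state_synchronize_rcu(cookie))
		kfree(fp);			/* a full GP already elapsed */
	else
		call_rcu(&fp->rcu, foo_reclaim);/* otherwise defer the free */
}
```

cond_synchronize_rcu(cookie) offers the blocking variant of the same check, sleeping only if the grace period has not yet completed.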
3656 * Check to see if there is any immediate RCU-related work to be done by
3677 /* Is this a nohz_full CPU in userspace or idle? (Ignore RCU if so.) */ in rcu_pending()
3686 /* Is the RCU core waiting for a quiescent state from this CPU? */ in rcu_pending()
3695 /* Has RCU gone idle with this CPU needing another grace period? */ in rcu_pending()
3701 /* Have RCU grace period completed or started? */ in rcu_pending()
3721 * RCU callback function for rcu_barrier(). If we are last, wake
3799 * Note that this primitive does not necessarily wait for an RCU grace period
3800 * to complete. For example, if there are no RCU callbacks queued anywhere
3802 * immediately, without waiting for anything, much less an RCU grace period.
3803 * In fact, rcu_barrier() will normally not result in any RCU grace periods
3807 * pending lazy RCU callbacks.
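As the rcu_barrier() comments above note, the primitive waits for queued callbacks to be invoked, not for a grace period as such. A hedged sketch of the typical module-unload use (foo_stop_updates() is a hypothetical helper; rcu_barrier() and module_exit() are real kernel APIs):

```c
static void __exit foo_exit(void)
{
	foo_stop_updates();	/* first ensure no new call_rcu() callbacks are posted */
	rcu_barrier();		/* wait for all outstanding callbacks to run,
				 * so their code and data can be unloaded safely */
}
module_exit(foo_exit);
```

Skipping the rcu_barrier() here would risk a pending callback firing after the module text is gone.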
3916 * This is intended for use by test suites to avoid OOM by flushing RCU
3994 * from RCU's perspective? This perspective is given by that structure's
4012 * Is the current CPU online as far as RCU is concerned?
4021 * RCU on an offline processor during initial boot, hence the check for
4058 * and all tasks that were preempted within an RCU read-side critical
4059 * section while running on one of those CPUs have since exited their RCU
4130 * Do boot-time initialization of a CPU's per-CPU RCU data.
4226 * Initializes a CPU's per-CPU RCU data. Note that only one online or
4229 * CPU cannot possibly have any non-offloaded RCU callbacks in flight yet.
4316 * incoming CPUs are not allowed to use RCU read-side critical sections
4358 if (WARN_ON_ONCE(rnp->qsmask & mask)) { /* RCU waiting on incoming CPU? */ in rcutree_report_cpu_starting()
4371 smp_mb(); /* Ensure RCU read-side usage follows above initialization. */ in rcutree_report_cpu_starting()
4375 * The outgoing function has no further need of RCU, so remove it from
4420 if (rnp->qsmask & mask) { /* RCU waiting on outgoing CPU? */ in rcutree_report_cpu_dead()
4544 * On non-huge systems, use expedited RCU grace periods to make suspend
4568 * Spawn the kthreads that handle RCU's grace periods.
4617 * runtime RCU functionality.