Lines matching refs:rcu_node

50 critical section for the ``rcu_node`` structure's
62 Therefore, for any given ``rcu_node`` structure, any access
71 on different ``rcu_node`` structures.
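
These critical sections use dedicated lock-acquisition helpers that pair the raw spinlock operation with ``smp_mb__after_unlock_lock()``, which upgrades the resulting unlock+lock sequence to a full memory barrier. Their definitions in ``kernel/rcu/rcu.h`` are approximately::

   #define raw_spin_lock_rcu_node(p)                                  \
   do {                                                               \
           raw_spin_lock(&ACCESS_PRIVATE(p, lock));                   \
           smp_mb__after_unlock_lock();  /* Full ordering. */         \
   } while (0)

   #define raw_spin_unlock_rcu_node(p)                                \
           raw_spin_unlock(&ACCESS_PRIVATE(p, lock))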
118 | But the chain of rcu_node-structure lock acquisitions guarantees |
166 | by the CPU's leaf ``rcu_node`` structure's ``->lock`` as described |
194 the ``rcu_node`` structure's ``->lock`` field, so much so that it is
206 5 struct rcu_node *rnp;
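
A hedged sketch of the usual pattern around such a local ``rnp`` pointer, assuming a ``struct rcu_data *rdp`` whose ``->mynode`` field references the CPU's leaf ``rcu_node`` structure::

   unsigned long flags;
   struct rcu_node *rnp = rdp->mynode;  /* This CPU's leaf rcu_node. */

   raw_spin_lock_irqsave_rcu_node(rnp, flags);
   /* Code here is fully ordered against any earlier ->lock critical
    * section for this or any other rcu_node structure. */
   raw_spin_unlock_irqrestore_rcu_node(rnp, flags);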
245 .. kernel-figure:: rcu_node-lock.svg
247 The box represents the ``rcu_node`` structure's ``->lock`` critical
305 ``rcu_node`` structure's ``->lock``. In all cases, there is full
306 ordering against any prior critical section for that same ``rcu_node``
308 current task's or CPU's prior critical sections for any ``rcu_node``
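
A minimal illustration of this cross-structure ordering, assuming hypothetical shared variables ``x`` and ``y`` and two distinct nodes ``rnp_a`` and ``rnp_b``::

   raw_spin_lock_rcu_node(rnp_a);
   WRITE_ONCE(x, 1);                /* Access in first critical section.  */
   raw_spin_unlock_rcu_node(rnp_a);

   raw_spin_lock_rcu_node(rnp_b);   /* smp_mb__after_unlock_lock() makes   */
   r1 = READ_ONCE(y);               /* the unlock+lock a full barrier, so  */
   raw_spin_unlock_rcu_node(rnp_b); /* all CPUs see x's store before r1's  */
                                    /* load.                               */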
331 thread, which makes several passes over the ``rcu_node`` tree within the
335 ``rcu_node`` changes over time, just like Heraclitus's river. However,
336 to keep the ``rcu_node`` river tractable, the grace-period kernel
348 root ``rcu_node`` structure is touched.
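
The ``->gp_seq`` counters that grace-period initialization publishes are sequence numbers whose low-order bits carry grace-period state. The helpers that start and end a grace-period sequence look approximately like this (simplified from ``kernel/rcu/rcu.h``)::

   static inline void rcu_seq_start(unsigned long *sp)
   {
           WRITE_ONCE(*sp, *sp + 1); /* Low-order state bits now nonzero. */
           smp_mb(); /* Order grace-period work after counter update. */
   }

   static inline void rcu_seq_end(unsigned long *sp)
   {
           smp_mb(); /* Order grace-period work before counter update. */
           WRITE_ONCE(*sp, rcu_seq_endval(sp)); /* Clear state, bump count. */
   }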
350 The first pass through the ``rcu_node`` tree updates bitmasks based on
353 this ``rcu_node`` structure has not transitioned to or from zero, this
354 pass will scan only the leaf ``rcu_node`` structures. However, if the
355 number of online CPUs for a given leaf ``rcu_node`` structure has
358 leaf ``rcu_node`` structure has transitioned to zero,
361 ``rcu_node`` structure onlines its first CPU and if the next
362 ``rcu_node`` structure has no online CPUs (or, alternatively, if the
363 leftmost ``rcu_node`` structure offlines its last CPU and if the next
364 ``rcu_node`` structure has no online CPUs).
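
The propagation for the first CPU onlining within a subtree walks up the tree under the usual lock chain, stopping as soon as it finds a level whose mask is already nonzero. A sketch in the shape of ``rcu_init_new_rnp()`` (abridged, so details are approximate)::

   static void rcu_init_new_rnp(struct rcu_node *rnp_leaf)
   {
           unsigned long mask, oldmask;
           struct rcu_node *rnp = rnp_leaf;

           for (;;) {
                   mask = rnp->grpmask;   /* This node's bit in its parent. */
                   rnp = rnp->parent;
                   if (rnp == NULL)
                           return;        /* Ran off the top of the tree.   */
                   raw_spin_lock_rcu_node(rnp); /* Irqs already disabled.   */
                   oldmask = rnp->qsmaskinit;
                   rnp->qsmaskinit |= mask;     /* Parent now waits on us.  */
                   raw_spin_unlock_rcu_node(rnp);
                   if (oldmask)
                           return;        /* Upper levels already nonzero.  */
           }
   }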
368 The final ``rcu_gp_init()`` pass through the ``rcu_node`` tree traverses
369 breadth-first, setting each ``rcu_node`` structure's ``->gp_seq`` field
380 ``rcu_node`` structure's ``->gp_seq`` field, each CPU's observation of
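
A heavily abridged sketch of that final pass, keeping only the lock discipline and the ``->gp_seq`` publication::

   rcu_for_each_node_breadth_first(rnp) {
           raw_spin_lock_irqsave_rcu_node(rnp, flags);
           rnp->qsmask = rnp->qsmaskinit;  /* CPUs this grace period waits on. */
           WRITE_ONCE(rnp->gp_seq, rcu_state.gp_seq); /* Publish new GP number. */
           raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
   }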
400 | its last ``rcu_gp_init()`` pass through its leaf ``rcu_node`` |
417 ``rcu_node`` tree only until they encountered an ``rcu_node`` structure
420 that ``rcu_node`` structure's ``->lock``.
425 its leaf ``rcu_node`` lock. Therefore, all execution shown in this
462 traverses up the ``rcu_node`` tree as shown at the bottom of the
463 diagram, clearing bits from each ``rcu_node`` structure's ``->qsmask``
466 Note that traversal passes upwards out of a given ``rcu_node`` structure
468 subtree headed by that ``rcu_node`` structure. A key point is that if a
469 CPU's traversal stops at a given ``rcu_node`` structure, then there will
471 proceeds upwards from that point, and the ``rcu_node`` ``->lock``
475 CPU traverses through the root ``rcu_node`` structure, the “last CPU”
476 being the one that clears the last bit in the root ``rcu_node``
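
A sketch of that upward bit-clearing traversal, abridged from the shape of ``rcu_report_qs_rnp()`` (details approximate; the caller enters holding the leaf's ``->lock``)::

   for (;;) {
           WRITE_ONCE(rnp->qsmask, rnp->qsmask & ~mask);
           if (rnp->qsmask != 0) {
                   /* Other CPUs or subtrees still pending: stop here. */
                   raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
                   return;
           }
           mask = rnp->grpmask;            /* This node's bit one level up.  */
           if (rnp->parent == NULL)
                   break;                  /* At the root: GP can now end.   */
           raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
           rnp = rnp->parent;
           raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Chains the ordering. */
   }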
491 while holding the corresponding CPU's leaf ``rcu_node`` structure's
515 ``rcu_node`` structure's ``->lock`` and update this structure's
535 ``rcu_node`` structures, and if there are no new quiescent states due to
541 reaches an ``rcu_node`` structure that has quiescent states outstanding
548 | ``rcu_node`` structure, which means that there are still CPUs |
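
The per-CPU entry point into that propagation acquires the leaf ``->lock`` first and discards quiescent states left over from earlier grace periods. A hedged sketch in the shape of ``rcu_report_qs_rdp()``::

   static void rcu_report_qs_rdp(struct rcu_data *rdp)  /* Abridged. */
   {
           unsigned long flags;
           unsigned long mask;
           struct rcu_node *rnp = rdp->mynode;

           raw_spin_lock_irqsave_rcu_node(rnp, flags);
           if (rdp->gp_seq != rnp->gp_seq) {
                   /* Quiescent state is from an earlier GP: discard it. */
                   raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
                   return;
           }
           mask = rdp->grpmask;
           if (rnp->qsmask & mask)  /* Still being waited on? */
                   rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
                   /* The above releases rnp->lock. */
           else
                   raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
   }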
565 Grace-period cleanup first scans the ``rcu_node`` tree breadth-first
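
That scan mirrors initialization's breadth-first pass; a hedged sketch from the shape of ``rcu_gp_cleanup()``::

   new_gp_seq = rcu_state.gp_seq;
   rcu_seq_end(&new_gp_seq);             /* Value marking this GP as ended. */
   rcu_for_each_node_breadth_first(rnp) {
           raw_spin_lock_irq_rcu_node(rnp);
           WRITE_ONCE(rnp->gp_seq, new_gp_seq); /* GP over for this subtree. */
           raw_spin_unlock_irq_rcu_node(rnp);
   }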
595 Once a given CPU's leaf ``rcu_node`` structure's ``->gp_seq`` field has
613 its leaf ``rcu_node`` structure's ``->lock`` before invoking callbacks,
624 running on a CPU corresponding to the leftmost leaf ``rcu_node``
626 the rightmost leaf ``rcu_node`` structure, and the grace-period kernel
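
A hedged sketch of the leaf-lock acquisition that orders callback invocation behind the full grace-period computation, assuming the usual ``rdp``/``rnp`` pair::

   unsigned long flags;
   bool needwake;

   raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Full barrier vs. the GP. */
   needwake = rcu_advance_cbs(rnp, rdp); /* Move now-ready callbacks to the
                                          * done segment. */
   raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
   /* rcu_do_batch() subsequently invokes the callbacks in the done segment. */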