=================================
A Tour Through RCU's Requirements
=================================

Copyright IBM Corporation, 2015

Author: Paul E. McKenney

The initial version of this document appeared in the
`LWN <https://lwn.net/>`_ articles:
`part 1 <https://lwn.net/Articles/652156/>`_,
`part 2 <https://lwn.net/Articles/652677/>`_, and
`part 3 <https://lwn.net/Articles/653326/>`_.

Introduction
------------

Read-copy update (RCU) is a synchronization mechanism that is often used
as a replacement for reader-writer locking. RCU is unusual in that
updaters do not block readers, which means that RCU's read-side
primitives can be exceedingly fast and scalable. In addition, updaters
can make useful forward progress concurrently with readers. However, all
this concurrency between RCU readers and updaters does raise the
question of exactly what RCU readers are doing, which in turn raises the
question of exactly what RCU's requirements are.

This document therefore summarizes RCU's requirements, and can be
thought of as an informal, high-level specification for RCU. It is
important to understand that RCU's specification is primarily empirical
in nature; in fact, I learned about many of these requirements the hard
way. This situation might cause some consternation. However, not only
has this learning process been a lot of fun, but it has also been a
great privilege to work with so many people willing to apply
technologies in interesting new ways.

All that aside, here are the categories of currently known RCU
requirements:

#. `Fundamental Requirements`_
#. `Fundamental Non-Requirements`_
#. `Parallelism Facts of Life`_
#. `Quality-of-Implementation Requirements`_
#. `Linux Kernel Complications`_
#. `Software-Engineering Requirements`_
#. `Other RCU Flavors`_
#. `Possible Future Changes`_

This is followed by a `summary <#Summary>`__. However, the answer to
each quick quiz immediately follows the quiz itself.

Fundamental Requirements
------------------------

RCU's fundamental requirements are the closest thing RCU has to hard
mathematical requirements. These are:

#. `Grace-Period Guarantee`_
#. `Publish/Subscribe Guarantee`_
#. `Memory-Barrier Guarantees`_
#. `RCU Primitives Guaranteed to Execute Unconditionally`_
#. `Guaranteed Read-to-Write Upgrade`_

Grace-Period Guarantee
~~~~~~~~~~~~~~~~~~~~~~

RCU's grace-period guarantee is unusual in being premeditated: Jack
Slingwine and I had this guarantee firmly in mind when we started work
on RCU (then called “rclock”) in the early 1990s. That said, the past
two decades of experience with RCU have produced a much more detailed
understanding of this guarantee.

RCU's grace-period guarantee allows updaters to wait for the completion
of all pre-existing RCU read-side critical sections. An RCU read-side
critical section begins with the marker ``rcu_read_lock()`` and ends
with the marker ``rcu_read_unlock()``. These markers may be nested, and
RCU treats a nested set as one big RCU read-side critical section.
Production-quality implementations of ``rcu_read_lock()`` and
``rcu_read_unlock()`` are extremely lightweight, and in fact have
exactly zero overhead in Linux kernels built for production use with
``CONFIG_PREEMPT=n``.

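For example, in the following sketch (in which ``do_nested_work()`` is a
hypothetical function), the RCU read-side critical section ends only at
the outermost ``rcu_read_unlock()``:

   ::

      void nested_reader(void)
      {
        rcu_read_lock();    /* Outermost critical section begins. */
        rcu_read_lock();    /* Nested: no new critical section. */
        do_nested_work();
        rcu_read_unlock();  /* Critical section remains open. */
        rcu_read_unlock();  /* Critical section ends here. */
      }
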
This guarantee allows ordering to be enforced with extremely low
overhead to readers, for example:

   ::

       1 int x, y;
       2
       3 void thread0(void)
       4 {
       5   rcu_read_lock();
       6   r1 = READ_ONCE(x);
       7   r2 = READ_ONCE(y);
       8   rcu_read_unlock();
       9 }
      10
      11 void thread1(void)
      12 {
      13   WRITE_ONCE(x, 1);
      14   synchronize_rcu();
      15   WRITE_ONCE(y, 1);
      16 }

Because the ``synchronize_rcu()`` on line 14 waits for all pre-existing
readers, any instance of ``thread0()`` that loads a value of zero from
``x`` must complete before ``thread1()`` stores to ``y``, so that
instance must also load a value of zero from ``y``. Similarly, any
instance of ``thread0()`` that loads a value of one from ``y`` must have
started after the ``synchronize_rcu()`` started, and must therefore also
load a value of one from ``x``. Therefore, the outcome:

   ::

      (r1 == 0 && r2 == 1)

cannot happen.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Wait a minute! You said that updaters can make useful forward         |
| progress concurrently with readers, but pre-existing readers will     |
| block ``synchronize_rcu()``!!!                                        |
| Just who are you trying to fool???                                    |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| First, if updaters do not wish to be blocked by readers, they can use |
| ``call_rcu()`` or ``kfree_rcu()``, which will be discussed later.     |
| Second, even when using ``synchronize_rcu()``, the other update-side  |
| code does run concurrently with readers, whether pre-existing or not. |
+-----------------------------------------------------------------------+

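For reference, here is a minimal sketch of the asynchronous approach,
assuming a hypothetical ``struct foo`` containing an ``rcu_head`` named
``rh`` and a hypothetical callback ``free_foo_cb()``:

   ::

      struct foo {
        struct rcu_head rh;
        int a;
        int b;
      };

      static void free_foo_cb(struct rcu_head *rhp)
      {
        struct foo *p = container_of(rhp, struct foo, rh);

        kfree(p);
      }

      /* Updater: free_foo_cb() runs only after pre-existing readers finish. */
      call_rcu(&p->rh, free_foo_cb);
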
The above ordering scenario resembles one of the first uses of RCU in
`DYNIX/ptx <https://en.wikipedia.org/wiki/DYNIX>`__, which managed a
distributed lock manager's transition into a state suitable for handling
recovery from node failure, more or less as follows:

   ::

       1 #define STATE_NORMAL        0
       2 #define STATE_WANT_RECOVERY 1
       3 #define STATE_RECOVERING    2
       4 #define STATE_WANT_NORMAL   3
       5
       6 int state = STATE_NORMAL;
       7
       8 void do_something_dlm(void)
       9 {
      10   int state_snap;
      11
      12   rcu_read_lock();
      13   state_snap = READ_ONCE(state);
      14   if (state_snap == STATE_NORMAL)
      15     do_something();
      16   else
      17     do_something_carefully();
      18   rcu_read_unlock();
      19 }
      20
      21 void start_recovery(void)
      22 {
      23   WRITE_ONCE(state, STATE_WANT_RECOVERY);
      24   synchronize_rcu();
      25   WRITE_ONCE(state, STATE_RECOVERING);
      26   recovery();
      27   WRITE_ONCE(state, STATE_WANT_NORMAL);
      28   synchronize_rcu();
      29   WRITE_ONCE(state, STATE_NORMAL);
      30 }

The RCU read-side critical section in ``do_something_dlm()`` works with
the ``synchronize_rcu()`` in ``start_recovery()`` to guarantee that
``do_something()`` never runs concurrently with ``recovery()``, but with
little or no synchronization overhead in ``do_something_dlm()``.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why is the ``synchronize_rcu()`` on line 28 needed?                   |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Without that extra grace period, memory reordering could result in    |
| ``do_something_dlm()`` executing ``do_something()`` concurrently with |
| the last bits of ``recovery()``.                                      |
+-----------------------------------------------------------------------+

In order to avoid fatal problems such as deadlocks, an RCU read-side
critical section must not contain calls to ``synchronize_rcu()``.
Similarly, an RCU read-side critical section must not contain anything
that waits, directly or indirectly, on completion of an invocation of
``synchronize_rcu()``.

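For example, the following sketch self-deadlocks: the
``synchronize_rcu()`` cannot return until the enclosing RCU read-side
critical section completes, but that critical section cannot complete
until ``synchronize_rcu()`` returns:

   ::

      void buggy_reader(void)
      {
        rcu_read_lock();
        synchronize_rcu();  /* BUG: Waits on all readers, including this one. */
        rcu_read_unlock();
      }
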
Although RCU's grace-period guarantee is useful in and of itself, with
`quite a few use cases <https://lwn.net/Articles/573497/>`__, it would
be good to be able to use RCU to coordinate read-side access to linked
data structures. For this, the grace-period guarantee is not sufficient,
as can be seen in function ``add_gp_buggy()`` below. We will look at the
reader's code later, but in the meantime, just think of the reader as
locklessly picking up the ``gp`` pointer, and, if the value loaded is
non-\ ``NULL``, locklessly accessing the ``->a`` and ``->b`` fields.

   ::

       1 bool add_gp_buggy(int a, int b)
       2 {
       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
       4   if (!p)
       5     return false;
       6   spin_lock(&gp_lock);
       7   if (rcu_access_pointer(gp)) {
       8     spin_unlock(&gp_lock);
       9     return false;
      10   }
      11   p->a = a;
      12   p->b = b;
      13   gp = p; /* ORDERING BUG */
      14   spin_unlock(&gp_lock);
      15   return true;
      16 }

The problem is that both the compiler and weakly ordered CPUs are within
their rights to reorder this code as follows:

   ::

    1 bool add_gp_buggy_optimized(int a, int b)
    2 {
    3   p = kmalloc(sizeof(*p), GFP_KERNEL);
    4   if (!p)
    5     return false;
    6   spin_lock(&gp_lock);
    7   if (rcu_access_pointer(gp)) {
    8     spin_unlock(&gp_lock);
    9     return false;
   10   }
   11   gp = p; /* ORDERING BUG */
   12   p->a = a;
   13   p->b = b;
   14   spin_unlock(&gp_lock);
   15   return true;
   16 }

If an RCU reader fetches ``gp`` just after ``add_gp_buggy_optimized``
executes line 11, it will see garbage in the ``->a`` and ``->b`` fields.
And this is but one of many ways in which compiler and hardware
optimizations could cause trouble. Therefore, we clearly need some way
to prevent the compiler and the CPU from reordering in this manner,
which brings us to the publish-subscribe guarantee discussed in the next
section.

Publish/Subscribe Guarantee
~~~~~~~~~~~~~~~~~~~~~~~~~~~

RCU's publish-subscribe guarantee allows data to be inserted into a
linked data structure without disrupting RCU readers. The updater uses
``rcu_assign_pointer()`` to insert the new data, and readers use
``rcu_dereference()`` to access data, whether new or old. The following
shows an example of insertion:

   ::

    1 bool add_gp(int a, int b)
    2 {
    3   p = kmalloc(sizeof(*p), GFP_KERNEL);
    4   if (!p)
    5     return false;
    6   spin_lock(&gp_lock);
    7   if (rcu_access_pointer(gp)) {
    8     spin_unlock(&gp_lock);
    9     return false;
   10   }
   11   p->a = a;
   12   p->b = b;
   13   rcu_assign_pointer(gp, p);
   14   spin_unlock(&gp_lock);
   15   return true;
   16 }

The ``rcu_assign_pointer()`` on line 13 is conceptually equivalent to a
simple assignment statement, but also guarantees that its assignment
will happen after the two assignments in lines 11 and 12, similar to the
C11 ``memory_order_release`` store operation. It also prevents any
number of “interesting” compiler optimizations, for example, the use of
``gp`` as a scratch location immediately preceding the assignment.

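In fact, setting aside the type checking carried out by the actual
Linux-kernel macro, ``rcu_assign_pointer()`` can be thought of as the
following conceptual sketch:

   ::

      /* Conceptual sketch only, not the kernel's actual definition. */
      #define rcu_assign_pointer(p, v) smp_store_release(&(p), (v))
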
+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But ``rcu_assign_pointer()`` does nothing to prevent the two          |
| assignments to ``p->a`` and ``p->b`` from being reordered. Can't that |
| also cause problems?                                                  |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| No, it cannot. The readers cannot see either of these two fields      |
| until the assignment to ``gp``, by which time both fields are fully   |
| initialized. So reordering the assignments to ``p->a`` and ``p->b``   |
| cannot possibly cause any problems.                                   |
+-----------------------------------------------------------------------+

It is tempting to assume that the reader need not do anything special to
control its accesses to the RCU-protected data, as shown in
``do_something_gp_buggy()`` below:

   ::

       1 bool do_something_gp_buggy(void)
       2 {
       3   rcu_read_lock();
       4   p = gp;  /* OPTIMIZATIONS GALORE!!! */
       5   if (p) {
       6     do_something(p->a, p->b);
       7     rcu_read_unlock();
       8     return true;
       9   }
      10   rcu_read_unlock();
      11   return false;
      12 }

However, this temptation must be resisted because there are a
surprisingly large number of ways that the compiler (to say nothing of
`DEC Alpha CPUs <https://h71000.www7.hp.com/wizard/wiz_2637.html>`__)
can trip this code up. For but one example, if the compiler were short
of registers, it might choose to refetch from ``gp`` rather than keeping
a separate copy in ``p`` as follows:

   ::

       1 bool do_something_gp_buggy_optimized(void)
       2 {
       3   rcu_read_lock();
       4   if (gp) { /* OPTIMIZATIONS GALORE!!! */
       5     do_something(gp->a, gp->b);
       6     rcu_read_unlock();
       7     return true;
       8   }
       9   rcu_read_unlock();
      10   return false;
      11 }

If this function ran concurrently with a series of updates that replaced
the current structure with a new one, the fetches of ``gp->a`` and
``gp->b`` might well come from two different structures, which could
cause serious confusion. To prevent this (and much else besides),
``do_something_gp()`` uses ``rcu_dereference()`` to fetch from ``gp``:

   ::

       1 bool do_something_gp(void)
       2 {
       3   rcu_read_lock();
       4   p = rcu_dereference(gp);
       5   if (p) {
       6     do_something(p->a, p->b);
       7     rcu_read_unlock();
       8     return true;
       9   }
      10   rcu_read_unlock();
      11   return false;
      12 }

The ``rcu_dereference()`` uses volatile casts and (for DEC Alpha) memory
barriers in the Linux kernel. Should a `high-quality implementation of
C11 ``memory_order_consume``
[PDF] <http://www.rdrop.com/users/paulmck/RCU/consume.2015.07.13a.pdf>`__
ever appear, then ``rcu_dereference()`` could be implemented as a
``memory_order_consume`` load. Regardless of the exact implementation, a
pointer fetched by ``rcu_dereference()`` may not be used outside of the
outermost RCU read-side critical section containing that
``rcu_dereference()``, unless protection of the corresponding data
element has been passed from RCU to some other synchronization
mechanism, most commonly locking or `reference
counting <https://www.kernel.org/doc/Documentation/RCU/rcuref.txt>`__.

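For example, here is a sketch of one way to hand protection off from
RCU to reference counting, assuming a hypothetical ``struct foo``
containing an atomic reference count ``->refcnt`` and an ``rcu_head``
named ``rh``, and assuming that updaters follow the same
reference-counting conventions:

   ::

      rcu_read_lock();
      p = rcu_dereference(gp);
      if (p && !atomic_inc_not_zero(&p->refcnt))
        p = NULL;  /* Structure is already on its way to being freed. */
      rcu_read_unlock();
      if (p) {
        do_something(p->a, p->b);  /* Safe: A reference is held. */
        if (atomic_dec_and_test(&p->refcnt))
          kfree_rcu(p, rh);  /* Concurrent readers might still test ->refcnt. */
      }
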
In short, updaters use ``rcu_assign_pointer()`` and readers use
``rcu_dereference()``, and these two RCU API elements work together to
ensure that readers have a consistent view of newly added data elements.

Of course, it is also necessary to remove elements from RCU-protected
data structures, for example, using the following process:

#. Remove the data element from the enclosing structure.
#. Wait for all pre-existing RCU read-side critical sections to complete
   (because only pre-existing readers can possibly have a reference to
   the newly removed data element).
#. At this point, only the updater has a reference to the newly removed
   data element, so it can safely reclaim the data element, for example,
   by passing it to ``kfree()``.

This process is implemented by ``remove_gp_synchronous()``:

   ::

       1 bool remove_gp_synchronous(void)
       2 {
       3   struct foo *p;
       4
       5   spin_lock(&gp_lock);
       6   p = rcu_access_pointer(gp);
       7   if (!p) {
       8     spin_unlock(&gp_lock);
       9     return false;
      10   }
      11   rcu_assign_pointer(gp, NULL);
      12   spin_unlock(&gp_lock);
      13   synchronize_rcu();
      14   kfree(p);
      15   return true;
      16 }

This function is straightforward, with line 13 waiting for a grace
period before line 14 frees the old data element. This waiting ensures
that readers will reach line 7 of ``do_something_gp()`` before the data
element referenced by ``p`` is freed. The ``rcu_access_pointer()`` on
line 6 is similar to ``rcu_dereference()``, except that:

#. The value returned by ``rcu_access_pointer()`` cannot be
   dereferenced. If you want to access the value pointed to as well as
   the pointer itself, use ``rcu_dereference()`` instead of
   ``rcu_access_pointer()``.
#. The call to ``rcu_access_pointer()`` need not be protected. In
   contrast, ``rcu_dereference()`` must either be within an RCU
   read-side critical section or in a code segment where the pointer
   cannot change, for example, in code protected by the corresponding
   update-side lock. (See the sketch following this list.)

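For example, here is a sketch of a lockless check for the existence of
the structure referenced by ``gp``, where ``gp_exists()`` is a
hypothetical function name:

   ::

      bool gp_exists(void)
      {
        /* No dereferencing, so no RCU protection required. */
        return rcu_access_pointer(gp) != NULL;
      }
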
+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Without the ``rcu_dereference()`` or the ``rcu_access_pointer()``,    |
| what destructive optimizations might the compiler make use of?        |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Let's start with what happens to ``do_something_gp()`` if it fails to |
| use ``rcu_dereference()``. It could reuse a value formerly fetched    |
| from this same pointer. It could also fetch the pointer from ``gp``   |
| in a byte-at-a-time manner, resulting in *load tearing*, in turn      |
| resulting in a bytewise mash-up of two distinct pointer values. It    |
| might even use value-speculation optimizations, where it makes a      |
| wrong guess, but by the time it gets around to checking the value, an |
| update has changed the pointer to match the wrong guess. Too bad      |
| about any dereferences that returned pre-initialization garbage in    |
| the meantime!                                                         |
| For ``remove_gp_synchronous()``, as long as all modifications to      |
| ``gp`` are carried out while holding ``gp_lock``, the above           |
| optimizations are harmless. However, ``sparse`` will complain if you  |
| define ``gp`` with ``__rcu`` and then access it without using either  |
| ``rcu_access_pointer()`` or ``rcu_dereference()``.                    |
+-----------------------------------------------------------------------+

In short, RCU's publish-subscribe guarantee is provided by the
combination of ``rcu_assign_pointer()`` and ``rcu_dereference()``. This
guarantee allows data elements to be safely added to RCU-protected
linked data structures without disrupting RCU readers. This guarantee
can be used in combination with the grace-period guarantee to also allow
data elements to be removed from RCU-protected linked data structures,
again without disrupting RCU readers.

This guarantee was only partially premeditated. DYNIX/ptx used an
explicit memory barrier for publication, but had nothing resembling
``rcu_dereference()`` for subscription, nor did it have anything
resembling the dependency-ordering barrier that was later subsumed
into ``rcu_dereference()`` and later still into ``READ_ONCE()``. The
need for these operations made itself known quite suddenly at a
late-1990s meeting with the DEC Alpha architects, back in the days when
DEC was still a free-standing company. It took the Alpha architects a
good hour to convince me that any sort of barrier would ever be needed,
and it then took me a good *two* hours to convince them that their
documentation did not make this point clear. More recent work with the C
and C++ standards committees has provided much education on tricks and
traps from the compiler. In short, compilers were much less tricky in
the early 1990s, but in 2015, don't even think about omitting
``rcu_dereference()``!

Memory-Barrier Guarantees
~~~~~~~~~~~~~~~~~~~~~~~~~

The previous section's simple linked-data-structure scenario clearly
demonstrates the need for RCU's stringent memory-ordering guarantees on
systems with more than one CPU:

#. Each CPU that has an RCU read-side critical section that begins
   before ``synchronize_rcu()`` starts is guaranteed to execute a full
   memory barrier between the time that the RCU read-side critical
   section ends and the time that ``synchronize_rcu()`` returns. Without
   this guarantee, a pre-existing RCU read-side critical section might
   hold a reference to the newly removed ``struct foo`` after the
   ``kfree()`` on line 14 of ``remove_gp_synchronous()``.
#. Each CPU that has an RCU read-side critical section that ends after
   ``synchronize_rcu()`` returns is guaranteed to execute a full memory
   barrier between the time that ``synchronize_rcu()`` begins and the
   time that the RCU read-side critical section begins. Without this
   guarantee, a later RCU read-side critical section running after the
   ``kfree()`` on line 14 of ``remove_gp_synchronous()`` might later run
   ``do_something_gp()`` and find the newly deleted ``struct foo``.
#. If the task invoking ``synchronize_rcu()`` remains on a given CPU,
   then that CPU is guaranteed to execute a full memory barrier sometime
   during the execution of ``synchronize_rcu()``. This guarantee ensures
   that the ``kfree()`` on line 14 of ``remove_gp_synchronous()`` really
   does execute after the removal on line 11.
#. If the task invoking ``synchronize_rcu()`` migrates among a group of
   CPUs during that invocation, then each of the CPUs in that group is
   guaranteed to execute a full memory barrier sometime during the
   execution of ``synchronize_rcu()``. This guarantee also ensures that
   the ``kfree()`` on line 14 of ``remove_gp_synchronous()`` really does
   execute after the removal on line 11, even in the case where the
   thread executing the ``synchronize_rcu()`` migrates in the meantime.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Given that multiple CPUs can start RCU read-side critical sections at |
| any time without any ordering whatsoever, how can RCU possibly tell   |
| whether or not a given RCU read-side critical section starts before a |
| given instance of ``synchronize_rcu()``?                              |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| If RCU cannot tell whether or not a given RCU read-side critical      |
| section starts before a given instance of ``synchronize_rcu()``, then |
| it must assume that the RCU read-side critical section started first. |
| In other words, a given instance of ``synchronize_rcu()`` can avoid   |
| waiting on a given RCU read-side critical section only if it can      |
| prove that ``synchronize_rcu()`` started first.                       |
| A related question is “When ``rcu_read_lock()`` doesn't generate any  |
| code, why does it matter how it relates to a grace period?” The       |
| answer is that it is not the relationship of ``rcu_read_lock()``      |
| itself that is important, but rather the relationship of the code     |
| within the enclosed RCU read-side critical section to the code        |
| preceding and following the grace period. If we take this viewpoint,  |
| then a given RCU read-side critical section begins before a given     |
| grace period when some access preceding the grace period observes the |
| effect of some access within the critical section, in which case none |
| of the accesses within the critical section may observe the effects   |
| of any access following the grace period.                             |
|                                                                       |
| As of late 2016, mathematical models of RCU take this viewpoint, for  |
| example, see slides 62 and 63 of the `2016 LinuxCon                   |
| EU <http://www2.rdrop.com/users/paulmck/scalability/paper/LinuxMM.201 |
| 6.10.04c.LCE.pdf>`__                                                  |
| presentation.                                                         |
+-----------------------------------------------------------------------+

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| The first and second guarantees require unbelievably strict ordering! |
| Are all these memory barriers *really* required?                      |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Yes, they really are required. To see why the first guarantee is      |
| required, consider the following sequence of events:                  |
|                                                                       |
| #. CPU 1: ``rcu_read_lock()``                                         |
| #. CPU 1: ``q = rcu_dereference(gp); /* Very likely to return p. */`` |
| #. CPU 0: ``list_del_rcu(p);``                                        |
| #. CPU 0: ``synchronize_rcu()`` starts.                               |
| #. CPU 1: ``do_something_with(q->a);``                                |
|    ``/* No smp_mb(), so might happen after kfree(). */``              |
| #. CPU 1: ``rcu_read_unlock()``                                       |
| #. CPU 0: ``synchronize_rcu()`` returns.                              |
| #. CPU 0: ``kfree(p);``                                               |
|                                                                       |
| Therefore, there absolutely must be a full memory barrier between the |
| end of the RCU read-side critical section and the end of the grace    |
| period.                                                               |
|                                                                       |
| The sequence of events demonstrating the necessity of the second rule |
| is roughly similar:                                                   |
|                                                                       |
| #. CPU 0: ``list_del_rcu(p);``                                        |
| #. CPU 0: ``synchronize_rcu()`` starts.                               |
| #. CPU 1: ``rcu_read_lock()``                                         |
| #. CPU 1: ``q = rcu_dereference(gp);``                                |
|    ``/* Might return p if no memory barrier. */``                     |
| #. CPU 0: ``synchronize_rcu()`` returns.                              |
| #. CPU 0: ``kfree(p);``                                               |
| #. CPU 1: ``do_something_with(q->a); /* Boom!!! */``                  |
| #. CPU 1: ``rcu_read_unlock()``                                       |
|                                                                       |
| And similarly, without a memory barrier between the beginning of the  |
| grace period and the beginning of the RCU read-side critical section, |
| CPU 1 might end up accessing the freelist.                            |
|                                                                       |
| The “as if” rule of course applies, so that any implementation that   |
| acts as if the appropriate memory barriers were in place is a correct |
| implementation. That said, it is much easier to fool yourself into    |
| believing that you have adhered to the as-if rule than it is to       |
| actually adhere to it!                                                |
+-----------------------------------------------------------------------+

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| You claim that ``rcu_read_lock()`` and ``rcu_read_unlock()`` generate |
| absolutely no code in some kernel builds. This means that the         |
| compiler might arbitrarily rearrange consecutive RCU read-side        |
| critical sections. Given such rearrangement, if a given RCU read-side |
| critical section is done, how can you be sure that all prior RCU      |
| read-side critical sections are done? Won't the compiler              |
| rearrangements make that impossible to determine?                     |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| In cases where ``rcu_read_lock()`` and ``rcu_read_unlock()`` generate |
| absolutely no code, RCU infers quiescent states only at special       |
| locations, for example, within the scheduler. Because calls to        |
| ``schedule()`` had better prevent calling-code accesses to shared     |
| variables from being rearranged across the call to ``schedule()``, if |
| RCU detects the end of a given RCU read-side critical section, it     |
| will necessarily detect the end of all prior RCU read-side critical   |
| sections, no matter how aggressively the compiler scrambles the code. |
| Again, this all assumes that the compiler cannot scramble code across |
| calls to the scheduler, out of interrupt handlers, into the idle      |
| loop, into user-mode code, and so on. But if your kernel build allows |
| that sort of scrambling, you have broken far more than just RCU!      |
+-----------------------------------------------------------------------+

Note that these memory-barrier requirements do not replace the
fundamental RCU requirement that a grace period wait for all
pre-existing readers. On the contrary, the memory barriers called out in
this section must operate in such a way as to *enforce* this fundamental
requirement. Of course, different implementations enforce this
requirement in different ways, but enforce it they must.

RCU Primitives Guaranteed to Execute Unconditionally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The common-case RCU primitives are unconditional. They are invoked, they
do their job, and they return, with no possibility of error, and no need
to retry. This is a key RCU design philosophy.

However, this philosophy is pragmatic rather than pigheaded. If someone
comes up with a good justification for a particular conditional RCU
primitive, it might well be implemented and added. After all, this
guarantee was reverse-engineered, not premeditated. The unconditional
nature of the RCU primitives was initially an accident of
implementation, and later experience with conditional synchronization
primitives caused me to elevate this accident to a guarantee.
Therefore, the justification for adding a conditional primitive to RCU
would need to be based on detailed and compelling use cases.

Guaranteed Read-to-Write Upgrade
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As far as RCU is concerned, it is always possible to carry out an update
within an RCU read-side critical section. For example, that RCU
read-side critical section might search for a given data element, and
then might acquire the update-side spinlock in order to update that
element, all while remaining in that RCU read-side critical section. Of
course, it is necessary to exit the RCU read-side critical section
before invoking ``synchronize_rcu()``; however, this inconvenience can
be avoided through use of the ``call_rcu()`` and ``kfree_rcu()`` API
members described later in this document.

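For example, reusing this document's ``gp`` example, such an upgrade
might be sketched as follows, with the recheck under ``gp_lock``
guarding against a concurrent replacement of the element:

   ::

      rcu_read_lock();
      p = rcu_dereference(gp);
      if (p) {
        spin_lock(&gp_lock);  /* Upgrade to update-side protection. */
        if (p == rcu_access_pointer(gp))
          WRITE_ONCE(p->b, p->b + 1);  /* Update within the reader. */
        spin_unlock(&gp_lock);
      }
      rcu_read_unlock();
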
+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But how does the upgrade-to-write operation exclude other readers?    |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| It doesn't, just like normal RCU updates, which also do not exclude   |
| RCU readers.                                                          |
+-----------------------------------------------------------------------+

This guarantee allows lookup code to be shared between read-side and
update-side code, and was premeditated, appearing in the earliest
DYNIX/ptx RCU documentation.

Fundamental Non-Requirements
----------------------------

RCU provides extremely lightweight readers, and its read-side
guarantees, though quite useful, are correspondingly lightweight. It is
therefore all too easy to assume that RCU is guaranteeing more than it
really is. Of course, the list of things that RCU does not guarantee is
infinitely long; however, the following sections list a few
non-guarantees that have caused confusion. Except where otherwise noted,
these non-guarantees were premeditated.
687ccc9971eSMauro Carvalho Chehab
68807335c16SJoel Fernandes (Google)#. `Readers Impose Minimal Ordering`_
68907335c16SJoel Fernandes (Google)#. `Readers Do Not Exclude Updaters`_
69007335c16SJoel Fernandes (Google)#. `Updaters Only Wait For Old Readers`_
69107335c16SJoel Fernandes (Google)#. `Grace Periods Don't Partition Read-Side Critical Sections`_
69207335c16SJoel Fernandes (Google)#. `Read-Side Critical Sections Don't Partition Grace Periods`_
693ccc9971eSMauro Carvalho Chehab
694ccc9971eSMauro Carvalho ChehabReaders Impose Minimal Ordering
695ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
696ccc9971eSMauro Carvalho Chehab
697ccc9971eSMauro Carvalho ChehabReader-side markers such as ``rcu_read_lock()`` and
698ccc9971eSMauro Carvalho Chehab``rcu_read_unlock()`` provide absolutely no ordering guarantees except
699ccc9971eSMauro Carvalho Chehabthrough their interaction with the grace-period APIs such as
700ccc9971eSMauro Carvalho Chehab``synchronize_rcu()``. To see this, consider the following pair of
701ccc9971eSMauro Carvalho Chehabthreads:
702ccc9971eSMauro Carvalho Chehab
703ccc9971eSMauro Carvalho Chehab   ::
704ccc9971eSMauro Carvalho Chehab
705ccc9971eSMauro Carvalho Chehab       1 void thread0(void)
706ccc9971eSMauro Carvalho Chehab       2 {
707ccc9971eSMauro Carvalho Chehab       3   rcu_read_lock();
708ccc9971eSMauro Carvalho Chehab       4   WRITE_ONCE(x, 1);
709ccc9971eSMauro Carvalho Chehab       5   rcu_read_unlock();
710ccc9971eSMauro Carvalho Chehab       6   rcu_read_lock();
711ccc9971eSMauro Carvalho Chehab       7   WRITE_ONCE(y, 1);
712ccc9971eSMauro Carvalho Chehab       8   rcu_read_unlock();
713ccc9971eSMauro Carvalho Chehab       9 }
714ccc9971eSMauro Carvalho Chehab      10
715ccc9971eSMauro Carvalho Chehab      11 void thread1(void)
716ccc9971eSMauro Carvalho Chehab      12 {
717ccc9971eSMauro Carvalho Chehab      13   rcu_read_lock();
718ccc9971eSMauro Carvalho Chehab      14   r1 = READ_ONCE(y);
719ccc9971eSMauro Carvalho Chehab      15   rcu_read_unlock();
720ccc9971eSMauro Carvalho Chehab      16   rcu_read_lock();
721ccc9971eSMauro Carvalho Chehab      17   r2 = READ_ONCE(x);
722ccc9971eSMauro Carvalho Chehab      18   rcu_read_unlock();
723ccc9971eSMauro Carvalho Chehab      19 }
724ccc9971eSMauro Carvalho Chehab
725ccc9971eSMauro Carvalho ChehabAfter ``thread0()`` and ``thread1()`` execute concurrently, it is quite
726ccc9971eSMauro Carvalho Chehabpossible to have
727ccc9971eSMauro Carvalho Chehab
728ccc9971eSMauro Carvalho Chehab   ::
729ccc9971eSMauro Carvalho Chehab
730ccc9971eSMauro Carvalho Chehab      (r1 == 1 && r2 == 0)
731ccc9971eSMauro Carvalho Chehab
732ccc9971eSMauro Carvalho Chehab(that is, ``y`` appears to have been assigned before ``x``), which would
733ccc9971eSMauro Carvalho Chehabnot be possible if ``rcu_read_lock()`` and ``rcu_read_unlock()`` had
734ccc9971eSMauro Carvalho Chehabmuch in the way of ordering properties. But they do not, so the CPU is
735ccc9971eSMauro Carvalho Chehabwithin its rights to do significant reordering. This is by design: Any
736ccc9971eSMauro Carvalho Chehabsignificant ordering constraints would slow down these fast-path APIs.
737ccc9971eSMauro Carvalho Chehab
738ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
739ccc9971eSMauro Carvalho Chehab| **Quick Quiz**:                                                       |
740ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
741ccc9971eSMauro Carvalho Chehab| Can't the compiler also reorder this code?                            |
742ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
743ccc9971eSMauro Carvalho Chehab| **Answer**:                                                           |
744ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
745ccc9971eSMauro Carvalho Chehab| No, the volatile casts in ``READ_ONCE()`` and ``WRITE_ONCE()``        |
746ccc9971eSMauro Carvalho Chehab| prevent the compiler from reordering in this particular case.         |
747ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
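
For reference, the volatile casts in question amount to something like
the following simplified sketch; the actual Linux-kernel macros are
considerably more elaborate:

   ::

      /* Simplified sketch; the real READ_ONCE()/WRITE_ONCE() do more. */
      #define READ_ONCE(x) \
        (*(const volatile typeof(x) *)&(x))
      #define WRITE_ONCE(x, val) \
        do { *(volatile typeof(x) *)&(x) = (val); } while (0)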
748ccc9971eSMauro Carvalho Chehab
749ccc9971eSMauro Carvalho ChehabReaders Do Not Exclude Updaters
750ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
751ccc9971eSMauro Carvalho Chehab
752ccc9971eSMauro Carvalho ChehabNeither ``rcu_read_lock()`` nor ``rcu_read_unlock()`` exclude updates.
753ccc9971eSMauro Carvalho ChehabAll they do is to prevent grace periods from ending. The following
754ccc9971eSMauro Carvalho Chehabexample illustrates this:
755ccc9971eSMauro Carvalho Chehab
756ccc9971eSMauro Carvalho Chehab   ::
757ccc9971eSMauro Carvalho Chehab
758ccc9971eSMauro Carvalho Chehab       1 void thread0(void)
759ccc9971eSMauro Carvalho Chehab       2 {
760ccc9971eSMauro Carvalho Chehab       3   rcu_read_lock();
761ccc9971eSMauro Carvalho Chehab       4   r1 = READ_ONCE(y);
762ccc9971eSMauro Carvalho Chehab       5   if (r1) {
763ccc9971eSMauro Carvalho Chehab       6     do_something_with_nonzero_x();
764ccc9971eSMauro Carvalho Chehab       7     r2 = READ_ONCE(x);
765ccc9971eSMauro Carvalho Chehab       8     WARN_ON(!r2); /* BUG!!! */
766ccc9971eSMauro Carvalho Chehab       9   }
767ccc9971eSMauro Carvalho Chehab      10   rcu_read_unlock();
768ccc9971eSMauro Carvalho Chehab      11 }
769ccc9971eSMauro Carvalho Chehab      12
770ccc9971eSMauro Carvalho Chehab      13 void thread1(void)
771ccc9971eSMauro Carvalho Chehab      14 {
772ccc9971eSMauro Carvalho Chehab      15   spin_lock(&my_lock);
773ccc9971eSMauro Carvalho Chehab      16   WRITE_ONCE(x, 1);
774ccc9971eSMauro Carvalho Chehab      17   WRITE_ONCE(y, 1);
775ccc9971eSMauro Carvalho Chehab      18   spin_unlock(&my_lock);
776ccc9971eSMauro Carvalho Chehab      19 }
777ccc9971eSMauro Carvalho Chehab
778ccc9971eSMauro Carvalho ChehabIf the ``thread0()`` function's ``rcu_read_lock()`` excluded the
779ccc9971eSMauro Carvalho Chehab``thread1()`` function's update, the ``WARN_ON()`` could never fire. But
780ccc9971eSMauro Carvalho Chehabthe fact is that ``rcu_read_lock()`` does not exclude much of anything
781ccc9971eSMauro Carvalho Chehabaside from subsequent grace periods, of which ``thread1()`` has none, so
782ccc9971eSMauro Carvalho Chehabthe ``WARN_ON()`` can and does fire.
783ccc9971eSMauro Carvalho Chehab
784ccc9971eSMauro Carvalho ChehabUpdaters Only Wait For Old Readers
785ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
786ccc9971eSMauro Carvalho Chehab
787ccc9971eSMauro Carvalho ChehabIt might be tempting to assume that after ``synchronize_rcu()``
788ccc9971eSMauro Carvalho Chehabcompletes, there are no readers executing. This temptation must be
789ccc9971eSMauro Carvalho Chehabavoided because new readers can start immediately after
790ccc9971eSMauro Carvalho Chehab``synchronize_rcu()`` starts, and ``synchronize_rcu()`` is under no
791ccc9971eSMauro Carvalho Chehabobligation to wait for these new readers.
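
To see why this non-guarantee is nevertheless safe, consider the
following sketch, which reuses ``gp`` and ``gp_lock`` from this
document's other examples and assumes a hypothetical
``do_something_with()`` function:

   ::

      void updater(void)
      {
        struct foo *p;

        spin_lock(&gp_lock);
        p = rcu_access_pointer(gp);
        rcu_assign_pointer(gp, NULL);
        spin_unlock(&gp_lock);
        synchronize_rcu();         /* Waits only for earlier readers. */
        kfree(p);
      }

      void late_reader(void)
      {
        struct foo *p;

        rcu_read_lock();
        p = rcu_dereference(gp);   /* Either synchronize_rcu() waits  */
        if (p)                     /* for this reader, or this reader */
          do_something_with(p);    /* sees the NULL stored above.     */
        rcu_read_unlock();         /* Either way, no use-after-free.  */
      }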
792ccc9971eSMauro Carvalho Chehab
793ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
794ccc9971eSMauro Carvalho Chehab| **Quick Quiz**:                                                       |
795ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
796ccc9971eSMauro Carvalho Chehab| Suppose that synchronize_rcu() did wait until *all* readers had       |
797ccc9971eSMauro Carvalho Chehab| completed instead of waiting only on pre-existing readers. For how    |
798ccc9971eSMauro Carvalho Chehab| long would the updater be able to rely on there being no readers?     |
799ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
800ccc9971eSMauro Carvalho Chehab| **Answer**:                                                           |
801ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
802ccc9971eSMauro Carvalho Chehab| For no time at all. Even if ``synchronize_rcu()`` were to wait until  |
803ccc9971eSMauro Carvalho Chehab| all readers had completed, a new reader might start immediately after |
804ccc9971eSMauro Carvalho Chehab| ``synchronize_rcu()`` completed. Therefore, the code following        |
805ccc9971eSMauro Carvalho Chehab| ``synchronize_rcu()`` can *never* rely on there being no readers.     |
806ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
807ccc9971eSMauro Carvalho Chehab
808ccc9971eSMauro Carvalho ChehabGrace Periods Don't Partition Read-Side Critical Sections
809ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
810ccc9971eSMauro Carvalho Chehab
811ccc9971eSMauro Carvalho ChehabIt is tempting to assume that if any part of one RCU read-side critical
812ccc9971eSMauro Carvalho Chehabsection precedes a given grace period, and if any part of another RCU
813ccc9971eSMauro Carvalho Chehabread-side critical section follows that same grace period, then all of
814ccc9971eSMauro Carvalho Chehabthe first RCU read-side critical section must precede all of the second.
815ccc9971eSMauro Carvalho ChehabHowever, this just isn't the case: A single grace period does not
816ccc9971eSMauro Carvalho Chehabpartition the set of RCU read-side critical sections. An example of this
situation can be illustrated as follows, where ``a``, ``b``, and ``c``
818ccc9971eSMauro Carvalho Chehabare initially all zero:
819ccc9971eSMauro Carvalho Chehab
820ccc9971eSMauro Carvalho Chehab   ::
821ccc9971eSMauro Carvalho Chehab
822ccc9971eSMauro Carvalho Chehab       1 void thread0(void)
823ccc9971eSMauro Carvalho Chehab       2 {
824ccc9971eSMauro Carvalho Chehab       3   rcu_read_lock();
825ccc9971eSMauro Carvalho Chehab       4   WRITE_ONCE(a, 1);
826ccc9971eSMauro Carvalho Chehab       5   WRITE_ONCE(b, 1);
827ccc9971eSMauro Carvalho Chehab       6   rcu_read_unlock();
828ccc9971eSMauro Carvalho Chehab       7 }
829ccc9971eSMauro Carvalho Chehab       8
830ccc9971eSMauro Carvalho Chehab       9 void thread1(void)
831ccc9971eSMauro Carvalho Chehab      10 {
832ccc9971eSMauro Carvalho Chehab      11   r1 = READ_ONCE(a);
833ccc9971eSMauro Carvalho Chehab      12   synchronize_rcu();
834ccc9971eSMauro Carvalho Chehab      13   WRITE_ONCE(c, 1);
835ccc9971eSMauro Carvalho Chehab      14 }
836ccc9971eSMauro Carvalho Chehab      15
837ccc9971eSMauro Carvalho Chehab      16 void thread2(void)
838ccc9971eSMauro Carvalho Chehab      17 {
839ccc9971eSMauro Carvalho Chehab      18   rcu_read_lock();
840ccc9971eSMauro Carvalho Chehab      19   r2 = READ_ONCE(b);
841ccc9971eSMauro Carvalho Chehab      20   r3 = READ_ONCE(c);
842ccc9971eSMauro Carvalho Chehab      21   rcu_read_unlock();
843ccc9971eSMauro Carvalho Chehab      22 }
844ccc9971eSMauro Carvalho Chehab
845ccc9971eSMauro Carvalho ChehabIt turns out that the outcome:
846ccc9971eSMauro Carvalho Chehab
847ccc9971eSMauro Carvalho Chehab   ::
848ccc9971eSMauro Carvalho Chehab
849ccc9971eSMauro Carvalho Chehab      (r1 == 1 && r2 == 0 && r3 == 1)
850ccc9971eSMauro Carvalho Chehab
is entirely possible. The following figure shows how this can happen,
852ccc9971eSMauro Carvalho Chehabwith each circled ``QS`` indicating the point at which RCU recorded a
853ccc9971eSMauro Carvalho Chehab*quiescent state* for each thread, that is, a state in which RCU knows
854ccc9971eSMauro Carvalho Chehabthat the thread cannot be in the midst of an RCU read-side critical
855ccc9971eSMauro Carvalho Chehabsection that started before the current grace period:
856ccc9971eSMauro Carvalho Chehab
857ccc9971eSMauro Carvalho Chehab.. kernel-figure:: GPpartitionReaders1.svg
858ccc9971eSMauro Carvalho Chehab
If it is necessary to partition RCU read-side critical sections in this
manner, then two grace periods are required, where the first grace
period is known to end before the second grace period starts:
862ccc9971eSMauro Carvalho Chehab
863ccc9971eSMauro Carvalho Chehab   ::
864ccc9971eSMauro Carvalho Chehab
865ccc9971eSMauro Carvalho Chehab       1 void thread0(void)
866ccc9971eSMauro Carvalho Chehab       2 {
867ccc9971eSMauro Carvalho Chehab       3   rcu_read_lock();
868ccc9971eSMauro Carvalho Chehab       4   WRITE_ONCE(a, 1);
869ccc9971eSMauro Carvalho Chehab       5   WRITE_ONCE(b, 1);
870ccc9971eSMauro Carvalho Chehab       6   rcu_read_unlock();
871ccc9971eSMauro Carvalho Chehab       7 }
872ccc9971eSMauro Carvalho Chehab       8
873ccc9971eSMauro Carvalho Chehab       9 void thread1(void)
874ccc9971eSMauro Carvalho Chehab      10 {
875ccc9971eSMauro Carvalho Chehab      11   r1 = READ_ONCE(a);
876ccc9971eSMauro Carvalho Chehab      12   synchronize_rcu();
877ccc9971eSMauro Carvalho Chehab      13   WRITE_ONCE(c, 1);
878ccc9971eSMauro Carvalho Chehab      14 }
879ccc9971eSMauro Carvalho Chehab      15
880ccc9971eSMauro Carvalho Chehab      16 void thread2(void)
881ccc9971eSMauro Carvalho Chehab      17 {
882ccc9971eSMauro Carvalho Chehab      18   r2 = READ_ONCE(c);
883ccc9971eSMauro Carvalho Chehab      19   synchronize_rcu();
884ccc9971eSMauro Carvalho Chehab      20   WRITE_ONCE(d, 1);
885ccc9971eSMauro Carvalho Chehab      21 }
886ccc9971eSMauro Carvalho Chehab      22
887ccc9971eSMauro Carvalho Chehab      23 void thread3(void)
888ccc9971eSMauro Carvalho Chehab      24 {
889ccc9971eSMauro Carvalho Chehab      25   rcu_read_lock();
890ccc9971eSMauro Carvalho Chehab      26   r3 = READ_ONCE(b);
891ccc9971eSMauro Carvalho Chehab      27   r4 = READ_ONCE(d);
892ccc9971eSMauro Carvalho Chehab      28   rcu_read_unlock();
893ccc9971eSMauro Carvalho Chehab      29 }
894ccc9971eSMauro Carvalho Chehab
895ccc9971eSMauro Carvalho ChehabHere, if ``(r1 == 1)``, then ``thread0()``'s write to ``b`` must happen
896ccc9971eSMauro Carvalho Chehabbefore the end of ``thread1()``'s grace period. If in addition
897ccc9971eSMauro Carvalho Chehab``(r4 == 1)``, then ``thread3()``'s read from ``b`` must happen after
898ccc9971eSMauro Carvalho Chehabthe beginning of ``thread2()``'s grace period. If it is also the case
899ccc9971eSMauro Carvalho Chehabthat ``(r2 == 1)``, then the end of ``thread1()``'s grace period must
precede the beginning of ``thread2()``'s grace period. This means that
901ccc9971eSMauro Carvalho Chehabthe two RCU read-side critical sections cannot overlap, guaranteeing
902ccc9971eSMauro Carvalho Chehabthat ``(r3 == 1)``. As a result, the outcome:
903ccc9971eSMauro Carvalho Chehab
904ccc9971eSMauro Carvalho Chehab   ::
905ccc9971eSMauro Carvalho Chehab
906ccc9971eSMauro Carvalho Chehab      (r1 == 1 && r2 == 1 && r3 == 0 && r4 == 1)
907ccc9971eSMauro Carvalho Chehab
908ccc9971eSMauro Carvalho Chehabcannot happen.
909ccc9971eSMauro Carvalho Chehab
910ccc9971eSMauro Carvalho ChehabThis non-requirement was also non-premeditated, but became apparent when
911ccc9971eSMauro Carvalho Chehabstudying RCU's interaction with memory ordering.
912ccc9971eSMauro Carvalho Chehab
913ccc9971eSMauro Carvalho ChehabRead-Side Critical Sections Don't Partition Grace Periods
914ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
915ccc9971eSMauro Carvalho Chehab
916ccc9971eSMauro Carvalho ChehabIt is also tempting to assume that if an RCU read-side critical section
917ccc9971eSMauro Carvalho Chehabhappens between a pair of grace periods, then those grace periods cannot
918ccc9971eSMauro Carvalho Chehaboverlap. However, this temptation leads nowhere good, as can be
919ccc9971eSMauro Carvalho Chehabillustrated by the following, with all variables initially zero:
920ccc9971eSMauro Carvalho Chehab
921ccc9971eSMauro Carvalho Chehab   ::
922ccc9971eSMauro Carvalho Chehab
923ccc9971eSMauro Carvalho Chehab       1 void thread0(void)
924ccc9971eSMauro Carvalho Chehab       2 {
925ccc9971eSMauro Carvalho Chehab       3   rcu_read_lock();
926ccc9971eSMauro Carvalho Chehab       4   WRITE_ONCE(a, 1);
927ccc9971eSMauro Carvalho Chehab       5   WRITE_ONCE(b, 1);
928ccc9971eSMauro Carvalho Chehab       6   rcu_read_unlock();
929ccc9971eSMauro Carvalho Chehab       7 }
930ccc9971eSMauro Carvalho Chehab       8
931ccc9971eSMauro Carvalho Chehab       9 void thread1(void)
932ccc9971eSMauro Carvalho Chehab      10 {
933ccc9971eSMauro Carvalho Chehab      11   r1 = READ_ONCE(a);
934ccc9971eSMauro Carvalho Chehab      12   synchronize_rcu();
935ccc9971eSMauro Carvalho Chehab      13   WRITE_ONCE(c, 1);
936ccc9971eSMauro Carvalho Chehab      14 }
937ccc9971eSMauro Carvalho Chehab      15
938ccc9971eSMauro Carvalho Chehab      16 void thread2(void)
939ccc9971eSMauro Carvalho Chehab      17 {
940ccc9971eSMauro Carvalho Chehab      18   rcu_read_lock();
941ccc9971eSMauro Carvalho Chehab      19   WRITE_ONCE(d, 1);
942ccc9971eSMauro Carvalho Chehab      20   r2 = READ_ONCE(c);
943ccc9971eSMauro Carvalho Chehab      21   rcu_read_unlock();
944ccc9971eSMauro Carvalho Chehab      22 }
945ccc9971eSMauro Carvalho Chehab      23
946ccc9971eSMauro Carvalho Chehab      24 void thread3(void)
947ccc9971eSMauro Carvalho Chehab      25 {
948ccc9971eSMauro Carvalho Chehab      26   r3 = READ_ONCE(d);
949ccc9971eSMauro Carvalho Chehab      27   synchronize_rcu();
950ccc9971eSMauro Carvalho Chehab      28   WRITE_ONCE(e, 1);
951ccc9971eSMauro Carvalho Chehab      29 }
952ccc9971eSMauro Carvalho Chehab      30
953ccc9971eSMauro Carvalho Chehab      31 void thread4(void)
954ccc9971eSMauro Carvalho Chehab      32 {
955ccc9971eSMauro Carvalho Chehab      33   rcu_read_lock();
956ccc9971eSMauro Carvalho Chehab      34   r4 = READ_ONCE(b);
957ccc9971eSMauro Carvalho Chehab      35   r5 = READ_ONCE(e);
958ccc9971eSMauro Carvalho Chehab      36   rcu_read_unlock();
959ccc9971eSMauro Carvalho Chehab      37 }
960ccc9971eSMauro Carvalho Chehab
961ccc9971eSMauro Carvalho ChehabIn this case, the outcome:
962ccc9971eSMauro Carvalho Chehab
963ccc9971eSMauro Carvalho Chehab   ::
964ccc9971eSMauro Carvalho Chehab
965ccc9971eSMauro Carvalho Chehab      (r1 == 1 && r2 == 1 && r3 == 1 && r4 == 0 && r5 == 1)
966ccc9971eSMauro Carvalho Chehab
967ccc9971eSMauro Carvalho Chehabis entirely possible, as illustrated below:
968ccc9971eSMauro Carvalho Chehab
969ccc9971eSMauro Carvalho Chehab.. kernel-figure:: ReadersPartitionGP1.svg
970ccc9971eSMauro Carvalho Chehab
971ccc9971eSMauro Carvalho ChehabAgain, an RCU read-side critical section can overlap almost all of a
972ccc9971eSMauro Carvalho Chehabgiven grace period, just so long as it does not overlap the entire grace
973ccc9971eSMauro Carvalho Chehabperiod. As a result, an RCU read-side critical section cannot partition
974ccc9971eSMauro Carvalho Chehaba pair of RCU grace periods.
975ccc9971eSMauro Carvalho Chehab
976ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
977ccc9971eSMauro Carvalho Chehab| **Quick Quiz**:                                                       |
978ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
979ccc9971eSMauro Carvalho Chehab| How long a sequence of grace periods, each separated by an RCU        |
980ccc9971eSMauro Carvalho Chehab| read-side critical section, would be required to partition the RCU    |
981ccc9971eSMauro Carvalho Chehab| read-side critical sections at the beginning and end of the chain?    |
982ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
983ccc9971eSMauro Carvalho Chehab| **Answer**:                                                           |
984ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
985ccc9971eSMauro Carvalho Chehab| In theory, an infinite number. In practice, an unknown number that is |
986ccc9971eSMauro Carvalho Chehab| sensitive to both implementation details and timing considerations.   |
987ccc9971eSMauro Carvalho Chehab| Therefore, even in practice, RCU users must abide by the theoretical  |
988ccc9971eSMauro Carvalho Chehab| rather than the practical answer.                                     |
989ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
990ccc9971eSMauro Carvalho Chehab
991ccc9971eSMauro Carvalho ChehabParallelism Facts of Life
992ccc9971eSMauro Carvalho Chehab-------------------------
993ccc9971eSMauro Carvalho Chehab
994ccc9971eSMauro Carvalho ChehabThese parallelism facts of life are by no means specific to RCU, but the
995ccc9971eSMauro Carvalho ChehabRCU implementation must abide by them. They therefore bear repeating:
996ccc9971eSMauro Carvalho Chehab
997ccc9971eSMauro Carvalho Chehab#. Any CPU or task may be delayed at any time, and any attempts to avoid
998ccc9971eSMauro Carvalho Chehab   these delays by disabling preemption, interrupts, or whatever are
999ccc9971eSMauro Carvalho Chehab   completely futile. This is most obvious in preemptible user-level
1000ccc9971eSMauro Carvalho Chehab   environments and in virtualized environments (where a given guest
1001ccc9971eSMauro Carvalho Chehab   OS's VCPUs can be preempted at any time by the underlying
1002ccc9971eSMauro Carvalho Chehab   hypervisor), but can also happen in bare-metal environments due to
1003ccc9971eSMauro Carvalho Chehab   ECC errors, NMIs, and other hardware events. Although a delay of more
1004ccc9971eSMauro Carvalho Chehab   than about 20 seconds can result in splats, the RCU implementation is
1005ccc9971eSMauro Carvalho Chehab   obligated to use algorithms that can tolerate extremely long delays,
1006ccc9971eSMauro Carvalho Chehab   but where “extremely long” is not long enough to allow wrap-around
1007ccc9971eSMauro Carvalho Chehab   when incrementing a 64-bit counter.
1008ccc9971eSMauro Carvalho Chehab#. Both the compiler and the CPU can reorder memory accesses. Where it
1009ccc9971eSMauro Carvalho Chehab   matters, RCU must use compiler directives and memory-barrier
1010ccc9971eSMauro Carvalho Chehab   instructions to preserve ordering.
1011ccc9971eSMauro Carvalho Chehab#. Conflicting writes to memory locations in any given cache line will
1012ccc9971eSMauro Carvalho Chehab   result in expensive cache misses. Greater numbers of concurrent
1013ccc9971eSMauro Carvalho Chehab   writes and more-frequent concurrent writes will result in more
1014ccc9971eSMauro Carvalho Chehab   dramatic slowdowns. RCU is therefore obligated to use algorithms that
1015ccc9971eSMauro Carvalho Chehab   have sufficient locality to avoid significant performance and
1016ccc9971eSMauro Carvalho Chehab   scalability problems.
1017ccc9971eSMauro Carvalho Chehab#. As a rough rule of thumb, only one CPU's worth of processing may be
1018ccc9971eSMauro Carvalho Chehab   carried out under the protection of any given exclusive lock. RCU
1019ccc9971eSMauro Carvalho Chehab   must therefore use scalable locking designs.
#. Counters are finite, especially on 32-bit systems. RCU's use of
   counters must therefore tolerate counter wrap (see the wrap-tolerant
   comparison sketch at the end of this section), or be designed such
   that counter wrap would take way more time than a single system is
   likely to run. An uptime of ten years is quite possible; a runtime
   of a century, much less so. As an example of the latter, RCU's
1025ccc9971eSMauro Carvalho Chehab   dyntick-idle nesting counter allows 54 bits for interrupt nesting
1026ccc9971eSMauro Carvalho Chehab   level (this counter is 64 bits even on a 32-bit system). Overflowing
1027ccc9971eSMauro Carvalho Chehab   this counter requires 2\ :sup:`54` half-interrupts on a given CPU
1028ccc9971eSMauro Carvalho Chehab   without that CPU ever going idle. If a half-interrupt happened every
1029ccc9971eSMauro Carvalho Chehab   microsecond, it would take 570 years of runtime to overflow this
1030ccc9971eSMauro Carvalho Chehab   counter, which is currently believed to be an acceptably long time.
1031ccc9971eSMauro Carvalho Chehab#. Linux systems can have thousands of CPUs running a single Linux
1032ccc9971eSMauro Carvalho Chehab   kernel in a single shared-memory environment. RCU must therefore pay
1033ccc9971eSMauro Carvalho Chehab   close attention to high-end scalability.
1034ccc9971eSMauro Carvalho Chehab
1035ccc9971eSMauro Carvalho ChehabThis last parallelism fact of life means that RCU must pay special
1036ccc9971eSMauro Carvalho Chehabattention to the preceding facts of life. The idea that Linux might
1037ccc9971eSMauro Carvalho Chehabscale to systems with thousands of CPUs would have been met with some
skepticism in the 1990s, but these requirements would otherwise have
been unsurprising, even in the early 1990s.
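
The wrap-tolerant counter comparison mentioned above can be carried out
with ordinary unsigned modular arithmetic, similar in spirit to the
Linux kernel's ``ULONG_CMP_LT()`` macro. Here is a minimal sketch:

   ::

      /*
       * Treat "a" as preceding "b" whenever their (wrapping) difference
       * exceeds half of the counter space.  Unsigned subtraction is
       * well defined across wrap, so this works even after "b" wraps
       * to a numerically smaller value than "a".
       */
      #define WRAP_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))

      unsigned long snap = ULONG_MAX - 1;  /* Just before wrap...      */
      unsigned long gpnum = snap + 3;      /* ...and just after, == 1. */

      WARN_ON(!WRAP_CMP_LT(snap, gpnum));  /* Comparison still holds.  */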
1040ccc9971eSMauro Carvalho Chehab
1041ccc9971eSMauro Carvalho ChehabQuality-of-Implementation Requirements
1042ccc9971eSMauro Carvalho Chehab--------------------------------------
1043ccc9971eSMauro Carvalho Chehab
1044ccc9971eSMauro Carvalho ChehabThese sections list quality-of-implementation requirements. Although an
1045ccc9971eSMauro Carvalho ChehabRCU implementation that ignores these requirements could still be used,
1046ccc9971eSMauro Carvalho Chehabit would likely be subject to limitations that would make it
1047ccc9971eSMauro Carvalho Chehabinappropriate for industrial-strength production use. Classes of
1048ccc9971eSMauro Carvalho Chehabquality-of-implementation requirements are as follows:
1049ccc9971eSMauro Carvalho Chehab
105007335c16SJoel Fernandes (Google)#. `Specialization`_
105107335c16SJoel Fernandes (Google)#. `Performance and Scalability`_
105207335c16SJoel Fernandes (Google)#. `Forward Progress`_
105307335c16SJoel Fernandes (Google)#. `Composability`_
105407335c16SJoel Fernandes (Google)#. `Corner Cases`_
1055ccc9971eSMauro Carvalho Chehab
These classes are covered in the following sections.
1057ccc9971eSMauro Carvalho Chehab
1058ccc9971eSMauro Carvalho ChehabSpecialization
1059ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~
1060ccc9971eSMauro Carvalho Chehab
1061ccc9971eSMauro Carvalho ChehabRCU is and always has been intended primarily for read-mostly
1062ccc9971eSMauro Carvalho Chehabsituations, which means that RCU's read-side primitives are optimized,
1063ccc9971eSMauro Carvalho Chehaboften at the expense of its update-side primitives. Experience thus far
1064ccc9971eSMauro Carvalho Chehabis captured by the following list of situations:
1065ccc9971eSMauro Carvalho Chehab
1066ccc9971eSMauro Carvalho Chehab#. Read-mostly data, where stale and inconsistent data is not a problem:
1067ccc9971eSMauro Carvalho Chehab   RCU works great!
1068ccc9971eSMauro Carvalho Chehab#. Read-mostly data, where data must be consistent: RCU works well.
1069ccc9971eSMauro Carvalho Chehab#. Read-write data, where data must be consistent: RCU *might* work OK.
1070ccc9971eSMauro Carvalho Chehab   Or not.
1071ccc9971eSMauro Carvalho Chehab#. Write-mostly data, where data must be consistent: RCU is very
1072ccc9971eSMauro Carvalho Chehab   unlikely to be the right tool for the job, with the following
1073ccc9971eSMauro Carvalho Chehab   exceptions, where RCU can provide:
1074ccc9971eSMauro Carvalho Chehab
1075ccc9971eSMauro Carvalho Chehab   a. Existence guarantees for update-friendly mechanisms.
1076ccc9971eSMauro Carvalho Chehab   b. Wait-free read-side primitives for real-time use.
1077ccc9971eSMauro Carvalho Chehab
1078ccc9971eSMauro Carvalho ChehabThis focus on read-mostly situations means that RCU must interoperate
1079ccc9971eSMauro Carvalho Chehabwith other synchronization primitives. For example, the ``add_gp()`` and
1080ccc9971eSMauro Carvalho Chehab``remove_gp_synchronous()`` examples discussed earlier use RCU to
1081ccc9971eSMauro Carvalho Chehabprotect readers and locking to coordinate updaters. However, the need
1082ccc9971eSMauro Carvalho Chehabextends much farther, requiring that a variety of synchronization
1083ccc9971eSMauro Carvalho Chehabprimitives be legal within RCU read-side critical sections, including
1084ccc9971eSMauro Carvalho Chehabspinlocks, sequence locks, atomic operations, reference counters, and
1085ccc9971eSMauro Carvalho Chehabmemory barriers.
1086ccc9971eSMauro Carvalho Chehab
1087ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1088ccc9971eSMauro Carvalho Chehab| **Quick Quiz**:                                                       |
1089ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1090ccc9971eSMauro Carvalho Chehab| What about sleeping locks?                                            |
1091ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1092ccc9971eSMauro Carvalho Chehab| **Answer**:                                                           |
1093ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1094ccc9971eSMauro Carvalho Chehab| These are forbidden within Linux-kernel RCU read-side critical        |
1095ccc9971eSMauro Carvalho Chehab| sections because it is not legal to place a quiescent state (in this  |
1096ccc9971eSMauro Carvalho Chehab| case, voluntary context switch) within an RCU read-side critical      |
1097ccc9971eSMauro Carvalho Chehab| section. However, sleeping locks may be used within userspace RCU     |
1098ccc9971eSMauro Carvalho Chehab| read-side critical sections, and also within Linux-kernel sleepable   |
1099ccc9971eSMauro Carvalho Chehab| RCU `(SRCU) <#Sleepable%20RCU>`__ read-side critical sections. In     |
| addition, the -rt patchset turns spinlocks into sleeping locks so     |
1101ccc9971eSMauro Carvalho Chehab| that the corresponding critical sections can be preempted, which also |
1102ccc9971eSMauro Carvalho Chehab| means that these sleeplockified spinlocks (but not other sleeping     |
| locks!) may be acquired within -rt-Linux-kernel RCU read-side critical|
1104ccc9971eSMauro Carvalho Chehab| sections.                                                             |
1105ccc9971eSMauro Carvalho Chehab| Note that it *is* legal for a normal RCU read-side critical section   |
| to conditionally acquire a sleeping lock (as in                       |
1107ccc9971eSMauro Carvalho Chehab| ``mutex_trylock()``), but only as long as it does not loop            |
| indefinitely attempting to conditionally acquire that sleeping lock.  |
1109ccc9971eSMauro Carvalho Chehab| The key point is that things like ``mutex_trylock()`` either return   |
1110ccc9971eSMauro Carvalho Chehab| with the mutex held, or return an error indication if the mutex was   |
1111ccc9971eSMauro Carvalho Chehab| not immediately available. Either way, ``mutex_trylock()`` returns    |
1112ccc9971eSMauro Carvalho Chehab| immediately without sleeping.                                         |
1113ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1114ccc9971eSMauro Carvalho Chehab
It often comes as a surprise that many algorithms do not require a
consistent view of data, and that many can function quite well in that
inconsistent mode, with network routing being the poster child.
Internet routing algorithms take
1118ccc9971eSMauro Carvalho Chehabsignificant time to propagate updates, so that by the time an update
1119ccc9971eSMauro Carvalho Chehabarrives at a given system, that system has been sending network traffic
1120ccc9971eSMauro Carvalho Chehabthe wrong way for a considerable length of time. Having a few threads
1121ccc9971eSMauro Carvalho Chehabcontinue to send traffic the wrong way for a few more milliseconds is
1122ccc9971eSMauro Carvalho Chehabclearly not a problem: In the worst case, TCP retransmissions will
1123ccc9971eSMauro Carvalho Chehabeventually get the data where it needs to go. In general, when tracking
1124ccc9971eSMauro Carvalho Chehabthe state of the universe outside of the computer, some level of
1125ccc9971eSMauro Carvalho Chehabinconsistency must be tolerated due to speed-of-light delays if nothing
1126ccc9971eSMauro Carvalho Chehabelse.
1127ccc9971eSMauro Carvalho Chehab
1128ccc9971eSMauro Carvalho ChehabFurthermore, uncertainty about external state is inherent in many cases.
1129ccc9971eSMauro Carvalho ChehabFor example, a pair of veterinarians might use heartbeat to determine
1130ccc9971eSMauro Carvalho Chehabwhether or not a given cat was alive. But how long should they wait
1131ccc9971eSMauro Carvalho Chehabafter the last heartbeat to decide that the cat is in fact dead? Waiting
1132ccc9971eSMauro Carvalho Chehabless than 400 milliseconds makes no sense because this would mean that a
1133ccc9971eSMauro Carvalho Chehabrelaxed cat would be considered to cycle between death and life more
1134ccc9971eSMauro Carvalho Chehabthan 100 times per minute. Moreover, just as with human beings, a cat's
1135ccc9971eSMauro Carvalho Chehabheart might stop for some period of time, so the exact wait period is a
1136ccc9971eSMauro Carvalho Chehabjudgment call. One of our pair of veterinarians might wait 30 seconds
1137ccc9971eSMauro Carvalho Chehabbefore pronouncing the cat dead, while the other might insist on waiting
1138ccc9971eSMauro Carvalho Chehaba full minute. The two veterinarians would then disagree on the state of
1139ccc9971eSMauro Carvalho Chehabthe cat during the final 30 seconds of the minute following the last
1140ccc9971eSMauro Carvalho Chehabheartbeat.
1141ccc9971eSMauro Carvalho Chehab
1142ccc9971eSMauro Carvalho ChehabInterestingly enough, this same situation applies to hardware. When push
1143ccc9971eSMauro Carvalho Chehabcomes to shove, how do we tell whether or not some external server has
1144ccc9971eSMauro Carvalho Chehabfailed? We send messages to it periodically, and declare it failed if we
1145ccc9971eSMauro Carvalho Chehabdon't receive a response within a given period of time. Policy decisions
1146ccc9971eSMauro Carvalho Chehabcan usually tolerate short periods of inconsistency. The policy was
1147ccc9971eSMauro Carvalho Chehabdecided some time ago, and is only now being put into effect, so a few
1148ccc9971eSMauro Carvalho Chehabmilliseconds of delay is normally inconsequential.
1149ccc9971eSMauro Carvalho Chehab
1150ccc9971eSMauro Carvalho ChehabHowever, there are algorithms that absolutely must see consistent data.
For example, the translation from a user-level SystemV semaphore ID
to the corresponding in-kernel data structure is protected by RCU, but
1153ccc9971eSMauro Carvalho Chehabit is absolutely forbidden to update a semaphore that has just been
1154ccc9971eSMauro Carvalho Chehabremoved. In the Linux kernel, this need for consistency is accommodated
1155ccc9971eSMauro Carvalho Chehabby acquiring spinlocks located in the in-kernel data structure from
1156ccc9971eSMauro Carvalho Chehabwithin the RCU read-side critical section, and this is indicated by the
1157ccc9971eSMauro Carvalho Chehabgreen box in the figure above. Many other techniques may be used, and
1158ccc9971eSMauro Carvalho Chehabare in fact used within the Linux kernel.
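
A hedged sketch of this pattern follows, with hypothetical ``sem_idr``,
``struct sem_struct``, and ``deleted`` flag standing in for the actual
Linux-kernel ipc data structures:

   ::

      /* Look up a semaphore by ID and return it locked, or NULL. */
      struct sem_struct *lookup_and_lock_sem(int id)
      {
        struct sem_struct *sma;

        rcu_read_lock();
        sma = idr_find(&sem_idr, id);  /* RCU-protected lookup.        */
        if (sma) {
          spin_lock(&sma->lock);       /* Consistency starts here.     */
          if (sma->deleted) {          /* Recheck under the lock in    */
            spin_unlock(&sma->lock);   /* case of concurrent removal.  */
            sma = NULL;
          }
        }
        rcu_read_unlock();             /* The lock, not RCU, now keeps */
        return sma;                    /* the structure in existence.  */
      }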
1159ccc9971eSMauro Carvalho Chehab
1160ccc9971eSMauro Carvalho ChehabIn short, RCU is not required to maintain consistency, and other
1161ccc9971eSMauro Carvalho Chehabmechanisms may be used in concert with RCU when consistency is required.
1162ccc9971eSMauro Carvalho ChehabRCU's specialization allows it to do its job extremely well, and its
1163ccc9971eSMauro Carvalho Chehabability to interoperate with other synchronization mechanisms allows the
1164ccc9971eSMauro Carvalho Chehabright mix of synchronization tools to be used for a given job.
1165ccc9971eSMauro Carvalho Chehab
1166ccc9971eSMauro Carvalho ChehabPerformance and Scalability
1167ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~
1168ccc9971eSMauro Carvalho Chehab
1169ccc9971eSMauro Carvalho ChehabEnergy efficiency is a critical component of performance today, and
1170ccc9971eSMauro Carvalho ChehabLinux-kernel RCU implementations must therefore avoid unnecessarily
1171ccc9971eSMauro Carvalho Chehabawakening idle CPUs. I cannot claim that this requirement was
1172ccc9971eSMauro Carvalho Chehabpremeditated. In fact, I learned of it during a telephone conversation
1173ccc9971eSMauro Carvalho Chehabin which I was given “frank and open” feedback on the importance of
1174ccc9971eSMauro Carvalho Chehabenergy efficiency in battery-powered systems and on specific
1175ccc9971eSMauro Carvalho Chehabenergy-efficiency shortcomings of the Linux-kernel RCU implementation.
1176ccc9971eSMauro Carvalho ChehabIn my experience, the battery-powered embedded community will consider
1177ccc9971eSMauro Carvalho Chehabany unnecessary wakeups to be extremely unfriendly acts. So much so that
1178ccc9971eSMauro Carvalho Chehabmere Linux-kernel-mailing-list posts are insufficient to vent their ire.
1179ccc9971eSMauro Carvalho Chehab
Memory consumption is not particularly important in most situations,
1181ccc9971eSMauro Carvalho Chehaband has become decreasingly so as memory sizes have expanded and memory
1182ccc9971eSMauro Carvalho Chehabcosts have plummeted. However, as I learned from Matt Mackall's
1183ccc9971eSMauro Carvalho Chehab`bloatwatch <http://elinux.org/Linux_Tiny-FAQ>`__ efforts, memory
1184ccc9971eSMauro Carvalho Chehabfootprint is critically important on single-CPU systems with
1185ccc9971eSMauro Carvalho Chehabnon-preemptible (``CONFIG_PREEMPT=n``) kernels, and thus `tiny
1186ccc9971eSMauro Carvalho ChehabRCU <https://lkml.kernel.org/g/20090113221724.GA15307@linux.vnet.ibm.com>`__
1187ccc9971eSMauro Carvalho Chehabwas born. Josh Triplett has since taken over the small-memory banner
1188ccc9971eSMauro Carvalho Chehabwith his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__
1189ccc9971eSMauro Carvalho Chehabproject, which resulted in `SRCU <#Sleepable%20RCU>`__ becoming optional
1190ccc9971eSMauro Carvalho Chehabfor those kernels not needing it.
1191ccc9971eSMauro Carvalho Chehab
1192ccc9971eSMauro Carvalho ChehabThe remaining performance requirements are, for the most part,
1193ccc9971eSMauro Carvalho Chehabunsurprising. For example, in keeping with RCU's read-side
1194ccc9971eSMauro Carvalho Chehabspecialization, ``rcu_dereference()`` should have negligible overhead
1195ccc9971eSMauro Carvalho Chehab(for example, suppression of a few minor compiler optimizations).
1196ccc9971eSMauro Carvalho ChehabSimilarly, in non-preemptible environments, ``rcu_read_lock()`` and
1197ccc9971eSMauro Carvalho Chehab``rcu_read_unlock()`` should have exactly zero overhead.
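
For example, in a ``CONFIG_PREEMPT=n`` build these markers can reduce
to compiler-only constructs that emit no instructions at all, roughly
as in the following simplified sketch (not the exact Linux-kernel
definitions):

   ::

      /* Simplified sketch for non-preemptible kernels. */
      static inline void rcu_read_lock(void)
      {
        preempt_disable();  /* Just barrier() when preemption is off. */
      }

      static inline void rcu_read_unlock(void)
      {
        preempt_enable();   /* Likewise compiles to no instructions.  */
      }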
1198ccc9971eSMauro Carvalho Chehab
1199ccc9971eSMauro Carvalho ChehabIn preemptible environments, in the case where the RCU read-side
1200ccc9971eSMauro Carvalho Chehabcritical section was not preempted (as will be the case for the
1201ccc9971eSMauro Carvalho Chehabhighest-priority real-time process), ``rcu_read_lock()`` and
1202ccc9971eSMauro Carvalho Chehab``rcu_read_unlock()`` should have minimal overhead. In particular, they
1203ccc9971eSMauro Carvalho Chehabshould not contain atomic read-modify-write operations, memory-barrier
1204ccc9971eSMauro Carvalho Chehabinstructions, preemption disabling, interrupt disabling, or backwards
1205ccc9971eSMauro Carvalho Chehabbranches. However, in the case where the RCU read-side critical section
1206ccc9971eSMauro Carvalho Chehabwas preempted, ``rcu_read_unlock()`` may acquire spinlocks and disable
1207ccc9971eSMauro Carvalho Chehabinterrupts. This is why it is better to nest an RCU read-side critical
1208ccc9971eSMauro Carvalho Chehabsection within a preempt-disable region than vice versa, at least in
1209ccc9971eSMauro Carvalho Chehabcases where that critical section is short enough to avoid unduly
1210ccc9971eSMauro Carvalho Chehabdegrading real-time latencies.
1211ccc9971eSMauro Carvalho Chehab
1212ccc9971eSMauro Carvalho ChehabThe ``synchronize_rcu()`` grace-period-wait primitive is optimized for
1213ccc9971eSMauro Carvalho Chehabthroughput. It may therefore incur several milliseconds of latency in
1214ccc9971eSMauro Carvalho Chehabaddition to the duration of the longest RCU read-side critical section.
1215ccc9971eSMauro Carvalho ChehabOn the other hand, multiple concurrent invocations of
1216ccc9971eSMauro Carvalho Chehab``synchronize_rcu()`` are required to use batching optimizations so that
1217ccc9971eSMauro Carvalho Chehabthey can be satisfied by a single underlying grace-period-wait
1218ccc9971eSMauro Carvalho Chehaboperation. For example, in the Linux kernel, it is not unusual for a
1219ccc9971eSMauro Carvalho Chehabsingle grace-period-wait operation to serve more than `1,000 separate
1220ccc9971eSMauro Carvalho Chehabinvocations <https://www.usenix.org/conference/2004-usenix-annual-technical-conference/making-rcu-safe-deep-sub-millisecond-response>`__
1221ccc9971eSMauro Carvalho Chehabof ``synchronize_rcu()``, thus amortizing the per-invocation overhead
1222ccc9971eSMauro Carvalho Chehabdown to nearly zero. However, the grace-period optimization is also
1223ccc9971eSMauro Carvalho Chehabrequired to avoid measurable degradation of real-time scheduling and
1224ccc9971eSMauro Carvalho Chehabinterrupt latencies.
1225ccc9971eSMauro Carvalho Chehab
1226ccc9971eSMauro Carvalho ChehabIn some cases, the multi-millisecond ``synchronize_rcu()`` latencies are
1227ccc9971eSMauro Carvalho Chehabunacceptable. In these cases, ``synchronize_rcu_expedited()`` may be
1228ccc9971eSMauro Carvalho Chehabused instead, reducing the grace-period latency down to a few tens of
1229ccc9971eSMauro Carvalho Chehabmicroseconds on small systems, at least in cases where the RCU read-side
1230ccc9971eSMauro Carvalho Chehabcritical sections are short. There are currently no special latency
1231ccc9971eSMauro Carvalho Chehabrequirements for ``synchronize_rcu_expedited()`` on large systems, but,
1232ccc9971eSMauro Carvalho Chehabconsistent with the empirical nature of the RCU specification, that is
1233ccc9971eSMauro Carvalho Chehabsubject to change. However, there most definitely are scalability
1234ccc9971eSMauro Carvalho Chehabrequirements: A storm of ``synchronize_rcu_expedited()`` invocations on
1235ccc9971eSMauro Carvalho Chehab4096 CPUs should at least make reasonable forward progress. In return
1236ccc9971eSMauro Carvalho Chehabfor its shorter latencies, ``synchronize_rcu_expedited()`` is permitted
1237ccc9971eSMauro Carvalho Chehabto impose modest degradation of real-time latency on non-idle online
1238ccc9971eSMauro Carvalho ChehabCPUs. Here, “modest” means roughly the same latency degradation as a
1239ccc9971eSMauro Carvalho Chehabscheduling-clock interrupt.
1240ccc9971eSMauro Carvalho Chehab
1241ccc9971eSMauro Carvalho ChehabThere are a number of situations where even
1242ccc9971eSMauro Carvalho Chehab``synchronize_rcu_expedited()``'s reduced grace-period latency is
1243ccc9971eSMauro Carvalho Chehabunacceptable. In these situations, the asynchronous ``call_rcu()`` can
1244ccc9971eSMauro Carvalho Chehabbe used in place of ``synchronize_rcu()`` as follows:
1245ccc9971eSMauro Carvalho Chehab
1246ccc9971eSMauro Carvalho Chehab   ::
1247ccc9971eSMauro Carvalho Chehab
1248ccc9971eSMauro Carvalho Chehab       1 struct foo {
1249ccc9971eSMauro Carvalho Chehab       2   int a;
1250ccc9971eSMauro Carvalho Chehab       3   int b;
1251ccc9971eSMauro Carvalho Chehab       4   struct rcu_head rh;
1252ccc9971eSMauro Carvalho Chehab       5 };
1253ccc9971eSMauro Carvalho Chehab       6
1254ccc9971eSMauro Carvalho Chehab       7 static void remove_gp_cb(struct rcu_head *rhp)
1255ccc9971eSMauro Carvalho Chehab       8 {
1256ccc9971eSMauro Carvalho Chehab       9   struct foo *p = container_of(rhp, struct foo, rh);
1257ccc9971eSMauro Carvalho Chehab      10
1258ccc9971eSMauro Carvalho Chehab      11   kfree(p);
1259ccc9971eSMauro Carvalho Chehab      12 }
1260ccc9971eSMauro Carvalho Chehab      13
1261ccc9971eSMauro Carvalho Chehab      14 bool remove_gp_asynchronous(void)
1262ccc9971eSMauro Carvalho Chehab      15 {
1263ccc9971eSMauro Carvalho Chehab      16   struct foo *p;
1264ccc9971eSMauro Carvalho Chehab      17
1265ccc9971eSMauro Carvalho Chehab      18   spin_lock(&gp_lock);
1266ccc9971eSMauro Carvalho Chehab      19   p = rcu_access_pointer(gp);
1267ccc9971eSMauro Carvalho Chehab      20   if (!p) {
1268ccc9971eSMauro Carvalho Chehab      21     spin_unlock(&gp_lock);
1269ccc9971eSMauro Carvalho Chehab      22     return false;
1270ccc9971eSMauro Carvalho Chehab      23   }
1271ccc9971eSMauro Carvalho Chehab      24   rcu_assign_pointer(gp, NULL);
1272ccc9971eSMauro Carvalho Chehab      25   call_rcu(&p->rh, remove_gp_cb);
1273ccc9971eSMauro Carvalho Chehab      26   spin_unlock(&gp_lock);
1274ccc9971eSMauro Carvalho Chehab      27   return true;
1275ccc9971eSMauro Carvalho Chehab      28 }
1276ccc9971eSMauro Carvalho Chehab
1277ccc9971eSMauro Carvalho ChehabA definition of ``struct foo`` is finally needed, and appears on
1278ccc9971eSMauro Carvalho Chehablines 1-5. The function ``remove_gp_cb()`` is passed to ``call_rcu()``
1279ccc9971eSMauro Carvalho Chehabon line 25, and will be invoked after the end of a subsequent grace
1280ccc9971eSMauro Carvalho Chehabperiod. This gets the same effect as ``remove_gp_synchronous()``, but
1281ccc9971eSMauro Carvalho Chehabwithout forcing the updater to wait for a grace period to elapse. The
1282ccc9971eSMauro Carvalho Chehab``call_rcu()`` function may be used in a number of situations where
1283ccc9971eSMauro Carvalho Chehabneither ``synchronize_rcu()`` nor ``synchronize_rcu_expedited()`` would
1284ccc9971eSMauro Carvalho Chehabbe legal, including within preempt-disable code, ``local_bh_disable()``
1285ccc9971eSMauro Carvalho Chehabcode, interrupt-disable code, and interrupt handlers. However, even
1286ccc9971eSMauro Carvalho Chehab``call_rcu()`` is illegal within NMI handlers and from idle and offline
1287ccc9971eSMauro Carvalho ChehabCPUs. The callback function (``remove_gp_cb()`` in this case) will be
executed within a softirq (software interrupt) environment within the
1289ccc9971eSMauro Carvalho ChehabLinux kernel, either within a real softirq handler or under the
1290ccc9971eSMauro Carvalho Chehabprotection of ``local_bh_disable()``. In both the Linux kernel and in
1291ccc9971eSMauro Carvalho Chehabuserspace, it is bad practice to write an RCU callback function that
1292ccc9971eSMauro Carvalho Chehabtakes too long. Long-running operations should be relegated to separate
1293ccc9971eSMauro Carvalho Chehabthreads or (in the Linux kernel) workqueues.
1294ccc9971eSMauro Carvalho Chehab
1295ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1296ccc9971eSMauro Carvalho Chehab| **Quick Quiz**:                                                       |
1297ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1298ccc9971eSMauro Carvalho Chehab| Why does line 19 use ``rcu_access_pointer()``? After all,             |
1299ccc9971eSMauro Carvalho Chehab| ``call_rcu()`` on line 25 stores into the structure, which would      |
1300ccc9971eSMauro Carvalho Chehab| interact badly with concurrent insertions. Doesn't this mean that     |
1301ccc9971eSMauro Carvalho Chehab| ``rcu_dereference()`` is required?                                    |
1302ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1303ccc9971eSMauro Carvalho Chehab| **Answer**:                                                           |
1304ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1305ccc9971eSMauro Carvalho Chehab| Presumably the ``->gp_lock`` acquired on line 18 excludes any         |
1306ccc9971eSMauro Carvalho Chehab| changes, including any insertions that ``rcu_dereference()`` would    |
1307ccc9971eSMauro Carvalho Chehab| protect against. Therefore, any insertions will be delayed until      |
1308ccc9971eSMauro Carvalho Chehab| after ``->gp_lock`` is released on line 25, which in turn means that  |
1309ccc9971eSMauro Carvalho Chehab| ``rcu_access_pointer()`` suffices.                                    |
1310ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1311ccc9971eSMauro Carvalho Chehab
1312ccc9971eSMauro Carvalho ChehabHowever, all that ``remove_gp_cb()`` is doing is invoking ``kfree()`` on
1313ccc9971eSMauro Carvalho Chehabthe data element. This is a common idiom, and is supported by
1314ccc9971eSMauro Carvalho Chehab``kfree_rcu()``, which allows “fire and forget” operation as shown
1315ccc9971eSMauro Carvalho Chehabbelow:
1316ccc9971eSMauro Carvalho Chehab
1317ccc9971eSMauro Carvalho Chehab   ::
1318ccc9971eSMauro Carvalho Chehab
1319ccc9971eSMauro Carvalho Chehab       1 struct foo {
1320ccc9971eSMauro Carvalho Chehab       2   int a;
1321ccc9971eSMauro Carvalho Chehab       3   int b;
1322ccc9971eSMauro Carvalho Chehab       4   struct rcu_head rh;
1323ccc9971eSMauro Carvalho Chehab       5 };
1324ccc9971eSMauro Carvalho Chehab       6
1325ccc9971eSMauro Carvalho Chehab       7 bool remove_gp_faf(void)
1326ccc9971eSMauro Carvalho Chehab       8 {
1327ccc9971eSMauro Carvalho Chehab       9   struct foo *p;
1328ccc9971eSMauro Carvalho Chehab      10
1329ccc9971eSMauro Carvalho Chehab      11   spin_lock(&gp_lock);
1330ccc9971eSMauro Carvalho Chehab      12   p = rcu_dereference(gp);
1331ccc9971eSMauro Carvalho Chehab      13   if (!p) {
1332ccc9971eSMauro Carvalho Chehab      14     spin_unlock(&gp_lock);
1333ccc9971eSMauro Carvalho Chehab      15     return false;
1334ccc9971eSMauro Carvalho Chehab      16   }
1335ccc9971eSMauro Carvalho Chehab      17   rcu_assign_pointer(gp, NULL);
1336ccc9971eSMauro Carvalho Chehab      18   kfree_rcu(p, rh);
1337ccc9971eSMauro Carvalho Chehab      19   spin_unlock(&gp_lock);
1338ccc9971eSMauro Carvalho Chehab      20   return true;
1339ccc9971eSMauro Carvalho Chehab      21 }
1340ccc9971eSMauro Carvalho Chehab
1341ccc9971eSMauro Carvalho ChehabNote that ``remove_gp_faf()`` simply invokes ``kfree_rcu()`` and
1342ccc9971eSMauro Carvalho Chehabproceeds, without any need to pay any further attention to the
1343ccc9971eSMauro Carvalho Chehabsubsequent grace period and ``kfree()``. It is permissible to invoke
1344ccc9971eSMauro Carvalho Chehab``kfree_rcu()`` from the same environments as for ``call_rcu()``.
1345ccc9971eSMauro Carvalho ChehabInterestingly enough, DYNIX/ptx had the equivalents of ``call_rcu()``
1346ccc9971eSMauro Carvalho Chehaband ``kfree_rcu()``, but not ``synchronize_rcu()``. This was due to the
1347ccc9971eSMauro Carvalho Chehabfact that RCU was not heavily used within DYNIX/ptx, so the very few
1348ccc9971eSMauro Carvalho Chehabplaces that needed something like ``synchronize_rcu()`` simply
1349ccc9971eSMauro Carvalho Chehabopen-coded it.
1350ccc9971eSMauro Carvalho Chehab
1351ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1352ccc9971eSMauro Carvalho Chehab| **Quick Quiz**:                                                       |
1353ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1354ccc9971eSMauro Carvalho Chehab| Earlier it was claimed that ``call_rcu()`` and ``kfree_rcu()``        |
1355ccc9971eSMauro Carvalho Chehab| allowed updaters to avoid being blocked by readers. But how can that  |
1356ccc9971eSMauro Carvalho Chehab| be correct, given that the invocation of the callback and the freeing |
1357ccc9971eSMauro Carvalho Chehab| of the memory (respectively) must still wait for a grace period to    |
1358ccc9971eSMauro Carvalho Chehab| elapse?                                                               |
1359ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1360ccc9971eSMauro Carvalho Chehab| **Answer**:                                                           |
1361ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1362ccc9971eSMauro Carvalho Chehab| We could define things this way, but keep in mind that this sort of   |
1363ccc9971eSMauro Carvalho Chehab| definition would say that updates in garbage-collected languages      |
1364ccc9971eSMauro Carvalho Chehab| cannot complete until the next time the garbage collector runs, which |
1365ccc9971eSMauro Carvalho Chehab| does not seem at all reasonable. The key point is that in most cases, |
1366ccc9971eSMauro Carvalho Chehab| an updater using either ``call_rcu()`` or ``kfree_rcu()`` can proceed |
1367ccc9971eSMauro Carvalho Chehab| to the next update as soon as it has invoked ``call_rcu()`` or        |
1368ccc9971eSMauro Carvalho Chehab| ``kfree_rcu()``, without having to wait for a subsequent grace        |
1369ccc9971eSMauro Carvalho Chehab| period.                                                               |
1370ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1371ccc9971eSMauro Carvalho Chehab
1372ccc9971eSMauro Carvalho ChehabBut what if the updater must wait for the completion of code to be
1373ccc9971eSMauro Carvalho Chehabexecuted after the end of the grace period, but has other tasks that can
1374ccc9971eSMauro Carvalho Chehabbe carried out in the meantime? The polling-style
1375ccc9971eSMauro Carvalho Chehab``get_state_synchronize_rcu()`` and ``cond_synchronize_rcu()`` functions
1376ccc9971eSMauro Carvalho Chehabmay be used for this purpose, as shown below:
1377ccc9971eSMauro Carvalho Chehab
1378ccc9971eSMauro Carvalho Chehab   ::
1379ccc9971eSMauro Carvalho Chehab
1380ccc9971eSMauro Carvalho Chehab       1 bool remove_gp_poll(void)
1381ccc9971eSMauro Carvalho Chehab       2 {
1382ccc9971eSMauro Carvalho Chehab       3   struct foo *p;
1383ccc9971eSMauro Carvalho Chehab       4   unsigned long s;
1384ccc9971eSMauro Carvalho Chehab       5
1385ccc9971eSMauro Carvalho Chehab       6   spin_lock(&gp_lock);
1386ccc9971eSMauro Carvalho Chehab       7   p = rcu_access_pointer(gp);
1387ccc9971eSMauro Carvalho Chehab       8   if (!p) {
1388ccc9971eSMauro Carvalho Chehab       9     spin_unlock(&gp_lock);
1389ccc9971eSMauro Carvalho Chehab      10     return false;
1390ccc9971eSMauro Carvalho Chehab      11   }
1391ccc9971eSMauro Carvalho Chehab      12   rcu_assign_pointer(gp, NULL);
1392ccc9971eSMauro Carvalho Chehab      13   spin_unlock(&gp_lock);
1393ccc9971eSMauro Carvalho Chehab      14   s = get_state_synchronize_rcu();
1394ccc9971eSMauro Carvalho Chehab      15   do_something_while_waiting();
1395ccc9971eSMauro Carvalho Chehab      16   cond_synchronize_rcu(s);
1396ccc9971eSMauro Carvalho Chehab      17   kfree(p);
1397ccc9971eSMauro Carvalho Chehab      18   return true;
1398ccc9971eSMauro Carvalho Chehab      19 }
1399ccc9971eSMauro Carvalho Chehab
1400ccc9971eSMauro Carvalho ChehabOn line 14, ``get_state_synchronize_rcu()`` obtains a “cookie” from RCU,
1401ccc9971eSMauro Carvalho Chehabthen line 15 carries out other tasks, and finally, line 16 returns
1402ccc9971eSMauro Carvalho Chehabimmediately if a grace period has elapsed in the meantime, but otherwise
waits as required. The need for ``get_state_synchronize_rcu()`` and
1404ccc9971eSMauro Carvalho Chehab``cond_synchronize_rcu()`` has appeared quite recently, so it is too
1405ccc9971eSMauro Carvalho Chehabearly to tell whether they will stand the test of time.
1406ccc9971eSMauro Carvalho Chehab
1407ccc9971eSMauro Carvalho ChehabRCU thus provides a range of tools to allow updaters to strike the
required tradeoff between latency, flexibility, and CPU overhead.
1409ccc9971eSMauro Carvalho Chehab
1410ccc9971eSMauro Carvalho ChehabForward Progress
1411ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~
1412ccc9971eSMauro Carvalho Chehab
1413ccc9971eSMauro Carvalho ChehabIn theory, delaying grace-period completion and callback invocation is
1414ccc9971eSMauro Carvalho Chehabharmless. In practice, not only are memory sizes finite but also
1415ccc9971eSMauro Carvalho Chehabcallbacks sometimes do wakeups, and sufficiently deferred wakeups can be
1416ccc9971eSMauro Carvalho Chehabdifficult to distinguish from system hangs. Therefore, RCU must provide
1417ccc9971eSMauro Carvalho Chehaba number of mechanisms to promote forward progress.
1418ccc9971eSMauro Carvalho Chehab
1419ccc9971eSMauro Carvalho ChehabThese mechanisms are not foolproof, nor can they be. For one simple
1420ccc9971eSMauro Carvalho Chehabexample, an infinite loop in an RCU read-side critical section must by
1421ccc9971eSMauro Carvalho Chehabdefinition prevent later grace periods from ever completing. For a more
1422ccc9971eSMauro Carvalho Chehabinvolved example, consider a 64-CPU system built with
1423ccc9971eSMauro Carvalho Chehab``CONFIG_RCU_NOCB_CPU=y`` and booted with ``rcu_nocbs=1-63``, where
1424ccc9971eSMauro Carvalho ChehabCPUs 1 through 63 spin in tight loops that invoke ``call_rcu()``. Even
1425ccc9971eSMauro Carvalho Chehabif these tight loops also contain calls to ``cond_resched()`` (thus
1426ccc9971eSMauro Carvalho Chehaballowing grace periods to complete), CPU 0 simply will not be able to
1427ccc9971eSMauro Carvalho Chehabinvoke callbacks as fast as the other 63 CPUs can register them, at
1428ccc9971eSMauro Carvalho Chehableast not until the system runs out of memory. In both of these
1429ccc9971eSMauro Carvalho Chehabexamples, the Spiderman principle applies: With great power comes great
1430ccc9971eSMauro Carvalho Chehabresponsibility. However, short of this level of abuse, RCU is required
1431ccc9971eSMauro Carvalho Chehabto ensure timely completion of grace periods and timely invocation of
1432ccc9971eSMauro Carvalho Chehabcallbacks.
1433ccc9971eSMauro Carvalho Chehab
1434ccc9971eSMauro Carvalho ChehabRCU takes the following steps to encourage timely completion of grace
1435ccc9971eSMauro Carvalho Chehabperiods:
1436ccc9971eSMauro Carvalho Chehab
1437ccc9971eSMauro Carvalho Chehab#. If a grace period fails to complete within 100 milliseconds, RCU
1438ccc9971eSMauro Carvalho Chehab   causes future invocations of ``cond_resched()`` on the holdout CPUs
1439ccc9971eSMauro Carvalho Chehab   to provide an RCU quiescent state. RCU also causes those CPUs'
   ``need_resched()`` invocations to return ``true``, but only after the
   corresponding CPU's next scheduling-clock interrupt. (A
   ``cond_resched()`` loop sketch follows this list.)
1442ccc9971eSMauro Carvalho Chehab#. CPUs mentioned in the ``nohz_full`` kernel boot parameter can run
1443ccc9971eSMauro Carvalho Chehab   indefinitely in the kernel without scheduling-clock interrupts, which
   defeats the above ``need_resched()`` stratagem. RCU will therefore
1445ccc9971eSMauro Carvalho Chehab   invoke ``resched_cpu()`` on any ``nohz_full`` CPUs still holding out
1446ccc9971eSMauro Carvalho Chehab   after 109 milliseconds.
1447ccc9971eSMauro Carvalho Chehab#. In kernels built with ``CONFIG_RCU_BOOST=y``, if a given task that
1448ccc9971eSMauro Carvalho Chehab   has been preempted within an RCU read-side critical section is
1449ccc9971eSMauro Carvalho Chehab   holding out for more than 500 milliseconds, RCU will resort to
1450ccc9971eSMauro Carvalho Chehab   priority boosting.
1451ccc9971eSMauro Carvalho Chehab#. If a CPU is still holding out 10 seconds into the grace period, RCU
1452ccc9971eSMauro Carvalho Chehab   will invoke ``resched_cpu()`` on it regardless of its ``nohz_full``
1453ccc9971eSMauro Carvalho Chehab   state.
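
For example, a long-running in-kernel loop can cooperate with the first
of these mechanisms, as shown in the following minimal sketch (the unit
of work and the loop bound are hypothetical):

   ::

       1 for (i = 0; i < huge_number; i++) {
       2   process_one_item(i); /* Hypothetical unit of work. */
       3   cond_resched();      /* Supplies a quiescent state if RCU needs one. */
       4 }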
1454ccc9971eSMauro Carvalho Chehab
1455ccc9971eSMauro Carvalho ChehabThe above values are defaults for systems running with ``HZ=1000``. They
1456ccc9971eSMauro Carvalho Chehabwill vary as the value of ``HZ`` varies, and can also be changed using
1457ccc9971eSMauro Carvalho Chehabthe relevant Kconfig options and kernel boot parameters. RCU currently
1458ccc9971eSMauro Carvalho Chehabdoes not do much sanity checking of these parameters, so please use
1459ccc9971eSMauro Carvalho Chehabcaution when changing them. Note that these forward-progress measures
1460ccc9971eSMauro Carvalho Chehabare provided only for RCU, not for `SRCU <#Sleepable%20RCU>`__ or `Tasks
1461ccc9971eSMauro Carvalho ChehabRCU <#Tasks%20RCU>`__.
1462ccc9971eSMauro Carvalho Chehab
1463ccc9971eSMauro Carvalho ChehabRCU takes the following steps in ``call_rcu()`` to encourage timely
1464ccc9971eSMauro Carvalho Chehabinvocation of callbacks when any given non-\ ``rcu_nocbs`` CPU has
1465ccc9971eSMauro Carvalho Chehab10,000 callbacks, or has 10,000 more callbacks than it had the last time
1466ccc9971eSMauro Carvalho Chehabencouragement was provided:
1467ccc9971eSMauro Carvalho Chehab
1468ccc9971eSMauro Carvalho Chehab#. Starts a grace period, if one is not already in progress.
1469ccc9971eSMauro Carvalho Chehab#. Forces immediate checking for quiescent states, rather than waiting
1470ccc9971eSMauro Carvalho Chehab   for three milliseconds to have elapsed since the beginning of the
1471ccc9971eSMauro Carvalho Chehab   grace period.
1472ccc9971eSMauro Carvalho Chehab#. Immediately tags the CPU's callbacks with their grace period
1473ccc9971eSMauro Carvalho Chehab   completion numbers, rather than waiting for the ``RCU_SOFTIRQ``
1474ccc9971eSMauro Carvalho Chehab   handler to get around to it.
1475ccc9971eSMauro Carvalho Chehab#. Lifts callback-execution batch limits, which speeds up callback
1476ccc9971eSMauro Carvalho Chehab   invocation at the expense of degrading realtime response.
1477ccc9971eSMauro Carvalho Chehab
1478ccc9971eSMauro Carvalho ChehabAgain, these are default values when running at ``HZ=1000``, and can be
1479ccc9971eSMauro Carvalho Chehaboverridden. Again, these forward-progress measures are provided only for
1480ccc9971eSMauro Carvalho ChehabRCU, not for `SRCU <#Sleepable%20RCU>`__ or `Tasks
1481ccc9971eSMauro Carvalho ChehabRCU <#Tasks%20RCU>`__. Even for RCU, callback-invocation forward
1482ccc9971eSMauro Carvalho Chehabprogress for ``rcu_nocbs`` CPUs is much less well-developed, in part
1483ccc9971eSMauro Carvalho Chehabbecause workloads benefiting from ``rcu_nocbs`` CPUs tend to invoke
1484ccc9971eSMauro Carvalho Chehab``call_rcu()`` relatively infrequently. If workloads emerge that need
1485ccc9971eSMauro Carvalho Chehabboth ``rcu_nocbs`` CPUs and high ``call_rcu()`` invocation rates, then
1486ccc9971eSMauro Carvalho Chehabadditional forward-progress work will be required.
1487ccc9971eSMauro Carvalho Chehab
1488ccc9971eSMauro Carvalho ChehabComposability
1489ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~
1490ccc9971eSMauro Carvalho Chehab
1491ccc9971eSMauro Carvalho ChehabComposability has received much attention in recent years, perhaps in
1492ccc9971eSMauro Carvalho Chehabpart due to the collision of multicore hardware with object-oriented
1493ccc9971eSMauro Carvalho Chehabtechniques designed in single-threaded environments for single-threaded
1494ccc9971eSMauro Carvalho Chehabuse. And in theory, RCU read-side critical sections may be composed, and
1495ccc9971eSMauro Carvalho Chehabin fact may be nested arbitrarily deeply. In practice, as with all
1496ccc9971eSMauro Carvalho Chehabreal-world implementations of composable constructs, there are
1497ccc9971eSMauro Carvalho Chehablimitations.
1498ccc9971eSMauro Carvalho Chehab
1499ccc9971eSMauro Carvalho ChehabImplementations of RCU for which ``rcu_read_lock()`` and
1500ccc9971eSMauro Carvalho Chehab``rcu_read_unlock()`` generate no code, such as Linux-kernel RCU when
1501ccc9971eSMauro Carvalho Chehab``CONFIG_PREEMPT=n``, can be nested arbitrarily deeply. After all, there
1502ccc9971eSMauro Carvalho Chehabis no overhead. Except that if all these instances of
1503ccc9971eSMauro Carvalho Chehab``rcu_read_lock()`` and ``rcu_read_unlock()`` are visible to the
1504ccc9971eSMauro Carvalho Chehabcompiler, compilation will eventually fail due to exhausting memory,
1505ccc9971eSMauro Carvalho Chehabmass storage, or user patience, whichever comes first. If the nesting is
1506ccc9971eSMauro Carvalho Chehabnot visible to the compiler, as is the case with mutually recursive
1507ccc9971eSMauro Carvalho Chehabfunctions each in its own translation unit, stack overflow will result.
1508ccc9971eSMauro Carvalho ChehabIf the nesting takes the form of loops, perhaps in the guise of tail
1509ccc9971eSMauro Carvalho Chehabrecursion, either the control variable will overflow or (in the Linux
1510ccc9971eSMauro Carvalho Chehabkernel) you will get an RCU CPU stall warning. Nevertheless, this class
1511ccc9971eSMauro Carvalho Chehabof RCU implementations is one of the most composable constructs in
1512ccc9971eSMauro Carvalho Chehabexistence.
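
For example, the nesting in the following minimal sketch (hypothetical
functions throughout) is perfectly legal:

   ::

       1 static void reader_inner(void)
       2 {
       3   rcu_read_lock();   /* Nests inside reader_outer()'s section. */
       4   do_something();
       5   rcu_read_unlock();
       6 }
       7
       8 static void reader_outer(void)
       9 {
      10   rcu_read_lock();
      11   reader_inner();    /* Nested RCU read-side critical section. */
      12   rcu_read_unlock();
      13 }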
1513ccc9971eSMauro Carvalho Chehab
1514ccc9971eSMauro Carvalho ChehabRCU implementations that explicitly track nesting depth are limited by
1515ccc9971eSMauro Carvalho Chehabthe nesting-depth counter. For example, the Linux kernel's preemptible
1516ccc9971eSMauro Carvalho ChehabRCU limits nesting to ``INT_MAX``. This should suffice for almost all
1517ccc9971eSMauro Carvalho Chehabpractical purposes. That said, a consecutive pair of RCU read-side
1518ccc9971eSMauro Carvalho Chehabcritical sections between which there is an operation that waits for a
1519ccc9971eSMauro Carvalho Chehabgrace period cannot be enclosed in another RCU read-side critical
1520ccc9971eSMauro Carvalho Chehabsection. This is because it is not legal to wait for a grace period
1521ccc9971eSMauro Carvalho Chehabwithin an RCU read-side critical section: To do so would result either
1522ccc9971eSMauro Carvalho Chehabin deadlock or in RCU implicitly splitting the enclosing RCU read-side
1523ccc9971eSMauro Carvalho Chehabcritical section, neither of which is conducive to a long-lived and
1524ccc9971eSMauro Carvalho Chehabprosperous kernel.
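
This illegal pattern is illustrated by the following minimal sketch
(hypothetical functions):

   ::

       1 void buggy(void)
       2 {
       3   rcu_read_lock();
       4   do_something();
       5   synchronize_rcu();   /* BUG: grace-period wait within a reader!!! */
       6   do_something_else();
       7   rcu_read_unlock();
       8 }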
1525ccc9971eSMauro Carvalho Chehab
1526ccc9971eSMauro Carvalho ChehabIt is worth noting that RCU is not alone in limiting composability. For
1527ccc9971eSMauro Carvalho Chehabexample, many transactional-memory implementations prohibit composing a
1528ccc9971eSMauro Carvalho Chehabpair of transactions separated by an irrevocable operation (for example,
1529ccc9971eSMauro Carvalho Chehaba network receive operation). For another example, lock-based critical
1530ccc9971eSMauro Carvalho Chehabsections can be composed surprisingly freely, but only if deadlock is
1531ccc9971eSMauro Carvalho Chehabavoided.
1532ccc9971eSMauro Carvalho Chehab
1533ccc9971eSMauro Carvalho ChehabIn short, although RCU read-side critical sections are highly
1534ccc9971eSMauro Carvalho Chehabcomposable, care is required in some situations, just as is the case for
1535ccc9971eSMauro Carvalho Chehabany other composable synchronization mechanism.
1536ccc9971eSMauro Carvalho Chehab
1537ccc9971eSMauro Carvalho ChehabCorner Cases
1538ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~
1539ccc9971eSMauro Carvalho Chehab
1540ccc9971eSMauro Carvalho ChehabA given RCU workload might have an endless and intense stream of RCU
1541ccc9971eSMauro Carvalho Chehabread-side critical sections, perhaps even so intense that there was
1542ccc9971eSMauro Carvalho Chehabnever a point in time during which there was not at least one RCU
1543ccc9971eSMauro Carvalho Chehabread-side critical section in flight. RCU cannot allow this situation to
1544ccc9971eSMauro Carvalho Chehabblock grace periods: As long as all the RCU read-side critical sections
1545ccc9971eSMauro Carvalho Chehabare finite, grace periods must also be finite.
1546ccc9971eSMauro Carvalho Chehab
1547ccc9971eSMauro Carvalho ChehabThat said, preemptible RCU implementations could potentially result in
1548ccc9971eSMauro Carvalho ChehabRCU read-side critical sections being preempted for long durations,
1549ccc9971eSMauro Carvalho Chehabwhich has the effect of creating a long-duration RCU read-side critical
1550ccc9971eSMauro Carvalho Chehabsection. This situation can arise only in heavily loaded systems, but
1551ccc9971eSMauro Carvalho Chehabsystems using real-time priorities are of course more vulnerable.
1552ccc9971eSMauro Carvalho ChehabTherefore, RCU priority boosting is provided to help deal with this
1553ccc9971eSMauro Carvalho Chehabcase. That said, the exact requirements on RCU priority boosting will
1554ccc9971eSMauro Carvalho Chehablikely evolve as more experience accumulates.
1555ccc9971eSMauro Carvalho Chehab
1556ccc9971eSMauro Carvalho ChehabOther workloads might have very high update rates. Although one can
1557ccc9971eSMauro Carvalho Chehabargue that such workloads should instead use something other than RCU,
1558ccc9971eSMauro Carvalho Chehabthe fact remains that RCU must handle such workloads gracefully. This
1559ccc9971eSMauro Carvalho Chehabrequirement is another factor driving batching of grace periods, but it
1560ccc9971eSMauro Carvalho Chehabis also the driving force behind the checks for large numbers of queued
1561ccc9971eSMauro Carvalho ChehabRCU callbacks in the ``call_rcu()`` code path. Finally, high update
1562ccc9971eSMauro Carvalho Chehabrates should not delay RCU read-side critical sections, although some
1563ccc9971eSMauro Carvalho Chehabsmall read-side delays can occur when using
1564ccc9971eSMauro Carvalho Chehab``synchronize_rcu_expedited()``, courtesy of this function's use of
1565ccc9971eSMauro Carvalho Chehab``smp_call_function_single()``.
1566ccc9971eSMauro Carvalho Chehab
1567ccc9971eSMauro Carvalho ChehabAlthough all three of these corner cases were understood in the early
1568ccc9971eSMauro Carvalho Chehab1990s, a simple user-level test consisting of ``close(open(path))`` in a
1569ccc9971eSMauro Carvalho Chehabtight loop in the early 2000s suddenly provided a much deeper
1570ccc9971eSMauro Carvalho Chehabappreciation of the high-update-rate corner case. This test also
1571ccc9971eSMauro Carvalho Chehabmotivated addition of some RCU code to react to high update rates, for
motivated addition of some RCU code to react to high update rates: for
example, if a given CPU finds itself with more than 10,000 RCU callbacks
queued, RCU will take evasive action by more aggressively
1575ccc9971eSMauro Carvalho Chehabgrace-period processing. This evasive action causes the grace period to
1576ccc9971eSMauro Carvalho Chehabcomplete more quickly, but at the cost of restricting RCU's batching
1577ccc9971eSMauro Carvalho Chehaboptimizations, thus increasing the CPU overhead incurred by that grace
1578ccc9971eSMauro Carvalho Chehabperiod.
1579ccc9971eSMauro Carvalho Chehab
1580ccc9971eSMauro Carvalho ChehabSoftware-Engineering Requirements
1581ccc9971eSMauro Carvalho Chehab---------------------------------
1582ccc9971eSMauro Carvalho Chehab
1583ccc9971eSMauro Carvalho ChehabBetween Murphy's Law and “To err is human”, it is necessary to guard
1584ccc9971eSMauro Carvalho Chehabagainst mishaps and misuse:
1585ccc9971eSMauro Carvalho Chehab
1586ccc9971eSMauro Carvalho Chehab#. It is all too easy to forget to use ``rcu_read_lock()`` everywhere
1587ccc9971eSMauro Carvalho Chehab   that it is needed, so kernels built with ``CONFIG_PROVE_RCU=y`` will
1588ccc9971eSMauro Carvalho Chehab   splat if ``rcu_dereference()`` is used outside of an RCU read-side
1589ccc9971eSMauro Carvalho Chehab   critical section. Update-side code can use
1590ccc9971eSMauro Carvalho Chehab   ``rcu_dereference_protected()``, which takes a `lockdep
1591ccc9971eSMauro Carvalho Chehab   expression <https://lwn.net/Articles/371986/>`__ to indicate what is
1592ccc9971eSMauro Carvalho Chehab   providing the protection. If the indicated protection is not
1593ccc9971eSMauro Carvalho Chehab   provided, a lockdep splat is emitted.
1594ccc9971eSMauro Carvalho Chehab   Code shared between readers and updaters can use
1595ccc9971eSMauro Carvalho Chehab   ``rcu_dereference_check()``, which also takes a lockdep expression,
1596ccc9971eSMauro Carvalho Chehab   and emits a lockdep splat if neither ``rcu_read_lock()`` nor the
1597ccc9971eSMauro Carvalho Chehab   indicated protection is in place. In addition,
1598ccc9971eSMauro Carvalho Chehab   ``rcu_dereference_raw()`` is used in those (hopefully rare) cases
1599ccc9971eSMauro Carvalho Chehab   where the required protection cannot be easily described. Finally,
1600ccc9971eSMauro Carvalho Chehab   ``rcu_read_lock_held()`` is provided to allow a function to verify
1601ccc9971eSMauro Carvalho Chehab   that it has been invoked within an RCU read-side critical section. I
1602ccc9971eSMauro Carvalho Chehab   was made aware of this set of requirements shortly after Thomas
   Gleixner audited a number of RCU uses. (A consolidated sketch of
   several of these APIs follows this list.)
1604ccc9971eSMauro Carvalho Chehab#. A given function might wish to check for RCU-related preconditions
1605ccc9971eSMauro Carvalho Chehab   upon entry, before using any other RCU API. The
   ``rcu_lockdep_assert()`` macro does this job, asserting the expression in
1607ccc9971eSMauro Carvalho Chehab   kernels having lockdep enabled and doing nothing otherwise.
1608ccc9971eSMauro Carvalho Chehab#. It is also easy to forget to use ``rcu_assign_pointer()`` and
1609ccc9971eSMauro Carvalho Chehab   ``rcu_dereference()``, perhaps (incorrectly) substituting a simple
1610ccc9971eSMauro Carvalho Chehab   assignment. To catch this sort of error, a given RCU-protected
1611ccc9971eSMauro Carvalho Chehab   pointer may be tagged with ``__rcu``, after which sparse will
1612ccc9971eSMauro Carvalho Chehab   complain about simple-assignment accesses to that pointer. Arnd
1613ccc9971eSMauro Carvalho Chehab   Bergmann made me aware of this requirement, and also supplied the
1614ccc9971eSMauro Carvalho Chehab   needed `patch series <https://lwn.net/Articles/376011/>`__.
1615ccc9971eSMauro Carvalho Chehab#. Kernels built with ``CONFIG_DEBUG_OBJECTS_RCU_HEAD=y`` will splat if
1616ccc9971eSMauro Carvalho Chehab   a data element is passed to ``call_rcu()`` twice in a row, without a
1617ccc9971eSMauro Carvalho Chehab   grace period in between. (This error is similar to a double free.)
1618ccc9971eSMauro Carvalho Chehab   The corresponding ``rcu_head`` structures that are dynamically
1619ccc9971eSMauro Carvalho Chehab   allocated are automatically tracked, but ``rcu_head`` structures
1620ccc9971eSMauro Carvalho Chehab   allocated on the stack must be initialized with
1621ccc9971eSMauro Carvalho Chehab   ``init_rcu_head_on_stack()`` and cleaned up with
1622ccc9971eSMauro Carvalho Chehab   ``destroy_rcu_head_on_stack()``. Similarly, statically allocated
1623ccc9971eSMauro Carvalho Chehab   non-stack ``rcu_head`` structures must be initialized with
1624ccc9971eSMauro Carvalho Chehab   ``init_rcu_head()`` and cleaned up with ``destroy_rcu_head()``.
1625ccc9971eSMauro Carvalho Chehab   Mathieu Desnoyers made me aware of this requirement, and also
1626ccc9971eSMauro Carvalho Chehab   supplied the needed
1627ccc9971eSMauro Carvalho Chehab   `patch <https://lkml.kernel.org/g/20100319013024.GA28456@Krystal>`__.
1628ccc9971eSMauro Carvalho Chehab#. An infinite loop in an RCU read-side critical section will eventually
1629ccc9971eSMauro Carvalho Chehab   trigger an RCU CPU stall warning splat, with the duration of
1630ccc9971eSMauro Carvalho Chehab   “eventually” being controlled by the ``RCU_CPU_STALL_TIMEOUT``
1631ccc9971eSMauro Carvalho Chehab   ``Kconfig`` option, or, alternatively, by the
1632ccc9971eSMauro Carvalho Chehab   ``rcupdate.rcu_cpu_stall_timeout`` boot/sysfs parameter. However, RCU
1633ccc9971eSMauro Carvalho Chehab   is not obligated to produce this splat unless there is a grace period
1634ccc9971eSMauro Carvalho Chehab   waiting on that particular RCU read-side critical section.
1635ccc9971eSMauro Carvalho Chehab
1636ccc9971eSMauro Carvalho Chehab   Some extreme workloads might intentionally delay RCU grace periods,
1637ccc9971eSMauro Carvalho Chehab   and systems running those workloads can be booted with
1638ccc9971eSMauro Carvalho Chehab   ``rcupdate.rcu_cpu_stall_suppress`` to suppress the splats. This
1639ccc9971eSMauro Carvalho Chehab   kernel parameter may also be set via ``sysfs``. Furthermore, RCU CPU
1640ccc9971eSMauro Carvalho Chehab   stall warnings are counter-productive during sysrq dumps and during
1641ccc9971eSMauro Carvalho Chehab   panics. RCU therefore supplies the ``rcu_sysrq_start()`` and
1642ccc9971eSMauro Carvalho Chehab   ``rcu_sysrq_end()`` API members to be called before and after long
1643ccc9971eSMauro Carvalho Chehab   sysrq dumps. RCU also supplies the ``rcu_panic()`` notifier that is
1644ccc9971eSMauro Carvalho Chehab   automatically invoked at the beginning of a panic to suppress further
1645ccc9971eSMauro Carvalho Chehab   RCU CPU stall warnings.
1646ccc9971eSMauro Carvalho Chehab
1647ccc9971eSMauro Carvalho Chehab   This requirement made itself known in the early 1990s, pretty much
1648ccc9971eSMauro Carvalho Chehab   the first time that it was necessary to debug a CPU stall. That said,
1649ccc9971eSMauro Carvalho Chehab   the initial implementation in DYNIX/ptx was quite generic in
1650ccc9971eSMauro Carvalho Chehab   comparison with that of Linux.
1651ccc9971eSMauro Carvalho Chehab
1652ccc9971eSMauro Carvalho Chehab#. Although it would be very good to detect pointers leaking out of RCU
1653ccc9971eSMauro Carvalho Chehab   read-side critical sections, there is currently no good way of doing
1654ccc9971eSMauro Carvalho Chehab   this. One complication is the need to distinguish between pointers
1655ccc9971eSMauro Carvalho Chehab   leaking and pointers that have been handed off from RCU to some other
1656ccc9971eSMauro Carvalho Chehab   synchronization mechanism, for example, reference counting.
1657ccc9971eSMauro Carvalho Chehab#. In kernels built with ``CONFIG_RCU_TRACE=y``, RCU-related information
1658ccc9971eSMauro Carvalho Chehab   is provided via event tracing.
1659ccc9971eSMauro Carvalho Chehab#. Open-coded use of ``rcu_assign_pointer()`` and ``rcu_dereference()``
1660ccc9971eSMauro Carvalho Chehab   to create typical linked data structures can be surprisingly
1661ccc9971eSMauro Carvalho Chehab   error-prone. Therefore, RCU-protected `linked
1662ccc9971eSMauro Carvalho Chehab   lists <https://lwn.net/Articles/609973/#RCU%20List%20APIs>`__ and,
1663ccc9971eSMauro Carvalho Chehab   more recently, RCU-protected `hash
1664ccc9971eSMauro Carvalho Chehab   tables <https://lwn.net/Articles/612100/>`__ are available. Many
1665ccc9971eSMauro Carvalho Chehab   other special-purpose RCU-protected data structures are available in
1666ccc9971eSMauro Carvalho Chehab   the Linux kernel and the userspace RCU library.
1667ccc9971eSMauro Carvalho Chehab#. Some linked structures are created at compile time, but still require
1668ccc9971eSMauro Carvalho Chehab   ``__rcu`` checking. The ``RCU_POINTER_INITIALIZER()`` macro serves
1669ccc9971eSMauro Carvalho Chehab   this purpose.
1670ccc9971eSMauro Carvalho Chehab#. It is not necessary to use ``rcu_assign_pointer()`` when creating
1671ccc9971eSMauro Carvalho Chehab   linked structures that are to be published via a single external
1672ccc9971eSMauro Carvalho Chehab   pointer. The ``RCU_INIT_POINTER()`` macro is provided for this task
1673ccc9971eSMauro Carvalho Chehab   and also for assigning ``NULL`` pointers at runtime.
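
As promised, here is a minimal sketch pulling several of the above APIs
together. The ``struct foo``, ``gp``, and ``my_lock`` names are
hypothetical, and this is illustrative rather than production code:

   ::

       1 struct foo {
       2   int a;
       3   struct rcu_head rh;
       4 };
       5
       6 static DEFINE_SPINLOCK(my_lock);
       7 static struct foo __rcu *gp;  /* sparse checks accesses to gp. */
       8
       9 int reader_or_updater(void)
      10 {
      11   struct foo *p;
      12
      13   /* Legal under either rcu_read_lock() or my_lock. */
      14   p = rcu_dereference_check(gp, lockdep_is_held(&my_lock));
      15   return p ? p->a : -1;
      16 }
      17
      18 void updater(struct foo *newp)
      19 {
      20   struct foo *oldp;
      21
      22   spin_lock(&my_lock);
      23   oldp = rcu_dereference_protected(gp, lockdep_is_held(&my_lock));
      24   rcu_assign_pointer(gp, newp);
      25   spin_unlock(&my_lock);
      26   if (oldp)
      27     kfree_rcu(oldp, rh);  /* Free the old version after a grace period. */
      28 }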
1674ccc9971eSMauro Carvalho Chehab
This is not a hard-and-fast list: RCU's diagnostic capabilities will
1676ccc9971eSMauro Carvalho Chehabcontinue to be guided by the number and type of usage bugs found in
1677ccc9971eSMauro Carvalho Chehabreal-world RCU usage.
1678ccc9971eSMauro Carvalho Chehab
1679ccc9971eSMauro Carvalho ChehabLinux Kernel Complications
1680ccc9971eSMauro Carvalho Chehab--------------------------
1681ccc9971eSMauro Carvalho Chehab
1682ccc9971eSMauro Carvalho ChehabThe Linux kernel provides an interesting environment for all kinds of
1683ccc9971eSMauro Carvalho Chehabsoftware, including RCU. Some of the relevant points of interest are as
1684ccc9971eSMauro Carvalho Chehabfollows:
1685ccc9971eSMauro Carvalho Chehab
168607335c16SJoel Fernandes (Google)#. `Configuration`_
168707335c16SJoel Fernandes (Google)#. `Firmware Interface`_
168807335c16SJoel Fernandes (Google)#. `Early Boot`_
168907335c16SJoel Fernandes (Google)#. `Interrupts and NMIs`_
169007335c16SJoel Fernandes (Google)#. `Loadable Modules`_
169107335c16SJoel Fernandes (Google)#. `Hotplug CPU`_
169207335c16SJoel Fernandes (Google)#. `Scheduler and RCU`_
169307335c16SJoel Fernandes (Google)#. `Tracing and RCU`_
169471cb46aeSJoel Fernandes (Google)#. `Accesses to User Memory and RCU`_
169507335c16SJoel Fernandes (Google)#. `Energy Efficiency`_
169607335c16SJoel Fernandes (Google)#. `Scheduling-Clock Interrupts and RCU`_
169707335c16SJoel Fernandes (Google)#. `Memory Efficiency`_
169807335c16SJoel Fernandes (Google)#. `Performance, Scalability, Response Time, and Reliability`_
1699ccc9971eSMauro Carvalho Chehab
1700ccc9971eSMauro Carvalho ChehabThis list is probably incomplete, but it does give a feel for the most
1701ccc9971eSMauro Carvalho Chehabnotable Linux-kernel complications. Each of the following sections
1702ccc9971eSMauro Carvalho Chehabcovers one of the above topics.
1703ccc9971eSMauro Carvalho Chehab
1704ccc9971eSMauro Carvalho ChehabConfiguration
1705ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~
1706ccc9971eSMauro Carvalho Chehab
1707ccc9971eSMauro Carvalho ChehabRCU's goal is automatic configuration, so that almost nobody needs to
1708ccc9971eSMauro Carvalho Chehabworry about RCU's ``Kconfig`` options. And for almost all users, RCU
1709ccc9971eSMauro Carvalho Chehabdoes in fact work well “out of the box.”
1710ccc9971eSMauro Carvalho Chehab
1711ccc9971eSMauro Carvalho ChehabHowever, there are specialized use cases that are handled by kernel boot
1712ccc9971eSMauro Carvalho Chehabparameters and ``Kconfig`` options. Unfortunately, the ``Kconfig``
1713ccc9971eSMauro Carvalho Chehabsystem will explicitly ask users about new ``Kconfig`` options, which
requires that almost all of them be hidden behind a ``CONFIG_RCU_EXPERT``
1715ccc9971eSMauro Carvalho Chehab``Kconfig`` option.
1716ccc9971eSMauro Carvalho Chehab
1717ccc9971eSMauro Carvalho ChehabThis all should be quite obvious, but the fact remains that Linus
1718ccc9971eSMauro Carvalho ChehabTorvalds recently had to
1719ccc9971eSMauro Carvalho Chehab`remind <https://lkml.kernel.org/g/CA+55aFy4wcCwaL4okTs8wXhGZ5h-ibecy_Meg9C4MNQrUnwMcg@mail.gmail.com>`__
1720ccc9971eSMauro Carvalho Chehabme of this requirement.
1721ccc9971eSMauro Carvalho Chehab
1722ccc9971eSMauro Carvalho ChehabFirmware Interface
1723ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~
1724ccc9971eSMauro Carvalho Chehab
In many cases, the kernel obtains information about the system from the
1726ccc9971eSMauro Carvalho Chehabfirmware, and sometimes things are lost in translation. Or the
1727ccc9971eSMauro Carvalho Chehabtranslation is accurate, but the original message is bogus.
1728ccc9971eSMauro Carvalho Chehab
1729ccc9971eSMauro Carvalho ChehabFor example, some systems' firmware overreports the number of CPUs,
1730ccc9971eSMauro Carvalho Chehabsometimes by a large factor. If RCU naively believed the firmware, as it
1731ccc9971eSMauro Carvalho Chehabused to do, it would create too many per-CPU kthreads. Although the
1732ccc9971eSMauro Carvalho Chehabresulting system will still run correctly, the extra kthreads needlessly
1733ccc9971eSMauro Carvalho Chehabconsume memory and can cause confusion when they show up in ``ps``
1734ccc9971eSMauro Carvalho Chehablistings.
1735ccc9971eSMauro Carvalho Chehab
1736ccc9971eSMauro Carvalho ChehabRCU must therefore wait for a given CPU to actually come online before
1737ccc9971eSMauro Carvalho Chehabit can allow itself to believe that the CPU actually exists. The
1738ccc9971eSMauro Carvalho Chehabresulting “ghost CPUs” (which are never going to come online) cause a
1739ccc9971eSMauro Carvalho Chehabnumber of `interesting
1740ccc9971eSMauro Carvalho Chehabcomplications <https://paulmck.livejournal.com/37494.html>`__.
1741ccc9971eSMauro Carvalho Chehab
1742ccc9971eSMauro Carvalho ChehabEarly Boot
1743ccc9971eSMauro Carvalho Chehab~~~~~~~~~~
1744ccc9971eSMauro Carvalho Chehab
1745ccc9971eSMauro Carvalho ChehabThe Linux kernel's boot sequence is an interesting process, and RCU is
1746ccc9971eSMauro Carvalho Chehabused early, even before ``rcu_init()`` is invoked. In fact, a number of
1747ccc9971eSMauro Carvalho ChehabRCU's primitives can be used as soon as the initial task's
1748ccc9971eSMauro Carvalho Chehab``task_struct`` is available and the boot CPU's per-CPU variables are
1749ccc9971eSMauro Carvalho Chehabset up. The read-side primitives (``rcu_read_lock()``,
1750ccc9971eSMauro Carvalho Chehab``rcu_read_unlock()``, ``rcu_dereference()``, and
1751ccc9971eSMauro Carvalho Chehab``rcu_access_pointer()``) will operate normally very early on, as will
1752ccc9971eSMauro Carvalho Chehab``rcu_assign_pointer()``.
1753ccc9971eSMauro Carvalho Chehab
1754ccc9971eSMauro Carvalho ChehabAlthough ``call_rcu()`` may be invoked at any time during boot,
1755ccc9971eSMauro Carvalho Chehabcallbacks are not guaranteed to be invoked until after all of RCU's
1756ccc9971eSMauro Carvalho Chehabkthreads have been spawned, which occurs at ``early_initcall()`` time.
1757ccc9971eSMauro Carvalho ChehabThis delay in callback invocation is due to the fact that RCU does not
1758ccc9971eSMauro Carvalho Chehabinvoke callbacks until it is fully initialized, and this full
1759ccc9971eSMauro Carvalho Chehabinitialization cannot occur until after the scheduler has initialized
1760ccc9971eSMauro Carvalho Chehabitself to the point where RCU can spawn and run its kthreads. In theory,
it would be possible to invoke callbacks earlier; however, this is not a
1762ccc9971eSMauro Carvalho Chehabpanacea because there would be severe restrictions on what operations
1763ccc9971eSMauro Carvalho Chehabthose callbacks could invoke.
1764ccc9971eSMauro Carvalho Chehab
1765ccc9971eSMauro Carvalho ChehabPerhaps surprisingly, ``synchronize_rcu()`` and
``synchronize_rcu_expedited()`` will operate normally during very early
1767ccc9971eSMauro Carvalho Chehabboot, the reason being that there is only one CPU and preemption is
disabled. This means that a call to ``synchronize_rcu()`` (or friends)
1769ccc9971eSMauro Carvalho Chehabitself is a quiescent state and thus a grace period, so the early-boot
1770ccc9971eSMauro Carvalho Chehabimplementation can be a no-op.
1771ccc9971eSMauro Carvalho Chehab
1772ccc9971eSMauro Carvalho ChehabHowever, once the scheduler has spawned its first kthread, this early
1773ccc9971eSMauro Carvalho Chehabboot trick fails for ``synchronize_rcu()`` (as well as for
1774ccc9971eSMauro Carvalho Chehab``synchronize_rcu_expedited()``) in ``CONFIG_PREEMPT=y`` kernels. The
1775ccc9971eSMauro Carvalho Chehabreason is that an RCU read-side critical section might be preempted,
1776ccc9971eSMauro Carvalho Chehabwhich means that a subsequent ``synchronize_rcu()`` really does have to
1777ccc9971eSMauro Carvalho Chehabwait for something, as opposed to simply returning immediately.
1778ccc9971eSMauro Carvalho ChehabUnfortunately, ``synchronize_rcu()`` can't do this until all of its
1779ccc9971eSMauro Carvalho Chehabkthreads are spawned, which doesn't happen until some time during
``early_initcall()`` time. But this is no excuse: RCU is nevertheless
1781ccc9971eSMauro Carvalho Chehabrequired to correctly handle synchronous grace periods during this time
1782ccc9971eSMauro Carvalho Chehabperiod. Once all of its kthreads are up and running, RCU starts running
1783ccc9971eSMauro Carvalho Chehabnormally.
1784ccc9971eSMauro Carvalho Chehab
1785ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1786ccc9971eSMauro Carvalho Chehab| **Quick Quiz**:                                                       |
1787ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1788ccc9971eSMauro Carvalho Chehab| How can RCU possibly handle grace periods before all of its kthreads  |
1789ccc9971eSMauro Carvalho Chehab| have been spawned???                                                  |
1790ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1791ccc9971eSMauro Carvalho Chehab| **Answer**:                                                           |
1792ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1793ccc9971eSMauro Carvalho Chehab| Very carefully!                                                       |
1794ccc9971eSMauro Carvalho Chehab| During the “dead zone” between the time that the scheduler spawns the |
1795ccc9971eSMauro Carvalho Chehab| first task and the time that all of RCU's kthreads have been spawned, |
1796ccc9971eSMauro Carvalho Chehab| all synchronous grace periods are handled by the expedited            |
1797ccc9971eSMauro Carvalho Chehab| grace-period mechanism. At runtime, this expedited mechanism relies   |
1798ccc9971eSMauro Carvalho Chehab| on workqueues, but during the dead zone the requesting task itself    |
1799ccc9971eSMauro Carvalho Chehab| drives the desired expedited grace period. Because dead-zone          |
1800ccc9971eSMauro Carvalho Chehab| execution takes place within task context, everything works. Once the |
1801ccc9971eSMauro Carvalho Chehab| dead zone ends, expedited grace periods go back to using workqueues,  |
1802ccc9971eSMauro Carvalho Chehab| as is required to avoid problems that would otherwise occur when a    |
1803ccc9971eSMauro Carvalho Chehab| user task received a POSIX signal while driving an expedited grace    |
1804ccc9971eSMauro Carvalho Chehab| period.                                                               |
1805ccc9971eSMauro Carvalho Chehab|                                                                       |
1806ccc9971eSMauro Carvalho Chehab| And yes, this does mean that it is unhelpful to send POSIX signals to |
1807ccc9971eSMauro Carvalho Chehab| random tasks between the time that the scheduler spawns its first     |
1808ccc9971eSMauro Carvalho Chehab| kthread and the time that RCU's kthreads have all been spawned. If    |
1809ccc9971eSMauro Carvalho Chehab| there ever turns out to be a good reason for sending POSIX signals    |
1810ccc9971eSMauro Carvalho Chehab| during that time, appropriate adjustments will be made. (If it turns  |
1811ccc9971eSMauro Carvalho Chehab| out that POSIX signals are sent during this time for no good reason,  |
1812ccc9971eSMauro Carvalho Chehab| other adjustments will be made, appropriate or otherwise.)            |
1813ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1814ccc9971eSMauro Carvalho Chehab
1815ccc9971eSMauro Carvalho ChehabI learned of these boot-time requirements as a result of a series of
1816ccc9971eSMauro Carvalho Chehabsystem hangs.
1817ccc9971eSMauro Carvalho Chehab
1818ccc9971eSMauro Carvalho ChehabInterrupts and NMIs
1819ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~
1820ccc9971eSMauro Carvalho Chehab
1821ccc9971eSMauro Carvalho ChehabThe Linux kernel has interrupts, and RCU read-side critical sections are
1822ccc9971eSMauro Carvalho Chehablegal within interrupt handlers and within interrupt-disabled regions of
1823ccc9971eSMauro Carvalho Chehabcode, as are invocations of ``call_rcu()``.
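
For example, something like the following minimal sketch (hypothetical
handler and helper) is legal:

   ::

       1 static irqreturn_t my_irq_handler(int irq, void *unused)
       2 {
       3   struct foo *p;
       4
       5   rcu_read_lock();           /* Legal in interrupt context. */
       6   p = rcu_dereference(gp);
       7   if (p)
       8     do_something_with(p);    /* Hypothetical. */
       9   rcu_read_unlock();
      10   return IRQ_HANDLED;
      11 }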
1824ccc9971eSMauro Carvalho Chehab
1825ccc9971eSMauro Carvalho ChehabSome Linux-kernel architectures can enter an interrupt handler from
1826ccc9971eSMauro Carvalho Chehabnon-idle process context, and then just never leave it, instead
1827ccc9971eSMauro Carvalho Chehabstealthily transitioning back to process context. This trick is
1828ccc9971eSMauro Carvalho Chehabsometimes used to invoke system calls from inside the kernel. These
1829ccc9971eSMauro Carvalho Chehab“half-interrupts” mean that RCU has to be very careful about how it
1830ccc9971eSMauro Carvalho Chehabcounts interrupt nesting levels. I learned of this requirement the hard
1831ccc9971eSMauro Carvalho Chehabway during a rewrite of RCU's dyntick-idle code.
1832ccc9971eSMauro Carvalho Chehab
1833ccc9971eSMauro Carvalho ChehabThe Linux kernel has non-maskable interrupts (NMIs), and RCU read-side
1834ccc9971eSMauro Carvalho Chehabcritical sections are legal within NMI handlers. Thankfully, RCU
1835ccc9971eSMauro Carvalho Chehabupdate-side primitives, including ``call_rcu()``, are prohibited within
1836ccc9971eSMauro Carvalho ChehabNMI handlers.
1837ccc9971eSMauro Carvalho Chehab
1838ccc9971eSMauro Carvalho ChehabThe name notwithstanding, some Linux-kernel architectures can have
1839ccc9971eSMauro Carvalho Chehabnested NMIs, which RCU must handle correctly. Andy Lutomirski `surprised
1840ccc9971eSMauro Carvalho Chehabme <https://lkml.kernel.org/r/CALCETrXLq1y7e_dKFPgou-FKHB6Pu-r8+t-6Ds+8=va7anBWDA@mail.gmail.com>`__
1841ccc9971eSMauro Carvalho Chehabwith this requirement; he also kindly surprised me with `an
1842ccc9971eSMauro Carvalho Chehabalgorithm <https://lkml.kernel.org/r/CALCETrXSY9JpW3uE6H8WYk81sg56qasA2aqmjMPsq5dOtzso=g@mail.gmail.com>`__
1843ccc9971eSMauro Carvalho Chehabthat meets this requirement.
1844ccc9971eSMauro Carvalho Chehab
1845ccc9971eSMauro Carvalho ChehabFurthermore, NMI handlers can be interrupted by what appear to RCU to be
1846ccc9971eSMauro Carvalho Chehabnormal interrupts. One way that this can happen is for code that
1847ccc9971eSMauro Carvalho Chehabdirectly invokes ``rcu_irq_enter()`` and ``rcu_irq_exit()`` to be called
1848ccc9971eSMauro Carvalho Chehabfrom an NMI handler. This astonishing fact of life prompted the current
1849ccc9971eSMauro Carvalho Chehabcode structure, which has ``rcu_irq_enter()`` invoking
1850ccc9971eSMauro Carvalho Chehab``rcu_nmi_enter()`` and ``rcu_irq_exit()`` invoking ``rcu_nmi_exit()``.
1851ccc9971eSMauro Carvalho ChehabAnd yes, I also learned of this requirement the hard way.
1852ccc9971eSMauro Carvalho Chehab
1853ccc9971eSMauro Carvalho ChehabLoadable Modules
1854ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~
1855ccc9971eSMauro Carvalho Chehab
1856ccc9971eSMauro Carvalho ChehabThe Linux kernel has loadable modules, and these modules can also be
1857ccc9971eSMauro Carvalho Chehabunloaded. After a given module has been unloaded, any attempt to call
1858ccc9971eSMauro Carvalho Chehabone of its functions results in a segmentation fault. The module-unload
1859ccc9971eSMauro Carvalho Chehabfunctions must therefore cancel any delayed calls to loadable-module
1860ccc9971eSMauro Carvalho Chehabfunctions, for example, any outstanding ``mod_timer()`` must be dealt
1861ccc9971eSMauro Carvalho Chehabwith via ``del_timer_sync()`` or similar.
1862ccc9971eSMauro Carvalho Chehab
1863ccc9971eSMauro Carvalho ChehabUnfortunately, there is no way to cancel an RCU callback; once you
1864ccc9971eSMauro Carvalho Chehabinvoke ``call_rcu()``, the callback function is eventually going to be
1865ccc9971eSMauro Carvalho Chehabinvoked, unless the system goes down first. Because it is normally
1866ccc9971eSMauro Carvalho Chehabconsidered socially irresponsible to crash the system in response to a
1867ccc9971eSMauro Carvalho Chehabmodule unload request, we need some other way to deal with in-flight RCU
1868ccc9971eSMauro Carvalho Chehabcallbacks.
1869ccc9971eSMauro Carvalho Chehab
1870ccc9971eSMauro Carvalho ChehabRCU therefore provides ``rcu_barrier()``, which waits until all
1871ccc9971eSMauro Carvalho Chehabin-flight RCU callbacks have been invoked. If a module uses
1872ccc9971eSMauro Carvalho Chehab``call_rcu()``, its exit function should therefore prevent any future
1873ccc9971eSMauro Carvalho Chehabinvocation of ``call_rcu()``, then invoke ``rcu_barrier()``. In theory,
1874ccc9971eSMauro Carvalho Chehabthe underlying module-unload code could invoke ``rcu_barrier()``
1875ccc9971eSMauro Carvalho Chehabunconditionally, but in practice this would incur unacceptable
1876ccc9971eSMauro Carvalho Chehablatencies.
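
A module exit handler using this pattern might therefore look like the
following minimal sketch (the unregistration function is hypothetical):

   ::

       1 static void __exit my_module_exit(void)
       2 {
       3   unregister_my_hooks();  /* Prevent future call_rcu() invocations. */
       4   rcu_barrier();          /* Wait for all in-flight RCU callbacks. */
       5   /* Only now is it safe for this module's code to be unloaded. */
       6 }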
1877ccc9971eSMauro Carvalho Chehab
1878ccc9971eSMauro Carvalho ChehabNikita Danilov noted this requirement for an analogous
1879ccc9971eSMauro Carvalho Chehabfilesystem-unmount situation, and Dipankar Sarma incorporated
1880ccc9971eSMauro Carvalho Chehab``rcu_barrier()`` into RCU. The need for ``rcu_barrier()`` for module
1881ccc9971eSMauro Carvalho Chehabunloading became apparent later.
1882ccc9971eSMauro Carvalho Chehab
1883ccc9971eSMauro Carvalho Chehab.. important::
1884ccc9971eSMauro Carvalho Chehab
1885ccc9971eSMauro Carvalho Chehab   The ``rcu_barrier()`` function is not, repeat,
1886ccc9971eSMauro Carvalho Chehab   *not*, obligated to wait for a grace period. It is instead only required
1887ccc9971eSMauro Carvalho Chehab   to wait for RCU callbacks that have already been posted. Therefore, if
1888ccc9971eSMauro Carvalho Chehab   there are no RCU callbacks posted anywhere in the system,
1889ccc9971eSMauro Carvalho Chehab   ``rcu_barrier()`` is within its rights to return immediately. Even if
1890ccc9971eSMauro Carvalho Chehab   there are callbacks posted, ``rcu_barrier()`` does not necessarily need
1891ccc9971eSMauro Carvalho Chehab   to wait for a grace period.
1892ccc9971eSMauro Carvalho Chehab
1893ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1894ccc9971eSMauro Carvalho Chehab| **Quick Quiz**:                                                       |
1895ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
| Wait a minute! Each RCU callback must wait for a grace period to      |
1897ccc9971eSMauro Carvalho Chehab| complete, and ``rcu_barrier()`` must wait for each pre-existing       |
1898ccc9971eSMauro Carvalho Chehab| callback to be invoked. Doesn't ``rcu_barrier()`` therefore need to   |
1899ccc9971eSMauro Carvalho Chehab| wait for a full grace period if there is even one callback posted     |
1900ccc9971eSMauro Carvalho Chehab| anywhere in the system?                                               |
1901ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1902ccc9971eSMauro Carvalho Chehab| **Answer**:                                                           |
1903ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1904ccc9971eSMauro Carvalho Chehab| Absolutely not!!!                                                     |
| Yes, each RCU callback must wait for a grace period to complete, but  |
1906ccc9971eSMauro Carvalho Chehab| it might well be partly (or even completely) finished waiting by the  |
1907ccc9971eSMauro Carvalho Chehab| time ``rcu_barrier()`` is invoked. In that case, ``rcu_barrier()``    |
1908ccc9971eSMauro Carvalho Chehab| need only wait for the remaining portion of the grace period to       |
1909ccc9971eSMauro Carvalho Chehab| elapse. So even if there are quite a few callbacks posted,            |
1910ccc9971eSMauro Carvalho Chehab| ``rcu_barrier()`` might well return quite quickly.                    |
1911ccc9971eSMauro Carvalho Chehab|                                                                       |
1912ccc9971eSMauro Carvalho Chehab| So if you need to wait for a grace period as well as for all          |
1913ccc9971eSMauro Carvalho Chehab| pre-existing callbacks, you will need to invoke both                  |
1914ccc9971eSMauro Carvalho Chehab| ``synchronize_rcu()`` and ``rcu_barrier()``. If latency is a concern, |
1915ccc9971eSMauro Carvalho Chehab| you can always use workqueues to invoke them concurrently.            |
1916ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+
1917ccc9971eSMauro Carvalho Chehab
1918ccc9971eSMauro Carvalho ChehabHotplug CPU
1919ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~
1920ccc9971eSMauro Carvalho Chehab
1921ccc9971eSMauro Carvalho ChehabThe Linux kernel supports CPU hotplug, which means that CPUs can come
1922ccc9971eSMauro Carvalho Chehaband go. It is of course illegal to use any RCU API member from an
1923ccc9971eSMauro Carvalho Chehaboffline CPU, with the exception of `SRCU <#Sleepable%20RCU>`__ read-side
1924ccc9971eSMauro Carvalho Chehabcritical sections. This requirement was present from day one in
1925ccc9971eSMauro Carvalho ChehabDYNIX/ptx, but on the other hand, the Linux kernel's CPU-hotplug
1926ccc9971eSMauro Carvalho Chehabimplementation is “interesting.”
1927ccc9971eSMauro Carvalho Chehab
1928ccc9971eSMauro Carvalho ChehabThe Linux-kernel CPU-hotplug implementation has notifiers that are used
1929ccc9971eSMauro Carvalho Chehabto allow the various kernel subsystems (including RCU) to respond
1930ccc9971eSMauro Carvalho Chehabappropriately to a given CPU-hotplug operation. Most RCU operations may
1931ccc9971eSMauro Carvalho Chehabbe invoked from CPU-hotplug notifiers, including even synchronous
1932ccc9971eSMauro Carvalho Chehabgrace-period operations such as ``synchronize_rcu()`` and
1933ccc9971eSMauro Carvalho Chehab``synchronize_rcu_expedited()``.
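
For example, a CPU-hotplug preparation callback along the lines of the
following minimal sketch (hypothetical callback and helper) is legal:

   ::

       1 static int my_cpu_down_prep(unsigned int cpu)
       2 {
       3   unpublish_cpu_data(cpu);  /* Hypothetical: hide this CPU's data. */
       4   synchronize_rcu();        /* Legal from CPU-hotplug notifiers. */
       5   return 0;
       6 }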
1934ccc9971eSMauro Carvalho Chehab
However, all-callback-wait operations such as ``rcu_barrier()`` are
1936ccc9971eSMauro Carvalho Chehabnot supported, due to the fact that there are phases of CPU-hotplug
1937ccc9971eSMauro Carvalho Chehaboperations where the outgoing CPU's callbacks will not be invoked until
1938ccc9971eSMauro Carvalho Chehabafter the CPU-hotplug operation ends, which could also result in
1939ccc9971eSMauro Carvalho Chehabdeadlock. Furthermore, ``rcu_barrier()`` blocks CPU-hotplug operations
1940ccc9971eSMauro Carvalho Chehabduring its execution, which results in another type of deadlock when
1941ccc9971eSMauro Carvalho Chehabinvoked from a CPU-hotplug notifier.
1942ccc9971eSMauro Carvalho Chehab
1943ccc9971eSMauro Carvalho ChehabScheduler and RCU
1944ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~
1945ccc9971eSMauro Carvalho Chehab
1946e4453d8aSPaul E. McKenneyRCU makes use of kthreads, and it is necessary to avoid excessive CPU-time
1947e4453d8aSPaul E. McKenneyaccumulation by these kthreads. This requirement was no surprise, but
RCU's violation of it when running context-switch-heavy workloads
1949e4453d8aSPaul E. McKenneybuilt with ``CONFIG_NO_HZ_FULL=y`` `did come as a surprise
1950ccc9971eSMauro Carvalho Chehab[PDF] <http://www.rdrop.com/users/paulmck/scalability/paper/BareMetal.2015.01.15b.pdf>`__.
1951ccc9971eSMauro Carvalho ChehabRCU has made good progress towards meeting this requirement, even for
1952ccc9971eSMauro Carvalho Chehabcontext-switch-heavy ``CONFIG_NO_HZ_FULL=y`` workloads, but there is
1953ccc9971eSMauro Carvalho Chehabroom for further improvement.
1954ccc9971eSMauro Carvalho Chehab
1955e4453d8aSPaul E. McKenneyThere is no longer any prohibition against holding any of
1956e4453d8aSPaul E. McKenneyscheduler's runqueue or priority-inheritance spinlocks across an
1957e4453d8aSPaul E. McKenney``rcu_read_unlock()``, even if interrupts and preemption were enabled
1958e4453d8aSPaul E. McKenneysomewhere within the corresponding RCU read-side critical section.
1959e4453d8aSPaul E. McKenneyTherefore, it is now perfectly legal to execute ``rcu_read_lock()``
1960e4453d8aSPaul E. McKenneywith preemption enabled, acquire one of the scheduler locks, and hold
1961e4453d8aSPaul E. McKenneythat lock across the matching ``rcu_read_unlock()``.
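
For example, the following schematic sketch is now legal (with a
hypothetical ``raw_spinlock_t`` standing in for a scheduler lock):

   ::

       1 rcu_read_lock();   /* Preemption might well be enabled here. */
       2 do_something();
       3 raw_spin_lock_irqsave(&some_scheduler_lock, flags);
       4 rcu_read_unlock(); /* Now legal even while holding the lock. */
       5 raw_spin_unlock_irqrestore(&some_scheduler_lock, flags);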
1962ccc9971eSMauro Carvalho Chehab
1963e4453d8aSPaul E. McKenneySimilarly, the RCU flavor consolidation has removed the need for negative
1964e4453d8aSPaul E. McKenneynesting.  The fact that interrupt-disabled regions of code act as RCU
1965e4453d8aSPaul E. McKenneyread-side critical sections implicitly avoids earlier issues that used
to result in destructive recursion via interrupt handlers' use of RCU.
1967ccc9971eSMauro Carvalho Chehab
1968ccc9971eSMauro Carvalho ChehabTracing and RCU
1969ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~
1970ccc9971eSMauro Carvalho Chehab
1971ccc9971eSMauro Carvalho ChehabIt is possible to use tracing on RCU code, but tracing itself uses RCU.
1972d7424e28SJoel Fernandes (Google)For this reason, ``rcu_dereference_raw_check()`` is provided for use
1973ccc9971eSMauro Carvalho Chehabby tracing, which avoids the destructive recursion that could otherwise
1974ccc9971eSMauro Carvalho Chehabensue. This API is also used by virtualization in some architectures,
1975ccc9971eSMauro Carvalho Chehabwhere RCU readers execute in environments in which tracing cannot be
1976ccc9971eSMauro Carvalho Chehabused. The tracing folks both located the requirement and provided the
1977ccc9971eSMauro Carvalho Chehabneeded fix, so this surprise requirement was relatively painless.
1978ccc9971eSMauro Carvalho Chehab
197971cb46aeSJoel Fernandes (Google)Accesses to User Memory and RCU
198071cb46aeSJoel Fernandes (Google)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
198171cb46aeSJoel Fernandes (Google)
198271cb46aeSJoel Fernandes (Google)The kernel needs to access user-space memory, for example, to access data
198371cb46aeSJoel Fernandes (Google)referenced by system-call parameters.  The ``get_user()`` macro does this job.
198471cb46aeSJoel Fernandes (Google)
198571cb46aeSJoel Fernandes (Google)However, user-space memory might well be paged out, which means that
198671cb46aeSJoel Fernandes (Google)``get_user()`` might well page-fault and thus block while waiting for the
198771cb46aeSJoel Fernandes (Google)resulting I/O to complete.  It would be a very bad thing for the compiler to
198871cb46aeSJoel Fernandes (Google)reorder a ``get_user()`` invocation into an RCU read-side critical section.
198971cb46aeSJoel Fernandes (Google)
199071cb46aeSJoel Fernandes (Google)For example, suppose that the source code looked like this:
199171cb46aeSJoel Fernandes (Google)
199271cb46aeSJoel Fernandes (Google)  ::
199371cb46aeSJoel Fernandes (Google)
199471cb46aeSJoel Fernandes (Google)       1 rcu_read_lock();
199571cb46aeSJoel Fernandes (Google)       2 p = rcu_dereference(gp);
199671cb46aeSJoel Fernandes (Google)       3 v = p->value;
199771cb46aeSJoel Fernandes (Google)       4 rcu_read_unlock();
199871cb46aeSJoel Fernandes (Google)       5 get_user(user_v, user_p);
199971cb46aeSJoel Fernandes (Google)       6 do_something_with(v, user_v);
200071cb46aeSJoel Fernandes (Google)
200171cb46aeSJoel Fernandes (Google)The compiler must not be permitted to transform this source code into
200271cb46aeSJoel Fernandes (Google)the following:
200371cb46aeSJoel Fernandes (Google)
200471cb46aeSJoel Fernandes (Google)  ::
200571cb46aeSJoel Fernandes (Google)
200671cb46aeSJoel Fernandes (Google)       1 rcu_read_lock();
200771cb46aeSJoel Fernandes (Google)       2 p = rcu_dereference(gp);
200871cb46aeSJoel Fernandes (Google)       3 get_user(user_v, user_p); // BUG: POSSIBLE PAGE FAULT!!!
200971cb46aeSJoel Fernandes (Google)       4 v = p->value;
201071cb46aeSJoel Fernandes (Google)       5 rcu_read_unlock();
201171cb46aeSJoel Fernandes (Google)       6 do_something_with(v, user_v);
201271cb46aeSJoel Fernandes (Google)
201371cb46aeSJoel Fernandes (Google)If the compiler did make this transformation in a ``CONFIG_PREEMPT=n`` kernel
201471cb46aeSJoel Fernandes (Google)build, and if ``get_user()`` did page fault, the result would be a quiescent
201571cb46aeSJoel Fernandes (Google)state in the middle of an RCU read-side critical section.  This misplaced
201671cb46aeSJoel Fernandes (Google)quiescent state could result in line 4 being a use-after-free access,
201771cb46aeSJoel Fernandes (Google)which could be bad for your kernel's actuarial statistics.  Similar examples
201871cb46aeSJoel Fernandes (Google)can be constructed with the call to ``get_user()`` preceding the
201971cb46aeSJoel Fernandes (Google)``rcu_read_lock()``.
202071cb46aeSJoel Fernandes (Google)
202171cb46aeSJoel Fernandes (Google)Unfortunately, ``get_user()`` doesn't have any particular ordering properties,
202271cb46aeSJoel Fernandes (Google)and in some architectures the underlying ``asm`` isn't even marked
``volatile``.  And even if it were marked ``volatile``, the above access to
202471cb46aeSJoel Fernandes (Google)``p->value`` is not volatile, so the compiler would not have any reason to keep
202571cb46aeSJoel Fernandes (Google)those two accesses in order.
202671cb46aeSJoel Fernandes (Google)
202771cb46aeSJoel Fernandes (Google)Therefore, the Linux-kernel definitions of ``rcu_read_lock()`` and
202871cb46aeSJoel Fernandes (Google)``rcu_read_unlock()`` must act as compiler barriers, at least for outermost
202971cb46aeSJoel Fernandes (Google)instances of ``rcu_read_lock()`` and ``rcu_read_unlock()`` within a nested set
203071cb46aeSJoel Fernandes (Google)of RCU read-side critical sections.
203171cb46aeSJoel Fernandes (Google)
2032ccc9971eSMauro Carvalho ChehabEnergy Efficiency
2033ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~
2034ccc9971eSMauro Carvalho Chehab
2035ccc9971eSMauro Carvalho ChehabInterrupting idle CPUs is considered socially unacceptable, especially
2036ccc9971eSMauro Carvalho Chehabby people with battery-powered embedded systems. RCU therefore conserves
2037ccc9971eSMauro Carvalho Chehabenergy by detecting which CPUs are idle, including tracking CPUs that
2038ccc9971eSMauro Carvalho Chehabhave been interrupted from idle. This is a large part of the
2039ccc9971eSMauro Carvalho Chehabenergy-efficiency requirement, so I learned of this via an irate phone
2040ccc9971eSMauro Carvalho Chehabcall.
2041ccc9971eSMauro Carvalho Chehab
2042ccc9971eSMauro Carvalho ChehabBecause RCU avoids interrupting idle CPUs, it is illegal to execute an
2043ccc9971eSMauro Carvalho ChehabRCU read-side critical section on an idle CPU. (Kernels built with
2044ccc9971eSMauro Carvalho Chehab``CONFIG_PROVE_RCU=y`` will splat if you try it.) The ``RCU_NONIDLE()``
macro and ``_rcuidle`` event tracing are provided to work around this
restriction (see the sketch following the list below). In addition,
``rcu_is_watching()`` may be used to test
2047ccc9971eSMauro Carvalho Chehabwhether or not it is currently legal to run RCU read-side critical
2048ccc9971eSMauro Carvalho Chehabsections on this CPU. I learned of the need for diagnostics on the one
2049ccc9971eSMauro Carvalho Chehabhand and ``RCU_NONIDLE()`` on the other while inspecting idle-loop code.
2050ccc9971eSMauro Carvalho ChehabSteven Rostedt supplied ``_rcuidle`` event tracing, which is used quite
2051ccc9971eSMauro Carvalho Chehabheavily in the idle loop. However, there are some restrictions on the
2052ccc9971eSMauro Carvalho Chehabcode placed within ``RCU_NONIDLE()``:
2053ccc9971eSMauro Carvalho Chehab
2054ccc9971eSMauro Carvalho Chehab#. Blocking is prohibited. In practice, this is not a serious
2055ccc9971eSMauro Carvalho Chehab   restriction given that idle tasks are prohibited from blocking to
2056ccc9971eSMauro Carvalho Chehab   begin with.
2057ccc9971eSMauro Carvalho Chehab#. Although nesting ``RCU_NONIDLE()`` is permitted, they cannot nest
2058ccc9971eSMauro Carvalho Chehab   indefinitely deeply. However, given that they can be nested on the
2059ccc9971eSMauro Carvalho Chehab   order of a million deep, even on 32-bit systems, this should not be a
2060ccc9971eSMauro Carvalho Chehab   serious restriction. This nesting limit would probably be reached
2061ccc9971eSMauro Carvalho Chehab   long after the compiler OOMed or the stack overflowed.
2062ccc9971eSMauro Carvalho Chehab#. Any code path that enters ``RCU_NONIDLE()`` must sequence out of that
2063ccc9971eSMauro Carvalho Chehab   same ``RCU_NONIDLE()``. For example, the following is grossly
2064ccc9971eSMauro Carvalho Chehab   illegal:
2065ccc9971eSMauro Carvalho Chehab
2066ccc9971eSMauro Carvalho Chehab      ::
2067ccc9971eSMauro Carvalho Chehab
2068ccc9971eSMauro Carvalho Chehab	  1     RCU_NONIDLE({
2069ccc9971eSMauro Carvalho Chehab	  2       do_something();
2070ccc9971eSMauro Carvalho Chehab	  3       goto bad_idea;  /* BUG!!! */
2071ccc9971eSMauro Carvalho Chehab	  4       do_something_else();});
2072ccc9971eSMauro Carvalho Chehab	  5   bad_idea:
2073ccc9971eSMauro Carvalho Chehab
2074ccc9971eSMauro Carvalho Chehab
2075ccc9971eSMauro Carvalho Chehab   It is just as illegal to transfer control into the middle of
2076ccc9971eSMauro Carvalho Chehab   ``RCU_NONIDLE()``'s argument. Yes, in theory, you could transfer in
2077ccc9971eSMauro Carvalho Chehab   as long as you also transferred out, but in practice you could also
2078ccc9971eSMauro Carvalho Chehab   expect to get sharply worded review comments.
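
As promised earlier, legal use of ``RCU_NONIDLE()`` looks something like
the following minimal sketch, where the hypothetical function invoked
may contain RCU read-side critical sections:

   ::

       1 /* In idle-loop code: tell RCU to pay attention for the duration. */
       2 RCU_NONIDLE(check_wakeup_condition());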
2079ccc9971eSMauro Carvalho Chehab
2080ccc9971eSMauro Carvalho ChehabIt is similarly socially unacceptable to interrupt an ``nohz_full`` CPU
2081ccc9971eSMauro Carvalho Chehabrunning in userspace. RCU must therefore track ``nohz_full`` userspace
execution. This means that RCU must be able to sample state at two
points in time, and to determine whether or not some other CPU spent any
2084ccc9971eSMauro Carvalho Chehabtime idle and/or executing in userspace.
2085ccc9971eSMauro Carvalho Chehab
2086ccc9971eSMauro Carvalho ChehabThese energy-efficiency requirements have proven quite difficult to
2087ccc9971eSMauro Carvalho Chehabunderstand and to meet, for example, there have been more than five
2088ccc9971eSMauro Carvalho Chehabclean-sheet rewrites of RCU's energy-efficiency code, the last of which
2089ccc9971eSMauro Carvalho Chehabwas finally able to demonstrate `real energy savings running on real
2090ccc9971eSMauro Carvalho Chehabhardware
2091ccc9971eSMauro Carvalho Chehab[PDF] <http://www.rdrop.com/users/paulmck/realtime/paper/AMPenergy.2013.04.19a.pdf>`__.
2092ccc9971eSMauro Carvalho ChehabAs noted earlier, I learned of many of these requirements via angry
2093ccc9971eSMauro Carvalho Chehabphone calls: Flaming me on the Linux-kernel mailing list was apparently
2094ccc9971eSMauro Carvalho Chehabnot sufficient to fully vent their ire at RCU's energy-efficiency bugs!
2095ccc9971eSMauro Carvalho Chehab
Scheduling-Clock Interrupts and RCU
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The kernel transitions between in-kernel non-idle execution, userspace
execution, and the idle loop. Depending on kernel configuration, RCU
handles these states differently:

+-----------------+------------------+------------------+-----------------+
| ``HZ`` Kconfig  | In-Kernel        | Usermode         | Idle            |
+=================+==================+==================+=================+
| ``HZ_PERIODIC`` | Can rely on      | Can rely on      | Can rely on     |
|                 | scheduling-clock | scheduling-clock | RCU's           |
|                 | interrupt.       | interrupt and    | dyntick-idle    |
|                 |                  | its detection    | detection.      |
|                 |                  | of interrupt     |                 |
|                 |                  | from usermode.   |                 |
+-----------------+------------------+------------------+-----------------+
| ``NO_HZ_IDLE``  | Can rely on      | Can rely on      | Can rely on     |
|                 | scheduling-clock | scheduling-clock | RCU's           |
|                 | interrupt.       | interrupt and    | dyntick-idle    |
|                 |                  | its detection    | detection.      |
|                 |                  | of interrupt     |                 |
|                 |                  | from usermode.   |                 |
+-----------------+------------------+------------------+-----------------+
| ``NO_HZ_FULL``  | Can only         | Can rely on      | Can rely on     |
|                 | sometimes rely   | RCU's            | RCU's           |
|                 | on               | dyntick-idle     | dyntick-idle    |
|                 | scheduling-clock | detection.       | detection.      |
|                 | interrupt. In    |                  |                 |
|                 | other cases, it  |                  |                 |
|                 | is necessary to  |                  |                 |
|                 | bound kernel     |                  |                 |
|                 | execution times  |                  |                 |
|                 | and/or use       |                  |                 |
|                 | IPIs.            |                  |                 |
+-----------------+------------------+------------------+-----------------+

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why can't ``NO_HZ_FULL`` in-kernel execution rely on the              |
| scheduling-clock interrupt, just like ``HZ_PERIODIC`` and             |
| ``NO_HZ_IDLE`` do?                                                    |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Because, as a performance optimization, ``NO_HZ_FULL`` does not       |
| necessarily re-enable the scheduling-clock interrupt on entry to each |
| and every system call.                                                |
+-----------------------------------------------------------------------+

However, RCU must be reliably informed as to whether any given CPU is
currently in the idle loop, and, for ``NO_HZ_FULL``, also whether that
CPU is executing in usermode, as discussed
`earlier <#Energy%20Efficiency>`__. RCU also requires that the
scheduling-clock interrupt be enabled when RCU needs it to be:

#. If a CPU is either idle or executing in usermode, and RCU believes it
   is non-idle, the scheduling-clock tick had better be running.
   Otherwise, you will get RCU CPU stall warnings, or at best very long
   (11-second) grace periods, with a pointless IPI waking the CPU from
   time to time.
#. If a CPU is in a portion of the kernel that executes RCU read-side
   critical sections, and RCU believes this CPU to be idle, you will get
   random memory corruption. **DON'T DO THIS!!!**
   This is one reason to test with lockdep, which will complain about
   this sort of thing.
#. If a CPU is in a portion of the kernel that is absolutely positively
   no-joking guaranteed to never execute any RCU read-side critical
   sections, and RCU believes this CPU to be idle, no problem. This
   sort of thing is used by some architectures for light-weight
   exception handlers, which can then avoid the overhead of
   ``rcu_irq_enter()`` and ``rcu_irq_exit()`` at exception entry and
   exit, respectively. Some go further and avoid the entireties of
   ``irq_enter()`` and ``irq_exit()``.
   Just make very sure you are running some of your tests with
   ``CONFIG_PROVE_RCU=y``, just in case one of your code paths was in
   fact joking about not doing RCU read-side critical sections.
#. If a CPU is executing in the kernel with the scheduling-clock
   interrupt disabled and RCU believes this CPU to be non-idle, and if
   the CPU goes idle (from an RCU perspective) every few jiffies, no
   problem. It is usually OK for there to be the occasional gap between
   idle periods of up to a second or so.
   If the gap grows too long, you get RCU CPU stall warnings.
#. If a CPU is either idle or executing in usermode, and RCU believes it
   to be idle, of course no problem.
#. If a CPU is executing in the kernel, the kernel code path is passing
   through quiescent states at a reasonable frequency (preferably about
   once per few jiffies, but the occasional excursion to a second or so
   is usually OK) and the scheduling-clock interrupt is enabled, of
   course no problem.
   If the gap between a successive pair of quiescent states grows too
   long, you get RCU CPU stall warnings.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But what if my driver has a hardware interrupt handler that can run   |
| for many seconds? I cannot invoke ``schedule()`` from a hardware      |
| interrupt handler, after all!                                         |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| One approach is to do ``rcu_irq_exit(); rcu_irq_enter();`` every so   |
| often. But given that long-running interrupt handlers can cause other |
| problems, not least for response time, shouldn't you work to keep     |
| your interrupt handler's runtime within reasonable bounds?            |
+-----------------------------------------------------------------------+

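Here is a sketch of that first approach; the device, its handler, and
the amount of work per pass are all hypothetical, and the handler's
length remains inadvisable:

   ::

       static irqreturn_t my_long_running_isr(int irq, void *dev_id)
       {
               int i;

               for (i = 0; i < MY_HUGE_UNIT_COUNT; i++) {
                       do_one_unit_of_device_work(i);  /* hypothetical */
                       /* Momentarily tell RCU that this CPU is "idle" so
                        * that grace periods need not wait on this handler. */
                       rcu_irq_exit();
                       rcu_irq_enter();
               }
               return IRQ_HANDLED;
       }
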
But as long as RCU is properly informed of kernel state transitions
between in-kernel execution, usermode execution, and idle, and as long
as the scheduling-clock interrupt is enabled when RCU needs it to be,
you can rest assured that the bugs you encounter will be in some other
part of RCU or some other part of the kernel!

Memory Efficiency
~~~~~~~~~~~~~~~~~

Although small-memory non-realtime systems can simply use Tiny RCU, code
size is only one aspect of memory efficiency. Another aspect is the size
of the ``rcu_head`` structure used by ``call_rcu()`` and
``kfree_rcu()``. Although this structure contains nothing more than a
pair of pointers, it does appear in many RCU-protected data structures,
including some that are size critical. The ``page`` structure is a case
in point, as evidenced by the many occurrences of the ``union`` keyword
within that structure.

This need for memory efficiency is one reason that RCU uses hand-crafted
singly linked lists to track the ``rcu_head`` structures that are
waiting for a grace period to elapse. It is also the reason why
``rcu_head`` structures do not contain debug information, such as fields
tracking the file and line of the ``call_rcu()`` or ``kfree_rcu()`` that
posted them. Although this information might appear in debug-only kernel
builds at some point, in the meantime, the ``->func`` field will often
provide the needed debug information.

However, in some cases, the need for memory efficiency leads to even
more extreme measures. Returning to the ``page`` structure, the
``rcu_head`` field shares storage with a great many other structures
that are used at various points in the corresponding page's lifetime. In
order to correctly resolve certain `race
conditions <https://lkml.kernel.org/g/1439976106-137226-1-git-send-email-kirill.shutemov@linux.intel.com>`__,
the Linux kernel's memory-management subsystem needs a particular bit to
remain zero during all phases of grace-period processing, and that bit
happens to map to the bottom bit of the ``rcu_head`` structure's
``->next`` field. RCU makes this guarantee as long as ``call_rcu()`` is
used to post the callback, as opposed to ``kfree_rcu()`` or some future
“lazy” variant of ``call_rcu()`` that might one day be created for
energy-efficiency purposes.

That said, there are limits. RCU requires that the ``rcu_head``
structure be aligned to a two-byte boundary, and passing a misaligned
``rcu_head`` structure to one of the ``call_rcu()`` family of functions
will result in a splat. It is therefore necessary to exercise caution
when packing structures containing fields of type ``rcu_head``. Why not
a four-byte or even eight-byte alignment requirement? Because the m68k
architecture provides only two-byte alignment, and thus acts as
alignment's least common denominator.

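For illustration, here is a sketch of the usual pattern of embedding an
``rcu_head`` in an RCU-protected structure; ``struct foo`` and its
callback are hypothetical:

   ::

       struct foo {
               struct list_head list;
               int data;
               struct rcu_head rh;  /* A pair of pointers; must be at
                                     * least two-byte aligned. */
       };

       static void foo_reclaim_cb(struct rcu_head *rhp)
       {
               /* Recover the enclosing structure from its rcu_head. */
               kfree(container_of(rhp, struct foo, rh));
       }

       /* Updater, after unlinking fp so that no new reader can find it: */
       call_rcu(&fp->rh, foo_reclaim_cb);
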
The reason for reserving the bottom bit of pointers to ``rcu_head``
structures is to leave the door open to “lazy” callbacks whose
invocations can safely be deferred. Deferring invocation could
potentially have energy-efficiency benefits, but only if the rate of
non-lazy callbacks decreases significantly for some important workload.
In the meantime, reserving the bottom bit keeps this option open in case
it one day becomes useful.

Performance, Scalability, Response Time, and Reliability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Expanding on the `earlier
discussion <#Performance%20and%20Scalability>`__, RCU is used heavily by
hot code paths in performance-critical portions of the Linux kernel's
networking, security, virtualization, and scheduling code paths. RCU
must therefore use efficient implementations, especially in its
read-side primitives. To that end, it would be good if preemptible RCU's
implementation of ``rcu_read_lock()`` could be inlined; however, doing
this requires resolving ``#include`` issues with the ``task_struct``
structure.

The Linux kernel supports hardware configurations with up to 4096 CPUs,
which means that RCU must be extremely scalable. Algorithms that involve
frequent acquisitions of global locks or frequent atomic operations on
global variables simply cannot be tolerated within the RCU
implementation. RCU therefore makes heavy use of a combining tree based
on the ``rcu_node`` structure. RCU is required to tolerate all CPUs
continuously invoking any combination of RCU's runtime primitives with
minimal per-operation overhead. In fact, in many cases, increasing load
must *decrease* the per-operation overhead; witness the batching
optimizations for ``synchronize_rcu()``, ``call_rcu()``,
``synchronize_rcu_expedited()``, and ``rcu_barrier()``. As a general
rule, RCU must cheerfully accept whatever the rest of the Linux kernel
decides to throw at it.

The Linux kernel is used for real-time workloads, especially in
conjunction with the `-rt
patchset <https://rt.wiki.kernel.org/index.php/Main_Page>`__. The
real-time-latency response requirements are such that the traditional
approach of disabling preemption across RCU read-side critical sections
is inappropriate. Kernels built with ``CONFIG_PREEMPT=y`` therefore use
an RCU implementation that allows RCU read-side critical sections to be
preempted. This requirement made its presence known after users made it
clear that an earlier `real-time
patch <https://lwn.net/Articles/107930/>`__ did not meet their needs, in
conjunction with some `RCU
issues <https://lkml.kernel.org/g/20050318002026.GA2693@us.ibm.com>`__
encountered by a very early version of the -rt patchset.

In addition, RCU must make do with a sub-100-microsecond real-time
latency budget. In fact, on smaller systems with the -rt patchset, the
Linux kernel provides sub-20-microsecond real-time latencies for the
whole kernel, including RCU. RCU's scalability and latency must
therefore be sufficient for these sorts of configurations. To my
surprise, the sub-100-microsecond real-time latency budget `applies to
even the largest systems
[PDF] <http://www.rdrop.com/users/paulmck/realtime/paper/bigrt.2013.01.31a.LCA.pdf>`__,
up to and including systems with 4096 CPUs. This real-time requirement
motivated the grace-period kthread, which also simplified handling of a
number of race conditions.

RCU must avoid degrading real-time response for CPU-bound threads,
whether executing in usermode (which is one use case for
``CONFIG_NO_HZ_FULL=y``) or in the kernel. That said, CPU-bound loops in
the kernel must execute ``cond_resched()`` at least once per few tens of
milliseconds in order to avoid receiving an IPI from RCU.

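A minimal sketch of the required pattern, assuming a hypothetical
``process_element()`` that does a bounded amount of work per call:

   ::

       for (i = 0; i < very_large_count; i++) {
               process_element(i);
               cond_resched();  /* Supplies quiescent states often enough
                                 * to avoid RCU IPIs and stall warnings. */
       }
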
Finally, RCU's status as a synchronization primitive means that any RCU
failure can result in arbitrary memory corruption that can be extremely
difficult to debug. This means that RCU must be extremely reliable,
which in practice also means that RCU must have an aggressive
stress-test suite. This stress-test suite is called ``rcutorture``.

Although the need for ``rcutorture`` was no surprise, the current
immense popularity of the Linux kernel is posing interesting—and perhaps
unprecedented—validation challenges. To see this, keep in mind that
there are well over one billion instances of the Linux kernel running
today, given Android smartphones, Linux-powered televisions, and
servers. This number can be expected to increase sharply with the advent
of the celebrated Internet of Things.

Suppose that RCU contains a race condition that manifests on average
once per million years of runtime. This bug will be occurring about
three times per *day* across the installed base. RCU could simply hide
behind hardware error rates, given that no one should really expect
their smartphone to last for a million years. However, anyone taking too
much comfort from this thought should consider the fact that in most
jurisdictions, a successful multi-year test of a given mechanism, which
might include a Linux kernel, suffices for a number of types of
safety-critical certifications. In fact, rumor has it that the Linux
kernel is already being used in production for safety-critical
applications. I don't know about you, but I would feel quite bad if a
bug in RCU killed someone. Which might explain my recent focus on
validation and verification.

Other RCU Flavors
-----------------

One of the more surprising things about RCU is that there are now no
fewer than five *flavors*, or API families. In addition, the primary
flavor that has been the sole focus up to this point has two different
implementations, non-preemptible and preemptible. The other four flavors
are listed below, with requirements for each described in a separate
section.

#. `Bottom-Half Flavor (Historical)`_
#. `Sched Flavor (Historical)`_
#. `Sleepable RCU`_
#. `Tasks RCU`_

Bottom-Half Flavor (Historical)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The RCU-bh flavor of RCU has since been expressed in terms of the other
RCU flavors as part of a consolidation of the three flavors into a
single flavor. The read-side API remains, and continues to disable
softirq and to be accounted for by lockdep. Much of the material in this
section is therefore strictly historical in nature.

The softirq-disable (AKA “bottom-half”, hence the “_bh” abbreviations)
flavor of RCU, or *RCU-bh*, was developed by Dipankar Sarma to provide a
flavor of RCU that could withstand the network-based denial-of-service
attacks researched by Robert Olsson. These attacks placed so much
networking load on the system that some of the CPUs never exited softirq
execution, which in turn prevented those CPUs from ever executing a
context switch, which, in the RCU implementation of that time, prevented
grace periods from ever ending. The result was an out-of-memory
condition and a system hang.

The solution was the creation of RCU-bh, which does
``local_bh_disable()`` across its read-side critical sections, and which
uses the transition from one type of softirq processing to another as a
quiescent state in addition to context switch, idle, user mode, and
offline. This means that RCU-bh grace periods can complete even when
some of the CPUs execute in softirq indefinitely, thus allowing
algorithms based on RCU-bh to withstand network-based denial-of-service
attacks.

Because ``rcu_read_lock_bh()`` and ``rcu_read_unlock_bh()`` disable and
re-enable softirq handlers, any attempt to start a softirq handler
during the RCU-bh read-side critical section will be deferred. In this
case, ``rcu_read_unlock_bh()`` will invoke softirq processing, which can
take considerable time. One can of course argue that this softirq
overhead should be associated with the code following the RCU-bh
read-side critical section rather than ``rcu_read_unlock_bh()``, but the
fact is that most profiling tools cannot be expected to make this sort
of fine distinction. For example, suppose that a three-millisecond-long
RCU-bh read-side critical section executes during a time of heavy
networking load. There will very likely be an attempt to invoke at least
one softirq handler during that three milliseconds, but any such
invocation will be delayed until the time of the
``rcu_read_unlock_bh()``. This can of course make it appear at first
glance as if ``rcu_read_unlock_bh()`` was executing very slowly.

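A typical RCU-bh reader looks something like the following sketch, in
which the lookup table and the packet-handling helper are hypothetical:

   ::

       rcu_read_lock_bh();
       p = rcu_dereference_bh(lookup_table[hash]);
       if (p)
               handle_packet_for(p);  /* hypothetical */
       rcu_read_unlock_bh();  /* May invoke deferred softirq handlers. */
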
The `RCU-bh
API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
includes ``rcu_read_lock_bh()``, ``rcu_read_unlock_bh()``,
``rcu_dereference_bh()``, ``rcu_dereference_bh_check()``,
``synchronize_rcu_bh()``, ``synchronize_rcu_bh_expedited()``,
``call_rcu_bh()``, ``rcu_barrier_bh()``, and
``rcu_read_lock_bh_held()``. However, the update-side APIs are now
simple wrappers for other RCU flavors, namely RCU-sched in
``CONFIG_PREEMPT=n`` kernels and RCU-preempt otherwise.

Sched Flavor (Historical)
~~~~~~~~~~~~~~~~~~~~~~~~~

The RCU-sched flavor of RCU has since been expressed in terms of the
other RCU flavors as part of a consolidation of the three flavors into a
single flavor. The read-side API remains, and continues to disable
preemption and to be accounted for by lockdep. Much of the material in
this section is therefore strictly historical in nature.

Before preemptible RCU, waiting for an RCU grace period had the side
effect of also waiting for all pre-existing interrupt and NMI handlers.
However, there are legitimate preemptible-RCU implementations that do
not have this property, given that any point in the code outside of an
RCU read-side critical section can be a quiescent state. Therefore,
*RCU-sched* was created, which follows “classic” RCU in that an
RCU-sched grace period waits for pre-existing interrupt and NMI
handlers. In kernels built with ``CONFIG_PREEMPT=n``, the RCU and
RCU-sched APIs have identical implementations, while kernels built with
``CONFIG_PREEMPT=y`` provide a separate implementation for each.

Note well that in ``CONFIG_PREEMPT=y`` kernels,
``rcu_read_lock_sched()`` and ``rcu_read_unlock_sched()`` disable and
re-enable preemption, respectively. This means that if there was a
preemption attempt during the RCU-sched read-side critical section,
``rcu_read_unlock_sched()`` will enter the scheduler, with all the
latency and overhead entailed. Just as with ``rcu_read_unlock_bh()``,
this can make it look as if ``rcu_read_unlock_sched()`` was executing
very slowly. However, the highest-priority task won't be preempted, so
that task will enjoy low-overhead ``rcu_read_unlock_sched()``
invocations.

The `RCU-sched
API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
includes ``rcu_read_lock_sched()``, ``rcu_read_unlock_sched()``,
``rcu_read_lock_sched_notrace()``, ``rcu_read_unlock_sched_notrace()``,
``rcu_dereference_sched()``, ``rcu_dereference_sched_check()``,
``synchronize_sched()``, ``synchronize_rcu_sched_expedited()``,
``call_rcu_sched()``, ``rcu_barrier_sched()``, and
``rcu_read_lock_sched_held()``. However, anything that disables
preemption also marks an RCU-sched read-side critical section, including
``preempt_disable()`` and ``preempt_enable()``, ``local_irq_save()`` and
``local_irq_restore()``, and so on.

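For example, from RCU-sched's viewpoint, the two readers in the
following sketch are equivalent; the RCU-protected pointer ``gp`` and
the ``do_something_with()`` helper are hypothetical:

   ::

       /* An explicit RCU-sched reader... */
       rcu_read_lock_sched();
       p = rcu_dereference_sched(gp);
       do_something_with(p);
       rcu_read_unlock_sched();

       /* ...and an implicit one marked only by disabled preemption. */
       preempt_disable();
       p = rcu_dereference_sched(gp);
       do_something_with(p);
       preempt_enable();
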
Sleepable RCU
~~~~~~~~~~~~~

For well over a decade, someone saying “I need to block within an RCU
read-side critical section” was a reliable indication that this someone
did not understand RCU. After all, if you are always blocking in an RCU
read-side critical section, you can probably afford to use a
higher-overhead synchronization mechanism. However, that changed with
the advent of the Linux kernel's notifiers, whose RCU read-side critical
sections almost never sleep, but sometimes need to. This resulted in the
introduction of `sleepable RCU <https://lwn.net/Articles/202847/>`__, or
*SRCU*.

SRCU allows different domains to be defined, with each such domain
defined by an instance of an ``srcu_struct`` structure. A pointer to
this structure must be passed in to each SRCU function, for example,
``synchronize_srcu(&ss)``, where ``ss`` is the ``srcu_struct``
structure. The key benefit of these domains is that a slow SRCU reader
in one domain does not delay an SRCU grace period in some other domain.
That said, one consequence of these domains is that read-side code must
pass a “cookie” from ``srcu_read_lock()`` to ``srcu_read_unlock()``, for
example, as follows:

   ::

       int idx;

       idx = srcu_read_lock(&ss);
       do_something();
       srcu_read_unlock(&ss, idx);

As noted above, it is legal to block within SRCU read-side critical
sections; however, with great power comes great responsibility. If you
block forever in one of a given domain's SRCU read-side critical
sections, then that domain's grace periods will also be blocked forever.
Of course, one good way to block forever is to deadlock, which can
happen if any operation in a given domain's SRCU read-side critical
section can wait, either directly or indirectly, for that domain's grace
period to elapse. For example, this results in a self-deadlock:

   ::

       1 int idx;
       2
       3 idx = srcu_read_lock(&ss);
       4 do_something();
       5 synchronize_srcu(&ss);
       6 srcu_read_unlock(&ss, idx);

However, if line 5 acquired a mutex that was held across a
``synchronize_srcu()`` for domain ``ss``, deadlock would still be
possible. Furthermore, if line 5 acquired a mutex that was held across a
``synchronize_srcu()`` for some other domain ``ss1``, and if an
``ss1``-domain SRCU read-side critical section acquired another mutex
that was held across an ``ss``-domain ``synchronize_srcu()``, deadlock
would again be possible. Such a deadlock cycle could extend across an
arbitrarily large number of different SRCU domains. Again, with great
power comes great responsibility.

Unlike the other RCU flavors, SRCU read-side critical sections can run
on idle and even offline CPUs. This ability requires that
``srcu_read_lock()`` and ``srcu_read_unlock()`` contain memory barriers,
which means that SRCU readers will run a bit slower than would RCU
readers. It also motivates the ``smp_mb__after_srcu_read_unlock()`` API,
which, in combination with ``srcu_read_unlock()``, guarantees a full
memory barrier.

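A sketch of that combination, reusing the hypothetical domain ``ss``
from the earlier examples along with a hypothetical protected pointer
``gp``:

   ::

       idx = srcu_read_lock(&ss);
       p = srcu_dereference(gp, &ss);
       do_something_with(p);  /* hypothetical reader work */
       srcu_read_unlock(&ss, idx);
       /* Together with the srcu_read_unlock() above, acts as a full
        * memory barrier preceding any later accesses. */
       smp_mb__after_srcu_read_unlock();
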
Also unlike other RCU flavors, ``synchronize_srcu()`` may **not** be
invoked from CPU-hotplug notifiers, due to the fact that SRCU grace
periods make use of timers and the possibility of timers being
temporarily “stranded” on the outgoing CPU. This stranding of timers
means that timers posted to the outgoing CPU will not fire until late in
the CPU-hotplug process. The problem is that if a notifier is waiting on
an SRCU grace period, that grace period is waiting on a timer, and that
timer is stranded on the outgoing CPU, then the notifier will never be
awakened; in other words, deadlock has occurred. This same situation of
course also prohibits ``srcu_barrier()`` from being invoked from
CPU-hotplug notifiers.

SRCU also differs from other RCU flavors in that SRCU's expedited and
non-expedited grace periods are implemented by the same mechanism. This
means that in the current SRCU implementation, expediting a future grace
period has the side effect of expediting all prior grace periods that
have not yet completed. (But please note that this is a property of the
current implementation, not necessarily of future implementations.) In
addition, if SRCU has been idle for longer than the interval specified
by the ``srcutree.exp_holdoff`` kernel boot parameter (25 microseconds
by default), and if a ``synchronize_srcu()`` invocation ends this idle
period, that invocation will be automatically expedited.

As of v4.12, SRCU's callbacks are maintained per-CPU, eliminating a
locking bottleneck present in prior kernel versions. Although this will
allow users to put much heavier stress on ``call_srcu()``, it is
important to note that SRCU does not yet take any special steps to deal
with callback flooding. So if you are posting (say) 10,000 SRCU
callbacks per second per CPU, you are probably totally OK, but if you
intend to post (say) 1,000,000 SRCU callbacks per second per CPU, please
run some tests first. SRCU just might need a few adjustments to deal
with that sort of load. Of course, your mileage may vary based on the
speed of your CPUs and the size of your memory.

The `SRCU
API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
includes ``srcu_read_lock()``, ``srcu_read_unlock()``,
``srcu_dereference()``, ``srcu_dereference_check()``,
``synchronize_srcu()``, ``synchronize_srcu_expedited()``,
``call_srcu()``, ``srcu_barrier()``, and ``srcu_read_lock_held()``. It
also includes ``DEFINE_SRCU()``, ``DEFINE_STATIC_SRCU()``, and
``init_srcu_struct()`` APIs for defining and initializing
``srcu_struct`` structures.

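For example, here is a sketch of defining a domain and posting a
callback to it; the structure, object, and function names are
hypothetical:

   ::

       DEFINE_STATIC_SRCU(my_srcu);  /* Define and initialize a domain. */

       struct my_obj {
               int data;
               struct rcu_head rh;
       };

       static void my_obj_reclaim_cb(struct rcu_head *rhp)
       {
               kfree(container_of(rhp, struct my_obj, rh));
       }

       /* Updater, after making obj unreachable to new readers: */
       call_srcu(&my_srcu, &obj->rh, my_obj_reclaim_cb);
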
Tasks RCU
~~~~~~~~~

Some forms of tracing use “trampolines” to handle the binary rewriting
required to install different types of probes. It would be good to be
able to free old trampolines, which sounds like a job for some form of
RCU. However, because it is necessary to be able to install a trace
anywhere in the code, it is not possible to use read-side markers such
as ``rcu_read_lock()`` and ``rcu_read_unlock()``. In addition, it does
not work to have these markers in the trampoline itself, because there
would need to be instructions following ``rcu_read_unlock()``. Although
``synchronize_rcu()`` would guarantee that execution reached the
``rcu_read_unlock()``, it would not be able to guarantee that execution
had completely left the trampoline. Worse yet, in some situations
the trampoline's protection must extend a few instructions *prior* to
execution reaching the trampoline. For example, these few instructions
might calculate the address of the trampoline, so that entering the
trampoline would be pre-ordained a surprisingly long time before
execution actually reached the trampoline itself.

The solution, in the form of `Tasks
RCU <https://lwn.net/Articles/607117/>`__, is to have implicit read-side
critical sections that are delimited by voluntary context switches, that
is, calls to ``schedule()``, ``cond_resched()``, and
``synchronize_rcu_tasks()``. In addition, transitions to and from
userspace execution also delimit tasks-RCU read-side critical sections.

The tasks-RCU API is quite compact, consisting only of
``call_rcu_tasks()``, ``synchronize_rcu_tasks()``, and
``rcu_barrier_tasks()``. In ``CONFIG_PREEMPT=n`` kernels, trampolines
cannot be preempted, so these APIs map to ``call_rcu()``,
``synchronize_rcu()``, and ``rcu_barrier()``, respectively. In
``CONFIG_PREEMPT=y`` kernels, trampolines can be preempted, and these
three APIs are therefore implemented by separate functions that check
for voluntary context switches.

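A sketch of how a tracer might use this API to free an old trampoline;
the trampoline structure and its helpers are hypothetical:

   ::

       struct tramp {
               void *text;          /* Executable trampoline code. */
               struct rcu_head rh;
       };

       static void tramp_free_cb(struct rcu_head *rhp)
       {
               release_tramp_text(container_of(rhp, struct tramp, rh));
       }

       /* After unhooking old_tramp so that no new task can enter it,
        * wait (asynchronously) for all tasks that might be executing
        * within it to voluntarily context-switch or visit userspace. */
       call_rcu_tasks(&old_tramp->rh, tramp_free_cb);
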
Possible Future Changes
-----------------------

One of the tricks that RCU uses to attain update-side scalability is to
increase grace-period latency with increasing numbers of CPUs. If this
becomes a serious problem, it will be necessary to rework the
grace-period state machine so as to avoid the need for the additional
latency.

RCU disables CPU hotplug in a few places, perhaps most notably in the
``rcu_barrier()`` operations. If there is a strong reason to use
``rcu_barrier()`` in CPU-hotplug notifiers, it will be necessary to
avoid disabling CPU hotplug. This would introduce some complexity, so
there had better be a *very* good reason.

The tradeoff between grace-period latency on the one hand and
interruptions of other CPUs on the other hand may need to be
re-examined. The desire is of course for zero grace-period latency as
well as zero interprocessor interrupts undertaken during an expedited
grace-period operation. While this ideal is unlikely to be achievable,
it is quite possible that further improvements can be made.

The multiprocessor implementations of RCU use a combining tree that
groups CPUs so as to reduce lock contention and increase cache locality.
However, this combining tree does not spread its memory across NUMA
nodes nor does it align the CPU groups with hardware features such as
sockets or cores. Such spreading and alignment is currently believed to
be unnecessary because the hotpath read-side primitives do not access
the combining tree, nor does ``call_rcu()`` in the common case. If you
believe that your architecture needs such spreading and alignment, then
your architecture should also benefit from the
``rcutree.rcu_fanout_leaf`` boot parameter, which can be set to the
number of CPUs in a socket, NUMA node, or whatever. If the number of
CPUs is too large, use a fraction of the number of CPUs. If the number
of CPUs is a large prime number, well, that certainly is an
“interesting” architectural choice! More flexible arrangements might be
considered, but only if ``rcutree.rcu_fanout_leaf`` has proven
inadequate, and only if the inadequacy has been demonstrated by a
carefully run and realistic system-level workload.

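For example, on a hypothetical machine with 16 CPUs per socket, aligning
the leaf ``rcu_node`` structures with sockets might look like this on
the kernel boot command line:

   ::

       rcutree.rcu_fanout_leaf=16
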
Please note that arrangements that require RCU to remap CPU numbers will
require extremely good demonstration of need and full exploration of
alternatives.

RCU's various kthreads are reasonably recent additions. It is quite
likely that adjustments will be required to more gracefully handle
extreme loads. It might also be necessary to be able to relate CPU
utilization by RCU's kthreads and softirq handlers to the code that
instigated this CPU utilization. For example, RCU callback overhead
might be charged back to the originating ``call_rcu()`` instance, though
probably not in production kernels.

Additional work may be required to provide reasonable forward-progress
guarantees under heavy load for grace periods and for callback
invocation.

Summary
-------

This document has presented more than two decades' worth of RCU
requirements. Given that the requirements keep changing, this will not
be the last word on this subject, but at least it serves to get an
important subset of the requirements set forth.

Acknowledgments
---------------

I am grateful to Steven Rostedt, Lai Jiangshan, Ingo Molnar, Oleg
Nesterov, Borislav Petkov, Peter Zijlstra, Boqun Feng, and Andy
Lutomirski for their help in rendering this article human readable, and
to Michelle Rankin for her support of this effort. Other contributions
are acknowledged in the Linux kernel's git archive.