=================================
A Tour Through RCU's Requirements
=================================

Copyright IBM Corporation, 2015

Author: Paul E. McKenney

The initial version of this document appeared on
`LWN <https://lwn.net/>`_ in the following articles:
`part 1 <https://lwn.net/Articles/652156/>`_,
`part 2 <https://lwn.net/Articles/652677/>`_, and
`part 3 <https://lwn.net/Articles/653326/>`_.

Introduction
------------

Read-copy update (RCU) is a synchronization mechanism that is often used
as a replacement for reader-writer locking. RCU is unusual in that
updaters do not block readers, which means that RCU's read-side
primitives can be exceedingly fast and scalable. In addition, updaters
can make useful forward progress concurrently with readers. However, all
this concurrency between RCU readers and updaters does raise the
question of exactly what RCU readers are doing, which in turn raises the
question of exactly what RCU's requirements are.

This document therefore summarizes RCU's requirements, and can be
thought of as an informal, high-level specification for RCU. It is
important to understand that RCU's specification is primarily empirical
in nature; in fact, I learned about many of these requirements the hard
way. This situation might cause some consternation; however, not only
has this learning process been a lot of fun, but it has also been a
great privilege to work with so many people willing to apply
technologies in interesting new ways.

All that aside, here are the categories of currently known RCU
requirements:

#. `Fundamental Requirements`_
#. `Fundamental Non-Requirements`_
#. `Parallelism Facts of Life`_
#. `Quality-of-Implementation Requirements`_
#. `Linux Kernel Complications`_
#. `Software-Engineering Requirements`_
#. `Other RCU Flavors`_
#. `Possible Future Changes`_

This is followed by a summary_; however, the answers to
each quick quiz immediately follow the quiz.

Fundamental Requirements
------------------------

RCU's fundamental requirements are the closest thing RCU has to hard
mathematical requirements. These are:

#. `Grace-Period Guarantee`_
#. `Publish/Subscribe Guarantee`_
#. `Memory-Barrier Guarantees`_
#. `RCU Primitives Guaranteed to Execute Unconditionally`_
#. `Guaranteed Read-to-Write Upgrade`_

Grace-Period Guarantee
~~~~~~~~~~~~~~~~~~~~~~

RCU's grace-period guarantee is unusual in being premeditated: Jack
Slingwine and I had this guarantee firmly in mind when we started work
on RCU (then called “rclock”) in the early 1990s. That said, the past
two decades of experience with RCU have produced a much more detailed
understanding of this guarantee.

RCU's grace-period guarantee allows updaters to wait for the completion
of all pre-existing RCU read-side critical sections. An RCU read-side
critical section begins with the marker rcu_read_lock() and ends
with the marker rcu_read_unlock(). These markers may be nested, and
RCU treats a nested set as one big RCU read-side critical section.
Production-quality implementations of rcu_read_lock() and
rcu_read_unlock() are extremely lightweight, and in fact have
exactly zero overhead in Linux kernels built for production use with
``CONFIG_PREEMPTION=n``.

This guarantee allows ordering to be enforced with extremely low
overhead to readers, for example:

   ::

       1 int x, y;
       2
       3 void thread0(void)
       4 {
       5   rcu_read_lock();
       6   r1 = READ_ONCE(x);
       7   r2 = READ_ONCE(y);
       8   rcu_read_unlock();
       9 }
      10
      11 void thread1(void)
      12 {
      13   WRITE_ONCE(x, 1);
      14   synchronize_rcu();
      15   WRITE_ONCE(y, 1);
      16 }

Because the synchronize_rcu() on line 14 waits for all pre-existing
readers, any instance of thread0() that loads a value of zero from
``x`` must complete before thread1() stores to ``y``, so that
instance must also load a value of zero from ``y``. Similarly, any
instance of thread0() that loads a value of one from ``y`` must have
started after the synchronize_rcu() started, and must therefore also
load a value of one from ``x``. Therefore, the outcome:

   ::

      (r1 == 0 && r2 == 1)

cannot happen.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Wait a minute! You said that updaters can make useful forward         |
| progress concurrently with readers, but pre-existing readers will     |
| block synchronize_rcu()!!!                                            |
| Just who are you trying to fool???                                    |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| First, if updaters do not wish to be blocked by readers, they can use |
| call_rcu() or kfree_rcu(), which will be discussed later.             |
| Second, even when using synchronize_rcu(), the other update-side      |
| code does run concurrently with readers, whether pre-existing or not. |
+-----------------------------------------------------------------------+
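
To make the first point in this answer concrete, here is a minimal
sketch of a non-blocking removal. It assumes a ``struct foo`` containing
an ``rcu_head`` along with the ``gp`` pointer and ``gp_lock`` used in
the examples below; the names remove_gp_cb() and
remove_gp_asynchronous() are illustrative:

   ::

       1 struct foo {
       2   int a;
       3   int b;
       4   struct rcu_head rh;
       5 };
       6
       7 static void remove_gp_cb(struct rcu_head *rhp)
       8 {
       9   struct foo *p = container_of(rhp, struct foo, rh);
      10
      11   kfree(p); /* Invoked only after a grace period elapses. */
      12 }
      13
      14 bool remove_gp_asynchronous(void)
      15 {
      16   struct foo *p;
      17
      18   spin_lock(&gp_lock);
      19   p = rcu_access_pointer(gp);
      20   if (!p) {
      21     spin_unlock(&gp_lock);
      22     return false;
      23   }
      24   rcu_assign_pointer(gp, NULL);
      25   spin_unlock(&gp_lock);
      26   call_rcu(&p->rh, remove_gp_cb); /* Returns immediately. */
      27   return true;
      28 }

When the callback would do nothing but kfree() the enclosing structure,
kfree_rcu(p, rh) can take the place of the call_rcu()/remove_gp_cb()
combination.
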
This scenario resembles one of the first uses of RCU in
`DYNIX/ptx <https://en.wikipedia.org/wiki/DYNIX>`__, which managed a
distributed lock manager's transition into a state suitable for handling
recovery from node failure, more or less as follows:

   ::

       1 #define STATE_NORMAL        0
       2 #define STATE_WANT_RECOVERY 1
       3 #define STATE_RECOVERING    2
       4 #define STATE_WANT_NORMAL   3
       5
       6 int state = STATE_NORMAL;
       7
       8 void do_something_dlm(void)
       9 {
      10   int state_snap;
      11
      12   rcu_read_lock();
      13   state_snap = READ_ONCE(state);
      14   if (state_snap == STATE_NORMAL)
      15     do_something();
      16   else
      17     do_something_carefully();
      18   rcu_read_unlock();
      19 }
      20
      21 void start_recovery(void)
      22 {
      23   WRITE_ONCE(state, STATE_WANT_RECOVERY);
      24   synchronize_rcu();
      25   WRITE_ONCE(state, STATE_RECOVERING);
      26   recovery();
      27   WRITE_ONCE(state, STATE_WANT_NORMAL);
      28   synchronize_rcu();
      29   WRITE_ONCE(state, STATE_NORMAL);
      30 }

The RCU read-side critical section in do_something_dlm() works with
the synchronize_rcu() in start_recovery() to guarantee that
do_something() never runs concurrently with recovery(), but with
little or no synchronization overhead in do_something_dlm().

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why is the synchronize_rcu() on line 28 needed?                       |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Without that extra grace period, memory reordering could result in    |
| do_something_dlm() executing do_something() concurrently with         |
| the last bits of recovery().                                          |
+-----------------------------------------------------------------------+

In order to avoid fatal problems such as deadlocks, an RCU read-side
critical section must not contain calls to synchronize_rcu().
Similarly, an RCU read-side critical section must not contain anything
that waits, directly or indirectly, on completion of an invocation of
synchronize_rcu().
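
For illustration, the following sketch shows the self-deadlock that
this rule forbids; buggy_reader() is a hypothetical function, not
something from the examples above:

   ::

       1 void buggy_reader(void)
       2 {
       3   rcu_read_lock();
       4   synchronize_rcu(); /* BUG: waits for a grace period that cannot */
       5   rcu_read_unlock(); /* complete until this critical section ends. */
       6 }
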
Although RCU's grace-period guarantee is useful in and of itself, with
`quite a few use cases <https://lwn.net/Articles/573497/>`__, it would
be good to be able to use RCU to coordinate read-side access to linked
data structures. For this, the grace-period guarantee is not sufficient,
as can be seen in function add_gp_buggy() below. We will look at the
reader's code later, but in the meantime, just think of the reader as
locklessly picking up the ``gp`` pointer, and, if the value loaded is
non-\ ``NULL``, locklessly accessing the ``->a`` and ``->b`` fields.

   ::

       1 bool add_gp_buggy(int a, int b)
       2 {
       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
       4   if (!p)
       5     return -ENOMEM;
       6   spin_lock(&gp_lock);
       7   if (rcu_access_pointer(gp)) {
       8     spin_unlock(&gp_lock);
       9     return false;
      10   }
      11   p->a = a;
      12   p->b = b;
      13   gp = p; /* ORDERING BUG */
      14   spin_unlock(&gp_lock);
      15   return true;
      16 }

The problem is that both the compiler and weakly ordered CPUs are within
their rights to reorder this code as follows:

   ::

       1 bool add_gp_buggy_optimized(int a, int b)
       2 {
       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
       4   if (!p)
       5     return -ENOMEM;
       6   spin_lock(&gp_lock);
       7   if (rcu_access_pointer(gp)) {
       8     spin_unlock(&gp_lock);
       9     return false;
      10   }
      11   gp = p; /* ORDERING BUG */
      12   p->a = a;
      13   p->b = b;
      14   spin_unlock(&gp_lock);
      15   return true;
      16 }

If an RCU reader fetches ``gp`` just after ``add_gp_buggy_optimized``
executes line 11, it will see garbage in the ``->a`` and ``->b`` fields.
And this is but one of many ways in which compiler and hardware
optimizations could cause trouble. Therefore, we clearly need some way
to prevent the compiler and the CPU from reordering in this manner,
which brings us to the publish-subscribe guarantee discussed in the next
section.

Publish/Subscribe Guarantee
~~~~~~~~~~~~~~~~~~~~~~~~~~~

RCU's publish-subscribe guarantee allows data to be inserted into a
linked data structure without disrupting RCU readers. The updater uses
rcu_assign_pointer() to insert the new data, and readers use
rcu_dereference() to access data, whether new or old. The following
shows an example of insertion:

   ::

       1 bool add_gp(int a, int b)
       2 {
       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
       4   if (!p)
       5     return -ENOMEM;
       6   spin_lock(&gp_lock);
       7   if (rcu_access_pointer(gp)) {
       8     spin_unlock(&gp_lock);
       9     return false;
      10   }
      11   p->a = a;
      12   p->b = b;
      13   rcu_assign_pointer(gp, p);
      14   spin_unlock(&gp_lock);
      15   return true;
      16 }

The rcu_assign_pointer() on line 13 is conceptually equivalent to a
simple assignment statement, but also guarantees that its assignment
will happen after the two assignments in lines 11 and 12, similar to the
C11 ``memory_order_release`` store operation. It also prevents any
number of “interesting” compiler optimizations, for example, the use of
``gp`` as a scratch location immediately preceding the assignment.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But rcu_assign_pointer() does nothing to prevent the two              |
| assignments to ``p->a`` and ``p->b`` from being reordered. Can't that |
| also cause problems?                                                  |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| No, it cannot. The readers cannot see either of these two fields      |
| until the assignment to ``gp``, by which time both fields are fully   |
| initialized. So reordering the assignments to ``p->a`` and ``p->b``   |
| cannot possibly cause any problems.                                   |
+-----------------------------------------------------------------------+

It is tempting to assume that the reader need not do anything special to
control its accesses to the RCU-protected data, as shown in
do_something_gp_buggy() below:

   ::

       1 bool do_something_gp_buggy(void)
       2 {
       3   rcu_read_lock();
       4   p = gp;  /* OPTIMIZATIONS GALORE!!! */
       5   if (p) {
       6     do_something(p->a, p->b);
       7     rcu_read_unlock();
       8     return true;
       9   }
      10   rcu_read_unlock();
      11   return false;
      12 }

However, this temptation must be resisted because there are a
surprisingly large number of ways that the compiler (or weak ordering
CPUs like the DEC Alpha) can trip this code up. For but one example, if
the compiler were short of registers, it might choose to refetch from
``gp`` rather than keeping a separate copy in ``p`` as follows:

   ::

       1 bool do_something_gp_buggy_optimized(void)
       2 {
       3   rcu_read_lock();
       4   if (gp) { /* OPTIMIZATIONS GALORE!!! */
       5     do_something(gp->a, gp->b);
       6     rcu_read_unlock();
       7     return true;
       8   }
       9   rcu_read_unlock();
      10   return false;
      11 }

If this function ran concurrently with a series of updates that replaced
the current structure with a new one, the fetches of ``gp->a`` and
``gp->b`` might well come from two different structures, which could
cause serious confusion. To prevent this (and much else besides),
do_something_gp() uses rcu_dereference() to fetch from ``gp``:

   ::

       1 bool do_something_gp(void)
       2 {
       3   rcu_read_lock();
       4   p = rcu_dereference(gp);
       5   if (p) {
       6     do_something(p->a, p->b);
       7     rcu_read_unlock();
       8     return true;
       9   }
      10   rcu_read_unlock();
      11   return false;
      12 }

The rcu_dereference() uses volatile casts and (for DEC Alpha) memory
barriers in the Linux kernel. Should a |high-quality implementation of
C11 memory_order_consume [PDF]|_
ever appear, then rcu_dereference() could be implemented as a
``memory_order_consume`` load. Regardless of the exact implementation, a
pointer fetched by rcu_dereference() may not be used outside of the
outermost RCU read-side critical section containing that
rcu_dereference(), unless protection of the corresponding data
element has been passed from RCU to some other synchronization
mechanism, most commonly locking or reference counting
(see ../../rcuref.rst).

.. |high-quality implementation of C11 memory_order_consume [PDF]| replace:: high-quality implementation of C11 ``memory_order_consume`` [PDF]
.. _high-quality implementation of C11 memory_order_consume [PDF]: http://www.rdrop.com/users/paulmck/RCU/consume.2015.07.13a.pdf

In short, updaters use rcu_assign_pointer() and readers use
rcu_dereference(), and these two RCU API elements work together to
ensure that readers have a consistent view of newly added data elements.

Of course, it is also necessary to remove elements from RCU-protected
data structures, for example, using the following process:

#. Remove the data element from the enclosing structure.
#. Wait for all pre-existing RCU read-side critical sections to complete
   (because only pre-existing readers can possibly have a reference to
   the newly removed data element).
#. At this point, only the updater has a reference to the newly removed
   data element, so it can safely reclaim the data element, for example,
   by passing it to kfree().

This process is implemented by remove_gp_synchronous():

   ::

       1 bool remove_gp_synchronous(void)
       2 {
       3   struct foo *p;
       4
       5   spin_lock(&gp_lock);
       6   p = rcu_access_pointer(gp);
       7   if (!p) {
       8     spin_unlock(&gp_lock);
       9     return false;
      10   }
      11   rcu_assign_pointer(gp, NULL);
      12   spin_unlock(&gp_lock);
      13   synchronize_rcu();
      14   kfree(p);
      15   return true;
      16 }

This function is straightforward, with line 13 waiting for a grace
period before line 14 frees the old data element. This waiting ensures
that readers will reach line 7 of do_something_gp() before the data
element referenced by ``p`` is freed. The rcu_access_pointer() on
line 6 is similar to rcu_dereference(), except that:

#. The value returned by rcu_access_pointer() cannot be
   dereferenced. If you want to access the value pointed to as well as
   the pointer itself, use rcu_dereference() instead of
   rcu_access_pointer().
#. The call to rcu_access_pointer() need not be protected. In
   contrast, rcu_dereference() must either be within an RCU
   read-side critical section or in a code segment where the pointer
   cannot change, for example, in code protected by the corresponding
   update-side lock.
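
The following sketch makes this distinction concrete; gp_is_set() and
read_gp_a() are hypothetical helpers operating on the same ``gp``
pointer and ``struct foo``:

   ::

       1 bool gp_is_set(void)
       2 {
       3   return rcu_access_pointer(gp) != NULL; /* Pointer is only tested, */
       4 }                                        /* so no protection needed. */
       5
       6 int read_gp_a(int dflt)
       7 {
       8   struct foo *p;
       9   int ret = dflt;
      10
      11   rcu_read_lock();         /* rcu_dereference() must be protected,  */
      12   p = rcu_dereference(gp); /* here by an RCU read-side critical     */
      13   if (p)                   /* section, because the pointer it       */
      14     ret = p->a;            /* returns is dereferenced.              */
      15   rcu_read_unlock();
      16   return ret;
      17 }
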

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Without the rcu_dereference() or the rcu_access_pointer(),            |
| what destructive optimizations might the compiler make use of?        |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Let's start with what happens to do_something_gp() if it fails to     |
| use rcu_dereference(). It could reuse a value formerly fetched        |
| from this same pointer. It could also fetch the pointer from ``gp``   |
| in a byte-at-a-time manner, resulting in *load tearing*, in turn      |
| resulting in a bytewise mash-up of two distinct pointer values. It    |
| might even use value-speculation optimizations, where it makes a      |
| wrong guess, but by the time it gets around to checking the value, an |
| update has changed the pointer to match the wrong guess. Too bad      |
| about any dereferences that returned pre-initialization garbage in    |
| the meantime!                                                         |
| For remove_gp_synchronous(), as long as all modifications to          |
| ``gp`` are carried out while holding ``gp_lock``, the above           |
| optimizations are harmless. However, ``sparse`` will complain if you  |
| define ``gp`` with ``__rcu`` and then access it without using either  |
| rcu_access_pointer() or rcu_dereference().                            |
+-----------------------------------------------------------------------+

In short, RCU's publish-subscribe guarantee is provided by the
combination of rcu_assign_pointer() and rcu_dereference(). This
guarantee allows data elements to be safely added to RCU-protected
linked data structures without disrupting RCU readers. This guarantee
can be used in combination with the grace-period guarantee to also allow
data elements to be removed from RCU-protected linked data structures,
again without disrupting RCU readers.

This guarantee was only partially premeditated. DYNIX/ptx used an
explicit memory barrier for publication, but had nothing resembling
rcu_dereference() for subscription, nor did it have anything
resembling the dependency-ordering barrier that was later subsumed
into rcu_dereference() and later still into READ_ONCE(). The
need for these operations made itself known quite suddenly at a
late-1990s meeting with the DEC Alpha architects, back in the days when
DEC was still a free-standing company. It took the Alpha architects a
good hour to convince me that any sort of barrier would ever be needed,
and it then took me a good *two* hours to convince them that their
documentation did not make this point clear. More recent work with the C
and C++ standards committees has provided much education on tricks and
traps from the compiler. In short, compilers were much less tricky in
the early 1990s, but in 2015, don't even think about omitting
rcu_dereference()!

Memory-Barrier Guarantees
~~~~~~~~~~~~~~~~~~~~~~~~~

The previous section's simple linked-data-structure scenario clearly
demonstrates the need for RCU's stringent memory-ordering guarantees on
systems with more than one CPU (a sketch illustrating the first two
guarantees follows this list):

#. Each CPU that has an RCU read-side critical section that begins
   before synchronize_rcu() starts is guaranteed to execute a full
   memory barrier between the time that the RCU read-side critical
   section ends and the time that synchronize_rcu() returns. Without
   this guarantee, a pre-existing RCU read-side critical section might
   hold a reference to the newly removed ``struct foo`` after the
   kfree() on line 14 of remove_gp_synchronous().
#. Each CPU that has an RCU read-side critical section that ends after
   synchronize_rcu() returns is guaranteed to execute a full memory
   barrier between the time that synchronize_rcu() begins and the
   time that the RCU read-side critical section begins. Without this
   guarantee, a later RCU read-side critical section running after the
   kfree() on line 14 of remove_gp_synchronous() might later run
   do_something_gp() and find the newly deleted ``struct foo``.
#. If the task invoking synchronize_rcu() remains on a given CPU,
   then that CPU is guaranteed to execute a full memory barrier sometime
   during the execution of synchronize_rcu(). This guarantee ensures
   that the kfree() on line 14 of remove_gp_synchronous() really
   does execute after the removal on line 11.
#. If the task invoking synchronize_rcu() migrates among a group of
   CPUs during that invocation, then each of the CPUs in that group is
   guaranteed to execute a full memory barrier sometime during the
   execution of synchronize_rcu(). This guarantee again ensures that
   the kfree() on line 14 of remove_gp_synchronous() really does
   execute after the removal on line 11, even in the case where the
   thread executing synchronize_rcu() migrates in the meantime.
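
As promised, here is a sketch of how the first two guarantees are
typically relied upon; reader() and updater() are illustrative
stand-ins for do_something_gp() and remove_gp_synchronous(), with the
relevant guarantee called out in the comments:

   ::

       1 void reader(void)
       2 {
       3   struct foo *p;
       4
       5   rcu_read_lock();
       6   p = rcu_dereference(gp);
       7   if (p)
       8     do_something_with(p->a); /* Guarantee #1: completes before a     */
       9   rcu_read_unlock();         /* concurrent synchronize_rcu() returns. */
      10 }
      11
      12 void updater(void)
      13 {
      14   struct foo *p;
      15
      16   spin_lock(&gp_lock);
      17   p = rcu_access_pointer(gp);
      18   rcu_assign_pointer(gp, NULL);
      19   spin_unlock(&gp_lock);
      20   synchronize_rcu(); /* Guarantee #2: any reader not waited on must */
      21   kfree(p);          /* see the NULL, hence cannot reach *p.        */
      22 }
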

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Given that multiple CPUs can start RCU read-side critical sections at |
| any time without any ordering whatsoever, how can RCU possibly tell   |
| whether or not a given RCU read-side critical section starts before a |
| given instance of synchronize_rcu()?                                  |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| If RCU cannot tell whether or not a given RCU read-side critical      |
| section starts before a given instance of synchronize_rcu(), then     |
| it must assume that the RCU read-side critical section started first. |
| In other words, a given instance of synchronize_rcu() can avoid       |
| waiting on a given RCU read-side critical section only if it can      |
| prove that synchronize_rcu() started first.                           |
| A related question is “When rcu_read_lock() doesn't generate any      |
| code, why does it matter how it relates to a grace period?” The       |
| answer is that it is not the relationship of rcu_read_lock()          |
| itself that is important, but rather the relationship of the code     |
| within the enclosed RCU read-side critical section to the code        |
| preceding and following the grace period. If we take this viewpoint,  |
| then a given RCU read-side critical section begins before a given     |
| grace period when some access preceding the grace period observes the |
| effect of some access within the critical section, in which case none |
| of the accesses within the critical section may observe the effects   |
| of any access following the grace period.                             |
|                                                                       |
| As of late 2016, mathematical models of RCU take this viewpoint, for  |
| example, see slides 62 and 63 of the `2016 LinuxCon                   |
| EU <http://www2.rdrop.com/users/paulmck/scalability/paper/LinuxMM.201 |
| 6.10.04c.LCE.pdf>`__                                                  |
| presentation.                                                         |
+-----------------------------------------------------------------------+

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| The first and second guarantees require unbelievably strict ordering! |
| Are all these memory barriers *really* required?                      |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Yes, they really are required. To see why the first guarantee is      |
| required, consider the following sequence of events:                  |
|                                                                       |
| #. CPU 1: rcu_read_lock()                                             |
| #. CPU 1: ``q = rcu_dereference(gp); /* Very likely to return p. */`` |
| #. CPU 0: ``list_del_rcu(p);``                                        |
| #. CPU 0: synchronize_rcu() starts.                                   |
| #. CPU 1: ``do_something_with(q->a);``                                |
|    ``/* No smp_mb(), so might happen after kfree(). */``              |
| #. CPU 1: rcu_read_unlock()                                           |
| #. CPU 0: synchronize_rcu() returns.                                  |
| #. CPU 0: ``kfree(p);``                                               |
|                                                                       |
| Therefore, there absolutely must be a full memory barrier between the |
| end of the RCU read-side critical section and the end of the grace    |
| period.                                                               |
|                                                                       |
| The sequence of events demonstrating the necessity of the second rule |
| is roughly similar:                                                   |
|                                                                       |
| #. CPU 0: ``list_del_rcu(p);``                                        |
| #. CPU 0: synchronize_rcu() starts.                                   |
| #. CPU 1: rcu_read_lock()                                             |
| #. CPU 1: ``q = rcu_dereference(gp);``                                |
|    ``/* Might return p if no memory barrier. */``                     |
| #. CPU 0: synchronize_rcu() returns.                                  |
| #. CPU 0: ``kfree(p);``                                               |
| #. CPU 1: ``do_something_with(q->a); /* Boom!!! */``                  |
| #. CPU 1: rcu_read_unlock()                                           |
|                                                                       |
| And similarly, without a memory barrier between the beginning of the  |
| grace period and the beginning of the RCU read-side critical section, |
| CPU 1 might end up accessing the freelist.                            |
|                                                                       |
| The “as if” rule of course applies, so that any implementation that   |
| acts as if the appropriate memory barriers were in place is a correct |
| implementation. That said, it is much easier to fool yourself into    |
| believing that you have adhered to the as-if rule than it is to       |
| actually adhere to it!                                                |
+-----------------------------------------------------------------------+

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| You claim that rcu_read_lock() and rcu_read_unlock() generate         |
| absolutely no code in some kernel builds. This means that the         |
| compiler might arbitrarily rearrange consecutive RCU read-side        |
| critical sections. Given such rearrangement, if a given RCU read-side |
| critical section is done, how can you be sure that all prior RCU      |
| read-side critical sections are done? Won't the compiler              |
| rearrangements make that impossible to determine?                     |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| In cases where rcu_read_lock() and rcu_read_unlock() generate         |
| absolutely no code, RCU infers quiescent states only at special       |
| locations, for example, within the scheduler. Because calls to        |
| schedule() had better prevent calling-code accesses to shared         |
| variables from being rearranged across the call to schedule(), if     |
| RCU detects the end of a given RCU read-side critical section, it     |
| will necessarily detect the end of all prior RCU read-side critical   |
| sections, no matter how aggressively the compiler scrambles the code. |
| Again, this all assumes that the compiler cannot scramble code across |
| calls to the scheduler, out of interrupt handlers, into the idle      |
| loop, into user-mode code, and so on. But if your kernel build allows |
| that sort of scrambling, you have broken far more than just RCU!      |
+-----------------------------------------------------------------------+

Note that these memory-barrier requirements do not replace the
fundamental RCU requirement that a grace period wait for all
pre-existing readers. On the contrary, the memory barriers called out in
this section must operate in such a way as to *enforce* this fundamental
requirement. Of course, different implementations enforce this
requirement in different ways, but enforce it they must.

RCU Primitives Guaranteed to Execute Unconditionally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The common-case RCU primitives are unconditional. They are invoked, they
do their job, and they return, with no possibility of error, and no need
to retry. This is a key RCU design philosophy.

However, this philosophy is pragmatic rather than pigheaded. If someone
comes up with a good justification for a particular conditional RCU
primitive, it might well be implemented and added. After all, this
guarantee was reverse-engineered, not premeditated. The unconditional
nature of the RCU primitives was initially an accident of
implementation, and later experience with conditional synchronization
primitives caused me to elevate this accident to a guarantee.
Therefore, the justification for adding a conditional primitive to RCU
would need to be based on detailed and compelling use cases.

Guaranteed Read-to-Write Upgrade
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As far as RCU is concerned, it is always possible to carry out an update
within an RCU read-side critical section. For example, that RCU
read-side critical section might search for a given data element, and
then might acquire the update-side spinlock in order to update that
element, all while remaining in that RCU read-side critical section. Of
course, it is necessary to exit the RCU read-side critical section
before invoking synchronize_rcu(); however, this inconvenience can
be avoided through use of the call_rcu() and kfree_rcu() API
members described later in this document.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But how does the upgrade-to-write operation exclude other readers?    |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| It doesn't, just like normal RCU updates, which also do not exclude   |
| RCU readers.                                                          |
+-----------------------------------------------------------------------+
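
For concreteness, here is a minimal sketch of such a read-to-write
upgrade; update_gp_b() is a hypothetical function, and the recheck of
``gp`` under the lock guards against a concurrent removal:

   ::

       1 bool update_gp_b(int new_b)
       2 {
       3   struct foo *p;
       4
       5   rcu_read_lock();                 /* Begin read-side critical section. */
       6   p = rcu_dereference(gp);         /* Lockless search.                  */
       7   if (!p) {
       8     rcu_read_unlock();
       9     return false;
      10   }
      11   spin_lock(&gp_lock);             /* Upgrade: update-side spinlock.    */
      12   if (p == rcu_access_pointer(gp)) /* Recheck: still in the structure?  */
      13     WRITE_ONCE(p->b, new_b);       /* In-place update, reader-visible.  */
      14   spin_unlock(&gp_lock);
      15   rcu_read_unlock();               /* The update ran entirely inside    */
      16   return true;                     /* the read-side critical section.   */
      17 }
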
This guarantee allows lookup code to be shared between read-side and
update-side code, and was premeditated, appearing in the earliest
DYNIX/ptx RCU documentation.

Fundamental Non-Requirements
----------------------------

RCU provides extremely lightweight readers, and its read-side
guarantees, though quite useful, are correspondingly lightweight. It is
therefore all too easy to assume that RCU is guaranteeing more than it
really is. Of course, the list of things that RCU does not guarantee is
infinitely long, however, the following sections list a few
non-guarantees that have caused confusion. Except where otherwise noted,
these non-guarantees were premeditated.

#. `Readers Impose Minimal Ordering`_
#. `Readers Do Not Exclude Updaters`_
#. `Updaters Only Wait For Old Readers`_
#. `Grace Periods Don't Partition Read-Side Critical Sections`_
#. `Read-Side Critical Sections Don't Partition Grace Periods`_

Readers Impose Minimal Ordering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Reader-side markers such as rcu_read_lock() and
rcu_read_unlock() provide absolutely no ordering guarantees except
through their interaction with the grace-period APIs such as
synchronize_rcu(). To see this, consider the following pair of
threads:

   ::

       1 void thread0(void)
       2 {
       3   rcu_read_lock();
       4   WRITE_ONCE(x, 1);
       5   rcu_read_unlock();
       6   rcu_read_lock();
       7   WRITE_ONCE(y, 1);
       8   rcu_read_unlock();
       9 }
      10
      11 void thread1(void)
      12 {
      13   rcu_read_lock();
      14   r1 = READ_ONCE(y);
      15   rcu_read_unlock();
      16   rcu_read_lock();
      17   r2 = READ_ONCE(x);
      18   rcu_read_unlock();
      19 }

After thread0() and thread1() execute concurrently, it is quite
possible to have

   ::

      (r1 == 1 && r2 == 0)

(that is, ``y`` appears to have been assigned before ``x``), which would
not be possible if rcu_read_lock() and rcu_read_unlock() had
much in the way of ordering properties. But they do not, so the CPU is
within its rights to do significant reordering. This is by design: Any
significant ordering constraints would slow down these fast-path APIs.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Can't the compiler also reorder this code?                            |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| No, the volatile casts in READ_ONCE() and WRITE_ONCE()                |
| prevent the compiler from reordering in this particular case.         |
+-----------------------------------------------------------------------+

Readers Do Not Exclude Updaters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Neither rcu_read_lock() nor rcu_read_unlock() exclude updates.
All they do is to prevent grace periods from ending. The following
example illustrates this:

   ::

       1 void thread0(void)
       2 {
       3   rcu_read_lock();
       4   r1 = READ_ONCE(y);
       5   if (r1) {
       6     do_something_with_nonzero_x();
       7     r2 = READ_ONCE(x);
       8     WARN_ON(!r2); /* BUG!!! */
       9   }
      10   rcu_read_unlock();
      11 }
      12
      13 void thread1(void)
      14 {
      15   spin_lock(&my_lock);
      16   WRITE_ONCE(x, 1);
      17   WRITE_ONCE(y, 1);
      18   spin_unlock(&my_lock);
      19 }

McKenneythe fact is that rcu_read_lock() does not exclude much of anything 782be06c257SPaul E. McKenneyaside from subsequent grace periods, of which thread1() has none, so 783be06c257SPaul E. McKenneythe WARN_ON() can and does fire. 784ccc9971eSMauro Carvalho Chehab 785ccc9971eSMauro Carvalho ChehabUpdaters Only Wait For Old Readers 786ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 787ccc9971eSMauro Carvalho Chehab 788be06c257SPaul E. McKenneyIt might be tempting to assume that after synchronize_rcu() 789ccc9971eSMauro Carvalho Chehabcompletes, there are no readers executing. This temptation must be 790ccc9971eSMauro Carvalho Chehabavoided because new readers can start immediately after 791be06c257SPaul E. McKenneysynchronize_rcu() starts, and synchronize_rcu() is under no 792ccc9971eSMauro Carvalho Chehabobligation to wait for these new readers. 793ccc9971eSMauro Carvalho Chehab 794ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 795ccc9971eSMauro Carvalho Chehab| **Quick Quiz**: | 796ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 797ccc9971eSMauro Carvalho Chehab| Suppose that synchronize_rcu() did wait until *all* readers had | 798ccc9971eSMauro Carvalho Chehab| completed instead of waiting only on pre-existing readers. For how | 799ccc9971eSMauro Carvalho Chehab| long would the updater be able to rely on there being no readers? | 800ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 801ccc9971eSMauro Carvalho Chehab| **Answer**: | 802ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 803be06c257SPaul E. McKenney| For no time at all. Even if synchronize_rcu() were to wait until | 804ccc9971eSMauro Carvalho Chehab| all readers had completed, a new reader might start immediately after | 805be06c257SPaul E. McKenney| synchronize_rcu() completed. Therefore, the code following | 806be06c257SPaul E. McKenney| synchronize_rcu() can *never* rely on there being no readers. | 807ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 808ccc9971eSMauro Carvalho Chehab 809ccc9971eSMauro Carvalho ChehabGrace Periods Don't Partition Read-Side Critical Sections 810ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 811ccc9971eSMauro Carvalho Chehab 812ccc9971eSMauro Carvalho ChehabIt is tempting to assume that if any part of one RCU read-side critical 813ccc9971eSMauro Carvalho Chehabsection precedes a given grace period, and if any part of another RCU 814ccc9971eSMauro Carvalho Chehabread-side critical section follows that same grace period, then all of 815ccc9971eSMauro Carvalho Chehabthe first RCU read-side critical section must precede all of the second. 816ccc9971eSMauro Carvalho ChehabHowever, this just isn't the case: A single grace period does not 817ccc9971eSMauro Carvalho Chehabpartition the set of RCU read-side critical sections. 
An example of this 818ccc9971eSMauro Carvalho Chehabsituation can be illustrated as follows, where ``a``, ``b``, and ``c`` 819ccc9971eSMauro Carvalho Chehabare initially all zero: 820ccc9971eSMauro Carvalho Chehab 821ccc9971eSMauro Carvalho Chehab :: 822ccc9971eSMauro Carvalho Chehab 823ccc9971eSMauro Carvalho Chehab 1 void thread0(void) 824ccc9971eSMauro Carvalho Chehab 2 { 825ccc9971eSMauro Carvalho Chehab 3 rcu_read_lock(); 826ccc9971eSMauro Carvalho Chehab 4 WRITE_ONCE(a, 1); 827ccc9971eSMauro Carvalho Chehab 5 WRITE_ONCE(b, 1); 828ccc9971eSMauro Carvalho Chehab 6 rcu_read_unlock(); 829ccc9971eSMauro Carvalho Chehab 7 } 830ccc9971eSMauro Carvalho Chehab 8 831ccc9971eSMauro Carvalho Chehab 9 void thread1(void) 832ccc9971eSMauro Carvalho Chehab 10 { 833ccc9971eSMauro Carvalho Chehab 11 r1 = READ_ONCE(a); 834ccc9971eSMauro Carvalho Chehab 12 synchronize_rcu(); 835ccc9971eSMauro Carvalho Chehab 13 WRITE_ONCE(c, 1); 836ccc9971eSMauro Carvalho Chehab 14 } 837ccc9971eSMauro Carvalho Chehab 15 838ccc9971eSMauro Carvalho Chehab 16 void thread2(void) 839ccc9971eSMauro Carvalho Chehab 17 { 840ccc9971eSMauro Carvalho Chehab 18 rcu_read_lock(); 841ccc9971eSMauro Carvalho Chehab 19 r2 = READ_ONCE(b); 842ccc9971eSMauro Carvalho Chehab 20 r3 = READ_ONCE(c); 843ccc9971eSMauro Carvalho Chehab 21 rcu_read_unlock(); 844ccc9971eSMauro Carvalho Chehab 22 } 845ccc9971eSMauro Carvalho Chehab 846ccc9971eSMauro Carvalho ChehabIt turns out that the outcome: 847ccc9971eSMauro Carvalho Chehab 848ccc9971eSMauro Carvalho Chehab :: 849ccc9971eSMauro Carvalho Chehab 850ccc9971eSMauro Carvalho Chehab (r1 == 1 && r2 == 0 && r3 == 1) 851ccc9971eSMauro Carvalho Chehab 852ccc9971eSMauro Carvalho Chehabis entirely possible. The following figure shows how this can happen, 853ccc9971eSMauro Carvalho Chehabwith each circled ``QS`` indicating the point at which RCU recorded a 854ccc9971eSMauro Carvalho Chehab*quiescent state* for each thread, that is, a state in which RCU knows 855ccc9971eSMauro Carvalho Chehabthat the thread cannot be in the midst of an RCU read-side critical 856ccc9971eSMauro Carvalho Chehabsection that started before the current grace period: 857ccc9971eSMauro Carvalho Chehab 858ccc9971eSMauro Carvalho Chehab..
kernel-figure:: GPpartitionReaders1.svg 859ccc9971eSMauro Carvalho Chehab 860ccc9971eSMauro Carvalho ChehabIf it is necessary to partition RCU read-side critical sections in this 861ccc9971eSMauro Carvalho Chehabmanner, it is necessary to use two grace periods, where the first grace 862ccc9971eSMauro Carvalho Chehabperiod is known to end before the second grace period starts: 863ccc9971eSMauro Carvalho Chehab 864ccc9971eSMauro Carvalho Chehab :: 865ccc9971eSMauro Carvalho Chehab 866ccc9971eSMauro Carvalho Chehab 1 void thread0(void) 867ccc9971eSMauro Carvalho Chehab 2 { 868ccc9971eSMauro Carvalho Chehab 3 rcu_read_lock(); 869ccc9971eSMauro Carvalho Chehab 4 WRITE_ONCE(a, 1); 870ccc9971eSMauro Carvalho Chehab 5 WRITE_ONCE(b, 1); 871ccc9971eSMauro Carvalho Chehab 6 rcu_read_unlock(); 872ccc9971eSMauro Carvalho Chehab 7 } 873ccc9971eSMauro Carvalho Chehab 8 874ccc9971eSMauro Carvalho Chehab 9 void thread1(void) 875ccc9971eSMauro Carvalho Chehab 10 { 876ccc9971eSMauro Carvalho Chehab 11 r1 = READ_ONCE(a); 877ccc9971eSMauro Carvalho Chehab 12 synchronize_rcu(); 878ccc9971eSMauro Carvalho Chehab 13 WRITE_ONCE(c, 1); 879ccc9971eSMauro Carvalho Chehab 14 } 880ccc9971eSMauro Carvalho Chehab 15 881ccc9971eSMauro Carvalho Chehab 16 void thread2(void) 882ccc9971eSMauro Carvalho Chehab 17 { 883ccc9971eSMauro Carvalho Chehab 18 r2 = READ_ONCE(c); 884ccc9971eSMauro Carvalho Chehab 19 synchronize_rcu(); 885ccc9971eSMauro Carvalho Chehab 20 WRITE_ONCE(d, 1); 886ccc9971eSMauro Carvalho Chehab 21 } 887ccc9971eSMauro Carvalho Chehab 22 888ccc9971eSMauro Carvalho Chehab 23 void thread3(void) 889ccc9971eSMauro Carvalho Chehab 24 { 890ccc9971eSMauro Carvalho Chehab 25 rcu_read_lock(); 891ccc9971eSMauro Carvalho Chehab 26 r3 = READ_ONCE(b); 892ccc9971eSMauro Carvalho Chehab 27 r4 = READ_ONCE(d); 893ccc9971eSMauro Carvalho Chehab 28 rcu_read_unlock(); 894ccc9971eSMauro Carvalho Chehab 29 } 895ccc9971eSMauro Carvalho Chehab 896be06c257SPaul E. McKenneyHere, if ``(r1 == 1)``, then thread0()'s write to ``b`` must happen 897be06c257SPaul E. McKenneybefore the end of thread1()'s grace period. If in addition 898be06c257SPaul E. McKenney``(r4 == 1)``, then thread3()'s read from ``b`` must happen after 899be06c257SPaul E. McKenneythe beginning of thread2()'s grace period. If it is also the case 900be06c257SPaul E. McKenneythat ``(r2 == 1)``, then the end of thread1()'s grace period must 901be06c257SPaul E. McKenneyprecede the beginning of thread2()'s grace period. This means that 902ccc9971eSMauro Carvalho Chehabthe two RCU read-side critical sections cannot overlap, guaranteeing 903ccc9971eSMauro Carvalho Chehabthat ``(r3 == 1)``. As a result, the outcome: 904ccc9971eSMauro Carvalho Chehab 905ccc9971eSMauro Carvalho Chehab :: 906ccc9971eSMauro Carvalho Chehab 907ccc9971eSMauro Carvalho Chehab (r1 == 1 && r2 == 1 && r3 == 0 && r4 == 1) 908ccc9971eSMauro Carvalho Chehab 909ccc9971eSMauro Carvalho Chehabcannot happen. 910ccc9971eSMauro Carvalho Chehab 911ccc9971eSMauro Carvalho ChehabThis non-requirement was also non-premeditated, but became apparent when 912ccc9971eSMauro Carvalho Chehabstudying RCU's interaction with memory ordering.
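Readers wishing to check such outcomes mechanically can run litmus
tests through the herd7 simulator and the Linux-kernel memory model
(see ``tools/memory-model`` in the kernel source tree). The following
is a minimal sketch of the three-thread example at the start of this
section; the test name is arbitrary rather than taken from the
kernel's own litmus-test collection:

 ::

    C C-GPpartitionReaders

    (*
     * Sketch of the three-thread example above.
     * Expected result: Sometimes.
     *)

    {}

    P0(int *a, int *b)
    {
            rcu_read_lock();
            WRITE_ONCE(*a, 1);
            WRITE_ONCE(*b, 1);
            rcu_read_unlock();
    }

    P1(int *a, int *c)
    {
            int r1;

            r1 = READ_ONCE(*a);
            synchronize_rcu();
            WRITE_ONCE(*c, 1);
    }

    P2(int *b, int *c)
    {
            int r2;
            int r3;

            rcu_read_lock();
            r2 = READ_ONCE(*b);
            r3 = READ_ONCE(*c);
            rcu_read_unlock();
    }

    exists (1:r1=1 /\ 2:r2=0 /\ 2:r3=1)

Assuming a working herd7 installation, this test's ``exists`` clause
should be reported as satisfiable, in agreement with the outcome
discussed above.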
913ccc9971eSMauro Carvalho Chehab 914ccc9971eSMauro Carvalho ChehabRead-Side Critical Sections Don't Partition Grace Periods 915ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 916ccc9971eSMauro Carvalho Chehab 917ccc9971eSMauro Carvalho ChehabIt is also tempting to assume that if an RCU read-side critical section 918ccc9971eSMauro Carvalho Chehabhappens between a pair of grace periods, then those grace periods cannot 919ccc9971eSMauro Carvalho Chehaboverlap. However, this temptation leads nowhere good, as can be 920ccc9971eSMauro Carvalho Chehabillustrated by the following, with all variables initially zero: 921ccc9971eSMauro Carvalho Chehab 922ccc9971eSMauro Carvalho Chehab :: 923ccc9971eSMauro Carvalho Chehab 924ccc9971eSMauro Carvalho Chehab 1 void thread0(void) 925ccc9971eSMauro Carvalho Chehab 2 { 926ccc9971eSMauro Carvalho Chehab 3 rcu_read_lock(); 927ccc9971eSMauro Carvalho Chehab 4 WRITE_ONCE(a, 1); 928ccc9971eSMauro Carvalho Chehab 5 WRITE_ONCE(b, 1); 929ccc9971eSMauro Carvalho Chehab 6 rcu_read_unlock(); 930ccc9971eSMauro Carvalho Chehab 7 } 931ccc9971eSMauro Carvalho Chehab 8 932ccc9971eSMauro Carvalho Chehab 9 void thread1(void) 933ccc9971eSMauro Carvalho Chehab 10 { 934ccc9971eSMauro Carvalho Chehab 11 r1 = READ_ONCE(a); 935ccc9971eSMauro Carvalho Chehab 12 synchronize_rcu(); 936ccc9971eSMauro Carvalho Chehab 13 WRITE_ONCE(c, 1); 937ccc9971eSMauro Carvalho Chehab 14 } 938ccc9971eSMauro Carvalho Chehab 15 939ccc9971eSMauro Carvalho Chehab 16 void thread2(void) 940ccc9971eSMauro Carvalho Chehab 17 { 941ccc9971eSMauro Carvalho Chehab 18 rcu_read_lock(); 942ccc9971eSMauro Carvalho Chehab 19 WRITE_ONCE(d, 1); 943ccc9971eSMauro Carvalho Chehab 20 r2 = READ_ONCE(c); 944ccc9971eSMauro Carvalho Chehab 21 rcu_read_unlock(); 945ccc9971eSMauro Carvalho Chehab 22 } 946ccc9971eSMauro Carvalho Chehab 23 947ccc9971eSMauro Carvalho Chehab 24 void thread3(void) 948ccc9971eSMauro Carvalho Chehab 25 { 949ccc9971eSMauro Carvalho Chehab 26 r3 = READ_ONCE(d); 950ccc9971eSMauro Carvalho Chehab 27 synchronize_rcu(); 951ccc9971eSMauro Carvalho Chehab 28 WRITE_ONCE(e, 1); 952ccc9971eSMauro Carvalho Chehab 29 } 953ccc9971eSMauro Carvalho Chehab 30 954ccc9971eSMauro Carvalho Chehab 31 void thread4(void) 955ccc9971eSMauro Carvalho Chehab 32 { 956ccc9971eSMauro Carvalho Chehab 33 rcu_read_lock(); 957ccc9971eSMauro Carvalho Chehab 34 r4 = READ_ONCE(b); 958ccc9971eSMauro Carvalho Chehab 35 r5 = READ_ONCE(e); 959ccc9971eSMauro Carvalho Chehab 36 rcu_read_unlock(); 960ccc9971eSMauro Carvalho Chehab 37 } 961ccc9971eSMauro Carvalho Chehab 962ccc9971eSMauro Carvalho ChehabIn this case, the outcome: 963ccc9971eSMauro Carvalho Chehab 964ccc9971eSMauro Carvalho Chehab :: 965ccc9971eSMauro Carvalho Chehab 966ccc9971eSMauro Carvalho Chehab (r1 == 1 && r2 == 1 && r3 == 1 && r4 == 0 && r5 == 1) 967ccc9971eSMauro Carvalho Chehab 968ccc9971eSMauro Carvalho Chehabis entirely possible, as illustrated below: 969ccc9971eSMauro Carvalho Chehab 970ccc9971eSMauro Carvalho Chehab.. kernel-figure:: ReadersPartitionGP1.svg 971ccc9971eSMauro Carvalho Chehab 972ccc9971eSMauro Carvalho ChehabAgain, an RCU read-side critical section can overlap almost all of a 973ccc9971eSMauro Carvalho Chehabgiven grace period, just so long as it does not overlap the entire grace 974ccc9971eSMauro Carvalho Chehabperiod. As a result, an RCU read-side critical section cannot partition 975ccc9971eSMauro Carvalho Chehaba pair of RCU grace periods. 
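This five-thread example can likewise be expressed as a litmus test
for the Linux-kernel memory model. The following sketch (again with an
arbitrary test name) has an ``exists`` clause that herd7 should report
as satisfiable:

 ::

    C C-ReadersPartitionGP

    {}

    P0(int *a, int *b)
    {
            rcu_read_lock();
            WRITE_ONCE(*a, 1);
            WRITE_ONCE(*b, 1);
            rcu_read_unlock();
    }

    P1(int *a, int *c)
    {
            int r1;

            r1 = READ_ONCE(*a);
            synchronize_rcu();
            WRITE_ONCE(*c, 1);
    }

    P2(int *c, int *d)
    {
            int r2;

            rcu_read_lock();
            WRITE_ONCE(*d, 1);
            r2 = READ_ONCE(*c);
            rcu_read_unlock();
    }

    P3(int *d, int *e)
    {
            int r3;

            r3 = READ_ONCE(*d);
            synchronize_rcu();
            WRITE_ONCE(*e, 1);
    }

    P4(int *b, int *e)
    {
            int r4;
            int r5;

            rcu_read_lock();
            r4 = READ_ONCE(*b);
            r5 = READ_ONCE(*e);
            rcu_read_unlock();
    }

    exists (1:r1=1 /\ 2:r2=1 /\ 3:r3=1 /\ 4:r4=0 /\ 4:r5=1)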
976ccc9971eSMauro Carvalho Chehab 977ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 978ccc9971eSMauro Carvalho Chehab| **Quick Quiz**: | 979ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 980ccc9971eSMauro Carvalho Chehab| How long a sequence of grace periods, each separated by an RCU | 981ccc9971eSMauro Carvalho Chehab| read-side critical section, would be required to partition the RCU | 982ccc9971eSMauro Carvalho Chehab| read-side critical sections at the beginning and end of the chain? | 983ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 984ccc9971eSMauro Carvalho Chehab| **Answer**: | 985ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 986ccc9971eSMauro Carvalho Chehab| In theory, an infinite number. In practice, an unknown number that is | 987ccc9971eSMauro Carvalho Chehab| sensitive to both implementation details and timing considerations. | 988ccc9971eSMauro Carvalho Chehab| Therefore, even in practice, RCU users must abide by the theoretical | 989ccc9971eSMauro Carvalho Chehab| rather than the practical answer. | 990ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 991ccc9971eSMauro Carvalho Chehab 992ccc9971eSMauro Carvalho ChehabParallelism Facts of Life 993ccc9971eSMauro Carvalho Chehab------------------------- 994ccc9971eSMauro Carvalho Chehab 995ccc9971eSMauro Carvalho ChehabThese parallelism facts of life are by no means specific to RCU, but the 996ccc9971eSMauro Carvalho ChehabRCU implementation must abide by them. They therefore bear repeating: 997ccc9971eSMauro Carvalho Chehab 998ccc9971eSMauro Carvalho Chehab#. Any CPU or task may be delayed at any time, and any attempts to avoid 999ccc9971eSMauro Carvalho Chehab these delays by disabling preemption, interrupts, or whatever are 1000ccc9971eSMauro Carvalho Chehab completely futile. This is most obvious in preemptible user-level 1001ccc9971eSMauro Carvalho Chehab environments and in virtualized environments (where a given guest 1002ccc9971eSMauro Carvalho Chehab OS's VCPUs can be preempted at any time by the underlying 1003ccc9971eSMauro Carvalho Chehab hypervisor), but can also happen in bare-metal environments due to 1004ccc9971eSMauro Carvalho Chehab ECC errors, NMIs, and other hardware events. Although a delay of more 1005ccc9971eSMauro Carvalho Chehab than about 20 seconds can result in splats, the RCU implementation is 1006ccc9971eSMauro Carvalho Chehab obligated to use algorithms that can tolerate extremely long delays, 1007ccc9971eSMauro Carvalho Chehab but where “extremely long” is not long enough to allow wrap-around 1008ccc9971eSMauro Carvalho Chehab when incrementing a 64-bit counter. 1009ccc9971eSMauro Carvalho Chehab#. Both the compiler and the CPU can reorder memory accesses. Where it 1010ccc9971eSMauro Carvalho Chehab matters, RCU must use compiler directives and memory-barrier 1011ccc9971eSMauro Carvalho Chehab instructions to preserve ordering. 1012ccc9971eSMauro Carvalho Chehab#. Conflicting writes to memory locations in any given cache line will 1013ccc9971eSMauro Carvalho Chehab result in expensive cache misses. Greater numbers of concurrent 1014ccc9971eSMauro Carvalho Chehab writes and more-frequent concurrent writes will result in more 1015ccc9971eSMauro Carvalho Chehab dramatic slowdowns. 
RCU is therefore obligated to use algorithms that 1016ccc9971eSMauro Carvalho Chehab have sufficient locality to avoid significant performance and 1017ccc9971eSMauro Carvalho Chehab scalability problems. 1018ccc9971eSMauro Carvalho Chehab#. As a rough rule of thumb, only one CPU's worth of processing may be 1019ccc9971eSMauro Carvalho Chehab carried out under the protection of any given exclusive lock. RCU 1020ccc9971eSMauro Carvalho Chehab must therefore use scalable locking designs. 1021ccc9971eSMauro Carvalho Chehab#. Counters are finite, especially on 32-bit systems. RCU's use of 1022ccc9971eSMauro Carvalho Chehab counters must therefore tolerate counter wrap, or be designed such 1023ccc9971eSMauro Carvalho Chehab that counter wrap would take way more time than a single system is 1024ccc9971eSMauro Carvalho Chehab likely to run. An uptime of ten years is quite possible, a runtime of 1025ccc9971eSMauro Carvalho Chehab a century much less so. As an example of the latter, RCU's 1026ccc9971eSMauro Carvalho Chehab dyntick-idle nesting counter allows 54 bits for interrupt nesting 1027ccc9971eSMauro Carvalho Chehab level (this counter is 64 bits even on a 32-bit system). Overflowing 1028ccc9971eSMauro Carvalho Chehab this counter requires 2\ :sup:`54` half-interrupts on a given CPU 1029ccc9971eSMauro Carvalho Chehab without that CPU ever going idle. If a half-interrupt happened every 1030ccc9971eSMauro Carvalho Chehab microsecond, it would take 570 years of runtime to overflow this 1031ccc9971eSMauro Carvalho Chehab counter, which is currently believed to be an acceptably long time. 1032ccc9971eSMauro Carvalho Chehab#. Linux systems can have thousands of CPUs running a single Linux 1033ccc9971eSMauro Carvalho Chehab kernel in a single shared-memory environment. RCU must therefore pay 1034ccc9971eSMauro Carvalho Chehab close attention to high-end scalability. 1035ccc9971eSMauro Carvalho Chehab 1036ccc9971eSMauro Carvalho ChehabThis last parallelism fact of life means that RCU must pay special 1037ccc9971eSMauro Carvalho Chehabattention to the preceding facts of life. The idea that Linux might 1038ccc9971eSMauro Carvalho Chehabscale to systems with thousands of CPUs would have been met with some 1039ccc9971eSMauro Carvalho Chehabskepticism in the 1990s, but these requirements would otherwise 1040ccc9971eSMauro Carvalho Chehabhave been unsurprising, even in the early 1990s. 1041ccc9971eSMauro Carvalho Chehab 1042ccc9971eSMauro Carvalho ChehabQuality-of-Implementation Requirements 1043ccc9971eSMauro Carvalho Chehab-------------------------------------- 1044ccc9971eSMauro Carvalho Chehab 1045ccc9971eSMauro Carvalho ChehabThese sections list quality-of-implementation requirements. Although an 1046ccc9971eSMauro Carvalho ChehabRCU implementation that ignores these requirements could still be used, 1047ccc9971eSMauro Carvalho Chehabit would likely be subject to limitations that would make it 1048ccc9971eSMauro Carvalho Chehabinappropriate for industrial-strength production use. Classes of 1049ccc9971eSMauro Carvalho Chehabquality-of-implementation requirements are as follows: 1050ccc9971eSMauro Carvalho Chehab 105107335c16SJoel Fernandes (Google)#. `Specialization`_ 105207335c16SJoel Fernandes (Google)#. `Performance and Scalability`_ 105307335c16SJoel Fernandes (Google)#. `Forward Progress`_ 105407335c16SJoel Fernandes (Google)#. `Composability`_ 105507335c16SJoel Fernandes (Google)#.
`Corner Cases`_ 1056ccc9971eSMauro Carvalho Chehab 1057ccc9971eSMauro Carvalho ChehabThese classes are covered in the following sections. 1058ccc9971eSMauro Carvalho Chehab 1059ccc9971eSMauro Carvalho ChehabSpecialization 1060ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~ 1061ccc9971eSMauro Carvalho Chehab 1062ccc9971eSMauro Carvalho ChehabRCU is and always has been intended primarily for read-mostly 1063ccc9971eSMauro Carvalho Chehabsituations, which means that RCU's read-side primitives are optimized, 1064ccc9971eSMauro Carvalho Chehaboften at the expense of its update-side primitives. Experience thus far 1065ccc9971eSMauro Carvalho Chehabis captured by the following list of situations: 1066ccc9971eSMauro Carvalho Chehab 1067ccc9971eSMauro Carvalho Chehab#. Read-mostly data, where stale and inconsistent data is not a problem: 1068ccc9971eSMauro Carvalho Chehab RCU works great! 1069ccc9971eSMauro Carvalho Chehab#. Read-mostly data, where data must be consistent: RCU works well. 1070ccc9971eSMauro Carvalho Chehab#. Read-write data, where data must be consistent: RCU *might* work OK. 1071ccc9971eSMauro Carvalho Chehab Or not. 1072ccc9971eSMauro Carvalho Chehab#. Write-mostly data, where data must be consistent: RCU is very 1073ccc9971eSMauro Carvalho Chehab unlikely to be the right tool for the job, with the following 1074ccc9971eSMauro Carvalho Chehab exceptions, where RCU can provide: 1075ccc9971eSMauro Carvalho Chehab 1076ccc9971eSMauro Carvalho Chehab a. Existence guarantees for update-friendly mechanisms. 1077ccc9971eSMauro Carvalho Chehab b. Wait-free read-side primitives for real-time use. 1078ccc9971eSMauro Carvalho Chehab 1079ccc9971eSMauro Carvalho ChehabThis focus on read-mostly situations means that RCU must interoperate 1080be06c257SPaul E. McKenneywith other synchronization primitives. For example, the add_gp() and 1081be06c257SPaul E. McKenneyremove_gp_synchronous() examples discussed earlier use RCU to 1082ccc9971eSMauro Carvalho Chehabprotect readers and locking to coordinate updaters. However, the need 1083ccc9971eSMauro Carvalho Chehabextends much farther, requiring that a variety of synchronization 1084ccc9971eSMauro Carvalho Chehabprimitives be legal within RCU read-side critical sections, including 1085ccc9971eSMauro Carvalho Chehabspinlocks, sequence locks, atomic operations, reference counters, and 1086ccc9971eSMauro Carvalho Chehabmemory barriers. 1087ccc9971eSMauro Carvalho Chehab 1088ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1089ccc9971eSMauro Carvalho Chehab| **Quick Quiz**: | 1090ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1091ccc9971eSMauro Carvalho Chehab| What about sleeping locks? | 1092ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1093ccc9971eSMauro Carvalho Chehab| **Answer**: | 1094ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1095ccc9971eSMauro Carvalho Chehab| These are forbidden within Linux-kernel RCU read-side critical | 1096ccc9971eSMauro Carvalho Chehab| sections because it is not legal to place a quiescent state (in this | 1097ccc9971eSMauro Carvalho Chehab| case, voluntary context switch) within an RCU read-side critical | 1098ccc9971eSMauro Carvalho Chehab| section.
However, sleeping locks may be used within userspace RCU | 1099ccc9971eSMauro Carvalho Chehab| read-side critical sections, and also within Linux-kernel sleepable | 11004f8af077SNícolas F. R. A. Prado| RCU `(SRCU) <Sleepable RCU_>`__ read-side critical sections. In | 1101ccc9971eSMauro Carvalho Chehab| addition, the -rt patchset turns spinlocks into sleeping locks so | 1102ccc9971eSMauro Carvalho Chehab| that the corresponding critical sections can be preempted, which also | 1103ccc9971eSMauro Carvalho Chehab| means that these sleeplockified spinlocks (but not other sleeping | 1104ccc9971eSMauro Carvalho Chehab| locks!) may be acquired within -rt-Linux-kernel RCU read-side critical | 1105ccc9971eSMauro Carvalho Chehab| sections. | 1106ccc9971eSMauro Carvalho Chehab| Note that it *is* legal for a normal RCU read-side critical section | 1107ccc9971eSMauro Carvalho Chehab| to conditionally acquire a sleeping lock (as in | 1108be06c257SPaul E. McKenney| mutex_trylock()), but only as long as it does not loop | 1109ccc9971eSMauro Carvalho Chehab| indefinitely attempting to conditionally acquire that sleeping lock. | 1110be06c257SPaul E. McKenney| The key point is that things like mutex_trylock() either return | 1111ccc9971eSMauro Carvalho Chehab| with the mutex held, or return an error indication if the mutex was | 1112be06c257SPaul E. McKenney| not immediately available. Either way, mutex_trylock() returns | 1113ccc9971eSMauro Carvalho Chehab| immediately without sleeping. | 1114ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1115ccc9971eSMauro Carvalho Chehab 1116ccc9971eSMauro Carvalho ChehabIt often comes as a surprise that many algorithms do not require a 1117ccc9971eSMauro Carvalho Chehabconsistent view of data, but many can function in that mode, with 1118ccc9971eSMauro Carvalho Chehabnetwork routing being the poster child. Internet routing algorithms take 1119ccc9971eSMauro Carvalho Chehabsignificant time to propagate updates, so that by the time an update 1120ccc9971eSMauro Carvalho Chehabarrives at a given system, that system has been sending network traffic 1121ccc9971eSMauro Carvalho Chehabthe wrong way for a considerable length of time. Having a few threads 1122ccc9971eSMauro Carvalho Chehabcontinue to send traffic the wrong way for a few more milliseconds is 1123ccc9971eSMauro Carvalho Chehabclearly not a problem: In the worst case, TCP retransmissions will 1124ccc9971eSMauro Carvalho Chehabeventually get the data where it needs to go. In general, when tracking 1125ccc9971eSMauro Carvalho Chehabthe state of the universe outside of the computer, some level of 1126ccc9971eSMauro Carvalho Chehabinconsistency must be tolerated due to speed-of-light delays if nothing 1127ccc9971eSMauro Carvalho Chehabelse. 1128ccc9971eSMauro Carvalho Chehab 1129ccc9971eSMauro Carvalho ChehabFurthermore, uncertainty about external state is inherent in many cases. 1130ccc9971eSMauro Carvalho ChehabFor example, a pair of veterinarians might use heartbeat to determine 1131ccc9971eSMauro Carvalho Chehabwhether or not a given cat was alive. But how long should they wait 1132ccc9971eSMauro Carvalho Chehabafter the last heartbeat to decide that the cat is in fact dead? Waiting 1133ccc9971eSMauro Carvalho Chehabless than 400 milliseconds makes no sense because this would mean that a 1134ccc9971eSMauro Carvalho Chehabrelaxed cat would be considered to cycle between death and life more 1135ccc9971eSMauro Carvalho Chehabthan 100 times per minute.
Moreover, just as with human beings, a cat's 1136ccc9971eSMauro Carvalho Chehabheart might stop for some period of time, so the exact wait period is a 1137ccc9971eSMauro Carvalho Chehabjudgment call. One of our pair of veterinarians might wait 30 seconds 1138ccc9971eSMauro Carvalho Chehabbefore pronouncing the cat dead, while the other might insist on waiting 1139ccc9971eSMauro Carvalho Chehaba full minute. The two veterinarians would then disagree on the state of 1140ccc9971eSMauro Carvalho Chehabthe cat during the final 30 seconds of the minute following the last 1141ccc9971eSMauro Carvalho Chehabheartbeat. 1142ccc9971eSMauro Carvalho Chehab 1143ccc9971eSMauro Carvalho ChehabInterestingly enough, this same situation applies to hardware. When push 1144ccc9971eSMauro Carvalho Chehabcomes to shove, how do we tell whether or not some external server has 1145ccc9971eSMauro Carvalho Chehabfailed? We send messages to it periodically, and declare it failed if we 1146ccc9971eSMauro Carvalho Chehabdon't receive a response within a given period of time. Policy decisions 1147ccc9971eSMauro Carvalho Chehabcan usually tolerate short periods of inconsistency. The policy was 1148ccc9971eSMauro Carvalho Chehabdecided some time ago, and is only now being put into effect, so a few 1149ccc9971eSMauro Carvalho Chehabmilliseconds of delay is normally inconsequential. 1150ccc9971eSMauro Carvalho Chehab 1151ccc9971eSMauro Carvalho ChehabHowever, there are algorithms that absolutely must see consistent data. 1152ccc9971eSMauro Carvalho ChehabFor example, the translation from a user-level SystemV semaphore ID 1153ccc9971eSMauro Carvalho Chehabto the corresponding in-kernel data structure is protected by RCU, but 1154ccc9971eSMauro Carvalho Chehabit is absolutely forbidden to update a semaphore that has just been 1155ccc9971eSMauro Carvalho Chehabremoved. In the Linux kernel, this need for consistency is accommodated 1156ccc9971eSMauro Carvalho Chehabby acquiring spinlocks located in the in-kernel data structure from 1157ccc9971eSMauro Carvalho Chehabwithin the RCU read-side critical section, and this is indicated by the 1158ccc9971eSMauro Carvalho Chehabgreen box in the figure above. Many other techniques may be used, and 1159ccc9971eSMauro Carvalho Chehabare in fact used within the Linux kernel. 1160ccc9971eSMauro Carvalho Chehab 1161ccc9971eSMauro Carvalho ChehabIn short, RCU is not required to maintain consistency, and other 1162ccc9971eSMauro Carvalho Chehabmechanisms may be used in concert with RCU when consistency is required. 1163ccc9971eSMauro Carvalho ChehabRCU's specialization allows it to do its job extremely well, and its 1164ccc9971eSMauro Carvalho Chehabability to interoperate with other synchronization mechanisms allows the 1165ccc9971eSMauro Carvalho Chehabright mix of synchronization tools to be used for a given job. 1166ccc9971eSMauro Carvalho Chehab 1167ccc9971eSMauro Carvalho ChehabPerformance and Scalability 1168ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1169ccc9971eSMauro Carvalho Chehab 1170ccc9971eSMauro Carvalho ChehabEnergy efficiency is a critical component of performance today, and 1171ccc9971eSMauro Carvalho ChehabLinux-kernel RCU implementations must therefore avoid unnecessarily 1172ccc9971eSMauro Carvalho Chehabawakening idle CPUs. I cannot claim that this requirement was 1173ccc9971eSMauro Carvalho Chehabpremeditated.
In fact, I learned of it during a telephone conversation 1174ccc9971eSMauro Carvalho Chehabin which I was given “frank and open” feedback on the importance of 1175ccc9971eSMauro Carvalho Chehabenergy efficiency in battery-powered systems and on specific 1176ccc9971eSMauro Carvalho Chehabenergy-efficiency shortcomings of the Linux-kernel RCU implementation. 1177ccc9971eSMauro Carvalho ChehabIn my experience, the battery-powered embedded community will consider 1178ccc9971eSMauro Carvalho Chehabany unnecessary wakeups to be extremely unfriendly acts. So much so that 1179ccc9971eSMauro Carvalho Chehabmere Linux-kernel-mailing-list posts are insufficient to vent their ire. 1180ccc9971eSMauro Carvalho Chehab 1181ccc9971eSMauro Carvalho ChehabMemory consumption is not particularly important in most situations, 1182ccc9971eSMauro Carvalho Chehaband has become decreasingly so as memory sizes have expanded and memory 1183ccc9971eSMauro Carvalho Chehabcosts have plummeted. However, as I learned from Matt Mackall's 1184ccc9971eSMauro Carvalho Chehab`bloatwatch <http://elinux.org/Linux_Tiny-FAQ>`__ efforts, memory 1185ccc9971eSMauro Carvalho Chehabfootprint is critically important on single-CPU systems with 118681ad58beSSebastian Andrzej Siewiornon-preemptible (``CONFIG_PREEMPTION=n``) kernels, and thus `tiny 11879d3a0485SPaul GortmakerRCU <https://lore.kernel.org/r/20090113221724.GA15307@linux.vnet.ibm.com>`__ 1188ccc9971eSMauro Carvalho Chehabwas born. Josh Triplett has since taken over the small-memory banner 1189ccc9971eSMauro Carvalho Chehabwith his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__ 11904f8af077SNícolas F. R. A. Pradoproject, which resulted in `SRCU <Sleepable RCU_>`__ becoming optional 1191ccc9971eSMauro Carvalho Chehabfor those kernels not needing it. 1192ccc9971eSMauro Carvalho Chehab 1193ccc9971eSMauro Carvalho ChehabThe remaining performance requirements are, for the most part, 1194ccc9971eSMauro Carvalho Chehabunsurprising. For example, in keeping with RCU's read-side 1195be06c257SPaul E. McKenneyspecialization, rcu_dereference() should have negligible overhead 1196ccc9971eSMauro Carvalho Chehab(for example, suppression of a few minor compiler optimizations). 1197be06c257SPaul E. McKenneySimilarly, in non-preemptible environments, rcu_read_lock() and 1198be06c257SPaul E. McKenneyrcu_read_unlock() should have exactly zero overhead. 1199ccc9971eSMauro Carvalho Chehab 1200ccc9971eSMauro Carvalho ChehabIn preemptible environments, in the case where the RCU read-side 1201ccc9971eSMauro Carvalho Chehabcritical section was not preempted (as will be the case for the 1202be06c257SPaul E. McKenneyhighest-priority real-time process), rcu_read_lock() and 1203be06c257SPaul E. McKenneyrcu_read_unlock() should have minimal overhead. In particular, they 1204ccc9971eSMauro Carvalho Chehabshould not contain atomic read-modify-write operations, memory-barrier 1205ccc9971eSMauro Carvalho Chehabinstructions, preemption disabling, interrupt disabling, or backwards 1206ccc9971eSMauro Carvalho Chehabbranches. However, in the case where the RCU read-side critical section 1207be06c257SPaul E. McKenneywas preempted, rcu_read_unlock() may acquire spinlocks and disable 1208ccc9971eSMauro Carvalho Chehabinterrupts.
This is why it is better to nest an RCU read-side critical 1209ccc9971eSMauro Carvalho Chehabsection within a preempt-disable region than vice versa, at least in 1210ccc9971eSMauro Carvalho Chehabcases where that critical section is short enough to avoid unduly 1211ccc9971eSMauro Carvalho Chehabdegrading real-time latencies. 1212ccc9971eSMauro Carvalho Chehab 1213be06c257SPaul E. McKenneyThe synchronize_rcu() grace-period-wait primitive is optimized for 1214ccc9971eSMauro Carvalho Chehabthroughput. It may therefore incur several milliseconds of latency in 1215ccc9971eSMauro Carvalho Chehabaddition to the duration of the longest RCU read-side critical section. 1216ccc9971eSMauro Carvalho ChehabOn the other hand, multiple concurrent invocations of 1217be06c257SPaul E. McKenneysynchronize_rcu() are required to use batching optimizations so that 1218ccc9971eSMauro Carvalho Chehabthey can be satisfied by a single underlying grace-period-wait 1219ccc9971eSMauro Carvalho Chehaboperation. For example, in the Linux kernel, it is not unusual for a 1220ccc9971eSMauro Carvalho Chehabsingle grace-period-wait operation to serve more than `1,000 separate 1221ccc9971eSMauro Carvalho Chehabinvocations <https://www.usenix.org/conference/2004-usenix-annual-technical-conference/making-rcu-safe-deep-sub-millisecond-response>`__ 1222be06c257SPaul E. McKenneyof synchronize_rcu(), thus amortizing the per-invocation overhead 1223ccc9971eSMauro Carvalho Chehabdown to nearly zero. However, the grace-period optimization is also 1224ccc9971eSMauro Carvalho Chehabrequired to avoid measurable degradation of real-time scheduling and 1225ccc9971eSMauro Carvalho Chehabinterrupt latencies. 1226ccc9971eSMauro Carvalho Chehab 1227be06c257SPaul E. McKenneyIn some cases, the multi-millisecond synchronize_rcu() latencies are 1228be06c257SPaul E. McKenneyunacceptable. In these cases, synchronize_rcu_expedited() may be 1229ccc9971eSMauro Carvalho Chehabused instead, reducing the grace-period latency down to a few tens of 1230ccc9971eSMauro Carvalho Chehabmicroseconds on small systems, at least in cases where the RCU read-side 1231ccc9971eSMauro Carvalho Chehabcritical sections are short. There are currently no special latency 1232be06c257SPaul E. McKenneyrequirements for synchronize_rcu_expedited() on large systems, but, 1233ccc9971eSMauro Carvalho Chehabconsistent with the empirical nature of the RCU specification, that is 1234ccc9971eSMauro Carvalho Chehabsubject to change. However, there most definitely are scalability 1235be06c257SPaul E. McKenneyrequirements: A storm of synchronize_rcu_expedited() invocations on 1236ccc9971eSMauro Carvalho Chehab4096 CPUs should at least make reasonable forward progress. In return 1237be06c257SPaul E. McKenneyfor its shorter latencies, synchronize_rcu_expedited() is permitted 1238ccc9971eSMauro Carvalho Chehabto impose modest degradation of real-time latency on non-idle online 1239ccc9971eSMauro Carvalho ChehabCPUs. Here, “modest” means roughly the same latency degradation as a 1240ccc9971eSMauro Carvalho Chehabscheduling-clock interrupt. 1241ccc9971eSMauro Carvalho Chehab 1242ccc9971eSMauro Carvalho ChehabThere are a number of situations where even 1243be06c257SPaul E. McKenneysynchronize_rcu_expedited()'s reduced grace-period latency is 1244be06c257SPaul E. McKenneyunacceptable. In these situations, the asynchronous call_rcu() can 1245be06c257SPaul E. 
McKenneybe used in place of synchronize_rcu() as follows: 1246ccc9971eSMauro Carvalho Chehab 1247ccc9971eSMauro Carvalho Chehab :: 1248ccc9971eSMauro Carvalho Chehab 1249ccc9971eSMauro Carvalho Chehab 1 struct foo { 1250ccc9971eSMauro Carvalho Chehab 2 int a; 1251ccc9971eSMauro Carvalho Chehab 3 int b; 1252ccc9971eSMauro Carvalho Chehab 4 struct rcu_head rh; 1253ccc9971eSMauro Carvalho Chehab 5 }; 1254ccc9971eSMauro Carvalho Chehab 6 1255ccc9971eSMauro Carvalho Chehab 7 static void remove_gp_cb(struct rcu_head *rhp) 1256ccc9971eSMauro Carvalho Chehab 8 { 1257ccc9971eSMauro Carvalho Chehab 9 struct foo *p = container_of(rhp, struct foo, rh); 1258ccc9971eSMauro Carvalho Chehab 10 1259ccc9971eSMauro Carvalho Chehab 11 kfree(p); 1260ccc9971eSMauro Carvalho Chehab 12 } 1261ccc9971eSMauro Carvalho Chehab 13 1262ccc9971eSMauro Carvalho Chehab 14 bool remove_gp_asynchronous(void) 1263ccc9971eSMauro Carvalho Chehab 15 { 1264ccc9971eSMauro Carvalho Chehab 16 struct foo *p; 1265ccc9971eSMauro Carvalho Chehab 17 1266ccc9971eSMauro Carvalho Chehab 18 spin_lock(&gp_lock); 1267ccc9971eSMauro Carvalho Chehab 19 p = rcu_access_pointer(gp); 1268ccc9971eSMauro Carvalho Chehab 20 if (!p) { 1269ccc9971eSMauro Carvalho Chehab 21 spin_unlock(&gp_lock); 1270ccc9971eSMauro Carvalho Chehab 22 return false; 1271ccc9971eSMauro Carvalho Chehab 23 } 1272ccc9971eSMauro Carvalho Chehab 24 rcu_assign_pointer(gp, NULL); 1273ccc9971eSMauro Carvalho Chehab 25 call_rcu(&p->rh, remove_gp_cb); 1274ccc9971eSMauro Carvalho Chehab 26 spin_unlock(&gp_lock); 1275ccc9971eSMauro Carvalho Chehab 27 return true; 1276ccc9971eSMauro Carvalho Chehab 28 } 1277ccc9971eSMauro Carvalho Chehab 1278ccc9971eSMauro Carvalho ChehabA definition of ``struct foo`` is finally needed, and appears on 1279be06c257SPaul E. McKenneylines 1-5. The function remove_gp_cb() is passed to call_rcu() 1280ccc9971eSMauro Carvalho Chehabon line 25, and will be invoked after the end of a subsequent grace 1281be06c257SPaul E. McKenneyperiod. This gets the same effect as remove_gp_synchronous(), but 1282ccc9971eSMauro Carvalho Chehabwithout forcing the updater to wait for a grace period to elapse. The 1283be06c257SPaul E. McKenneycall_rcu() function may be used in a number of situations where 1284be06c257SPaul E. McKenneyneither synchronize_rcu() nor synchronize_rcu_expedited() would 1285be06c257SPaul E. McKenneybe legal, including within preempt-disable code, local_bh_disable() 1286ccc9971eSMauro Carvalho Chehabcode, interrupt-disable code, and interrupt handlers. However, even 1287be06c257SPaul E. McKenneycall_rcu() is illegal within NMI handlers and from idle and offline 1288be06c257SPaul E. McKenneyCPUs. The callback function (remove_gp_cb() in this case) will be 1289ccc9971eSMauro Carvalho Chehabexecuted within softirq (software interrupt) environment within the 1290ccc9971eSMauro Carvalho ChehabLinux kernel, either within a real softirq handler or under the 1291be06c257SPaul E. McKenneyprotection of local_bh_disable(). In both the Linux kernel and in 1292ccc9971eSMauro Carvalho Chehabuserspace, it is bad practice to write an RCU callback function that 1293ccc9971eSMauro Carvalho Chehabtakes too long. Long-running operations should be relegated to separate 1294ccc9971eSMauro Carvalho Chehabthreads or (in the Linux kernel) workqueues. 
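One way to honor this practice is to have the RCU callback do nothing
but hand the real work off to a workqueue, as in the following minimal
sketch. The ``foo_deferred`` structure and the function names here are
illustrative, not taken from the examples above:

 ::

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct foo_deferred {
            struct rcu_head rh;
            struct work_struct work;
            /* ... payload requiring long-running cleanup ... */
    };

    static void foo_workfn(struct work_struct *wp)
    {
            struct foo_deferred *p = container_of(wp, struct foo_deferred, work);

            /* Process context: lengthy cleanup (and even sleeping) is OK. */
            kfree(p);
    }

    static void foo_rcu_cb(struct rcu_head *rhp)
    {
            struct foo_deferred *p = container_of(rhp, struct foo_deferred, rh);

            /* Softirq context: do the bare minimum, then punt to a workqueue. */
            INIT_WORK(&p->work, foo_workfn);
            schedule_work(&p->work);
    }

The updater would then pass foo_rcu_cb() to call_rcu(), just as
remove_gp_asynchronous() passes remove_gp_cb() above.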
1295ccc9971eSMauro Carvalho Chehab 1296ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1297ccc9971eSMauro Carvalho Chehab| **Quick Quiz**: | 1298ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1299be06c257SPaul E. McKenney| Why does line 19 use rcu_access_pointer()? After all, | 1300be06c257SPaul E. McKenney| call_rcu() on line 25 stores into the structure, which would | 1301ccc9971eSMauro Carvalho Chehab| interact badly with concurrent insertions. Doesn't this mean that | 1302be06c257SPaul E. McKenney| rcu_dereference() is required? | 1303ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1304ccc9971eSMauro Carvalho Chehab| **Answer**: | 1305ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1306ccc9971eSMauro Carvalho Chehab| Presumably the ``gp_lock`` acquired on line 18 excludes any | 1307be06c257SPaul E. McKenney| changes, including any insertions that rcu_dereference() would | 1308ccc9971eSMauro Carvalho Chehab| protect against. Therefore, any insertions will be delayed until | 1309ccc9971eSMauro Carvalho Chehab| after ``gp_lock`` is released on line 26, which in turn means that | 1310be06c257SPaul E. McKenney| rcu_access_pointer() suffices. | 1311ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1312ccc9971eSMauro Carvalho Chehab 1313be06c257SPaul E. McKenneyHowever, all that remove_gp_cb() is doing is invoking kfree() on 1314ccc9971eSMauro Carvalho Chehabthe data element. This is a common idiom, and is supported by 1315be06c257SPaul E. McKenneykfree_rcu(), which allows “fire and forget” operation as shown 1316ccc9971eSMauro Carvalho Chehabbelow: 1317ccc9971eSMauro Carvalho Chehab 1318ccc9971eSMauro Carvalho Chehab :: 1319ccc9971eSMauro Carvalho Chehab 1320ccc9971eSMauro Carvalho Chehab 1 struct foo { 1321ccc9971eSMauro Carvalho Chehab 2 int a; 1322ccc9971eSMauro Carvalho Chehab 3 int b; 1323ccc9971eSMauro Carvalho Chehab 4 struct rcu_head rh; 1324ccc9971eSMauro Carvalho Chehab 5 }; 1325ccc9971eSMauro Carvalho Chehab 6 1326ccc9971eSMauro Carvalho Chehab 7 bool remove_gp_faf(void) 1327ccc9971eSMauro Carvalho Chehab 8 { 1328ccc9971eSMauro Carvalho Chehab 9 struct foo *p; 1329ccc9971eSMauro Carvalho Chehab 10 1330ccc9971eSMauro Carvalho Chehab 11 spin_lock(&gp_lock); 1331ccc9971eSMauro Carvalho Chehab 12 p = rcu_dereference(gp); 1332ccc9971eSMauro Carvalho Chehab 13 if (!p) { 1333ccc9971eSMauro Carvalho Chehab 14 spin_unlock(&gp_lock); 1334ccc9971eSMauro Carvalho Chehab 15 return false; 1335ccc9971eSMauro Carvalho Chehab 16 } 1336ccc9971eSMauro Carvalho Chehab 17 rcu_assign_pointer(gp, NULL); 1337ccc9971eSMauro Carvalho Chehab 18 kfree_rcu(p, rh); 1338ccc9971eSMauro Carvalho Chehab 19 spin_unlock(&gp_lock); 1339ccc9971eSMauro Carvalho Chehab 20 return true; 1340ccc9971eSMauro Carvalho Chehab 21 } 1341ccc9971eSMauro Carvalho Chehab 1342be06c257SPaul E. McKenneyNote that remove_gp_faf() simply invokes kfree_rcu() and 1343ccc9971eSMauro Carvalho Chehabproceeds, without any need to pay any further attention to the 1344be06c257SPaul E. McKenneysubsequent grace period and kfree(). It is permissible to invoke 1345be06c257SPaul E. McKenneykfree_rcu() from the same environments as for call_rcu(). 1346be06c257SPaul E. McKenneyInterestingly enough, DYNIX/ptx had the equivalents of call_rcu() 1347be06c257SPaul E.
McKenneyand kfree_rcu(), but not synchronize_rcu(). This was due to the 1348ccc9971eSMauro Carvalho Chehabfact that RCU was not heavily used within DYNIX/ptx, so the very few 1349be06c257SPaul E. McKenneyplaces that needed something like synchronize_rcu() simply 1350ccc9971eSMauro Carvalho Chehabopen-coded it. 1351ccc9971eSMauro Carvalho Chehab 1352ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1353ccc9971eSMauro Carvalho Chehab| **Quick Quiz**: | 1354ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1355be06c257SPaul E. McKenney| Earlier it was claimed that call_rcu() and kfree_rcu() | 1356ccc9971eSMauro Carvalho Chehab| allowed updaters to avoid being blocked by readers. But how can that | 1357ccc9971eSMauro Carvalho Chehab| be correct, given that the invocation of the callback and the freeing | 1358ccc9971eSMauro Carvalho Chehab| of the memory (respectively) must still wait for a grace period to | 1359ccc9971eSMauro Carvalho Chehab| elapse? | 1360ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1361ccc9971eSMauro Carvalho Chehab| **Answer**: | 1362ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1363ccc9971eSMauro Carvalho Chehab| We could define things this way, but keep in mind that this sort of | 1364ccc9971eSMauro Carvalho Chehab| definition would say that updates in garbage-collected languages | 1365ccc9971eSMauro Carvalho Chehab| cannot complete until the next time the garbage collector runs, which | 1366ccc9971eSMauro Carvalho Chehab| does not seem at all reasonable. The key point is that in most cases, | 1367be06c257SPaul E. McKenney| an updater using either call_rcu() or kfree_rcu() can proceed | 1368be06c257SPaul E. McKenney| to the next update as soon as it has invoked call_rcu() or | 1369be06c257SPaul E. McKenney| kfree_rcu(), without having to wait for a subsequent grace | 1370ccc9971eSMauro Carvalho Chehab| period. | 1371ccc9971eSMauro Carvalho Chehab+-----------------------------------------------------------------------+ 1372ccc9971eSMauro Carvalho Chehab 1373ccc9971eSMauro Carvalho ChehabBut what if the updater must wait for the completion of code to be 1374ccc9971eSMauro Carvalho Chehabexecuted after the end of the grace period, but has other tasks that can 1375ccc9971eSMauro Carvalho Chehabbe carried out in the meantime? The polling-style 1376be06c257SPaul E. 
McKenneyget_state_synchronize_rcu() and cond_synchronize_rcu() functions 1377ccc9971eSMauro Carvalho Chehabmay be used for this purpose, as shown below: 1378ccc9971eSMauro Carvalho Chehab 1379ccc9971eSMauro Carvalho Chehab :: 1380ccc9971eSMauro Carvalho Chehab 1381ccc9971eSMauro Carvalho Chehab 1 bool remove_gp_poll(void) 1382ccc9971eSMauro Carvalho Chehab 2 { 1383ccc9971eSMauro Carvalho Chehab 3 struct foo *p; 1384ccc9971eSMauro Carvalho Chehab 4 unsigned long s; 1385ccc9971eSMauro Carvalho Chehab 5 1386ccc9971eSMauro Carvalho Chehab 6 spin_lock(&gp_lock); 1387ccc9971eSMauro Carvalho Chehab 7 p = rcu_access_pointer(gp); 1388ccc9971eSMauro Carvalho Chehab 8 if (!p) { 1389ccc9971eSMauro Carvalho Chehab 9 spin_unlock(&gp_lock); 1390ccc9971eSMauro Carvalho Chehab 10 return false; 1391ccc9971eSMauro Carvalho Chehab 11 } 1392ccc9971eSMauro Carvalho Chehab 12 rcu_assign_pointer(gp, NULL); 1393ccc9971eSMauro Carvalho Chehab 13 spin_unlock(&gp_lock); 1394ccc9971eSMauro Carvalho Chehab 14 s = get_state_synchronize_rcu(); 1395ccc9971eSMauro Carvalho Chehab 15 do_something_while_waiting(); 1396ccc9971eSMauro Carvalho Chehab 16 cond_synchronize_rcu(s); 1397ccc9971eSMauro Carvalho Chehab 17 kfree(p); 1398ccc9971eSMauro Carvalho Chehab 18 return true; 1399ccc9971eSMauro Carvalho Chehab 19 } 1400ccc9971eSMauro Carvalho Chehab 1401be06c257SPaul E. McKenneyOn line 14, get_state_synchronize_rcu() obtains a “cookie” from RCU, 1402ccc9971eSMauro Carvalho Chehabthen line 15 carries out other tasks, and finally, line 16 returns 1403ccc9971eSMauro Carvalho Chehabimmediately if a grace period has elapsed in the meantime, but otherwise 1404ccc9971eSMauro Carvalho Chehabwaits as required. The need for get_state_synchronize_rcu() and 1405be06c257SPaul E. McKenneycond_synchronize_rcu() has appeared quite recently, so it is too 1406ccc9971eSMauro Carvalho Chehabearly to tell whether they will stand the test of time. 1407ccc9971eSMauro Carvalho Chehab 1408ccc9971eSMauro Carvalho ChehabRCU thus provides a range of tools to allow updaters to strike the 1409ccc9971eSMauro Carvalho Chehabrequired tradeoff between latency, flexibility, and CPU overhead. 1410ccc9971eSMauro Carvalho Chehab 1411ccc9971eSMauro Carvalho ChehabForward Progress 1412ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~~~~ 1413ccc9971eSMauro Carvalho Chehab 1414ccc9971eSMauro Carvalho ChehabIn theory, delaying grace-period completion and callback invocation is 1415ccc9971eSMauro Carvalho Chehabharmless. In practice, not only are memory sizes finite but also 1416ccc9971eSMauro Carvalho Chehabcallbacks sometimes do wakeups, and sufficiently deferred wakeups can be 1417ccc9971eSMauro Carvalho Chehabdifficult to distinguish from system hangs. Therefore, RCU must provide 1418ccc9971eSMauro Carvalho Chehaba number of mechanisms to promote forward progress. 1419ccc9971eSMauro Carvalho Chehab 1420ccc9971eSMauro Carvalho ChehabThese mechanisms are not foolproof, nor can they be. For one simple 1421ccc9971eSMauro Carvalho Chehabexample, an infinite loop in an RCU read-side critical section must by 1422ccc9971eSMauro Carvalho Chehabdefinition prevent later grace periods from ever completing. For a more 1423ccc9971eSMauro Carvalho Chehabinvolved example, consider a 64-CPU system built with 1424ccc9971eSMauro Carvalho Chehab``CONFIG_RCU_NOCB_CPU=y`` and booted with ``rcu_nocbs=1-63``, where 1425be06c257SPaul E. McKenneyCPUs 1 through 63 spin in tight loops that invoke call_rcu(). Even 1426be06c257SPaul E.
McKenneyif these tight loops also contain calls to cond_resched() (thus 1427ccc9971eSMauro Carvalho Chehaballowing grace periods to complete), CPU 0 simply will not be able to 1428ccc9971eSMauro Carvalho Chehabinvoke callbacks as fast as the other 63 CPUs can register them, at 1429ccc9971eSMauro Carvalho Chehableast not until the system runs out of memory. In both of these 1430ccc9971eSMauro Carvalho Chehabexamples, the Spiderman principle applies: With great power comes great 1431ccc9971eSMauro Carvalho Chehabresponsibility. However, short of this level of abuse, RCU is required 1432ccc9971eSMauro Carvalho Chehabto ensure timely completion of grace periods and timely invocation of 1433ccc9971eSMauro Carvalho Chehabcallbacks. 1434ccc9971eSMauro Carvalho Chehab 1435ccc9971eSMauro Carvalho ChehabRCU takes the following steps to encourage timely completion of grace 1436ccc9971eSMauro Carvalho Chehabperiods: 1437ccc9971eSMauro Carvalho Chehab 1438ccc9971eSMauro Carvalho Chehab#. If a grace period fails to complete within 100 milliseconds, RCU 1439be06c257SPaul E. McKenney causes future invocations of cond_resched() on the holdout CPUs 1440ccc9971eSMauro Carvalho Chehab to provide an RCU quiescent state. RCU also causes those CPUs' 1441be06c257SPaul E. McKenney need_resched() invocations to return ``true``, but only after the 1442ccc9971eSMauro Carvalho Chehab corresponding CPU's next scheduling-clock interrupt. 1443ccc9971eSMauro Carvalho Chehab#. CPUs mentioned in the ``nohz_full`` kernel boot parameter can run 1444ccc9971eSMauro Carvalho Chehab indefinitely in the kernel without scheduling-clock interrupts, which 1445be06c257SPaul E. McKenney defeats the above need_resched() stratagem. RCU will therefore 1446be06c257SPaul E. McKenney invoke resched_cpu() on any ``nohz_full`` CPUs still holding out 1447ccc9971eSMauro Carvalho Chehab after 109 milliseconds. 1448ccc9971eSMauro Carvalho Chehab#. In kernels built with ``CONFIG_RCU_BOOST=y``, if a given task that 1449ccc9971eSMauro Carvalho Chehab has been preempted within an RCU read-side critical section is 1450ccc9971eSMauro Carvalho Chehab holding out for more than 500 milliseconds, RCU will resort to 1451ccc9971eSMauro Carvalho Chehab priority boosting. 1452ccc9971eSMauro Carvalho Chehab#. If a CPU is still holding out 10 seconds into the grace period, RCU 1453be06c257SPaul E. McKenney will invoke resched_cpu() on it regardless of its ``nohz_full`` 1454ccc9971eSMauro Carvalho Chehab state. 1455ccc9971eSMauro Carvalho Chehab 1456ccc9971eSMauro Carvalho ChehabThe above values are defaults for systems running with ``HZ=1000``. They 1457ccc9971eSMauro Carvalho Chehabwill vary as the value of ``HZ`` varies, and can also be changed using 1458ccc9971eSMauro Carvalho Chehabthe relevant Kconfig options and kernel boot parameters. RCU currently 1459ccc9971eSMauro Carvalho Chehabdoes not do much sanity checking of these parameters, so please use 1460ccc9971eSMauro Carvalho Chehabcaution when changing them. Note that these forward-progress measures 14614f8af077SNícolas F. R. A. Pradoare provided only for RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks 14624f8af077SNícolas F. R. A. PradoRCU`_. 1463ccc9971eSMauro Carvalho Chehab 1464be06c257SPaul E.
McKenneyRCU takes the following steps in call_rcu() to encourage timely 1465ccc9971eSMauro Carvalho Chehabinvocation of callbacks when any given non-\ ``rcu_nocbs`` CPU has 1466ccc9971eSMauro Carvalho Chehab10,000 callbacks, or has 10,000 more callbacks than it had the last time 1467ccc9971eSMauro Carvalho Chehabencouragement was provided: 1468ccc9971eSMauro Carvalho Chehab 1469ccc9971eSMauro Carvalho Chehab#. Starts a grace period, if one is not already in progress. 1470ccc9971eSMauro Carvalho Chehab#. Forces immediate checking for quiescent states, rather than waiting 1471ccc9971eSMauro Carvalho Chehab for three milliseconds to have elapsed since the beginning of the 1472ccc9971eSMauro Carvalho Chehab grace period. 1473ccc9971eSMauro Carvalho Chehab#. Immediately tags the CPU's callbacks with their grace period 1474ccc9971eSMauro Carvalho Chehab completion numbers, rather than waiting for the ``RCU_SOFTIRQ`` 1475ccc9971eSMauro Carvalho Chehab handler to get around to it. 1476ccc9971eSMauro Carvalho Chehab#. Lifts callback-execution batch limits, which speeds up callback 1477ccc9971eSMauro Carvalho Chehab invocation at the expense of degrading realtime response. 1478ccc9971eSMauro Carvalho Chehab 1479ccc9971eSMauro Carvalho ChehabAgain, these are default values when running at ``HZ=1000``, and can be 1480ccc9971eSMauro Carvalho Chehaboverridden. Again, these forward-progress measures are provided only for 14814f8af077SNícolas F. R. A. PradoRCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks 14824f8af077SNícolas F. R. A. PradoRCU`_. Even for RCU, callback-invocation forward 1483ccc9971eSMauro Carvalho Chehabprogress for ``rcu_nocbs`` CPUs is much less well-developed, in part 1484ccc9971eSMauro Carvalho Chehabbecause workloads benefiting from ``rcu_nocbs`` CPUs tend to invoke 1485be06c257SPaul E. McKenneycall_rcu() relatively infrequently. If workloads emerge that need 1486be06c257SPaul E. McKenneyboth ``rcu_nocbs`` CPUs and high call_rcu() invocation rates, then 1487ccc9971eSMauro Carvalho Chehabadditional forward-progress work will be required. 1488ccc9971eSMauro Carvalho Chehab 1489ccc9971eSMauro Carvalho ChehabComposability 1490ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~~ 1491ccc9971eSMauro Carvalho Chehab 1492ccc9971eSMauro Carvalho ChehabComposability has received much attention in recent years, perhaps in 1493ccc9971eSMauro Carvalho Chehabpart due to the collision of multicore hardware with object-oriented 1494ccc9971eSMauro Carvalho Chehabtechniques designed in single-threaded environments for single-threaded 1495ccc9971eSMauro Carvalho Chehabuse. And in theory, RCU read-side critical sections may be composed, and 1496ccc9971eSMauro Carvalho Chehabin fact may be nested arbitrarily deeply. In practice, as with all 1497ccc9971eSMauro Carvalho Chehabreal-world implementations of composable constructs, there are 1498ccc9971eSMauro Carvalho Chehablimitations. 1499ccc9971eSMauro Carvalho Chehab 1500be06c257SPaul E. McKenneyImplementations of RCU for which rcu_read_lock() and 1501be06c257SPaul E. McKenneyrcu_read_unlock() generate no code, such as Linux-kernel RCU when 150281ad58beSSebastian Andrzej Siewior``CONFIG_PREEMPTION=n``, can be nested arbitrarily deeply. After all, there 1503ccc9971eSMauro Carvalho Chehabis no overhead. Except that if all these instances of 1504be06c257SPaul E. 
McKenneyrcu_read_lock() and rcu_read_unlock() are visible to the 1505ccc9971eSMauro Carvalho Chehabcompiler, compilation will eventually fail due to exhausting memory, 1506ccc9971eSMauro Carvalho Chehabmass storage, or user patience, whichever comes first. If the nesting is 1507ccc9971eSMauro Carvalho Chehabnot visible to the compiler, as is the case with mutually recursive 1508ccc9971eSMauro Carvalho Chehabfunctions each in its own translation unit, stack overflow will result. 1509ccc9971eSMauro Carvalho ChehabIf the nesting takes the form of loops, perhaps in the guise of tail 1510ccc9971eSMauro Carvalho Chehabrecursion, either the control variable will overflow or (in the Linux 1511ccc9971eSMauro Carvalho Chehabkernel) you will get an RCU CPU stall warning. Nevertheless, this class 1512ccc9971eSMauro Carvalho Chehabof RCU implementations is one of the most composable constructs in 1513ccc9971eSMauro Carvalho Chehabexistence. 1514ccc9971eSMauro Carvalho Chehab 1515ccc9971eSMauro Carvalho ChehabRCU implementations that explicitly track nesting depth are limited by 1516ccc9971eSMauro Carvalho Chehabthe nesting-depth counter. For example, the Linux kernel's preemptible 1517ccc9971eSMauro Carvalho ChehabRCU limits nesting to ``INT_MAX``. This should suffice for almost all 1518ccc9971eSMauro Carvalho Chehabpractical purposes. That said, a consecutive pair of RCU read-side 1519ccc9971eSMauro Carvalho Chehabcritical sections between which there is an operation that waits for a 1520ccc9971eSMauro Carvalho Chehabgrace period cannot be enclosed in another RCU read-side critical 1521ccc9971eSMauro Carvalho Chehabsection. This is because it is not legal to wait for a grace period 1522ccc9971eSMauro Carvalho Chehabwithin an RCU read-side critical section: To do so would result either 1523ccc9971eSMauro Carvalho Chehabin deadlock or in RCU implicitly splitting the enclosing RCU read-side 1524ccc9971eSMauro Carvalho Chehabcritical section, neither of which is conducive to a long-lived and 1525ccc9971eSMauro Carvalho Chehabprosperous kernel. 1526ccc9971eSMauro Carvalho Chehab 1527ccc9971eSMauro Carvalho ChehabIt is worth noting that RCU is not alone in limiting composability. For 1528ccc9971eSMauro Carvalho Chehabexample, many transactional-memory implementations prohibit composing a 1529ccc9971eSMauro Carvalho Chehabpair of transactions separated by an irrevocable operation (for example, 1530ccc9971eSMauro Carvalho Chehaba network receive operation). For another example, lock-based critical 1531ccc9971eSMauro Carvalho Chehabsections can be composed surprisingly freely, but only if deadlock is 1532ccc9971eSMauro Carvalho Chehabavoided. 1533ccc9971eSMauro Carvalho Chehab 1534ccc9971eSMauro Carvalho ChehabIn short, although RCU read-side critical sections are highly 1535ccc9971eSMauro Carvalho Chehabcomposable, care is required in some situations, just as is the case for 1536ccc9971eSMauro Carvalho Chehabany other composable synchronization mechanism. 1537ccc9971eSMauro Carvalho Chehab 1538ccc9971eSMauro Carvalho ChehabCorner Cases 1539ccc9971eSMauro Carvalho Chehab~~~~~~~~~~~~ 1540ccc9971eSMauro Carvalho Chehab 1541ccc9971eSMauro Carvalho ChehabA given RCU workload might have an endless and intense stream of RCU 1542ccc9971eSMauro Carvalho Chehabread-side critical sections, perhaps even so intense that there was 1543ccc9971eSMauro Carvalho Chehabnever a point in time during which there was not at least one RCU 1544ccc9971eSMauro Carvalho Chehabread-side critical section in flight. 

It is worth noting that RCU is not alone in limiting composability. For
example, many transactional-memory implementations prohibit composing a
pair of transactions separated by an irrevocable operation (for example,
a network receive operation). For another example, lock-based critical
sections can be composed surprisingly freely, but only if deadlock is
avoided.

In short, although RCU read-side critical sections are highly
composable, care is required in some situations, just as is the case for
any other composable synchronization mechanism.

Corner Cases
~~~~~~~~~~~~

A given RCU workload might have an endless and intense stream of RCU
read-side critical sections, perhaps even so intense that there was
never a point in time during which there was not at least one RCU
read-side critical section in flight. RCU cannot allow this situation to
block grace periods: As long as all the RCU read-side critical sections
are finite, grace periods must also be finite.

That said, preemptible RCU implementations could potentially result in
RCU read-side critical sections being preempted for long durations,
which has the effect of creating a long-duration RCU read-side critical
section. This situation can arise only in heavily loaded systems, but
systems using real-time priorities are of course more vulnerable.
Therefore, RCU priority boosting is provided to help deal with this
case. That said, the exact requirements on RCU priority boosting will
likely evolve as more experience accumulates.

Other workloads might have very high update rates. Although one can
argue that such workloads should instead use something other than RCU,
the fact remains that RCU must handle such workloads gracefully. This
requirement is another factor driving batching of grace periods, but it
is also the driving force behind the checks for large numbers of queued
RCU callbacks in the call_rcu() code path. Finally, high update
rates should not delay RCU read-side critical sections, although some
small read-side delays can occur when using
synchronize_rcu_expedited(), courtesy of this function's use of
smp_call_function_single().

Although all three of these corner cases were understood in the early
1990s, a simple user-level test consisting of ``close(open(path))`` in a
tight loop in the early 2000s suddenly provided a much deeper
appreciation of the high-update-rate corner case. This test also
motivated addition of some RCU code to react to high update rates. For
example, if a given CPU finds itself with more than 10,000 RCU callbacks
queued, it will cause RCU to take evasive action by more aggressively
starting grace periods and more aggressively forcing completion of
grace-period processing. This evasive action causes the grace period to
complete more quickly, but at the cost of restricting RCU's batching
optimizations, thus increasing the CPU overhead incurred by that grace
period.
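
That test was nothing more exotic than the following sketch, which
hammers the kernel's file-open/close path (the pathname is of course
hypothetical); each pass through the loop can queue RCU callbacks as
part of the kernel's filesystem and file-table housekeeping:

   ::

        1 #include <fcntl.h>
        2 #include <unistd.h>
        3
        4 int main(void)
        5 {
        6   for (;;)
        7     close(open("/tmp/some/file", O_RDONLY));
        8 }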

Software-Engineering Requirements
---------------------------------

Between Murphy's Law and “To err is human”, it is necessary to guard
against mishaps and misuse:

#. It is all too easy to forget to use rcu_read_lock() everywhere
   that it is needed, so kernels built with ``CONFIG_PROVE_RCU=y`` will
   splat if rcu_dereference() is used outside of an RCU read-side
   critical section. Update-side code can use
   rcu_dereference_protected(), which takes a `lockdep
   expression <https://lwn.net/Articles/371986/>`__ to indicate what is
   providing the protection. If the indicated protection is not
   provided, a lockdep splat is emitted.
   Code shared between readers and updaters can use
   rcu_dereference_check(), which also takes a lockdep expression,
   and emits a lockdep splat if neither rcu_read_lock() nor the
   indicated protection is in place. In addition,
   rcu_dereference_raw() is used in those (hopefully rare) cases
   where the required protection cannot be easily described. Finally,
   rcu_read_lock_held() is provided to allow a function to verify
   that it has been invoked within an RCU read-side critical section. I
   was made aware of this set of requirements shortly after Thomas
   Gleixner audited a number of RCU uses. (A sketch showing several of
   these APIs in action follows this list.)
#. A given function might wish to check for RCU-related preconditions
   upon entry, before using any other RCU API. The
   rcu_lockdep_assert() macro does this job, asserting the expression
   in kernels having lockdep enabled and doing nothing otherwise.
#. It is also easy to forget to use rcu_assign_pointer() and
   rcu_dereference(), perhaps (incorrectly) substituting a simple
   assignment. To catch this sort of error, a given RCU-protected
   pointer may be tagged with ``__rcu``, after which sparse will
   complain about simple-assignment accesses to that pointer. Arnd
   Bergmann made me aware of this requirement, and also supplied the
   needed `patch series <https://lwn.net/Articles/376011/>`__.
#. Kernels built with ``CONFIG_DEBUG_OBJECTS_RCU_HEAD=y`` will splat if
   a data element is passed to call_rcu() twice in a row, without a
   grace period in between. (This error is similar to a double free.)
   The corresponding ``rcu_head`` structures that are dynamically
   allocated are automatically tracked, but ``rcu_head`` structures
   allocated on the stack must be initialized with
   init_rcu_head_on_stack() and cleaned up with
   destroy_rcu_head_on_stack(). Similarly, statically allocated
   non-stack ``rcu_head`` structures must be initialized with
   init_rcu_head() and cleaned up with destroy_rcu_head().
   Mathieu Desnoyers made me aware of this requirement, and also
   supplied the needed
   `patch <https://lore.kernel.org/r/20100319013024.GA28456@Krystal>`__.
#. An infinite loop in an RCU read-side critical section will eventually
   trigger an RCU CPU stall warning splat, with the duration of
   “eventually” being controlled by the ``RCU_CPU_STALL_TIMEOUT``
   ``Kconfig`` option, or, alternatively, by the
   ``rcupdate.rcu_cpu_stall_timeout`` boot/sysfs parameter. However, RCU
   is not obligated to produce this splat unless there is a grace period
   waiting on that particular RCU read-side critical section.

   Some extreme workloads might intentionally delay RCU grace periods,
   and systems running those workloads can be booted with
   ``rcupdate.rcu_cpu_stall_suppress`` to suppress the splats. This
   kernel parameter may also be set via ``sysfs``. Furthermore, RCU CPU
   stall warnings are counter-productive during sysrq dumps and during
   panics. RCU therefore supplies the rcu_sysrq_start() and
   rcu_sysrq_end() API members to be called before and after long
   sysrq dumps. RCU also supplies the rcu_panic() notifier that is
   automatically invoked at the beginning of a panic to suppress further
   RCU CPU stall warnings.

   This requirement made itself known in the early 1990s, pretty much
   the first time that it was necessary to debug a CPU stall. That said,
   the initial implementation in DYNIX/ptx was quite generic in
   comparison with that of Linux.

#. Although it would be very good to detect pointers leaking out of RCU
   read-side critical sections, there is currently no good way of doing
   this. One complication is the need to distinguish between pointers
   leaking and pointers that have been handed off from RCU to some other
   synchronization mechanism, for example, reference counting.
#. In kernels built with ``CONFIG_RCU_TRACE=y``, RCU-related information
   is provided via event tracing.
#. Open-coded use of rcu_assign_pointer() and rcu_dereference()
   to create typical linked data structures can be surprisingly
   error-prone. Therefore, RCU-protected `linked
   lists <https://lwn.net/Articles/609973/#RCU%20List%20APIs>`__ and,
   more recently, RCU-protected `hash
   tables <https://lwn.net/Articles/612100/>`__ are available. Many
   other special-purpose RCU-protected data structures are available in
   the Linux kernel and the userspace RCU library.
#. Some linked structures are created at compile time, but still require
   ``__rcu`` checking. The RCU_POINTER_INITIALIZER() macro serves
   this purpose.
#. It is not necessary to use rcu_assign_pointer() when creating
   linked structures that are to be published via a single external
   pointer. The RCU_INIT_POINTER() macro is provided for this task.
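
Here is the promised sketch, showing an ``__rcu``-tagged pointer
accessed both by code shared between readers and updaters and by
update-side code. The ``struct foo``, ``foo_gp``, and ``foo_lock``
names are hypothetical:

   ::

        1 struct foo {
        2   int a;
        3 };
        4 static struct foo __rcu *foo_gp;  /* Checked by sparse. */
        5 static DEFINE_SPINLOCK(foo_lock);
        6
        7 /* Legal under either rcu_read_lock() or foo_lock. */
        8 static int foo_get_a(void)
        9 {
       10   struct foo *p;
       11
       12   p = rcu_dereference_check(foo_gp,
       13                             lockdep_is_held(&foo_lock));
       14   return p ? p->a : -1;
       15 }
       16
       17 /* Update-side code: foo_lock must be held. */
       18 static void foo_publish(struct foo *newp)
       19 {
       20   struct foo *oldp;
       21
       22   oldp = rcu_dereference_protected(foo_gp,
       23                                    lockdep_is_held(&foo_lock));
       24   rcu_assign_pointer(foo_gp, newp);
       25   /* ... hand oldp off to call_rcu() or the like ... */
       26 }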

This is not a hard-and-fast list: RCU's diagnostic capabilities will
continue to be guided by the number and type of usage bugs found in
real-world RCU usage.

Linux Kernel Complications
--------------------------

The Linux kernel provides an interesting environment for all kinds of
software, including RCU. Some of the relevant points of interest are as
follows:

#. `Configuration`_
#. `Firmware Interface`_
#. `Early Boot`_
#. `Interrupts and NMIs`_
#. `Loadable Modules`_
#. `Hotplug CPU`_
#. `Scheduler and RCU`_
#. `Tracing and RCU`_
#. `Accesses to User Memory and RCU`_
#. `Energy Efficiency`_
#. `Scheduling-Clock Interrupts and RCU`_
#. `Memory Efficiency`_
#. `Performance, Scalability, Response Time, and Reliability`_

This list is probably incomplete, but it does give a feel for the most
notable Linux-kernel complications. Each of the following sections
covers one of the above topics.

Configuration
~~~~~~~~~~~~~

RCU's goal is automatic configuration, so that almost nobody needs to
worry about RCU's ``Kconfig`` options. And for almost all users, RCU
does in fact work well “out of the box.”

However, there are specialized use cases that are handled by kernel boot
parameters and ``Kconfig`` options. Unfortunately, the ``Kconfig``
system will explicitly ask users about new ``Kconfig`` options, which
requires that almost all of them be hidden behind a ``CONFIG_RCU_EXPERT``
``Kconfig`` option.

This all should be quite obvious, but the fact remains that Linus
Torvalds recently had to
`remind <https://lore.kernel.org/r/CA+55aFy4wcCwaL4okTs8wXhGZ5h-ibecy_Meg9C4MNQrUnwMcg@mail.gmail.com>`__
me of this requirement.

Firmware Interface
~~~~~~~~~~~~~~~~~~

In many cases, the kernel obtains information about the system from the
firmware, and sometimes things are lost in translation. Or the
translation is accurate, but the original message is bogus.

For example, some systems' firmware overreports the number of CPUs,
sometimes by a large factor. If RCU naively believed the firmware, as it
used to do, it would create too many per-CPU kthreads. Although the
resulting system will still run correctly, the extra kthreads needlessly
consume memory and can cause confusion when they show up in ``ps``
listings.

RCU must therefore wait for a given CPU to actually come online before
it can allow itself to believe that the CPU actually exists. The
resulting “ghost CPUs” (which are never going to come online) cause a
number of `interesting
complications <https://paulmck.livejournal.com/37494.html>`__.

Early Boot
~~~~~~~~~~

The Linux kernel's boot sequence is an interesting process, and RCU is
used early, even before rcu_init() is invoked. In fact, a number of
RCU's primitives can be used as soon as the initial task's
``task_struct`` is available and the boot CPU's per-CPU variables are
set up.
The read-side primitives (rcu_read_lock(),
rcu_read_unlock(), rcu_dereference(), and
rcu_access_pointer()) will operate normally very early on, as will
rcu_assign_pointer().

Although call_rcu() may be invoked at any time during boot,
callbacks are not guaranteed to be invoked until after all of RCU's
kthreads have been spawned, which occurs at early_initcall() time.
This delay in callback invocation is due to the fact that RCU does not
invoke callbacks until it is fully initialized, and this full
initialization cannot occur until after the scheduler has initialized
itself to the point where RCU can spawn and run its kthreads. In theory,
it would be possible to invoke callbacks earlier; however, this is not a
panacea because there would be severe restrictions on what operations
those callbacks could invoke.

Perhaps surprisingly, synchronize_rcu() and
synchronize_rcu_expedited() will operate normally during very early
boot, the reason being that there is only one CPU and preemption is
disabled. This means that the call synchronize_rcu() (or friends)
itself is a quiescent state and thus a grace period, so the early-boot
implementation can be a no-op.

However, once the scheduler has spawned its first kthread, this early
boot trick fails for synchronize_rcu() (as well as for
synchronize_rcu_expedited()) in ``CONFIG_PREEMPTION=y`` kernels. The
reason is that an RCU read-side critical section might be preempted,
which means that a subsequent synchronize_rcu() really does have to
wait for something, as opposed to simply returning immediately.
Unfortunately, synchronize_rcu() can't do this until all of its
kthreads are spawned, which doesn't happen until early_initcall()
time. But this is no excuse: RCU is nevertheless
required to correctly handle synchronous grace periods during this time
period. Once all of its kthreads are up and running, RCU starts running
normally.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| How can RCU possibly handle grace periods before all of its kthreads  |
| have been spawned???                                                  |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Very carefully!                                                       |
| During the “dead zone” between the time that the scheduler spawns the |
| first task and the time that all of RCU's kthreads have been spawned, |
| all synchronous grace periods are handled by the expedited            |
| grace-period mechanism. At runtime, this expedited mechanism relies   |
| on workqueues, but during the dead zone the requesting task itself    |
| drives the desired expedited grace period. Because dead-zone          |
| execution takes place within task context, everything works. Once the |
| dead zone ends, expedited grace periods go back to using workqueues,  |
| as is required to avoid problems that would otherwise occur when a    |
| user task received a POSIX signal while driving an expedited grace    |
| period.                                                               |
|                                                                       |
| And yes, this does mean that it is unhelpful to send POSIX signals to |
| random tasks between the time that the scheduler spawns its first     |
| kthread and the time that RCU's kthreads have all been spawned. If    |
| there ever turns out to be a good reason for sending POSIX signals    |
| during that time, appropriate adjustments will be made. (If it turns  |
| out that POSIX signals are sent during this time for no good reason,  |
| other adjustments will be made, appropriate or otherwise.)            |
+-----------------------------------------------------------------------+

I learned of these boot-time requirements as a result of a series of
system hangs.

Interrupts and NMIs
~~~~~~~~~~~~~~~~~~~

The Linux kernel has interrupts, and RCU read-side critical sections are
legal within interrupt handlers and within interrupt-disabled regions of
code, as are invocations of call_rcu().
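
For example, a device driver's interrupt handler can safely traverse
RCU-protected data, as in the following sketch (the ``my_state``
structure, ``dev_state`` pointer, and handle_event() helper are all
hypothetical):

   ::

        1 static struct my_state __rcu *dev_state;
        2
        3 static irqreturn_t my_irq_handler(int irq, void *dev_id)
        4 {
        5   struct my_state *p;
        6
        7   rcu_read_lock();  /* Legal in hardirq context. */
        8   p = rcu_dereference(dev_state);
        9   if (p)
       10     handle_event(p);
       11   rcu_read_unlock();
       12   return IRQ_HANDLED;
       13 }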

Some Linux-kernel architectures can enter an interrupt handler from
non-idle process context, and then just never leave it, instead
stealthily transitioning back to process context. This trick is
sometimes used to invoke system calls from inside the kernel. These
“half-interrupts” mean that RCU has to be very careful about how it
counts interrupt nesting levels. I learned of this requirement the hard
way during a rewrite of RCU's dyntick-idle code.

The Linux kernel has non-maskable interrupts (NMIs), and RCU read-side
critical sections are legal within NMI handlers. Thankfully, RCU
update-side primitives, including call_rcu(), are prohibited within
NMI handlers.

The name notwithstanding, some Linux-kernel architectures can have
nested NMIs, which RCU must handle correctly. Andy Lutomirski `surprised
me <https://lore.kernel.org/r/CALCETrXLq1y7e_dKFPgou-FKHB6Pu-r8+t-6Ds+8=va7anBWDA@mail.gmail.com>`__
with this requirement; he also kindly surprised me with `an
algorithm <https://lore.kernel.org/r/CALCETrXSY9JpW3uE6H8WYk81sg56qasA2aqmjMPsq5dOtzso=g@mail.gmail.com>`__
that meets this requirement.

Furthermore, NMI handlers can be interrupted by what appear to RCU to be
normal interrupts. One way that this can happen is for code that
directly invokes ct_irq_enter() and ct_irq_exit() to be called
from an NMI handler. This astonishing fact of life prompted the current
code structure, which has ct_irq_enter() invoking
ct_nmi_enter() and ct_irq_exit() invoking ct_nmi_exit().
And yes, I also learned of this requirement the hard way.

Loadable Modules
~~~~~~~~~~~~~~~~

The Linux kernel has loadable modules, and these modules can also be
unloaded. After a given module has been unloaded, any attempt to call
one of its functions results in a segmentation fault. The module-unload
functions must therefore cancel any delayed calls to loadable-module
functions, for example, any outstanding mod_timer() must be dealt
with via timer_shutdown_sync() or similar.

Unfortunately, there is no way to cancel an RCU callback; once you
invoke call_rcu(), the callback function is eventually going to be
invoked, unless the system goes down first. Because it is normally
considered socially irresponsible to crash the system in response to a
module unload request, we need some other way to deal with in-flight RCU
callbacks.

RCU therefore provides rcu_barrier(), which waits until all
in-flight RCU callbacks have been invoked. If a module uses
call_rcu(), its exit function should therefore prevent any future
invocation of call_rcu(), then invoke rcu_barrier(). In theory,
the underlying module-unload code could invoke rcu_barrier()
unconditionally, but in practice this would incur unacceptable
latencies.
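
A module exit function might therefore follow this sketch, in which the
unregister_my_hooks() helper stands in for whatever module-specific
mechanism prevents new call_rcu() invocations:

   ::

        1 static void __exit my_module_exit(void)
        2 {
        3   /* Prevent new callbacks from being queued. */
        4   unregister_my_hooks();
        5
        6   /* Wait for already-queued callbacks to be invoked. */
        7   rcu_barrier();
        8
        9   /* RCU can no longer reference this module's code or data. */
       10 }
       11 module_exit(my_module_exit);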

Nikita Danilov noted this requirement for an analogous
filesystem-unmount situation, and Dipankar Sarma incorporated
rcu_barrier() into RCU. The need for rcu_barrier() for module
unloading became apparent later.

.. important::

   The rcu_barrier() function is not, repeat,
   *not*, obligated to wait for a grace period. It is instead only required
   to wait for RCU callbacks that have already been posted. Therefore, if
   there are no RCU callbacks posted anywhere in the system,
   rcu_barrier() is within its rights to return immediately. Even if
   there are callbacks posted, rcu_barrier() does not necessarily need
   to wait for a grace period.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Wait a minute! Each RCU callback must wait for a grace period to      |
| complete, and rcu_barrier() must wait for each pre-existing           |
| callback to be invoked. Doesn't rcu_barrier() therefore need to       |
| wait for a full grace period if there is even one callback posted     |
| anywhere in the system?                                               |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Absolutely not!!!                                                     |
| Yes, each RCU callback must wait for a grace period to complete, but  |
| it might well be partly (or even completely) finished waiting by the  |
| time rcu_barrier() is invoked. In that case, rcu_barrier()            |
| need only wait for the remaining portion of the grace period to       |
| elapse. So even if there are quite a few callbacks posted,            |
| rcu_barrier() might well return quite quickly.                        |
|                                                                       |
| So if you need to wait for a grace period as well as for all          |
| pre-existing callbacks, you will need to invoke both                  |
| synchronize_rcu() and rcu_barrier(). If latency is a concern,         |
| you can always use workqueues to invoke them concurrently.            |
+-----------------------------------------------------------------------+

Hotplug CPU
~~~~~~~~~~~

The Linux kernel supports CPU hotplug, which means that CPUs can come
and go. It is of course illegal to use any RCU API member from an
offline CPU, with the exception of `SRCU <Sleepable RCU_>`__ read-side
critical sections. This requirement was present from day one in
DYNIX/ptx, but on the other hand, the Linux kernel's CPU-hotplug
implementation is “interesting.”

The Linux-kernel CPU-hotplug implementation has notifiers that are used
to allow the various kernel subsystems (including RCU) to respond
appropriately to a given CPU-hotplug operation. Most RCU operations may
be invoked from CPU-hotplug notifiers, including even synchronous
grace-period operations such as synchronize_rcu() and
synchronize_rcu_expedited(). However, these synchronous operations
do block and therefore cannot be invoked from notifiers that execute via
stop_machine(), specifically those between the ``CPUHP_AP_OFFLINE``
and ``CPUHP_AP_ONLINE`` states.

In addition, all-callback-wait operations such as rcu_barrier() may
not be invoked from any CPU-hotplug notifier. This restriction is due
to the fact that there are phases of CPU-hotplug operations where the
outgoing CPU's callbacks will not be invoked until after the CPU-hotplug
operation ends, which could also result in deadlock. Furthermore,
rcu_barrier() blocks CPU-hotplug operations during its execution,
which results in another type of deadlock when invoked from a CPU-hotplug
notifier.
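
As a sketch of a legal use, a subsystem might register a dynamically
allocated online state whose teardown callback runs in process context,
outside of the stop_machine() window. The my_cpu_down_prep() function
and its helper are hypothetical, and the choice of state is an
assumption on my part:

   ::

        1 static int my_cpu_down_prep(unsigned int cpu)
        2 {
        3   remove_cpu_from_my_structures(cpu);
        4   synchronize_rcu(); /* Process context, CPU still online. */
        5   /* But invoking rcu_barrier() here would risk deadlock! */
        6   return 0;
        7 }
        8
        9 /* During subsystem initialization: */
       10 ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mysubsys:online",
       11                         NULL, my_cpu_down_prep);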

Finally, RCU must avoid deadlocks due to interaction between hotplug,
timers, and grace-period processing. It does so by maintaining its own set
of books that duplicate the centrally maintained ``cpu_online_mask``,
and also by reporting quiescent states explicitly when a CPU goes
offline. This explicit reporting of quiescent states avoids any need
for the force-quiescent-state loop (FQS) to report quiescent states for
offline CPUs. However, as a debugging measure, the FQS loop does splat
if offline CPUs block an RCU grace period for too long.

An offline CPU's quiescent state will be reported either:

1. As the CPU goes offline using RCU's hotplug notifier (rcutree_report_cpu_dead()).
2. When grace period initialization (rcu_gp_init()) detects a
   race either with CPU offlining or with a task unblocking on a leaf
   ``rcu_node`` structure whose CPUs are all offline.

The CPU-online path (rcutree_report_cpu_starting()) should never need to report
a quiescent state for an offline CPU. However, as a debugging measure,
it does emit a warning if a quiescent state was not already reported
for that CPU.

During the checking/modification of RCU's hotplug bookkeeping, the
corresponding CPU's leaf node lock is held. This avoids race conditions
between RCU's hotplug notifier hooks, the grace period initialization
code, and the FQS loop, all of which refer to or modify this bookkeeping.

Scheduler and RCU
~~~~~~~~~~~~~~~~~

RCU makes use of kthreads, and it is necessary to avoid excessive CPU-time
accumulation by these kthreads. This requirement was no surprise, but
RCU's violation of it when running context-switch-heavy workloads when
built with ``CONFIG_NO_HZ_FULL=y`` `did come as a surprise
[PDF] <http://www.rdrop.com/users/paulmck/scalability/paper/BareMetal.2015.01.15b.pdf>`__.
RCU has made good progress towards meeting this requirement, even for
context-switch-heavy ``CONFIG_NO_HZ_FULL=y`` workloads, but there is
room for further improvement.

There is no longer any prohibition against holding any of the
scheduler's runqueue or priority-inheritance spinlocks across an
rcu_read_unlock(), even if interrupts and preemption were enabled
somewhere within the corresponding RCU read-side critical section.
Therefore, it is now perfectly legal to execute rcu_read_lock()
with preemption enabled, acquire one of the scheduler locks, and hold
that lock across the matching rcu_read_unlock().

Similarly, the RCU flavor consolidation has removed the need for negative
nesting. The fact that interrupt-disabled regions of code act as RCU
read-side critical sections implicitly avoids earlier issues that used
to result in destructive recursion via interrupt handlers' use of RCU.

Tracing and RCU
~~~~~~~~~~~~~~~

It is possible to use tracing on RCU code, but tracing itself uses RCU.
For this reason, rcu_dereference_raw_check() is provided for use
by tracing, which avoids the destructive recursion that could otherwise
ensue. This API is also used by virtualization in some architectures,
where RCU readers execute in environments in which tracing cannot be
used. The tracing folks both located the requirement and provided the
needed fix, so this surprise requirement was relatively painless.

Accesses to User Memory and RCU
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The kernel needs to access user-space memory, for example, to access data
referenced by system-call parameters. The get_user() macro does this job.

However, user-space memory might well be paged out, which means that
get_user() might well page-fault and thus block while waiting for the
resulting I/O to complete. It would be a very bad thing for the compiler to
reorder a get_user() invocation into an RCU read-side critical section.

For example, suppose that the source code looked like this:

   ::

        1 rcu_read_lock();
        2 p = rcu_dereference(gp);
        3 v = p->value;
        4 rcu_read_unlock();
        5 get_user(user_v, user_p);
        6 do_something_with(v, user_v);

The compiler must not be permitted to transform this source code into
the following:

   ::

        1 rcu_read_lock();
        2 p = rcu_dereference(gp);
        3 get_user(user_v, user_p); // BUG: POSSIBLE PAGE FAULT!!!
        4 v = p->value;
        5 rcu_read_unlock();
        6 do_something_with(v, user_v);

If the compiler did make this transformation in a ``CONFIG_PREEMPTION=n`` kernel
build, and if get_user() did page fault, the result would be a quiescent
state in the middle of an RCU read-side critical section.
This misplaced
quiescent state could result in line 4 being a use-after-free access,
which could be bad for your kernel's actuarial statistics. Similar examples
can be constructed with the call to get_user() preceding the
rcu_read_lock().

Unfortunately, get_user() doesn't have any particular ordering properties,
and in some architectures the underlying ``asm`` isn't even marked
``volatile``. And even if it were marked ``volatile``, the above access to
``p->value`` is not volatile, so the compiler would not have any reason to keep
those two accesses in order.

Therefore, the Linux-kernel definitions of rcu_read_lock() and
rcu_read_unlock() must act as compiler barriers, at least for outermost
instances of rcu_read_lock() and rcu_read_unlock() within a nested set
of RCU read-side critical sections.

Energy Efficiency
~~~~~~~~~~~~~~~~~

Interrupting idle CPUs is considered socially unacceptable, especially
by people with battery-powered embedded systems. RCU therefore conserves
energy by detecting which CPUs are idle, including tracking CPUs that
have been interrupted from idle. This is a large part of the
energy-efficiency requirement, so I learned of this via an irate phone
call.

Because RCU avoids interrupting idle CPUs, it is illegal to execute an
RCU read-side critical section on an idle CPU. (Kernels built with
``CONFIG_PROVE_RCU=y`` will splat if you try it.)

It is similarly socially unacceptable to interrupt an ``nohz_full`` CPU
running in userspace. RCU must therefore track ``nohz_full`` userspace
execution. RCU must therefore be able to sample state at two points in
time, and be able to determine whether or not some other CPU spent any
time idle and/or executing in userspace.

These energy-efficiency requirements have proven quite difficult to
understand and to meet; for example, there have been more than five
clean-sheet rewrites of RCU's energy-efficiency code, the last of which
was finally able to demonstrate `real energy savings running on real
hardware
[PDF] <http://www.rdrop.com/users/paulmck/realtime/paper/AMPenergy.2013.04.19a.pdf>`__.
As noted earlier, I learned of many of these requirements via angry
phone calls: Flaming me on the Linux-kernel mailing list was apparently
not sufficient to fully vent their ire at RCU's energy-efficiency bugs!

Scheduling-Clock Interrupts and RCU
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The kernel transitions between in-kernel non-idle execution, userspace
execution, and the idle loop. Depending on kernel configuration, RCU
handles these states differently:

+-----------------+------------------+------------------+-----------------+
| ``HZ`` Kconfig  | In-Kernel        | Usermode         | Idle            |
+=================+==================+==================+=================+
| ``HZ_PERIODIC`` | Can rely on      | Can rely on      | Can rely on     |
|                 | scheduling-clock | scheduling-clock | RCU's           |
|                 | interrupt.       | interrupt and    | dyntick-idle    |
|                 |                  | its detection    | detection.      |
|                 |                  | of interrupt     |                 |
|                 |                  | from usermode.   |                 |
+-----------------+------------------+------------------+-----------------+
| ``NO_HZ_IDLE``  | Can rely on      | Can rely on      | Can rely on     |
|                 | scheduling-clock | scheduling-clock | RCU's           |
|                 | interrupt.       | interrupt and    | dyntick-idle    |
|                 |                  | its detection    | detection.      |
|                 |                  | of interrupt     |                 |
|                 |                  | from usermode.   |                 |
+-----------------+------------------+------------------+-----------------+
| ``NO_HZ_FULL``  | Can only         | Can rely on      | Can rely on     |
|                 | sometimes rely   | RCU's            | RCU's           |
|                 | on               | dyntick-idle     | dyntick-idle    |
|                 | scheduling-clock | detection.       | detection.      |
|                 | interrupt. In    |                  |                 |
|                 | other cases, it  |                  |                 |
|                 | is necessary to  |                  |                 |
|                 | bound kernel     |                  |                 |
|                 | execution times  |                  |                 |
|                 | and/or use       |                  |                 |
|                 | IPIs.            |                  |                 |
+-----------------+------------------+------------------+-----------------+

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why can't ``NO_HZ_FULL`` in-kernel execution rely on the              |
| scheduling-clock interrupt, just like ``HZ_PERIODIC`` and             |
| ``NO_HZ_IDLE`` do?                                                    |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Because, as a performance optimization, ``NO_HZ_FULL`` does not       |
| necessarily re-enable the scheduling-clock interrupt on entry to each |
| and every system call.                                                |
+-----------------------------------------------------------------------+

However, RCU must be reliably informed as to whether any given CPU is
currently in the idle loop, and, for ``NO_HZ_FULL``, also whether that
CPU is executing in usermode, as discussed
`earlier <Energy Efficiency_>`__. It also requires that the
scheduling-clock interrupt be enabled when RCU needs it to be:

#. If a CPU is either idle or executing in usermode, and RCU believes it
   is non-idle, the scheduling-clock tick had better be running.
   Otherwise, you will get RCU CPU stall warnings. Or at best, very long
   (11-second) grace periods, with a pointless IPI waking the CPU from
   time to time.
#. If a CPU is in a portion of the kernel that executes RCU read-side
   critical sections, and RCU believes this CPU to be idle, you will get
   random memory corruption. **DON'T DO THIS!!!**
   This is one reason to test with lockdep, which will complain about
   this sort of thing.
#. If a CPU is in a portion of the kernel that is absolutely positively
   no-joking guaranteed to never execute any RCU read-side critical
   sections, and RCU believes this CPU to be idle, no problem. This
   sort of thing is used by some architectures for light-weight
   exception handlers, which can then avoid the overhead of
   ct_irq_enter() and ct_irq_exit() at exception entry and
   exit, respectively. Some go further and avoid the entireties of
   irq_enter() and irq_exit().
   Just make very sure you are running some of your tests with
   ``CONFIG_PROVE_RCU=y``, just in case one of your code paths was in
   fact joking about not doing RCU read-side critical sections.
#. If a CPU is executing in the kernel with the scheduling-clock
   interrupt disabled and RCU believes this CPU to be non-idle, and if
   the CPU goes idle (from an RCU perspective) every few jiffies, no
   problem. It is usually OK for there to be the occasional gap between
   idle periods of up to a second or so.
   If the gap grows too long, you get RCU CPU stall warnings.
#. If a CPU is either idle or executing in usermode, and RCU believes it
   to be idle, of course no problem.
#. If a CPU is executing in the kernel, the kernel code path is passing
   through quiescent states at a reasonable frequency (preferably about
   once per few jiffies, but the occasional excursion to a second or so
   is usually OK) and the scheduling-clock interrupt is enabled, of
   course no problem.
   If the gap between a successive pair of quiescent states grows too
   long, you get RCU CPU stall warnings.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But what if my driver has a hardware interrupt handler that can run   |
| for many seconds? I cannot invoke schedule() from a hardware          |
| interrupt handler, after all!                                         |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| One approach is to do ``ct_irq_exit();ct_irq_enter();`` every so      |
| often. But given that long-running interrupt handlers can cause other |
| problems, not least for response time, shouldn't you work to keep     |
| your interrupt handler's runtime within reasonable bounds?            |
+-----------------------------------------------------------------------+
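
In other words, the quiz answer suggests a structure along the lines of
the following sketch, in which more_device_work() and
do_one_unit_of_work() are hypothetical stand-ins for driver-specific
logic:

   ::

        1 static irqreturn_t my_long_irq_handler(int irq, void *dev_id)
        2 {
        3   while (more_device_work()) {
        4     do_one_unit_of_work();
        5     ct_irq_exit();   /* Tell RCU this interrupt has "ended"... */
        6     ct_irq_enter();  /* ...and that another has "begun". */
        7   }
        8   return IRQ_HANDLED;
        9 }

If the interrupt arrived from an idle CPU, each such exit/enter pair
allows RCU's dyntick-idle detection to observe a momentary return to
idle, thus avoiding an otherwise inevitable RCU CPU stall warning.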

But as long as RCU is properly informed of kernel state transitions
between in-kernel execution, usermode execution, and idle, and as long
as the scheduling-clock interrupt is enabled when RCU needs it to be,
you can rest assured that the bugs you encounter will be in some other
part of RCU or some other part of the kernel!

Memory Efficiency
~~~~~~~~~~~~~~~~~

Although small-memory non-realtime systems can simply use Tiny RCU, code
size is only one aspect of memory efficiency. Another aspect is the size
of the ``rcu_head`` structure used by call_rcu() and
kfree_rcu(). Although this structure contains nothing more than a
pair of pointers, it does appear in many RCU-protected data structures,
including some that are size critical. The ``page`` structure is a case
in point, as evidenced by the many occurrences of the ``union`` keyword
within that structure.

This need for memory efficiency is one reason that RCU uses hand-crafted
singly linked lists to track the ``rcu_head`` structures that are
waiting for a grace period to elapse. It is also the reason why
``rcu_head`` structures do not contain debug information, such as fields
tracking the file and line of the call_rcu() or kfree_rcu() that
posted them. Although this information might appear in debug-only kernel
builds at some point, in the meantime, the ``->func`` field will often
provide the needed debug information.
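
The following sketch shows the usual pattern, with the ``rcu_head``
structure embedded in a hypothetical ``struct my_data`` and with
``->func`` pointing at the reclaim function, which frequently suffices
to identify a callback's origin when debugging:

   ::

        1 struct my_data {
        2   struct list_head list;
        3   int value;
        4   struct rcu_head rh;  /* Only two pointers of overhead. */
        5 };
        6
        7 static void my_data_reclaim(struct rcu_head *rhp)
        8 {
        9   kfree(container_of(rhp, struct my_data, rh));
       10 }
       11
       12 /* After removing a my_data element from its list: */
       13 call_rcu(&old->rh, my_data_reclaim);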

That said, there are limits. RCU requires that the ``rcu_head``
structure be aligned to a two-byte boundary, and passing a misaligned
``rcu_head`` structure to one of the call_rcu() family of functions
will result in a splat. It is therefore necessary to exercise caution
when packing structures containing fields of type ``rcu_head``. Why not
a four-byte or even eight-byte alignment requirement? Because the m68k
architecture provides only two-byte alignment, and thus acts as
alignment's least common denominator.

The reason for reserving the bottom bit of pointers to ``rcu_head``
structures is to leave the door open to “lazy” callbacks whose
invocations can safely be deferred. Deferring invocation could
potentially have energy-efficiency benefits, but only if the rate of
non-lazy callbacks decreases significantly for some important workload.
In the meantime, reserving the bottom bit keeps this option open in case
it one day becomes useful.

Performance, Scalability, Response Time, and Reliability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Expanding on the `earlier
discussion <Performance and Scalability_>`__, RCU is used heavily by
hot code paths in performance-critical portions of the Linux kernel's
networking, security, virtualization, and scheduling code. RCU
must therefore use efficient implementations, especially in its
read-side primitives. To that end, it would be good if preemptible RCU's
implementation of rcu_read_lock() could be inlined; however, doing
this requires resolving ``#include`` issues with the ``task_struct``
structure.

The Linux kernel supports hardware configurations with up to 4096 CPUs,
which means that RCU must be extremely scalable. Algorithms that involve
frequent acquisitions of global locks or frequent atomic operations on
global variables simply cannot be tolerated within the RCU
implementation. RCU therefore makes heavy use of a combining tree based
on the ``rcu_node`` structure.
RCU is required to tolerate all CPUs
continuously invoking any combination of RCU's runtime primitives with
minimal per-operation overhead. In fact, in many cases, increasing load
must *decrease* the per-operation overhead, witness the batching
optimizations for synchronize_rcu(), call_rcu(),
synchronize_rcu_expedited(), and rcu_barrier(). As a general
rule, RCU must cheerfully accept whatever the rest of the Linux kernel
decides to throw at it.

The Linux kernel is used for real-time workloads, especially in
conjunction with the `-rt
patchset <https://wiki.linuxfoundation.org/realtime/>`__. The
real-time-latency response requirements are such that the traditional
approach of disabling preemption across RCU read-side critical sections
is inappropriate. Kernels built with ``CONFIG_PREEMPTION=y`` therefore use
an RCU implementation that allows RCU read-side critical sections to be
preempted. This requirement made its presence known after users made it
clear that an earlier `real-time
patch <https://lwn.net/Articles/107930/>`__ did not meet their needs, in
conjunction with some `RCU
issues <https://lore.kernel.org/r/20050318002026.GA2693@us.ibm.com>`__
encountered by a very early version of the -rt patchset.

In addition, RCU must make do with a sub-100-microsecond real-time
latency budget. In fact, on smaller systems with the -rt patchset, the
Linux kernel provides sub-20-microsecond real-time latencies for the
whole kernel, including RCU. RCU's scalability and latency must
therefore be sufficient for these sorts of configurations. To my
surprise, the sub-100-microsecond real-time latency budget `applies to
even the largest systems
[PDF] <http://www.rdrop.com/users/paulmck/realtime/paper/bigrt.2013.01.31a.LCA.pdf>`__,
up to and including systems with 4096 CPUs. This real-time requirement
motivated the grace-period kthread, which also simplified handling of a
number of race conditions.

RCU must avoid degrading real-time response for CPU-bound threads,
whether executing in usermode (which is one use case for
``CONFIG_NO_HZ_FULL=y``) or in the kernel. That said, CPU-bound loops in
the kernel must execute cond_resched() at least once per few tens of
milliseconds in order to avoid receiving an IPI from RCU.
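
As a minimal sketch, a CPU-bound in-kernel loop might report quiescent
states as follows, with process_one_item() and its ``struct item``
argument being hypothetical:

   ::

      1 static void process_many_items(struct item *items, int n)
      2 {
      3   int i;
      4
      5   for (i = 0; i < n; i++) {
      6     process_one_item(&items[i]);
      7     cond_resched(); /* Report a quiescent state now and then. */
      8   }
      9 }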

Finally, RCU's status as a synchronization primitive means that any RCU
failure can result in arbitrary memory corruption that can be extremely
difficult to debug. This means that RCU must be extremely reliable,
which in practice also means that RCU must have an aggressive
stress-test suite. This stress-test suite is called ``rcutorture``.

Although the need for ``rcutorture`` was no surprise, the current
immense popularity of the Linux kernel is posing interesting—and perhaps
unprecedented—validation challenges. To see this, keep in mind that
there are well over one billion instances of the Linux kernel running
today, given Android smartphones, Linux-powered televisions, and
servers. This number can be expected to increase sharply with the advent
of the celebrated Internet of Things.

Suppose that RCU contains a race condition that manifests on average
once per million years of runtime. This bug will be occurring about
three times per *day* across the installed base. RCU could simply hide
behind hardware error rates, given that no one should really expect
their smartphone to last for a million years. However, anyone taking too
much comfort from this thought should consider the fact that in most
jurisdictions, a successful multi-year test of a given mechanism, which
might include a Linux kernel, suffices for a number of types of
safety-critical certifications. In fact, rumor has it that the Linux
kernel is already being used in production for safety-critical
applications. I don't know about you, but I would feel quite bad if a
bug in RCU killed someone. Which might explain my recent focus on
validation and verification.

Other RCU Flavors
-----------------

One of the more surprising things about RCU is that there are now no
fewer than seven *flavors*, or API families. In addition, the primary
flavor that has been the sole focus up to this point has two different
implementations, non-preemptible and preemptible. The other six flavors
are listed below, with requirements for each described in a separate
section.

#. `Bottom-Half Flavor (Historical)`_
#. `Sched Flavor (Historical)`_
#. `Sleepable RCU`_
#. `Tasks RCU`_
#. `Tasks Rude RCU`_
#. `Tasks Trace RCU`_

Bottom-Half Flavor (Historical)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The RCU-bh flavor of RCU has since been expressed in terms of the other
RCU flavors as part of a consolidation of the three flavors into a
single flavor. The read-side API remains, and continues to disable
softirq and to be accounted for by lockdep. Much of the material in this
section is therefore strictly historical in nature.

The softirq-disable (AKA “bottom-half”, hence the “_bh” abbreviations)
flavor of RCU, or *RCU-bh*, was developed by Dipankar Sarma to provide a
flavor of RCU that could withstand the network-based denial-of-service
attacks researched by Robert Olsson. These attacks placed so much
networking load on the system that some of the CPUs never exited softirq
execution, which in turn prevented those CPUs from ever executing a
context switch, which, in the RCU implementation of that time, prevented
grace periods from ever ending. The result was an out-of-memory
condition and a system hang.

The solution was the creation of RCU-bh, which does
local_bh_disable() across its read-side critical sections, and which
uses the transition from one type of softirq processing to another as a
quiescent state in addition to context switch, idle, user mode, and
offline. This means that RCU-bh grace periods can complete even when
some of the CPUs execute in softirq indefinitely, thus allowing
algorithms based on RCU-bh to withstand network-based denial-of-service
attacks.

Because rcu_read_lock_bh() and rcu_read_unlock_bh() disable and
re-enable softirq handlers, any attempt to start a softirq handler
during the RCU-bh read-side critical section will be deferred. In this
case, rcu_read_unlock_bh() will invoke softirq processing, which can
take considerable time. One can of course argue that this softirq
overhead should be associated with the code following the RCU-bh
read-side critical section rather than rcu_read_unlock_bh(), but the
fact is that most profiling tools cannot be expected to make this sort
of fine distinction. For example, suppose that a three-millisecond-long
RCU-bh read-side critical section executes during a time of heavy
networking load. There will very likely be an attempt to invoke at least
one softirq handler during that three milliseconds, but any such
invocation will be delayed until the time of the
rcu_read_unlock_bh(). This can of course make it appear at first
glance as if rcu_read_unlock_bh() were executing very slowly.
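
As a minimal sketch, an RCU-bh reader looks as follows, with ``gp`` and
do_something_with() standing in for a real RCU-protected pointer and
real read-side processing:

   ::

      1 rcu_read_lock_bh();
      2 p = rcu_dereference_bh(gp);
      3 if (p)
      4   do_something_with(p);
      5 rcu_read_unlock_bh(); /* May run deferred softirq handlers. */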

The `RCU-bh
API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
includes rcu_read_lock_bh(), rcu_read_unlock_bh(), rcu_dereference_bh(),
rcu_dereference_bh_check(), and rcu_read_lock_bh_held(). However, the
old RCU-bh update-side APIs are now gone, replaced by synchronize_rcu(),
synchronize_rcu_expedited(), call_rcu(), and rcu_barrier(). In addition,
anything that disables bottom halves also marks an RCU-bh read-side
critical section, including local_bh_disable() and local_bh_enable(),
local_irq_save() and local_irq_restore(), and so on.

Sched Flavor (Historical)
~~~~~~~~~~~~~~~~~~~~~~~~~

The RCU-sched flavor of RCU has since been expressed in terms of the
other RCU flavors as part of a consolidation of the three flavors into a
single flavor. The read-side API remains, and continues to disable
preemption and to be accounted for by lockdep. Much of the material in
this section is therefore strictly historical in nature.

Before preemptible RCU, waiting for an RCU grace period had the side
effect of also waiting for all pre-existing interrupt and NMI handlers.
However, there are legitimate preemptible-RCU implementations that do
not have this property, given that any point in the code outside of an
RCU read-side critical section can be a quiescent state. Therefore,
*RCU-sched* was created, which follows “classic” RCU in that an
RCU-sched grace period waits for pre-existing interrupt and NMI
handlers. In kernels built with ``CONFIG_PREEMPTION=n``, the RCU and
RCU-sched APIs have identical implementations, while kernels built with
``CONFIG_PREEMPTION=y`` provide a separate implementation for each.

Note well that in ``CONFIG_PREEMPTION=y`` kernels,
rcu_read_lock_sched() and rcu_read_unlock_sched() disable and
re-enable preemption, respectively. This means that if there was a
preemption attempt during the RCU-sched read-side critical section,
rcu_read_unlock_sched() will enter the scheduler, with all the
latency and overhead entailed. Just as with rcu_read_unlock_bh(),
this can make it look as if rcu_read_unlock_sched() were executing
very slowly. However, the highest-priority task won't be preempted, so
that task will enjoy low-overhead rcu_read_unlock_sched()
invocations.
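
The corresponding RCU-sched reader can be sketched as follows, again
with ``gp`` and do_something_with() as hypothetical stand-ins:

   ::

      1 rcu_read_lock_sched(); /* Also disables preemption. */
      2 p = rcu_dereference_sched(gp);
      3 if (p)
      4   do_something_with(p);
      5 rcu_read_unlock_sched(); /* May enter the scheduler. */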

The `RCU-sched
API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
includes rcu_read_lock_sched(), rcu_read_unlock_sched(),
rcu_read_lock_sched_notrace(), rcu_read_unlock_sched_notrace(),
rcu_dereference_sched(), rcu_dereference_sched_check(), and
rcu_read_lock_sched_held(). However, the old RCU-sched update-side APIs
are now gone, replaced by synchronize_rcu(), synchronize_rcu_expedited(),
call_rcu(), and rcu_barrier(). In addition, anything that disables
preemption also marks an RCU-sched read-side critical section,
including preempt_disable() and preempt_enable(), local_irq_save()
and local_irq_restore(), and so on.

Sleepable RCU
~~~~~~~~~~~~~

For well over a decade, someone saying “I need to block within an RCU
read-side critical section” was a reliable indication that this someone
did not understand RCU. After all, if you are always blocking in an RCU
read-side critical section, you can probably afford to use a
higher-overhead synchronization mechanism. However, that changed with
the advent of the Linux kernel's notifiers, whose RCU read-side critical
sections almost never sleep, but sometimes need to. This resulted in the
introduction of `sleepable RCU <https://lwn.net/Articles/202847/>`__, or
*SRCU*.

SRCU allows different domains to be defined, with each such domain
defined by an instance of an ``srcu_struct`` structure. A pointer to
this structure must be passed in to each SRCU function, for example,
``synchronize_srcu(&ss)``, where ``ss`` is the ``srcu_struct``
structure. The key benefit of these domains is that a slow SRCU reader
in one domain does not delay an SRCU grace period in some other domain.
That said, one consequence of these domains is that read-side code must
pass a “cookie” from srcu_read_lock() to srcu_read_unlock(), for
example, as follows:

   ::

      1 int idx;
      2
      3 idx = srcu_read_lock(&ss);
      4 do_something();
      5 srcu_read_unlock(&ss, idx);

As noted above, it is legal to block within SRCU read-side critical
sections; however, with great power comes great responsibility. If you
block forever in one of a given domain's SRCU read-side critical
sections, then that domain's grace periods will also be blocked forever.
Of course, one good way to block forever is to deadlock, which can
happen if any operation in a given domain's SRCU read-side critical
section can wait, either directly or indirectly, for that domain's grace
period to elapse. For example, this results in a self-deadlock:

   ::

      1 int idx;
      2
      3 idx = srcu_read_lock(&ss);
      4 do_something();
      5 synchronize_srcu(&ss);
      6 srcu_read_unlock(&ss, idx);

However, if line 5 acquired a mutex that was held across a
synchronize_srcu() for domain ``ss``, deadlock would still be
possible. Furthermore, if line 5 acquired a mutex that was held across a
synchronize_srcu() for some other domain ``ss1``, and if an
``ss1``-domain SRCU read-side critical section acquired another mutex
that was held across an ``ss``-domain synchronize_srcu(), deadlock
would again be possible. Such a deadlock cycle could extend across an
arbitrarily large number of different SRCU domains. Again, with great
power comes great responsibility.

Unlike the other RCU flavors, SRCU read-side critical sections can run
on idle and even offline CPUs. This ability requires that
srcu_read_lock() and srcu_read_unlock() contain memory barriers,
which means that SRCU readers will run a bit slower than would RCU
readers. It also motivates the smp_mb__after_srcu_read_unlock() API,
which, in combination with srcu_read_unlock(), guarantees a full
memory barrier.
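
For example, the following sketch, reusing the ``ss`` domain above,
provides full memory-barrier semantics at the end of an SRCU read-side
critical section:

   ::

      1 idx = srcu_read_lock(&ss);
      2 do_something();
      3 srcu_read_unlock(&ss, idx);
      4 smp_mb__after_srcu_read_unlock(); /* With line 3: full barrier. */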

Also unlike other RCU flavors, synchronize_srcu() may **not** be
invoked from CPU-hotplug notifiers, due to the fact that SRCU grace
periods make use of timers and the possibility of timers being
temporarily “stranded” on the outgoing CPU. This stranding of timers
means that timers posted to the outgoing CPU will not fire until late in
the CPU-hotplug process. The problem is that if a notifier is waiting on
an SRCU grace period, that grace period is waiting on a timer, and that
timer is stranded on the outgoing CPU, then the notifier will never be
awakened; in other words, deadlock has occurred. This same situation of
course also prohibits srcu_barrier() from being invoked from
CPU-hotplug notifiers.

SRCU also differs from other RCU flavors in that SRCU's expedited and
non-expedited grace periods are implemented by the same mechanism. This
means that in the current SRCU implementation, expediting a future grace
period has the side effect of expediting all prior grace periods that
have not yet completed. (But please note that this is a property of the
current implementation, not necessarily of future implementations.) In
addition, if SRCU has been idle for longer than the interval specified
by the ``srcutree.exp_holdoff`` kernel boot parameter (25 microseconds
by default), and if a synchronize_srcu() invocation ends this idle
period, that invocation will be automatically expedited.

As of v4.12, SRCU's callbacks are maintained per-CPU, eliminating a
locking bottleneck present in prior kernel versions. Although this will
allow users to put much heavier stress on call_srcu(), it is
important to note that SRCU does not yet take any special steps to deal
with callback flooding. So if you are posting (say) 10,000 SRCU
callbacks per second per CPU, you are probably totally OK, but if you
intend to post (say) 1,000,000 SRCU callbacks per second per CPU, please
run some tests first. SRCU just might need a few adjustments to deal
with that sort of load. Of course, your mileage may vary based on the
speed of your CPUs and the size of your memory.
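
As a minimal sketch, an SRCU domain can be defined at build time and a
callback posted against it, here reusing the hypothetical foo_reclaim()
function from the earlier call_rcu() example:

   ::

      1 DEFINE_STATIC_SRCU(my_srcu);
      2
      3 /* After removing fp from readers' view: */
      4 call_srcu(&my_srcu, &fp->rh, foo_reclaim);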

The `SRCU
API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
includes srcu_read_lock(), srcu_read_unlock(),
srcu_dereference(), srcu_dereference_check(),
synchronize_srcu(), synchronize_srcu_expedited(),
call_srcu(), srcu_barrier(), and srcu_read_lock_held(). It
also includes DEFINE_SRCU(), DEFINE_STATIC_SRCU(), and
init_srcu_struct() APIs for defining and initializing
``srcu_struct`` structures.

More recently, the SRCU API has added polling interfaces, which are
illustrated by the sketch following this list:

#. start_poll_synchronize_srcu() returns a cookie identifying
   the completion of a future SRCU grace period and ensures
   that this grace period will be started.
#. poll_state_synchronize_srcu() returns ``true`` iff the
   specified cookie corresponds to an already-completed
   SRCU grace period.
#. get_state_synchronize_srcu() returns a cookie just like
   start_poll_synchronize_srcu() does, but differs in that
   it does nothing to ensure that any future SRCU grace period
   will be started.

These functions are used to avoid unnecessary SRCU grace periods in
certain types of buffer-cache algorithms having multi-stage age-out
mechanisms. The idea is that by the time the block has aged completely
from the cache, an SRCU grace period will be very likely to have elapsed.
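
For example, the following sketch, again assuming the ``ss`` domain
above and a hypothetical age-out path, skips the grace-period wait
whenever the grace period has already elapsed:

   ::

      1 unsigned long cookie;
      2
      3 cookie = start_poll_synchronize_srcu(&ss);
      4 /* ...the block ages out of the cache... */
      5 if (!poll_state_synchronize_srcu(&ss, cookie))
      6   synchronize_srcu(&ss); /* Grace period not yet complete. */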

Tasks RCU
~~~~~~~~~

Some forms of tracing use “trampolines” to handle the binary rewriting
required to install different types of probes. It would be good to be
able to free old trampolines, which sounds like a job for some form of
RCU. However, because it is necessary to be able to install a trace
anywhere in the code, it is not possible to use read-side markers such
as rcu_read_lock() and rcu_read_unlock(). In addition, it does
not work to have these markers in the trampoline itself, because there
would need to be instructions following rcu_read_unlock(). Although
synchronize_rcu() would guarantee that execution reached the
rcu_read_unlock(), it would not be able to guarantee that execution
had completely left the trampoline. Worse yet, in some situations
the trampoline's protection must extend a few instructions *prior* to
execution reaching the trampoline. For example, these few instructions
might calculate the address of the trampoline, so that entering the
trampoline would be pre-ordained a surprisingly long time before execution
actually reached the trampoline itself.

The solution, in the form of `Tasks
RCU <https://lwn.net/Articles/607117/>`__, is to have implicit read-side
critical sections that are delimited by voluntary context switches, that
is, calls to schedule(), cond_resched(), and
synchronize_rcu_tasks(). In addition, transitions to and from
userspace execution also delimit tasks-RCU read-side critical sections.
Idle tasks are ignored by Tasks RCU, and Tasks Rude RCU may be used to
interact with them.

Note well that involuntary context switches are *not* Tasks-RCU quiescent
states. After all, in preemptible kernels, a task executing code in a
trampoline might be preempted. In this case, the Tasks-RCU grace period
clearly cannot end until that task resumes and its execution leaves that
trampoline. This means, among other things, that cond_resched() does
not provide a Tasks RCU quiescent state. (Instead, use rcu_softirq_qs()
from softirq or rcu_tasks_classic_qs() otherwise.)

The tasks-RCU API is quite compact, consisting only of
call_rcu_tasks(), synchronize_rcu_tasks(), and
rcu_barrier_tasks(). In ``CONFIG_PREEMPTION=n`` kernels, trampolines
cannot be preempted, so these APIs map to call_rcu(),
synchronize_rcu(), and rcu_barrier(), respectively. In
``CONFIG_PREEMPTION=y`` kernels, trampolines can be preempted, and these
three APIs are therefore implemented by separate functions that check
for voluntary context switches.
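
As a hypothetical sketch, an old trampoline might be reclaimed as
follows, with ``struct tramp``, ``old_tp``, and free_trampoline()
standing in for tracer-specific code:

   ::

      1 static void trampoline_reclaim(struct rcu_head *rhp)
      2 {
      3   free_trampoline(container_of(rhp, struct tramp, rh));
      4 }
      5
      6 /* After disconnecting old_tp from all probes: */
      7 call_rcu_tasks(&old_tp->rh, trampoline_reclaim);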

Tasks Rude RCU
~~~~~~~~~~~~~~

Some forms of tracing need to wait for all preemption-disabled regions
of code running on any online CPU, including those executed when RCU is
not watching. This means that synchronize_rcu() is insufficient, and
Tasks Rude RCU must be used instead. This flavor of RCU does its work by
forcing a workqueue to be scheduled on each online CPU, hence the "Rude"
moniker. And this operation is considered to be quite rude by real-time
workloads that don't want their ``nohz_full`` CPUs receiving IPIs and
by battery-powered systems that don't want their idle CPUs to be awakened.

Once kernel entry/exit and deep-idle functions have been properly tagged
``noinstr``, Tasks RCU can start paying attention to idle tasks (except
those that are idle from RCU's perspective) and then Tasks Rude RCU can
be removed from the kernel.

The tasks-rude-RCU API is also reader-marking-free and thus quite compact,
consisting solely of synchronize_rcu_tasks_rude().

Tasks Trace RCU
~~~~~~~~~~~~~~~

Some forms of tracing need to sleep in readers, but cannot tolerate
SRCU's read-side overhead, which includes a full memory barrier in both
srcu_read_lock() and srcu_read_unlock(). This need is handled by a
Tasks Trace RCU that uses scheduler locking and IPIs to synchronize with
readers. Real-time systems that cannot tolerate IPIs may build their
kernels with ``CONFIG_TASKS_TRACE_RCU_READ_MB=y``, which avoids the IPIs at
the expense of adding full memory barriers to the read-side primitives.

The tasks-trace-RCU API is also reasonably compact,
consisting of rcu_read_lock_trace(), rcu_read_unlock_trace(),
rcu_read_lock_trace_held(), call_rcu_tasks_trace(),
synchronize_rcu_tasks_trace(), and rcu_barrier_tasks_trace().
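
Unlike SRCU readers, Tasks Trace RCU readers need neither a domain
pointer nor a cookie. A minimal sketch, with
do_something_that_might_sleep() being hypothetical:

   ::

      1 rcu_read_lock_trace();
      2 do_something_that_might_sleep(); /* Sleeping is legal here. */
      3 rcu_read_unlock_trace();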

Possible Future Changes
-----------------------

One of the tricks that RCU uses to attain update-side scalability is to
increase grace-period latency with increasing numbers of CPUs. If this
becomes a serious problem, it will be necessary to rework the
grace-period state machine so as to avoid the need for the additional
latency.

RCU disables CPU hotplug in a few places, perhaps most notably in the
rcu_barrier() operations. If there is a strong reason to use
rcu_barrier() in CPU-hotplug notifiers, it will be necessary to
avoid disabling CPU hotplug. This would introduce some complexity, so
there had better be a *very* good reason.

The tradeoff between grace-period latency on the one hand and
interruptions of other CPUs on the other hand may need to be
re-examined. The desire is of course for zero grace-period latency as
well as zero interprocessor interrupts undertaken during an expedited
grace period operation. While this ideal is unlikely to be achievable,
it is quite possible that further improvements can be made.

The multiprocessor implementations of RCU use a combining tree that
groups CPUs so as to reduce lock contention and increase cache locality.
However, this combining tree does not spread its memory across NUMA
nodes nor does it align the CPU groups with hardware features such as
sockets or cores. Such spreading and alignment is currently believed to
be unnecessary because the hotpath read-side primitives do not access
the combining tree, nor does call_rcu() in the common case. If you
believe that your architecture needs such spreading and alignment, then
your architecture should also benefit from the
``rcutree.rcu_fanout_leaf`` boot parameter, which can be set to the
number of CPUs in a socket, NUMA node, or whatever. If the number of
CPUs is too large, use a fraction of the number of CPUs. If the number
of CPUs is a large prime number, well, that certainly is an
“interesting” architectural choice! More flexible arrangements might be
considered, but only if ``rcutree.rcu_fanout_leaf`` has proven
inadequate, and only if the inadequacy has been demonstrated by a
carefully run and realistic system-level workload.

Please note that arrangements that require RCU to remap CPU numbers will
require extremely good demonstration of need and full exploration of
alternatives.

RCU's various kthreads are reasonably recent additions. It is quite
likely that adjustments will be required to more gracefully handle
extreme loads. It might also be necessary to be able to relate CPU
utilization by RCU's kthreads and softirq handlers to the code that
instigated this CPU utilization. For example, RCU callback overhead
might be charged back to the originating call_rcu() instance, though
probably not in production kernels.

Additional work may be required to provide reasonable forward-progress
guarantees under heavy load for grace periods and for callback
invocation.

Summary
-------

This document has presented more than two decades' worth of RCU
requirements. Given that the requirements keep changing, this will not
be the last word on this subject, but at least it serves to get an
important subset of the requirements set forth.

Acknowledgments
---------------

I am grateful to Steven Rostedt, Lai Jiangshan, Ingo Molnar, Oleg
Nesterov, Borislav Petkov, Peter Zijlstra, Boqun Feng, and Andy
Lutomirski for their help in rendering this article human readable, and
to Michelle Rankin for her support of this effort. Other contributions
are acknowledged in the Linux kernel's git archive.