.. _rcu_barrier:

RCU and Unloadable Modules
==========================

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU updaters sometimes use call_rcu() to initiate an asynchronous wait for
a grace period to elapse. This primitive takes a pointer to an rcu_head
struct placed within the RCU-protected data structure and another pointer
to a function that may be invoked later to free that structure. Code to
delete an element p from a linked list from IRQ context might then be
as follows::

        list_del_rcu(p);
        call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows::

        static void p_callback(struct rcu_head *rp)
        {
                struct pstruct *p = container_of(rp, struct pstruct, rcu);

                kfree(p);
        }


Unloading Modules That Use call_rcu()
-------------------------------------

But what if the p_callback() function is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. For but one example,
such deferral is required in realtime kernels in order to avoid
excessive scheduling latencies.


rcu_barrier()
-------------

This situation can be handled by the rcu_barrier() primitive. Rather
than waiting for a grace period to elapse, rcu_barrier() waits for all
outstanding RCU callbacks to complete. Please note that rcu_barrier()
does **not** imply synchronize_rcu(); in particular, if there are no RCU
callbacks queued anywhere, rcu_barrier() is within its rights to return
immediately, without waiting for anything, let alone a grace period.

Pseudo-code using rcu_barrier() is as follows:

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.

There is also an srcu_barrier() function for SRCU, and you of course
must match the flavor of srcu_barrier() with that of call_srcu().
If your module uses multiple srcu_struct structures, then it must also
use multiple invocations of srcu_barrier() when unloading that module.
For example, if it uses call_rcu(), call_srcu() on srcu_struct_1, and
call_srcu() on srcu_struct_2, then the following three lines of code
will be required when unloading::

        rcu_barrier();
        srcu_barrier(&srcu_struct_1);
        srcu_barrier(&srcu_struct_2);

If latency is of the essence, workqueues could be used to run these
three functions concurrently, as sketched below.
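
For example, rcu_barrier() might be pushed onto a workqueue so that it
runs concurrently with the two srcu_barrier() calls. The following is a
minimal sketch of that idea, reusing srcu_struct_1 and srcu_struct_2
from the example above; the names rcu_barrier_work_fn and
wait_for_all_callbacks() are illustrative, not existing kernel APIs::

        static void rcu_barrier_work_fn(struct work_struct *unused)
        {
                rcu_barrier();  /* Runs concurrently with the srcu_barrier() calls. */
        }

        static DECLARE_WORK(rcu_barrier_work, rcu_barrier_work_fn);

        static void wait_for_all_callbacks(void)
        {
                schedule_work(&rcu_barrier_work);       /* Kick off rcu_barrier(). */
                srcu_barrier(&srcu_struct_1);           /* Meanwhile, wait for SRCU callbacks. */
                srcu_barrier(&srcu_struct_2);
                flush_work(&rcu_barrier_work);          /* Then wait for rcu_barrier() itself. */
        }

The two srcu_barrier() calls could of course be given work items of
their own if overlapping all three waits matters.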

An ancient version of the rcutorture module makes use of rcu_barrier()
in its exit function as follows::

   1 static void
   2 rcu_torture_cleanup(void)
   3 {
   4         int i;
   5
   6         fullstop = 1;
   7         if (shuffler_task != NULL) {
   8                 VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
   9                 kthread_stop(shuffler_task);
  10         }
  11         shuffler_task = NULL;
  12
  13         if (writer_task != NULL) {
  14                 VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
  15                 kthread_stop(writer_task);
  16         }
  17         writer_task = NULL;
  18
  19         if (reader_tasks != NULL) {
  20                 for (i = 0; i < nrealreaders; i++) {
  21                         if (reader_tasks[i] != NULL) {
  22                                 VERBOSE_PRINTK_STRING(
  23                                         "Stopping rcu_torture_reader task");
  24                                 kthread_stop(reader_tasks[i]);
  25                         }
  26                         reader_tasks[i] = NULL;
  27                 }
  28                 kfree(reader_tasks);
  29                 reader_tasks = NULL;
  30         }
  31         rcu_torture_current = NULL;
  32
  33         if (fakewriter_tasks != NULL) {
  34                 for (i = 0; i < nfakewriters; i++) {
  35                         if (fakewriter_tasks[i] != NULL) {
  36                                 VERBOSE_PRINTK_STRING(
  37                                         "Stopping rcu_torture_fakewriter task");
  38                                 kthread_stop(fakewriter_tasks[i]);
  39                         }
  40                         fakewriter_tasks[i] = NULL;
  41                 }
  42                 kfree(fakewriter_tasks);
  43                 fakewriter_tasks = NULL;
  44         }
  45
  46         if (stats_task != NULL) {
  47                 VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
  48                 kthread_stop(stats_task);
  49         }
  50         stats_task = NULL;
  51
  52         /* Wait for all RCU callbacks to fire. */
  53         rcu_barrier();
  54
  55         rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
  56
  57         if (cur_ops->cleanup != NULL)
  58                 cur_ops->cleanup();
  59         if (atomic_read(&n_rcu_torture_error))
  60                 rcu_torture_print_module_parms("End of test: FAILURE");
  61         else
  62                 rcu_torture_print_module_parms("End of test: SUCCESS");
  63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.

.. _rcubarrier_quiz_1:

Quick Quiz #1:
        Is there any other situation where rcu_barrier() might
        be required?

:ref:`Answer to Quick Quiz #1 <answer_rcubarrier_quiz_1>`

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first refrain
from posting new timers, cancel (or wait for) all the already-posted
timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.

Of course, if your module uses call_rcu(), you will need to invoke
rcu_barrier() before unloading. Similarly, if your module uses
call_srcu(), you will need to invoke srcu_barrier() before unloading,
and on the same srcu_struct structure. If your module uses call_rcu()
**and** call_srcu(), then (as noted above) you will need to invoke
rcu_barrier() **and** srcu_barrier().
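
Putting these pieces together, a module-exit function might first stop
its timer and only then wait for callbacks. The following is a minimal
sketch under that assumption; my_timer, my_srcu, stop_posting, and
my_exit() are all hypothetical names, not existing kernel identifiers::

        static struct timer_list my_timer;      /* Hypothetical timer whose handler uses call_rcu(). */
        static struct srcu_struct my_srcu;      /* Hypothetical srcu_struct passed to call_srcu(). */
        static bool stop_posting;               /* Timer handler checks this before re-arming. */

        static void __exit my_exit(void)
        {
                WRITE_ONCE(stop_posting, true); /* 1. Prevent new timers from being posted. */
                del_timer_sync(&my_timer);      /* 2. Cancel the timer, waiting out a running handler. */
                rcu_barrier();                  /* 3. Wait for remaining RCU callbacks. */
                srcu_barrier(&my_srcu);         /* 4. Wait for remaining SRCU callbacks on my_srcu. */
        }
        module_exit(my_exit);

The ordering is the point: only after step 2 is it guaranteed that no
new callbacks will be posted, so only then do the barriers in steps 3
and 4 wait for everything.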


Implementing rcu_barrier()
--------------------------

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing,
at which point all earlier RCU callbacks are guaranteed to have
completed.

The original code for rcu_barrier() was roughly as follows::

   1 void rcu_barrier(void)
   2 {
   3         BUG_ON(in_interrupt());
   4         /* Take cpucontrol mutex to protect against CPU hotplug */
   5         mutex_lock(&rcu_barrier_mutex);
   6         init_completion(&rcu_barrier_completion);
   7         atomic_set(&rcu_barrier_cpu_count, 1);
   8         on_each_cpu(rcu_barrier_func, NULL, 0, 1);
   9         if (atomic_dec_and_test(&rcu_barrier_cpu_count))
  10                 complete(&rcu_barrier_completion);
  11         wait_for_completion(&rcu_barrier_completion);
  12         mutex_unlock(&rcu_barrier_mutex);
  13 }

Line 3 verifies that the caller is in process context, and lines 5 and 12
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 removes the initial count from
rcu_barrier_cpu_count, and if this count is now zero, line 10 finalizes
the completion, which prevents line 11 from blocking. Either way,
line 11 then waits (if needed) for the completion.

.. _rcubarrier_quiz_2:

Quick Quiz #2:
        Why doesn't line 7 initialize rcu_barrier_cpu_count to zero,
        thereby avoiding the need for lines 9 and 10?

:ref:`Answer to Quick Quiz #2 <answer_rcubarrier_quiz_2>`

This code was rewritten in 2008 and several times thereafter, but this
still gives the general idea.

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows::

   1 static void rcu_barrier_func(void *notused)
   2 {
   3         int cpu = smp_processor_id();
   4         struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
   5         struct rcu_head *head;
   6
   7         head = &rdp->barrier;
   8         atomic_inc(&rcu_barrier_cpu_count);
   9         call_rcu(head, rcu_barrier_callback);
  10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments the global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows::

        static void rcu_barrier_callback(struct rcu_head *notused)
        {
                if (atomic_dec_and_test(&rcu_barrier_cpu_count))
                        complete(&rcu_barrier_completion);
        }
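
The fact that callbacks are never reordered within a given CPU's queue
means that a single such marker callback can be used on its own: code
that only needs to wait for callbacks it queued earlier on the current
CPU could post one marker and wait on a completion, which is the
per-CPU building block that rcu_barrier() applies once per CPU. A
minimal sketch of that building block, with hypothetical my_barrier_*
names::

        struct my_barrier {
                struct rcu_head rh;
                struct completion done;
        };

        static void my_barrier_cb(struct rcu_head *rh)
        {
                struct my_barrier *b = container_of(rh, struct my_barrier, rh);

                complete(&b->done);     /* Earlier callbacks on this queue have run. */
        }

        static void my_wait_for_prior_callbacks(void)
        {
                struct my_barrier b;

                init_completion(&b.done);
                call_rcu(&b.rh, my_barrier_cb); /* Marker lands after already-queued callbacks. */
                wait_for_completion(&b.done);   /* Safe: b stays on-stack until the marker runs. */
        }

Because a single marker says nothing about callbacks queued by other
CPUs, the real implementation must combine one marker per CPU with the
shared counter shown above.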

.. _rcubarrier_quiz_3:

Quick Quiz #3:
        What happens if CPU 0's rcu_barrier_func() executes
        immediately (thus incrementing rcu_barrier_cpu_count to the
        value one), but the other CPUs' rcu_barrier_func() invocations
        are delayed for a full grace period? Couldn't this result in
        rcu_barrier() returning prematurely?

:ref:`Answer to Quick Quiz #3 <answer_rcubarrier_quiz_3>`

The current rcu_barrier() implementation is more complex, due to the need
to avoid disturbing idle CPUs (especially on battery-powered systems)
and the need to minimally disturb non-idle CPUs in real-time systems.
In addition, a great many optimizations have been applied. However,
the code above illustrates the concepts.


rcu_barrier() Summary
---------------------

The rcu_barrier() primitive is used relatively infrequently, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes
------------------------

.. _answer_rcubarrier_quiz_1:

Quick Quiz #1:
        Is there any other situation where rcu_barrier() might
        be required?

Answer:
        Interestingly enough, rcu_barrier() was not originally
        implemented for module unloading. Nikita Danilov was using
        RCU in a filesystem, which resulted in a similar situation at
        filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
        in response, so that Nikita could invoke it during the
        filesystem-unmount process.

        Much later, yours truly hit the RCU module-unload problem when
        implementing rcutorture, and found that rcu_barrier() solves
        this problem as well.

:ref:`Back to Quick Quiz #1 <rcubarrier_quiz_1>`

.. _answer_rcubarrier_quiz_2:

Quick Quiz #2:
        Why doesn't line 7 initialize rcu_barrier_cpu_count to zero,
        thereby avoiding the need for lines 9 and 10?

Answer:
        Suppose that the on_each_cpu() function shown on line 8 was
        delayed, so that CPU 0's rcu_barrier_func() executed and
        the corresponding grace period elapsed, all before CPU 1's
        rcu_barrier_func() started executing. This would result in
        rcu_barrier_cpu_count being decremented to zero, so that line
        11's wait_for_completion() would return immediately, failing to
        wait for CPU 1's callbacks to be invoked.

        Note that this was not a problem when the rcu_barrier() code
        was first added back in 2005. This is because on_each_cpu()
        disables preemption, which acted as an RCU read-side critical
        section, thus preventing CPU 0's grace period from completing
        until on_each_cpu() had dealt with all of the CPUs. However,
        with the advent of preemptible RCU, rcu_barrier() no longer
        waited on nonpreemptible regions of code in preemptible kernels,
        that being the job of the new rcu_barrier_sched() function.

        However, with the RCU flavor consolidation around v4.20, this
        possibility was once again ruled out, because the consolidated
        RCU once again waits on nonpreemptible regions of code.

        Nevertheless, that extra count might still be a good idea.
        Relying on these sorts of accidents of implementation can result
        in later surprise bugs when the implementation changes.

:ref:`Back to Quick Quiz #2 <rcubarrier_quiz_2>`

.. _answer_rcubarrier_quiz_3:

Quick Quiz #3:
        What happens if CPU 0's rcu_barrier_func() executes
        immediately (thus incrementing rcu_barrier_cpu_count to the
        value one), but the other CPUs' rcu_barrier_func() invocations
        are delayed for a full grace period? Couldn't this result in
        rcu_barrier() returning prematurely?

Answer:
        This cannot happen. The reason is that on_each_cpu() has its last
        argument, the wait flag, set to "1". This flag is passed through
        to smp_call_function() and further to smp_call_function_on_cpu(),
        causing the latter to spin until the cross-CPU invocation of
        rcu_barrier_func() has completed. This by itself would prevent
        a grace period from completing on non-CONFIG_PREEMPTION kernels,
        since each CPU must undergo a context switch (or other quiescent
        state) before the grace period can complete. However, this is
        of no use in CONFIG_PREEMPTION kernels.

        Therefore, on_each_cpu() disables preemption across its call
        to smp_call_function() and also across the local call to
        rcu_barrier_func(). Because recent RCU implementations treat
        preemption-disabled regions of code as RCU read-side critical
        sections, this prevents grace periods from completing. This
        means that all CPUs have executed rcu_barrier_func() before
        the first rcu_barrier_callback() can possibly execute, in turn
        preventing rcu_barrier_cpu_count from prematurely reaching zero.

        But if on_each_cpu() ever decides to forgo disabling preemption,
        as might well happen due to real-time latency considerations,
        initializing rcu_barrier_cpu_count to one will save the day.

:ref:`Back to Quick Quiz #3 <rcubarrier_quiz_3>`