Proper Locking Under a Preemptible Kernel: Keeping Kernel Code Preempt-Safe
Thankfully, the Linux preemptible kernel model leverages existing SMP
locking mechanisms.  Thus, the kernel requires explicit additional locking
for very few additional situations.
RULE #1: Per-CPU data structures need explicit protection
First, since the data is per-CPU, it may not have explicit SMP locking, but
require it otherwise.  Second, when a task is preempted, the processor may
be rescheduled onto a different processor.
Under preemption, the state of the CPU must be protected.  This is
arch-dependent, but includes CPU structures and state not preserved over a
context switch.  For example, on x86, entering and exiting FPU mode is now a
critical section that must occur while preemption is disabled.  Think what
would happen if the kernel is executing a floating-point instruction and is
then preempted.
The functions are nestable.  In other words, you can call preempt_disable
n-times in a code path, and preemption will not be reenabled until the n-th
call to preempt_enable.
But keep in mind that 'irqs disabled' is a fundamentally unsafe way of
disabling preemption - any cond_resched() or cond_resched_lock() might trigger
a reschedule if the preempt count is 0.  So use this implicit
preemption-disabling property only if you know that the affected codepath
does not do any of this.
	cpucache_t *cc; /* this is per-CPU */
	preempt_disable();
	cc = cc_data(searchp);
	if (cc && cc->avail) {
		__free_block(searchp, cc_entry(cc), cc->avail);
		cc->avail = 0;
	}
	preempt_enable();
	return 0;
	if (buf[smp_processor_id()] == -1) printk(KERN_INFO "wee!\n");
This code is not preempt-safe, but see how easily we can fix it by simply
moving the spin_lock up two lines.
Note that in 2.5 interrupt disabling is now only per-CPU (i.e. local).
is done.  They may also be called within a spin-lock protected region,
however, if they are ever called outside of such a region, a test to see if
preemption is required should be made.