Lines Matching full:cpus

3  * coupled.c - helper functions to enter the same idle state on multiple cpus
24 * cpus cannot be independently powered down, either due to
31 * shared between the cpus (L2 cache, interrupt controller, and
33 * be tightly controlled on both cpus.
36 * WFI state until all cpus are ready to enter a coupled state, at
38 * cpus at approximately the same time.
40 * Once all cpus are ready to enter idle, they are woken by an smp
42 * cpus will find work to do, and choose not to enter idle. A
43 * final pass is needed to guarantee that all cpus will call the
46 * ready counter matches the number of online coupled cpus. If any
47 * cpu exits idle, the other cpus will decrement their counter and
56 * and only read after all the cpus are ready for the coupled idle
60 * of cpus in the coupled set that are currently or soon will be
61 * online. waiting_count tracks the number of cpus that are in
63 * ready_count tracks the number of cpus that are in the ready loop
69 * coupled cpus, usually the same as cpu_possible_mask if all cpus
77 * state that affects multiple cpus.
80 * that affects multiple cpus. This function is guaranteed to be
81 * called on all cpus at approximately the same time. The driver
82 * should ensure that the cpus all abort together if any cpu tries
88 * struct cpuidle_coupled - data for set of cpus that share a coupled idle state
89 * @coupled_cpus: mask of cpus that are part of the coupled set
90 * @requested_state: array of requested states for cpus in the coupled set
91 * @ready_waiting_counts: combined count of cpus in ready or waiting loops
93 * @online_count: count of cpus that are online
119 * in use. This prevents a deadlock where two cpus are waiting for each other's
132 * cpuidle_coupled_parallel_barrier - synchronize all online coupled cpus
137 * cpus in the same coupled group have called this function. Once any caller
173 * Returns true if the target state is coupled with cpus besides this one
216 * is equal to the number of online cpus. Prevents a race where one cpu
219 * down from the number of online cpus without going through the coupled idle
223 * counter was equal to the number of online cpus.
239 * cpuidle_coupled_no_cpus_ready - check if no cpus in a coupled set are ready
242 * Returns true if all of the cpus in a coupled set are out of the ready loop.
251 * cpuidle_coupled_cpus_ready - check if all cpus in a coupled set are ready
254 * Returns true if all cpus coupled to this target state are in the ready loop
263 * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
266 * Returns true if all cpus coupled to this target state are in the wait loop
275 * cpuidle_coupled_no_cpus_waiting - check if no cpus in coupled set are waiting
278 * Returns true if all of the cpus in a coupled set are out of the waiting loop.
291 * Returns the deepest idle state that all coupled cpus can enter
341 * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
345 * Calls cpuidle_coupled_poke on all other online cpus.
364 * Returns the number of waiting cpus.
391 * cpus will increment ready_count and then spin until they in cpuidle_coupled_set_not_waiting()
447 * cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
452 * Coordinate with coupled cpus to enter the target state. This is a two
453 * stage process. In the first stage, the cpus are operating independently,
455 * To save as much power as possible, the first cpus to call this function will
457 * all the other cpus to call this function. Once all coupled cpus are idle,
458 * the second stage will start. Each coupled cpu will spin until all cpus have
495 * all the other cpus out of their waiting state so they can in cpuidle_enter_state_coupled()
496 * enter a deeper state. This can race with one of the cpus in cpuidle_enter_state_coupled()
507 * Wait for all coupled cpus to be idle, using the deepest state in cpuidle_enter_state_coupled()
510 * two cpus could arrive at the waiting loop at the same time, in cpuidle_enter_state_coupled()
547 * All coupled cpus are probably idle. There is a small chance that in cpuidle_enter_state_coupled()
548 * one of the other cpus just became active. Increment the ready count, in cpuidle_enter_state_coupled()
549 * and spin until all coupled cpus have incremented the counter. Once a in cpuidle_enter_state_coupled()
551 * spin until either all cpus have incremented the ready counter, or in cpuidle_enter_state_coupled()
557 /* Check if any other cpus bailed out of idle. */ in cpuidle_enter_state_coupled()
566 * Make sure read of all cpus ready is done before reading pending pokes in cpuidle_enter_state_coupled()
572 * cpu saw that all cpus were waiting. The cpu that reentered idle will in cpuidle_enter_state_coupled()
578 * coupled idle state of all cpus and retry. in cpuidle_enter_state_coupled()
582 /* Wait for all cpus to see the pending pokes */ in cpuidle_enter_state_coupled()
587 /* all cpus have acked the coupled state */ in cpuidle_enter_state_coupled()
600 * other cpus will need to spin waiting for the cpu that is processing in cpuidle_enter_state_coupled()
602 * all other cpus will loop back into the safe idle state instead of in cpuidle_enter_state_coupled()
612 * Wait until all coupled cpus have exited idle. There is no risk that in cpuidle_enter_state_coupled()
632 * cpuidle_coupled struct for this set of coupled cpus, or creates one if none
696 * cpuidle_coupled_prevent_idle - prevent cpus from entering a coupled state
699 * Disables coupled cpuidle on a coupled set of cpus. Used to ensure that
700 * cpu_online_mask doesn't change while cpus are coordinating coupled idle.
706 /* Force all cpus out of the waiting loop. */ in cpuidle_coupled_prevent_idle()
715 * cpuidle_coupled_allow_idle - allows cpus to enter a coupled state
718 * Enables coupled cpuidle on a coupled set of cpus. Used to ensure that
719 * cpu_online_mask doesn't change while cpus are coordinating coupled idle.
731 /* Force cpus out of the prevent loop. */ in cpuidle_coupled_allow_idle()