Lines Matching full:active

44 * active in the group. This designated role is necessary to prevent all
45 * active CPUs in a group from trying to migrate expired timers from other CPUs,
50 * no CPU is active, it also checks the groups where no migrator is set
65 * When a CPU comes out of idle and when a group has at least a single active
69 * time. This spares locking in the active path as the lock protects (after
74 * the next active CPU in the group or sets migrator to TMIGR_NONE when
75 * there is no active CPU in the group. This delegation needs to be
101 * active CPU/group information atomic_try_cmpxchg() is used instead and only
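The lockless update of the active CPU/group information described above can be illustrated with a minimal userspace sketch. This is an assumption-laden simplification: the kernel packs migrator, active bitmap and a sequence counter into one word and uses `atomic_try_cmpxchg()`; here the state is reduced to a single active bitmap and C11 atomics stand in for the kernel primitives.

```c
#include <stdatomic.h>
#include <assert.h>

/*
 * Sketch of the lockless update pattern: read the current state,
 * compute the new value, and retry the compare-and-exchange if
 * another CPU modified the state in the meantime. The retry loop
 * is what makes a lock in the active path unnecessary.
 */
static void set_active_bit(_Atomic unsigned long *state, unsigned long childmask)
{
	unsigned long cur = atomic_load(state);
	unsigned long newval;

	do {
		newval = cur | childmask;
	} while (!atomic_compare_exchange_weak(state, &cur, newval));
}
```

On failure, `atomic_compare_exchange_weak()` reloads `cur` with the freshly observed value, so each retry recomputes `newval` from up-to-date state rather than clobbering a concurrent update.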
114 * The state information with the list of active children and migrator needs to
125 * active = GRP0:0, GRP0:1
129 * active = CPU0 active = CPU2
132 * active idle active idle
141 * active = GRP0:0, GRP0:1
145 * --> active = active = CPU2
148 * --> idle idle active idle
158 * active = GRP0:0, GRP0:1
162 * --> active = CPU1 active = CPU2
165 * idle --> active active idle
169 * active members of GRP1:0 remain unchanged after the update since it is
174 * --> active = GRP0:0, GRP0:1
178 * active = CPU1 active = CPU2
181 * idle active active idle
187 * --> active = GRP0:1
191 * active = CPU1 active = CPU2
194 * idle active active idle
198 * active and is correctly listed as active in GRP0:0. However, GRP1:0 does not
199 * have GRP0:0 listed as active, which is wrong. The sequence counter has been
217 * 1. Only CPU2 is active:
221 * active = GRP0:1
226 * active = active = CPU2
230 * idle idle active idle
237 * active = GRP0:1
242 * active = --> active =
255 * --> active =
260 * active = active =
271 * active =
276 * active = active =
282 * 5. GRP0:0 is not active, so the new timer has to be propagated to
285 * handed back to CPU0, as it seems that there is still an active child in
290 * active =
295 * active = active =
328 * update of the group state from the active path is no problem, as the upcoming CPU
341 * also idle and has no global timer pending. CPU2 is the only active CPU and
346 * active = GRP0:1
351 * active = active = CPU2
358 * idle idle active idle
368 * active = GRP0:1
373 * active = active = CPU2
380 * idle idle active idle
392 * active:
396 * active = GRP0:1
401 * active = active = CPU2
407 * idle idle active idle
413 * CPU of GRP0:0 is active again. The CPU will mark GRP0:0 active and take care
435 * group is not active - so no migrator is set.
452 unsigned long active; in tmigr_check_migrator_and_lonely() local
460 active = s.active; in tmigr_check_migrator_and_lonely()
461 lonely = bitmap_weight(&active, BIT_CNT) <= 1; in tmigr_check_migrator_and_lonely()
468 unsigned long active; in tmigr_check_lonely() local
473 active = s.active; in tmigr_check_lonely()
475 return bitmap_weight(&active, BIT_CNT) <= 1; in tmigr_check_lonely()
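The two `bitmap_weight(&active, BIT_CNT) <= 1` matches above both implement the same "lonely" test: a group whose active bitmap has at most one bit set. A minimal userspace sketch of that predicate, assuming the GCC/Clang `__builtin_popcountl` builtin in place of the kernel's `bitmap_weight()` helper:

```c
#include <stdbool.h>
#include <assert.h>

/*
 * Simplified stand-in for the kernel check: a group is "lonely"
 * when zero or one CPU bits are set in its active bitmap, i.e.
 * there is no other active CPU that timers could be migrated to.
 */
static bool check_lonely(unsigned long active)
{
	return __builtin_popcountl(active) <= 1;
}
```

The population count captures the intent directly: with fewer than two active members there is no candidate to hand expired timers to, so the caller can skip migration work.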
644 newstate.active |= childmask; in tmigr_active_up()
652 * The group is active (again). The group event might be still queued in tmigr_active_up()
657 * The update of the ignore flag in the active path is done lockless. In in tmigr_active_up()
683 * tmigr_cpu_activate() - set this CPU active in timer migration hierarchy
733 if (childstate.active) { in tmigr_update_events()
751 * already queued events in non-active groups (see section in tmigr_update_events()
800 * the group is already active, there is no need to walk the in tmigr_update_events()
805 * is not active, walking the hierarchy is required to not miss in tmigr_update_events()
806 * an enqueued timer in the non-active group. The enqueued timer in tmigr_update_events()
810 if (!remote || groupstate.active) in tmigr_update_events()
827 * handled when top level group is not active, is calculated in tmigr_update_events()
864 * returned, if an active CPU will handle all the timer migration hierarchy
1006 * group has no migrator. Otherwise the group is active and is in tmigr_handle_remote_up()
1097 * has no migrator. Otherwise the group is active and is handled by its in tmigr_requires_handle_remote_up()
1105 * hierarchy walk is not active, continue the walk to reach the top level in tmigr_requires_handle_remote_up()
1159 * If the CPU is active, walk the hierarchy to check whether a remote in tmigr_requires_handle_remote()
1256 /* Reset active bit when the child is no longer active */ in tmigr_inactive_up()
1257 if (!childstate.active) in tmigr_inactive_up()
1258 newstate.active &= ~childmask; in tmigr_inactive_up()
1265 if (!childstate.active) { in tmigr_inactive_up()
1266 unsigned long new_migr_bit, active = newstate.active; in tmigr_inactive_up() local
1268 new_migr_bit = find_first_bit(&active, BIT_CNT); in tmigr_inactive_up()
1283 WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active)); in tmigr_inactive_up()
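The `tmigr_inactive_up()` fragments above describe the migrator handoff when a child goes idle: clear its bit from the active bitmap and, if it held the migrator role, delegate to the first remaining active child via `find_first_bit()`, or set `TMIGR_NONE` when none is left. A hedged userspace sketch, assuming a simplified two-field state (the kernel packs this into one atomically updated word, tracks the migrator as a childmask, and the `TMIGR_NONE` value here is invented for illustration):

```c
#include <assert.h>

#define TMIGR_NONE 0xFFUL	/* sentinel: no migrator set (value assumed for this sketch) */

struct group_state {
	unsigned long active;	/* bitmap of active children */
	unsigned long migrator;	/* bit number of the child acting as migrator */
};

/*
 * A child going idle clears its bit from the active bitmap. If it
 * was the migrator, the role moves to the first remaining active
 * child (find_first_bit() in the kernel), or to TMIGR_NONE when the
 * group has no active child left.
 */
static void group_inactive_up(struct group_state *s, unsigned long childbit)
{
	s->active &= ~(1UL << childbit);

	if (s->migrator == childbit) {
		if (s->active)
			s->migrator = (unsigned long)__builtin_ctzl(s->active);
		else
			s->migrator = TMIGR_NONE;
	}
}
```

This preserves the invariant the `WARN_ON_ONCE()` above checks: a migrator other than `TMIGR_NONE` is only ever set while the active bitmap is non-empty.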
1370 * single group active on the way to top level)
1373 * child is active but @nextevt is before the lowest
1376 * if only a single child is active on each and @nextevt
1398 * Since current CPU is active, events may not be sorted in tmigr_quick_check()
1414 * tmigr_trigger_active() - trigger a CPU to become active again
1417 * last active CPU in the hierarchy is offlining. With this, it is ensured that
1418 * the other CPU is active and takes over the migrator duty.
1486 s.active = 0; in tmigr_init_group()
1565 * To prevent inconsistent states, active children need to be active in in tmigr_connect_child_parent()
1571 * top level), then they are not active. They will be set active when in tmigr_connect_child_parent()
1572 * the new online CPU becomes active. in tmigr_connect_child_parent()
1575 * mandatory to propagate the active state of the already existing in tmigr_connect_child_parent()
1580 * * It is ensured that the child is active, as this setup path is in tmigr_connect_child_parent()
1586 * @child. Therefore propagate active state unconditionally. in tmigr_connect_child_parent()
1593 * child active when the parent is inactive, the parent needs to be the in tmigr_connect_child_parent()
1681 * active or not) and/or release an uninitialized childmask. in tmigr_setup_groups()