Lines Matching full:active
45 * active in the group. This designated role is necessary to prevent all
46 * active CPUs in a group from trying to migrate expired timers from other CPUs,
51 * no CPU is active, it also checks the groups where no migrator is set
66 * When a CPU comes out of idle and when a group has at least a single active
70 * time. This spares locking in the active path as the lock protects (after
75 * the next active CPU in the group or sets migrator to TMIGR_NONE when
76 * there is no active CPU in the group. This delegation needs to be
102 * active CPU/group information atomic_try_cmpxchg() is used instead and only
115 * The state information with the list of active children and migrator needs to
126 * active = GRP0:0, GRP0:1
130 * active = CPU0 active = CPU2
133 * active idle active idle
142 * active = GRP0:0, GRP0:1
146 * --> active = active = CPU2
149 * --> idle idle active idle
159 * active = GRP0:0, GRP0:1
163 * --> active = CPU1 active = CPU2
166 * idle --> active active idle
170 * active members of GRP1:0 remain unchanged after the update since it is
175 * --> active = GRP0:0, GRP0:1
179 * active = CPU1 active = CPU2
182 * idle active active idle
188 * --> active = GRP0:1
192 * active = CPU1 active = CPU2
195 * idle active active idle
199 * active and is correctly listed as active in GRP0:0. However GRP1:0 does not
200 * have GRP0:0 listed as active, which is wrong. The sequence counter has been
218 * 1. Only CPU2 is active:
222 * active = GRP0:1
227 * active = active = CPU2
231 * idle idle active idle
238 * active = GRP0:1
243 * active = --> active =
256 * --> active =
261 * active = active =
272 * active =
277 * active = active =
283 * 5. GRP0:0 is not active, so the new timer has to be propagated to
286 * handed back to CPU0, as it seems that there is still an active child in
291 * active =
296 * active = active =
329 * update of the group state from the active path is no problem, as the upcoming CPU
342 * also idle and has no global timer pending. CPU2 is the only active CPU and
347 * active = GRP0:1
352 * active = active = CPU2
359 * idle idle active idle
369 * active = GRP0:1
374 * active = active = CPU2
381 * idle idle active idle
393 * active:
397 * active = GRP0:1
402 * active = active = CPU2
408 * idle idle active idle
414 * CPU of GRP0:0 is active again. The CPU will mark GRP0:0 active and take care
475 * group is not active - so no migrator is set.
492 unsigned long active; in tmigr_check_migrator_and_lonely() local
500 active = s.active; in tmigr_check_migrator_and_lonely()
501 lonely = bitmap_weight(&active, BIT_CNT) <= 1; in tmigr_check_migrator_and_lonely()
508 unsigned long active; in tmigr_check_lonely() local
513 active = s.active; in tmigr_check_lonely()
515 return bitmap_weight(&active, BIT_CNT) <= 1; in tmigr_check_lonely()
687 newstate.active |= childmask; in tmigr_active_up()
695 * The group is active (again). The group event might still be queued in tmigr_active_up()
700 * The update of the ignore flag in the active path is done lockless. In in tmigr_active_up()
726 * tmigr_cpu_activate() - set this CPU active in timer migration hierarchy
777 if (childstate.active) { in tmigr_update_events()
803 * already queued events in non-active groups (see section in tmigr_update_events()
852 * the group is already active, there is no need to walk the in tmigr_update_events()
857 * is not active, walking the hierarchy is required to not miss in tmigr_update_events()
858 * an enqueued timer in the non-active group. The enqueued timer in tmigr_update_events()
862 if (!remote || groupstate.active) in tmigr_update_events()
879 * handled when the top level group is not active, is calculated in tmigr_update_events()
916 * returned if an active CPU will handle all the timer migration hierarchy
1058 * group has no migrator. Otherwise the group is active and is in tmigr_handle_remote_up()
1149 * has no migrator. Otherwise the group is active and is handled by its in tmigr_requires_handle_remote_up()
1201 * If the CPU is active, walk the hierarchy to check whether a remote in tmigr_requires_handle_remote()
1298 /* Reset active bit when the child is no longer active */ in tmigr_inactive_up()
1299 if (!childstate.active) in tmigr_inactive_up()
1300 newstate.active &= ~childmask; in tmigr_inactive_up()
1307 if (!childstate.active) { in tmigr_inactive_up()
1308 unsigned long new_migr_bit, active = newstate.active; in tmigr_inactive_up() local
1310 new_migr_bit = find_first_bit(&active, BIT_CNT); in tmigr_inactive_up()
1325 WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active)); in tmigr_inactive_up()
1412 * single group active on the way to top level)
1415 * child is active but @nextevt is before the lowest
1418 * if only a single child is active on each and @nextevt
1440 * Since current CPU is active, events may not be sorted in tmigr_quick_check()
1453 * tmigr_trigger_active() - trigger a CPU to become active again
1456 * last active CPU in the hierarchy is offlining. With this, it is ensured that
1457 * the other CPU is active and takes over the migrator duty.
1645 s.active = 0; in tmigr_init_group()
1850 * To prevent inconsistent states, active children need to be active in in tmigr_setup_groups()
1855 * the lowest level, then they are not active. They will be set active in tmigr_setup_groups()
1856 * when the new online CPU becomes active. in tmigr_setup_groups()
1859 * mandatory to propagate the active state of the already existing in tmigr_setup_groups()
1863 * * It is ensured that @start is active, as this setup path is in tmigr_setup_groups()
1869 * @child. Therefore propagate active state unconditionally. in tmigr_setup_groups()
1872 WARN_ON_ONCE(!state.active); in tmigr_setup_groups()
1912 * active or not) and/or release an uninitialized childmask. in tmigr_add_cpu()
1917 * otherwise the old root may not be active as expected. in tmigr_add_cpu()