Searched hist:"0c2de3f054a59f15e01804b75a04355c48de628c" (Results 1 – 2 of 2) sorted by relevance
/linux/kernel/sched/
features.h | diff 0c2de3f054a59f15e01804b75a04355c48de628c Thu Mar 25 13:44:46 CET 2021 Peter Zijlstra <peterz@infradead.org> sched,fair: Alternative sched_slice()
The current sched_slice() seems to have issues; there are two things that could be improved:
- the 'nr_running' used for __sched_period() is daft when cgroups are considered. Using the RQ-wide h_nr_running seems like a much more consistent number.
- (esp.) cgroups can slice it real fine, which makes for easy over-scheduling; ensure min_gran is what the name says.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20210412102001.611897312@infradead.org
fair.c | diff 0c2de3f054a59f15e01804b75a04355c48de628c Thu Mar 25 13:44:46 CET 2021 Peter Zijlstra <peterz@infradead.org> sched,fair: Alternative sched_slice() (same commit message as above)