====================
Scheduler Statistics
====================

Version 16 of schedstats changed the order of definitions within
'enum cpu_idle_type', which changed the order of [CPU_MAX_IDLE_TYPES]
columns in show_schedstat(). In particular, CPU_IDLE and __CPU_NOT_IDLE
changed places. The size of the array is unchanged.

Version 15 of schedstats dropped counters for some sched_yield:
yld_exp_empty, yld_act_empty and yld_both_empty. Otherwise, it is
identical to version 14.

Version 14 of schedstats includes support for sched_domains, which hit the
mainline kernel in 2.6.20 although it is identical to the stats from version
12 which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel
release). Some counters make more sense to be per-runqueue; others to be
per-domain. Note that domains (and their associated information) will only
be pertinent and available on machines utilizing CONFIG_SMP.

In version 14 of schedstat, there is at least one level of domain
statistics for each cpu listed, and there may well be more than one
domain. Domains have no particular names in this implementation, but
the highest numbered one typically arbitrates balancing across all the
cpus on the machine, while domain0 is the most tightly focused domain,
sometimes balancing only between pairs of cpus. At this time, there
are no architectures which need more than three domain levels. The first
field in the domain stats is a bit map indicating which cpus are affected
by that domain.

These fields are counters, and only increment. Programs which make use
of these will need to start with a baseline observation and then calculate
the change in the counters at each subsequent observation (a small C sketch
of this approach appears below, after the CPU field list). A perl script
which does this for many of the fields is available at

    http://eaglet.pdxhosts.com/rick/linux/schedstat/

Note that any such script will necessarily be version-specific, as the main
reason to change versions is changes in the output format. For those wishing
to write their own scripts, the fields are described here.

CPU statistics
--------------
cpu<N> 1 2 3 4 5 6 7 8 9

First field is a sched_yield() statistic:

     1) # of times sched_yield() was called

Next three are schedule() statistics:

     2) This field is a legacy array expiration count field used in the O(1)
        scheduler. We kept it for ABI compatibility, but it is always set
        to zero.
     3) # of times schedule() was called
     4) # of times schedule() left the processor idle

Next two are try_to_wake_up() statistics:

     5) # of times try_to_wake_up() was called
     6) # of times try_to_wake_up() was called to wake up the local cpu

Next three are statistics describing scheduling latency:

     7) sum of all time spent running by tasks on this processor (in
        nanoseconds)
     8) sum of all time spent waiting to run by tasks on this processor (in
        nanoseconds)
     9) # of timeslices run on this cpu
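As a concrete illustration of the baseline-and-delta approach, the following
is a minimal C sketch, not an official tool: it samples the three latency
fields (7-9) for cpu0 twice and reports the average wait per timeslice over
the interval. It assumes the nine-field cpu<N> layout described above; the
choice of cpu0 and the five-second interval are arbitrary::

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* The three latency fields (7-9) from one "cpu<N>" line. */
    struct cpu_sample {
        unsigned long long run_ns;   /* 7) time spent running  */
        unsigned long long wait_ns;  /* 8) time spent waiting  */
        unsigned long long slices;   /* 9) # of timeslices run */
    };

    /* Find the line for one cpu and pull out fields 7-9. */
    static int read_cpu(int cpu, struct cpu_sample *s)
    {
        char line[512], tag[16];
        FILE *f = fopen("/proc/schedstat", "r");

        if (!f)
            return -1;
        snprintf(tag, sizeof(tag), "cpu%d ", cpu);
        while (fgets(line, sizeof(line), f)) {
            unsigned long long skip;

            if (strncmp(line, tag, strlen(tag)))
                continue;
            /* Skip fields 1-6; only the latency fields matter here. */
            sscanf(line + strlen(tag),
                   "%llu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &skip, &skip, &skip, &skip, &skip, &skip,
                   &s->run_ns, &s->wait_ns, &s->slices);
            fclose(f);
            return 0;
        }
        fclose(f);
        return -1;
    }

    int main(void)
    {
        struct cpu_sample a, b;

        if (read_cpu(0, &a))
            return 1;
        sleep(5);    /* arbitrary observation interval */
        if (read_cpu(0, &b))
            return 1;

        unsigned long long d = b.slices - a.slices;
        printf("cpu0: %llu timeslices, avg wait %.1f us\n", d,
               d ? (b.wait_ns - a.wait_ns) / (d * 1000.0) : 0.0);
        return 0;
    }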
Domain statistics
-----------------
One of these is produced per domain for each cpu described. (Note that if
CONFIG_SMP is not defined, *no* domains are utilized and these lines
will not appear in the output.)

domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36

The first field is a bit mask indicating what cpus this domain operates over.

The next 24 are a variety of sched_balance_rq() statistics grouped into
types of idleness (idle, busy, and newly idle):

     1) # of times in this domain sched_balance_rq() was called when the
        cpu was idle
     2) # of times in this domain sched_balance_rq() checked but found
        the load did not require balancing when the cpu was idle
     3) # of times in this domain sched_balance_rq() tried to move one or
        more tasks and failed, when the cpu was idle
     4) sum of imbalances discovered (if any) with each call to
        sched_balance_rq() in this domain when the cpu was idle
     5) # of times in this domain pull_task() was called when the cpu
        was idle
     6) # of times in this domain pull_task() was called even though
        the target task was cache-hot when idle
     7) # of times in this domain sched_balance_rq() was called but did
        not find a busier queue while the cpu was idle
     8) # of times in this domain a busier queue was found while the
        cpu was idle but no busier group was found

     9) # of times in this domain sched_balance_rq() was called when the
        cpu was busy
    10) # of times in this domain sched_balance_rq() checked but found the
        load did not require balancing when busy
    11) # of times in this domain sched_balance_rq() tried to move one or
        more tasks and failed, when the cpu was busy
    12) sum of imbalances discovered (if any) with each call to
        sched_balance_rq() in this domain when the cpu was busy
    13) # of times in this domain pull_task() was called when busy
    14) # of times in this domain pull_task() was called even though the
        target task was cache-hot when busy
    15) # of times in this domain sched_balance_rq() was called but did not
        find a busier queue while the cpu was busy
    16) # of times in this domain a busier queue was found while the cpu
        was busy but no busier group was found

    17) # of times in this domain sched_balance_rq() was called when the
        cpu was just becoming idle
    18) # of times in this domain sched_balance_rq() checked but found the
        load did not require balancing when the cpu was just becoming idle
    19) # of times in this domain sched_balance_rq() tried to move one or
        more tasks and failed, when the cpu was just becoming idle
    20) sum of imbalances discovered (if any) with each call to
        sched_balance_rq() in this domain when the cpu was just becoming idle
    21) # of times in this domain pull_task() was called when newly idle
    22) # of times in this domain pull_task() was called even though the
        target task was cache-hot when just becoming idle
    23) # of times in this domain sched_balance_rq() was called but did not
        find a busier queue while the cpu was just becoming idle
    24) # of times in this domain a busier queue was found while the cpu
        was just becoming idle but no busier group was found

Next three are active_load_balance() statistics:

    25) # of times active_load_balance() was called
    26) # of times active_load_balance() tried to move a task and failed
    27) # of times active_load_balance() successfully moved a task

Next three are sched_balance_exec() statistics:

    28) sbe_cnt is not used
    29) sbe_balanced is not used
    30) sbe_pushed is not used

Next three are sched_balance_fork() statistics:

    31) sbf_cnt is not used
    32) sbf_balanced is not used
    33) sbf_pushed is not used

Next three are try_to_wake_up() statistics:

    34) # of times in this domain try_to_wake_up() awoke a task that
        last ran on a different cpu in this domain
    35) # of times in this domain try_to_wake_up() moved a task to the
        waking cpu because it was cache-cold on its own cpu anyway
    36) # of times in this domain try_to_wake_up() started passive balancing
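Because fields 1-24 are three groups of eight identical counters (one group
per idleness state), the domain lines are easy to parse mechanically. The
following hedged C sketch prints the calls/balanced/failed counters for each
idleness state of every domain found in a single raw reading; a real monitor
would difference two observations as described earlier, and the field offsets
assume the 36-field layout described above::

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Fields 1-24 are three groups of eight, one per idleness state.
     * Within each group: calls, found-balanced, move-failed, imbalance
     * sum, pull_task() calls, cache-hot pulls, no-busier-queue,
     * no-busier-group.
     */
    static const char *const state[] = { "idle", "busy", "newly idle" };

    int main(void)
    {
        char line[1024];
        FILE *f = fopen("/proc/schedstat", "r");

        if (!f) {
            perror("/proc/schedstat");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            char *p;
            unsigned long long v[36];

            if (strncmp(line, "domain", 6))
                continue;
            /* Skip the "domain<N>" and "<cpumask>" tokens. */
            p = strchr(line, ' ');
            p = p ? strchr(p + 1, ' ') : NULL;
            if (!p)
                continue;
            for (int i = 0; i < 36; i++)
                v[i] = strtoull(p, &p, 10);

            printf("%.*s\n", (int)(strchr(line, ' ') - line), line);
            for (int s = 0; s < 3; s++) {
                unsigned long long *g = v + 8 * s;

                printf("  %-10s: %llu calls, %llu balanced, %llu failed\n",
                       state[s], g[0], g[1], g[2]);
            }
        }
        fclose(f);
        return 0;
    }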
/proc/<pid>/schedstat
---------------------
schedstats also adds a new /proc/<pid>/schedstat file to include some of
the same information on a per-process level. There are three fields in
this file, corresponding for that process to:

     1) time spent on the cpu (in nanoseconds)
     2) time spent waiting on a runqueue (in nanoseconds)
     3) # of timeslices run on this cpu

A program could easily be written to make use of these extra fields to
report on how well a particular process or set of processes is faring
under the scheduler's policies. A simple version of such a program is
available at

    http://eaglet.pdxhosts.com/rick/linux/schedstat/v12/latency.c
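That latency.c is external and written against version 12; in the same
spirit, here is a minimal, hedged C sketch that polls a given pid's
/proc/<pid>/schedstat once per second and prints the per-interval deltas of
the three fields. The one-second interval and the output format are
arbitrary choices, not part of the schedstat interface::

    #include <stdio.h>
    #include <unistd.h>

    /* Read the three per-process schedstat fields; 0 on success. */
    static int read_pid(const char *pid, unsigned long long *run,
                        unsigned long long *wait, unsigned long long *slices)
    {
        char path[64];
        FILE *f;
        int ok;

        snprintf(path, sizeof(path), "/proc/%s/schedstat", pid);
        f = fopen(path, "r");
        if (!f)
            return -1;
        ok = fscanf(f, "%llu %llu %llu", run, wait, slices) == 3;
        fclose(f);
        return ok ? 0 : -1;
    }

    int main(int argc, char **argv)
    {
        unsigned long long r0, w0, s0, r1, w1, s1;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        /* Take a fresh baseline, then report deltas once per second. */
        while (read_pid(argv[1], &r0, &w0, &s0) == 0) {
            sleep(1);
            if (read_pid(argv[1], &r1, &w1, &s1))
                break;
            printf("ran %llu ns, waited %llu ns, %llu timeslices\n",
                   r1 - r0, w1 - w0, s1 - s0);
        }
        return 0;
    }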