| /linux/Documentation/admin-guide/hw-vuln/ |
| core-scheduling.rst |
    6: Core scheduling support allows userspace to define groups of tasks that can
    8: group of tasks don't trust another), or for performance usecases (some
    20: attacks. It allows HT to be turned on safely by ensuring that only tasks in a
    35: Using this feature, userspace defines groups of tasks that can be co-scheduled
    37: tasks that are not in the same group never run simultaneously on a core, while
    42: well as admission and removal of tasks from created groups::
    67: will be performed for all tasks in the task group of ``pid``.
    77: Building hierarchies of tasks
    87: Transferring a cookie between the current and other tasks is possible using
    91: scheduling group and share it with already running tasks.
    [all …]
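
The interface these excerpts describe is the ``PR_SCHED_CORE`` prctl(2). A
minimal sketch, assuming a v5.14+ kernel whose headers define the
``PR_SCHED_CORE_*`` constants; the target PID comes from argv and is purely
illustrative::

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/prctl.h>

    int main(int argc, char **argv)
    {
        pid_t target;

        if (argc < 2)
            return 1;
        target = (pid_t)atoi(argv[1]);

        /* Create a new core-scheduling cookie for the calling thread
         * (pid 0 means "current task"). */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                  PR_SCHED_CORE_SCOPE_THREAD, 0))
            perror("PR_SCHED_CORE_CREATE");

        /* Push our cookie onto the target task so the two may share
         * a core. */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, target,
                  PR_SCHED_CORE_SCOPE_THREAD, 0))
            perror("PR_SCHED_CORE_SHARE_TO");
        return 0;
    }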
|
| /linux/Documentation/admin-guide/cgroup-v1/ |
| cpuacct.rst |
    5: The CPU accounting controller is used to group tasks using cgroups and
    6: account the CPU usage of these groups of tasks.
    9: group accumulates the CPU usage of all of its child groups and the tasks
    17: visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
    18: the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
    20: by this group which is essentially the CPU time obtained by all the tasks
    27: # echo $$ > g1/tasks
    38: user: Time spent by tasks of the cgroup in user mode.
    39: system: Time spent by tasks of the cgroup in kernel mode.
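
The usage pattern quoted above — create a group, write a PID into its
``tasks`` file, read the accounting files back — looks like this in C. A
minimal sketch, assuming the v1 cpuacct hierarchy is mounted at
/sys/fs/cgroup as in the excerpt; the group name "g1" follows the excerpt
and error handling is abbreviated::

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>

    int main(void)
    {
        FILE *f;
        unsigned long long usage_ns;

        mkdir("/sys/fs/cgroup/g1", 0755);           /* create the group */

        f = fopen("/sys/fs/cgroup/g1/tasks", "w");  /* attach this task */
        if (!f)
            return 1;
        fprintf(f, "%d\n", getpid());
        fclose(f);

        /* ... run the workload to be accounted ... */

        f = fopen("/sys/fs/cgroup/g1/cpuacct.usage", "r");
        if (!f)
            return 1;
        if (fscanf(f, "%llu", &usage_ns) == 1)
            printf("g1 CPU usage: %llu ns\n", usage_ns);
        fclose(f);
        return 0;
    }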
|
| cgroups.rst |
    45: tasks, and all their future children, into hierarchical groups with
    50: A *cgroup* associates a set of tasks with a set of parameters for one
    54: facilities provided by cgroups to treat groups of tasks in
    67: cgroups. Each hierarchy is a partition of all tasks in the system.
    81: tasks in each cgroup.
    102: the division of tasks into cgroups is distinctly different for
    104: hierarchy to be a natural division of tasks, without having to handle
    105: complex combinations of tasks that would be present if several
    116: tasks etc. The resource planning for this server could be along the
    125: In addition (system tasks) are attached to topcpuset (so
    [all …]
|
| cpusets.rst |
    44: Nodes to a set of tasks. In this document "Memory Node" refers to
    47: Cpusets constrain the CPU and Memory placement of tasks to only
    82: the available CPU and Memory resources amongst the requesting tasks.
    139: - You can list all the tasks (by pid) attached to any cpuset.
    148: - in sched.c migrate_live_tasks(), to keep migrating tasks within
    184: - cpuset.sched_relax_domain_level: the searching range when migrating tasks
    192: CPUs and Memory Nodes, and attached tasks, are modified by writing
    200: on a system into related sets of tasks such that each set is constrained
    206: the detailed placement done on individual tasks and memory regions
    264: of the rate that the tasks in a cpuset are attempting to free up in
    [all …]
|
| memcg_test.rst |
    188: /bin/echo $pid >$2/tasks 2>/dev/null
    195: G1_TASK=`cat ${G1}/tasks`
    196: G2_TASK=`cat ${G2}/tasks`
    259: # echo 0 > /cgroup/test/tasks
    265: # move all tasks in /cgroup/test to /cgroup
    275: Out-of-memory caused by memcg's limit will kill tasks under
    279: In this case, panic_on_oom shouldn't be invoked and tasks
    306: #echo $$ >/cgroup/A/tasks
    314: #echo "pid of the program running in group A" >/cgroup/B/tasks
    335: # echo $$ >/cgroup/A/tasks
|
| hugetlb.rst |
    10: visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
    11: the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
    17: # echo $$ > g1/tasks
    109: the HugeTLB usage of all the tasks in the system and make sure there is enough
    110: pages to satisfy all requests. Avoiding tasks getting SIGBUS on overcommited
|
| /linux/Documentation/scheduler/ |
| sched-eevdf.rst |
    14: runnable tasks with the same priority. To do so, it assigns a virtual run
    18: has exceeded its portion. EEVDF picks tasks with lag greater or equal to
    21: allows latency-sensitive tasks with shorter time slices to be prioritized,
    25: tasks; but at the time of writing EEVDF uses a "decaying" mechanism based
    26: on virtual run time (VRT). This prevents tasks from exploiting the system
    29: lag to decay over VRT. Hence, long-sleeping tasks eventually have their lag
    30: reset. Finally, tasks can preempt others if their VD is earlier, and tasks
|
| sched-util-clamp.rst |
    12: of tasks. It was introduced in v5.3 release. The CGroup support was merged in
    16: performance requirements and restrictions of the tasks, thus it helps the
    35: One can tell the system (scheduler) that some tasks require a minimum
    37: can tell the system that some tasks should be restricted from consuming too
    45: dropped. It can also dynamically 'prime' up these tasks if it knows in the
    56: Another example is in Android where tasks are classified as background,
    58: resources background tasks are consuming by capping the performance point they
    59: can run at. This constraint helps reserve resources for important tasks, like
    63: background tasks to stay on the little cores which will ensure that:
    65: 1. The big cores are free to run top-app tasks immediately. top-app
    [all …]
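
The per-task interface this document describes is sched_setattr(2) with the
``SCHED_FLAG_UTIL_CLAMP_*`` flags. A hedged sketch, assuming a v5.3+ kernel;
glibc has no wrapper, so the UAPI struct and flag values are restated
locally (copied from uapi/linux/sched.h)::

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define SCHED_FLAG_UTIL_CLAMP_MIN 0x20  /* from uapi/linux/sched.h */
    #define SCHED_FLAG_UTIL_CLAMP_MAX 0x40

    struct sched_attr {                     /* mirrors the UAPI layout */
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;
        uint64_t sched_deadline;
        uint64_t sched_period;
        uint32_t sched_util_min;            /* 0..1024 */
        uint32_t sched_util_max;            /* 0..1024 */
    };

    int main(void)
    {
        struct sched_attr attr = {
            .size           = sizeof(attr),
            .sched_policy   = 0,            /* SCHED_NORMAL */
            .sched_flags    = SCHED_FLAG_UTIL_CLAMP_MIN |
                              SCHED_FLAG_UTIL_CLAMP_MAX,
            .sched_util_min = 0,
            .sched_util_max = 512,          /* cap at half capacity,
                                               like a background task */
        };

        /* pid 0 clamps the calling task itself. */
        if (syscall(SYS_sched_setattr, 0, &attr, 0))
            perror("sched_setattr");
        return 0;
    }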
|
| sched-rt-group.rst |
    14: 2.3 Basis for grouping tasks
    44: multiple groups of real-time tasks, each group must be assigned a fixed portion
    57: tasks (SCHED_OTHER). Any allocated run time not used will also be picked up by
    72: The remaining CPU time will be used for user input and other tasks. Because
    73: real-time tasks have explicitly allocated the CPU time they need to perform
    74: their tasks, buffer underruns in the graphics or audio can be eliminated.
    95: period_us for the real-time tasks. Without CONFIG_RT_GROUP_SCHED enabled,
    96: this only serves for admission control of deadline tasks. With
    115: SCHED_OTHER (non-RT tasks). These defaults were chosen so that a run-away
    116: real-time tasks will not lock up the machine but leave a little time to recover
    [all …]
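
These knobs are plain files. A hedged sketch of granting a group an RT
budget (illustrative values: 0.1s of runtime per 1s period); the v1 "cpu"
controller mount point /sys/fs/cgroup/cpu and the group name "mygroup" are
assumptions, and the per-group files require CONFIG_RT_GROUP_SCHED::

    #include <stdio.h>

    static int write_val(const char *path, long val)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fprintf(f, "%ld\n", val);
        return fclose(f);
    }

    int main(void)
    {
        /* Both values are in microseconds. */
        write_val("/sys/fs/cgroup/cpu/mygroup/cpu.rt_period_us", 1000000);
        write_val("/sys/fs/cgroup/cpu/mygroup/cpu.rt_runtime_us", 100000);
        return 0;
    }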
|
| schedutil.rst |
    15: individual tasks to task-group slices to CPU runqueues. As the basis for this
    28: is key, since it gives the ability to recompose the averages when tasks move
    31: Note that blocked tasks still contribute to the aggregates (task-group slices
    96: Because periodic tasks have their averages decayed while they sleep, even
    104: A further runqueue wide sum (of runnable tasks) is maintained of:
    115: the runqueue keeps an max aggregate of these clamps for all running tasks.
    147: XXX: deadline tasks (Sporadic Task Model) allows us to calculate a hard f_min
    165: suppose we have a CPU saturated with 4 tasks, then when we migrate a task
|
| /linux/Documentation/power/ |
| freezing-of-tasks.rst |
    2: Freezing of tasks
    7: I. What is the freezing of tasks?
    10: The freezing of tasks is a mechanism by which user space processes and some
    19: The tasks that have PF_NOFREEZE unset (all user space tasks and some kernel
    31: wakes up all the kernel threads. All freezable tasks must react to that by
    38: tasks are generally frozen before kernel threads.
    72: has initiated a freezing operation, the freezing of tasks will fail and the
    79: order to wake up each frozen task. Then, the tasks that have been frozen leave
    83: Rationale behind the functions dealing with freezing and thawing of tasks
    87: - freezes only userspace tasks
    [all …]
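
The contract in these lines — a freezable kernel thread must notice the
freeze request and enter the refrigerator — is usually written as the loop
below. A kernel-side sketch; my_thread_fn and the commented-out work item
are hypothetical, the freezer calls come from <linux/freezer.h>::

    #include <linux/freezer.h>
    #include <linux/kthread.h>
    #include <linux/delay.h>

    static int my_thread_fn(void *data)
    {
        set_freezable();         /* clear PF_NOFREEZE: opt in to freezing */

        while (!kthread_should_stop()) {
            try_to_freeze();     /* park here while the system suspends */
            /* do_work(); */     /* hypothetical unit of work */
            msleep_interruptible(1000);
        }
        return 0;
    }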
|
| /linux/tools/accounting/ |
| delaytop.c |
    64: #define SET_TASK_STAT(task_count, field) tasks[task_count].field = stats.field
    162: static struct task_info tasks[MAX_TASKS]; variable
    695: tasks[task_count].pid = pid; in fetch_and_fill_task_info()
    696: tasks[task_count].tgid = pid; in fetch_and_fill_task_info()
    697: strncpy(tasks[task_count].command, comm, in fetch_and_fill_task_info()
    699: tasks[task_count].command[TASK_COMM_LEN - 1] = '\0'; in fetch_and_fill_task_info()
    716: set_mem_count(&tasks[task_count]); in fetch_and_fill_task_info()
    717: set_mem_delay_total(&tasks[task_count]); in fetch_and_fill_task_info()
    801: qsort(tasks, task_count, sizeof(struct task_info), compare_tasks); in sort_tasks()
    974: tasks[i].pid, tasks[i].tgid, tasks[i].command); in display_results()
    [all …]
|
| /linux/tools/perf/scripts/python/ |
| sched-migration.py |
    100: def __init__(self, tasks = [0], event = RunqueueEventUnknown()): argument
    101: self.tasks = tuple(tasks)
    107: if taskState(prev_state) == "R" and next in self.tasks \
    108: and prev in self.tasks:
    114: next_tasks = list(self.tasks[:])
    115: if prev in self.tasks:
    127: if old not in self.tasks:
    129: next_tasks = [task for task in self.tasks if task != old]
    134: if new in self.tasks:
    137: next_tasks = self.tasks[:] + tuple([new])
    [all …]
|
| /linux/kernel/sched/ |
| psi.c |
    243: static u32 test_states(unsigned int *tasks, u32 state_mask) in test_states() argument
    247: if (tasks[NR_IOWAIT]) { in test_states()
    249: if (!tasks[NR_RUNNING]) in test_states()
    253: if (tasks[NR_MEMSTALL]) { in test_states()
    255: if (tasks[NR_RUNNING] == tasks[NR_MEMSTALL_RUNNING]) in test_states()
    259: if (tasks[NR_RUNNING] > oncpu) in test_states()
    262: if (tasks[NR_RUNNING] && !oncpu) in test_states()
    265: if (tasks[NR_IOWAIT] || tasks[NR_MEMSTALL] || tasks[NR_RUNNING]) in test_states()
    277: unsigned int tasks[NR_PSI_TASK_COUNTS]; in get_recent_times() local
    293: memcpy(tasks, groupc->tasks, sizeof(groupc->tasks)); in get_recent_times()
    [all …]
|
| /linux/samples/bpf/ |
| map_perf_test_user.c |
    94: static int pre_test_lru_hash_lookup(int tasks) in pre_test_lru_hash_lookup() argument
    295: typedef int (*pre_test_func)(int tasks);
    315: static int pre_test(int tasks) in pre_test() argument
    321: int ret = pre_test_funcs[i](tasks); in pre_test()
    346: static void run_perf_test(int tasks) in run_perf_test() argument
    348: pid_t pid[tasks]; in run_perf_test()
    351: assert(!pre_test(tasks)); in run_perf_test()
    353: for (i = 0; i < tasks; i++) { in run_perf_test()
    363: for (i = 0; i < tasks; i++) { in run_perf_test()
|
| /linux/Documentation/admin-guide/kdump/ |
| gdbmacros.txt |
    17: set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
    20: set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
    51: set $next_t=(char *)($next_t->tasks.next) - $tasks_off
    83: set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
    86: set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
    97: set $next_t=(char *)($next_t->tasks.next) - $tasks_off
    106: set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
    109: set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
    127: set $next_t=(char *)($next_t->tasks.next) - $tasks_off
    139: set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
    [all …]
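
The repeated pointer arithmetic in these macros is a hand-rolled
container_of(): $tasks_off is offsetof(struct task_struct, tasks), and
following ->tasks.next minus that offset yields the next task_struct. In
kernel C the same walk is a sketch like this (dump_all_tasks is a
hypothetical helper)::

    #include <linux/sched.h>
    #include <linux/sched/signal.h>
    #include <linux/rcupdate.h>
    #include <linux/printk.h>

    static void dump_all_tasks(void)
    {
        struct task_struct *p;

        /* for_each_process() follows init_task.tasks.next and applies
         * container_of(), exactly what the macros above compute. */
        rcu_read_lock();
        for_each_process(p)
            pr_info("pid %d: %s\n", p->pid, p->comm);
        rcu_read_unlock();
    }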
|
| /linux/tools/perf/Documentation/ |
| perf-timechart.txt |
    48: --tasks-only::
    60: Print task info for at least given number of tasks.
    65: Highlight tasks (using different color) that run more than given
    66: duration or tasks with given name. If number is given it's interpreted
    89: --tasks-only::
    90: Record only tasks-related events
    117: then generate timechart and highlight 'gcc' tasks:
|
| /linux/Documentation/livepatch/ |
| livepatch.rst |
    85: transition state where tasks are converging to the patched state.
    87: sequence occurs when a patch is disabled, except the tasks converge from
    91: interrupts. The same is true for forked tasks: the child inherits the
    95: safe to patch tasks:
    98: tasks. If no affected functions are on the stack of a given task,
    100: the tasks on the first try. Otherwise it'll keep trying
    108: a) Patching I/O-bound user tasks which are sleeping on an affected
    111: b) Patching CPU-bound user tasks. If the task is highly CPU-bound
    115: 3. For idle "swapper" tasks, since they don't ever exit the kernel, they
    122: the second approach. It's highly likely that some tasks may still be
    [all …]
|
| /linux/Documentation/locking/ |
| futex-requeue-pi.rst |
    5: Requeueing of tasks from a non-PI futex to a PI futex requires
    17: pthread_cond_broadcast() must resort to waking all the tasks waiting
    47: Once pthread_cond_broadcast() requeues the tasks, the cond->mutex
    54: be able to requeue tasks to PI futexes. This support implies that
    113: possibly wake the waiting tasks. Internally, this system call is
    118: nr_wake+nr_requeue tasks to the PI futex, calling
    126: requeue up to nr_wake + nr_requeue tasks. It will wake only as many
    127: tasks as it can acquire the lock for, which in the majority of cases
|
| /linux/drivers/misc/bcm-vk/ |
| Kconfig |
    11: multiple specific offload processing tasks in parallel.
    12: Such offload tasks assist in such operations as video
    13: transcoding, compression, and crypto tasks.
|
| /linux/Documentation/admin-guide/namespaces/ |
| compatibility-list.rst |
    6: may have when creating tasks living in different namespaces.
    9: occur when tasks share some namespace (the columns) while living
    27: In both cases, tasks shouldn't try exposing this ID to some
|
| /linux/include/linux/ |
| user_events.h |
    26: refcount_t tasks; member
    47: refcount_inc(&old_mm->tasks); in user_events_fork()
|
| /linux/net/sunrpc/ |
| sched.c |
    199: __rpc_list_enqueue_task(&queue->tasks[queue_priority], task); in __rpc_add_wait_queue_priority()
    213: list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]); in __rpc_add_wait_queue()
    248: for (i = 0; i < ARRAY_SIZE(queue->tasks); i++) in __rpc_init_priority_wait_queue()
    249: INIT_LIST_HEAD(&queue->tasks[i]); in __rpc_init_priority_wait_queue()
    613: q = &queue->tasks[RPC_NR_PRIORITY - 1]; in __rpc_find_next_queued_priority()
    622: q = &queue->tasks[queue->priority]; in __rpc_find_next_queued_priority()
    633: if (q == &queue->tasks[0]) in __rpc_find_next_queued_priority()
    634: q = &queue->tasks[queue->maxpriority]; in __rpc_find_next_queued_priority()
    641: } while (q != &queue->tasks[queue->priority]); in __rpc_find_next_queued_priority()
    647: rpc_set_waitqueue_priority(queue, (unsigned int)(q - &queue->tasks[0])); in __rpc_find_next_queued_priority()
    [all …]
|
| /linux/Documentation/driver-api/pm/ |
| notifiers.rst |
    30: The system is going to hibernate, tasks will be frozen immediately. This
    38: executed and tasks have been thawed.
    47: callbacks have been executed and tasks have been thawed.
    54: resume callbacks have been executed and tasks have been thawed.
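
Drivers hook the transition points these lines describe with a PM notifier.
A kernel-side sketch; the registration API is register_pm_notifier() from
<linux/suspend.h>, and my_pm_notifier is a hypothetical client::

    #include <linux/suspend.h>
    #include <linux/notifier.h>
    #include <linux/printk.h>

    static int my_pm_notifier(struct notifier_block *nb,
                              unsigned long event, void *data)
    {
        switch (event) {
        case PM_HIBERNATION_PREPARE:
            pr_info("tasks are about to be frozen\n");
            break;
        case PM_POST_HIBERNATION:
            pr_info("tasks have been thawed\n");
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block my_pm_nb = {
        .notifier_call = my_pm_notifier,
    };

    /* module init: register_pm_notifier(&my_pm_nb);
     * module exit: unregister_pm_notifier(&my_pm_nb); */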
|
| /linux/drivers/dma/bestcomm/ |
| Kconfig |
    30: This option enables the support for the FEC tasks.
    36: This option enables the support for the GenBD tasks.
|