.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   [Whenever any new section is added to this document, please also add
   an entry here.]

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Availability
       2-4-2. Enabling and Disabling
       2-4-3. Top-down Constraint
       2-4-4. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Reclaim Protection
       5-2-4. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device controller
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
       5-8-1. DMEM Interface Files
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Misc Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.
A cgroup controller is usually responsible for distributing a
specific type of system resource along the hierarchy, although there
are utility controllers which serve purposes other than resource
distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled
or disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups making up the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries. This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace. The mount option is
        ignored on non-init namespace mounts. Please refer to the
        Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees. This is legacy behaviour; without this
        option, the default behaviour is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace. The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups. This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees. This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
        Count HugeTLB memory usage towards the cgroup's overall
        memory usage for the memory controller (for the purpose of
        statistics reporting and memory protection). This is a new
        behavior that could regress existing setups, so it must be
        explicitly opted in with this mount option.

        A few caveats to keep in mind:

        * There is no HugeTLB pool management involved in the memory
          controller. The pre-allocated pool does not belong to anyone.
          Specifically, when a new HugeTLB folio is allocated to
          the pool, it is not accounted for from the perspective of the
          memory controller. It is only charged to a cgroup when it is
          actually used (e.g. at page fault time). Host memory
          overcommit management has to consider this when configuring
          hard limits. In general, HugeTLB pool management should be
          done via other mechanisms (such as the HugeTLB controller).
        * Failure to charge a HugeTLB folio to the memory controller
          results in SIGBUS. This could happen even if the HugeTLB pool
          still has pages available (but the cgroup limit is hit and
          reclaim attempt fails).
        * Charging HugeTLB memory towards the memory controller affects
          memory protection and reclaim dynamics. Any userspace tuning
          (e.g. of the low and min limits) needs to take this into
          account.
        * HugeTLB pages utilized while this option is not selected
          will not be tracked by the memory controller (even if cgroup
          v2 is remounted later on).

  pids_localevents
        The option restores v1-like behavior of pids.events:max, that
        is, only local (inside cgroup proper) fork failures are
        counted. Without this option, pids.events:max represents any
        pids.max enforcement across the cgroup's subtree.


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs".
When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.
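
Putting the above together, a threaded subtree can be assembled as
follows. This is a minimal sketch; the cgroup names and $TID are
illustrative, and it assumes the current cgroup is an eligible domain
with the cpu controller available::

  # mkdir workers
  # echo threaded > workers/cgroup.type            # parent becomes the threaded domain
  # mkdir workers/highprio                         # created as "domain (invalid)"
  # echo threaded > workers/highprio/cgroup.type   # now a valid threaded cgroup
  # echo "+cpu" > cgroup.subtree_control           # hand cpu down from the domain
  # echo "+cpu" > workers/cgroup.subtree_control
  # echo $TID > workers/highprio/cgroup.threads    # place one thread

Afterwards, "cgroup.type" of the cgroup the commands were run in
reads "domain threaded", while "workers" and its children read
"threaded".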

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's is 0. After the
one process in C exits, B and C's "populated" fields would flip to "0"
and file modified events will be generated on the "cgroup.events"
files of both cgroups.


Controlling Controllers
-----------------------

Availability
~~~~~~~~~~~~

A controller is available in a cgroup when it is supported by the kernel (i.e.,
compiled in, not disabled and not attached to a v1 hierarchy) and listed in the
"cgroup.controllers" file. Availability means the controller's interface files
are exposed in the cgroup's directory, allowing the distribution of the target
resource to be observed or controlled within that cgroup.

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same controller
are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
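
The creation and removal of the interface files can be observed
directly. A minimal sketch, assuming the memory controller is
available to cgroup A and using illustrative names::

  # mkdir -p A/B
  # echo "+memory" > A/cgroup.subtree_control
  # ls A/B/memory.*     # "memory." files created in B by A
  # echo "-memory" > A/cgroup.subtree_control
  # ls A/B/memory.*     # the files are gone again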


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access to the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.
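
For the first method, the setup can be as simple as changing the
ownership of the directory and the listed files. A minimal sketch;
the path and the user U0 are illustrative::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown U0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control

The resource control interface files (e.g. "memory.max") are left
owned by root on purpose, so the delegatee can organize the
sub-hierarchy but can't change how much it itself receives.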

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.
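
As a concrete illustration of the first three models, the following
minimal sketch configures one knob of each type on a single cgroup.
The values and the 8:0 device number are arbitrary examples, and the
sketch assumes the cpu, io and memory controllers are enabled for
this cgroup::

  # echo 200 > cpu.weight             # weight: twice the default share
  # echo "8:0 wbps=1048576" > io.max  # limit: 1MB/s of writes to 8:0
  # echo 512M > memory.low            # protection: best-effort 512M

Allocation interfaces are controller specific; the absolute bandwidth
allocation for realtime scheduling mentioned in the CPU section is
one example.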


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should
  be generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled. It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line. The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root. Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line. The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows a space separated list of all controllers available
        to the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups. Starts out empty.

        When read, it shows a space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        A space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers. A controller
        name prefixed with '+' enables the controller and '-'
        disables. If a controller appears more than once on the list,
        the last one is effective. When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descendant cgroups. If the actual
        number of descendants is equal or larger, an attempt to
        create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup. If
        the actual descent depth is equal or larger, an attempt to
        create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup
                becomes dying after being deleted by a user. The
                cgroup will remain in the dying state for some
                undefined time (which can depend on system load)
                before being completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding the limits which were active at the moment
                of cgroup deletion.

          nr_subsys_<cgroup_subsys>
                Total number of live cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

          nr_dying_subsys_<cgroup_subsys>
                Total number of dying cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

  cgroup.stat.local
        A read-only flat-keyed file which exists in non-root cgroups.
        The following entry is defined:

          frozen_usec
                Cumulative time that this cgroup has spent between
                freezing and thawing, regardless of whether it was
                frozen by itself or by ancestor groups. NB: (not)
                reaching the "frozen" state is not accounted here.

                Using the following ASCII representation of a
                cgroup's freezer state, ::

                           1    _____
                    frozen 0 __/     \__
                              ab     cd

                the duration being measured is the span between a and
                c.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups. Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups. This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen. Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in
        the cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups.
        If any ancestor cgroup is frozen, the cgroup will remain
        frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal. They also can enter and leave a frozen cgroup:
        either by an explicit move by a user, or if freezing of the
        cgroup races with fork(). If a process is moved to a frozen
        cgroup, it stops. If a process is moved out of a frozen
        cgroup, it becomes running.

        Frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.

  cgroup.kill
        A write-only single value file which exists in non-root
        cgroups. The only allowed value is "1".

        Writing "1" to the file causes the cgroup and all descendant
        cgroups to be killed. This means that all processes located
        in the affected cgroup tree will be killed via SIGKILL.

        Killing a cgroup tree will deal with concurrent forks
        appropriately and is protected against migrations.

        In a threaded cgroup, writing this file fails with EOPNOTSUPP
        as killing cgroups is a process directed operation, i.e. it
        affects the whole thread-group.

  cgroup.pressure
        A read-write single value file. Allowed values are "0" and
        "1". The default is "1".

        Writing "0" to the file disables the cgroup PSI accounting.
        Writing "1" to the file re-enables the cgroup PSI accounting.

        This control attribute is not hierarchical, so disabling or
        enabling PSI accounting in a cgroup does not affect PSI
        accounting in descendants, and enablement does not need to be
        passed down from the root via ancestors.

        The reason this control attribute exists is that PSI accounts
        stalls for each cgroup separately and aggregates them at each
        level of the hierarchy. This may cause non-negligible
        overhead for some workloads deep in the hierarchy, in which
        case this control attribute can be used to disable PSI
        accounting in the non-leaf cgroups.

  irq.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for IRQ/SOFTIRQ. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.


Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates the distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
the normal scheduling policy and an absolute bandwidth allocation
model for the realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed. The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well
as the maximum desired frequency, which should not be exceeded by a
CPU.

WARNING: The cgroup2 cpu controller doesn't yet support the
(bandwidth) control of realtime processes. For a kernel built with
the CONFIG_RT_GROUP_SCHED option enabled for group scheduling of
realtime processes, the cpu controller can only be enabled when all
RT processes are in the root cgroup.
Be aware that system management software may already have placed RT
processes into non-root cgroups during the system boot process, and
these processes may need to be moved to the root cgroup before the
cpu controller can be enabled with a CONFIG_RT_GROUP_SCHED enabled
kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply
and some of the interface files either affect realtime processes or
account for them. See the following section for details. Only the
cpu controller is affected by CONFIG_RT_GROUP_SCHED. Other
controllers can be used for the resource control of realtime
processes irrespective of CONFIG_RT_GROUP_SCHED.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its
scheduling policy and the underlying scheduler. From the point of
view of the cpu controller, processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight``
  callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a
  BPF scheduler without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a
BPF scheduler, check out
:ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.

For each of the following interface files, the above categories will
be referred to. All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file. This file exists whether the
        controller is enabled or not.

        It always reports the following three stats, which account
        for all the processes in the cgroup:

        - usage_usec
        - user_usec
        - system_usec

        and the following five when the controller is enabled, which
        account for only the processes under the fair-class
        scheduler:

        - nr_periods
        - nr_throttled
        - throttled_usec
        - nr_bursts
        - burst_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups. The default is "100".

        For non-idle groups (cpu.idle = 0), the weight is in the
        range [1, 10000].

        If the cgroup has been configured to be SCHED_IDLE
        (cpu.idle = 1), then the weight will show as 0.

        This file affects only processes under the fair-class
        scheduler and a BPF scheduler with the ``cgroup_set_weight``
        callback depending on what the callback actually does.

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2). Because the range is smaller
        and granularity is coarser for the nice values, the read
        value is the closest approximation of the current weight.

        This file affects only processes under the fair-class
        scheduler and a BPF scheduler with the ``cgroup_set_weight``
        callback depending on what the callback actually does.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit.
        It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in
        each $PERIOD duration. "max" for $MAX indicates no limit.
        If only one number is written, $MAX is updated.

        This file affects only processes under the fair-class
        scheduler.

  cpu.max.burst
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The burst in the range [0, $MAX].

        This file affects only processes under the fair-class
        scheduler.

  cpu.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for CPU. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

        This file accounts for all the processes in the cgroup.

  cpu.uclamp.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a
        percentage rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization
        clamp values similar to sched_setattr(2). This minimum
        utilization value is used to clamp the task specific minimum
        utilization clamp, including those of realtime processes.

        The requested minimum utilization (protection) is always
        capped by the current value for the maximum utilization
        (limit), i.e. `cpu.uclamp.max`.

        This file affects all the processes in the cgroup.

  cpu.uclamp.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage
        rational number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization
        clamp values similar to sched_setattr(2). This maximum
        utilization value is used to clamp the task specific maximum
        utilization clamp, including those of realtime processes.

        This file affects all the processes in the cgroup.

  cpu.idle
        A read-write single value file which exists on non-root
        cgroups. The default is 0.

        This is the cgroup analog of the per-task SCHED_IDLE sched
        policy. Setting this value to 1 will make the scheduling
        policy of the cgroup SCHED_IDLE. The threads inside the
        cgroup will retain their own relative priorities, but the
        cgroup itself will be treated as very low priority relative
        to its peers.

        This file affects only processes under the fair-class
        scheduler.


Memory
------

The "memory" controller regulates the distribution of memory. Memory
is stateful and implements both limit and protection models. Due to
the intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely watertight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.
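
As a quick orientation before the individual files are described
below, a minimal sketch of a common flow - reading the usage, capping
it, and inspecting the breakdown; the 1G value is an arbitrary
example::

  # cat memory.current                   # total usage in bytes
  # echo 1G > memory.max                 # hard-limit the cgroup to 1G
  # grep -E '^(anon|file) ' memory.stat  # per-type breakdown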


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Hard memory protection. If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions. If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked. Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective min boundary is limited by memory.min values of
        ancestor cgroups. If there is memory.min overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than parent will allow), then each child cgroup will get
        the part of parent's protection proportional to its
        actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

  memory.low
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Best-effort memory protection. If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups. Above the effective
        low boundary (or effective min boundary if it is higher),
        pages are reclaimed proportionally to the overage, reducing
        reclaim pressure for smaller overages.

        The effective low boundary is limited by memory.low values of
        ancestor cgroups. If there is memory.low overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than parent will allow), then each child cgroup will get
        the part of parent's protection proportional to its
        actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage throttle limit. If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached. The high
        limit should be used in scenarios where an external process
        monitors the limited cgroup to alleviate heavy reclaim
        pressure.

        If memory.high is opened with O_NONBLOCK then the synchronous
        reclaim is bypassed. This is useful for admin processes that
        need to dynamically adjust the job's memory limits without
        expending their own CPU resources on memory reclamation. The
        job will trigger the reclaim and/or get throttled on its
        next charge request.

        Please note that with O_NONBLOCK, there is a chance that the
        target memory cgroup may take an indefinite amount of time to
        reduce usage below the limit due to delayed charge requests
        or busy-hitting its memory to slow down reclaim.

  memory.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage hard limit. This is the main mechanism to limit
        memory usage of a cgroup. If a cgroup's memory usage reaches
        this limit and can't be reduced, the OOM killer is invoked in
        the cgroup. Under certain circumstances, the usage may go
        over the limit temporarily.

        In the default configuration, regular 0-order allocations
        always succeed unless the OOM killer chooses the current task
        as a victim.

        Some kinds of allocations don't invoke the OOM killer. The
        caller could retry them differently, return -ENOMEM to
        userspace, or silently ignore the failure in cases like disk
        readahead.

        If memory.max is opened with O_NONBLOCK, then the synchronous
        reclaim and oom-kill are bypassed. This is useful for admin
        processes that need to dynamically adjust the job's memory
        limits without expending their own CPU resources on memory
        reclamation. The job will trigger the reclaim and/or oom-kill
        on its next charge request.

        Please note that with O_NONBLOCK, there is a chance that the
        target memory cgroup may take an indefinite amount of time to
        reduce usage below the limit due to delayed charge requests
        or busy-hitting its memory to slow down reclaim.

  memory.reclaim
        A write-only nested-keyed file which exists for all cgroups.

        This is a simple interface to trigger memory reclaim in the
        target cgroup.

        Example::

          echo "1G" > memory.reclaim

        Please note that the kernel can over or under reclaim from
        the target cgroup. If fewer bytes are reclaimed than the
        specified amount, -EAGAIN is returned.

        Please note that the proactive reclaim (triggered by this
        interface) is not meant to indicate memory pressure on the
        memory cgroup. Therefore socket memory balancing triggered
        by the memory reclaim normally is not exercised in this case.
        This means that the networking layer will not adapt based on
        reclaim induced by memory.reclaim.

        The following nested keys are defined.

          ==========  ================================
          swappiness  Swappiness value to reclaim with
          ==========  ================================

        Specifying a swappiness value instructs the kernel to perform
        the reclaim with that swappiness value. Note that this has
        the same semantics as vm.swappiness applied to memcg reclaim
        with all the existing limitations and potential future
        extensions.

        The valid range for swappiness is [0-200, max], where setting
        swappiness=max exclusively reclaims anonymous memory.

  memory.peak
        A read-write single value file which exists on non-root
        cgroups.

        The maximum memory usage recorded for the cgroup and its
        descendants since either the creation of the cgroup or the
        most recent reset for that FD.

        A write of any non-empty string to this file resets it to the
        current memory usage for subsequent reads through the same
        file descriptor.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups. The default value is "0".
  memory.peak
    A read-write single value file which exists on non-root cgroups.

    The maximum memory usage recorded for the cgroup and its
    descendants since either the creation of the cgroup or the most
    recent reset for that FD.

    A write of any non-empty string to this file resets it to the
    current memory usage for subsequent reads through the same
    file descriptor.

  memory.oom.group
    A read-write single value file which exists on non-root
    cgroups.  The default value is "0".

    Determines whether the cgroup should be treated as
    an indivisible workload by the OOM killer.  If set,
    all tasks belonging to the cgroup or to its descendants
    (if the memory cgroup is not a leaf cgroup) are killed
    together or not at all.  This can be used to avoid
    partial kills to guarantee workload integrity.

    Tasks with the OOM protection (oom_score_adj set to -1000)
    are treated as an exception and are never killed.

    If the OOM killer is invoked in a cgroup, it's not going
    to kill any tasks outside of this cgroup, regardless of the
    memory.oom.group values of ancestor cgroups.

  memory.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.  Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    Note that all fields in this file are hierarchical and the
    file modified event can be generated due to an event down the
    hierarchy.  For the local events at the cgroup level see
    memory.events.local.

      low
        The number of times the cgroup is reclaimed due to
        high memory pressure even though its usage is under
        the low boundary.  This usually indicates that the low
        boundary is over-committed.

      high
        The number of times processes of the cgroup are
        throttled and routed to perform direct memory reclaim
        because the high memory boundary was exceeded.  For a
        cgroup whose memory usage is capped by the high limit
        rather than global memory pressure, this event's
        occurrences are expected.

      max
        The number of times the cgroup's memory usage was
        about to go over the max boundary.  If direct reclaim
        fails to bring it down, the cgroup goes to OOM state.

      oom
        The number of times the cgroup's memory usage reached
        the limit and allocation was about to fail.

        This event is not raised if the OOM killer is not
        considered as an option, e.g. for failed high-order
        allocations or if the caller asked not to retry.

      oom_kill
        The number of processes belonging to this cgroup
        killed by any kind of OOM killer.

      oom_group_kill
        The number of times a group OOM has occurred.

      sock_throttled
        The number of times network sockets associated with
        this cgroup are throttled.

  memory.events.local
    Similar to memory.events but the fields in the file are local
    to the cgroup, i.e. not hierarchical.  The file modified event
    generated on this file reflects only the local events.

  memory.stat
    A read-only flat-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    on the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries
    can show up in the middle.  Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    Entries that have no per-node counter (and therefore do not
    show up in memory.numa_stat) are tagged with 'npn'
    (non-per-node).

      anon
        Amount of memory used in anonymous mappings such as
        brk(), sbrk(), and mmap(MAP_ANONYMOUS).
        Note that some kernel configurations might account complete
        larger allocations (e.g., THP) if only some, but not all,
        of the memory of such an allocation is mapped anymore.

      file
        Amount of memory used to cache filesystem data,
        including tmpfs and shared memory.

      kernel (npn)
        Amount of total kernel memory, including
        (kernel_stack, pagetables, percpu, vmalloc, slab) in
        addition to other kernel memory use cases.

      kernel_stack
        Amount of memory allocated to kernel stacks.

      pagetables
        Amount of memory allocated for page tables.

      sec_pagetables
        Amount of memory allocated for secondary page tables;
        this currently includes KVM mmu allocations on x86
        and arm64 and IOMMU page tables.

      percpu (npn)
        Amount of memory used for storing per-cpu kernel
        data structures.

      sock (npn)
        Amount of memory used in network transmission buffers.

      vmalloc (npn)
        Amount of memory used for vmap backed memory.

      shmem
        Amount of cached filesystem data that is swap-backed,
        such as tmpfs, shm segments, and shared anonymous mmap()s.

      zswap
        Amount of memory consumed by the zswap compression backend.

      zswapped
        Amount of application memory swapped out to zswap.

      file_mapped
        Amount of cached filesystem data mapped with mmap().  Note
        that some kernel configurations might account complete
        larger allocations (e.g., THP) if only some, but not all,
        of the memory of such an allocation is mapped.

      file_dirty
        Amount of cached filesystem data that was modified but
        not yet written back to disk.

      file_writeback
        Amount of cached filesystem data that was modified and
        is currently being written back to disk.

      swapcached
        Amount of swap cached in memory.  The swapcache is accounted
        against both memory and swap usage.

      anon_thp
        Amount of memory used in anonymous mappings backed by
        transparent hugepages.

      file_thp
        Amount of cached filesystem data backed by transparent
        hugepages.

      shmem_thp
        Amount of shm, tmpfs, and shared anonymous mmap()s backed
        by transparent hugepages.

      inactive_anon, active_anon, inactive_file, active_file, unevictable
        Amount of memory, swap-backed and filesystem-backed,
        on the internal memory management lists used by the
        page reclaim algorithm.

        As these represent internal list state (e.g. shmem pages
        are on anon memory management lists), inactive_foo +
        active_foo may not be equal to the value for the foo
        counter, since the foo counter is type-based, not
        list-based.

      slab_reclaimable
        Part of "slab" that might be reclaimed, such as
        dentries and inodes.

      slab_unreclaimable
        Part of "slab" that cannot be reclaimed on memory
        pressure.

      slab (npn)
        Amount of memory used for storing in-kernel data
        structures.

      workingset_refault_anon
        Number of refaults of previously evicted anonymous pages.

      workingset_refault_file
        Number of refaults of previously evicted file pages.

      workingset_activate_anon
        Number of refaulted anonymous pages that were immediately
        activated.

      workingset_activate_file
        Number of refaulted file pages that were immediately
        activated.
      workingset_restore_anon
        Number of restored anonymous pages which have been detected
        as an active workingset before they got reclaimed.

      workingset_restore_file
        Number of restored file pages which have been detected as an
        active workingset before they got reclaimed.

      workingset_nodereclaim
        Number of times a shadow node has been reclaimed.

      pswpin (npn)
        Number of pages swapped into memory.

      pswpout (npn)
        Number of pages swapped out of memory.

      pgscan (npn)
        Amount of scanned pages (in an inactive LRU list).

      pgsteal (npn)
        Amount of reclaimed pages.

      pgscan_kswapd (npn)
        Amount of pages scanned by kswapd (in an inactive LRU list).

      pgscan_direct (npn)
        Amount of pages scanned directly (in an inactive LRU list).

      pgscan_khugepaged (npn)
        Amount of pages scanned by khugepaged (in an inactive LRU
        list).

      pgscan_proactive (npn)
        Amount of pages scanned proactively (in an inactive LRU
        list).

      pgsteal_kswapd (npn)
        Amount of pages reclaimed by kswapd.

      pgsteal_direct (npn)
        Amount of pages reclaimed directly.

      pgsteal_khugepaged (npn)
        Amount of pages reclaimed by khugepaged.

      pgsteal_proactive (npn)
        Amount of pages reclaimed proactively.

      pgfault (npn)
        Total number of page faults incurred.

      pgmajfault (npn)
        Number of major page faults incurred.

      pgrefill (npn)
        Amount of scanned pages (in an active LRU list).

      pgactivate (npn)
        Amount of pages moved to the active LRU list.

      pgdeactivate (npn)
        Amount of pages moved to the inactive LRU list.

      pglazyfree (npn)
        Amount of pages postponed to be freed under memory pressure.

      pglazyfreed (npn)
        Amount of reclaimed lazyfree pages.

      swpin_zero
        Number of pages swapped into memory and filled with zero,
        where I/O was optimized out because the page content was
        detected to be zero during swapout.

      swpout_zero
        Number of zero-filled pages swapped out with I/O skipped
        due to the content being detected as zero.

      zswpin
        Number of pages moved into memory from zswap.

      zswpout
        Number of pages moved out of memory to zswap.

      zswpwb
        Number of pages written from zswap to swap.

      zswap_incomp
        Number of incompressible pages currently stored in zswap
        without compression.  These pages could not be compressed to
        a size smaller than PAGE_SIZE, so they are stored as-is.

      thp_fault_alloc (npn)
        Number of transparent hugepages which were allocated to
        satisfy a page fault.  This counter is not present when
        CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_collapse_alloc (npn)
        Number of transparent hugepages which were allocated to
        allow collapsing an existing range of pages.  This counter
        is not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_swpout (npn)
        Number of transparent hugepages which were swapped out in
        one piece without splitting.

      thp_swpout_fallback (npn)
        Number of transparent hugepages which were split before
        swapout, usually because contiguous swap space could not be
        allocated for the huge page.

      numa_pages_migrated (npn)
        Number of pages migrated by NUMA balancing.
      numa_pte_updates (npn)
        Number of pages whose page table entries are modified by
        NUMA balancing to produce NUMA hinting faults on access.

      numa_hint_faults (npn)
        Number of NUMA hinting faults.

      pgdemote_kswapd
        Number of pages demoted by kswapd.

      pgdemote_direct
        Number of pages demoted directly.

      pgdemote_khugepaged
        Number of pages demoted by khugepaged.

      pgdemote_proactive
        Number of pages demoted proactively.

      hugetlb
        Amount of memory used by hugetlb pages.  This metric only
        shows up if hugetlb usage is accounted for in memory.current
        (i.e. the cgroup is mounted with the
        memory_hugetlb_accounting option).

  memory.numa_stat
    A read-only nested-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality
    information within a memcg since the pages are allowed to be
    allocated from any physical node.  One use case is evaluating
    application performance by combining this information with the
    application's CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

      type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries
    can show up in the middle.  Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    See memory.stat above for the meaning of the individual entries.

  memory.swap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of swap currently being used by the cgroup
    and its descendants.

  memory.swap.high
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Swap usage throttle limit.  If a cgroup's swap usage exceeds
    this limit, all its further allocations will be throttled to
    allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup.  It is NOT
    designed to manage the amount of swapping a workload does
    during regular operation.  Compare to memory.swap.max, which
    prohibits swapping past a set amount, but lets the cgroup
    continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  memory.swap.peak
    A read-write single value file which exists on non-root cgroups.

    The maximum swap usage recorded for the cgroup and its
    descendants since the creation of the cgroup or the most recent
    reset for that FD.

    A write of any non-empty string to this file resets it to the
    current swap usage for subsequent reads through the same
    file descriptor.

  memory.swap.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Swap usage hard limit.  If a cgroup's swap usage reaches this
    limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.
    Unless specified otherwise, a value change in this file
    generates a file modified event.

      high
        The number of times the cgroup's swap usage was over
        the high threshold.

      max
        The number of times the cgroup's swap usage was about
        to go over the max boundary and swap allocation
        failed.

      fail
        The number of times swap allocation failed either
        because the system ran out of swap or because the max
        limit was hit.

    When reduced under the current usage, the existing swap
    entries are reclaimed gradually and the swap usage may stay
    higher than the limit for an extended period of time.  This
    reduces the impact on the workload and memory management.

  memory.zswap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory consumed by the zswap compression
    backend.

  memory.zswap.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Zswap usage hard limit.  If a cgroup's zswap pool reaches this
    limit, it will refuse to take any more stores before existing
    entries fault back in or are written out to disk.

  memory.zswap.writeback
    A read-write single value file.  The default value is "1".
    Note that this setting is hierarchical, i.e. the writeback is
    implicitly disabled for child cgroups if it is disabled anywhere
    up the hierarchy.

    When this is set to 0, all swapping attempts to swapping devices
    are disabled.  This includes both zswap writebacks and swapping
    due to zswap store failures.  If the zswap store failures are
    recurring (e.g. if the pages are incompressible), users can
    observe reclaim inefficiency after disabling writeback (because
    the same pages might be rejected again and again).

    Note that this is subtly different from setting memory.swap.max
    to 0, as it still allows for pages to be written to the zswap
    pool.  This setting has no effect if zswap is disabled, and
    swapping is allowed unless memory.swap.max is set to 0.

  memory.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for memory.  See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on the high limit (sum of high limits > available
memory) and letting global memory pressure distribute memory according
to usage is a viable strategy; a sketch of such a configuration
follows at the end of this section.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also operate
equally performantly with a small amount of memory.  A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.
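As a minimal sketch of the over-commit strategy above, on a
hypothetical host with 8G of memory, two hypothetical cgroups can both
be given generous high limits and left to global pressure to
arbitrate::

  # echo 6G > /sys/fs/cgroup/jobA/memory.high
  # echo 6G > /sys/fs/cgroup/jobB/memory.high

Neither job can run away past 6G, and when both are busy, global
memory pressure distributes the available memory between them
according to usage.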
Reclaim Protection
~~~~~~~~~~~~~~~~~~

The protection configured with "memory.low" or "memory.min" applies
relatively to the target of the reclaim (i.e. any of the memory cgroup
limits, proactive memory.reclaim, or global reclaim, which is
effectively located in the root cgroup).  The protection value
configured for B applies unchanged to the reclaim targeting A
(i.e. caused by competition with the sibling E)::

  root - ... - A - B - C
                \    ` D
                 ` E

When the reclaim targets ancestors of A, the effective protection of B
is capped by the protection value configured for A (and any other
intermediate ancestors between A and the target).

To express indifference about relative sibling protection, it is
suggested to use memory_recursiveprot.  Configuring all descendants of
a parent with finite protection to "max" works but it may
unnecessarily skew the memory.events:low field.

Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area gets charged to is indeterminate; however, over
time, the memory area is likely to end up in a cgroup which has enough
memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
    A read-only nested-keyed file.

    Lines are keyed by $MAJ:$MIN device numbers and not ordered.
    The following nested keys are defined.

      ====== =====================
      rbytes Bytes read
      wbytes Bytes written
      rios   Number of read IOs
      wios   Number of write IOs
      dbytes Bytes discarded
      dios   Number of discard IOs
      ====== =====================

    An example read output follows::

      8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
      8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the Quality of Service of the IO cost
    model based controller (CONFIG_BLK_CGROUP_IOCOST) which
    currently implements "io.weight" proportional control.  Lines
    are keyed by $MAJ:$MIN device numbers and not ordered.  The
    line for a given device is populated on the first write for
    the device on "io.cost.qos" or "io.cost.model".  The following
    nested keys are defined.
      ====== =====================================
      enable Weight-based control enable
      ctrl   "auto" or "user"
      rpct   Read latency percentile [0, 100]
      rlat   Read latency threshold
      wpct   Write latency percentile [0, 100]
      wlat   Write latency threshold
      min    Minimum scaling percentage [1, 10000]
      max    Maximum scaling percentage [1, 10000]
      ====== =====================================

    The controller is disabled by default and can be enabled by
    setting "enable" to 1.  The "rpct" and "wpct" parameters default
    to zero and the controller uses internal device saturation
    state to adjust the overall IO rate between "min" and "max".

    When a better control quality is needed, latency QoS
    parameters can be configured.  For example::

      8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0

    shows that on sdb, the controller is enabled, will consider
    the device saturated if the 95th percentile of read completion
    latencies is above 75ms or that of writes above 150ms, and will
    adjust the overall IO issue rate between 50% and 150%
    accordingly.

    The lower the saturation point, the better the latency QoS at
    the cost of aggregate bandwidth.  The narrower the allowed
    adjustment range between "min" and "max", the more conformant
    to the cost model the IO behavior.  Note that the IO issue
    base rate may be far off from 100% and setting "min" and "max"
    blindly can lead to a significant loss of device capacity or
    control quality.  "min" and "max" are useful for regulating
    devices which show wide temporary behavior changes - e.g. an
    SSD which accepts writes at the line speed for a while and
    then completely stalls for multiple seconds.

    When "ctrl" is "auto", the parameters are controlled by the
    kernel and may change automatically.  Setting "ctrl" to "user"
    or setting any of the percentile and latency parameters puts
    it into "user" mode and disables the automatic changes.  The
    automatic mode can be restored by setting "ctrl" to "auto".
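    As a minimal sketch, weight-based control can be turned on for a
    device (8:16 here is a hypothetical $MAJ:$MIN) while leaving all
    QoS parameters in automatic mode::

      # echo "8:16 enable=1" > io.cost.qos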
  io.cost.model
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the cost model of the IO cost model based
    controller (CONFIG_BLK_CGROUP_IOCOST) which currently
    implements "io.weight" proportional control.  Lines are keyed
    by $MAJ:$MIN device numbers and not ordered.  The line for a
    given device is populated on the first write for the device on
    "io.cost.qos" or "io.cost.model".  The following nested keys
    are defined.

      ===== ================================
      ctrl  "auto" or "user"
      model The cost model in use - "linear"
      ===== ================================

    When "ctrl" is "auto", the kernel may change all parameters
    dynamically.  When "ctrl" is set to "user" or any other
    parameters are written to, "ctrl" becomes "user" and the
    automatic changes are disabled.

    When "model" is "linear", the following model parameters are
    defined.

      ============= ========================================
      [r|w]bps      The maximum sequential IO throughput
      [r|w]seqiops  The maximum 4k sequential IOs per second
      [r|w]randiops The maximum 4k random IOs per second
      ============= ========================================

    From the above, the builtin linear model determines the base
    costs of a sequential and random IO and the cost coefficient
    for the IO size.  While simple, this model can cover most
    common device classes acceptably.

    The IO cost model isn't expected to be accurate in an absolute
    sense and is scaled to the device behavior dynamically.

    If needed, tools/cgroup/iocost_coef_gen.py can be used to
    generate device-specific coefficients.

  io.weight
    A read-write flat-keyed file which exists on non-root cgroups.
    The default is "default 100".

    The first line is the default weight applied to devices
    without specific override.  The rest are overrides keyed by
    $MAJ:$MIN device numbers and not ordered.  The weights are in
    the range [1, 10000] and specify the relative amount of IO
    time the cgroup can use in relation to its siblings.

    The default weight can be updated by writing either "default
    $WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
    "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

    An example read output follows::

      default 100
      8:16 200
      8:0 50

  io.max
    A read-write nested-keyed file which exists on non-root
    cgroups.

    BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
    device numbers and not ordered.  The following nested keys are
    defined.

      ===== ==================================
      rbps  Max read bytes per second
      wbps  Max write bytes per second
      riops Max read IO operations per second
      wiops Max write IO operations per second
      ===== ==================================

    When writing, any number of nested key-value pairs can be
    specified in any order.  "max" can be specified as the value
    to remove a specific limit.  If the same key is specified
    multiple times, the outcome is undefined.

    BPS and IOPS are measured in each IO direction and IOs are
    delayed if the limit is reached.  Temporary bursts are allowed.

    Setting the read limit at 2M BPS and the write limit at 120
    IOPS for 8:16::

      echo "8:16 rbps=2097152 wiops=120" > io.max

    Reading returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=120

    The write IOPS limit can be removed by writing the following::

      echo "8:16 wiops=max" > io.max

    Reading now returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for IO.  See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.
The memory controller defines the memory domain that the dirty memory
ratio is calculated and maintained for and the io controller defines
the io domain which writes out dirty pages for the memory domain.
Both system-wide and per-cgroup dirty memory states are examined and
the more restrictive of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected.  It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the
    amount of available memory capped by limits imposed by the
    memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, this is calculated as a ratio against
    the total available memory and applied the same way as
    vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that have a
lower latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy.  This
means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each other.
Group G will influence nobody::

            [root]
          /    |    \
         A     B     C
        / \    |
       D   F   G


So the ideal way to configure this is to set io.latency in groups A,
B, and C.  Generally you do not want to set a value lower than the
latency your device supports.  Experiment to find the value that works
best for your workload.
Start at higher than the expected latency for your device and watch
the avg_lat value in io.stat for your workload group to get an idea of
the latency you see during normal operation.  Use the avg_lat value as
a basis for your real setting, setting at 10-15% higher than the value
in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting
their latency target the controller doesn't do anything.  Once a group
starts missing its target it begins throttling any peer group that has
a higher target than itself.  This throttling takes 2 forms:

- Queue depth throttling.  This is the number of outstanding IOs a
  group is allowed to have.  We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups.  This includes swapping and metadata IO.  These
  types of IO are allowed to occur normally, however they are
  "charged" to the originating group.  If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase.  The delay value is how many microseconds that are
  being added to any process that runs in this group.  Because this
  number can grow quite large if there is a lot of swapping or
  metadata IO occurring we limit the individual delay events to 1
  second at a time.

Once the victimized group starts meeting its latency target again it
will start unthrottling any peer groups that were throttled
previously.  If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
    This takes a similar format as the other controllers; an example
    write is shown at the end of this section.

      "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
    If the controller is enabled you will see extra stats in io.stat
    in addition to the normal ones.

      depth
        This is the current queue depth for the group.

      avg_lat
        This is an exponential moving average with a decay rate of
        1/exp bound by the sampling interval.  The decay rate
        interval can be calculated by multiplying the win value in
        io.stat by the corresponding number of samples based on the
        win value.

      win
        The sampling window size in milliseconds.  This is the
        minimum duration of time between evaluation events.
        Windows only elapse with IO activity.  Idle periods extend
        the most recent window.
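As a minimal sketch of the format above, protecting a workload group
with a 750 microsecond latency target on a hypothetical 8:16 device
could look like::

  # echo "8:16 target=750" > io.latency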
IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup
policy, namely the io.prio.class attribute.  The following values are
accepted for that attribute:

  no-change
    Do not modify the I/O priority class.

  promote-to-rt
    For requests that have a non-RT I/O priority class, change it
    into RT.  Also change the priority level of these requests to 4.
    Do not modify the I/O priority of requests that have priority
    class RT.

  restrict-to-be
    For requests that do not have an I/O priority class or that have
    I/O priority class RT, change it into BE.  Also change the
    priority level of these requests to 0.  Do not modify the I/O
    priority class of requests that have priority class IDLE.

  idle
    Change the I/O priority class of all requests into IDLE, the
    lowest I/O priority class.

  none-to-rt
    Deprecated.  Just an alias for promote-to-rt.

The following numerical values are associated with the I/O priority
policies:

+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+

The numerical value that corresponds to each I/O priority class is as
follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as
follows:

- If the I/O priority class policy is promote-to-rt, change the
  request I/O priority class to IOPRIO_CLASS_RT and change the
  request I/O priority level to 4.
- If the I/O priority class policy is not promote-to-rt, translate
  the I/O priority class policy into a number, then change the
  request I/O priority class into the maximum of the I/O priority
  class policy number and the numerical I/O priority class.
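For example, with the restrict-to-be policy (numerical value 2), a
request with I/O priority class IOPRIO_CLASS_RT (1) is changed to
max(2, 1) = 2, i.e. IOPRIO_CLASS_BE, while a request that already has
IOPRIO_CLASS_IDLE (3) keeps max(2, 3) = 3 and stays IDLE - matching
the description of restrict-to-be above.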
PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller.  For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Hard limit of number of processes.

  pids.current
    A read-only single value file which exists on non-root cgroups.

    The number of processes currently in the cgroup and its
    descendants.

  pids.peak
    A read-only single value file which exists on non-root cgroups.

    The maximum value that the number of processes in the cgroup and
    its descendants has ever reached.

  pids.events
    A read-only flat-keyed file which exists on non-root cgroups.
    Unless specified otherwise, a value change in this file generates
    a file modified event.  The following entries are defined.

      max
        The number of times the cgroup's total number of processes
        hit the pids.max limit (see also pids_localevents).

  pids.events.local
    Similar to pids.events but the fields in the file are local to
    the cgroup, i.e. not hierarchical.  The file modified event
    generated on this file reflects only the local events.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max.  This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max.  However, it is not possible to violate a cgroup PID policy
through fork() or clone().  These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.
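As a minimal sketch, limiting a hypothetical cgroup to at most two
tasks looks like the following; once two tasks are in the cgroup, any
further fork() or clone() from within it fails with -EAGAIN::

  # echo 2 > /sys/fs/cgroup/jobA/pids.max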
Cpuset
------

The "cpuset" controller provides a mechanism for constraining
the CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the systems with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical.  That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested CPUs to be used by tasks within this
    cgroup.  The actual list of CPUs to be granted, however, is
    subject to constraints imposed by its parent and can differ
    from the requested CPUs.

    The CPU numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.cpus
      0-4,6,8-10

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.cpus" or all the available CPUs if none is found.

    The value of "cpuset.cpus" stays constant until the next update
    and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined CPUs that are actually granted to this
    cgroup by its parent.  These CPUs are allowed to be used by
    tasks within the current cgroup.

    If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
    shows all the CPUs from the parent cgroup that can be available
    to be used by this cgroup.  Otherwise, it should be a subset of
    "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
    can be granted.  In this case, it will be treated just like an
    empty "cpuset.cpus".

    Its value will be affected by CPU hotplug events.

  cpuset.mems
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested memory nodes to be used by tasks within
    this cgroup.  The actual list of memory nodes granted, however,
    is subject to constraints imposed by its parent and can differ
    from the requested memory nodes.

    The memory node numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.mems
      0-1,3

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.mems" or all the available memory nodes if none
    is found.

    The value of "cpuset.mems" stays constant until the next update
    and won't be affected by any memory node hotplug events.

    Setting a non-empty value to "cpuset.mems" causes memory of
    tasks within the cgroup to be migrated to the designated nodes
    if they are currently using memory outside of the designated
    nodes.

    There is a cost for this memory migration.  The migration
    may not be complete and some memory pages may be left behind.
    So it is recommended that "cpuset.mems" should be set properly
    before spawning new tasks into the cpuset.  Even if there is
    a need to change "cpuset.mems" with active tasks, it shouldn't
    be done frequently.

  cpuset.mems.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined memory nodes that are actually granted to
    this cgroup by its parent.  These memory nodes are allowed to
    be used by tasks within the current cgroup.

    If "cpuset.mems" is empty, it shows all the memory nodes from
    the parent cgroup that will be available to be used by this
    cgroup.  Otherwise, it should be a subset of "cpuset.mems"
    unless none of the memory nodes listed in "cpuset.mems" can be
    granted.  In this case, it will be treated just like an empty
    "cpuset.mems".

    Its value will be affected by memory node hotplug events.

  cpuset.cpus.exclusive
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists all the exclusive CPUs that are allowed to be used
    to create a new cpuset partition.  Its value is not used
    unless the cgroup becomes a valid partition root.  See the
    "cpuset.cpus.partition" section below for a description of what
    a cpuset partition is.

    When the cgroup becomes a partition root, the actual exclusive
    CPUs that are allocated to that partition are listed in
    "cpuset.cpus.exclusive.effective" which may be different
    from "cpuset.cpus.exclusive".  If "cpuset.cpus.exclusive"
    has previously been set, "cpuset.cpus.exclusive.effective"
    is always a subset of it.

    Users can manually set it to a value that is different from
    "cpuset.cpus".  One constraint in setting it is that the list of
    CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
    and "cpuset.cpus.exclusive.effective" of its siblings.  Another
    constraint is that it cannot be a superset of "cpuset.cpus"
    of its sibling in order to leave at least one CPU available to
    that sibling when the exclusive CPUs are taken away.

    For a parent cgroup, any one of its exclusive CPUs can only
    be distributed to at most one of its child cgroups.  Having an
    exclusive CPU appearing in two or more of its child cgroups is
    not allowed (the exclusivity rule).  A value that violates the
    exclusivity rule will be rejected with a write error.

    The root cgroup is a partition root and all its available CPUs
    are in its exclusive CPU set.

  cpuset.cpus.exclusive.effective
    A read-only multiple values file which exists on all non-root
    cpuset-enabled cgroups.

    This file shows the effective set of exclusive CPUs that
    can be used to create a partition root.  The content
    of this file will always be a subset of its parent's
    "cpuset.cpus.exclusive.effective" if its parent is not the root
    cgroup.  It will also be a subset of "cpuset.cpus.exclusive"
    if it is set.  This file should only be non-empty if either
    "cpuset.cpus.exclusive" is set or when the current cpuset is
    a valid partition root.

  cpuset.cpus.isolated
    A read-only and root cgroup only multiple values file.

    This file shows the set of all isolated CPUs used in existing
    isolated partitions.
    It will be empty if no isolated partition is created.

  cpuset.cpus.partition
    A read-write single value file which exists on non-root
    cpuset-enabled cgroups.  This flag is owned by the parent cgroup
    and is not delegatable.

    It accepts only the following input values when written to.

      ========== =====================================
      "member"   Non-root member of a partition
      "root"     Partition root
      "isolated" Partition root without load balancing
      ========== =====================================

    A cpuset partition is a collection of cpuset-enabled cgroups with
    a partition root at the top of the hierarchy and its descendants
    except those that are separate partition roots themselves and
    their descendants.  A partition has exclusive access to the
    set of exclusive CPUs allocated to it.  Other cgroups outside
    of that partition cannot use any CPUs in that set.

    There are two types of partitions - local and remote.  A local
    partition is one whose parent cgroup is also a valid partition
    root.  A remote partition is one whose parent cgroup is not a
    valid partition root itself.

    Writing to "cpuset.cpus.exclusive" is optional for the creation
    of a local partition as its "cpuset.cpus.exclusive" file will
    assume an implicit value that is the same as "cpuset.cpus" if it
    is not set.  Writing the proper "cpuset.cpus.exclusive" values
    down the cgroup hierarchy before the target partition root is
    mandatory for the creation of a remote partition.

    Not all the CPUs requested in "cpuset.cpus.exclusive" can be
    used to form a new partition.  Only those that were present
    in its parent's "cpuset.cpus.exclusive.effective" control
    file can be used.  For partitions created without setting
    "cpuset.cpus.exclusive", exclusive CPUs specified in a sibling's
    "cpuset.cpus.exclusive" or "cpuset.cpus.exclusive.effective"
    also cannot be used.

    Currently, a remote partition cannot be created under a local
    partition.  None of the ancestors of a remote partition root,
    except the root cgroup, can be a partition root.

    The root cgroup is always a partition root and its state cannot
    be changed.  All other non-root cgroups start out as "member".
    Even though the "cpuset.cpus.exclusive*" and "cpuset.cpus"
    control files are not present in the root cgroup, they are
    implicitly the same as the "/sys/devices/system/cpu/possible"
    sysfs file.

    When set to "root", the current cgroup is the root of a new
    partition or scheduling domain.  The set of exclusive CPUs is
    determined by the value of its "cpuset.cpus.exclusive.effective".

    When set to "isolated", the CPUs in that partition will be in
    an isolated state without any load balancing from the scheduler
    and excluded from the unbound workqueues.  Tasks placed in such
    a partition with multiple CPUs should be carefully distributed
    and bound to each of the individual CPUs for optimal performance.

    A partition root ("root" or "isolated") can be in one of the
    two possible states - valid or invalid.  An invalid partition
    root is in a degraded state where some state information may
    be retained, but behaves more like a "member".

    All possible state transitions among "member", "root" and
    "isolated" are allowed.

    On read, the "cpuset.cpus.partition" file can show the following
    values.
      ============================= =====================================
      "member"                      Non-root member of a partition
      "root"                        Partition root
      "isolated"                    Partition root without load balancing
      "root invalid (<reason>)"     Invalid partition root
      "isolated invalid (<reason>)" Invalid isolated partition root
      ============================= =====================================

    In the case of an invalid partition root, a descriptive string on
    why the partition is invalid is included within parentheses.

    For a local partition root to be valid, the following conditions
    must be met.

    1) The parent cgroup is a valid partition root.
    2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
       though it may contain offline CPUs.
    3) The "cpuset.cpus.effective" cannot be empty unless there is
       no task associated with this partition.

    For a remote partition root to be valid, all the above conditions
    except the first one must be met.

    External events like hotplug or changes to "cpuset.cpus" or
    "cpuset.cpus.exclusive" can cause a valid partition root to
    become invalid and vice versa.  Note that a task cannot be
    moved to a cgroup with an empty "cpuset.cpus.effective".

    A valid non-root parent partition may distribute out all its CPUs
    to its child local partitions when there is no task associated
    with it.

    Care must be taken when changing a valid partition root to
    "member" as all its child local partitions, if present, will
    become invalid, causing disruption to tasks running in those
    child partitions.  These inactivated partitions could be
    recovered if their parent is switched back to a partition root
    with a proper value in "cpuset.cpus" or "cpuset.cpus.exclusive".

    Poll and inotify events are triggered whenever the state of
    "cpuset.cpus.partition" changes.  That includes changes caused
    by a write to "cpuset.cpus.partition", cpu hotplug or other
    changes that modify the validity status of the partition.
    This will allow user space agents to monitor unexpected changes
    to "cpuset.cpus.partition" without the need to do continuous
    polling.

    A user can pre-configure certain CPUs to an isolated state
    with load balancing disabled at boot time with the "isolcpus"
    kernel boot command line option.  If those CPUs are to be put
    into a partition, they have to be used in an isolated partition.
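    As a minimal sketch of the above, assuming the parent is a valid
    partition root that owns CPUs 2-3 (the CPU numbers are
    hypothetical), an isolated local partition can be carved out as
    follows::

      # echo 2-3 > cpuset.cpus
      # echo 2-3 > cpuset.cpus.exclusive
      # echo isolated > cpuset.cpus.partition
      # cat cpuset.cpus.partition
      isolated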
Device controller
-----------------

The device controller manages access to device files.  It includes
both creation of new device files (using mknod) and access to the
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE and
attach them to cgroups with the BPF_CGROUP_DEVICE flag.  On an attempt
to access a device file, the corresponding BPF programs will be
executed, and depending on the return value the attempt will succeed
or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers).  If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.
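As a sketch of how such a program is wired up, assuming it has already
been compiled and pinned at /sys/fs/bpf/dev_cgroup (a hypothetical
path), bpftool can attach it to a cgroup::

  # bpftool cgroup attach /sys/fs/cgroup/cg1 device pinned /sys/fs/bpf/dev_cgroup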
RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
    A read-write nested-keyed file that exists for all the cgroups
    except root that describes the currently configured resource
    limits for an RDMA/IB device.

    Lines are keyed by device name and are not ordered.
    Each line contains a space-separated resource name and its
    configured limit that can be distributed.

    The following nested keys are defined.

      ========== =============================
      hca_handle Maximum number of HCA Handles
      hca_object Maximum number of HCA Objects
      ========== =============================

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=2 hca_object=2000
      ocrdma1 hca_handle=3 hca_object=max

  rdma.current
    A read-only file that describes current resource usage.
    It exists for all the cgroups except root.

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=1 hca_object=20
      ocrdma1 hca_handle=1 hca_object=23

DMEM
----

The "dmem" controller regulates the distribution and accounting of
device memory regions.  Because each memory region may have its own
page size, which does not have to be equal to the system page size,
the units are always bytes.

DMEM Interface Files
~~~~~~~~~~~~~~~~~~~~

  dmem.max, dmem.min, dmem.low
    A read-write nested-keyed file that exists for all the cgroups
    except root that describes the currently configured resource
    limit for a region.

    An example for xe follows::

      drm/0000:03:00.0/vram0 1073741824
      drm/0000:03:00.0/stolen max

    The semantics are the same as for the memory cgroup controller,
    and are calculated in the same way.

  dmem.capacity
    A read-only file that describes maximum region capacity.
    It only exists on the root cgroup.  Not all memory can be
    allocated by cgroups, as the kernel reserves some for
    internal use.

    An example for xe follows::

      drm/0000:03:00.0/vram0 8514437120
      drm/0000:03:00.0/stolen 67108864

  dmem.current
    A read-only file that describes current resource usage.
    It exists for all the cgroups except root.

    An example for xe follows::

      drm/0000:03:00.0/vram0 12550144
      drm/0000:03:00.0/stolen 8650752

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
    Show current usage for "hugepagesize" hugetlb.  It exists for
    all the cgroups except root.

  hugetlb.<hugepagesize>.max
    Set/show the hard limit of "hugepagesize" hugetlb usage.
    The default value is "max".  It exists for all the cgroups
    except root.

  hugetlb.<hugepagesize>.events
    A read-only flat-keyed file which exists on non-root cgroups.

      max
        The number of allocation failures due to the HugeTLB limit.

  hugetlb.<hugepagesize>.events.local
    Similar to hugetlb.<hugepagesize>.events but the fields in the
    file are local to the cgroup, i.e. not hierarchical.  The file
    modified event generated on this file reflects only the local
    events.

  hugetlb.<hugepagesize>.numa_stat
    Similar to memory.numa_stat, it shows the numa information of
    the hugetlb pages of <hugepagesize> in this cgroup.  Only active
    in use hugetlb pages are included.  The per-node values are in
    bytes.

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{} in
the include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file.  The provider of the
resource must set its capacity prior to using the resource by calling
misc_cg_set_capacity().

Once a capacity is set then the resource usage can be updated using
charge and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered then:

  misc.capacity
    A read-only flat-keyed file shown only in the root cgroup.  It
    shows miscellaneous scalar resources available on the platform
    along with their quantities::

      $ cat misc.capacity
      res_a 50
      res_b 10

  misc.current
    A read-only flat-keyed file shown in all cgroups.  It shows
    the current usage of the resources in the cgroup and its
    children::

      $ cat misc.current
      res_a 3
      res_b 0

  misc.peak
    A read-only flat-keyed file shown in all cgroups.  It shows the
    historical maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.peak
      res_a 10
      res_b 8

  misc.max
    A read-write flat-keyed file shown in the non-root cgroups.
    Allowed maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.max
      res_a max
      res_b 4

    Limit can be set by::

      # echo res_a 1 > misc.max

    Limit can be set to max by::

      # echo res_a max > misc.max

    Limits can be set higher than the capacity value in the
    misc.capacity file.

  misc.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.  Unless specified otherwise,
    a value change in this file generates a file modified event.
    All fields in this file are hierarchical.

      max
        The number of times the cgroup's resource usage was
        about to go over the max boundary.

  misc.events.local
    Similar to misc.events but the fields in the file are local to
    the cgroup, i.e. not hierarchical.  The file modified event
    generated on this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource is
freed.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files,
shown here assuming two misc resources (res_a and res_b) are
registered:

  misc.capacity
    A read-only flat-keyed file shown only in the root cgroup. It
    shows the miscellaneous scalar resources available on the
    platform along with their quantities::

      $ cat misc.capacity
      res_a 50
      res_b 10

  misc.current
    A read-only flat-keyed file shown in all cgroups. It shows the
    current usage of the resources in the cgroup and its children::

      $ cat misc.current
      res_a 3
      res_b 0

  misc.peak
    A read-only flat-keyed file shown in all cgroups. It shows the
    historical maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.peak
      res_a 10
      res_b 8

  misc.max
    A read-write flat-keyed file shown in the non-root cgroups. It
    is the allowed maximum usage of the resources in the cgroup and
    its children::

      $ cat misc.max
      res_a max
      res_b 4

    A limit can be set by::

      # echo res_a 1 > misc.max

    A limit can be set to max by::

      # echo res_a max > misc.max

    Limits can be set higher than the capacity value in the
    misc.capacity file.

  misc.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified otherwise,
    a value change in this file generates a file modified event.
    All fields in this file are hierarchical.

      max
        The number of times the cgroup's resource usage was
        about to go over the max boundary.

  misc.events.local
    Similar to misc.events but the fields in the file are local to
    the cgroup, i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource
is freed. Migrating a process to a different cgroup does not move
the charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path. The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup. The weight of this child cgroup depends on the
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
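
As a back-of-the-envelope illustration of that scaling, the short
userspace program below rescales a few sched_prio_to_weight[] entries
(values copied from kernel/sched/core.c) so that nice 0 maps to 100::

  #include <stdio.h>

  int main(void)
  {
          /* selected sched_prio_to_weight[] entries */
          static const struct { int nice; unsigned int weight; } w[] = {
                  { -20, 88761 }, { -10, 9548 }, { 0, 1024 },
                  {  10,   110 }, {  19,   15 },
          };

          for (size_t i = 0; i < sizeof(w) / sizeof(w[0]); i++)
                  printf("nice %3d -> cgroup weight %u\n",
                         w[i].nice, w[i].weight * 100 / 1024);
          return 0;
  }

For example, nice 0 yields weight 100, nice -10 roughly 932 and nice
10 roughly 10.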

IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources, this implicit child node is taken
into account as if it were a normal child cgroup of the root cgroup
with a weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace. The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root. The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

Without cgroup namespaces, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process. In a container setup where
a set of cgroups and namespaces are intended to isolate processes,
the "/proc/$PID/cgroup" file may leak potential system-level
information to the isolated processes. For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered as system-data
and undesirable to expose to the isolated processes. A cgroup
namespace can be used to restrict visibility of this path. For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads). This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.
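
In C, creating and entering a new cgroup namespace looks roughly like
the following sketch (error handling trimmed; CAP_SYS_ADMIN in the
current user namespace is required)::

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          if (unshare(CLONE_NEWCGROUP) < 0) {
                  perror("unshare");
                  return 1;
          }
          /* The process's current cgroup became the cgroupns root,
           * so this prints "0::/" whatever the real cgroup path is. */
          system("cat /proc/self/cgroup");
          return 0;
  }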

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running. For example, if a process in
the /batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root. For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace
creator process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown. For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside the cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged. A task inside a
cgroup namespace should only be exposed to its own cgroupns
hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen when attaching to another cgroup
namespace. It is expected that someone moves the attaching process
under the target cgroup namespace root.
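
A minimal sketch of such an attach follows; the target PID is taken
from argv and, per the rules above, CAP_SYS_ADMIN is needed against
both user namespaces::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
          char path[64];
          int fd;

          if (argc < 2)
                  return 1;
          snprintf(path, sizeof(path), "/proc/%s/ns/cgroup", argv[1]);
          fd = open(path, O_RDONLY);
          if (fd < 0 || setns(fd, CLONE_NEWCGROUP) < 0) {
                  perror("setns");
                  return 1;
          }
          /* As noted above, the caller's cgroup is unchanged; it
           * still has to be migrated under the target cgroupns root
           * through the appropriate cgroup.procs file. */
          return 0;
  }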

Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root
as the filesystem root. The process needs CAP_SYS_ADMIN against its
user and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepages() to annotate bio's using the
following two functions.

  wbc_init_bio(@wbc, @bio)
    Should be called for each bio carrying writeback data and
    associates the bio with the inode's owner cgroup and the
    corresponding request queue. This must be called after
    a queue (device) has been associated with the bio and
    before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
    Should be called for each data segment being written out.
    While this function doesn't care exactly when it's called
    during the writeback session, it's the easiest and most
    natural to call it as data segments are added to a bio.

With writeback bio's annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and,
if the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
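
Putting the two annotation calls together, the write-out path of a
filesystem's ->writepages() might look like the sketch below, where
my_fs_alloc_bio() is a hypothetical helper that returns a bio already
associated with the block device::

  static void my_fs_write_folio(struct writeback_control *wbc,
                                struct folio *folio)
  {
          struct bio *bio = my_fs_alloc_bio(folio);   /* hypothetical */

          /* after the bdev association, before submission */
          wbc_init_bio(wbc, bio);

          /* once for each data segment added to the bio */
          wbc_account_cgroup_owner(wbc, folio, folio_size(folio));

          submit_bio(bio);
  }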

Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller,
utility-type controllers such as freezer which can be useful in all
hierarchies could only be used in one. The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones,
such as the cpu and cpuacct controllers, made sense to be put on the
same hierarchy. This often meant that userland ended up managing
multiple similar hierarchies, repeating the same steps on each
hierarchy whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, it restricted how cgroup could be used in general and
what controllers were able to do.

There was no limit on how many hierarchies there might be, which
meant that a thread's cgroup membership couldn't be described in
finite length. The key might contain any number of entries and was
unlimited in length, which made it highly awkward to manipulate and
led to the addition of controllers which existed only to identify
membership, which in turn exacerbated the original problem of a
proliferating number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.

Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different
cgroups. This didn't make sense for some controllers and those
controllers ended up implementing different ways to ignore such
situations but, much more importantly, it blurred the line between
the API exposed to individual applications and the system management
interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got
abused in combination with thread granularity. cgroups were
delegated to individual applications so that they could create and
manage their own sub-hierarchies and control resource distributions
along them. This effectively raised cgroup to the status of a
syscall-like API exposed to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path,
open and then read and/or write to it. This is not only extremely
clunky and unusual but also inherently racy. There is no
conventional way to define a transaction across the required steps
and nothing can guarantee that the process would actually be
operating on its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs
to a system-management pseudo filesystem. cgroup ended up with
interface knobs which were not properly abstracted or refined and
directly revealed kernel internal details. These knobs got exposed
to individual applications through the ill-defined delegation
mechanism, effectively abusing cgroup as a shortcut to implementing
public APIs without going through the required scrutiny.

This was painful for both userland and the kernel. Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and
its children cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of
all the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup
core in a uniform way.

Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core
side was how an empty cgroup was notified - a userland helper binary
was forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and
treating all cgroups as if they were all located directly under the
root cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and
updates controllers so that they expose minimal and consistent
interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a
global rbtree and treated like equal peers, regardless of where they
are located in the hierarchy. This makes subtree delegation
impossible. Second, the soft limit reclaim pass is so aggressive
that it not only introduces high allocation latencies into the
system, but also impacts system performance due to overreclaim, to
the point where the feature becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of
the available memory. The memory consumption of workloads varies
during runtime, and that requires users to overcommit. But doing
that with a strict upper limit requires either a fairly accurate
prediction of the working set size or adding slack to the limit.
Since working set size estimation is hard and error prone, and
getting it wrong results in OOM kills, most users tend to err on the
side of a looser limit and end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary
can be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of
the system than killing the group. Otherwise, memory.max is there
to limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage
was subject to a race condition, where concurrent charges could cause
the limit setting to fail. memory.max on the other hand will first
set the limit to prevent new charges, and then reclaim and OOM kill
until the new limit is met - or the task writing to memory.max is
killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.