.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   [Whenever any new section is added to this document, please also add
   an entry here.]

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Availability
       2-4-2. Enabling and Disabling
       2-4-3. Top-down Constraint
       2-4-4. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Reclaim Protection
       5-2-4. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device controller
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
       5-8-1. DMEM Interface Files
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Misc Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.
A cgroup controller is usually responsible for distributing a
specific type of system resource along the hierarchy, although there
are utility controllers which serve purposes other than resource
distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled
or disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
on the hierarchies and controller associations before putting the
controllers into use after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
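
For example, booting with the following kernel command line parameter
disables all controllers in v1, making every controller available on
the v2 hierarchy; a comma-separated list of controller names may be
given instead to disable only those (the value shown is
illustrative)::

  cgroup_no_v1=all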

cgroup v2 currently supports the following mount options.

  nsdelegate
    Consider cgroup namespaces as delegation boundaries.  This
    option is system wide and can only be set on mount or modified
    through remount from the init namespace.  The mount option is
    ignored on non-init namespace mounts.  Please refer to the
    Delegation section for details.

  favordynmods
    Reduce the latencies of dynamic cgroup modifications such as
    task migrations and controller on/offs at the cost of making
    hot path operations such as forks and exits more expensive.
    The static usage pattern of creating a cgroup, enabling
    controllers, and then seeding it with CLONE_INTO_CGROUP is
    not affected by this option.

  memory_localevents
    Only populate memory.events with data for the current cgroup,
    and not any subtrees.  This is legacy behaviour; the default
    behaviour without this option is to include subtree counts.
    This option is system wide and can only be set on mount or
    modified through remount from the init namespace.  The mount
    option is ignored on non-init namespace mounts.

  memory_recursiveprot
    Recursively apply memory.min and memory.low protection to
    entire subtrees, without requiring explicit downward
    propagation into leaf cgroups.  This allows protecting entire
    subtrees from one another, while retaining free competition
    within those subtrees.  This should have been the default
    behavior but is a mount-option to avoid regressing setups
    relying on the original semantics (e.g. specifying bogusly
    high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
    Count HugeTLB memory usage towards the cgroup's overall
    memory usage for the memory controller (for the purpose of
    statistics reporting and memory protection).  This is a new
    behavior that could regress existing setups, so it must be
    explicitly opted in with this mount option.

    A few caveats to keep in mind:

    * There is no HugeTLB pool management involved in the memory
      controller.  The pre-allocated pool does not belong to anyone.
      Specifically, when a new HugeTLB folio is allocated to
      the pool, it is not accounted for from the perspective of the
      memory controller.  It is only charged to a cgroup when it is
      actually used (e.g. at page fault time).  Host memory
      overcommit management has to consider this when configuring
      hard limits.  In general, HugeTLB pool management should be
      done via other mechanisms (such as the HugeTLB controller).
    * Failure to charge a HugeTLB folio to the memory controller
      results in SIGBUS.  This could happen even if the HugeTLB pool
      still has pages available (but the cgroup limit is hit and
      the reclaim attempt fails).
    * Charging HugeTLB memory towards the memory controller affects
      memory protection and reclaim dynamics.  Any userspace tuning
      (e.g. of the low and min limits) needs to take this into
      account.
    * HugeTLB pages utilized while this option is not selected
      will not be tracked by the memory controller (even if cgroup
      v2 is remounted later on).

  pids_localevents
    The option restores v1-like behavior of pids.events:max, that
    is, only local (inside cgroup proper) fork failures are counted.
    Without this option, pids.events:max represents any pids.max
    enforcement across the cgroup's subtree.


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".
When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain invalid"
in these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.
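
Putting the above together, the following sketch creates a threaded
subtree, moves a multi-threaded process into the threaded domain and
then places one of its threads into a child threaded cgroup.  $PID,
$TID and the cgroup names are illustrative, and the cpu controller is
assumed to have been made available to "my-domain" by its parent::

  # mkdir my-domain
  # echo $PID > my-domain/cgroup.procs
  # mkdir my-domain/t1
  # echo threaded > my-domain/t1/cgroup.type
  # echo +cpu > my-domain/cgroup.subtree_control
  # echo $TID > my-domain/t1/cgroup.threads

After the "cgroup.type" write, "my-domain/cgroup.type" reads "domain
threaded" and "my-domain/t1/cgroup.type" reads "threaded".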

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of
processes in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.


Controlling Controllers
-----------------------

Availability
~~~~~~~~~~~~

A controller is available in a cgroup when it is supported by the
kernel (i.e., compiled in, not disabled and not attached to a v1
hierarchy) and listed in the "cgroup.controllers" file.  Availability
means the controller's interface files are exposed in the cgroup's
directory, allowing the distribution of the target resource to be
observed or controlled within that cgroup.


Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or they all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
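
The configuration in the example above could be built with the
following sequence, assuming the working directory is cgroup A, that
"cpu" and "memory" appear in A's "cgroup.controllers", and that A
contains no processes (see the No Internal Process Constraint section
below)::

  # echo "+cpu +memory" > cgroup.subtree_control
  # mkdir B
  # echo +memory > B/cgroup.subtree_control
  # mkdir B/C B/D

This creates the "cpu." and "memory." prefixed interface files in B,
and the "memory." prefixed interface files in C and D.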


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types.  Once
delegated, the user can build a sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.
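
As a sketch of the first delegation method, a sub-hierarchy might be
handed to an unprivileged user "u0" (the user and cgroup names are
illustrative) by granting ownership of the directory and the three
files mentioned above, while the resource control interface files
remain owned by root::

  # mkdir delegated
  # chown u0 delegated
  # chown u0 delegated/cgroup.procs
  # chown u0 delegated/cgroup.threads
  # chown u0 delegated/cgroup.subtree_control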

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lower case letters and
'_'s but never begins with an '_', so it can be used as the prefix
character for collision avoidance.  Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  For example, if three active children have
weights 100, 200 and 100, they receive 25%, 50% and 25% of the
resource respectively.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.
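
As a concrete illustration of the first three models, the following
sketch configures one example file of each type - a weight, a limit
and a protection.  The values and the 8:0 device number are
illustrative; the files themselves are described in the Controllers
chapter::

  # echo 200 > cpu.weight
  # echo "8:0 rbps=2097152" > io.max
  # echo 512M > memory.low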


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  sufficient and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should
  be generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
    A read-write single value file which exists on non-root
    cgroups.

    When read, it indicates the current type of the cgroup, which
    can be one of the following values.

    - "domain" : A normal valid domain cgroup.

    - "domain threaded" : A threaded domain cgroup which is
      serving as the root of a threaded subtree.

    - "domain invalid" : A cgroup which is in an invalid state.
      It can't be populated or have controllers enabled.  It may
      be allowed to become a threaded cgroup.

    - "threaded" : A threaded cgroup which is a member of a
      threaded subtree.

    A cgroup can be turned into a threaded cgroup by writing
    "threaded" to this file.

  cgroup.procs
    A read-write new-line separated values file which exists on
    all cgroups.

    When read, it lists the PIDs of all processes which belong to
    the cgroup one-per-line.  The PIDs are not ordered and the
    same PID may show up more than once if the process got moved
    to another cgroup and then back or the PID got recycled while
    reading.

    A PID can be written to migrate the process associated with
    the PID to the cgroup.  The writer must satisfy all of the
    following conditions.

    - It must have write access to the "cgroup.procs" file.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

    In a threaded cgroup, reading this file fails with EOPNOTSUPP
    as all the processes belong to the thread root.  Writing is
    supported and moves every thread of the process to the cgroup.

  cgroup.threads
    A read-write new-line separated values file which exists on
    all cgroups.

    When read, it lists the TIDs of all threads which belong to
    the cgroup one-per-line.  The TIDs are not ordered and the
    same TID may show up more than once if the thread got moved to
    another cgroup and then back or the TID got recycled while
    reading.

    A TID can be written to migrate the thread associated with the
    TID to the cgroup.  The writer must satisfy all of the
    following conditions.

    - It must have write access to the "cgroup.threads" file.

    - The cgroup that the thread is currently in must be in the
      same resource domain as the destination cgroup.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

  cgroup.controllers
    A read-only space separated values file which exists on all
    cgroups.

    It shows a space separated list of all controllers available
    to the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
    A read-write space separated values file which exists on all
    cgroups.  Starts out empty.

    When read, it shows a space separated list of the controllers
    which are enabled to control resource distribution from the
    cgroup to its children.

    A space separated list of controllers prefixed with '+' or '-'
    can be written to enable or disable controllers.  A controller
    name prefixed with '+' enables the controller and '-' disables
    it.  If a controller appears more than once on the list, the
    last one is effective.  When multiple enable and disable
    operations are specified, either all succeed or all fail.

  cgroup.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.  Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    populated
      1 if the cgroup or its descendants contains any live
      processes; otherwise, 0.
    frozen
      1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
    A read-write single value file.  The default is "max".

    Maximum allowed number of descendant cgroups.  If the actual
    number of descendants is equal or larger, an attempt to create
    a new cgroup in the hierarchy will fail.

  cgroup.max.depth
    A read-write single value file.  The default is "max".

    Maximum allowed descent depth below the current cgroup.  If
    the actual descent depth is equal or larger, an attempt to
    create a new child cgroup will fail.

  cgroup.stat
    A read-only flat-keyed file with the following entries:

    nr_descendants
      Total number of visible descendant cgroups.

    nr_dying_descendants
      Total number of dying descendant cgroups.  A cgroup becomes
      dying after being deleted by a user.  The cgroup will remain
      in the dying state for some undefined time (which can depend
      on system load) before being completely destroyed.

      A process can't enter a dying cgroup under any circumstances,
      and a dying cgroup can't be revived.

      A dying cgroup can consume system resources not exceeding
      limits, which were active at the moment of cgroup deletion.

    nr_subsys_<cgroup_subsys>
      Total number of live cgroup subsystems (e.g. memory
      cgroup) at and beneath the current cgroup.

    nr_dying_subsys_<cgroup_subsys>
      Total number of dying cgroup subsystems (e.g. memory
      cgroup) at and beneath the current cgroup.

  cgroup.stat.local
    A read-only flat-keyed file which exists in non-root cgroups.
    The following entry is defined:

    frozen_usec
      Cumulative time that this cgroup has spent between freezing
      and thawing, regardless of whether it was frozen by its own
      setting or by an ancestor's.  NB: whether the "frozen" state
      was actually reached is not taken into account here.

      Using the following ASCII representation of a cgroup's
      freezer state, ::

                 1    _____
        frozen   0 __/     \__
                     ab    cd

      the duration being measured is the span between a and c.

  cgroup.freeze
    A read-write single value file which exists on non-root cgroups.
    Allowed values are "0" and "1".  The default is "0".

    Writing "1" to the file causes freezing of the cgroup and all
    descendant cgroups.  This means that all belonging processes will
    be stopped and will not run until the cgroup is explicitly
    unfrozen.  Freezing of the cgroup may take some time; when this
    action is completed, the "frozen" value in the cgroup.events
    control file will be updated to "1" and the corresponding
    notification will be issued.

    A cgroup can be frozen either by its own settings, or by settings
    of any ancestor cgroups.
    If any of the ancestor cgroups is frozen, the cgroup will remain
    frozen.

    Processes in the frozen cgroup can be killed by a fatal signal.
    They also can enter and leave a frozen cgroup: either by an
    explicit move by a user, or if freezing of the cgroup races with
    fork().  If a process is moved to a frozen cgroup, it stops.  If
    a process is moved out of a frozen cgroup, it becomes running.

    The frozen status of a cgroup doesn't affect any cgroup tree
    operations: it's possible to delete a frozen (and empty) cgroup,
    as well as create new sub-cgroups.

  cgroup.kill
    A write-only single value file which exists in non-root cgroups.
    The only allowed value is "1".

    Writing "1" to the file causes the cgroup and all descendant
    cgroups to be killed.  This means that all processes located in
    the affected cgroup tree will be killed via SIGKILL.

    Killing a cgroup tree will deal with concurrent forks
    appropriately and is protected against migrations.

    In a threaded cgroup, writing this file fails with EOPNOTSUPP as
    killing cgroups is a process directed operation, i.e. it affects
    the whole thread-group.

  cgroup.pressure
    A read-write single value file whose allowed values are "0" and
    "1".  The default is "1".

    Writing "0" to the file will disable the cgroup PSI accounting.
    Writing "1" to the file will re-enable the cgroup PSI accounting.

    This control attribute is not hierarchical, so disabling or
    enabling PSI accounting in a cgroup does not affect PSI
    accounting in descendants and the enablement doesn't need to be
    passed down from the root via ancestors.

    The reason this control attribute exists is that PSI accounts
    stalls for each cgroup separately and aggregates them at each
    level of the hierarchy.  This may cause non-negligible overhead
    for some workloads deep in the hierarchy, in which case this
    control attribute can be used to disable PSI accounting in the
    non-leaf cgroups.

  irq.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for IRQ/SOFTIRQ.  See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates the distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
the normal scheduling policy and an absolute bandwidth allocation
model for the realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed.  The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well
as the maximum desired frequency, which should not be exceeded by a
CPU.

WARNING: the cgroup2 cpu controller doesn't yet support the
(bandwidth) control of realtime processes.  For a kernel built with
the CONFIG_RT_GROUP_SCHED option enabled for group scheduling of
realtime processes, the cpu controller can only be enabled when all
RT processes are in the root cgroup.
Be aware that system management software may already have placed RT
processes into non-root cgroups during the system boot process, and
these processes may need to be moved to the root cgroup before the
cpu controller can be enabled with a CONFIG_RT_GROUP_SCHED enabled
kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply
and some of the interface files either affect realtime processes or
account for them.  See the following section for details.  Only the
cpu controller is affected by CONFIG_RT_GROUP_SCHED.  Other
controllers can be used for the resource control of realtime
processes irrespective of CONFIG_RT_GROUP_SCHED.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its
scheduling policy and the underlying scheduler.  From the point of
view of the cpu controller, processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight``
  callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a
  BPF scheduler without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a
BPF scheduler, check out
:ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.

For each of the following interface files, the above categories will
be referred to.  All time durations are in microseconds.

  cpu.stat
    A read-only flat-keyed file.
    This file exists whether the controller is enabled or not.

    It always reports the following three stats, which account for
    all the processes in the cgroup:

    - usage_usec
    - user_usec
    - system_usec

    and the following five when the controller is enabled, which
    account for only the processes under the fair-class scheduler:

    - nr_periods
    - nr_throttled
    - throttled_usec
    - nr_bursts
    - burst_usec

  cpu.weight
    A read-write single value file which exists on non-root
    cgroups.  The default is "100".

    For non-idle groups (cpu.idle = 0), the weight is in the
    range [1, 10000].

    If the cgroup has been configured to be SCHED_IDLE
    (cpu.idle = 1), then the weight will show as 0.

    This file affects only processes under the fair-class scheduler
    and a BPF scheduler with the ``cgroup_set_weight`` callback
    depending on what the callback actually does.

  cpu.weight.nice
    A read-write single value file which exists on non-root
    cgroups.  The default is "0".

    The nice value is in the range [-20, 19].

    This interface file is an alternative interface for
    "cpu.weight" and allows reading and setting weight using the
    same values used by nice(2).  Because the range is smaller and
    granularity is coarser for the nice values, the read value is
    the closest approximation of the current weight.

    This file affects only processes under the fair-class scheduler
    and a BPF scheduler with the ``cgroup_set_weight`` callback
    depending on what the callback actually does.

  cpu.max
    A read-write two value file which exists on non-root cgroups.
    The default is "max 100000".

    The maximum bandwidth limit.
    It's in the following format::

      $MAX $PERIOD

    which indicates that the group may consume up to $MAX in each
    $PERIOD duration.  "max" for $MAX indicates no limit.  If only
    one number is written, $MAX is updated.

    This file affects only processes under the fair-class scheduler.

  cpu.max.burst
    A read-write single value file which exists on non-root
    cgroups.  The default is "0".

    The burst in the range [0, $MAX].

    This file affects only processes under the fair-class scheduler.

  cpu.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for CPU.  See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

    This file accounts for all the processes in the cgroup.

  cpu.uclamp.min
    A read-write single value file which exists on non-root cgroups.
    The default is "0", i.e. no utilization boosting.

    The requested minimum utilization (protection) as a percentage
    rational number, e.g. 12.34 for 12.34%.

    This interface allows reading and setting minimum utilization
    clamp values similar to sched_setattr(2).  This minimum
    utilization value is used to clamp the task specific minimum
    utilization clamp, including those of realtime processes.

    The requested minimum utilization (protection) is always capped
    by the current value for the maximum utilization (limit), i.e.
    `cpu.uclamp.max`.

    This file affects all the processes in the cgroup.

  cpu.uclamp.max
    A read-write single value file which exists on non-root cgroups.
    The default is "max", i.e. no utilization capping.

    The requested maximum utilization (limit) as a percentage
    rational number, e.g. 98.76 for 98.76%.

    This interface allows reading and setting maximum utilization
    clamp values similar to sched_setattr(2).  This maximum
    utilization value is used to clamp the task specific maximum
    utilization clamp, including those of realtime processes.

    This file affects all the processes in the cgroup.

  cpu.idle
    A read-write single value file which exists on non-root cgroups.
    The default is 0.

    This is the cgroup analog of the per-task SCHED_IDLE sched
    policy.  Setting this value to 1 will make the scheduling policy
    of the cgroup SCHED_IDLE.  The threads inside the cgroup will
    retain their own relative priorities, but the cgroup itself will
    be treated as very low priority relative to its peers.

    This file affects only processes under the fair-class scheduler.
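
As a worked example of the "cpu.max" format described above, the
following setting (values illustrative) allows the cgroup to consume
at most 200ms of CPU time per 1s period, i.e. 20% of one CPU::

  # echo "200000 1000000" > cpu.max

Writing a single number afterwards, e.g. "echo 500000 > cpu.max",
updates only $MAX and keeps the configured period.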


Memory
------

The "memory" controller regulates the distribution of memory.  Memory
is stateful and implements both limit and protection models.  Due to
the intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely watertight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.
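
For example, a workload's memory consumption can be capped and then
observed with the interface files described in the next section (the
value is illustrative)::

  # echo 1G > memory.max
  # cat memory.current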


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory currently being used by the cgroup
    and its descendants.

  memory.min
    A read-write single value file which exists on non-root
    cgroups.  The default is "0".

    Hard memory protection.  If the memory usage of a cgroup
    is within its effective min boundary, the cgroup's memory
    won't be reclaimed under any conditions.  If there is no
    unprotected reclaimable memory available, the OOM killer
    is invoked.  Above the effective min boundary (or
    effective low boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    The effective min boundary is limited by memory.min values of
    ancestor cgroups.  If there is memory.min overcommitment
    (child cgroups are requesting more protected memory than the
    parent allows), then each child cgroup will get the part of
    the parent's protection proportional to its actual memory
    usage below memory.min.

    Putting more memory than generally available under this
    protection is discouraged and may lead to constant OOMs.

  memory.low
    A read-write single value file which exists on non-root
    cgroups.  The default is "0".

    Best-effort memory protection.  If the memory usage of a
    cgroup is within its effective low boundary, the cgroup's
    memory won't be reclaimed unless there is no reclaimable
    memory available in unprotected cgroups.  Above the effective
    low boundary (or effective min boundary if it is higher),
    pages are reclaimed proportionally to the overage, reducing
    reclaim pressure for smaller overages.

    The effective low boundary is limited by memory.low values of
    ancestor cgroups.  If there is memory.low overcommitment
    (child cgroups are requesting more protected memory than the
    parent allows), then each child cgroup will get the part of
    the parent's protection proportional to its actual memory
    usage below memory.low.

    Putting more memory than generally available under this
    protection is discouraged.

  memory.high
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Memory usage throttle limit.  If a cgroup's usage goes
    over the high boundary, the processes of the cgroup are
    throttled and put under heavy reclaim pressure.

    Going over the high limit never invokes the OOM killer and
    under extreme conditions the limit may be breached.  The high
    limit should be used in scenarios where an external process
    monitors the limited cgroup to alleviate heavy reclaim
    pressure.

    If memory.high is opened with O_NONBLOCK then the synchronous
    reclaim is bypassed.  This is useful for admin processes that
    need to dynamically adjust the job's memory limits without
    expending their own CPU resources on memory reclamation.  The
    job will trigger the reclaim and/or get throttled on its
    next charge request.

    Please note that with O_NONBLOCK, there is a chance that the
    target memory cgroup may take an indefinite amount of time to
    reduce usage below the limit due to delayed charge requests or
    busy-hitting its memory to slow down reclaim.

  memory.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Memory usage hard limit.  This is the main mechanism to limit
    memory usage of a cgroup.  If a cgroup's memory usage reaches
    this limit and can't be reduced, the OOM killer is invoked in
    the cgroup.  Under certain circumstances, the usage may go
    over the limit temporarily.

    In the default configuration, regular 0-order allocations
    always succeed unless the OOM killer chooses the current task
    as a victim.

    Some kinds of allocations don't invoke the OOM killer.  The
    caller could retry them differently, return -ENOMEM to
    userspace, or silently ignore the failure in cases like disk
    readahead.

    If memory.max is opened with O_NONBLOCK, then the synchronous
    reclaim and oom-kill are bypassed.  This is useful for admin
    processes that need to dynamically adjust the job's memory
    limits without expending their own CPU resources on memory
    reclamation.  The job will trigger the reclaim and/or oom-kill
    on its next charge request.

    Please note that with O_NONBLOCK, there is a chance that the
    target memory cgroup may take an indefinite amount of time to
    reduce usage below the limit due to delayed charge requests or
    busy-hitting its memory to slow down reclaim.

  memory.reclaim
    A write-only nested-keyed file which exists for all cgroups.

    This is a simple interface to trigger memory reclaim in the
    target cgroup.

    Example::

      echo "1G" > memory.reclaim

    Please note that the kernel can over- or under-reclaim from
    the target cgroup.  If fewer bytes are reclaimed than the
    specified amount, -EAGAIN is returned.

    Please note that the proactive reclaim (triggered by this
    interface) is not meant to indicate memory pressure on the
    memory cgroup.  Therefore socket memory balancing triggered by
    the memory reclaim normally is not exercised in this case.
    This means that the networking layer will not adapt based on
    reclaim induced by memory.reclaim.

    The following nested keys are defined.

      ==========  ================================
      swappiness  Swappiness value to reclaim with
      ==========  ================================

    Specifying a swappiness value instructs the kernel to perform
    the reclaim with that swappiness value.  Note that this has the
    same semantics as vm.swappiness applied to memcg reclaim with
    all the existing limitations and potential future extensions.

    The valid range for swappiness is [0, 200] plus the special
    value "max"; setting swappiness=max exclusively reclaims
    anonymous memory.

  memory.peak
    A read-write single value file which exists on non-root cgroups.

    The max memory usage recorded for the cgroup and its descendants
    since either the creation of the cgroup or the most recent reset
    for that FD.

    A write of any non-empty string to this file resets it to the
    current memory usage for subsequent reads through the same
    file descriptor.

  memory.oom.group
    A read-write single value file which exists on non-root
    cgroups.  The default value is "0".

  memory.peak
    A read-write single value file which exists on non-root cgroups.

    The max memory usage recorded for the cgroup and its descendants
    since either the creation of the cgroup or the most recent reset
    for that FD.

    A write of any non-empty string to this file resets it to the
    current memory usage for subsequent reads through the same
    file descriptor.

  memory.oom.group
    A read-write single value file which exists on non-root
    cgroups. The default value is "0".

    Determines whether the cgroup should be treated as
    an indivisible workload by the OOM killer. If set,
    all tasks belonging to the cgroup or to its descendants
    (if the memory cgroup is not a leaf cgroup) are killed
    together or not at all. This can be used to avoid
    partial kills to guarantee workload integrity.

    Tasks with OOM protection (oom_score_adj set to -1000)
    are treated as an exception and are never killed.

    If the OOM killer is invoked in a cgroup, it's not going
    to kill any tasks outside of this cgroup, regardless of
    the memory.oom.group values of ancestor cgroups.

  memory.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    Note that all fields in this file are hierarchical and the
    file modified event can be generated due to an event down the
    hierarchy. For the local events at the cgroup level see
    memory.events.local.

      low
        The number of times the cgroup is reclaimed due to
        high memory pressure even though its usage is under
        the low boundary. This usually indicates that the low
        boundary is over-committed.

      high
        The number of times processes of the cgroup are
        throttled and routed to perform direct memory reclaim
        because the high memory boundary was exceeded. For a
        cgroup whose memory usage is capped by the high limit
        rather than global memory pressure, this event's
        occurrences are expected.

      max
        The number of times the cgroup's memory usage was
        about to go over the max boundary. If direct reclaim
        fails to bring it down, the cgroup goes to OOM state.

      oom
        The number of times the cgroup's memory usage reached
        the limit and allocation was about to fail.

        This event is not raised if the OOM killer is not
        considered as an option, e.g. for failed high-order
        allocations or if the caller asked to not retry attempts.

      oom_kill
        The number of processes belonging to this cgroup
        killed by any kind of OOM killer.

      oom_group_kill
        The number of times a group OOM has occurred.

      sock_throttled
        The number of times network sockets associated with
        this cgroup are throttled.

  memory.events.local
    Similar to memory.events but the fields in the file are local
    to the cgroup i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

  memory.stat
    A read-only flat-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    on the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    Entries which have no per-node counter (and thus do not show up
    in memory.numa_stat) are tagged with 'npn' (non-per-node).
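
    For example, a monitoring script should locate entries by key
    rather than by line position (the key selection below is just an
    illustration)::

      # grep -E '^(anon|file|slab) ' memory.stat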

      anon
        Amount of memory used in anonymous mappings such as
        brk(), sbrk(), and mmap(MAP_ANONYMOUS). Note that
        some kernel configurations might account complete larger
        allocations (e.g., THP) if only some, but not all the
        memory of such an allocation is mapped anymore.

      file
        Amount of memory used to cache filesystem data,
        including tmpfs and shared memory.

      kernel (npn)
        Amount of total kernel memory, including
        (kernel_stack, pagetables, percpu, vmalloc, slab) in
        addition to other kernel memory use cases.

      kernel_stack
        Amount of memory allocated to kernel stacks.

      pagetables
        Amount of memory allocated for page tables.

      sec_pagetables
        Amount of memory allocated for secondary page tables;
        this currently includes KVM mmu allocations on x86
        and arm64 and IOMMU page tables.

      percpu (npn)
        Amount of memory used for storing per-cpu kernel
        data structures.

      sock (npn)
        Amount of memory used in network transmission buffers.

      vmalloc (npn)
        Amount of memory used for vmap backed memory.

      shmem
        Amount of cached filesystem data that is swap-backed,
        such as tmpfs, shm segments, and shared anonymous mmap()s.

      zswap
        Amount of memory consumed by the zswap compression backend.

      zswapped
        Amount of application memory swapped out to zswap.

      file_mapped
        Amount of cached filesystem data mapped with mmap(). Note
        that some kernel configurations might account complete
        larger allocations (e.g., THP) if only some, but not
        all the memory of such an allocation is mapped.

      file_dirty
        Amount of cached filesystem data that was modified but
        not yet written back to disk.

      file_writeback
        Amount of cached filesystem data that was modified and
        is currently being written back to disk.

      swapcached
        Amount of swap cached in memory. The swapcache is accounted
        against both memory and swap usage.

      anon_thp
        Amount of memory used in anonymous mappings backed by
        transparent hugepages.

      file_thp
        Amount of cached filesystem data backed by transparent
        hugepages.

      shmem_thp
        Amount of shm, tmpfs, and shared anonymous mmap()s backed by
        transparent hugepages.

      inactive_anon, active_anon, inactive_file, active_file, unevictable
        Amount of memory, swap-backed and filesystem-backed,
        on the internal memory management lists used by the
        page reclaim algorithm.

        As these represent internal list state (e.g. shmem pages are
        on anon memory management lists), inactive_foo + active_foo
        may not be equal to the value for the foo counter, since the
        foo counter is type-based, not list-based.

      slab_reclaimable
        Part of "slab" that might be reclaimed, such as
        dentries and inodes.

      slab_unreclaimable
        Part of "slab" that cannot be reclaimed on memory
        pressure.

      slab (npn)
        Amount of memory used for storing in-kernel data
        structures.

      workingset_refault_anon
        Number of refaults of previously evicted anonymous pages.

      workingset_refault_file
        Number of refaults of previously evicted file pages.

      workingset_activate_anon
        Number of refaulted anonymous pages that were immediately
        activated.

      workingset_activate_file
        Number of refaulted file pages that were immediately
        activated.

      workingset_restore_anon
        Number of restored anonymous pages which have been detected
        as an active workingset before they got reclaimed.

      workingset_restore_file
        Number of restored file pages which have been detected as an
        active workingset before they got reclaimed.

      workingset_nodereclaim
        Number of times a shadow node has been reclaimed

      pswpin (npn)
        Number of pages swapped into memory

      pswpout (npn)
        Number of pages swapped out of memory

      pgscan (npn)
        Amount of scanned pages (in an inactive LRU list)

      pgsteal (npn)
        Amount of reclaimed pages

      pgscan_kswapd (npn)
        Amount of pages scanned by kswapd (in an inactive LRU list)

      pgscan_direct (npn)
        Amount of pages scanned directly (in an inactive LRU list)

      pgscan_khugepaged (npn)
        Amount of pages scanned by khugepaged (in an inactive LRU list)

      pgscan_proactive (npn)
        Amount of pages scanned proactively (in an inactive LRU list)

      pgsteal_kswapd (npn)
        Amount of pages reclaimed by kswapd

      pgsteal_direct (npn)
        Amount of pages reclaimed directly

      pgsteal_khugepaged (npn)
        Amount of pages reclaimed by khugepaged

      pgsteal_proactive (npn)
        Amount of pages reclaimed proactively

      pgfault (npn)
        Total number of page faults incurred

      pgmajfault (npn)
        Number of major page faults incurred

      pgrefill (npn)
        Amount of scanned pages (in an active LRU list)

      pgactivate (npn)
        Amount of pages moved to the active LRU list

      pgdeactivate (npn)
        Amount of pages moved to the inactive LRU list

      pglazyfree (npn)
        Amount of pages postponed to be freed under memory pressure

      pglazyfreed (npn)
        Amount of reclaimed lazyfree pages

      swpin_zero
        Number of pages swapped into memory and filled with zero,
        where I/O was optimized out because the page content was
        detected to be zero during swapout.

      swpout_zero
        Number of zero-filled pages swapped out with I/O skipped due
        to the content being detected as zero.

      zswpin
        Number of pages moved into memory from zswap.

      zswpout
        Number of pages moved out of memory to zswap.

      zswpwb
        Number of pages written from zswap to swap.

      thp_fault_alloc (npn)
        Number of transparent hugepages which were allocated to
        satisfy a page fault. This counter is not present when
        CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_collapse_alloc (npn)
        Number of transparent hugepages which were allocated to allow
        collapsing an existing range of pages. This counter is not
        present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_swpout (npn)
        Number of transparent hugepages which were swapped out in one
        piece without splitting.

      thp_swpout_fallback (npn)
        Number of transparent hugepages which were split before
        swapout, usually because contiguous swap space could not be
        allocated for the huge page.

      numa_pages_migrated (npn)
        Number of pages migrated by NUMA balancing.

      numa_pte_updates (npn)
        Number of pages whose page table entries are modified by
        NUMA balancing to produce NUMA hinting faults on access.

      numa_hint_faults (npn)
        Number of NUMA hinting faults.

      pgdemote_kswapd
        Number of pages demoted by kswapd.

      pgdemote_direct
        Number of pages demoted directly.

      pgdemote_khugepaged
        Number of pages demoted by khugepaged.

      pgdemote_proactive
        Number of pages demoted proactively.

      hugetlb
        Amount of memory used by hugetlb pages. This metric only
        shows up if hugetlb usage is accounted for in memory.current
        (i.e. the cgroup is mounted with the
        memory_hugetlb_accounting option).

  memory.numa_stat
    A read-only nested-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality
    information within a memcg since the pages are allowed to be
    allocated from any physical node. One of the use cases is
    evaluating application performance by combining this information
    with the application's CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

      type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    The entries are defined the same way as in memory.stat.

  memory.swap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of swap currently being used by the cgroup
    and its descendants.

  memory.swap.high
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Swap usage throttle limit. If a cgroup's swap usage exceeds
    this limit, all its further allocations will be throttled to
    allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup. It is NOT
    designed to manage the amount of swapping a workload does
    during regular operation. Compare to memory.swap.max, which
    prohibits swapping past a set amount, but lets the cgroup
    continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  memory.swap.peak
    A read-write single value file which exists on non-root cgroups.

    The max swap usage recorded for the cgroup and its descendants
    since the creation of the cgroup or the most recent reset for
    that FD.

    A write of any non-empty string to this file resets it to the
    current swap usage for subsequent reads through the same
    file descriptor.

  memory.swap.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Swap usage hard limit. If a cgroup's swap usage reaches this
    limit, anonymous memory of the cgroup will not be swapped out.
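
    For example, to prevent a cgroup's anonymous memory from being
    swapped out at all (a common containment pattern)::

      # echo 0 > memory.swap.max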

  memory.swap.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

      high
        The number of times the cgroup's swap usage was over
        the high threshold.

      max
        The number of times the cgroup's swap usage was about
        to go over the max boundary and swap allocation
        failed.

      fail
        The number of times swap allocation failed either
        because of running out of swap system-wide or the max
        limit.

    When reduced under the current usage, the existing swap
    entries are reclaimed gradually and the swap usage may stay
    higher than the limit for an extended period of time. This
    reduces the impact on the workload and memory management.

  memory.zswap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory consumed by the zswap compression
    backend.

  memory.zswap.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Zswap usage hard limit. If a cgroup's zswap pool reaches this
    limit, it will refuse to take any more stores before existing
    entries fault back in or are written out to disk.

  memory.zswap.writeback
    A read-write single value file. The default value is "1".
    Note that this setting is hierarchical, i.e. the writeback would
    be implicitly disabled for child cgroups if the upper hierarchy
    does so.

    When this is set to 0, all swapping to swap devices is disabled.
    This includes both zswap writebacks and swapping due to zswap
    store failures. If the zswap store failures are recurring
    (e.g. if the pages are incompressible), users can observe
    reclaim inefficiency after disabling writeback (because the same
    pages might be rejected again and again).

    Note that this is subtly different from setting memory.swap.max
    to 0, as it still allows for pages to be written to the zswap
    pool. This setting has no effect if zswap is disabled, and
    swapping is allowed unless memory.swap.max is set to 0.

  memory.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for memory. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.
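
    A minimal example of reading it (the zeroed numbers are simply
    what an idle cgroup would show)::

      # cat memory.pressure
      some avg10=0.00 avg60=0.00 avg300=0.00 total=0
      full avg10=0.00 avg60=0.00 avg300=0.00 total=0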


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also perform
equally well with a small amount of memory. A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" interface file described above provides
such a measure.

Reclaim Protection
~~~~~~~~~~~~~~~~~~

The protection configured with "memory.low" or "memory.min" applies
relatively to the target of the reclaim (i.e. any of the memory cgroup
limits, proactive memory.reclaim, or global reclaim, which can be
thought of as originating at the root cgroup).
The protection value configured for B applies unchanged to reclaim
targeting A (i.e. reclaim caused by competition with the sibling E)::

  root - ... - A - B - C
                \ ` D
                 ` E

When the reclaim targets ancestors of A, the effective protection of B
is capped by the protection value configured for A (and any other
intermediate ancestors between A and the target).

To express indifference about relative sibling protection, it is
suggested to use memory_recursiveprot. Configuring all descendants of
a parent with finite protection to "max" works but it may
unnecessarily skew the memory.events:low field.

Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released. Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources. This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
    A read-only nested-keyed file.

    Lines are keyed by $MAJ:$MIN device numbers and not ordered.
    The following nested keys are defined.

      ====== =====================
      rbytes Bytes read
      wbytes Bytes written
      rios   Number of read IOs
      wios   Number of write IOs
      dbytes Bytes discarded
      dios   Number of discard IOs
      ====== =====================

    An example read output follows::

      8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
      8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
"rpct" and "wpct" parameters default 2038 to zero and the controller uses internal device saturation 2039 state to adjust the overall IO rate between "min" and "max". 2040 2041 When a better control quality is needed, latency QoS 2042 parameters can be configured. For example:: 2043 2044 8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0 2045 2046 shows that on sdb, the controller is enabled, will consider 2047 the device saturated if the 95th percentile of read completion 2048 latencies is above 75ms or write 150ms, and adjust the overall 2049 IO issue rate between 50% and 150% accordingly. 2050 2051 The lower the saturation point, the better the latency QoS at 2052 the cost of aggregate bandwidth. The narrower the allowed 2053 adjustment range between "min" and "max", the more conformant 2054 to the cost model the IO behavior. Note that the IO issue 2055 base rate may be far off from 100% and setting "min" and "max" 2056 blindly can lead to a significant loss of device capacity or 2057 control quality. "min" and "max" are useful for regulating 2058 devices which show wide temporary behavior changes - e.g. a 2059 ssd which accepts writes at the line speed for a while and 2060 then completely stalls for multiple seconds. 2061 2062 When "ctrl" is "auto", the parameters are controlled by the 2063 kernel and may change automatically. Setting "ctrl" to "user" 2064 or setting any of the percentile and latency parameters puts 2065 it into "user" mode and disables the automatic changes. The 2066 automatic mode can be restored by setting "ctrl" to "auto". 2067 2068 io.cost.model 2069 A read-write nested-keyed file which exists only on the root 2070 cgroup. 2071 2072 This file configures the cost model of the IO cost model based 2073 controller (CONFIG_BLK_CGROUP_IOCOST) which currently 2074 implements "io.weight" proportional control. Lines are keyed 2075 by $MAJ:$MIN device numbers and not ordered. The line for a 2076 given device is populated on the first write for the device on 2077 "io.cost.qos" or "io.cost.model". The following nested keys 2078 are defined. 2079 2080 ===== ================================ 2081 ctrl "auto" or "user" 2082 model The cost model in use - "linear" 2083 ===== ================================ 2084 2085 When "ctrl" is "auto", the kernel may change all parameters 2086 dynamically. When "ctrl" is set to "user" or any other 2087 parameters are written to, "ctrl" become "user" and the 2088 automatic changes are disabled. 2089 2090 When "model" is "linear", the following model parameters are 2091 defined. 2092 2093 ============= ======================================== 2094 [r|w]bps The maximum sequential IO throughput 2095 [r|w]seqiops The maximum 4k sequential IOs per second 2096 [r|w]randiops The maximum 4k random IOs per second 2097 ============= ======================================== 2098 2099 From the above, the builtin linear model determines the base 2100 costs of a sequential and random IO and the cost coefficient 2101 for the IO size. While simple, this model can cover most 2102 common device classes acceptably. 2103 2104 The IO cost model isn't expected to be accurate in absolute 2105 sense and is scaled to the device behavior dynamically. 2106 2107 If needed, tools/cgroup/iocost_coef_gen.py can be used to 2108 generate device-specific coefficients. 2109 2110 io.weight 2111 A read-write flat-keyed file which exists on non-root cgroups. 2112 The default is "default 100". 

  io.weight
    A read-write flat-keyed file which exists on non-root cgroups.
    The default is "default 100".

    The first line is the default weight applied to devices
    without specific override. The rest are overrides keyed by
    $MAJ:$MIN device numbers and not ordered. The weights are in
    the range [1, 10000] and specify the relative amount of IO time
    the cgroup can use in relation to its siblings.

    The default weight can be updated by writing either "default
    $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
    "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

    An example read output follows::

      default 100
      8:16 200
      8:0 50

  io.max
    A read-write nested-keyed file which exists on non-root
    cgroups.

    BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
    device numbers and not ordered. The following nested keys are
    defined.

      ===== ==================================
      rbps  Max read bytes per second
      wbps  Max write bytes per second
      riops Max read IO operations per second
      wiops Max write IO operations per second
      ===== ==================================

    When writing, any number of nested key-value pairs can be
    specified in any order. "max" can be specified as the value
    to remove a specific limit. If the same key is specified
    multiple times, the outcome is undefined.

    BPS and IOPS are measured in each IO direction and IOs are
    delayed if the limit is reached. Temporary bursts are allowed.

    Setting the read limit at 2M BPS and the write limit at 120 IOPS
    for 8:16::

      echo "8:16 rbps=2097152 wiops=120" > io.max

    Reading returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=120

    The write IOPS limit can be removed by writing the following::

      echo "8:16 wiops=max" > io.max

    Reading now returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for IO. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs. The memory controller
defines the memory domain that the dirty memory ratio is calculated
and maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain. Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with. These are called foreign pages. The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well. In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the
    amount of available memory capped by limits imposed by the
    memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, this is converted into a ratio against
    the total available memory and applied the same way as
    vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide
a group with a latency target, and if the average latency exceeds that
target the controller will throttle any peers that have a lower
latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy. This
means that in the diagram below, only groups A, B, and C will influence
each other, and groups D and F will influence each other. Group G will
influence nobody::

        [root]
        /  |  \
       A   B   C
      / \      |
     D   F     G


So the ideal way to configure this is to set io.latency in groups A,
B, and C. Generally you do not want to set a value lower than the
latency your device supports. Experiment to find the value that works
best for your workload. Start at higher than the expected latency for
your device and watch the avg_lat value in io.stat for your workload
group to get an idea of the latency you see during normal operation.
Use the avg_lat value as a basis for your real setting, setting it
10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving: as long as everybody is meeting their
latency target the controller doesn't do anything. Once a group starts
missing its target it begins throttling any peer group that has a
higher target than itself. This throttling takes 2 forms:

- Queue depth throttling. This is the number of outstanding IOs a
  group is allowed to have. We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction. There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups.
  This includes swapping and metadata IO. These types of IO are
  allowed to occur normally; however, they are "charged" to the
  originating group. If the originating group is being throttled you
  will see the use_delay and delay fields in io.stat increase. The
  delay value is the number of microseconds being added to any process
  that runs in this group. Because this number can grow quite large if
  there is a lot of swapping or metadata IO occurring, we limit the
  individual delay events to 1 second at a time.

Once the victimized group starts meeting its latency target again, it
will start unthrottling any peer groups that were throttled
previously. If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
    This takes a similar format as the other controllers.

      "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
    If the controller is enabled you will see extra stats in io.stat
    in addition to the normal ones.

      depth
        This is the current queue depth for the group.

      avg_lat
        This is an exponential moving average with a decay rate of
        1/exp bound by the sampling interval. The decay rate interval
        can be calculated by multiplying the win value in io.stat by
        the corresponding number of samples based on the win value.

      win
        The sampling window size in milliseconds. This is the minimum
        duration of time between evaluation events. Windows only
        elapse with IO activity. Idle periods extend the most recent
        window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup
policy, namely the io.prio.class attribute. The following values are
accepted for that attribute:

  no-change
    Do not modify the I/O priority class.

  promote-to-rt
    For requests that have a non-RT I/O priority class, change it into
    RT. Also change the priority level of these requests to 4. Do not
    modify the I/O priority of requests that have priority class RT.

  restrict-to-be
    For requests that do not have an I/O priority class or that have
    I/O priority class RT, change it into BE. Also change the priority
    level of these requests to 0. Do not modify the I/O priority class
    of requests that have priority class IDLE.

  idle
    Change the I/O priority class of all requests into IDLE, the
    lowest I/O priority class.

  none-to-rt
    Deprecated. Just an alias for promote-to-rt.
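
For example, assuming the io controller is enabled for the cgroup, the
policy could be set with::

  # echo restrict-to-be > io.prio.class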

The following numerical values are associated with the I/O priority
policies:

+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+

The numerical value that corresponds to each I/O priority class is as
follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as
follows:

- If the I/O priority class policy is promote-to-rt, change the
  request I/O priority class to IOPRIO_CLASS_RT and change the request
  I/O priority level to 4.
- If the I/O priority class policy is not promote-to-rt, translate the
  I/O priority class policy into a number, then change the request I/O
  priority class into the maximum of the I/O priority class policy
  number and the numerical I/O priority class.

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller. For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Hard limit of number of processes.

  pids.current
    A read-only single value file which exists on non-root cgroups.

    The number of processes currently in the cgroup and its
    descendants.

  pids.peak
    A read-only single value file which exists on non-root cgroups.

    The maximum value that the number of processes in the cgroup and
    its descendants has ever reached.

  pids.events
    A read-only flat-keyed file which exists on non-root cgroups.
    Unless specified otherwise, a value change in this file generates
    a file modified event. The following entries are defined.

      max
        The number of times the cgroup's total number of processes
        hit the pids.max limit (see also pids_localevents).

  pids.events.local
    Similar to pids.events but the fields in the file are local
    to the cgroup i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max. This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max. However, it is not possible to violate a cgroup PID policy
through fork() or clone(). These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.
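
As an illustration, a delegated cgroup could be capped at a
hypothetical 16 processes; a 17th fork(2) inside the cgroup would then
fail with -EAGAIN::

  # echo 16 > pids.max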


Cpuset
------

The "cpuset" controller provides a mechanism for constraining
the CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical. That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested CPUs to be used by tasks within this
    cgroup. The actual list of CPUs to be granted, however, is
    subject to constraints imposed by its parent and can differ
    from the requested CPUs.

    The CPU numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.cpus
      0-4,6,8-10

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.cpus" or all the available CPUs if none is found.

    The value of "cpuset.cpus" stays constant until the next update
    and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined CPUs that are actually granted to this
    cgroup by its parent. These CPUs are allowed to be used by
    tasks within the current cgroup.

    If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
    all the CPUs from the parent cgroup that can be available to
    be used by this cgroup. Otherwise, it should be a subset of
    "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
    can be granted. In this case, it will be treated just like an
    empty "cpuset.cpus".

    Its value will be affected by CPU hotplug events.

  cpuset.mems
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested memory nodes to be used by tasks within
    this cgroup. The actual list of memory nodes granted, however,
    is subject to constraints imposed by its parent and can differ
    from the requested memory nodes.

    The memory node numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.mems
      0-1,3

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.mems" or all the available memory nodes if none
    is found.

    The value of "cpuset.mems" stays constant until the next update
    and won't be affected by any memory node hotplug events.

    Setting a non-empty value to "cpuset.mems" causes memory of
    tasks within the cgroup to be migrated to the designated nodes if
    they are currently using memory outside of the designated nodes.

    There is a cost for this memory migration. The migration
    may not be complete and some memory pages may be left behind.
    So it is recommended that "cpuset.mems" should be set properly
    before spawning new tasks into the cpuset. Even if there is
    a need to change "cpuset.mems" with active tasks, it shouldn't
    be done frequently.
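
    For example, to bind new tasks in the cgroup to nodes 0 and 1
    before they are spawned (the node numbers are illustrative)::

      # echo 0-1 > cpuset.mems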

  cpuset.mems.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined memory nodes that are actually granted to
    this cgroup by its parent. These memory nodes are allowed to
    be used by tasks within the current cgroup.

    If "cpuset.mems" is empty, it shows all the memory nodes from the
    parent cgroup that will be available to be used by this cgroup.
    Otherwise, it should be a subset of "cpuset.mems" unless none of
    the memory nodes listed in "cpuset.mems" can be granted. In this
    case, it will be treated just like an empty "cpuset.mems".

    Its value will be affected by memory node hotplug events.

  cpuset.cpus.exclusive
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists all the exclusive CPUs that are allowed to be used
    to create a new cpuset partition. Its value is not used
    unless the cgroup becomes a valid partition root. See the
    "cpuset.cpus.partition" section below for a description of what
    a cpuset partition is.

    When the cgroup becomes a partition root, the actual exclusive
    CPUs that are allocated to that partition are listed in
    "cpuset.cpus.exclusive.effective" which may be different
    from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
    has previously been set, "cpuset.cpus.exclusive.effective"
    is always a subset of it.

    Users can manually set it to a value that is different from
    "cpuset.cpus". One constraint in setting it is that the list of
    CPUs must be exclusive with respect to the "cpuset.cpus.exclusive"
    of its siblings. If "cpuset.cpus.exclusive" of a sibling cgroup
    isn't set, its "cpuset.cpus" value, if set, cannot be a subset
    of it, so as to leave at least one CPU available when the
    exclusive CPUs are taken away.

    For a parent cgroup, any one of its exclusive CPUs can only
    be distributed to at most one of its child cgroups. Having an
    exclusive CPU appearing in two or more of its child cgroups is
    not allowed (the exclusivity rule). A value that violates the
    exclusivity rule will be rejected with a write error.

    The root cgroup is a partition root and all its available CPUs
    are in its exclusive CPU set.

  cpuset.cpus.exclusive.effective
    A read-only multiple values file which exists on all non-root
    cpuset-enabled cgroups.

    This file shows the effective set of exclusive CPUs that
    can be used to create a partition root. The content
    of this file will always be a subset of its parent's
    "cpuset.cpus.exclusive.effective" if its parent is not the root
    cgroup. It will also be a subset of "cpuset.cpus.exclusive"
    if it is set. If "cpuset.cpus.exclusive" is not set, it is
    treated as having an implicit value of "cpuset.cpus" in the
    formation of a local partition.

  cpuset.cpus.isolated
    A read-only multiple values file which exists only on the root
    cgroup.

    This file shows the set of all isolated CPUs used in existing
    isolated partitions. It will be empty if no isolated partition
    is created.

  cpuset.cpus.partition
    A read-write single value file which exists on non-root
    cpuset-enabled cgroups. This flag is owned by the parent cgroup
    and is not delegatable.

    It accepts only the following input values when written to.

      ========== =====================================
      "member"   Non-root member of a partition
      "root"     Partition root
      "isolated" Partition root without load balancing
      ========== =====================================

    A cpuset partition is a collection of cpuset-enabled cgroups with
    a partition root at the top of the hierarchy and its descendants
    except those that are separate partition roots themselves and
    their descendants. A partition has exclusive access to the
    set of exclusive CPUs allocated to it. Other cgroups outside
    of that partition cannot use any CPUs in that set.

    There are two types of partitions - local and remote. A local
    partition is one whose parent cgroup is also a valid partition
    root. A remote partition is one whose parent cgroup is not a
    valid partition root itself. Writing to "cpuset.cpus.exclusive"
    is optional for the creation of a local partition as its
    "cpuset.cpus.exclusive" file will assume an implicit value that
    is the same as "cpuset.cpus" if it is not set. Writing the
    proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
    before the target partition root is mandatory for the creation
    of a remote partition.

    Currently, a remote partition cannot be created under a local
    partition. All the ancestors of a remote partition root except
    the root cgroup cannot be a partition root.

    The root cgroup is always a partition root and its state cannot
    be changed. All other non-root cgroups start out as "member".

    When set to "root", the current cgroup is the root of a new
    partition or scheduling domain. The set of exclusive CPUs is
    determined by the value of its "cpuset.cpus.exclusive.effective".

    When set to "isolated", the CPUs in that partition will be in
    an isolated state without any load balancing from the scheduler
    and excluded from the unbound workqueues. Tasks placed in such
    a partition with multiple CPUs should be carefully distributed
    and bound to each of the individual CPUs for optimal performance.

    A partition root ("root" or "isolated") can be in one of the
    two possible states - valid or invalid. An invalid partition
    root is in a degraded state where some state information may
    be retained, but it behaves more like a "member".

    All possible state transitions among "member", "root" and
    "isolated" are allowed.

    On read, the "cpuset.cpus.partition" file can show the following
    values.

      ============================= =====================================
      "member"                      Non-root member of a partition
      "root"                        Partition root
      "isolated"                    Partition root without load balancing
      "root invalid (<reason>)"     Invalid partition root
      "isolated invalid (<reason>)" Invalid isolated partition root
      ============================= =====================================

    In the case of an invalid partition root, a descriptive string on
    why the partition is invalid is included within parentheses.

    For a local partition root to be valid, the following conditions
    must be met.

    1) The parent cgroup is a valid partition root.
    2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
       though it may contain offline CPUs.
    3) The "cpuset.cpus.effective" cannot be empty unless there is
       no task associated with this partition.

    For a remote partition root to be valid, all the above conditions
    except the first one must be met.

    External events like hotplug or changes to "cpuset.cpus" or
    "cpuset.cpus.exclusive" can cause a valid partition root to
    become invalid and vice versa. Note that a task cannot be
    moved to a cgroup with an empty "cpuset.cpus.effective".

    A valid non-root parent partition may distribute out all its CPUs
    to its child local partitions when there is no task associated
    with it.

    Care must be taken when changing a valid partition root to
    "member" as all its child local partitions, if present, will
    become invalid, causing disruption to tasks running in those
    child partitions. These inactivated partitions can be recovered
    if their parent is switched back to a partition root with a
    proper value in "cpuset.cpus" or "cpuset.cpus.exclusive".

    Poll and inotify events are triggered whenever the state of
    "cpuset.cpus.partition" changes. That includes changes caused
    by a write to "cpuset.cpus.partition", CPU hotplug, or other
    changes that modify the validity status of the partition.
    This allows user space agents to monitor unexpected changes
    to "cpuset.cpus.partition" without the need to do continuous
    polling.

    A user can pre-configure certain CPUs to an isolated state
    with load balancing disabled at boot time with the "isolcpus"
    kernel boot command line option. If those CPUs are to be put
    into a partition, they have to be used in an isolated partition.
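
    As an illustrative example, assuming CPUs 2-3 can be granted
    exclusively to the cgroup, it could be turned into an isolated
    partition with::

      # echo 2-3 > cpuset.cpus
      # echo isolated > cpuset.cpus.partition
      # cat cpuset.cpus.partition
      isolated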


Device controller
-----------------

The device controller manages access to device files. It includes both
the creation of new device files (using mknod) and access to
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF. To control access to device files, a
user may create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE and
attach them to cgroups with the BPF_CGROUP_DEVICE flag. On an attempt
to access a device file, the corresponding BPF programs will be
executed, and depending on the return value the attempt will succeed
or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers). If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
    A read-write nested-keyed file that exists for all cgroups
    except the root that describes the current configured resource
    limits for an RDMA/IB device.

    Lines are keyed by device name and are not ordered.
    Each line contains a space-separated resource name and its
    configured limit that can be distributed.

    The following nested keys are defined.

      ========== =============================
      hca_handle Maximum number of HCA Handles
      hca_object Maximum number of HCA Objects
      ========== =============================

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=2 hca_object=2000
      ocrdma1 hca_handle=3 hca_object=max

  rdma.current
    A read-only file that describes current resource usage.
    It exists for all cgroups except the root.

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=1 hca_object=20
      ocrdma1 hca_handle=1 hca_object=23

DMEM
----

The "dmem" controller regulates the distribution and accounting of
device memory regions. Because each memory region may have its own
page size, which does not have to be equal to the system page size,
the units are always bytes.

DMEM Interface Files
~~~~~~~~~~~~~~~~~~~~

  dmem.max, dmem.min, dmem.low
    A read-write nested-keyed file that exists for all cgroups
    except the root that describes the current configured resource
    limits for a region.

    An example for xe follows::

      drm/0000:03:00.0/vram0 1073741824
      drm/0000:03:00.0/stolen max

    The semantics are the same as for the memory cgroup controller,
    and are calculated in the same way.

  dmem.capacity
    A read-only file that describes maximum region capacity.
    It only exists on the root cgroup. Not all memory can be
    allocated by cgroups, as the kernel reserves some for
    internal use.

    An example for xe follows::

      drm/0000:03:00.0/vram0 8514437120
      drm/0000:03:00.0/stolen 67108864

  dmem.current
    A read-only file that describes current resource usage.
    It exists for all cgroups except the root.

    An example for xe follows::

      drm/0000:03:00.0/vram0 12550144
      drm/0000:03:00.0/stolen 8650752

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
    Shows current usage for "hugepagesize" hugetlb. It exists for all
    cgroups except the root.

  hugetlb.<hugepagesize>.max
    Set/show the hard limit of "hugepagesize" hugetlb usage.
    The default value is "max". It exists for all cgroups except the
    root.

  hugetlb.<hugepagesize>.events
    A read-only flat-keyed file which exists on non-root cgroups.

      max
        The number of allocation failures due to the HugeTLB limit

  hugetlb.<hugepagesize>.events.local
    Similar to hugetlb.<hugepagesize>.events but the fields in the
    file are local to the cgroup i.e. not hierarchical. The file
    modified event generated on this file reflects only the local
    events.

  hugetlb.<hugepagesize>.numa_stat
    Similar to memory.numa_stat, it shows the numa information of the
    hugetlb pages of <hugepagesize> in this cgroup. Only hugetlb pages
    that are actively in use are included. The per-node values are in
    bytes.
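
For example, on a system whose huge page size is 2MB, the files appear
as "hugetlb.2MB.current", "hugetlb.2MB.max", etc., and a limit could
be set with (the value is illustrative)::

  # echo 1G > hugetlb.2MB.max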

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources. The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{} in
the include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file. The provider of the
resource must set its capacity prior to using the resource by calling
misc_cg_set_capacity().

Once a capacity is set then the resource usage can be updated using
charge and uncharge APIs. All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered then:

  misc.capacity
    A read-only flat-keyed file shown only in the root cgroup. It
    shows miscellaneous scalar resources available on the platform
    along with their quantities::

      $ cat misc.capacity
      res_a 50
      res_b 10

  misc.current
    A read-only flat-keyed file shown in all cgroups. It shows
    the current usage of the resources in the cgroup and its
    children::

      $ cat misc.current
      res_a 3
      res_b 0

  misc.peak
    A read-only flat-keyed file shown in all cgroups. It shows the
    historical maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.peak
      res_a 10
      res_b 8

  misc.max
    A read-write flat-keyed file shown in the non-root cgroups.
    Allowed maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.max
      res_a max
      res_b 4

    A limit can be set by::

      # echo res_a 1 > misc.max

    A limit can be set to max by::

      # echo res_a max > misc.max

    Limits can be set higher than the capacity value in the
    misc.capacity file.

  misc.events
    A read-only flat-keyed file which exists on non-root cgroups. The
    following entries are defined. Unless specified otherwise, a value
    change in this file generates a file modified event. All fields in
    this file are hierarchical.

      max
        The number of times the cgroup's resource usage was
        about to go over the max boundary.

  misc.events.local
    Similar to misc.events but the fields in the file are local to the
    cgroup i.e. not hierarchical. The file modified event generated on
    this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource is
freed. Migrating a process to a different cgroup does not move the
charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path. The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  The weight of this child cgroup depends on the
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of
1024).


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources, this implicit child node is taken
into account as if it were a normal child cgroup of the root cgroup
with a weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP
clone flag can be used with clone(2) and unshare(2) to create a new
cgroup namespace.  The process running inside the cgroup namespace
will have its "/proc/$PID/cgroup" output restricted to the cgroupns
root.  The cgroupns root is the cgroup of the process at the time of
creation of the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup,
where a set of cgroups and namespaces are intended to isolate
processes, the "/proc/$PID/cgroup" file may leak system-level
information to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data
which is undesirable to expose to the isolated processes.  cgroup
namespace can be used to restrict visibility of this path.  For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
the /batchjobs/container_id1 cgroup calls unshare, the cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace
creator process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside the cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside a
cgroup namespace should only be exposed to its own cgroupns
hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root
as the filesystem root.  The process needs CAP_SYS_ADMIN against its
user and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepages() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
    Should be called for each bio carrying writeback data and
    associates the bio with the inode's owner cgroup and the
    corresponding request queue.  This must be called after a queue
    (device) has been associated with the bio and before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
    Should be called for each data segment being written out.  While
    this function doesn't care exactly when it's called during the
    writeback session, it's easiest and most natural to call it as
    data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
to the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller,
utility type controllers such as freezer, which can be useful in all
hierarchies, could only be used in one.  The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy, and most configurations resorted to
putting each controller on its own hierarchy.  Only closely related
ones, such as the cpu and cpuacct controllers, made sense to put on
the same hierarchy.  This often meant that userland ended up managing
multiple similar hierarchies, repeating the same steps on each
hierarchy whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which
meant that a thread's cgroup membership couldn't be described in
finite length.  The key might contain any number of entries and was
unlimited in length, which made it highly awkward to manipulate and
led to the addition of controllers which existed only to identify
membership, which in turn exacerbated the original problem of the
proliferating number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different
cgroups.  This didn't make sense for some controllers and those
controllers ended up implementing different ways to ignore such
situations, but much more importantly it blurred the line between the
API exposed to individual applications and the system management
interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got
abused in combination with thread granularity.  cgroups were
delegated to individual applications so that they could create and
manage their own sub-hierarchies and control resource distributions
along them.  This effectively raised cgroup to the status of a
syscall-like API exposed to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path,
open and then read and/or write to it.  This is not only extremely
clunky and unusual but also inherently racy.  There is no
conventional way to define a transaction across the required steps
and nothing can guarantee that the process would actually be
operating on its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs
to a system-management pseudo filesystem.  cgroup ended up with
interface knobs which were not properly abstracted or refined and
directly revealed kernel internal details.
These knobs got exposed to individual applications through the
ill-defined delegation mechanism, effectively abusing cgroup as a
shortcut to implementing public APIs without going through the
required scrutiny.

This was painful for both userland and kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces, and the kernel
inadvertently exposed and got locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and
its child cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of
all the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads, which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with them; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from the cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core
side was how an empty cgroup was notified - a userland helper binary
was forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and
treating all cgroups as if they were all located directly under the
root cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.
When a new cgroup was created, some controllers defaulted to not
imposing extra restrictions while others disallowed any resource
usage until explicitly configured.  Configuration knobs for the same
type of control used widely differing naming schemes and formats.
Statistics and information knobs were named arbitrarily and used
different formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and
updates controllers so that they expose minimal and consistent
interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a
global rbtree and treated like equal peers, regardless of where they
are located in the hierarchy.  This makes subtree delegation
impossible.  Second, the soft limit reclaim pass is so aggressive
that it not only introduces high allocation latencies into the
system, but also impacts system performance due to overreclaim, to
the point where the feature becomes self-defeating.

The memory.low boundary, on the other hand, is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It also
enjoys having reclaim pressure proportional to its overage when above
its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of
the available memory.  The memory consumption of workloads varies
during runtime, and that requires users to overcommit.  But doing
that with a strict upper limit requires either a fairly accurate
prediction of the working set size or adding slack to the limit.
Since working set size estimation is hard and error prone, and
getting it wrong results in OOM kills, most users tend to err on the
side of a looser limit and end up wasting precious resources.

The memory.high boundary, on the other hand, can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.  The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of
the system than to kill the group.
Otherwise, memory.max is there to limit this type of spillover and
ultimately contain buggy or even malicious applications.

Setting the original memory.limit_in_bytes below the current usage
was subject to a race condition, where concurrent charges could cause
the limit setting to fail.  memory.max, on the other hand, will first
set the limit to prevent new charges, and then reclaim and OOM kill
until the new limit is met - or the task writing to memory.max is
killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.
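
As an illustration of that separation, the v2 interface lets memory
proper and swap be capped independently from within a cgroup
directory (a sketch; the values are illustrative)::

  # echo 2G > memory.max
  # echo 512M > memory.swap.max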