.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   [Whenever any new section is added to this document, please also add
   an entry here.]

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Availability
       2-4-2. Enabling and Disabling
       2-4-3. Top-down Constraint
       2-4-4. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device controller
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
       5-8-1. DMEM Interface Files
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Misc Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.
A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy,
although there are utility controllers which serve purposes other
than resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled
or disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to settle
on the hierarchies and controller associations before putting the
controllers into use after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries.  This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace.  The mount option is
        ignored on non-init namespace mounts.  Please refer to the
        Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace.  The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups.  This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees.  This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
        Count HugeTLB memory usage towards the cgroup's overall
        memory usage for the memory controller (for the purpose of
        statistics reporting and memory protection).  This is a new
        behavior that could regress existing setups, so it must be
        explicitly opted in with this mount option.

        A few caveats to keep in mind:

        * There is no HugeTLB pool management involved in the memory
          controller.  The pre-allocated pool does not belong to
          anyone.  Specifically, when a new HugeTLB folio is allocated
          to the pool, it is not accounted for from the perspective of
          the memory controller.  It is only charged to a cgroup when
          it is actually used (e.g. at page fault time).  Host memory
          overcommit management has to consider this when configuring
          hard limits.  In general, HugeTLB pool management should be
          done via other mechanisms (such as the HugeTLB controller).

        * Failure to charge a HugeTLB folio to the memory controller
          results in SIGBUS.  This could happen even if the HugeTLB
          pool still has pages available (but the cgroup limit is hit
          and the reclaim attempt fails).

        * Charging HugeTLB memory towards the memory controller
          affects memory protection and reclaim dynamics.  Any
          userspace tuning (of low and min limits, for example) needs
          to take this into account.

        * HugeTLB pages utilized while this option is not selected
          will not be tracked by the memory controller (even if cgroup
          v2 is remounted later on).

  pids_localevents
        The option restores v1-like behavior of pids.events:max, that
        is, only local (inside cgroup proper) fork failures are
        counted.  Without this option pids.events:max represents any
        pids.max enforcement across the cgroup's subtree.


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".
When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into
the cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which
doesn't have any children and is associated only with zombie
processes is considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread
across a subtree while still maintaining the common resource domain
for them.

Controllers which support thread mode are called threaded
controllers.  The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The
root of a threaded subtree, that is, the nearest ancestor which is
not threaded, is called threaded domain or thread root
interchangeably and serves as the resource domain for the entire
subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because
the root cgroup is not subject to the no internal process constraint,
it can serve both as a threaded domain and a parent to domain
cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded
subtree, or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable
the thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any
  domain controllers enabled or populated domain children.  The root
  is exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain invalid"
in these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its
child cgroups becomes threaded or when threaded controllers are
enabled in the "cgroup.subtree_control" file while there are
processes in the cgroup.  A threaded domain reverts to a normal
domain when the conditions clear.

When read, "cgroup.threads" contains the list of the thread IDs of
all threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can
be written to in any cgroup, as it can only move threads inside the
same threaded domain, its operations are confined inside each
threaded subtree.

The threaded domain cgroup serves as the resource domain for the
whole subtree, and, while the threads can be scattered across the
subtree, all the processes are considered to be in the threaded
domain cgroup.  "cgroup.procs" in a threaded domain cgroup contains
the PIDs of all processes in the subtree and is not readable in the
subtree proper.  However, "cgroup.procs" can be written to from
anywhere in the subtree to migrate all threads of the matching
process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain
cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.
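
Putting the above together, a minimal sketch of setting up a threaded
subtree might look like the following.  The cgroup names and $TID are
hypothetical, and the process owning $TID is assumed to already be in
"example-domain"::

  # mkdir example-domain/worker0 example-domain/worker1
  # echo threaded > example-domain/worker0/cgroup.type
  # echo threaded > example-domain/worker1/cgroup.type
  # cat example-domain/cgroup.type
  domain threaded
  # echo "+cpu" > example-domain/cgroup.subtree_control
  # echo $TID > example-domain/worker0/cgroup.threads

Here, "example-domain" has become the threaded domain: its
"cgroup.procs" still lists all processes of the subtree, while
individual threads can be moved between "worker0" and "worker1"
through the "cgroup.threads" files.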

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of
processes in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.


Controlling Controllers
-----------------------

Availability
~~~~~~~~~~~~

A controller is available in a cgroup when it is supported by the
kernel (i.e., compiled in, not disabled and not attached to a v1
hierarchy) and listed in the "cgroup.controllers" file.  Availability
means the controller's interface files are exposed in the cgroup's
directory, allowing the distribution of the target resource to be
observed or controlled within that cgroup.


Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either
they all succeed or they all fail.  If multiple operations on the
same controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be
controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on
B would create the "cpu." prefixed controller interface files in C
and D.  Likewise, disabling "memory" from B would remove the
"memory." prefixed controller interface files from C and D.  This
means that the controller interface files - anything which doesn't
start with "cgroup." - are owned by the parent rather than the cgroup
itself.
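
As a rough illustration of this ownership rule (the child cgroup name
is hypothetical and the listing is abbreviated; the exact set of
files depends on the kernel version and configuration)::

  # mkdir child
  # echo "+cpu" > cgroup.subtree_control
  # ls child
  cgroup.controllers  cgroup.procs  cpu.max  cpu.weight  ...

The "cpu." files in "child" were created by the parent and configure
distribution from it; removing "cpu" from the parent's
"cgroup.subtree_control" would remove them again.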


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further
distribute a resource only if the resource has been distributed to it
from the parent.  This means that all non-root
"cgroup.subtree_control" files can only contain controllers which are
enabled in the parent's "cgroup.subtree_control" file.  A controller
can be enabled only if the parent has the controller enabled and a
controller can't be disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other
words, only domain cgroups which don't contain any processes can have
domain controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be
associated with any other cgroups and requires special treatment from
most controllers.  How resource consumption in the root cgroup is
governed is up to each controller (for more information on this topic
please refer to the Non-normative information section in the
Controllers chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of
a populated cgroup.  To control resource distribution of a cgroup,
the cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its
"cgroup.procs", "cgroup.threads" and "cgroup.subtree_control" files
to the user.  Second, if the "nsdelegate" mount option is set,
automatically to a cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second,
files outside the namespace should be hidden from the delegatee by
the means of at least mount namespacing, and the kernel rejects
writes to all files on a namespace root from inside the cgroup
namespace, except for those files listed in
"/sys/kernel/cgroup/delegate" (including "cgroup.procs",
"cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types.  Once
delegated, the user can build a sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute
the resources it received from the parent.  The limits and other
settings of all resource controllers are hierarchical and regardless
of what happens in the delegated sub-hierarchy, nothing can escape
the resource restrictions imposed by the parent.
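
For example, the first method might be realized by changing the
ownership of the delegated cgroup directory and the three files
before handing them over to the user.  This is a sketch; the path and
user "U0" are hypothetical::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown U0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control

Note that the resource control interface files of "delegated" itself
remain owned by root, so the delegatee can organize the sub-hierarchy
but can't change how much it receives from the parent.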

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If
either is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive
operation and stateful resources such as memory are not moved
together with the process.  This is an explicit design decision as
there often exist inherent trade-offs between migration and various
hot paths in terms of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name
and a dot.  A controller's name is composed of lowercase letters and
underscores ('_') but never begins with an underscore, so the
underscore can be used as the prefix character for collision
avoidance.  Also, interface file names won't start or end with terms
which are often used in categorizing workloads such as job, service,
slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes the major schemes in use along with their expected
behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of
its weight against the sum.  For example, if three active children
have weights 100, 100 and 200, they receive 25%, 25% and 50% of the
resource respectively.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.
This allows symmetric multiplicative biases in both directions at
fine enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.


.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which
case only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.
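
As a rough sketch of how these models surface in the interface,
assuming a child cgroup "app" whose parent has the relevant
controllers enabled (the names and values are hypothetical)::

  # echo 200 > app/cpu.weight              # weight: twice the default share
  # echo "8:0 wbps=1048576" > app/io.max   # limit: cap writes on one device
  # echo 512M > app/memory.low             # protection: best-effort guarantee

Weight, limit and protection configurations never become invalid,
while allocation interfaces may reject writes which would over-commit
the parent's resource.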


Interface Files
===============

Format
------

All interface files should be in one of the following formats
whenever possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single
key can be written at a time.  For nested keyed files, the sub key
pairs may be specified in any order and not all pairs have to be
specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL"
  or "$VAL".

  When writing to update a specific override, "default" can be used
  as the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should
  be generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled.  It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line.  The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root.  Writing is
        supported and moves every thread of the process to the
        cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line.  The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved
        to another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with
        the TID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows a space separated list of all controllers available
        to the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups.  Starts out empty.

        When read, it shows a space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        A space separated list of controllers prefixed with '+' or
        '-' can be written to enable or disable controllers.  A
        controller name prefixed with '+' enables the controller and
        '-' disables it.  If a controller appears more than once on
        the list, the last one is effective.  When multiple enable
        and disable operations are specified, either all succeed or
        all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file.  The default is "max".

        Maximum allowed number of descendant cgroups.  If the actual
        number of descendants is equal or larger, an attempt to
        create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file.  The default is "max".

        Maximum allowed descendant depth below the current cgroup.
        If the actual descendant depth is equal or larger, an attempt
        to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups.  A cgroup
                becomes dying after being deleted by a user.  The
                cgroup will remain in the dying state for some
                undefined time (which can depend on system load)
                before being completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't be revived.

                A dying cgroup can consume system resources not
                exceeding the limits which were active at the moment
                of cgroup deletion.

          nr_subsys_<cgroup_subsys>
                Total number of live cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

          nr_dying_subsys_<cgroup_subsys>
                Total number of dying cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

  cgroup.stat.local
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entry is defined:

          frozen_usec
                Cumulative time that this cgroup has spent between
                freezing and thawing, regardless of whether the
                freeze was initiated by the cgroup itself or by an
                ancestor.  Note that whether the "frozen" state was
                actually reached does not affect this accounting; the
                measured span runs from freeze initiation to thaw
                initiation.

                Using the following ASCII representation of a
                cgroup's freezer state, ::

                             1 _____
                    frozen 0 __/     \__
                               ab    cd

                the duration being measured is the span between a
                and c.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups.  Allowed values are "0" and "1".  The default is
        "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups.  This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen.  Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in
        the cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups.
        If any of the ancestor cgroups is frozen, the cgroup will
        remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal.  They also can enter and leave a frozen cgroup:
        either by an explicit move by a user, or if freezing of the
        cgroup races with fork().  If a process is moved to a frozen
        cgroup, it stops.  If a process is moved out of a frozen
        cgroup, it becomes running.

        The frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.

  cgroup.kill
        A write-only single value file which exists on non-root
        cgroups.  The only allowed value is "1".

        Writing "1" to the file causes the cgroup and all descendant
        cgroups to be killed.  This means that all processes located
        in the affected cgroup tree will be killed via SIGKILL.

        Killing a cgroup tree will deal with concurrent forks
        appropriately and is protected against migrations.

        In a threaded cgroup, writing this file fails with EOPNOTSUPP
        as killing cgroups is a process directed operation, i.e. it
        affects the whole thread-group.

  cgroup.pressure
        A read-write single value file whose allowed values are "0"
        and "1".  The default is "1".

        Writing "0" to the file disables the cgroup PSI accounting.
        Writing "1" to the file re-enables the cgroup PSI accounting.

        This control attribute is not hierarchical, so disabling or
        enabling PSI accounting in a cgroup does not affect PSI
        accounting in descendants and does not require enablement to
        be passed down from the root via ancestors.

        The reason this control attribute exists is that PSI accounts
        stalls for each cgroup separately and aggregates them at each
        level of the hierarchy.  This may cause non-negligible
        overhead for some workloads deep in the hierarchy, in which
        case this control attribute can be used to disable PSI
        accounting in the non-leaf cgroups.

  irq.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for IRQ/SOFTIRQ.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.


Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates the distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
the normal scheduling policy and an absolute bandwidth allocation
model for the realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed.  The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well
as the maximum desired frequency, which should not be exceeded by a
CPU.

WARNING: cgroup2 cpu controller doesn't yet support the (bandwidth)
control of realtime processes.  For a kernel built with the
CONFIG_RT_GROUP_SCHED option enabled for group scheduling of realtime
processes, the cpu controller can only be enabled when all RT
processes are in the root cgroup.
Be aware that system
management software may already have placed RT processes into
non-root cgroups during the system boot process, and these processes
may need to be moved to the root cgroup before the cpu controller can
be enabled with a CONFIG_RT_GROUP_SCHED enabled kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply
and some of the interface files either affect realtime processes or
account for them.  See the following section for details.  Only the
cpu controller is affected by CONFIG_RT_GROUP_SCHED.  Other
controllers can be used for the resource control of realtime
processes irrespective of CONFIG_RT_GROUP_SCHED.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its
scheduling policy and the underlying scheduler.  From the point of
view of the cpu controller, processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight``
  callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a
  BPF scheduler without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a
BPF scheduler, check out
:ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.

For each of the following interface files, the above categories will
be referred to.  All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats, which account
        for all the processes in the cgroup:

        - usage_usec
        - user_usec
        - system_usec

        and the following five when the controller is enabled, which
        account for only the processes under the fair-class
        scheduler:

        - nr_periods
        - nr_throttled
        - throttled_usec
        - nr_bursts
        - burst_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups.  The default is "100".

        For non-idle groups (cpu.idle = 0), the weight is in the
        range [1, 10000].

        If the cgroup has been configured to be SCHED_IDLE
        (cpu.idle = 1), then the weight will show as 0.

        This file affects only processes under the fair-class
        scheduler and a BPF scheduler with the ``cgroup_set_weight``
        callback depending on what the callback actually does.

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2).  Because the range is smaller
        and granularity is coarser for the nice values, the read
        value is the closest approximation of the current weight.

        This file affects only processes under the fair-class
        scheduler and a BPF scheduler with the ``cgroup_set_weight``
        callback depending on what the callback actually does.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit.
        It's in the following
        format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in
        each $PERIOD duration.  "max" for $MAX indicates no limit.
        If only one number is written, $MAX is updated.

        This file affects only processes under the fair-class
        scheduler.

  cpu.max.burst
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        The burst in the range [0, $MAX].

        This file affects only processes under the fair-class
        scheduler.

  cpu.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for CPU.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

        This file accounts for all the processes in the cgroup.

  cpu.uclamp.min
        A read-write single value file which exists on non-root
        cgroups.  The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a
        percentage rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization
        clamp values similar to sched_setattr(2).  This minimum
        utilization value is used to clamp the task specific minimum
        utilization clamp, including those of realtime processes.

        The requested minimum utilization (protection) is always
        capped by the current value for the maximum utilization
        (limit), i.e. `cpu.uclamp.max`.

        This file affects all the processes in the cgroup.

  cpu.uclamp.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage
        rational number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization
        clamp values similar to sched_setattr(2).  This maximum
        utilization value is used to clamp the task specific maximum
        utilization clamp, including those of realtime processes.

        This file affects all the processes in the cgroup.

  cpu.idle
        A read-write single value file which exists on non-root
        cgroups.  The default is 0.

        This is the cgroup analog of the per-task SCHED_IDLE sched
        policy.  Setting this value to 1 will make the scheduling
        policy of the cgroup SCHED_IDLE.  The threads inside the
        cgroup will retain their own relative priorities, but the
        cgroup itself will be treated as very low priority relative
        to its peers.

        This file affects only processes under the fair-class
        scheduler.


Memory
------

The "memory" controller regulates the distribution of memory.  Memory
is stateful and implements both limit and protection models.  Due to
the intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.
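
As a quick orientation before the individual files are described, a
minimal sketch of driving the controller might look like the
following, assuming a child cgroup "job" with "memory" enabled in the
parent's "cgroup.subtree_control" (the name and values are
hypothetical)::

  # echo 1G > job/memory.max      # hard limit; the OOM killer acts beyond it
  # echo 768M > job/memory.high   # throttle limit; heavy reclaim above this
  # cat job/memory.current        # usage of "job" and its descendants
  402653184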


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Hard memory protection.  If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions.  If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked.  Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective min boundary is limited by the memory.min
        values of all ancestor cgroups.  If there is memory.min
        overcommitment (child cgroup or cgroups are requiring more
        protected memory than the parent will allow), then each child
        cgroup will get the part of the parent's protection
        proportional to its actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes, its
        memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Best-effort memory protection.  If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups.  Above the effective
        low boundary (or effective min boundary if it is higher),
        pages are reclaimed proportionally to the overage, reducing
        reclaim pressure for smaller overages.

        The effective low boundary is limited by the memory.low
        values of all ancestor cgroups.  If there is memory.low
        overcommitment (child cgroup or cgroups are requiring more
        protected memory than the parent will allow), then each child
        cgroup will get the part of the parent's protection
        proportional to its actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage throttle limit.  If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.  The high
        limit should be used in scenarios where an external process
        monitors the limited cgroup to alleviate heavy reclaim
        pressure.

        If memory.high is opened with O_NONBLOCK then the synchronous
        reclaim is bypassed.  This is useful for admin processes that
        need to dynamically adjust the job's memory limits without
        expending their own CPU resources on memory reclamation.  The
        job will trigger the reclaim and/or get throttled on its
        next charge request.

        Please note that with O_NONBLOCK, there is a chance that the
        target memory cgroup may take an indefinite amount of time to
        reduce usage below the limit due to delayed charge requests
        or busy-hitting its memory to slow down reclaim.

  memory.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage hard limit.  This is the main mechanism to limit
        memory usage of a cgroup.  If a cgroup's memory usage reaches
        this limit and can't be reduced, the OOM killer is invoked in
        the cgroup.  Under certain circumstances, the usage may go
        over the limit temporarily.

        In the default configuration regular 0-order allocations
        always succeed unless the OOM killer chooses the current task
        as a victim.

        Some kinds of allocations don't invoke the OOM killer.  The
        caller may retry them differently, return -ENOMEM to
        userspace, or silently ignore the failure in cases like disk
        readahead.

        If memory.max is opened with O_NONBLOCK, then the synchronous
        reclaim and oom-kill are bypassed.  This is useful for admin
        processes that need to dynamically adjust the job's memory
        limits without expending their own CPU resources on memory
        reclamation.  The job will trigger the reclaim and/or
        oom-kill on its next charge request.

        Please note that with O_NONBLOCK, there is a chance that the
        target memory cgroup may take an indefinite amount of time to
        reduce usage below the limit due to delayed charge requests
        or busy-hitting its memory to slow down reclaim.

  memory.reclaim
        A write-only nested-keyed file which exists for all cgroups.

        This is a simple interface to trigger memory reclaim in the
        target cgroup.

        Example::

          echo "1G" > memory.reclaim

        Please note that the kernel can over- or under-reclaim from
        the target cgroup.  If fewer bytes are reclaimed than the
        specified amount, -EAGAIN is returned.

        Please note that the proactive reclaim (triggered by this
        interface) is not meant to indicate memory pressure on the
        memory cgroup.  Therefore socket memory balancing triggered
        by the memory reclaim normally is not exercised in this case.
        This means that the networking layer will not adapt based on
        reclaim induced by memory.reclaim.

        The following nested keys are defined.

          ==========  ================================
          swappiness  Swappiness value to reclaim with
          ==========  ================================

        Specifying a swappiness value instructs the kernel to perform
        the reclaim with that swappiness value.  Note that this has
        the same semantics as vm.swappiness applied to memcg reclaim
        with all the existing limitations and potential future
        extensions.

        The valid range for swappiness is [0-200, max]; setting
        swappiness=max exclusively reclaims anonymous memory.

  memory.peak
        A read-write single value file which exists on non-root
        cgroups.

        The max memory usage recorded for the cgroup and its
        descendants since either the creation of the cgroup or the
        most recent reset for that FD.

        A write of any non-empty string to this file resets it to the
        current memory usage for subsequent reads through the same
        file descriptor.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups.  The default value is "0".

  memory.peak
      A read-write single value file which exists on non-root cgroups.

      The max memory usage recorded for the cgroup and its
      descendants since either the creation of the cgroup or the
      most recent reset for that FD.

      A write of any non-empty string to this file resets it to the
      current memory usage for subsequent reads through the same
      file descriptor.

  memory.oom.group
      A read-write single value file which exists on non-root
      cgroups. The default value is "0".

      Determines whether the cgroup should be treated as
      an indivisible workload by the OOM killer. If set,
      all tasks belonging to the cgroup or to its descendants
      (if the memory cgroup is not a leaf cgroup) are killed
      together or not at all. This can be used to avoid
      partial kills to guarantee workload integrity.

      Tasks with the OOM protection (oom_score_adj set to -1000)
      are treated as an exception and are never killed.

      If the OOM killer is invoked in a cgroup, it's not going
      to kill any tasks outside of this cgroup, regardless of
      the memory.oom.group values of ancestor cgroups.

  memory.events
      A read-only flat-keyed file which exists on non-root cgroups.
      The following entries are defined. Unless specified
      otherwise, a value change in this file generates a file
      modified event.

      Note that all fields in this file are hierarchical and the
      file modified event can be generated due to an event down the
      hierarchy. For the local events at the cgroup level see
      memory.events.local.

        low
            The number of times the cgroup is reclaimed due to
            high memory pressure even though its usage is under
            the low boundary. This usually indicates that the low
            boundary is over-committed.

        high
            The number of times processes of the cgroup are
            throttled and routed to perform direct memory reclaim
            because the high memory boundary was exceeded. For a
            cgroup whose memory usage is capped by the high limit
            rather than global memory pressure, this event's
            occurrences are expected.

        max
            The number of times the cgroup's memory usage was
            about to go over the max boundary. If direct reclaim
            fails to bring it down, the cgroup goes to OOM state.

        oom
            The number of times the cgroup's memory usage reached
            the limit and allocation was about to fail.

            This event is not raised if the OOM killer is not
            considered as an option, e.g. for failed high-order
            allocations or if the caller asked not to retry
            attempts.

        oom_kill
            The number of processes belonging to this cgroup
            killed by any kind of OOM killer.

        oom_group_kill
            The number of times a group OOM has occurred.

        sock_throttled
            The number of times network sockets associated with
            this cgroup are throttled.
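
      A read might look as follows (the counts are illustrative)::

        # cat memory.events
        low 0
        high 211
        max 45
        oom 1
        oom_kill 1
        oom_group_kill 0
        sock_throttled 0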

  memory.events.local
      Similar to memory.events but the fields in the file are local
      to the cgroup i.e. not hierarchical. The file modified event
      generated on this file reflects only the local events.

  memory.stat
      A read-only flat-keyed file which exists on non-root cgroups.

      This breaks down the cgroup's memory footprint into different
      types of memory, type-specific details, and other information
      on the state and past events of the memory management system.

      All memory amounts are in bytes.

      The entries are ordered to be human readable, and new entries
      can show up in the middle. Don't rely on items remaining in a
      fixed position; use the keys to look up specific values!

      If an entry has no per-node counter (and thus does not show up
      in memory.numa_stat), it is tagged 'npn' (non-per-node) to
      indicate that it will not show up in memory.numa_stat.

        anon
            Amount of memory used in anonymous mappings such as
            brk(), sbrk(), and mmap(MAP_ANONYMOUS). Note that
            some kernel configurations might account complete larger
            allocations (e.g., THP) if only some, but not all the
            memory of such an allocation is mapped anymore.

        file
            Amount of memory used to cache filesystem data,
            including tmpfs and shared memory.

        kernel (npn)
            Amount of total kernel memory, including
            (kernel_stack, pagetables, percpu, vmalloc, slab) in
            addition to other kernel memory use cases.

        kernel_stack
            Amount of memory allocated to kernel stacks.

        pagetables
            Amount of memory allocated for page tables.

        sec_pagetables
            Amount of memory allocated for secondary page tables;
            this currently includes KVM mmu allocations on x86
            and arm64 and IOMMU page tables.

        percpu (npn)
            Amount of memory used for storing per-cpu kernel
            data structures.

        sock (npn)
            Amount of memory used in network transmission buffers.

        vmalloc (npn)
            Amount of memory used for vmap backed memory.

        shmem
            Amount of cached filesystem data that is swap-backed,
            such as tmpfs, shm segments, and shared anonymous
            mmap()s.

        zswap
            Amount of memory consumed by the zswap compression
            backend.

        zswapped
            Amount of application memory swapped out to zswap.

        file_mapped
            Amount of cached filesystem data mapped with mmap().
            Note that some kernel configurations might account
            complete larger allocations (e.g., THP) if only some,
            but not all the memory of such an allocation is mapped.

        file_dirty
            Amount of cached filesystem data that was modified but
            not yet written back to disk.

        file_writeback
            Amount of cached filesystem data that was modified and
            is currently being written back to disk.

        swapcached
            Amount of swap cached in memory. The swapcache is
            accounted against both memory and swap usage.

        anon_thp
            Amount of memory used in anonymous mappings backed by
            transparent hugepages.

        file_thp
            Amount of cached filesystem data backed by transparent
            hugepages.

        shmem_thp
            Amount of shm, tmpfs, and shared anonymous mmap()s
            backed by transparent hugepages.

        inactive_anon, active_anon, inactive_file, active_file, unevictable
            Amount of memory, swap-backed and filesystem-backed,
            on the internal memory management lists used by the
            page reclaim algorithm.

            As these represent internal list state (e.g. shmem pages
            are on anon memory management lists), inactive_foo +
            active_foo may not be equal to the value for the foo
            counter, since the foo counter is type-based, not
            list-based.

        slab_reclaimable
            Part of "slab" that might be reclaimed, such as
            dentries and inodes.

        slab_unreclaimable
            Part of "slab" that cannot be reclaimed on memory
            pressure.

        slab (npn)
            Amount of memory used for storing in-kernel data
            structures.

        workingset_refault_anon
            Number of refaults of previously evicted anonymous
            pages.

        workingset_refault_file
            Number of refaults of previously evicted file pages.

        workingset_activate_anon
            Number of refaulted anonymous pages that were
            immediately activated.

        workingset_activate_file
            Number of refaulted file pages that were immediately
            activated.

        workingset_restore_anon
            Number of restored anonymous pages which have been
            detected as an active workingset before they got
            reclaimed.

        workingset_restore_file
            Number of restored file pages which have been detected
            as an active workingset before they got reclaimed.

        workingset_nodereclaim
            Number of times a shadow node has been reclaimed.

        pswpin (npn)
            Number of pages swapped into memory.

        pswpout (npn)
            Number of pages swapped out of memory.

        pgscan (npn)
            Amount of scanned pages (in an inactive LRU list).

        pgsteal (npn)
            Amount of reclaimed pages.

        pgscan_kswapd (npn)
            Amount of pages scanned by kswapd (in an inactive LRU
            list).

        pgscan_direct (npn)
            Amount of pages scanned directly (in an inactive LRU
            list).

        pgscan_khugepaged (npn)
            Amount of pages scanned by khugepaged (in an inactive
            LRU list).

        pgscan_proactive (npn)
            Amount of pages scanned proactively (in an inactive LRU
            list).

        pgsteal_kswapd (npn)
            Amount of pages reclaimed by kswapd.

        pgsteal_direct (npn)
            Amount of pages reclaimed directly.

        pgsteal_khugepaged (npn)
            Amount of pages reclaimed by khugepaged.

        pgsteal_proactive (npn)
            Amount of pages reclaimed proactively.

        pgfault (npn)
            Total number of page faults incurred.

        pgmajfault (npn)
            Number of major page faults incurred.

        pgrefill (npn)
            Amount of scanned pages (in an active LRU list).

        pgactivate (npn)
            Amount of pages moved to the active LRU list.

        pgdeactivate (npn)
            Amount of pages moved to the inactive LRU list.

        pglazyfree (npn)
            Amount of pages postponed to be freed under memory
            pressure.

        pglazyfreed (npn)
            Amount of reclaimed lazyfree pages.

        swpin_zero
            Number of pages swapped into memory and filled with
            zero, where I/O was optimized out because the page
            content was detected to be zero during swapout.

        swpout_zero
            Number of zero-filled pages swapped out with I/O
            skipped due to the content being detected as zero.

        zswpin
            Number of pages moved into memory from zswap.

        zswpout
            Number of pages moved out of memory to zswap.

        zswpwb
            Number of pages written from zswap to swap.

        thp_fault_alloc (npn)
            Number of transparent hugepages which were allocated
            to satisfy a page fault. This counter is not present
            when CONFIG_TRANSPARENT_HUGEPAGE is not set.

        thp_collapse_alloc (npn)
            Number of transparent hugepages which were allocated
            to allow collapsing an existing range of pages. This
            counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
            is not set.

        thp_swpout (npn)
            Number of transparent hugepages which were swapped out
            in one piece without splitting.

        thp_swpout_fallback (npn)
            Number of transparent hugepages which were split before
            swapout, usually because contiguous swap space could not
            be allocated for the huge page.

        numa_pages_migrated (npn)
            Number of pages migrated by NUMA balancing.

        numa_pte_updates (npn)
            Number of pages whose page table entries are modified
            by NUMA balancing to produce NUMA hinting faults on
            access.

        numa_hint_faults (npn)
            Number of NUMA hinting faults.

        pgdemote_kswapd
            Number of pages demoted by kswapd.

        pgdemote_direct
            Number of pages demoted directly.

        pgdemote_khugepaged
            Number of pages demoted by khugepaged.

        pgdemote_proactive
            Number of pages demoted proactively.

        hugetlb
            Amount of memory used by hugetlb pages. This metric
            only shows up if hugetlb usage is accounted for in
            memory.current (i.e. the cgroup is mounted with the
            memory_hugetlb_accounting option).
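
      Because new entries can show up anywhere in the file, look
      values up by key rather than by position; for example (the
      value is illustrative)::

        # grep '^anon ' memory.stat
        anon 8206336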

  memory.numa_stat
      A read-only nested-keyed file which exists on non-root cgroups.

      This breaks down the cgroup's memory footprint into different
      types of memory, type-specific details, and other information
      per node on the state of the memory management system.

      This is useful for providing visibility into the NUMA locality
      information within a memcg since the pages are allowed to be
      allocated from any physical node. One use case is evaluating
      application performance by combining this information with the
      application's CPU allocation.

      All memory amounts are in bytes.

      The output format of memory.numa_stat is::

        type N0=<bytes in node 0> N1=<bytes in node 1> ...

      The entries are ordered to be human readable, and new entries
      can show up in the middle. Don't rely on items remaining in a
      fixed position; use the keys to look up specific values!

      For the meaning of each entry, refer to memory.stat.
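
      For example, to see how the cgroup's anonymous memory is
      spread across nodes (the values are illustrative)::

        # grep '^anon ' memory.numa_stat
        anon N0=243843072 N1=97042432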

  memory.swap.current
      A read-only single value file which exists on non-root
      cgroups.

      The total amount of swap currently being used by the cgroup
      and its descendants.

  memory.swap.high
      A read-write single value file which exists on non-root
      cgroups. The default is "max".

      Swap usage throttle limit. If a cgroup's swap usage exceeds
      this limit, all its further allocations will be throttled to
      allow userspace to implement custom out-of-memory procedures.

      This limit marks a point of no return for the cgroup. It is NOT
      designed to manage the amount of swapping a workload does
      during regular operation. Compare to memory.swap.max, which
      prohibits swapping past a set amount, but lets the cgroup
      continue unimpeded as long as other memory can be reclaimed.

      Healthy workloads are not expected to reach this limit.

  memory.swap.peak
      A read-write single value file which exists on non-root cgroups.

      The max swap usage recorded for the cgroup and its descendants
      since the creation of the cgroup or the most recent reset for
      that FD.

      A write of any non-empty string to this file resets it to the
      current swap usage for subsequent reads through the same
      file descriptor.

  memory.swap.max
      A read-write single value file which exists on non-root
      cgroups. The default is "max".

      Swap usage hard limit. If a cgroup's swap usage reaches this
      limit, anonymous memory of the cgroup will not be swapped out.

      When memory.swap.max is reduced under the current usage, the
      existing swap entries are reclaimed gradually and the swap
      usage may stay higher than the limit for an extended period of
      time. This reduces the impact on the workload and memory
      management.

  memory.swap.events
      A read-only flat-keyed file which exists on non-root cgroups.
      The following entries are defined. Unless specified
      otherwise, a value change in this file generates a file
      modified event.

        high
            The number of times the cgroup's swap usage was over
            the high threshold.

        max
            The number of times the cgroup's swap usage was about
            to go over the max boundary and swap allocation
            failed.

        fail
            The number of times swap allocation failed either
            because the system ran out of swap or because the max
            limit was reached.

  memory.zswap.current
      A read-only single value file which exists on non-root
      cgroups.

      The total amount of memory consumed by the zswap compression
      backend.

  memory.zswap.max
      A read-write single value file which exists on non-root
      cgroups. The default is "max".

      Zswap usage hard limit. If a cgroup's zswap pool reaches this
      limit, it will refuse to take any more stores before existing
      entries fault back in or are written out to disk.

  memory.zswap.writeback
      A read-write single value file. The default value is "1".
      Note that this setting is hierarchical, i.e. the writeback
      would be implicitly disabled for child cgroups if the upper
      hierarchy does so.

      When this is set to 0, all swapping attempts to swapping
      devices are disabled. This includes both zswap writebacks and
      swapping due to zswap store failures. If the zswap store
      failures are recurring (e.g. if the pages are incompressible),
      users can observe reclaim inefficiency after disabling
      writeback (because the same pages might be rejected again and
      again).

      Note that this is subtly different from setting memory.swap.max
      to 0, as it still allows for pages to be written to the zswap
      pool. This setting has no effect if zswap is disabled, and
      swapping is allowed unless memory.swap.max is set to 0.

  memory.pressure
      A read-only nested-keyed file.

      Shows pressure stall information for memory. See
      :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also operate
just as well with a small amount of memory. A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" file described above provides such a
measure.
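
For instance, two jobs on a host with 8G of memory could each be given
a 6G high limit, over-committing the host and letting global memory
pressure arbitrate between them according to usage (the paths and
sizes are hypothetical)::

  # echo "6G" > /sys/fs/cgroup/job1/memory.high
  # echo "6G" > /sys/fs/cgroup/job2/memory.high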

Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released. Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources. This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only when a mechanism implementing it, such as the IO cost model based
controller described under "io.cost.qos" below, is enabled.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
      A read-only nested-keyed file.

      Lines are keyed by $MAJ:$MIN device numbers and not ordered.
      The following nested keys are defined.

        ====== =====================
        rbytes Bytes read
        wbytes Bytes written
        rios   Number of read IOs
        wios   Number of write IOs
        dbytes Bytes discarded
        dios   Number of discard IOs
        ====== =====================

      An example read output follows::

        8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
        8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
      A read-write nested-keyed file which exists only on the root
      cgroup.

      This file configures the Quality of Service of the IO cost
      model based controller (CONFIG_BLK_CGROUP_IOCOST) which
      currently implements "io.weight" proportional control. Lines
      are keyed by $MAJ:$MIN device numbers and not ordered. The
      line for a given device is populated on the first write for
      the device on "io.cost.qos" or "io.cost.model". The following
      nested keys are defined.

        ====== =====================================
        enable Weight-based control enable
        ctrl   "auto" or "user"
        rpct   Read latency percentile [0, 100]
        rlat   Read latency threshold
        wpct   Write latency percentile [0, 100]
        wlat   Write latency threshold
        min    Minimum scaling percentage [1, 10000]
        max    Maximum scaling percentage [1, 10000]
        ====== =====================================

      The controller is disabled by default and can be enabled by
      setting "enable" to 1. "rpct" and "wpct" parameters default
      to zero and the controller uses internal device saturation
      state to adjust the overall IO rate between "min" and "max".

      When a better control quality is needed, latency QoS
      parameters can be configured. For example::

        8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00

      shows that on sdb, the controller is enabled, will consider
      the device saturated if the 95th percentile of read completion
      latencies is above 75ms or that of writes above 150ms, and
      will adjust the overall IO issue rate between 50% and 150%
      accordingly.

      The lower the saturation point, the better the latency QoS at
      the cost of aggregate bandwidth. The narrower the allowed
      adjustment range between "min" and "max", the more closely the
      IO behavior conforms to the cost model. Note that the IO issue
      base rate may be far off from 100% and setting "min" and "max"
      blindly can lead to a significant loss of device capacity or
      control quality. "min" and "max" are useful for regulating
      devices which show wide temporary behavior changes - e.g. an
      SSD which accepts writes at the line speed for a while and
      then completely stalls for multiple seconds.

      When "ctrl" is "auto", the parameters are controlled by the
      kernel and may change automatically. Setting "ctrl" to "user"
      or setting any of the percentile and latency parameters puts
      it into "user" mode and disables the automatic changes. The
      automatic mode can be restored by setting "ctrl" to "auto".
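
      For example, enabling the controller for a device with the
      default automatic QoS parameters could look like this (the
      device numbers are illustrative)::

        # echo "8:16 enable=1" > io.cost.qos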

  io.cost.model
      A read-write nested-keyed file which exists only on the root
      cgroup.

      This file configures the cost model of the IO cost model based
      controller (CONFIG_BLK_CGROUP_IOCOST) which currently
      implements "io.weight" proportional control. Lines are keyed
      by $MAJ:$MIN device numbers and not ordered. The line for a
      given device is populated on the first write for the device on
      "io.cost.qos" or "io.cost.model". The following nested keys
      are defined.

        ===== ================================
        ctrl  "auto" or "user"
        model The cost model in use - "linear"
        ===== ================================

      When "ctrl" is "auto", the kernel may change all parameters
      dynamically. When "ctrl" is set to "user" or any other
      parameters are written to, "ctrl" becomes "user" and the
      automatic changes are disabled.

      When "model" is "linear", the following model parameters are
      defined.

        ============= ========================================
        [r|w]bps      The maximum sequential IO throughput
        [r|w]seqiops  The maximum 4k sequential IOs per second
        [r|w]randiops The maximum 4k random IOs per second
        ============= ========================================

      From the above, the builtin linear model determines the base
      costs of a sequential and random IO and the cost coefficient
      for the IO size. While simple, this model can cover most
      common device classes acceptably.

      The IO cost model isn't expected to be accurate in an absolute
      sense and is scaled to the device behavior dynamically.

      If needed, tools/cgroup/iocost_coef_gen.py can be used to
      generate device-specific coefficients.

  io.weight
      A read-write flat-keyed file which exists on non-root cgroups.
      The default is "default 100".

      The first line is the default weight applied to devices
      without specific override. The rest are overrides keyed by
      $MAJ:$MIN device numbers and not ordered. The weights are in
      the range [1, 10000] and specify the relative amount of IO
      time the cgroup can use in relation to its siblings.

      The default weight can be updated by writing either "default
      $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
      "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

      An example read output follows::

        default 100
        8:16 200
        8:0 50
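
      For example, the default weight can be doubled and one device
      given a lower share as follows (the device numbers are
      illustrative)::

        # echo "default 200" > io.weight
        # echo "8:16 50" > io.weight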

  io.max
      A read-write nested-keyed file which exists on non-root
      cgroups.

      BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
      device numbers and not ordered. The following nested keys are
      defined.

        ===== ==================================
        rbps  Max read bytes per second
        wbps  Max write bytes per second
        riops Max read IO operations per second
        wiops Max write IO operations per second
        ===== ==================================

      When writing, any number of nested key-value pairs can be
      specified in any order. "max" can be specified as the value
      to remove a specific limit. If the same key is specified
      multiple times, the outcome is undefined.

      BPS and IOPS are measured in each IO direction and IOs are
      delayed if the limit is reached. Temporary bursts are allowed.

      Setting a read limit at 2M BPS and a write limit at 120 IOPS
      for 8:16::

        echo "8:16 rbps=2097152 wiops=120" > io.max

      Reading returns the following::

        8:16 rbps=2097152 wbps=max riops=max wiops=120

      The write IOPS limit can be removed by writing the following::

        echo "8:16 wiops=max" > io.max

      Reading now returns the following::

        8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
      A read-only nested-keyed file.

      Shows pressure stall information for IO. See
      :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs. The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain. Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with. These are called foreign pages. The writeback
mechanism constantly keeps track of foreign pages and, if a particular
foreign cgroup becomes the majority over a certain period of time,
switches the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well. In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
      These ratios apply the same to cgroup writeback with the
      amount of available memory capped by limits imposed by the
      memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
      For cgroup writeback, this is calculated into a ratio against
      total available memory and applied the same way as
      vm.dirty[_background]_ratio.

IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide
a group with a latency target, and if the average latency exceeds that
target the controller will throttle any peers that have a lower
latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy. This
means that in the diagram below, only groups A, B, and C will influence
each other, and groups D and F will influence each other. Group G will
influence nobody::

                        [root]
                /          |            \
                A          B            C
               /  \        |
              D    F       G


So the ideal way to configure this is to set io.latency in groups A, B,
and C. Generally you do not want to set a value lower than the latency
your device supports. Experiment to find the value that works best for
your workload. Start at higher than the expected latency for your
device and watch the avg_lat value in io.stat for your workload group
to get an idea of the latency you see during normal operation. Use the
avg_lat value as a basis for your real setting, setting it 10-15%
higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting their
latency target the controller doesn't do anything. Once a group starts
missing its target, it begins throttling any peer group that has a
higher target than itself. This throttling takes 2 forms:

- Queue depth throttling. This is the number of outstanding IOs a
  group is allowed to have. We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction. There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups. This includes swapping and metadata IO. These
  types of IO are allowed to occur normally; however, they are
  "charged" to the originating group. If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase. The delay value is the number of microseconds
  that are being added to any process that runs in this group.
  Because this number can grow quite large if there is a lot of
  swapping or metadata IO occurring, we limit the individual delay
  events to 1 second at a time.

Once the victimized group starts meeting its latency target again it
will start unthrottling any peer groups that were throttled
previously. If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.
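
Following the guidance above, protecting a workload behind a device
with a 2ms latency target could look like this (the device numbers and
the target value are illustrative, not recommendations)::

  # echo "8:16 target=2000" > io.latency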

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
      This takes a format similar to the other controllers:

        "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
      If the controller is enabled you will see extra stats in
      io.stat in addition to the normal ones.

        depth
            This is the current queue depth for the group.

        avg_lat
            This is an exponential moving average with a decay rate
            of 1/exp bound by the sampling interval. The decay rate
            interval can be calculated by multiplying the win value
            in io.stat by the corresponding number of samples based
            on the win value.

        win
            The sampling window size in milliseconds. This is the
            minimum duration of time between evaluation events.
            Windows only elapse with IO activity. Idle periods
            extend the most recent window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup
policy, namely the io.prio.class attribute. The following values are
accepted for that attribute:

  no-change
      Do not modify the I/O priority class.

  promote-to-rt
      For requests that have a non-RT I/O priority class, change it
      into RT. Also change the priority level of these requests to 4.
      Do not modify the I/O priority of requests that have priority
      class RT.

  restrict-to-be
      For requests that do not have an I/O priority class or that
      have I/O priority class RT, change it into BE. Also change the
      priority level of these requests to 0. Do not modify the I/O
      priority class of requests that have priority class IDLE.

  idle
      Change the I/O priority class of all requests into IDLE, the
      lowest I/O priority class.

  none-to-rt
      Deprecated. Just an alias for promote-to-rt.

The following numerical values are associated with the I/O priority
policies:

+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+

The numerical value that corresponds to each I/O priority class is as
follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as
follows:

- If the I/O priority class policy is promote-to-rt, change the
  request I/O priority class to IOPRIO_CLASS_RT and change the request
  I/O priority level to 4.
- If the I/O priority class policy is not promote-to-rt, translate the
  I/O priority class policy into a number, then change the request I/O
  priority class into the maximum of the I/O priority class policy
  number and the numerical I/O priority class.
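
For example, all IO issued from a low priority maintenance cgroup
could be demoted to the IDLE class like this (the cgroup path is
hypothetical)::

  # echo idle > /sys/fs/cgroup/maintenance/io.prio.class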

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller. For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
      A read-write single value file which exists on non-root
      cgroups. The default is "max".

      Hard limit of number of processes.

  pids.current
      A read-only single value file which exists on non-root cgroups.

      The number of processes currently in the cgroup and its
      descendants.

  pids.peak
      A read-only single value file which exists on non-root cgroups.

      The maximum value that the number of processes in the cgroup
      and its descendants has ever reached.

  pids.events
      A read-only flat-keyed file which exists on non-root cgroups.
      Unless specified otherwise, a value change in this file
      generates a file modified event. The following entries are
      defined.

        max
            The number of times the cgroup's total number of
            processes hit the pids.max limit (see also
            pids_localevents).

  pids.events.local
      Similar to pids.events but the fields in the file are local
      to the cgroup i.e. not hierarchical. The file modified event
      generated on this file reflects only the local events.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max. This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max. However, it is not possible to violate a cgroup PID policy
through fork() or clone(). These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.
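
For example, a cgroup can be limited to 128 tasks and its current
count inspected as follows (the values are illustrative)::

  # echo 128 > pids.max
  # cat pids.current
  3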

Cpuset
------

The "cpuset" controller provides a mechanism for constraining
the CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the systems with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical. That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
      A read-write multiple values file which exists on non-root
      cpuset-enabled cgroups.

      It lists the requested CPUs to be used by tasks within this
      cgroup. The actual list of CPUs to be granted, however, is
      subject to constraints imposed by its parent and can differ
      from the requested CPUs.

      The CPU numbers are comma-separated numbers or ranges.
      For example::

        # cat cpuset.cpus
        0-4,6,8-10

      An empty value indicates that the cgroup is using the same
      setting as the nearest cgroup ancestor with a non-empty
      "cpuset.cpus" or all the available CPUs if none is found.

      The value of "cpuset.cpus" stays constant until the next
      update and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
      A read-only multiple values file which exists on all
      cpuset-enabled cgroups.

      It lists the onlined CPUs that are actually granted to this
      cgroup by its parent. These CPUs are allowed to be used by
      tasks within the current cgroup.

      If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
      shows all the CPUs from the parent cgroup that are available
      to be used by this cgroup. Otherwise, it should be a subset of
      "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
      can be granted. In this case, it will be treated just like an
      empty "cpuset.cpus".

      Its value will be affected by CPU hotplug events.

  cpuset.mems
      A read-write multiple values file which exists on non-root
      cpuset-enabled cgroups.

      It lists the requested memory nodes to be used by tasks within
      this cgroup. The actual list of memory nodes granted, however,
      is subject to constraints imposed by its parent and can differ
      from the requested memory nodes.

      The memory node numbers are comma-separated numbers or ranges.
      For example::

        # cat cpuset.mems
        0-1,3

      An empty value indicates that the cgroup is using the same
      setting as the nearest cgroup ancestor with a non-empty
      "cpuset.mems" or all the available memory nodes if none
      is found.

      The value of "cpuset.mems" stays constant until the next
      update and won't be affected by any memory node hotplug events.

      Setting a non-empty value to "cpuset.mems" causes memory of
      tasks within the cgroup to be migrated to the designated nodes
      if they are currently using memory outside of the designated
      nodes.

      There is a cost for this memory migration. The migration
      may not be complete and some memory pages may be left behind.
      So it is recommended that "cpuset.mems" should be set properly
      before spawning new tasks into the cpuset. Even if there is
      a need to change "cpuset.mems" with active tasks, it shouldn't
      be done frequently.

  cpuset.mems.effective
      A read-only multiple values file which exists on all
      cpuset-enabled cgroups.

      It lists the onlined memory nodes that are actually granted to
      this cgroup by its parent. These memory nodes are allowed to
      be used by tasks within the current cgroup.

      If "cpuset.mems" is empty, it shows all the memory nodes from
      the parent cgroup that will be available to be used by this
      cgroup. Otherwise, it should be a subset of "cpuset.mems"
      unless none of the memory nodes listed in "cpuset.mems" can be
      granted. In this case, it will be treated just like an empty
      "cpuset.mems".

      Its value will be affected by memory node hotplug events.
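
As an illustration, the two read-write files above can be used together
to confine the tasks of a cgroup to CPUs 0-3 on memory node 0 (the
values are arbitrary)::

  # echo "0-3" > cpuset.cpus
  # echo "0" > cpuset.mems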

  cpuset.cpus.exclusive
      A read-write multiple values file which exists on non-root
      cpuset-enabled cgroups.

      It lists all the exclusive CPUs that are allowed to be used
      to create a new cpuset partition. Its value is not used
      unless the cgroup becomes a valid partition root. See the
      "cpuset.cpus.partition" section below for a description of
      what a cpuset partition is.

      When the cgroup becomes a partition root, the actual exclusive
      CPUs that are allocated to that partition are listed in
      "cpuset.cpus.exclusive.effective" which may be different
      from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
      has previously been set, "cpuset.cpus.exclusive.effective"
      is always a subset of it.

      Users can manually set it to a value that is different from
      "cpuset.cpus". One constraint in setting it is that the list
      of CPUs must be exclusive with respect to
      "cpuset.cpus.exclusive" of its siblings. If
      "cpuset.cpus.exclusive" of a sibling cgroup isn't set, its
      "cpuset.cpus" value, if set, cannot be a subset of it, so that
      at least one CPU remains available when the exclusive CPUs are
      taken away.

      For a parent cgroup, any one of its exclusive CPUs can only
      be distributed to at most one of its child cgroups. Having an
      exclusive CPU appearing in two or more of its child cgroups is
      not allowed (the exclusivity rule). A value that violates the
      exclusivity rule will be rejected with a write error.

      The root cgroup is a partition root and all its available CPUs
      are in its exclusive CPU set.

  cpuset.cpus.exclusive.effective
      A read-only multiple values file which exists on all non-root
      cpuset-enabled cgroups.

      This file shows the effective set of exclusive CPUs that
      can be used to create a partition root. The content
      of this file will always be a subset of its parent's
      "cpuset.cpus.exclusive.effective" if its parent is not the
      root cgroup. It will also be a subset of
      "cpuset.cpus.exclusive" if it is set. If
      "cpuset.cpus.exclusive" is not set, it is treated as if it had
      an implicit value of "cpuset.cpus" in the formation of a local
      partition.

  cpuset.cpus.isolated
      A read-only and root cgroup only multiple values file.

      This file shows the set of all isolated CPUs used in existing
      isolated partitions. It will be empty if no isolated partition
      is created.

  cpuset.cpus.partition
      A read-write single value file which exists on non-root
      cpuset-enabled cgroups. This flag is owned by the parent
      cgroup and is not delegatable.

      It accepts only the following input values when written to.

        ========== =====================================
        "member"   Non-root member of a partition
        "root"     Partition root
        "isolated" Partition root without load balancing
        ========== =====================================

      A cpuset partition is a collection of cpuset-enabled cgroups
      with a partition root at the top of the hierarchy and its
      descendants except those that are separate partition roots
      themselves and their descendants. A partition has exclusive
      access to the set of exclusive CPUs allocated to it. Other
      cgroups outside of that partition cannot use any CPUs in that
      set.

      There are two types of partitions - local and remote. A local
      partition is one whose parent cgroup is also a valid partition
      root. A remote partition is one whose parent cgroup is not a
      valid partition root itself. Writing to
      "cpuset.cpus.exclusive" is optional for the creation of a
      local partition as its "cpuset.cpus.exclusive" file will
      assume an implicit value that is the same as "cpuset.cpus" if
      it is not set. Writing the proper "cpuset.cpus.exclusive"
      values down the cgroup hierarchy before the target partition
      root is mandatory for the creation of a remote partition.

      Currently, a remote partition cannot be created under a local
      partition. None of the ancestors of a remote partition root,
      except the root cgroup, can be a partition root.

      The root cgroup is always a partition root and its state
      cannot be changed. All other non-root cgroups start out as
      "member".

      When set to "root", the current cgroup is the root of a new
      partition or scheduling domain. The set of exclusive CPUs is
      determined by the value of its
      "cpuset.cpus.exclusive.effective".

      When set to "isolated", the CPUs in that partition will be in
      an isolated state without any load balancing from the
      scheduler and excluded from the unbound workqueues. Tasks
      placed in such a partition with multiple CPUs should be
      carefully distributed and bound to each of the individual CPUs
      for optimal performance.

      A partition root ("root" or "isolated") can be in one of the
      two possible states - valid or invalid. An invalid partition
      root is in a degraded state where some state information may
      be retained, but behaves more like a "member".

      All possible state transitions among "member", "root" and
      "isolated" are allowed.

      On read, the "cpuset.cpus.partition" file can show the
      following values.

        ============================= =====================================
        "member"                      Non-root member of a partition
        "root"                        Partition root
        "isolated"                    Partition root without load balancing
        "root invalid (<reason>)"     Invalid partition root
        "isolated invalid (<reason>)" Invalid isolated partition root
        ============================= =====================================

      In the case of an invalid partition root, a descriptive string
      on why the partition is invalid is included within parentheses.

      For a local partition root to be valid, the following
      conditions must be met.

      1) The parent cgroup is a valid partition root.
      2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
         though it may contain offline CPUs.
      3) The "cpuset.cpus.effective" cannot be empty unless there is
         no task associated with this partition.

      For a remote partition root to be valid, all the above
      conditions except the first one must be met.

      External events like hotplug or changes to "cpuset.cpus" or
      "cpuset.cpus.exclusive" can cause a valid partition root to
      become invalid and vice versa. Note that a task cannot be
      moved to a cgroup with an empty "cpuset.cpus.effective".

      A valid non-root parent partition may distribute out all its
      CPUs to its child local partitions when there is no task
      associated with it.

      Care must be taken when changing a valid partition root to
      "member" as all its child local partitions, if present, will
      become invalid, causing disruption to tasks running in those
      child partitions. These inactivated partitions could be
      recovered if their parent is switched back to a partition root
      with a proper value in "cpuset.cpus" or
      "cpuset.cpus.exclusive".

      Poll and inotify events are triggered whenever the state of
      "cpuset.cpus.partition" changes. That includes changes caused
      by writes to "cpuset.cpus.partition", CPU hotplug or other
      changes that modify the validity status of the partition.
      This will allow user space agents to monitor unexpected
      changes to "cpuset.cpus.partition" without the need to do
      continuous polling.

      A user can pre-configure certain CPUs to an isolated state
      with load balancing disabled at boot time with the "isolcpus"
      kernel boot command line option. If those CPUs are to be put
      into a partition, they have to be used in an isolated
      partition.
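
As a sketch of the flow described above (the CPU numbers are
arbitrary), a child cgroup whose parent is a valid partition root can
be turned into a local partition root like this::

  # echo "2-3" > cpuset.cpus
  # echo root > cpuset.cpus.partition
  # cat cpuset.cpus.partition
  root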

Device controller
-----------------

The device controller manages access to device files. It includes
both the creation of new device files (using mknod), and access to
the existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF. To control access to device files,
a user may create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE
and attach them to cgroups with the BPF_CGROUP_DEVICE flag. On an
attempt to access a device file, the corresponding BPF programs will
be executed, and depending on the return value the attempt will
succeed or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers). If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
      A read-write nested-keyed file that exists for all the cgroups
      except root that describes the current configured resource
      limit for an RDMA/IB device.

      Lines are keyed by device name and are not ordered.
      Each line contains a space separated resource name and its
      configured limit that can be distributed.

      The following nested keys are defined.

        ========== =============================
        hca_handle Maximum number of HCA Handles
        hca_object Maximum number of HCA Objects
        ========== =============================

      An example for mlx4 and ocrdma device follows::

        mlx4_0 hca_handle=2 hca_object=2000
        ocrdma1 hca_handle=3 hca_object=max

  rdma.current
      A read-only file that describes current resource usage.
      It exists for all the cgroups except root.

      An example for mlx4 and ocrdma device follows::

        mlx4_0 hca_handle=1 hca_object=20
        ocrdma1 hca_handle=1 hca_object=23
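
Assuming limits are written in the same keyed format that rdma.max
reports, a device's limits could be configured like this (the device
name and values are illustrative)::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max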

DMEM
----

The "dmem" controller regulates the distribution and accounting of
device memory regions. Because each memory region may have its own
page size, which does not have to be equal to the system page size,
the units are always bytes.

DMEM Interface Files
~~~~~~~~~~~~~~~~~~~~

  dmem.max, dmem.min, dmem.low
      A read-write nested-keyed file that exists for all the cgroups
      except root that describes the current configured resource
      limit for a region.

      An example for xe follows::

        drm/0000:03:00.0/vram0 1073741824
        drm/0000:03:00.0/stolen max

      The semantics are the same as for the memory cgroup
      controller, and are calculated in the same way.

  dmem.capacity
      A read-only file that describes maximum region capacity.
      It only exists on the root cgroup. Not all memory can be
      allocated by cgroups, as the kernel reserves some for
      internal use.

      An example for xe follows::

        drm/0000:03:00.0/vram0 8514437120
        drm/0000:03:00.0/stolen 67108864

  dmem.current
      A read-only file that describes current resource usage.
      It exists for all the cgroups except root.

      An example for xe follows::

        drm/0000:03:00.0/vram0 12550144
        drm/0000:03:00.0/stolen 8650752

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
      Show current usage for "hugepagesize" hugetlb. It exists for
      all the cgroups except root.

  hugetlb.<hugepagesize>.max
      Set/show the hard limit of "hugepagesize" hugetlb usage.
      The default value is "max". It exists for all the cgroups
      except root.

  hugetlb.<hugepagesize>.events
      A read-only flat-keyed file which exists on non-root cgroups.

        max
            The number of allocation failures due to the HugeTLB
            limit.

  hugetlb.<hugepagesize>.events.local
      Similar to hugetlb.<hugepagesize>.events but the fields in the
      file are local to the cgroup i.e. not hierarchical. The file
      modified event generated on this file reflects only the local
      events.

  hugetlb.<hugepagesize>.numa_stat
      Similar to memory.numa_stat, it shows the numa information of
      the hugetlb pages of <hugepagesize> in this cgroup. Only
      hugetlb pages that are actively in use are included. The
      per-node values are in bytes.
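
For example, assuming a system using 2MB huge pages, the corresponding
interface files would be named accordingly (the limit shown is
arbitrary)::

  # echo 2G > hugetlb.2MB.max
  # cat hugetlb.2MB.current
  0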

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources. The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{} in
the include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file. The provider of the
resource must set its capacity prior to using the resource by calling
misc_cg_set_capacity().

Once a capacity is set, the resource usage can be updated using charge
and uncharge APIs. All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered then:

  misc.capacity
      A read-only flat-keyed file shown only in the root cgroup. It
      shows miscellaneous scalar resources available on the platform
      along with their quantities::

        $ cat misc.capacity
        res_a 50
        res_b 10

  misc.current
      A read-only flat-keyed file shown in all cgroups. It shows
      the current usage of the resources in the cgroup and its
      children::

        $ cat misc.current
        res_a 3
        res_b 0

  misc.peak
      A read-only flat-keyed file shown in all cgroups. It shows the
      historical maximum usage of the resources in the cgroup and
      its children::

        $ cat misc.peak
        res_a 10
        res_b 8

  misc.max
      A read-write flat-keyed file shown in the non-root cgroups.
      Allowed maximum usage of the resources in the cgroup and its
      children::

        $ cat misc.max
        res_a max
        res_b 4

      A limit can be set by::

        # echo res_a 1 > misc.max

      A limit can be set to max by::

        # echo res_a max > misc.max

      Limits can be set higher than the capacity value in the
      misc.capacity file.

  misc.events
      A read-only flat-keyed file which exists on non-root cgroups.
      The following entries are defined. Unless specified otherwise,
      a value change in this file generates a file modified event.
      All fields in this file are hierarchical.

        max
            The number of times the cgroup's resource usage was
            about to go over the max boundary.

  misc.events.local
      Similar to misc.events but the fields in the file are local to
      the cgroup i.e. not hierarchical. The file modified event
      generated on this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource
is freed. Migrating a process to a different cgroup does not move the
charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path. The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of
the root cgroup. The weight of this child cgroup is dependent on its
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
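
As a worked example of that scaling (the raw weights below are taken
from the sched_prio_to_weight array), a few nice levels map to the
following implicit cgroup weights::

  nice  0: 1024 / 1024 * 100  = 100
  nice -5: 3121 / 1024 * 100 ~= 305
  nice  5:  335 / 1024 * 100 ~=  33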
IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources, this implicit child node is taken into
account as if it were a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system-level information
to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered as system-data
and undesirable to expose to the isolated processes.  cgroup namespace
can be used to restrict visibility of this path.  For example, before
creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.
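
For reference, the equivalent of the unshare(1) invocation above can
be done programmatically.  The following is a minimal sketch; it
requires CAP_SYS_ADMIN in the caller's user namespace::

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
          /* The creating process's current cgroup becomes the
           * cgroupns root of the new namespace. */
          if (unshare(CLONE_NEWCGROUP) == -1) {
                  perror("unshare(CLONE_NEWCGROUP)");
                  return EXIT_FAILURE;
          }

          /* Now reports "0::/" for this process. */
          execlp("cat", "cat", "/proc/self/cgroup", (char *)NULL);
          perror("execlp");
          return EXIT_FAILURE;
  }
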
The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
the /batchjobs/container_id1 cgroup calls unshare, the cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside the cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside a
cgroup namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
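
The following is a minimal sketch of such an attach.  The target PID
7353 is carried over from the earlier examples for illustration, and
the caller is assumed to satisfy both conditions above::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          int fd = open("/proc/7353/ns/cgroup", O_RDONLY);

          if (fd == -1) {
                  perror("open");
                  return 1;
          }

          /*
           * Join the target cgroup namespace.  This does not migrate
           * the process to a different cgroup; that move has to be
           * done separately through cgroup.procs.
           */
          if (setns(fd, CLONE_NEWCGROUP) == -1) {
                  perror("setns");
                  close(fd);
                  return 1;
          }
          close(fd);
          return 0;
  }
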
Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root.  The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepages() to annotate bio's using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue.  This must be called after
        a queue (device) has been associated with the bio and
        before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.

With writeback bio's annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
for the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
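
To make the flow concrete, below is a condensed sketch of the
annotation pattern.  The my_fs_*() helpers and the surrounding
writepages() machinery are hypothetical; wbc_init_bio(),
wbc_account_cgroup_owner() and SB_I_CGROUPWB are the interfaces
described above::

  #include <linux/bio.h>
  #include <linux/gfp.h>
  #include <linux/writeback.h>

  static struct bio *my_fs_alloc_wb_bio(struct block_device *bdev,
                                        struct writeback_control *wbc)
  {
          struct bio *bio = bio_alloc(bdev, BIO_MAX_VECS, REQ_OP_WRITE,
                                      GFP_NOFS);

          /*
           * Associate the bio with the inode's owner cgroup.  Must be
           * done after the block device is set (bio_alloc() did that)
           * and before submission.
           */
          wbc_init_bio(wbc, bio);
          return bio;
  }

  static bool my_fs_add_folio(struct bio *bio, struct folio *folio,
                              struct writeback_control *wbc)
  {
          if (!bio_add_folio(bio, folio, folio_size(folio), 0))
                  return false;   /* bio full; caller submits and chains */

          /* Attribute this data segment to the owning cgroup. */
          wbc_account_cgroup_owner(wbc, folio, folio_size(folio));
          return true;
  }

  /*
   * At fill_super time, the filesystem opts into cgroup writeback per
   * super_block:
   *
   *      sb->s_iflags |= SB_I_CGROUPWB;
   */
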
Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options is supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it.  This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.
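
To illustrate, below is a sketch of those steps against a hypothetical
v1-style layout; the mount point and the "cpu" controller knob are
assumptions for the example.  Nothing stops an external manager from
migrating the process between the read of /proc/self/cgroup and the
final open, which is exactly the race described above::

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char line[512], path[1024];
          FILE *f = fopen("/proc/self/cgroup", "r");

          if (!f)
                  return 1;

          /* Step 1: find the line for the target hierarchy, e.g.
           * "4:cpu:/my/group" on a v1 setup. */
          while (fgets(line, sizeof(line), f)) {
                  char *cgpath = strstr(line, ":/");

                  if (strstr(line, ":cpu:") && cgpath) {
                          cgpath[strcspn(cgpath, "\n")] = '\0';
                          /* Step 2: append the knob name by hand. */
                          snprintf(path, sizeof(path),
                                   "/sys/fs/cgroup/cpu%s/cpu.shares",
                                   cgpath + 1);
                          /* Step 3: open and write - racy by now. */
                          printf("would write to %s\n", path);
                          break;
                  }
          }
          fclose(f);
          return 0;
  }
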
cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces while the kernel
inadvertently exposed internal constructs and got locked into them.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors made
cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless where they are located
in the hierarchy.  This makes subtree delegation impossible.  Second,
the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.
The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory.  The memory consumption of workloads varies during
runtime, and that requires users to overcommit.  But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit.  Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.  The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail.  memory.max on the other hand will first set
the limit to prevent new charges, and then reclaim and OOM kill until
the new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.