1.. _cgroup-v2: 2 3================ 4Control Group v2 5================ 6 7:Date: October, 2015 8:Author: Tejun Heo <tj@kernel.org> 9 10This is the authoritative documentation on the design, interface and 11conventions of cgroup v2. It describes all userland-visible aspects 12of cgroup including core and specific controller behaviors. All 13future changes must be reflected in this document. Documentation for 14v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`. 15 16.. CONTENTS 17 18 1. Introduction 19 1-1. Terminology 20 1-2. What is cgroup? 21 2. Basic Operations 22 2-1. Mounting 23 2-2. Organizing Processes and Threads 24 2-2-1. Processes 25 2-2-2. Threads 26 2-3. [Un]populated Notification 27 2-4. Controlling Controllers 28 2-4-1. Enabling and Disabling 29 2-4-2. Top-down Constraint 30 2-4-3. No Internal Process Constraint 31 2-5. Delegation 32 2-5-1. Model of Delegation 33 2-5-2. Delegation Containment 34 2-6. Guidelines 35 2-6-1. Organize Once and Control 36 2-6-2. Avoid Name Collisions 37 3. Resource Distribution Models 38 3-1. Weights 39 3-2. Limits 40 3-3. Protections 41 3-4. Allocations 42 4. Interface Files 43 4-1. Format 44 4-2. Conventions 45 4-3. Core Interface Files 46 5. Controllers 47 5-1. CPU 48 5-1-1. CPU Interface Files 49 5-2. Memory 50 5-2-1. Memory Interface Files 51 5-2-2. Usage Guidelines 52 5-2-3. Memory Ownership 53 5-3. IO 54 5-3-1. IO Interface Files 55 5-3-2. Writeback 56 5-3-3. IO Latency 57 5-3-3-1. How IO Latency Throttling Works 58 5-3-3-2. IO Latency Interface Files 59 5-3-4. IO Priority 60 5-4. PID 61 5-4-1. PID Interface Files 62 5-5. Cpuset 63 5.5-1. Cpuset Interface Files 64 5-6. Device 65 5-7. RDMA 66 5-7-1. RDMA Interface Files 67 5-8. HugeTLB 68 5.8-1. HugeTLB Interface Files 69 5-9. Misc 70 5.9-1 Miscellaneous cgroup Interface Files 71 5.9-2 Migration and Ownership 72 5-10. Others 73 5-10-1. perf_event 74 5-N. Non-normative information 75 5-N-1. CPU controller root cgroup process behaviour 76 5-N-2. IO controller root cgroup process behaviour 77 6. Namespace 78 6-1. Basics 79 6-2. The Root and Views 80 6-3. Migration and setns(2) 81 6-4. Interaction with Other Namespaces 82 P. Information on Kernel Programming 83 P-1. Filesystem Support for Writeback 84 D. Deprecated v1 Core Features 85 R. Issues with v1 and Rationales for v2 86 R-1. Multiple Hierarchies 87 R-2. Thread Granularity 88 R-3. Competition Between Inner Nodes and Threads 89 R-4. Other Interface Issues 90 R-5. Controller Issues and Remedies 91 R-5-1. Memory 92 93 94Introduction 95============ 96 97Terminology 98----------- 99 100"cgroup" stands for "control group" and is never capitalized. The 101singular form is used to designate the whole feature and also as a 102qualifier as in "cgroup controllers". When explicitly referring to 103multiple individual control groups, the plural form "cgroups" is used. 104 105 106What is cgroup? 107--------------- 108 109cgroup is a mechanism to organize processes hierarchically and 110distribute system resources along the hierarchy in a controlled and 111configurable manner. 112 113cgroup is largely composed of two parts - the core and controllers. 114cgroup core is primarily responsible for hierarchically organizing 115processes. A cgroup controller is usually responsible for 116distributing a specific type of system resource along the hierarchy 117although there are utility controllers which serve purposes other than 118resource distribution. 119 120cgroups form a tree structure and every process in the system belongs 121to one and only one cgroup. All threads of a process belong to the 122same cgroup. On creation, all processes are put in the cgroup that 123the parent process belongs to at the time. A process can be migrated 124to another cgroup. Migration of a process doesn't affect already 125existing descendant processes. 126 127Following certain structural constraints, controllers may be enabled or 128disabled selectively on a cgroup. All controller behaviors are 129hierarchical - if a controller is enabled on a cgroup, it affects all 130processes which belong to the cgroups consisting the inclusive 131sub-hierarchy of the cgroup. When a controller is enabled on a nested 132cgroup, it always restricts the resource distribution further. The 133restrictions set closer to the root in the hierarchy can not be 134overridden from further away. 135 136 137Basic Operations 138================ 139 140Mounting 141-------- 142 143Unlike v1, cgroup v2 has only single hierarchy. The cgroup v2 144hierarchy can be mounted with the following mount command:: 145 146 # mount -t cgroup2 none $MOUNT_POINT 147 148cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All 149controllers which support v2 and are not bound to a v1 hierarchy are 150automatically bound to the v2 hierarchy and show up at the root. 151Controllers which are not in active use in the v2 hierarchy can be 152bound to other hierarchies. This allows mixing v2 hierarchy with the 153legacy v1 multiple hierarchies in a fully backward compatible way. 154 155A controller can be moved across hierarchies only after the controller 156is no longer referenced in its current hierarchy. Because per-cgroup 157controller states are destroyed asynchronously and controllers may 158have lingering references, a controller may not show up immediately on 159the v2 hierarchy after the final umount of the previous hierarchy. 160Similarly, a controller should be fully disabled to be moved out of 161the unified hierarchy and it may take some time for the disabled 162controller to become available for other hierarchies; furthermore, due 163to inter-controller dependencies, other controllers may need to be 164disabled too. 165 166While useful for development and manual configurations, moving 167controllers dynamically between the v2 and other hierarchies is 168strongly discouraged for production use. It is recommended to decide 169the hierarchies and controller associations before starting using the 170controllers after system boot. 171 172During transition to v2, system management software might still 173automount the v1 cgroup filesystem and so hijack all controllers 174during boot, before manual intervention is possible. To make testing 175and experimenting easier, the kernel parameter cgroup_no_v1= allows 176disabling controllers in v1 and make them always available in v2. 177 178cgroup v2 currently supports the following mount options. 179 180 nsdelegate 181 Consider cgroup namespaces as delegation boundaries. This 182 option is system wide and can only be set on mount or modified 183 through remount from the init namespace. The mount option is 184 ignored on non-init namespace mounts. Please refer to the 185 Delegation section for details. 186 187 favordynmods 188 Reduce the latencies of dynamic cgroup modifications such as 189 task migrations and controller on/offs at the cost of making 190 hot path operations such as forks and exits more expensive. 191 The static usage pattern of creating a cgroup, enabling 192 controllers, and then seeding it with CLONE_INTO_CGROUP is 193 not affected by this option. 194 195 memory_localevents 196 Only populate memory.events with data for the current cgroup, 197 and not any subtrees. This is legacy behaviour, the default 198 behaviour without this option is to include subtree counts. 199 This option is system wide and can only be set on mount or 200 modified through remount from the init namespace. The mount 201 option is ignored on non-init namespace mounts. 202 203 memory_recursiveprot 204 Recursively apply memory.min and memory.low protection to 205 entire subtrees, without requiring explicit downward 206 propagation into leaf cgroups. This allows protecting entire 207 subtrees from one another, while retaining free competition 208 within those subtrees. This should have been the default 209 behavior but is a mount-option to avoid regressing setups 210 relying on the original semantics (e.g. specifying bogusly 211 high 'bypass' protection values at higher tree levels). 212 213 memory_hugetlb_accounting 214 Count HugeTLB memory usage towards the cgroup's overall 215 memory usage for the memory controller (for the purpose of 216 statistics reporting and memory protetion). This is a new 217 behavior that could regress existing setups, so it must be 218 explicitly opted in with this mount option. 219 220 A few caveats to keep in mind: 221 222 * There is no HugeTLB pool management involved in the memory 223 controller. The pre-allocated pool does not belong to anyone. 224 Specifically, when a new HugeTLB folio is allocated to 225 the pool, it is not accounted for from the perspective of the 226 memory controller. It is only charged to a cgroup when it is 227 actually used (for e.g at page fault time). Host memory 228 overcommit management has to consider this when configuring 229 hard limits. In general, HugeTLB pool management should be 230 done via other mechanisms (such as the HugeTLB controller). 231 * Failure to charge a HugeTLB folio to the memory controller 232 results in SIGBUS. This could happen even if the HugeTLB pool 233 still has pages available (but the cgroup limit is hit and 234 reclaim attempt fails). 235 * Charging HugeTLB memory towards the memory controller affects 236 memory protection and reclaim dynamics. Any userspace tuning 237 (of low, min limits for e.g) needs to take this into account. 238 * HugeTLB pages utilized while this option is not selected 239 will not be tracked by the memory controller (even if cgroup 240 v2 is remounted later on). 241 242 243Organizing Processes and Threads 244-------------------------------- 245 246Processes 247~~~~~~~~~ 248 249Initially, only the root cgroup exists to which all processes belong. 250A child cgroup can be created by creating a sub-directory:: 251 252 # mkdir $CGROUP_NAME 253 254A given cgroup may have multiple child cgroups forming a tree 255structure. Each cgroup has a read-writable interface file 256"cgroup.procs". When read, it lists the PIDs of all processes which 257belong to the cgroup one-per-line. The PIDs are not ordered and the 258same PID may show up more than once if the process got moved to 259another cgroup and then back or the PID got recycled while reading. 260 261A process can be migrated into a cgroup by writing its PID to the 262target cgroup's "cgroup.procs" file. Only one process can be migrated 263on a single write(2) call. If a process is composed of multiple 264threads, writing the PID of any thread migrates all threads of the 265process. 266 267When a process forks a child process, the new process is born into the 268cgroup that the forking process belongs to at the time of the 269operation. After exit, a process stays associated with the cgroup 270that it belonged to at the time of exit until it's reaped; however, a 271zombie process does not appear in "cgroup.procs" and thus can't be 272moved to another cgroup. 273 274A cgroup which doesn't have any children or live processes can be 275destroyed by removing the directory. Note that a cgroup which doesn't 276have any children and is associated only with zombie processes is 277considered empty and can be removed:: 278 279 # rmdir $CGROUP_NAME 280 281"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy 282cgroup is in use in the system, this file may contain multiple lines, 283one for each hierarchy. The entry for cgroup v2 is always in the 284format "0::$PATH":: 285 286 # cat /proc/842/cgroup 287 ... 288 0::/test-cgroup/test-cgroup-nested 289 290If the process becomes a zombie and the cgroup it was associated with 291is removed subsequently, " (deleted)" is appended to the path:: 292 293 # cat /proc/842/cgroup 294 ... 295 0::/test-cgroup/test-cgroup-nested (deleted) 296 297 298Threads 299~~~~~~~ 300 301cgroup v2 supports thread granularity for a subset of controllers to 302support use cases requiring hierarchical resource distribution across 303the threads of a group of processes. By default, all threads of a 304process belong to the same cgroup, which also serves as the resource 305domain to host resource consumptions which are not specific to a 306process or thread. The thread mode allows threads to be spread across 307a subtree while still maintaining the common resource domain for them. 308 309Controllers which support thread mode are called threaded controllers. 310The ones which don't are called domain controllers. 311 312Marking a cgroup threaded makes it join the resource domain of its 313parent as a threaded cgroup. The parent may be another threaded 314cgroup whose resource domain is further up in the hierarchy. The root 315of a threaded subtree, that is, the nearest ancestor which is not 316threaded, is called threaded domain or thread root interchangeably and 317serves as the resource domain for the entire subtree. 318 319Inside a threaded subtree, threads of a process can be put in 320different cgroups and are not subject to the no internal process 321constraint - threaded controllers can be enabled on non-leaf cgroups 322whether they have threads in them or not. 323 324As the threaded domain cgroup hosts all the domain resource 325consumptions of the subtree, it is considered to have internal 326resource consumptions whether there are processes in it or not and 327can't have populated child cgroups which aren't threaded. Because the 328root cgroup is not subject to no internal process constraint, it can 329serve both as a threaded domain and a parent to domain cgroups. 330 331The current operation mode or type of the cgroup is shown in the 332"cgroup.type" file which indicates whether the cgroup is a normal 333domain, a domain which is serving as the domain of a threaded subtree, 334or a threaded cgroup. 335 336On creation, a cgroup is always a domain cgroup and can be made 337threaded by writing "threaded" to the "cgroup.type" file. The 338operation is single direction:: 339 340 # echo threaded > cgroup.type 341 342Once threaded, the cgroup can't be made a domain again. To enable the 343thread mode, the following conditions must be met. 344 345- As the cgroup will join the parent's resource domain. The parent 346 must either be a valid (threaded) domain or a threaded cgroup. 347 348- When the parent is an unthreaded domain, it must not have any domain 349 controllers enabled or populated domain children. The root is 350 exempt from this requirement. 351 352Topology-wise, a cgroup can be in an invalid state. Please consider 353the following topology:: 354 355 A (threaded domain) - B (threaded) - C (domain, just created) 356 357C is created as a domain but isn't connected to a parent which can 358host child domains. C can't be used until it is turned into a 359threaded cgroup. "cgroup.type" file will report "domain (invalid)" in 360these cases. Operations which fail due to invalid topology use 361EOPNOTSUPP as the errno. 362 363A domain cgroup is turned into a threaded domain when one of its child 364cgroup becomes threaded or threaded controllers are enabled in the 365"cgroup.subtree_control" file while there are processes in the cgroup. 366A threaded domain reverts to a normal domain when the conditions 367clear. 368 369When read, "cgroup.threads" contains the list of the thread IDs of all 370threads in the cgroup. Except that the operations are per-thread 371instead of per-process, "cgroup.threads" has the same format and 372behaves the same way as "cgroup.procs". While "cgroup.threads" can be 373written to in any cgroup, as it can only move threads inside the same 374threaded domain, its operations are confined inside each threaded 375subtree. 376 377The threaded domain cgroup serves as the resource domain for the whole 378subtree, and, while the threads can be scattered across the subtree, 379all the processes are considered to be in the threaded domain cgroup. 380"cgroup.procs" in a threaded domain cgroup contains the PIDs of all 381processes in the subtree and is not readable in the subtree proper. 382However, "cgroup.procs" can be written to from anywhere in the subtree 383to migrate all threads of the matching process to the cgroup. 384 385Only threaded controllers can be enabled in a threaded subtree. When 386a threaded controller is enabled inside a threaded subtree, it only 387accounts for and controls resource consumptions associated with the 388threads in the cgroup and its descendants. All consumptions which 389aren't tied to a specific thread belong to the threaded domain cgroup. 390 391Because a threaded subtree is exempt from no internal process 392constraint, a threaded controller must be able to handle competition 393between threads in a non-leaf cgroup and its child cgroups. Each 394threaded controller defines how such competitions are handled. 395 396Currently, the following controllers are threaded and can be enabled 397in a threaded cgroup:: 398 399- cpu 400- cpuset 401- perf_event 402- pids 403 404[Un]populated Notification 405-------------------------- 406 407Each non-root cgroup has a "cgroup.events" file which contains 408"populated" field indicating whether the cgroup's sub-hierarchy has 409live processes in it. Its value is 0 if there is no live process in 410the cgroup and its descendants; otherwise, 1. poll and [id]notify 411events are triggered when the value changes. This can be used, for 412example, to start a clean-up operation after all processes of a given 413sub-hierarchy have exited. The populated state updates and 414notifications are recursive. Consider the following sub-hierarchy 415where the numbers in the parentheses represent the numbers of processes 416in each cgroup:: 417 418 A(4) - B(0) - C(1) 419 \ D(0) 420 421A, B and C's "populated" fields would be 1 while D's 0. After the one 422process in C exits, B and C's "populated" fields would flip to "0" and 423file modified events will be generated on the "cgroup.events" files of 424both cgroups. 425 426 427Controlling Controllers 428----------------------- 429 430Enabling and Disabling 431~~~~~~~~~~~~~~~~~~~~~~ 432 433Each cgroup has a "cgroup.controllers" file which lists all 434controllers available for the cgroup to enable:: 435 436 # cat cgroup.controllers 437 cpu io memory 438 439No controller is enabled by default. Controllers can be enabled and 440disabled by writing to the "cgroup.subtree_control" file:: 441 442 # echo "+cpu +memory -io" > cgroup.subtree_control 443 444Only controllers which are listed in "cgroup.controllers" can be 445enabled. When multiple operations are specified as above, either they 446all succeed or fail. If multiple operations on the same controller 447are specified, the last one is effective. 448 449Enabling a controller in a cgroup indicates that the distribution of 450the target resource across its immediate children will be controlled. 451Consider the following sub-hierarchy. The enabled controllers are 452listed in parentheses:: 453 454 A(cpu,memory) - B(memory) - C() 455 \ D() 456 457As A has "cpu" and "memory" enabled, A will control the distribution 458of CPU cycles and memory to its children, in this case, B. As B has 459"memory" enabled but not "CPU", C and D will compete freely on CPU 460cycles but their division of memory available to B will be controlled. 461 462As a controller regulates the distribution of the target resource to 463the cgroup's children, enabling it creates the controller's interface 464files in the child cgroups. In the above example, enabling "cpu" on B 465would create the "cpu." prefixed controller interface files in C and 466D. Likewise, disabling "memory" from B would remove the "memory." 467prefixed controller interface files from C and D. This means that the 468controller interface files - anything which doesn't start with 469"cgroup." are owned by the parent rather than the cgroup itself. 470 471 472Top-down Constraint 473~~~~~~~~~~~~~~~~~~~ 474 475Resources are distributed top-down and a cgroup can further distribute 476a resource only if the resource has been distributed to it from the 477parent. This means that all non-root "cgroup.subtree_control" files 478can only contain controllers which are enabled in the parent's 479"cgroup.subtree_control" file. A controller can be enabled only if 480the parent has the controller enabled and a controller can't be 481disabled if one or more children have it enabled. 482 483 484No Internal Process Constraint 485~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 486 487Non-root cgroups can distribute domain resources to their children 488only when they don't have any processes of their own. In other words, 489only domain cgroups which don't contain any processes can have domain 490controllers enabled in their "cgroup.subtree_control" files. 491 492This guarantees that, when a domain controller is looking at the part 493of the hierarchy which has it enabled, processes are always only on 494the leaves. This rules out situations where child cgroups compete 495against internal processes of the parent. 496 497The root cgroup is exempt from this restriction. Root contains 498processes and anonymous resource consumption which can't be associated 499with any other cgroups and requires special treatment from most 500controllers. How resource consumption in the root cgroup is governed 501is up to each controller (for more information on this topic please 502refer to the Non-normative information section in the Controllers 503chapter). 504 505Note that the restriction doesn't get in the way if there is no 506enabled controller in the cgroup's "cgroup.subtree_control". This is 507important as otherwise it wouldn't be possible to create children of a 508populated cgroup. To control resource distribution of a cgroup, the 509cgroup must create children and transfer all its processes to the 510children before enabling controllers in its "cgroup.subtree_control" 511file. 512 513 514Delegation 515---------- 516 517Model of Delegation 518~~~~~~~~~~~~~~~~~~~ 519 520A cgroup can be delegated in two ways. First, to a less privileged 521user by granting write access of the directory and its "cgroup.procs", 522"cgroup.threads" and "cgroup.subtree_control" files to the user. 523Second, if the "nsdelegate" mount option is set, automatically to a 524cgroup namespace on namespace creation. 525 526Because the resource control interface files in a given directory 527control the distribution of the parent's resources, the delegatee 528shouldn't be allowed to write to them. For the first method, this is 529achieved by not granting access to these files. For the second, the 530kernel rejects writes to all files other than "cgroup.procs" and 531"cgroup.subtree_control" on a namespace root from inside the 532namespace. 533 534The end results are equivalent for both delegation types. Once 535delegated, the user can build sub-hierarchy under the directory, 536organize processes inside it as it sees fit and further distribute the 537resources it received from the parent. The limits and other settings 538of all resource controllers are hierarchical and regardless of what 539happens in the delegated sub-hierarchy, nothing can escape the 540resource restrictions imposed by the parent. 541 542Currently, cgroup doesn't impose any restrictions on the number of 543cgroups in or nesting depth of a delegated sub-hierarchy; however, 544this may be limited explicitly in the future. 545 546 547Delegation Containment 548~~~~~~~~~~~~~~~~~~~~~~ 549 550A delegated sub-hierarchy is contained in the sense that processes 551can't be moved into or out of the sub-hierarchy by the delegatee. 552 553For delegations to a less privileged user, this is achieved by 554requiring the following conditions for a process with a non-root euid 555to migrate a target process into a cgroup by writing its PID to the 556"cgroup.procs" file. 557 558- The writer must have write access to the "cgroup.procs" file. 559 560- The writer must have write access to the "cgroup.procs" file of the 561 common ancestor of the source and destination cgroups. 562 563The above two constraints ensure that while a delegatee may migrate 564processes around freely in the delegated sub-hierarchy it can't pull 565in from or push out to outside the sub-hierarchy. 566 567For an example, let's assume cgroups C0 and C1 have been delegated to 568user U0 who created C00, C01 under C0 and C10 under C1 as follows and 569all processes under C0 and C1 belong to U0:: 570 571 ~~~~~~~~~~~~~ - C0 - C00 572 ~ cgroup ~ \ C01 573 ~ hierarchy ~ 574 ~~~~~~~~~~~~~ - C1 - C10 575 576Let's also say U0 wants to write the PID of a process which is 577currently in C10 into "C00/cgroup.procs". U0 has write access to the 578file; however, the common ancestor of the source cgroup C10 and the 579destination cgroup C00 is above the points of delegation and U0 would 580not have write access to its "cgroup.procs" files and thus the write 581will be denied with -EACCES. 582 583For delegations to namespaces, containment is achieved by requiring 584that both the source and destination cgroups are reachable from the 585namespace of the process which is attempting the migration. If either 586is not reachable, the migration is rejected with -ENOENT. 587 588 589Guidelines 590---------- 591 592Organize Once and Control 593~~~~~~~~~~~~~~~~~~~~~~~~~ 594 595Migrating a process across cgroups is a relatively expensive operation 596and stateful resources such as memory are not moved together with the 597process. This is an explicit design decision as there often exist 598inherent trade-offs between migration and various hot paths in terms 599of synchronization cost. 600 601As such, migrating processes across cgroups frequently as a means to 602apply different resource restrictions is discouraged. A workload 603should be assigned to a cgroup according to the system's logical and 604resource structure once on start-up. Dynamic adjustments to resource 605distribution can be made by changing controller configuration through 606the interface files. 607 608 609Avoid Name Collisions 610~~~~~~~~~~~~~~~~~~~~~ 611 612Interface files for a cgroup and its children cgroups occupy the same 613directory and it is possible to create children cgroups which collide 614with interface files. 615 616All cgroup core interface files are prefixed with "cgroup." and each 617controller's interface files are prefixed with the controller name and 618a dot. A controller's name is composed of lower case alphabets and 619'_'s but never begins with an '_' so it can be used as the prefix 620character for collision avoidance. Also, interface file names won't 621start or end with terms which are often used in categorizing workloads 622such as job, service, slice, unit or workload. 623 624cgroup doesn't do anything to prevent name collisions and it's the 625user's responsibility to avoid them. 626 627 628Resource Distribution Models 629============================ 630 631cgroup controllers implement several resource distribution schemes 632depending on the resource type and expected use cases. This section 633describes major schemes in use along with their expected behaviors. 634 635 636Weights 637------- 638 639A parent's resource is distributed by adding up the weights of all 640active children and giving each the fraction matching the ratio of its 641weight against the sum. As only children which can make use of the 642resource at the moment participate in the distribution, this is 643work-conserving. Due to the dynamic nature, this model is usually 644used for stateless resources. 645 646All weights are in the range [1, 10000] with the default at 100. This 647allows symmetric multiplicative biases in both directions at fine 648enough granularity while staying in the intuitive range. 649 650As long as the weight is in range, all configuration combinations are 651valid and there is no reason to reject configuration changes or 652process migrations. 653 654"cpu.weight" proportionally distributes CPU cycles to active children 655and is an example of this type. 656 657 658.. _cgroupv2-limits-distributor: 659 660Limits 661------ 662 663A child can only consume up to the configured amount of the resource. 664Limits can be over-committed - the sum of the limits of children can 665exceed the amount of resource available to the parent. 666 667Limits are in the range [0, max] and defaults to "max", which is noop. 668 669As limits can be over-committed, all configuration combinations are 670valid and there is no reason to reject configuration changes or 671process migrations. 672 673"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume 674on an IO device and is an example of this type. 675 676.. _cgroupv2-protections-distributor: 677 678Protections 679----------- 680 681A cgroup is protected up to the configured amount of the resource 682as long as the usages of all its ancestors are under their 683protected levels. Protections can be hard guarantees or best effort 684soft boundaries. Protections can also be over-committed in which case 685only up to the amount available to the parent is protected among 686children. 687 688Protections are in the range [0, max] and defaults to 0, which is 689noop. 690 691As protections can be over-committed, all configuration combinations 692are valid and there is no reason to reject configuration changes or 693process migrations. 694 695"memory.low" implements best-effort memory protection and is an 696example of this type. 697 698 699Allocations 700----------- 701 702A cgroup is exclusively allocated a certain amount of a finite 703resource. Allocations can't be over-committed - the sum of the 704allocations of children can not exceed the amount of resource 705available to the parent. 706 707Allocations are in the range [0, max] and defaults to 0, which is no 708resource. 709 710As allocations can't be over-committed, some configuration 711combinations are invalid and should be rejected. Also, if the 712resource is mandatory for execution of processes, process migrations 713may be rejected. 714 715"cpu.rt.max" hard-allocates realtime slices and is an example of this 716type. 717 718 719Interface Files 720=============== 721 722Format 723------ 724 725All interface files should be in one of the following formats whenever 726possible:: 727 728 New-line separated values 729 (when only one value can be written at once) 730 731 VAL0\n 732 VAL1\n 733 ... 734 735 Space separated values 736 (when read-only or multiple values can be written at once) 737 738 VAL0 VAL1 ...\n 739 740 Flat keyed 741 742 KEY0 VAL0\n 743 KEY1 VAL1\n 744 ... 745 746 Nested keyed 747 748 KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01... 749 KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11... 750 ... 751 752For a writable file, the format for writing should generally match 753reading; however, controllers may allow omitting later fields or 754implement restricted shortcuts for most common use cases. 755 756For both flat and nested keyed files, only the values for a single key 757can be written at a time. For nested keyed files, the sub key pairs 758may be specified in any order and not all pairs have to be specified. 759 760 761Conventions 762----------- 763 764- Settings for a single feature should be contained in a single file. 765 766- The root cgroup should be exempt from resource control and thus 767 shouldn't have resource control interface files. 768 769- The default time unit is microseconds. If a different unit is ever 770 used, an explicit unit suffix must be present. 771 772- A parts-per quantity should use a percentage decimal with at least 773 two digit fractional part - e.g. 13.40. 774 775- If a controller implements weight based resource distribution, its 776 interface file should be named "weight" and have the range [1, 777 10000] with 100 as the default. The values are chosen to allow 778 enough and symmetric bias in both directions while keeping it 779 intuitive (the default is 100%). 780 781- If a controller implements an absolute resource guarantee and/or 782 limit, the interface files should be named "min" and "max" 783 respectively. If a controller implements best effort resource 784 guarantee and/or limit, the interface files should be named "low" 785 and "high" respectively. 786 787 In the above four control files, the special token "max" should be 788 used to represent upward infinity for both reading and writing. 789 790- If a setting has a configurable default value and keyed specific 791 overrides, the default entry should be keyed with "default" and 792 appear as the first entry in the file. 793 794 The default value can be updated by writing either "default $VAL" or 795 "$VAL". 796 797 When writing to update a specific override, "default" can be used as 798 the value to indicate removal of the override. Override entries 799 with "default" as the value must not appear when read. 800 801 For example, a setting which is keyed by major:minor device numbers 802 with integer values may look like the following:: 803 804 # cat cgroup-example-interface-file 805 default 150 806 8:0 300 807 808 The default value can be updated by:: 809 810 # echo 125 > cgroup-example-interface-file 811 812 or:: 813 814 # echo "default 125" > cgroup-example-interface-file 815 816 An override can be set by:: 817 818 # echo "8:16 170" > cgroup-example-interface-file 819 820 and cleared by:: 821 822 # echo "8:0 default" > cgroup-example-interface-file 823 # cat cgroup-example-interface-file 824 default 125 825 8:16 170 826 827- For events which are not very high frequency, an interface file 828 "events" should be created which lists event key value pairs. 829 Whenever a notifiable event happens, file modified event should be 830 generated on the file. 831 832 833Core Interface Files 834-------------------- 835 836All cgroup core files are prefixed with "cgroup." 837 838 cgroup.type 839 A read-write single value file which exists on non-root 840 cgroups. 841 842 When read, it indicates the current type of the cgroup, which 843 can be one of the following values. 844 845 - "domain" : A normal valid domain cgroup. 846 847 - "domain threaded" : A threaded domain cgroup which is 848 serving as the root of a threaded subtree. 849 850 - "domain invalid" : A cgroup which is in an invalid state. 851 It can't be populated or have controllers enabled. It may 852 be allowed to become a threaded cgroup. 853 854 - "threaded" : A threaded cgroup which is a member of a 855 threaded subtree. 856 857 A cgroup can be turned into a threaded cgroup by writing 858 "threaded" to this file. 859 860 cgroup.procs 861 A read-write new-line separated values file which exists on 862 all cgroups. 863 864 When read, it lists the PIDs of all processes which belong to 865 the cgroup one-per-line. The PIDs are not ordered and the 866 same PID may show up more than once if the process got moved 867 to another cgroup and then back or the PID got recycled while 868 reading. 869 870 A PID can be written to migrate the process associated with 871 the PID to the cgroup. The writer should match all of the 872 following conditions. 873 874 - It must have write access to the "cgroup.procs" file. 875 876 - It must have write access to the "cgroup.procs" file of the 877 common ancestor of the source and destination cgroups. 878 879 When delegating a sub-hierarchy, write access to this file 880 should be granted along with the containing directory. 881 882 In a threaded cgroup, reading this file fails with EOPNOTSUPP 883 as all the processes belong to the thread root. Writing is 884 supported and moves every thread of the process to the cgroup. 885 886 cgroup.threads 887 A read-write new-line separated values file which exists on 888 all cgroups. 889 890 When read, it lists the TIDs of all threads which belong to 891 the cgroup one-per-line. The TIDs are not ordered and the 892 same TID may show up more than once if the thread got moved to 893 another cgroup and then back or the TID got recycled while 894 reading. 895 896 A TID can be written to migrate the thread associated with the 897 TID to the cgroup. The writer should match all of the 898 following conditions. 899 900 - It must have write access to the "cgroup.threads" file. 901 902 - The cgroup that the thread is currently in must be in the 903 same resource domain as the destination cgroup. 904 905 - It must have write access to the "cgroup.procs" file of the 906 common ancestor of the source and destination cgroups. 907 908 When delegating a sub-hierarchy, write access to this file 909 should be granted along with the containing directory. 910 911 cgroup.controllers 912 A read-only space separated values file which exists on all 913 cgroups. 914 915 It shows space separated list of all controllers available to 916 the cgroup. The controllers are not ordered. 917 918 cgroup.subtree_control 919 A read-write space separated values file which exists on all 920 cgroups. Starts out empty. 921 922 When read, it shows space separated list of the controllers 923 which are enabled to control resource distribution from the 924 cgroup to its children. 925 926 Space separated list of controllers prefixed with '+' or '-' 927 can be written to enable or disable controllers. A controller 928 name prefixed with '+' enables the controller and '-' 929 disables. If a controller appears more than once on the list, 930 the last one is effective. When multiple enable and disable 931 operations are specified, either all succeed or all fail. 932 933 cgroup.events 934 A read-only flat-keyed file which exists on non-root cgroups. 935 The following entries are defined. Unless specified 936 otherwise, a value change in this file generates a file 937 modified event. 938 939 populated 940 1 if the cgroup or its descendants contains any live 941 processes; otherwise, 0. 942 frozen 943 1 if the cgroup is frozen; otherwise, 0. 944 945 cgroup.max.descendants 946 A read-write single value files. The default is "max". 947 948 Maximum allowed number of descent cgroups. 949 If the actual number of descendants is equal or larger, 950 an attempt to create a new cgroup in the hierarchy will fail. 951 952 cgroup.max.depth 953 A read-write single value files. The default is "max". 954 955 Maximum allowed descent depth below the current cgroup. 956 If the actual descent depth is equal or larger, 957 an attempt to create a new child cgroup will fail. 958 959 cgroup.stat 960 A read-only flat-keyed file with the following entries: 961 962 nr_descendants 963 Total number of visible descendant cgroups. 964 965 nr_dying_descendants 966 Total number of dying descendant cgroups. A cgroup becomes 967 dying after being deleted by a user. The cgroup will remain 968 in dying state for some time undefined time (which can depend 969 on system load) before being completely destroyed. 970 971 A process can't enter a dying cgroup under any circumstances, 972 a dying cgroup can't revive. 973 974 A dying cgroup can consume system resources not exceeding 975 limits, which were active at the moment of cgroup deletion. 976 977 cgroup.freeze 978 A read-write single value file which exists on non-root cgroups. 979 Allowed values are "0" and "1". The default is "0". 980 981 Writing "1" to the file causes freezing of the cgroup and all 982 descendant cgroups. This means that all belonging processes will 983 be stopped and will not run until the cgroup will be explicitly 984 unfrozen. Freezing of the cgroup may take some time; when this action 985 is completed, the "frozen" value in the cgroup.events control file 986 will be updated to "1" and the corresponding notification will be 987 issued. 988 989 A cgroup can be frozen either by its own settings, or by settings 990 of any ancestor cgroups. If any of ancestor cgroups is frozen, the 991 cgroup will remain frozen. 992 993 Processes in the frozen cgroup can be killed by a fatal signal. 994 They also can enter and leave a frozen cgroup: either by an explicit 995 move by a user, or if freezing of the cgroup races with fork(). 996 If a process is moved to a frozen cgroup, it stops. If a process is 997 moved out of a frozen cgroup, it becomes running. 998 999 Frozen status of a cgroup doesn't affect any cgroup tree operations: 1000 it's possible to delete a frozen (and empty) cgroup, as well as 1001 create new sub-cgroups. 1002 1003 cgroup.kill 1004 A write-only single value file which exists in non-root cgroups. 1005 The only allowed value is "1". 1006 1007 Writing "1" to the file causes the cgroup and all descendant cgroups to 1008 be killed. This means that all processes located in the affected cgroup 1009 tree will be killed via SIGKILL. 1010 1011 Killing a cgroup tree will deal with concurrent forks appropriately and 1012 is protected against migrations. 1013 1014 In a threaded cgroup, writing this file fails with EOPNOTSUPP as 1015 killing cgroups is a process directed operation, i.e. it affects 1016 the whole thread-group. 1017 1018 cgroup.pressure 1019 A read-write single value file that allowed values are "0" and "1". 1020 The default is "1". 1021 1022 Writing "0" to the file will disable the cgroup PSI accounting. 1023 Writing "1" to the file will re-enable the cgroup PSI accounting. 1024 1025 This control attribute is not hierarchical, so disable or enable PSI 1026 accounting in a cgroup does not affect PSI accounting in descendants 1027 and doesn't need pass enablement via ancestors from root. 1028 1029 The reason this control attribute exists is that PSI accounts stalls for 1030 each cgroup separately and aggregates it at each level of the hierarchy. 1031 This may cause non-negligible overhead for some workloads when under 1032 deep level of the hierarchy, in which case this control attribute can 1033 be used to disable PSI accounting in the non-leaf cgroups. 1034 1035 irq.pressure 1036 A read-write nested-keyed file. 1037 1038 Shows pressure stall information for IRQ/SOFTIRQ. See 1039 :ref:`Documentation/accounting/psi.rst <psi>` for details. 1040 1041Controllers 1042=========== 1043 1044.. _cgroup-v2-cpu: 1045 1046CPU 1047--- 1048 1049The "cpu" controllers regulates distribution of CPU cycles. This 1050controller implements weight and absolute bandwidth limit models for 1051normal scheduling policy and absolute bandwidth allocation model for 1052realtime scheduling policy. 1053 1054In all the above models, cycles distribution is defined only on a temporal 1055base and it does not account for the frequency at which tasks are executed. 1056The (optional) utilization clamping support allows to hint the schedutil 1057cpufreq governor about the minimum desired frequency which should always be 1058provided by a CPU, as well as the maximum desired frequency, which should not 1059be exceeded by a CPU. 1060 1061WARNING: cgroup2 doesn't yet support control of realtime processes and 1062the cpu controller can only be enabled when all RT processes are in 1063the root cgroup. Be aware that system management software may already 1064have placed RT processes into nonroot cgroups during the system boot 1065process, and these processes may need to be moved to the root cgroup 1066before the cpu controller can be enabled. 1067 1068 1069CPU Interface Files 1070~~~~~~~~~~~~~~~~~~~ 1071 1072All time durations are in microseconds. 1073 1074 cpu.stat 1075 A read-only flat-keyed file. 1076 This file exists whether the controller is enabled or not. 1077 1078 It always reports the following three stats: 1079 1080 - usage_usec 1081 - user_usec 1082 - system_usec 1083 1084 and the following five when the controller is enabled: 1085 1086 - nr_periods 1087 - nr_throttled 1088 - throttled_usec 1089 - nr_bursts 1090 - burst_usec 1091 1092 cpu.weight 1093 A read-write single value file which exists on non-root 1094 cgroups. The default is "100". 1095 1096 The weight in the range [1, 10000]. 1097 1098 cpu.weight.nice 1099 A read-write single value file which exists on non-root 1100 cgroups. The default is "0". 1101 1102 The nice value is in the range [-20, 19]. 1103 1104 This interface file is an alternative interface for 1105 "cpu.weight" and allows reading and setting weight using the 1106 same values used by nice(2). Because the range is smaller and 1107 granularity is coarser for the nice values, the read value is 1108 the closest approximation of the current weight. 1109 1110 cpu.max 1111 A read-write two value file which exists on non-root cgroups. 1112 The default is "max 100000". 1113 1114 The maximum bandwidth limit. It's in the following format:: 1115 1116 $MAX $PERIOD 1117 1118 which indicates that the group may consume up to $MAX in each 1119 $PERIOD duration. "max" for $MAX indicates no limit. If only 1120 one number is written, $MAX is updated. 1121 1122 cpu.max.burst 1123 A read-write single value file which exists on non-root 1124 cgroups. The default is "0". 1125 1126 The burst in the range [0, $MAX]. 1127 1128 cpu.pressure 1129 A read-write nested-keyed file. 1130 1131 Shows pressure stall information for CPU. See 1132 :ref:`Documentation/accounting/psi.rst <psi>` for details. 1133 1134 cpu.uclamp.min 1135 A read-write single value file which exists on non-root cgroups. 1136 The default is "0", i.e. no utilization boosting. 1137 1138 The requested minimum utilization (protection) as a percentage 1139 rational number, e.g. 12.34 for 12.34%. 1140 1141 This interface allows reading and setting minimum utilization clamp 1142 values similar to the sched_setattr(2). This minimum utilization 1143 value is used to clamp the task specific minimum utilization clamp. 1144 1145 The requested minimum utilization (protection) is always capped by 1146 the current value for the maximum utilization (limit), i.e. 1147 `cpu.uclamp.max`. 1148 1149 cpu.uclamp.max 1150 A read-write single value file which exists on non-root cgroups. 1151 The default is "max". i.e. no utilization capping 1152 1153 The requested maximum utilization (limit) as a percentage rational 1154 number, e.g. 98.76 for 98.76%. 1155 1156 This interface allows reading and setting maximum utilization clamp 1157 values similar to the sched_setattr(2). This maximum utilization 1158 value is used to clamp the task specific maximum utilization clamp. 1159 1160 1161 1162Memory 1163------ 1164 1165The "memory" controller regulates distribution of memory. Memory is 1166stateful and implements both limit and protection models. Due to the 1167intertwining between memory usage and reclaim pressure and the 1168stateful nature of memory, the distribution model is relatively 1169complex. 1170 1171While not completely water-tight, all major memory usages by a given 1172cgroup are tracked so that the total memory consumption can be 1173accounted and controlled to a reasonable extent. Currently, the 1174following types of memory usages are tracked. 1175 1176- Userland memory - page cache and anonymous memory. 1177 1178- Kernel data structures such as dentries and inodes. 1179 1180- TCP socket buffers. 1181 1182The above list may expand in the future for better coverage. 1183 1184 1185Memory Interface Files 1186~~~~~~~~~~~~~~~~~~~~~~ 1187 1188All memory amounts are in bytes. If a value which is not aligned to 1189PAGE_SIZE is written, the value may be rounded up to the closest 1190PAGE_SIZE multiple when read back. 1191 1192 memory.current 1193 A read-only single value file which exists on non-root 1194 cgroups. 1195 1196 The total amount of memory currently being used by the cgroup 1197 and its descendants. 1198 1199 memory.min 1200 A read-write single value file which exists on non-root 1201 cgroups. The default is "0". 1202 1203 Hard memory protection. If the memory usage of a cgroup 1204 is within its effective min boundary, the cgroup's memory 1205 won't be reclaimed under any conditions. If there is no 1206 unprotected reclaimable memory available, OOM killer 1207 is invoked. Above the effective min boundary (or 1208 effective low boundary if it is higher), pages are reclaimed 1209 proportionally to the overage, reducing reclaim pressure for 1210 smaller overages. 1211 1212 Effective min boundary is limited by memory.min values of 1213 all ancestor cgroups. If there is memory.min overcommitment 1214 (child cgroup or cgroups are requiring more protected memory 1215 than parent will allow), then each child cgroup will get 1216 the part of parent's protection proportional to its 1217 actual memory usage below memory.min. 1218 1219 Putting more memory than generally available under this 1220 protection is discouraged and may lead to constant OOMs. 1221 1222 If a memory cgroup is not populated with processes, 1223 its memory.min is ignored. 1224 1225 memory.low 1226 A read-write single value file which exists on non-root 1227 cgroups. The default is "0". 1228 1229 Best-effort memory protection. If the memory usage of a 1230 cgroup is within its effective low boundary, the cgroup's 1231 memory won't be reclaimed unless there is no reclaimable 1232 memory available in unprotected cgroups. 1233 Above the effective low boundary (or 1234 effective min boundary if it is higher), pages are reclaimed 1235 proportionally to the overage, reducing reclaim pressure for 1236 smaller overages. 1237 1238 Effective low boundary is limited by memory.low values of 1239 all ancestor cgroups. If there is memory.low overcommitment 1240 (child cgroup or cgroups are requiring more protected memory 1241 than parent will allow), then each child cgroup will get 1242 the part of parent's protection proportional to its 1243 actual memory usage below memory.low. 1244 1245 Putting more memory than generally available under this 1246 protection is discouraged. 1247 1248 memory.high 1249 A read-write single value file which exists on non-root 1250 cgroups. The default is "max". 1251 1252 Memory usage throttle limit. If a cgroup's usage goes 1253 over the high boundary, the processes of the cgroup are 1254 throttled and put under heavy reclaim pressure. 1255 1256 Going over the high limit never invokes the OOM killer and 1257 under extreme conditions the limit may be breached. The high 1258 limit should be used in scenarios where an external process 1259 monitors the limited cgroup to alleviate heavy reclaim 1260 pressure. 1261 1262 memory.max 1263 A read-write single value file which exists on non-root 1264 cgroups. The default is "max". 1265 1266 Memory usage hard limit. This is the main mechanism to limit 1267 memory usage of a cgroup. If a cgroup's memory usage reaches 1268 this limit and can't be reduced, the OOM killer is invoked in 1269 the cgroup. Under certain circumstances, the usage may go 1270 over the limit temporarily. 1271 1272 In default configuration regular 0-order allocations always 1273 succeed unless OOM killer chooses current task as a victim. 1274 1275 Some kinds of allocations don't invoke the OOM killer. 1276 Caller could retry them differently, return into userspace 1277 as -ENOMEM or silently ignore in cases like disk readahead. 1278 1279 memory.reclaim 1280 A write-only nested-keyed file which exists for all cgroups. 1281 1282 This is a simple interface to trigger memory reclaim in the 1283 target cgroup. 1284 1285 This file accepts a single key, the number of bytes to reclaim. 1286 No nested keys are currently supported. 1287 1288 Example:: 1289 1290 echo "1G" > memory.reclaim 1291 1292 The interface can be later extended with nested keys to 1293 configure the reclaim behavior. For example, specify the 1294 type of memory to reclaim from (anon, file, ..). 1295 1296 Please note that the kernel can over or under reclaim from 1297 the target cgroup. If less bytes are reclaimed than the 1298 specified amount, -EAGAIN is returned. 1299 1300 Please note that the proactive reclaim (triggered by this 1301 interface) is not meant to indicate memory pressure on the 1302 memory cgroup. Therefore socket memory balancing triggered by 1303 the memory reclaim normally is not exercised in this case. 1304 This means that the networking layer will not adapt based on 1305 reclaim induced by memory.reclaim. 1306 1307 memory.peak 1308 A read-only single value file which exists on non-root 1309 cgroups. 1310 1311 The max memory usage recorded for the cgroup and its 1312 descendants since the creation of the cgroup. 1313 1314 memory.oom.group 1315 A read-write single value file which exists on non-root 1316 cgroups. The default value is "0". 1317 1318 Determines whether the cgroup should be treated as 1319 an indivisible workload by the OOM killer. If set, 1320 all tasks belonging to the cgroup or to its descendants 1321 (if the memory cgroup is not a leaf cgroup) are killed 1322 together or not at all. This can be used to avoid 1323 partial kills to guarantee workload integrity. 1324 1325 Tasks with the OOM protection (oom_score_adj set to -1000) 1326 are treated as an exception and are never killed. 1327 1328 If the OOM killer is invoked in a cgroup, it's not going 1329 to kill any tasks outside of this cgroup, regardless 1330 memory.oom.group values of ancestor cgroups. 1331 1332 memory.events 1333 A read-only flat-keyed file which exists on non-root cgroups. 1334 The following entries are defined. Unless specified 1335 otherwise, a value change in this file generates a file 1336 modified event. 1337 1338 Note that all fields in this file are hierarchical and the 1339 file modified event can be generated due to an event down the 1340 hierarchy. For the local events at the cgroup level see 1341 memory.events.local. 1342 1343 low 1344 The number of times the cgroup is reclaimed due to 1345 high memory pressure even though its usage is under 1346 the low boundary. This usually indicates that the low 1347 boundary is over-committed. 1348 1349 high 1350 The number of times processes of the cgroup are 1351 throttled and routed to perform direct memory reclaim 1352 because the high memory boundary was exceeded. For a 1353 cgroup whose memory usage is capped by the high limit 1354 rather than global memory pressure, this event's 1355 occurrences are expected. 1356 1357 max 1358 The number of times the cgroup's memory usage was 1359 about to go over the max boundary. If direct reclaim 1360 fails to bring it down, the cgroup goes to OOM state. 1361 1362 oom 1363 The number of time the cgroup's memory usage was 1364 reached the limit and allocation was about to fail. 1365 1366 This event is not raised if the OOM killer is not 1367 considered as an option, e.g. for failed high-order 1368 allocations or if caller asked to not retry attempts. 1369 1370 oom_kill 1371 The number of processes belonging to this cgroup 1372 killed by any kind of OOM killer. 1373 1374 oom_group_kill 1375 The number of times a group OOM has occurred. 1376 1377 memory.events.local 1378 Similar to memory.events but the fields in the file are local 1379 to the cgroup i.e. not hierarchical. The file modified event 1380 generated on this file reflects only the local events. 1381 1382 memory.stat 1383 A read-only flat-keyed file which exists on non-root cgroups. 1384 1385 This breaks down the cgroup's memory footprint into different 1386 types of memory, type-specific details, and other information 1387 on the state and past events of the memory management system. 1388 1389 All memory amounts are in bytes. 1390 1391 The entries are ordered to be human readable, and new entries 1392 can show up in the middle. Don't rely on items remaining in a 1393 fixed position; use the keys to look up specific values! 1394 1395 If the entry has no per-node counter (or not show in the 1396 memory.numa_stat). We use 'npn' (non-per-node) as the tag 1397 to indicate that it will not show in the memory.numa_stat. 1398 1399 anon 1400 Amount of memory used in anonymous mappings such as 1401 brk(), sbrk(), and mmap(MAP_ANONYMOUS) 1402 1403 file 1404 Amount of memory used to cache filesystem data, 1405 including tmpfs and shared memory. 1406 1407 kernel (npn) 1408 Amount of total kernel memory, including 1409 (kernel_stack, pagetables, percpu, vmalloc, slab) in 1410 addition to other kernel memory use cases. 1411 1412 kernel_stack 1413 Amount of memory allocated to kernel stacks. 1414 1415 pagetables 1416 Amount of memory allocated for page tables. 1417 1418 sec_pagetables 1419 Amount of memory allocated for secondary page tables, 1420 this currently includes KVM mmu allocations on x86 1421 and arm64. 1422 1423 percpu (npn) 1424 Amount of memory used for storing per-cpu kernel 1425 data structures. 1426 1427 sock (npn) 1428 Amount of memory used in network transmission buffers 1429 1430 vmalloc (npn) 1431 Amount of memory used for vmap backed memory. 1432 1433 shmem 1434 Amount of cached filesystem data that is swap-backed, 1435 such as tmpfs, shm segments, shared anonymous mmap()s 1436 1437 zswap 1438 Amount of memory consumed by the zswap compression backend. 1439 1440 zswapped 1441 Amount of application memory swapped out to zswap. 1442 1443 file_mapped 1444 Amount of cached filesystem data mapped with mmap() 1445 1446 file_dirty 1447 Amount of cached filesystem data that was modified but 1448 not yet written back to disk 1449 1450 file_writeback 1451 Amount of cached filesystem data that was modified and 1452 is currently being written back to disk 1453 1454 swapcached 1455 Amount of swap cached in memory. The swapcache is accounted 1456 against both memory and swap usage. 1457 1458 anon_thp 1459 Amount of memory used in anonymous mappings backed by 1460 transparent hugepages 1461 1462 file_thp 1463 Amount of cached filesystem data backed by transparent 1464 hugepages 1465 1466 shmem_thp 1467 Amount of shm, tmpfs, shared anonymous mmap()s backed by 1468 transparent hugepages 1469 1470 inactive_anon, active_anon, inactive_file, active_file, unevictable 1471 Amount of memory, swap-backed and filesystem-backed, 1472 on the internal memory management lists used by the 1473 page reclaim algorithm. 1474 1475 As these represent internal list state (eg. shmem pages are on anon 1476 memory management lists), inactive_foo + active_foo may not be equal to 1477 the value for the foo counter, since the foo counter is type-based, not 1478 list-based. 1479 1480 slab_reclaimable 1481 Part of "slab" that might be reclaimed, such as 1482 dentries and inodes. 1483 1484 slab_unreclaimable 1485 Part of "slab" that cannot be reclaimed on memory 1486 pressure. 1487 1488 slab (npn) 1489 Amount of memory used for storing in-kernel data 1490 structures. 1491 1492 workingset_refault_anon 1493 Number of refaults of previously evicted anonymous pages. 1494 1495 workingset_refault_file 1496 Number of refaults of previously evicted file pages. 1497 1498 workingset_activate_anon 1499 Number of refaulted anonymous pages that were immediately 1500 activated. 1501 1502 workingset_activate_file 1503 Number of refaulted file pages that were immediately activated. 1504 1505 workingset_restore_anon 1506 Number of restored anonymous pages which have been detected as 1507 an active workingset before they got reclaimed. 1508 1509 workingset_restore_file 1510 Number of restored file pages which have been detected as an 1511 active workingset before they got reclaimed. 1512 1513 workingset_nodereclaim 1514 Number of times a shadow node has been reclaimed 1515 1516 pgscan (npn) 1517 Amount of scanned pages (in an inactive LRU list) 1518 1519 pgsteal (npn) 1520 Amount of reclaimed pages 1521 1522 pgscan_kswapd (npn) 1523 Amount of scanned pages by kswapd (in an inactive LRU list) 1524 1525 pgscan_direct (npn) 1526 Amount of scanned pages directly (in an inactive LRU list) 1527 1528 pgscan_khugepaged (npn) 1529 Amount of scanned pages by khugepaged (in an inactive LRU list) 1530 1531 pgsteal_kswapd (npn) 1532 Amount of reclaimed pages by kswapd 1533 1534 pgsteal_direct (npn) 1535 Amount of reclaimed pages directly 1536 1537 pgsteal_khugepaged (npn) 1538 Amount of reclaimed pages by khugepaged 1539 1540 pgfault (npn) 1541 Total number of page faults incurred 1542 1543 pgmajfault (npn) 1544 Number of major page faults incurred 1545 1546 pgrefill (npn) 1547 Amount of scanned pages (in an active LRU list) 1548 1549 pgactivate (npn) 1550 Amount of pages moved to the active LRU list 1551 1552 pgdeactivate (npn) 1553 Amount of pages moved to the inactive LRU list 1554 1555 pglazyfree (npn) 1556 Amount of pages postponed to be freed under memory pressure 1557 1558 pglazyfreed (npn) 1559 Amount of reclaimed lazyfree pages 1560 1561 thp_fault_alloc (npn) 1562 Number of transparent hugepages which were allocated to satisfy 1563 a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE 1564 is not set. 1565 1566 thp_collapse_alloc (npn) 1567 Number of transparent hugepages which were allocated to allow 1568 collapsing an existing range of pages. This counter is not 1569 present when CONFIG_TRANSPARENT_HUGEPAGE is not set. 1570 1571 thp_swpout (npn) 1572 Number of transparent hugepages which are swapout in one piece 1573 without splitting. 1574 1575 thp_swpout_fallback (npn) 1576 Number of transparent hugepages which were split before swapout. 1577 Usually because failed to allocate some continuous swap space 1578 for the huge page. 1579 1580 memory.numa_stat 1581 A read-only nested-keyed file which exists on non-root cgroups. 1582 1583 This breaks down the cgroup's memory footprint into different 1584 types of memory, type-specific details, and other information 1585 per node on the state of the memory management system. 1586 1587 This is useful for providing visibility into the NUMA locality 1588 information within an memcg since the pages are allowed to be 1589 allocated from any physical node. One of the use case is evaluating 1590 application performance by combining this information with the 1591 application's CPU allocation. 1592 1593 All memory amounts are in bytes. 1594 1595 The output format of memory.numa_stat is:: 1596 1597 type N0=<bytes in node 0> N1=<bytes in node 1> ... 1598 1599 The entries are ordered to be human readable, and new entries 1600 can show up in the middle. Don't rely on items remaining in a 1601 fixed position; use the keys to look up specific values! 1602 1603 The entries can refer to the memory.stat. 1604 1605 memory.swap.current 1606 A read-only single value file which exists on non-root 1607 cgroups. 1608 1609 The total amount of swap currently being used by the cgroup 1610 and its descendants. 1611 1612 memory.swap.high 1613 A read-write single value file which exists on non-root 1614 cgroups. The default is "max". 1615 1616 Swap usage throttle limit. If a cgroup's swap usage exceeds 1617 this limit, all its further allocations will be throttled to 1618 allow userspace to implement custom out-of-memory procedures. 1619 1620 This limit marks a point of no return for the cgroup. It is NOT 1621 designed to manage the amount of swapping a workload does 1622 during regular operation. Compare to memory.swap.max, which 1623 prohibits swapping past a set amount, but lets the cgroup 1624 continue unimpeded as long as other memory can be reclaimed. 1625 1626 Healthy workloads are not expected to reach this limit. 1627 1628 memory.swap.peak 1629 A read-only single value file which exists on non-root 1630 cgroups. 1631 1632 The max swap usage recorded for the cgroup and its 1633 descendants since the creation of the cgroup. 1634 1635 memory.swap.max 1636 A read-write single value file which exists on non-root 1637 cgroups. The default is "max". 1638 1639 Swap usage hard limit. If a cgroup's swap usage reaches this 1640 limit, anonymous memory of the cgroup will not be swapped out. 1641 1642 memory.swap.events 1643 A read-only flat-keyed file which exists on non-root cgroups. 1644 The following entries are defined. Unless specified 1645 otherwise, a value change in this file generates a file 1646 modified event. 1647 1648 high 1649 The number of times the cgroup's swap usage was over 1650 the high threshold. 1651 1652 max 1653 The number of times the cgroup's swap usage was about 1654 to go over the max boundary and swap allocation 1655 failed. 1656 1657 fail 1658 The number of times swap allocation failed either 1659 because of running out of swap system-wide or max 1660 limit. 1661 1662 When reduced under the current usage, the existing swap 1663 entries are reclaimed gradually and the swap usage may stay 1664 higher than the limit for an extended period of time. This 1665 reduces the impact on the workload and memory management. 1666 1667 memory.zswap.current 1668 A read-only single value file which exists on non-root 1669 cgroups. 1670 1671 The total amount of memory consumed by the zswap compression 1672 backend. 1673 1674 memory.zswap.max 1675 A read-write single value file which exists on non-root 1676 cgroups. The default is "max". 1677 1678 Zswap usage hard limit. If a cgroup's zswap pool reaches this 1679 limit, it will refuse to take any more stores before existing 1680 entries fault back in or are written out to disk. 1681 1682 memory.pressure 1683 A read-only nested-keyed file. 1684 1685 Shows pressure stall information for memory. See 1686 :ref:`Documentation/accounting/psi.rst <psi>` for details. 1687 1688 1689Usage Guidelines 1690~~~~~~~~~~~~~~~~ 1691 1692"memory.high" is the main mechanism to control memory usage. 1693Over-committing on high limit (sum of high limits > available memory) 1694and letting global memory pressure to distribute memory according to 1695usage is a viable strategy. 1696 1697Because breach of the high limit doesn't trigger the OOM killer but 1698throttles the offending cgroup, a management agent has ample 1699opportunities to monitor and take appropriate actions such as granting 1700more memory or terminating the workload. 1701 1702Determining whether a cgroup has enough memory is not trivial as 1703memory usage doesn't indicate whether the workload can benefit from 1704more memory. For example, a workload which writes data received from 1705network to a file can use all available memory but can also operate as 1706performant with a small amount of memory. A measure of memory 1707pressure - how much the workload is being impacted due to lack of 1708memory - is necessary to determine whether a workload needs more 1709memory; unfortunately, memory pressure monitoring mechanism isn't 1710implemented yet. 1711 1712 1713Memory Ownership 1714~~~~~~~~~~~~~~~~ 1715 1716A memory area is charged to the cgroup which instantiated it and stays 1717charged to the cgroup until the area is released. Migrating a process 1718to a different cgroup doesn't move the memory usages that it 1719instantiated while in the previous cgroup to the new cgroup. 1720 1721A memory area may be used by processes belonging to different cgroups. 1722To which cgroup the area will be charged is in-deterministic; however, 1723over time, the memory area is likely to end up in a cgroup which has 1724enough memory allowance to avoid high reclaim pressure. 1725 1726If a cgroup sweeps a considerable amount of memory which is expected 1727to be accessed repeatedly by other cgroups, it may make sense to use 1728POSIX_FADV_DONTNEED to relinquish the ownership of memory areas 1729belonging to the affected files to ensure correct memory ownership. 1730 1731 1732IO 1733-- 1734 1735The "io" controller regulates the distribution of IO resources. This 1736controller implements both weight based and absolute bandwidth or IOPS 1737limit distribution; however, weight based distribution is available 1738only if cfq-iosched is in use and neither scheme is available for 1739blk-mq devices. 1740 1741 1742IO Interface Files 1743~~~~~~~~~~~~~~~~~~ 1744 1745 io.stat 1746 A read-only nested-keyed file. 1747 1748 Lines are keyed by $MAJ:$MIN device numbers and not ordered. 1749 The following nested keys are defined. 1750 1751 ====== ===================== 1752 rbytes Bytes read 1753 wbytes Bytes written 1754 rios Number of read IOs 1755 wios Number of write IOs 1756 dbytes Bytes discarded 1757 dios Number of discard IOs 1758 ====== ===================== 1759 1760 An example read output follows:: 1761 1762 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0 1763 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021 1764 1765 io.cost.qos 1766 A read-write nested-keyed file which exists only on the root 1767 cgroup. 1768 1769 This file configures the Quality of Service of the IO cost 1770 model based controller (CONFIG_BLK_CGROUP_IOCOST) which 1771 currently implements "io.weight" proportional control. Lines 1772 are keyed by $MAJ:$MIN device numbers and not ordered. The 1773 line for a given device is populated on the first write for 1774 the device on "io.cost.qos" or "io.cost.model". The following 1775 nested keys are defined. 1776 1777 ====== ===================================== 1778 enable Weight-based control enable 1779 ctrl "auto" or "user" 1780 rpct Read latency percentile [0, 100] 1781 rlat Read latency threshold 1782 wpct Write latency percentile [0, 100] 1783 wlat Write latency threshold 1784 min Minimum scaling percentage [1, 10000] 1785 max Maximum scaling percentage [1, 10000] 1786 ====== ===================================== 1787 1788 The controller is disabled by default and can be enabled by 1789 setting "enable" to 1. "rpct" and "wpct" parameters default 1790 to zero and the controller uses internal device saturation 1791 state to adjust the overall IO rate between "min" and "max". 1792 1793 When a better control quality is needed, latency QoS 1794 parameters can be configured. For example:: 1795 1796 8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0 1797 1798 shows that on sdb, the controller is enabled, will consider 1799 the device saturated if the 95th percentile of read completion 1800 latencies is above 75ms or write 150ms, and adjust the overall 1801 IO issue rate between 50% and 150% accordingly. 1802 1803 The lower the saturation point, the better the latency QoS at 1804 the cost of aggregate bandwidth. The narrower the allowed 1805 adjustment range between "min" and "max", the more conformant 1806 to the cost model the IO behavior. Note that the IO issue 1807 base rate may be far off from 100% and setting "min" and "max" 1808 blindly can lead to a significant loss of device capacity or 1809 control quality. "min" and "max" are useful for regulating 1810 devices which show wide temporary behavior changes - e.g. a 1811 ssd which accepts writes at the line speed for a while and 1812 then completely stalls for multiple seconds. 1813 1814 When "ctrl" is "auto", the parameters are controlled by the 1815 kernel and may change automatically. Setting "ctrl" to "user" 1816 or setting any of the percentile and latency parameters puts 1817 it into "user" mode and disables the automatic changes. The 1818 automatic mode can be restored by setting "ctrl" to "auto". 1819 1820 io.cost.model 1821 A read-write nested-keyed file which exists only on the root 1822 cgroup. 1823 1824 This file configures the cost model of the IO cost model based 1825 controller (CONFIG_BLK_CGROUP_IOCOST) which currently 1826 implements "io.weight" proportional control. Lines are keyed 1827 by $MAJ:$MIN device numbers and not ordered. The line for a 1828 given device is populated on the first write for the device on 1829 "io.cost.qos" or "io.cost.model". The following nested keys 1830 are defined. 1831 1832 ===== ================================ 1833 ctrl "auto" or "user" 1834 model The cost model in use - "linear" 1835 ===== ================================ 1836 1837 When "ctrl" is "auto", the kernel may change all parameters 1838 dynamically. When "ctrl" is set to "user" or any other 1839 parameters are written to, "ctrl" become "user" and the 1840 automatic changes are disabled. 1841 1842 When "model" is "linear", the following model parameters are 1843 defined. 1844 1845 ============= ======================================== 1846 [r|w]bps The maximum sequential IO throughput 1847 [r|w]seqiops The maximum 4k sequential IOs per second 1848 [r|w]randiops The maximum 4k random IOs per second 1849 ============= ======================================== 1850 1851 From the above, the builtin linear model determines the base 1852 costs of a sequential and random IO and the cost coefficient 1853 for the IO size. While simple, this model can cover most 1854 common device classes acceptably. 1855 1856 The IO cost model isn't expected to be accurate in absolute 1857 sense and is scaled to the device behavior dynamically. 1858 1859 If needed, tools/cgroup/iocost_coef_gen.py can be used to 1860 generate device-specific coefficients. 1861 1862 io.weight 1863 A read-write flat-keyed file which exists on non-root cgroups. 1864 The default is "default 100". 1865 1866 The first line is the default weight applied to devices 1867 without specific override. The rest are overrides keyed by 1868 $MAJ:$MIN device numbers and not ordered. The weights are in 1869 the range [1, 10000] and specifies the relative amount IO time 1870 the cgroup can use in relation to its siblings. 1871 1872 The default weight can be updated by writing either "default 1873 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing 1874 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default". 1875 1876 An example read output follows:: 1877 1878 default 100 1879 8:16 200 1880 8:0 50 1881 1882 io.max 1883 A read-write nested-keyed file which exists on non-root 1884 cgroups. 1885 1886 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN 1887 device numbers and not ordered. The following nested keys are 1888 defined. 1889 1890 ===== ================================== 1891 rbps Max read bytes per second 1892 wbps Max write bytes per second 1893 riops Max read IO operations per second 1894 wiops Max write IO operations per second 1895 ===== ================================== 1896 1897 When writing, any number of nested key-value pairs can be 1898 specified in any order. "max" can be specified as the value 1899 to remove a specific limit. If the same key is specified 1900 multiple times, the outcome is undefined. 1901 1902 BPS and IOPS are measured in each IO direction and IOs are 1903 delayed if limit is reached. Temporary bursts are allowed. 1904 1905 Setting read limit at 2M BPS and write at 120 IOPS for 8:16:: 1906 1907 echo "8:16 rbps=2097152 wiops=120" > io.max 1908 1909 Reading returns the following:: 1910 1911 8:16 rbps=2097152 wbps=max riops=max wiops=120 1912 1913 Write IOPS limit can be removed by writing the following:: 1914 1915 echo "8:16 wiops=max" > io.max 1916 1917 Reading now returns the following:: 1918 1919 8:16 rbps=2097152 wbps=max riops=max wiops=max 1920 1921 io.pressure 1922 A read-only nested-keyed file. 1923 1924 Shows pressure stall information for IO. See 1925 :ref:`Documentation/accounting/psi.rst <psi>` for details. 1926 1927 1928Writeback 1929~~~~~~~~~ 1930 1931Page cache is dirtied through buffered writes and shared mmaps and 1932written asynchronously to the backing filesystem by the writeback 1933mechanism. Writeback sits between the memory and IO domains and 1934regulates the proportion of dirty memory by balancing dirtying and 1935write IOs. 1936 1937The io controller, in conjunction with the memory controller, 1938implements control of page cache writeback IOs. The memory controller 1939defines the memory domain that dirty memory ratio is calculated and 1940maintained for and the io controller defines the io domain which 1941writes out dirty pages for the memory domain. Both system-wide and 1942per-cgroup dirty memory states are examined and the more restrictive 1943of the two is enforced. 1944 1945cgroup writeback requires explicit support from the underlying 1946filesystem. Currently, cgroup writeback is implemented on ext2, ext4, 1947btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are 1948attributed to the root cgroup. 1949 1950There are inherent differences in memory and writeback management 1951which affects how cgroup ownership is tracked. Memory is tracked per 1952page while writeback per inode. For the purpose of writeback, an 1953inode is assigned to a cgroup and all IO requests to write dirty pages 1954from the inode are attributed to that cgroup. 1955 1956As cgroup ownership for memory is tracked per page, there can be pages 1957which are associated with different cgroups than the one the inode is 1958associated with. These are called foreign pages. The writeback 1959constantly keeps track of foreign pages and, if a particular foreign 1960cgroup becomes the majority over a certain period of time, switches 1961the ownership of the inode to that cgroup. 1962 1963While this model is enough for most use cases where a given inode is 1964mostly dirtied by a single cgroup even when the main writing cgroup 1965changes over time, use cases where multiple cgroups write to a single 1966inode simultaneously are not supported well. In such circumstances, a 1967significant portion of IOs are likely to be attributed incorrectly. 1968As memory controller assigns page ownership on the first use and 1969doesn't update it until the page is released, even if writeback 1970strictly follows page ownership, multiple cgroups dirtying overlapping 1971areas wouldn't work as expected. It's recommended to avoid such usage 1972patterns. 1973 1974The sysctl knobs which affect writeback behavior are applied to cgroup 1975writeback as follows. 1976 1977 vm.dirty_background_ratio, vm.dirty_ratio 1978 These ratios apply the same to cgroup writeback with the 1979 amount of available memory capped by limits imposed by the 1980 memory controller and system-wide clean memory. 1981 1982 vm.dirty_background_bytes, vm.dirty_bytes 1983 For cgroup writeback, this is calculated into ratio against 1984 total available memory and applied the same way as 1985 vm.dirty[_background]_ratio. 1986 1987 1988IO Latency 1989~~~~~~~~~~ 1990 1991This is a cgroup v2 controller for IO workload protection. You provide a group 1992with a latency target, and if the average latency exceeds that target the 1993controller will throttle any peers that have a lower latency target than the 1994protected workload. 1995 1996The limits are only applied at the peer level in the hierarchy. This means that 1997in the diagram below, only groups A, B, and C will influence each other, and 1998groups D and F will influence each other. Group G will influence nobody:: 1999 2000 [root] 2001 / | \ 2002 A B C 2003 / \ | 2004 D F G 2005 2006 2007So the ideal way to configure this is to set io.latency in groups A, B, and C. 2008Generally you do not want to set a value lower than the latency your device 2009supports. Experiment to find the value that works best for your workload. 2010Start at higher than the expected latency for your device and watch the 2011avg_lat value in io.stat for your workload group to get an idea of the 2012latency you see during normal operation. Use the avg_lat value as a basis for 2013your real setting, setting at 10-15% higher than the value in io.stat. 2014 2015How IO Latency Throttling Works 2016~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2017 2018io.latency is work conserving; so as long as everybody is meeting their latency 2019target the controller doesn't do anything. Once a group starts missing its 2020target it begins throttling any peer group that has a higher target than itself. 2021This throttling takes 2 forms: 2022 2023- Queue depth throttling. This is the number of outstanding IO's a group is 2024 allowed to have. We will clamp down relatively quickly, starting at no limit 2025 and going all the way down to 1 IO at a time. 2026 2027- Artificial delay induction. There are certain types of IO that cannot be 2028 throttled without possibly adversely affecting higher priority groups. This 2029 includes swapping and metadata IO. These types of IO are allowed to occur 2030 normally, however they are "charged" to the originating group. If the 2031 originating group is being throttled you will see the use_delay and delay 2032 fields in io.stat increase. The delay value is how many microseconds that are 2033 being added to any process that runs in this group. Because this number can 2034 grow quite large if there is a lot of swapping or metadata IO occurring we 2035 limit the individual delay events to 1 second at a time. 2036 2037Once the victimized group starts meeting its latency target again it will start 2038unthrottling any peer groups that were throttled previously. If the victimized 2039group simply stops doing IO the global counter will unthrottle appropriately. 2040 2041IO Latency Interface Files 2042~~~~~~~~~~~~~~~~~~~~~~~~~~ 2043 2044 io.latency 2045 This takes a similar format as the other controllers. 2046 2047 "MAJOR:MINOR target=<target time in microseconds>" 2048 2049 io.stat 2050 If the controller is enabled you will see extra stats in io.stat in 2051 addition to the normal ones. 2052 2053 depth 2054 This is the current queue depth for the group. 2055 2056 avg_lat 2057 This is an exponential moving average with a decay rate of 1/exp 2058 bound by the sampling interval. The decay rate interval can be 2059 calculated by multiplying the win value in io.stat by the 2060 corresponding number of samples based on the win value. 2061 2062 win 2063 The sampling window size in milliseconds. This is the minimum 2064 duration of time between evaluation events. Windows only elapse 2065 with IO activity. Idle periods extend the most recent window. 2066 2067IO Priority 2068~~~~~~~~~~~ 2069 2070A single attribute controls the behavior of the I/O priority cgroup policy, 2071namely the io.prio.class attribute. The following values are accepted for 2072that attribute: 2073 2074 no-change 2075 Do not modify the I/O priority class. 2076 2077 promote-to-rt 2078 For requests that have a non-RT I/O priority class, change it into RT. 2079 Also change the priority level of these requests to 4. Do not modify 2080 the I/O priority of requests that have priority class RT. 2081 2082 restrict-to-be 2083 For requests that do not have an I/O priority class or that have I/O 2084 priority class RT, change it into BE. Also change the priority level 2085 of these requests to 0. Do not modify the I/O priority class of 2086 requests that have priority class IDLE. 2087 2088 idle 2089 Change the I/O priority class of all requests into IDLE, the lowest 2090 I/O priority class. 2091 2092 none-to-rt 2093 Deprecated. Just an alias for promote-to-rt. 2094 2095The following numerical values are associated with the I/O priority policies: 2096 2097+----------------+---+ 2098| no-change | 0 | 2099+----------------+---+ 2100| promote-to-rt | 1 | 2101+----------------+---+ 2102| restrict-to-be | 2 | 2103+----------------+---+ 2104| idle | 3 | 2105+----------------+---+ 2106 2107The numerical value that corresponds to each I/O priority class is as follows: 2108 2109+-------------------------------+---+ 2110| IOPRIO_CLASS_NONE | 0 | 2111+-------------------------------+---+ 2112| IOPRIO_CLASS_RT (real-time) | 1 | 2113+-------------------------------+---+ 2114| IOPRIO_CLASS_BE (best effort) | 2 | 2115+-------------------------------+---+ 2116| IOPRIO_CLASS_IDLE | 3 | 2117+-------------------------------+---+ 2118 2119The algorithm to set the I/O priority class for a request is as follows: 2120 2121- If I/O priority class policy is promote-to-rt, change the request I/O 2122 priority class to IOPRIO_CLASS_RT and change the request I/O priority 2123 level to 4. 2124- If I/O priority class policy is not promote-to-rt, translate the I/O priority 2125 class policy into a number, then change the request I/O priority class 2126 into the maximum of the I/O priority class policy number and the numerical 2127 I/O priority class. 2128 2129PID 2130--- 2131 2132The process number controller is used to allow a cgroup to stop any 2133new tasks from being fork()'d or clone()'d after a specified limit is 2134reached. 2135 2136The number of tasks in a cgroup can be exhausted in ways which other 2137controllers cannot prevent, thus warranting its own controller. For 2138example, a fork bomb is likely to exhaust the number of tasks before 2139hitting memory restrictions. 2140 2141Note that PIDs used in this controller refer to TIDs, process IDs as 2142used by the kernel. 2143 2144 2145PID Interface Files 2146~~~~~~~~~~~~~~~~~~~ 2147 2148 pids.max 2149 A read-write single value file which exists on non-root 2150 cgroups. The default is "max". 2151 2152 Hard limit of number of processes. 2153 2154 pids.current 2155 A read-only single value file which exists on all cgroups. 2156 2157 The number of processes currently in the cgroup and its 2158 descendants. 2159 2160Organisational operations are not blocked by cgroup policies, so it is 2161possible to have pids.current > pids.max. This can be done by either 2162setting the limit to be smaller than pids.current, or attaching enough 2163processes to the cgroup such that pids.current is larger than 2164pids.max. However, it is not possible to violate a cgroup PID policy 2165through fork() or clone(). These will return -EAGAIN if the creation 2166of a new process would cause a cgroup policy to be violated. 2167 2168 2169Cpuset 2170------ 2171 2172The "cpuset" controller provides a mechanism for constraining 2173the CPU and memory node placement of tasks to only the resources 2174specified in the cpuset interface files in a task's current cgroup. 2175This is especially valuable on large NUMA systems where placing jobs 2176on properly sized subsets of the systems with careful processor and 2177memory placement to reduce cross-node memory access and contention 2178can improve overall system performance. 2179 2180The "cpuset" controller is hierarchical. That means the controller 2181cannot use CPUs or memory nodes not allowed in its parent. 2182 2183 2184Cpuset Interface Files 2185~~~~~~~~~~~~~~~~~~~~~~ 2186 2187 cpuset.cpus 2188 A read-write multiple values file which exists on non-root 2189 cpuset-enabled cgroups. 2190 2191 It lists the requested CPUs to be used by tasks within this 2192 cgroup. The actual list of CPUs to be granted, however, is 2193 subjected to constraints imposed by its parent and can differ 2194 from the requested CPUs. 2195 2196 The CPU numbers are comma-separated numbers or ranges. 2197 For example:: 2198 2199 # cat cpuset.cpus 2200 0-4,6,8-10 2201 2202 An empty value indicates that the cgroup is using the same 2203 setting as the nearest cgroup ancestor with a non-empty 2204 "cpuset.cpus" or all the available CPUs if none is found. 2205 2206 The value of "cpuset.cpus" stays constant until the next update 2207 and won't be affected by any CPU hotplug events. 2208 2209 cpuset.cpus.effective 2210 A read-only multiple values file which exists on all 2211 cpuset-enabled cgroups. 2212 2213 It lists the onlined CPUs that are actually granted to this 2214 cgroup by its parent. These CPUs are allowed to be used by 2215 tasks within the current cgroup. 2216 2217 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows 2218 all the CPUs from the parent cgroup that can be available to 2219 be used by this cgroup. Otherwise, it should be a subset of 2220 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus" 2221 can be granted. In this case, it will be treated just like an 2222 empty "cpuset.cpus". 2223 2224 Its value will be affected by CPU hotplug events. 2225 2226 cpuset.mems 2227 A read-write multiple values file which exists on non-root 2228 cpuset-enabled cgroups. 2229 2230 It lists the requested memory nodes to be used by tasks within 2231 this cgroup. The actual list of memory nodes granted, however, 2232 is subjected to constraints imposed by its parent and can differ 2233 from the requested memory nodes. 2234 2235 The memory node numbers are comma-separated numbers or ranges. 2236 For example:: 2237 2238 # cat cpuset.mems 2239 0-1,3 2240 2241 An empty value indicates that the cgroup is using the same 2242 setting as the nearest cgroup ancestor with a non-empty 2243 "cpuset.mems" or all the available memory nodes if none 2244 is found. 2245 2246 The value of "cpuset.mems" stays constant until the next update 2247 and won't be affected by any memory nodes hotplug events. 2248 2249 Setting a non-empty value to "cpuset.mems" causes memory of 2250 tasks within the cgroup to be migrated to the designated nodes if 2251 they are currently using memory outside of the designated nodes. 2252 2253 There is a cost for this memory migration. The migration 2254 may not be complete and some memory pages may be left behind. 2255 So it is recommended that "cpuset.mems" should be set properly 2256 before spawning new tasks into the cpuset. Even if there is 2257 a need to change "cpuset.mems" with active tasks, it shouldn't 2258 be done frequently. 2259 2260 cpuset.mems.effective 2261 A read-only multiple values file which exists on all 2262 cpuset-enabled cgroups. 2263 2264 It lists the onlined memory nodes that are actually granted to 2265 this cgroup by its parent. These memory nodes are allowed to 2266 be used by tasks within the current cgroup. 2267 2268 If "cpuset.mems" is empty, it shows all the memory nodes from the 2269 parent cgroup that will be available to be used by this cgroup. 2270 Otherwise, it should be a subset of "cpuset.mems" unless none of 2271 the memory nodes listed in "cpuset.mems" can be granted. In this 2272 case, it will be treated just like an empty "cpuset.mems". 2273 2274 Its value will be affected by memory nodes hotplug events. 2275 2276 cpuset.cpus.exclusive 2277 A read-write multiple values file which exists on non-root 2278 cpuset-enabled cgroups. 2279 2280 It lists all the exclusive CPUs that are allowed to be used 2281 to create a new cpuset partition. Its value is not used 2282 unless the cgroup becomes a valid partition root. See the 2283 "cpuset.cpus.partition" section below for a description of what 2284 a cpuset partition is. 2285 2286 When the cgroup becomes a partition root, the actual exclusive 2287 CPUs that are allocated to that partition are listed in 2288 "cpuset.cpus.exclusive.effective" which may be different 2289 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive" 2290 has previously been set, "cpuset.cpus.exclusive.effective" 2291 is always a subset of it. 2292 2293 Users can manually set it to a value that is different from 2294 "cpuset.cpus". The only constraint in setting it is that the 2295 list of CPUs must be exclusive with respect to its sibling. 2296 2297 For a parent cgroup, any one of its exclusive CPUs can only 2298 be distributed to at most one of its child cgroups. Having an 2299 exclusive CPU appearing in two or more of its child cgroups is 2300 not allowed (the exclusivity rule). A value that violates the 2301 exclusivity rule will be rejected with a write error. 2302 2303 The root cgroup is a partition root and all its available CPUs 2304 are in its exclusive CPU set. 2305 2306 cpuset.cpus.exclusive.effective 2307 A read-only multiple values file which exists on all non-root 2308 cpuset-enabled cgroups. 2309 2310 This file shows the effective set of exclusive CPUs that 2311 can be used to create a partition root. The content of this 2312 file will always be a subset of "cpuset.cpus" and its parent's 2313 "cpuset.cpus.exclusive.effective" if its parent is not the root 2314 cgroup. It will also be a subset of "cpuset.cpus.exclusive" 2315 if it is set. If "cpuset.cpus.exclusive" is not set, it is 2316 treated to have an implicit value of "cpuset.cpus" in the 2317 formation of local partition. 2318 2319 cpuset.cpus.partition 2320 A read-write single value file which exists on non-root 2321 cpuset-enabled cgroups. This flag is owned by the parent cgroup 2322 and is not delegatable. 2323 2324 It accepts only the following input values when written to. 2325 2326 ========== ===================================== 2327 "member" Non-root member of a partition 2328 "root" Partition root 2329 "isolated" Partition root without load balancing 2330 ========== ===================================== 2331 2332 A cpuset partition is a collection of cpuset-enabled cgroups with 2333 a partition root at the top of the hierarchy and its descendants 2334 except those that are separate partition roots themselves and 2335 their descendants. A partition has exclusive access to the 2336 set of exclusive CPUs allocated to it. Other cgroups outside 2337 of that partition cannot use any CPUs in that set. 2338 2339 There are two types of partitions - local and remote. A local 2340 partition is one whose parent cgroup is also a valid partition 2341 root. A remote partition is one whose parent cgroup is not a 2342 valid partition root itself. Writing to "cpuset.cpus.exclusive" 2343 is optional for the creation of a local partition as its 2344 "cpuset.cpus.exclusive" file will assume an implicit value that 2345 is the same as "cpuset.cpus" if it is not set. Writing the 2346 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy 2347 before the target partition root is mandatory for the creation 2348 of a remote partition. 2349 2350 Currently, a remote partition cannot be created under a local 2351 partition. All the ancestors of a remote partition root except 2352 the root cgroup cannot be a partition root. 2353 2354 The root cgroup is always a partition root and its state cannot 2355 be changed. All other non-root cgroups start out as "member". 2356 2357 When set to "root", the current cgroup is the root of a new 2358 partition or scheduling domain. The set of exclusive CPUs is 2359 determined by the value of its "cpuset.cpus.exclusive.effective". 2360 2361 When set to "isolated", the CPUs in that partition will be in 2362 an isolated state without any load balancing from the scheduler 2363 and excluded from the unbound workqueues. Tasks placed in such 2364 a partition with multiple CPUs should be carefully distributed 2365 and bound to each of the individual CPUs for optimal performance. 2366 2367 A partition root ("root" or "isolated") can be in one of the 2368 two possible states - valid or invalid. An invalid partition 2369 root is in a degraded state where some state information may 2370 be retained, but behaves more like a "member". 2371 2372 All possible state transitions among "member", "root" and 2373 "isolated" are allowed. 2374 2375 On read, the "cpuset.cpus.partition" file can show the following 2376 values. 2377 2378 ============================= ===================================== 2379 "member" Non-root member of a partition 2380 "root" Partition root 2381 "isolated" Partition root without load balancing 2382 "root invalid (<reason>)" Invalid partition root 2383 "isolated invalid (<reason>)" Invalid isolated partition root 2384 ============================= ===================================== 2385 2386 In the case of an invalid partition root, a descriptive string on 2387 why the partition is invalid is included within parentheses. 2388 2389 For a local partition root to be valid, the following conditions 2390 must be met. 2391 2392 1) The parent cgroup is a valid partition root. 2393 2) The "cpuset.cpus.exclusive.effective" file cannot be empty, 2394 though it may contain offline CPUs. 2395 3) The "cpuset.cpus.effective" cannot be empty unless there is 2396 no task associated with this partition. 2397 2398 For a remote partition root to be valid, all the above conditions 2399 except the first one must be met. 2400 2401 External events like hotplug or changes to "cpuset.cpus" or 2402 "cpuset.cpus.exclusive" can cause a valid partition root to 2403 become invalid and vice versa. Note that a task cannot be 2404 moved to a cgroup with empty "cpuset.cpus.effective". 2405 2406 A valid non-root parent partition may distribute out all its CPUs 2407 to its child local partitions when there is no task associated 2408 with it. 2409 2410 Care must be taken to change a valid partition root to "member" 2411 as all its child local partitions, if present, will become 2412 invalid causing disruption to tasks running in those child 2413 partitions. These inactivated partitions could be recovered if 2414 their parent is switched back to a partition root with a proper 2415 value in "cpuset.cpus" or "cpuset.cpus.exclusive". 2416 2417 Poll and inotify events are triggered whenever the state of 2418 "cpuset.cpus.partition" changes. That includes changes caused 2419 by write to "cpuset.cpus.partition", cpu hotplug or other 2420 changes that modify the validity status of the partition. 2421 This will allow user space agents to monitor unexpected changes 2422 to "cpuset.cpus.partition" without the need to do continuous 2423 polling. 2424 2425 A user can pre-configure certain CPUs to an isolated state 2426 with load balancing disabled at boot time with the "isolcpus" 2427 kernel boot command line option. If those CPUs are to be put 2428 into a partition, they have to be used in an isolated partition. 2429 2430 2431Device controller 2432----------------- 2433 2434Device controller manages access to device files. It includes both 2435creation of new device files (using mknod), and access to the 2436existing device files. 2437 2438Cgroup v2 device controller has no interface files and is implemented 2439on top of cgroup BPF. To control access to device files, a user may 2440create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach 2441them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a 2442device file, corresponding BPF programs will be executed, and depending 2443on the return value the attempt will succeed or fail with -EPERM. 2444 2445A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the 2446bpf_cgroup_dev_ctx structure, which describes the device access attempt: 2447access type (mknod/read/write) and device (type, major and minor numbers). 2448If the program returns 0, the attempt fails with -EPERM, otherwise it 2449succeeds. 2450 2451An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in 2452tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree. 2453 2454 2455RDMA 2456---- 2457 2458The "rdma" controller regulates the distribution and accounting of 2459RDMA resources. 2460 2461RDMA Interface Files 2462~~~~~~~~~~~~~~~~~~~~ 2463 2464 rdma.max 2465 A readwrite nested-keyed file that exists for all the cgroups 2466 except root that describes current configured resource limit 2467 for a RDMA/IB device. 2468 2469 Lines are keyed by device name and are not ordered. 2470 Each line contains space separated resource name and its configured 2471 limit that can be distributed. 2472 2473 The following nested keys are defined. 2474 2475 ========== ============================= 2476 hca_handle Maximum number of HCA Handles 2477 hca_object Maximum number of HCA Objects 2478 ========== ============================= 2479 2480 An example for mlx4 and ocrdma device follows:: 2481 2482 mlx4_0 hca_handle=2 hca_object=2000 2483 ocrdma1 hca_handle=3 hca_object=max 2484 2485 rdma.current 2486 A read-only file that describes current resource usage. 2487 It exists for all the cgroup except root. 2488 2489 An example for mlx4 and ocrdma device follows:: 2490 2491 mlx4_0 hca_handle=1 hca_object=20 2492 ocrdma1 hca_handle=1 hca_object=23 2493 2494HugeTLB 2495------- 2496 2497The HugeTLB controller allows to limit the HugeTLB usage per control group and 2498enforces the controller limit during page fault. 2499 2500HugeTLB Interface Files 2501~~~~~~~~~~~~~~~~~~~~~~~ 2502 2503 hugetlb.<hugepagesize>.current 2504 Show current usage for "hugepagesize" hugetlb. It exists for all 2505 the cgroup except root. 2506 2507 hugetlb.<hugepagesize>.max 2508 Set/show the hard limit of "hugepagesize" hugetlb usage. 2509 The default value is "max". It exists for all the cgroup except root. 2510 2511 hugetlb.<hugepagesize>.events 2512 A read-only flat-keyed file which exists on non-root cgroups. 2513 2514 max 2515 The number of allocation failure due to HugeTLB limit 2516 2517 hugetlb.<hugepagesize>.events.local 2518 Similar to hugetlb.<hugepagesize>.events but the fields in the file 2519 are local to the cgroup i.e. not hierarchical. The file modified event 2520 generated on this file reflects only the local events. 2521 2522 hugetlb.<hugepagesize>.numa_stat 2523 Similar to memory.numa_stat, it shows the numa information of the 2524 hugetlb pages of <hugepagesize> in this cgroup. Only active in 2525 use hugetlb pages are included. The per-node values are in bytes. 2526 2527Misc 2528---- 2529 2530The Miscellaneous cgroup provides the resource limiting and tracking 2531mechanism for the scalar resources which cannot be abstracted like the other 2532cgroup resources. Controller is enabled by the CONFIG_CGROUP_MISC config 2533option. 2534 2535A resource can be added to the controller via enum misc_res_type{} in the 2536include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[] 2537in the kernel/cgroup/misc.c file. Provider of the resource must set its 2538capacity prior to using the resource by calling misc_cg_set_capacity(). 2539 2540Once a capacity is set then the resource usage can be updated using charge and 2541uncharge APIs. All of the APIs to interact with misc controller are in 2542include/linux/misc_cgroup.h. 2543 2544Misc Interface Files 2545~~~~~~~~~~~~~~~~~~~~ 2546 2547Miscellaneous controller provides 3 interface files. If two misc resources (res_a and res_b) are registered then: 2548 2549 misc.capacity 2550 A read-only flat-keyed file shown only in the root cgroup. It shows 2551 miscellaneous scalar resources available on the platform along with 2552 their quantities:: 2553 2554 $ cat misc.capacity 2555 res_a 50 2556 res_b 10 2557 2558 misc.current 2559 A read-only flat-keyed file shown in the all cgroups. It shows 2560 the current usage of the resources in the cgroup and its children.:: 2561 2562 $ cat misc.current 2563 res_a 3 2564 res_b 0 2565 2566 misc.max 2567 A read-write flat-keyed file shown in the non root cgroups. Allowed 2568 maximum usage of the resources in the cgroup and its children.:: 2569 2570 $ cat misc.max 2571 res_a max 2572 res_b 4 2573 2574 Limit can be set by:: 2575 2576 # echo res_a 1 > misc.max 2577 2578 Limit can be set to max by:: 2579 2580 # echo res_a max > misc.max 2581 2582 Limits can be set higher than the capacity value in the misc.capacity 2583 file. 2584 2585 misc.events 2586 A read-only flat-keyed file which exists on non-root cgroups. The 2587 following entries are defined. Unless specified otherwise, a value 2588 change in this file generates a file modified event. All fields in 2589 this file are hierarchical. 2590 2591 max 2592 The number of times the cgroup's resource usage was 2593 about to go over the max boundary. 2594 2595Migration and Ownership 2596~~~~~~~~~~~~~~~~~~~~~~~ 2597 2598A miscellaneous scalar resource is charged to the cgroup in which it is used 2599first, and stays charged to that cgroup until that resource is freed. Migrating 2600a process to a different cgroup does not move the charge to the destination 2601cgroup where the process has moved. 2602 2603Others 2604------ 2605 2606perf_event 2607~~~~~~~~~~ 2608 2609perf_event controller, if not mounted on a legacy hierarchy, is 2610automatically enabled on the v2 hierarchy so that perf events can 2611always be filtered by cgroup v2 path. The controller can still be 2612moved to a legacy hierarchy after v2 hierarchy is populated. 2613 2614 2615Non-normative information 2616------------------------- 2617 2618This section contains information that isn't considered to be a part of 2619the stable kernel API and so is subject to change. 2620 2621 2622CPU controller root cgroup process behaviour 2623~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2624 2625When distributing CPU cycles in the root cgroup each thread in this 2626cgroup is treated as if it was hosted in a separate child cgroup of the 2627root cgroup. This child cgroup weight is dependent on its thread nice 2628level. 2629 2630For details of this mapping see sched_prio_to_weight array in 2631kernel/sched/core.c file (values from this array should be scaled 2632appropriately so the neutral - nice 0 - value is 100 instead of 1024). 2633 2634 2635IO controller root cgroup process behaviour 2636~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2637 2638Root cgroup processes are hosted in an implicit leaf child node. 2639When distributing IO resources this implicit child node is taken into 2640account as if it was a normal child cgroup of the root cgroup with a 2641weight value of 200. 2642 2643 2644Namespace 2645========= 2646 2647Basics 2648------ 2649 2650cgroup namespace provides a mechanism to virtualize the view of the 2651"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone 2652flag can be used with clone(2) and unshare(2) to create a new cgroup 2653namespace. The process running inside the cgroup namespace will have 2654its "/proc/$PID/cgroup" output restricted to cgroupns root. The 2655cgroupns root is the cgroup of the process at the time of creation of 2656the cgroup namespace. 2657 2658Without cgroup namespace, the "/proc/$PID/cgroup" file shows the 2659complete path of the cgroup of a process. In a container setup where 2660a set of cgroups and namespaces are intended to isolate processes the 2661"/proc/$PID/cgroup" file may leak potential system level information 2662to the isolated processes. For example:: 2663 2664 # cat /proc/self/cgroup 2665 0::/batchjobs/container_id1 2666 2667The path '/batchjobs/container_id1' can be considered as system-data 2668and undesirable to expose to the isolated processes. cgroup namespace 2669can be used to restrict visibility of this path. For example, before 2670creating a cgroup namespace, one would see:: 2671 2672 # ls -l /proc/self/ns/cgroup 2673 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835] 2674 # cat /proc/self/cgroup 2675 0::/batchjobs/container_id1 2676 2677After unsharing a new namespace, the view changes:: 2678 2679 # ls -l /proc/self/ns/cgroup 2680 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183] 2681 # cat /proc/self/cgroup 2682 0::/ 2683 2684When some thread from a multi-threaded process unshares its cgroup 2685namespace, the new cgroupns gets applied to the entire process (all 2686the threads). This is natural for the v2 hierarchy; however, for the 2687legacy hierarchies, this may be unexpected. 2688 2689A cgroup namespace is alive as long as there are processes inside or 2690mounts pinning it. When the last usage goes away, the cgroup 2691namespace is destroyed. The cgroupns root and the actual cgroups 2692remain. 2693 2694 2695The Root and Views 2696------------------ 2697 2698The 'cgroupns root' for a cgroup namespace is the cgroup in which the 2699process calling unshare(2) is running. For example, if a process in 2700/batchjobs/container_id1 cgroup calls unshare, cgroup 2701/batchjobs/container_id1 becomes the cgroupns root. For the 2702init_cgroup_ns, this is the real root ('/') cgroup. 2703 2704The cgroupns root cgroup does not change even if the namespace creator 2705process later moves to a different cgroup:: 2706 2707 # ~/unshare -c # unshare cgroupns in some cgroup 2708 # cat /proc/self/cgroup 2709 0::/ 2710 # mkdir sub_cgrp_1 2711 # echo 0 > sub_cgrp_1/cgroup.procs 2712 # cat /proc/self/cgroup 2713 0::/sub_cgrp_1 2714 2715Each process gets its namespace-specific view of "/proc/$PID/cgroup" 2716 2717Processes running inside the cgroup namespace will be able to see 2718cgroup paths (in /proc/self/cgroup) only inside their root cgroup. 2719From within an unshared cgroupns:: 2720 2721 # sleep 100000 & 2722 [1] 7353 2723 # echo 7353 > sub_cgrp_1/cgroup.procs 2724 # cat /proc/7353/cgroup 2725 0::/sub_cgrp_1 2726 2727From the initial cgroup namespace, the real cgroup path will be 2728visible:: 2729 2730 $ cat /proc/7353/cgroup 2731 0::/batchjobs/container_id1/sub_cgrp_1 2732 2733From a sibling cgroup namespace (that is, a namespace rooted at a 2734different cgroup), the cgroup path relative to its own cgroup 2735namespace root will be shown. For instance, if PID 7353's cgroup 2736namespace root is at '/batchjobs/container_id2', then it will see:: 2737 2738 # cat /proc/7353/cgroup 2739 0::/../container_id2/sub_cgrp_1 2740 2741Note that the relative path always starts with '/' to indicate that 2742its relative to the cgroup namespace root of the caller. 2743 2744 2745Migration and setns(2) 2746---------------------- 2747 2748Processes inside a cgroup namespace can move into and out of the 2749namespace root if they have proper access to external cgroups. For 2750example, from inside a namespace with cgroupns root at 2751/batchjobs/container_id1, and assuming that the global hierarchy is 2752still accessible inside cgroupns:: 2753 2754 # cat /proc/7353/cgroup 2755 0::/sub_cgrp_1 2756 # echo 7353 > batchjobs/container_id2/cgroup.procs 2757 # cat /proc/7353/cgroup 2758 0::/../container_id2 2759 2760Note that this kind of setup is not encouraged. A task inside cgroup 2761namespace should only be exposed to its own cgroupns hierarchy. 2762 2763setns(2) to another cgroup namespace is allowed when: 2764 2765(a) the process has CAP_SYS_ADMIN against its current user namespace 2766(b) the process has CAP_SYS_ADMIN against the target cgroup 2767 namespace's userns 2768 2769No implicit cgroup changes happen with attaching to another cgroup 2770namespace. It is expected that the someone moves the attaching 2771process under the target cgroup namespace root. 2772 2773 2774Interaction with Other Namespaces 2775--------------------------------- 2776 2777Namespace specific cgroup hierarchy can be mounted by a process 2778running inside a non-init cgroup namespace:: 2779 2780 # mount -t cgroup2 none $MOUNT_POINT 2781 2782This will mount the unified cgroup hierarchy with cgroupns root as the 2783filesystem root. The process needs CAP_SYS_ADMIN against its user and 2784mount namespaces. 2785 2786The virtualization of /proc/self/cgroup file combined with restricting 2787the view of cgroup hierarchy by namespace-private cgroupfs mount 2788provides a properly isolated cgroup view inside the container. 2789 2790 2791Information on Kernel Programming 2792================================= 2793 2794This section contains kernel programming information in the areas 2795where interacting with cgroup is necessary. cgroup core and 2796controllers are not covered. 2797 2798 2799Filesystem Support for Writeback 2800-------------------------------- 2801 2802A filesystem can support cgroup writeback by updating 2803address_space_operations->writepage[s]() to annotate bio's using the 2804following two functions. 2805 2806 wbc_init_bio(@wbc, @bio) 2807 Should be called for each bio carrying writeback data and 2808 associates the bio with the inode's owner cgroup and the 2809 corresponding request queue. This must be called after 2810 a queue (device) has been associated with the bio and 2811 before submission. 2812 2813 wbc_account_cgroup_owner(@wbc, @page, @bytes) 2814 Should be called for each data segment being written out. 2815 While this function doesn't care exactly when it's called 2816 during the writeback session, it's the easiest and most 2817 natural to call it as data segments are added to a bio. 2818 2819With writeback bio's annotated, cgroup support can be enabled per 2820super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for 2821selective disabling of cgroup writeback support which is helpful when 2822certain filesystem features, e.g. journaled data mode, are 2823incompatible. 2824 2825wbc_init_bio() binds the specified bio to its cgroup. Depending on 2826the configuration, the bio may be executed at a lower priority and if 2827the writeback session is holding shared resources, e.g. a journal 2828entry, may lead to priority inversion. There is no one easy solution 2829for the problem. Filesystems can try to work around specific problem 2830cases by skipping wbc_init_bio() and using bio_associate_blkg() 2831directly. 2832 2833 2834Deprecated v1 Core Features 2835=========================== 2836 2837- Multiple hierarchies including named ones are not supported. 2838 2839- All v1 mount options are not supported. 2840 2841- The "tasks" file is removed and "cgroup.procs" is not sorted. 2842 2843- "cgroup.clone_children" is removed. 2844 2845- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file 2846 at the root instead. 2847 2848 2849Issues with v1 and Rationales for v2 2850==================================== 2851 2852Multiple Hierarchies 2853-------------------- 2854 2855cgroup v1 allowed an arbitrary number of hierarchies and each 2856hierarchy could host any number of controllers. While this seemed to 2857provide a high level of flexibility, it wasn't useful in practice. 2858 2859For example, as there is only one instance of each controller, utility 2860type controllers such as freezer which can be useful in all 2861hierarchies could only be used in one. The issue is exacerbated by 2862the fact that controllers couldn't be moved to another hierarchy once 2863hierarchies were populated. Another issue was that all controllers 2864bound to a hierarchy were forced to have exactly the same view of the 2865hierarchy. It wasn't possible to vary the granularity depending on 2866the specific controller. 2867 2868In practice, these issues heavily limited which controllers could be 2869put on the same hierarchy and most configurations resorted to putting 2870each controller on its own hierarchy. Only closely related ones, such 2871as the cpu and cpuacct controllers, made sense to be put on the same 2872hierarchy. This often meant that userland ended up managing multiple 2873similar hierarchies repeating the same steps on each hierarchy 2874whenever a hierarchy management operation was necessary. 2875 2876Furthermore, support for multiple hierarchies came at a steep cost. 2877It greatly complicated cgroup core implementation but more importantly 2878the support for multiple hierarchies restricted how cgroup could be 2879used in general and what controllers was able to do. 2880 2881There was no limit on how many hierarchies there might be, which meant 2882that a thread's cgroup membership couldn't be described in finite 2883length. The key might contain any number of entries and was unlimited 2884in length, which made it highly awkward to manipulate and led to 2885addition of controllers which existed only to identify membership, 2886which in turn exacerbated the original problem of proliferating number 2887of hierarchies. 2888 2889Also, as a controller couldn't have any expectation regarding the 2890topologies of hierarchies other controllers might be on, each 2891controller had to assume that all other controllers were attached to 2892completely orthogonal hierarchies. This made it impossible, or at 2893least very cumbersome, for controllers to cooperate with each other. 2894 2895In most use cases, putting controllers on hierarchies which are 2896completely orthogonal to each other isn't necessary. What usually is 2897called for is the ability to have differing levels of granularity 2898depending on the specific controller. In other words, hierarchy may 2899be collapsed from leaf towards root when viewed from specific 2900controllers. For example, a given configuration might not care about 2901how memory is distributed beyond a certain level while still wanting 2902to control how CPU cycles are distributed. 2903 2904 2905Thread Granularity 2906------------------ 2907 2908cgroup v1 allowed threads of a process to belong to different cgroups. 2909This didn't make sense for some controllers and those controllers 2910ended up implementing different ways to ignore such situations but 2911much more importantly it blurred the line between API exposed to 2912individual applications and system management interface. 2913 2914Generally, in-process knowledge is available only to the process 2915itself; thus, unlike service-level organization of processes, 2916categorizing threads of a process requires active participation from 2917the application which owns the target process. 2918 2919cgroup v1 had an ambiguously defined delegation model which got abused 2920in combination with thread granularity. cgroups were delegated to 2921individual applications so that they can create and manage their own 2922sub-hierarchies and control resource distributions along them. This 2923effectively raised cgroup to the status of a syscall-like API exposed 2924to lay programs. 2925 2926First of all, cgroup has a fundamentally inadequate interface to be 2927exposed this way. For a process to access its own knobs, it has to 2928extract the path on the target hierarchy from /proc/self/cgroup, 2929construct the path by appending the name of the knob to the path, open 2930and then read and/or write to it. This is not only extremely clunky 2931and unusual but also inherently racy. There is no conventional way to 2932define transaction across the required steps and nothing can guarantee 2933that the process would actually be operating on its own sub-hierarchy. 2934 2935cgroup controllers implemented a number of knobs which would never be 2936accepted as public APIs because they were just adding control knobs to 2937system-management pseudo filesystem. cgroup ended up with interface 2938knobs which were not properly abstracted or refined and directly 2939revealed kernel internal details. These knobs got exposed to 2940individual applications through the ill-defined delegation mechanism 2941effectively abusing cgroup as a shortcut to implementing public APIs 2942without going through the required scrutiny. 2943 2944This was painful for both userland and kernel. Userland ended up with 2945misbehaving and poorly abstracted interfaces and kernel exposing and 2946locked into constructs inadvertently. 2947 2948 2949Competition Between Inner Nodes and Threads 2950------------------------------------------- 2951 2952cgroup v1 allowed threads to be in any cgroups which created an 2953interesting problem where threads belonging to a parent cgroup and its 2954children cgroups competed for resources. This was nasty as two 2955different types of entities competed and there was no obvious way to 2956settle it. Different controllers did different things. 2957 2958The cpu controller considered threads and cgroups as equivalents and 2959mapped nice levels to cgroup weights. This worked for some cases but 2960fell flat when children wanted to be allocated specific ratios of CPU 2961cycles and the number of internal threads fluctuated - the ratios 2962constantly changed as the number of competing entities fluctuated. 2963There also were other issues. The mapping from nice level to weight 2964wasn't obvious or universal, and there were various other knobs which 2965simply weren't available for threads. 2966 2967The io controller implicitly created a hidden leaf node for each 2968cgroup to host the threads. The hidden leaf had its own copies of all 2969the knobs with ``leaf_`` prefixed. While this allowed equivalent 2970control over internal threads, it was with serious drawbacks. It 2971always added an extra layer of nesting which wouldn't be necessary 2972otherwise, made the interface messy and significantly complicated the 2973implementation. 2974 2975The memory controller didn't have a way to control what happened 2976between internal tasks and child cgroups and the behavior was not 2977clearly defined. There were attempts to add ad-hoc behaviors and 2978knobs to tailor the behavior to specific workloads which would have 2979led to problems extremely difficult to resolve in the long term. 2980 2981Multiple controllers struggled with internal tasks and came up with 2982different ways to deal with it; unfortunately, all the approaches were 2983severely flawed and, furthermore, the widely different behaviors 2984made cgroup as a whole highly inconsistent. 2985 2986This clearly is a problem which needs to be addressed from cgroup core 2987in a uniform way. 2988 2989 2990Other Interface Issues 2991---------------------- 2992 2993cgroup v1 grew without oversight and developed a large number of 2994idiosyncrasies and inconsistencies. One issue on the cgroup core side 2995was how an empty cgroup was notified - a userland helper binary was 2996forked and executed for each event. The event delivery wasn't 2997recursive or delegatable. The limitations of the mechanism also led 2998to in-kernel event delivery filtering mechanism further complicating 2999the interface. 3000 3001Controller interfaces were problematic too. An extreme example is 3002controllers completely ignoring hierarchical organization and treating 3003all cgroups as if they were all located directly under the root 3004cgroup. Some controllers exposed a large amount of inconsistent 3005implementation details to userland. 3006 3007There also was no consistency across controllers. When a new cgroup 3008was created, some controllers defaulted to not imposing extra 3009restrictions while others disallowed any resource usage until 3010explicitly configured. Configuration knobs for the same type of 3011control used widely differing naming schemes and formats. Statistics 3012and information knobs were named arbitrarily and used different 3013formats and units even in the same controller. 3014 3015cgroup v2 establishes common conventions where appropriate and updates 3016controllers so that they expose minimal and consistent interfaces. 3017 3018 3019Controller Issues and Remedies 3020------------------------------ 3021 3022Memory 3023~~~~~~ 3024 3025The original lower boundary, the soft limit, is defined as a limit 3026that is per default unset. As a result, the set of cgroups that 3027global reclaim prefers is opt-in, rather than opt-out. The costs for 3028optimizing these mostly negative lookups are so high that the 3029implementation, despite its enormous size, does not even provide the 3030basic desirable behavior. First off, the soft limit has no 3031hierarchical meaning. All configured groups are organized in a global 3032rbtree and treated like equal peers, regardless where they are located 3033in the hierarchy. This makes subtree delegation impossible. Second, 3034the soft limit reclaim pass is so aggressive that it not just 3035introduces high allocation latencies into the system, but also impacts 3036system performance due to overreclaim, to the point where the feature 3037becomes self-defeating. 3038 3039The memory.low boundary on the other hand is a top-down allocated 3040reserve. A cgroup enjoys reclaim protection when it's within its 3041effective low, which makes delegation of subtrees possible. It also 3042enjoys having reclaim pressure proportional to its overage when 3043above its effective low. 3044 3045The original high boundary, the hard limit, is defined as a strict 3046limit that can not budge, even if the OOM killer has to be called. 3047But this generally goes against the goal of making the most out of the 3048available memory. The memory consumption of workloads varies during 3049runtime, and that requires users to overcommit. But doing that with a 3050strict upper limit requires either a fairly accurate prediction of the 3051working set size or adding slack to the limit. Since working set size 3052estimation is hard and error prone, and getting it wrong results in 3053OOM kills, most users tend to err on the side of a looser limit and 3054end up wasting precious resources. 3055 3056The memory.high boundary on the other hand can be set much more 3057conservatively. When hit, it throttles allocations by forcing them 3058into direct reclaim to work off the excess, but it never invokes the 3059OOM killer. As a result, a high boundary that is chosen too 3060aggressively will not terminate the processes, but instead it will 3061lead to gradual performance degradation. The user can monitor this 3062and make corrections until the minimal memory footprint that still 3063gives acceptable performance is found. 3064 3065In extreme cases, with many concurrent allocations and a complete 3066breakdown of reclaim progress within the group, the high boundary can 3067be exceeded. But even then it's mostly better to satisfy the 3068allocation from the slack available in other groups or the rest of the 3069system than killing the group. Otherwise, memory.max is there to 3070limit this type of spillover and ultimately contain buggy or even 3071malicious applications. 3072 3073Setting the original memory.limit_in_bytes below the current usage was 3074subject to a race condition, where concurrent charges could cause the 3075limit setting to fail. memory.max on the other hand will first set the 3076limit to prevent new charges, and then reclaim and OOM kill until the 3077new limit is met - or the task writing to memory.max is killed. 3078 3079The combined memory+swap accounting and limiting is replaced by real 3080control over swap space. 3081 3082The main argument for a combined memory+swap facility in the original 3083cgroup design was that global or parental pressure would always be 3084able to swap all anonymous memory of a child group, regardless of the 3085child's own (possibly untrusted) configuration. However, untrusted 3086groups can sabotage swapping by other means - such as referencing its 3087anonymous memory in a tight loop - and an admin can not assume full 3088swappability when overcommitting untrusted jobs. 3089 3090For trusted jobs, on the other hand, a combined counter is not an 3091intuitive userspace interface, and it flies in the face of the idea 3092that cgroup controllers should account and limit specific physical 3093resources. Swap space is a resource like all others in the system, 3094and that's why unified hierarchy allows distributing it separately. 3095