.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Miscellaneous cgroup Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy,
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting using the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
      Consider cgroup namespaces as delegation boundaries. This
      option is system wide and can only be set on mount or modified
      through remount from the init namespace. The mount option is
      ignored on non-init namespace mounts. Please refer to the
      Delegation section for details.

  favordynmods
      Reduce the latencies of dynamic cgroup modifications such as
      task migrations and controller on/offs at the cost of making
      hot path operations such as forks and exits more expensive.
      The static usage pattern of creating a cgroup, enabling
      controllers, and then seeding it with CLONE_INTO_CGROUP is
      not affected by this option.

  memory_localevents
      Only populate memory.events with data for the current cgroup,
      and not any subtrees. This is legacy behavior; the default
      behavior without this option is to include subtree counts.
      This option is system wide and can only be set on mount or
      modified through remount from the init namespace. The mount
      option is ignored on non-init namespace mounts.

  memory_recursiveprot
      Recursively apply memory.min and memory.low protection to
      entire subtrees, without requiring explicit downward
      propagation into leaf cgroups. This allows protecting entire
      subtrees from one another, while retaining free competition
      within those subtrees. This should have been the default
      behavior but is a mount option to avoid regressing setups
      relying on the original semantics (e.g. specifying bogusly
      high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
      Count HugeTLB memory usage towards the cgroup's overall
      memory usage for the memory controller (for the purpose of
      statistics reporting and memory protection). This is a new
      behavior that could regress existing setups, so it must be
      explicitly opted in with this mount option.

      A few caveats to keep in mind:

      * There is no HugeTLB pool management involved in the memory
        controller. The pre-allocated pool does not belong to anyone.
        Specifically, when a new HugeTLB folio is allocated to
        the pool, it is not accounted for from the perspective of the
        memory controller. It is only charged to a cgroup when it is
        actually used (e.g. at page fault time). Host memory
        overcommit management has to consider this when configuring
        hard limits. In general, HugeTLB pool management should be
        done via other mechanisms (such as the HugeTLB controller).
      * Failure to charge a HugeTLB folio to the memory controller
        results in SIGBUS. This could happen even if the HugeTLB pool
        still has pages available (but the cgroup limit is hit and
        reclaim attempt fails).
      * Charging HugeTLB memory towards the memory controller affects
        memory protection and reclaim dynamics. Any userspace tuning
        (e.g. of the low and min limits) needs to take this into
        account.
      * HugeTLB pages utilized while this option is not selected
        will not be tracked by the memory controller (even if cgroup
        v2 is remounted later on).

  pids_localevents
      This option restores the v1-like behavior of pids.events:max,
      that is, only local (inside cgroup proper) fork failures are
      counted. Without this option, pids.events:max represents any
      pids.max enforcement across the cgroup's subtree.
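
As an illustration, these are ordinary mount options and can be
combined. A mount invocation like the following (mount point and
option selection are examples only) enables namespace delegation and
recursive memory protection in one go::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup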


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back, or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is single direction::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids
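
A minimal sketch (directory names, $PID and $TID are hypothetical): a
domain cgroup becomes a threaded domain once one of its children is
marked threaded, after which individual threads can be distributed
among the threaded children::

  # mkdir svc svc/workers
  # echo $PID > svc/cgroup.procs
  # echo threaded > svc/workers/cgroup.type
  # cat svc/cgroup.type
  domain threaded
  # echo $TID > svc/workers/cgroup.threads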


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0. After
the one process in C exits, B and C's "populated" fields would flip to
"0" and file modified events will be generated on the "cgroup.events"
files of both cgroups.


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or they all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
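
A quick sketch of this behavior (directory names hypothetical,
assuming "memory" shows up in the parent's "cgroup.controllers")::

  # mkdir parent parent/child
  # cat parent/cgroup.controllers
  cpu io memory
  # echo "+memory" > parent/cgroup.subtree_control
  # ls parent/child/memory.max
  parent/child/memory.max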


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
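
For the first method, a minimal sketch of handing a sub-hierarchy to
an unprivileged user (user name and path are hypothetical) could look
like the following; note that ownership of the directory is granted
along with the three files listed above, while the resource control
files (e.g. "memory.max") stay owned by the parent's owner::

  # mkdir /sys/fs/cgroup/delegated
  # chown u0:u0 /sys/fs/cgroup/delegated \
        /sys/fs/cgroup/delegated/cgroup.procs \
        /sys/fs/cgroup/delegated/cgroup.threads \
        /sys/fs/cgroup/delegated/cgroup.subtree_control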


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lower case alphabets and
'_'s but never begins with an '_', so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes the major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.
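
To make the models concrete, here is a sketch of how the weight, limit
and protection examples named above are typically configured (cgroup
path, device numbers and values are illustrative only); allocations,
where implemented, are configured analogously::

  # echo 200 > A/cpu.weight             # weight: 2x the default share
  # echo "8:0 rbps=2097152" > A/io.max  # limit: cap reads at 2MB/s on 8:0
  # echo 512M > A/memory.low            # protection: best-effort 512M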


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

      VAL0\n
      VAL1\n
      ...

  Space separated values
  (when read-only or multiple values can be written at once)

      VAL0 VAL1 ...\n

  Flat keyed

      KEY0 VAL0\n
      KEY1 VAL1\n
      ...

  Nested keyed

      KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
      KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
      ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
      A read-write single value file which exists on non-root
      cgroups.

      When read, it indicates the current type of the cgroup, which
      can be one of the following values.

      - "domain" : A normal valid domain cgroup.

      - "domain threaded" : A threaded domain cgroup which is
        serving as the root of a threaded subtree.

      - "domain invalid" : A cgroup which is in an invalid state.
        It can't be populated or have controllers enabled. It may
        be allowed to become a threaded cgroup.

      - "threaded" : A threaded cgroup which is a member of a
        threaded subtree.

      A cgroup can be turned into a threaded cgroup by writing
      "threaded" to this file.

  cgroup.procs
      A read-write new-line separated values file which exists on
      all cgroups.

      When read, it lists the PIDs of all processes which belong to
      the cgroup one-per-line. The PIDs are not ordered and the
      same PID may show up more than once if the process got moved
      to another cgroup and then back, or the PID got recycled while
      reading.

      A PID can be written to migrate the process associated with
      the PID to the cgroup. The writer should match all of the
      following conditions.

      - It must have write access to the "cgroup.procs" file.

      - It must have write access to the "cgroup.procs" file of the
        common ancestor of the source and destination cgroups.

      When delegating a sub-hierarchy, write access to this file
      should be granted along with the containing directory.

      In a threaded cgroup, reading this file fails with EOPNOTSUPP
      as all the processes belong to the thread root. Writing is
      supported and moves every thread of the process to the cgroup.

  cgroup.threads
      A read-write new-line separated values file which exists on
      all cgroups.

      When read, it lists the TIDs of all threads which belong to
      the cgroup one-per-line. The TIDs are not ordered and the
      same TID may show up more than once if the thread got moved to
      another cgroup and then back, or the TID got recycled while
      reading.

      A TID can be written to migrate the thread associated with the
      TID to the cgroup. The writer should match all of the
      following conditions.

      - It must have write access to the "cgroup.threads" file.

      - The cgroup that the thread is currently in must be in the
        same resource domain as the destination cgroup.

      - It must have write access to the "cgroup.procs" file of the
        common ancestor of the source and destination cgroups.

      When delegating a sub-hierarchy, write access to this file
      should be granted along with the containing directory.

  cgroup.controllers
      A read-only space separated values file which exists on all
      cgroups.

      It shows a space separated list of all controllers available to
      the cgroup. The controllers are not ordered.

  cgroup.subtree_control
      A read-write space separated values file which exists on all
      cgroups. Starts out empty.

      When read, it shows a space separated list of the controllers
      which are enabled to control resource distribution from the
      cgroup to its children.

      A space separated list of controllers prefixed with '+' or '-'
      can be written to enable or disable controllers. A controller
      name prefixed with '+' enables the controller and '-' disables
      it. If a controller appears more than once on the list, the
      last one is effective. When multiple enable and disable
      operations are specified, either all succeed or all fail.
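
      An illustrative session (which controllers are available varies
      with kernel configuration and what the parent has enabled)::

        # cat cgroup.controllers
        cpu io memory
        # echo "+cpu +memory" > cgroup.subtree_control
        # cat cgroup.subtree_control
        cpu memory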

  cgroup.events
      A read-only flat-keyed file which exists on non-root cgroups.
      The following entries are defined. Unless specified
      otherwise, a value change in this file generates a file
      modified event.

        populated
            1 if the cgroup or its descendants contains any live
            processes; otherwise, 0.
        frozen
            1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
      A read-write single value file. The default is "max".

      Maximum allowed number of descendant cgroups.
      If the actual number of descendants is equal or larger,
      an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
      A read-write single value file. The default is "max".

      Maximum allowed descent depth below the current cgroup.
      If the actual descent depth is equal or larger,
      an attempt to create a new child cgroup will fail.

  cgroup.stat
      A read-only flat-keyed file with the following entries:

        nr_descendants
            Total number of visible descendant cgroups.

        nr_dying_descendants
            Total number of dying descendant cgroups. A cgroup
            becomes dying after being deleted by a user. The cgroup
            will remain in the dying state for some undefined time
            (which can depend on system load) before being completely
            destroyed.

            A process can't enter a dying cgroup under any
            circumstances, and a dying cgroup can't revive.

            A dying cgroup can consume system resources not exceeding
            the limits which were active at the moment of cgroup
            deletion.

        nr_subsys_<cgroup_subsys>
            Total number of live cgroup subsystems (e.g. memory
            cgroup) at and beneath the current cgroup.

        nr_dying_subsys_<cgroup_subsys>
            Total number of dying cgroup subsystems (e.g. memory
            cgroup) at and beneath the current cgroup.

  cgroup.freeze
      A read-write single value file which exists on non-root cgroups.
      Allowed values are "0" and "1". The default is "0".

      Writing "1" to the file causes freezing of the cgroup and all
      descendant cgroups. This means that all belonging processes
      will be stopped and will not run until the cgroup is explicitly
      unfrozen. Freezing of the cgroup may take some time; when this
      action is completed, the "frozen" value in the cgroup.events
      control file will be updated to "1" and the corresponding
      notification will be issued.

      A cgroup can be frozen either by its own settings, or by
      settings of any ancestor cgroups. If any of the ancestor
      cgroups is frozen, the cgroup will remain frozen.

      Processes in the frozen cgroup can be killed by a fatal signal.
      They also can enter and leave a frozen cgroup: either by an
      explicit move by a user, or if freezing of the cgroup races
      with fork(). If a process is moved to a frozen cgroup, it
      stops. If a process is moved out of a frozen cgroup, it
      becomes running.

      The frozen status of a cgroup doesn't affect any cgroup tree
      operations: it's possible to delete a frozen (and empty)
      cgroup, as well as create new sub-cgroups.
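
      A small sketch (path hypothetical): because freezing is
      asynchronous, completion is observed through "cgroup.events"
      rather than assumed from the write returning::

        # echo 1 > workload/cgroup.freeze
        # cat workload/cgroup.events
        populated 1
        frozen 1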

  cgroup.kill
      A write-only single value file which exists in non-root
      cgroups. The only allowed value is "1".

      Writing "1" to the file causes the cgroup and all descendant
      cgroups to be killed. This means that all processes located in
      the affected cgroup tree will be killed via SIGKILL.

      Killing a cgroup tree will deal with concurrent forks
      appropriately and is protected against migrations.

      In a threaded cgroup, writing this file fails with EOPNOTSUPP
      as killing cgroups is a process directed operation, i.e. it
      affects the whole thread-group.

  cgroup.pressure
      A read-write single value file whose allowed values are "0"
      and "1". The default is "1".

      Writing "0" to the file will disable the cgroup PSI accounting.
      Writing "1" to the file will re-enable the cgroup PSI
      accounting.

      This control attribute is not hierarchical, so disabling or
      enabling PSI accounting in a cgroup does not affect PSI
      accounting in descendants, and enabling it does not require it
      to be enabled in the ancestors up from the root.

      The reason this control attribute exists is that PSI accounts
      stalls for each cgroup separately and aggregates it at each
      level of the hierarchy. This may cause non-negligible overhead
      for some workloads deep in the hierarchy, in which case this
      control attribute can be used to disable PSI accounting in the
      non-leaf cgroups.

  irq.pressure
      A read-write nested-keyed file.

      Shows pressure stall information for IRQ/SOFTIRQ. See
      :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates the distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and an absolute bandwidth allocation model
for realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal base and it does not account for the frequency at which tasks
are executed. The (optional) utilization clamping support allows
hinting the schedutil cpufreq governor about the minimum desired
frequency which should always be provided by a CPU, as well as the
maximum desired frequency, which should not be exceeded by a CPU.

WARNING: The cgroup2 cpu controller doesn't yet support the
(bandwidth) control of realtime processes. For a kernel built with
the CONFIG_RT_GROUP_SCHED option enabled for group scheduling of
realtime processes, the cpu controller can only be enabled when all RT
processes are in the root cgroup. Be aware that system management
software may already have placed RT processes into non-root cgroups
during the system boot process, and these processes may need to be
moved to the root cgroup before the cpu controller can be enabled with
a CONFIG_RT_GROUP_SCHED enabled kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply
and some of the interface files either affect realtime processes or
account for them. See the following section for details. Only the
cpu controller is affected by CONFIG_RT_GROUP_SCHED. Other
controllers can be used for the resource control of realtime processes
irrespective of CONFIG_RT_GROUP_SCHED.
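
As a convenience sketch for the CONFIG_RT_GROUP_SCHED case (field
handling, paths and $TID are illustrative only), realtime tasks can be
located and their processes moved back to the root cgroup before
enabling the cpu controller::

  # ps -eLo tid,rtprio,comm | awk 'NR>1 && $2 != "-"'   # threads with an RT priority
  # echo $TID > /sys/fs/cgroup/cgroup.procs             # move the whole process to root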


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its
scheduling policy and the underlying scheduler. From the point of view
of the cpu controller, processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight`` callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a
  BPF scheduler without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a
BPF scheduler, check out
:ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.

For each of the following interface files, the above categories will
be referred to. All time durations are in microseconds.

  cpu.stat
      A read-only flat-keyed file.
      This file exists whether the controller is enabled or not.

      It always reports the following three stats, which account for
      all the processes in the cgroup:

      - usage_usec
      - user_usec
      - system_usec

      and the following five when the controller is enabled, which
      account for only the processes under the fair-class scheduler:

      - nr_periods
      - nr_throttled
      - throttled_usec
      - nr_bursts
      - burst_usec

  cpu.weight
      A read-write single value file which exists on non-root
      cgroups. The default is "100".

      For non-idle groups (cpu.idle = 0), the weight is in the
      range [1, 10000].

      If the cgroup has been configured to be SCHED_IDLE
      (cpu.idle = 1), then the weight will show as a 0.

      This file affects only processes under the fair-class scheduler
      and a BPF scheduler with the ``cgroup_set_weight`` callback,
      depending on what the callback actually does.

  cpu.weight.nice
      A read-write single value file which exists on non-root
      cgroups. The default is "0".

      The nice value is in the range [-20, 19].

      This interface file is an alternative interface for
      "cpu.weight" and allows reading and setting weight using the
      same values used by nice(2). Because the range is smaller and
      granularity is coarser for the nice values, the read value is
      the closest approximation of the current weight.

      This file affects only processes under the fair-class scheduler
      and a BPF scheduler with the ``cgroup_set_weight`` callback,
      depending on what the callback actually does.

  cpu.max
      A read-write two value file which exists on non-root cgroups.
      The default is "max 100000".

      The maximum bandwidth limit. It's in the following format::

        $MAX $PERIOD

      which indicates that the group may consume up to $MAX in each
      $PERIOD duration. "max" for $MAX indicates no limit. If only
      one number is written, $MAX is updated.

      This file affects only processes under the fair-class scheduler.

  cpu.max.burst
      A read-write single value file which exists on non-root
      cgroups. The default is "0".

      The burst in the range [0, $MAX].

      This file affects only processes under the fair-class scheduler.

  cpu.pressure
      A read-write nested-keyed file.

      Shows pressure stall information for CPU. See
      :ref:`Documentation/accounting/psi.rst <psi>` for details.

      This file accounts for all the processes in the cgroup.
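
      An illustrative read (the numbers vary; see psi.rst for the
      exact field semantics)::

        # cat cpu.pressure
        some avg10=0.00 avg60=0.00 avg300=0.00 total=0
        full avg10=0.00 avg60=0.00 avg300=0.00 total=0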

  cpu.uclamp.min
      A read-write single value file which exists on non-root cgroups.
      The default is "0", i.e. no utilization boosting.

      The requested minimum utilization (protection) as a percentage
      rational number, e.g. 12.34 for 12.34%.

      This interface allows reading and setting minimum utilization
      clamp values similar to sched_setattr(2). This minimum
      utilization value is used to clamp the task specific minimum
      utilization clamp, including those of realtime processes.

      The requested minimum utilization (protection) is always capped
      by the current value for the maximum utilization (limit), i.e.
      `cpu.uclamp.max`.

      This file affects all the processes in the cgroup.

  cpu.uclamp.max
      A read-write single value file which exists on non-root cgroups.
      The default is "max", i.e. no utilization capping.

      The requested maximum utilization (limit) as a percentage
      rational number, e.g. 98.76 for 98.76%.

      This interface allows reading and setting maximum utilization
      clamp values similar to sched_setattr(2). This maximum
      utilization value is used to clamp the task specific maximum
      utilization clamp, including those of realtime processes.

      This file affects all the processes in the cgroup.

  cpu.idle
      A read-write single value file which exists on non-root cgroups.
      The default is 0.

      This is the cgroup analog of the per-task SCHED_IDLE sched
      policy. Setting this value to a 1 will make the scheduling
      policy of the cgroup SCHED_IDLE. The threads inside the cgroup
      will retain their own relative priorities, but the cgroup itself
      will be treated as very low priority relative to its peers.

      This file affects only processes under the fair-class scheduler.

Memory
------

The "memory" controller regulates the distribution of memory. Memory
is stateful and implements both limit and protection models. Due to
the intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
      A read-only single value file which exists on non-root
      cgroups.

      The total amount of memory currently being used by the cgroup
      and its descendants.

  memory.min
      A read-write single value file which exists on non-root
      cgroups. The default is "0".

      Hard memory protection. If the memory usage of a cgroup
      is within its effective min boundary, the cgroup's memory
      won't be reclaimed under any conditions. If there is no
      unprotected reclaimable memory available, the OOM killer
      is invoked. Above the effective min boundary (or
      effective low boundary if it is higher), pages are reclaimed
      proportionally to the overage, reducing reclaim pressure for
      smaller overages.

      The effective min boundary is limited by the memory.min values
      of all ancestor cgroups. If there is memory.min overcommitment
      (child cgroups are requiring more protected memory than the
      parent will allow), then each child cgroup will get the part of
      the parent's protection proportional to its actual memory usage
      below memory.min.

      Putting more memory than generally available under this
      protection is discouraged and may lead to constant OOMs.

      If a memory cgroup is not populated with processes,
      its memory.min is ignored.
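
      A worked example of the proportional split (numbers
      illustrative): a parent with memory.min=1G and two children
      which both set memory.min=1G but currently use 600M and 200M
      of protected memory respectively would have the parent's 1G of
      protection split roughly 3:1, i.e. about 768M and 256M of
      effective protection.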

  memory.low
      A read-write single value file which exists on non-root
      cgroups. The default is "0".

      Best-effort memory protection. If the memory usage of a
      cgroup is within its effective low boundary, the cgroup's
      memory won't be reclaimed unless there is no reclaimable
      memory available in unprotected cgroups.
      Above the effective low boundary (or
      effective min boundary if it is higher), pages are reclaimed
      proportionally to the overage, reducing reclaim pressure for
      smaller overages.

      The effective low boundary is limited by the memory.low values
      of all ancestor cgroups. If there is memory.low overcommitment
      (child cgroups are requiring more protected memory than the
      parent will allow), then each child cgroup will get the part of
      the parent's protection proportional to its actual memory usage
      below memory.low.

      Putting more memory than generally available under this
      protection is discouraged.

  memory.high
      A read-write single value file which exists on non-root
      cgroups. The default is "max".

      Memory usage throttle limit. If a cgroup's usage goes
      over the high boundary, the processes of the cgroup are
      throttled and put under heavy reclaim pressure.

      Going over the high limit never invokes the OOM killer and
      under extreme conditions the limit may be breached. The high
      limit should be used in scenarios where an external process
      monitors the limited cgroup to alleviate heavy reclaim
      pressure.

  memory.max
      A read-write single value file which exists on non-root
      cgroups. The default is "max".

      Memory usage hard limit. This is the main mechanism to limit
      memory usage of a cgroup. If a cgroup's memory usage reaches
      this limit and can't be reduced, the OOM killer is invoked in
      the cgroup. Under certain circumstances, the usage may go
      over the limit temporarily.

      In the default configuration, regular 0-order allocations
      always succeed unless the OOM killer chooses the current task
      as a victim.

      Some kinds of allocations don't invoke the OOM killer.
      The caller could retry them differently, return them into
      userspace as -ENOMEM or silently ignore them in cases like
      disk readahead.

  memory.reclaim
      A write-only nested-keyed file which exists for all cgroups.

      This is a simple interface to trigger memory reclaim in the
      target cgroup.

      Example::

        echo "1G" > memory.reclaim

      Please note that the kernel can over or under reclaim from
      the target cgroup. If fewer bytes are reclaimed than the
      specified amount, -EAGAIN is returned.

      Please note that the proactive reclaim (triggered by this
      interface) is not meant to indicate memory pressure on the
      memory cgroup. Therefore socket memory balancing triggered by
      the memory reclaim normally is not exercised in this case.
      This means that the networking layer will not adapt based on
      reclaim induced by memory.reclaim.

      The following nested keys are defined.

        ==========  ================================
        swappiness  Swappiness value to reclaim with
        ==========  ================================

      Specifying a swappiness value instructs the kernel to perform
      the reclaim with that swappiness value. Note that this has the
      same semantics as vm.swappiness applied to memcg reclaim with
      all the existing limitations and potential future extensions.
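
      A hypothetical invocation combining an amount with the nested
      key (both values are examples only)::

        # echo "512M swappiness=0" > memory.reclaim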

  memory.peak
      A read-write single value file which exists on non-root
      cgroups.

      The max memory usage recorded for the cgroup and its
      descendants since either the creation of the cgroup or the
      most recent reset for that FD.

      A write of any non-empty string to this file resets it to the
      current memory usage for subsequent reads through the same
      file descriptor.

  memory.oom.group
      A read-write single value file which exists on non-root
      cgroups. The default value is "0".

      Determines whether the cgroup should be treated as
      an indivisible workload by the OOM killer. If set,
      all tasks belonging to the cgroup or to its descendants
      (if the memory cgroup is not a leaf cgroup) are killed
      together or not at all. This can be used to avoid
      partial kills to guarantee workload integrity.

      Tasks with OOM protection (oom_score_adj set to -1000)
      are treated as an exception and are never killed.

      If the OOM killer is invoked in a cgroup, it's not going
      to kill any tasks outside of this cgroup, regardless of
      the memory.oom.group values of ancestor cgroups.

  memory.events
      A read-only flat-keyed file which exists on non-root cgroups.
      The following entries are defined. Unless specified
      otherwise, a value change in this file generates a file
      modified event.

      Note that all fields in this file are hierarchical and the
      file modified event can be generated due to an event down the
      hierarchy. For the local events at the cgroup level see
      memory.events.local.

        low
            The number of times the cgroup is reclaimed due to
            high memory pressure even though its usage is under
            the low boundary. This usually indicates that the low
            boundary is over-committed.

        high
            The number of times processes of the cgroup are
            throttled and routed to perform direct memory reclaim
            because the high memory boundary was exceeded. For a
            cgroup whose memory usage is capped by the high limit
            rather than global memory pressure, this event's
            occurrences are expected.

        max
            The number of times the cgroup's memory usage was
            about to go over the max boundary. If direct reclaim
            fails to bring it down, the cgroup goes to OOM state.

        oom
            The number of times the cgroup's memory usage reached
            the limit and allocation was about to fail.

            This event is not raised if the OOM killer is not
            considered as an option, e.g. for failed high-order
            allocations or if the caller asked to not retry attempts.

  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in
        a fixed position; use the keys to look up specific values!

        Entries tagged 'npn' (non-per-node) have no per-node counter
        and do not show up in memory.numa_stat.

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS).  Note that
                some kernel configurations might account complete
                larger allocations (e.g., THP) if only some, but not
                all, the memory of such an allocation is mapped
                anymore.

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel (npn)
                Amount of total kernel memory, including
                (kernel_stack, pagetables, percpu, vmalloc, slab) in
                addition to other kernel memory use cases.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          pagetables
                Amount of memory allocated for page tables.

          sec_pagetables
                Amount of memory allocated for secondary page tables;
                this currently includes KVM mmu allocations on x86
                and arm64, and IOMMU page tables.

          percpu (npn)
                Amount of memory used for storing per-cpu kernel
                data structures.

          sock (npn)
                Amount of memory used in network transmission
                buffers.

          vmalloc (npn)
                Amount of memory used for vmap backed memory.

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, and shared anonymous
                mmap()s.

          zswap
                Amount of memory consumed by the zswap compression
                backend.

          zswapped
                Amount of application memory swapped out to zswap.

          file_mapped
                Amount of cached filesystem data mapped with mmap().
                Note that some kernel configurations might account
                complete larger allocations (e.g., THP) if only some,
                but not all, the memory of such an allocation is
                mapped.

          file_dirty
                Amount of cached filesystem data that was modified
                but not yet written back to disk.

          file_writeback
                Amount of cached filesystem data that was modified
                and is currently being written back to disk.

          swapcached
                Amount of swap cached in memory.  The swapcache is
                accounted against both memory and swap usage.

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages.

          file_thp
                Amount of cached filesystem data backed by
                transparent hugepages.

          shmem_thp
                Amount of shm, tmpfs, and shared anonymous mmap()s
                backed by transparent hugepages.

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm.

                As these represent internal list state (e.g. shmem
                pages are on anon memory management lists),
                inactive_foo + active_foo may not be equal to the
                value for the foo counter, since the foo counter is
                type-based, not list-based.

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          slab (npn)
                Amount of memory used for storing in-kernel data
                structures.

          workingset_refault_anon
                Number of refaults of previously evicted anonymous
                pages.

          workingset_refault_file
                Number of refaults of previously evicted file pages.

          workingset_activate_anon
                Number of refaulted anonymous pages that were
                immediately activated.

          workingset_activate_file
                Number of refaulted file pages that were immediately
                activated.

          workingset_restore_anon
                Number of restored anonymous pages which have been
                detected as an active workingset before they got
                reclaimed.

          workingset_restore_file
                Number of restored file pages which have been
                detected as an active workingset before they got
                reclaimed.

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed.

          pswpin (npn)
                Number of pages swapped into memory.

          pswpout (npn)
                Number of pages swapped out of memory.

          pgscan (npn)
                Amount of pages scanned (in an inactive LRU list).

          pgsteal (npn)
                Amount of pages reclaimed.

          pgscan_kswapd (npn)
                Amount of pages scanned by kswapd (in an inactive LRU
                list).

          pgscan_direct (npn)
                Amount of pages scanned directly (in an inactive LRU
                list).

          pgscan_khugepaged (npn)
                Amount of pages scanned by khugepaged (in an inactive
                LRU list).

          pgscan_proactive (npn)
                Amount of pages scanned proactively (in an inactive
                LRU list).

          pgsteal_kswapd (npn)
                Amount of pages reclaimed by kswapd.

          pgsteal_direct (npn)
                Amount of pages reclaimed directly.

          pgsteal_khugepaged (npn)
                Amount of pages reclaimed by khugepaged.

          pgsteal_proactive (npn)
                Amount of pages reclaimed proactively.

          pgfault (npn)
                Total number of page faults incurred.

          pgmajfault (npn)
                Number of major page faults incurred.

          pgrefill (npn)
                Amount of pages scanned (in an active LRU list).

          pgactivate (npn)
                Amount of pages moved to the active LRU list.

          pgdeactivate (npn)
                Amount of pages moved to the inactive LRU list.

          pglazyfree (npn)
                Amount of pages postponed to be freed under memory
                pressure.

          pglazyfreed (npn)
                Amount of reclaimed lazyfree pages.

          swpin_zero
                Number of pages swapped into memory and filled with
                zero, where I/O was optimized out because the page
                content was detected to be zero during swapout.

          swpout_zero
                Number of zero-filled pages swapped out with I/O
                skipped due to the content being detected as zero.

          zswpin
                Number of pages moved into memory from zswap.

          zswpout
                Number of pages moved out of memory to zswap.

          zswpwb
                Number of pages written from zswap to swap.

          thp_fault_alloc (npn)
                Number of transparent hugepages which were allocated
                to satisfy a page fault.  This counter is not present
                when CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_collapse_alloc (npn)
                Number of transparent hugepages which were allocated
                to allow collapsing an existing range of pages.  This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_swpout (npn)
                Number of transparent hugepages which were swapped
                out in one piece without splitting.

          thp_swpout_fallback (npn)
                Number of transparent hugepages which were split
                before swapout, usually because contiguous swap space
                could not be allocated for the huge page.

          numa_pages_migrated (npn)
                Number of pages migrated by NUMA balancing.

          numa_pte_updates (npn)
                Number of pages whose page table entries are modified
                by NUMA balancing to produce NUMA hinting faults on
                access.

          numa_hint_faults (npn)
                Number of NUMA hinting faults.

          pgdemote_kswapd
                Number of pages demoted by kswapd.

          pgdemote_direct
                Number of pages demoted directly.

          pgdemote_khugepaged
                Number of pages demoted by khugepaged.

          pgdemote_proactive
                Number of pages demoted proactively.

          hugetlb
                Amount of memory used by hugetlb pages.  This metric
                only shows up if hugetlb usage is accounted for in
                memory.current (i.e. the cgroup is mounted with the
                memory_hugetlb_accounting option).

  memory.numa_stat
        A read-only nested-keyed file which exists on non-root
        cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        per node on the state of the memory management system.

        This is useful for providing visibility into the NUMA
        locality information within a memcg, since the pages are
        allowed to be allocated from any physical node.  One use case
        is evaluating application performance by combining this
        information with the application's CPU allocation.

        All memory amounts are in bytes.

        The output format of memory.numa_stat is::

          type N0=<bytes in node 0> N1=<bytes in node 1> ...

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in
        a fixed position; use the keys to look up specific values!

        The entry names follow those of memory.stat.
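
        An illustrative read output on a two-node system (the byte
        values here are made up)::

          anon N0=695640064 N1=1109393408
          file N0=2146406400 N1=898228224
          anon_thp N0=2097152 N1=0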

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Swap usage throttle limit.  If a cgroup's swap usage exceeds
        this limit, all its further allocations will be throttled to
        allow userspace to implement custom out-of-memory procedures.

        This limit marks a point of no return for the cgroup.  It is
        NOT designed to manage the amount of swapping a workload does
        during regular operation.  Compare to memory.swap.max, which
        prohibits swapping past a set amount, but lets the cgroup
        continue unimpeded as long as other memory can be reclaimed.

        Healthy workloads are not expected to reach this limit.

  memory.swap.peak
        A read-write single value file which exists on non-root
        cgroups.

        The max swap usage recorded for the cgroup and its
        descendants since the creation of the cgroup or the most
        recent reset for that FD.

        A write of any non-empty string to this file resets it to the
        current swap usage for subsequent reads through the same file
        descriptor.

  memory.swap.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Swap usage hard limit.  If a cgroup's swap usage reaches this
        limit, anonymous memory of the cgroup will not be swapped
        out.

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          high
                The number of times the cgroup's swap usage was over
                the high threshold.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or the max
                limit.

        When the limit is reduced under the current usage, the
        existing swap entries are reclaimed gradually and the swap
        usage may stay higher than the limit for an extended period
        of time.  This reduces the impact on the workload and memory
        management.

  memory.zswap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory consumed by the zswap compression
        backend.

  memory.zswap.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Zswap usage hard limit.  If a cgroup's zswap pool reaches
        this limit, it will refuse to take any more stores before
        existing entries fault back in or are written out to disk.

  memory.zswap.writeback
        A read-write single value file.  The default value is "1".
        Note that this setting is hierarchical, i.e. the writeback
        would be implicitly disabled for child cgroups if the upper
        hierarchy does so.

        When this is set to 0, all swapping attempts to swapping
        devices are disabled.  This includes both zswap writeback and
        swapping due to zswap store failures.  If the zswap store
        failures are recurring (e.g. if the pages are
        incompressible), users can observe reclaim inefficiency after
        disabling writeback (because the same pages might be rejected
        again and again).

        Note that this is subtly different from setting
        memory.swap.max to 0, as it still allows for pages to be
        written to the zswap pool.  This setting has no effect if
        zswap is disabled, and swapping is allowed unless
        memory.swap.max is set to 0.

  memory.pressure
        A read-only nested-keyed file.

        Shows pressure stall information for memory.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.
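
        An illustrative read output (the format is defined by PSI;
        the numbers here are made up)::

          some avg10=0.12 avg60=0.08 avg300=0.02 total=13491
          full avg10=0.00 avg60=0.01 avg300=0.00 total=5412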


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also perform
just as well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" file described above provides such a
measure.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterministic; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight-based and absolute bandwidth or
IOPS limit distribution; however, weight-based distribution is
available only if cfq-iosched is in use and neither scheme is
available for blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
        A read-only nested-keyed file.

        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
        The following nested keys are defined.

          ====== =====================
          rbytes Bytes read
          wbytes Bytes written
          rios   Number of read IOs
          wios   Number of write IOs
          dbytes Bytes discarded
          dios   Number of discard IOs
          ====== =====================

        An example read output follows::

          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
        A read-write nested-keyed file which exists only on the root
        cgroup.

        This file configures the Quality of Service of the IO cost
        model based controller (CONFIG_BLK_CGROUP_IOCOST) which
        currently implements "io.weight" proportional control.  Lines
        are keyed by $MAJ:$MIN device numbers and not ordered.
        The line for a given device is populated on the first write
        for the device on "io.cost.qos" or "io.cost.model".  The
        following nested keys are defined.

          ====== =====================================
          enable Weight-based control enable
          ctrl   "auto" or "user"
          rpct   Read latency percentile [0, 100]
          rlat   Read latency threshold
          wpct   Write latency percentile [0, 100]
          wlat   Write latency threshold
          min    Minimum scaling percentage [1, 10000]
          max    Maximum scaling percentage [1, 10000]
          ====== =====================================

        The controller is disabled by default and can be enabled by
        setting "enable" to 1.  "rpct" and "wpct" parameters default
        to zero and the controller uses internal device saturation
        state to adjust the overall IO rate between "min" and "max".

        When better control quality is needed, latency QoS parameters
        can be configured.  For example::

          8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00

        shows that on sdb, the controller is enabled, will consider
        the device saturated if the 95th percentile of read
        completion latencies is above 75ms or write 150ms, and adjust
        the overall IO issue rate between 50% and 150% accordingly.

        The lower the saturation point, the better the latency QoS at
        the cost of aggregate bandwidth.  The narrower the allowed
        adjustment range between "min" and "max", the more closely
        the IO behavior conforms to the cost model.  Note that the IO
        issue base rate may be far off from 100% and setting "min"
        and "max" blindly can lead to a significant loss of device
        capacity or control quality.  "min" and "max" are useful for
        regulating devices which show wide temporary behavior changes
        - e.g. an SSD which accepts writes at line speed for a while
        and then completely stalls for multiple seconds.

        When "ctrl" is "auto", the parameters are controlled by the
        kernel and may change automatically.  Setting "ctrl" to
        "user" or setting any of the percentile and latency
        parameters puts it into "user" mode and disables the
        automatic changes.  The automatic mode can be restored by
        setting "ctrl" to "auto".

  io.cost.model
        A read-write nested-keyed file which exists only on the root
        cgroup.

        This file configures the cost model of the IO cost model
        based controller (CONFIG_BLK_CGROUP_IOCOST) which currently
        implements "io.weight" proportional control.  Lines are keyed
        by $MAJ:$MIN device numbers and not ordered.  The line for a
        given device is populated on the first write for the device
        on "io.cost.qos" or "io.cost.model".  The following nested
        keys are defined.

          ===== ================================
          ctrl  "auto" or "user"
          model The cost model in use - "linear"
          ===== ================================

        When "ctrl" is "auto", the kernel may change all parameters
        dynamically.  When "ctrl" is set to "user" or any other
        parameter is written to, "ctrl" becomes "user" and the
        automatic changes are disabled.

        When "model" is "linear", the following model parameters are
        defined.

          ============= ========================================
          [r|w]bps      The maximum sequential IO throughput
          [r|w]seqiops  The maximum 4k sequential IOs per second
          [r|w]randiops The maximum 4k random IOs per second
          ============= ========================================
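
        An illustrative read output for a device with a linear model
        (the device number and coefficient values here are made up)::

          8:16 ctrl=user model=linear rbps=174019176 rseqiops=41708 rrandiops=370 wbps=178075866 wseqiops=42705 wrandiops=378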

        From the above, the builtin linear model determines the base
        costs of a sequential and random IO and the cost coefficient
        for the IO size.  While simple, this model can cover most
        common device classes acceptably.

        The IO cost model isn't expected to be accurate in an
        absolute sense and is scaled to the device behavior
        dynamically.

        If needed, tools/cgroup/iocost_coef_gen.py can be used to
        generate device-specific coefficients.

  io.weight
        A read-write flat-keyed file which exists on non-root
        cgroups.  The default is "default 100".

        The first line is the default weight applied to devices
        without specific override.  The rest are overrides keyed by
        $MAJ:$MIN device numbers and not ordered.  The weights are in
        the range [1, 10000] and specify the relative amount of IO
        time the cgroup can use in relation to its siblings.

        The default weight can be updated by writing either "default
        $WEIGHT" or simply "$WEIGHT".  Overrides can be set by
        writing "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN
        default".

        An example read output follows::

          default 100
          8:16 200
          8:0 50

  io.max
        A read-write nested-keyed file which exists on non-root
        cgroups.

        BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
        device numbers and not ordered.  The following nested keys
        are defined.

          ===== ==================================
          rbps  Max read bytes per second
          wbps  Max write bytes per second
          riops Max read IO operations per second
          wiops Max write IO operations per second
          ===== ==================================

        When writing, any number of nested key-value pairs can be
        specified in any order.  "max" can be specified as the value
        to remove a specific limit.  If the same key is specified
        multiple times, the outcome is undefined.

        BPS and IOPS are measured in each IO direction and IOs are
        delayed if the limit is reached.  Temporary bursts are
        allowed.

        Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

          echo "8:16 rbps=2097152 wiops=120" > io.max

        Reading returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=120

        Write IOPS limit can be removed by writing the following::

          echo "8:16 wiops=max" > io.max

        Reading now returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
        A read-only nested-keyed file.

        Shows pressure stall information for IO.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.
The memory controller defines the memory domain for which the dirty
memory ratio is calculated and maintained, and the io controller
defines the io domain which writes out dirty pages for the memory
domain.  Both system-wide and per-cgroup dirty memory states are
examined and the more restrictive of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected.  It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
        These ratios apply the same to cgroup writeback with the
        amount of available memory capped by limits imposed by the
        memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
        For cgroup writeback, this is calculated into a ratio against
        total available memory and applied the same way as
        vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that have
a higher latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy.  This
means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each other.
Group G will influence nobody::

                        [root]
                /          |            \
                A          B            C
               /  \        |
              D    F       G


So the ideal way to configure this is to set io.latency in groups A,
B, and C.  Generally you do not want to set a value lower than the
latency your device supports.  Experiment to find the value that
works best for your workload.  Start higher than the expected latency
for your device and watch the avg_lat value in io.stat for your
workload group to get an idea of the latency you see during normal
operation.  Use the avg_lat value as a basis for your real setting,
setting at 10-15% higher than the value in io.stat.
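
For example, if the avg_lat observed in io.stat for the workload
group on device 8:16 hovers around 65000, a reasonable starting point
would be roughly 15% above it (the device number and values here are
hypothetical; the target is given in microseconds)::

  # echo "8:16 target=75000" > io.latency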

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving: as long as everybody is meeting their
latency target, the controller doesn't do anything.  Once a group
starts missing its target it begins throttling any peer group that
has a higher target than itself.  This throttling takes 2 forms:

- Queue depth throttling.  This is the number of outstanding IOs a
  group is allowed to have.  We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups.  This includes swapping and metadata IO.  These
  types of IO are allowed to occur normally, however they are
  "charged" to the originating group.  If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase.  The delay value is the number of microseconds
  being added to any process that runs in this group.  Because this
  number can grow quite large if there is a lot of swapping or
  metadata IO occurring, we limit the individual delay events to 1
  second at a time.

Once the victimized group starts meeting its latency target again, it
will start unthrottling any peer groups that were throttled
previously.  If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
        This takes a similar format as the other controllers.

          "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
        If the controller is enabled you will see extra stats in
        io.stat in addition to the normal ones.

          depth
                This is the current queue depth for the group.

          avg_lat
                This is an exponential moving average with a decay
                rate of 1/exp bound by the sampling interval.  The
                decay rate interval can be calculated by multiplying
                the win value in io.stat by the corresponding number
                of samples based on the win value.

          win
                The sampling window size in milliseconds.  This is
                the minimum duration of time between evaluation
                events.  Windows only elapse with IO activity.  Idle
                periods extend the most recent window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup
policy, namely the io.prio.class attribute.  The following values are
accepted for that attribute:

  no-change
        Do not modify the I/O priority class.

  promote-to-rt
        For requests that have a non-RT I/O priority class, change it
        into RT.  Also change the priority level of these requests to
        4.  Do not modify the I/O priority of requests that have
        priority class RT.

  restrict-to-be
        For requests that do not have an I/O priority class or that
        have I/O priority class RT, change it into BE.  Also change
        the priority level of these requests to 0.  Do not modify the
        I/O priority class of requests that have priority class
        IDLE.

  idle
        Change the I/O priority class of all requests into IDLE, the
        lowest I/O priority class.

  none-to-rt
        Deprecated.  Just an alias for promote-to-rt.

The following numerical values are associated with the I/O priority
policies:

+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+

The numerical value that corresponds to each I/O priority class is as
follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as
follows:

- If the I/O priority class policy is promote-to-rt, change the
  request I/O priority class to IOPRIO_CLASS_RT and change the
  request I/O priority level to 4.
- If the I/O priority class policy is not promote-to-rt, translate
  the I/O priority class policy into a number, then change the
  request I/O priority class into the maximum of the I/O priority
  class policy number and the numerical I/O priority class.

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller.  For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Hard limit of number of processes.

  pids.current
        A read-only single value file which exists on non-root
        cgroups.

        The number of processes currently in the cgroup and its
        descendants.

  pids.peak
        A read-only single value file which exists on non-root
        cgroups.

        The maximum value that the number of processes in the cgroup
        and its descendants has ever reached.

  pids.events
        A read-only flat-keyed file which exists on non-root cgroups.
        Unless specified otherwise, a value change in this file
        generates a file modified event.  The following entries are
        defined.

          max
                The number of times the cgroup's total number of
                processes hit the pids.max limit (see also
                pids_localevents).

  pids.events.local
        Similar to pids.events but the fields in the file are local
        to the cgroup, i.e. not hierarchical.  The file modified
        event generated on this file reflects only the local events.

Organisational operations are not blocked by cgroup policies, so it
is possible to have pids.current > pids.max.  This can be done by
either setting the limit to be smaller than pids.current, or
attaching enough processes to the cgroup such that pids.current is
larger than pids.max.
However, it is not possible to violate a cgroup PID policy through
fork() or clone().  These will return -EAGAIN if the creation of a
new process would cause a cgroup policy to be violated.


Cpuset
------

The "cpuset" controller provides a mechanism for constraining the CPU
and memory node placement of tasks to only the resources specified in
the cpuset interface files in a task's current cgroup.  This is
especially valuable on large NUMA systems where placing jobs on
properly sized subsets of the system with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical.  That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested CPUs to be used by tasks within this
        cgroup.  The actual list of CPUs to be granted, however, is
        subject to constraints imposed by its parent and can differ
        from the requested CPUs.

        The CPU numbers are comma-separated numbers or ranges.  For
        example::

          # cat cpuset.cpus
          0-4,6,8-10

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.cpus" or all the available CPUs if none is found.

        The value of "cpuset.cpus" stays constant until the next
        update and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined CPUs that are actually granted to this
        cgroup by its parent.  These CPUs are allowed to be used by
        tasks within the current cgroup.

        If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
        shows all the CPUs from the parent cgroup that are available
        to be used by this cgroup.  Otherwise, it should be a subset
        of "cpuset.cpus" unless none of the CPUs listed in
        "cpuset.cpus" can be granted.  In this case, it will be
        treated just like an empty "cpuset.cpus".

        Its value will be affected by CPU hotplug events.

  cpuset.mems
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested memory nodes to be used by tasks
        within this cgroup.  The actual list of memory nodes granted,
        however, is subject to constraints imposed by its parent and
        can differ from the requested memory nodes.

        The memory node numbers are comma-separated numbers or
        ranges.  For example::

          # cat cpuset.mems
          0-1,3

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.mems" or all the available memory nodes if none is
        found.

        The value of "cpuset.mems" stays constant until the next
        update and won't be affected by any memory node hotplug
        events.

        Setting a non-empty value to "cpuset.mems" causes memory of
        tasks within the cgroup to be migrated to the designated
        nodes if they are currently using memory outside of the
        designated nodes.

        There is a cost for this memory migration.
        The migration may not be complete and some memory pages may
        be left behind.  It is therefore recommended that
        "cpuset.mems" be set properly before spawning new tasks into
        the cpuset.  Even if there is a need to change "cpuset.mems"
        with active tasks, it shouldn't be done frequently.

  cpuset.mems.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined memory nodes that are actually granted
        to this cgroup by its parent.  These memory nodes are allowed
        to be used by tasks within the current cgroup.

        If "cpuset.mems" is empty, it shows all the memory nodes from
        the parent cgroup that will be available to be used by this
        cgroup.  Otherwise, it should be a subset of "cpuset.mems"
        unless none of the memory nodes listed in "cpuset.mems" can
        be granted.  In this case, it will be treated just like an
        empty "cpuset.mems".

        Its value will be affected by memory node hotplug events.

  cpuset.cpus.exclusive
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists all the exclusive CPUs that are allowed to be used
        to create a new cpuset partition.  Its value is not used
        unless the cgroup becomes a valid partition root.  See the
        "cpuset.cpus.partition" section below for a description of
        what a cpuset partition is.

        When the cgroup becomes a partition root, the actual
        exclusive CPUs that are allocated to that partition are
        listed in "cpuset.cpus.exclusive.effective" which may be
        different from "cpuset.cpus.exclusive".  If
        "cpuset.cpus.exclusive" has previously been set,
        "cpuset.cpus.exclusive.effective" is always a subset of it.

        Users can manually set it to a value that is different from
        "cpuset.cpus".  One constraint in setting it is that the list
        of CPUs must be exclusive with respect to
        "cpuset.cpus.exclusive" of its sibling.  If
        "cpuset.cpus.exclusive" of a sibling cgroup isn't set, its
        "cpuset.cpus" value, if set, cannot be a subset of it, so
        that at least one CPU remains available when the exclusive
        CPUs are taken away.

        For a parent cgroup, any one of its exclusive CPUs can only
        be distributed to at most one of its child cgroups.  Having
        an exclusive CPU appearing in two or more of its child
        cgroups is not allowed (the exclusivity rule).  A value that
        violates the exclusivity rule will be rejected with a write
        error.

        The root cgroup is a partition root and all its available
        CPUs are in its exclusive CPU set.

  cpuset.cpus.exclusive.effective
        A read-only multiple values file which exists on all non-root
        cpuset-enabled cgroups.

        This file shows the effective set of exclusive CPUs that can
        be used to create a partition root.  The content of this file
        will always be a subset of its parent's
        "cpuset.cpus.exclusive.effective" if its parent is not the
        root cgroup.  It will also be a subset of
        "cpuset.cpus.exclusive" if it is set.  If
        "cpuset.cpus.exclusive" is not set, it is treated as having
        an implicit value of "cpuset.cpus" in the formation of a
        local partition.

  cpuset.cpus.isolated
        A read-only multiple values file which exists only on the
        root cgroup.

        This file shows the set of all isolated CPUs used in existing
        isolated partitions.  It will be empty if no isolated
        partition is created.

  cpuset.cpus.partition
        A read-write single value file which exists on non-root
        cpuset-enabled cgroups.  This flag is owned by the parent
        cgroup and is not delegatable.

        It accepts only the following input values when written to.

          ========== =====================================
          "member"   Non-root member of a partition
          "root"     Partition root
          "isolated" Partition root without load balancing
          ========== =====================================

        A cpuset partition is a collection of cpuset-enabled cgroups
        with a partition root at the top of the hierarchy and its
        descendants except those that are separate partition roots
        themselves and their descendants.  A partition has exclusive
        access to the set of exclusive CPUs allocated to it.  Other
        cgroups outside of that partition cannot use any CPUs in that
        set.

        There are two types of partitions - local and remote.  A
        local partition is one whose parent cgroup is also a valid
        partition root.  A remote partition is one whose parent
        cgroup is not a valid partition root itself.  Writing to
        "cpuset.cpus.exclusive" is optional for the creation of a
        local partition as its "cpuset.cpus.exclusive" file will
        assume an implicit value that is the same as "cpuset.cpus" if
        it is not set.  Writing the proper "cpuset.cpus.exclusive"
        values down the cgroup hierarchy before the target partition
        root is mandatory for the creation of a remote partition.

        Currently, a remote partition cannot be created under a local
        partition.  All the ancestors of a remote partition root
        except the root cgroup cannot be a partition root.

        The root cgroup is always a partition root and its state
        cannot be changed.  All other non-root cgroups start out as
        "member".

        When set to "root", the current cgroup is the root of a new
        partition or scheduling domain.  The set of exclusive CPUs is
        determined by the value of its
        "cpuset.cpus.exclusive.effective".

        When set to "isolated", the CPUs in that partition will be in
        an isolated state without any load balancing from the
        scheduler and excluded from the unbound workqueues.  Tasks
        placed in such a partition with multiple CPUs should be
        carefully distributed and bound to each of the individual
        CPUs for optimal performance.

        A partition root ("root" or "isolated") can be in one of two
        possible states - valid or invalid.  An invalid partition
        root is in a degraded state where some state information may
        be retained, but behaves more like a "member".

        All possible state transitions among "member", "root" and
        "isolated" are allowed.

        On read, the "cpuset.cpus.partition" file can show the
        following values.

          ============================= =====================================
          "member"                      Non-root member of a partition
          "root"                        Partition root
          "isolated"                    Partition root without load balancing
          "root invalid (<reason>)"     Invalid partition root
          "isolated invalid (<reason>)" Invalid isolated partition root
          ============================= =====================================

        In the case of an invalid partition root, a descriptive
        string on why the partition is invalid is included within
        parentheses.

        For a local partition root to be valid, the following
        conditions must be met.

        1) The parent cgroup is a valid partition root.
        2) The "cpuset.cpus.exclusive.effective" file cannot be
           empty, though it may contain offline CPUs.
        3) The "cpuset.cpus.effective" cannot be empty unless there
           is no task associated with this partition.

        For a remote partition root to be valid, all the above
        conditions except the first one must be met.

        External events like hotplug or changes to "cpuset.cpus" or
        "cpuset.cpus.exclusive" can cause a valid partition root to
        become invalid and vice versa.  Note that a task cannot be
        moved to a cgroup with an empty "cpuset.cpus.effective".

        A valid non-root parent partition may distribute out all its
        CPUs to its child local partitions when there is no task
        associated with it.

        Care must be taken when changing a valid partition root to
        "member", as all its child local partitions, if present, will
        become invalid, causing disruption to tasks running in those
        child partitions.  These inactivated partitions could be
        recovered if their parent is switched back to a partition
        root with a proper value in "cpuset.cpus" or
        "cpuset.cpus.exclusive".

        Poll and inotify events are triggered whenever the state of
        "cpuset.cpus.partition" changes.  That includes changes
        caused by writes to "cpuset.cpus.partition", CPU hotplug or
        other changes that modify the validity status of the
        partition.  This will allow user space agents to monitor
        unexpected changes to "cpuset.cpus.partition" without the
        need to do continuous polling.

        A user can pre-configure certain CPUs to an isolated state
        with load balancing disabled at boot time with the "isolcpus"
        kernel boot command line option.  If those CPUs are to be put
        into a partition, they have to be used in an isolated
        partition.
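
        As a minimal sketch, a local isolated partition could be
        carved out as follows (the cgroup name and CPU numbers are
        hypothetical, and the cpuset controller is assumed to be
        enabled in the parent's "cgroup.subtree_control")::

          # cd /sys/fs/cgroup
          # mkdir rt-part
          # echo 2-3 > rt-part/cpuset.cpus
          # echo isolated > rt-part/cpuset.cpus.partition
          # cat rt-part/cpuset.cpus.partition
          isolated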


Device controller
-----------------

The device controller manages access to device files.  It includes
both creation of new device files (using mknod), and access to the
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE
and attach them to cgroups with the BPF_CGROUP_DEVICE flag.  On an
attempt to access a device file, corresponding BPF programs will be
executed, and depending on the return value the attempt will succeed
or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers).  If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
        A read-write nested-keyed file which exists for all cgroups
        except the root.  It describes the currently configured
        resource limits for RDMA/IB devices.

        Lines are keyed by device name and are not ordered.  Each
        line contains a space-separated resource name and its
        configured limit that can be distributed.

        The following nested keys are defined.

          ========== =============================
          hca_handle Maximum number of HCA Handles
          hca_object Maximum number of HCA Objects
          ========== =============================

        An example for mlx4 and ocrdma device follows::

          mlx4_0 hca_handle=2 hca_object=2000
          ocrdma1 hca_handle=3 hca_object=max

  rdma.current
        A read-only file that describes current resource usage.  It
        exists for all cgroups except the root.

        An example for mlx4 and ocrdma device follows::

          mlx4_0 hca_handle=1 hca_object=20
          ocrdma1 hca_handle=1 hca_object=23

DMEM
----

The "dmem" controller regulates the distribution and accounting of
device memory regions.  Because each memory region may have its own
page size, which does not have to be equal to the system page size,
the units are always bytes.

DMEM Interface Files
~~~~~~~~~~~~~~~~~~~~

  dmem.max, dmem.min, dmem.low
        A read-write nested-keyed file which exists for all cgroups
        except the root.  It describes the currently configured
        resource limit for a region.

        An example for xe follows::

          drm/0000:03:00.0/vram0 1073741824
          drm/0000:03:00.0/stolen max

        The semantics are the same as for the memory cgroup
        controller, and are calculated in the same way.

  dmem.capacity
        A read-only file that describes maximum region capacity.  It
        only exists on the root cgroup.  Not all memory can be
        allocated by cgroups, as the kernel reserves some for
        internal use.

        An example for xe follows::

          drm/0000:03:00.0/vram0 8514437120
          drm/0000:03:00.0/stolen 67108864

  dmem.current
        A read-only file that describes current resource usage.  It
        exists for all cgroups except the root.

        An example for xe follows::

          drm/0000:03:00.0/vram0 12550144
          drm/0000:03:00.0/stolen 8650752

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
        Shows current usage for "hugepagesize" hugetlb.  It exists
        for all cgroups except the root.

  hugetlb.<hugepagesize>.max
        Set/show the hard limit of "hugepagesize" hugetlb usage.  The
        default value is "max".  It exists for all cgroups except the
        root.

  hugetlb.<hugepagesize>.events
        A read-only flat-keyed file which exists on non-root cgroups.

          max
                The number of allocation failures due to the HugeTLB
                limit.

  hugetlb.<hugepagesize>.events.local
        Similar to hugetlb.<hugepagesize>.events but the fields in
        the file are local to the cgroup, i.e. not hierarchical.  The
        file modified event generated on this file reflects only the
        local events.

  hugetlb.<hugepagesize>.numa_stat
        Similar to memory.numa_stat, it shows the numa information of
        the hugetlb pages of <hugepagesize> in this cgroup.  Only
        hugetlb pages that are actively in use are included.  The
        per-node values are in bytes.

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like
the other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{} in
the include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file.  The provider of
the resource must set its capacity prior to using the resource by
calling misc_cg_set_capacity().

Once a capacity is set then the resource usage can be updated using
charge and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered then:

  misc.capacity
        A read-only flat-keyed file shown only in the root cgroup.
        It shows miscellaneous scalar resources available on the
        platform along with their quantities::

          $ cat misc.capacity
          res_a 50
          res_b 10

  misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its
        children::

          $ cat misc.current
          res_a 3
          res_b 0

  misc.peak
        A read-only flat-keyed file shown in all cgroups.  It shows
        the historical maximum usage of the resources in the cgroup
        and its children::

          $ cat misc.peak
          res_a 10
          res_b 8

  misc.max
        A read-write flat-keyed file shown in the non-root cgroups.
        Allowed maximum usage of the resources in the cgroup and its
        children::

          $ cat misc.max
          res_a max
          res_b 4

        A limit can be set by::

          # echo res_a 1 > misc.max

        A limit can be set to max by::

          # echo res_a max > misc.max

        Limits can be set higher than the capacity value in the
        misc.capacity file.

  misc.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.  All fields in this file are hierarchical.

          max
                The number of times the cgroup's resource usage was
                about to go over the max boundary.

  misc.events.local
        Similar to misc.events but the fields in the file are local
        to the cgroup, i.e. not hierarchical.  The file modified
        event generated on this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource
is freed.  Migrating a process to a different cgroup does not move
the charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of
the root cgroup.  The weight of this child cgroup is dependent on the
nice level of its thread.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of
1024).
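
For example, assuming the mainline array values of 1024 for nice 0
and 15 for nice 19 (quoted here for illustration), the scaled
per-thread weights work out to::

  nice  0: 1024 * 100 / 1024 = 100
  nice 19:   15 * 100 / 1024 ~= 1.5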

The cgroupns root cgroup does not change even if the namespace
creator process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown. For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged. A task inside a
cgroup namespace should only be exposed to its own cgroupns
hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching process
under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root
as the filesystem root. The process needs CAP_SYS_ADMIN against its
user and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepages() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data. It
        associates the bio with the inode's owner cgroup and the
        corresponding request queue. This must be called after
        a queue (device) has been associated with the bio and
        before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it is easiest and most natural
        to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support, which is helpful
when certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and,
if the writeback session is holding shared resources, e.g. a journal
entry, this may lead to priority inversion. There is no one easy
solution for the problem. Filesystems can try to work around
specific problem cases by skipping wbc_init_bio() and using
bio_associate_blkg() directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller,
utility type controllers, such as freezer, which could be useful in
all hierarchies, could only be used in one. The issue was exacerbated
by the fact that controllers couldn't be moved to another hierarchy
once hierarchies were populated. Another issue was that all
controllers bound to a hierarchy were forced to have exactly the same
view of the hierarchy. It wasn't possible to vary the granularity
depending on the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy, and most configurations resorted to
putting each controller on its own hierarchy. Only closely related
ones, such as the cpu and cpuacct controllers, made sense to be put
on the same hierarchy. This often meant that userland ended up
managing multiple similar hierarchies, repeating the same steps on
each hierarchy whenever a hierarchy management operation was
necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which
meant that a thread's cgroup membership couldn't be described in
finite length. The key might contain any number of entries and was
unlimited in length, which made it highly awkward to manipulate and
led to the addition of controllers which existed only to identify
membership, which in turn exacerbated the original problem of a
proliferating number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different
cgroups. This didn't make sense for some controllers, and those
controllers ended up implementing different ways to ignore such
situations, but, much more importantly, it blurred the line between
the API exposed to individual applications and the system management
interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got
abused in combination with thread granularity. cgroups were delegated
to individual applications so that they could create and manage their
own sub-hierarchies and control resource distributions along them.
This effectively raised cgroup to the status of a syscall-like API
exposed to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path,
open and then read and/or write to it. This is not only extremely
clunky and unusual but also inherently racy. There is no conventional
way to define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs
to a system-management pseudo filesystem. cgroup ended up with
interface knobs which were not properly abstracted or refined and
directly revealed kernel internal details.
These knobs got exposed to individual applications through the
ill-defined delegation mechanism, effectively abusing cgroup as a
shortcut to implementing public APIs without going through the
required scrutiny.

This was painful for both userland and the kernel. Userland ended up
with misbehaving and poorly abstracted interfaces, and the kernel
ended up inadvertently exposing, and becoming locked into, such
constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and
its child cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads, which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and
treating all cgroups as if they were all located directly under the
root cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.
When a new cgroup was created, some controllers defaulted to not
imposing extra restrictions while others disallowed any resource
usage until explicitly configured. Configuration knobs for the same
type of control used widely differing naming schemes and formats.
Statistics and information knobs were named arbitrarily and used
different formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and
updates controllers so that they expose minimal and consistent
interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default. As a result, the set of cgroups that global
reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also
impacts system performance due to overreclaim, to the point where the
feature becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of
the available memory. The memory consumption of workloads varies
during runtime, and that requires users to overcommit. But doing that
with a strict upper limit requires either a fairly accurate
prediction of the working set size or adding slack to the limit.
Since working set size estimation is hard and error prone, and
getting it wrong results in OOM kills, most users tend to err on the
side of a looser limit and end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of
the system than killing the group.
Otherwise, memory.max is there to limit this type of spillover and
ultimately contain buggy or even malicious applications.

Setting the original memory.limit_in_bytes below the current usage
was subject to a race condition, where concurrent charges could cause
the limit setting to fail. memory.max on the other hand will first
set the limit to prevent new charges, and then reclaim and OOM kill
until the new limit is met - or the task writing to memory.max is
killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.
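
For example, with the v2 hierarchy mounted at /sys/fs/cgroup and a
hypothetical cgroup named "job", memory and swap can be limited
independently through their respective interface files::

  # echo 1G > /sys/fs/cgroup/job/memory.max
  # echo 512M > /sys/fs/cgroup/job/memory.swap.max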