.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is
used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
The cgroup core is primarily responsible for hierarchically
organizing processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy,
although there are utility controllers which serve purposes other
than resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect its already
existing descendant processes.

Following certain structural constraints, controllers may be enabled
or disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy cannot be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible
way.

A controller can be moved across hierarchies only after the
controller is no longer referenced in its current hierarchy. Because
per-cgroup controller states are destroyed asynchronously and
controllers may have lingering references, a controller may not show
up immediately on the v2 hierarchy after the final umount of the
previous hierarchy. Similarly, a controller should be fully disabled
to be moved out of the unified hierarchy and it may take some time
for the disabled controller to become available for other
hierarchies; furthermore, due to inter-controller dependencies, other
controllers may need to be disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to settle
on the hierarchies and controller associations before putting the
controllers to use after system boot.

During the transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.
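
For example, specific controllers can be withheld from v1 by
appending the parameter to the kernel command line; the controller
list below is illustrative::

  cgroup_no_v1=cpu,cpuset

or, to keep all controllers out of v1::

  cgroup_no_v1=all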

cgroup v2 currently supports the following mount options.

  nsdelegate
    Consider cgroup namespaces as delegation boundaries. This option
    is system wide and can only be set on mount or modified through
    remount from the init namespace. The mount option is ignored on
    non-init namespace mounts. Please refer to the Delegation
    section for details.

  favordynmods
    Reduce the latencies of dynamic cgroup modifications such as
    task migrations and controller on/offs at the cost of making hot
    path operations such as forks and exits more expensive. The
    static usage pattern of creating a cgroup, enabling controllers,
    and then seeding it with CLONE_INTO_CGROUP is not affected by
    this option.

  memory_localevents
    Only populate memory.events with data for the current cgroup,
    and not any subtrees. This is legacy behaviour; without this
    option, the default behaviour is to include subtree counts.
    This option is system wide and can only be set on mount or
    modified through remount from the init namespace. The mount
    option is ignored on non-init namespace mounts.

  memory_recursiveprot
    Recursively apply memory.min and memory.low protection to entire
    subtrees, without requiring explicit downward propagation into
    leaf cgroups. This allows protecting entire subtrees from one
    another, while retaining free competition within those subtrees.
    This should have been the default behavior but is a mount option
    to avoid regressing setups relying on the original semantics
    (e.g. specifying bogusly high 'bypass' protection values at
    higher tree levels).


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists, to which all processes
belong. A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back, or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be
migrated on a single write(2) call. If a process is composed of
multiple threads, writing the PID of any thread migrates all threads
of the process.
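
For example, assuming the v2 hierarchy is mounted on /sys/fs/cgroup,
the following moves a process into a freshly created cgroup; the
path and the PID are illustrative::

  # mkdir /sys/fs/cgroup/test-cgroup
  # echo 842 > /sys/fs/cgroup/test-cgroup/cgroup.procs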

When a process forks a child process, the new process is born into
the cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however,
a zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which
doesn't have any children and is associated only with zombie
processes is considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple
lines, one for each hierarchy. The entry for cgroup v2 is always in
the format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated
with is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution
across the threads of a group of processes. By default, all threads
of a process belong to the same cgroup, which also serves as the
resource domain to host resource consumptions which are not specific
to a process or thread. The thread mode allows threads to be spread
across a subtree while still maintaining the common resource domain
for them.

Controllers which support thread mode are called threaded
controllers. The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The
root of a threaded subtree, that is, the nearest ancestor which is
not threaded, is called the threaded domain or thread root
interchangeably and serves as the resource domain for the entire
subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because
the root cgroup is not subject to the no internal process
constraint, it can serve both as a threaded domain and a parent to
domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded
subtree, or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable
the thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any
  domain controllers enabled or populated domain children. The root
  is exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain
(invalid)" in these cases. Operations which fail due to invalid
topology use EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its
child cgroups becomes threaded or threaded controllers are enabled
in the "cgroup.subtree_control" file while there are processes in
the cgroup. A threaded domain reverts to a normal domain when the
conditions clear.
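
For example, the following turns a freshly created domain cgroup
into a threaded domain with two threaded children; the names are
illustrative::

  # mkdir -p grp/t1 grp/t2
  # echo threaded > grp/t1/cgroup.type
  # echo threaded > grp/t2/cgroup.type
  # cat grp/cgroup.type
  domain threaded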
While "cgroup.threads" can be 344written to in any cgroup, as it can only move threads inside the same 345threaded domain, its operations are confined inside each threaded 346subtree. 347 348The threaded domain cgroup serves as the resource domain for the whole 349subtree, and, while the threads can be scattered across the subtree, 350all the processes are considered to be in the threaded domain cgroup. 351"cgroup.procs" in a threaded domain cgroup contains the PIDs of all 352processes in the subtree and is not readable in the subtree proper. 353However, "cgroup.procs" can be written to from anywhere in the subtree 354to migrate all threads of the matching process to the cgroup. 355 356Only threaded controllers can be enabled in a threaded subtree. When 357a threaded controller is enabled inside a threaded subtree, it only 358accounts for and controls resource consumptions associated with the 359threads in the cgroup and its descendants. All consumptions which 360aren't tied to a specific thread belong to the threaded domain cgroup. 361 362Because a threaded subtree is exempt from no internal process 363constraint, a threaded controller must be able to handle competition 364between threads in a non-leaf cgroup and its child cgroups. Each 365threaded controller defines how such competitions are handled. 366 367Currently, the following controllers are threaded and can be enabled 368in a threaded cgroup:: 369 370- cpu 371- cpuset 372- perf_event 373- pids 374 375[Un]populated Notification 376-------------------------- 377 378Each non-root cgroup has a "cgroup.events" file which contains 379"populated" field indicating whether the cgroup's sub-hierarchy has 380live processes in it. Its value is 0 if there is no live process in 381the cgroup and its descendants; otherwise, 1. poll and [id]notify 382events are triggered when the value changes. This can be used, for 383example, to start a clean-up operation after all processes of a given 384sub-hierarchy have exited. The populated state updates and 385notifications are recursive. Consider the following sub-hierarchy 386where the numbers in the parentheses represent the numbers of processes 387in each cgroup:: 388 389 A(4) - B(0) - C(1) 390 \ D(0) 391 392A, B and C's "populated" fields would be 1 while D's 0. After the one 393process in C exits, B and C's "populated" fields would flip to "0" and 394file modified events will be generated on the "cgroup.events" files of 395both cgroups. 396 397 398Controlling Controllers 399----------------------- 400 401Enabling and Disabling 402~~~~~~~~~~~~~~~~~~~~~~ 403 404Each cgroup has a "cgroup.controllers" file which lists all 405controllers available for the cgroup to enable:: 406 407 # cat cgroup.controllers 408 cpu io memory 409 410No controller is enabled by default. Controllers can be enabled and 411disabled by writing to the "cgroup.subtree_control" file:: 412 413 # echo "+cpu +memory -io" > cgroup.subtree_control 414 415Only controllers which are listed in "cgroup.controllers" can be 416enabled. When multiple operations are specified as above, either they 417all succeed or fail. If multiple operations on the same controller 418are specified, the last one is effective. 419 420Enabling a controller in a cgroup indicates that the distribution of 421the target resource across its immediate children will be controlled. 422Consider the following sub-hierarchy. 


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either
they all succeed or all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be
controlled. Consider the following sub-hierarchy. The enabled
controllers are listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of the memory available to B will be
controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's
interface files in the child cgroups. In the above example, enabling
"cpu" on B would create the "cpu." prefixed controller interface
files in C and D. Likewise, disabling "memory" from B would remove
the "memory." prefixed controller interface files from C and D.
This means that the controller interface files - anything which
doesn't start with "cgroup." - are owned by the parent rather than
the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further
distribute a resource only if the resource has been distributed to
it from the parent. This means that all non-root
"cgroup.subtree_control" files can only contain controllers which
are enabled in the parent's "cgroup.subtree_control" file. A
controller can be enabled only if the parent has the controller
enabled and a controller can't be disabled if one or more children
have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other
words, only domain cgroups which don't contain any processes can
have domain controllers enabled in their "cgroup.subtree_control"
files.

This guarantees that, when a domain controller is looking at the
part of the hierarchy which has it enabled, processes are always
only on the leaves. This rules out situations where child cgroups
compete against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be
associated with any other cgroups and requires special treatment
from most controllers. How resource consumption in the root cgroup
is governed is up to each controller (for more information on this
topic please refer to the Non-normative information section in the
Controllers chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of
a populated cgroup. To control resource distribution of a cgroup,
the cgroup must create children and transfer all its processes to
the children before enabling controllers in its
"cgroup.subtree_control" file.
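
For example, the processes of a populated cgroup can be transferred
to a new leaf before enabling a controller; a sketch with
illustrative names, using one write per PID as "cgroup.procs"
requires::

  # mkdir A/leaf
  # for pid in $(cat A/cgroup.procs); do
  >     echo $pid > A/leaf/cgroup.procs
  > done
  # echo "+memory" > A/cgroup.subtree_control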


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its
"cgroup.procs", "cgroup.threads" and "cgroup.subtree_control" files
to the user. Second, if the "nsdelegate" mount option is set,
automatically to a cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types. Once
delegated, the user can build a sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute
the resources it received from the parent. The limits and other
settings of all resource controllers are hierarchical and regardless
of what happens in the delegated sub-hierarchy, nothing can escape
the resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
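
For example, the first delegation method boils down to making the
delegatee the owner of the directory and the three files; a sketch
with an illustrative user and path::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated \
             /sys/fs/cgroup/delegated/cgroup.procs \
             /sys/fs/cgroup/delegated/cgroup.threads \
             /sys/fs/cgroup/delegated/cgroup.subtree_control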


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root
euid to migrate a target process into a cgroup by writing its PID to
the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of
  the common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't
pull in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows
and all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0
would not have write access to its "cgroup.procs" file and thus the
write will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If
either is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive
operation and stateful resources such as memory are not moved
together with the process. This is an explicit design decision as
there often exist inherent trade-offs between migration and various
hot paths in terms of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration
through the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the
same directory and it is possible to create children cgroups which
collide with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name
and a dot. A controller's name is composed of lowercase letters and
'_'s but never begins with an '_', so '_' can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing
workloads such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes the major schemes in use along with their expected
behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of
its weight against the sum. As only children which can make use of
the resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.
This allows symmetric multiplicative biases in both directions at
fine enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active
children and is an example of this type.
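
For example, if three active children have weights of 100, 100 and
200, they receive 25%, 25% and 50% of the parent's resource
respectively; if the weight-200 child then stops using the resource,
the remaining two split it evenly because the distribution is
work-conserving.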


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the
resource. Limits can be over-committed - the sum of the limits of
children can exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can
consume on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource as
long as the usages of all its ancestors are under their protected
levels. Protections can be hard guarantees or best effort soft
boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children cannot exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of
this type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats
whenever possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single
key can be written at a time. For nested keyed files, the sub key
pairs may be specified in any order and not all pairs have to be
specified.
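
For example, "io.max" is a nested keyed file; the sub key pairs for
a single device key can be written together in one write, with
unspecified pairs left unchanged (the device numbers and key values
below are illustrative)::

  # echo "8:16 rbps=2097152 wiops=120" > io.max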


Conventions
-----------

- Settings for a single feature should be contained in a single
  file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL"
  or "$VAL".

  When writing to update a specific override, "default" can be used
  as the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device
  numbers with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should
  be generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
    A read-write single value file which exists on non-root cgroups.

    When read, it indicates the current type of the cgroup, which
    can be one of the following values.

    - "domain" : A normal valid domain cgroup.

    - "domain threaded" : A threaded domain cgroup which is serving
      as the root of a threaded subtree.

    - "domain invalid" : A cgroup which is in an invalid state. It
      can't be populated or have controllers enabled. It may be
      allowed to become a threaded cgroup.

    - "threaded" : A threaded cgroup which is a member of a threaded
      subtree.

    A cgroup can be turned into a threaded cgroup by writing
    "threaded" to this file.

  cgroup.procs
    A read-write new-line separated values file which exists on all
    cgroups.

    When read, it lists the PIDs of all processes which belong to
    the cgroup one-per-line. The PIDs are not ordered and the same
    PID may show up more than once if the process got moved to
    another cgroup and then back or the PID got recycled while
    reading.

    A PID can be written to migrate the process associated with the
    PID to the cgroup. The writer should match all of the following
    conditions.

    - It must have write access to the "cgroup.procs" file.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

    In a threaded cgroup, reading this file fails with EOPNOTSUPP as
    all the processes belong to the thread root. Writing is
    supported and moves every thread of the process to the cgroup.

  cgroup.threads
    A read-write new-line separated values file which exists on all
    cgroups.

    When read, it lists the TIDs of all threads which belong to the
    cgroup one-per-line. The TIDs are not ordered and the same TID
    may show up more than once if the thread got moved to another
    cgroup and then back or the TID got recycled while reading.

    A TID can be written to migrate the thread associated with the
    TID to the cgroup. The writer should match all of the following
    conditions.

    - It must have write access to the "cgroup.threads" file.

    - The cgroup that the thread is currently in must be in the same
      resource domain as the destination cgroup.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

  cgroup.controllers
    A read-only space separated values file which exists on all
    cgroups.

    It shows a space separated list of all controllers available to
    the cgroup. The controllers are not ordered.

  cgroup.subtree_control
    A read-write space separated values file which exists on all
    cgroups. Starts out empty.

    When read, it shows a space separated list of the controllers
    which are enabled to control resource distribution from the
    cgroup to its children.

    A space separated list of controllers prefixed with '+' or '-'
    can be written to enable or disable controllers. A controller
    name prefixed with '+' enables the controller and '-' disables
    it. If a controller appears more than once on the list, the last
    one is effective. When multiple enable and disable operations
    are specified, either all succeed or all fail.

  cgroup.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified otherwise, a
    value change in this file generates a file modified event.

      populated
        1 if the cgroup or its descendants contains any live
        processes; otherwise, 0.
      frozen
        1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
    A read-write single value file. The default is "max".

    Maximum allowed number of descendant cgroups. If the actual
    number of descendants is equal or larger, an attempt to create a
    new cgroup in the hierarchy will fail.

  cgroup.max.depth
    A read-write single value file. The default is "max".

    Maximum allowed descent depth below the current cgroup. If the
    actual descent depth is equal or larger, an attempt to create a
    new child cgroup will fail.

  cgroup.stat
    A read-only flat-keyed file with the following entries:

      nr_descendants
        Total number of visible descendant cgroups.

      nr_dying_descendants
        Total number of dying descendant cgroups. A cgroup becomes
        dying after being deleted by a user. The cgroup will remain
        in the dying state for some undefined time (which can depend
        on system load) before being completely destroyed.

        A process can't enter a dying cgroup under any
        circumstances, and a dying cgroup can't revive.

        A dying cgroup can consume system resources not exceeding
        the limits which were active at the moment of cgroup
        deletion.

  cgroup.freeze
    A read-write single value file which exists on non-root cgroups.
    Allowed values are "0" and "1". The default is "0".

    Writing "1" to the file causes freezing of the cgroup and all
    descendant cgroups. This means that all belonging processes will
    be stopped and will not run until the cgroup is explicitly
    unfrozen. Freezing of the cgroup may take some time; when this
    action is completed, the "frozen" value in the cgroup.events
    control file will be updated to "1" and the corresponding
    notification will be issued.

    A cgroup can be frozen either by its own settings, or by
    settings of any ancestor cgroups. If any of the ancestor cgroups
    is frozen, the cgroup will remain frozen.

    Processes in the frozen cgroup can be killed by a fatal signal.
    They also can enter and leave a frozen cgroup: either by an
    explicit move by a user, or if freezing of the cgroup races with
    fork(). If a process is moved to a frozen cgroup, it stops. If a
    process is moved out of a frozen cgroup, it resumes running.

    The frozen status of a cgroup doesn't affect any cgroup tree
    operations: it's possible to delete a frozen (and empty) cgroup,
    as well as create new sub-cgroups.
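
    For example, a cgroup can be frozen and the completion awaited
    via the "cgroup.events" notification; a rough sketch assuming
    the inotifywait utility from inotify-tools::

      # echo 1 > A/cgroup.freeze
      # inotifywait -qq -e modify A/cgroup.events
      # grep frozen A/cgroup.events
      frozen 1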

  cgroup.kill
    A write-only single value file which exists in non-root cgroups.
    The only allowed value is "1".

    Writing "1" to the file causes the cgroup and all descendant
    cgroups to be killed. This means that all processes located in
    the affected cgroup tree will be killed via SIGKILL.

    Killing a cgroup tree will deal with concurrent forks
    appropriately and is protected against migrations.

    In a threaded cgroup, writing this file fails with EOPNOTSUPP as
    killing cgroups is a process directed operation, i.e. it affects
    the whole thread-group.

  cgroup.pressure
    A read-write single value file. Allowed values are "0" and "1".
    The default is "1".

    Writing "0" to the file disables the cgroup PSI accounting.
    Writing "1" to the file re-enables the cgroup PSI accounting.

    This control attribute is not hierarchical, so disabling or
    enabling PSI accounting in a cgroup does not affect PSI
    accounting in descendants and doesn't need enablement to be
    passed down via ancestors from the root.

    The reason this control attribute exists is that PSI accounts
    stalls for each cgroup separately and aggregates them at each
    level of the hierarchy. This may cause non-negligible overhead
    for some workloads placed at a deep level of the hierarchy, in
    which case this control attribute can be used to disable PSI
    accounting in the non-leaf cgroups.

  irq.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for IRQ/SOFTIRQ. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates the distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
the normal scheduling policy and an absolute bandwidth allocation
model for the realtime scheduling policy.

In all the above models, cycle distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed. The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well
as the maximum desired frequency, which should not be exceeded by a
CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes
and the cpu controller can only be enabled when all RT processes are
in the root cgroup. Be aware that system management software may
already have placed RT processes into non-root cgroups during the
system boot process, and these processes may need to be moved to the
root cgroup before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
    A read-only flat-keyed file. This file exists whether the
    controller is enabled or not.

    It always reports the following three stats:

    - usage_usec
    - user_usec
    - system_usec

    and the following five when the controller is enabled:

    - nr_periods
    - nr_throttled
    - throttled_usec
    - nr_bursts
    - burst_usec

  cpu.weight
    A read-write single value file which exists on non-root cgroups.
    The default is "100".

    The weight in the range [1, 10000].

  cpu.weight.nice
    A read-write single value file which exists on non-root cgroups.
    The default is "0".

    The nice value is in the range [-20, 19].

    This interface file is an alternative interface for "cpu.weight"
    and allows reading and setting weight using the same values used
    by nice(2). Because the range is smaller and granularity is
    coarser for the nice values, the read value is the closest
    approximation of the current weight.

  cpu.max
    A read-write two value file which exists on non-root cgroups.
    The default is "max 100000".

    The maximum bandwidth limit. It's in the following format::

      $MAX $PERIOD

    which indicates that the group may consume up to $MAX in each
    $PERIOD duration. "max" for $MAX indicates no limit. If only one
    number is written, $MAX is updated.
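
    For example, the following limits the cgroup to half a CPU by
    allowing 50000us of runtime in each 100000us period::

      # echo "50000 100000" > cpu.max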

  cpu.max.burst
    A read-write single value file which exists on non-root cgroups.
    The default is "0".

    The burst in the range [0, $MAX].

  cpu.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for CPU. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
    A read-write single value file which exists on non-root cgroups.
    The default is "0", i.e. no utilization boosting.

    The requested minimum utilization (protection) as a percentage
    rational number, e.g. 12.34 for 12.34%.

    This interface allows reading and setting minimum utilization
    clamp values similar to sched_setattr(2). This minimum
    utilization value is used to clamp the task specific minimum
    utilization clamp.

    The requested minimum utilization (protection) is always capped
    by the current value for the maximum utilization (limit), i.e.
    `cpu.uclamp.max`.

  cpu.uclamp.max
    A read-write single value file which exists on non-root cgroups.
    The default is "max", i.e. no utilization capping.

    The requested maximum utilization (limit) as a percentage
    rational number, e.g. 98.76 for 98.76%.

    This interface allows reading and setting maximum utilization
    clamp values similar to sched_setattr(2). This maximum
    utilization value is used to clamp the task specific maximum
    utilization clamp.


Memory
------

The "memory" controller regulates the distribution of memory.
Memory is stateful and implements both limit and protection models.
Due to the intertwining between memory usage and reclaim pressure
and the stateful nature of memory, the distribution model is
relatively complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
    A read-only single value file which exists on non-root cgroups.

    The total amount of memory currently being used by the cgroup
    and its descendants.

  memory.min
    A read-write single value file which exists on non-root cgroups.
    The default is "0".

    Hard memory protection. If the memory usage of a cgroup is
    within its effective min boundary, the cgroup's memory won't be
    reclaimed under any conditions. If there is no unprotected
    reclaimable memory available, the OOM killer is invoked. Above
    the effective min boundary (or effective low boundary if it is
    higher), pages are reclaimed proportionally to the overage,
    reducing reclaim pressure for smaller overages.

    The effective min boundary is limited by the memory.min values
    of all ancestor cgroups. If there is memory.min overcommitment
    (child cgroup or cgroups are requiring more protected memory
    than the parent will allow), then each child cgroup will get the
    part of the parent's protection proportional to its actual
    memory usage below memory.min.

    Putting more memory than generally available under this
    protection is discouraged and may lead to constant OOMs.

    If a memory cgroup is not populated with processes, its
    memory.min is ignored.

  memory.low
    A read-write single value file which exists on non-root cgroups.
    The default is "0".

    Best-effort memory protection. If the memory usage of a cgroup
    is within its effective low boundary, the cgroup's memory won't
    be reclaimed unless there is no reclaimable memory available in
    unprotected cgroups. Above the effective low boundary (or
    effective min boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    The effective low boundary is limited by the memory.low values
    of all ancestor cgroups. If there is memory.low overcommitment
    (child cgroup or cgroups are requiring more protected memory
    than the parent will allow), then each child cgroup will get the
    part of the parent's protection proportional to its actual
    memory usage below memory.low.

    Putting more memory than generally available under this
    protection is discouraged.
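
    As a rough worked example of the overcommit rule (the numbers
    are illustrative): if a parent's effective low boundary is 100M
    and each of its two children requests memory.low of 100M while
    using 60M and 40M respectively, the parent's 100M of protection
    is split 60M and 40M, in proportion to the children's usages
    below their memory.low.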

  memory.high
    A read-write single value file which exists on non-root cgroups.
    The default is "max".

    Memory usage throttle limit. If a cgroup's usage goes over the
    high boundary, the processes of the cgroup are throttled and put
    under heavy reclaim pressure.

    Going over the high limit never invokes the OOM killer and under
    extreme conditions the limit may be breached. The high limit
    should be used in scenarios where an external process monitors
    the limited cgroup to alleviate heavy reclaim pressure.

  memory.max
    A read-write single value file which exists on non-root cgroups.
    The default is "max".

    Memory usage hard limit. This is the main mechanism to limit
    memory usage of a cgroup. If a cgroup's memory usage reaches
    this limit and can't be reduced, the OOM killer is invoked in
    the cgroup. Under certain circumstances, the usage may go over
    the limit temporarily.

    In the default configuration regular 0-order allocations always
    succeed unless the OOM killer chooses the current task as a
    victim.

    Some kinds of allocations don't invoke the OOM killer. The
    caller could retry them differently, return into userspace as
    -ENOMEM or silently ignore them in cases like disk readahead.
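
    For example, assuming byte values may be written with the usual
    K/M/G suffixes::

      # echo 1G > memory.max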

  memory.reclaim
    A write-only nested-keyed file which exists for all cgroups.

    This is a simple interface to trigger memory reclaim in the
    target cgroup.

    This file accepts a single key, the number of bytes to reclaim.
    No nested keys are currently supported.

    Example::

      echo "1G" > memory.reclaim

    The interface can be later extended with nested keys to
    configure the reclaim behavior. For example, specify the type of
    memory to reclaim from (anon, file, ..).

    Please note that the kernel can over- or under-reclaim from the
    target cgroup. If fewer bytes are reclaimed than the specified
    amount, -EAGAIN is returned.

    Please note that the proactive reclaim (triggered by this
    interface) is not meant to indicate memory pressure on the
    memory cgroup. Therefore socket memory balancing triggered by
    the memory reclaim normally is not exercised in this case. This
    means that the networking layer will not adapt based on reclaim
    induced by memory.reclaim.

  memory.peak
    A read-only single value file which exists on non-root cgroups.

    The max memory usage recorded for the cgroup and its descendants
    since the creation of the cgroup.

  memory.oom.group
    A read-write single value file which exists on non-root cgroups.
    The default value is "0".

    Determines whether the cgroup should be treated as an
    indivisible workload by the OOM killer. If set, all tasks
    belonging to the cgroup or to its descendants (if the memory
    cgroup is not a leaf cgroup) are killed together or not at all.
    This can be used to avoid partial kills to guarantee workload
    integrity.

    Tasks with the OOM protection (oom_score_adj set to -1000) are
    treated as an exception and are never killed.

    If the OOM killer is invoked in a cgroup, it's not going to kill
    any tasks outside of this cgroup, regardless of the
    memory.oom.group values of ancestor cgroups.

  memory.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified otherwise, a
    value change in this file generates a file modified event.

    Note that all fields in this file are hierarchical and the file
    modified event can be generated due to an event down the
    hierarchy. For the local events at the cgroup level see
    memory.events.local.

      low
        The number of times the cgroup is reclaimed due to high
        memory pressure even though its usage is under the low
        boundary. This usually indicates that the low boundary is
        over-committed.

      high
        The number of times processes of the cgroup are throttled
        and routed to perform direct memory reclaim because the high
        memory boundary was exceeded. For a cgroup whose memory
        usage is capped by the high limit rather than global memory
        pressure, this event's occurrences are expected.

      max
        The number of times the cgroup's memory usage was about to
        go over the max boundary. If direct reclaim fails to bring
        it down, the cgroup goes to OOM state.

      oom
        The number of times the cgroup's memory usage reached the
        limit and allocation was about to fail.

        This event is not raised if the OOM killer is not considered
        as an option, e.g. for failed high-order allocations or if
        the caller asked to not retry attempts.

      oom_kill
        The number of processes belonging to this cgroup killed by
        any kind of OOM killer.

      oom_group_kill
        The number of times a group OOM has occurred.

  memory.events.local
    Similar to memory.events but the fields in the file are local to
    the cgroup, i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

  memory.stat
    A read-only flat-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information on
    the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    If an entry has no per-node counter (and thus does not show up
    in memory.numa_stat), the tag 'npn' (non-per-node) is appended
    to indicate that it will not show in memory.numa_stat.

      anon
        Amount of memory used in anonymous mappings such as brk(),
        sbrk(), and mmap(MAP_ANONYMOUS)

      file
        Amount of memory used to cache filesystem data, including
        tmpfs and shared memory.

      kernel (npn)
        Amount of total kernel memory, including (kernel_stack,
        pagetables, percpu, vmalloc, slab) in addition to other
        kernel memory use cases.

      kernel_stack
        Amount of memory allocated to kernel stacks.

      pagetables
        Amount of memory allocated for page tables.

      sec_pagetables
        Amount of memory allocated for secondary page tables; this
        currently includes KVM mmu allocations on x86 and arm64.

      percpu (npn)
        Amount of memory used for storing per-cpu kernel data
        structures.

      sock (npn)
        Amount of memory used in network transmission buffers

      vmalloc (npn)
        Amount of memory used for vmap backed memory.

      shmem
        Amount of cached filesystem data that is swap-backed, such
        as tmpfs, shm segments, shared anonymous mmap()s

      zswap
        Amount of memory consumed by the zswap compression backend.

      zswapped
        Amount of application memory swapped out to zswap.

      file_mapped
        Amount of cached filesystem data mapped with mmap()

      file_dirty
        Amount of cached filesystem data that was modified but not
        yet written back to disk

      file_writeback
        Amount of cached filesystem data that was modified and is
        currently being written back to disk

      swapcached
        Amount of swap cached in memory. The swapcache is accounted
        against both memory and swap usage.

      anon_thp
        Amount of memory used in anonymous mappings backed by
        transparent hugepages

      file_thp
        Amount of cached filesystem data backed by transparent
        hugepages

      shmem_thp
        Amount of shm, tmpfs, shared anonymous mmap()s backed by
        transparent hugepages

      inactive_anon, active_anon, inactive_file, active_file, unevictable
        Amount of memory, swap-backed and filesystem-backed, on the
        internal memory management lists used by the page reclaim
        algorithm.

        As these represent internal list state (eg. shmem pages are
        on anon memory management lists), inactive_foo + active_foo
        may not be equal to the value for the foo counter, since the
        foo counter is type-based, not list-based.

      slab_reclaimable
        Part of "slab" that might be reclaimed, such as dentries and
        inodes.

      slab_unreclaimable
        Part of "slab" that cannot be reclaimed on memory pressure.

      slab (npn)
        Amount of memory used for storing in-kernel data structures.

      workingset_refault_anon
        Number of refaults of previously evicted anonymous pages.

      workingset_refault_file
        Number of refaults of previously evicted file pages.

      workingset_activate_anon
        Number of refaulted anonymous pages that were immediately
        activated.

      workingset_activate_file
        Number of refaulted file pages that were immediately
        activated.

      workingset_restore_anon
        Number of restored anonymous pages which have been detected
        as an active workingset before they got reclaimed.

      workingset_restore_file
        Number of restored file pages which have been detected as an
        active workingset before they got reclaimed.

      workingset_nodereclaim
        Number of times a shadow node has been reclaimed

      pgscan (npn)
        Amount of scanned pages (in an inactive LRU list)

      pgsteal (npn)
        Amount of reclaimed pages

      pgscan_kswapd (npn)
        Amount of scanned pages by kswapd (in an inactive LRU list)

      pgscan_direct (npn)
        Amount of scanned pages directly (in an inactive LRU list)

      pgscan_khugepaged (npn)
        Amount of scanned pages by khugepaged (in an inactive LRU
        list)

      pgsteal_kswapd (npn)
        Amount of reclaimed pages by kswapd

      pgsteal_direct (npn)
        Amount of reclaimed pages directly

      pgsteal_khugepaged (npn)
        Amount of reclaimed pages by khugepaged

      pgfault (npn)
        Total number of page faults incurred

      pgmajfault (npn)
        Number of major page faults incurred

      pgrefill (npn)
        Amount of scanned pages (in an active LRU list)

      pgactivate (npn)
        Amount of pages moved to the active LRU list

      pgdeactivate (npn)
        Amount of pages moved to the inactive LRU list

      pglazyfree (npn)
        Amount of pages postponed to be freed under memory pressure

      pglazyfreed (npn)
        Amount of reclaimed lazyfree pages

      thp_fault_alloc (npn)
        Number of transparent hugepages which were allocated to
        satisfy a page fault. This counter is not present when
        CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_collapse_alloc (npn)
        Number of transparent hugepages which were allocated to
        allow collapsing an existing range of pages. This counter is
        not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.numa_stat
        A read-only nested-keyed file which exists on non-root
        cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        per node on the state of the memory management system.

        This is useful for providing visibility into the NUMA
        locality information within a memcg since the pages are
        allowed to be allocated from any physical node.  One use case
        is evaluating application performance by combining this
        information with the application's CPU allocation.

        All memory amounts are in bytes.

        The output format of memory.numa_stat is::

          type N0=<bytes in node 0> N1=<bytes in node 1> ...

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in
        a fixed position; use the keys to look up specific values!

        The entry definitions are the same as in memory.stat.

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Swap usage throttle limit.  If a cgroup's swap usage exceeds
        this limit, all its further allocations will be throttled to
        allow userspace to implement custom out-of-memory procedures.

        This limit marks a point of no return for the cgroup.  It is
        NOT designed to manage the amount of swapping a workload does
        during regular operation.  Compare to memory.swap.max, which
        prohibits swapping past a set amount, but lets the cgroup
        continue unimpeded as long as other memory can be reclaimed.

        Healthy workloads are not expected to reach this limit.

  memory.swap.peak
        A read-only single value file which exists on non-root
        cgroups.

        The max swap usage recorded for the cgroup and its
        descendants since the creation of the cgroup.

  memory.swap.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Swap usage hard limit.  If a cgroup's swap usage reaches this
        limit, anonymous memory of the cgroup will not be swapped
        out.

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          high
                The number of times the cgroup's swap usage was over
                the high threshold.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or the max
                limit.

        When the limit is reduced below the current usage, the
        existing swap entries are reclaimed gradually and the swap
        usage may stay higher than the limit for an extended period
        of time.  This reduces the impact on the workload and memory
        management.

  memory.zswap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory consumed by the zswap compression
        backend.
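
        Comparing this value with the "zswapped" entry of
        memory.stat, which reports the uncompressed size of the pages
        stored in zswap, gives a rough estimate of the achieved
        compression ratio.  A sketch - the cgroup path is just an
        example::

          # cat /sys/fs/cgroup/foo/memory.zswap.current
          # grep zswapped /sys/fs/cgroup/foo/memory.stat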

  memory.zswap.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Zswap usage hard limit.  If a cgroup's zswap pool reaches
        this limit, it will refuse to take any more stores before
        existing entries fault back in or are written out to disk.

  memory.pressure
        A read-only nested-keyed file.

        Shows pressure stall information for memory.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy (see the example below).

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as
granting more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also perform
equally well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" interface file described above provides
such a measure.
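
For example, two workloads can be over-committed on a host with 4G of
memory by giving each a high limit of 3G - the cgroup paths and the
values are illustrative only::

  # echo 3G > /sys/fs/cgroup/workload-a/memory.high
  # echo 3G > /sys/fs/cgroup/workload-b/memory.high

Whichever workload is under less memory pressure can then expand into
the slack left unused by the other.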

Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and
stays charged to the cgroup until the area is released.  Migrating a
process to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different
cgroups.  To which cgroup the area will be charged is indeterminate;
however, over time, the memory area is likely to end up in a cgroup
which has enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or
IOPS limit distribution; however, weight based distribution is
available only if cfq-iosched is in use and neither scheme is
available for blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
        A read-only nested-keyed file.

        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
        The following nested keys are defined.

          ======        =====================
          rbytes        Bytes read
          wbytes        Bytes written
          rios          Number of read IOs
          wios          Number of write IOs
          dbytes        Bytes discarded
          dios          Number of discard IOs
          ======        =====================

        An example read output follows::

          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
        A read-write nested-keyed file which exists only on the root
        cgroup.

        This file configures the Quality of Service of the IO cost
        model based controller (CONFIG_BLK_CGROUP_IOCOST) which
        currently implements "io.weight" proportional control.  Lines
        are keyed by $MAJ:$MIN device numbers and not ordered.  The
        line for a given device is populated on the first write for
        the device on "io.cost.qos" or "io.cost.model".  The
        following nested keys are defined.

          ======        =====================================
          enable        Weight-based control enable
          ctrl          "auto" or "user"
          rpct          Read latency percentile [0, 100]
          rlat          Read latency threshold
          wpct          Write latency percentile [0, 100]
          wlat          Write latency threshold
          min           Minimum scaling percentage [1, 10000]
          max           Maximum scaling percentage [1, 10000]
          ======        =====================================

        The controller is disabled by default and can be enabled by
        setting "enable" to 1.  "rpct" and "wpct" parameters default
        to zero and the controller uses internal device saturation
        state to adjust the overall IO rate between "min" and "max".

        When better control quality is needed, latency QoS parameters
        can be configured.  For example::

          8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00

        shows that on sdb, the controller is enabled, will consider
        the device saturated if the 95th percentile of read
        completion latencies is above 75ms or that of writes above
        150ms, and adjust the overall IO issue rate between 50% and
        150% accordingly.

        The lower the saturation point, the better the latency QoS at
        the cost of aggregate bandwidth.  The narrower the allowed
        adjustment range between "min" and "max", the more conformant
        to the cost model the IO behavior.  Note that the IO issue
        base rate may be far off from 100% and setting "min" and
        "max" blindly can lead to a significant loss of device
        capacity or control quality.  "min" and "max" are useful for
        regulating devices which show wide temporary behavior changes
        - e.g. an SSD which accepts writes at the line speed for a
        while and then completely stalls for multiple seconds.

        When "ctrl" is "auto", the parameters are controlled by the
        kernel and may change automatically.  Setting "ctrl" to
        "user" or setting any of the percentile and latency
        parameters puts it into "user" mode and disables the
        automatic changes.  The automatic mode can be restored by
        setting "ctrl" to "auto".

  io.cost.model
        A read-write nested-keyed file which exists only on the root
        cgroup.

        This file configures the cost model of the IO cost model
        based controller (CONFIG_BLK_CGROUP_IOCOST) which currently
        implements "io.weight" proportional control.  Lines are keyed
        by $MAJ:$MIN device numbers and not ordered.
        The line for a given device is populated on the first write
        for the device on "io.cost.qos" or "io.cost.model".  The
        following nested keys are defined.

          =====         ================================
          ctrl          "auto" or "user"
          model         The cost model in use - "linear"
          =====         ================================

        When "ctrl" is "auto", the kernel may change all parameters
        dynamically.  When "ctrl" is set to "user" or any other
        parameters are written to, "ctrl" becomes "user" and the
        automatic changes are disabled.

        When "model" is "linear", the following model parameters are
        defined.

          =============         ========================================
          [r|w]bps              The maximum sequential IO throughput
          [r|w]seqiops          The maximum 4k sequential IOs per second
          [r|w]randiops         The maximum 4k random IOs per second
          =============         ========================================

        From the above, the builtin linear model determines the base
        costs of a sequential and random IO and the cost coefficient
        for the IO size.  While simple, this model can cover most
        common device classes acceptably.

        The IO cost model isn't expected to be accurate in an
        absolute sense and is scaled to the device behavior
        dynamically.

        If needed, tools/cgroup/iocost_coef_gen.py can be used to
        generate device-specific coefficients.

  io.weight
        A read-write flat-keyed file which exists on non-root
        cgroups.  The default is "default 100".

        The first line is the default weight applied to devices
        without specific override.  The rest are overrides keyed by
        $MAJ:$MIN device numbers and not ordered.  The weights are in
        the range [1, 10000] and specify the relative amount of IO
        time the cgroup can use in relation to its siblings.

        The default weight can be updated by writing either "default
        $WEIGHT" or simply "$WEIGHT".  Overrides can be set by
        writing "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN
        default".

        An example read output follows::

          default 100
          8:16 200
          8:0 50

  io.max
        A read-write nested-keyed file which exists on non-root
        cgroups.

        BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
        device numbers and not ordered.  The following nested keys
        are defined.

          =====         ==================================
          rbps          Max read bytes per second
          wbps          Max write bytes per second
          riops         Max read IO operations per second
          wiops         Max write IO operations per second
          =====         ==================================

        When writing, any number of nested key-value pairs can be
        specified in any order.  "max" can be specified as the value
        to remove a specific limit.  If the same key is specified
        multiple times, the outcome is undefined.

        BPS and IOPS are measured in each IO direction and IOs are
        delayed if the limit is reached.  Temporary bursts are
        allowed.

        Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

          echo "8:16 rbps=2097152 wiops=120" > io.max

        Reading returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=120

        Write IOPS limit can be removed by writing the following::

          echo "8:16 wiops=max" > io.max

        Reading now returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
        A read-only nested-keyed file.

        Shows pressure stall information for IO.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.  The memory
controller defines the memory domain that dirty memory ratio is
calculated and maintained for and the io controller defines the io
domain which writes out dirty pages for the memory domain.  Both
system-wide and per-cgroup dirty memory states are examined and the
more restrictive of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2,
ext4, btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs
are attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be
pages which are associated with different cgroups than the one the
inode is associated with.  These are called foreign pages.  The
writeback constantly keeps track of foreign pages and, if a
particular foreign cgroup becomes the majority over a certain period
of time, switches the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances,
a significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying
overlapping areas wouldn't work as expected.  It's recommended to
avoid such usage patterns.

The sysctl knobs which affect writeback behavior are applied to
cgroup writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
        These ratios apply the same to cgroup writeback with the
        amount of available memory capped by limits imposed by the
        memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
        For cgroup writeback, this is calculated as a ratio against
        total available memory and applied the same way as
        vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that have
a lower latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy.
This means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each other.
Group G will influence nobody::

                        [root]
                /          |            \
                A          B            C
               /  \        |
              D    F       G


So the ideal way to configure this is to set io.latency in groups A,
B, and C.  Generally you do not want to set a value lower than the
latency your device supports.  Experiment to find the value that
works best for your workload.  Start at higher than the expected
latency for your device and watch the avg_lat value in io.stat for
your workload group to get an idea of the latency you see during
normal operation.  Use the avg_lat value as a basis for your real
setting, setting at 10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving: as long as everybody is meeting their
latency target the controller doesn't do anything.  Once a group
starts missing its target it begins throttling any peer group that
has a higher target than itself.  This throttling takes 2 forms:

- Queue depth throttling.  This is the number of outstanding IOs a
  group is allowed to have.  We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups.  This includes swapping and metadata IO.  These
  types of IO are allowed to occur normally, however they are
  "charged" to the originating group.  If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase.  The delay value is how many microseconds that
  are being added to any process that runs in this group.  Because
  this number can grow quite large if there is a lot of swapping or
  metadata IO occurring we limit the individual delay events to 1
  second at a time.

Once the victimized group starts meeting its latency target again it
will start unthrottling any peer groups that were throttled
previously.  If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
        This takes a similar format as the other controllers (see the
        example at the end of this section).

          "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
        If the controller is enabled you will see extra stats in
        io.stat in addition to the normal ones.

          depth
                This is the current queue depth for the group.

          avg_lat
                This is an exponential moving average with a decay
                rate of 1/exp bound by the sampling interval.  The
                decay rate interval can be calculated by multiplying
                the win value in io.stat by the corresponding number
                of samples based on the win value.

          win
                The sampling window size in milliseconds.  This is
                the minimum duration of time between evaluation
                events.  Windows only elapse with IO activity.  Idle
                periods extend the most recent window.
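
For example, giving group "A" from the diagram above a 750
microsecond latency target - the device number and the target value
are illustrative only::

  # echo "8:16 target=750" > /sys/fs/cgroup/A/io.latency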

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup
policy, namely the blkio.prio.class attribute.  The following values
are accepted for that attribute:

  no-change
        Do not modify the I/O priority class.

  promote-to-rt
        For requests that have a non-RT I/O priority class, change it
        into RT.  Also change the priority level of these requests to
        4.  Do not modify the I/O priority of requests that have
        priority class RT.

  restrict-to-be
        For requests that do not have an I/O priority class or that
        have I/O priority class RT, change it into BE.  Also change
        the priority level of these requests to 0.  Do not modify the
        I/O priority class of requests that have priority class IDLE.

  idle
        Change the I/O priority class of all requests into IDLE, the
        lowest I/O priority class.

  none-to-rt
        Deprecated.  Just an alias for promote-to-rt.

The following numerical values are associated with the I/O priority
policies:

+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+

The numerical value that corresponds to each I/O priority class is as
follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as
follows:

- If the I/O priority class policy is promote-to-rt, change the
  request I/O priority class to IOPRIO_CLASS_RT and change the
  request I/O priority level to 4.
- If the I/O priority class policy is not promote-to-rt, translate
  the I/O priority class policy into a number, then change the
  request I/O priority class into the maximum of the I/O priority
  class policy number and the numerical I/O priority class.

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller.  For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Hard limit of number of processes.

  pids.current
        A read-only single value file which exists on all cgroups.

        The number of processes currently in the cgroup and its
        descendants.

Organisational operations are not blocked by cgroup policies, so it
is possible to have pids.current > pids.max.  This can be done by
either setting the limit to be smaller than pids.current, or
attaching enough processes to the cgroup such that pids.current is
larger than pids.max.  However, it is not possible to violate a
cgroup PID policy through fork() or clone().  These will return
-EAGAIN if the creation of a new process would cause a cgroup policy
to be violated.
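
For example, capping a delegated subtree to 100 tasks - the cgroup
path and the limit are illustrative only::

  # echo 100 > /sys/fs/cgroup/services/pids.max
  # cat /sys/fs/cgroup/services/pids.current

Once pids.current reaches the limit, further fork(2) or clone(2)
calls in the subtree fail with -EAGAIN as described above.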
2127 2128 2129Cpuset 2130------ 2131 2132The "cpuset" controller provides a mechanism for constraining 2133the CPU and memory node placement of tasks to only the resources 2134specified in the cpuset interface files in a task's current cgroup. 2135This is especially valuable on large NUMA systems where placing jobs 2136on properly sized subsets of the systems with careful processor and 2137memory placement to reduce cross-node memory access and contention 2138can improve overall system performance. 2139 2140The "cpuset" controller is hierarchical. That means the controller 2141cannot use CPUs or memory nodes not allowed in its parent. 2142 2143 2144Cpuset Interface Files 2145~~~~~~~~~~~~~~~~~~~~~~ 2146 2147 cpuset.cpus 2148 A read-write multiple values file which exists on non-root 2149 cpuset-enabled cgroups. 2150 2151 It lists the requested CPUs to be used by tasks within this 2152 cgroup. The actual list of CPUs to be granted, however, is 2153 subjected to constraints imposed by its parent and can differ 2154 from the requested CPUs. 2155 2156 The CPU numbers are comma-separated numbers or ranges. 2157 For example:: 2158 2159 # cat cpuset.cpus 2160 0-4,6,8-10 2161 2162 An empty value indicates that the cgroup is using the same 2163 setting as the nearest cgroup ancestor with a non-empty 2164 "cpuset.cpus" or all the available CPUs if none is found. 2165 2166 The value of "cpuset.cpus" stays constant until the next update 2167 and won't be affected by any CPU hotplug events. 2168 2169 cpuset.cpus.effective 2170 A read-only multiple values file which exists on all 2171 cpuset-enabled cgroups. 2172 2173 It lists the onlined CPUs that are actually granted to this 2174 cgroup by its parent. These CPUs are allowed to be used by 2175 tasks within the current cgroup. 2176 2177 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows 2178 all the CPUs from the parent cgroup that can be available to 2179 be used by this cgroup. Otherwise, it should be a subset of 2180 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus" 2181 can be granted. In this case, it will be treated just like an 2182 empty "cpuset.cpus". 2183 2184 Its value will be affected by CPU hotplug events. 2185 2186 cpuset.mems 2187 A read-write multiple values file which exists on non-root 2188 cpuset-enabled cgroups. 2189 2190 It lists the requested memory nodes to be used by tasks within 2191 this cgroup. The actual list of memory nodes granted, however, 2192 is subjected to constraints imposed by its parent and can differ 2193 from the requested memory nodes. 2194 2195 The memory node numbers are comma-separated numbers or ranges. 2196 For example:: 2197 2198 # cat cpuset.mems 2199 0-1,3 2200 2201 An empty value indicates that the cgroup is using the same 2202 setting as the nearest cgroup ancestor with a non-empty 2203 "cpuset.mems" or all the available memory nodes if none 2204 is found. 2205 2206 The value of "cpuset.mems" stays constant until the next update 2207 and won't be affected by any memory nodes hotplug events. 2208 2209 Setting a non-empty value to "cpuset.mems" causes memory of 2210 tasks within the cgroup to be migrated to the designated nodes if 2211 they are currently using memory outside of the designated nodes. 2212 2213 There is a cost for this memory migration. The migration 2214 may not be complete and some memory pages may be left behind. 2215 So it is recommended that "cpuset.mems" should be set properly 2216 before spawning new tasks into the cpuset. 
Even if there is 2217 a need to change "cpuset.mems" with active tasks, it shouldn't 2218 be done frequently. 2219 2220 cpuset.mems.effective 2221 A read-only multiple values file which exists on all 2222 cpuset-enabled cgroups. 2223 2224 It lists the onlined memory nodes that are actually granted to 2225 this cgroup by its parent. These memory nodes are allowed to 2226 be used by tasks within the current cgroup. 2227 2228 If "cpuset.mems" is empty, it shows all the memory nodes from the 2229 parent cgroup that will be available to be used by this cgroup. 2230 Otherwise, it should be a subset of "cpuset.mems" unless none of 2231 the memory nodes listed in "cpuset.mems" can be granted. In this 2232 case, it will be treated just like an empty "cpuset.mems". 2233 2234 Its value will be affected by memory nodes hotplug events. 2235 2236 cpuset.cpus.exclusive 2237 A read-write multiple values file which exists on non-root 2238 cpuset-enabled cgroups. 2239 2240 It lists all the exclusive CPUs that are allowed to be used 2241 to create a new cpuset partition. Its value is not used 2242 unless the cgroup becomes a valid partition root. See the 2243 "cpuset.cpus.partition" section below for a description of what 2244 a cpuset partition is. 2245 2246 When the cgroup becomes a partition root, the actual exclusive 2247 CPUs that are allocated to that partition are listed in 2248 "cpuset.cpus.exclusive.effective" which may be different 2249 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive" 2250 has previously been set, "cpuset.cpus.exclusive.effective" 2251 is always a subset of it. 2252 2253 Users can manually set it to a value that is different from 2254 "cpuset.cpus". The only constraint in setting it is that the 2255 list of CPUs must be exclusive with respect to its sibling. 2256 2257 For a parent cgroup, any one of its exclusive CPUs can only 2258 be distributed to at most one of its child cgroups. Having an 2259 exclusive CPU appearing in two or more of its child cgroups is 2260 not allowed (the exclusivity rule). A value that violates the 2261 exclusivity rule will be rejected with a write error. 2262 2263 The root cgroup is a partition root and all its available CPUs 2264 are in its exclusive CPU set. 2265 2266 cpuset.cpus.exclusive.effective 2267 A read-only multiple values file which exists on all non-root 2268 cpuset-enabled cgroups. 2269 2270 This file shows the effective set of exclusive CPUs that 2271 can be used to create a partition root. The content of this 2272 file will always be a subset of "cpuset.cpus" and its parent's 2273 "cpuset.cpus.exclusive.effective" if its parent is not the root 2274 cgroup. It will also be a subset of "cpuset.cpus.exclusive" 2275 if it is set. If "cpuset.cpus.exclusive" is not set, it is 2276 treated to have an implicit value of "cpuset.cpus" in the 2277 formation of local partition. 2278 2279 cpuset.cpus.partition 2280 A read-write single value file which exists on non-root 2281 cpuset-enabled cgroups. This flag is owned by the parent cgroup 2282 and is not delegatable. 2283 2284 It accepts only the following input values when written to. 

          ==========    =====================================
          "member"      Non-root member of a partition
          "root"        Partition root
          "isolated"    Partition root without load balancing
          ==========    =====================================

        A cpuset partition is a collection of cpuset-enabled cgroups
        with a partition root at the top of the hierarchy and its
        descendants except those that are separate partition roots
        themselves and their descendants.  A partition has exclusive
        access to the set of exclusive CPUs allocated to it.  Other
        cgroups outside of that partition cannot use any CPUs in that
        set.

        There are two types of partitions - local and remote.  A
        local partition is one whose parent cgroup is also a valid
        partition root.  A remote partition is one whose parent
        cgroup is not a valid partition root itself.  Writing to
        "cpuset.cpus.exclusive" is optional for the creation of a
        local partition as its "cpuset.cpus.exclusive" file will
        assume an implicit value that is the same as "cpuset.cpus" if
        it is not set.  Writing the proper "cpuset.cpus.exclusive"
        values down the cgroup hierarchy before the target partition
        root is mandatory for the creation of a remote partition.

        Currently, a remote partition cannot be created under a local
        partition.  All the ancestors of a remote partition root
        except the root cgroup cannot be a partition root.

        The root cgroup is always a partition root and its state
        cannot be changed.  All other non-root cgroups start out as
        "member".

        When set to "root", the current cgroup is the root of a new
        partition or scheduling domain.  The set of exclusive CPUs is
        determined by the value of its
        "cpuset.cpus.exclusive.effective".

        When set to "isolated", the CPUs in that partition will be in
        an isolated state without any load balancing from the
        scheduler.  Tasks placed in such a partition with multiple
        CPUs should be carefully distributed and bound to each of the
        individual CPUs for optimal performance (an example of
        creating an isolated partition follows at the end of this
        entry).

        A partition root ("root" or "isolated") can be in one of two
        possible states - valid or invalid.  An invalid partition
        root is in a degraded state where some state information may
        be retained, but behaves more like a "member".

        All possible state transitions among "member", "root" and
        "isolated" are allowed.

        On read, the "cpuset.cpus.partition" file can show the
        following values.

          ============================= =====================================
          "member"                      Non-root member of a partition
          "root"                        Partition root
          "isolated"                    Partition root without load balancing
          "root invalid (<reason>)"     Invalid partition root
          "isolated invalid (<reason>)" Invalid isolated partition root
          ============================= =====================================

        In the case of an invalid partition root, a descriptive
        string on why the partition is invalid is included within
        parentheses.

        For a local partition root to be valid, the following
        conditions must be met.

        1) The parent cgroup is a valid partition root.
        2) The "cpuset.cpus.exclusive.effective" file cannot be
           empty, though it may contain offline CPUs.
        3) The "cpuset.cpus.effective" cannot be empty unless there
           is no task associated with this partition.

        For a remote partition root to be valid, all the above
        conditions except the first one must be met.

        External events like hotplug or changes to "cpuset.cpus" or
        "cpuset.cpus.exclusive" can cause a valid partition root to
        become invalid and vice versa.  Note that a task cannot be
        moved to a cgroup with empty "cpuset.cpus.effective".

        A valid non-root parent partition may distribute out all its
        CPUs to its child local partitions when there is no task
        associated with it.

        Care must be taken when changing a valid partition root to
        "member", as all its child local partitions, if present, will
        become invalid, causing disruption to tasks running in those
        child partitions.  These inactivated partitions could be
        recovered if their parent is switched back to a partition
        root with a proper value in "cpuset.cpus" or
        "cpuset.cpus.exclusive".

        Poll and inotify events are triggered whenever the state of
        "cpuset.cpus.partition" changes.  That includes changes
        caused by writes to "cpuset.cpus.partition", CPU hotplug or
        other changes that modify the validity status of the
        partition.  This will allow user space agents to monitor
        unexpected changes to "cpuset.cpus.partition" without the
        need to do continuous polling.

        A user can pre-configure certain CPUs to an isolated state
        with load balancing disabled at boot time with the "isolcpus"
        kernel boot command line option.  If those CPUs are to be put
        into a partition, they have to be used in an isolated
        partition.
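
        As an example, the following creates a two-CPU isolated local
        partition directly under the root cgroup - the cgroup name
        and the CPU numbers are illustrative only::

          # cd /sys/fs/cgroup
          # echo +cpuset > cgroup.subtree_control
          # mkdir rt
          # echo 2-3 > rt/cpuset.cpus
          # echo isolated > rt/cpuset.cpus.partition

        Because the parent (the root cgroup) is a valid partition
        root, "cpuset.cpus.exclusive" can be left unset and assumes
        the implicit value "2-3".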

Device controller
-----------------

The device controller manages access to device files.  It includes
both creation of new device files (using mknod), and access to the
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE
and attach them to cgroups with the BPF_CGROUP_DEVICE flag.  On an
attempt to access a device file, corresponding BPF programs will be
executed, and depending on the return value the attempt will succeed
or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers).  If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
        A read-write nested-keyed file that exists for all the
        cgroups except root and describes the currently configured
        resource limits for an RDMA/IB device.

        Lines are keyed by device name and are not ordered.  Each
        line contains space separated resource names and their
        configured limits that can be distributed.

        The following nested keys are defined.

          ==========    =============================
          hca_handle    Maximum number of HCA Handles
          hca_object    Maximum number of HCA Objects
          ==========    =============================

        An example for mlx4 and ocrdma device follows::

          mlx4_0 hca_handle=2 hca_object=2000
          ocrdma1 hca_handle=3 hca_object=max

  rdma.current
        A read-only file that describes current resource usage.  It
        exists for all the cgroups except root.

        An example for mlx4 and ocrdma device follows::

          mlx4_0 hca_handle=1 hca_object=20
          ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
        Shows current usage for "hugepagesize" hugetlb.  It exists
        for all the cgroups except root.

  hugetlb.<hugepagesize>.max
        Set/show the hard limit of "hugepagesize" hugetlb usage.  The
        default value is "max".  It exists for all the cgroups except
        root.

  hugetlb.<hugepagesize>.events
        A read-only flat-keyed file which exists on non-root cgroups.

          max
                The number of allocation failures due to the HugeTLB
                limit

  hugetlb.<hugepagesize>.events.local
        Similar to hugetlb.<hugepagesize>.events but the fields in
        the file are local to the cgroup i.e. not hierarchical.  The
        file modified event generated on this file reflects only the
        local events.

  hugetlb.<hugepagesize>.numa_stat
        Similar to memory.numa_stat, it shows the numa information of
        the hugetlb pages of <hugepagesize> in this cgroup.  Only
        hugetlb pages that are actively in use are included.  The
        per-node values are in bytes.

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like
the other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{} in
the include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file.  The provider of
the resource must set its capacity prior to using the resource by
calling misc_cg_set_capacity().

Once a capacity is set then the resource usage can be updated using
charge and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The Miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered then:

  misc.capacity
        A read-only flat-keyed file shown only in the root cgroup.
        It shows miscellaneous scalar resources available on the
        platform along with their quantities::

          $ cat misc.capacity
          res_a 50
          res_b 10

  misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its
        children::

          $ cat misc.current
          res_a 3
          res_b 0

  misc.max
        A read-write flat-keyed file shown in the non-root cgroups.
        Allowed maximum usage of the resources in the cgroup and its
        children::

          $ cat misc.max
          res_a max
          res_b 4

        A limit can be set by::

          # echo res_a 1 > misc.max

        A limit can be set to max by::

          # echo res_a max > misc.max

        Limits can be set higher than the capacity value in the
        misc.capacity file.

  misc.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.  All fields in this file are hierarchical.

          max
                The number of times the cgroup's resource usage was
                about to go over the max boundary.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource
is freed.  Migrating a process to a different cgroup does not move
the charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  This child cgroup's weight is dependent on its
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of
1024).


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it was a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP
clone flag can be used with clone(2) and unshare(2) to create a new
cgroup namespace.  The process running inside the cgroup namespace
will have its "/proc/$PID/cgroup" output restricted to the cgroupns
root.  The cgroupns root is the cgroup of the process at the time of
creation of the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes,
the "/proc/$PID/cgroup" file may leak system level information to the
isolated processes.
For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered as system data
which is undesirable to expose to the isolated processes.  cgroup
namespace can be used to restrict visibility of this path.  For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace
creator process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of
"/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it's relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.
For example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root
as the filesystem root.  The process needs CAP_SYS_ADMIN against its
user and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue.  This must be called after a
        queue (device) has been associated with the bio and before
        submission.

  wbc_account_cgroup_owner(@wbc, @page, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
for the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.

Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- The v1 mount options are not supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use the "cgroup.controllers"
  file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller,
utility type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones,
such as the cpu and cpuacct controllers, made sense to be put on the
same hierarchy.  This often meant that userland ended up managing
multiple similar hierarchies repeating the same steps on each
hierarchy whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more
importantly the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which
meant that a thread's cgroup membership couldn't be described in
finite length.  The key might contain any number of entries and was
unlimited in length, which made it highly awkward to manipulate and
led to the addition of controllers which existed only to identify
membership, which in turn exacerbated the original problem of the
proliferating number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.

Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different
cgroups.  This didn't make sense for some controllers and those
controllers ended up implementing different ways to ignore such
situations but much more importantly it blurred the line between API
exposed to individual applications and system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got
abused in combination with thread granularity.  cgroups were
delegated to individual applications so that they could create and
manage their own sub-hierarchies and control resource distributions
along them.  This effectively raised cgroup to the status of a
syscall-like API exposed to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path,
open and then read and/or write to it.  This is not only extremely
clunky and unusual but also inherently racy.  There is no
conventional way to define a transaction across the required steps
and nothing can guarantee that the process would actually be
operating on its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs
to a system-management pseudo filesystem.  cgroup ended up with
interface knobs which were not properly abstracted or refined and
directly revealed kernel internal details.  These knobs got exposed
to individual applications through the ill-defined delegation
mechanism, effectively abusing cgroup as a shortcut to implementing
public APIs without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and
its children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.
The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with a ``leaf_`` prefix. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed by the cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.
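A minimal sketch of the top-down allocation, with hypothetical paths
and values::

  # the parent grants at most 1G of protection to its subtree
  echo 1G > /sys/fs/cgroup/workload/memory.low

  # a delegated child may ask for more ...
  echo 2G > /sys/fs/cgroup/workload/job/memory.low

  # ... but its effective low remains capped by the parent's 1G, so a
  # misconfigured or untrusted subtree cannot claim protection its
  # ancestors never granted.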
The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than to kill the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.
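As an illustration with a hypothetical cgroup named "job" (the knobs
themselves are the standard v1 and v2 interface files)::

  # v1: memory and swap were capped by a single combined counter
  echo 1G    > /sys/fs/cgroup/memory/job/memory.limit_in_bytes
  echo 1536M > /sys/fs/cgroup/memory/job/memory.memsw.limit_in_bytes

  # v2: swap is accounted and limited as a resource of its own
  echo 1G   > /sys/fs/cgroup/job/memory.max
  echo 512M > /sys/fs/cgroup/job/memory.swap.max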