================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under Documentation/cgroup-v1/.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. Misc
       5-8-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.
A process can be migrated to another cgroup.  Migration of a process
doesn't affect already existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations once after system boot,
before starting to use the controllers.

During the transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries.  This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace.  The mount option is
        ignored on non-init namespace mounts.  Please refer to the
        Delegation section for details.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace.  The mount
        option is ignored on non-init namespace mounts.
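
As an illustration, these options are passed like any other mount
option; a minimal sketch, using the conventional /sys/fs/cgroup mount
point::

  # mount -t cgroup2 -o nsdelegate none /sys/fs/cgroup

An already mounted hierarchy can be switched with a remount, e.g.
"mount -o remount,nsdelegate ...", from the init namespace.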


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)
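
Tying these operations together, the process from the
"/proc/842/cgroup" example above could be moved into a newly created
sibling cgroup as follows; paths are relative to a hypothetical v2
mount point::

  # mkdir test-cgroup/another-cgroup
  # echo 842 > test-cgroup/another-cgroup/cgroup.procs

Writing the PID of any of its threads would have the same effect and
move the whole process.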


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain (invalid)"
in these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.
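
Putting the above rules together, the following sketch builds a small
threaded subtree and spreads the threads of one process across it.
"$PID", "$TID1" and "$TID2" are placeholders, and the cpu controller -
which supports thread mode - is assumed to have been made available by
the parent::

  # mkdir workers workers/io workers/compute
  # echo threaded > workers/io/cgroup.type
  # echo threaded > workers/compute/cgroup.type
  # echo $PID > workers/cgroup.procs
  # echo "+cpu" > workers/cgroup.subtree_control
  # echo $TID1 > workers/io/cgroup.threads
  # echo $TID2 > workers/compute/cgroup.threads

Marking the first child threaded turns "workers" into the threaded
domain, after which the whole process is pulled in and its individual
threads are distributed.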

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.
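
For example, starting from the root of a hypothetical v2 mount, a
controller has to be enabled at each level on the way down::

  # echo "+memory" > /sys/fs/cgroup/cgroup.subtree_control
  # mkdir /sys/fs/cgroup/parent
  # echo "+memory" > /sys/fs/cgroup/parent/cgroup.subtree_control
  # mkdir /sys/fs/cgroup/parent/child
  # cat /sys/fs/cgroup/parent/child/cgroup.controllers
  memory

Reversing the order of the two writes would fail because "memory"
wouldn't yet be listed in parent's "cgroup.controllers".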


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access to the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types.  Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
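
As a sketch of the first method, a manager delegating a sub-hierarchy
to a hypothetical user "u1" might do the following; the path is made
up::

  # mkdir /sys/fs/cgroup/delegated
  # chown u1 /sys/fs/cgroup/delegated
  # chown u1 /sys/fs/cgroup/delegated/cgroup.procs
  # chown u1 /sys/fs/cgroup/delegated/cgroup.threads
  # chown u1 /sys/fs/cgroup/delegated/cgroup.subtree_control

The resource control files - "memory.max", "cpu.weight" and the like -
stay owned by the manager, so u1 can organize processes and subdivide
what it received but can't change how much that is.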


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lowercase letters and '_'s
but never begins with an '_' so it can be used as the prefix character
for collision avoidance.  Also, interface file names won't start or
end with terms which are often used in categorizing workloads such as
job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.


Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.


Protections
-----------

A cgroup is protected up to the configured amount of the resource
if the usages of all its ancestors are under their protected levels.
Protections can be hard guarantees or best effort soft boundaries.
Protections can also be over-committed in which case only up to the
amount available to the parent is protected among children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.  Also,
  informational files on the root cgroup which end up showing global
  information available elsewhere shouldn't exist.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled.  It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line.  The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root.  Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line.  The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows a space separated list of all controllers available
        to the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups.  Starts out empty.

        When read, it shows a space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        A space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers.  A controller
        name prefixed with '+' enables the controller and '-'
        disables.  If a controller appears more than once on the list,
        the last one is effective.  When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.
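
        As a sketch of consuming these notifications, any
        inotify-capable tool can watch the file; for example, with
        inotifywait from the inotify-tools package (the cgroup path is
        made up)::

          # inotifywait -m -e modify /sys/fs/cgroup/workload/cgroup.events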

  cgroup.max.descendants
        A read-write single value file.  The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file.  The default is "max".

        Maximum allowed descendant depth below the current cgroup.
        If the actual descendant depth is equal or larger,
        an attempt to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups.  A cgroup
                becomes dying after being deleted by a user.  The
                cgroup will remain in the dying state for some
                undefined time (which can depend on system load)
                before being completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding the limits which were active at the moment
                of cgroup deletion.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups.  Allowed values are "0" and "1".  The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups.  This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen.  Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in the
        cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups.  If any of the ancestor
        cgroups is frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal.  They can also enter and leave a frozen cgroup: either
        by an explicit move by a user, or if freezing of the cgroup
        races with fork().  If a process is moved to a frozen cgroup,
        it stops.  If a process is moved out of a frozen cgroup, it
        resumes running.

        The frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.


Controllers
===========

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup.  Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.
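
A minimal sketch of locating such RT tasks and moving them back;
"rtprio" is the procps field for realtime priority and "$RT_PID" is a
placeholder for each reported PID::

  # ps -eo pid,rtprio,comm | awk '$2 != "-"'
  # echo $RT_PID > /sys/fs/cgroup/cgroup.procs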


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file which exists on non-root cgroups.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following three when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups.  The default is "100".

        The weight in the range [1, 10000].

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2).  Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit.  It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration.  "max" for $MAX indicates no limit.  If only
        one number is written, $MAX is updated.
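
        For example, the following caps a cgroup to half a CPU by
        allowing 50ms of runtime in every 100ms period; the numbers
        are arbitrary::

          # echo "50000 100000" > cpu.max

        Writing "max 100000" restores the default, unlimited state.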

  cpu.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for CPU.  See
        Documentation/accounting/psi.txt for details.


Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Hard memory protection.  If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions.  If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked.

        The effective min boundary is limited by the memory.min
        values of all ancestor cgroups.  If there is memory.min
        overcommitment (the child cgroups require more protected
        memory than the parent will allow), then each child cgroup
        will get the part of the parent's protection proportional to
        its actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Best-effort memory protection.  If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless memory can be reclaimed
        from unprotected cgroups.

        The effective low boundary is limited by the memory.low
        values of all ancestor cgroups.  If there is memory.low
        overcommitment (the child cgroups require more protected
        memory than the parent will allow), then each child cgroup
        will get the part of the parent's protection proportional to
        its actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage throttle limit.  This is the main mechanism to
        control memory usage of a cgroup.  If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.

  memory.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage hard limit.  This is the final protection
        mechanism.  If a cgroup's memory usage reaches this limit and
        can't be reduced, the OOM killer is invoked in the cgroup.
        Under certain circumstances, the usage may go over the limit
        temporarily.

        This is the ultimate protection mechanism.  As long as the
        high limit is used and monitored properly, this limit's
        utility is limited to providing the final safety net.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups.  The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer.  If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all.  This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with the OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of
        the memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary.  This usually indicates that the
                low boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded.
                For a cgroup whose memory usage is capped by the high
                limit rather than global memory pressure, this
                event's occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary.  If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                Depending on context, the result could be invocation
                of the OOM killer and a retried allocation, or a
                failed allocation.

                A failed allocation, in its turn, could be returned
                to userspace as -ENOMEM or silently ignored in cases
                like disk readahead.  For now, OOM in a memory cgroup
                kills tasks only if the shortage has happened inside
                a page fault.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.

  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS)

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          slab
                Amount of memory used for storing in-kernel data
                structures.

          sock
                Amount of memory used in network transmission buffers

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, shared anonymous mmap()s

          file_mapped
                Amount of cached filesystem data mapped with mmap()

          file_dirty
                Amount of cached filesystem data that was modified
                but not yet written back to disk

          file_writeback
                Amount of cached filesystem data that was modified
                and is currently being written back to disk

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          pgfault
                Total number of page faults incurred

          pgmajfault
                Number of major page faults incurred

          workingset_refault
                Number of refaults of previously evicted pages

          workingset_activate
                Number of refaulted pages that were immediately
                activated

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed

          pgrefill
                Amount of scanned pages (in an active LRU list)

          pgscan
                Amount of scanned pages (in an inactive LRU list)

          pgsteal
                Amount of reclaimed pages

          pgactivate
                Amount of pages moved to the active LRU list

          pgdeactivate
                Amount of pages moved to the inactive LRU list

          pglazyfree
                Amount of pages postponed to be freed under memory
                pressure

          pglazyfreed
                Amount of reclaimed lazyfree pages

          thp_fault_alloc
                Number of transparent hugepages which were allocated
                to satisfy a page fault, including COW faults.  This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_collapse_alloc
                Number of transparent hugepages which were allocated
                to allow collapsing an existing range of pages.  This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Swap usage hard limit.  If a cgroup's swap usage reaches this
        limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or the max
                limit.

        When reduced under the current usage, the existing swap
        entries are reclaimed gradually and the swap usage may stay
        higher than the limit for an extended period of time.  This
        reduces the impact on the workload and memory management.

  memory.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for memory.  See
        Documentation/accounting/psi.txt for details.
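
As a usage sketch tying the main knobs together, a management agent
might set a throttle limit with a hard safety net above it and then
watch "memory.events"; the values are in bytes and the numbers are
arbitrary::

  # echo 1073741824 > memory.high
  # echo 1342177280 > memory.max
  # cat memory.events
  low 0
  high 0
  max 0
  oom 0
  oom_kill 0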


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also perform
just as well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" interface file described above provides
such a measure.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is not deterministic;
however, over time, the memory area is likely to end up in a cgroup
which has enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
        A read-only nested-keyed file which exists on non-root
        cgroups.

        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
        The following nested keys are defined.

          ======  =====================
          rbytes  Bytes read
          wbytes  Bytes written
          rios    Number of read IOs
          wios    Number of write IOs
          dbytes  Bytes discarded
          dios    Number of discard IOs
          ======  =====================

        An example read output follows::

          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.weight
        A read-write flat-keyed file which exists on non-root cgroups.
        The default is "default 100".

        The first line is the default weight applied to devices
        without specific override.  The rest are overrides keyed by
        $MAJ:$MIN device numbers and not ordered.  The weights are in
        the range [1, 10000] and specify the relative amount of IO
        time the cgroup can use in relation to its siblings.

        The default weight can be updated by writing either "default
        $WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
        "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

        An example read output follows::

          default 100
          8:16 200
          8:0 50

  io.max
        A read-write nested-keyed file which exists on non-root
        cgroups.

        BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
        device numbers and not ordered.  The following nested keys are
        defined.

          =====  ==================================
          rbps   Max read bytes per second
          wbps   Max write bytes per second
          riops  Max read IO operations per second
          wiops  Max write IO operations per second
          =====  ==================================

        When writing, any number of nested key-value pairs can be
        specified in any order.  "max" can be specified as the value
        to remove a specific limit.  If the same key is specified
        multiple times, the outcome is undefined.

        BPS and IOPS are measured in each IO direction and IOs are
        delayed if the limit is reached.  Temporary bursts are
        allowed.

        Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

          echo "8:16 rbps=2097152 wiops=120" > io.max

        Reading returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=120

        The write IOPS limit can be removed by writing the following::

          echo "8:16 wiops=max" > io.max

        Reading now returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for IO.  See
        Documentation/accounting/psi.txt for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.  The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain.  Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2, ext4
and btrfs.  On other filesystems, all writeback IOs are attributed to
the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback logic
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that have
a higher (i.e. less strict) latency target than the protected
workload.

The limits are only applied at the peer level in the hierarchy.  This
means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each other.
Group G will influence nobody::

          [root]
         /  |  \
        A   B   C
       / \  |
      D   F G


So the ideal way to configure this is to set io.latency in groups A,
B, and C.  Generally you do not want to set a value lower than the
latency your device supports.  Experiment to find the value that
works best for your workload.  Start at higher than the expected
latency for your device and watch the avg_lat value in io.stat for
your workload group to get an idea of the latency you see during
normal operation.  Use the avg_lat value as a basis for your real
setting, setting it 10-15% higher than the value in io.stat.
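For example, if io.stat for the protected group shows avg_lat
hovering around 2000 usec on device 8:16 during normal operation, a
starting target roughly 15% above that could be set as follows; the
group name and all numbers are illustrative::

  # echo "8:16 target=2300" > A/io.latency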
How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting
their latency target the controller doesn't do anything.  Once a
group starts missing its target it begins throttling any peer group
that has a higher target than itself.  This throttling takes two
forms:

- Queue depth throttling.  This is the number of outstanding IOs a
  group is allowed to have.  We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups.  This includes swapping and metadata IO.  These
  types of IO are allowed to occur normally, however they are
  "charged" to the originating group.  If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase.  The delay value is the number of microseconds
  being added to any process that runs in this group.  Because this
  number can grow quite large if there is a lot of swapping or
  metadata IO occurring, we limit the individual delay events to 1
  second at a time.

Once the victimized group starts meeting its latency target again it
will start unthrottling any peer groups that were throttled
previously.  If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
        This takes a similar format as the other controllers::

          "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
        If the controller is enabled you will see extra stats in
        io.stat in addition to the normal ones.

          depth
                This is the current queue depth for the group.

          avg_lat
                This is an exponential moving average with a decay
                rate of 1/exp bound by the sampling interval.  The
                decay rate interval can be calculated by multiplying
                the win value in io.stat by the corresponding number
                of samples based on the win value.

          win
                The sampling window size in milliseconds.  This is
                the minimum duration of time between evaluation
                events.  Windows only elapse with IO activity.  Idle
                periods extend the most recent window.

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller.  For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Hard limit of number of processes.

  pids.current
        A read-only single value file which exists on all cgroups.

        The number of processes currently in the cgroup and its
        descendants.

Organisational operations are not blocked by cgroup policies, so it
is possible to have pids.current > pids.max.  This can be done by
either setting the limit to be smaller than pids.current, or
attaching enough processes to the cgroup such that pids.current is
larger than pids.max.  However, it is not possible to violate a
cgroup PID policy through fork() or clone().  These will return
-EAGAIN if the creation of a new process would cause a cgroup policy
to be violated.
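As a sketch of the fork() failure mode, run from the root of a v2
mount with a hypothetical cgroup "test" (the exact error message
depends on the shell)::

  # mkdir test
  # echo "+pids" > cgroup.subtree_control
  # echo $$ > test/cgroup.procs
  # echo 1 > test/pids.max
  # /bin/true
  sh: fork: Resource temporarily unavailable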
Cpuset
------

The "cpuset" controller provides a mechanism for constraining the CPU
and memory node placement of tasks to only the resources specified in
the cpuset interface files in a task's current cgroup.  This is
especially valuable on large NUMA systems where placing jobs on
properly sized subsets of the system with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical.  That means a cgroup cannot
use CPUs or memory nodes which are not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested CPUs to be used by tasks within this
        cgroup.  The actual list of CPUs to be granted, however, is
        subject to constraints imposed by its parent and can differ
        from the requested CPUs.

        The CPU numbers are comma-separated numbers or ranges.
        For example::

          # cat cpuset.cpus
          0-4,6,8-10

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.cpus" or all the available CPUs if none is found.

        The value of "cpuset.cpus" stays constant until the next
        update and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined CPUs that are actually granted to this
        cgroup by its parent.  These CPUs are allowed to be used by
        tasks within the current cgroup.

        If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
        shows all the CPUs from the parent cgroup that are available
        to be used by this cgroup.  Otherwise, it should be a subset
        of "cpuset.cpus" unless none of the CPUs listed in
        "cpuset.cpus" can be granted.  In this case, it will be
        treated just like an empty "cpuset.cpus".

        Its value will be affected by CPU hotplug events.

  cpuset.mems
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested memory nodes to be used by tasks
        within this cgroup.  The actual list of memory nodes granted,
        however, is subject to constraints imposed by its parent and
        can differ from the requested memory nodes.

        The memory node numbers are comma-separated numbers or
        ranges.  For example::

          # cat cpuset.mems
          0-1,3

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.mems" or all the available memory nodes if none is
        found.

        The value of "cpuset.mems" stays constant until the next
        update and won't be affected by any memory node hotplug
        events.

  cpuset.mems.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined memory nodes that are actually granted
        to this cgroup by its parent.  These memory nodes are allowed
        to be used by tasks within the current cgroup.

        If "cpuset.mems" is empty, it shows all the memory nodes from
        the parent cgroup that will be available to be used by this
        cgroup.  Otherwise, it should be a subset of "cpuset.mems"
        unless none of the memory nodes listed in "cpuset.mems" can
        be granted.  In this case, it will be treated just like an
        empty "cpuset.mems".

        Its value will be affected by memory node hotplug events.

  cpuset.cpus.partition
        A read-write single value file which exists on non-root
        cpuset-enabled cgroups.  This flag is owned by the parent
        cgroup and is not delegatable.

        It accepts only the following input values when written to.

          "root"   - a partition root
          "member" - a non-root member of a partition

        When set to be a partition root, the current cgroup is the
        root of a new partition or scheduling domain that comprises
        itself and all its descendants except those that are separate
        partition roots themselves and their descendants.  The root
        cgroup is always a partition root.

        There are constraints on where a partition root can be set.
        It can only be set in a cgroup if all the following
        conditions are true.

        1) The "cpuset.cpus" is not empty and the list of CPUs is
           exclusive, i.e. they are not shared by any of its
           siblings.
        2) The parent cgroup is a partition root.
        3) The "cpuset.cpus" is also a proper subset of the parent's
           "cpuset.cpus.effective".
        4) There are no child cgroups with cpuset enabled.  This is
           for eliminating corner cases that have to be handled if
           such a condition is allowed.

        Setting it to partition root will take the CPUs away from the
        effective CPUs of the parent cgroup.  Once it is set, this
        file cannot be reverted back to "member" if there are any
        child cgroups with cpuset enabled.

        A parent partition cannot distribute all its CPUs to its
        child partitions.  There must be at least one CPU left in the
        parent partition.

        Once it becomes a partition root, changes to "cpuset.cpus"
        are generally allowed as long as the first condition above is
        true, the change will not take away all the CPUs from the
        parent partition and the new "cpuset.cpus" value is a
        superset of its children's "cpuset.cpus" values.

        Sometimes, external factors like changes to ancestors'
        "cpuset.cpus" or CPU hotplug can cause the state of the
        partition root to change.  On read, the
        "cpuset.cpus.partition" file can show the following values.

          "member"        Non-root member of a partition
          "root"          Partition root
          "root invalid"  Invalid partition root

        It is a partition root if the first two partition root
        conditions above are true and at least one CPU from
        "cpuset.cpus" is granted by the parent cgroup.

        A partition root can become invalid if none of the CPUs
        requested in "cpuset.cpus" can be granted by the parent
        cgroup or the parent cgroup is no longer a partition root
        itself.  In this case, it is not a real partition even though
        the restriction of the first partition root condition above
        will still apply.  The CPU affinity of all the tasks in the
        cgroup will then be associated with CPUs in the nearest
        ancestor partition.

        An invalid partition root can be transitioned back to a real
        partition root if at least one of the requested CPUs can now
        be granted by its parent.  In this case, the CPU affinity of
        all the tasks in the formerly invalid partition will be
        associated with the CPUs of the newly formed partition.
        Changing the partition state of an invalid partition root to
        "member" is always allowed even if child cpusets are present.
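Putting the conditions together, a cgroup directly under the root
(which is always a partition root) can be carved out into its own
scheduling domain roughly as follows; the cgroup name "rt-part" and
the CPU numbers are hypothetical, and CPUs 2-3 are assumed not to be
shared with any sibling::

  # echo "+cpuset" > cgroup.subtree_control
  # mkdir rt-part
  # echo "2-3" > rt-part/cpuset.cpus
  # echo root > rt-part/cpuset.cpus.partition
  # cat rt-part/cpuset.cpus.partition
  root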
Device controller
-----------------

The device controller manages access to device files.  It includes
both creation of new device files (using mknod), and access to the
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create BPF programs of the BPF_CGROUP_DEVICE type and
attach them to cgroups.  On an attempt to access a device file, the
corresponding BPF programs will be executed, and depending on the
return value the attempt will succeed or fail with -EPERM.

A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
structure, which describes the device access attempt: access type
(mknod/read/write) and device (type, major and minor numbers).  If
the program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.

An example BPF_CGROUP_DEVICE program may be found in the kernel
source tree in tools/testing/selftests/bpf/dev_cgroup.c.
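One way to wire such a program up is with bpftool.  The sketch below
assumes a compiled object dev_cgroup.o and a target cgroup "test";
the paths are hypothetical and the exact command syntax may differ
across bpftool versions::

  # bpftool prog load dev_cgroup.o /sys/fs/bpf/dev_cgroup
  # bpftool cgroup attach /sys/fs/cgroup/test device pinned /sys/fs/bpf/dev_cgroup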
RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
        A read-write nested-keyed file which exists on all cgroups
        except root and describes the currently configured resource
        limits for RDMA/IB devices.

        Lines are keyed by device name and are not ordered.  Each
        line contains space separated resource names and their
        configured limits that can be distributed.

        The following nested keys are defined.

          ==========  =============================
          hca_handle  Maximum number of HCA Handles
          hca_object  Maximum number of HCA Objects
          ==========  =============================

        An example for mlx4 and ocrdma device follows::

          mlx4_0 hca_handle=2 hca_object=2000
          ocrdma1 hca_handle=3 hca_object=max

  rdma.current
        A read-only file that describes current resource usage.  It
        exists on all cgroups except root.

        An example for mlx4 and ocrdma device follows::

          mlx4_0 hca_handle=1 hca_object=20
          ocrdma1 hca_handle=1 hca_object=23
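Limits are written in the same nested-keyed format as they are read.
For example, the first line of the rdma.max output above could have
been configured with the following; the device name is illustrative::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max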
Misc
----

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  The weight of this child cgroup is dependent on the
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so that the neutral value, nice 0, is 100 instead of
1024).


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it were a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP
clone flag can be used with clone(2) and unshare(2) to create a new
cgroup namespace.  The process running inside the cgroup namespace
will have its "/proc/$PID/cgroup" output restricted to the cgroupns
root.  The cgroupns root is the cgroup of the process at the time of
creation of the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes,
the "/proc/$PID/cgroup" file may leak potential system level
information to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data
which is undesirable to expose to the isolated processes.  cgroup
namespace can be used to restrict visibility of this path.  For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace
creator process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
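For example, nsenter(1) can be used to run a command in the cgroup
namespace of another process; the PID below is illustrative, and the
output depends on where the command's cgroup sits relative to the
target's cgroupns root::

  # nsenter --cgroup --target 7353 cat /proc/self/cgroup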
Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root
as the filesystem root.  The process needs CAP_SYS_ADMIN against its
user and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue.  This must be called after
        a queue (device) has been associated with the bio and
        before submission.

  wbc_account_io(@wbc, @page, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
for the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options is supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use the "cgroup.controllers"
  file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller,
utility type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones,
such as the cpu and cpuacct controllers, made sense to be put on the
same hierarchy.  This often meant that userland ended up managing
multiple similar hierarchies repeating the same steps on each
hierarchy whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more
importantly the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which
meant that a thread's cgroup membership couldn't be described in
finite length.  The key might contain any number of entries and was
unlimited in length, which made it highly awkward to manipulate and
led to the addition of controllers which existed only to identify
membership, which in turn exacerbated the original problem of a
proliferating number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.
Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different
cgroups.  This didn't make sense for some controllers and those
controllers ended up implementing different ways to ignore such
situations but much more importantly it blurred the line between API
exposed to individual applications and system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got
abused in combination with thread granularity.  cgroups were
delegated to individual applications so that they could create and
manage their own sub-hierarchies and control resource distributions
along them.  This effectively raised cgroup to the status of a
syscall-like API exposed to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path,
open and then read and/or write to it.  This is not only extremely
clunky and unusual but also inherently racy.  There is no
conventional way to define transaction across the required steps and
nothing can guarantee that the process would actually be operating on
its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs
to a system-management pseudo filesystem.  cgroup ended up with
interface knobs which were not properly abstracted or refined and
directly revealed kernel internal details.  These knobs got exposed
to individual applications through the ill-defined delegation
mechanism, effectively abusing cgroup as a shortcut to implementing
public APIs without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and
its children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.
The hidden leaf had its own copies of all the knobs with ``leaf_``
prefixed.  While this allowed equivalent control over internal
threads, it came with serious drawbacks.  It always added an extra
layer of nesting which wouldn't be necessary otherwise, made the
interface messy and significantly complicated the implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with them; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core
side was how an empty cgroup was notified - a userland helper binary
was forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and
treating all cgroups as if they were all located directly under the
root cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and
updates controllers so that they expose minimal and consistent
interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a
global rbtree and treated like equal peers, regardless of where they
are located in the hierarchy.  This makes subtree delegation
impossible.  Second, the soft limit reclaim pass is so aggressive
that it not only introduces high allocation latencies into the
system, but also impacts system performance due to overreclaim, to
the point where the feature becomes self-defeating.
The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its low
boundary, which makes delegation of subtrees possible.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of
the available memory.  The memory consumption of workloads varies
during runtime, and that requires users to overcommit.  But doing
that with a strict upper limit requires either a fairly accurate
prediction of the working set size or adding slack to the limit.
Since working set size estimation is hard and error prone, and
getting it wrong results in OOM kills, most users tend to err on the
side of a looser limit and end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.  The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of
the system than to kill the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage
was subject to a race condition, where concurrent charges could cause
the limit setting to fail.  memory.max on the other hand will first
set the limit to prevent new charges, and then reclaim and OOM kill
until the new limit is met - or the task writing to memory.max is
killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.