.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy,
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled
or disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy cannot be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to settle
on the hierarchies and controller associations before putting the
controllers to use after system boot.

During the transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
    Consider cgroup namespaces as delegation boundaries. This
    option is system wide and can only be set on mount or modified
    through remount from the init namespace. The mount option is
    ignored on non-init namespace mounts. Please refer to the
    Delegation section for details.

  favordynmods
    Reduce the latencies of dynamic cgroup modifications such as
    task migrations and controller on/offs at the cost of making
    hot path operations such as forks and exits more expensive.
    The static usage pattern of creating a cgroup, enabling
    controllers, and then seeding it with CLONE_INTO_CGROUP is
    not affected by this option.

  memory_localevents
    Only populate memory.events with data for the current cgroup,
    and not any subtrees. This is legacy behaviour; the default
    behaviour without this option is to include subtree counts.
    This option is system wide and can only be set on mount or
    modified through remount from the init namespace. The mount
    option is ignored on non-init namespace mounts.

  memory_recursiveprot
    Recursively apply memory.min and memory.low protection to
    entire subtrees, without requiring explicit downward
    propagation into leaf cgroups. This allows protecting entire
    subtrees from one another, while retaining free competition
    within those subtrees. This should have been the default
    behavior but is a mount option to avoid regressing setups
    relying on the original semantics (e.g. specifying bogusly
    high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
    Count HugeTLB memory usage towards the cgroup's overall
    memory usage for the memory controller (for the purpose of
    statistics reporting and memory protection). This is a new
    behavior that could regress existing setups, so it must be
    explicitly opted in with this mount option.

    A few caveats to keep in mind:

    * There is no HugeTLB pool management involved in the memory
      controller. The pre-allocated pool does not belong to anyone.
      Specifically, when a new HugeTLB folio is allocated to
      the pool, it is not accounted for from the perspective of the
      memory controller. It is only charged to a cgroup when it is
      actually used (e.g. at page fault time). Host memory
      overcommit management has to consider this when configuring
      hard limits. In general, HugeTLB pool management should be
      done via other mechanisms (such as the HugeTLB controller).
    * Failure to charge a HugeTLB folio to the memory controller
      results in SIGBUS. This could happen even if the HugeTLB pool
      still has pages available (but the cgroup limit is hit and
      the reclaim attempt fails).
    * Charging HugeTLB memory towards the memory controller affects
      memory protection and reclaim dynamics. Any userspace tuning
      (of low and min limits, for example) needs to take this into
      account.
    * HugeTLB pages utilized while this option is not selected
      will not be tracked by the memory controller (even if cgroup
      v2 is remounted later on).

  pids_localevents
    This option restores the v1-like behavior of pids.events:max,
    that is, only local (inside cgroup proper) fork failures are
    counted. Without this option pids.events:max represents any
    pids.max enforcement across the cgroup's subtree.


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.
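
For example, reading "cgroup.procs" of a cgroup with two member
processes might produce the following (the PIDs shown are purely
illustrative)::

  # cat cgroup.procs
  1204
  1241
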
The PIDs are not ordered and the same PID may show up more than once
if the process got moved to another cgroup and then back or the PID
got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain invalid"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its
child cgroups becomes threaded or threaded controllers are enabled in
the "cgroup.subtree_control" file while there are processes in the
cgroup. A threaded domain reverts to a normal domain when the
conditions clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.
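
Putting the pieces together, a minimal sketch of building a threaded
subtree and spreading a process's threads across it might look as
follows (the cgroup names, $PID and $TID are illustrative)::

  # mkdir domain domain/t1 domain/t2
  # echo threaded > domain/t1/cgroup.type   # "domain" becomes a threaded domain
  # echo threaded > domain/t2/cgroup.type
  # echo $PID > domain/cgroup.procs         # bring the process into the subtree
  # echo "+cpu" > domain/cgroup.subtree_control
  # echo $TID > domain/t1/cgroup.threads    # move individual threads around
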

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or they all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.
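
For example, in a chain A - B - C, "io" can be distributed to C only
after it has been enabled at each level above; attempting the second
write below before the first fails because "io" does not yet appear
in B's "cgroup.controllers" (the paths are illustrative)::

  # echo "+io" > A/cgroup.subtree_control    # A distributes io to B
  # echo "+io" > A/B/cgroup.subtree_control  # now B may distribute io to C
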
A controller can be enabled only if the parent has the controller
enabled and a controller can't be disabled if one or more children
have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types. Once
delegated, the user can build a sub-hierarchy under the directory,
organize processes inside it as they see fit and further distribute
the resources received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
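
Concretely, the first delegation method might look like the following
sketch, which hands a sub-hierarchy to an unprivileged user (the user
name U0 and the mount path are illustrative)::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown U0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control

U0 can then create child cgroups under the delegated directory and
manage processes within it, but cannot write to the resource control
interface files of "delegated" itself, as those control distribution
of the parent's resources.
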

Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

For example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lower case letters and
'_'s but never begins with an '_', so '_' can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.
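
For example, giving a child cgroup an '_' prefix guarantees that its
name can never collide with a current or future interface file (the
name is illustrative)::

  # mkdir _my-service
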

Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes the major schemes in use along with their expected
behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type. A worked example appears at the end
of this chapter.


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children cannot exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.
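
To make the weight model above concrete: three active siblings with
"cpu.weight" values of 100, 100 and 200 have a weight sum of 400, so
they receive 100/400 = 25%, 25% and 200/400 = 50% of the parent's CPU
cycles respectively; if the 200-weight child goes idle, the remaining
two split the cycles evenly. A sketch (the cgroup names are
illustrative)::

  # mkdir parent/a parent/b parent/c
  # echo "+cpu" > parent/cgroup.subtree_control
  # echo 100 > parent/a/cpu.weight
  # echo 100 > parent/b/cpu.weight
  # echo 200 > parent/c/cpu.weight   # 200/400 = 50% while all are active
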

Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

    VAL0\n
    VAL1\n
    ...

  Space separated values
  (when read-only or multiple values can be written at once)

    VAL0 VAL1 ...\n

  Flat keyed

    KEY0 VAL0\n
    KEY1 VAL1\n
    ...

  Nested keyed

    KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
    KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
    ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for the most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  sufficient and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements a best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should
  be generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
    A read-write single value file which exists on non-root
    cgroups.

    When read, it indicates the current type of the cgroup, which
    can be one of the following values.

    - "domain" : A normal valid domain cgroup.

    - "domain threaded" : A threaded domain cgroup which is
      serving as the root of a threaded subtree.

    - "domain invalid" : A cgroup which is in an invalid state.
      It can't be populated or have controllers enabled. It may
      be allowed to become a threaded cgroup.

    - "threaded" : A threaded cgroup which is a member of a
      threaded subtree.

    A cgroup can be turned into a threaded cgroup by writing
    "threaded" to this file.

  cgroup.procs
    A read-write new-line separated values file which exists on
    all cgroups.

    When read, it lists the PIDs of all processes which belong to
    the cgroup one-per-line. The PIDs are not ordered and the
    same PID may show up more than once if the process got moved
    to another cgroup and then back or the PID got recycled while
    reading.

    A PID can be written to migrate the process associated with
    the PID to the cgroup. The writer should match all of the
    following conditions.

    - It must have write access to the "cgroup.procs" file.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

    In a threaded cgroup, reading this file fails with EOPNOTSUPP
    as all the processes belong to the thread root. Writing is
    supported and moves every thread of the process to the cgroup.

  cgroup.threads
    A read-write new-line separated values file which exists on
    all cgroups.

    When read, it lists the TIDs of all threads which belong to
    the cgroup one-per-line. The TIDs are not ordered and the
    same TID may show up more than once if the thread got moved to
    another cgroup and then back or the TID got recycled while
    reading.

    A TID can be written to migrate the thread associated with the
    TID to the cgroup. The writer should match all of the
    following conditions.

    - It must have write access to the "cgroup.threads" file.

    - The cgroup that the thread is currently in must be in the
      same resource domain as the destination cgroup.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

  cgroup.controllers
    A read-only space separated values file which exists on all
    cgroups.

    It shows a space separated list of all controllers available
    to the cgroup. The controllers are not ordered.

  cgroup.subtree_control
    A read-write space separated values file which exists on all
    cgroups. Starts out empty.

    When read, it shows a space separated list of the controllers
    which are enabled to control resource distribution from the
    cgroup to its children.

    A space separated list of controllers prefixed with '+' or '-'
    can be written to enable or disable controllers. A controller
    name prefixed with '+' enables the controller and '-' disables
    it. If a controller appears more than once on the list, the
    last one is effective. When multiple enable and disable
    operations are specified, either all succeed or all fail.

  cgroup.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    populated
      1 if the cgroup or its descendants contains any live
      processes; otherwise, 0.
    frozen
      1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
    A read-write single value file. The default is "max".

    Maximum allowed number of descendant cgroups.
    If the actual number of descendants is equal to or larger,
    an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
    A read-write single value file. The default is "max".

    Maximum allowed descent depth below the current cgroup.
    If the actual descent depth is equal to or larger,
    an attempt to create a new child cgroup will fail.

  cgroup.stat
    A read-only flat-keyed file with the following entries:

    nr_descendants
      Total number of visible descendant cgroups.

    nr_dying_descendants
      Total number of dying descendant cgroups. A cgroup becomes
      dying after being deleted by a user. The cgroup will remain
      in the dying state for some undefined time (which can depend
      on system load) before being completely destroyed.

      A process can't enter a dying cgroup under any circumstances,
      and a dying cgroup can't revive.

      A dying cgroup can consume system resources not exceeding
      the limits which were active at the moment of cgroup
      deletion.

    nr_subsys_<cgroup_subsys>
      Total number of live cgroup subsystems (e.g. memory
      cgroup) at and beneath the current cgroup.

    nr_dying_subsys_<cgroup_subsys>
      Total number of dying cgroup subsystems (e.g. memory
      cgroup) at and beneath the current cgroup.

  cgroup.freeze
    A read-write single value file which exists on non-root cgroups.
    Allowed values are "0" and "1". The default is "0".

    Writing "1" to the file causes freezing of the cgroup and all
    descendant cgroups. This means that all belonging processes will
    be stopped and will not run until the cgroup is explicitly
    unfrozen. Freezing of the cgroup may take some time; when this
    action is completed, the "frozen" value in the cgroup.events
    control file will be updated to "1" and the corresponding
    notification will be issued.

    A cgroup can be frozen either by its own settings, or by settings
    of any ancestor cgroups. If any ancestor cgroup is frozen, the
    cgroup will remain frozen.

    Processes in the frozen cgroup can be killed by a fatal signal.
    They can also enter and leave a frozen cgroup: either by an
    explicit move by a user, or if freezing of the cgroup races with
    fork(). If a process is moved to a frozen cgroup, it stops. If a
    process is moved out of a frozen cgroup, it starts running again.

    The frozen status of a cgroup doesn't affect any cgroup tree
    operations: it's possible to delete a frozen (and empty) cgroup,
    as well as create new sub-cgroups.
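
    As a sketch, freezing a cgroup and checking for the completed
    transition might look like the following; because the transition
    is asynchronous, a real user would poll or wait for a
    notification on "cgroup.events" rather than reading it once (the
    path is illustrative)::

      # echo 1 > /sys/fs/cgroup/workload/cgroup.freeze
      # cat /sys/fs/cgroup/workload/cgroup.events
      populated 1
      frozen 1
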

  cgroup.kill
    A write-only single value file which exists in non-root cgroups.
    The only allowed value is "1".

    Writing "1" to the file causes the cgroup and all descendant
    cgroups to be killed. This means that all processes located in
    the affected cgroup tree will be killed via SIGKILL.

    Killing a cgroup tree will deal with concurrent forks
    appropriately and is protected against migrations.

    In a threaded cgroup, writing this file fails with EOPNOTSUPP as
    killing cgroups is a process directed operation, i.e. it affects
    the whole thread-group.

  cgroup.pressure
    A read-write single value file whose allowed values are "0" and
    "1". The default is "1".

    Writing "0" to the file will disable the cgroup PSI accounting.
    Writing "1" to the file will re-enable the cgroup PSI accounting.

    This control attribute is not hierarchical, so disabling or
    enabling PSI accounting in a cgroup does not affect PSI
    accounting in descendants and doesn't need to pass enablement
    via ancestors from the root.

    The reason this control attribute exists is that PSI accounts
    stalls for each cgroup separately and aggregates it at each
    level of the hierarchy. This may cause non-negligible overhead
    for some workloads when under a deep level of the hierarchy, in
    which case this control attribute can be used to disable PSI
    accounting in the non-leaf cgroups.

  irq.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for IRQ/SOFTIRQ. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and an absolute bandwidth allocation model
for realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed. The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well
as the maximum desired frequency, which should not be exceeded by a
CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes.
For a kernel built with the CONFIG_RT_GROUP_SCHED option enabled for
group scheduling of realtime processes, the cpu controller can only
be enabled when all RT processes are in the root cgroup. This
limitation does not apply if CONFIG_RT_GROUP_SCHED is disabled. Be
aware that system management software may already have placed RT
processes into nonroot cgroups during the system boot process, and
these processes may need to be moved to the root cgroup before the
cpu controller can be enabled with a CONFIG_RT_GROUP_SCHED enabled
kernel.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
    A read-only flat-keyed file.
    This file exists whether the controller is enabled or not.

    It always reports the following three stats:

    - usage_usec
    - user_usec
    - system_usec

    and the following five when the controller is enabled:

    - nr_periods
    - nr_throttled
    - throttled_usec
    - nr_bursts
    - burst_usec

  cpu.weight
    A read-write single value file which exists on non-root
    cgroups. The default is "100".

    For non-idle groups (cpu.idle = 0), the weight is in the
    range [1, 10000].

    If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
    then the weight will show as 0.

  cpu.weight.nice
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    The nice value is in the range [-20, 19].

    This interface file is an alternative interface for
    "cpu.weight" and allows reading and setting weight using the
    same values used by nice(2). Because the range is smaller and
    the granularity is coarser for the nice values, the read value
    is the closest approximation of the current weight.

  cpu.max
    A read-write two value file which exists on non-root cgroups.
    The default is "max 100000".

    The maximum bandwidth limit. It's in the following format::

      $MAX $PERIOD

    which indicates that the group may consume up to $MAX in each
    $PERIOD duration. "max" for $MAX indicates no limit. If only
    one number is written, $MAX is updated.

  cpu.max.burst
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    The burst is in the range [0, $MAX].

  cpu.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for CPU. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
    A read-write single value file which exists on non-root cgroups.
    The default is "0", i.e. no utilization boosting.

    The requested minimum utilization (protection) as a percentage
    rational number, e.g. 12.34 for 12.34%.

    This interface allows reading and setting minimum utilization
    clamp values similar to sched_setattr(2). This minimum
    utilization value is used to clamp the task specific minimum
    utilization clamp.

    The requested minimum utilization (protection) is always capped
    by the current value for the maximum utilization (limit), i.e.
    `cpu.uclamp.max`.

  cpu.uclamp.max
    A read-write single value file which exists on non-root cgroups.
    The default is "max", i.e. no utilization capping.

    The requested maximum utilization (limit) as a percentage
    rational number, e.g. 98.76 for 98.76%.

    This interface allows reading and setting maximum utilization
    clamp values similar to sched_setattr(2). This maximum
    utilization value is used to clamp the task specific maximum
    utilization clamp.

  cpu.idle
    A read-write single value file which exists on non-root cgroups.
    The default is 0.

    This is the cgroup analog of the per-task SCHED_IDLE sched
    policy. Setting this value to 1 will make the scheduling policy
    of the cgroup SCHED_IDLE. The threads inside the cgroup will
    retain their own relative priorities, but the cgroup itself will
    be treated as very low priority relative to its peers.


Memory
------

The "memory" controller regulates distribution of memory.
Memory is stateful and implements both limit and protection models.
Due to the intertwining between memory usage and reclaim pressure and
the stateful nature of memory, the distribution model is relatively
complex.

While not completely watertight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory currently being used by the cgroup
    and its descendants.

  memory.min
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    Hard memory protection. If the memory usage of a cgroup
    is within its effective min boundary, the cgroup's memory
    won't be reclaimed under any conditions. If there is no
    unprotected reclaimable memory available, the OOM killer
    is invoked. Above the effective min boundary (or
    effective low boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    The effective min boundary is limited by the memory.min values
    of all ancestor cgroups. If there is memory.min overcommitment
    (child cgroups require more protected memory than the parent
    will allow), then each child cgroup will get the part of the
    parent's protection proportional to its actual memory usage
    below memory.min.

    Putting more memory than generally available under this
    protection is discouraged and may lead to constant OOMs.

    If a memory cgroup is not populated with processes,
    its memory.min is ignored.

  memory.low
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    Best-effort memory protection. If the memory usage of a
    cgroup is within its effective low boundary, the cgroup's
    memory won't be reclaimed unless there is no reclaimable
    memory available in unprotected cgroups.
    Above the effective low boundary (or
    effective min boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    The effective low boundary is limited by the memory.low values
    of all ancestor cgroups. If there is memory.low overcommitment
    (child cgroups require more protected memory than the parent
    will allow), then each child cgroup will get the part of the
    parent's protection proportional to its actual memory usage
    below memory.low.

    Putting more memory than generally available under this
    protection is discouraged.

  memory.high
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Memory usage throttle limit.
    If a cgroup's usage goes over the high boundary, the processes
    of the cgroup are throttled and put under heavy reclaim
    pressure.

    Going over the high limit never invokes the OOM killer and
    under extreme conditions the limit may be breached. The high
    limit should be used in scenarios where an external process
    monitors the limited cgroup to alleviate heavy reclaim
    pressure.

  memory.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Memory usage hard limit. This is the main mechanism to limit
    memory usage of a cgroup. If a cgroup's memory usage reaches
    this limit and can't be reduced, the OOM killer is invoked in
    the cgroup. Under certain circumstances, the usage may go
    over the limit temporarily.

    In the default configuration, regular 0-order allocations
    always succeed unless the OOM killer chooses the current task
    as a victim.

    Some kinds of allocations don't invoke the OOM killer. The
    caller could retry them differently, return -ENOMEM to
    userspace, or silently ignore them in cases like disk
    readahead.

  memory.reclaim
    A write-only nested-keyed file which exists for all cgroups.

    This is a simple interface to trigger memory reclaim in the
    target cgroup.

    Example::

      echo "1G" > memory.reclaim

    Please note that the kernel can over or under reclaim from
    the target cgroup. If fewer bytes are reclaimed than the
    specified amount, -EAGAIN is returned.

    Please note that the proactive reclaim (triggered by this
    interface) is not meant to indicate memory pressure on the
    memory cgroup. Therefore socket memory balancing triggered by
    the memory reclaim normally is not exercised in this case.
    This means that the networking layer will not adapt based on
    reclaim induced by memory.reclaim.

    The following nested keys are defined.

      ==========  ================================
      swappiness  Swappiness value to reclaim with
      ==========  ================================

    Specifying a swappiness value instructs the kernel to perform
    the reclaim with that swappiness value. Note that this has the
    same semantics as vm.swappiness applied to memcg reclaim with
    all the existing limitations and potential future extensions.

  memory.peak
    A read-write single value file which exists on non-root cgroups.

    The max memory usage recorded for the cgroup and its descendants
    since either the creation of the cgroup or the most recent reset
    for that FD.

    A write of any non-empty string to this file resets it to the
    current memory usage for subsequent reads through the same
    file descriptor.

  memory.oom.group
    A read-write single value file which exists on non-root
    cgroups. The default value is "0".

    Determines whether the cgroup should be treated as
    an indivisible workload by the OOM killer. If set,
    all tasks belonging to the cgroup or to its descendants
    (if the memory cgroup is not a leaf cgroup) are killed
    together or not at all. This can be used to avoid
    partial kills to guarantee workload integrity.

    Tasks with the OOM protection (oom_score_adj set to -1000)
    are treated as an exception and are never killed.

    If the OOM killer is invoked in a cgroup, it's not going
    to kill any tasks outside of this cgroup, regardless of
    the memory.oom.group values of ancestor cgroups.

  memory.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    Note that all fields in this file are hierarchical and the
    file modified event can be generated due to an event down the
    hierarchy. For the local events at the cgroup level, see
    memory.events.local.

    low
      The number of times the cgroup is reclaimed due to
      high memory pressure even though its usage is under
      the low boundary. This usually indicates that the low
      boundary is over-committed.

    high
      The number of times processes of the cgroup are
      throttled and routed to perform direct memory reclaim
      because the high memory boundary was exceeded. For a
      cgroup whose memory usage is capped by the high limit
      rather than global memory pressure, this event's
      occurrences are expected.

    max
      The number of times the cgroup's memory usage was
      about to go over the max boundary. If direct reclaim
      fails to bring it down, the cgroup goes to OOM state.

    oom
      The number of times the cgroup's memory usage reached
      the limit and allocation was about to fail.

      This event is not raised if the OOM killer is not
      considered as an option, e.g. for failed high-order
      allocations or if the caller asked not to retry attempts.

    oom_kill
      The number of processes belonging to this cgroup
      killed by any kind of OOM killer.

    oom_group_kill
      The number of times a group OOM has occurred.

  memory.events.local
    Similar to memory.events but the fields in the file are local
    to the cgroup, i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

  memory.stat
    A read-only flat-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    on the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    Entries which have no per-node counter (and thus do not show
    up in memory.numa_stat) are tagged with 'npn' (non-per-node).

    anon
      Amount of memory used in anonymous mappings such as
      brk(), sbrk(), and mmap(MAP_ANONYMOUS).

    file
      Amount of memory used to cache filesystem data,
      including tmpfs and shared memory.

    kernel (npn)
      Amount of total kernel memory, including
      (kernel_stack, pagetables, percpu, vmalloc, slab) in
      addition to other kernel memory use cases.

    kernel_stack
      Amount of memory allocated to kernel stacks.

    pagetables
      Amount of memory allocated for page tables.

    sec_pagetables
      Amount of memory allocated for secondary page tables;
      this currently includes KVM mmu allocations on x86
      and arm64 and IOMMU page tables.
1463 1464 percpu (npn) 1465 Amount of memory used for storing per-cpu kernel 1466 data structures. 1467 1468 sock (npn) 1469 Amount of memory used in network transmission buffers 1470 1471 vmalloc (npn) 1472 Amount of memory used for vmap backed memory. 1473 1474 shmem 1475 Amount of cached filesystem data that is swap-backed, 1476 such as tmpfs, shm segments, shared anonymous mmap()s 1477 1478 zswap 1479 Amount of memory consumed by the zswap compression backend. 1480 1481 zswapped 1482 Amount of application memory swapped out to zswap. 1483 1484 file_mapped 1485 Amount of cached filesystem data mapped with mmap() 1486 1487 file_dirty 1488 Amount of cached filesystem data that was modified but 1489 not yet written back to disk 1490 1491 file_writeback 1492 Amount of cached filesystem data that was modified and 1493 is currently being written back to disk 1494 1495 swapcached 1496 Amount of swap cached in memory. The swapcache is accounted 1497 against both memory and swap usage. 1498 1499 anon_thp 1500 Amount of memory used in anonymous mappings backed by 1501 transparent hugepages 1502 1503 file_thp 1504 Amount of cached filesystem data backed by transparent 1505 hugepages 1506 1507 shmem_thp 1508 Amount of shm, tmpfs, shared anonymous mmap()s backed by 1509 transparent hugepages 1510 1511 inactive_anon, active_anon, inactive_file, active_file, unevictable 1512 Amount of memory, swap-backed and filesystem-backed, 1513 on the internal memory management lists used by the 1514 page reclaim algorithm. 1515 1516 As these represent internal list state (eg. shmem pages are on anon 1517 memory management lists), inactive_foo + active_foo may not be equal to 1518 the value for the foo counter, since the foo counter is type-based, not 1519 list-based. 1520 1521 slab_reclaimable 1522 Part of "slab" that might be reclaimed, such as 1523 dentries and inodes. 1524 1525 slab_unreclaimable 1526 Part of "slab" that cannot be reclaimed on memory 1527 pressure. 1528 1529 slab (npn) 1530 Amount of memory used for storing in-kernel data 1531 structures. 1532 1533 workingset_refault_anon 1534 Number of refaults of previously evicted anonymous pages. 1535 1536 workingset_refault_file 1537 Number of refaults of previously evicted file pages. 1538 1539 workingset_activate_anon 1540 Number of refaulted anonymous pages that were immediately 1541 activated. 1542 1543 workingset_activate_file 1544 Number of refaulted file pages that were immediately activated. 1545 1546 workingset_restore_anon 1547 Number of restored anonymous pages which have been detected as 1548 an active workingset before they got reclaimed. 1549 1550 workingset_restore_file 1551 Number of restored file pages which have been detected as an 1552 active workingset before they got reclaimed. 
1553 1554 workingset_nodereclaim 1555 Number of times a shadow node has been reclaimed 1556 1557 pgscan (npn) 1558 Amount of scanned pages (in an inactive LRU list) 1559 1560 pgsteal (npn) 1561 Amount of reclaimed pages 1562 1563 pgscan_kswapd (npn) 1564 Amount of scanned pages by kswapd (in an inactive LRU list) 1565 1566 pgscan_direct (npn) 1567 Amount of scanned pages directly (in an inactive LRU list) 1568 1569 pgscan_khugepaged (npn) 1570 Amount of scanned pages by khugepaged (in an inactive LRU list) 1571 1572 pgsteal_kswapd (npn) 1573 Amount of reclaimed pages by kswapd 1574 1575 pgsteal_direct (npn) 1576 Amount of reclaimed pages directly 1577 1578 pgsteal_khugepaged (npn) 1579 Amount of reclaimed pages by khugepaged 1580 1581 pgfault (npn) 1582 Total number of page faults incurred 1583 1584 pgmajfault (npn) 1585 Number of major page faults incurred 1586 1587 pgrefill (npn) 1588 Amount of scanned pages (in an active LRU list) 1589 1590 pgactivate (npn) 1591 Amount of pages moved to the active LRU list 1592 1593 pgdeactivate (npn) 1594 Amount of pages moved to the inactive LRU list 1595 1596 pglazyfree (npn) 1597 Amount of pages postponed to be freed under memory pressure 1598 1599 pglazyfreed (npn) 1600 Amount of reclaimed lazyfree pages 1601 1602 zswpin 1603 Number of pages moved in to memory from zswap. 1604 1605 zswpout 1606 Number of pages moved out of memory to zswap. 1607 1608 zswpwb 1609 Number of pages written from zswap to swap. 1610 1611 thp_fault_alloc (npn) 1612 Number of transparent hugepages which were allocated to satisfy 1613 a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE 1614 is not set. 1615 1616 thp_collapse_alloc (npn) 1617 Number of transparent hugepages which were allocated to allow 1618 collapsing an existing range of pages. This counter is not 1619 present when CONFIG_TRANSPARENT_HUGEPAGE is not set. 1620 1621 thp_swpout (npn) 1622 Number of transparent hugepages which are swapout in one piece 1623 without splitting. 1624 1625 thp_swpout_fallback (npn) 1626 Number of transparent hugepages which were split before swapout. 1627 Usually because failed to allocate some continuous swap space 1628 for the huge page. 1629 1630 numa_pages_migrated (npn) 1631 Number of pages migrated by NUMA balancing. 1632 1633 numa_pte_updates (npn) 1634 Number of pages whose page table entries are modified by 1635 NUMA balancing to produce NUMA hinting faults on access. 1636 1637 numa_hint_faults (npn) 1638 Number of NUMA hinting faults. 1639 1640 pgdemote_kswapd 1641 Number of pages demoted by kswapd. 1642 1643 pgdemote_direct 1644 Number of pages demoted directly. 1645 1646 pgdemote_khugepaged 1647 Number of pages demoted by khugepaged. 1648 1649 memory.numa_stat 1650 A read-only nested-keyed file which exists on non-root cgroups. 1651 1652 This breaks down the cgroup's memory footprint into different 1653 types of memory, type-specific details, and other information 1654 per node on the state of the memory management system. 1655 1656 This is useful for providing visibility into the NUMA locality 1657 information within an memcg since the pages are allowed to be 1658 allocated from any physical node. One of the use case is evaluating 1659 application performance by combining this information with the 1660 application's CPU allocation. 1661 1662 All memory amounts are in bytes. 1663 1664 The output format of memory.numa_stat is:: 1665 1666 type N0=<bytes in node 0> N1=<bytes in node 1> ... 
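
    Since new keys can appear at any position, look counters up by
    key rather than by line number.  A minimal shell sketch for
    reading a single counter::

      awk '$1 == "anon" {print $2}' memory.stat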

  memory.numa_stat
    A read-only nested-keyed file which exists on non-root
    cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality
    information within a memcg since the pages are allowed to be
    allocated from any physical node.  One use case is evaluating
    application performance by combining this information with the
    application's CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

      type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries
    can show up in the middle.  Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    See memory.stat for the meaning of the entries.

  memory.swap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of swap currently being used by the cgroup
    and its descendants.

  memory.swap.high
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Swap usage throttle limit.  If a cgroup's swap usage exceeds
    this limit, all its further allocations will be throttled to
    allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup.  It is
    NOT designed to manage the amount of swapping a workload does
    during regular operation.  Compare to memory.swap.max, which
    prohibits swapping past a set amount, but lets the cgroup
    continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  memory.swap.peak
    A read-write single value file which exists on non-root
    cgroups.

    The max swap usage recorded for the cgroup and its descendants
    since the creation of the cgroup or the most recent reset for
    that file descriptor.

    A write of any non-empty string to this file resets it to the
    current swap usage for subsequent reads through the same
    file descriptor.

  memory.swap.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Swap usage hard limit.  If a cgroup's swap usage reaches this
    limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.  Unless specified
    otherwise, a value change in this file generates a file
    modified event.

      high
        The number of times the cgroup's swap usage was over
        the high threshold.

      max
        The number of times the cgroup's swap usage was about
        to go over the max boundary and swap allocation
        failed.

      fail
        The number of times swap allocation failed either
        because of running out of swap system-wide or the max
        limit.

    When the limit is reduced below the current usage, the
    existing swap entries are reclaimed gradually and the swap
    usage may stay higher than the limit for an extended period of
    time.  This reduces the impact on the workload and memory
    management.

  memory.zswap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory consumed by the zswap compression
    backend.

  memory.zswap.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Zswap usage hard limit.  If a cgroup's zswap pool reaches this
    limit, it will refuse to take any more stores until existing
    entries fault back in or are written out to disk.
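
    For example, to cap the compressed pool for a cgroup at 256MiB
    (an illustrative value) while leaving other swap behavior
    unchanged::

      echo "256M" > memory.zswap.max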

  memory.zswap.writeback
    A read-write single value file.  The default value is "1".
    Note that this setting is hierarchical, i.e. the writeback
    would be implicitly disabled for child cgroups if the upper
    hierarchy does so.

    When this is set to 0, all swapping attempts to swap devices
    are disabled.  This includes both zswap writeback and swapping
    due to zswap store failures.  If the zswap store failures are
    recurring (e.g. if the pages are incompressible), users can
    observe reclaim inefficiency after disabling writeback
    (because the same pages might be rejected again and again).

    Note that this is subtly different from setting
    memory.swap.max to 0, as it still allows for pages to be
    written to the zswap pool.  This setting has no effect if
    zswap is disabled, in which case swapping is allowed unless
    memory.swap.max is set to 0.

  memory.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for memory.  See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also perform
just as well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; the pressure stall information provided by "memory.pressure"
(see :ref:`Documentation/accounting/psi.rst <psi>`) is such a measure.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
    A read-only nested-keyed file.

    Lines are keyed by $MAJ:$MIN device numbers and not ordered.
    The following nested keys are defined.

      ======  =====================
      rbytes  Bytes read
      wbytes  Bytes written
      rios    Number of read IOs
      wios    Number of write IOs
      dbytes  Bytes discarded
      dios    Number of discard IOs
      ======  =====================

    An example read output follows::

      8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
      8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the Quality of Service of the IO cost
    model based controller (CONFIG_BLK_CGROUP_IOCOST) which
    currently implements "io.weight" proportional control.  Lines
    are keyed by $MAJ:$MIN device numbers and not ordered.  The
    line for a given device is populated on the first write for
    the device on "io.cost.qos" or "io.cost.model".  The following
    nested keys are defined.

      ======  =====================================
      enable  Weight-based control enable
      ctrl    "auto" or "user"
      rpct    Read latency percentile [0, 100]
      rlat    Read latency threshold
      wpct    Write latency percentile [0, 100]
      wlat    Write latency threshold
      min     Minimum scaling percentage [1, 10000]
      max     Maximum scaling percentage [1, 10000]
      ======  =====================================

    The controller is disabled by default and can be enabled by
    setting "enable" to 1.  "rpct" and "wpct" parameters default
    to zero and the controller uses internal device saturation
    state to adjust the overall IO rate between "min" and "max".

    When better control quality is needed, latency QoS parameters
    can be configured.  For example::

      8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00

    shows that on sdb, the controller is enabled, will consider
    the device saturated if the 95th percentile of read completion
    latencies is above 75ms or that of write completion latencies
    is above 150ms, and will adjust the overall IO issue rate
    between 50% and 150% accordingly.

    The lower the saturation point, the better the latency QoS at
    the cost of aggregate bandwidth.  The narrower the allowed
    adjustment range between "min" and "max", the more closely the
    IO behavior conforms to the cost model.  Note that the IO
    issue base rate may be far off from 100% and setting "min" and
    "max" blindly can lead to a significant loss of device
    capacity or control quality.  "min" and "max" are useful for
    regulating devices which show wide temporary behavior changes
    - e.g. an SSD which accepts writes at the line speed for a
    while and then completely stalls for multiple seconds.

    When "ctrl" is "auto", the parameters are controlled by the
    kernel and may change automatically.  Setting "ctrl" to "user"
    or setting any of the percentile and latency parameters puts
    it into "user" mode and disables the automatic changes.  The
    automatic mode can be restored by setting "ctrl" to "auto".
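
    For example, the following (an illustrative invocation)
    enables the controller on 8:16 with automatically managed QoS
    parameters::

      echo "8:16 enable=1" > io.cost.qos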

  io.cost.model
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the cost model of the IO cost model based
    controller (CONFIG_BLK_CGROUP_IOCOST) which currently
    implements "io.weight" proportional control.  Lines are keyed
    by $MAJ:$MIN device numbers and not ordered.  The line for a
    given device is populated on the first write for the device on
    "io.cost.qos" or "io.cost.model".  The following nested keys
    are defined.

      =====  ================================
      ctrl   "auto" or "user"
      model  The cost model in use - "linear"
      =====  ================================

    When "ctrl" is "auto", the kernel may change all parameters
    dynamically.  When "ctrl" is set to "user" or any other
    parameter is written to, "ctrl" becomes "user" and the
    automatic changes are disabled.

    When "model" is "linear", the following model parameters are
    defined.

      =============  ========================================
      [r|w]bps       The maximum sequential IO throughput
      [r|w]seqiops   The maximum 4k sequential IOs per second
      [r|w]randiops  The maximum 4k random IOs per second
      =============  ========================================

    From the above, the builtin linear model determines the base
    costs of a sequential and random IO and the cost coefficient
    for the IO size.  While simple, this model can cover most
    common device classes acceptably.

    The IO cost model isn't expected to be accurate in an absolute
    sense and is scaled to the device behavior dynamically.

    If needed, tools/cgroup/iocost_coef_gen.py can be used to
    generate device-specific coefficients.

  io.weight
    A read-write flat-keyed file which exists on non-root cgroups.
    The default is "default 100".

    The first line is the default weight applied to devices
    without specific override.  The rest are overrides keyed by
    $MAJ:$MIN device numbers and not ordered.  The weights are in
    the range [1, 10000] and specify the relative amount of IO
    time the cgroup can use in relation to its siblings.

    The default weight can be updated by writing either "default
    $WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
    "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

    An example read output follows::

      default 100
      8:16 200
      8:0 50

  io.max
    A read-write nested-keyed file which exists on non-root
    cgroups.

    BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
    device numbers and not ordered.  The following nested keys are
    defined.

      =====  ==================================
      rbps   Max read bytes per second
      wbps   Max write bytes per second
      riops  Max read IO operations per second
      wiops  Max write IO operations per second
      =====  ==================================

    When writing, any number of nested key-value pairs can be
    specified in any order.  "max" can be specified as the value
    to remove a specific limit.  If the same key is specified
    multiple times, the outcome is undefined.

    BPS and IOPS are measured in each IO direction and IOs are
    delayed if the limit is reached.  Temporary bursts are
    allowed.

    Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

      echo "8:16 rbps=2097152 wiops=120" > io.max

    Reading returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=120

    Write IOPS limit can be removed by writing the following::

      echo "8:16 wiops=max" > io.max

    Reading now returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for IO.  See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.  The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain.  Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected.  It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the
    amount of available memory capped by limits imposed by the
    memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, this is calculated into a ratio against
    total available memory and applied the same way as
    vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that have a
lower latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy.
This means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each other.
Group G will influence nobody::

          [root]
        /    |    \
       A     B     C
      / \          |
     D   F         G


So the ideal way to configure this is to set io.latency in groups A,
B, and C.  Generally you do not want to set a value lower than the
latency your device supports.  Experiment to find the value that works
best for your workload.  Start higher than the expected latency for
your device and watch the avg_lat value in io.stat for your workload
group to get an idea of the latency you see during normal operation.
Use the avg_lat value as a basis for your real setting, setting it
10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving: as long as everybody is meeting their
latency target the controller doesn't do anything.  Once a group
starts missing its target it begins throttling any peer group that has
a higher target than itself.  This throttling takes 2 forms:

- Queue depth throttling.  This is the number of outstanding IOs a
  group is allowed to have.  We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups.  This includes swapping and metadata IO.  These
  types of IO are allowed to occur normally; however, they are
  "charged" to the originating group.  If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase.  The delay value is the number of microseconds
  being added to any process that runs in this group.  Because this
  number can grow quite large if there is a lot of swapping or
  metadata IO occurring, we limit the individual delay events to 1
  second at a time.

Once the victimized group starts meeting its latency target again it
will start unthrottling any peer groups that were throttled
previously.  If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
    This takes a format similar to the other controllers::

      "MAJOR:MINOR target=<target time in microseconds>"
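
    For example, to give the group a (purely illustrative) 3ms
    latency target on device 8:16::

      echo "8:16 target=3000" > io.latency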

  io.stat
    If the controller is enabled you will see extra stats in
    io.stat in addition to the normal ones.

      depth
        This is the current queue depth for the group.

      avg_lat
        This is an exponential moving average with a decay rate of
        1/exp bound by the sampling interval.  The decay rate
        interval can be calculated by multiplying the win value in
        io.stat by the corresponding number of samples based on
        the win value.

      win
        The sampling window size in milliseconds.  This is the
        minimum duration of time between evaluation events.
        Windows only elapse with IO activity.  Idle periods extend
        the most recent window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup
policy, namely the io.prio.class attribute.  The following values are
accepted for that attribute:

  no-change
    Do not modify the I/O priority class.

  promote-to-rt
    For requests that have a non-RT I/O priority class, change it
    into RT.  Also change the priority level of these requests to
    4.  Do not modify the I/O priority of requests that have
    priority class RT.

  restrict-to-be
    For requests that do not have an I/O priority class or that
    have I/O priority class RT, change it into BE.  Also change
    the priority level of these requests to 0.  Do not modify the
    I/O priority class of requests that have priority class IDLE.

  idle
    Change the I/O priority class of all requests into IDLE, the
    lowest I/O priority class.

  none-to-rt
    Deprecated.  Just an alias for promote-to-rt.

The following numerical values are associated with the I/O priority
policies:

+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+

The numerical value that corresponds to each I/O priority class is as
follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as
follows (a worked example follows the list):

- If the I/O priority class policy is promote-to-rt, change the
  request I/O priority class to IOPRIO_CLASS_RT and change the request
  I/O priority level to 4.
- If the I/O priority class policy is not promote-to-rt, translate the
  I/O priority class policy into a number, then change the request I/O
  priority class into the maximum of the I/O priority class policy
  number and the numerical I/O priority class.
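
For example, setting the policy to restrict-to-be (an illustrative
write)::

  echo restrict-to-be > io.prio.class

With this policy (numerical value 2), a request with IOPRIO_CLASS_NONE
(0) or IOPRIO_CLASS_RT (1) becomes max(2, 0) or max(2, 1), i.e.
IOPRIO_CLASS_BE, while a request with IOPRIO_CLASS_IDLE (3) keeps its
class because max(2, 3) is 3.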

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller.  For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Hard limit of number of processes.

  pids.current
    A read-only single value file which exists on non-root
    cgroups.

    The number of processes currently in the cgroup and its
    descendants.

  pids.peak
    A read-only single value file which exists on non-root
    cgroups.

    The maximum value that the number of processes in the cgroup
    and its descendants has ever reached.

  pids.events
    A read-only flat-keyed file which exists on non-root cgroups.
    Unless specified otherwise, a value change in this file
    generates a file modified event.  The following entries are
    defined.

      max
        The number of times the cgroup's total number of processes
        hit the pids.max limit (see also pids_localevents).

  pids.events.local
    Similar to pids.events but the fields in the file are local
    to the cgroup, i.e. not hierarchical.  The file modified event
    generated on this file reflects only the local events.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max.  This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max.  However, it is not possible to violate a cgroup PID policy
through fork() or clone().  These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.
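
For example, with the attached shell as the only process in the
cgroup, a pipeline needing two more processes fails once the limit is
set to 2 (a hypothetical session; the exact shell error message may
differ)::

  # echo 2 > pids.max
  # echo $$ > cgroup.procs
  # ( /bin/echo "Here are some processes for you." | cat )
  sh: fork: Resource temporarily unavailable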


Cpuset
------

The "cpuset" controller provides a mechanism for constraining
the CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical.  That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested CPUs to be used by tasks within this
    cgroup.  The actual list of CPUs to be granted, however, is
    subject to constraints imposed by its parent and can differ
    from the requested CPUs.

    The CPU numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.cpus
      0-4,6,8-10

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.cpus" or all the available CPUs if none is found.

    The value of "cpuset.cpus" stays constant until the next
    update and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined CPUs that are actually granted to this
    cgroup by its parent.  These CPUs are allowed to be used by
    tasks within the current cgroup.

    If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
    shows all the CPUs from the parent cgroup that are available
    to be used by this cgroup.  Otherwise, it should be a subset
    of "cpuset.cpus" unless none of the CPUs listed in
    "cpuset.cpus" can be granted.  In this case, it will be
    treated just like an empty "cpuset.cpus".

    Its value will be affected by CPU hotplug events.

  cpuset.mems
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested memory nodes to be used by tasks within
    this cgroup.  The actual list of memory nodes granted,
    however, is subject to constraints imposed by its parent and
    can differ from the requested memory nodes.

    The memory node numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.mems
      0-1,3

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.mems" or all the available memory nodes if none
    is found.

    The value of "cpuset.mems" stays constant until the next
    update and won't be affected by any memory node hotplug
    events.

    Setting a non-empty value to "cpuset.mems" causes memory of
    tasks within the cgroup to be migrated to the designated nodes
    if they are currently using memory outside of the designated
    nodes.

    There is a cost for this memory migration.  The migration
    may not be complete and some memory pages may be left behind.
    So it is recommended that "cpuset.mems" should be set properly
    before spawning new tasks into the cpuset.  Even if there is
    a need to change "cpuset.mems" with active tasks, it shouldn't
    be done frequently.

  cpuset.mems.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined memory nodes that are actually granted to
    this cgroup by its parent.  These memory nodes are allowed to
    be used by tasks within the current cgroup.

    If "cpuset.mems" is empty, it shows all the memory nodes from
    the parent cgroup that will be available to be used by this
    cgroup.  Otherwise, it should be a subset of "cpuset.mems"
    unless none of the memory nodes listed in "cpuset.mems" can be
    granted.  In this case, it will be treated just like an empty
    "cpuset.mems".

    Its value will be affected by memory node hotplug events.

  cpuset.cpus.exclusive
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists all the exclusive CPUs that are allowed to be used
    to create a new cpuset partition.  Its value is not used
    unless the cgroup becomes a valid partition root.  See the
    "cpuset.cpus.partition" section below for a description of
    what a cpuset partition is.

    When the cgroup becomes a partition root, the actual exclusive
    CPUs that are allocated to that partition are listed in
    "cpuset.cpus.exclusive.effective" which may be different
    from "cpuset.cpus.exclusive".  If "cpuset.cpus.exclusive"
    has previously been set, "cpuset.cpus.exclusive.effective"
    is always a subset of it.

    Users can manually set it to a value that is different from
    "cpuset.cpus".  One constraint in setting it is that the list
    of CPUs must be exclusive with respect to
    "cpuset.cpus.exclusive" of its siblings.  If
    "cpuset.cpus.exclusive" of a sibling cgroup isn't set, its
    "cpuset.cpus" value, if set, cannot be a subset of it, so that
    at least one CPU remains available when the exclusive CPUs are
    taken away.

    For a parent cgroup, any one of its exclusive CPUs can only
    be distributed to at most one of its child cgroups.  Having an
    exclusive CPU appearing in two or more of its child cgroups is
    not allowed (the exclusivity rule).  A value that violates the
    exclusivity rule will be rejected with a write error.

    The root cgroup is a partition root and all its available CPUs
    are in its exclusive CPU set.
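
    For example, reserving CPUs 2-3 for a child group in
    preparation for making it a partition root (the paths and
    values are illustrative)::

      # echo "0-7" > /sys/fs/cgroup/grp/cpuset.cpus
      # echo "2-3" > /sys/fs/cgroup/grp/cpuset.cpus.exclusive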

  cpuset.cpus.exclusive.effective
    A read-only multiple values file which exists on all non-root
    cpuset-enabled cgroups.

    This file shows the effective set of exclusive CPUs that
    can be used to create a partition root.  The content
    of this file will always be a subset of its parent's
    "cpuset.cpus.exclusive.effective" if its parent is not the
    root cgroup.  It will also be a subset of
    "cpuset.cpus.exclusive" if it is set.  If
    "cpuset.cpus.exclusive" is not set, it is treated as having an
    implicit value equal to "cpuset.cpus" when forming a local
    partition.

  cpuset.cpus.isolated
    A read-only multiple values file which exists only on the root
    cgroup.

    This file shows the set of all isolated CPUs used in existing
    isolated partitions.  It will be empty if no isolated
    partition is created.

  cpuset.cpus.partition
    A read-write single value file which exists on non-root
    cpuset-enabled cgroups.  This flag is owned by the parent
    cgroup and is not delegatable.

    It accepts only the following input values when written to.

      ==========  =====================================
      "member"    Non-root member of a partition
      "root"      Partition root
      "isolated"  Partition root without load balancing
      ==========  =====================================

    A cpuset partition is a collection of cpuset-enabled cgroups
    with a partition root at the top of the hierarchy and its
    descendants except those that are separate partition roots
    themselves and their descendants.  A partition has exclusive
    access to the set of exclusive CPUs allocated to it.  Other
    cgroups outside of that partition cannot use any CPUs in that
    set.

    There are two types of partitions - local and remote.  A local
    partition is one whose parent cgroup is also a valid partition
    root.  A remote partition is one whose parent cgroup is not a
    valid partition root itself.  Writing to
    "cpuset.cpus.exclusive" is optional for the creation of a
    local partition as its "cpuset.cpus.exclusive" file will
    assume an implicit value that is the same as "cpuset.cpus" if
    it is not set.  Writing the proper "cpuset.cpus.exclusive"
    values down the cgroup hierarchy before the target partition
    root is mandatory for the creation of a remote partition.

    Currently, a remote partition cannot be created under a local
    partition, and none of the ancestors of a remote partition
    root, except the root cgroup, can be a partition root.

    The root cgroup is always a partition root and its state
    cannot be changed.  All other non-root cgroups start out as
    "member".

    When set to "root", the current cgroup is the root of a new
    partition or scheduling domain.  The set of exclusive CPUs is
    determined by the value of its
    "cpuset.cpus.exclusive.effective".

    When set to "isolated", the CPUs in that partition will be in
    an isolated state without any load balancing from the
    scheduler and excluded from the unbound workqueues.  Tasks
    placed in such a partition with multiple CPUs should be
    carefully distributed and bound to each of the individual CPUs
    for optimal performance.

    A partition root ("root" or "isolated") can be in one of the
    two possible states - valid or invalid.  An invalid partition
    root is in a degraded state where some state information may
    be retained, but it behaves more like a "member".

    All possible state transitions among "member", "root" and
    "isolated" are allowed.

    On read, the "cpuset.cpus.partition" file can show the
    following values.

      =============================  =====================================
      "member"                       Non-root member of a partition
      "root"                         Partition root
      "isolated"                     Partition root without load balancing
      "root invalid (<reason>)"      Invalid partition root
      "isolated invalid (<reason>)"  Invalid isolated partition root
      =============================  =====================================

    In the case of an invalid partition root, a descriptive string
    explaining why the partition is invalid is included within
    parentheses.

    For a local partition root to be valid, the following
    conditions must be met.

    1) The parent cgroup is a valid partition root.
    2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
       though it may contain offline CPUs.
    3) The "cpuset.cpus.effective" cannot be empty unless there is
       no task associated with this partition.

    For a remote partition root to be valid, all the above
    conditions except the first one must be met.

    External events like hotplug or changes to "cpuset.cpus" or
    "cpuset.cpus.exclusive" can cause a valid partition root to
    become invalid and vice versa.  Note that a task cannot be
    moved to a cgroup with an empty "cpuset.cpus.effective".

    A valid non-root parent partition may distribute out all its
    CPUs to its child local partitions when there is no task
    associated with it.

    Care must be taken when changing a valid partition root to
    "member", as all its child local partitions, if present, will
    become invalid, causing disruption to tasks running in those
    child partitions.  These inactivated partitions can be
    recovered if their parent is switched back to a partition root
    with a proper value in "cpuset.cpus" or
    "cpuset.cpus.exclusive".

    Poll and inotify events are triggered whenever the state of
    "cpuset.cpus.partition" changes.  That includes changes caused
    by writes to "cpuset.cpus.partition", CPU hotplug or other
    changes that modify the validity status of the partition.
    This allows user space agents to monitor unexpected changes
    to "cpuset.cpus.partition" without continuous polling.

    A user can pre-configure certain CPUs to an isolated state
    with load balancing disabled at boot time with the "isolcpus"
    kernel boot command line option.  If those CPUs are to be put
    into a partition, they have to be used in an isolated
    partition.
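
    For example, continuing the earlier sketch, the group with
    exclusive CPUs 2-3 can be turned into an isolated partition
    and the result verified (a hypothetical session)::

      # echo isolated > cpuset.cpus.partition
      # cat cpuset.cpus.partition
      isolated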


Device controller
-----------------

The device controller manages access to device files.  It includes
both the creation of new device files (using mknod) and access to
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE and
attach them to cgroups with the BPF_CGROUP_DEVICE flag.  On an attempt
to access a device file, the corresponding BPF programs will be
executed, and depending on the return value the attempt will succeed
or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers).  If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.
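
Assuming a suitable program has been compiled, one way to load and
attach it is through bpftool (the object and pin paths here are
illustrative)::

  # bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_cgroup
  # bpftool cgroup attach /sys/fs/cgroup/grp device pinned /sys/fs/bpf/dev_cgroup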


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
    A read-write nested-keyed file that exists for all cgroups
    except the root and describes the currently configured
    resource limits for an RDMA/IB device.

    Lines are keyed by device name and are not ordered.
    Each line contains a space-separated resource name and its
    configured limit that can be distributed.

    The following nested keys are defined.

      ==========  =============================
      hca_handle  Maximum number of HCA Handles
      hca_object  Maximum number of HCA Objects
      ==========  =============================

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=2 hca_object=2000
      ocrdma1 hca_handle=3 hca_object=max

  rdma.current
    A read-only file that describes current resource usage.
    It exists for all cgroups except the root.

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=1 hca_object=20
      ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting HugeTLB usage per control group
and enforces the limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
    Shows current usage for "hugepagesize" hugetlb.  It exists for
    all cgroups except the root.

  hugetlb.<hugepagesize>.max
    Set/show the hard limit of "hugepagesize" hugetlb usage.
    The default value is "max".  It exists for all cgroups except
    the root.

  hugetlb.<hugepagesize>.events
    A read-only flat-keyed file which exists on non-root cgroups.

      max
        The number of allocation failures due to the HugeTLB
        limit.

  hugetlb.<hugepagesize>.events.local
    Similar to hugetlb.<hugepagesize>.events but the fields in the
    file are local to the cgroup, i.e. not hierarchical.  The file
    modified event generated on this file reflects only the local
    events.

  hugetlb.<hugepagesize>.numa_stat
    Similar to memory.numa_stat, it shows the numa information of
    the hugetlb pages of <hugepagesize> in this cgroup.  Only
    active, in-use hugetlb pages are included.  The per-node
    values are in bytes.

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{} in
the include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file.  The provider of the
resource must set its capacity prior to using the resource by calling
misc_cg_set_capacity().

Once a capacity is set, the resource usage can be updated using the
charge and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered, then:

  misc.capacity
    A read-only flat-keyed file shown only in the root cgroup.  It
    shows miscellaneous scalar resources available on the platform
    along with their quantities::

      $ cat misc.capacity
      res_a 50
      res_b 10

  misc.current
    A read-only flat-keyed file shown in all cgroups.  It shows
    the current usage of the resources in the cgroup and its
    children::

      $ cat misc.current
      res_a 3
      res_b 0

  misc.peak
    A read-only flat-keyed file shown in all cgroups.  It shows
    the historical maximum usage of the resources in the cgroup
    and its children::

      $ cat misc.peak
      res_a 10
      res_b 8

  misc.max
    A read-write flat-keyed file shown in the non-root cgroups.
    Allowed maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.max
      res_a max
      res_b 4

    A limit can be set by::

      # echo res_a 1 > misc.max

    A limit can be set to max by::

      # echo res_a max > misc.max

    Limits can be set higher than the capacity value in the
    misc.capacity file.

  misc.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.  Unless specified
    otherwise, a value change in this file generates a file
    modified event.  All fields in this file are hierarchical.

      max
        The number of times the cgroup's resource usage was
        about to go over the max boundary.

  misc.events.local
    Similar to misc.events but the fields in the file are local to
    the cgroup, i.e. not hierarchical.  The file modified event
    generated on this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource is
freed.  Migrating a process to a different cgroup does not move the
charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  The weight of this child cgroup depends on the
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
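
As an illustration, the implied weight can be computed as follows (the
array values are quoted from kernel/sched/core.c and shown here purely
for illustration)::

  weight(nice) = 100 * sched_prio_to_weight[nice + 20] / 1024

  nice  0: 100 * 1024 / 1024 =  100
  nice  5: 100 *  335 / 1024 =  ~33
  nice -5: 100 * 3121 / 1024 = ~305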


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it were a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system-level information
to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data that
is undesirable to expose to the isolated processes.  A cgroup
namespace can be used to restrict the visibility of this path.  For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that it
is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root.  The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
    Should be called for each bio carrying writeback data and
    associates the bio with the inode's owner cgroup and the
    corresponding request queue.  This must be called after
    a queue (device) has been associated with the bio and
    before submission.

  wbc_account_cgroup_owner(@wbc, @page, @bytes)
    Should be called for each data segment being written out.
Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root. The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
      Should be called for each bio carrying writeback data and
      associates the bio with the inode's owner cgroup and the
      corresponding request queue. This must be called after
      a queue (device) has been associated with the bio and
      before submission.

  wbc_account_cgroup_owner(@wbc, @page, @bytes)
      Should be called for each data segment being written out.
      While this function doesn't care exactly when it's called
      during the writeback session, it is easiest and most natural
      to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support, which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and,
if the writeback session is holding shared resources, e.g. a journal
entry, this may lead to priority inversions. There is no one easy
solution for the problem. Filesystems can try to work around specific
problem cases by skipping wbc_init_bio() and using
bio_associate_blkg() directly.
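As a rough, non-normative sketch, a filesystem's write path might use
the two annotation calls as follows. myfs_write_one_page() and
myfs_alloc_bio_for_page() are made-up helpers standing in for the
filesystem's own bio construction code; only wbc_init_bio() and
wbc_account_cgroup_owner() are part of the interface described above::

  #include <linux/bio.h>
  #include <linux/writeback.h>

  static int myfs_write_one_page(struct page *page,
                                 struct writeback_control *wbc)
  {
      /* Assumed to return a bio already associated with the target
       * device queue; details elided. */
      struct bio *bio = myfs_alloc_bio_for_page(page);

      /* Associate the bio with the inode's owner cgroup; must be
       * done after the device is set and before submission. */
      wbc_init_bio(wbc, bio);

      /* Attribute the data segment being written out to the owner
       * cgroup. */
      wbc_account_cgroup_owner(wbc, page, PAGE_SIZE);

      submit_bio(bio);
      return 0;
  }

A real implementation would additionally set SB_I_CGROUPWB in
->s_iflags when filling in the super_block, as described above.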
Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options is supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one. The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy, and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy. This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, it restricted how cgroup could be used in general and
what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers, and those controllers
ended up implementing different ways to ignore such situations, but
much more importantly it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the full path by appending the name of the knob, and then
open, read and/or write to it. This is not only extremely clunky and
unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps, and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem. cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details. These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces, and the kernel
inadvertently exposed, and became locked into, those constructs.
Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs prefixed with ``leaf_``. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups, and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads, which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with them; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.
cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not only
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.
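To illustrate the difference in write semantics, below is a minimal C
sketch that lowers a group's memory.max. The cgroup path is a made-up
example and assumes the v2 hierarchy is mounted at /sys/fs/cgroup::

  /* Lower memory.max on an example cgroup.  Unlike v1's
   * memory.limit_in_bytes, the write itself does not race with
   * concurrent charges: the limit is installed first and the kernel
   * then reclaims and, if need be, OOM kills until usage fits. */
  #include <stdio.h>

  int main(void)
  {
      const char *path =
          "/sys/fs/cgroup/batchjobs/container_id1/memory.max";
      FILE *f = fopen(path, "w");

      if (!f) {
          perror("fopen");
          return 1;
      }
      if (fprintf(f, "512M\n") < 0 || fclose(f) == EOF) {
          perror("writing memory.max");
          return 1;
      }
      return 0;
  }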
The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.