.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
      Consider cgroup namespaces as delegation boundaries. This
      option is system wide and can only be set on mount or modified
      through remount from the init namespace. The mount option is
      ignored on non-init namespace mounts. Please refer to the
      Delegation section for details.

  favordynmods
      Reduce the latencies of dynamic cgroup modifications such as
      task migrations and controller on/offs at the cost of making
      hot path operations such as forks and exits more expensive.
      The static usage pattern of creating a cgroup, enabling
      controllers, and then seeding it with CLONE_INTO_CGROUP is
      not affected by this option.

  memory_localevents
      Only populate memory.events with data for the current cgroup,
      and not any subtrees. This is legacy behaviour; without this
      option, the default behaviour is to include subtree counts.
      This option is system wide and can only be set on mount or
      modified through remount from the init namespace. The mount
      option is ignored on non-init namespace mounts.

  memory_recursiveprot
      Recursively apply memory.min and memory.low protection to
      entire subtrees, without requiring explicit downward
      propagation into leaf cgroups. This allows protecting entire
      subtrees from one another, while retaining free competition
      within those subtrees. This should have been the default
      behavior but is a mount-option to avoid regressing setups
      relying on the original semantics (e.g. specifying bogusly
      high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
      Count HugeTLB memory usage towards the cgroup's overall
      memory usage for the memory controller (for the purpose of
      statistics reporting and memory protection). This is a new
      behavior that could regress existing setups, so it must be
      explicitly opted in with this mount option.

      A few caveats to keep in mind:

      * There is no HugeTLB pool management involved in the memory
        controller. The pre-allocated pool does not belong to anyone.
        Specifically, when a new HugeTLB folio is allocated to
        the pool, it is not accounted for from the perspective of the
        memory controller. It is only charged to a cgroup when it is
        actually used (e.g. at page fault time). Host memory
        overcommit management has to consider this when configuring
        hard limits. In general, HugeTLB pool management should be
        done via other mechanisms (such as the HugeTLB controller).
      * Failure to charge a HugeTLB folio to the memory controller
        results in SIGBUS. This could happen even if the HugeTLB pool
        still has pages available (but the cgroup limit is hit and
        the reclaim attempt fails).
      * Charging HugeTLB memory towards the memory controller affects
        memory protection and reclaim dynamics. Any userspace tuning
        (e.g. of low or min limits) needs to take this into account.
      * HugeTLB pages utilized while this option is not selected
        will not be tracked by the memory controller (even if cgroup
        v2 is remounted later on).

  pids_localevents
      This option restores the v1-like behavior of pids.events:max,
      that is, only local (inside cgroup proper) fork failures are
      counted. Without this option, pids.events:max represents any
      pids.max enforcement across the cgroup's subtree.
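
As an illustration, the following sketch shows how a couple of these
options might be combined at mount time and adjusted later through a
remount. The mount point and the particular option set are
assumptions for the example, not recommendations::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup
  # mount -o remount,nsdelegate,memory_recursiveprot,memory_localevents none /sys/fs/cgroup

Note that the remount must be performed from the init namespace for
the namespace-sensitive options to take effect.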


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)
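
Putting the above together, a process can be created in, or moved
into, a child cgroup with nothing more than a shell. The cgroup name
below is hypothetical and the mount point is assumed to be
/sys/fs/cgroup::

  # cd /sys/fs/cgroup
  # mkdir workload
  # echo $$ > workload/cgroup.procs        # move the current shell
  # cat /proc/self/cgroup
  0::/workload

Because writing the PID of any thread migrates the whole process, the
same single write suffices for heavily multi-threaded programs.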


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.
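
As a concrete sketch, the following builds a two-cgroup threaded
subtree under a freshly created domain cgroup and spreads the threads
of one process across it. The names are hypothetical, the working
directory is assumed to be the cgroup root, and $PID/$TID refer to a
multi-threaded process that is already running::

  # mkdir -p svc/t1 svc/t2
  # echo threaded > svc/t1/cgroup.type      # svc becomes the threaded domain
  # echo threaded > svc/t2/cgroup.type
  # echo $PID > svc/t1/cgroup.procs         # move the whole process in first
  # echo $TID > svc/t2/cgroup.threads       # then spread individual threads

After this, "svc" reports "domain threaded" in its "cgroup.type", t1
and t2 report "threaded", and "svc/cgroup.procs" still lists the whole
process.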

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids

[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.
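
A minimal sketch of such a clean-up loop, assuming the inotifywait
utility from inotify-tools and a hypothetical "job" cgroup::

  while inotifywait -qq -e modify /sys/fs/cgroup/job/cgroup.events; do
      if grep -q 'populated 0' /sys/fs/cgroup/job/cgroup.events; then
          rmdir /sys/fs/cgroup/job
          break
      fi
  done

Each file modified event wakes the loop, which removes the cgroup
once its sub-hierarchy no longer contains live processes.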


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
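
For the first delegation method, the setup reduces to a few chown
operations. A minimal sketch, assuming a hypothetical user "u0" and
cgroup "/sys/fs/cgroup/u0-slice"::

  # chown u0 /sys/fs/cgroup/u0-slice
  # chown u0 /sys/fs/cgroup/u0-slice/cgroup.procs
  # chown u0 /sys/fs/cgroup/u0-slice/cgroup.threads
  # chown u0 /sys/fs/cgroup/u0-slice/cgroup.subtree_control

u0 can now create sub-cgroups under u0-slice and move its own
processes among them, but the resource control interface files of
u0-slice itself remain the property of the parent.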


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lower case letters and
'_'s but never begins with an '_', so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. For example, if two active children have weights 200
and 100, the former receives two thirds of the resource and the latter
one third. Due to the dynamic nature, this model is usually used for
stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

    VAL0\n
    VAL1\n
    ...

  Space separated values
  (when read-only or multiple values can be written at once)

    VAL0 VAL1 ...\n

  Flat keyed

    KEY0 VAL0\n
    KEY1 VAL1\n
    ...

  Nested keyed

    KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
    KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
    ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
      A read-write single value file which exists on non-root
      cgroups.

      When read, it indicates the current type of the cgroup, which
      can be one of the following values.

      - "domain" : A normal valid domain cgroup.

      - "domain threaded" : A threaded domain cgroup which is
        serving as the root of a threaded subtree.

      - "domain invalid" : A cgroup which is in an invalid state.
        It can't be populated or have controllers enabled. It may
        be allowed to become a threaded cgroup.

      - "threaded" : A threaded cgroup which is a member of a
        threaded subtree.

      A cgroup can be turned into a threaded cgroup by writing
      "threaded" to this file.

  cgroup.procs
      A read-write new-line separated values file which exists on
      all cgroups.

      When read, it lists the PIDs of all processes which belong to
      the cgroup one-per-line. The PIDs are not ordered and the
      same PID may show up more than once if the process got moved
      to another cgroup and then back or the PID got recycled while
      reading.

      A PID can be written to migrate the process associated with
      the PID to the cgroup. The writer should match all of the
      following conditions.

      - It must have write access to the "cgroup.procs" file.

      - It must have write access to the "cgroup.procs" file of the
        common ancestor of the source and destination cgroups.

      When delegating a sub-hierarchy, write access to this file
      should be granted along with the containing directory.

      In a threaded cgroup, reading this file fails with EOPNOTSUPP
      as all the processes belong to the thread root. Writing is
      supported and moves every thread of the process to the cgroup.

  cgroup.threads
      A read-write new-line separated values file which exists on
      all cgroups.

      When read, it lists the TIDs of all threads which belong to
      the cgroup one-per-line. The TIDs are not ordered and the
      same TID may show up more than once if the thread got moved to
      another cgroup and then back or the TID got recycled while
      reading.

      A TID can be written to migrate the thread associated with the
      TID to the cgroup. The writer should match all of the
      following conditions.

      - It must have write access to the "cgroup.threads" file.

      - The cgroup that the thread is currently in must be in the
        same resource domain as the destination cgroup.

      - It must have write access to the "cgroup.procs" file of the
        common ancestor of the source and destination cgroups.

      When delegating a sub-hierarchy, write access to this file
      should be granted along with the containing directory.

  cgroup.controllers
      A read-only space separated values file which exists on all
      cgroups.

      It shows a space separated list of all controllers available
      to the cgroup. The controllers are not ordered.

  cgroup.subtree_control
      A read-write space separated values file which exists on all
      cgroups. Starts out empty.

      When read, it shows a space separated list of the controllers
      which are enabled to control resource distribution from the
      cgroup to its children.

      A space separated list of controllers prefixed with '+' or '-'
      can be written to enable or disable controllers. A controller
      name prefixed with '+' enables the controller and one prefixed
      with '-' disables it. If a controller appears more than once
      on the list, the last one is effective. When multiple enable
      and disable operations are specified, either all succeed or
      all fail.

  cgroup.events
      A read-only flat-keyed file which exists on non-root cgroups.
      The following entries are defined. Unless specified
      otherwise, a value change in this file generates a file
      modified event.

      populated
          1 if the cgroup or its descendants contains any live
          processes; otherwise, 0.
      frozen
          1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
      A read-write single value file. The default is "max".

      Maximum allowed number of descendant cgroups.
      If the actual number of descendants is equal or larger,
      an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
      A read-write single value file. The default is "max".

      Maximum allowed descent depth below the current cgroup.
      If the actual descent depth is equal or larger,
      an attempt to create a new child cgroup will fail.

  cgroup.stat
      A read-only flat-keyed file with the following entries:

      nr_descendants
          Total number of visible descendant cgroups.

      nr_dying_descendants
          Total number of dying descendant cgroups. A cgroup
          becomes dying after being deleted by a user. The cgroup
          will remain in the dying state for some undefined time
          (which can depend on system load) before being completely
          destroyed.

          A process can't enter a dying cgroup under any
          circumstances, and a dying cgroup can't revive.

          A dying cgroup can consume system resources not exceeding
          the limits which were active at the moment of cgroup
          deletion.

  cgroup.freeze
      A read-write single value file which exists on non-root
      cgroups. Allowed values are "0" and "1". The default is "0".

      Writing "1" to the file causes freezing of the cgroup and all
      descendant cgroups. This means that all belonging processes
      will be stopped and will not run until the cgroup is
      explicitly unfrozen. Freezing of the cgroup may take some
      time; when this action is completed, the "frozen" value in the
      cgroup.events control file will be updated to "1" and the
      corresponding notification will be issued.

      A cgroup can be frozen either by its own settings, or by
      settings of any ancestor cgroups. If any ancestor cgroup is
      frozen, the cgroup will remain frozen.

      Processes in the frozen cgroup can be killed by a fatal
      signal. They also can enter and leave a frozen cgroup: either
      by an explicit move by a user, or if freezing of the cgroup
      races with fork(). If a process is moved to a frozen cgroup,
      it stops. If a process is moved out of a frozen cgroup, it
      becomes running.

      The frozen status of a cgroup doesn't affect any cgroup tree
      operations: it's possible to delete a frozen (and empty)
      cgroup, as well as create new sub-cgroups.

  cgroup.kill
      A write-only single value file which exists in non-root
      cgroups. The only allowed value is "1".

      Writing "1" to the file causes the cgroup and all descendant
      cgroups to be killed. This means that all processes located in
      the affected cgroup tree will be killed via SIGKILL.

      Killing a cgroup tree will deal with concurrent forks
      appropriately and is protected against migrations.

      In a threaded cgroup, writing this file fails with EOPNOTSUPP
      as killing cgroups is a process directed operation, i.e. it
      affects the whole thread-group.
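
      As a sketch of how the freezer and the killer combine in
      practice, the following freezes a hypothetical "batch" cgroup,
      verifies the state change, and then tears the whole subtree
      down (frozen processes can still be killed by a fatal
      signal)::

        # echo 1 > batch/cgroup.freeze
        # grep frozen batch/cgroup.events
        frozen 1
        # echo 1 > batch/cgroup.kill

      Afterwards, the populated notification can be used to learn
      when the directory is removable.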

  cgroup.pressure
      A read-write single value file whose allowed values are "0"
      and "1". The default is "1".

      Writing "0" to the file disables the cgroup PSI accounting.
      Writing "1" to the file re-enables the cgroup PSI accounting.

      This control attribute is not hierarchical, so disabling or
      enabling PSI accounting in a cgroup does not affect PSI
      accounting in descendants and doesn't need to be passed down
      from the root through the ancestors.

      The reason this control attribute exists is that PSI accounts
      stalls for each cgroup separately and aggregates them at each
      level of the hierarchy. This may cause non-negligible overhead
      for some workloads deep in the hierarchy, in which case this
      control attribute can be used to disable PSI accounting in the
      non-leaf cgroups.

  irq.pressure
      A read-write nested-keyed file.

      Shows pressure stall information for IRQ/SOFTIRQ. See
      :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates the distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and an absolute bandwidth allocation model
for realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed. The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well
as the maximum desired frequency, which should not be exceeded by a
CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes.
For a kernel built with the CONFIG_RT_GROUP_SCHED option enabled for
group scheduling of realtime processes, the cpu controller can only
be enabled when all RT processes are in the root cgroup. This
limitation does not apply if CONFIG_RT_GROUP_SCHED is disabled. Be
aware that system management software may already have placed RT
processes into non-root cgroups during the system boot process, and
these processes may need to be moved to the root cgroup before the
cpu controller can be enabled with a CONFIG_RT_GROUP_SCHED enabled
kernel.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
      A read-only flat-keyed file.
      This file exists whether the controller is enabled or not.

      It always reports the following three stats:

      - usage_usec
      - user_usec
      - system_usec

      and the following five when the controller is enabled:

      - nr_periods
      - nr_throttled
      - throttled_usec
      - nr_bursts
      - burst_usec

  cpu.weight
      A read-write single value file which exists on non-root
      cgroups. The default is "100".

      For non-idle groups (cpu.idle = 0), the weight is in the
      range [1, 10000].

      If the cgroup has been configured to be SCHED_IDLE
      (cpu.idle = 1), then the weight will show as 0.

  cpu.weight.nice
      A read-write single value file which exists on non-root
      cgroups. The default is "0".

      The nice value is in the range [-20, 19].

      This interface file is an alternative interface for
      "cpu.weight" and allows reading and setting weight using the
      same values used by nice(2). Because the range is smaller and
      granularity is coarser for the nice values, the read value is
      the closest approximation of the current weight.

  cpu.max
      A read-write two value file which exists on non-root cgroups.
      The default is "max 100000".

      The maximum bandwidth limit. It's in the following format::

        $MAX $PERIOD

      which indicates that the group may consume up to $MAX in each
      $PERIOD duration. "max" for $MAX indicates no limit. If only
      one number is written, $MAX is updated.

  cpu.max.burst
      A read-write single value file which exists on non-root
      cgroups. The default is "0".

      The burst in the range [0, $MAX].

  cpu.pressure
      A read-write nested-keyed file.

      Shows pressure stall information for CPU. See
      :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
      A read-write single value file which exists on non-root
      cgroups. The default is "0", i.e. no utilization boosting.

      The requested minimum utilization (protection) as a percentage
      rational number, e.g. 12.34 for 12.34%.

      This interface allows reading and setting minimum utilization
      clamp values similar to sched_setattr(2). This minimum
      utilization value is used to clamp the task specific minimum
      utilization clamp.

      The requested minimum utilization (protection) is always
      capped by the current value for the maximum utilization
      (limit), i.e. `cpu.uclamp.max`.

  cpu.uclamp.max
      A read-write single value file which exists on non-root
      cgroups. The default is "max", i.e. no utilization capping.

      The requested maximum utilization (limit) as a percentage
      rational number, e.g. 98.76 for 98.76%.

      This interface allows reading and setting maximum utilization
      clamp values similar to sched_setattr(2). This maximum
      utilization value is used to clamp the task specific maximum
      utilization clamp.

  cpu.idle
      A read-write single value file which exists on non-root
      cgroups. The default is 0.

      This is the cgroup analog of the per-task SCHED_IDLE sched
      policy. Setting this value to 1 will make the scheduling
      policy of the cgroup SCHED_IDLE. The threads inside the
      cgroup will retain their own relative priorities, but the
      cgroup itself will be treated as very low priority relative to
      its peers.
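
As a worked example of the bandwidth limit, capping a cgroup to half
a CPU means allowing 50000 microseconds of runtime per 100000
microsecond period::

  # echo "50000 100000" > cpu.max
  # echo "25000" > cpu.max        # $PERIOD stays 100000; now a quarter CPU

Throttling caused by such a limit shows up in the nr_throttled and
throttled_usec fields of cpu.stat.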


Memory
------

The "memory" controller regulates the distribution of memory. Memory
is stateful and implements both limit and protection models. Due to
the intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
      A read-only single value file which exists on non-root
      cgroups.

      The total amount of memory currently being used by the cgroup
      and its descendants.

  memory.min
      A read-write single value file which exists on non-root
      cgroups. The default is "0".

      Hard memory protection. If the memory usage of a cgroup
      is within its effective min boundary, the cgroup's memory
      won't be reclaimed under any conditions. If there is no
      unprotected reclaimable memory available, the OOM killer
      is invoked. Above the effective min boundary (or
      effective low boundary if it is higher), pages are reclaimed
      proportionally to the overage, reducing reclaim pressure for
      smaller overages.

      The effective min boundary is limited by the memory.min values
      of all ancestor cgroups. If there is memory.min
      overcommitment (the child cgroup or cgroups are requiring more
      protected memory than the parent will allow), then each child
      cgroup will get the part of the parent's protection
      proportional to its actual memory usage below memory.min.

      Putting more memory than generally available under this
      protection is discouraged and may lead to constant OOMs.

      If a memory cgroup is not populated with processes,
      its memory.min is ignored.

  memory.low
      A read-write single value file which exists on non-root
      cgroups. The default is "0".

      Best-effort memory protection. If the memory usage of a
      cgroup is within its effective low boundary, the cgroup's
      memory won't be reclaimed unless there is no reclaimable
      memory available in unprotected cgroups.
      Above the effective low boundary (or
      effective min boundary if it is higher), pages are reclaimed
      proportionally to the overage, reducing reclaim pressure for
      smaller overages.

      The effective low boundary is limited by the memory.low values
      of all ancestor cgroups. If there is memory.low
      overcommitment (the child cgroup or cgroups are requiring more
      protected memory than the parent will allow), then each child
      cgroup will get the part of the parent's protection
      proportional to its actual memory usage below memory.low.

      Putting more memory than generally available under this
      protection is discouraged.
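
      To make the overcommitment rule concrete, consider a
      hypothetical parent with memory.low=1G and two children each
      requesting memory.low=1G while using 600M and 200M below their
      low boundaries respectively. The parent can only pass down 1G,
      so the children's effective protections become proportional to
      usage, roughly 768M and 256M (a 3:1 split of the parent's
      1G)::

        # echo 1G > parent/memory.low
        # echo 1G > parent/a/memory.low
        # echo 1G > parent/b/memory.low

      The numbers here are illustrative; the kernel recalculates the
      effective protection continuously as usage changes.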

  memory.high
      A read-write single value file which exists on non-root
      cgroups. The default is "max".

      Memory usage throttle limit. If a cgroup's usage goes
      over the high boundary, the processes of the cgroup are
      throttled and put under heavy reclaim pressure.

      Going over the high limit never invokes the OOM killer and
      under extreme conditions the limit may be breached. The high
      limit should be used in scenarios where an external process
      monitors the limited cgroup to alleviate heavy reclaim
      pressure.

  memory.max
      A read-write single value file which exists on non-root
      cgroups. The default is "max".

      Memory usage hard limit. This is the main mechanism to limit
      memory usage of a cgroup. If a cgroup's memory usage reaches
      this limit and can't be reduced, the OOM killer is invoked in
      the cgroup. Under certain circumstances, the usage may go
      over the limit temporarily.

      In the default configuration, regular 0-order allocations
      always succeed unless the OOM killer chooses the current task
      as a victim.

      Some kinds of allocations don't invoke the OOM killer.
      The caller could retry them differently, return into userspace
      as -ENOMEM or silently ignore them in cases like disk
      readahead.

  memory.reclaim
      A write-only nested-keyed file which exists for all cgroups.

      This is a simple interface to trigger memory reclaim in the
      target cgroup.

      Example::

        echo "1G" > memory.reclaim

      Please note that the kernel can over or under reclaim from
      the target cgroup. If fewer bytes are reclaimed than the
      specified amount, -EAGAIN is returned.

      Please note that the proactive reclaim (triggered by this
      interface) is not meant to indicate memory pressure on the
      memory cgroup. Therefore socket memory balancing triggered by
      the memory reclaim normally is not exercised in this case.
      This means that the networking layer will not adapt based on
      reclaim induced by memory.reclaim.

      The following nested keys are defined.

        ==========  ================================
        swappiness  Swappiness value to reclaim with
        ==========  ================================

      Specifying a swappiness value instructs the kernel to perform
      the reclaim with that swappiness value. Note that this has the
      same semantics as vm.swappiness applied to memcg reclaim with
      all the existing limitations and potential future extensions.
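
      Building on the above, a proactive reclaim request can be
      combined with the swappiness key. A sketch, assuming a kernel
      recent enough to support the nested key::

        echo "512M swappiness=0" > memory.reclaim    # avoid swapping anon pages
        echo "512M swappiness=200" > memory.reclaim  # strongly prefer anon pages

      Checking the write's return status distinguishes a fully
      satisfied request from an -EAGAIN partial reclaim.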

  memory.peak
      A read-only single value file which exists on non-root
      cgroups.

      The max memory usage recorded for the cgroup and its
      descendants since the creation of the cgroup.

  memory.oom.group
      A read-write single value file which exists on non-root
      cgroups. The default value is "0".

      Determines whether the cgroup should be treated as
      an indivisible workload by the OOM killer. If set,
      all tasks belonging to the cgroup or to its descendants
      (if the memory cgroup is not a leaf cgroup) are killed
      together or not at all. This can be used to avoid
      partial kills to guarantee workload integrity.

      Tasks with the OOM protection (oom_score_adj set to -1000)
      are treated as an exception and are never killed.

      If the OOM killer is invoked in a cgroup, it's not going
      to kill any tasks outside of this cgroup, regardless of
      the memory.oom.group values of ancestor cgroups.

  memory.events
      A read-only flat-keyed file which exists on non-root cgroups.
      The following entries are defined. Unless specified
      otherwise, a value change in this file generates a file
      modified event.

      Note that all fields in this file are hierarchical and the
      file modified event can be generated due to an event down the
      hierarchy. For the local events at the cgroup level see
      memory.events.local.

      low
          The number of times the cgroup is reclaimed due to
          high memory pressure even though its usage is under
          the low boundary. This usually indicates that the low
          boundary is over-committed.

      high
          The number of times processes of the cgroup are
          throttled and routed to perform direct memory reclaim
          because the high memory boundary was exceeded. For a
          cgroup whose memory usage is capped by the high limit
          rather than global memory pressure, this event's
          occurrences are expected.

      max
          The number of times the cgroup's memory usage was
          about to go over the max boundary. If direct reclaim
          fails to bring it down, the cgroup goes to OOM state.

      oom
          The number of times the cgroup's memory usage reached
          the limit and allocation was about to fail.

          This event is not raised if the OOM killer is not
          considered as an option, e.g. for failed high-order
          allocations or if the caller asked to not retry attempts.

      oom_kill
          The number of processes belonging to this cgroup
          killed by any kind of OOM killer.

      oom_group_kill
          The number of times a group OOM has occurred.

  memory.events.local
      Similar to memory.events but the fields in the file are local
      to the cgroup, i.e. not hierarchical. The file modified event
      generated on this file reflects only the local events.

  memory.stat
      A read-only flat-keyed file which exists on non-root cgroups.

      This breaks down the cgroup's memory footprint into different
      types of memory, type-specific details, and other information
      on the state and past events of the memory management system.

      All memory amounts are in bytes.

      The entries are ordered to be human readable, and new entries
      can show up in the middle. Don't rely on items remaining in a
      fixed position; use the keys to look up specific values!

      If an entry has no per-node counter, it is tagged 'npn'
      (non-per-node) and will not show up in memory.numa_stat.

      anon
          Amount of memory used in anonymous mappings such as
          brk(), sbrk(), and mmap(MAP_ANONYMOUS)

      file
          Amount of memory used to cache filesystem data,
          including tmpfs and shared memory.

      kernel (npn)
          Amount of total kernel memory, including
          (kernel_stack, pagetables, percpu, vmalloc, slab) in
          addition to other kernel memory use cases.

      kernel_stack
          Amount of memory allocated to kernel stacks.

      pagetables
          Amount of memory allocated for page tables.

      sec_pagetables
          Amount of memory allocated for secondary page tables;
          this currently includes KVM mmu allocations on x86
          and arm64 and IOMMU page tables.

      percpu (npn)
          Amount of memory used for storing per-cpu kernel
          data structures.

      sock (npn)
          Amount of memory used in network transmission buffers

      vmalloc (npn)
          Amount of memory used for vmap backed memory.

      shmem
          Amount of cached filesystem data that is swap-backed,
          such as tmpfs, shm segments, shared anonymous mmap()s

      zswap
          Amount of memory consumed by the zswap compression
          backend.

      zswapped
          Amount of application memory swapped out to zswap.

      file_mapped
          Amount of cached filesystem data mapped with mmap()

      file_dirty
          Amount of cached filesystem data that was modified but
          not yet written back to disk

      file_writeback
          Amount of cached filesystem data that was modified and
          is currently being written back to disk

      swapcached
          Amount of swap cached in memory. The swapcache is
          accounted against both memory and swap usage.

      anon_thp
        Amount of memory used in anonymous mappings backed by
        transparent hugepages.

      file_thp
        Amount of cached filesystem data backed by transparent
        hugepages.

      shmem_thp
        Amount of shm, tmpfs, and shared anonymous mmap()s backed
        by transparent hugepages.

      inactive_anon, active_anon, inactive_file, active_file, unevictable
        Amount of memory, swap-backed and filesystem-backed,
        on the internal memory management lists used by the
        page reclaim algorithm.

        As these represent internal list state (e.g. shmem pages
        are on anon memory management lists), inactive_foo +
        active_foo may not be equal to the value for the foo
        counter, since the foo counter is type-based, not
        list-based.

      slab_reclaimable
        Part of "slab" that might be reclaimed, such as
        dentries and inodes.

      slab_unreclaimable
        Part of "slab" that cannot be reclaimed on memory
        pressure.

      slab (npn)
        Amount of memory used for storing in-kernel data
        structures.

      workingset_refault_anon
        Number of refaults of previously evicted anonymous pages.

      workingset_refault_file
        Number of refaults of previously evicted file pages.

      workingset_activate_anon
        Number of refaulted anonymous pages that were immediately
        activated.

      workingset_activate_file
        Number of refaulted file pages that were immediately
        activated.

      workingset_restore_anon
        Number of restored anonymous pages which have been
        detected as an active workingset before they got reclaimed.

      workingset_restore_file
        Number of restored file pages which have been detected as
        an active workingset before they got reclaimed.

      workingset_nodereclaim
        Number of times a shadow node has been reclaimed.

      pgscan (npn)
        Amount of scanned pages (in an inactive LRU list).

      pgsteal (npn)
        Amount of reclaimed pages.

      pgscan_kswapd (npn)
        Amount of pages scanned by kswapd (in an inactive LRU list).

      pgscan_direct (npn)
        Amount of pages scanned directly (in an inactive LRU list).

      pgscan_khugepaged (npn)
        Amount of pages scanned by khugepaged (in an inactive LRU
        list).

      pgsteal_kswapd (npn)
        Amount of pages reclaimed by kswapd.

      pgsteal_direct (npn)
        Amount of pages reclaimed directly.

      pgsteal_khugepaged (npn)
        Amount of pages reclaimed by khugepaged.

      pgfault (npn)
        Total number of page faults incurred.

      pgmajfault (npn)
        Number of major page faults incurred.

      pgrefill (npn)
        Amount of scanned pages (in an active LRU list).

      pgactivate (npn)
        Amount of pages moved to the active LRU list.

      pgdeactivate (npn)
        Amount of pages moved to the inactive LRU list.

      pglazyfree (npn)
        Amount of pages postponed to be freed under memory
        pressure.

      pglazyfreed (npn)
        Amount of reclaimed lazyfree pages.

      zswpin
        Number of pages moved into memory from zswap.

      zswpout
        Number of pages moved out of memory to zswap.

      zswpwb
        Number of pages written from zswap to swap.

      thp_fault_alloc (npn)
        Number of transparent hugepages which were allocated to
        satisfy a page fault. This counter is not present when
        CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_collapse_alloc (npn)
        Number of transparent hugepages which were allocated to
        allow collapsing an existing range of pages. This counter
        is not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_swpout (npn)
        Number of transparent hugepages which were swapped out in
        one piece without splitting.

      thp_swpout_fallback (npn)
        Number of transparent hugepages which were split before
        swapout, usually because contiguous swap space could not
        be allocated for the huge page.

  memory.numa_stat
    A read-only nested-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality
    information within a memcg since the pages are allowed to be
    allocated from any physical node. One use case is evaluating
    application performance by combining this information with the
    application's CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

      type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    For the meaning of each entry, refer to memory.stat.

  memory.swap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of swap currently being used by the cgroup
    and its descendants.

  memory.swap.high
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Swap usage throttle limit. If a cgroup's swap usage exceeds
    this limit, all its further allocations will be throttled to
    allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup. It is NOT
    designed to manage the amount of swapping a workload does
    during regular operation. Compare to memory.swap.max, which
    prohibits swapping past a set amount, but lets the cgroup
    continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  memory.swap.peak
    A read-only single value file which exists on non-root
    cgroups.

    The max swap usage recorded for the cgroup and its
    descendants since the creation of the cgroup.

  memory.swap.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Swap usage hard limit. If a cgroup's swap usage reaches this
    limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

      high
        The number of times the cgroup's swap usage was over
        the high threshold.

      max
        The number of times the cgroup's swap usage was about
        to go over the max boundary and swap allocation
        failed.

      fail
        The number of times swap allocation failed either
        because of running out of swap system-wide or because
        the max limit was hit.

    When the limit is reduced below the current usage, the
    existing swap entries are reclaimed gradually and the swap
    usage may stay higher than the limit for an extended period
    of time. This reduces the impact on the workload and memory
    management.

  memory.zswap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory consumed by the zswap compression
    backend.

  memory.zswap.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Zswap usage hard limit. If a cgroup's zswap pool reaches this
    limit, it will refuse to take any more stores before existing
    entries fault back in or are written out to disk.

  memory.zswap.writeback
    A read-write single value file. The default value is "1". The
    initial value of the root cgroup is 1, and when a new cgroup
    is created, it inherits the current value of its parent.

    When this is set to 0, all swapping attempts to swapping
    devices are disabled. This includes both zswap writebacks and
    swapping due to zswap store failures. If the zswap store
    failures are recurring (e.g. if the pages are incompressible),
    users can observe reclaim inefficiency after disabling
    writeback (because the same pages might be rejected again and
    again).

    Note that this is subtly different from setting
    memory.swap.max to 0, as it still allows for pages to be
    written to the zswap pool.

  memory.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for memory. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available
memory) and letting global memory pressure distribute memory
according to usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as
granting more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received
from network to a file can use all available memory but can also
operate just as performantly with a small amount of memory. A
measure of memory pressure - how much the workload is being
impacted due to lack of memory - is necessary to determine whether
a workload needs more memory; unfortunately, a memory pressure
monitoring mechanism isn't implemented yet.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and
stays charged to the cgroup until the area is released. Migrating
a process to a different cgroup doesn't move the memory usages
that it instantiated while in the previous cgroup to the new
cgroup.

A memory area may be used by processes belonging to different
cgroups.
Which cgroup the area will be charged to is non-deterministic;
however, over time, the memory area is likely to end up in a
cgroup which has enough memory allowance to avoid high reclaim
pressure.

If a cgroup sweeps a considerable amount of memory which is
expected to be accessed repeatedly by other cgroups, it may make
sense to use POSIX_FADV_DONTNEED to relinquish the ownership of
memory areas belonging to the affected files to ensure correct
memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.
This controller implements both weight based and absolute
bandwidth or IOPS limit distribution; however, weight based
distribution is available only if cfq-iosched is in use and
neither scheme is available for blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
    A read-only nested-keyed file.

    Lines are keyed by $MAJ:$MIN device numbers and not ordered.
    The following nested keys are defined.

      ====== =====================
      rbytes Bytes read
      wbytes Bytes written
      rios   Number of read IOs
      wios   Number of write IOs
      dbytes Bytes discarded
      dios   Number of discard IOs
      ====== =====================

    An example read output follows::

      8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
      8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the Quality of Service of the IO cost
    model based controller (CONFIG_BLK_CGROUP_IOCOST) which
    currently implements "io.weight" proportional control. Lines
    are keyed by $MAJ:$MIN device numbers and not ordered. The
    line for a given device is populated on the first write for
    the device on "io.cost.qos" or "io.cost.model". The following
    nested keys are defined.

      ====== =====================================
      enable Weight-based control enable
      ctrl   "auto" or "user"
      rpct   Read latency percentile [0, 100]
      rlat   Read latency threshold
      wpct   Write latency percentile [0, 100]
      wlat   Write latency threshold
      min    Minimum scaling percentage [1, 10000]
      max    Maximum scaling percentage [1, 10000]
      ====== =====================================

    The controller is disabled by default and can be enabled by
    setting "enable" to 1. "rpct" and "wpct" parameters default
    to zero and the controller uses internal device saturation
    state to adjust the overall IO rate between "min" and "max".

    When a better control quality is needed, latency QoS
    parameters can be configured. For example::

      8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0

    shows that on sdb, the controller is enabled, will consider
    the device saturated if the 95th percentile of read completion
    latencies is above 75ms or that of write completion latencies
    is above 150ms, and will adjust the overall IO issue rate
    between 50% and 150% accordingly.

    The lower the saturation point, the better the latency QoS at
    the cost of aggregate bandwidth. The narrower the allowed
    adjustment range between "min" and "max", the more closely the
    IO behavior conforms to the cost model.
Note that the IO issue
    base rate may be far off from 100% and setting "min" and "max"
    blindly can lead to a significant loss of device capacity or
    control quality. "min" and "max" are useful for regulating
    devices which show wide temporary behavior changes - e.g. an
    SSD which accepts writes at line speed for a while and then
    completely stalls for multiple seconds.

    When "ctrl" is "auto", the parameters are controlled by the
    kernel and may change automatically. Setting "ctrl" to "user"
    or setting any of the percentile and latency parameters puts
    it into "user" mode and disables the automatic changes. The
    automatic mode can be restored by setting "ctrl" to "auto".

  io.cost.model
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the cost model of the IO cost model based
    controller (CONFIG_BLK_CGROUP_IOCOST) which currently
    implements "io.weight" proportional control. Lines are keyed
    by $MAJ:$MIN device numbers and not ordered. The line for a
    given device is populated on the first write for the device on
    "io.cost.qos" or "io.cost.model". The following nested keys
    are defined.

      ===== ================================
      ctrl  "auto" or "user"
      model The cost model in use - "linear"
      ===== ================================

    When "ctrl" is "auto", the kernel may change all parameters
    dynamically. When "ctrl" is set to "user" or any other
    parameter is written to, "ctrl" becomes "user" and the
    automatic changes are disabled.

    When "model" is "linear", the following model parameters are
    defined.

      ============= ========================================
      [r|w]bps      The maximum sequential IO throughput
      [r|w]seqiops  The maximum 4k sequential IOs per second
      [r|w]randiops The maximum 4k random IOs per second
      ============= ========================================

    From the above, the builtin linear model determines the base
    costs of a sequential and random IO and the cost coefficient
    for the IO size. While simple, this model can cover most
    common device classes acceptably.

    The IO cost model isn't expected to be accurate in an absolute
    sense and is scaled to the device behavior dynamically.

    If needed, tools/cgroup/iocost_coef_gen.py can be used to
    generate device-specific coefficients.

  io.weight
    A read-write flat-keyed file which exists on non-root cgroups.
    The default is "default 100".

    The first line is the default weight applied to devices
    without specific override. The rest are overrides keyed by
    $MAJ:$MIN device numbers and not ordered. The weights are in
    the range [1, 10000] and specify the relative amount of IO
    time the cgroup can use in relation to its siblings.

    The default weight can be updated by writing either "default
    $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
    "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

    An example read output follows::

      default 100
      8:16 200
      8:0 50

  io.max
    A read-write nested-keyed file which exists on non-root
    cgroups.

    BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
    device numbers and not ordered. The following nested keys are
    defined.

      ===== ==================================
      rbps  Max read bytes per second
      wbps  Max write bytes per second
      riops Max read IO operations per second
      wiops Max write IO operations per second
      ===== ==================================

    When writing, any number of nested key-value pairs can be
    specified in any order. "max" can be specified as the value
    to remove a specific limit. If the same key is specified
    multiple times, the outcome is undefined.

    BPS and IOPS are measured in each IO direction and IOs are
    delayed if the limit is reached. Temporary bursts are allowed.

    Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

      echo "8:16 rbps=2097152 wiops=120" > io.max

    Reading returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=120

    Write IOPS limit can be removed by writing the following::

      echo "8:16 wiops=max" > io.max

    Reading now returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for IO. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs. The memory
controller defines the memory domain that dirty memory ratio is
calculated and maintained for and the io controller defines the io
domain which writes out dirty pages for the memory domain. Both
system-wide and per-cgroup dirty memory states are examined and
the more restrictive of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem. Currently, cgroup writeback is implemented on ext2,
ext4, btrfs, f2fs, and xfs. On other filesystems, all writeback
IOs are attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked
per page while writeback is tracked per inode. For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

While this model is enough for most use cases where a given inode
is mostly dirtied by a single cgroup even when the main writing
cgroup changes over time, use cases where multiple cgroups write
to a single inode simultaneously are not supported well. In such
circumstances, a significant portion of IOs are likely to be
attributed incorrectly.
As the memory controller assigns page
ownership on the first use and doesn't update it until the page is
released, even if writeback strictly follows page ownership,
multiple cgroups dirtying overlapping areas wouldn't work as
expected. It's recommended to avoid such usage patterns.

As cgroup ownership for memory is tracked per page, there can be
pages which are associated with different cgroups than the one the
inode is associated with. These are called foreign pages. The
writeback constantly keeps track of foreign pages and, if a
particular foreign cgroup becomes the majority over a certain
period of time, switches the ownership of the inode to that
cgroup.

The sysctl knobs which affect writeback behavior are applied to
cgroup writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the
    amount of available memory capped by limits imposed by the
    memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, this is calculated as a ratio of total
    available memory and applied the same way as
    vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that
have a lower latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy.
This means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each
other. Group G will influence nobody::

        [root]
       /  |   \
      A   B    C
     / \       |
    D   F      G

So the ideal way to configure this is to set io.latency in groups
A, B, and C. Generally you do not want to set a value lower than
the latency your device supports. Experiment to find the value
that works best for your workload. Start higher than the expected
latency for your device and watch the avg_lat value in io.stat for
your workload group to get an idea of the latency you see during
normal operation. Use the avg_lat value as a basis for your real
setting, setting it 10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting
their latency target the controller doesn't do anything. Once a
group starts missing its target it begins throttling any peer
group that has a higher target than itself. This throttling takes
2 forms:

- Queue depth throttling. This is the number of outstanding IOs a
  group is allowed to have. We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a
  time.

- Artificial delay induction. There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups. This includes swapping and metadata IO. These
  types of IO are allowed to occur normally, however they are
  "charged" to the originating group. If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase. The delay value is the number of microseconds
  being added to any process that runs in this group. Because this
  number can grow quite large if there is a lot of swapping or
  metadata IO occurring, we limit the individual delay events to
  1 second at a time.

Once the victimized group starts meeting its latency target again
it will start unthrottling any peer groups that were throttled
previously.
If the victimized 2091group simply stops doing IO the global counter will unthrottle appropriately. 2092 2093IO Latency Interface Files 2094~~~~~~~~~~~~~~~~~~~~~~~~~~ 2095 2096 io.latency 2097 This takes a similar format as the other controllers. 2098 2099 "MAJOR:MINOR target=<target time in microseconds>" 2100 2101 io.stat 2102 If the controller is enabled you will see extra stats in io.stat in 2103 addition to the normal ones. 2104 2105 depth 2106 This is the current queue depth for the group. 2107 2108 avg_lat 2109 This is an exponential moving average with a decay rate of 1/exp 2110 bound by the sampling interval. The decay rate interval can be 2111 calculated by multiplying the win value in io.stat by the 2112 corresponding number of samples based on the win value. 2113 2114 win 2115 The sampling window size in milliseconds. This is the minimum 2116 duration of time between evaluation events. Windows only elapse 2117 with IO activity. Idle periods extend the most recent window. 2118 2119IO Priority 2120~~~~~~~~~~~ 2121 2122A single attribute controls the behavior of the I/O priority cgroup policy, 2123namely the io.prio.class attribute. The following values are accepted for 2124that attribute: 2125 2126 no-change 2127 Do not modify the I/O priority class. 2128 2129 promote-to-rt 2130 For requests that have a non-RT I/O priority class, change it into RT. 2131 Also change the priority level of these requests to 4. Do not modify 2132 the I/O priority of requests that have priority class RT. 2133 2134 restrict-to-be 2135 For requests that do not have an I/O priority class or that have I/O 2136 priority class RT, change it into BE. Also change the priority level 2137 of these requests to 0. Do not modify the I/O priority class of 2138 requests that have priority class IDLE. 2139 2140 idle 2141 Change the I/O priority class of all requests into IDLE, the lowest 2142 I/O priority class. 2143 2144 none-to-rt 2145 Deprecated. Just an alias for promote-to-rt. 2146 2147The following numerical values are associated with the I/O priority policies: 2148 2149+----------------+---+ 2150| no-change | 0 | 2151+----------------+---+ 2152| promote-to-rt | 1 | 2153+----------------+---+ 2154| restrict-to-be | 2 | 2155+----------------+---+ 2156| idle | 3 | 2157+----------------+---+ 2158 2159The numerical value that corresponds to each I/O priority class is as follows: 2160 2161+-------------------------------+---+ 2162| IOPRIO_CLASS_NONE | 0 | 2163+-------------------------------+---+ 2164| IOPRIO_CLASS_RT (real-time) | 1 | 2165+-------------------------------+---+ 2166| IOPRIO_CLASS_BE (best effort) | 2 | 2167+-------------------------------+---+ 2168| IOPRIO_CLASS_IDLE | 3 | 2169+-------------------------------+---+ 2170 2171The algorithm to set the I/O priority class for a request is as follows: 2172 2173- If I/O priority class policy is promote-to-rt, change the request I/O 2174 priority class to IOPRIO_CLASS_RT and change the request I/O priority 2175 level to 4. 2176- If I/O priority class policy is not promote-to-rt, translate the I/O priority 2177 class policy into a number, then change the request I/O priority class 2178 into the maximum of the I/O priority class policy number and the numerical 2179 I/O priority class. 2180 2181PID 2182--- 2183 2184The process number controller is used to allow a cgroup to stop any 2185new tasks from being fork()'d or clone()'d after a specified limit is 2186reached. 
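
As a minimal illustration (the cgroup path here is hypothetical;
the "pids.max" and "pids.current" interface files are described
below), capping a cgroup at 100 tasks could look like::

  # echo 100 > /sys/fs/cgroup/workload/pids.max
  # cat /sys/fs/cgroup/workload/pids.current
  3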
2187 2188The number of tasks in a cgroup can be exhausted in ways which other 2189controllers cannot prevent, thus warranting its own controller. For 2190example, a fork bomb is likely to exhaust the number of tasks before 2191hitting memory restrictions. 2192 2193Note that PIDs used in this controller refer to TIDs, process IDs as 2194used by the kernel. 2195 2196 2197PID Interface Files 2198~~~~~~~~~~~~~~~~~~~ 2199 2200 pids.max 2201 A read-write single value file which exists on non-root 2202 cgroups. The default is "max". 2203 2204 Hard limit of number of processes. 2205 2206 pids.current 2207 A read-only single value file which exists on non-root cgroups. 2208 2209 The number of processes currently in the cgroup and its 2210 descendants. 2211 2212 pids.peak 2213 A read-only single value file which exists on non-root cgroups. 2214 2215 The maximum value that the number of processes in the cgroup and its 2216 descendants has ever reached. 2217 2218 pids.events 2219 A read-only flat-keyed file which exists on non-root cgroups. Unless 2220 specified otherwise, a value change in this file generates a file 2221 modified event. The following entries are defined. 2222 2223 max 2224 The number of times the cgroup's total number of processes hit the pids.max 2225 limit (see also pids_localevents). 2226 2227 pids.events.local 2228 Similar to pids.events but the fields in the file are local 2229 to the cgroup i.e. not hierarchical. The file modified event 2230 generated on this file reflects only the local events. 2231 2232Organisational operations are not blocked by cgroup policies, so it is 2233possible to have pids.current > pids.max. This can be done by either 2234setting the limit to be smaller than pids.current, or attaching enough 2235processes to the cgroup such that pids.current is larger than 2236pids.max. However, it is not possible to violate a cgroup PID policy 2237through fork() or clone(). These will return -EAGAIN if the creation 2238of a new process would cause a cgroup policy to be violated. 2239 2240 2241Cpuset 2242------ 2243 2244The "cpuset" controller provides a mechanism for constraining 2245the CPU and memory node placement of tasks to only the resources 2246specified in the cpuset interface files in a task's current cgroup. 2247This is especially valuable on large NUMA systems where placing jobs 2248on properly sized subsets of the systems with careful processor and 2249memory placement to reduce cross-node memory access and contention 2250can improve overall system performance. 2251 2252The "cpuset" controller is hierarchical. That means the controller 2253cannot use CPUs or memory nodes not allowed in its parent. 2254 2255 2256Cpuset Interface Files 2257~~~~~~~~~~~~~~~~~~~~~~ 2258 2259 cpuset.cpus 2260 A read-write multiple values file which exists on non-root 2261 cpuset-enabled cgroups. 2262 2263 It lists the requested CPUs to be used by tasks within this 2264 cgroup. The actual list of CPUs to be granted, however, is 2265 subjected to constraints imposed by its parent and can differ 2266 from the requested CPUs. 2267 2268 The CPU numbers are comma-separated numbers or ranges. 2269 For example:: 2270 2271 # cat cpuset.cpus 2272 0-4,6,8-10 2273 2274 An empty value indicates that the cgroup is using the same 2275 setting as the nearest cgroup ancestor with a non-empty 2276 "cpuset.cpus" or all the available CPUs if none is found. 2277 2278 The value of "cpuset.cpus" stays constant until the next update 2279 and won't be affected by any CPU hotplug events. 
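
    As a sketch, the requested CPUs can be changed by writing a
    list in the same format (values illustrative)::

      # echo "0-2,5" > cpuset.cpus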
2280 2281 cpuset.cpus.effective 2282 A read-only multiple values file which exists on all 2283 cpuset-enabled cgroups. 2284 2285 It lists the onlined CPUs that are actually granted to this 2286 cgroup by its parent. These CPUs are allowed to be used by 2287 tasks within the current cgroup. 2288 2289 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows 2290 all the CPUs from the parent cgroup that can be available to 2291 be used by this cgroup. Otherwise, it should be a subset of 2292 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus" 2293 can be granted. In this case, it will be treated just like an 2294 empty "cpuset.cpus". 2295 2296 Its value will be affected by CPU hotplug events. 2297 2298 cpuset.mems 2299 A read-write multiple values file which exists on non-root 2300 cpuset-enabled cgroups. 2301 2302 It lists the requested memory nodes to be used by tasks within 2303 this cgroup. The actual list of memory nodes granted, however, 2304 is subjected to constraints imposed by its parent and can differ 2305 from the requested memory nodes. 2306 2307 The memory node numbers are comma-separated numbers or ranges. 2308 For example:: 2309 2310 # cat cpuset.mems 2311 0-1,3 2312 2313 An empty value indicates that the cgroup is using the same 2314 setting as the nearest cgroup ancestor with a non-empty 2315 "cpuset.mems" or all the available memory nodes if none 2316 is found. 2317 2318 The value of "cpuset.mems" stays constant until the next update 2319 and won't be affected by any memory nodes hotplug events. 2320 2321 Setting a non-empty value to "cpuset.mems" causes memory of 2322 tasks within the cgroup to be migrated to the designated nodes if 2323 they are currently using memory outside of the designated nodes. 2324 2325 There is a cost for this memory migration. The migration 2326 may not be complete and some memory pages may be left behind. 2327 So it is recommended that "cpuset.mems" should be set properly 2328 before spawning new tasks into the cpuset. Even if there is 2329 a need to change "cpuset.mems" with active tasks, it shouldn't 2330 be done frequently. 2331 2332 cpuset.mems.effective 2333 A read-only multiple values file which exists on all 2334 cpuset-enabled cgroups. 2335 2336 It lists the onlined memory nodes that are actually granted to 2337 this cgroup by its parent. These memory nodes are allowed to 2338 be used by tasks within the current cgroup. 2339 2340 If "cpuset.mems" is empty, it shows all the memory nodes from the 2341 parent cgroup that will be available to be used by this cgroup. 2342 Otherwise, it should be a subset of "cpuset.mems" unless none of 2343 the memory nodes listed in "cpuset.mems" can be granted. In this 2344 case, it will be treated just like an empty "cpuset.mems". 2345 2346 Its value will be affected by memory nodes hotplug events. 2347 2348 cpuset.cpus.exclusive 2349 A read-write multiple values file which exists on non-root 2350 cpuset-enabled cgroups. 2351 2352 It lists all the exclusive CPUs that are allowed to be used 2353 to create a new cpuset partition. Its value is not used 2354 unless the cgroup becomes a valid partition root. See the 2355 "cpuset.cpus.partition" section below for a description of what 2356 a cpuset partition is. 2357 2358 When the cgroup becomes a partition root, the actual exclusive 2359 CPUs that are allocated to that partition are listed in 2360 "cpuset.cpus.exclusive.effective" which may be different 2361 from "cpuset.cpus.exclusive". 
If "cpuset.cpus.exclusive" 2362 has previously been set, "cpuset.cpus.exclusive.effective" 2363 is always a subset of it. 2364 2365 Users can manually set it to a value that is different from 2366 "cpuset.cpus". One constraint in setting it is that the list of 2367 CPUs must be exclusive with respect to "cpuset.cpus.exclusive" 2368 of its sibling. If "cpuset.cpus.exclusive" of a sibling cgroup 2369 isn't set, its "cpuset.cpus" value, if set, cannot be a subset 2370 of it to leave at least one CPU available when the exclusive 2371 CPUs are taken away. 2372 2373 For a parent cgroup, any one of its exclusive CPUs can only 2374 be distributed to at most one of its child cgroups. Having an 2375 exclusive CPU appearing in two or more of its child cgroups is 2376 not allowed (the exclusivity rule). A value that violates the 2377 exclusivity rule will be rejected with a write error. 2378 2379 The root cgroup is a partition root and all its available CPUs 2380 are in its exclusive CPU set. 2381 2382 cpuset.cpus.exclusive.effective 2383 A read-only multiple values file which exists on all non-root 2384 cpuset-enabled cgroups. 2385 2386 This file shows the effective set of exclusive CPUs that 2387 can be used to create a partition root. The content 2388 of this file will always be a subset of its parent's 2389 "cpuset.cpus.exclusive.effective" if its parent is not the root 2390 cgroup. It will also be a subset of "cpuset.cpus.exclusive" 2391 if it is set. If "cpuset.cpus.exclusive" is not set, it is 2392 treated to have an implicit value of "cpuset.cpus" in the 2393 formation of local partition. 2394 2395 cpuset.cpus.isolated 2396 A read-only and root cgroup only multiple values file. 2397 2398 This file shows the set of all isolated CPUs used in existing 2399 isolated partitions. It will be empty if no isolated partition 2400 is created. 2401 2402 cpuset.cpus.partition 2403 A read-write single value file which exists on non-root 2404 cpuset-enabled cgroups. This flag is owned by the parent cgroup 2405 and is not delegatable. 2406 2407 It accepts only the following input values when written to. 2408 2409 ========== ===================================== 2410 "member" Non-root member of a partition 2411 "root" Partition root 2412 "isolated" Partition root without load balancing 2413 ========== ===================================== 2414 2415 A cpuset partition is a collection of cpuset-enabled cgroups with 2416 a partition root at the top of the hierarchy and its descendants 2417 except those that are separate partition roots themselves and 2418 their descendants. A partition has exclusive access to the 2419 set of exclusive CPUs allocated to it. Other cgroups outside 2420 of that partition cannot use any CPUs in that set. 2421 2422 There are two types of partitions - local and remote. A local 2423 partition is one whose parent cgroup is also a valid partition 2424 root. A remote partition is one whose parent cgroup is not a 2425 valid partition root itself. Writing to "cpuset.cpus.exclusive" 2426 is optional for the creation of a local partition as its 2427 "cpuset.cpus.exclusive" file will assume an implicit value that 2428 is the same as "cpuset.cpus" if it is not set. Writing the 2429 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy 2430 before the target partition root is mandatory for the creation 2431 of a remote partition. 2432 2433 Currently, a remote partition cannot be created under a local 2434 partition. 
All the ancestors of a remote partition root except 2435 the root cgroup cannot be a partition root. 2436 2437 The root cgroup is always a partition root and its state cannot 2438 be changed. All other non-root cgroups start out as "member". 2439 2440 When set to "root", the current cgroup is the root of a new 2441 partition or scheduling domain. The set of exclusive CPUs is 2442 determined by the value of its "cpuset.cpus.exclusive.effective". 2443 2444 When set to "isolated", the CPUs in that partition will be in 2445 an isolated state without any load balancing from the scheduler 2446 and excluded from the unbound workqueues. Tasks placed in such 2447 a partition with multiple CPUs should be carefully distributed 2448 and bound to each of the individual CPUs for optimal performance. 2449 2450 A partition root ("root" or "isolated") can be in one of the 2451 two possible states - valid or invalid. An invalid partition 2452 root is in a degraded state where some state information may 2453 be retained, but behaves more like a "member". 2454 2455 All possible state transitions among "member", "root" and 2456 "isolated" are allowed. 2457 2458 On read, the "cpuset.cpus.partition" file can show the following 2459 values. 2460 2461 ============================= ===================================== 2462 "member" Non-root member of a partition 2463 "root" Partition root 2464 "isolated" Partition root without load balancing 2465 "root invalid (<reason>)" Invalid partition root 2466 "isolated invalid (<reason>)" Invalid isolated partition root 2467 ============================= ===================================== 2468 2469 In the case of an invalid partition root, a descriptive string on 2470 why the partition is invalid is included within parentheses. 2471 2472 For a local partition root to be valid, the following conditions 2473 must be met. 2474 2475 1) The parent cgroup is a valid partition root. 2476 2) The "cpuset.cpus.exclusive.effective" file cannot be empty, 2477 though it may contain offline CPUs. 2478 3) The "cpuset.cpus.effective" cannot be empty unless there is 2479 no task associated with this partition. 2480 2481 For a remote partition root to be valid, all the above conditions 2482 except the first one must be met. 2483 2484 External events like hotplug or changes to "cpuset.cpus" or 2485 "cpuset.cpus.exclusive" can cause a valid partition root to 2486 become invalid and vice versa. Note that a task cannot be 2487 moved to a cgroup with empty "cpuset.cpus.effective". 2488 2489 A valid non-root parent partition may distribute out all its CPUs 2490 to its child local partitions when there is no task associated 2491 with it. 2492 2493 Care must be taken to change a valid partition root to "member" 2494 as all its child local partitions, if present, will become 2495 invalid causing disruption to tasks running in those child 2496 partitions. These inactivated partitions could be recovered if 2497 their parent is switched back to a partition root with a proper 2498 value in "cpuset.cpus" or "cpuset.cpus.exclusive". 2499 2500 Poll and inotify events are triggered whenever the state of 2501 "cpuset.cpus.partition" changes. That includes changes caused 2502 by write to "cpuset.cpus.partition", cpu hotplug or other 2503 changes that modify the validity status of the partition. 2504 This will allow user space agents to monitor unexpected changes 2505 to "cpuset.cpus.partition" without the need to do continuous 2506 polling. 
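
    As an illustrative sketch (the cgroup name "A" is
    hypothetical), a two-CPU isolated partition can be created by
    granting the CPUs to a child cgroup and then switching it to
    "isolated"::

      # echo "2-3" > A/cpuset.cpus
      # echo "2-3" > A/cpuset.cpus.exclusive
      # echo isolated > A/cpuset.cpus.partition
      # cat A/cpuset.cpus.partition
      isolated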

    A user can pre-configure certain CPUs to an isolated state
    with load balancing disabled at boot time with the "isolcpus"
    kernel boot command line option. If those CPUs are to be put
    into a partition, they have to be used in an isolated
    partition.


Device controller
-----------------

The device controller manages access to device files. It includes
both the creation of new device files (using mknod) and access to
the existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF. To control access to device
files, a user may create BPF programs of type
BPF_PROG_TYPE_CGROUP_DEVICE and attach them to cgroups with the
BPF_CGROUP_DEVICE flag. On an attempt to access a device file, the
corresponding BPF programs will be executed, and depending on the
return value the attempt will succeed or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major
and minor numbers). If the program returns 0, the attempt fails
with -EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found
in tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel
source tree.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
    A read-write nested-keyed file which exists for all cgroups
    except the root. It describes the currently configured
    resource limits for RDMA/IB devices.

    Lines are keyed by device name and are not ordered.
    Each line contains a space-separated resource name and its
    configured limit that can be distributed.

    The following nested keys are defined.

      ========== =============================
      hca_handle Maximum number of HCA Handles
      hca_object Maximum number of HCA Objects
      ========== =============================

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=2 hca_object=2000
      ocrdma1 hca_handle=3 hca_object=max

  rdma.current
    A read-only file that describes current resource usage.
    It exists for all cgroups except the root.

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=1 hca_object=20
      ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting HugeTLB usage per control
group and enforces the limit at page fault time.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
    Shows the current usage of "hugepagesize" hugetlb pages. This
    file exists for all cgroups except the root.

  hugetlb.<hugepagesize>.max
    Sets/shows the hard limit of "hugepagesize" hugetlb usage.
    The default value is "max". This file exists for all cgroups
    except the root.

  hugetlb.<hugepagesize>.events
    A read-only flat-keyed file which exists on non-root cgroups.

      max
        The number of allocation failures due to the HugeTLB
        limit.

  hugetlb.<hugepagesize>.events.local
    Similar to hugetlb.<hugepagesize>.events but the fields in the
    file are local to the cgroup, i.e. not hierarchical.
    The file modified event generated on this file reflects only
    the local events.

  hugetlb.<hugepagesize>.numa_stat
    Similar to memory.numa_stat, it shows the numa information of
    the hugetlb pages of <hugepagesize> in this cgroup. Only
    hugetlb pages that are actively in use are included. The
    per-node values are in bytes.

Misc
----

The miscellaneous cgroup controller provides a resource limiting
and tracking mechanism for scalar resources which cannot be
abstracted like the other cgroup resources. The controller is
enabled by the CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{}
in the include/linux/misc_cgroup.h file and the corresponding name
via misc_res_name[] in the kernel/cgroup/misc.c file. The provider
of the resource must set its capacity prior to using the resource
by calling misc_cg_set_capacity().

Once a capacity is set, the resource usage can be updated using
the charge and uncharge APIs. All of the APIs to interact with the
misc controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface
files. If two misc resources (res_a and res_b) are registered,
then:

  misc.capacity
    A read-only flat-keyed file shown only in the root cgroup. It
    shows miscellaneous scalar resources available on the platform
    along with their quantities::

      $ cat misc.capacity
      res_a 50
      res_b 10

  misc.current
    A read-only flat-keyed file shown in all cgroups. It shows
    the current usage of the resources in the cgroup and its
    children::

      $ cat misc.current
      res_a 3
      res_b 0

  misc.peak
    A read-only flat-keyed file shown in all cgroups. It shows the
    historical maximum usage of the resources in the cgroup and
    its children::

      $ cat misc.peak
      res_a 10
      res_b 8

  misc.max
    A read-write flat-keyed file shown in non-root cgroups. It
    sets the allowed maximum usage of the resources in the cgroup
    and its children::

      $ cat misc.max
      res_a max
      res_b 4

    A limit can be set by::

      # echo res_a 1 > misc.max

    A limit can be set to max by::

      # echo res_a max > misc.max

    Limits can be set higher than the capacity value in the
    misc.capacity file.

  misc.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified otherwise,
    a value change in this file generates a file modified event.
    All fields in this file are hierarchical.

      max
        The number of times the cgroup's resource usage was
        about to go over the max boundary.

  misc.events.local
    Similar to misc.events but the fields in the file are local to
    the cgroup, i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which
it is used first, and stays charged to that cgroup until that
resource is freed. Migrating a process to a different cgroup does
not move the charge to the destination cgroup where the process
has moved.
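
For example, using the hypothetical resource "res_a" from the
examples above, a charge made while a process was in cgroup "A"
stays with "A" even after the process is migrated to cgroup "B"
(values illustrative)::

  # cat A/misc.current
  res_a 3
  # echo $PID > B/cgroup.procs
  # cat A/misc.current
  res_a 3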
2699 2700Others 2701------ 2702 2703perf_event 2704~~~~~~~~~~ 2705 2706perf_event controller, if not mounted on a legacy hierarchy, is 2707automatically enabled on the v2 hierarchy so that perf events can 2708always be filtered by cgroup v2 path. The controller can still be 2709moved to a legacy hierarchy after v2 hierarchy is populated. 2710 2711 2712Non-normative information 2713------------------------- 2714 2715This section contains information that isn't considered to be a part of 2716the stable kernel API and so is subject to change. 2717 2718 2719CPU controller root cgroup process behaviour 2720~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2721 2722When distributing CPU cycles in the root cgroup each thread in this 2723cgroup is treated as if it was hosted in a separate child cgroup of the 2724root cgroup. This child cgroup weight is dependent on its thread nice 2725level. 2726 2727For details of this mapping see sched_prio_to_weight array in 2728kernel/sched/core.c file (values from this array should be scaled 2729appropriately so the neutral - nice 0 - value is 100 instead of 1024). 2730 2731 2732IO controller root cgroup process behaviour 2733~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2734 2735Root cgroup processes are hosted in an implicit leaf child node. 2736When distributing IO resources this implicit child node is taken into 2737account as if it was a normal child cgroup of the root cgroup with a 2738weight value of 200. 2739 2740 2741Namespace 2742========= 2743 2744Basics 2745------ 2746 2747cgroup namespace provides a mechanism to virtualize the view of the 2748"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone 2749flag can be used with clone(2) and unshare(2) to create a new cgroup 2750namespace. The process running inside the cgroup namespace will have 2751its "/proc/$PID/cgroup" output restricted to cgroupns root. The 2752cgroupns root is the cgroup of the process at the time of creation of 2753the cgroup namespace. 2754 2755Without cgroup namespace, the "/proc/$PID/cgroup" file shows the 2756complete path of the cgroup of a process. In a container setup where 2757a set of cgroups and namespaces are intended to isolate processes the 2758"/proc/$PID/cgroup" file may leak potential system level information 2759to the isolated processes. For example:: 2760 2761 # cat /proc/self/cgroup 2762 0::/batchjobs/container_id1 2763 2764The path '/batchjobs/container_id1' can be considered as system-data 2765and undesirable to expose to the isolated processes. cgroup namespace 2766can be used to restrict visibility of this path. For example, before 2767creating a cgroup namespace, one would see:: 2768 2769 # ls -l /proc/self/ns/cgroup 2770 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835] 2771 # cat /proc/self/cgroup 2772 0::/batchjobs/container_id1 2773 2774After unsharing a new namespace, the view changes:: 2775 2776 # ls -l /proc/self/ns/cgroup 2777 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183] 2778 # cat /proc/self/cgroup 2779 0::/ 2780 2781When some thread from a multi-threaded process unshares its cgroup 2782namespace, the new cgroupns gets applied to the entire process (all 2783the threads). This is natural for the v2 hierarchy; however, for the 2784legacy hierarchies, this may be unexpected. 2785 2786A cgroup namespace is alive as long as there are processes inside or 2787mounts pinning it. When the last usage goes away, the cgroup 2788namespace is destroyed. 
The cgroupns root and the actual cgroups remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which
the process calling unshare(2) is running. For example, if a
process in the /batchjobs/container_id1 cgroup calls unshare,
cgroup /batchjobs/container_id1 becomes the cgroupns root. For
the init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace
creator process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of
"/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown. For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will
see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate
that it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy
is still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged. A task inside a
cgroup namespace should only be exposed to its own cgroupns
hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user
    namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching process
under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns
root as the filesystem root. The process needs CAP_SYS_ADMIN
against its user and mount namespaces.
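
As a combined sketch (using util-linux unshare; paths
illustrative), unsharing the cgroup and mount namespaces and then
mounting cgroup2 yields a view rooted at the cgroupns root::

  # unshare -Cm bash
  # mount -t cgroup2 none /mnt
  # cat /proc/self/cgroup
  0::/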

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by the
namespace-private cgroupfs mount provides a properly isolated
cgroup view inside the container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using
the following two functions.

  wbc_init_bio(@wbc, @bio)
    Should be called for each bio carrying writeback data and
    associates the bio with the inode's owner cgroup and the
    corresponding request queue. This must be called after
    a queue (device) has been associated with the bio and
    before submission.

  wbc_account_cgroup_owner(@wbc, @page, @bytes)
    Should be called for each data segment being written out.
    While this function doesn't care exactly when it's called
    during the writeback session, it's the easiest and most
    natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows
for selective disabling of cgroup writeback support which is
helpful when certain filesystem features, e.g. journaled data
mode, are incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and
if the writeback session is holding shared resources, e.g. a
journal entry, may lead to priority inversion. There is no one
easy solution for the problem. Filesystems can try to work around
specific problem cases by skipping wbc_init_bio() and using
bio_associate_blkg() directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use the
  "cgroup.controllers" file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed
to provide a high level of flexibility, it wasn't useful in
practice.

For example, as there is only one instance of each controller,
utility type controllers such as freezer which can be useful in
all hierarchies could only be used in one. The issue is
exacerbated by the fact that controllers couldn't be moved to
another hierarchy once hierarchies were populated. Another issue
was that all controllers bound to a hierarchy were forced to have
exactly the same view of the hierarchy. It wasn't possible to
vary the granularity depending on the specific controller.
In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, it restricted how cgroup could be used in general and
what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of the proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the knob's path by appending the knob's name to it, and then
open and read and/or write to the file.
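For illustration, on a v1 hierarchy this sequence might look like the
following sketch (the mount point, the hierarchy layout and the
``pids.max`` knob are hypothetical examples)::

  # grep pids /proc/self/cgroup
  5:pids:/user.slice/app
  # echo 64 > /sys/fs/cgroup/pids/user.slice/app/pids.max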
Such a sequence is not only extremely clunky and unusual but also
inherently racy.  There is no conventional way to define a transaction
across the required steps and nothing can guarantee that the process
would actually be operating on its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces while the kernel
inadvertently exposed, and became locked into, those constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and its
child cgroups competed for resources.  This was nasty as two different
types of entities competed and there was no obvious way to settle it.
Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with them; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed by cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.
The event delivery wasn't recursive or delegatable.  The limitations
of the mechanism also led to an in-kernel event delivery filtering
mechanism, further complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not only
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It also
enjoys having reclaim pressure proportional to its overage when above
its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory.  The memory consumption of workloads varies during
runtime, and that requires users to overcommit.  But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit.  Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead will lead
to gradual performance degradation.
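For illustration, a conservative setup might look like the following
sketch (the cgroup path and the value are hypothetical); the "high"
entry in "memory.events" counts how often the boundary was hit::

  # echo 512M > /sys/fs/cgroup/workload/memory.high
  # cat /sys/fs/cgroup/workload/memory.events
  low 0
  high 4091
  max 0
  oom 0
  oom_kill 0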
The user can monitor this and make corrections until the minimal
memory footprint that still gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than to kill the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail.  memory.max on the other hand will first set
the limit to prevent new charges, and then reclaim and OOM kill until
the new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.
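As a sketch of what this separate control looks like (the cgroup path
and the values are hypothetical), swap can be capped independently of
memory through the "memory.swap.max" interface file::

  # echo 2G > /sys/fs/cgroup/workload/memory.max
  # echo 1G > /sys/fs/cgroup/workload/memory.swap.max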