.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Miscellaneous cgroup Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups making up the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
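
For example, booting with the following on the kernel command line
keeps these two controllers out of v1 so they are available to v2
(the parameter also accepts "all" to disable every v1 controller; the
controller names here are illustrative)::

  cgroup_no_v1=cpu,memory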

cgroup v2 currently supports the following mount options.

  nsdelegate
	Consider cgroup namespaces as delegation boundaries.  This
	option is system wide and can only be set on mount or modified
	through remount from the init namespace.  The mount option is
	ignored on non-init namespace mounts.  Please refer to the
	Delegation section for details.

  favordynmods
	Reduce the latencies of dynamic cgroup modifications such as
	task migrations and controller on/offs at the cost of making
	hot path operations such as forks and exits more expensive.
	The static usage pattern of creating a cgroup, enabling
	controllers, and then seeding it with CLONE_INTO_CGROUP is
	not affected by this option.

  memory_localevents
	Only populate memory.events with data for the current cgroup,
	and not any subtrees.  This is legacy behaviour; the default
	behaviour without this option is to include subtree counts.
	This option is system wide and can only be set on mount or
	modified through remount from the init namespace.  The mount
	option is ignored on non-init namespace mounts.

  memory_recursiveprot
	Recursively apply memory.min and memory.low protection to
	entire subtrees, without requiring explicit downward
	propagation into leaf cgroups.  This allows protecting entire
	subtrees from one another, while retaining free competition
	within those subtrees.  This should have been the default
	behavior but is a mount-option to avoid regressing setups
	relying on the original semantics (e.g. specifying bogusly
	high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
	Count HugeTLB memory usage towards the cgroup's overall
	memory usage for the memory controller (for the purpose of
	statistics reporting and memory protection).  This is a new
	behavior that could regress existing setups, so it must be
	explicitly opted in with this mount option.

	A few caveats to keep in mind:

	* There is no HugeTLB pool management involved in the memory
	  controller.  The pre-allocated pool does not belong to anyone.
	  Specifically, when a new HugeTLB folio is allocated to
	  the pool, it is not accounted for from the perspective of the
	  memory controller.  It is only charged to a cgroup when it is
	  actually used (e.g. at page fault time).  Host memory
	  overcommit management has to consider this when configuring
	  hard limits.  In general, HugeTLB pool management should be
	  done via other mechanisms (such as the HugeTLB controller).
	* Failure to charge a HugeTLB folio to the memory controller
	  results in SIGBUS.  This could happen even if the HugeTLB pool
	  still has pages available (but the cgroup limit is hit and
	  reclaim attempt fails).
	* Charging HugeTLB memory towards the memory controller affects
	  memory protection and reclaim dynamics.  Any userspace tuning
	  (of low and min limits, for example) needs to take this into
	  account.
	* HugeTLB pages utilized while this option is not selected
	  will not be tracked by the memory controller (even if cgroup
	  v2 is remounted later on).

  pids_localevents
	This option restores the v1-like behavior of pids.events:max,
	that is, only local (inside cgroup proper) fork failures are
	counted.  Without this option, pids.events:max represents any
	pids.max enforcement across the cgroup's subtree.
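
A minimal example mounting the v2 hierarchy with two of the options
above (the mount point is the path conventionally used by
distributions)::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup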



Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
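
For example, assuming the v2 hierarchy is mounted at /sys/fs/cgroup,
the current shell can be moved into a newly created cgroup as follows
("test" is an arbitrary name)::

  # mkdir /sys/fs/cgroup/test
  # echo $$ > /sys/fs/cgroup/test/cgroup.procs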

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called threaded domain or thread root interchangeably and
serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain invalid"
in these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
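
As a sketch, the following sets up a threaded child under an example
cgroup "svc" and moves one thread into it ($TID is a placeholder for
the ID of a thread belonging to a process already in "svc", so the
move stays within the same threaded domain)::

  # mkdir -p /sys/fs/cgroup/svc/workers
  # echo threaded > /sys/fs/cgroup/svc/workers/cgroup.type
  # echo $TID > /sys/fs/cgroup/svc/workers/cgroup.threads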

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.  After
the one process in C exits, B and C's "populated" fields would flip to
"0" and file modified events will be generated on the "cgroup.events"
files of both cgroups.
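
For example, a supervisor can wait for a sub-hierarchy to empty by
watching for modification events on the file and then checking the
field (inotifywait here is from the inotify-tools package; the path is
an example)::

  # inotifywait -e modify /sys/fs/cgroup/test/cgroup.events
  # grep populated /sys/fs/cgroup/test/cgroup.events
  populated 0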


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or they all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
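
A child's "cgroup.controllers" mirrors what its parent has enabled in
"cgroup.subtree_control".  Assuming the hierarchy above is rooted at
/sys/fs/cgroup/A with both controllers enabled all the way from the
root, the propagation can be observed as follows (the listing order is
not guaranteed)::

  # cat /sys/fs/cgroup/A/cgroup.subtree_control
  cpu memory
  # cat /sys/fs/cgroup/A/B/cgroup.controllers
  cpu memory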


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
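
A minimal sketch of that sequence, assuming an example "parent" cgroup
which currently holds the processes::

  # mkdir /sys/fs/cgroup/parent/leaf
  # for pid in $(cat /sys/fs/cgroup/parent/cgroup.procs); do
  >         echo $pid > /sys/fs/cgroup/parent/leaf/cgroup.procs
  > done
  # echo "+memory" > /sys/fs/cgroup/parent/cgroup.subtree_control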


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types.  Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
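
For example, delegation to a user via the first method amounts to
handing over ownership of the directory and the delegatable files
(user "u1" and the path are illustrative)::

  # mkdir /sys/fs/cgroup/u1-slice
  # chown u1 /sys/fs/cgroup/u1-slice \
          /sys/fs/cgroup/u1-slice/cgroup.procs \
          /sys/fs/cgroup/u1-slice/cgroup.threads \
          /sys/fs/cgroup/u1-slice/cgroup.subtree_control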


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lower case alphabets and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance.  Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
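
For example, if three children have "cpu.weight" of 100, 100 and 200
and all are runnable, they receive 25%, 25% and 50% of the parent's
CPU cycles respectively.  If the 200-weight child goes idle, the
remaining two split the cycles evenly - the distribution happens only
among active children.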


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key-value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
	A read-write single value file which exists on non-root
	cgroups.

	When read, it indicates the current type of the cgroup, which
	can be one of the following values.

	- "domain" : A normal valid domain cgroup.

	- "domain threaded" : A threaded domain cgroup which is
	  serving as the root of a threaded subtree.

	- "domain invalid" : A cgroup which is in an invalid state.
	  It can't be populated or have controllers enabled.  It may
	  be allowed to become a threaded cgroup.

	- "threaded" : A threaded cgroup which is a member of a
	  threaded subtree.

	A cgroup can be turned into a threaded cgroup by writing
	"threaded" to this file.

  cgroup.procs
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the PIDs of all processes which belong to
	the cgroup one-per-line.  The PIDs are not ordered and the
	same PID may show up more than once if the process got moved
	to another cgroup and then back or the PID got recycled while
	reading.

	A PID can be written to migrate the process associated with
	the PID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.procs" file.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

	In a threaded cgroup, reading this file fails with EOPNOTSUPP
	as all the processes belong to the thread root.  Writing is
	supported and moves every thread of the process to the cgroup.

  cgroup.threads
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the TIDs of all threads which belong to
	the cgroup one-per-line.  The TIDs are not ordered and the
	same TID may show up more than once if the thread got moved to
	another cgroup and then back or the TID got recycled while
	reading.

	A TID can be written to migrate the thread associated with the
	TID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.threads" file.

	- The cgroup that the thread is currently in must be in the
	  same resource domain as the destination cgroup.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

  cgroup.controllers
	A read-only space separated values file which exists on all
	cgroups.

	It shows space separated list of all controllers available to
	the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
	A read-write space separated values file which exists on all
	cgroups.  Starts out empty.

	When read, it shows space separated list of the controllers
	which are enabled to control resource distribution from the
	cgroup to its children.

	Space separated list of controllers prefixed with '+' or '-'
	can be written to enable or disable controllers.  A controller
	name prefixed with '+' enables the controller and '-'
	disables.  If a controller appears more than once on the list,
	the last one is effective.  When multiple enable and disable
	operations are specified, either all succeed or all fail.

  cgroup.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  populated
		1 if the cgroup or its descendants contains any live
		processes; otherwise, 0.
	  frozen
		1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
	A read-write single value file.  The default is "max".

	Maximum allowed number of descendant cgroups.
	If the actual number of descendants is equal to or larger,
	an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
	A read-write single value file.  The default is "max".

	Maximum allowed descent depth below the current cgroup.
	If the actual descent depth is equal to or larger,
	an attempt to create a new child cgroup will fail.

  cgroup.stat
	A read-only flat-keyed file with the following entries:

	  nr_descendants
		Total number of visible descendant cgroups.

	  nr_dying_descendants
		Total number of dying descendant cgroups.  A cgroup
		becomes dying after being deleted by a user.  The cgroup
		will remain in the dying state for some undefined time
		(which can depend on system load) before being completely
		destroyed.

		A process can't enter a dying cgroup under any
		circumstances, and a dying cgroup can't revive.

		A dying cgroup can consume system resources not exceeding
		limits, which were active at the moment of cgroup deletion.

	  nr_subsys_<cgroup_subsys>
		Total number of live cgroup subsystems (e.g. memory
		cgroup) at and beneath the current cgroup.

	  nr_dying_subsys_<cgroup_subsys>
		Total number of dying cgroup subsystems (e.g. memory
		cgroup) at and beneath the current cgroup.

  cgroup.freeze
	A read-write single value file which exists on non-root cgroups.
	Allowed values are "0" and "1".  The default is "0".

	Writing "1" to the file causes freezing of the cgroup and all
	descendant cgroups.  This means that all belonging processes will
	be stopped and will not run until the cgroup is explicitly
	unfrozen.  Freezing of the cgroup may take some time; when this
	action is completed, the "frozen" value in the cgroup.events
	control file will be updated to "1" and the corresponding
	notification will be issued.

	A cgroup can be frozen either by its own settings, or by settings
	of any ancestor cgroups.  If any of the ancestor cgroups is
	frozen, the cgroup will remain frozen.

	Processes in the frozen cgroup can be killed by a fatal signal.
	They also can enter and leave a frozen cgroup: either by an
	explicit move by a user, or if freezing of the cgroup races with
	fork().  If a process is moved to a frozen cgroup, it stops.  If a
	process is moved out of a frozen cgroup, it becomes running.

	The frozen status of a cgroup doesn't affect any cgroup tree
	operations: it's possible to delete a frozen (and empty) cgroup,
	as well as create new sub-cgroups.
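
	For example, assuming a populated cgroup, freezing it and
	confirming the transition through "cgroup.events"::

	  # echo 1 > cgroup.freeze
	  # cat cgroup.events
	  populated 1
	  frozen 1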

  cgroup.kill
	A write-only single value file which exists on non-root cgroups.
	The only allowed value is "1".

	Writing "1" to the file causes the cgroup and all descendant
	cgroups to be killed.  This means that all processes located in
	the affected cgroup tree will be killed via SIGKILL.

	Killing a cgroup tree will deal with concurrent forks
	appropriately and is protected against migrations.

	In a threaded cgroup, writing this file fails with EOPNOTSUPP as
	killing cgroups is a process-directed operation, i.e. it affects
	the whole thread-group.

  cgroup.pressure
	A read-write single value file whose allowed values are "0" and
	"1".  The default is "1".

	Writing "0" to the file will disable the cgroup PSI accounting.
	Writing "1" to the file will re-enable the cgroup PSI accounting.

	This control attribute is not hierarchical, so disabling or
	enabling PSI accounting in a cgroup does not affect PSI
	accounting in descendants and does not require enablement to be
	passed down from the root via ancestors.

	The reason this control attribute exists is that PSI accounts
	stalls for each cgroup separately and aggregates it at each level
	of the hierarchy.  This may cause non-negligible overhead for
	some workloads deep in the hierarchy, in which case this control
	attribute can be used to disable PSI accounting in the non-leaf
	cgroups.

  irq.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for IRQ/SOFTIRQ.  See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a temporal
base and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes.  For
a kernel built with the CONFIG_RT_GROUP_SCHED option enabled for group
scheduling of realtime processes, the cpu controller can only be enabled
when all RT processes are in the root cgroup.  This limitation does
not apply if CONFIG_RT_GROUP_SCHED is disabled.  Be aware that system
management software may already have placed RT processes into non-root
cgroups during the system boot process, and these processes may need
to be moved to the root cgroup before the cpu controller can be enabled
with a CONFIG_RT_GROUP_SCHED enabled kernel.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
	A read-only flat-keyed file.
	This file exists whether the controller is enabled or not.

	It always reports the following three stats:

	- usage_usec
	- user_usec
	- system_usec

	and the following five when the controller is enabled:

	- nr_periods
	- nr_throttled
	- throttled_usec
	- nr_bursts
	- burst_usec

  cpu.weight
	A read-write single value file which exists on non-root
	cgroups.  The default is "100".

	For non-idle groups (cpu.idle = 0), the weight is in the
	range [1, 10000].

	If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
	then the weight will show as 0.

  cpu.weight.nice
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The nice value is in the range [-20, 19].

	This interface file is an alternative interface for
	"cpu.weight" and allows reading and setting weight using the
	same values used by nice(2).  Because the range is smaller and
	granularity is coarser for the nice values, the read value is
	the closest approximation of the current weight.

  cpu.max
	A read-write two value file which exists on non-root cgroups.
	The default is "max 100000".

	The maximum bandwidth limit.  It's in the following format::

	  $MAX $PERIOD

	which indicates that the group may consume up to $MAX in each
	$PERIOD duration.  "max" for $MAX indicates no limit.  If only
	one number is written, $MAX is updated.
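
	For example, the following limits the group to 50ms of CPU time
	every 100ms, i.e. half a CPU (the values are illustrative)::

	  # echo "50000 100000" > cpu.max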

  cpu.max.burst
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The burst in the range [0, $MAX].

  cpu.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for CPU.  See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
	A read-write single value file which exists on non-root cgroups.
	The default is "0", i.e. no utilization boosting.

	The requested minimum utilization (protection) as a percentage
	rational number, e.g. 12.34 for 12.34%.

	This interface allows reading and setting minimum utilization
	clamp values similar to sched_setattr(2).  This minimum
	utilization value is used to clamp the task specific minimum
	utilization clamp.

	The requested minimum utilization (protection) is always capped
	by the current value for the maximum utilization (limit), i.e.
	`cpu.uclamp.max`.

  cpu.uclamp.max
	A read-write single value file which exists on non-root cgroups.
	The default is "max", i.e. no utilization capping.

	The requested maximum utilization (limit) as a percentage
	rational number, e.g. 98.76 for 98.76%.

	This interface allows reading and setting maximum utilization
	clamp values similar to sched_setattr(2).  This maximum
	utilization value is used to clamp the task specific maximum
	utilization clamp.

  cpu.idle
	A read-write single value file which exists on non-root cgroups.
	The default is 0.

	This is the cgroup analog of the per-task SCHED_IDLE sched
	policy.  Setting this value to 1 will make the scheduling policy
	of the cgroup SCHED_IDLE.  The threads inside the cgroup will
	retain their own relative priorities, but the cgroup itself will
	be treated as very low priority relative to its peers.



Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of memory currently being used by the cgroup
	and its descendants.

  memory.min
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Hard memory protection.  If the memory usage of a cgroup
	is within its effective min boundary, the cgroup's memory
	won't be reclaimed under any conditions.  If there is no
	unprotected reclaimable memory available, the OOM killer
	is invoked.  Above the effective min boundary (or
	effective low boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	The effective min boundary is limited by memory.min values of
	all ancestor cgroups.  If there is memory.min overcommitment
	(child cgroups are requesting more protected memory
	than the parent will allow), then each child cgroup will get
	the part of the parent's protection proportional to its
	actual memory usage below memory.min.

	Putting more memory than generally available under this
	protection is discouraged and may lead to constant OOMs.

	If a memory cgroup is not populated with processes,
	its memory.min is ignored.

  memory.low
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Best-effort memory protection.  If the memory usage of a
	cgroup is within its effective low boundary, the cgroup's
	memory won't be reclaimed unless there is no reclaimable
	memory available in unprotected cgroups.
	Above the effective low boundary (or
	effective min boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	The effective low boundary is limited by memory.low values of
	all ancestor cgroups.  If there is memory.low overcommitment
	(child cgroups are requesting more protected memory
	than the parent will allow), then each child cgroup will get
	the part of the parent's protection proportional to its
	actual memory usage below memory.low.

	Putting more memory than generally available under this
	protection is discouraged.

  memory.high
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage throttle limit.  If a cgroup's usage goes
	over the high boundary, the processes of the cgroup are
	throttled and put under heavy reclaim pressure.

	Going over the high limit never invokes the OOM killer and
	under extreme conditions the limit may be breached.  The high
	limit should be used in scenarios where an external process
	monitors the limited cgroup to alleviate heavy reclaim
	pressure.

  memory.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage hard limit.  This is the main mechanism to limit
	memory usage of a cgroup.  If a cgroup's memory usage reaches
	this limit and can't be reduced, the OOM killer is invoked in
	the cgroup.  Under certain circumstances, the usage may go
	over the limit temporarily.

	In the default configuration, regular 0-order allocations always
	succeed unless the OOM killer chooses the current task as a
	victim.

	Some kinds of allocations don't invoke the OOM killer.  The
	caller could retry them differently, return -ENOMEM to
	userspace, or silently ignore the failure in cases like disk
	readahead.
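
	For example, to start throttling at 768MiB and hard-cap the
	cgroup at 1GiB (illustrative values; these files accept byte
	counts with optional suffixes such as K, M and G)::

	  # echo 768M > memory.high
	  # echo 1G > memory.max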

  memory.reclaim
	A write-only nested-keyed file which exists for all cgroups.

	This is a simple interface to trigger memory reclaim in the
	target cgroup.

	Example::

	  echo "1G" > memory.reclaim

	Please note that the kernel can over or under reclaim from
	the target cgroup.  If fewer bytes are reclaimed than the
	specified amount, -EAGAIN is returned.

	Please note that the proactive reclaim (triggered by this
	interface) is not meant to indicate memory pressure on the
	memory cgroup.  Therefore socket memory balancing triggered by
	the memory reclaim normally is not exercised in this case.
	This means that the networking layer will not adapt based on
	reclaim induced by memory.reclaim.

	The following nested keys are defined.

	  ==========            ================================
	  swappiness            Swappiness value to reclaim with
	  ==========            ================================

	Specifying a swappiness value instructs the kernel to perform
	the reclaim with that swappiness value.  Note that this has the
	same semantics as vm.swappiness applied to memcg reclaim with
	all the existing limitations and potential future extensions.
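
	For example, to reclaim 512MiB while avoiding swap entirely
	(the amount and swappiness value are illustrative)::

	  echo "512M swappiness=0" > memory.reclaim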
1345
1346  memory.peak
1347	A read-write single value file which exists on non-root cgroups.
1348
1349	The max memory usage recorded for the cgroup and its descendants since
1350	either the creation of the cgroup or the most recent reset for that FD.
1351
1352	A write of any non-empty string to this file resets it to the
1353	current memory usage for subsequent reads through the same
1354	file descriptor.
1355
1356  memory.oom.group
1357	A read-write single value file which exists on non-root
1358	cgroups.  The default value is "0".
1359
1360	Determines whether the cgroup should be treated as
1361	an indivisible workload by the OOM killer. If set,
1362	all tasks belonging to the cgroup or to its descendants
1363	(if the memory cgroup is not a leaf cgroup) are killed
1364	together or not at all. This can be used to avoid
1365	partial kills to guarantee workload integrity.
1366
1367	Tasks with the OOM protection (oom_score_adj set to -1000)
1368	are treated as an exception and are never killed.
1369
1370	If the OOM killer is invoked in a cgroup, it's not going
1371	to kill any tasks outside of this cgroup, regardless
1372	memory.oom.group values of ancestor cgroups.
1373
1374  memory.events
1375	A read-only flat-keyed file which exists on non-root cgroups.
1376	The following entries are defined.  Unless specified
1377	otherwise, a value change in this file generates a file
1378	modified event.
1379
1380	Note that all fields in this file are hierarchical and the
1381	file modified event can be generated due to an event down the
1382	hierarchy. For the local events at the cgroup level see
1383	memory.events.local.
1384
1385	  low
1386		The number of times the cgroup is reclaimed due to
1387		high memory pressure even though its usage is under
1388		the low boundary.  This usually indicates that the low
1389		boundary is over-committed.
1390
1391	  high
1392		The number of times processes of the cgroup are
1393		throttled and routed to perform direct memory reclaim
1394		because the high memory boundary was exceeded.  For a
1395		cgroup whose memory usage is capped by the high limit
1396		rather than global memory pressure, this event's
1397		occurrences are expected.
1398
1399	  max
1400		The number of times the cgroup's memory usage was
1401		about to go over the max boundary.  If direct reclaim
1402		fails to bring it down, the cgroup goes to OOM state.
1403
	  oom
		The number of times the cgroup's memory usage reached
		the limit and allocation was about to fail.

		This event is not raised if the OOM killer is not
		considered as an option, e.g. for failed high-order
		allocations or if the caller asked not to retry.
1411
1412	  oom_kill
1413		The number of processes belonging to this cgroup
1414		killed by any kind of OOM killer.
1415
	  oom_group_kill
		The number of times a group OOM has occurred.
1418
1419  memory.events.local
1420	Similar to memory.events but the fields in the file are local
1421	to the cgroup i.e. not hierarchical. The file modified event
1422	generated on this file reflects only the local events.
1423
1424  memory.stat
1425	A read-only flat-keyed file which exists on non-root cgroups.
1426
1427	This breaks down the cgroup's memory footprint into different
1428	types of memory, type-specific details, and other information
1429	on the state and past events of the memory management system.
1430
1431	All memory amounts are in bytes.
1432
1433	The entries are ordered to be human readable, and new entries
1434	can show up in the middle. Don't rely on items remaining in a
1435	fixed position; use the keys to look up specific values!
1436
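	For example, a specific value can be looked up by key rather
	than by position (a sketch using standard shell tools)::

	  awk '$1 == "anon" { print $2 }' memory.stat
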
	Entries that have no per-node counter are tagged 'npn'
	(non-per-node); they do not show up in memory.numa_stat.
1440
1441	  anon
1442		Amount of memory used in anonymous mappings such as
1443		brk(), sbrk(), and mmap(MAP_ANONYMOUS)
1444
1445	  file
1446		Amount of memory used to cache filesystem data,
1447		including tmpfs and shared memory.
1448
	  kernel (npn)
		Amount of total kernel memory, including kernel_stack,
		pagetables, percpu, vmalloc and slab, in addition to
		other kernel memory use cases.
1453
1454	  kernel_stack
1455		Amount of memory allocated to kernel stacks.
1456
1457	  pagetables
		Amount of memory allocated for page tables.
1459
	  sec_pagetables
		Amount of memory allocated for secondary page tables;
		this currently includes KVM MMU allocations on x86 and
		arm64, and IOMMU page tables.
1464
1465	  percpu (npn)
1466		Amount of memory used for storing per-cpu kernel
1467		data structures.
1468
1469	  sock (npn)
1470		Amount of memory used in network transmission buffers
1471
1472	  vmalloc (npn)
1473		Amount of memory used for vmap backed memory.
1474
1475	  shmem
1476		Amount of cached filesystem data that is swap-backed,
1477		such as tmpfs, shm segments, shared anonymous mmap()s
1478
1479	  zswap
1480		Amount of memory consumed by the zswap compression backend.
1481
1482	  zswapped
1483		Amount of application memory swapped out to zswap.
1484
1485	  file_mapped
1486		Amount of cached filesystem data mapped with mmap()
1487
1488	  file_dirty
1489		Amount of cached filesystem data that was modified but
1490		not yet written back to disk
1491
1492	  file_writeback
1493		Amount of cached filesystem data that was modified and
1494		is currently being written back to disk
1495
1496	  swapcached
1497		Amount of swap cached in memory. The swapcache is accounted
1498		against both memory and swap usage.
1499
1500	  anon_thp
1501		Amount of memory used in anonymous mappings backed by
1502		transparent hugepages
1503
1504	  file_thp
1505		Amount of cached filesystem data backed by transparent
1506		hugepages
1507
1508	  shmem_thp
1509		Amount of shm, tmpfs, shared anonymous mmap()s backed by
1510		transparent hugepages
1511
1512	  inactive_anon, active_anon, inactive_file, active_file, unevictable
1513		Amount of memory, swap-backed and filesystem-backed,
1514		on the internal memory management lists used by the
1515		page reclaim algorithm.
1516
		As these represent internal list state (e.g. shmem pages are on anon
		memory management lists), inactive_foo + active_foo may not be equal to
		the value for the foo counter, since the foo counter is type-based, not
		list-based.
1521
1522	  slab_reclaimable
1523		Part of "slab" that might be reclaimed, such as
1524		dentries and inodes.
1525
1526	  slab_unreclaimable
1527		Part of "slab" that cannot be reclaimed on memory
1528		pressure.
1529
1530	  slab (npn)
1531		Amount of memory used for storing in-kernel data
1532		structures.
1533
1534	  workingset_refault_anon
1535		Number of refaults of previously evicted anonymous pages.
1536
1537	  workingset_refault_file
1538		Number of refaults of previously evicted file pages.
1539
1540	  workingset_activate_anon
1541		Number of refaulted anonymous pages that were immediately
1542		activated.
1543
1544	  workingset_activate_file
1545		Number of refaulted file pages that were immediately activated.
1546
1547	  workingset_restore_anon
1548		Number of restored anonymous pages which have been detected as
1549		an active workingset before they got reclaimed.
1550
1551	  workingset_restore_file
1552		Number of restored file pages which have been detected as an
1553		active workingset before they got reclaimed.
1554
1555	  workingset_nodereclaim
1556		Number of times a shadow node has been reclaimed
1557
1558	  pgscan (npn)
1559		Amount of scanned pages (in an inactive LRU list)
1560
1561	  pgsteal (npn)
1562		Amount of reclaimed pages
1563
1564	  pgscan_kswapd (npn)
1565		Amount of scanned pages by kswapd (in an inactive LRU list)
1566
	  pgscan_direct (npn)
		Amount of scanned pages directly (in an inactive LRU list)

	  pgscan_khugepaged (npn)
		Amount of scanned pages by khugepaged (in an inactive LRU list)
1572
1573	  pgsteal_kswapd (npn)
1574		Amount of reclaimed pages by kswapd
1575
1576	  pgsteal_direct (npn)
1577		Amount of reclaimed pages directly
1578
1579	  pgsteal_khugepaged (npn)
1580		Amount of reclaimed pages by khugepaged
1581
1582	  pgfault (npn)
1583		Total number of page faults incurred
1584
1585	  pgmajfault (npn)
1586		Number of major page faults incurred
1587
1588	  pgrefill (npn)
1589		Amount of scanned pages (in an active LRU list)
1590
1591	  pgactivate (npn)
1592		Amount of pages moved to the active LRU list
1593
1594	  pgdeactivate (npn)
1595		Amount of pages moved to the inactive LRU list
1596
1597	  pglazyfree (npn)
1598		Amount of pages postponed to be freed under memory pressure
1599
1600	  pglazyfreed (npn)
1601		Amount of reclaimed lazyfree pages
1602
1603	  swpin_zero
1604		Number of pages swapped into memory and filled with zero, where I/O
1605		was optimized out because the page content was detected to be zero
1606		during swapout.
1607
1608	  swpout_zero
1609		Number of zero-filled pages swapped out with I/O skipped due to the
1610		content being detected as zero.
1611
1612	  zswpin
1613		Number of pages moved in to memory from zswap.
1614
1615	  zswpout
1616		Number of pages moved out of memory to zswap.
1617
1618	  zswpwb
1619		Number of pages written from zswap to swap.
1620
1621	  thp_fault_alloc (npn)
1622		Number of transparent hugepages which were allocated to satisfy
1623		a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
		is not set.
1625
1626	  thp_collapse_alloc (npn)
1627		Number of transparent hugepages which were allocated to allow
1628		collapsing an existing range of pages. This counter is not
1629		present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1630
	  thp_swpout (npn)
		Number of transparent hugepages which were swapped out in
		one piece without splitting.
1634
	  thp_swpout_fallback (npn)
		Number of transparent hugepages which were split before
		swapout, usually because of failure to allocate contiguous
		swap space for the huge page.
1639
1640	  numa_pages_migrated (npn)
1641		Number of pages migrated by NUMA balancing.
1642
1643	  numa_pte_updates (npn)
1644		Number of pages whose page table entries are modified by
1645		NUMA balancing to produce NUMA hinting faults on access.
1646
1647	  numa_hint_faults (npn)
1648		Number of NUMA hinting faults.
1649
1650	  pgdemote_kswapd
1651		Number of pages demoted by kswapd.
1652
1653	  pgdemote_direct
1654		Number of pages demoted directly.
1655
1656	  pgdemote_khugepaged
1657		Number of pages demoted by khugepaged.
1658
1659	  hugetlb
1660		Amount of memory used by hugetlb pages. This metric only shows
1661		up if hugetlb usage is accounted for in memory.current (i.e.
1662		cgroup is mounted with the memory_hugetlb_accounting option).
1663
1664  memory.numa_stat
1665	A read-only nested-keyed file which exists on non-root cgroups.
1666
1667	This breaks down the cgroup's memory footprint into different
1668	types of memory, type-specific details, and other information
1669	per node on the state of the memory management system.
1670
	This is useful for providing visibility into the NUMA locality
	information within a memcg since the pages are allowed to be
	allocated from any physical node.  One of the use cases is
	evaluating application performance by combining this information
	with the application's CPU allocation.
1676
1677	All memory amounts are in bytes.
1678
1679	The output format of memory.numa_stat is::
1680
1681	  type N0=<bytes in node 0> N1=<bytes in node 1> ...
1682
1683	The entries are ordered to be human readable, and new entries
1684	can show up in the middle. Don't rely on items remaining in a
1685	fixed position; use the keys to look up specific values!
1686
	The entries correspond to those in memory.stat.
1688
1689  memory.swap.current
1690	A read-only single value file which exists on non-root
1691	cgroups.
1692
1693	The total amount of swap currently being used by the cgroup
1694	and its descendants.
1695
1696  memory.swap.high
1697	A read-write single value file which exists on non-root
1698	cgroups.  The default is "max".
1699
1700	Swap usage throttle limit.  If a cgroup's swap usage exceeds
1701	this limit, all its further allocations will be throttled to
1702	allow userspace to implement custom out-of-memory procedures.
1703
1704	This limit marks a point of no return for the cgroup. It is NOT
1705	designed to manage the amount of swapping a workload does
1706	during regular operation. Compare to memory.swap.max, which
1707	prohibits swapping past a set amount, but lets the cgroup
1708	continue unimpeded as long as other memory can be reclaimed.
1709
1710	Healthy workloads are not expected to reach this limit.
1711
1712  memory.swap.peak
1713	A read-write single value file which exists on non-root cgroups.
1714
1715	The max swap usage recorded for the cgroup and its descendants since
1716	the creation of the cgroup or the most recent reset for that FD.
1717
	A write of any non-empty string to this file resets it to the
	current swap usage for subsequent reads through the same
	file descriptor.
1721
1722  memory.swap.max
1723	A read-write single value file which exists on non-root
1724	cgroups.  The default is "max".
1725
1726	Swap usage hard limit.  If a cgroup's swap usage reaches this
1727	limit, anonymous memory of the cgroup will not be swapped out.
1728
1729  memory.swap.events
1730	A read-only flat-keyed file which exists on non-root cgroups.
1731	The following entries are defined.  Unless specified
1732	otherwise, a value change in this file generates a file
1733	modified event.
1734
1735	  high
1736		The number of times the cgroup's swap usage was over
1737		the high threshold.
1738
1739	  max
1740		The number of times the cgroup's swap usage was about
1741		to go over the max boundary and swap allocation
1742		failed.
1743
1744	  fail
1745		The number of times swap allocation failed either
1746		because of running out of swap system-wide or max
1747		limit.
1748
	When memory.swap.max is reduced under the current usage, the
	existing swap entries are reclaimed gradually and the swap
	usage may stay higher than the limit for an extended period of
	time.  This reduces the impact on the workload and memory
	management.
1753
1754  memory.zswap.current
1755	A read-only single value file which exists on non-root
1756	cgroups.
1757
1758	The total amount of memory consumed by the zswap compression
1759	backend.
1760
1761  memory.zswap.max
1762	A read-write single value file which exists on non-root
1763	cgroups.  The default is "max".
1764
1765	Zswap usage hard limit. If a cgroup's zswap pool reaches this
1766	limit, it will refuse to take any more stores before existing
1767	entries fault back in or are written out to disk.
1768
1769  memory.zswap.writeback
1770	A read-write single value file. The default value is "1".
1771	Note that this setting is hierarchical, i.e. the writeback would be
1772	implicitly disabled for child cgroups if the upper hierarchy
1773	does so.
1774
	When this is set to 0, all swapping attempts to swapping devices
	are disabled. This includes both zswap writebacks, and swapping
	due to zswap store failures. If the zswap store failures are
	recurring (e.g. if the pages are incompressible), users can
	observe reclaim inefficiency after disabling writeback (because
	the same pages might be rejected again and again).
1781
1782	Note that this is subtly different from setting memory.swap.max to
1783	0, as it still allows for pages to be written to the zswap pool.
1784	This setting has no effect if zswap is disabled, and swapping
1785	is allowed unless memory.swap.max is set to 0.
1786
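	For example, writeback (and swap on zswap store failure) can be
	disabled for a cgroup and its descendants with (a minimal
	sketch)::

	  echo 0 > memory.zswap.writeback
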
1787  memory.pressure
1788	A read-only nested-keyed file.
1789
1790	Shows pressure stall information for memory. See
1791	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1792
1793
1794Usage Guidelines
1795~~~~~~~~~~~~~~~~
1796
1797"memory.high" is the main mechanism to control memory usage.
Over-committing on the high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.
1801
1802Because breach of the high limit doesn't trigger the OOM killer but
1803throttles the offending cgroup, a management agent has ample
1804opportunities to monitor and take appropriate actions such as granting
1805more memory or terminating the workload.
1806
Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
the network to a file can use all available memory but can also operate
just as well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" interface file described above and
:ref:`Documentation/accounting/psi.rst <psi>` provide such a measure.
1816
1817
1818Memory Ownership
1819~~~~~~~~~~~~~~~~
1820
1821A memory area is charged to the cgroup which instantiated it and stays
1822charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usage that it
1824instantiated while in the previous cgroup to the new cgroup.
1825
1826A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is indeterminate; however,
1828over time, the memory area is likely to end up in a cgroup which has
1829enough memory allowance to avoid high reclaim pressure.
1830
1831If a cgroup sweeps a considerable amount of memory which is expected
1832to be accessed repeatedly by other cgroups, it may make sense to use
1833POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1834belonging to the affected files to ensure correct memory ownership.
1835
1836
1837IO
1838--
1839
The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution.  Weight based distribution is implemented by the
IO cost model based controller; see the "io.cost.qos" and
"io.cost.model" interface files below.
1845
1846
1847IO Interface Files
1848~~~~~~~~~~~~~~~~~~
1849
1850  io.stat
1851	A read-only nested-keyed file.
1852
1853	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1854	The following nested keys are defined.
1855
1856	  ======	=====================
1857	  rbytes	Bytes read
1858	  wbytes	Bytes written
1859	  rios		Number of read IOs
1860	  wios		Number of write IOs
1861	  dbytes	Bytes discarded
1862	  dios		Number of discard IOs
1863	  ======	=====================
1864
1865	An example read output follows::
1866
1867	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1868	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1869
1870  io.cost.qos
1871	A read-write nested-keyed file which exists only on the root
1872	cgroup.
1873
1874	This file configures the Quality of Service of the IO cost
1875	model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1876	currently implements "io.weight" proportional control.  Lines
1877	are keyed by $MAJ:$MIN device numbers and not ordered.  The
1878	line for a given device is populated on the first write for
1879	the device on "io.cost.qos" or "io.cost.model".  The following
1880	nested keys are defined.
1881
1882	  ======	=====================================
1883	  enable	Weight-based control enable
1884	  ctrl		"auto" or "user"
1885	  rpct		Read latency percentile    [0, 100]
1886	  rlat		Read latency threshold
1887	  wpct		Write latency percentile   [0, 100]
1888	  wlat		Write latency threshold
1889	  min		Minimum scaling percentage [1, 10000]
1890	  max		Maximum scaling percentage [1, 10000]
1891	  ======	=====================================
1892
1893	The controller is disabled by default and can be enabled by
1894	setting "enable" to 1.  "rpct" and "wpct" parameters default
1895	to zero and the controller uses internal device saturation
1896	state to adjust the overall IO rate between "min" and "max".
1897
1898	When a better control quality is needed, latency QoS
1899	parameters can be configured.  For example::
1900
	  8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1902
1903	shows that on sdb, the controller is enabled, will consider
1904	the device saturated if the 95th percentile of read completion
1905	latencies is above 75ms or write 150ms, and adjust the overall
1906	IO issue rate between 50% and 150% accordingly.
1907
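	Such a configuration can be applied by writing the matching
	nested keys to the file (a sketch; the device number and the
	values are illustrative)::

	  echo "8:16 enable=1 rpct=95.00 rlat=75000 wpct=95.00 wlat=150000" > io.cost.qos
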
1908	The lower the saturation point, the better the latency QoS at
1909	the cost of aggregate bandwidth.  The narrower the allowed
1910	adjustment range between "min" and "max", the more conformant
1911	to the cost model the IO behavior.  Note that the IO issue
1912	base rate may be far off from 100% and setting "min" and "max"
1913	blindly can lead to a significant loss of device capacity or
1914	control quality.  "min" and "max" are useful for regulating
	devices which show wide temporary behavior changes - e.g. an
	SSD which accepts writes at the line speed for a while and
	then completely stalls for multiple seconds.
1918
1919	When "ctrl" is "auto", the parameters are controlled by the
1920	kernel and may change automatically.  Setting "ctrl" to "user"
1921	or setting any of the percentile and latency parameters puts
1922	it into "user" mode and disables the automatic changes.  The
1923	automatic mode can be restored by setting "ctrl" to "auto".
1924
1925  io.cost.model
1926	A read-write nested-keyed file which exists only on the root
1927	cgroup.
1928
1929	This file configures the cost model of the IO cost model based
1930	controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1931	implements "io.weight" proportional control.  Lines are keyed
1932	by $MAJ:$MIN device numbers and not ordered.  The line for a
1933	given device is populated on the first write for the device on
1934	"io.cost.qos" or "io.cost.model".  The following nested keys
1935	are defined.
1936
1937	  =====		================================
1938	  ctrl		"auto" or "user"
1939	  model		The cost model in use - "linear"
1940	  =====		================================
1941
1942	When "ctrl" is "auto", the kernel may change all parameters
1943	dynamically.  When "ctrl" is set to "user" or any other
	parameters are written to, "ctrl" becomes "user" and the
1945	automatic changes are disabled.
1946
1947	When "model" is "linear", the following model parameters are
1948	defined.
1949
1950	  =============	========================================
1951	  [r|w]bps	The maximum sequential IO throughput
1952	  [r|w]seqiops	The maximum 4k sequential IOs per second
1953	  [r|w]randiops	The maximum 4k random IOs per second
1954	  =============	========================================
1955
1956	From the above, the builtin linear model determines the base
1957	costs of a sequential and random IO and the cost coefficient
1958	for the IO size.  While simple, this model can cover most
1959	common device classes acceptably.
1960
1961	The IO cost model isn't expected to be accurate in absolute
1962	sense and is scaled to the device behavior dynamically.
1963
1964	If needed, tools/cgroup/iocost_coef_gen.py can be used to
1965	generate device-specific coefficients.
1966
1967  io.weight
1968	A read-write flat-keyed file which exists on non-root cgroups.
1969	The default is "default 100".
1970
1971	The first line is the default weight applied to devices
1972	without specific override.  The rest are overrides keyed by
1973	$MAJ:$MIN device numbers and not ordered.  The weights are in
	the range [1, 10000] and specify the relative amount of IO time
1975	the cgroup can use in relation to its siblings.
1976
1977	The default weight can be updated by writing either "default
1978	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
1979	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1980
1981	An example read output follows::
1982
1983	  default 100
1984	  8:16 200
1985	  8:0 50
1986
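	For example, the default weight and a per-device override can
	be updated as follows (the device number is illustrative)::

	  echo 300 > io.weight
	  echo "8:16 500" > io.weight
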
1987  io.max
1988	A read-write nested-keyed file which exists on non-root
1989	cgroups.
1990
1991	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
1992	device numbers and not ordered.  The following nested keys are
1993	defined.
1994
1995	  =====		==================================
1996	  rbps		Max read bytes per second
1997	  wbps		Max write bytes per second
1998	  riops		Max read IO operations per second
1999	  wiops		Max write IO operations per second
2000	  =====		==================================
2001
2002	When writing, any number of nested key-value pairs can be
2003	specified in any order.  "max" can be specified as the value
2004	to remove a specific limit.  If the same key is specified
2005	multiple times, the outcome is undefined.
2006
2007	BPS and IOPS are measured in each IO direction and IOs are
2008	delayed if limit is reached.  Temporary bursts are allowed.
2009
2010	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
2011
2012	  echo "8:16 rbps=2097152 wiops=120" > io.max
2013
2014	Reading returns the following::
2015
2016	  8:16 rbps=2097152 wbps=max riops=max wiops=120
2017
2018	Write IOPS limit can be removed by writing the following::
2019
2020	  echo "8:16 wiops=max" > io.max
2021
2022	Reading now returns the following::
2023
2024	  8:16 rbps=2097152 wbps=max riops=max wiops=max
2025
2026  io.pressure
2027	A read-only nested-keyed file.
2028
2029	Shows pressure stall information for IO. See
2030	:ref:`Documentation/accounting/psi.rst <psi>` for details.
2031
2032
2033Writeback
2034~~~~~~~~~
2035
2036Page cache is dirtied through buffered writes and shared mmaps and
2037written asynchronously to the backing filesystem by the writeback
2038mechanism.  Writeback sits between the memory and IO domains and
2039regulates the proportion of dirty memory by balancing dirtying and
2040write IOs.
2041
2042The io controller, in conjunction with the memory controller,
2043implements control of page cache writeback IOs.  The memory controller
2044defines the memory domain that dirty memory ratio is calculated and
2045maintained for and the io controller defines the io domain which
2046writes out dirty pages for the memory domain.  Both system-wide and
2047per-cgroup dirty memory states are examined and the more restrictive
2048of the two is enforced.
2049
2050cgroup writeback requires explicit support from the underlying
2051filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
2052btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
2053attributed to the root cgroup.
2054
2055There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of writeback, an
2058inode is assigned to a cgroup and all IO requests to write dirty pages
2059from the inode are attributed to that cgroup.
2060
2061As cgroup ownership for memory is tracked per page, there can be pages
2062which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
mechanism constantly keeps track of foreign pages and, if a particular
foreign cgroup becomes the majority over a certain period of time,
switches the ownership of the inode to that cgroup.
2067
2068While this model is enough for most use cases where a given inode is
2069mostly dirtied by a single cgroup even when the main writing cgroup
2070changes over time, use cases where multiple cgroups write to a single
2071inode simultaneously are not supported well.  In such circumstances, a
2072significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
2074doesn't update it until the page is released, even if writeback
2075strictly follows page ownership, multiple cgroups dirtying overlapping
2076areas wouldn't work as expected.  It's recommended to avoid such usage
2077patterns.
2078
2079The sysctl knobs which affect writeback behavior are applied to cgroup
2080writeback as follows.
2081
2082  vm.dirty_background_ratio, vm.dirty_ratio
2083	These ratios apply the same to cgroup writeback with the
2084	amount of available memory capped by limits imposed by the
2085	memory controller and system-wide clean memory.
2086
2087  vm.dirty_background_bytes, vm.dirty_bytes
2088	For cgroup writeback, this is calculated into ratio against
2089	total available memory and applied the same way as
2090	vm.dirty[_background]_ratio.
2091
2092
2093IO Latency
2094~~~~~~~~~~
2095
2096This is a cgroup v2 controller for IO workload protection.  You provide a group
2097with a latency target, and if the average latency exceeds that target the
2098controller will throttle any peers that have a lower latency target than the
2099protected workload.
2100
2101The limits are only applied at the peer level in the hierarchy.  This means that
2102in the diagram below, only groups A, B, and C will influence each other, and
2103groups D and F will influence each other.  Group G will influence nobody::
2104
2105			[root]
2106		/	   |		\
2107		A	   B		C
2108	       /  \        |
2109	      D    F	   G
2110
2111
2112So the ideal way to configure this is to set io.latency in groups A, B, and C.
2113Generally you do not want to set a value lower than the latency your device
2114supports.  Experiment to find the value that works best for your workload.
2115Start at higher than the expected latency for your device and watch the
2116avg_lat value in io.stat for your workload group to get an idea of the
2117latency you see during normal operation.  Use the avg_lat value as a basis for
2118your real setting, setting at 10-15% higher than the value in io.stat.
2119
2120How IO Latency Throttling Works
2121~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2122
io.latency is work conserving: as long as everybody is meeting their latency
target the controller doesn't do anything.  Once a group starts missing its
target it begins throttling any peer group that has a higher target than itself.
This throttling takes two forms:
2127
- Queue depth throttling.  This is the number of outstanding IOs a group is
  allowed to have.  We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.
2131
2132- Artificial delay induction.  There are certain types of IO that cannot be
2133  throttled without possibly adversely affecting higher priority groups.  This
2134  includes swapping and metadata IO.  These types of IO are allowed to occur
2135  normally, however they are "charged" to the originating group.  If the
2136  originating group is being throttled you will see the use_delay and delay
  fields in io.stat increase.  The delay value is the number of microseconds
  being added to any process that runs in this group.  Because this number can
  grow quite large if there is a lot of swapping or metadata IO occurring, we
  limit the individual delay events to 1 second at a time.
2141
2142Once the victimized group starts meeting its latency target again it will start
2143unthrottling any peer groups that were throttled previously.  If the victimized
2144group simply stops doing IO the global counter will unthrottle appropriately.
2145
2146IO Latency Interface Files
2147~~~~~~~~~~~~~~~~~~~~~~~~~~
2148
2149  io.latency
	This takes a similar format to the other controllers.
2151
2152		"MAJOR:MINOR target=<target time in microseconds>"
2153
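	For example, to set a 75ms latency target on device 8:16
	(a sketch; the device and the target are illustrative)::

	  echo "8:16 target=75000" > io.latency
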
2154  io.stat
2155	If the controller is enabled you will see extra stats in io.stat in
2156	addition to the normal ones.
2157
2158	  depth
2159		This is the current queue depth for the group.
2160
2161	  avg_lat
2162		This is an exponential moving average with a decay rate of 1/exp
2163		bound by the sampling interval.  The decay rate interval can be
2164		calculated by multiplying the win value in io.stat by the
2165		corresponding number of samples based on the win value.
2166
2167	  win
2168		The sampling window size in milliseconds.  This is the minimum
2169		duration of time between evaluation events.  Windows only elapse
2170		with IO activity.  Idle periods extend the most recent window.
2171
2172IO Priority
2173~~~~~~~~~~~
2174
2175A single attribute controls the behavior of the I/O priority cgroup policy,
2176namely the io.prio.class attribute. The following values are accepted for
2177that attribute:
2178
2179  no-change
2180	Do not modify the I/O priority class.
2181
2182  promote-to-rt
2183	For requests that have a non-RT I/O priority class, change it into RT.
2184	Also change the priority level of these requests to 4. Do not modify
2185	the I/O priority of requests that have priority class RT.
2186
2187  restrict-to-be
2188	For requests that do not have an I/O priority class or that have I/O
2189	priority class RT, change it into BE. Also change the priority level
2190	of these requests to 0. Do not modify the I/O priority class of
2191	requests that have priority class IDLE.
2192
2193  idle
2194	Change the I/O priority class of all requests into IDLE, the lowest
2195	I/O priority class.
2196
2197  none-to-rt
2198	Deprecated. Just an alias for promote-to-rt.
2199
2200The following numerical values are associated with the I/O priority policies:
2201
2202+----------------+---+
2203| no-change      | 0 |
2204+----------------+---+
2205| promote-to-rt  | 1 |
2206+----------------+---+
2207| restrict-to-be | 2 |
2208+----------------+---+
2209| idle           | 3 |
2210+----------------+---+
2211
2212The numerical value that corresponds to each I/O priority class is as follows:
2213
2214+-------------------------------+---+
2215| IOPRIO_CLASS_NONE             | 0 |
2216+-------------------------------+---+
2217| IOPRIO_CLASS_RT (real-time)   | 1 |
2218+-------------------------------+---+
2219| IOPRIO_CLASS_BE (best effort) | 2 |
2220+-------------------------------+---+
2221| IOPRIO_CLASS_IDLE             | 3 |
2222+-------------------------------+---+
2223
2224The algorithm to set the I/O priority class for a request is as follows:
2225
2226- If I/O priority class policy is promote-to-rt, change the request I/O
2227  priority class to IOPRIO_CLASS_RT and change the request I/O priority
2228  level to 4.
2229- If I/O priority class policy is not promote-to-rt, translate the I/O priority
2230  class policy into a number, then change the request I/O priority class
2231  into the maximum of the I/O priority class policy number and the numerical
2232  I/O priority class.
2233
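As an illustration of the second rule, an RT request (numerical value 1)
under the restrict-to-be policy (numerical value 2) is changed to BE
because max(2, 1) = 2, while an IDLE request (numerical value 3) is left
alone.  The policy itself is configured by writing to the attribute
(a minimal sketch)::

  echo restrict-to-be > io.prio.class
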
2234PID
2235---
2236
2237The process number controller is used to allow a cgroup to stop any
2238new tasks from being fork()'d or clone()'d after a specified limit is
2239reached.
2240
2241The number of tasks in a cgroup can be exhausted in ways which other
2242controllers cannot prevent, thus warranting its own controller.  For
2243example, a fork bomb is likely to exhaust the number of tasks before
2244hitting memory restrictions.
2245
2246Note that PIDs used in this controller refer to TIDs, process IDs as
2247used by the kernel.
2248
2249
2250PID Interface Files
2251~~~~~~~~~~~~~~~~~~~
2252
2253  pids.max
2254	A read-write single value file which exists on non-root
2255	cgroups.  The default is "max".
2256
2257	Hard limit of number of processes.
2258
2259  pids.current
2260	A read-only single value file which exists on non-root cgroups.
2261
2262	The number of processes currently in the cgroup and its
2263	descendants.
2264
2265  pids.peak
2266	A read-only single value file which exists on non-root cgroups.
2267
2268	The maximum value that the number of processes in the cgroup and its
2269	descendants has ever reached.
2270
2271  pids.events
2272	A read-only flat-keyed file which exists on non-root cgroups. Unless
2273	specified otherwise, a value change in this file generates a file
2274	modified event. The following entries are defined.
2275
2276	  max
2277		The number of times the cgroup's total number of processes hit the pids.max
2278		limit (see also pids_localevents).
2279
2280  pids.events.local
2281	Similar to pids.events but the fields in the file are local
2282	to the cgroup i.e. not hierarchical. The file modified event
2283	generated on this file reflects only the local events.
2284
2285Organisational operations are not blocked by cgroup policies, so it is
2286possible to have pids.current > pids.max.  This can be done by either
2287setting the limit to be smaller than pids.current, or attaching enough
2288processes to the cgroup such that pids.current is larger than
2289pids.max.  However, it is not possible to violate a cgroup PID policy
2290through fork() or clone(). These will return -EAGAIN if the creation
2291of a new process would cause a cgroup policy to be violated.
2292
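For example, a fork bomb can be contained by capping the number of
tasks before it starts (a minimal sketch; the limit is arbitrary)::

  echo 16 > pids.max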
2293
2294Cpuset
2295------
2296
2297The "cpuset" controller provides a mechanism for constraining
2298the CPU and memory node placement of tasks to only the resources
2299specified in the cpuset interface files in a task's current cgroup.
2300This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
2302memory placement to reduce cross-node memory access and contention
2303can improve overall system performance.
2304
The "cpuset" controller is hierarchical.  That means a child cpuset
cannot use CPUs or memory nodes not allowed in its parent.
2307
2308
2309Cpuset Interface Files
2310~~~~~~~~~~~~~~~~~~~~~~
2311
2312  cpuset.cpus
2313	A read-write multiple values file which exists on non-root
2314	cpuset-enabled cgroups.
2315
2316	It lists the requested CPUs to be used by tasks within this
2317	cgroup.  The actual list of CPUs to be granted, however, is
2318	subjected to constraints imposed by its parent and can differ
2319	from the requested CPUs.
2320
2321	The CPU numbers are comma-separated numbers or ranges.
2322	For example::
2323
2324	  # cat cpuset.cpus
2325	  0-4,6,8-10
2326
2327	An empty value indicates that the cgroup is using the same
2328	setting as the nearest cgroup ancestor with a non-empty
2329	"cpuset.cpus" or all the available CPUs if none is found.
2330
2331	The value of "cpuset.cpus" stays constant until the next update
2332	and won't be affected by any CPU hotplug events.
2333
2334  cpuset.cpus.effective
2335	A read-only multiple values file which exists on all
2336	cpuset-enabled cgroups.
2337
2338	It lists the onlined CPUs that are actually granted to this
2339	cgroup by its parent.  These CPUs are allowed to be used by
2340	tasks within the current cgroup.
2341
2342	If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2343	all the CPUs from the parent cgroup that can be available to
2344	be used by this cgroup.  Otherwise, it should be a subset of
2345	"cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2346	can be granted.  In this case, it will be treated just like an
2347	empty "cpuset.cpus".
2348
2349	Its value will be affected by CPU hotplug events.
2350
2351  cpuset.mems
2352	A read-write multiple values file which exists on non-root
2353	cpuset-enabled cgroups.
2354
2355	It lists the requested memory nodes to be used by tasks within
2356	this cgroup.  The actual list of memory nodes granted, however,
2357	is subjected to constraints imposed by its parent and can differ
2358	from the requested memory nodes.
2359
2360	The memory node numbers are comma-separated numbers or ranges.
2361	For example::
2362
2363	  # cat cpuset.mems
2364	  0-1,3
2365
2366	An empty value indicates that the cgroup is using the same
2367	setting as the nearest cgroup ancestor with a non-empty
2368	"cpuset.mems" or all the available memory nodes if none
2369	is found.
2370
2371	The value of "cpuset.mems" stays constant until the next update
2372	and won't be affected by any memory nodes hotplug events.
2373
2374	Setting a non-empty value to "cpuset.mems" causes memory of
2375	tasks within the cgroup to be migrated to the designated nodes if
2376	they are currently using memory outside of the designated nodes.
2377
2378	There is a cost for this memory migration.  The migration
2379	may not be complete and some memory pages may be left behind.
2380	So it is recommended that "cpuset.mems" should be set properly
2381	before spawning new tasks into the cpuset.  Even if there is
2382	a need to change "cpuset.mems" with active tasks, it shouldn't
2383	be done frequently.
2384
2385  cpuset.mems.effective
2386	A read-only multiple values file which exists on all
2387	cpuset-enabled cgroups.
2388
2389	It lists the onlined memory nodes that are actually granted to
2390	this cgroup by its parent. These memory nodes are allowed to
2391	be used by tasks within the current cgroup.
2392
2393	If "cpuset.mems" is empty, it shows all the memory nodes from the
2394	parent cgroup that will be available to be used by this cgroup.
2395	Otherwise, it should be a subset of "cpuset.mems" unless none of
2396	the memory nodes listed in "cpuset.mems" can be granted.  In this
2397	case, it will be treated just like an empty "cpuset.mems".
2398
2399	Its value will be affected by memory nodes hotplug events.
2400
2401  cpuset.cpus.exclusive
2402	A read-write multiple values file which exists on non-root
2403	cpuset-enabled cgroups.
2404
2405	It lists all the exclusive CPUs that are allowed to be used
2406	to create a new cpuset partition.  Its value is not used
2407	unless the cgroup becomes a valid partition root.  See the
2408	"cpuset.cpus.partition" section below for a description of what
2409	a cpuset partition is.
2410
2411	When the cgroup becomes a partition root, the actual exclusive
2412	CPUs that are allocated to that partition are listed in
2413	"cpuset.cpus.exclusive.effective" which may be different
2414	from "cpuset.cpus.exclusive".  If "cpuset.cpus.exclusive"
2415	has previously been set, "cpuset.cpus.exclusive.effective"
2416	is always a subset of it.
2417
2418	Users can manually set it to a value that is different from
2419	"cpuset.cpus".	One constraint in setting it is that the list of
2420	CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
2421	of its sibling.  If "cpuset.cpus.exclusive" of a sibling cgroup
2422	isn't set, its "cpuset.cpus" value, if set, cannot be a subset
2423	of it to leave at least one CPU available when the exclusive
2424	CPUs are taken away.
2425
2426	For a parent cgroup, any one of its exclusive CPUs can only
2427	be distributed to at most one of its child cgroups.  Having an
2428	exclusive CPU appearing in two or more of its child cgroups is
2429	not allowed (the exclusivity rule).  A value that violates the
2430	exclusivity rule will be rejected with a write error.
2431
2432	The root cgroup is a partition root and all its available CPUs
2433	are in its exclusive CPU set.
2434
2435  cpuset.cpus.exclusive.effective
2436	A read-only multiple values file which exists on all non-root
2437	cpuset-enabled cgroups.
2438
2439	This file shows the effective set of exclusive CPUs that
2440	can be used to create a partition root.  The content
2441	of this file will always be a subset of its parent's
2442	"cpuset.cpus.exclusive.effective" if its parent is not the root
2443	cgroup.  It will also be a subset of "cpuset.cpus.exclusive"
2444	if it is set.  If "cpuset.cpus.exclusive" is not set, it is
2445	treated to have an implicit value of "cpuset.cpus" in the
2446	formation of local partition.
2447
2448  cpuset.cpus.isolated
2449	A read-only and root cgroup only multiple values file.
2450
2451	This file shows the set of all isolated CPUs used in existing
2452	isolated partitions. It will be empty if no isolated partition
2453	is created.
2454
2455  cpuset.cpus.partition
2456	A read-write single value file which exists on non-root
2457	cpuset-enabled cgroups.  This flag is owned by the parent cgroup
2458	and is not delegatable.
2459
2460	It accepts only the following input values when written to.
2461
2462	  ==========	=====================================
2463	  "member"	Non-root member of a partition
2464	  "root"	Partition root
2465	  "isolated"	Partition root without load balancing
2466	  ==========	=====================================
2467
2468	A cpuset partition is a collection of cpuset-enabled cgroups with
2469	a partition root at the top of the hierarchy and its descendants
2470	except those that are separate partition roots themselves and
2471	their descendants.  A partition has exclusive access to the
2472	set of exclusive CPUs allocated to it.	Other cgroups outside
2473	of that partition cannot use any CPUs in that set.
2474
2475	There are two types of partitions - local and remote.  A local
2476	partition is one whose parent cgroup is also a valid partition
2477	root.  A remote partition is one whose parent cgroup is not a
2478	valid partition root itself.  Writing to "cpuset.cpus.exclusive"
2479	is optional for the creation of a local partition as its
2480	"cpuset.cpus.exclusive" file will assume an implicit value that
2481	is the same as "cpuset.cpus" if it is not set.	Writing the
2482	proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2483	before the target partition root is mandatory for the creation
2484	of a remote partition.
2485
2486	Currently, a remote partition cannot be created under a local
2487	partition.  All the ancestors of a remote partition root except
2488	the root cgroup cannot be a partition root.
2489
2490	The root cgroup is always a partition root and its state cannot
2491	be changed.  All other non-root cgroups start out as "member".
2492
2493	When set to "root", the current cgroup is the root of a new
2494	partition or scheduling domain.  The set of exclusive CPUs is
2495	determined by the value of its "cpuset.cpus.exclusive.effective".
2496
2497	When set to "isolated", the CPUs in that partition will be in
2498	an isolated state without any load balancing from the scheduler
2499	and excluded from the unbound workqueues.  Tasks placed in such
2500	a partition with multiple CPUs should be carefully distributed
2501	and bound to each of the individual CPUs for optimal performance.
2502
2503	A partition root ("root" or "isolated") can be in one of the
2504	two possible states - valid or invalid.  An invalid partition
2505	root is in a degraded state where some state information may
2506	be retained, but behaves more like a "member".
2507
2508	All possible state transitions among "member", "root" and
2509	"isolated" are allowed.
2510
2511	On read, the "cpuset.cpus.partition" file can show the following
2512	values.
2513
2514	  =============================	=====================================
2515	  "member"			Non-root member of a partition
2516	  "root"			Partition root
2517	  "isolated"			Partition root without load balancing
2518	  "root invalid (<reason>)"	Invalid partition root
2519	  "isolated invalid (<reason>)"	Invalid isolated partition root
2520	  =============================	=====================================
2521
2522	In the case of an invalid partition root, a descriptive string on
2523	why the partition is invalid is included within parentheses.
2524
2525	For a local partition root to be valid, the following conditions
2526	must be met.
2527
2528	1) The parent cgroup is a valid partition root.
2529	2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2530	   though it may contain offline CPUs.
2531	3) The "cpuset.cpus.effective" cannot be empty unless there is
2532	   no task associated with this partition.
2533
2534	For a remote partition root to be valid, all the above conditions
2535	except the first one must be met.
2536
2537	External events like hotplug or changes to "cpuset.cpus" or
2538	"cpuset.cpus.exclusive" can cause a valid partition root to
2539	become invalid and vice versa.	Note that a task cannot be
2540	moved to a cgroup with empty "cpuset.cpus.effective".
2541
2542	A valid non-root parent partition may distribute out all its CPUs
2543	to its child local partitions when there is no task associated
2544	with it.
2545
2546	Care must be taken to change a valid partition root to "member"
2547	as all its child local partitions, if present, will become
2548	invalid causing disruption to tasks running in those child
2549	partitions. These inactivated partitions could be recovered if
2550	their parent is switched back to a partition root with a proper
2551	value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2552
2553	Poll and inotify events are triggered whenever the state of
2554	"cpuset.cpus.partition" changes.  That includes changes caused
2555	by write to "cpuset.cpus.partition", cpu hotplug or other
2556	changes that modify the validity status of the partition.
2557	This will allow user space agents to monitor unexpected changes
2558	to "cpuset.cpus.partition" without the need to do continuous
2559	polling.
2560
2561	A user can pre-configure certain CPUs to an isolated state
2562	with load balancing disabled at boot time with the "isolcpus"
2563	kernel boot command line option.  If those CPUs are to be put
2564	into a partition, they have to be used in an isolated partition.
2565
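	For example, a local isolated partition can be created by
	claiming exclusive CPUs and then switching the partition state
	(a sketch; the CPU numbers are illustrative)::

	  # echo "2-3" > cpuset.cpus
	  # echo "2-3" > cpuset.cpus.exclusive
	  # echo isolated > cpuset.cpus.partition
	  # cat cpuset.cpus.partition
	  isolated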
2566
2567Device controller
2568-----------------
2569
2570Device controller manages access to device files. It includes both
2571creation of new device files (using mknod), and access to the
2572existing device files.
2573
2574Cgroup v2 device controller has no interface files and is implemented
2575on top of cgroup BPF. To control access to device files, a user may
2576create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2577them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2578device file, corresponding BPF programs will be executed, and depending
2579on the return value the attempt will succeed or fail with -EPERM.
2580
2581A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2582bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2583access type (mknod/read/write) and device (type, major and minor numbers).
2584If the program returns 0, the attempt fails with -EPERM, otherwise it
2585succeeds.
2586
2587An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2588tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
2589
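As a sketch, a compiled device program could be pinned and attached with
bpftool (the object file and the paths are hypothetical, and this assumes
the program type is conveyed by the object's section name)::

  bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_prog
  bpftool cgroup attach /sys/fs/cgroup/mygrp device pinned /sys/fs/bpf/dev_prog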
2590
2591RDMA
2592----
2593
2594The "rdma" controller regulates the distribution and accounting of
2595RDMA resources.
2596
2597RDMA Interface Files
2598~~~~~~~~~~~~~~~~~~~~
2599
2600  rdma.max
	A read-write nested-keyed file that exists for all cgroups
	except root.  It describes the currently configured resource
	limits for an RDMA/IB device.
2604
2605	Lines are keyed by device name and are not ordered.
2606	Each line contains space separated resource name and its configured
2607	limit that can be distributed.
2608
2609	The following nested keys are defined.
2610
2611	  ==========	=============================
2612	  hca_handle	Maximum number of HCA Handles
2613	  hca_object 	Maximum number of HCA Objects
2614	  ==========	=============================
2615
	An example for mlx4 and ocrdma devices follows::
2617
2618	  mlx4_0 hca_handle=2 hca_object=2000
2619	  ocrdma1 hca_handle=3 hca_object=max
2620
2621  rdma.current
2622	A read-only file that describes current resource usage.
	It exists for all cgroups except root.
2624
	An example for mlx4 and ocrdma devices follows::
2626
2627	  mlx4_0 hca_handle=1 hca_object=20
2628	  ocrdma1 hca_handle=1 hca_object=23
2629
2630DMEM
2631----
2632
2633The "dmem" controller regulates the distribution and accounting of
2634device memory regions. Because each memory region may have its own page size,
2635which does not have to be equal to the system page size, the units are always bytes.
2636
2637DMEM Interface Files
2638~~~~~~~~~~~~~~~~~~~~
2639
2640  dmem.max, dmem.min, dmem.low
	A read-write nested-keyed file that exists for all cgroups
	except root.  It describes the currently configured resource
	limit for a region.
2644
2645	An example for xe follows::
2646
2647	  drm/0000:03:00.0/vram0 1073741824
2648	  drm/0000:03:00.0/stolen max
2649
2650	The semantics are the same as for the memory cgroup controller, and are
2651	calculated in the same way.
2652
2653  dmem.capacity
2654	A read-only file that describes maximum region capacity.
2655	It only exists on the root cgroup. Not all memory can be
2656	allocated by cgroups, as the kernel reserves some for
2657	internal use.
2658
2659	An example for xe follows::
2660
2661	  drm/0000:03:00.0/vram0 8514437120
2662	  drm/0000:03:00.0/stolen 67108864
2663
2664  dmem.current
2665	A read-only file that describes current resource usage.
	It exists for all cgroups except root.
2667
2668	An example for xe follows::
2669
2670	  drm/0000:03:00.0/vram0 12550144
2671	  drm/0000:03:00.0/stolen 8650752
2672
2673HugeTLB
2674-------
2675
The HugeTLB controller allows limiting HugeTLB usage per control group
and enforces the limit at page fault time.
2678
2679HugeTLB Interface Files
2680~~~~~~~~~~~~~~~~~~~~~~~
2681
2682  hugetlb.<hugepagesize>.current
	Show current usage for "hugepagesize" hugetlb.  It exists for
	all cgroups except root.
2685
2686  hugetlb.<hugepagesize>.max
2687	Set/show the hard limit of "hugepagesize" hugetlb usage.
	The default value is "max".  It exists for all cgroups except root.
2689
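	For example, 2MB huge page usage can be capped with (a sketch;
	the page size and the limit are illustrative)::

	  echo 1G > hugetlb.2MB.max
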
2690  hugetlb.<hugepagesize>.events
2691	A read-only flat-keyed file which exists on non-root cgroups.
2692
2693	  max
		The number of allocation failures due to the HugeTLB limit
2695
2696  hugetlb.<hugepagesize>.events.local
2697	Similar to hugetlb.<hugepagesize>.events but the fields in the file
2698	are local to the cgroup i.e. not hierarchical. The file modified event
2699	generated on this file reflects only the local events.
2700
  hugetlb.<hugepagesize>.numa_stat
	Similar to memory.numa_stat, it shows the NUMA information of
	the hugetlb pages of <hugepagesize> in this cgroup.  Only
	actively in-use hugetlb pages are included.  The per-node
	values are in bytes.
2705
2706Misc
2707----
2708
The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for scalar resources which cannot be abstracted like the
other cgroup resources. The controller is enabled by the
CONFIG_CGROUP_MISC config option.
2713
2714A resource can be added to the controller via enum misc_res_type{} in the
2715include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
in the kernel/cgroup/misc.c file. The provider of the resource must set
its capacity prior to using the resource by calling misc_cg_set_capacity().
2718
2719Once a capacity is set then the resource usage can be updated using charge and
2720uncharge APIs. All of the APIs to interact with misc controller are in
2721include/linux/misc_cgroup.h.
2722
2723Misc Interface Files
2724~~~~~~~~~~~~~~~~~~~~
2725
The miscellaneous controller provides the following interface files. If two
misc resources (res_a and res_b) are registered, then:
2727
2728  misc.capacity
2729        A read-only flat-keyed file shown only in the root cgroup.  It shows
2730        miscellaneous scalar resources available on the platform along with
2731        their quantities::
2732
2733	  $ cat misc.capacity
2734	  res_a 50
2735	  res_b 10
2736
2737  misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its children::
2740
2741	  $ cat misc.current
2742	  res_a 3
2743	  res_b 0
2744
2745  misc.peak
2746        A read-only flat-keyed file shown in all cgroups.  It shows the
2747        historical maximum usage of the resources in the cgroup and its
        children::
2749
2750	  $ cat misc.peak
2751	  res_a 10
2752	  res_b 8
2753
2754  misc.max
        A read-write flat-keyed file shown in the non-root cgroups. Allowed
        maximum usage of the resources in the cgroup and its children::
2757
2758	  $ cat misc.max
2759	  res_a max
2760	  res_b 4
2761
2762	Limit can be set by::
2763
2764	  # echo res_a 1 > misc.max
2765
2766	Limit can be set to max by::
2767
2768	  # echo res_a max > misc.max
2769
2770        Limits can be set higher than the capacity value in the misc.capacity
2771        file.
2772
2773  misc.events
2774	A read-only flat-keyed file which exists on non-root cgroups. The
2775	following entries are defined. Unless specified otherwise, a value
2776	change in this file generates a file modified event. All fields in
2777	this file are hierarchical.
2778
2779	  max
2780		The number of times the cgroup's resource usage was
2781		about to go over the max boundary.
2782
2783  misc.events.local
2784        Similar to misc.events but the fields in the file are local to the
2785        cgroup i.e. not hierarchical. The file modified event generated on
2786        this file reflects only the local events.
2787
2788Migration and Ownership
2789~~~~~~~~~~~~~~~~~~~~~~~
2790
2791A miscellaneous scalar resource is charged to the cgroup in which it is used
2792first, and stays charged to that cgroup until that resource is freed. Migrating
2793a process to a different cgroup does not move the charge to the destination
2794cgroup where the process has moved.
2795
2796Others
2797------
2798
2799perf_event
2800~~~~~~~~~~
2801
2802perf_event controller, if not mounted on a legacy hierarchy, is
2803automatically enabled on the v2 hierarchy so that perf events can
2804always be filtered by cgroup v2 path.  The controller can still be
2805moved to a legacy hierarchy after v2 hierarchy is populated.
2806
2807
2808Non-normative information
2809-------------------------
2810
2811This section contains information that isn't considered to be a part of
2812the stable kernel API and so is subject to change.
2813
2814
2815CPU controller root cgroup process behaviour
2816~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2817
2818When distributing CPU cycles in the root cgroup each thread in this
2819cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup's weight is dependent on the thread's
nice level.
2822
2823For details of this mapping see sched_prio_to_weight array in
2824kernel/sched/core.c file (values from this array should be scaled
2825appropriately so the neutral - nice 0 - value is 100 instead of 1024).
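
As a purely illustrative sketch of that scaling (the helper name is made up,
and only three of the forty sched_prio_to_weight entries are shown; see
kernel/sched/core.c for the full table)::

  /* Sample entries from sched_prio_to_weight[] in kernel/sched/core.c. */
  static const int sample_prio_to_weight[40] = {
          [0]  = 88761,   /* nice -20 */
          [20] = 1024,    /* nice   0 */
          [39] = 15,      /* nice  19 */
  };

  /* Implicit cgroup weight of a root-cgroup thread at a given nice level. */
  static unsigned long nice_to_cgroup_weight(int nice)
  {
          /* Scale so that nice 0 maps to 100 instead of 1024. */
          return sample_prio_to_weight[nice + 20] * 100UL / 1024;
  }

For example, this yields a weight of roughly 8668 for nice -20, 100 for
nice 0 and 1 for nice 19.
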


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.  When
distributing IO resources, this implicit child node is taken into account
as if it were a normal child cgroup of the root cgroup with a weight value
of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  A process running inside a cgroup namespace will have its
"/proc/$PID/cgroup" output restricted to the cgroupns root.  The cgroupns
root is the cgroup of the process at the time of creation of the cgroup
namespace.

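A minimal sketch of creating a cgroup namespace with unshare(2) (error
handling trimmed; the caller needs CAP_SYS_ADMIN in its user namespace)::

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          /* The current cgroup becomes the cgroupns root. */
          if (unshare(CLONE_NEWCGROUP) == -1) {
                  perror("unshare(CLONE_NEWCGROUP)");
                  exit(1);
          }
          /* Now reports "0::/" regardless of the real cgroup path. */
          return system("cat /proc/self/cgroup");
  }
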
Without a cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup, where
a set of cgroups and namespaces is intended to isolate processes, the
"/proc/$PID/cgroup" file may leak system-level information to the
isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data that is
undesirable to expose to the isolated processes.  cgroup namespaces can
be used to restrict visibility of this path.  For example, before
creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in the
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside a cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen when attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.

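A hedged sketch of such an attach (the target PID is hypothetical and error
handling is trimmed)::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int attach_to_cgroupns(pid_t pid)
  {
          char path[64];
          int fd, ret;

          /* Open the target process' cgroup namespace... */
          snprintf(path, sizeof(path), "/proc/%d/ns/cgroup", pid);
          fd = open(path, O_RDONLY);
          if (fd < 0)
                  return -1;

          /* ...and join it; the caller's own cgroup is unchanged. */
          ret = setns(fd, CLONE_NEWCGROUP);
          close(fd);
          return ret;
  }
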

Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root.  The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with restricting
the view of the cgroup hierarchy by a namespace-private cgroupfs mount
provides a properly isolated cgroup view inside the container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue.  This must be called after
        a queue (device) has been associated with the bio and
        before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it is easiest and most
        natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support, which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

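A rough sketch of how the pieces might fit together (the helper name is made
up for illustration, and the exact signatures of these functions differ
between kernel versions)::

  #include <linux/bio.h>
  #include <linux/fs.h>
  #include <linux/writeback.h>

  /* In the filesystem's fill_super(): opt in to cgroup writeback. */
  sb->s_iflags |= SB_I_CGROUPWB;

  /* In a ->writepages() path, once the bio has a device associated: */
  static void example_submit_folio(struct writeback_control *wbc,
                                   struct bio *bio, struct folio *folio)
  {
          /* Bind the bio to the inode's owner cgroup. */
          wbc_init_bio(wbc, bio);
          bio_add_folio_nofail(bio, folio, folio_size(folio), 0);
          /* Account the segment to the owning cgroup. */
          wbc_account_cgroup_owner(wbc, folio, folio_size(folio));
          submit_bio(bio);
  }
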
wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and,
if the writeback session holds shared resources, e.g. a journal
entry, this may lead to priority inversion.  There is no one easy
solution for the problem.  Filesystems can try to work around specific
problem cases by skipping wbc_init_bio() and using
bio_associate_blkg() directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use the "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer, which could be useful in all
hierarchies, could only be used in one.  The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy, and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation, but more
importantly it restricted how cgroup could be used in general and what
controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, the hierarchy
may be collapsed from leaf towards root when viewed from a specific
controller.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers, and those controllers
ended up implementing different ways to ignore such situations; but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the full path by appending the name of the knob, and then
open, read and/or write to it.  This is not only extremely clunky and
unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps, and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and the kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces, and the kernel
inadvertently exposed and got locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty, as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups, and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads, which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with them; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from the cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory.  The memory consumption of workloads varies during
runtime, and that requires users to overcommit.  But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit.  Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.  The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than to kill the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail.  memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.
