.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Miscellaneous cgroup Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting using the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
	Consider cgroup namespaces as delegation boundaries.  This
	option is system wide and can only be set on mount or modified
	through remount from the init namespace.  The mount option is
	ignored on non-init namespace mounts.  Please refer to the
	Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace.  The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups.  This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees.  This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
        Count HugeTLB memory usage towards the cgroup's overall
        memory usage for the memory controller (for the purpose of
        statistics reporting and memory protection).  This is a new
        behavior that could regress existing setups, so it must be
        explicitly opted in with this mount option.

        A few caveats to keep in mind:

        * There is no HugeTLB pool management involved in the memory
          controller.  The pre-allocated pool does not belong to anyone.
          Specifically, when a new HugeTLB folio is allocated to
          the pool, it is not accounted for from the perspective of the
          memory controller.  It is only charged to a cgroup when it is
          actually used (e.g. at page fault time).  Host memory
          overcommit management has to consider this when configuring
          hard limits.  In general, HugeTLB pool management should be
          done via other mechanisms (such as the HugeTLB controller).
        * Failure to charge a HugeTLB folio to the memory controller
          results in SIGBUS.  This could happen even if the HugeTLB pool
          still has pages available (but the cgroup limit is hit and
          reclaim attempt fails).
        * Charging HugeTLB memory towards the memory controller affects
          memory protection and reclaim dynamics.  Any userspace tuning
          (of low and min limits, for example) needs to take this into
          account.
        * HugeTLB pages utilized while this option is not selected
          will not be tracked by the memory controller (even if cgroup
          v2 is remounted later on).

  pids_localevents
        This option restores the v1-like behavior of pids.events:max,
        that is, only local (inside cgroup proper) fork failures are
        counted.  Without this option, pids.events.max represents any
        pids.max enforcement across the cgroup's subtree.
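
As a sketch, several of the options above can be combined on a single
mount; the mount point /sys/fs/cgroup is an assumption, adjust to the
local setup::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup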


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
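
For example, the shell itself can be migrated into the child cgroup
created above ($$ expands to the shell's own PID)::

  # echo $$ > $CGROUP_NAME/cgroup.procs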

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is single direction::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  "cgroup.type" file will report "domain (invalid)" in
these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup::

- cpu
- cpuset
- perf_event
- pids
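
Putting the above together, a minimal sketch of building a threaded
subtree and placing one thread in it; the "domain" and "workers" names
and the $PID/$TID variables are hypothetical, and the cpu controller
is assumed to be available in this cgroup::

  # mkdir domain
  # echo $PID > domain/cgroup.procs
  # mkdir domain/workers
  # echo threaded > domain/workers/cgroup.type
  # echo "+cpu" > domain/cgroup.subtree_control
  # echo $TID > domain/workers/cgroup.threads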
411
412[Un]populated Notification
413--------------------------
414
415Each non-root cgroup has a "cgroup.events" file which contains
416"populated" field indicating whether the cgroup's sub-hierarchy has
417live processes in it.  Its value is 0 if there is no live process in
418the cgroup and its descendants; otherwise, 1.  poll and [id]notify
419events are triggered when the value changes.  This can be used, for
420example, to start a clean-up operation after all processes of a given
421sub-hierarchy have exited.  The populated state updates and
422notifications are recursive.  Consider the following sub-hierarchy
423where the numbers in the parentheses represent the numbers of processes
424in each cgroup::
425
426  A(4) - B(0) - C(1)
427              \ D(0)
428
429A, B and C's "populated" fields would be 1 while D's 0.  After the one
430process in C exits, B and C's "populated" fields would flip to "0" and
431file modified events will be generated on the "cgroup.events" files of
432both cgroups.
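
As a sketch, such a notification can be consumed from the shell with
any inotify-capable tool; this assumes the inotifywait utility from
inotify-tools is installed::

  # inotifywait -e modify B/cgroup.events
  # grep populated B/cgroup.events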


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
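
A sketch of the resulting pattern; the "leaf" name is hypothetical and
the loop moves the cgroup's own processes into the leaf child, one PID
per write, before controllers are enabled in the now-internal parent::

  # mkdir leaf
  # while read pid; do echo $pid > leaf/cgroup.procs; done < cgroup.procs
  # echo "+memory" > cgroup.subtree_control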


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types.  Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
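
A sketch of the first delegation method for a hypothetical user u0 and
cgroup "deleg"; ownership of the directory and the listed files is
handed over while the resource control interface files stay
root-owned::

  # chown u0 deleg
  # chown u0 deleg/cgroup.procs
  # chown u0 deleg/cgroup.threads
  # chown u0 deleg/cgroup.subtree_control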


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lowercase letters and '_'s
but never begins with an '_', so it can be used as the prefix
character for collision avoidance.  Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
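
For instance, if two active siblings are configured with weights 100
and 300, they receive 100/400 = 25% and 300/400 = 75% of the parent's
CPU cycles respectively (the cgroup names A and B are hypothetical)::

  # echo 100 > A/cpu.weight
  # echo 300 > B/cpu.weight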


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.
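
A sketch limiting read bandwidth on a hypothetical device 8:16 to
2 MiB/s (see the IO controller section for the full io.max syntax)::

  # echo "8:16 rbps=2097152" > io.max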

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.
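
A sketch protecting roughly a workload's working set, here assumed to
be about 512M (see the memory controller section for the exact
semantics)::

  # echo 512M > memory.low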


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which means
no resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  two fractional digits - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
	A read-write single value file which exists on non-root
	cgroups.

	When read, it indicates the current type of the cgroup, which
	can be one of the following values.

	- "domain" : A normal valid domain cgroup.

	- "domain threaded" : A threaded domain cgroup which is
	  serving as the root of a threaded subtree.

	- "domain invalid" : A cgroup which is in an invalid state.
	  It can't be populated or have controllers enabled.  It may
	  be allowed to become a threaded cgroup.

	- "threaded" : A threaded cgroup which is a member of a
	  threaded subtree.

	A cgroup can be turned into a threaded cgroup by writing
	"threaded" to this file.

  cgroup.procs
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the PIDs of all processes which belong to
	the cgroup one-per-line.  The PIDs are not ordered and the
	same PID may show up more than once if the process got moved
	to another cgroup and then back or the PID got recycled while
	reading.

	A PID can be written to migrate the process associated with
	the PID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.procs" file.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

	In a threaded cgroup, reading this file fails with EOPNOTSUPP
	as all the processes belong to the thread root.  Writing is
	supported and moves every thread of the process to the cgroup.

  cgroup.threads
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the TIDs of all threads which belong to
	the cgroup one-per-line.  The TIDs are not ordered and the
	same TID may show up more than once if the thread got moved to
	another cgroup and then back or the TID got recycled while
	reading.

	A TID can be written to migrate the thread associated with the
	TID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.threads" file.

	- The cgroup that the thread is currently in must be in the
	  same resource domain as the destination cgroup.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

  cgroup.controllers
	A read-only space separated values file which exists on all
	cgroups.

	It shows a space separated list of all controllers available
	to the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
	A read-write space separated values file which exists on all
	cgroups.  Starts out empty.

	When read, it shows a space separated list of the controllers
	which are enabled to control resource distribution from the
	cgroup to its children.

	A space separated list of controllers prefixed with '+' or '-'
	can be written to enable or disable controllers.  A controller
	name prefixed with '+' enables the controller and '-' disables
	it.  If a controller appears more than once on the list, the
	last one is effective.  When multiple enable and disable
	operations are specified, either all succeed or all fail.

  cgroup.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  populated
		1 if the cgroup or its descendants contains any live
		processes; otherwise, 0.
	  frozen
		1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
	A read-write single value file.  The default is "max".

	Maximum allowed number of descendant cgroups.
	If the actual number of descendants is equal or larger,
	an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
	A read-write single value file.  The default is "max".

	Maximum allowed descent depth below the current cgroup.
	If the actual descent depth is equal or larger,
	an attempt to create a new child cgroup will fail.

  cgroup.stat
	A read-only flat-keyed file with the following entries:

	  nr_descendants
		Total number of visible descendant cgroups.

	  nr_dying_descendants
		Total number of dying descendant cgroups.  A cgroup
		becomes dying after being deleted by a user.  The
		cgroup will remain in the dying state for some
		undefined time (which can depend on system load)
		before being completely destroyed.

		A process can't enter a dying cgroup under any
		circumstances, and a dying cgroup can't be revived.

		A dying cgroup can consume system resources not
		exceeding the limits which were active at the moment
		of cgroup deletion.

	  nr_subsys_<cgroup_subsys>
		Total number of live cgroup subsystems (e.g. memory
		cgroup) at and beneath the current cgroup.

	  nr_dying_subsys_<cgroup_subsys>
		Total number of dying cgroup subsystems (e.g. memory
		cgroup) at and beneath the current cgroup.

  cgroup.freeze
	A read-write single value file which exists on non-root cgroups.
	Allowed values are "0" and "1".  The default is "0".

	Writing "1" to the file causes freezing of the cgroup and all
	descendant cgroups.  This means that all belonging processes
	will be stopped and will not run until the cgroup is explicitly
	unfrozen.  Freezing of the cgroup may take some time; when this
	action is completed, the "frozen" value in the cgroup.events
	control file will be updated to "1" and the corresponding
	notification will be issued.

	A cgroup can be frozen either by its own settings, or by
	settings of any ancestor cgroups.  If any of the ancestor
	cgroups is frozen, the cgroup will remain frozen.

	Processes in the frozen cgroup can be killed by a fatal signal.
	They also can enter and leave a frozen cgroup: either by an
	explicit move by a user, or if freezing of the cgroup races
	with fork().  If a process is moved to a frozen cgroup, it
	stops.  If a process is moved out of a frozen cgroup, it
	becomes running.

	Frozen status of a cgroup doesn't affect any cgroup tree
	operations: it's possible to delete a frozen (and empty)
	cgroup, as well as create new sub-cgroups.
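
	A sketch of freezing a cgroup and waiting for the state change
	to be reflected in cgroup.events; the polling loop stands in
	for a real inotify-based waiter::

	  # echo 1 > cgroup.freeze
	  # until grep -q "frozen 1" cgroup.events; do sleep 0.1; done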

  cgroup.kill
	A write-only single value file which exists in non-root cgroups.
	The only allowed value is "1".

	Writing "1" to the file causes the cgroup and all descendant
	cgroups to be killed.  This means that all processes located in
	the affected cgroup tree will be killed via SIGKILL.

	Killing a cgroup tree will deal with concurrent forks
	appropriately and is protected against migrations.

	In a threaded cgroup, writing this file fails with EOPNOTSUPP
	as killing cgroups is a process directed operation, i.e. it
	affects the whole thread-group.
  cgroup.pressure
	A read-write single value file whose allowed values are "0"
	and "1".  The default is "1".

	Writing "0" to the file will disable the cgroup PSI accounting.
	Writing "1" to the file will re-enable the cgroup PSI accounting.

	This control attribute is not hierarchical, so disabling or
	enabling PSI accounting in a cgroup does not affect PSI
	accounting in its descendants and does not require enablement
	to be passed down from the root via ancestors.

	The reason this control attribute exists is that PSI accounts
	stalls for each cgroup separately and aggregates them at each
	level of the hierarchy.  This may cause non-negligible overhead
	for some workloads when deep in the hierarchy, in which case
	this control attribute can be used to disable PSI accounting
	in the non-leaf cgroups.

  irq.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for IRQ/SOFTIRQ.  See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal base and it does not account for the frequency at which tasks
are executed.  The (optional) utilization clamping support allows
hinting the schedutil cpufreq governor about the minimum desired
frequency which should always be provided by a CPU, as well as the
maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 cpu controller doesn't yet fully support the control
of realtime processes.  For a kernel built with the
CONFIG_RT_GROUP_SCHED option enabled for group scheduling of realtime
processes, the cpu controller can only be enabled when all RT
processes are in the root cgroup.  Be aware that system management
software may already have placed RT processes into non-root cgroups
during the system boot process, and these processes may need to be
moved to the root cgroup before the cpu controller can be enabled with
a CONFIG_RT_GROUP_SCHED enabled kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply
and some of the interface files either affect realtime processes or
account for them.  See the following section for details.  Only the
cpu controller is affected by CONFIG_RT_GROUP_SCHED.  Other
controllers can be used for the resource control of realtime
processes irrespective of CONFIG_RT_GROUP_SCHED.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
	A read-only flat-keyed file.
	This file exists whether the controller is enabled or not.

	It always reports the following three stats:

	- usage_usec
	- user_usec
	- system_usec

	and the following five when the controller is enabled:

	- nr_periods
	- nr_throttled
	- throttled_usec
	- nr_bursts
	- burst_usec

  cpu.weight
	A read-write single value file which exists on non-root
	cgroups.  The default is "100".

	For non idle groups (cpu.idle = 0), the weight is in the
	range [1, 10000].

	If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
	then the weight will show as a 0.

  cpu.weight.nice
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The nice value is in the range [-20, 19].

	This interface file is an alternative interface for
	"cpu.weight" and allows reading and setting weight using the
	same values used by nice(2).  Because the range is smaller and
	granularity is coarser for the nice values, the read value is
	the closest approximation of the current weight.

  cpu.max
	A read-write two value file which exists on non-root cgroups.
	The default is "max 100000".

	The maximum bandwidth limit.  It's in the following format::

	  $MAX $PERIOD

	which indicates that the group may consume up to $MAX in each
	$PERIOD duration.  "max" for $MAX indicates no limit.  If only
	one number is written, $MAX is updated.
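
	For example, to limit the cgroup to half a CPU, allow 50ms of
	runtime for every 100ms period (values in microseconds)::

	  # echo "50000 100000" > cpu.max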

  cpu.max.burst
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The burst in the range [0, $MAX].

  cpu.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for CPU.  See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
	A read-write single value file which exists on non-root
	cgroups.  The default is "0", i.e. no utilization boosting.

	The requested minimum utilization (protection) as a percentage
	rational number, e.g. 12.34 for 12.34%.

	This interface allows reading and setting minimum utilization
	clamp values similar to sched_setattr(2).  This minimum
	utilization value is used to clamp the task specific minimum
	utilization clamp.

	The requested minimum utilization (protection) is always capped
	by the current value for the maximum utilization (limit), i.e.
	`cpu.uclamp.max`.

  cpu.uclamp.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max", i.e. no utilization capping.

	The requested maximum utilization (limit) as a percentage
	rational number, e.g. 98.76 for 98.76%.

	This interface allows reading and setting maximum utilization
	clamp values similar to sched_setattr(2).  This maximum
	utilization value is used to clamp the task specific maximum
	utilization clamp.
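
	For example, to request at least 20% and at most 80% of CPU
	capacity for the tasks of a cgroup (values as percentage
	rational numbers)::

	  # echo 20.00 > cpu.uclamp.min
	  # echo 80.00 > cpu.uclamp.max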

  cpu.idle
	A read-write single value file which exists on non-root cgroups.
	The default is 0.

	This is the cgroup analog of the per-task SCHED_IDLE sched
	policy.  Setting this value to a 1 will make the scheduling
	policy of the cgroup SCHED_IDLE.  The threads inside the cgroup
	will retain their own relative priorities, but the cgroup
	itself will be treated as very low priority relative to its
	peers.


Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of memory currently being used by the cgroup
	and its descendants.

  memory.min
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Hard memory protection.  If the memory usage of a cgroup
	is within its effective min boundary, the cgroup's memory
	won't be reclaimed under any conditions.  If there is no
	unprotected reclaimable memory available, the OOM killer
	is invoked.  Above the effective min boundary (or
	effective low boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	Effective min boundary is limited by memory.min values of
	all ancestor cgroups.  If there is memory.min overcommitment
	(child cgroup or cgroups are requiring more protected memory
	than the parent will allow), then each child cgroup will get
	the part of the parent's protection proportional to its
	actual memory usage below memory.min.

	Putting more memory than generally available under this
	protection is discouraged and may lead to constant OOMs.

	If a memory cgroup is not populated with processes,
	its memory.min is ignored.

  memory.low
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Best-effort memory protection.  If the memory usage of a
	cgroup is within its effective low boundary, the cgroup's
	memory won't be reclaimed unless there is no reclaimable
	memory available in unprotected cgroups.
	Above the effective low boundary (or
	effective min boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	Effective low boundary is limited by memory.low values of
	all ancestor cgroups.  If there is memory.low overcommitment
	(child cgroup or cgroups are requiring more protected memory
	than the parent will allow), then each child cgroup will get
	the part of the parent's protection proportional to its
	actual memory usage below memory.low.

	Putting more memory than generally available under this
	protection is discouraged.

  memory.high
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage throttle limit.  If a cgroup's usage goes
	over the high boundary, the processes of the cgroup are
	throttled and put under heavy reclaim pressure.

	Going over the high limit never invokes the OOM killer and
	under extreme conditions the limit may be breached.  The high
	limit should be used in scenarios where an external process
	monitors the limited cgroup to alleviate heavy reclaim
	pressure.

  memory.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage hard limit.  This is the main mechanism to limit
	memory usage of a cgroup.  If a cgroup's memory usage reaches
	this limit and can't be reduced, the OOM killer is invoked in
	the cgroup.  Under certain circumstances, the usage may go
	over the limit temporarily.

	In the default configuration, regular 0-order allocations
	always succeed unless the OOM killer chooses the current task
	as a victim.

	Some kinds of allocations don't invoke the OOM killer.
	The caller may retry them differently, return -ENOMEM to
	userspace, or silently ignore the failure in cases like disk
	readahead.
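
	For example, to cap a cgroup at roughly one gigabyte (memparse
	suffixes like G are accepted; the value may be rounded to a
	PAGE_SIZE multiple when read back)::

	  # echo 1G > memory.max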

  memory.reclaim
	A write-only nested-keyed file which exists for all cgroups.

	This is a simple interface to trigger memory reclaim in the
	target cgroup.

	Example::

	  echo "1G" > memory.reclaim

	Please note that the kernel can over or under reclaim from
	the target cgroup.  If fewer bytes are reclaimed than the
	specified amount, -EAGAIN is returned.

	Please note that the proactive reclaim (triggered by this
	interface) is not meant to indicate memory pressure on the
	memory cgroup.  Therefore socket memory balancing triggered by
	the memory reclaim normally is not exercised in this case.
	This means that the networking layer will not adapt based on
	reclaim induced by memory.reclaim.

	The following nested keys are defined.

	  ==========            ================================
	  swappiness            Swappiness value to reclaim with
	  ==========            ================================

	Specifying a swappiness value instructs the kernel to perform
	the reclaim with that swappiness value.  Note that this has the
	same semantics as vm.swappiness applied to memcg reclaim with
	all the existing limitations and potential future extensions.
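
	For example, to reclaim 512M while biasing reclaim away from
	anonymous memory, as a swappiness of 0 does under the
	vm.swappiness semantics::

	  echo "512M swappiness=0" > memory.reclaim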

  memory.peak
	A read-write single value file which exists on non-root
	cgroups.

	The max memory usage recorded for the cgroup and its
	descendants since either the creation of the cgroup or the
	most recent reset for that FD.

	A write of any non-empty string to this file resets it to the
	current memory usage for subsequent reads through the same
	file descriptor.

  memory.oom.group
	A read-write single value file which exists on non-root
	cgroups.  The default value is "0".

	Determines whether the cgroup should be treated as
	an indivisible workload by the OOM killer.  If set,
	all tasks belonging to the cgroup or to its descendants
	(if the memory cgroup is not a leaf cgroup) are killed
	together or not at all.  This can be used to avoid
	partial kills to guarantee workload integrity.

	Tasks with the OOM protection (oom_score_adj set to -1000)
	are treated as an exception and are never killed.

	If the OOM killer is invoked in a cgroup, it's not going
	to kill any tasks outside of this cgroup, regardless of the
	memory.oom.group values of ancestor cgroups.

1379  memory.events
1380	A read-only flat-keyed file which exists on non-root cgroups.
1381	The following entries are defined.  Unless specified
1382	otherwise, a value change in this file generates a file
1383	modified event.
1384
1385	Note that all fields in this file are hierarchical and the
1386	file modified event can be generated due to an event down the
1387	hierarchy. For the local events at the cgroup level see
1388	memory.events.local.
1389
1390	  low
1391		The number of times the cgroup is reclaimed due to
1392		high memory pressure even though its usage is under
1393		the low boundary.  This usually indicates that the low
1394		boundary is over-committed.
1395
1396	  high
1397		The number of times processes of the cgroup are
1398		throttled and routed to perform direct memory reclaim
1399		because the high memory boundary was exceeded.  For a
1400		cgroup whose memory usage is capped by the high limit
1401		rather than global memory pressure, this event's
1402		occurrences are expected.
1403
1404	  max
1405		The number of times the cgroup's memory usage was
1406		about to go over the max boundary.  If direct reclaim
1407		fails to bring it down, the cgroup goes to OOM state.
1408
1409	  oom
		The number of times the cgroup's memory usage reached
		the limit and allocation was about to fail.

		This event is not raised if the OOM killer is not
		considered as an option, e.g. for failed high-order
		allocations or if the caller asked not to retry the
		allocation.
1416
1417	  oom_kill
1418		The number of processes belonging to this cgroup
1419		killed by any kind of OOM killer.
1420
	  oom_group_kill
		The number of times a group OOM has occurred.
1423
1424  memory.events.local
1425	Similar to memory.events but the fields in the file are local
1426	to the cgroup i.e. not hierarchical. The file modified event
1427	generated on this file reflects only the local events.
1428
1429  memory.stat
1430	A read-only flat-keyed file which exists on non-root cgroups.
1431
1432	This breaks down the cgroup's memory footprint into different
1433	types of memory, type-specific details, and other information
1434	on the state and past events of the memory management system.
1435
1436	All memory amounts are in bytes.
1437
1438	The entries are ordered to be human readable, and new entries
1439	can show up in the middle. Don't rely on items remaining in a
1440	fixed position; use the keys to look up specific values!
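
	For example, a monitoring tool interested in only a couple of
	entries might do (the values shown are illustrative)::

	  $ grep -E '^(anon|file) ' memory.stat
	  anon 1572864
	  file 55738368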
1441
	Entries which have no per-node counter (and therefore don't
	show up in memory.numa_stat) are tagged with 'npn'
	(non-per-node).
1445
1446	  anon
1447		Amount of memory used in anonymous mappings such as
1448		brk(), sbrk(), and mmap(MAP_ANONYMOUS)
1449
1450	  file
1451		Amount of memory used to cache filesystem data,
1452		including tmpfs and shared memory.
1453
1454	  kernel (npn)
		Amount of total kernel memory, including kernel_stack,
		pagetables, percpu, vmalloc and slab, in addition to
		other kernel memory use cases.
1458
1459	  kernel_stack
1460		Amount of memory allocated to kernel stacks.
1461
	  pagetables
		Amount of memory allocated for page tables.
1464
1465	  sec_pagetables
		Amount of memory allocated for secondary page tables;
		this currently includes KVM mmu allocations on x86
		and arm64 and IOMMU page tables.
1469
1470	  percpu (npn)
1471		Amount of memory used for storing per-cpu kernel
1472		data structures.
1473
1474	  sock (npn)
1475		Amount of memory used in network transmission buffers
1476
1477	  vmalloc (npn)
1478		Amount of memory used for vmap backed memory.
1479
1480	  shmem
1481		Amount of cached filesystem data that is swap-backed,
1482		such as tmpfs, shm segments, shared anonymous mmap()s
1483
1484	  zswap
1485		Amount of memory consumed by the zswap compression backend.
1486
1487	  zswapped
1488		Amount of application memory swapped out to zswap.
1489
1490	  file_mapped
1491		Amount of cached filesystem data mapped with mmap()
1492
1493	  file_dirty
1494		Amount of cached filesystem data that was modified but
1495		not yet written back to disk
1496
1497	  file_writeback
1498		Amount of cached filesystem data that was modified and
1499		is currently being written back to disk
1500
1501	  swapcached
1502		Amount of swap cached in memory. The swapcache is accounted
1503		against both memory and swap usage.
1504
1505	  anon_thp
1506		Amount of memory used in anonymous mappings backed by
1507		transparent hugepages
1508
1509	  file_thp
1510		Amount of cached filesystem data backed by transparent
1511		hugepages
1512
1513	  shmem_thp
1514		Amount of shm, tmpfs, shared anonymous mmap()s backed by
1515		transparent hugepages
1516
1517	  inactive_anon, active_anon, inactive_file, active_file, unevictable
1518		Amount of memory, swap-backed and filesystem-backed,
1519		on the internal memory management lists used by the
1520		page reclaim algorithm.
1521
		As these represent internal list state (e.g. shmem pages are on anon
1523		memory management lists), inactive_foo + active_foo may not be equal to
1524		the value for the foo counter, since the foo counter is type-based, not
1525		list-based.
1526
1527	  slab_reclaimable
1528		Part of "slab" that might be reclaimed, such as
1529		dentries and inodes.
1530
1531	  slab_unreclaimable
1532		Part of "slab" that cannot be reclaimed on memory
1533		pressure.
1534
1535	  slab (npn)
1536		Amount of memory used for storing in-kernel data
1537		structures.
1538
1539	  workingset_refault_anon
1540		Number of refaults of previously evicted anonymous pages.
1541
1542	  workingset_refault_file
1543		Number of refaults of previously evicted file pages.
1544
1545	  workingset_activate_anon
1546		Number of refaulted anonymous pages that were immediately
1547		activated.
1548
1549	  workingset_activate_file
1550		Number of refaulted file pages that were immediately activated.
1551
1552	  workingset_restore_anon
1553		Number of restored anonymous pages which have been detected as
1554		an active workingset before they got reclaimed.
1555
1556	  workingset_restore_file
1557		Number of restored file pages which have been detected as an
1558		active workingset before they got reclaimed.
1559
1560	  workingset_nodereclaim
1561		Number of times a shadow node has been reclaimed
1562
	  pgscan (npn)
		Amount of pages scanned (in an inactive LRU list)

	  pgsteal (npn)
		Amount of pages reclaimed

	  pgscan_kswapd (npn)
		Amount of pages scanned by kswapd (in an inactive LRU list)

	  pgscan_direct (npn)
		Amount of pages scanned directly (in an inactive LRU list)

	  pgscan_khugepaged (npn)
		Amount of pages scanned by khugepaged (in an inactive LRU list)

	  pgsteal_kswapd (npn)
		Amount of pages reclaimed by kswapd

	  pgsteal_direct (npn)
		Amount of pages reclaimed directly

	  pgsteal_khugepaged (npn)
		Amount of pages reclaimed by khugepaged
1586
1587	  pgfault (npn)
1588		Total number of page faults incurred
1589
1590	  pgmajfault (npn)
1591		Number of major page faults incurred
1592
	  pgrefill (npn)
		Amount of pages scanned (in an active LRU list)
1595
1596	  pgactivate (npn)
1597		Amount of pages moved to the active LRU list
1598
1599	  pgdeactivate (npn)
1600		Amount of pages moved to the inactive LRU list
1601
1602	  pglazyfree (npn)
1603		Amount of pages postponed to be freed under memory pressure
1604
1605	  pglazyfreed (npn)
1606		Amount of reclaimed lazyfree pages
1607
1608	  swpin_zero
1609		Number of pages swapped into memory and filled with zero, where I/O
1610		was optimized out because the page content was detected to be zero
1611		during swapout.
1612
1613	  swpout_zero
1614		Number of zero-filled pages swapped out with I/O skipped due to the
1615		content being detected as zero.
1616
1617	  zswpin
		Number of pages moved into memory from zswap.
1619
1620	  zswpout
1621		Number of pages moved out of memory to zswap.
1622
1623	  zswpwb
1624		Number of pages written from zswap to swap.
1625
	  thp_fault_alloc (npn)
		Number of transparent hugepages which were allocated to satisfy
		a page fault.  This counter is not present when
		CONFIG_TRANSPARENT_HUGEPAGE is not set.
1630
1631	  thp_collapse_alloc (npn)
1632		Number of transparent hugepages which were allocated to allow
1633		collapsing an existing range of pages. This counter is not
1634		present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1635
	  thp_swpout (npn)
		Number of transparent hugepages which were swapped out in one
		piece without splitting.

	  thp_swpout_fallback (npn)
		Number of transparent hugepages which were split before swapout,
		usually because contiguous swap space could not be
		allocated for the huge page.
1644
1645	  numa_pages_migrated (npn)
1646		Number of pages migrated by NUMA balancing.
1647
1648	  numa_pte_updates (npn)
1649		Number of pages whose page table entries are modified by
1650		NUMA balancing to produce NUMA hinting faults on access.
1651
1652	  numa_hint_faults (npn)
1653		Number of NUMA hinting faults.
1654
1655	  pgdemote_kswapd
1656		Number of pages demoted by kswapd.
1657
1658	  pgdemote_direct
1659		Number of pages demoted directly.
1660
1661	  pgdemote_khugepaged
1662		Number of pages demoted by khugepaged.
1663
1664	  hugetlb
1665		Amount of memory used by hugetlb pages. This metric only shows
1666		up if hugetlb usage is accounted for in memory.current (i.e.
1667		cgroup is mounted with the memory_hugetlb_accounting option).
1668
1669  memory.numa_stat
1670	A read-only nested-keyed file which exists on non-root cgroups.
1671
1672	This breaks down the cgroup's memory footprint into different
1673	types of memory, type-specific details, and other information
1674	per node on the state of the memory management system.
1675
	This is useful for providing visibility into the NUMA locality
	information within a memcg since the pages are allowed to be
	allocated from any physical node.  One use case is evaluating
	application performance by combining this information with the
	application's CPU allocation.
1681
1682	All memory amounts are in bytes.
1683
1684	The output format of memory.numa_stat is::
1685
1686	  type N0=<bytes in node 0> N1=<bytes in node 1> ...
1687
1688	The entries are ordered to be human readable, and new entries
1689	can show up in the middle. Don't rely on items remaining in a
1690	fixed position; use the keys to look up specific values!
1691
	The entries correspond to those in memory.stat; see above for
	their descriptions.
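
	For example (the byte counts are illustrative)::

	  $ grep '^anon ' memory.numa_stat
	  anon N0=1486848 N1=86016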
1693
1694  memory.swap.current
1695	A read-only single value file which exists on non-root
1696	cgroups.
1697
1698	The total amount of swap currently being used by the cgroup
1699	and its descendants.
1700
1701  memory.swap.high
1702	A read-write single value file which exists on non-root
1703	cgroups.  The default is "max".
1704
1705	Swap usage throttle limit.  If a cgroup's swap usage exceeds
1706	this limit, all its further allocations will be throttled to
1707	allow userspace to implement custom out-of-memory procedures.
1708
1709	This limit marks a point of no return for the cgroup. It is NOT
1710	designed to manage the amount of swapping a workload does
1711	during regular operation. Compare to memory.swap.max, which
1712	prohibits swapping past a set amount, but lets the cgroup
1713	continue unimpeded as long as other memory can be reclaimed.
1714
1715	Healthy workloads are not expected to reach this limit.
1716
1717  memory.swap.peak
1718	A read-write single value file which exists on non-root cgroups.
1719
1720	The max swap usage recorded for the cgroup and its descendants since
1721	the creation of the cgroup or the most recent reset for that FD.
1722
	A write of any non-empty string to this file resets it to the
	current swap usage for subsequent reads through the same
	file descriptor.
1726
1727  memory.swap.max
1728	A read-write single value file which exists on non-root
1729	cgroups.  The default is "max".
1730
1731	Swap usage hard limit.  If a cgroup's swap usage reaches this
1732	limit, anonymous memory of the cgroup will not be swapped out.
1733
1734  memory.swap.events
1735	A read-only flat-keyed file which exists on non-root cgroups.
1736	The following entries are defined.  Unless specified
1737	otherwise, a value change in this file generates a file
1738	modified event.
1739
1740	  high
1741		The number of times the cgroup's swap usage was over
1742		the high threshold.
1743
1744	  max
1745		The number of times the cgroup's swap usage was about
1746		to go over the max boundary and swap allocation
1747		failed.
1748
1749	  fail
1750		The number of times swap allocation failed either
1751		because of running out of swap system-wide or max
1752		limit.
1753
	When memory.swap.max is reduced under the current usage, the
	existing swap entries are reclaimed gradually and the swap
	usage may stay higher than the limit for an extended period of
	time.  This reduces the impact on the workload and memory
	management.
1758
1759  memory.zswap.current
1760	A read-only single value file which exists on non-root
1761	cgroups.
1762
1763	The total amount of memory consumed by the zswap compression
1764	backend.
1765
1766  memory.zswap.max
1767	A read-write single value file which exists on non-root
1768	cgroups.  The default is "max".
1769
1770	Zswap usage hard limit. If a cgroup's zswap pool reaches this
1771	limit, it will refuse to take any more stores before existing
1772	entries fault back in or are written out to disk.
1773
1774  memory.zswap.writeback
1775	A read-write single value file. The default value is "1".
1776	Note that this setting is hierarchical, i.e. the writeback would be
1777	implicitly disabled for child cgroups if the upper hierarchy
1778	does so.
1779
	When this is set to 0, all swapping attempts to swapping devices
	are disabled.  This includes both zswap writebacks, and swapping
	due to zswap store failures.  If the zswap store failures are
	recurring (e.g. if the pages are incompressible), users can
	observe reclaim inefficiency after disabling writeback (because
	the same pages might be rejected again and again).
1786
1787	Note that this is subtly different from setting memory.swap.max to
1788	0, as it still allows for pages to be written to the zswap pool.
1789	This setting has no effect if zswap is disabled, and swapping
1790	is allowed unless memory.swap.max is set to 0.
1791
1792  memory.pressure
1793	A read-only nested-keyed file.
1794
1795	Shows pressure stall information for memory. See
1796	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1797
1798
1799Usage Guidelines
1800~~~~~~~~~~~~~~~~
1801
1802"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.
1806
1807Because breach of the high limit doesn't trigger the OOM killer but
1808throttles the offending cgroup, a management agent has ample
1809opportunities to monitor and take appropriate actions such as granting
1810more memory or terminating the workload.
1811
Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
the network to a file can use all available memory but can also
operate just as performantly with a small amount of memory.  A
measure of memory pressure - how much the workload is being impacted
due to lack of memory - is necessary to determine whether a workload
needs more memory; unfortunately, a memory pressure monitoring
mechanism isn't implemented yet.
1821
1822
1823Memory Ownership
1824~~~~~~~~~~~~~~~~
1825
1826A memory area is charged to the cgroup which instantiated it and stays
1827charged to the cgroup until the area is released.  Migrating a process
1828to a different cgroup doesn't move the memory usages that it
1829instantiated while in the previous cgroup to the new cgroup.
1830
A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.
1835
1836If a cgroup sweeps a considerable amount of memory which is expected
1837to be accessed repeatedly by other cgroups, it may make sense to use
1838POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1839belonging to the affected files to ensure correct memory ownership.
1840
1841
1842IO
1843--
1844
1845The "io" controller regulates the distribution of IO resources.  This
1846controller implements both weight based and absolute bandwidth or IOPS
1847limit distribution; however, weight based distribution is available
1848only if cfq-iosched is in use and neither scheme is available for
1849blk-mq devices.
1850
1851
1852IO Interface Files
1853~~~~~~~~~~~~~~~~~~
1854
1855  io.stat
1856	A read-only nested-keyed file.
1857
1858	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1859	The following nested keys are defined.
1860
1861	  ======	=====================
1862	  rbytes	Bytes read
1863	  wbytes	Bytes written
1864	  rios		Number of read IOs
1865	  wios		Number of write IOs
1866	  dbytes	Bytes discarded
1867	  dios		Number of discard IOs
1868	  ======	=====================
1869
1870	An example read output follows::
1871
1872	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1873	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1874
1875  io.cost.qos
1876	A read-write nested-keyed file which exists only on the root
1877	cgroup.
1878
1879	This file configures the Quality of Service of the IO cost
1880	model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1881	currently implements "io.weight" proportional control.  Lines
1882	are keyed by $MAJ:$MIN device numbers and not ordered.  The
1883	line for a given device is populated on the first write for
1884	the device on "io.cost.qos" or "io.cost.model".  The following
1885	nested keys are defined.
1886
1887	  ======	=====================================
1888	  enable	Weight-based control enable
1889	  ctrl		"auto" or "user"
1890	  rpct		Read latency percentile    [0, 100]
1891	  rlat		Read latency threshold
1892	  wpct		Write latency percentile   [0, 100]
1893	  wlat		Write latency threshold
1894	  min		Minimum scaling percentage [1, 10000]
1895	  max		Maximum scaling percentage [1, 10000]
1896	  ======	=====================================
1897
1898	The controller is disabled by default and can be enabled by
1899	setting "enable" to 1.  "rpct" and "wpct" parameters default
1900	to zero and the controller uses internal device saturation
1901	state to adjust the overall IO rate between "min" and "max".
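
	For example, assuming sdb is 8:16, proportional control can be
	enabled on it in fully automatic mode with::

	  echo "8:16 enable=1" > io.cost.qos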
1902
1903	When a better control quality is needed, latency QoS
1904	parameters can be configured.  For example::
1905
1906	  8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0
1907
1908	shows that on sdb, the controller is enabled, will consider
1909	the device saturated if the 95th percentile of read completion
1910	latencies is above 75ms or write 150ms, and adjust the overall
1911	IO issue rate between 50% and 150% accordingly.
1912
1913	The lower the saturation point, the better the latency QoS at
1914	the cost of aggregate bandwidth.  The narrower the allowed
1915	adjustment range between "min" and "max", the more conformant
1916	to the cost model the IO behavior.  Note that the IO issue
1917	base rate may be far off from 100% and setting "min" and "max"
1918	blindly can lead to a significant loss of device capacity or
1919	control quality.  "min" and "max" are useful for regulating
1920	devices which show wide temporary behavior changes - e.g. a
1921	ssd which accepts writes at the line speed for a while and
1922	then completely stalls for multiple seconds.
1923
1924	When "ctrl" is "auto", the parameters are controlled by the
1925	kernel and may change automatically.  Setting "ctrl" to "user"
1926	or setting any of the percentile and latency parameters puts
1927	it into "user" mode and disables the automatic changes.  The
1928	automatic mode can be restored by setting "ctrl" to "auto".
1929
1930  io.cost.model
1931	A read-write nested-keyed file which exists only on the root
1932	cgroup.
1933
1934	This file configures the cost model of the IO cost model based
1935	controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1936	implements "io.weight" proportional control.  Lines are keyed
1937	by $MAJ:$MIN device numbers and not ordered.  The line for a
1938	given device is populated on the first write for the device on
1939	"io.cost.qos" or "io.cost.model".  The following nested keys
1940	are defined.
1941
1942	  =====		================================
1943	  ctrl		"auto" or "user"
1944	  model		The cost model in use - "linear"
1945	  =====		================================
1946
	When "ctrl" is "auto", the kernel may change all parameters
	dynamically.  When "ctrl" is set to "user" or any other
	parameters are written to, "ctrl" becomes "user" and the
	automatic changes are disabled.
1951
1952	When "model" is "linear", the following model parameters are
1953	defined.
1954
1955	  =============	========================================
1956	  [r|w]bps	The maximum sequential IO throughput
1957	  [r|w]seqiops	The maximum 4k sequential IOs per second
1958	  [r|w]randiops	The maximum 4k random IOs per second
1959	  =============	========================================
1960
1961	From the above, the builtin linear model determines the base
1962	costs of a sequential and random IO and the cost coefficient
1963	for the IO size.  While simple, this model can cover most
1964	common device classes acceptably.
1965
1966	The IO cost model isn't expected to be accurate in absolute
1967	sense and is scaled to the device behavior dynamically.
1968
1969	If needed, tools/cgroup/iocost_coef_gen.py can be used to
1970	generate device-specific coefficients.
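
	For example, a hand-tuned linear model for a hypothetical SSD
	at 8:16 might be installed with (the coefficients below are
	illustrative, not measured values)::

	  echo "8:16 rbps=2706339840 rseqiops=89698 rrandiops=110036 wbps=1063126016 wseqiops=24613 wrandiops=25161" > io.cost.model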
1971
1972  io.weight
1973	A read-write flat-keyed file which exists on non-root cgroups.
1974	The default is "default 100".
1975
1976	The first line is the default weight applied to devices
1977	without specific override.  The rest are overrides keyed by
	$MAJ:$MIN device numbers and not ordered.  The weights are in
	the range [1, 10000] and specify the relative amount of IO time
	the cgroup can use in relation to its siblings.
1981
1982	The default weight can be updated by writing either "default
1983	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
1984	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
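
	For example, the configuration shown in the read output below
	could have been produced by::

	  echo "default 100" > io.weight
	  echo "8:16 200" > io.weight
	  echo "8:0 50" > io.weight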
1985
1986	An example read output follows::
1987
1988	  default 100
1989	  8:16 200
1990	  8:0 50
1991
1992  io.max
1993	A read-write nested-keyed file which exists on non-root
1994	cgroups.
1995
1996	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
1997	device numbers and not ordered.  The following nested keys are
1998	defined.
1999
2000	  =====		==================================
2001	  rbps		Max read bytes per second
2002	  wbps		Max write bytes per second
2003	  riops		Max read IO operations per second
2004	  wiops		Max write IO operations per second
2005	  =====		==================================
2006
2007	When writing, any number of nested key-value pairs can be
2008	specified in any order.  "max" can be specified as the value
2009	to remove a specific limit.  If the same key is specified
2010	multiple times, the outcome is undefined.
2011
2012	BPS and IOPS are measured in each IO direction and IOs are
2013	delayed if limit is reached.  Temporary bursts are allowed.
2014
2015	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
2016
2017	  echo "8:16 rbps=2097152 wiops=120" > io.max
2018
2019	Reading returns the following::
2020
2021	  8:16 rbps=2097152 wbps=max riops=max wiops=120
2022
2023	Write IOPS limit can be removed by writing the following::
2024
2025	  echo "8:16 wiops=max" > io.max
2026
2027	Reading now returns the following::
2028
2029	  8:16 rbps=2097152 wbps=max riops=max wiops=max
2030
2031  io.pressure
2032	A read-only nested-keyed file.
2033
2034	Shows pressure stall information for IO. See
2035	:ref:`Documentation/accounting/psi.rst <psi>` for details.
2036
2037
2038Writeback
2039~~~~~~~~~
2040
2041Page cache is dirtied through buffered writes and shared mmaps and
2042written asynchronously to the backing filesystem by the writeback
2043mechanism.  Writeback sits between the memory and IO domains and
2044regulates the proportion of dirty memory by balancing dirtying and
2045write IOs.
2046
2047The io controller, in conjunction with the memory controller,
2048implements control of page cache writeback IOs.  The memory controller
2049defines the memory domain that dirty memory ratio is calculated and
2050maintained for and the io controller defines the io domain which
2051writes out dirty pages for the memory domain.  Both system-wide and
2052per-cgroup dirty memory states are examined and the more restrictive
2053of the two is enforced.
2054
2055cgroup writeback requires explicit support from the underlying
2056filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
2057btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
2058attributed to the root cgroup.
2059
There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.
2065
2066As cgroup ownership for memory is tracked per page, there can be pages
2067which are associated with different cgroups than the one the inode is
2068associated with.  These are called foreign pages.  The writeback
2069constantly keeps track of foreign pages and, if a particular foreign
2070cgroup becomes the majority over a certain period of time, switches
2071the ownership of the inode to that cgroup.
2072
2073While this model is enough for most use cases where a given inode is
2074mostly dirtied by a single cgroup even when the main writing cgroup
2075changes over time, use cases where multiple cgroups write to a single
2076inode simultaneously are not supported well.  In such circumstances, a
2077significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
2079doesn't update it until the page is released, even if writeback
2080strictly follows page ownership, multiple cgroups dirtying overlapping
2081areas wouldn't work as expected.  It's recommended to avoid such usage
2082patterns.
2083
2084The sysctl knobs which affect writeback behavior are applied to cgroup
2085writeback as follows.
2086
2087  vm.dirty_background_ratio, vm.dirty_ratio
2088	These ratios apply the same to cgroup writeback with the
2089	amount of available memory capped by limits imposed by the
2090	memory controller and system-wide clean memory.
2091
2092  vm.dirty_background_bytes, vm.dirty_bytes
2093	For cgroup writeback, this is calculated into ratio against
2094	total available memory and applied the same way as
2095	vm.dirty[_background]_ratio.
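
As a hypothetical illustration: on a system with 8G of memory,
setting vm.dirty_bytes to 400M behaves like a vm.dirty_ratio of 5%;
a cgroup whose available memory is capped at 1G by the memory
controller would then be limited to roughly 50M of dirty page cache.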
2096
2097
2098IO Latency
2099~~~~~~~~~~
2100
2101This is a cgroup v2 controller for IO workload protection.  You provide a group
2102with a latency target, and if the average latency exceeds that target the
2103controller will throttle any peers that have a lower latency target than the
2104protected workload.
2105
2106The limits are only applied at the peer level in the hierarchy.  This means that
2107in the diagram below, only groups A, B, and C will influence each other, and
2108groups D and F will influence each other.  Group G will influence nobody::
2109
2110			[root]
2111		/	   |		\
2112		A	   B		C
2113	       /  \        |
2114	      D    F	   G
2115
2116
2117So the ideal way to configure this is to set io.latency in groups A, B, and C.
2118Generally you do not want to set a value lower than the latency your device
2119supports.  Experiment to find the value that works best for your workload.
2120Start at higher than the expected latency for your device and watch the
2121avg_lat value in io.stat for your workload group to get an idea of the
2122latency you see during normal operation.  Use the avg_lat value as a basis for
your real setting, setting it 10-15% higher than the value in io.stat.
2124
2125How IO Latency Throttling Works
2126~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2127
io.latency is work conserving, so as long as everybody is meeting their latency
target the controller doesn't do anything.  Once a group starts missing its
target it begins throttling any peer group that has a higher target than itself.
2131This throttling takes 2 forms:
2132
- Queue depth throttling.  This is the number of outstanding IOs a group is
  allowed to have.  We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.
2136
2137- Artificial delay induction.  There are certain types of IO that cannot be
2138  throttled without possibly adversely affecting higher priority groups.  This
2139  includes swapping and metadata IO.  These types of IO are allowed to occur
2140  normally, however they are "charged" to the originating group.  If the
2141  originating group is being throttled you will see the use_delay and delay
2142  fields in io.stat increase.  The delay value is how many microseconds that are
2143  being added to any process that runs in this group.  Because this number can
2144  grow quite large if there is a lot of swapping or metadata IO occurring we
2145  limit the individual delay events to 1 second at a time.
2146
2147Once the victimized group starts meeting its latency target again it will start
2148unthrottling any peer groups that were throttled previously.  If the victimized
2149group simply stops doing IO the global counter will unthrottle appropriately.
2150
2151IO Latency Interface Files
2152~~~~~~~~~~~~~~~~~~~~~~~~~~
2153
2154  io.latency
	This takes a similar format to the other controllers.
2156
2157		"MAJOR:MINOR target=<target time in microseconds>"
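
	For example, assuming 8:16 is the device backing the protected
	workload, a 10 millisecond target can be set with::

	  echo "8:16 target=10000" > io.latency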
2158
2159  io.stat
2160	If the controller is enabled you will see extra stats in io.stat in
2161	addition to the normal ones.
2162
2163	  depth
2164		This is the current queue depth for the group.
2165
2166	  avg_lat
2167		This is an exponential moving average with a decay rate of 1/exp
2168		bound by the sampling interval.  The decay rate interval can be
2169		calculated by multiplying the win value in io.stat by the
2170		corresponding number of samples based on the win value.
2171
2172	  win
2173		The sampling window size in milliseconds.  This is the minimum
2174		duration of time between evaluation events.  Windows only elapse
2175		with IO activity.  Idle periods extend the most recent window.
2176
2177IO Priority
2178~~~~~~~~~~~
2179
2180A single attribute controls the behavior of the I/O priority cgroup policy,
2181namely the io.prio.class attribute. The following values are accepted for
2182that attribute:
2183
2184  no-change
2185	Do not modify the I/O priority class.
2186
2187  promote-to-rt
2188	For requests that have a non-RT I/O priority class, change it into RT.
2189	Also change the priority level of these requests to 4. Do not modify
2190	the I/O priority of requests that have priority class RT.
2191
2192  restrict-to-be
2193	For requests that do not have an I/O priority class or that have I/O
2194	priority class RT, change it into BE. Also change the priority level
2195	of these requests to 0. Do not modify the I/O priority class of
2196	requests that have priority class IDLE.
2197
2198  idle
2199	Change the I/O priority class of all requests into IDLE, the lowest
2200	I/O priority class.
2201
2202  none-to-rt
2203	Deprecated. Just an alias for promote-to-rt.
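
For example, the promote-to-rt policy can be selected by writing to
that attribute::

  echo promote-to-rt > io.prio.class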
2204
2205The following numerical values are associated with the I/O priority policies:
2206
2207+----------------+---+
2208| no-change      | 0 |
2209+----------------+---+
2210| promote-to-rt  | 1 |
2211+----------------+---+
2212| restrict-to-be | 2 |
2213+----------------+---+
2214| idle           | 3 |
2215+----------------+---+
2216
2217The numerical value that corresponds to each I/O priority class is as follows:
2218
2219+-------------------------------+---+
2220| IOPRIO_CLASS_NONE             | 0 |
2221+-------------------------------+---+
2222| IOPRIO_CLASS_RT (real-time)   | 1 |
2223+-------------------------------+---+
2224| IOPRIO_CLASS_BE (best effort) | 2 |
2225+-------------------------------+---+
2226| IOPRIO_CLASS_IDLE             | 3 |
2227+-------------------------------+---+
2228
2229The algorithm to set the I/O priority class for a request is as follows:
2230
2231- If I/O priority class policy is promote-to-rt, change the request I/O
2232  priority class to IOPRIO_CLASS_RT and change the request I/O priority
2233  level to 4.
2234- If I/O priority class policy is not promote-to-rt, translate the I/O priority
2235  class policy into a number, then change the request I/O priority class
2236  into the maximum of the I/O priority class policy number and the numerical
2237  I/O priority class.
2238
2239PID
2240---
2241
2242The process number controller is used to allow a cgroup to stop any
2243new tasks from being fork()'d or clone()'d after a specified limit is
2244reached.
2245
2246The number of tasks in a cgroup can be exhausted in ways which other
2247controllers cannot prevent, thus warranting its own controller.  For
2248example, a fork bomb is likely to exhaust the number of tasks before
2249hitting memory restrictions.
2250
2251Note that PIDs used in this controller refer to TIDs, process IDs as
2252used by the kernel.
2253
2254
2255PID Interface Files
2256~~~~~~~~~~~~~~~~~~~
2257
2258  pids.max
2259	A read-write single value file which exists on non-root
2260	cgroups.  The default is "max".
2261
2262	Hard limit of number of processes.
2263
2264  pids.current
2265	A read-only single value file which exists on non-root cgroups.
2266
2267	The number of processes currently in the cgroup and its
2268	descendants.
2269
2270  pids.peak
2271	A read-only single value file which exists on non-root cgroups.
2272
2273	The maximum value that the number of processes in the cgroup and its
2274	descendants has ever reached.
2275
2276  pids.events
2277	A read-only flat-keyed file which exists on non-root cgroups. Unless
2278	specified otherwise, a value change in this file generates a file
2279	modified event. The following entries are defined.
2280
2281	  max
2282		The number of times the cgroup's total number of processes hit the pids.max
2283		limit (see also pids_localevents).
2284
2285  pids.events.local
2286	Similar to pids.events but the fields in the file are local
2287	to the cgroup i.e. not hierarchical. The file modified event
2288	generated on this file reflects only the local events.
2289
2290Organisational operations are not blocked by cgroup policies, so it is
2291possible to have pids.current > pids.max.  This can be done by either
2292setting the limit to be smaller than pids.current, or attaching enough
2293processes to the cgroup such that pids.current is larger than
2294pids.max.  However, it is not possible to violate a cgroup PID policy
2295through fork() or clone(). These will return -EAGAIN if the creation
2296of a new process would cause a cgroup policy to be violated.
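
For example, the following hypothetical session shows the limit being
lowered below the current usage and a subsequent fork failing::

  # cat pids.current
  3
  # echo 2 > pids.max
  # cat pids.current
  3
  # sleep 1000 &
  sh: fork: Resource temporarily unavailable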
2297
2298
2299Cpuset
2300------
2301
2302The "cpuset" controller provides a mechanism for constraining
2303the CPU and memory node placement of tasks to only the resources
2304specified in the cpuset interface files in a task's current cgroup.
2305This is especially valuable on large NUMA systems where placing jobs
2306on properly sized subsets of the systems with careful processor and
2307memory placement to reduce cross-node memory access and contention
2308can improve overall system performance.
2309
2310The "cpuset" controller is hierarchical.  That means the controller
2311cannot use CPUs or memory nodes not allowed in its parent.
2312
2313
2314Cpuset Interface Files
2315~~~~~~~~~~~~~~~~~~~~~~
2316
2317  cpuset.cpus
2318	A read-write multiple values file which exists on non-root
2319	cpuset-enabled cgroups.
2320
2321	It lists the requested CPUs to be used by tasks within this
2322	cgroup.  The actual list of CPUs to be granted, however, is
2323	subjected to constraints imposed by its parent and can differ
2324	from the requested CPUs.
2325
2326	The CPU numbers are comma-separated numbers or ranges.
2327	For example::
2328
2329	  # cat cpuset.cpus
2330	  0-4,6,8-10
2331
2332	An empty value indicates that the cgroup is using the same
2333	setting as the nearest cgroup ancestor with a non-empty
2334	"cpuset.cpus" or all the available CPUs if none is found.
2335
2336	The value of "cpuset.cpus" stays constant until the next update
2337	and won't be affected by any CPU hotplug events.
2338
2339  cpuset.cpus.effective
2340	A read-only multiple values file which exists on all
2341	cpuset-enabled cgroups.
2342
2343	It lists the onlined CPUs that are actually granted to this
2344	cgroup by its parent.  These CPUs are allowed to be used by
2345	tasks within the current cgroup.
2346
2347	If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2348	all the CPUs from the parent cgroup that can be available to
2349	be used by this cgroup.  Otherwise, it should be a subset of
2350	"cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2351	can be granted.  In this case, it will be treated just like an
2352	empty "cpuset.cpus".
2353
2354	Its value will be affected by CPU hotplug events.
2355
2356  cpuset.mems
2357	A read-write multiple values file which exists on non-root
2358	cpuset-enabled cgroups.
2359
2360	It lists the requested memory nodes to be used by tasks within
2361	this cgroup.  The actual list of memory nodes granted, however,
2362	is subjected to constraints imposed by its parent and can differ
2363	from the requested memory nodes.
2364
2365	The memory node numbers are comma-separated numbers or ranges.
2366	For example::
2367
2368	  # cat cpuset.mems
2369	  0-1,3
2370
2371	An empty value indicates that the cgroup is using the same
2372	setting as the nearest cgroup ancestor with a non-empty
2373	"cpuset.mems" or all the available memory nodes if none
2374	is found.
2375
2376	The value of "cpuset.mems" stays constant until the next update
2377	and won't be affected by any memory nodes hotplug events.
2378
2379	Setting a non-empty value to "cpuset.mems" causes memory of
2380	tasks within the cgroup to be migrated to the designated nodes if
2381	they are currently using memory outside of the designated nodes.
2382
2383	There is a cost for this memory migration.  The migration
2384	may not be complete and some memory pages may be left behind.
2385	So it is recommended that "cpuset.mems" should be set properly
2386	before spawning new tasks into the cpuset.  Even if there is
2387	a need to change "cpuset.mems" with active tasks, it shouldn't
2388	be done frequently.
2389
2390  cpuset.mems.effective
2391	A read-only multiple values file which exists on all
2392	cpuset-enabled cgroups.
2393
2394	It lists the onlined memory nodes that are actually granted to
2395	this cgroup by its parent. These memory nodes are allowed to
2396	be used by tasks within the current cgroup.
2397
2398	If "cpuset.mems" is empty, it shows all the memory nodes from the
2399	parent cgroup that will be available to be used by this cgroup.
2400	Otherwise, it should be a subset of "cpuset.mems" unless none of
2401	the memory nodes listed in "cpuset.mems" can be granted.  In this
2402	case, it will be treated just like an empty "cpuset.mems".
2403
2404	Its value will be affected by memory nodes hotplug events.
2405
2406  cpuset.cpus.exclusive
2407	A read-write multiple values file which exists on non-root
2408	cpuset-enabled cgroups.
2409
2410	It lists all the exclusive CPUs that are allowed to be used
2411	to create a new cpuset partition.  Its value is not used
2412	unless the cgroup becomes a valid partition root.  See the
2413	"cpuset.cpus.partition" section below for a description of what
2414	a cpuset partition is.
2415
2416	When the cgroup becomes a partition root, the actual exclusive
2417	CPUs that are allocated to that partition are listed in
2418	"cpuset.cpus.exclusive.effective" which may be different
2419	from "cpuset.cpus.exclusive".  If "cpuset.cpus.exclusive"
2420	has previously been set, "cpuset.cpus.exclusive.effective"
2421	is always a subset of it.
2422
	Users can manually set it to a value that is different from
	"cpuset.cpus".  One constraint in setting it is that the list of
	CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
	of its siblings.  If "cpuset.cpus.exclusive" of a sibling cgroup
	isn't set, its "cpuset.cpus" value, if set, cannot be a subset
	of it, so that at least one CPU remains available when the
	exclusive CPUs are taken away.
2430
2431	For a parent cgroup, any one of its exclusive CPUs can only
2432	be distributed to at most one of its child cgroups.  Having an
2433	exclusive CPU appearing in two or more of its child cgroups is
2434	not allowed (the exclusivity rule).  A value that violates the
2435	exclusivity rule will be rejected with a write error.
2436
2437	The root cgroup is a partition root and all its available CPUs
2438	are in its exclusive CPU set.
2439
2440  cpuset.cpus.exclusive.effective
2441	A read-only multiple values file which exists on all non-root
2442	cpuset-enabled cgroups.
2443
2444	This file shows the effective set of exclusive CPUs that
2445	can be used to create a partition root.  The content
2446	of this file will always be a subset of its parent's
2447	"cpuset.cpus.exclusive.effective" if its parent is not the root
2448	cgroup.  It will also be a subset of "cpuset.cpus.exclusive"
2449	if it is set.  If "cpuset.cpus.exclusive" is not set, it is
2450	treated to have an implicit value of "cpuset.cpus" in the
2451	formation of local partition.
2452
2453  cpuset.cpus.isolated
	A read-only multiple values file which exists only on the root
	cgroup.
2455
2456	This file shows the set of all isolated CPUs used in existing
2457	isolated partitions. It will be empty if no isolated partition
2458	is created.
2459
2460  cpuset.cpus.partition
2461	A read-write single value file which exists on non-root
2462	cpuset-enabled cgroups.  This flag is owned by the parent cgroup
2463	and is not delegatable.
2464
2465	It accepts only the following input values when written to.
2466
2467	  ==========	=====================================
2468	  "member"	Non-root member of a partition
2469	  "root"	Partition root
2470	  "isolated"	Partition root without load balancing
2471	  ==========	=====================================
2472
2473	A cpuset partition is a collection of cpuset-enabled cgroups with
2474	a partition root at the top of the hierarchy and its descendants
2475	except those that are separate partition roots themselves and
2476	their descendants.  A partition has exclusive access to the
2477	set of exclusive CPUs allocated to it.	Other cgroups outside
2478	of that partition cannot use any CPUs in that set.
2479
2480	There are two types of partitions - local and remote.  A local
2481	partition is one whose parent cgroup is also a valid partition
2482	root.  A remote partition is one whose parent cgroup is not a
2483	valid partition root itself.  Writing to "cpuset.cpus.exclusive"
2484	is optional for the creation of a local partition as its
2485	"cpuset.cpus.exclusive" file will assume an implicit value that
2486	is the same as "cpuset.cpus" if it is not set.	Writing the
2487	proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2488	before the target partition root is mandatory for the creation
2489	of a remote partition.
2490
2491	Currently, a remote partition cannot be created under a local
2492	partition.  All the ancestors of a remote partition root except
2493	the root cgroup cannot be a partition root.
2494
2495	The root cgroup is always a partition root and its state cannot
2496	be changed.  All other non-root cgroups start out as "member".
2497
2498	When set to "root", the current cgroup is the root of a new
2499	partition or scheduling domain.  The set of exclusive CPUs is
2500	determined by the value of its "cpuset.cpus.exclusive.effective".
2501
2502	When set to "isolated", the CPUs in that partition will be in
2503	an isolated state without any load balancing from the scheduler
2504	and excluded from the unbound workqueues.  Tasks placed in such
2505	a partition with multiple CPUs should be carefully distributed
2506	and bound to each of the individual CPUs for optimal performance.
2507
2508	A partition root ("root" or "isolated") can be in one of the
2509	two possible states - valid or invalid.  An invalid partition
2510	root is in a degraded state where some state information may
2511	be retained, but behaves more like a "member".
2512
2513	All possible state transitions among "member", "root" and
2514	"isolated" are allowed.
2515
2516	On read, the "cpuset.cpus.partition" file can show the following
2517	values.
2518
2519	  =============================	=====================================
2520	  "member"			Non-root member of a partition
2521	  "root"			Partition root
2522	  "isolated"			Partition root without load balancing
2523	  "root invalid (<reason>)"	Invalid partition root
2524	  "isolated invalid (<reason>)"	Invalid isolated partition root
2525	  =============================	=====================================
2526
2527	In the case of an invalid partition root, a descriptive string on
2528	why the partition is invalid is included within parentheses.
2529
2530	For a local partition root to be valid, the following conditions
2531	must be met.
2532
2533	1) The parent cgroup is a valid partition root.
2534	2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2535	   though it may contain offline CPUs.
2536	3) The "cpuset.cpus.effective" cannot be empty unless there is
2537	   no task associated with this partition.
2538
2539	For a remote partition root to be valid, all the above conditions
2540	except the first one must be met.
2541
2542	External events like hotplug or changes to "cpuset.cpus" or
2543	"cpuset.cpus.exclusive" can cause a valid partition root to
2544	become invalid and vice versa.	Note that a task cannot be
2545	moved to a cgroup with empty "cpuset.cpus.effective".
2546
2547	A valid non-root parent partition may distribute out all its CPUs
2548	to its child local partitions when there is no task associated
2549	with it.
2550
	Care must be taken when changing a valid partition root to
	"member", as all its child local partitions, if present, will
	become invalid, causing disruption to tasks running in those
	child partitions.  These inactivated partitions could be
	recovered if their parent is switched back to a partition root
	with a proper value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2557
2558	Poll and inotify events are triggered whenever the state of
2559	"cpuset.cpus.partition" changes.  That includes changes caused
	by writes to "cpuset.cpus.partition", cpu hotplug or other
2561	changes that modify the validity status of the partition.
2562	This will allow user space agents to monitor unexpected changes
2563	to "cpuset.cpus.partition" without the need to do continuous
2564	polling.
2565
2566	A user can pre-configure certain CPUs to an isolated state
2567	with load balancing disabled at boot time with the "isolcpus"
2568	kernel boot command line option.  If those CPUs are to be put
2569	into a partition, they have to be used in an isolated partition.
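
	For example, assuming CPUs 2-3 are available to a direct child
	of the root cgroup, a local partition could be created with::

	  # echo 2-3 > cpuset.cpus
	  # echo root > cpuset.cpus.partition
	  # cat cpuset.cpus.partition
	  root

	Since "cpuset.cpus.exclusive" was not written, it assumes the
	implicit value "2-3" from "cpuset.cpus".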
2570
2571
2572Device controller
2573-----------------
2574
The device controller manages access to device files.  It covers both
the creation of new device files (using mknod) and access to existing
device files.
2578
2579Cgroup v2 device controller has no interface files and is implemented
2580on top of cgroup BPF. To control access to device files, a user may
2581create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2582them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2583device file, corresponding BPF programs will be executed, and depending
2584on the return value the attempt will succeed or fail with -EPERM.
2585
2586A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2587bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2588access type (mknod/read/write) and device (type, major and minor numbers).
2589If the program returns 0, the attempt fails with -EPERM, otherwise it
2590succeeds.
2591
2592An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2593tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
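
For example, a compiled program of that type might be loaded and
attached with bpftool (the object and pin paths below are
hypothetical; bpftool infers the program type from the object's
section name)::

  # bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_prog
  # bpftool cgroup attach /sys/fs/cgroup/mygroup device pinned /sys/fs/bpf/dev_prog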
2594
2595
2596RDMA
2597----
2598
2599The "rdma" controller regulates the distribution and accounting of
2600RDMA resources.
2601
2602RDMA Interface Files
2603~~~~~~~~~~~~~~~~~~~~
2604
2605  rdma.max
	A read-write nested-keyed file that exists for all the cgroups
	except root.  It describes the currently configured resource
	limit for an RDMA/IB device.
2609
2610	Lines are keyed by device name and are not ordered.
2611	Each line contains space separated resource name and its configured
2612	limit that can be distributed.
2613
2614	The following nested keys are defined.
2615
2616	  ==========	=============================
2617	  hca_handle	Maximum number of HCA Handles
2618	  hca_object 	Maximum number of HCA Objects
2619	  ==========	=============================
2620
2621	An example for mlx4 and ocrdma device follows::
2622
2623	  mlx4_0 hca_handle=2 hca_object=2000
2624	  ocrdma1 hca_handle=3 hca_object=max
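
	For example, limits can be configured with a write like (the
	device names and values are illustrative)::

	  echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max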
2625
2626  rdma.current
2627	A read-only file that describes current resource usage.
	It exists for all the cgroups except root.
2629
2630	An example for mlx4 and ocrdma device follows::
2631
2632	  mlx4_0 hca_handle=1 hca_object=20
2633	  ocrdma1 hca_handle=1 hca_object=23
2634
2635DMEM
2636----
2637
2638The "dmem" controller regulates the distribution and accounting of
2639device memory regions. Because each memory region may have its own page size,
2640which does not have to be equal to the system page size, the units are always bytes.
2641
2642DMEM Interface Files
2643~~~~~~~~~~~~~~~~~~~~
2644
2645  dmem.max, dmem.min, dmem.low
	A read-write nested-keyed file that exists for all the cgroups
	except root.  It describes the currently configured resource
	limit for a region.
2649
2650	An example for xe follows::
2651
2652	  drm/0000:03:00.0/vram0 1073741824
2653	  drm/0000:03:00.0/stolen max
2654
2655	The semantics are the same as for the memory cgroup controller, and are
2656	calculated in the same way.
2657
2658  dmem.capacity
2659	A read-only file that describes maximum region capacity.
2660	It only exists on the root cgroup. Not all memory can be
2661	allocated by cgroups, as the kernel reserves some for
2662	internal use.
2663
2664	An example for xe follows::
2665
2666	  drm/0000:03:00.0/vram0 8514437120
2667	  drm/0000:03:00.0/stolen 67108864
2668
2669  dmem.current
2670	A read-only file that describes current resource usage.
	It exists for all the cgroups except root.
2672
2673	An example for xe follows::
2674
2675	  drm/0000:03:00.0/vram0 12550144
2676	  drm/0000:03:00.0/stolen 8650752
2677
2678HugeTLB
2679-------
2680
The HugeTLB controller allows limiting HugeTLB usage per control group
and enforces the controller limit during page fault.
2683
2684HugeTLB Interface Files
2685~~~~~~~~~~~~~~~~~~~~~~~
2686
2687  hugetlb.<hugepagesize>.current
2688	Show current usage for "hugepagesize" hugetlb.  It exists for all
	the cgroups except root.
2690
2691  hugetlb.<hugepagesize>.max
2692	Set/show the hard limit of "hugepagesize" hugetlb usage.
	The default value is "max".  It exists for all the cgroups
	except root.
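
	For example, assuming 2MB huge pages, a 1 gigabyte limit could
	be set with::

	  echo 1G > hugetlb.2MB.max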
2694
2695  hugetlb.<hugepagesize>.events
2696	A read-only flat-keyed file which exists on non-root cgroups.
2697
2698	  max
		The number of allocation failures due to the HugeTLB limit
2700
2701  hugetlb.<hugepagesize>.events.local
2702	Similar to hugetlb.<hugepagesize>.events but the fields in the file
2703	are local to the cgroup i.e. not hierarchical. The file modified event
2704	generated on this file reflects only the local events.
2705
  hugetlb.<hugepagesize>.numa_stat
	Similar to memory.numa_stat, it shows the numa information of the
	hugetlb pages of <hugepagesize> in this cgroup.  Only active
	in-use hugetlb pages are included.  The per-node values are in
	bytes.
2710
2711Misc
2712----
2713
The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.
2718
A resource can be added to the controller via enum misc_res_type{} in the
include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file.  The provider of the
resource must set its capacity prior to using the resource by calling
misc_cg_set_capacity().
2723
2724Once a capacity is set then the resource usage can be updated using charge and
2725uncharge APIs. All of the APIs to interact with misc controller are in
2726include/linux/misc_cgroup.h.
2727
2728Misc Interface Files
2729~~~~~~~~~~~~~~~~~~~~
2730
The miscellaneous controller provides the following interface files.  If two
misc resources (res_a and res_b) are registered then:
2732
2733  misc.capacity
2734        A read-only flat-keyed file shown only in the root cgroup.  It shows
2735        miscellaneous scalar resources available on the platform along with
2736        their quantities::
2737
2738	  $ cat misc.capacity
2739	  res_a 50
2740	  res_b 10
2741
2742  misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its children::
2745
2746	  $ cat misc.current
2747	  res_a 3
2748	  res_b 0
2749
2750  misc.peak
2751        A read-only flat-keyed file shown in all cgroups.  It shows the
2752        historical maximum usage of the resources in the cgroup and its
        children::
2754
2755	  $ cat misc.peak
2756	  res_a 10
2757	  res_b 8
2758
2759  misc.max
        A read-write flat-keyed file shown in the non-root cgroups.  Allowed
        maximum usage of the resources in the cgroup and its children::
2762
2763	  $ cat misc.max
2764	  res_a max
2765	  res_b 4
2766
2767	Limit can be set by::
2768
2769	  # echo res_a 1 > misc.max
2770
2771	Limit can be set to max by::
2772
2773	  # echo res_a max > misc.max
2774
2775        Limits can be set higher than the capacity value in the misc.capacity
2776        file.
2777
2778  misc.events
2779	A read-only flat-keyed file which exists on non-root cgroups. The
2780	following entries are defined. Unless specified otherwise, a value
2781	change in this file generates a file modified event. All fields in
2782	this file are hierarchical.
2783
2784	  max
2785		The number of times the cgroup's resource usage was
2786		about to go over the max boundary.
2787
2788  misc.events.local
2789        Similar to misc.events but the fields in the file are local to the
2790        cgroup i.e. not hierarchical. The file modified event generated on
2791        this file reflects only the local events.
2792
2793Migration and Ownership
2794~~~~~~~~~~~~~~~~~~~~~~~
2795
2796A miscellaneous scalar resource is charged to the cgroup in which it is used
2797first, and stays charged to that cgroup until that resource is freed. Migrating
2798a process to a different cgroup does not move the charge to the destination
2799cgroup where the process has moved.
2800
2801Others
2802------
2803
2804perf_event
2805~~~~~~~~~~
2806
The perf_event controller, if not mounted on a legacy hierarchy, is
2808automatically enabled on the v2 hierarchy so that perf events can
2809always be filtered by cgroup v2 path.  The controller can still be
2810moved to a legacy hierarchy after v2 hierarchy is populated.
2811
2812
2813Non-normative information
2814-------------------------
2815
2816This section contains information that isn't considered to be a part of
2817the stable kernel API and so is subject to change.
2818
2819
CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of the
root cgroup.  The weight of this child cgroup depends on the thread's
nice level.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
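
As a non-normative illustration of that scaling, the sketch below is
userspace-style code (names are illustrative, not kernel API) using a few
values excerpted from the sched_prio_to_weight array::

  /* Approximate the implicit weight of a root-cgroup thread from its
   * nice level.  Nice 0 maps to 1024 in sched_prio_to_weight and to
   * the neutral cgroup weight of 100 here. */
  static const int prio_to_weight_excerpt[][2] = {
          { -20, 88761 },
          { -10,  9548 },
          {   0,  1024 },
          {  10,   110 },
          {  19,    15 },
  };

  static int nice_to_cgroup_weight(int sched_weight)
  {
          return sched_weight * 100 / 1024;   /* e.g. nice 0 -> 100 */
  }

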
IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it were a normal child cgroup of the root cgroup with a
weight value of 200.
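
For example, assuming a single child cgroup configured with the default
io.weight of 100 next to the root's own processes, the implicit node's
weight of 200 would entitle root processes to roughly two thirds
(200 / (200 + 100)) of the device time while both are issuing IO.

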
Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.
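
As a minimal illustration (assuming the caller has CAP_SYS_ADMIN in its
user namespace), a process can detach into a new cgroup namespace with a
plain unshare(2) call::

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  int main(void)
  {
          /* Create a new cgroupns rooted at the current cgroup. */
          if (unshare(CLONE_NEWCGROUP)) {
                  perror("unshare");
                  return 1;
          }
          /* "/proc/self/cgroup" now shows paths relative to the new
           * cgroupns root, e.g. "0::/". */
          return 0;
  }
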
Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system-level information
to the isolated processes.  For example::

2862  # cat /proc/self/cgroup
2863  0::/batchjobs/container_id1
2864
The path '/batchjobs/container_id1' can be considered system data
and undesirable to expose to the isolated processes.  cgroup namespace
can be used to restrict visibility of this path.  For example, before
creating a cgroup namespace, one would see::

2870  # ls -l /proc/self/ns/cgroup
2871  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2872  # cat /proc/self/cgroup
2873  0::/batchjobs/container_id1
2874
After unsharing a new namespace, the view changes::

2877  # ls -l /proc/self/ns/cgroup
2878  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2879  # cat /proc/self/cgroup
2880  0::/
2881
When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

2905  # ~/unshare -c # unshare cgroupns in some cgroup
2906  # cat /proc/self/cgroup
2907  0::/
2908  # mkdir sub_cgrp_1
2909  # echo 0 > sub_cgrp_1/cgroup.procs
2910  # cat /proc/self/cgroup
2911  0::/sub_cgrp_1
2912
Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

2919  # sleep 100000 &
2920  [1] 7353
2921  # echo 7353 > sub_cgrp_1/cgroup.procs
2922  # cat /proc/7353/cgroup
2923  0::/sub_cgrp_1
2924
From the initial cgroup namespace, the real cgroup path will be
visible::

2928  $ cat /proc/7353/cgroup
2929  0::/batchjobs/container_id1/sub_cgrp_1
2930
From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

2936  # cat /proc/7353/cgroup
2937  0::/../container_id2/sub_cgrp_1
2938
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

2952  # cat /proc/7353/cgroup
2953  0::/sub_cgrp_1
2954  # echo 7353 > batchjobs/container_id2/cgroup.procs
2955  # cat /proc/7353/cgroup
2956  0::/../container_id2
2957
Note that this kind of setup is not encouraged.  A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching
process under the target cgroup namespace root.

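As a non-normative sketch, attaching boils down to opening the target's
/proc/$PID/ns/cgroup file and passing the descriptor to setns(2); the
helper name below is illustrative::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <unistd.h>

  /* Attach to another process's cgroup namespace, given a path such
   * as "/proc/7353/ns/cgroup".  Both CAP_SYS_ADMIN checks above must
   * pass; the caller's own cgroup is not changed by this call. */
  static int enter_cgroupns(const char *ns_path)
  {
          int ret, fd = open(ns_path, O_RDONLY);

          if (fd < 0)
                  return -1;
          ret = setns(fd, CLONE_NEWCGROUP);
          close(fd);
          return ret;
  }

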
Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

2978  # mount -t cgroup2 none $MOUNT_POINT
2979
This will mount the unified cgroup hierarchy with the cgroupns root
as the filesystem root.  The process needs CAP_SYS_ADMIN against its
user and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bio's using the
following two functions.

3004  wbc_init_bio(@wbc, @bio)
	Should be called for each bio carrying writeback data; it
	associates the bio with the inode's owner cgroup and the
	corresponding request queue.  This must be called after
	a queue (device) has been associated with the bio and
	before submission.
3010
3011  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
	Should be called for each data segment being written out.
	While this function doesn't care exactly when it's called
	during the writeback session, it's easiest and most
	natural to call it as data segments are added to a bio.
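
As a condensed, non-normative sketch (the function and its calling
context are illustrative), the two calls typically sit in a
filesystem's writeback submission path as follows::

  static void example_submit_writeback_bio(struct writeback_control *wbc,
                                           struct folio *folio,
                                           struct bio *bio)
  {
          /* The bio already has its block device set; bind it to the
           * inode's owning cgroup before submission... */
          wbc_init_bio(wbc, bio);

          /* ...and account each data segment as it is added. */
          if (bio_add_folio(bio, folio, folio_size(folio), 0))
                  wbc_account_cgroup_owner(wbc, folio, folio_size(folio));

          submit_bio(bio);
  }
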
3016
With writeback bio's annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

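For example, a filesystem might opt in while setting up its
super_block; this is an illustrative sketch (the callback name and
fs_context-style signature are assumptions, not a template from any
particular filesystem)::

  static int example_fill_super(struct super_block *sb, struct fs_context *fc)
  {
          sb->s_iflags |= SB_I_CGROUPWB;  /* writeback bios are annotated */
          /* ... remaining super_block initialization ... */
          return 0;
  }
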
wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
for the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, a hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations; but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it.  This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing could
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo-filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces, and the kernel was
inadvertently exposed to and locked into these constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and its
child cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from the cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory.  The memory consumption of workloads varies during
runtime, and that requires users to overcommit.  But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit.  Since working set size
estimation is hard and error-prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.  The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail.  memory.max on the other hand will first set
the limit to prevent new charges, and then reclaim and OOM kill until
the new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.