.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   [Whenever any new section is added to this document, please also add
    an entry here.]

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Availability
       2-4-2. Enabling and Disabling
       2-4-3. Top-down Constraint
       2-4-4. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Reclaim Protection
       5-2-4. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device controller
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
       5-8-1. DMEM Interface Files
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Misc Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups making up the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before putting the
controllers into use after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.
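
For example, booting with the following kernel parameter keeps every
controller available to the v2 hierarchy (a list of controller names
may be given instead of "all")::

  cgroup_no_v1=all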

cgroup v2 currently supports the following mount options.

  nsdelegate
	Consider cgroup namespaces as delegation boundaries.  This
	option is system wide and can only be set on mount or modified
	through remount from the init namespace.  The mount option is
	ignored on non-init namespace mounts.  Please refer to the
	Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees. This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace. The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups.  This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees.  This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
        Count HugeTLB memory usage towards the cgroup's overall
        memory usage for the memory controller (for the purpose of
        statistics reporting and memory protection). This is a new
        behavior that could regress existing setups, so it must be
        explicitly opted in with this mount option.

        A few caveats to keep in mind:

        * There is no HugeTLB pool management involved in the memory
          controller. The pre-allocated pool does not belong to anyone.
          Specifically, when a new HugeTLB folio is allocated to
          the pool, it is not accounted for from the perspective of the
          memory controller. It is only charged to a cgroup when it is
          actually used (e.g., at page fault time). Host memory
          overcommit management has to consider this when configuring
          hard limits. In general, HugeTLB pool management should be
          done via other mechanisms (such as the HugeTLB controller).
        * Failure to charge a HugeTLB folio to the memory controller
          results in SIGBUS. This could happen even if the HugeTLB pool
          still has pages available (but the cgroup limit is hit and
          reclaim attempt fails).
        * Charging HugeTLB memory towards the memory controller affects
          memory protection and reclaim dynamics. Any userspace tuning
          (e.g., of low and min limits) needs to take this into account.
        * HugeTLB pages utilized while this option is not selected
          will not be tracked by the memory controller (even if cgroup
          v2 is remounted later on).

  pids_localevents
        The option restores v1-like behavior of pids.events:max, that
        is, only local (inside cgroup proper) fork failures are counted.
        Without this option pids.events:max represents any pids.max
        enforcement across the cgroup's subtree.
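
For example, the hierarchy can be mounted with a combination of the
above options as follows::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none $MOUNT_POINT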


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

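For example, assuming the v2 hierarchy is mounted at /sys/fs/cgroup, a
process can be migrated into a child cgroup as follows::

  # echo $PID > /sys/fs/cgroup/$CGROUP_NAME/cgroup.procs
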
When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- Because the cgroup will join the parent's resource domain, the
  parent must either be a valid (threaded) domain or a threaded
  cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain invalid"
in these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids

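Putting the pieces together, a minimal threaded subtree might be set
up as follows (a sketch with a hypothetical cgroup name, starting from
a cgroup which is a valid parent for a threaded domain)::

  # mkdir workers
  # echo threaded > workers/cgroup.type
  # echo "+cpu" > cgroup.subtree_control
  # echo $TID > workers/cgroup.threads
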
[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's 0.  After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
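
For example, a clean-up service can wait for a subtree to become empty
by watching for modification events on the file (a sketch using the
inotifywait utility from inotify-tools, which is assumed to be
installed)::

  # inotifywait -e modify $CGROUP_NAME/cgroup.events
  # grep populated $CGROUP_NAME/cgroup.events
  populated 0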


Controlling Controllers
-----------------------

Availability
~~~~~~~~~~~~

A controller is available in a cgroup when it is supported by the kernel (i.e.,
compiled in, not disabled and not attached to a v1 hierarchy) and listed in the
"cgroup.controllers" file. Availability means the controller's interface files
are exposed in the cgroup's directory, allowing the distribution of the target
resource to be observed or controlled within that cgroup.

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, they either
all succeed or all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files (anything which doesn't start with
"cgroup.") are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
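
For example, a populated cgroup can be prepared for resource control
by pushing its processes into a leaf child before enabling a domain
controller (a sketch with a hypothetical child name)::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control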


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, files
outside the namespace should be hidden from the delegatee by means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

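For the first method, delegation might look like the following sketch,
assuming a v2 hierarchy mounted at /sys/fs/cgroup and a hypothetical
delegatee "$USER"::

  # mkdir /sys/fs/cgroup/delegated
  # chown $USER /sys/fs/cgroup/delegated
  # chown $USER /sys/fs/cgroup/delegated/cgroup.procs
  # chown $USER /sys/fs/cgroup/delegated/cgroup.threads
  # chown $USER /sys/fs/cgroup/delegated/cgroup.subtree_control
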
The end results are equivalent for both delegation types.  Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lower case letters and
underscores but never begins with an underscore, so '_' can be used
as a prefix character for collision avoidance.  Also, interface file
names won't start or end with terms which are often used in
categorizing workloads such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
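
For example, with two active children the split follows directly from
the ratio of the weights::

  cpu.weight(C1) = 100, cpu.weight(C2) = 300
  share(C1) = 100 / (100 + 300) = 25%
  share(C2) = 300 / (100 + 300) = 75%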


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.
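
For example, read BPS and write IOPS limits can be set for a device as
follows (the device numbers and values are hypothetical)::

  # echo "8:16 rbps=2097152 wiops=120" > io.max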

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.
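
For example, a cgroup can be given best-effort protection for half a
gigabyte of memory as follows (the amount is hypothetical; memory
interface files accept suffixes such as K, M and G)::

  # echo 512M > memory.low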


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which grants
no resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.
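
As a concrete illustration, "io.stat" is a nested keyed file and a
read may look like the following (the device numbers and values shown
are hypothetical)::

  # cat io.stat
  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0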


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part, e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
	A read-write single value file which exists on non-root
	cgroups.

	When read, it indicates the current type of the cgroup, which
	can be one of the following values.

	- "domain" : A normal valid domain cgroup.

	- "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

	- "domain invalid" : A cgroup which is in an invalid state.
	  It can't be populated or have controllers enabled.  It may
	  be allowed to become a threaded cgroup.

	- "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

	A cgroup can be turned into a threaded cgroup by writing
	"threaded" to this file.

  cgroup.procs
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the PIDs of all processes which belong to
	the cgroup one-per-line.  The PIDs are not ordered and the
	same PID may show up more than once if the process got moved
	to another cgroup and then back or the PID got recycled while
	reading.

	A PID can be written to migrate the process associated with
	the PID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.procs" file.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

	In a threaded cgroup, reading this file fails with EOPNOTSUPP
	as all the processes belong to the thread root.  Writing is
	supported and moves every thread of the process to the cgroup.

  cgroup.threads
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the TIDs of all threads which belong to
	the cgroup one-per-line.  The TIDs are not ordered and the
	same TID may show up more than once if the thread got moved to
	another cgroup and then back or the TID got recycled while
	reading.

	A TID can be written to migrate the thread associated with the
	TID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.threads" file.

	- The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

  cgroup.controllers
	A read-only space separated values file which exists on all
	cgroups.

	It shows a space separated list of all controllers available
	to the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
	A read-write space separated values file which exists on all
	cgroups.  Starts out empty.

	When read, it shows a space separated list of the controllers
	which are enabled to control resource distribution from the
	cgroup to its children.

	A space separated list of controllers prefixed with '+' or '-'
	can be written to enable or disable controllers.  A controller
	name prefixed with '+' enables the controller and '-'
	disables.  If a controller appears more than once on the list,
	the last one is effective.  When multiple enable and disable
	operations are specified, either all succeed or all fail.

  cgroup.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  populated
		1 if the cgroup or its descendants contain any live
		processes; otherwise, 0.
	  frozen
		1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
	A read-write single value file.  The default is "max".

	Maximum allowed number of descendant cgroups.
	If the actual number of descendants is equal or larger,
	an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
	A read-write single value file.  The default is "max".

	Maximum allowed descendant depth below the current cgroup.
	If the actual descendant depth is equal or larger,
	an attempt to create a new child cgroup will fail.

  cgroup.stat
	A read-only flat-keyed file with the following entries:

	  nr_descendants
		Total number of visible descendant cgroups.

	  nr_dying_descendants
		Total number of dying descendant cgroups. A cgroup becomes
		dying after being deleted by a user. The cgroup will remain
		in the dying state for some undefined time (which can depend
		on system load) before being completely destroyed.

		A process can't enter a dying cgroup under any circumstances,
		and a dying cgroup can't revive.

		A dying cgroup can consume system resources not exceeding
		limits, which were active at the moment of cgroup deletion.

	  nr_subsys_<cgroup_subsys>
		Total number of live cgroup subsystems (e.g. memory
		cgroup) at and beneath the current cgroup.

	  nr_dying_subsys_<cgroup_subsys>
		Total number of dying cgroup subsystems (e.g. memory
		cgroup) at and beneath the current cgroup.

  cgroup.stat.local
	A read-only flat-keyed file which exists in non-root cgroups.
	The following entry is defined:

	  frozen_usec
		Cumulative time that this cgroup has spent between freezing
		and thawing, regardless of whether the freeze was initiated
		by this cgroup itself or by an ancestor.  Note that whether
		the "frozen" state was actually reached makes no difference
		to this accounting.

		Using the following ASCII representation of a cgroup's freezer
		state, ::

			       1    _____
			frozen 0 __/     \__
			          ab    cd

		the duration being measured is the span between a and c.

  cgroup.freeze
	A read-write single value file which exists on non-root cgroups.
	Allowed values are "0" and "1". The default is "0".

	Writing "1" to the file causes freezing of the cgroup and all
	descendant cgroups. This means that all processes belonging to
	them will be stopped and will not run until the cgroup is
	explicitly unfrozen. Freezing of the cgroup may take some time;
	when this action is completed, the "frozen" value in the
	cgroup.events control file will be updated to "1" and the
	corresponding notification will be issued.

	A cgroup can be frozen either by its own settings, or by settings
	of any ancestor cgroups. If any ancestor cgroup is frozen, the
	cgroup will remain frozen.

	Processes in the frozen cgroup can be killed by a fatal signal.
	They can also enter and leave a frozen cgroup: either by an explicit
	move by a user, or if freezing of the cgroup races with fork().
	If a process is moved to a frozen cgroup, it stops. If a process is
	moved out of a frozen cgroup, it becomes running.

	The frozen status of a cgroup doesn't affect any cgroup tree
	operations: it's possible to delete a frozen (and empty) cgroup,
	as well as create new sub-cgroups.
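
	For example, a cgroup can be frozen and the completion of the
	transition confirmed through "cgroup.events" (a sketch; the
	output assumes a populated cgroup and that freezing has
	completed)::

	  # echo 1 > cgroup.freeze
	  # cat cgroup.events
	  populated 1
	  frozen 1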

  cgroup.kill
	A write-only single value file which exists in non-root cgroups.
	The only allowed value is "1".

	Writing "1" to the file causes the cgroup and all descendant cgroups to
	be killed. This means that all processes located in the affected cgroup
	tree will be killed via SIGKILL.

	Killing a cgroup tree will deal with concurrent forks appropriately and
	is protected against migrations.

	In a threaded cgroup, writing this file fails with EOPNOTSUPP as
	killing cgroups is a process directed operation, i.e. it affects
	the whole thread-group.

  cgroup.pressure
	A read-write single value file whose allowed values are "0" and "1".
	The default is "1".

	Writing "0" to the file will disable the cgroup PSI accounting.
	Writing "1" to the file will re-enable the cgroup PSI accounting.

	This control attribute is not hierarchical, so disabling or enabling
	PSI accounting in a cgroup does not affect PSI accounting in its
	descendants and doesn't require enablement to be passed down from
	the root via ancestors.

	The reason this control attribute exists is that PSI accounts stalls for
	each cgroup separately and aggregates it at each level of the hierarchy.
	This may cause non-negligible overhead for some workloads located deep
	in the hierarchy, in which case this control attribute can be used to
	disable PSI accounting in the non-leaf cgroups.

  irq.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for IRQ/SOFTIRQ. See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a temporal
basis and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.

WARNING: cgroup2 cpu controller doesn't yet support the (bandwidth) control of
realtime processes. For a kernel built with the CONFIG_RT_GROUP_SCHED option
enabled for group scheduling of realtime processes, the cpu controller can only
be enabled when all RT processes are in the root cgroup. Be aware that system
management software may already have placed RT processes into non-root cgroups
during the system boot process, and these processes may need to be moved to the
root cgroup before the cpu controller can be enabled with a
CONFIG_RT_GROUP_SCHED enabled kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply and some of
the interface files either affect realtime processes or account for them. See
the following section for details. Only the cpu controller is affected by
CONFIG_RT_GROUP_SCHED. Other controllers can be used for the resource control of
realtime processes irrespective of CONFIG_RT_GROUP_SCHED.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its scheduling
policy and the underlying scheduler. From the point of view of the cpu controller,
processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight`` callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a BPF scheduler
  without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a BPF scheduler,
check out :ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.

For each of the following interface files, the above categories
will be referred to. All time durations are in microseconds.

  cpu.stat
	A read-only flat-keyed file.
	This file exists whether the controller is enabled or not.

	It always reports the following three stats, which account for all the
	processes in the cgroup:

	- usage_usec
	- user_usec
	- system_usec

	and the following five when the controller is enabled, which account for
	only the processes under the fair-class scheduler:

	- nr_periods
	- nr_throttled
	- throttled_usec
	- nr_bursts
	- burst_usec

  cpu.weight
	A read-write single value file which exists on non-root
	cgroups.  The default is "100".

	For non-idle groups (cpu.idle = 0), the weight is in the
	range [1, 10000].

	If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
	then the weight will show as 0.

	This file affects only processes under the fair-class scheduler and a BPF
	scheduler with the ``cgroup_set_weight`` callback depending on what the
	callback actually does.

  cpu.weight.nice
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The nice value is in the range [-20, 19].

	This interface file is an alternative interface for
	"cpu.weight" and allows reading and setting weight using the
	same values used by nice(2).  Because the range is smaller and
	granularity is coarser for the nice values, the read value is
	the closest approximation of the current weight.

	This file affects only processes under the fair-class scheduler and a BPF
	scheduler with the ``cgroup_set_weight`` callback depending on what the
	callback actually does.

  cpu.max
	A read-write two value file which exists on non-root cgroups.
	The default is "max 100000".

	The maximum bandwidth limit.  It's in the following format::

	  $MAX $PERIOD

	which indicates that the group may consume up to $MAX in each
	$PERIOD duration.  "max" for $MAX indicates no limit.  If only
	one number is written, $MAX is updated.

	This file affects only processes under the fair-class scheduler.
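
	For example, the following limits the cgroup to half a CPU by
	allowing 50ms of runtime in every 100ms period::

	  # echo "50000 100000" > cpu.max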

  cpu.max.burst
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The burst in the range [0, $MAX].

	This file affects only processes under the fair-class scheduler.

  cpu.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for CPU. See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

	This file accounts for all the processes in the cgroup.

  cpu.uclamp.min
	A read-write single value file which exists on non-root cgroups.
	The default is "0", i.e. no utilization boosting.

	The requested minimum utilization (protection) as a percentage
	rational number, e.g. 12.34 for 12.34%.

	This interface allows reading and setting minimum utilization clamp
	values similar to sched_setattr(2). This minimum utilization
	value is used to clamp the task specific minimum utilization clamp,
	including those of realtime processes.

	The requested minimum utilization (protection) is always capped by
	the current value for the maximum utilization (limit), i.e.
	`cpu.uclamp.max`.

	This file affects all the processes in the cgroup.

  cpu.uclamp.max
	A read-write single value file which exists on non-root cgroups.
	The default is "max", i.e. no utilization capping.

	The requested maximum utilization (limit) as a percentage rational
	number, e.g. 98.76 for 98.76%.

	This interface allows reading and setting maximum utilization clamp
	values similar to sched_setattr(2). This maximum utilization
	value is used to clamp the task specific maximum utilization clamp,
	including those of realtime processes.

	This file affects all the processes in the cgroup.

  cpu.idle
	A read-write single value file which exists on non-root cgroups.
	The default is 0.

	This is the cgroup analog of the per-task SCHED_IDLE sched policy.
	Setting this value to 1 will make the scheduling policy of the
	cgroup SCHED_IDLE. The threads inside the cgroup will retain their
	own relative priorities, but the cgroup itself will be treated as
	very low priority relative to its peers.

	This file affects only processes under the fair-class scheduler.

Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of memory currently being used by the cgroup
	and its descendants.

  memory.min
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Hard memory protection.  If the memory usage of a cgroup
	is within its effective min boundary, the cgroup's memory
	won't be reclaimed under any conditions. If there is no
	unprotected reclaimable memory available, the OOM killer
	is invoked. Above the effective min boundary (or
	effective low boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	The effective min boundary is limited by memory.min values of
	ancestor cgroups. If there is memory.min overcommitment
	(child cgroup or cgroups are requiring more protected memory
	than parent will allow), then each child cgroup will get
	the part of parent's protection proportional to its
	actual memory usage below memory.min.

	Putting more memory than generally available under this
	protection is discouraged and may lead to constant OOMs.

  memory.low
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Best-effort memory protection.  If the memory usage of a
	cgroup is within its effective low boundary, the cgroup's
	memory won't be reclaimed unless there is no reclaimable
	memory available in unprotected cgroups.
	Above the effective low boundary (or
	effective min boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	The effective low boundary is limited by memory.low values of
	ancestor cgroups. If there is memory.low overcommitment
	(child cgroup or cgroups are requiring more protected memory
	than parent will allow), then each child cgroup will get
	the part of parent's protection proportional to its
	actual memory usage below memory.low.

	Putting more memory than generally available under this
	protection is discouraged.

  memory.high
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage throttle limit.  If a cgroup's usage goes
	over the high boundary, the processes of the cgroup are
	throttled and put under heavy reclaim pressure.

	Going over the high limit never invokes the OOM killer and
	under extreme conditions the limit may be breached. The high
	limit should be used in scenarios where an external process
	monitors the limited cgroup to alleviate heavy reclaim
	pressure.

	If memory.high is opened with O_NONBLOCK then the synchronous
	reclaim is bypassed. This is useful for admin processes that
	need to dynamically adjust the job's memory limits without
	expending their own CPU resources on memory reclamation. The
	job will trigger the reclaim and/or get throttled on its
	next charge request.

	Please note that with O_NONBLOCK, there is a chance that the
	target memory cgroup may take an indefinite amount of time to
	reduce usage below the limit due to delayed charge request or
	busy-hitting its memory to slow down reclaim.

  memory.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage hard limit.  This is the main mechanism to limit
	memory usage of a cgroup.  If a cgroup's memory usage reaches
	this limit and can't be reduced, the OOM killer is invoked in
	the cgroup. Under certain circumstances, the usage may go
	over the limit temporarily.

	In the default configuration, regular 0-order allocations always
	succeed unless the OOM killer chooses the current task as a victim.

	Some kinds of allocations don't invoke the OOM killer.
	The caller could retry them differently, return into userspace
	as -ENOMEM or silently ignore them in cases like disk readahead.

	If memory.max is opened with O_NONBLOCK, then the synchronous
	reclaim and oom-kill are bypassed. This is useful for admin
	processes that need to dynamically adjust the job's memory limits
	without expending their own CPU resources on memory reclamation.
	The job will trigger the reclaim and/or oom-kill on its next
	charge request.

	Please note that with O_NONBLOCK, there is a chance that the
	target memory cgroup may take an indefinite amount of time to
	reduce usage below the limit due to delayed charge request or
	busy-hitting its memory to slow down reclaim.

  memory.reclaim
	A write-only nested-keyed file which exists for all cgroups.

	This is a simple interface to trigger memory reclaim in the
	target cgroup.

	Example::

	  echo "1G" > memory.reclaim

	Please note that the kernel can over or under reclaim from
	the target cgroup. If fewer bytes are reclaimed than the
	specified amount, -EAGAIN is returned.

	Please note that the proactive reclaim (triggered by this
	interface) is not meant to indicate memory pressure on the
	memory cgroup. Therefore socket memory balancing triggered by
	the memory reclaim normally is not exercised in this case.
	This means that the networking layer will not adapt based on
	reclaim induced by memory.reclaim.

	The following nested keys are defined.

	  ==========            ================================
	  swappiness            Swappiness value to reclaim with
	  ==========            ================================

	Specifying a swappiness value instructs the kernel to perform
	the reclaim with that swappiness value. Note that this has the
	same semantics as vm.swappiness applied to memcg reclaim with
	all the existing limitations and potential future extensions.

	The valid range for swappiness is [0-200] plus the special
	value "max"; setting swappiness=max exclusively reclaims
	anonymous memory.
1439
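	For instance, to reclaim up to 512M from anonymous memory only,
	leaving the page cache untouched (a hedged sketch of the
	nested-key syntax)::

	  echo "512M swappiness=max" > memory.reclaim
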
  memory.peak
	A read-write single value file which exists on non-root cgroups.

	The max memory usage recorded for the cgroup and its descendants since
	either the creation of the cgroup or the most recent reset for that FD.

	A write of any non-empty string to this file resets it to the
	current memory usage for subsequent reads through the same
	file descriptor.

  memory.oom.group
	A read-write single value file which exists on non-root
	cgroups.  The default value is "0".

	Determines whether the cgroup should be treated as
	an indivisible workload by the OOM killer. If set,
	all tasks belonging to the cgroup or to its descendants
	(if the memory cgroup is not a leaf cgroup) are killed
	together or not at all. This can be used to avoid
	partial kills to guarantee workload integrity.

	Tasks with the OOM protection (oom_score_adj set to -1000)
	are treated as an exception and are never killed.

	If the OOM killer is invoked in a cgroup, it's not going
	to kill any tasks outside of this cgroup, regardless of
	the memory.oom.group values of ancestor cgroups.

  memory.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	Note that all fields in this file are hierarchical and the
	file modified event can be generated due to an event down the
	hierarchy. For the local events at the cgroup level see
	memory.events.local.

	  low
		The number of times the cgroup is reclaimed due to
		high memory pressure even though its usage is under
		the low boundary.  This usually indicates that the low
		boundary is over-committed.

	  high
		The number of times processes of the cgroup are
		throttled and routed to perform direct memory reclaim
		because the high memory boundary was exceeded.  For a
		cgroup whose memory usage is capped by the high limit
		rather than global memory pressure, this event's
		occurrences are expected.

	  max
		The number of times the cgroup's memory usage was
		about to go over the max boundary.  If direct reclaim
		fails to bring it down, the cgroup goes to OOM state.

	  oom
		The number of times the cgroup's memory usage reached
		the limit and allocation was about to fail.

		This event is not raised if the OOM killer is not
		considered as an option, e.g. for failed high-order
		allocations or if the caller asked not to retry.

	  oom_kill
		The number of processes belonging to this cgroup
		killed by any kind of OOM killer.

	  oom_group_kill
		The number of times a group OOM has occurred.

	  sock_throttled
		The number of times network sockets associated with
		this cgroup are throttled.

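	Since value changes generate file modified events, a monitoring
	agent can wait for them instead of polling.  A hedged sketch,
	assuming the inotifywait utility and a cgroup at
	/sys/fs/cgroup/job::

	  # blocks until a new memory event is recorded
	  inotifywait -e modify /sys/fs/cgroup/job/memory.events
	  grep oom_kill /sys/fs/cgroup/job/memory.events
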
  memory.events.local
	Similar to memory.events but the fields in the file are local
	to the cgroup i.e. not hierarchical. The file modified event
	generated on this file reflects only the local events.

  memory.stat
	A read-only flat-keyed file which exists on non-root cgroups.

	This breaks down the cgroup's memory footprint into different
	types of memory, type-specific details, and other information
	on the state and past events of the memory management system.

	All memory amounts are in bytes.

	The entries are ordered to be human readable, and new entries
	can show up in the middle. Don't rely on items remaining in a
	fixed position; use the keys to look up specific values!

	Entries tagged 'npn' (non-per-node) have no per-node counter
	and will not show up in memory.numa_stat.

	  anon
		Amount of memory used in anonymous mappings such as
		brk(), sbrk(), and mmap(MAP_ANONYMOUS). Note that
		some kernel configurations might account complete larger
		allocations (e.g., THP) if only some, but not all the
		memory of such an allocation is mapped anymore.

	  file
		Amount of memory used to cache filesystem data,
		including tmpfs and shared memory.

	  kernel (npn)
		Amount of total kernel memory, including
		(kernel_stack, pagetables, percpu, vmalloc, slab) in
		addition to other kernel memory use cases.

	  kernel_stack
		Amount of memory allocated to kernel stacks.

	  pagetables
		Amount of memory allocated for page tables.

	  sec_pagetables
		Amount of memory allocated for secondary page tables;
		this currently includes KVM mmu allocations on x86
		and arm64 and IOMMU page tables.

	  percpu (npn)
		Amount of memory used for storing per-cpu kernel
		data structures.

	  sock (npn)
		Amount of memory used in network transmission buffers.

	  vmalloc (npn)
		Amount of memory used for vmap backed memory.

	  shmem
		Amount of cached filesystem data that is swap-backed,
		such as tmpfs, shm segments, and shared anonymous mmap()s.

	  zswap
		Amount of memory consumed by the zswap compression backend.

	  zswapped
		Amount of application memory swapped out to zswap.

	  file_mapped
		Amount of cached filesystem data mapped with mmap(). Note
		that some kernel configurations might account complete
		larger allocations (e.g., THP) if only some, but
		not all the memory of such an allocation is mapped.

	  file_dirty
		Amount of cached filesystem data that was modified but
		not yet written back to disk.

	  file_writeback
		Amount of cached filesystem data that was modified and
		is currently being written back to disk.

	  swapcached
		Amount of swap cached in memory. The swapcache is accounted
		against both memory and swap usage.

	  anon_thp
		Amount of memory used in anonymous mappings backed by
		transparent hugepages.

	  file_thp
		Amount of cached filesystem data backed by transparent
		hugepages.

	  shmem_thp
		Amount of shm, tmpfs, and shared anonymous mmap()s backed
		by transparent hugepages.

	  inactive_anon, active_anon, inactive_file, active_file, unevictable
		Amount of memory, swap-backed and filesystem-backed,
		on the internal memory management lists used by the
		page reclaim algorithm.

		As these represent internal list state (eg. shmem pages are on anon
		memory management lists), inactive_foo + active_foo may not be equal to
		the value for the foo counter, since the foo counter is type-based, not
		list-based.

	  slab_reclaimable
		Part of "slab" that might be reclaimed, such as
		dentries and inodes.

	  slab_unreclaimable
		Part of "slab" that cannot be reclaimed on memory
		pressure.

	  slab (npn)
		Amount of memory used for storing in-kernel data
		structures.

	  workingset_refault_anon
		Number of refaults of previously evicted anonymous pages.

	  workingset_refault_file
		Number of refaults of previously evicted file pages.

	  workingset_activate_anon
		Number of refaulted anonymous pages that were immediately
		activated.

	  workingset_activate_file
		Number of refaulted file pages that were immediately activated.

	  workingset_restore_anon
		Number of restored anonymous pages which have been detected as
		an active workingset before they got reclaimed.

	  workingset_restore_file
		Number of restored file pages which have been detected as an
		active workingset before they got reclaimed.

	  workingset_nodereclaim
		Number of times a shadow node has been reclaimed.

	  pswpin (npn)
		Number of pages swapped into memory.

	  pswpout (npn)
		Number of pages swapped out of memory.

	  pgscan (npn)
		Amount of scanned pages (in an inactive LRU list).

	  pgsteal (npn)
		Amount of reclaimed pages.

	  pgscan_kswapd (npn)
		Amount of pages scanned by kswapd (in an inactive LRU list).

	  pgscan_direct (npn)
		Amount of pages scanned directly (in an inactive LRU list).

	  pgscan_khugepaged (npn)
		Amount of pages scanned by khugepaged (in an inactive LRU list).

	  pgscan_proactive (npn)
		Amount of pages scanned proactively (in an inactive LRU list).

	  pgsteal_kswapd (npn)
		Amount of pages reclaimed by kswapd.

	  pgsteal_direct (npn)
		Amount of pages reclaimed directly.

	  pgsteal_khugepaged (npn)
		Amount of pages reclaimed by khugepaged.

	  pgsteal_proactive (npn)
		Amount of pages reclaimed proactively.

	  pgfault (npn)
		Total number of page faults incurred.

	  pgmajfault (npn)
		Number of major page faults incurred.

	  pgrefill (npn)
		Amount of scanned pages (in an active LRU list).

	  pgactivate (npn)
		Amount of pages moved to the active LRU list.

	  pgdeactivate (npn)
		Amount of pages moved to the inactive LRU list.

	  pglazyfree (npn)
		Amount of pages postponed to be freed under memory pressure.

	  pglazyfreed (npn)
		Amount of reclaimed lazyfree pages.

	  swpin_zero
		Number of pages swapped into memory and filled with zero, where I/O
		was optimized out because the page content was detected to be zero
		during swapout.

	  swpout_zero
		Number of zero-filled pages swapped out with I/O skipped due to the
		content being detected as zero.

	  zswpin
		Number of pages moved into memory from zswap.

	  zswpout
		Number of pages moved out of memory to zswap.

	  zswpwb
		Number of pages written from zswap to swap.

	  thp_fault_alloc (npn)
		Number of transparent hugepages which were allocated to satisfy
		a page fault. This counter is not present when
		CONFIG_TRANSPARENT_HUGEPAGE is not set.

	  thp_collapse_alloc (npn)
		Number of transparent hugepages which were allocated to allow
		collapsing an existing range of pages. This counter is not
		present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

	  thp_swpout (npn)
		Number of transparent hugepages which were swapped out in one
		piece without splitting.

	  thp_swpout_fallback (npn)
		Number of transparent hugepages which were split before swapout,
		usually because allocating some contiguous swap space for the
		huge page failed.

	  numa_pages_migrated (npn)
		Number of pages migrated by NUMA balancing.

	  numa_pte_updates (npn)
		Number of pages whose page table entries are modified by
		NUMA balancing to produce NUMA hinting faults on access.

	  numa_hint_faults (npn)
		Number of NUMA hinting faults.

	  pgdemote_kswapd
		Number of pages demoted by kswapd.

	  pgdemote_direct
		Number of pages demoted directly.

	  pgdemote_khugepaged
		Number of pages demoted by khugepaged.

	  pgdemote_proactive
		Number of pages demoted proactively.

	  hugetlb
		Amount of memory used by hugetlb pages. This metric only shows
		up if hugetlb usage is accounted for in memory.current (i.e.
		the cgroup is mounted with the memory_hugetlb_accounting option).

  memory.numa_stat
	A read-only nested-keyed file which exists on non-root cgroups.

	This breaks down the cgroup's memory footprint into different
	types of memory, type-specific details, and other information
	per node on the state of the memory management system.

	This is useful for providing visibility into the NUMA locality
	information within a memcg since the pages are allowed to be
	allocated from any physical node. One use case is evaluating
	application performance by combining this information with the
	application's CPU allocation.

	All memory amounts are in bytes.

	The output format of memory.numa_stat is::

	  type N0=<bytes in node 0> N1=<bytes in node 1> ...

	The entries are ordered to be human readable, and new entries
	can show up in the middle. Don't rely on items remaining in a
	fixed position; use the keys to look up specific values!

	The entries have the same meanings as their memory.stat
	counterparts.

  memory.swap.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of swap currently being used by the cgroup
	and its descendants.

  memory.swap.high
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Swap usage throttle limit.  If a cgroup's swap usage exceeds
	this limit, all its further allocations will be throttled to
	allow userspace to implement custom out-of-memory procedures.

	This limit marks a point of no return for the cgroup. It is NOT
	designed to manage the amount of swapping a workload does
	during regular operation. Compare to memory.swap.max, which
	prohibits swapping past a set amount, but lets the cgroup
	continue unimpeded as long as other memory can be reclaimed.

	Healthy workloads are not expected to reach this limit.

  memory.swap.peak
	A read-write single value file which exists on non-root cgroups.

	The max swap usage recorded for the cgroup and its descendants since
	the creation of the cgroup or the most recent reset for that FD.

	A write of any non-empty string to this file resets it to the
	current swap usage for subsequent reads through the same
	file descriptor.

  memory.swap.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Swap usage hard limit.  If a cgroup's swap usage reaches this
	limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  high
		The number of times the cgroup's swap usage was over
		the high threshold.

	  max
		The number of times the cgroup's swap usage was about
		to go over the max boundary and swap allocation
		failed.

	  fail
		The number of times swap allocation failed either
		because of running out of swap system-wide or because
		of the max limit.

	When reduced under the current usage, the existing swap
	entries are reclaimed gradually and the swap usage may stay
	higher than the limit for an extended period of time.  This
	reduces the impact on the workload and memory management.

  memory.zswap.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of memory consumed by the zswap compression
	backend.

  memory.zswap.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Zswap usage hard limit. If a cgroup's zswap pool reaches this
	limit, it will refuse to take any more stores before existing
	entries fault back in or are written out to disk.

  memory.zswap.writeback
	A read-write single value file. The default value is "1".
	Note that this setting is hierarchical, i.e. the writeback would be
	implicitly disabled for child cgroups if the upper hierarchy
	does so.

	When this is set to 0, all swapping attempts to swap devices
	are disabled. This includes both zswap writebacks and swapping
	due to zswap store failures. If the zswap store failures are
	recurring (e.g. if the pages are incompressible), users can
	observe reclaim inefficiency after disabling writeback (because
	the same pages might be rejected again and again).

	Note that this is subtly different from setting memory.swap.max to
	0, as it still allows for pages to be written to the zswap pool.
	This setting has no effect if zswap is disabled, and swapping
	is allowed unless memory.swap.max is set to 0.

  memory.pressure
	A read-only nested-keyed file.

	Shows pressure stall information for memory. See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also perform
just as well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.

Reclaim Protection
~~~~~~~~~~~~~~~~~~

The protection configured with "memory.low" or "memory.min" applies relatively
to the target of the reclaim (i.e. any of the memory cgroup limits, proactive
memory.reclaim or global reclaim apparently located in the root cgroup).
The protection value configured for B applies unchanged to the reclaim
targeting A (i.e. caused by competition with the sibling E)::

		root - ... - A - B - C
		              \    ` D
		               ` E

When the reclaim targets ancestors of A, the effective protection of B is
capped by the protection value configured for A (and any other intermediate
ancestors between A and the target).

To express indifference about relative sibling protection, it is suggested to
use memory_recursiveprot.  Configuring all descendants of a parent with finite
protection to "max" works but it may unnecessarily skew the memory.events:low
field.

Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
	A read-only nested-keyed file.

	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
	The following nested keys are defined.

	  ======	=====================
	  rbytes	Bytes read
	  wbytes	Bytes written
	  rios		Number of read IOs
	  wios		Number of write IOs
	  dbytes	Bytes discarded
	  dios		Number of discard IOs
	  ======	=====================

	An example read output follows::

	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
	A read-write nested-keyed file which exists only on the root
	cgroup.

	This file configures the Quality of Service of the IO cost
	model based controller (CONFIG_BLK_CGROUP_IOCOST) which
	currently implements "io.weight" proportional control.  Lines
	are keyed by $MAJ:$MIN device numbers and not ordered.  The
	line for a given device is populated on the first write for
	the device on "io.cost.qos" or "io.cost.model".  The following
	nested keys are defined.

	  ======	=====================================
	  enable	Weight-based control enable
	  ctrl		"auto" or "user"
	  rpct		Read latency percentile    [0, 100]
	  rlat		Read latency threshold
	  wpct		Write latency percentile   [0, 100]
	  wlat		Write latency threshold
	  min		Minimum scaling percentage [1, 10000]
	  max		Maximum scaling percentage [1, 10000]
	  ======	=====================================

	The controller is disabled by default and can be enabled by
	setting "enable" to 1.  "rpct" and "wpct" parameters default
	to zero and the controller uses internal device saturation
	state to adjust the overall IO rate between "min" and "max".

	When a better control quality is needed, latency QoS
	parameters can be configured.  For example::

	  8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0

	shows that on sdb, the controller is enabled, will consider
	the device saturated if the 95th percentile of read completion
	latencies is above 75ms or that of write completion latencies
	is above 150ms, and will adjust the overall IO issue rate
	between 50% and 150% accordingly.

	The lower the saturation point, the better the latency QoS at
	the cost of aggregate bandwidth.  The narrower the allowed
	adjustment range between "min" and "max", the more closely the
	IO behavior conforms to the cost model.  Note that the IO issue
	base rate may be far off from 100% and setting "min" and "max"
	blindly can lead to a significant loss of device capacity or
	control quality.  "min" and "max" are useful for regulating
	devices which show wide temporary behavior changes - e.g. an
	SSD which accepts writes at the line speed for a while and
	then completely stalls for multiple seconds.

	When "ctrl" is "auto", the parameters are controlled by the
	kernel and may change automatically.  Setting "ctrl" to "user"
	or setting any of the percentile and latency parameters puts
	it into "user" mode and disables the automatic changes.  The
	automatic mode can be restored by setting "ctrl" to "auto".

  io.cost.model
	A read-write nested-keyed file which exists only on the root
	cgroup.

	This file configures the cost model of the IO cost model based
	controller (CONFIG_BLK_CGROUP_IOCOST) which currently
	implements "io.weight" proportional control.  Lines are keyed
	by $MAJ:$MIN device numbers and not ordered.  The line for a
	given device is populated on the first write for the device on
	"io.cost.qos" or "io.cost.model".  The following nested keys
	are defined.

	  =====		================================
	  ctrl		"auto" or "user"
	  model		The cost model in use - "linear"
	  =====		================================

	When "ctrl" is "auto", the kernel may change all parameters
	dynamically.  When "ctrl" is set to "user" or any other
	parameters are written to, "ctrl" becomes "user" and the
	automatic changes are disabled.

	When "model" is "linear", the following model parameters are
	defined.

	  =============	========================================
	  [r|w]bps	The maximum sequential IO throughput
	  [r|w]seqiops	The maximum 4k sequential IOs per second
	  [r|w]randiops	The maximum 4k random IOs per second
	  =============	========================================

	From the above, the builtin linear model determines the base
	costs of a sequential and random IO and the cost coefficient
	for the IO size.  While simple, this model can cover most
	common device classes acceptably.

	The IO cost model isn't expected to be accurate in an absolute
	sense and is scaled to the device behavior dynamically.

	If needed, tools/cgroup/iocost_coef_gen.py can be used to
	generate device-specific coefficients.

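	As a hedged sketch (the device number and all coefficients
	below are made up, not measured), a linear model could be
	installed with a write like::

	  echo "8:16 model=linear rbps=2147483648 rseqiops=40000 rrandiops=30000 wbps=1073741824 wseqiops=20000 wrandiops=15000" > io.cost.model
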
  io.weight
	A read-write flat-keyed file which exists on non-root cgroups.
	The default is "default 100".

	The first line is the default weight applied to devices
	without specific override.  The rest are overrides keyed by
	$MAJ:$MIN device numbers and not ordered.  The weights are in
	the range [1, 10000] and specify the relative amount of IO
	time the cgroup can use in relation to its siblings.

	The default weight can be updated by writing either "default
	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

	An example read output follows::

	  default 100
	  8:16 200
	  8:0 50

  io.max
	A read-write nested-keyed file which exists on non-root
	cgroups.

	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
	device numbers and not ordered.  The following nested keys are
	defined.

	  =====		==================================
	  rbps		Max read bytes per second
	  wbps		Max write bytes per second
	  riops		Max read IO operations per second
	  wiops		Max write IO operations per second
	  =====		==================================

	When writing, any number of nested key-value pairs can be
	specified in any order.  "max" can be specified as the value
	to remove a specific limit.  If the same key is specified
	multiple times, the outcome is undefined.

	BPS and IOPS are measured in each IO direction and IOs are
	delayed if the limit is reached.  Temporary bursts are allowed.

	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

	  echo "8:16 rbps=2097152 wiops=120" > io.max

	Reading returns the following::

	  8:16 rbps=2097152 wbps=max riops=max wiops=120

	Write IOPS limit can be removed by writing the following::

	  echo "8:16 wiops=max" > io.max

	Reading now returns the following::

	  8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
	A read-only nested-keyed file.

	Shows pressure stall information for IO. See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.  The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain.  Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
mechanism constantly keeps track of foreign pages and, if a particular
foreign cgroup becomes the majority over a certain period of time,
switches the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected.  It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
	These ratios apply the same to cgroup writeback with the
	amount of available memory capped by limits imposed by the
	memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
	For cgroup writeback, this is calculated into a ratio against
	total available memory and applied the same way as
	vm.dirty[_background]_ratio.  For example, a vm.dirty_bytes of
	400MB on a system with 4GB of available memory behaves like a
	10% vm.dirty_ratio applied against the cgroup's own available
	memory.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You provide a group
with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
protected workload.

The limits are only applied at the peer level in the hierarchy.  This means that
in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other.  Group G will influence nobody::

			[root]
		/	   |		\
		A	   B		C
	       /  \        |
	      D    F	   G


So the ideal way to configure this is to set io.latency in groups A, B, and C.
Generally you do not want to set a value lower than the latency your device
supports.  Experiment to find the value that works best for your workload.
Start at higher than the expected latency for your device and watch the
avg_lat value in io.stat for your workload group to get an idea of the
latency you see during normal operation.  Use the avg_lat value as a basis for
your real setting, setting at 10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving: as long as everybody is meeting their latency
target the controller doesn't do anything.  Once a group starts missing its
target it begins throttling any peer group that has a higher target than itself.
This throttling takes 2 forms:

- Queue depth throttling.  This is the number of outstanding IOs a group is
  allowed to have.  We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that cannot be
  throttled without possibly adversely affecting higher priority groups.  This
  includes swapping and metadata IO.  These types of IO are allowed to occur
  normally, however they are "charged" to the originating group.  If the
  originating group is being throttled you will see the use_delay and delay
  fields in io.stat increase.  The delay value is how many microseconds that are
  being added to any process that runs in this group.  Because this number can
  grow quite large if there is a lot of swapping or metadata IO occurring we
  limit the individual delay events to 1 second at a time.

Once the victimized group starts meeting its latency target again it will start
unthrottling any peer groups that were throttled previously.  If the victimized
group simply stops doing IO the global counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
	This takes a format similar to that of the other controllers.

		"MAJOR:MINOR target=<target time in microseconds>"

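	For example, a 75ms (75000us) target for device 8:16 could be
	set with (a hedged sketch; pick a target suited to your
	device)::

	  echo "8:16 target=75000" > io.latency
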
  io.stat
	If the controller is enabled you will see extra stats in io.stat in
	addition to the normal ones.

	  depth
		This is the current queue depth for the group.

	  avg_lat
		This is an exponential moving average with a decay rate of 1/exp
		bound by the sampling interval.  The decay rate interval can be
		calculated by multiplying the win value in io.stat by the
		corresponding number of samples based on the win value.

	  win
		The sampling window size in milliseconds.  This is the minimum
		duration of time between evaluation events.  Windows only elapse
		with IO activity.  Idle periods extend the most recent window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute. The following values are accepted for
that attribute:

  no-change
	Do not modify the I/O priority class.

  promote-to-rt
	For requests that have a non-RT I/O priority class, change it into RT.
	Also change the priority level of these requests to 4. Do not modify
	the I/O priority of requests that have priority class RT.

  restrict-to-be
	For requests that do not have an I/O priority class or that have I/O
	priority class RT, change it into BE. Also change the priority level
	of these requests to 0. Do not modify the I/O priority class of
	requests that have priority class IDLE.

  idle
	Change the I/O priority class of all requests into IDLE, the lowest
	I/O priority class.

  none-to-rt
	Deprecated. Just an alias for promote-to-rt.

The following numerical values are associated with the I/O priority policies:

+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+

The numerical value that corresponds to each I/O priority class is as follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as follows:

- If the I/O priority class policy is promote-to-rt, change the request I/O
  priority class to IOPRIO_CLASS_RT and change the request I/O priority
  level to 4.
- If the I/O priority class policy is not promote-to-rt, translate the I/O
  priority class policy into a number, then change the request I/O priority
  class into the maximum of the I/O priority class policy number and the
  numerical I/O priority class (see the worked example below).
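
As a worked illustration of the second rule, with the restrict-to-be
policy (numerical value 2)::

  IOPRIO_CLASS_RT   (1) -> max(2, 1) = 2 -> IOPRIO_CLASS_BE
  IOPRIO_CLASS_IDLE (3) -> max(2, 3) = 3 -> IOPRIO_CLASS_IDLE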

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller.  For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Hard limit of number of processes.

  pids.current
	A read-only single value file which exists on non-root cgroups.

	The number of processes currently in the cgroup and its
	descendants.

  pids.peak
	A read-only single value file which exists on non-root cgroups.

	The maximum value that the number of processes in the cgroup and its
	descendants has ever reached.

  pids.events
	A read-only flat-keyed file which exists on non-root cgroups. Unless
	specified otherwise, a value change in this file generates a file
	modified event. The following entries are defined.

	  max
		The number of times the cgroup's total number of processes hit the pids.max
		limit (see also pids_localevents).

  pids.events.local
	Similar to pids.events but the fields in the file are local
	to the cgroup i.e. not hierarchical. The file modified event
	generated on this file reflects only the local events.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max.  This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max.  However, it is not possible to violate a cgroup PID policy
through fork() or clone(). These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated, as the
sketch below demonstrates.

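A hedged demonstration (the cgroup path is hypothetical)::

  # limit the cgroup to 2 tasks and move the current shell in
  echo 2 > /sys/fs/cgroup/demo/pids.max
  echo $$ > /sys/fs/cgroup/demo/cgroup.procs
  sleep 60 &    # second task: succeeds
  sleep 60 &    # third task: fork() fails with -EAGAIN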

Cpuset
------

The "cpuset" controller provides a mechanism for constraining
the CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the systems with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical.  That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
	A read-write multiple values file which exists on non-root
	cpuset-enabled cgroups.

	It lists the requested CPUs to be used by tasks within this
	cgroup.  The actual list of CPUs to be granted, however, is
	subject to constraints imposed by its parent and can differ
	from the requested CPUs.

	The CPU numbers are comma-separated numbers or ranges.
	For example::

	  # cat cpuset.cpus
	  0-4,6,8-10

	An empty value indicates that the cgroup is using the same
	setting as the nearest cgroup ancestor with a non-empty
	"cpuset.cpus" or all the available CPUs if none is found.

	The value of "cpuset.cpus" stays constant until the next update
	and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
	A read-only multiple values file which exists on all
	cpuset-enabled cgroups.

	It lists the onlined CPUs that are actually granted to this
	cgroup by its parent.  These CPUs are allowed to be used by
	tasks within the current cgroup.

	If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
	all the CPUs from the parent cgroup that are available to be
	used by this cgroup.  Otherwise, it should be a subset of
	"cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
	can be granted.  In this case, it will be treated just like an
	empty "cpuset.cpus".

	Its value will be affected by CPU hotplug events.

  cpuset.mems
	A read-write multiple values file which exists on non-root
	cpuset-enabled cgroups.

	It lists the requested memory nodes to be used by tasks within
	this cgroup.  The actual list of memory nodes granted, however,
	is subject to constraints imposed by its parent and can differ
	from the requested memory nodes.

	The memory node numbers are comma-separated numbers or ranges.
	For example::

	  # cat cpuset.mems
	  0-1,3

	An empty value indicates that the cgroup is using the same
	setting as the nearest cgroup ancestor with a non-empty
	"cpuset.mems" or all the available memory nodes if none
	is found.

	The value of "cpuset.mems" stays constant until the next update
	and won't be affected by any memory node hotplug events.

	Setting a non-empty value to "cpuset.mems" causes memory of
	tasks within the cgroup to be migrated to the designated nodes if
	they are currently using memory outside of the designated nodes.

	There is a cost for this memory migration.  The migration
	may not be complete and some memory pages may be left behind.
	So it is recommended that "cpuset.mems" should be set properly
	before spawning new tasks into the cpuset.  Even if there is
	a need to change "cpuset.mems" with active tasks, it shouldn't
	be done frequently.

  cpuset.mems.effective
	A read-only multiple values file which exists on all
	cpuset-enabled cgroups.

	It lists the onlined memory nodes that are actually granted to
	this cgroup by its parent. These memory nodes are allowed to
	be used by tasks within the current cgroup.

	If "cpuset.mems" is empty, it shows all the memory nodes from the
	parent cgroup that will be available to be used by this cgroup.
	Otherwise, it should be a subset of "cpuset.mems" unless none of
	the memory nodes listed in "cpuset.mems" can be granted.  In this
	case, it will be treated just like an empty "cpuset.mems".

	Its value will be affected by memory node hotplug events.

  cpuset.cpus.exclusive
	A read-write multiple values file which exists on non-root
	cpuset-enabled cgroups.

	It lists all the exclusive CPUs that are allowed to be used
	to create a new cpuset partition.  Its value is not used
	unless the cgroup becomes a valid partition root.  See the
	"cpuset.cpus.partition" section below for a description of what
	a cpuset partition is.

	When the cgroup becomes a partition root, the actual exclusive
	CPUs that are allocated to that partition are listed in
	"cpuset.cpus.exclusive.effective" which may be different
	from "cpuset.cpus.exclusive".  If "cpuset.cpus.exclusive"
	has previously been set, "cpuset.cpus.exclusive.effective"
	is always a subset of it.

	Users can manually set it to a value that is different from
	"cpuset.cpus".  One constraint in setting it is that the list of
	CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
	and "cpuset.cpus.exclusive.effective" of its siblings.  Another
	constraint is that it cannot be a superset of "cpuset.cpus"
	of its sibling in order to leave at least one CPU available to
	that sibling when the exclusive CPUs are taken away.

	For a parent cgroup, any one of its exclusive CPUs can only
	be distributed to at most one of its child cgroups.  Having an
	exclusive CPU appearing in two or more of its child cgroups is
	not allowed (the exclusivity rule).  A value that violates the
	exclusivity rule will be rejected with a write error.

	The root cgroup is a partition root and all its available CPUs
	are in its exclusive CPU set.

  cpuset.cpus.exclusive.effective
	A read-only multiple values file which exists on all non-root
	cpuset-enabled cgroups.

	This file shows the effective set of exclusive CPUs that
	can be used to create a partition root.  The content
	of this file will always be a subset of its parent's
	"cpuset.cpus.exclusive.effective" if its parent is not the root
	cgroup.  It will also be a subset of "cpuset.cpus.exclusive"
	if it is set.  This file should only be non-empty if either
	"cpuset.cpus.exclusive" is set or when the current cpuset is
	a valid partition root.

  cpuset.cpus.isolated
	A read-only multiple values file which exists only on the
	root cgroup.

	This file shows the set of all isolated CPUs used in existing
	isolated partitions. It will be empty if no isolated partition
	is created.

  cpuset.cpus.partition
	A read-write single value file which exists on non-root
	cpuset-enabled cgroups.  This flag is owned by the parent cgroup
	and is not delegatable.

	It accepts only the following input values when written to.

	  ==========	=====================================
	  "member"	Non-root member of a partition
	  "root"	Partition root
	  "isolated"	Partition root without load balancing
	  ==========	=====================================

	A cpuset partition is a collection of cpuset-enabled cgroups with
	a partition root at the top of the hierarchy and its descendants
	except those that are separate partition roots themselves and
	their descendants.  A partition has exclusive access to the
	set of exclusive CPUs allocated to it.  Other cgroups outside
	of that partition cannot use any CPUs in that set.

	There are two types of partitions - local and remote.  A local
	partition is one whose parent cgroup is also a valid partition
	root.  A remote partition is one whose parent cgroup is not a
	valid partition root itself.

	Writing to "cpuset.cpus.exclusive" is optional for the creation
	of a local partition as its "cpuset.cpus.exclusive" file will
	assume an implicit value that is the same as "cpuset.cpus" if it
	is not set.  Writing the proper "cpuset.cpus.exclusive" values
	down the cgroup hierarchy before the target partition root is
	mandatory for the creation of a remote partition.

	Not all the CPUs requested in "cpuset.cpus.exclusive" can be
	used to form a new partition.  Only those that were present
	in its parent's "cpuset.cpus.exclusive.effective" control
	file can be used.  For partitions created without setting
	"cpuset.cpus.exclusive", exclusive CPUs specified in a sibling's
	"cpuset.cpus.exclusive" or "cpuset.cpus.exclusive.effective"
	also cannot be used.

	Currently, a remote partition cannot be created under a local
	partition.  All the ancestors of a remote partition root except
	the root cgroup cannot be a partition root.

	The root cgroup is always a partition root and its state cannot
	be changed.  All other non-root cgroups start out as "member".
	Even though the "cpuset.cpus.exclusive*" and "cpuset.cpus"
	control files are not present in the root cgroup, they are
	implicitly the same as the "/sys/devices/system/cpu/possible"
	sysfs file.

	When set to "root", the current cgroup is the root of a new
	partition or scheduling domain.  The set of exclusive CPUs is
	determined by the value of its "cpuset.cpus.exclusive.effective".

	When set to "isolated", the CPUs in that partition will be in
	an isolated state without any load balancing from the scheduler
	and excluded from the unbound workqueues.  Tasks placed in such
	a partition with multiple CPUs should be carefully distributed
	and bound to each of the individual CPUs for optimal performance.

	A partition root ("root" or "isolated") can be in one of the
	two possible states - valid or invalid.  An invalid partition
	root is in a degraded state where some state information may
	be retained, but behaves more like a "member".

	All possible state transitions among "member", "root" and
	"isolated" are allowed.

	On read, the "cpuset.cpus.partition" file can show the following
	values.

	  =============================	=====================================
	  "member"			Non-root member of a partition
	  "root"			Partition root
	  "isolated"			Partition root without load balancing
	  "root invalid (<reason>)"	Invalid partition root
	  "isolated invalid (<reason>)"	Invalid isolated partition root
	  =============================	=====================================

	In the case of an invalid partition root, a descriptive string on
	why the partition is invalid is included within parentheses.

	For a local partition root to be valid, the following conditions
	must be met.

	1) The parent cgroup is a valid partition root.
	2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
	   though it may contain offline CPUs.
	3) The "cpuset.cpus.effective" cannot be empty unless there is
	   no task associated with this partition.

	For a remote partition root to be valid, all the above conditions
	except the first one must be met.

	External events like hotplug or changes to "cpuset.cpus" or
	"cpuset.cpus.exclusive" can cause a valid partition root to
	become invalid and vice versa.  Note that a task cannot be
	moved to a cgroup with an empty "cpuset.cpus.effective".

	A valid non-root parent partition may distribute out all its CPUs
	to its child local partitions when there is no task associated
	with it.

	Care must be taken when changing a valid partition root to
	"member" as all its child local partitions, if present, will
	become invalid, causing disruption to tasks running in those
	child partitions. These inactivated partitions could be
	recovered if their parent is switched back to a partition root
	with a proper value in "cpuset.cpus" or "cpuset.cpus.exclusive".

	Poll and inotify events are triggered whenever the state of
	"cpuset.cpus.partition" changes.  That includes changes caused
	by write to "cpuset.cpus.partition", cpu hotplug or other
	changes that modify the validity status of the partition.
	This will allow user space agents to monitor unexpected changes
	to "cpuset.cpus.partition" without the need to do continuous
	polling.

	A user can pre-configure certain CPUs to an isolated state
	with load balancing disabled at boot time with the "isolcpus"
	kernel boot command line option.  If those CPUs are to be put
	into a partition, they have to be used in an isolated partition.

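	A hedged sketch of creating a local isolated partition under
	the root cgroup (the cgroup path and CPU numbers are
	hypothetical)::

	  # give the child exclusive use of CPUs 2-3 ...
	  echo "2-3" > /sys/fs/cgroup/part1/cpuset.cpus
	  # ... and turn it into an isolated partition root
	  echo isolated > /sys/fs/cgroup/part1/cpuset.cpus.partition
	  cat /sys/fs/cgroup/part1/cpuset.cpus.partition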

Device controller
-----------------

The device controller manages access to device files. It includes both
the creation of new device files (using mknod) and access to the
existing device files.

The cgroup v2 device controller has no interface files and is implemented
on top of cgroup BPF. To control access to device files, a user may
create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
them to cgroups with the BPF_CGROUP_DEVICE flag. On an attempt to access
a device file, the corresponding BPF programs will be executed, and
depending on the return value the attempt will succeed or fail with
-EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access attempt:
access type (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
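
As a hedged sketch, a compiled and pinned program could be attached
with bpftool (the pin path and cgroup path are hypothetical)::

  bpftool cgroup attach /sys/fs/cgroup/mygrp device pinned /sys/fs/bpf/dev_prog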


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
	A read-write nested-keyed file that exists for all the cgroups
	except root that describes the current configured resource
	limit for an RDMA/IB device.

	Lines are keyed by device name and are not ordered.
	Each line contains space separated resource name and its configured
	limit that can be distributed.

	The following nested keys are defined.

	  ==========	=============================
	  hca_handle	Maximum number of HCA Handles
	  hca_object	Maximum number of HCA Objects
	  ==========	=============================

	An example for mlx4 and ocrdma device follows::

	  mlx4_0 hca_handle=2 hca_object=2000
	  ocrdma1 hca_handle=3 hca_object=max

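	Limits are set by writing lines in the same nested-key format,
	for example (a hedged sketch with made-up values)::

	  echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max
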
2774  rdma.current
	A read-only file that describes current resource usage.
	It exists for all cgroups except the root.

	An example for mlx4 and ocrdma devices follows::
2779
2780	  mlx4_0 hca_handle=1 hca_object=20
2781	  ocrdma1 hca_handle=1 hca_object=23
2782
2783DMEM
2784----
2785
2786The "dmem" controller regulates the distribution and accounting of
2787device memory regions. Because each memory region may have its own page size,
2788which does not have to be equal to the system page size, the units are always bytes.
2789
2790DMEM Interface Files
2791~~~~~~~~~~~~~~~~~~~~
2792
2793  dmem.max, dmem.min, dmem.low
	Read-write nested-keyed files which exist for all cgroups
	except the root.  They describe the currently configured
	resource limit or protection for each region.
2797
2798	An example for xe follows::
2799
2800	  drm/0000:03:00.0/vram0 1073741824
2801	  drm/0000:03:00.0/stolen max
2802
2803	The semantics are the same as for the memory cgroup controller, and are
2804	calculated in the same way.
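
	Assuming writes follow the same nested-key format as the
	reads above, a limit could be configured with, for example::

	  # echo "drm/0000:03:00.0/vram0 1073741824" > dmem.max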
2805
2806  dmem.capacity
2807	A read-only file that describes maximum region capacity.
2808	It only exists on the root cgroup. Not all memory can be
2809	allocated by cgroups, as the kernel reserves some for
2810	internal use.
2811
2812	An example for xe follows::
2813
2814	  drm/0000:03:00.0/vram0 8514437120
2815	  drm/0000:03:00.0/stolen 67108864
2816
2817  dmem.current
	A read-only file that describes current resource usage.
	It exists for all cgroups except the root.
2820
2821	An example for xe follows::
2822
2823	  drm/0000:03:00.0/vram0 12550144
2824	  drm/0000:03:00.0/stolen 8650752
2825
2826HugeTLB
2827-------
2828
The HugeTLB controller allows limiting HugeTLB usage per control
group and enforces the limit at page fault time.
2831
2832HugeTLB Interface Files
2833~~~~~~~~~~~~~~~~~~~~~~~
2834
2835  hugetlb.<hugepagesize>.current
	Show current usage for "hugepagesize" hugetlb.  It exists
	for all cgroups except the root.
2838
2839  hugetlb.<hugepagesize>.max
	Set/show the hard limit of "hugepagesize" hugetlb usage.
	The default value is "max".  It exists for all cgroups except
	the root.  See the example at the end of this section.
2842
2843  hugetlb.<hugepagesize>.events
2844	A read-only flat-keyed file which exists on non-root cgroups.
2845
2846	  max
		The number of allocation failures due to the HugeTLB
		limit
2848
2849  hugetlb.<hugepagesize>.events.local
2850	Similar to hugetlb.<hugepagesize>.events but the fields in the file
2851	are local to the cgroup i.e. not hierarchical. The file modified event
2852	generated on this file reflects only the local events.
2853
2854  hugetlb.<hugepagesize>.numa_stat
	Similar to memory.numa_stat, it shows the NUMA information of
	the hugetlb pages of <hugepagesize> in this cgroup.  Only
	hugetlb pages that are actively in use are included.  The
	per-node values are in bytes.
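
As an example of the interface, on a system with 2 MB hugepages a
limit could be set and read back as follows (the 1G value is
illustrative)::

  # echo 1G > hugetlb.2MB.max
  # cat hugetlb.2MB.max
  1073741824
  # cat hugetlb.2MB.current
  0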
2858
2859Misc
2860----
2861
The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for scalar resources which cannot be abstracted like the
other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.
2866
A resource can be added to the controller via the misc_res_type{}
enum in the include/linux/misc_cgroup.h file, and the corresponding
name via misc_res_name[] in the kernel/cgroup/misc.c file.  The
provider of the resource must set its capacity by calling
misc_cg_set_capacity() before the resource can be used.

Once a capacity is set, resource usage can be updated using the
charge and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.
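
A sketch of the provider side follows, assuming a hypothetical
MISC_CG_RES_FOO entry had been added to misc_res_type{} (and a
matching name to misc_res_name[])::

  #include <linux/misc_cgroup.h>

  static int __init foo_init(void)
  {
          /* Advertise the total capacity; this populates the
           * root cgroup's misc.capacity file. */
          return misc_cg_set_capacity(MISC_CG_RES_FOO, 50);
  }

  static int foo_alloc_unit(struct misc_cg **cgp)
  {
          struct misc_cg *cg = get_current_misc_cg();
          int ret;

          /* Charge one unit to the current task's cgroup; fails
           * if this would exceed misc.max anywhere up the tree. */
          ret = misc_cg_try_charge(MISC_CG_RES_FOO, cg, 1);
          if (ret) {
                  put_misc_cg(cg);
                  return ret;
          }
          *cgp = cg;
          return 0;
  }

  static void foo_free_unit(struct misc_cg *cg)
  {
          /* Uncharge the cgroup that was charged originally, not
           * the task's current one (see Migration and Ownership
           * below), then drop the reference. */
          misc_cg_uncharge(MISC_CG_RES_FOO, cg, 1);
          put_misc_cg(cg);
  }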
2875
2876Misc Interface Files
2877~~~~~~~~~~~~~~~~~~~~
2878
The miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered, then:
2880
2881  misc.capacity
2882        A read-only flat-keyed file shown only in the root cgroup.  It shows
2883        miscellaneous scalar resources available on the platform along with
2884        their quantities::
2885
2886	  $ cat misc.capacity
2887	  res_a 50
2888	  res_b 10
2889
2890  misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its
        children::
2893
2894	  $ cat misc.current
2895	  res_a 3
2896	  res_b 0
2897
2898  misc.peak
        A read-only flat-keyed file shown in all cgroups.  It shows
        the historical maximum usage of the resources in the cgroup
        and its children::
2902
2903	  $ cat misc.peak
2904	  res_a 10
2905	  res_b 8
2906
2907  misc.max
        A read-write flat-keyed file shown in non-root cgroups.  It
        sets the allowed maximum usage of the resources in the
        cgroup and its children::
2910
2911	  $ cat misc.max
2912	  res_a max
2913	  res_b 4
2914
	A limit can be set by::

	  # echo res_a 1 > misc.max

	The limit can be set to max by::

	  # echo res_a max > misc.max
2922
2923        Limits can be set higher than the capacity value in the misc.capacity
2924        file.
2925
2926  misc.events
2927	A read-only flat-keyed file which exists on non-root cgroups. The
2928	following entries are defined. Unless specified otherwise, a value
2929	change in this file generates a file modified event. All fields in
2930	this file are hierarchical.
2931
2932	  max
2933		The number of times the cgroup's resource usage was
2934		about to go over the max boundary.
2935
2936  misc.events.local
2937        Similar to misc.events but the fields in the file are local to the
2938        cgroup i.e. not hierarchical. The file modified event generated on
2939        this file reflects only the local events.
2940
2941Migration and Ownership
2942~~~~~~~~~~~~~~~~~~~~~~~
2943
2944A miscellaneous scalar resource is charged to the cgroup in which it is used
2945first, and stays charged to that cgroup until that resource is freed. Migrating
2946a process to a different cgroup does not move the charge to the destination
2947cgroup where the process has moved.
2948
2949Others
2950------
2951
2952perf_event
2953~~~~~~~~~~
2954
The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.
2959
2960
2961Non-normative information
2962-------------------------
2963
2964This section contains information that isn't considered to be a part of
2965the stable kernel API and so is subject to change.
2966
2967
2968CPU controller root cgroup process behaviour
2969~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2970
When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  The weight of this child cgroup depends on the
thread's nice level.
2975
For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled so that
the neutral value - nice 0 - is 100 instead of 1024).
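
For illustration, applying that scaling (array value * 100 / 1024)
to a few nice levels gives approximately the following weights::

  nice level    sched_prio_to_weight    scaled weight
         -10                    9548             ~932
           0                    1024              100
          10                     110              ~11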
2979
2980
2981IO controller root cgroup process behaviour
2982~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2983
2984Root cgroup processes are hosted in an implicit leaf child node.
2985When distributing IO resources this implicit child node is taken into
2986account as if it was a normal child cgroup of the root cgroup with a
2987weight value of 200.
2988
2989
2990Namespace
2991=========
2992
2993Basics
2994------
2995
2996cgroup namespace provides a mechanism to virtualize the view of the
2997"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
2998flag can be used with clone(2) and unshare(2) to create a new cgroup
2999namespace.  The process running inside the cgroup namespace will have
3000its "/proc/$PID/cgroup" output restricted to cgroupns root.  The
3001cgroupns root is the cgroup of the process at the time of creation of
3002the cgroup namespace.
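
As a sketch, a process could place itself into a new cgroup
namespace with a plain unshare(2) call (this requires CAP_SYS_ADMIN
in the caller's user namespace)::

  #define _GNU_SOURCE
  #include <err.h>
  #include <sched.h>
  #include <stdlib.h>

  int main(void)
  {
          /* The caller's current cgroup becomes the root of the
           * new namespace. */
          if (unshare(CLONE_NEWCGROUP) == -1)
                  err(EXIT_FAILURE, "unshare(CLONE_NEWCGROUP)");

          /* From here on, /proc/self/cgroup is shown relative to
           * the new cgroupns root, e.g. "0::/". */
          return 0;
  }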
3003
3004Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
3005complete path of the cgroup of a process.  In a container setup where
3006a set of cgroups and namespaces are intended to isolate processes the
3007"/proc/$PID/cgroup" file may leak potential system level information
3008to the isolated processes.  For example::
3009
3010  # cat /proc/self/cgroup
3011  0::/batchjobs/container_id1
3012
The path '/batchjobs/container_id1' can be considered system data
that is undesirable to expose to the isolated processes.  cgroup
namespace can be used to restrict visibility of this path.  For
example, before creating a cgroup namespace, one would see::
3017
3018  # ls -l /proc/self/ns/cgroup
3019  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
3020  # cat /proc/self/cgroup
3021  0::/batchjobs/container_id1
3022
3023After unsharing a new namespace, the view changes::
3024
3025  # ls -l /proc/self/ns/cgroup
3026  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
3027  # cat /proc/self/cgroup
3028  0::/
3029
3030When some thread from a multi-threaded process unshares its cgroup
3031namespace, the new cgroupns gets applied to the entire process (all
3032the threads).  This is natural for the v2 hierarchy; however, for the
3033legacy hierarchies, this may be unexpected.
3034
3035A cgroup namespace is alive as long as there are processes inside or
3036mounts pinning it.  When the last usage goes away, the cgroup
3037namespace is destroyed.  The cgroupns root and the actual cgroups
3038remain.
3039
3040
3041The Root and Views
3042------------------
3043
3044The 'cgroupns root' for a cgroup namespace is the cgroup in which the
3045process calling unshare(2) is running.  For example, if a process in
3046/batchjobs/container_id1 cgroup calls unshare, cgroup
3047/batchjobs/container_id1 becomes the cgroupns root.  For the
3048init_cgroup_ns, this is the real root ('/') cgroup.
3049
3050The cgroupns root cgroup does not change even if the namespace creator
3051process later moves to a different cgroup::
3052
3053  # ~/unshare -c # unshare cgroupns in some cgroup
3054  # cat /proc/self/cgroup
3055  0::/
3056  # mkdir sub_cgrp_1
3057  # echo 0 > sub_cgrp_1/cgroup.procs
3058  # cat /proc/self/cgroup
3059  0::/sub_cgrp_1
3060
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
3062
3063Processes running inside the cgroup namespace will be able to see
3064cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
3065From within an unshared cgroupns::
3066
3067  # sleep 100000 &
3068  [1] 7353
3069  # echo 7353 > sub_cgrp_1/cgroup.procs
3070  # cat /proc/7353/cgroup
3071  0::/sub_cgrp_1
3072
3073From the initial cgroup namespace, the real cgroup path will be
3074visible::
3075
3076  $ cat /proc/7353/cgroup
3077  0::/batchjobs/container_id1/sub_cgrp_1
3078
3079From a sibling cgroup namespace (that is, a namespace rooted at a
3080different cgroup), the cgroup path relative to its own cgroup
3081namespace root will be shown.  For instance, if PID 7353's cgroup
3082namespace root is at '/batchjobs/container_id2', then it will see::
3083
3084  # cat /proc/7353/cgroup
3085  0::/../container_id2/sub_cgrp_1
3086
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
3089
3090
3091Migration and setns(2)
3092----------------------
3093
3094Processes inside a cgroup namespace can move into and out of the
3095namespace root if they have proper access to external cgroups.  For
3096example, from inside a namespace with cgroupns root at
3097/batchjobs/container_id1, and assuming that the global hierarchy is
3098still accessible inside cgroupns::
3099
3100  # cat /proc/7353/cgroup
3101  0::/sub_cgrp_1
3102  # echo 7353 > batchjobs/container_id2/cgroup.procs
3103  # cat /proc/7353/cgroup
3104  0::/../container_id2
3105
3106Note that this kind of setup is not encouraged.  A task inside cgroup
3107namespace should only be exposed to its own cgroupns hierarchy.
3108
3109setns(2) to another cgroup namespace is allowed when:
3110
3111(a) the process has CAP_SYS_ADMIN against its current user namespace
3112(b) the process has CAP_SYS_ADMIN against the target cgroup
3113    namespace's userns
3114
No implicit cgroup changes happen when attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
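
A minimal sketch of such an attach, reusing PID 7353 from the
examples above as the owner of the target namespace::

  #define _GNU_SOURCE
  #include <err.h>
  #include <fcntl.h>
  #include <sched.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
          /* Grab a handle on the target cgroup namespace. */
          int fd = open("/proc/7353/ns/cgroup", O_RDONLY);

          if (fd == -1)
                  err(EXIT_FAILURE, "open");

          /* Subject to the CAP_SYS_ADMIN checks listed above. */
          if (setns(fd, CLONE_NEWCGROUP) == -1)
                  err(EXIT_FAILURE, "setns");
          close(fd);

          /* The cgroup membership itself is unchanged; moving the
           * process under the new root is a separate write to
           * cgroup.procs. */
          return 0;
  }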
3118
3119
3120Interaction with Other Namespaces
3121---------------------------------
3122
A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::
3125
3126  # mount -t cgroup2 none $MOUNT_POINT
3127
3128This will mount the unified cgroup hierarchy with cgroupns root as the
3129filesystem root.  The process needs CAP_SYS_ADMIN against its user and
3130mount namespaces.
3131
The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.
3135
3136
3137Information on Kernel Programming
3138=================================
3139
3140This section contains kernel programming information in the areas
3141where interacting with cgroup is necessary.  cgroup core and
3142controllers are not covered.
3143
3144
3145Filesystem Support for Writeback
3146--------------------------------
3147
A filesystem can support cgroup writeback by updating
address_space_operations->writepages() to annotate bios using the
following two functions.
3151
3152  wbc_init_bio(@wbc, @bio)
3153	Should be called for each bio carrying writeback data and
3154	associates the bio with the inode's owner cgroup and the
3155	corresponding request queue.  This must be called after
3156	a queue (device) has been associated with the bio and
3157	before submission.
3158
3159  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
3160	Should be called for each data segment being written out.
3161	While this function doesn't care exactly when it's called
	during the writeback session, it's easiest and most natural
	to call it as data segments are added to a bio.
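
A hypothetical helper wiring the two together might look like the
sketch below (the foo_* naming, single-folio structure, and omitted
completion handling are all illustrative)::

  #include <linux/bio.h>
  #include <linux/blkdev.h>
  #include <linux/pagemap.h>
  #include <linux/writeback.h>

  static void foo_write_folio(struct folio *folio,
                              struct writeback_control *wbc,
                              struct block_device *bdev,
                              sector_t sector)
  {
          /* Associate the bio with the target device first... */
          struct bio *bio = bio_alloc(bdev, 1, REQ_OP_WRITE, GFP_NOFS);

          bio->bi_iter.bi_sector = sector;

          /* ...then bind it to the inode owner's cgroup: after the
           * device association, before submission. */
          wbc_init_bio(wbc, bio);

          bio_add_folio_nofail(bio, folio, folio_size(folio), 0);

          /* Account the bytes as the data segment is added. */
          wbc_account_cgroup_owner(wbc, folio, folio_size(folio));

          submit_bio(bio);
  }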
3164
With writeback bios annotated, cgroup support can be enabled per
3166super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
3167selective disabling of cgroup writeback support which is helpful when
3168certain filesystem features, e.g. journaled data mode, are
3169incompatible.
3170
wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and,
if the writeback session is holding shared resources, e.g. a journal
entry, this may lead to priority inversion.  There is no one easy
solution for the problem.  Filesystems can try to work around
specific problem cases by skipping wbc_init_bio() and using
bio_associate_blkg() directly.
3178
3179
3180Deprecated v1 Core Features
3181===========================
3182
3183- Multiple hierarchies including named ones are not supported.
3184
- None of the v1 mount options is supported.
3186
3187- The "tasks" file is removed and "cgroup.procs" is not sorted.
3188
3189- "cgroup.clone_children" is removed.
3190
3191- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
3192  "cgroup.stat" files at the root instead.
3193
3194
3195Issues with v1 and Rationales for v2
3196====================================
3197
3198Multiple Hierarchies
3199--------------------
3200
3201cgroup v1 allowed an arbitrary number of hierarchies and each
3202hierarchy could host any number of controllers.  While this seemed to
3203provide a high level of flexibility, it wasn't useful in practice.
3204
3205For example, as there is only one instance of each controller, utility
3206type controllers such as freezer which can be useful in all
3207hierarchies could only be used in one.  The issue is exacerbated by
3208the fact that controllers couldn't be moved to another hierarchy once
3209hierarchies were populated.  Another issue was that all controllers
3210bound to a hierarchy were forced to have exactly the same view of the
3211hierarchy.  It wasn't possible to vary the granularity depending on
3212the specific controller.
3213
3214In practice, these issues heavily limited which controllers could be
3215put on the same hierarchy and most configurations resorted to putting
3216each controller on its own hierarchy.  Only closely related ones, such
3217as the cpu and cpuacct controllers, made sense to be put on the same
3218hierarchy.  This often meant that userland ended up managing multiple
3219similar hierarchies repeating the same steps on each hierarchy
3220whenever a hierarchy management operation was necessary.
3221
3222Furthermore, support for multiple hierarchies came at a steep cost.
3223It greatly complicated cgroup core implementation but more importantly
3224the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
3226
3227There was no limit on how many hierarchies there might be, which meant
3228that a thread's cgroup membership couldn't be described in finite
3229length.  The key might contain any number of entries and was unlimited
3230in length, which made it highly awkward to manipulate and led to
3231addition of controllers which existed only to identify membership,
3232which in turn exacerbated the original problem of proliferating number
3233of hierarchies.
3234
3235Also, as a controller couldn't have any expectation regarding the
3236topologies of hierarchies other controllers might be on, each
3237controller had to assume that all other controllers were attached to
3238completely orthogonal hierarchies.  This made it impossible, or at
3239least very cumbersome, for controllers to cooperate with each other.
3240
3241In most use cases, putting controllers on hierarchies which are
3242completely orthogonal to each other isn't necessary.  What usually is
3243called for is the ability to have differing levels of granularity
3244depending on the specific controller.  In other words, hierarchy may
3245be collapsed from leaf towards root when viewed from specific
3246controllers.  For example, a given configuration might not care about
3247how memory is distributed beyond a certain level while still wanting
3248to control how CPU cycles are distributed.
3249
3250
3251Thread Granularity
3252------------------
3253
3254cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers, and those controllers
ended up implementing different ways to ignore such situations; but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.
3259
3260Generally, in-process knowledge is available only to the process
3261itself; thus, unlike service-level organization of processes,
3262categorizing threads of a process requires active participation from
3263the application which owns the target process.
3264
3265cgroup v1 had an ambiguously defined delegation model which got abused
3266in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
3268sub-hierarchies and control resource distributions along them.  This
3269effectively raised cgroup to the status of a syscall-like API exposed
3270to lay programs.
3271
3272First of all, cgroup has a fundamentally inadequate interface to be
3273exposed this way.  For a process to access its own knobs, it has to
3274extract the path on the target hierarchy from /proc/self/cgroup,
3275construct the path by appending the name of the knob to the path, open
3276and then read and/or write to it.  This is not only extremely clunky
3277and unusual but also inherently racy.  There is no conventional way to
3278define transaction across the required steps and nothing can guarantee
3279that the process would actually be operating on its own sub-hierarchy.
3280
3281cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
3284knobs which were not properly abstracted or refined and directly
3285revealed kernel internal details.  These knobs got exposed to
3286individual applications through the ill-defined delegation mechanism
3287effectively abusing cgroup as a shortcut to implementing public APIs
3288without going through the required scrutiny.
3289
This was painful for both userland and kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.
3293
3294
3295Competition Between Inner Nodes and Threads
3296-------------------------------------------
3297
cgroup v1 allowed threads to be in any cgroup, which created an
3299interesting problem where threads belonging to a parent cgroup and its
3300children cgroups competed for resources.  This was nasty as two
3301different types of entities competed and there was no obvious way to
3302settle it.  Different controllers did different things.
3303
3304The cpu controller considered threads and cgroups as equivalents and
3305mapped nice levels to cgroup weights.  This worked for some cases but
3306fell flat when children wanted to be allocated specific ratios of CPU
3307cycles and the number of internal threads fluctuated - the ratios
3308constantly changed as the number of competing entities fluctuated.
3309There also were other issues.  The mapping from nice level to weight
3310wasn't obvious or universal, and there were various other knobs which
3311simply weren't available for threads.
3312
3313The io controller implicitly created a hidden leaf node for each
3314cgroup to host the threads.  The hidden leaf had its own copies of all
3315the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
3317always added an extra layer of nesting which wouldn't be necessary
3318otherwise, made the interface messy and significantly complicated the
3319implementation.
3320
3321The memory controller didn't have a way to control what happened
3322between internal tasks and child cgroups and the behavior was not
3323clearly defined.  There were attempts to add ad-hoc behaviors and
3324knobs to tailor the behavior to specific workloads which would have
3325led to problems extremely difficult to resolve in the long term.
3326
3327Multiple controllers struggled with internal tasks and came up with
3328different ways to deal with it; unfortunately, all the approaches were
3329severely flawed and, furthermore, the widely different behaviors
3330made cgroup as a whole highly inconsistent.
3331
3332This clearly is a problem which needs to be addressed from cgroup core
3333in a uniform way.
3334
3335
3336Other Interface Issues
3337----------------------
3338
3339cgroup v1 grew without oversight and developed a large number of
3340idiosyncrasies and inconsistencies.  One issue on the cgroup core side
3341was how an empty cgroup was notified - a userland helper binary was
3342forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.
3346
3347Controller interfaces were problematic too.  An extreme example is
3348controllers completely ignoring hierarchical organization and treating
3349all cgroups as if they were all located directly under the root
3350cgroup.  Some controllers exposed a large amount of inconsistent
3351implementation details to userland.
3352
3353There also was no consistency across controllers.  When a new cgroup
3354was created, some controllers defaulted to not imposing extra
3355restrictions while others disallowed any resource usage until
3356explicitly configured.  Configuration knobs for the same type of
3357control used widely differing naming schemes and formats.  Statistics
3358and information knobs were named arbitrarily and used different
3359formats and units even in the same controller.
3360
3361cgroup v2 establishes common conventions where appropriate and updates
3362controllers so that they expose minimal and consistent interfaces.
3363
3364
3365Controller Issues and Remedies
3366------------------------------
3367
3368Memory
3369~~~~~~
3370
3371The original lower boundary, the soft limit, is defined as a limit
that is unset by default.  As a result, the set of cgroups that
3373global reclaim prefers is opt-in, rather than opt-out.  The costs for
3374optimizing these mostly negative lookups are so high that the
3375implementation, despite its enormous size, does not even provide the
3376basic desirable behavior.  First off, the soft limit has no
3377hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are located
3379in the hierarchy.  This makes subtree delegation impossible.  Second,
3380the soft limit reclaim pass is so aggressive that it not just
3381introduces high allocation latencies into the system, but also impacts
3382system performance due to overreclaim, to the point where the feature
3383becomes self-defeating.
3384
3385The memory.low boundary on the other hand is a top-down allocated
3386reserve.  A cgroup enjoys reclaim protection when it's within its
3387effective low, which makes delegation of subtrees possible. It also
3388enjoys having reclaim pressure proportional to its overage when
3389above its effective low.
3390
3391The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
3393But this generally goes against the goal of making the most out of the
3394available memory.  The memory consumption of workloads varies during
3395runtime, and that requires users to overcommit.  But doing that with a
3396strict upper limit requires either a fairly accurate prediction of the
3397working set size or adding slack to the limit.  Since working set size
3398estimation is hard and error prone, and getting it wrong results in
3399OOM kills, most users tend to err on the side of a looser limit and
3400end up wasting precious resources.
3401
3402The memory.high boundary on the other hand can be set much more
3403conservatively.  When hit, it throttles allocations by forcing them
3404into direct reclaim to work off the excess, but it never invokes the
3405OOM killer.  As a result, a high boundary that is chosen too
3406aggressively will not terminate the processes, but instead it will
3407lead to gradual performance degradation.  The user can monitor this
3408and make corrections until the minimal memory footprint that still
3409gives acceptable performance is found.
3410
3411In extreme cases, with many concurrent allocations and a complete
3412breakdown of reclaim progress within the group, the high boundary can
3413be exceeded.  But even then it's mostly better to satisfy the
3414allocation from the slack available in other groups or the rest of the
3415system than killing the group.  Otherwise, memory.max is there to
3416limit this type of spillover and ultimately contain buggy or even
3417malicious applications.
3418
3419Setting the original memory.limit_in_bytes below the current usage was
3420subject to a race condition, where concurrent charges could cause the
3421limit setting to fail. memory.max on the other hand will first set the
3422limit to prevent new charges, and then reclaim and OOM kill until the
3423new limit is met - or the task writing to memory.max is killed.
3424
3425The combined memory+swap accounting and limiting is replaced by real
3426control over swap space.
3427
3428The main argument for a combined memory+swap facility in the original
3429cgroup design was that global or parental pressure would always be
3430able to swap all anonymous memory of a child group, regardless of the
3431child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.
3435
3436For trusted jobs, on the other hand, a combined counter is not an
3437intuitive userspace interface, and it flies in the face of the idea
3438that cgroup controllers should account and limit specific physical
3439resources.  Swap space is a resource like all others in the system,
3440and that's why unified hierarchy allows distributing it separately.
3441