.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Miscellaneous cgroup Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled
or disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
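
For example, booting with the following kernel command line parameter
makes all controllers unavailable in v1 and thus always available in
v2 (an illustrative use of the option described above)::

  cgroup_no_v1=all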

cgroup v2 currently supports the following mount options.

  nsdelegate
	Consider cgroup namespaces as delegation boundaries.  This
	option is system wide and can only be set on mount or modified
	through remount from the init namespace.  The mount option is
	ignored on non-init namespace mounts.  Please refer to the
	Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace.  The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups.  This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees.  This should have been the default
        behavior but is a mount option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
        Count HugeTLB memory usage towards the cgroup's overall
        memory usage for the memory controller (for the purpose of
        statistics reporting and memory protection).  This is a new
        behavior that could regress existing setups, so it must be
        explicitly opted in with this mount option.

        A few caveats to keep in mind:

        * There is no HugeTLB pool management involved in the memory
          controller.  The pre-allocated pool does not belong to anyone.
          Specifically, when a new HugeTLB folio is allocated to
          the pool, it is not accounted for from the perspective of the
          memory controller.  It is only charged to a cgroup when it is
          actually used (e.g. at page fault time).  Host memory
          overcommit management has to consider this when configuring
          hard limits.  In general, HugeTLB pool management should be
          done via other mechanisms (such as the HugeTLB controller).
        * Failure to charge a HugeTLB folio to the memory controller
          results in SIGBUS.  This could happen even if the HugeTLB pool
          still has pages available (but the cgroup limit is hit and
          reclaim attempt fails).
        * Charging HugeTLB memory towards the memory controller affects
          memory protection and reclaim dynamics.  Any userspace tuning
          (e.g. of low and min limits) needs to take this into account.
        * HugeTLB pages utilized while this option is not selected
          will not be tracked by the memory controller (even if cgroup
          v2 is remounted later on).

  pids_localevents
        The option restores v1-like behavior of pids.events:max, that
        is, only local (inside cgroup proper) fork failures are
        counted.  Without this option, pids.events:max represents any
        pids.max enforcement across the cgroup's subtree.
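
For example, several of these options can be combined at mount time
(nsdelegate and memory_recursiveprot here are just an illustrative
pair)::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none $MOUNT_POINT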


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
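
For example, assuming a hypothetical PID of 842, the following moves
the process and all its threads into the target cgroup::

  # echo 842 > $CGROUP_NAME/cgroup.procs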

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.
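
As an illustrative sketch (the names are hypothetical), a threaded
subtree can be set up as follows, where $TID is a thread of a process
already in the threaded domain::

  # mkdir threads-a threads-b
  # echo threaded > threads-a/cgroup.type
  # echo threaded > threads-b/cgroup.type
  # echo $TID > threads-a/cgroup.threads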

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain (invalid)"
in these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [di]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's 0.  After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
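
For example, a manager process could wait for the notification and
remove the cgroup once it empties (inotifywait from inotify-tools is
just one possible consumer of the event; a sketch)::

  # inotifywait -e modify $CGROUP_NAME/cgroup.events
  # grep -q 'populated 0' $CGROUP_NAME/cgroup.events && rmdir $CGROUP_NAME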


Controlling Controllers
-----------------------

Availability
~~~~~~~~~~~~

A controller is available in a cgroup when it is supported by the
kernel (i.e., compiled in, not disabled and not attached to a v1
hierarchy) and listed in the "cgroup.controllers" file.  Availability
means the controller's interface files are exposed in the cgroup's
directory, allowing the distribution of the target resource to be
observed or controlled within that cgroup.

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or they all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
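
Continuing the example above, after enabling "cpu" on B, its children
gain "cpu." prefixed files (the exact set depends on the kernel
version and configuration; an illustrative listing)::

  # echo "+cpu" > B/cgroup.subtree_control
  # ls B/C/cpu.*
  B/C/cpu.max  B/C/cpu.pressure  B/C/cpu.stat  B/C/cpu.weight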


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
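
For example, a populated cgroup can hand its processes over to a new
leaf before enabling a domain controller (a sketch; "leaf" is a
hypothetical name)::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control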


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types.  Once
delegated, the user can build a sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
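
For the first method, delegation to a hypothetical user "u0" might
look like the following (an illustrative sketch)::

  # mkdir /sys/fs/cgroup/delegated
  # chown u0 /sys/fs/cgroup/delegated \
          /sys/fs/cgroup/delegated/cgroup.procs \
          /sys/fs/cgroup/delegated/cgroup.threads \
          /sys/fs/cgroup/delegated/cgroup.subtree_control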


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance.  Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
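
As a worked example with hypothetical values: three active children
with "cpu.weight" of 100, 200 and 700 receive 100/1000, 200/1000 and
700/1000 of the parent's CPU cycles, i.e. 10%, 20% and 70%.  If the
third child goes idle, the remaining two split the cycles 1:2, i.e.
about 33% and 67%, which is what makes the model work-conserving.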


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which means
no resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.
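
For example, "io.max" (described later) is a nested keyed file; the
sub-key pairs for one key can be written together in any order ("8:16"
is a hypothetical device)::

  # echo "8:16 wbps=2097152 riops=120" > io.max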


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
	A read-write single value file which exists on non-root
	cgroups.

	When read, it indicates the current type of the cgroup, which
	can be one of the following values.

	- "domain" : A normal valid domain cgroup.

	- "domain threaded" : A threaded domain cgroup which is
	  serving as the root of a threaded subtree.

	- "domain invalid" : A cgroup which is in an invalid state.
	  It can't be populated or have controllers enabled.  It may
	  be allowed to become a threaded cgroup.

	- "threaded" : A threaded cgroup which is a member of a
	  threaded subtree.

	A cgroup can be turned into a threaded cgroup by writing
	"threaded" to this file.

  cgroup.procs
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the PIDs of all processes which belong to
	the cgroup one-per-line.  The PIDs are not ordered and the
	same PID may show up more than once if the process got moved
	to another cgroup and then back or the PID got recycled while
	reading.

	A PID can be written to migrate the process associated with
	the PID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.procs" file.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

	In a threaded cgroup, reading this file fails with EOPNOTSUPP
	as all the processes belong to the thread root.  Writing is
	supported and moves every thread of the process to the cgroup.

  cgroup.threads
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the TIDs of all threads which belong to
	the cgroup one-per-line.  The TIDs are not ordered and the
	same TID may show up more than once if the thread got moved to
	another cgroup and then back or the TID got recycled while
	reading.

	A TID can be written to migrate the thread associated with the
	TID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.threads" file.

	- The cgroup that the thread is currently in must be in the
	  same resource domain as the destination cgroup.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

  cgroup.controllers
	A read-only space separated values file which exists on all
	cgroups.

	It shows a space separated list of all controllers available
	to the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
	A read-write space separated values file which exists on all
	cgroups.  Starts out empty.

	When read, it shows a space separated list of the controllers
	which are enabled to control resource distribution from the
	cgroup to its children.

	A space separated list of controllers prefixed with '+' or '-'
	can be written to enable or disable controllers.  A controller
	name prefixed with '+' enables the controller and '-'
	disables.  If a controller appears more than once on the list,
	the last one is effective.  When multiple enable and disable
	operations are specified, either all succeed or all fail.

  cgroup.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  populated
		1 if the cgroup or its descendants contain any live
		processes; otherwise, 0.
	  frozen
		1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
	A read-write single value file.  The default is "max".

	Maximum allowed number of descendant cgroups.
	If the actual number of descendants is equal or larger,
	an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
	A read-write single value file.  The default is "max".

	Maximum allowed descent depth below the current cgroup.
	If the actual descent depth is equal or larger,
	an attempt to create a new child cgroup will fail.

  cgroup.stat
	A read-only flat-keyed file with the following entries:

	  nr_descendants
		Total number of visible descendant cgroups.

	  nr_dying_descendants
		Total number of dying descendant cgroups.  A cgroup
		becomes dying after being deleted by a user.  The
		cgroup will remain in the dying state for some
		undefined amount of time (which can depend on system
		load) before being completely destroyed.

		A process can't enter a dying cgroup under any
		circumstances, and a dying cgroup can't revive.

		A dying cgroup can consume system resources not exceeding
		limits, which were active at the moment of cgroup deletion.

	  nr_subsys_<cgroup_subsys>
		Total number of live cgroup subsystems (e.g. memory
		cgroup) at and beneath the current cgroup.

	  nr_dying_subsys_<cgroup_subsys>
		Total number of dying cgroup subsystems (e.g. memory
		cgroup) at and beneath the current cgroup.

  cgroup.freeze
	A read-write single value file which exists on non-root cgroups.
	Allowed values are "0" and "1".  The default is "0".

	Writing "1" to the file causes freezing of the cgroup and all
	descendant cgroups.  This means that all processes belonging to
	them will be stopped and will not run until the cgroup is
	explicitly unfrozen.  Freezing of the cgroup may take some time;
	when this action is completed, the "frozen" value in the
	cgroup.events control file will be updated to "1" and the
	corresponding notification will be issued.

	A cgroup can be frozen either by its own settings, or by settings
	of any ancestor cgroups.  If any ancestor cgroup is frozen, the
	cgroup will remain frozen.

	Processes in the frozen cgroup can be killed by a fatal signal.
	They also can enter and leave a frozen cgroup: either by an explicit
	move by a user, or if freezing of the cgroup races with fork().
	If a process is moved to a frozen cgroup, it stops.  If a process is
	moved out of a frozen cgroup, it becomes running.

	Frozen status of a cgroup doesn't affect any cgroup tree operations:
	it's possible to delete a frozen (and empty) cgroup, as well as
	create new sub-cgroups.
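
	For example (an illustrative sketch)::

	  # echo 1 > $CGROUP_NAME/cgroup.freeze
	  # grep frozen $CGROUP_NAME/cgroup.events
	  frozen 1
	  # echo 0 > $CGROUP_NAME/cgroup.freeze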

  cgroup.kill
	A write-only single value file which exists in non-root cgroups.
	The only allowed value is "1".

	Writing "1" to the file causes the cgroup and all descendant cgroups to
	be killed.  This means that all processes located in the affected cgroup
	tree will be killed via SIGKILL.

	Killing a cgroup tree will deal with concurrent forks appropriately and
	is protected against migrations.

	In a threaded cgroup, writing this file fails with EOPNOTSUPP as
	killing cgroups is a process directed operation, i.e. it affects
	the whole thread-group.

  cgroup.pressure
	A read-write single value file.  Allowed values are "0" and "1".
	The default is "1".

	Writing "0" to the file will disable the cgroup PSI accounting.
	Writing "1" to the file will re-enable the cgroup PSI accounting.

	This control attribute is not hierarchical, so disabling or
	enabling PSI accounting in a cgroup does not affect PSI accounting
	in its descendants, and enablement does not need to be propagated
	down from the root via ancestors.

	The reason this control attribute exists is that PSI accounts
	stalls for each cgroup separately and aggregates them at each
	level of the hierarchy.  This may cause non-negligible overhead
	for some workloads deep in the hierarchy, in which case this
	control attribute can be used to disable PSI accounting in the
	non-leaf cgroups.

  irq.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for IRQ/SOFTIRQ.  See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a temporal
base and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.

WARNING: the cgroup2 cpu controller doesn't yet support the (bandwidth)
control of realtime processes.  For a kernel built with the
CONFIG_RT_GROUP_SCHED option enabled for group scheduling of realtime
processes, the cpu controller can only be enabled when all RT processes
are in the root cgroup.  Be aware that system management software may
already have placed RT processes into non-root cgroups during the
system boot process, and these processes may need to be moved to the
root cgroup before the cpu controller can be enabled with a
CONFIG_RT_GROUP_SCHED enabled kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply and
some of the interface files either affect realtime processes or account
for them.  See the following section for details.  Only the cpu
controller is affected by CONFIG_RT_GROUP_SCHED.  Other controllers can
be used for the resource control of realtime processes irrespective of
CONFIG_RT_GROUP_SCHED.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its
scheduling policy and the underlying scheduler.  From the point of view
of the cpu controller, processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight`` callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a
  BPF scheduler without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a
BPF scheduler, check out :ref:`Documentation/scheduler/sched-ext.rst
<sched-ext>`.

For each of the following interface files, the above categories
will be referred to.  All time durations are in microseconds.

  cpu.stat
	A read-only flat-keyed file.
	This file exists whether the controller is enabled or not.

	It always reports the following three stats, which account for all the
	processes in the cgroup:

	- usage_usec
	- user_usec
	- system_usec

	and the following five when the controller is enabled, which account for
	only the processes under the fair-class scheduler:

	- nr_periods
	- nr_throttled
	- throttled_usec
	- nr_bursts
	- burst_usec

  cpu.weight
	A read-write single value file which exists on non-root
	cgroups.  The default is "100".

	For non-idle groups (cpu.idle = 0), the weight is in the
	range [1, 10000].

	If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
	then the weight will show as 0.

	This file affects only processes under the fair-class scheduler and a BPF
	scheduler with the ``cgroup_set_weight`` callback depending on what the
	callback actually does.

  cpu.weight.nice
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The nice value is in the range [-20, 19].

	This interface file is an alternative interface for
	"cpu.weight" and allows reading and setting weight using the
	same values used by nice(2).  Because the range is smaller and
	granularity is coarser for the nice values, the read value is
	the closest approximation of the current weight.

	This file affects only processes under the fair-class scheduler and a BPF
	scheduler with the ``cgroup_set_weight`` callback depending on what the
	callback actually does.

  cpu.max
	A read-write two value file which exists on non-root cgroups.
	The default is "max 100000".

	The maximum bandwidth limit.  It's in the following format::

	  $MAX $PERIOD

	which indicates that the group may consume up to $MAX in each
	$PERIOD duration.  "max" for $MAX indicates no limit.  If only
	one number is written, $MAX is updated.

	This file affects only processes under the fair-class scheduler.
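
	For example, to allow consuming up to 20% of one CPU with the
	default period (illustrative values)::

	  # echo "20000 100000" > cpu.max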

  cpu.max.burst
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The burst in the range [0, $MAX].

	This file affects only processes under the fair-class scheduler.

  cpu.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for CPU.  See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

	This file accounts for all the processes in the cgroup.

  cpu.uclamp.min
	A read-write single value file which exists on non-root cgroups.
	The default is "0", i.e. no utilization boosting.

	The requested minimum utilization (protection) as a percentage
	rational number, e.g. 12.34 for 12.34%.

	This interface allows reading and setting minimum utilization clamp
	values similar to sched_setattr(2).  This minimum utilization
	value is used to clamp the task specific minimum utilization clamp,
	including those of realtime processes.

	The requested minimum utilization (protection) is always capped by
	the current value for the maximum utilization (limit), i.e.
	`cpu.uclamp.max`.

	This file affects all the processes in the cgroup.

  cpu.uclamp.max
	A read-write single value file which exists on non-root cgroups.
	The default is "max", i.e. no utilization capping.

	The requested maximum utilization (limit) as a percentage rational
	number, e.g. 98.76 for 98.76%.

	This interface allows reading and setting maximum utilization clamp
	values similar to sched_setattr(2).  This maximum utilization
	value is used to clamp the task specific maximum utilization clamp,
	including those of realtime processes.

	This file affects all the processes in the cgroup.

  cpu.idle
	A read-write single value file which exists on non-root cgroups.
	The default is 0.

	This is the cgroup analog of the per-task SCHED_IDLE sched policy.
	Setting this value to a 1 will make the scheduling policy of the
	cgroup SCHED_IDLE.  The threads inside the cgroup will retain their
	own relative priorities, but the cgroup itself will be treated as
	very low priority relative to its peers.

	This file affects only processes under the fair-class scheduler.

Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of memory currently being used by the cgroup
	and its descendants.

  memory.min
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Hard memory protection.  If the memory usage of a cgroup
	is within its effective min boundary, the cgroup's memory
	won't be reclaimed under any conditions.  If there is no
	unprotected reclaimable memory available, the OOM killer
	is invoked.  Above the effective min boundary (or
	effective low boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	The effective min boundary is limited by memory.min values of
	all ancestor cgroups.  If there is memory.min overcommitment
	(child cgroup or cgroups are requiring more protected memory
	than parent will allow), then each child cgroup will get
	the part of parent's protection proportional to its
	actual memory usage below memory.min.

	Putting more memory than generally available under this
	protection is discouraged and may lead to constant OOMs.

	If a memory cgroup is not populated with processes,
	its memory.min is ignored.

  memory.low
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Best-effort memory protection.  If the memory usage of a
	cgroup is within its effective low boundary, the cgroup's
	memory won't be reclaimed unless there is no reclaimable
	memory available in unprotected cgroups.
	Above the effective low boundary (or
	effective min boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	The effective low boundary is limited by memory.low values of
	all ancestor cgroups.  If there is memory.low overcommitment
	(child cgroup or cgroups are requiring more protected memory
	than parent will allow), then each child cgroup will get
	the part of parent's protection proportional to its
	actual memory usage below memory.low.

	Putting more memory than generally available under this
	protection is discouraged.
1329	Putting more memory than generally available under this
1330	protection is discouraged.
1331
1332  memory.high
1333	A read-write single value file which exists on non-root
1334	cgroups.  The default is "max".
1335
1336	Memory usage throttle limit.  If a cgroup's usage goes
1337	over the high boundary, the processes of the cgroup are
1338	throttled and put under heavy reclaim pressure.
1339
1340	Going over the high limit never invokes the OOM killer and
1341	under extreme conditions the limit may be breached. The high
1342	limit should be used in scenarios where an external process
1343	monitors the limited cgroup to alleviate heavy reclaim
1344	pressure.
1345
1346	If memory.high is opened with O_NONBLOCK then the synchronous
1347	reclaim is bypassed. This is useful for admin processes that
1348	need to dynamically adjust the job's memory limits without
1349	expending their own CPU resources on memory reclamation. The
1350	job will trigger the reclaim and/or get throttled on its
1351	next charge request.
1352
1353	Please note that with O_NONBLOCK, there is a chance that the
1354	target memory cgroup may take indefinite amount of time to
1355	reduce usage below the limit due to delayed charge request or
1356	busy-hitting its memory to slow down reclaim.
1357
1358  memory.max
1359	A read-write single value file which exists on non-root
1360	cgroups.  The default is "max".
1361
1362	Memory usage hard limit.  This is the main mechanism to limit
1363	memory usage of a cgroup.  If a cgroup's memory usage reaches
1364	this limit and can't be reduced, the OOM killer is invoked in
1365	the cgroup. Under certain circumstances, the usage may go
1366	over the limit temporarily.
1367
1368	In default configuration regular 0-order allocations always
1369	succeed unless OOM killer chooses current task as a victim.
1370
1371	Some kinds of allocations don't invoke the OOM killer.
1372	Caller could retry them differently, return into userspace
1373	as -ENOMEM or silently ignore in cases like disk readahead.
1374
1375	If memory.max is opened with O_NONBLOCK, then the synchronous
1376	reclaim and oom-kill are bypassed. This is useful for admin
1377	processes that need to dynamically adjust the job's memory limits
1378	without expending their own CPU resources on memory reclamation.
1379	The job will trigger the reclaim and/or oom-kill on its next
1380	charge request.
1381
	Please note that with O_NONBLOCK, there is a chance that the
	target memory cgroup may take an indefinite amount of time to
	reduce usage below the limit, either because charge requests are
	delayed or because the workload keeps touching its memory fast
	enough to slow down reclaim.
1386
1387  memory.reclaim
1388	A write-only nested-keyed file which exists for all cgroups.
1389
1390	This is a simple interface to trigger memory reclaim in the
1391	target cgroup.
1392
1393	Example::
1394
1395	  echo "1G" > memory.reclaim
1396
	Please note that the kernel can over- or under-reclaim from
	the target cgroup.  If fewer bytes are reclaimed than the
	specified amount, -EAGAIN is returned.
1400
	Please note that the proactive reclaim (triggered by this
	interface) is not meant to indicate memory pressure on the
	memory cgroup.  Therefore, the socket memory balancing that is
	normally triggered by memory reclaim is not exercised in this
	case.  This means that the networking layer will not adapt based
	on reclaim induced by memory.reclaim.
1407
	The following nested keys are defined.
1409
1410	  ==========            ================================
1411	  swappiness            Swappiness value to reclaim with
1412	  ==========            ================================
1413
1414	Specifying a swappiness value instructs the kernel to perform
1415	the reclaim with that swappiness value. Note that this has the
1416	same semantics as vm.swappiness applied to memcg reclaim with
1417	all the existing limitations and potential future extensions.
1418
	The valid range for swappiness is [0-200] plus the special value
	"max"; setting swappiness=max exclusively reclaims anonymous
	memory.
1421
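	For example, to proactively reclaim 512M while biasing reclaim
	away from swap (a sketch; the amount is illustrative)::

	  # echo "512M swappiness=0" > memory.reclaim
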
1422  memory.peak
1423	A read-write single value file which exists on non-root cgroups.
1424
1425	The max memory usage recorded for the cgroup and its descendants since
1426	either the creation of the cgroup or the most recent reset for that FD.
1427
1428	A write of any non-empty string to this file resets it to the
1429	current memory usage for subsequent reads through the same
1430	file descriptor.
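
	A minimal sketch of the per-FD semantics: a reset only affects
	reads through the file descriptor it was written through, so a
	plain shell round-trip does not clear the recorded peak::

	  # cat memory.peak		# peak since cgroup creation
	  # echo reset > memory.peak	# resets only echo's short-lived FD
	  # cat memory.peak		# a fresh open still shows the old peak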
1431
1432  memory.oom.group
1433	A read-write single value file which exists on non-root
1434	cgroups.  The default value is "0".
1435
1436	Determines whether the cgroup should be treated as
1437	an indivisible workload by the OOM killer. If set,
1438	all tasks belonging to the cgroup or to its descendants
1439	(if the memory cgroup is not a leaf cgroup) are killed
1440	together or not at all. This can be used to avoid
1441	partial kills to guarantee workload integrity.
1442
1443	Tasks with the OOM protection (oom_score_adj set to -1000)
1444	are treated as an exception and are never killed.
1445
	If the OOM killer is invoked in a cgroup, it's not going
	to kill any tasks outside of this cgroup, regardless of the
	memory.oom.group values of ancestor cgroups.
1449
1450  memory.events
1451	A read-only flat-keyed file which exists on non-root cgroups.
1452	The following entries are defined.  Unless specified
1453	otherwise, a value change in this file generates a file
1454	modified event.
1455
1456	Note that all fields in this file are hierarchical and the
1457	file modified event can be generated due to an event down the
1458	hierarchy. For the local events at the cgroup level see
1459	memory.events.local.
1460
1461	  low
1462		The number of times the cgroup is reclaimed due to
1463		high memory pressure even though its usage is under
1464		the low boundary.  This usually indicates that the low
1465		boundary is over-committed.
1466
1467	  high
1468		The number of times processes of the cgroup are
1469		throttled and routed to perform direct memory reclaim
1470		because the high memory boundary was exceeded.  For a
1471		cgroup whose memory usage is capped by the high limit
1472		rather than global memory pressure, this event's
1473		occurrences are expected.
1474
1475	  max
1476		The number of times the cgroup's memory usage was
1477		about to go over the max boundary.  If direct reclaim
1478		fails to bring it down, the cgroup goes to OOM state.
1479
1480	  oom
		The number of times the cgroup's memory usage reached
		the limit and allocation was about to fail.

		This event is not raised if the OOM killer is not
		considered as an option, e.g. for failed high-order
		allocations or if the caller asked not to retry.
1487
1488	  oom_kill
1489		The number of processes belonging to this cgroup
1490		killed by any kind of OOM killer.
1491
	  oom_group_kill
		The number of times a group OOM has occurred.
1494
1495  memory.events.local
1496	Similar to memory.events but the fields in the file are local
1497	to the cgroup i.e. not hierarchical. The file modified event
1498	generated on this file reflects only the local events.
1499
1500  memory.stat
1501	A read-only flat-keyed file which exists on non-root cgroups.
1502
1503	This breaks down the cgroup's memory footprint into different
1504	types of memory, type-specific details, and other information
1505	on the state and past events of the memory management system.
1506
1507	All memory amounts are in bytes.
1508
1509	The entries are ordered to be human readable, and new entries
1510	can show up in the middle. Don't rely on items remaining in a
1511	fixed position; use the keys to look up specific values!
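
	For example, a parser should match on the key rather than on the
	position of a line::

	  # grep -E '^(anon|file|kernel) ' memory.stat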
1512
	Entries marked with the 'npn' (non-per-node) tag have no
	per-node counter and will not show up in memory.numa_stat.
1516
1517	  anon
1518		Amount of memory used in anonymous mappings such as
1519		brk(), sbrk(), and mmap(MAP_ANONYMOUS). Note that
1520		some kernel configurations might account complete larger
1521		allocations (e.g., THP) if only some, but not all the
1522		memory of such an allocation is mapped anymore.
1523
1524	  file
1525		Amount of memory used to cache filesystem data,
1526		including tmpfs and shared memory.
1527
1528	  kernel (npn)
		Amount of total kernel memory, including kernel_stack,
		pagetables, percpu, vmalloc and slab, in addition to
		other kernel memory use cases.
1532
1533	  kernel_stack
1534		Amount of memory allocated to kernel stacks.
1535
1536	  pagetables
		Amount of memory allocated for page tables.
1538
1539	  sec_pagetables
1540		Amount of memory allocated for secondary page tables,
1541		this currently includes KVM mmu allocations on x86
1542		and arm64 and IOMMU page tables.
1543
1544	  percpu (npn)
1545		Amount of memory used for storing per-cpu kernel
1546		data structures.
1547
1548	  sock (npn)
1549		Amount of memory used in network transmission buffers
1550
1551	  vmalloc (npn)
1552		Amount of memory used for vmap backed memory.
1553
1554	  shmem
1555		Amount of cached filesystem data that is swap-backed,
1556		such as tmpfs, shm segments, shared anonymous mmap()s
1557
1558	  zswap
1559		Amount of memory consumed by the zswap compression backend.
1560
1561	  zswapped
1562		Amount of application memory swapped out to zswap.
1563
1564	  file_mapped
		Amount of cached filesystem data mapped with mmap(). Note
		that some kernel configurations might account complete
		larger allocations (e.g., THP) if only some, but not
		all the memory of such an allocation is mapped.
1569
1570	  file_dirty
1571		Amount of cached filesystem data that was modified but
1572		not yet written back to disk
1573
1574	  file_writeback
1575		Amount of cached filesystem data that was modified and
1576		is currently being written back to disk
1577
1578	  swapcached
1579		Amount of swap cached in memory. The swapcache is accounted
1580		against both memory and swap usage.
1581
1582	  anon_thp
1583		Amount of memory used in anonymous mappings backed by
1584		transparent hugepages
1585
1586	  file_thp
1587		Amount of cached filesystem data backed by transparent
1588		hugepages
1589
1590	  shmem_thp
1591		Amount of shm, tmpfs, shared anonymous mmap()s backed by
1592		transparent hugepages
1593
1594	  inactive_anon, active_anon, inactive_file, active_file, unevictable
1595		Amount of memory, swap-backed and filesystem-backed,
1596		on the internal memory management lists used by the
1597		page reclaim algorithm.
1598
1599		As these represent internal list state (eg. shmem pages are on anon
1600		memory management lists), inactive_foo + active_foo may not be equal to
1601		the value for the foo counter, since the foo counter is type-based, not
1602		list-based.
1603
1604	  slab_reclaimable
1605		Part of "slab" that might be reclaimed, such as
1606		dentries and inodes.
1607
1608	  slab_unreclaimable
1609		Part of "slab" that cannot be reclaimed on memory
1610		pressure.
1611
1612	  slab (npn)
1613		Amount of memory used for storing in-kernel data
1614		structures.
1615
1616	  workingset_refault_anon
1617		Number of refaults of previously evicted anonymous pages.
1618
1619	  workingset_refault_file
1620		Number of refaults of previously evicted file pages.
1621
1622	  workingset_activate_anon
1623		Number of refaulted anonymous pages that were immediately
1624		activated.
1625
1626	  workingset_activate_file
1627		Number of refaulted file pages that were immediately activated.
1628
1629	  workingset_restore_anon
1630		Number of restored anonymous pages which have been detected as
1631		an active workingset before they got reclaimed.
1632
1633	  workingset_restore_file
1634		Number of restored file pages which have been detected as an
1635		active workingset before they got reclaimed.
1636
1637	  workingset_nodereclaim
1638		Number of times a shadow node has been reclaimed
1639
1640	  pswpin (npn)
1641		Number of pages swapped into memory
1642
1643	  pswpout (npn)
1644		Number of pages swapped out of memory
1645
1646	  pgscan (npn)
1647		Amount of scanned pages (in an inactive LRU list)
1648
1649	  pgsteal (npn)
1650		Amount of reclaimed pages
1651
	  pgscan_kswapd (npn)
		Amount of pages scanned by kswapd (in an inactive LRU list)

	  pgscan_direct (npn)
		Amount of pages scanned directly (in an inactive LRU list)

	  pgscan_khugepaged (npn)
		Amount of pages scanned by khugepaged (in an inactive LRU list)

	  pgscan_proactive (npn)
		Amount of pages scanned proactively (in an inactive LRU list)

	  pgsteal_kswapd (npn)
		Amount of pages reclaimed by kswapd

	  pgsteal_direct (npn)
		Amount of pages reclaimed directly

	  pgsteal_khugepaged (npn)
		Amount of pages reclaimed by khugepaged

	  pgsteal_proactive (npn)
		Amount of pages reclaimed proactively
1675
1676	  pgfault (npn)
1677		Total number of page faults incurred
1678
1679	  pgmajfault (npn)
1680		Number of major page faults incurred
1681
1682	  pgrefill (npn)
1683		Amount of scanned pages (in an active LRU list)
1684
1685	  pgactivate (npn)
1686		Amount of pages moved to the active LRU list
1687
1688	  pgdeactivate (npn)
1689		Amount of pages moved to the inactive LRU list
1690
1691	  pglazyfree (npn)
1692		Amount of pages postponed to be freed under memory pressure
1693
1694	  pglazyfreed (npn)
1695		Amount of reclaimed lazyfree pages
1696
1697	  swpin_zero
1698		Number of pages swapped into memory and filled with zero, where I/O
1699		was optimized out because the page content was detected to be zero
1700		during swapout.
1701
1702	  swpout_zero
1703		Number of zero-filled pages swapped out with I/O skipped due to the
1704		content being detected as zero.
1705
1706	  zswpin
1707		Number of pages moved in to memory from zswap.
1708
1709	  zswpout
1710		Number of pages moved out of memory to zswap.
1711
1712	  zswpwb
1713		Number of pages written from zswap to swap.
1714
1715	  thp_fault_alloc (npn)
1716		Number of transparent hugepages which were allocated to satisfy
1717		a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
1718                is not set.
1719
1720	  thp_collapse_alloc (npn)
1721		Number of transparent hugepages which were allocated to allow
1722		collapsing an existing range of pages. This counter is not
1723		present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1724
1725	  thp_swpout (npn)
		Number of transparent hugepages which were swapped out in
		one piece without splitting.

	  thp_swpout_fallback (npn)
		Number of transparent hugepages which were split before
		swapout, usually because of failure to allocate contiguous
		swap space for the huge page.
1733
1734	  numa_pages_migrated (npn)
1735		Number of pages migrated by NUMA balancing.
1736
1737	  numa_pte_updates (npn)
1738		Number of pages whose page table entries are modified by
1739		NUMA balancing to produce NUMA hinting faults on access.
1740
1741	  numa_hint_faults (npn)
1742		Number of NUMA hinting faults.
1743
1744	  pgdemote_kswapd
1745		Number of pages demoted by kswapd.
1746
1747	  pgdemote_direct
1748		Number of pages demoted directly.
1749
1750	  pgdemote_khugepaged
1751		Number of pages demoted by khugepaged.
1752
1753	  pgdemote_proactive
		Number of pages demoted proactively.
1755
1756	  hugetlb
1757		Amount of memory used by hugetlb pages. This metric only shows
1758		up if hugetlb usage is accounted for in memory.current (i.e.
1759		cgroup is mounted with the memory_hugetlb_accounting option).
1760
1761  memory.numa_stat
1762	A read-only nested-keyed file which exists on non-root cgroups.
1763
1764	This breaks down the cgroup's memory footprint into different
1765	types of memory, type-specific details, and other information
1766	per node on the state of the memory management system.
1767
	This is useful for providing visibility into the NUMA locality
	information within a memcg, since pages are allowed to be
	allocated from any physical node.  One use case is evaluating
	application performance by combining this information with the
	application's CPU allocation.
1773
1774	All memory amounts are in bytes.
1775
1776	The output format of memory.numa_stat is::
1777
1778	  type N0=<bytes in node 0> N1=<bytes in node 1> ...
1779
1780	The entries are ordered to be human readable, and new entries
1781	can show up in the middle. Don't rely on items remaining in a
1782	fixed position; use the keys to look up specific values!
1783
	For the meaning of each entry, refer to memory.stat.
1785
1786  memory.swap.current
1787	A read-only single value file which exists on non-root
1788	cgroups.
1789
1790	The total amount of swap currently being used by the cgroup
1791	and its descendants.
1792
1793  memory.swap.high
1794	A read-write single value file which exists on non-root
1795	cgroups.  The default is "max".
1796
1797	Swap usage throttle limit.  If a cgroup's swap usage exceeds
1798	this limit, all its further allocations will be throttled to
1799	allow userspace to implement custom out-of-memory procedures.
1800
1801	This limit marks a point of no return for the cgroup. It is NOT
1802	designed to manage the amount of swapping a workload does
1803	during regular operation. Compare to memory.swap.max, which
1804	prohibits swapping past a set amount, but lets the cgroup
1805	continue unimpeded as long as other memory can be reclaimed.
1806
1807	Healthy workloads are not expected to reach this limit.
1808
1809  memory.swap.peak
1810	A read-write single value file which exists on non-root cgroups.
1811
1812	The max swap usage recorded for the cgroup and its descendants since
1813	the creation of the cgroup or the most recent reset for that FD.
1814
	A write of any non-empty string to this file resets it to the
	current swap usage for subsequent reads through the same
	file descriptor.
1818
1819  memory.swap.max
1820	A read-write single value file which exists on non-root
1821	cgroups.  The default is "max".
1822
1823	Swap usage hard limit.  If a cgroup's swap usage reaches this
1824	limit, anonymous memory of the cgroup will not be swapped out.
1825
1826  memory.swap.events
1827	A read-only flat-keyed file which exists on non-root cgroups.
1828	The following entries are defined.  Unless specified
1829	otherwise, a value change in this file generates a file
1830	modified event.
1831
1832	  high
1833		The number of times the cgroup's swap usage was over
1834		the high threshold.
1835
1836	  max
1837		The number of times the cgroup's swap usage was about
1838		to go over the max boundary and swap allocation
1839		failed.
1840
1841	  fail
1842		The number of times swap allocation failed either
1843		because of running out of swap system-wide or max
1844		limit.
1845
1846	When reduced under the current usage, the existing swap
1847	entries are reclaimed gradually and the swap usage may stay
1848	higher than the limit for an extended period of time.  This
1849	reduces the impact on the workload and memory management.
1850
1851  memory.zswap.current
1852	A read-only single value file which exists on non-root
1853	cgroups.
1854
1855	The total amount of memory consumed by the zswap compression
1856	backend.
1857
1858  memory.zswap.max
1859	A read-write single value file which exists on non-root
1860	cgroups.  The default is "max".
1861
1862	Zswap usage hard limit. If a cgroup's zswap pool reaches this
1863	limit, it will refuse to take any more stores before existing
1864	entries fault back in or are written out to disk.
1865
1866  memory.zswap.writeback
1867	A read-write single value file. The default value is "1".
	Note that this setting is hierarchical, i.e. writeback is
	implicitly disabled for child cgroups if it is disabled anywhere
	in the upper hierarchy.

	When this is set to 0, all swapping attempts to swapping devices
	are disabled.  This includes both zswap writebacks and swapping
	due to zswap store failures.  If the zswap store failures are
	recurring (e.g. if the pages are incompressible), users can
	observe reclaim inefficiency after disabling writeback (because
	the same pages might be rejected again and again).
1878
1879	Note that this is subtly different from setting memory.swap.max to
1880	0, as it still allows for pages to be written to the zswap pool.
1881	This setting has no effect if zswap is disabled, and swapping
1882	is allowed unless memory.swap.max is set to 0.
1883
1884  memory.pressure
1885	A read-only nested-keyed file.
1886
1887	Shows pressure stall information for memory. See
1888	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1889
1890
1891Usage Guidelines
1892~~~~~~~~~~~~~~~~
1893
1894"memory.high" is the main mechanism to control memory usage.
1895Over-committing on high limit (sum of high limits > available memory)
1896and letting global memory pressure to distribute memory according to
1897usage is a viable strategy.
1898
1899Because breach of the high limit doesn't trigger the OOM killer but
1900throttles the offending cgroup, a management agent has ample
1901opportunities to monitor and take appropriate actions such as granting
1902more memory or terminating the workload.
1903
Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also perform
equally well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; see the "memory.pressure" file and
:ref:`Documentation/accounting/psi.rst <psi>` for such a measure.
1913
1914
1915Memory Ownership
1916~~~~~~~~~~~~~~~~
1917
1918A memory area is charged to the cgroup which instantiated it and stays
1919charged to the cgroup until the area is released.  Migrating a process
1920to a different cgroup doesn't move the memory usages that it
1921instantiated while in the previous cgroup to the new cgroup.
1922
1923A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is indeterminate; however,
1925over time, the memory area is likely to end up in a cgroup which has
1926enough memory allowance to avoid high reclaim pressure.
1927
1928If a cgroup sweeps a considerable amount of memory which is expected
1929to be accessed repeatedly by other cgroups, it may make sense to use
1930POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1931belonging to the affected files to ensure correct memory ownership.
1932
1933
1934IO
1935--
1936
1937The "io" controller regulates the distribution of IO resources.  This
1938controller implements both weight based and absolute bandwidth or IOPS
limit distribution.  Weight based distribution is implemented by the
IO cost model based controller; see the "io.cost.qos" and
"io.cost.model" files below.
1942
1943
1944IO Interface Files
1945~~~~~~~~~~~~~~~~~~
1946
1947  io.stat
1948	A read-only nested-keyed file.
1949
1950	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1951	The following nested keys are defined.
1952
1953	  ======	=====================
1954	  rbytes	Bytes read
1955	  wbytes	Bytes written
1956	  rios		Number of read IOs
1957	  wios		Number of write IOs
1958	  dbytes	Bytes discarded
1959	  dios		Number of discard IOs
1960	  ======	=====================
1961
1962	An example read output follows::
1963
1964	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1965	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1966
1967  io.cost.qos
1968	A read-write nested-keyed file which exists only on the root
1969	cgroup.
1970
1971	This file configures the Quality of Service of the IO cost
1972	model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1973	currently implements "io.weight" proportional control.  Lines
1974	are keyed by $MAJ:$MIN device numbers and not ordered.  The
1975	line for a given device is populated on the first write for
1976	the device on "io.cost.qos" or "io.cost.model".  The following
1977	nested keys are defined.
1978
1979	  ======	=====================================
1980	  enable	Weight-based control enable
1981	  ctrl		"auto" or "user"
1982	  rpct		Read latency percentile    [0, 100]
1983	  rlat		Read latency threshold
1984	  wpct		Write latency percentile   [0, 100]
1985	  wlat		Write latency threshold
1986	  min		Minimum scaling percentage [1, 10000]
1987	  max		Maximum scaling percentage [1, 10000]
1988	  ======	=====================================
1989
1990	The controller is disabled by default and can be enabled by
1991	setting "enable" to 1.  "rpct" and "wpct" parameters default
1992	to zero and the controller uses internal device saturation
1993	state to adjust the overall IO rate between "min" and "max".
1994
1995	When a better control quality is needed, latency QoS
1996	parameters can be configured.  For example::
1997
1998	  8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0
1999
2000	shows that on sdb, the controller is enabled, will consider
2001	the device saturated if the 95th percentile of read completion
2002	latencies is above 75ms or write 150ms, and adjust the overall
2003	IO issue rate between 50% and 150% accordingly.
2004
2005	The lower the saturation point, the better the latency QoS at
2006	the cost of aggregate bandwidth.  The narrower the allowed
2007	adjustment range between "min" and "max", the more conformant
2008	to the cost model the IO behavior.  Note that the IO issue
2009	base rate may be far off from 100% and setting "min" and "max"
2010	blindly can lead to a significant loss of device capacity or
2011	control quality.  "min" and "max" are useful for regulating
2012	devices which show wide temporary behavior changes - e.g. a
2013	ssd which accepts writes at the line speed for a while and
2014	then completely stalls for multiple seconds.
2015
2016	When "ctrl" is "auto", the parameters are controlled by the
2017	kernel and may change automatically.  Setting "ctrl" to "user"
2018	or setting any of the percentile and latency parameters puts
2019	it into "user" mode and disables the automatic changes.  The
2020	automatic mode can be restored by setting "ctrl" to "auto".
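
	For example, enabling the controller on a device with automatic
	QoS parameters (the device number is illustrative)::

	  # echo "8:16 enable=1" > io.cost.qos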
2021
2022  io.cost.model
2023	A read-write nested-keyed file which exists only on the root
2024	cgroup.
2025
2026	This file configures the cost model of the IO cost model based
2027	controller (CONFIG_BLK_CGROUP_IOCOST) which currently
2028	implements "io.weight" proportional control.  Lines are keyed
2029	by $MAJ:$MIN device numbers and not ordered.  The line for a
2030	given device is populated on the first write for the device on
2031	"io.cost.qos" or "io.cost.model".  The following nested keys
2032	are defined.
2033
2034	  =====		================================
2035	  ctrl		"auto" or "user"
2036	  model		The cost model in use - "linear"
2037	  =====		================================
2038
2039	When "ctrl" is "auto", the kernel may change all parameters
2040	dynamically.  When "ctrl" is set to "user" or any other
	parameters are written to, "ctrl" becomes "user" and the
2042	automatic changes are disabled.
2043
2044	When "model" is "linear", the following model parameters are
2045	defined.
2046
2047	  =============	========================================
2048	  [r|w]bps	The maximum sequential IO throughput
2049	  [r|w]seqiops	The maximum 4k sequential IOs per second
2050	  [r|w]randiops	The maximum 4k random IOs per second
2051	  =============	========================================
2052
2053	From the above, the builtin linear model determines the base
2054	costs of a sequential and random IO and the cost coefficient
2055	for the IO size.  While simple, this model can cover most
2056	common device classes acceptably.
2057
2058	The IO cost model isn't expected to be accurate in absolute
2059	sense and is scaled to the device behavior dynamically.
2060
2061	If needed, tools/cgroup/iocost_coef_gen.py can be used to
2062	generate device-specific coefficients.
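
	For example, feeding in a linear model by hand (the coefficient
	values are illustrative, not measured)::

	  # echo "8:16 rbps=174019176 rseqiops=41708 rrandiops=370 wbps=178075866 wseqiops=42705 wrandiops=378" > io.cost.model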
2063
2064  io.weight
2065	A read-write flat-keyed file which exists on non-root cgroups.
2066	The default is "default 100".
2067
2068	The first line is the default weight applied to devices
2069	without specific override.  The rest are overrides keyed by
2070	$MAJ:$MIN device numbers and not ordered.  The weights are in
	the range [1, 10000] and specify the relative amount of IO time
	the cgroup can use in relation to its siblings.
2073
2074	The default weight can be updated by writing either "default
2075	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
2076	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
2077
2078	An example read output follows::
2079
2080	  default 100
2081	  8:16 200
2082	  8:0 50
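
	The above state could have been set up with the following writes
	(the device numbers are taken from the example)::

	  # echo 100 > io.weight		# update the default weight
	  # echo "8:16 200" > io.weight	# set a per-device override
	  # echo "8:0 50" > io.weight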
2083
2084  io.max
2085	A read-write nested-keyed file which exists on non-root
2086	cgroups.
2087
2088	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
2089	device numbers and not ordered.  The following nested keys are
2090	defined.
2091
2092	  =====		==================================
2093	  rbps		Max read bytes per second
2094	  wbps		Max write bytes per second
2095	  riops		Max read IO operations per second
2096	  wiops		Max write IO operations per second
2097	  =====		==================================
2098
2099	When writing, any number of nested key-value pairs can be
2100	specified in any order.  "max" can be specified as the value
2101	to remove a specific limit.  If the same key is specified
2102	multiple times, the outcome is undefined.
2103
2104	BPS and IOPS are measured in each IO direction and IOs are
2105	delayed if limit is reached.  Temporary bursts are allowed.
2106
2107	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
2108
2109	  echo "8:16 rbps=2097152 wiops=120" > io.max
2110
2111	Reading returns the following::
2112
2113	  8:16 rbps=2097152 wbps=max riops=max wiops=120
2114
2115	Write IOPS limit can be removed by writing the following::
2116
2117	  echo "8:16 wiops=max" > io.max
2118
2119	Reading now returns the following::
2120
2121	  8:16 rbps=2097152 wbps=max riops=max wiops=max
2122
2123  io.pressure
2124	A read-only nested-keyed file.
2125
2126	Shows pressure stall information for IO. See
2127	:ref:`Documentation/accounting/psi.rst <psi>` for details.
2128
2129
2130Writeback
2131~~~~~~~~~
2132
2133Page cache is dirtied through buffered writes and shared mmaps and
2134written asynchronously to the backing filesystem by the writeback
2135mechanism.  Writeback sits between the memory and IO domains and
2136regulates the proportion of dirty memory by balancing dirtying and
2137write IOs.
2138
2139The io controller, in conjunction with the memory controller,
2140implements control of page cache writeback IOs.  The memory controller
2141defines the memory domain that dirty memory ratio is calculated and
2142maintained for and the io controller defines the io domain which
2143writes out dirty pages for the memory domain.  Both system-wide and
2144per-cgroup dirty memory states are examined and the more restrictive
2145of the two is enforced.
2146
2147cgroup writeback requires explicit support from the underlying
2148filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
2149btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
2150attributed to the root cgroup.
2151
There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of writeback, an
2155inode is assigned to a cgroup and all IO requests to write dirty pages
2156from the inode are attributed to that cgroup.
2157
2158As cgroup ownership for memory is tracked per page, there can be pages
2159which are associated with different cgroups than the one the inode is
2160associated with.  These are called foreign pages.  The writeback
2161constantly keeps track of foreign pages and, if a particular foreign
2162cgroup becomes the majority over a certain period of time, switches
2163the ownership of the inode to that cgroup.
2164
2165While this model is enough for most use cases where a given inode is
2166mostly dirtied by a single cgroup even when the main writing cgroup
2167changes over time, use cases where multiple cgroups write to a single
2168inode simultaneously are not supported well.  In such circumstances, a
2169significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
2171doesn't update it until the page is released, even if writeback
2172strictly follows page ownership, multiple cgroups dirtying overlapping
2173areas wouldn't work as expected.  It's recommended to avoid such usage
2174patterns.
2175
2176The sysctl knobs which affect writeback behavior are applied to cgroup
2177writeback as follows.
2178
2179  vm.dirty_background_ratio, vm.dirty_ratio
2180	These ratios apply the same to cgroup writeback with the
2181	amount of available memory capped by limits imposed by the
2182	memory controller and system-wide clean memory.
2183
2184  vm.dirty_background_bytes, vm.dirty_bytes
2185	For cgroup writeback, this is calculated into ratio against
2186	total available memory and applied the same way as
2187	vm.dirty[_background]_ratio.
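
As a rough example, with vm.dirty_ratio set to 20 and a cgroup whose
available memory is capped at 10G by the memory controller, writers in
that cgroup start being throttled at roughly 2G of dirty memory, unless
the system-wide threshold is reached first (a sketch; the exact
calculation also accounts for free and reclaimable memory).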
2188
2189
2190IO Latency
2191~~~~~~~~~~
2192
2193This is a cgroup v2 controller for IO workload protection.  You provide a group
with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a higher latency target than the
protected workload.
2197
2198The limits are only applied at the peer level in the hierarchy.  This means that
2199in the diagram below, only groups A, B, and C will influence each other, and
2200groups D and F will influence each other.  Group G will influence nobody::
2201
2202			[root]
2203		/	   |		\
2204		A	   B		C
2205	       /  \        |
2206	      D    F	   G
2207
2208
2209So the ideal way to configure this is to set io.latency in groups A, B, and C.
2210Generally you do not want to set a value lower than the latency your device
2211supports.  Experiment to find the value that works best for your workload.
2212Start at higher than the expected latency for your device and watch the
2213avg_lat value in io.stat for your workload group to get an idea of the
2214latency you see during normal operation.  Use the avg_lat value as a basis for
2215your real setting, setting at 10-15% higher than the value in io.stat.
2216
2217How IO Latency Throttling Works
2218~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2219
2220io.latency is work conserving; so as long as everybody is meeting their latency
2221target the controller doesn't do anything.  Once a group starts missing its
2222target it begins throttling any peer group that has a higher target than itself.
2223This throttling takes 2 forms:
2224
- Queue depth throttling.  This is the number of outstanding IOs a group is
2226  allowed to have.  We will clamp down relatively quickly, starting at no limit
2227  and going all the way down to 1 IO at a time.
2228
2229- Artificial delay induction.  There are certain types of IO that cannot be
2230  throttled without possibly adversely affecting higher priority groups.  This
2231  includes swapping and metadata IO.  These types of IO are allowed to occur
2232  normally, however they are "charged" to the originating group.  If the
2233  originating group is being throttled you will see the use_delay and delay
2234  fields in io.stat increase.  The delay value is how many microseconds that are
2235  being added to any process that runs in this group.  Because this number can
  grow quite large if there is a lot of swapping or metadata IO occurring, we
2237  limit the individual delay events to 1 second at a time.
2238
2239Once the victimized group starts meeting its latency target again it will start
2240unthrottling any peer groups that were throttled previously.  If the victimized
2241group simply stops doing IO the global counter will unthrottle appropriately.
2242
2243IO Latency Interface Files
2244~~~~~~~~~~~~~~~~~~~~~~~~~~
2245
2246  io.latency
2247	This takes a similar format as the other controllers.
2248
2249		"MAJOR:MINOR target=<target time in microseconds>"
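
	For example, to set a 10ms latency target on device 8:16 (the
	device number is illustrative)::

	  # echo "8:16 target=10000" > io.latency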
2250
2251  io.stat
2252	If the controller is enabled you will see extra stats in io.stat in
2253	addition to the normal ones.
2254
2255	  depth
2256		This is the current queue depth for the group.
2257
2258	  avg_lat
2259		This is an exponential moving average with a decay rate of 1/exp
2260		bound by the sampling interval.  The decay rate interval can be
2261		calculated by multiplying the win value in io.stat by the
2262		corresponding number of samples based on the win value.
2263
2264	  win
2265		The sampling window size in milliseconds.  This is the minimum
2266		duration of time between evaluation events.  Windows only elapse
2267		with IO activity.  Idle periods extend the most recent window.
2268
2269IO Priority
2270~~~~~~~~~~~
2271
2272A single attribute controls the behavior of the I/O priority cgroup policy,
2273namely the io.prio.class attribute. The following values are accepted for
2274that attribute:
2275
2276  no-change
2277	Do not modify the I/O priority class.
2278
2279  promote-to-rt
2280	For requests that have a non-RT I/O priority class, change it into RT.
2281	Also change the priority level of these requests to 4. Do not modify
2282	the I/O priority of requests that have priority class RT.
2283
2284  restrict-to-be
2285	For requests that do not have an I/O priority class or that have I/O
2286	priority class RT, change it into BE. Also change the priority level
2287	of these requests to 0. Do not modify the I/O priority class of
2288	requests that have priority class IDLE.
2289
2290  idle
2291	Change the I/O priority class of all requests into IDLE, the lowest
2292	I/O priority class.
2293
2294  none-to-rt
2295	Deprecated. Just an alias for promote-to-rt.
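
For example, to restrict all requests issued from a cgroup to the
best-effort class (the cgroup path is illustrative)::

  # echo restrict-to-be > /sys/fs/cgroup/app/io.prio.class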
2296
2297The following numerical values are associated with the I/O priority policies:
2298
2299+----------------+---+
2300| no-change      | 0 |
2301+----------------+---+
2302| promote-to-rt  | 1 |
2303+----------------+---+
2304| restrict-to-be | 2 |
2305+----------------+---+
2306| idle           | 3 |
2307+----------------+---+
2308
2309The numerical value that corresponds to each I/O priority class is as follows:
2310
2311+-------------------------------+---+
2312| IOPRIO_CLASS_NONE             | 0 |
2313+-------------------------------+---+
2314| IOPRIO_CLASS_RT (real-time)   | 1 |
2315+-------------------------------+---+
2316| IOPRIO_CLASS_BE (best effort) | 2 |
2317+-------------------------------+---+
2318| IOPRIO_CLASS_IDLE             | 3 |
2319+-------------------------------+---+
2320
2321The algorithm to set the I/O priority class for a request is as follows:
2322
2323- If I/O priority class policy is promote-to-rt, change the request I/O
2324  priority class to IOPRIO_CLASS_RT and change the request I/O priority
2325  level to 4.
2326- If I/O priority class policy is not promote-to-rt, translate the I/O priority
2327  class policy into a number, then change the request I/O priority class
2328  into the maximum of the I/O priority class policy number and the numerical
2329  I/O priority class.
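
For example, with the restrict-to-be policy (2), a request with I/O
priority class IOPRIO_CLASS_RT (1) is changed to max(2, 1) = 2
(IOPRIO_CLASS_BE), while a request with IOPRIO_CLASS_IDLE (3) keeps
max(2, 3) = 3 and stays IDLE.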
2330
2331PID
2332---
2333
2334The process number controller is used to allow a cgroup to stop any
2335new tasks from being fork()'d or clone()'d after a specified limit is
2336reached.
2337
2338The number of tasks in a cgroup can be exhausted in ways which other
2339controllers cannot prevent, thus warranting its own controller.  For
2340example, a fork bomb is likely to exhaust the number of tasks before
2341hitting memory restrictions.
2342
2343Note that PIDs used in this controller refer to TIDs, process IDs as
2344used by the kernel.
2345
2346
2347PID Interface Files
2348~~~~~~~~~~~~~~~~~~~
2349
2350  pids.max
2351	A read-write single value file which exists on non-root
2352	cgroups.  The default is "max".
2353
2354	Hard limit of number of processes.
2355
2356  pids.current
2357	A read-only single value file which exists on non-root cgroups.
2358
2359	The number of processes currently in the cgroup and its
2360	descendants.
2361
2362  pids.peak
2363	A read-only single value file which exists on non-root cgroups.
2364
2365	The maximum value that the number of processes in the cgroup and its
2366	descendants has ever reached.
2367
2368  pids.events
2369	A read-only flat-keyed file which exists on non-root cgroups. Unless
2370	specified otherwise, a value change in this file generates a file
2371	modified event. The following entries are defined.
2372
2373	  max
2374		The number of times the cgroup's total number of processes hit the pids.max
2375		limit (see also pids_localevents).
2376
2377  pids.events.local
2378	Similar to pids.events but the fields in the file are local
2379	to the cgroup i.e. not hierarchical. The file modified event
2380	generated on this file reflects only the local events.
2381
2382Organisational operations are not blocked by cgroup policies, so it is
2383possible to have pids.current > pids.max.  This can be done by either
2384setting the limit to be smaller than pids.current, or attaching enough
2385processes to the cgroup such that pids.current is larger than
2386pids.max.  However, it is not possible to violate a cgroup PID policy
2387through fork() or clone(). These will return -EAGAIN if the creation
2388of a new process would cause a cgroup policy to be violated.
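
For example, lowering the limit below the current count is allowed,
but subsequent forks in the cgroup will then fail (a sketch; the
counts are illustrative)::

  # cat pids.current
  3
  # echo 2 > pids.max	# allowed even though pids.current is 3
  # sleep 1 &		# a new fork() in the cgroup fails with -EAGAIN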
2389
2390
2391Cpuset
2392------
2393
2394The "cpuset" controller provides a mechanism for constraining
2395the CPU and memory node placement of tasks to only the resources
2396specified in the cpuset interface files in a task's current cgroup.
2397This is especially valuable on large NUMA systems where placing jobs
2398on properly sized subsets of the systems with careful processor and
2399memory placement to reduce cross-node memory access and contention
2400can improve overall system performance.
2401
The "cpuset" controller is hierarchical.  That means a child cgroup
cannot use CPUs or memory nodes not allowed in its parent.
2404
2405
2406Cpuset Interface Files
2407~~~~~~~~~~~~~~~~~~~~~~
2408
2409  cpuset.cpus
2410	A read-write multiple values file which exists on non-root
2411	cpuset-enabled cgroups.
2412
2413	It lists the requested CPUs to be used by tasks within this
2414	cgroup.  The actual list of CPUs to be granted, however, is
2415	subjected to constraints imposed by its parent and can differ
2416	from the requested CPUs.
2417
2418	The CPU numbers are comma-separated numbers or ranges.
2419	For example::
2420
2421	  # cat cpuset.cpus
2422	  0-4,6,8-10
2423
2424	An empty value indicates that the cgroup is using the same
2425	setting as the nearest cgroup ancestor with a non-empty
2426	"cpuset.cpus" or all the available CPUs if none is found.
2427
2428	The value of "cpuset.cpus" stays constant until the next update
2429	and won't be affected by any CPU hotplug events.
2430
2431  cpuset.cpus.effective
2432	A read-only multiple values file which exists on all
2433	cpuset-enabled cgroups.
2434
2435	It lists the onlined CPUs that are actually granted to this
2436	cgroup by its parent.  These CPUs are allowed to be used by
2437	tasks within the current cgroup.
2438
2439	If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2440	all the CPUs from the parent cgroup that can be available to
2441	be used by this cgroup.  Otherwise, it should be a subset of
2442	"cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2443	can be granted.  In this case, it will be treated just like an
2444	empty "cpuset.cpus".
2445
2446	Its value will be affected by CPU hotplug events.
2447
2448  cpuset.mems
2449	A read-write multiple values file which exists on non-root
2450	cpuset-enabled cgroups.
2451
2452	It lists the requested memory nodes to be used by tasks within
2453	this cgroup.  The actual list of memory nodes granted, however,
2454	is subjected to constraints imposed by its parent and can differ
2455	from the requested memory nodes.
2456
2457	The memory node numbers are comma-separated numbers or ranges.
2458	For example::
2459
2460	  # cat cpuset.mems
2461	  0-1,3
2462
2463	An empty value indicates that the cgroup is using the same
2464	setting as the nearest cgroup ancestor with a non-empty
2465	"cpuset.mems" or all the available memory nodes if none
2466	is found.
2467
2468	The value of "cpuset.mems" stays constant until the next update
2469	and won't be affected by any memory nodes hotplug events.
2470
2471	Setting a non-empty value to "cpuset.mems" causes memory of
2472	tasks within the cgroup to be migrated to the designated nodes if
2473	they are currently using memory outside of the designated nodes.
2474
2475	There is a cost for this memory migration.  The migration
2476	may not be complete and some memory pages may be left behind.
2477	So it is recommended that "cpuset.mems" should be set properly
2478	before spawning new tasks into the cpuset.  Even if there is
2479	a need to change "cpuset.mems" with active tasks, it shouldn't
2480	be done frequently.
2481
2482  cpuset.mems.effective
2483	A read-only multiple values file which exists on all
2484	cpuset-enabled cgroups.
2485
2486	It lists the onlined memory nodes that are actually granted to
2487	this cgroup by its parent. These memory nodes are allowed to
2488	be used by tasks within the current cgroup.
2489
2490	If "cpuset.mems" is empty, it shows all the memory nodes from the
2491	parent cgroup that will be available to be used by this cgroup.
2492	Otherwise, it should be a subset of "cpuset.mems" unless none of
2493	the memory nodes listed in "cpuset.mems" can be granted.  In this
2494	case, it will be treated just like an empty "cpuset.mems".
2495
2496	Its value will be affected by memory nodes hotplug events.
2497
2498  cpuset.cpus.exclusive
2499	A read-write multiple values file which exists on non-root
2500	cpuset-enabled cgroups.
2501
2502	It lists all the exclusive CPUs that are allowed to be used
2503	to create a new cpuset partition.  Its value is not used
2504	unless the cgroup becomes a valid partition root.  See the
2505	"cpuset.cpus.partition" section below for a description of what
2506	a cpuset partition is.
2507
2508	When the cgroup becomes a partition root, the actual exclusive
2509	CPUs that are allocated to that partition are listed in
2510	"cpuset.cpus.exclusive.effective" which may be different
2511	from "cpuset.cpus.exclusive".  If "cpuset.cpus.exclusive"
2512	has previously been set, "cpuset.cpus.exclusive.effective"
2513	is always a subset of it.
2514
2515	Users can manually set it to a value that is different from
2516	"cpuset.cpus".	One constraint in setting it is that the list of
2517	CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
	of its siblings.  If "cpuset.cpus.exclusive" of a sibling cgroup
	isn't set, its "cpuset.cpus" value, if set, cannot be a subset of
	it, so that at least one CPU remains available when the exclusive
	CPUs are taken away.
2522
2523	For a parent cgroup, any one of its exclusive CPUs can only
2524	be distributed to at most one of its child cgroups.  Having an
2525	exclusive CPU appearing in two or more of its child cgroups is
2526	not allowed (the exclusivity rule).  A value that violates the
2527	exclusivity rule will be rejected with a write error.
2528
2529	The root cgroup is a partition root and all its available CPUs
2530	are in its exclusive CPU set.
2531
2532  cpuset.cpus.exclusive.effective
2533	A read-only multiple values file which exists on all non-root
2534	cpuset-enabled cgroups.
2535
2536	This file shows the effective set of exclusive CPUs that
2537	can be used to create a partition root.  The content
2538	of this file will always be a subset of its parent's
2539	"cpuset.cpus.exclusive.effective" if its parent is not the root
2540	cgroup.  It will also be a subset of "cpuset.cpus.exclusive"
2541	if it is set.  If "cpuset.cpus.exclusive" is not set, it is
2542	treated to have an implicit value of "cpuset.cpus" in the
2543	formation of local partition.
2544
2545  cpuset.cpus.isolated
	A read-only multiple values file which exists only on the root
	cgroup.
2547
2548	This file shows the set of all isolated CPUs used in existing
2549	isolated partitions. It will be empty if no isolated partition
2550	is created.
2551
2552  cpuset.cpus.partition
2553	A read-write single value file which exists on non-root
2554	cpuset-enabled cgroups.  This flag is owned by the parent cgroup
2555	and is not delegatable.
2556
2557	It accepts only the following input values when written to.
2558
2559	  ==========	=====================================
2560	  "member"	Non-root member of a partition
2561	  "root"	Partition root
2562	  "isolated"	Partition root without load balancing
2563	  ==========	=====================================
2564
2565	A cpuset partition is a collection of cpuset-enabled cgroups with
2566	a partition root at the top of the hierarchy and its descendants
2567	except those that are separate partition roots themselves and
2568	their descendants.  A partition has exclusive access to the
2569	set of exclusive CPUs allocated to it.	Other cgroups outside
2570	of that partition cannot use any CPUs in that set.
2571
2572	There are two types of partitions - local and remote.  A local
2573	partition is one whose parent cgroup is also a valid partition
2574	root.  A remote partition is one whose parent cgroup is not a
2575	valid partition root itself.  Writing to "cpuset.cpus.exclusive"
2576	is optional for the creation of a local partition as its
2577	"cpuset.cpus.exclusive" file will assume an implicit value that
2578	is the same as "cpuset.cpus" if it is not set.	Writing the
2579	proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2580	before the target partition root is mandatory for the creation
2581	of a remote partition.
2582
2583	Currently, a remote partition cannot be created under a local
2584	partition.  All the ancestors of a remote partition root except
2585	the root cgroup cannot be a partition root.
2586
2587	The root cgroup is always a partition root and its state cannot
2588	be changed.  All other non-root cgroups start out as "member".
2589
2590	When set to "root", the current cgroup is the root of a new
2591	partition or scheduling domain.  The set of exclusive CPUs is
2592	determined by the value of its "cpuset.cpus.exclusive.effective".
2593
2594	When set to "isolated", the CPUs in that partition will be in
2595	an isolated state without any load balancing from the scheduler
2596	and excluded from the unbound workqueues.  Tasks placed in such
2597	a partition with multiple CPUs should be carefully distributed
2598	and bound to each of the individual CPUs for optimal performance.
2599
2600	A partition root ("root" or "isolated") can be in one of the
2601	two possible states - valid or invalid.  An invalid partition
2602	root is in a degraded state where some state information may
2603	be retained, but behaves more like a "member".
2604
2605	All possible state transitions among "member", "root" and
2606	"isolated" are allowed.
2607
2608	On read, the "cpuset.cpus.partition" file can show the following
2609	values.
2610
2611	  =============================	=====================================
2612	  "member"			Non-root member of a partition
2613	  "root"			Partition root
2614	  "isolated"			Partition root without load balancing
2615	  "root invalid (<reason>)"	Invalid partition root
2616	  "isolated invalid (<reason>)"	Invalid isolated partition root
2617	  =============================	=====================================
2618
2619	In the case of an invalid partition root, a descriptive string on
2620	why the partition is invalid is included within parentheses.
2621
2622	For a local partition root to be valid, the following conditions
2623	must be met.
2624
2625	1) The parent cgroup is a valid partition root.
2626	2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2627	   though it may contain offline CPUs.
2628	3) The "cpuset.cpus.effective" cannot be empty unless there is
2629	   no task associated with this partition.
2630
2631	For a remote partition root to be valid, all the above conditions
2632	except the first one must be met.
2633
2634	External events like hotplug or changes to "cpuset.cpus" or
2635	"cpuset.cpus.exclusive" can cause a valid partition root to
2636	become invalid and vice versa.	Note that a task cannot be
2637	moved to a cgroup with empty "cpuset.cpus.effective".
2638
2639	A valid non-root parent partition may distribute out all its CPUs
2640	to its child local partitions when there is no task associated
2641	with it.
2642
	Care must be taken when changing a valid partition root to
	"member", as all its child local partitions, if present, will
	become invalid, causing disruption to tasks running in those
	child partitions.  These inactivated partitions can be recovered
	if their parent is switched back to a partition root with a
	proper value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2649
2650	Poll and inotify events are triggered whenever the state of
2651	"cpuset.cpus.partition" changes.  That includes changes caused
2652	by write to "cpuset.cpus.partition", cpu hotplug or other
2653	changes that modify the validity status of the partition.
2654	This will allow user space agents to monitor unexpected changes
2655	to "cpuset.cpus.partition" without the need to do continuous
2656	polling.
2657
2658	A user can pre-configure certain CPUs to an isolated state
2659	with load balancing disabled at boot time with the "isolcpus"
2660	kernel boot command line option.  If those CPUs are to be put
2661	into a partition, they have to be used in an isolated partition.
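
	For example, carving CPUs 2-3 out of a parent partition into an
	isolated local partition (the paths and CPU numbers are
	illustrative)::

	  # echo 2-3 > child/cpuset.cpus
	  # echo 2-3 > child/cpuset.cpus.exclusive
	  # echo isolated > child/cpuset.cpus.partition
	  # cat child/cpuset.cpus.partition
	  isolated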
2662
2663
2664Device controller
2665-----------------
2666
The device controller manages access to device files.  It includes
both the creation of new device files (using mknod) and access to
existing device files.
2670
2671Cgroup v2 device controller has no interface files and is implemented
2672on top of cgroup BPF. To control access to device files, a user may
2673create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2674them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2675device file, corresponding BPF programs will be executed, and depending
2676on the return value the attempt will succeed or fail with -EPERM.
2677
2678A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2679bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2680access type (mknod/read/write) and device (type, major and minor numbers).
2681If the program returns 0, the attempt fails with -EPERM, otherwise it
2682succeeds.
2683
2684An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2685tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
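
As a sketch, such a program could be loaded and attached with bpftool
(the object and pin paths are illustrative)::

  # bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_prog
  # bpftool cgroup attach /sys/fs/cgroup/app device pinned /sys/fs/bpf/dev_prog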
2686
2687
2688RDMA
2689----
2690
2691The "rdma" controller regulates the distribution and accounting of
2692RDMA resources.
2693
2694RDMA Interface Files
2695~~~~~~~~~~~~~~~~~~~~
2696
2697  rdma.max
	A read-write nested-keyed file that exists for all cgroups
	except the root.  It describes the current configured resource
	limits for an RDMA/IB device.
2701
2702	Lines are keyed by device name and are not ordered.
2703	Each line contains space separated resource name and its configured
2704	limit that can be distributed.
2705
2706	The following nested keys are defined.
2707
2708	  ==========	=============================
2709	  hca_handle	Maximum number of HCA Handles
2710	  hca_object 	Maximum number of HCA Objects
2711	  ==========	=============================
2712
2713	An example for mlx4 and ocrdma device follows::
2714
2715	  mlx4_0 hca_handle=2 hca_object=2000
2716	  ocrdma1 hca_handle=3 hca_object=max
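
	Limits can be configured by writing lines in the same format,
	for example (the device names are illustrative)::

	  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max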
2717
2718  rdma.current
2719	A read-only file that describes current resource usage.
	It exists for all cgroups except the root.
2721
2722	An example for mlx4 and ocrdma device follows::
2723
2724	  mlx4_0 hca_handle=1 hca_object=20
2725	  ocrdma1 hca_handle=1 hca_object=23
2726
2727DMEM
2728----
2729
2730The "dmem" controller regulates the distribution and accounting of
2731device memory regions. Because each memory region may have its own page size,
2732which does not have to be equal to the system page size, the units are always bytes.
2733
2734DMEM Interface Files
2735~~~~~~~~~~~~~~~~~~~~
2736
2737  dmem.max, dmem.min, dmem.low
	A read-write nested-keyed file that exists for all cgroups
	except the root.  It describes the current configured resource
	limits for a region.
2741
2742	An example for xe follows::
2743
2744	  drm/0000:03:00.0/vram0 1073741824
2745	  drm/0000:03:00.0/stolen max
2746
2747	The semantics are the same as for the memory cgroup controller, and are
2748	calculated in the same way.
2749
2750  dmem.capacity
2751	A read-only file that describes maximum region capacity.
2752	It only exists on the root cgroup. Not all memory can be
2753	allocated by cgroups, as the kernel reserves some for
2754	internal use.
2755
2756	An example for xe follows::
2757
2758	  drm/0000:03:00.0/vram0 8514437120
2759	  drm/0000:03:00.0/stolen 67108864
2760
2761  dmem.current
2762	A read-only file that describes current resource usage.
	It exists for all cgroups except the root.
2764
2765	An example for xe follows::
2766
2767	  drm/0000:03:00.0/vram0 12550144
2768	  drm/0000:03:00.0/stolen 8650752
2769
2770HugeTLB
2771-------
2772
The HugeTLB controller allows limiting HugeTLB usage per control group
and enforces the limit at page fault time.
2775
2776HugeTLB Interface Files
2777~~~~~~~~~~~~~~~~~~~~~~~
2778
2779  hugetlb.<hugepagesize>.current
	Shows the current usage of "hugepagesize" hugetlb pages.  It
	exists for all cgroups except the root.
2782
2783  hugetlb.<hugepagesize>.max
2784	Set/show the hard limit of "hugepagesize" hugetlb usage.
	The default value is "max".  It exists for all cgroups except the root.
2786
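	For example, with 2MB huge pages (the exact size string depends
	on the architecture and configuration)::

	  # echo 1G > hugetlb.2MB.max
	  # cat hugetlb.2MB.current
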
2787  hugetlb.<hugepagesize>.events
2788	A read-only flat-keyed file which exists on non-root cgroups.
2789
2790	  max
		The number of allocation failures due to the HugeTLB limit
2792
2793  hugetlb.<hugepagesize>.events.local
2794	Similar to hugetlb.<hugepagesize>.events but the fields in the file
2795	are local to the cgroup i.e. not hierarchical. The file modified event
2796	generated on this file reflects only the local events.
2797
2798  hugetlb.<hugepagesize>.numa_stat
	Similar to memory.numa_stat, it shows the NUMA information of
	the hugetlb pages of <hugepagesize> in this cgroup.  Only
	actively in-use hugetlb pages are included.  The per-node
	values are in bytes.
2802
2803Misc
2804----
2805
The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.
2810
A resource can be added to the controller via enum misc_res_type{} in
the include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file.  The provider of the
resource must set its capacity by calling misc_cg_set_capacity()
before the resource can be used.

Once a capacity is set, resource usage can be updated using the charge
and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.
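
A minimal sketch of a resource provider follows.  The resource
MISC_CG_RES_EXAMPLE and the helper names are hypothetical; only
misc_cg_set_capacity(), misc_cg_try_charge() and misc_cg_uncharge()
are actual APIs declared in include/linux/misc_cgroup.h::

  #include <linux/misc_cgroup.h>

  /*
   * Hypothetical resource, assumed to have been added to
   * enum misc_res_type and named in misc_res_name[].
   */
  static int example_provider_init(void)
  {
	/* Announce the total number of units before any charging. */
	return misc_cg_set_capacity(MISC_CG_RES_EXAMPLE, 50);
  }

  static int example_alloc_unit(struct misc_cg *cg)
  {
	/* Fails if usage would exceed misc.max anywhere up the tree. */
	return misc_cg_try_charge(MISC_CG_RES_EXAMPLE, cg, 1);
  }

  static void example_free_unit(struct misc_cg *cg)
  {
	misc_cg_uncharge(MISC_CG_RES_EXAMPLE, cg, 1);
  }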
2819
2820Misc Interface Files
2821~~~~~~~~~~~~~~~~~~~~
2822
The miscellaneous controller provides the following interface files.
Assuming two misc resources (res_a and res_b) are registered:
2824
2825  misc.capacity
2826        A read-only flat-keyed file shown only in the root cgroup.  It shows
2827        miscellaneous scalar resources available on the platform along with
2828        their quantities::
2829
2830	  $ cat misc.capacity
2831	  res_a 50
2832	  res_b 10
2833
2834  misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its
        children::
2837
2838	  $ cat misc.current
2839	  res_a 3
2840	  res_b 0
2841
2842  misc.peak
2843        A read-only flat-keyed file shown in all cgroups.  It shows the
2844        historical maximum usage of the resources in the cgroup and its
        children::
2846
2847	  $ cat misc.peak
2848	  res_a 10
2849	  res_b 8
2850
2851  misc.max
        A read-write flat-keyed file shown in non-root cgroups.  It
        shows the allowed maximum usage of the resources in the cgroup
        and its children::
2854
2855	  $ cat misc.max
2856	  res_a max
2857	  res_b 4
2858
	A limit can be set by::
2860
2861	  # echo res_a 1 > misc.max
2862
	The limit can be set to "max" by::
2864
2865	  # echo res_a max > misc.max
2866
2867        Limits can be set higher than the capacity value in the misc.capacity
2868        file.
2869
2870  misc.events
2871	A read-only flat-keyed file which exists on non-root cgroups. The
2872	following entries are defined. Unless specified otherwise, a value
2873	change in this file generates a file modified event. All fields in
2874	this file are hierarchical.
2875
2876	  max
2877		The number of times the cgroup's resource usage was
2878		about to go over the max boundary.
2879
2880  misc.events.local
2881        Similar to misc.events but the fields in the file are local to the
2882        cgroup i.e. not hierarchical. The file modified event generated on
2883        this file reflects only the local events.
2884
2885Migration and Ownership
2886~~~~~~~~~~~~~~~~~~~~~~~
2887
2888A miscellaneous scalar resource is charged to the cgroup in which it is used
2889first, and stays charged to that cgroup until that resource is freed. Migrating
2890a process to a different cgroup does not move the charge to the destination
2891cgroup where the process has moved.
2892
2893Others
2894------
2895
2896perf_event
2897~~~~~~~~~~
2898
The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.
2903
2904
2905Non-normative information
2906-------------------------
2907
2908This section contains information that isn't considered to be a part of
2909the stable kernel API and so is subject to change.
2910
2911
2912CPU controller root cgroup process behaviour
2913~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2914
When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  The weight of this child cgroup depends on the
thread's nice level.
2919
For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so that the neutral - nice 0 - value is 100 instead of
1024).
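
As an illustration, scaling a few entries of that array by 100/1024
yields approximately the following effective weights:

  ====  ===========  =======================
  nice  array value  approx effective weight
  ====  ===========  =======================
  -20   88761        8668
  0     1024         100
  10    110          10.7
  19    15           1.5
  ====  ===========  =======================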
2923
2924
2925IO controller root cgroup process behaviour
2926~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2927
2928Root cgroup processes are hosted in an implicit leaf child node.
2929When distributing IO resources this implicit child node is taken into
account as if it were a normal child cgroup of the root cgroup with a
2931weight value of 200.
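
For example, if the root cgroup has a single ordinary child cgroup
with the default weight of 100, the implicit node hosting the root's
processes receives 200 of the total 300 weight, i.e. roughly two
thirds of the contended IO capacity.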
2932
2933
2934Namespace
2935=========
2936
2937Basics
2938------
2939
2940cgroup namespace provides a mechanism to virtualize the view of the
2941"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
2942flag can be used with clone(2) and unshare(2) to create a new cgroup
2943namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  The
2945cgroupns root is the cgroup of the process at the time of creation of
2946the cgroup namespace.
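
As a minimal illustrative sketch, a process can place itself into a
new cgroup namespace as follows; CAP_SYS_ADMIN in the caller's user
namespace is assumed::

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
	/*
	 * Create a new cgroup namespace and move this process into
	 * it; the current cgroup becomes the cgroupns root.
	 */
	if (unshare(CLONE_NEWCGROUP) == -1) {
		perror("unshare");
		return EXIT_FAILURE;
	}

	/* /proc/self/cgroup is now virtualized and reads "0::/". */
	return system("cat /proc/self/cgroup") == 0 ?
		EXIT_SUCCESS : EXIT_FAILURE;
  }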
2947
2948Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes,
the "/proc/$PID/cgroup" file may leak potential system-level
information to the isolated processes.  For example::
2953
2954  # cat /proc/self/cgroup
2955  0::/batchjobs/container_id1
2956
The path '/batchjobs/container_id1' can be considered system data that
is undesirable to expose to the isolated processes.  cgroup namespace
2959can be used to restrict visibility of this path.  For example, before
2960creating a cgroup namespace, one would see::
2961
2962  # ls -l /proc/self/ns/cgroup
2963  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2964  # cat /proc/self/cgroup
2965  0::/batchjobs/container_id1
2966
2967After unsharing a new namespace, the view changes::
2968
2969  # ls -l /proc/self/ns/cgroup
2970  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2971  # cat /proc/self/cgroup
2972  0::/
2973
2974When some thread from a multi-threaded process unshares its cgroup
2975namespace, the new cgroupns gets applied to the entire process (all
2976the threads).  This is natural for the v2 hierarchy; however, for the
2977legacy hierarchies, this may be unexpected.
2978
2979A cgroup namespace is alive as long as there are processes inside or
2980mounts pinning it.  When the last usage goes away, the cgroup
2981namespace is destroyed.  The cgroupns root and the actual cgroups
2982remain.
2983
2984
2985The Root and Views
2986------------------
2987
2988The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2989process calling unshare(2) is running.  For example, if a process in
2990/batchjobs/container_id1 cgroup calls unshare, cgroup
2991/batchjobs/container_id1 becomes the cgroupns root.  For the
2992init_cgroup_ns, this is the real root ('/') cgroup.
2993
2994The cgroupns root cgroup does not change even if the namespace creator
2995process later moves to a different cgroup::
2996
2997  # ~/unshare -c # unshare cgroupns in some cgroup
2998  # cat /proc/self/cgroup
2999  0::/
3000  # mkdir sub_cgrp_1
3001  # echo 0 > sub_cgrp_1/cgroup.procs
3002  # cat /proc/self/cgroup
3003  0::/sub_cgrp_1
3004
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
3006
3007Processes running inside the cgroup namespace will be able to see
3008cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
3009From within an unshared cgroupns::
3010
3011  # sleep 100000 &
3012  [1] 7353
3013  # echo 7353 > sub_cgrp_1/cgroup.procs
3014  # cat /proc/7353/cgroup
3015  0::/sub_cgrp_1
3016
3017From the initial cgroup namespace, the real cgroup path will be
3018visible::
3019
3020  $ cat /proc/7353/cgroup
3021  0::/batchjobs/container_id1/sub_cgrp_1
3022
3023From a sibling cgroup namespace (that is, a namespace rooted at a
3024different cgroup), the cgroup path relative to its own cgroup
3025namespace root will be shown.  For instance, if PID 7353's cgroup
3026namespace root is at '/batchjobs/container_id2', then it will see::
3027
3028  # cat /proc/7353/cgroup
3029  0::/../container_id2/sub_cgrp_1
3030
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
3033
3034
3035Migration and setns(2)
3036----------------------
3037
3038Processes inside a cgroup namespace can move into and out of the
3039namespace root if they have proper access to external cgroups.  For
3040example, from inside a namespace with cgroupns root at
3041/batchjobs/container_id1, and assuming that the global hierarchy is
3042still accessible inside cgroupns::
3043
3044  # cat /proc/7353/cgroup
3045  0::/sub_cgrp_1
3046  # echo 7353 > batchjobs/container_id2/cgroup.procs
3047  # cat /proc/7353/cgroup
3048  0::/../container_id2
3049
Note that this kind of setup is not encouraged.  A task inside a
cgroup namespace should only be exposed to its own cgroupns hierarchy.
3052
3053setns(2) to another cgroup namespace is allowed when:
3054
3055(a) the process has CAP_SYS_ADMIN against its current user namespace
3056(b) the process has CAP_SYS_ADMIN against the target cgroup
3057    namespace's userns
3058
No implicit cgroup changes happen when attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
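
A hedged sketch of such an attach follows; the target PID (7353,
reused from the examples above) is purely illustrative::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
	int fd = open("/proc/7353/ns/cgroup", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Needs CAP_SYS_ADMIN in both user namespaces (see above). */
	if (setns(fd, CLONE_NEWCGROUP) == -1) {
		perror("setns");
		close(fd);
		return 1;
	}
	close(fd);
	/*
	 * The process's own cgroup is unchanged; move it under the
	 * target cgroup namespace root via cgroup.procs separately.
	 */
	return 0;
  }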
3062
3063
3064Interaction with Other Namespaces
3065---------------------------------
3066
A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::
3069
3070  # mount -t cgroup2 none $MOUNT_POINT
3071
3072This will mount the unified cgroup hierarchy with cgroupns root as the
3073filesystem root.  The process needs CAP_SYS_ADMIN against its user and
3074mount namespaces.
3075
The virtualization of the /proc/self/cgroup file, combined with
restricting the view of the cgroup hierarchy via a namespace-private
cgroupfs mount, provides a properly isolated cgroup view inside the
container.
3079
3080
3081Information on Kernel Programming
3082=================================
3083
3084This section contains kernel programming information in the areas
3085where interacting with cgroup is necessary.  cgroup core and
3086controllers are not covered.
3087
3088
3089Filesystem Support for Writeback
3090--------------------------------
3091
3092A filesystem can support cgroup writeback by updating
3093address_space_operations->writepages() to annotate bio's using the
3094following two functions.
3095
3096  wbc_init_bio(@wbc, @bio)
3097	Should be called for each bio carrying writeback data and
3098	associates the bio with the inode's owner cgroup and the
3099	corresponding request queue.  This must be called after
3100	a queue (device) has been associated with the bio and
3101	before submission.
3102
3103  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
3104	Should be called for each data segment being written out.
3105	While this function doesn't care exactly when it's called
3106	during the writeback session, it's the easiest and most
3107	natural to call it as data segments are added to a bio.
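
For illustration only, a sketch of how a writepages implementation
might annotate a single-folio bio follows; the helper and its calling
context are hypothetical::

  #include <linux/bio.h>
  #include <linux/writeback.h>

  /* Hypothetical helper: submit one folio with cgroup annotation. */
  static void example_submit_folio(struct writeback_control *wbc,
				   struct folio *folio, struct bio *bio)
  {
	/* The bio must already be associated with its device. */
	wbc_init_bio(wbc, bio);
	if (!bio_add_folio(bio, folio, folio_size(folio), 0))
		return;	/* bio full; real code would submit and retry */
	/* Attribute the written bytes to the folio's owner cgroup. */
	wbc_account_cgroup_owner(wbc, folio, folio_size(folio));
	submit_bio(bio);
  }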
3108
3109With writeback bio's annotated, cgroup support can be enabled per
3110super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
3111selective disabling of cgroup writeback support which is helpful when
3112certain filesystem features, e.g. journaled data mode, are
3113incompatible.
3114
3115wbc_init_bio() binds the specified bio to its cgroup.  Depending on
3116the configuration, the bio may be executed at a lower priority and if
3117the writeback session is holding shared resources, e.g. a journal
3118entry, may lead to priority inversion.  There is no one easy solution
3119for the problem.  Filesystems can try to work around specific problem
3120cases by skipping wbc_init_bio() and using bio_associate_blkg()
3121directly.
3122
3123
3124Deprecated v1 Core Features
3125===========================
3126
3127- Multiple hierarchies including named ones are not supported.
3128
- None of the v1 mount options are supported.
3130
3131- The "tasks" file is removed and "cgroup.procs" is not sorted.
3132
3133- "cgroup.clone_children" is removed.
3134
3135- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
3136  "cgroup.stat" files at the root instead.
3137
3138
3139Issues with v1 and Rationales for v2
3140====================================
3141
3142Multiple Hierarchies
3143--------------------
3144
3145cgroup v1 allowed an arbitrary number of hierarchies and each
3146hierarchy could host any number of controllers.  While this seemed to
3147provide a high level of flexibility, it wasn't useful in practice.
3148
For example, as there is only one instance of each controller, utility
type controllers such as freezer, which could be useful in all
hierarchies, could only be used in one.  The issue was exacerbated by
3152the fact that controllers couldn't be moved to another hierarchy once
3153hierarchies were populated.  Another issue was that all controllers
3154bound to a hierarchy were forced to have exactly the same view of the
3155hierarchy.  It wasn't possible to vary the granularity depending on
3156the specific controller.
3157
3158In practice, these issues heavily limited which controllers could be
3159put on the same hierarchy and most configurations resorted to putting
3160each controller on its own hierarchy.  Only closely related ones, such
3161as the cpu and cpuacct controllers, made sense to be put on the same
3162hierarchy.  This often meant that userland ended up managing multiple
3163similar hierarchies repeating the same steps on each hierarchy
3164whenever a hierarchy management operation was necessary.
3165
Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, it restricted how cgroup could be used in general and
what controllers were able to do.
3170
3171There was no limit on how many hierarchies there might be, which meant
3172that a thread's cgroup membership couldn't be described in finite
3173length.  The key might contain any number of entries and was unlimited
3174in length, which made it highly awkward to manipulate and led to
3175addition of controllers which existed only to identify membership,
3176which in turn exacerbated the original problem of proliferating number
3177of hierarchies.
3178
3179Also, as a controller couldn't have any expectation regarding the
3180topologies of hierarchies other controllers might be on, each
3181controller had to assume that all other controllers were attached to
3182completely orthogonal hierarchies.  This made it impossible, or at
3183least very cumbersome, for controllers to cooperate with each other.
3184
3185In most use cases, putting controllers on hierarchies which are
3186completely orthogonal to each other isn't necessary.  What usually is
3187called for is the ability to have differing levels of granularity
3188depending on the specific controller.  In other words, hierarchy may
3189be collapsed from leaf towards root when viewed from specific
3190controllers.  For example, a given configuration might not care about
3191how memory is distributed beyond a certain level while still wanting
3192to control how CPU cycles are distributed.
3193
3194
3195Thread Granularity
3196------------------
3197
cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.
3203
3204Generally, in-process knowledge is available only to the process
3205itself; thus, unlike service-level organization of processes,
3206categorizing threads of a process requires active participation from
3207the application which owns the target process.
3208
cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.
3215
3216First of all, cgroup has a fundamentally inadequate interface to be
3217exposed this way.  For a process to access its own knobs, it has to
3218extract the path on the target hierarchy from /proc/self/cgroup,
3219construct the path by appending the name of the knob to the path, open
3220and then read and/or write to it.  This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.
3224
3225cgroup controllers implemented a number of knobs which would never be
3226accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
3228knobs which were not properly abstracted or refined and directly
3229revealed kernel internal details.  These knobs got exposed to
3230individual applications through the ill-defined delegation mechanism
3231effectively abusing cgroup as a shortcut to implementing public APIs
3232without going through the required scrutiny.
3233
This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces, and the kernel
inadvertently exposed and became locked into those constructs.
3237
3238
3239Competition Between Inner Nodes and Threads
3240-------------------------------------------
3241
cgroup v1 allowed threads to be in any cgroup, which created an
3243interesting problem where threads belonging to a parent cgroup and its
3244children cgroups competed for resources.  This was nasty as two
3245different types of entities competed and there was no obvious way to
3246settle it.  Different controllers did different things.
3247
3248The cpu controller considered threads and cgroups as equivalents and
3249mapped nice levels to cgroup weights.  This worked for some cases but
3250fell flat when children wanted to be allocated specific ratios of CPU
3251cycles and the number of internal threads fluctuated - the ratios
3252constantly changed as the number of competing entities fluctuated.
3253There also were other issues.  The mapping from nice level to weight
3254wasn't obvious or universal, and there were various other knobs which
3255simply weren't available for threads.
3256
3257The io controller implicitly created a hidden leaf node for each
3258cgroup to host the threads.  The hidden leaf had its own copies of all
3259the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
3261always added an extra layer of nesting which wouldn't be necessary
3262otherwise, made the interface messy and significantly complicated the
3263implementation.
3264
3265The memory controller didn't have a way to control what happened
3266between internal tasks and child cgroups and the behavior was not
3267clearly defined.  There were attempts to add ad-hoc behaviors and
3268knobs to tailor the behavior to specific workloads which would have
3269led to problems extremely difficult to resolve in the long term.
3270
3271Multiple controllers struggled with internal tasks and came up with
different ways to deal with them; unfortunately, all the approaches were
3273severely flawed and, furthermore, the widely different behaviors
3274made cgroup as a whole highly inconsistent.
3275
3276This clearly is a problem which needs to be addressed from cgroup core
3277in a uniform way.
3278
3279
3280Other Interface Issues
3281----------------------
3282
3283cgroup v1 grew without oversight and developed a large number of
3284idiosyncrasies and inconsistencies.  One issue on the cgroup core side
3285was how an empty cgroup was notified - a userland helper binary was
3286forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.
3290
3291Controller interfaces were problematic too.  An extreme example is
3292controllers completely ignoring hierarchical organization and treating
3293all cgroups as if they were all located directly under the root
3294cgroup.  Some controllers exposed a large amount of inconsistent
3295implementation details to userland.
3296
3297There also was no consistency across controllers.  When a new cgroup
3298was created, some controllers defaulted to not imposing extra
3299restrictions while others disallowed any resource usage until
3300explicitly configured.  Configuration knobs for the same type of
3301control used widely differing naming schemes and formats.  Statistics
3302and information knobs were named arbitrarily and used different
3303formats and units even in the same controller.
3304
3305cgroup v2 establishes common conventions where appropriate and updates
3306controllers so that they expose minimal and consistent interfaces.
3307
3308
3309Controller Issues and Remedies
3310------------------------------
3311
3312Memory
3313~~~~~~
3314
The original lower boundary, the soft limit, is defined as a limit
that is unset by default.  As a result, the set of cgroups that
3317global reclaim prefers is opt-in, rather than opt-out.  The costs for
3318optimizing these mostly negative lookups are so high that the
3319implementation, despite its enormous size, does not even provide the
3320basic desirable behavior.  First off, the soft limit has no
3321hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are located
3323in the hierarchy.  This makes subtree delegation impossible.  Second,
3324the soft limit reclaim pass is so aggressive that it not just
3325introduces high allocation latencies into the system, but also impacts
3326system performance due to overreclaim, to the point where the feature
3327becomes self-defeating.
3328
3329The memory.low boundary on the other hand is a top-down allocated
3330reserve.  A cgroup enjoys reclaim protection when it's within its
3331effective low, which makes delegation of subtrees possible. It also
3332enjoys having reclaim pressure proportional to its overage when
3333above its effective low.
3334
3335The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
3337But this generally goes against the goal of making the most out of the
3338available memory.  The memory consumption of workloads varies during
3339runtime, and that requires users to overcommit.  But doing that with a
3340strict upper limit requires either a fairly accurate prediction of the
3341working set size or adding slack to the limit.  Since working set size
3342estimation is hard and error prone, and getting it wrong results in
3343OOM kills, most users tend to err on the side of a looser limit and
3344end up wasting precious resources.
3345
3346The memory.high boundary on the other hand can be set much more
3347conservatively.  When hit, it throttles allocations by forcing them
3348into direct reclaim to work off the excess, but it never invokes the
3349OOM killer.  As a result, a high boundary that is chosen too
3350aggressively will not terminate the processes, but instead it will
3351lead to gradual performance degradation.  The user can monitor this
3352and make corrections until the minimal memory footprint that still
3353gives acceptable performance is found.
3354
3355In extreme cases, with many concurrent allocations and a complete
3356breakdown of reclaim progress within the group, the high boundary can
3357be exceeded.  But even then it's mostly better to satisfy the
3358allocation from the slack available in other groups or the rest of the
3359system than killing the group.  Otherwise, memory.max is there to
3360limit this type of spillover and ultimately contain buggy or even
3361malicious applications.
3362
3363Setting the original memory.limit_in_bytes below the current usage was
3364subject to a race condition, where concurrent charges could cause the
3365limit setting to fail. memory.max on the other hand will first set the
3366limit to prevent new charges, and then reclaim and OOM kill until the
3367new limit is met - or the task writing to memory.max is killed.
3368
3369The combined memory+swap accounting and limiting is replaced by real
3370control over swap space.
3371
3372The main argument for a combined memory+swap facility in the original
3373cgroup design was that global or parental pressure would always be
3374able to swap all anonymous memory of a child group, regardless of the
3375child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing their
anonymous memory in a tight loop - and an admin cannot assume full
3378swappability when overcommitting untrusted jobs.
3379
3380For trusted jobs, on the other hand, a combined counter is not an
3381intuitive userspace interface, and it flies in the face of the idea
3382that cgroup controllers should account and limit specific physical
3383resources.  Swap space is a resource like all others in the system,
3384and that's why unified hierarchy allows distributing it separately.
3385