.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

===========================================
User Interface for Resource Control feature
===========================================

:Copyright: |copy| 2016 Intel Corporation
:Authors: - Fenghua Yu <fenghua.yu@intel.com>
          - Tony Luck <tony.luck@intel.com>
          - Vikas Shivappa <vikas.shivappa@intel.com>


Intel refers to this feature as Intel Resource Director Technology (Intel(R) RDT).
AMD refers to this feature as AMD Platform Quality of Service (AMD QoS).

This feature is enabled by the CONFIG_X86_CPU_RESCTRL kernel configuration
option and indicated by the following x86 /proc/cpuinfo flag bits:

===============================================	================================
RDT (Resource Director Technology) Allocation	"rdt_a"
CAT (Cache Allocation Technology)		"cat_l3", "cat_l2"
CDP (Code and Data Prioritization)		"cdp_l3", "cdp_l2"
CQM (Cache QoS Monitoring)			"cqm_llc", "cqm_occup_llc"
MBM (Memory Bandwidth Monitoring)		"cqm_mbm_total", "cqm_mbm_local"
MBA (Memory Bandwidth Allocation)		"mba"
SMBA (Slow Memory Bandwidth Allocation)         ""
BMEC (Bandwidth Monitoring Event Configuration) ""
===============================================	================================

Historically, new features were made visible by default in /proc/cpuinfo. This
resulted in the feature flags becoming hard to parse by humans. Adding a new
flag to /proc/cpuinfo should be avoided if user space can obtain information
about the feature from resctrl's info directory.

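For example, user space can discover which resources and monitoring
features are enabled by listing resctrl's info directory once the file
system is mounted (mounting is shown below; the output here is
illustrative and varies by system)::

 # ls /sys/fs/resctrl/info
 L3  L3_MON  MB  last_cmd_status
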
To use the feature, mount the file system::

 # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps][,debug]] /sys/fs/resctrl

mount options are:

"cdp":
	Enable code/data prioritization in L3 cache allocations.
"cdpl2":
	Enable code/data prioritization in L2 cache allocations.
"mba_MBps":
	Enable the MBA Software Controller (mba_sc) to specify MBA
	bandwidth in MiBps.
"debug":
	Make debug files accessible. Available debug files are annotated with
	"Available only with debug option".

L2 and L3 CDP are controlled separately.

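For instance, to mount with L3 CDP and the MBA Software Controller both
enabled (an illustrative combination; pick only the options the system
supports)::

 # mount -t resctrl -o cdp,mba_MBps resctrl /sys/fs/resctrl
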
RDT features are orthogonal. A particular system may support only
monitoring, only control, or both monitoring and control.  Cache
pseudo-locking is a unique way of using cache control to "pin" or
"lock" data in the cache. Details can be found in
"Cache Pseudo-Locking".


The mount succeeds if either allocation or monitoring is present, but
only those files and directories supported by the system will be created.
For more details on the behavior of the interface during monitoring
and allocation, see the "Resource alloc and monitor groups" section.

Info directory
==============

The 'info' directory contains information about the enabled
resources. Each resource has its own subdirectory. The subdirectory
names reflect the resource names.

Each subdirectory contains the following files with respect to
allocation:

The cache resource (L3/L2) subdirectory contains the following files
related to allocation:

80"num_closids":
81		The number of CLOSIDs which are valid for this
82		resource. The kernel uses the smallest number of
83		CLOSIDs of all enabled resources as limit.
84"cbm_mask":
85		The bitmask which is valid for this resource.
86		This mask is equivalent to 100%.
87"min_cbm_bits":
88		The minimum number of consecutive bits which
89		must be set when writing a mask.
90
91"shareable_bits":
92		Bitmask of shareable resource with other executing
93		entities (e.g. I/O). User can use this when
94		setting up exclusive cache partitions. Note that
95		some platforms support devices that have their
96		own settings for cache use which can over-ride
97		these bits.
98"bit_usage":
99		Annotated capacity bitmasks showing how all
100		instances of the resource are used. The legend is:
101
102			"0":
103			      Corresponding region is unused. When the system's
104			      resources have been allocated and a "0" is found
105			      in "bit_usage" it is a sign that resources are
106			      wasted.
107
108			"H":
109			      Corresponding region is used by hardware only
110			      but available for software use. If a resource
111			      has bits set in "shareable_bits" but not all
112			      of these bits appear in the resource groups'
113			      schematas then the bits appearing in
114			      "shareable_bits" but no resource group will
115			      be marked as "H".
116			"X":
117			      Corresponding region is available for sharing and
118			      used by hardware and software. These are the
119			      bits that appear in "shareable_bits" as
120			      well as a resource group's allocation.
121			"S":
122			      Corresponding region is used by software
123			      and available for sharing.
124			"E":
125			      Corresponding region is used exclusively by
126			      one resource group. No sharing allowed.
127			"P":
128			      Corresponding region is pseudo-locked. No
129			      sharing allowed.
130"sparse_masks":
131		Indicates if non-contiguous 1s value in CBM is supported.
132
133			"0":
134			      Only contiguous 1s value in CBM is supported.
135			"1":
136			      Non-contiguous 1s value in CBM is supported.
137
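For example, reading these files for the L3 resource (the values shown
are illustrative and vary by CPU model)::

	# cat /sys/fs/resctrl/info/L3/num_closids
	16
	# cat /sys/fs/resctrl/info/L3/cbm_mask
	fffff
	# cat /sys/fs/resctrl/info/L3/min_cbm_bits
	1
	# cat /sys/fs/resctrl/info/L3/sparse_masks
	0
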
The memory bandwidth (MB) subdirectory contains the following files
with respect to allocation:

"min_bandwidth":
		The minimum memory bandwidth percentage which
		the user can request.

"bandwidth_gran":
		The granularity in which the memory bandwidth
		percentage is allocated. The allocated
		b/w percentage is rounded off to the next
		control step available on the hardware. The
		available bandwidth control steps are:
		min_bandwidth + N * bandwidth_gran.

"delay_linear":
		Indicates if the delay scale is linear or
		non-linear. This field is purely informational.

"thread_throttle_mode":
		Indicator on Intel systems of how tasks running on threads
		of a physical core are throttled in cases where they
		request different memory bandwidth percentages:

		"max":
			the smallest percentage is applied
			to all threads
		"per-thread":
			bandwidth percentages are directly applied to
			the threads running on the core

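As a worked example, on a hypothetical system where "min_bandwidth" is 10
and "bandwidth_gran" is 10, the available control steps are 10, 20, ...,
100; per the rounding rule above, a requested value of 35 lands on the
40 step.
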
If RDT monitoring is available there will be an "L3_MON" directory
with the following files:

"num_rmids":
		The number of RMIDs available. This is the
		upper bound for how many "CTRL_MON" + "MON"
		groups can be created.

"mon_features":
		Lists the monitoring events if
		monitoring is enabled for the resource.
		Example::

			# cat /sys/fs/resctrl/info/L3_MON/mon_features
			llc_occupancy
			mbm_total_bytes
			mbm_local_bytes

		If the system supports Bandwidth Monitoring Event
		Configuration (BMEC), then the bandwidth events will
		be configurable. The output will be::

			# cat /sys/fs/resctrl/info/L3_MON/mon_features
			llc_occupancy
			mbm_total_bytes
			mbm_total_bytes_config
			mbm_local_bytes
			mbm_local_bytes_config

199"mbm_total_bytes_config", "mbm_local_bytes_config":
200	Read/write files containing the configuration for the mbm_total_bytes
201	and mbm_local_bytes events, respectively, when the Bandwidth
202	Monitoring Event Configuration (BMEC) feature is supported.
203	The event configuration settings are domain specific and affect
204	all the CPUs in the domain. When either event configuration is
205	changed, the bandwidth counters for all RMIDs of both events
206	(mbm_total_bytes as well as mbm_local_bytes) are cleared for that
207	domain. The next read for every RMID will report "Unavailable"
208	and subsequent reads will report the valid value.
209
210	Following are the types of events supported:
211
212	====    ========================================================
213	Bits    Description
214	====    ========================================================
215	6       Dirty Victims from the QOS domain to all types of memory
216	5       Reads to slow memory in the non-local NUMA domain
217	4       Reads to slow memory in the local NUMA domain
218	3       Non-temporal writes to non-local NUMA domain
219	2       Non-temporal writes to local NUMA domain
220	1       Reads to memory in the non-local NUMA domain
221	0       Reads to memory in the local NUMA domain
222	====    ========================================================
223
224	By default, the mbm_total_bytes configuration is set to 0x7f to count
225	all the event types and the mbm_local_bytes configuration is set to
226	0x15 to count all the local memory events.
227
	Examples:

	* To view the current configuration::

	    # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
	    0=0x7f;1=0x7f;2=0x7f;3=0x7f

	    # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
	    0=0x15;1=0x15;3=0x15;4=0x15

	* To change the mbm_total_bytes to count only reads on domain 0,
	  the bits 0, 1, 4 and 5 need to be set, which is 110011b in binary
	  (in hexadecimal 0x33)::

	    # echo "0=0x33" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config

	    # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
	    0=0x33;1=0x7f;2=0x7f;3=0x7f

	* To change the mbm_local_bytes to count all the slow memory reads on
	  domains 0 and 1, the bits 4 and 5 need to be set, which is 110000b
	  in binary (in hexadecimal 0x30)::

	    # echo "0=0x30;1=0x30" > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config

	    # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
	    0=0x30;1=0x30;3=0x15;4=0x15

259"max_threshold_occupancy":
260		Read/write file provides the largest value (in
261		bytes) at which a previously used LLC_occupancy
262		counter can be considered for re-use.
263
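For example, reading the current threshold and raising it (the value
shown is illustrative and depends on cache size)::

	# cat /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
	540672
	# echo 1048576 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
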
Finally, in the top level of the "info" directory there is a file
named "last_cmd_status". This is reset with every "command" issued
via the file system (making new directories or writing to any of the
control files). If the command was successful, it will read as "ok".
If the command failed, it will provide more information than can be
conveyed in the error returns from file operations. E.g.
::

	# echo L3:0=f7 > schemata
	bash: echo: write error: Invalid argument
	# cat info/last_cmd_status
	mask f7 has non-consecutive 1-bits

Resource alloc and monitor groups
=================================

Resource groups are represented as directories in the resctrl file
system.  The default group is the root directory which, immediately
after mounting, owns all the tasks and cpus in the system and can make
full use of all resources.

On a system with RDT control features additional directories can be
created in the root directory that specify different amounts of each
resource (see "schemata" below). The root and these additional top level
directories are referred to as "CTRL_MON" groups below.

On a system with RDT monitoring the root directory and other top level
directories contain a directory named "mon_groups" in which additional
directories can be created to monitor subsets of tasks in the CTRL_MON
group that is their ancestor. These are called "MON" groups in the rest
of this document.

Removing a directory will move all tasks and cpus owned by the group it
represents to the parent. Removing one of the created CTRL_MON groups
will automatically remove all MON groups below it.

Moving MON group directories to a new parent CTRL_MON group is supported
for the purpose of changing the resource allocations of a MON group
without impacting its monitoring data or assigned tasks. This operation
is not allowed for MON groups which monitor CPUs. No other move
operation is currently allowed other than simply renaming a CTRL_MON or
MON group.

All groups contain the following files:

309"tasks":
310	Reading this file shows the list of all tasks that belong to
311	this group. Writing a task id to the file will add a task to the
312	group. Multiple tasks can be added by separating the task ids
313	with commas. Tasks will be assigned sequentially. Multiple
314	failures are not supported. A single failure encountered while
315	attempting to assign a task will cause the operation to abort and
316	already added tasks before the failure will remain in the group.
317	Failures will be logged to /sys/fs/resctrl/info/last_cmd_status.
318
319	If the group is a CTRL_MON group the task is removed from
320	whichever previous CTRL_MON group owned the task and also from
321	any MON group that owned the task. If the group is a MON group,
322	then the task must already belong to the CTRL_MON parent of this
323	group. The task is removed from any previous MON group.
324
325
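For example, moving two tasks into a group (the group name and task ids
are illustrative)::

	# echo 1234,5678 > /sys/fs/resctrl/p0/tasks
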
326"cpus":
327	Reading this file shows a bitmask of the logical CPUs owned by
328	this group. Writing a mask to this file will add and remove
329	CPUs to/from this group. As with the tasks file a hierarchy is
330	maintained where MON groups may only include CPUs owned by the
331	parent CTRL_MON group.
332	When the resource group is in pseudo-locked mode this file will
333	only be readable, reflecting the CPUs associated with the
334	pseudo-locked region.
335
336
337"cpus_list":
338	Just like "cpus", only using ranges of CPUs instead of bitmasks.
339
340
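For example, assigning CPUs 4-7 to a group, shown in both
representations (the group name is illustrative)::

	# echo f0 > /sys/fs/resctrl/p0/cpus
	# cat /sys/fs/resctrl/p0/cpus_list
	4-7
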
When control is enabled all CTRL_MON groups will also contain:

"schemata":
	A list of all the resources available to this group.
	Each resource has its own line and format - see below for details.

"size":
	Mirrors the display of the "schemata" file to display the size in
	bytes of each allocation instead of the bits representing the
	allocation.

"mode":
	The "mode" of the resource group dictates the sharing of its
	allocations. A "shareable" resource group allows sharing of its
	allocations while an "exclusive" resource group does not. A
	cache pseudo-locked region is created by first writing
	"pseudo-locksetup" to the "mode" file before writing the cache
	pseudo-locked region's schemata to the resource group's "schemata"
	file. On successful pseudo-locked region creation the mode will
	automatically change to "pseudo-locked".

"ctrl_hw_id":
	Available only with debug option. The identifier used by hardware
	for the control group. On x86 this is the CLOSID.

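For example, with resctrl mounted with the "debug" option (the value
shown is illustrative)::

	# cat /sys/fs/resctrl/p0/ctrl_hw_id
	1
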
When monitoring is enabled all MON groups will also contain:

"mon_data":
	This contains a set of files organized by L3 domain and by
	RDT event. E.g. on a system with two L3 domains there will
	be subdirectories "mon_L3_00" and "mon_L3_01".	Each of these
	directories has one file per event (e.g. "llc_occupancy",
	"mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
	files provide a read out of the current value of the event for
	all tasks in the group. In CTRL_MON groups these files provide
	the sum for all tasks in the CTRL_MON group and all tasks in
	MON groups. Please see the example section for more details on usage.

"mon_hw_id":
	Available only with debug option. The identifier used by hardware
	for the monitor group. On x86 this is the RMID.

Resource allocation rules
-------------------------

When a task is running the following rules define which resources are
available to it:

1) If the task is a member of a non-default group, then the schemata
   for that group is used.

2) Else if the task belongs to the default group, but is running on a
   CPU that is assigned to some specific group, then the schemata for the
   CPU's group is used.

3) Otherwise the schemata for the default group is used.

Resource monitoring rules
-------------------------
1) If a task is a member of a MON group, or non-default CTRL_MON group
   then RDT events for the task will be reported in that group.

2) If a task is a member of the default CTRL_MON group, but is running
   on a CPU that is assigned to some specific group, then the RDT events
   for the task will be reported in that group.

3) Otherwise RDT events for the task will be reported in the root level
   "mon_data" group.


Notes on cache occupancy monitoring and control
===============================================
When moving a task from one group to another you should remember that
this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move
it to a new group and immediately check the occupancy of the old and new
groups you will likely see that the old group is still showing 3 MB and
the new group zero. When the task accesses locations still in cache from
before the move, the h/w does not update any counters. On a busy system
you will likely see the occupancy in the old group go down as cache lines
are evicted and re-used while the occupancy in the new group rises as
the task accesses memory and loads into the cache are counted based on
membership in the new group.

The same applies to cache allocation control. Moving a task to a group
with a smaller cache partition will not evict any cache lines. The
process may continue to use them from the old partition.

Hardware uses a CLOSID (Class Of Service ID) and an RMID (Resource
Monitoring ID) to identify a control group and a monitoring group
respectively. Each of the resource groups is mapped to these IDs based
on the kind of group. The number of CLOSIDs and RMIDs is limited by the
hardware, hence the creation of a "CTRL_MON" directory may fail if we
run out of either CLOSIDs or RMIDs, and creation of a "MON" group may
fail if we run out of RMIDs.

max_threshold_occupancy - generic concepts
------------------------------------------

Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged to the cache lines of its previous user.
Hence such RMIDs are placed on a limbo list and periodically checked to
see whether their cache occupancy has gone down. If the system has many
limbo RMIDs which are not yet ready to be used, the user may see an
-EBUSY during mkdir.

max_threshold_occupancy is a user configurable value to determine the
occupancy at which an RMID can be freed.

The mon_llc_occupancy_limbo tracepoint gives the precise occupancy in bytes
for a subset of RMIDs that are not immediately available for allocation.
This can't be relied on to produce output every second; it may be necessary
to attempt to create an empty monitor group to force an update. Output may
only be produced if creation of a control or monitor group fails.

Schemata files - general concepts
---------------------------------
Each line in the file describes one resource. The line starts with
the name of the resource, followed by specific values to be applied
in each of the instances of that resource on the system.

Cache IDs
---------
On current generation systems there is one L3 cache per socket and L2
caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. There could be multiple separate L3
caches on a socket, or multiple cores could share an L2 cache. So instead
of using "socket" or "core" to define the set of logical cpus sharing
a resource we use a "Cache ID". At a given cache level this will be a
unique number across the whole system (but it isn't guaranteed to be a
contiguous sequence, there may be gaps).  To find the ID for each logical
CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id

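For example (output illustrative; index3 is typically the L3 cache)::

	# cat /sys/devices/system/cpu/cpu0/cache/index3/id
	0
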
Cache Bit Masks (CBM)
---------------------
For cache resources we describe the portion of the cache that is available
for allocation using a bitmask. The maximum value of the mask is defined
by each cpu model (and may be different for different cache levels). It
is found using CPUID, but is also provided in the "info" directory of
the resctrl file system in "info/{resource}/cbm_mask". Some Intel hardware
requires that these masks have all the '1' bits in a contiguous block. So
0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. Check /sys/fs/resctrl/info/{resource}/sparse_masks
to see whether non-contiguous 1s values are supported. On a system with
a 20-bit mask each bit represents 5% of the capacity of the cache. You
could partition the cache into four equal parts with masks: 0x1f, 0x3e0,
0x7c00, 0xf8000.

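As a sketch of such a four-way split, assuming three additional groups
p1, p2 and p3 have been created, the default group keeps the bottom
quarter and each additional group takes the next quarter of cache id 0::

	# echo "L3:0=1f" > /sys/fs/resctrl/schemata
	# echo "L3:0=3e0" > /sys/fs/resctrl/p1/schemata
	# echo "L3:0=7c00" > /sys/fs/resctrl/p2/schemata
	# echo "L3:0=f8000" > /sys/fs/resctrl/p3/schemata
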
Memory bandwidth Allocation and monitoring
==========================================

For Memory bandwidth resource, by default the user controls the resource
by indicating the percentage of total memory bandwidth.

The minimum bandwidth percentage value for each cpu model is predefined
and can be looked up through "info/MB/min_bandwidth". The bandwidth
granularity that is allocated is also dependent on the cpu model and can
be looked up at "info/MB/bandwidth_gran". The available bandwidth
control steps are: min_bw + N * bw_gran. Intermediate values are rounded
to the next control step available on the hardware.

The bandwidth throttling is a core specific mechanism on some Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
sharing a core may result in both threads being throttled to use the
low bandwidth (see "thread_throttle_mode").

The fact that Memory Bandwidth Allocation (MBA) may be a core
specific mechanism whereas Memory Bandwidth Monitoring (MBM) is done at
the package level may lead to confusion when users try to apply control
via the MBA and then monitor the bandwidth to see if the controls are
effective. Below are such scenarios:

1. User may *not* see an increase in actual bandwidth when percentage
   values are increased:

This can occur when aggregate L2 external bandwidth is more than L3
external bandwidth. Consider an SKL SKU with 24 cores on a package and
where L2 external bandwidth is 10GBps (hence aggregate L2 external
bandwidth is 240GBps) and L3 external bandwidth is 100GBps. Now a
workload with '20 threads, having 50% bandwidth, each consuming 5GBps'
consumes the max L3 bandwidth of 100GBps although the percentage value
specified is only 50% << 100%. Hence increasing the bandwidth percentage
will not yield any more bandwidth. This is because although the L2
external bandwidth still has capacity, the L3 external bandwidth is
fully used. Also note that this would be dependent on the number of
cores the benchmark is run on.

2. Same bandwidth percentage may mean different actual bandwidth
   depending on # of threads:

For the same SKU in #1, a 'single thread, with 10% bandwidth' and '4
thread, with 10% bandwidth' can consume up to 10GBps and 40GBps although
they have the same percentage bandwidth of 10%. This is simply because as
threads start using more cores in an rdtgroup, the actual bandwidth may
increase or vary although the user specified bandwidth percentage is the
same.

In order to mitigate this and make the interface more user friendly,
resctrl added support for specifying the bandwidth in MiBps as well.  The
kernel underneath would use a software feedback mechanism or a "Software
Controller (mba_sc)" which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure::

	"actual bandwidth < user specified bandwidth".

By default, the schemata would take the bandwidth percentage values,
whereas the user can switch to the "MBA software controller" mode using
the mount option 'mba_MBps'. The schemata format is specified in the
sections below.

L3 schemata file details (code and data prioritization disabled)
----------------------------------------------------------------
With CDP disabled the L3 schemata format is::

	L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L3 schemata file details (CDP enabled via mount option to resctrl)
------------------------------------------------------------------
When CDP is enabled L3 control is split into two separate resources
so you can specify independent masks for code and data like this::

	L3DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
	L3CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L2 schemata file details
------------------------
CDP is supported at L2 using the 'cdpl2' mount option. The schemata
format is either::

	L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

or

	L2DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
	L2CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...


Memory bandwidth Allocation (default mode)
------------------------------------------

The memory b/w domain is the L3 cache.
::

	MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

Memory bandwidth Allocation specified in MiBps
----------------------------------------------

The memory bandwidth domain is the L3 cache.
::

	MB:<cache_id0>=bw_MiBps0;<cache_id1>=bw_MiBps1;...

Slow Memory Bandwidth Allocation (SMBA)
---------------------------------------
AMD hardware supports Slow Memory Bandwidth Allocation (SMBA).
CXL.memory is the only supported "slow" memory device. With the
support of SMBA, the hardware enables bandwidth allocation on
the slow memory devices. If there are multiple such devices in
the system, the throttling logic groups all the slow sources
together and applies the limit on them as a whole.

The presence of SMBA (with CXL.memory) is independent of the presence
of slow memory devices. If there are no such devices on the system, then
configuring SMBA will have no impact on the performance of the system.

The bandwidth domain for slow memory is the L3 cache. Its schemata file
is formatted as:
::

	SMBA:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

Reading/writing the schemata file
---------------------------------
Reading the schemata file will show the state of all resources
on all domains. When writing you only need to specify those values
which you wish to change.  E.g.
::

  # cat schemata
  L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
  # echo "L3DATA:2=3c0;" > schemata
  # cat schemata
  L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff

Reading/writing the schemata file (on AMD systems)
--------------------------------------------------
Reading the schemata file will show the current bandwidth limit on all
domains. The allocated resources are in multiples of one eighth GB/s.
When writing to the file, you need to specify the cache id for which
you wish to configure the bandwidth limit.

For example, to allocate a 2GB/s limit on cache id 1 (16 multiples of
one eighth GB/s = 2GB/s):

::

  # cat schemata
    MB:0=2048;1=2048;2=2048;3=2048
    L3:0=ffff;1=ffff;2=ffff;3=ffff

  # echo "MB:1=16" > schemata
  # cat schemata
    MB:0=2048;1=  16;2=2048;3=2048
    L3:0=ffff;1=ffff;2=ffff;3=ffff

Reading/writing the schemata file (on AMD systems) with SMBA feature
--------------------------------------------------------------------
Reading and writing the schemata file is the same as without SMBA in
the section above.

For example, to allocate an 8GB/s limit on cache id 1 (64 multiples of
one eighth GB/s = 8GB/s):

::

  # cat schemata
    SMBA:0=2048;1=2048;2=2048;3=2048
      MB:0=2048;1=2048;2=2048;3=2048
      L3:0=ffff;1=ffff;2=ffff;3=ffff

  # echo "SMBA:1=64" > schemata
  # cat schemata
    SMBA:0=2048;1=  64;2=2048;3=2048
      MB:0=2048;1=2048;2=2048;3=2048
      L3:0=ffff;1=ffff;2=ffff;3=ffff

Cache Pseudo-Locking
====================
CAT enables a user to specify the amount of cache space that an
application can fill. Cache pseudo-locking builds on the fact that a
CPU can still read and write data pre-allocated outside its current
allocated area on a cache hit. With cache pseudo-locking, data can be
preloaded into a reserved portion of cache that no application can
fill, and from that point on will only serve cache hits. The cache
pseudo-locked memory is made accessible to user space where an
application can map it into its virtual address space and thus have
a region of memory with reduced average read latency.

The creation of a cache pseudo-locked region is triggered by a request
from the user to do so that is accompanied by a schemata of the region
to be pseudo-locked. The cache pseudo-locked region is created as follows:

- Create a CAT allocation CLOSNEW with a CBM matching the schemata
  from the user of the cache region that will contain the pseudo-locked
  memory. This region must not overlap with any current CAT allocation/CLOS
  on the system and no future overlap with this cache region is allowed
  while the pseudo-locked region exists.
- Create a contiguous region of memory of the same size as the cache
  region.
- Flush the cache, disable hardware prefetchers, disable preemption.
- Make CLOSNEW the active CLOS and touch the allocated memory to load
  it into the cache.
- Set the previous CLOS as active.
- At this point the closid CLOSNEW can be released - the cache
  pseudo-locked region is protected as long as its CBM does not appear in
  any CAT allocation. Even though the cache pseudo-locked region will from
  this point on not appear in any CBM of any CLOS, an application running
  with any CLOS will be able to access the memory in the pseudo-locked
  region since the region continues to serve cache hits.
- The contiguous region of memory loaded into the cache is exposed to
  user-space as a character device.

Cache pseudo-locking increases the probability that data will remain
in the cache via carefully configuring the CAT feature and controlling
application behavior. There is no guarantee that data is placed in
cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
“locked” data from cache. Power management C-states may shrink or
power off cache. Deeper C-states will automatically be restricted on
pseudo-locked region creation.

It is required that an application using a pseudo-locked region runs
with affinity to the cores (or a subset of the cores) associated
with the cache on which the pseudo-locked region resides. A sanity check
within the code will not allow an application to map pseudo-locked memory
unless it runs with affinity to cores associated with the cache on which the
pseudo-locked region resides. The sanity check is only done during the
initial mmap() handling; there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.

Pseudo-locking is accomplished in two stages:

1) During the first stage the system administrator allocates a portion
   of cache that should be dedicated to pseudo-locking. At this time an
   equivalent portion of memory is allocated, loaded into the allocated
   cache portion, and exposed as a character device.
2) During the second stage a user-space application maps (mmap()) the
   pseudo-locked memory into its address space.

Cache Pseudo-Locking Interface
------------------------------
A pseudo-locked region is created using the resctrl interface as follows:

1) Create a new resource group by creating a new directory in /sys/fs/resctrl.
2) Change the new resource group's mode to "pseudo-locksetup" by writing
   "pseudo-locksetup" to the "mode" file.
3) Write the schemata of the pseudo-locked region to the "schemata" file. All
   bits within the schemata should be "unused" according to the "bit_usage"
   file.

On successful pseudo-locked region creation the "mode" file will contain
"pseudo-locked" and a new character device with the same name as the resource
group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
by user space in order to obtain access to the pseudo-locked memory region.

An example of cache pseudo-locked region creation and usage can be found below.

Cache Pseudo-Locking Debugging Interface
----------------------------------------
The pseudo-locking debugging interface is enabled by default (if
CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.

There is no explicit way for the kernel to test if a provided memory
location is present in the cache. The pseudo-locking debugging interface uses
the tracing infrastructure to provide two ways to measure cache residency of
the pseudo-locked region:

1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
   from these measurements are best visualized using a hist trigger (see
   example below). In this test the pseudo-locked region is traversed at
   a stride of 32 bytes while hardware prefetchers and preemption
   are disabled. This also provides a substitute visualization of cache
   hits and misses.
2) Cache hit and miss measurements using model specific precision counters if
   available. Depending on the levels of cache on the system the pseudo_lock_l2
   and pseudo_lock_l3 tracepoints are available.

When a pseudo-locked region is created a new debugfs directory is created for
it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
write-only file, pseudo_lock_measure, is present in this directory. The
measurement of the pseudo-locked region depends on the number written to this
debugfs file:

1:
     writing "1" to the pseudo_lock_measure file will trigger the latency
     measurement captured in the pseudo_lock_mem_latency tracepoint. See
     example below.
2:
     writing "2" to the pseudo_lock_measure file will trigger the L2 cache
     residency (cache hits and misses) measurement captured in the
     pseudo_lock_l2 tracepoint. See example below.
3:
     writing "3" to the pseudo_lock_measure file will trigger the L3 cache
     residency (cache hits and misses) measurement captured in the
     pseudo_lock_l3 tracepoint.

All measurements are recorded with the tracing infrastructure. This requires
the relevant tracepoints to be enabled before the measurement is triggered.

Example of latency debugging interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this example a pseudo-locked region named "newlock" was created. Here is
how we can measure the latency in cycles of reading from this region and
visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
is set::

  # :> /sys/kernel/tracing/trace
  # echo 'hist:keys=latency' > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
  # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
  # echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
  # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
  # cat /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/hist

  # event histogram
  #
  # trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
  #

  { latency:        456 } hitcount:          1
  { latency:         50 } hitcount:         83
  { latency:         36 } hitcount:         96
  { latency:         44 } hitcount:        174
  { latency:         48 } hitcount:        195
  { latency:         46 } hitcount:        262
  { latency:         42 } hitcount:        693
  { latency:         40 } hitcount:       3204
  { latency:         38 } hitcount:       3484

  Totals:
      Hits: 8192
      Entries: 9
    Dropped: 0

Example of cache hits/misses debugging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this example a pseudo-locked region named "newlock" was created on the L2
cache of a platform. Here is how we can obtain details of the cache hits
and misses using the platform's precision counters.
::

  # :> /sys/kernel/tracing/trace
  # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
  # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
  # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
  # cat /sys/kernel/tracing/trace

  # tracer: nop
  #
  #                              _-----=> irqs-off
  #                             / _----=> need-resched
  #                            | / _---=> hardirq/softirq
  #                            || / _--=> preempt-depth
  #                            ||| /     delay
  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #              | |       |   ||||       |         |
  pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0


Examples for RDT allocation usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1) Example 1

On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks, minimum b/w of 10% with a memory bandwidth
granularity of 10%.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1
  # echo -e "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
  # echo -e "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Similarly, tasks that are under the control of group "p0" may use a
maximum memory b/w of 50% on socket0 and 50% on socket 1.
Tasks in group "p1" may also use 50% memory b/w on both sockets.
Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocation specifies the maximum
b/w that the group may be able to use and the system admin can configure
the b/w accordingly.

If resctrl is using the software controller (mba_sc) then the user can
enter the max b/w in MiBps rather than the percentage values.
::

  # echo -e "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
  # echo -e "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata

In the above example the tasks in "p1" and "p0" on socket 0 would use a
max b/w of 1024MiBps, whereas on socket 1 they would use 500MiBps.

2) Example 2

Again two sockets, but this time with a more realistic 20-bit mask.

Two real time tasks pid=1234 running on processor 0 and pid=5678 running on
processor 1 on socket 0 on a 2-socket and dual core machine. To avoid noisy
neighbors, each of the two real-time tasks exclusively occupies one quarter
of L3 cache on socket 0.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
ordinary tasks::

  # echo -e "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata

Next we make a resource group for our first real time task and give
it access to the "top" 25% of the cache on socket 0.
::

  # mkdir p0
  # echo "L3:0=f8000;1=fffff" > p0/schemata

Finally we move our first real time task into this resource group. We
also use taskset(1) to ensure the task always runs on a dedicated CPU
on socket 0. Most uses of resource groups will also constrain which
processors tasks run on.
::

  # echo 1234 > p0/tasks
  # taskset -cp 1 1234

Ditto for the second real time task (with the remaining 25% of cache)::

  # mkdir p1
  # echo "L3:0=7c00;1=fffff" > p1/schemata
  # echo 5678 > p1/tasks
  # taskset -cp 2 5678

For the same 2 socket system with memory b/w resource and CAT L3 the
schemata would look like this (assuming min_bandwidth is 10 and
bandwidth_gran is 10):

For our first real time task this would request 20% memory b/w on socket 0.
::

  # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata

For our second real time task this would request another 20% memory b/w
on socket 0.
::

  # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p1/schemata

3) Example 3

A single socket system which has real-time tasks running on cores 4-7 and
non real-time workload assigned to cores 0-3. The real-time tasks share text
and data, so a per task association is not required and due to interaction
with the kernel it's desired that the kernel on these cores shares L3 with
the tasks.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
cannot be used by ordinary tasks::

  # echo -e "L3:0=3ff\nMB:0=50" > schemata

Next we make a resource group for our real time cores and give it access
to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
socket 0.
::

  # mkdir p0
  # echo -e "L3:0=ffc00\nMB:0=50" > p0/schemata

Finally we move cores 4-7 over to the new group and make sure that the
kernel and the tasks running there get 50% of the cache. They should
also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
siblings and only the real time threads are scheduled on these cores.
::

  # echo F0 > p0/cpus

4) Example 4

The resource groups in previous examples were all in the default "shareable"
mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.

In this example a new exclusive resource group will be created on an L2 CAT
system with two L2 cache instances that can be configured with an 8-bit
capacity bitmask. The new exclusive resource group will be configured to use
25% of each cache instance.
::

  # mount -t resctrl resctrl /sys/fs/resctrl/
  # cd /sys/fs/resctrl

First, we observe that the default group is configured to allocate to all L2
cache::

  # cat schemata
  L2:0=ff;1=ff

We could attempt to create the new resource group at this point, but it will
fail because of the overlap with the schemata of the default group::

  # mkdir p0
  # echo 'L2:0=0x3;1=0x3' > p0/schemata
  # cat p0/mode
  shareable
  # echo exclusive > p0/mode
  -sh: echo: write error: Invalid argument
  # cat info/last_cmd_status
  schemata overlaps

To ensure that there is no overlap with another resource group the default
resource group's schemata has to change, making it possible for the new
resource group to become exclusive.
::

  # echo 'L2:0=0xfc;1=0xfc' > schemata
  # echo exclusive > p0/mode
  # grep . p0/*
  p0/cpus:0
  p0/mode:exclusive
  p0/schemata:L2:0=03;1=03
  p0/size:L2:0=262144;1=262144

A new resource group will on creation not overlap with an exclusive resource
group::

  # mkdir p1
  # grep . p1/*
  p1/cpus:0
  p1/mode:shareable
  p1/schemata:L2:0=fc;1=fc
  p1/size:L2:0=786432;1=786432

The bit_usage will reflect how the cache is used::

  # cat info/L2/bit_usage
  0=SSSSSSEE;1=SSSSSSEE

A resource group cannot be forced to overlap with an exclusive resource group::

  # echo 'L2:0=0x1;1=0x1' > p1/schemata
  -sh: echo: write error: Invalid argument
  # cat info/last_cmd_status
  overlaps with exclusive group

Example of Cache Pseudo-Locking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lock a portion of the L2 cache from cache id 1 using CBM 0x3. The
pseudo-locked region is exposed at /dev/pseudo_lock/newlock and can be
provided to an application as the argument to mmap().
::

  # mount -t resctrl resctrl /sys/fs/resctrl/
  # cd /sys/fs/resctrl

Ensure that there are bits available that can be pseudo-locked. Since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata::

  # cat info/L2/bit_usage
  0=SSSSSSSS;1=SSSSSSSS
  # echo 'L2:1=0xfc' > schemata
  # cat info/L2/bit_usage
  0=SSSSSSSS;1=SSSSSS00

Create a new resource group that will be associated with the pseudo-locked
region, indicate that it will be used for a pseudo-locked region, and
configure the requested pseudo-locked region capacity bitmask::

  # mkdir newlock
  # echo pseudo-locksetup > newlock/mode
  # echo 'L2:1=0x3' > newlock/schemata

On success the resource group's mode will change to pseudo-locked, the
bit_usage will reflect the pseudo-locked region, and the character device
exposing the pseudo-locked region will exist::

  # cat newlock/mode
  pseudo-locked
  # cat info/L2/bit_usage
  0=SSSSSSSS;1=SSSSSSPP
  # ls -l /dev/pseudo_lock/newlock
  crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock

::

  /*
   * Example code to access one page of pseudo-locked cache region
   * from user space.
   */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/mman.h>

  /*
   * It is required that the application runs with affinity to only
   * cores associated with the pseudo-locked region. Here the cpu
   * is hardcoded for convenience of example.
   */
  static int cpuid = 2;

  int main(int argc, char *argv[])
  {
    cpu_set_t cpuset;
    long page_size;
    void *mapping;
    int dev_fd;
    int ret;

    page_size = sysconf(_SC_PAGESIZE);

    CPU_ZERO(&cpuset);
    CPU_SET(cpuid, &cpuset);
    ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
    if (ret < 0) {
      perror("sched_setaffinity");
      exit(EXIT_FAILURE);
    }

    dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
    if (dev_fd < 0) {
      perror("open");
      exit(EXIT_FAILURE);
    }

    mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
            dev_fd, 0);
    if (mapping == MAP_FAILED) {
      perror("mmap");
      close(dev_fd);
      exit(EXIT_FAILURE);
    }

    /* Application interacts with pseudo-locked memory @mapping */

    ret = munmap(mapping, page_size);
    if (ret < 0) {
      perror("munmap");
      close(dev_fd);
      exit(EXIT_FAILURE);
    }

    close(dev_fd);
    exit(EXIT_SUCCESS);
  }

Locking between applications
----------------------------

Certain operations on the resctrl filesystem, composed of read/writes
to/from multiple files, must be atomic.

As an example, the allocation of an exclusive reservation of L3 cache
involves:

  1. Read the cbmmasks from each directory or the per-resource "bit_usage"
  2. Find a contiguous set of bits in the global CBM bitmask that is clear
     in any of the directory cbmmasks
  3. Create a new directory
  4. Set the bits found in step 2 to the new directory "schemata" file

If two applications attempt to allocate space concurrently then they can
end up allocating the same bits so the reservations are shared instead of
exclusive.

To coordinate atomic operations on the resctrlfs and to avoid the problem
above, the following locking procedure is recommended:

Locking is based on flock, which is available in libc and also as a shell
script command.

Write lock:

 A) Take flock(LOCK_EX) on /sys/fs/resctrl
 B) Read/write the directory structure.
 C) Release the lock (flock(LOCK_UN))

Read lock:

 A) Take flock(LOCK_SH) on /sys/fs/resctrl
 B) If successful, read the directory structure.
 C) Release the lock (flock(LOCK_UN))

Example with bash::

  # Atomically read directory structure
  $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl

  # Read directory contents and create new subdirectory

  $ cat create-dir.sh
  find /sys/fs/resctrl/ > output.txt
  mask=$(function-of output.txt)   # compute the new mask (placeholder)
  mkdir /sys/fs/resctrl/newres/
  echo "$mask" > /sys/fs/resctrl/newres/schemata

  $ flock /sys/fs/resctrl/ ./create-dir.sh

Example with C::

  /*
   * Example code to take advisory locks
   * before accessing the resctrl filesystem.
   */
  #include <sys/file.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>

  void resctrl_take_shared_lock(int fd)
  {
    int ret;

    /* take shared lock on resctrl filesystem */
    ret = flock(fd, LOCK_SH);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  void resctrl_take_exclusive_lock(int fd)
  {
    int ret;

    /* take exclusive lock on resctrl filesystem */
    ret = flock(fd, LOCK_EX);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  void resctrl_release_lock(int fd)
  {
    int ret;

    /* release lock on resctrl filesystem */
    ret = flock(fd, LOCK_UN);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  int main(void)
  {
    int fd;

    fd = open("/sys/fs/resctrl", O_DIRECTORY);
    if (fd == -1) {
      perror("open");
      exit(-1);
    }
    resctrl_take_shared_lock(fd);
    /* code to read directory contents */
    resctrl_release_lock(fd);

    resctrl_take_exclusive_lock(fd);
    /* code to read and write directory contents */
    resctrl_release_lock(fd);

    return 0;
  }

Examples for RDT Monitoring along with allocation usage
=======================================================
Reading monitored data
----------------------
Reading an event file (for example mon_data/mon_L3_00/llc_occupancy) will
show the current snapshot of LLC occupancy of the corresponding MON
group or CTRL_MON group.


Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
------------------------------------------------------------------------
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1
  # echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
  # echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
  # echo 5678 > p1/tasks
  # echo 5679 > p1/tasks

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Create monitor groups and assign a subset of tasks to each monitor group.
::

  # cd /sys/fs/resctrl/p1/mon_groups
  # mkdir m11 m12
  # echo 5678 > m11/tasks
  # echo 5679 > m12/tasks

Fetch data (data shown in bytes)
::

  # cat m11/mon_data/mon_L3_00/llc_occupancy
  16234000
  # cat m11/mon_data/mon_L3_01/llc_occupancy
  14789000
  # cat m12/mon_data/mon_L3_00/llc_occupancy
  16789000

The parent CTRL_MON group shows the aggregated data.
::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  31234000

Example 2 (Monitor a task from its creation)
--------------------------------------------
On a two socket machine (one L3 cache per socket)::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1

An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation.
::

  # echo $$ > /sys/fs/resctrl/p1/tasks
  # <cmd>

Fetch the data::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  31789000

Example 3 (Monitor without CAT support or before creating CAT groups)
---------------------------------------------------------------------

Assume a system like HSW has only CQM and no CAT support. In this case
resctrl will still mount but cannot create CTRL_MON directories. But
the user can create different MON groups within the root group and
thereby monitor all tasks, including kernel threads.

This can also be used to profile jobs' cache size footprint before
allocating them to different allocation groups.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir mon_groups/m01
  # mkdir mon_groups/m02

  # echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
  # echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks

Monitor the groups separately and also get per domain data. From the
output below it is apparent that the tasks are mostly doing work on
domain(socket) 0.
::

  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
  34555
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
  32789


Example 4 (Monitor real time tasks)
-----------------------------------

A single socket system which has real time tasks running on cores 4-7
and non real time tasks on other cpus. We want to monitor the cache
occupancy of the real time threads on these cores.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p1

Move the cpus 4-7 over to p1::

  # echo f0 > p1/cpus

View the llc occupancy snapshot::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  11234000

Intel RDT Errata
================

Intel MBM Counters May Report System Memory Bandwidth Incorrectly
-----------------------------------------------------------------

Errata SKX99 for Skylake server and BDF102 for Broadwell server.

Problem: Intel Memory Bandwidth Monitoring (MBM) counters track metrics
according to the assigned Resource Monitor ID (RMID) for that logical
core. The IA32_QM_CTR register (MSR 0xC8E), used to report these
metrics, may report incorrect system bandwidth for certain RMID values.

Implication: Due to the errata, system memory bandwidth may not match
what is reported.

Workaround: MBM total and local readings are corrected according to the
following correction factor table:

+---------------+---------------+---------------+-----------------+
|core count	|rmid count	|rmid threshold	|correction factor|
+---------------+---------------+---------------+-----------------+
|1		|8		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|2		|16		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|3		|24		|15		|0.969650	  |
+---------------+---------------+---------------+-----------------+
|4		|32		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|6		|48		|31		|0.969650	  |
+---------------+---------------+---------------+-----------------+
|7		|56		|47		|1.142857	  |
+---------------+---------------+---------------+-----------------+
|8		|64		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|9		|72		|63		|1.185115	  |
+---------------+---------------+---------------+-----------------+
|10		|80		|63		|1.066553	  |
+---------------+---------------+---------------+-----------------+
|11		|88		|79		|1.454545	  |
+---------------+---------------+---------------+-----------------+
|12		|96		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|13		|104		|95		|1.230769	  |
+---------------+---------------+---------------+-----------------+
|14		|112		|95		|1.142857	  |
+---------------+---------------+---------------+-----------------+
|15		|120		|95		|1.066667	  |
+---------------+---------------+---------------+-----------------+
|16		|128		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|17		|136		|127		|1.254863	  |
+---------------+---------------+---------------+-----------------+
|18		|144		|127		|1.185255	  |
+---------------+---------------+---------------+-----------------+
|19		|152		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|20		|160		|127		|1.066667	  |
+---------------+---------------+---------------+-----------------+
|21		|168		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|22		|176		|159		|1.454334	  |
+---------------+---------------+---------------+-----------------+
|23		|184		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|24		|192		|127		|0.969744	  |
+---------------+---------------+---------------+-----------------+
|25		|200		|191		|1.280246	  |
+---------------+---------------+---------------+-----------------+
|26		|208		|191		|1.230921	  |
+---------------+---------------+---------------+-----------------+
|27		|216		|0		|1.000000	  |
+---------------+---------------+---------------+-----------------+
|28		|224		|191		|1.143118	  |
+---------------+---------------+---------------+-----------------+

If rmid > rmid threshold, MBM total and local values should be multiplied
by the correction factor. For example, on a 22-core part (176 RMIDs), a
reading for an RMID greater than 159 is multiplied by 1.454334.

See:

1. Erratum SKX99 in Intel Xeon Processor Scalable Family Specification Update:
http://web.archive.org/web/20200716124958/https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html

2. Erratum BDF102 in Intel Xeon E5-2600 v4 Processor Product Family Specification Update:
http://web.archive.org/web/20191125200531/https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf

3. The errata in Intel Resource Director Technology (Intel RDT) on 2nd Generation Intel Xeon Scalable Processors Reference Manual:
https://software.intel.com/content/www/us/en/develop/articles/intel-resource-director-technology-rdt-reference-manual.html

for further information.
1487