.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

===========================================
User Interface for Resource Control feature
===========================================

:Copyright: |copy| 2016 Intel Corporation
:Authors: - Fenghua Yu <fenghua.yu@intel.com>
          - Tony Luck <tony.luck@intel.com>
          - Vikas Shivappa <vikas.shivappa@intel.com>


Intel refers to this feature as Intel Resource Director Technology (Intel(R) RDT).
AMD refers to this feature as AMD Platform Quality of Service (AMD QoS).

This feature is enabled by the kernel configuration option
CONFIG_X86_CPU_RESCTRL and is indicated by the x86 /proc/cpuinfo flag bits:

=============================================== ================================
RDT (Resource Director Technology) Allocation   "rdt_a"
CAT (Cache Allocation Technology)               "cat_l3", "cat_l2"
CDP (Code and Data Prioritization)              "cdp_l3", "cdp_l2"
CQM (Cache QoS Monitoring)                      "cqm_llc", "cqm_occup_llc"
MBM (Memory Bandwidth Monitoring)               "cqm_mbm_total", "cqm_mbm_local"
MBA (Memory Bandwidth Allocation)               "mba"
SMBA (Slow Memory Bandwidth Allocation)         ""
BMEC (Bandwidth Monitoring Event Configuration) ""
=============================================== ================================

Historically, new features were made visible by default in /proc/cpuinfo. This
resulted in the feature flags becoming hard to parse by humans. Adding a new
flag to /proc/cpuinfo should be avoided if user space can obtain information
about the feature from resctrl's info directory.

To use the feature mount the file system::

 # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps][,debug]] /sys/fs/resctrl

mount options are:

"cdp":
        Enable code/data prioritization in L3 cache allocations.
"cdpl2":
        Enable code/data prioritization in L2 cache allocations.
"mba_MBps":
        Enable the MBA Software Controller (mba_sc) to specify MBA
        bandwidth in MiBps.
"debug":
        Make debug files accessible. Available debug files are annotated with
        "Available only with debug option".

L2 and L3 CDP are controlled separately.
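
For example, on a platform that supports L3 CDP, remounting with the "cdp"
option splits L3 control into separate code and data resources. This is a
sketch; the exact contents of the info directory depend on the platform::

  # umount /sys/fs/resctrl
  # mount -t resctrl resctrl -o cdp /sys/fs/resctrl
  # ls /sys/fs/resctrl/info
  L3CODE  L3DATA  L3_MON  last_cmd_status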

RDT features are orthogonal. A particular system may support only
monitoring, only control, or both monitoring and control.  Cache
pseudo-locking is a unique way of using cache control to "pin" or
"lock" data in the cache. Details can be found in
"Cache Pseudo-Locking".


The mount succeeds if either allocation or monitoring is present, but
only those files and directories supported by the system will be created.
For more details on the behavior of the interface during monitoring
and allocation, see the "Resource alloc and monitor groups" section.

Info directory
==============

The 'info' directory contains information about the enabled
resources. Each resource has its own subdirectory. The subdirectory
names reflect the resource names.

Each subdirectory contains the following files with respect to
allocation:

The cache resource (L3/L2) subdirectory contains the following files
related to allocation:

"num_closids":
                The number of CLOSIDs which are valid for this
                resource. The kernel uses the smallest number of
                CLOSIDs of all enabled resources as the limit.
"cbm_mask":
                The bitmask which is valid for this resource.
                This mask is equivalent to 100%.
"min_cbm_bits":
                The minimum number of consecutive bits which
                must be set when writing a mask.

91"shareable_bits":
92		Bitmask of shareable resource with other executing
93		entities (e.g. I/O). User can use this when
94		setting up exclusive cache partitions. Note that
95		some platforms support devices that have their
96		own settings for cache use which can over-ride
97		these bits.
98"bit_usage":
99		Annotated capacity bitmasks showing how all
100		instances of the resource are used. The legend is:
101
102			"0":
103			      Corresponding region is unused. When the system's
104			      resources have been allocated and a "0" is found
105			      in "bit_usage" it is a sign that resources are
106			      wasted.
107
108			"H":
109			      Corresponding region is used by hardware only
110			      but available for software use. If a resource
111			      has bits set in "shareable_bits" but not all
112			      of these bits appear in the resource groups'
113			      schematas then the bits appearing in
114			      "shareable_bits" but no resource group will
115			      be marked as "H".
116			"X":
117			      Corresponding region is available for sharing and
118			      used by hardware and software. These are the
119			      bits that appear in "shareable_bits" as
120			      well as a resource group's allocation.
121			"S":
122			      Corresponding region is used by software
123			      and available for sharing.
124			"E":
125			      Corresponding region is used exclusively by
126			      one resource group. No sharing allowed.
127			"P":
128			      Corresponding region is pseudo-locked. No
129			      sharing allowed.
130"sparse_masks":
131		Indicates if non-contiguous 1s value in CBM is supported.
132
133			"0":
134			      Only contiguous 1s value in CBM is supported.
135			"1":
136			      Non-contiguous 1s value in CBM is supported.
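
These files can simply be read with cat. For example, on a hypothetical
system with a 20-bit capacity mask (the values shown are illustrative only)::

  # cat /sys/fs/resctrl/info/L3/cbm_mask
  fffff
  # cat /sys/fs/resctrl/info/L3/num_closids
  16
  # cat /sys/fs/resctrl/info/L3/min_cbm_bits
  1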

The memory bandwidth (MB) subdirectory contains the following files
with respect to allocation:

"min_bandwidth":
                The minimum memory bandwidth percentage which
                the user can request.

"bandwidth_gran":
                The granularity in which the memory bandwidth
                percentage is allocated. The allocated
                b/w percentage is rounded off to the next
                control step available on the hardware. The
                available bandwidth control steps are:
                min_bandwidth + N * bandwidth_gran.

"delay_linear":
                Indicates if the delay scale is linear or
                non-linear. This field is purely informational.

"thread_throttle_mode":
                Indicator on Intel systems of how tasks running on threads
                of a physical core are throttled in cases where they
                request different memory bandwidth percentages:

                "max":
                        the smallest percentage is applied
                        to all threads
                "per-thread":
                        bandwidth percentages are directly applied to
                        the threads running on the core
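
As a worked example of the control-step formula above, assume a hypothetical
system reporting a "min_bandwidth" of 10 and a "bandwidth_gran" of 10: the
valid steps are 10, 20, 30, ... 100, and a requested value of 35 would be
rounded off to the 40 step::

  # cat /sys/fs/resctrl/info/MB/min_bandwidth
  10
  # cat /sys/fs/resctrl/info/MB/bandwidth_gran
  10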

If RDT monitoring is available there will be an "L3_MON" directory
with the following files:

"num_rmids":
                The number of RMIDs available. This is the
                upper bound for how many "CTRL_MON" + "MON"
                groups can be created.

"mon_features":
                Lists the monitoring events if
                monitoring is enabled for the resource.
                Example::

                        # cat /sys/fs/resctrl/info/L3_MON/mon_features
                        llc_occupancy
                        mbm_total_bytes
                        mbm_local_bytes

                If the system supports Bandwidth Monitoring Event
                Configuration (BMEC), then the bandwidth events will
                be configurable. The output will be::

                        # cat /sys/fs/resctrl/info/L3_MON/mon_features
                        llc_occupancy
                        mbm_total_bytes
                        mbm_total_bytes_config
                        mbm_local_bytes
                        mbm_local_bytes_config

199"mbm_total_bytes_config", "mbm_local_bytes_config":
200	Read/write files containing the configuration for the mbm_total_bytes
201	and mbm_local_bytes events, respectively, when the Bandwidth
202	Monitoring Event Configuration (BMEC) feature is supported.
203	The event configuration settings are domain specific and affect
204	all the CPUs in the domain. When either event configuration is
205	changed, the bandwidth counters for all RMIDs of both events
206	(mbm_total_bytes as well as mbm_local_bytes) are cleared for that
207	domain. The next read for every RMID will report "Unavailable"
208	and subsequent reads will report the valid value.
209
210	Following are the types of events supported:
211
212	====    ========================================================
213	Bits    Description
214	====    ========================================================
215	6       Dirty Victims from the QOS domain to all types of memory
216	5       Reads to slow memory in the non-local NUMA domain
217	4       Reads to slow memory in the local NUMA domain
218	3       Non-temporal writes to non-local NUMA domain
219	2       Non-temporal writes to local NUMA domain
220	1       Reads to memory in the non-local NUMA domain
221	0       Reads to memory in the local NUMA domain
222	====    ========================================================
223
224	By default, the mbm_total_bytes configuration is set to 0x7f to count
225	all the event types and the mbm_local_bytes configuration is set to
226	0x15 to count all the local memory events.
227
        Examples:

        * To view the current configuration::

            # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
            0=0x7f;1=0x7f;2=0x7f;3=0x7f

            # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
            0=0x15;1=0x15;3=0x15;4=0x15

        * To change the mbm_total_bytes to count only reads on domain 0,
          the bits 0, 1, 4 and 5 need to be set, which is 110011b in binary
          (in hexadecimal 0x33)::

            # echo  "0=0x33" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config

            # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
            0=0x33;1=0x7f;2=0x7f;3=0x7f

        * To change the mbm_local_bytes to count all the slow memory reads on
          domains 0 and 1, the bits 4 and 5 need to be set, which is 110000b
          in binary (in hexadecimal 0x30)::

            # echo  "0=0x30;1=0x30" > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config

            # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
            0=0x30;1=0x30;3=0x15;4=0x15

259"max_threshold_occupancy":
260		Read/write file provides the largest value (in
261		bytes) at which a previously used LLC_occupancy
262		counter can be considered for re-use.
263
264Finally, in the top level of the "info" directory there is a file
265named "last_cmd_status". This is reset with every "command" issued
266via the file system (making new directories or writing to any of the
267control files). If the command was successful, it will read as "ok".
268If the command failed, it will provide more information that can be
269conveyed in the error returns from file operations. E.g.
270::
271
272	# echo L3:0=f7 > schemata
273	bash: echo: write error: Invalid argument
274	# cat info/last_cmd_status
275	mask f7 has non-consecutive 1-bits

Resource alloc and monitor groups
=================================

Resource groups are represented as directories in the resctrl file
system.  The default group is the root directory which, immediately
after mounting, owns all the tasks and cpus in the system and can make
full use of all resources.

On a system with RDT control features additional directories can be
created in the root directory that specify different amounts of each
resource (see "schemata" below). The root and these additional top level
directories are referred to as "CTRL_MON" groups below.

On a system with RDT monitoring the root directory and other top level
directories contain a directory named "mon_groups" in which additional
directories can be created to monitor subsets of tasks in the CTRL_MON
group that is their ancestor. These are called "MON" groups in the rest
of this document.

Removing a directory will move all tasks and cpus owned by the group it
represents to the parent. Removing one of the created CTRL_MON groups
will automatically remove all MON groups below it.

Moving MON group directories to a new parent CTRL_MON group is supported
for the purpose of changing the resource allocations of a MON group
without impacting its monitoring data or assigned tasks. This operation
is not allowed for MON groups which monitor CPUs. No other move
operation is currently allowed other than simply renaming a CTRL_MON or
MON group.

All groups contain the following files:

"tasks":
        Reading this file shows the list of all tasks that belong to
        this group. Writing a task id to the file will add a task to the
        group. Multiple tasks can be added by separating the task ids
        with commas. Tasks will be assigned sequentially. Multiple
        failures are not supported. A single failure encountered while
        attempting to assign a task will cause the operation to abort and
        tasks added before the failure will remain in the group.
        Failures will be logged to /sys/fs/resctrl/info/last_cmd_status.

        If the group is a CTRL_MON group the task is removed from
        whichever previous CTRL_MON group owned the task and also from
        any MON group that owned the task. If the group is a MON group,
        then the task must already belong to the CTRL_MON parent of this
        group. The task is removed from any previous MON group.


"cpus":
        Reading this file shows a bitmask of the logical CPUs owned by
        this group. Writing a mask to this file will add and remove
        CPUs to/from this group. As with the tasks file a hierarchy is
        maintained where MON groups may only include CPUs owned by the
        parent CTRL_MON group.
        When the resource group is in pseudo-locked mode this file will
        only be readable, reflecting the CPUs associated with the
        pseudo-locked region.


"cpus_list":
        Just like "cpus", only using ranges of CPUs instead of bitmasks.
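
A short sketch of using these files (the task ids and CPU ranges are
illustrative)::

  # echo 1234,5678 > /sys/fs/resctrl/p0/tasks
  # echo 2-3 > /sys/fs/resctrl/p0/cpus_list
  # cat /sys/fs/resctrl/p0/cpus_list
  2-3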


When control is enabled all CTRL_MON groups will also contain:

"schemata":
        A list of all the resources available to this group.
        Each resource has its own line and format - see below for details.

"size":
        Mirrors the display of the "schemata" file to display the size in
        bytes of each allocation instead of the bits representing the
        allocation.

"mode":
        The "mode" of the resource group dictates the sharing of its
        allocations. A "shareable" resource group allows sharing of its
        allocations while an "exclusive" resource group does not. A
        cache pseudo-locked region is created by first writing
        "pseudo-locksetup" to the "mode" file before writing the cache
        pseudo-locked region's schemata to the resource group's "schemata"
        file. On successful pseudo-locked region creation the mode will
        automatically change to "pseudo-locked".

"ctrl_hw_id":
        Available only with debug option. The identifier used by hardware
        for the control group. On x86 this is the CLOSID.

When monitoring is enabled all MON groups will also contain:

"mon_data":
        This contains a set of files organized by L3 domain and by
        RDT event. E.g. on a system with two L3 domains there will
        be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
        directories has one file per event (e.g. "llc_occupancy",
        "mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
        files provide a read out of the current value of the event for
        all tasks in the group. In CTRL_MON groups these files provide
        the sum for all tasks in the CTRL_MON group and all tasks in
        MON groups. Please see example section for more details on usage.
        On systems with Sub-NUMA Cluster (SNC) enabled there are extra
        directories for each node (located within the "mon_L3_XX" directory
        for the L3 cache they occupy). These are named "mon_sub_L3_YY"
        where "YY" is the node number.
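
An illustrative listing for a group on a hypothetical two-domain system
without SNC::

  # ls mon_data/mon_L3_00
  llc_occupancy  mbm_local_bytes  mbm_total_bytes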

"mon_hw_id":
        Available only with debug option. The identifier used by hardware
        for the monitor group. On x86 this is the RMID.

Resource allocation rules
-------------------------

When a task is running the following rules define which resources are
available to it:

1) If the task is a member of a non-default group, then the schemata
   for that group is used.

2) Else if the task belongs to the default group, but is running on a
   CPU that is assigned to some specific group, then the schemata for the
   CPU's group is used.

3) Otherwise the schemata for the default group is used.

Resource monitoring rules
-------------------------
1) If a task is a member of a MON group, or non-default CTRL_MON group
   then RDT events for the task will be reported in that group.

2) If a task is a member of the default CTRL_MON group, but is running
   on a CPU that is assigned to some specific group, then the RDT events
   for the task will be reported in that group.

3) Otherwise RDT events for the task will be reported in the root level
   "mon_data" group.


Notes on cache occupancy monitoring and control
===============================================
When moving a task from one group to another you should remember that
this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move
it to a new group and immediately check the occupancy of the old and new
groups you will likely see that the old group is still showing 3 MB and
the new group zero. When the task accesses locations still in cache from
before the move, the h/w does not update any counters. On a busy system
you will likely see the occupancy in the old group go down as cache lines
are evicted and re-used while the occupancy in the new group rises as
the task accesses memory and loads into the cache are counted based on
membership in the new group.

The same applies to cache allocation control. Moving a task to a group
with a smaller cache partition will not evict any cache lines. The
process may continue to use them from the old partition.

Hardware uses a CLOSID (Class of Service ID) and an RMID (Resource Monitoring
ID) to identify a control group and a monitoring group respectively. Each
resource group is mapped to these IDs based on the kind of group. The number
of CLOSIDs and RMIDs is limited by the hardware, so the creation of a
"CTRL_MON" directory may fail if we run out of either CLOSIDs or RMIDs, and
the creation of a "MON" group may fail if we run out of RMIDs.

max_threshold_occupancy - generic concepts
------------------------------------------

Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged to the cache lines of the previous user of the RMID.
Hence such RMIDs are placed on a limbo list and periodically checked to see
whether the cache occupancy has gone down. If at some point the system has
many limbo RMIDs which are not yet ready to be used, the user may see an
-EBUSY during mkdir.

max_threshold_occupancy is a user configurable value to determine the
occupancy at which an RMID can be freed.
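
The file can be read and written like the other info files; the value below
is illustrative::

  # cat /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
  540672
  # echo 1048576 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy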

The mon_llc_occupancy_limbo tracepoint gives the precise occupancy in bytes
for a subset of RMIDs that are not immediately available for allocation.
This can't be relied on to produce output every second; it may be necessary
to attempt to create an empty monitor group to force an update. Output may
only be produced if creation of a control or monitor group fails.

Schemata files - general concepts
---------------------------------
Each line in the file describes one resource. The line starts with
the name of the resource, followed by specific values to be applied
in each of the instances of that resource on the system.

Cache IDs
---------
On current generation systems there is one L3 cache per socket and L2
caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. There could be multiple separate L3
caches on a socket, or multiple cores could share an L2 cache. So instead
of using "socket" or "core" to define the set of logical cpus sharing
a resource we use a "Cache ID". At a given cache level this will be a
unique number across the whole system (but it isn't guaranteed to be a
contiguous sequence, there may be gaps).  To find the ID for each logical
CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
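
For example, the L3 cache IDs can be listed like this (index3 is typically
the L3 cache; the IDs shown are illustrative)::

  # grep . /sys/devices/system/cpu/cpu*/cache/index3/id
  /sys/devices/system/cpu/cpu0/cache/index3/id:0
  /sys/devices/system/cpu/cpu1/cache/index3/id:0
  /sys/devices/system/cpu/cpu2/cache/index3/id:1
  /sys/devices/system/cpu/cpu3/cache/index3/id:1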

Cache Bit Masks (CBM)
---------------------
For cache resources we describe the portion of the cache that is available
for allocation using a bitmask. The maximum value of the mask is defined
by each cpu model (and may be different for different cache levels). It
is found using CPUID, but is also provided in the "info" directory of
the resctrl file system in "info/{resource}/cbm_mask". Some Intel hardware
requires that these masks have all the '1' bits in a contiguous block. So
0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. Check /sys/fs/resctrl/info/{resource}/sparse_masks
to see if non-contiguous 1s values are supported. On a system with a 20-bit
mask each bit represents 5% of the capacity of the cache. You could partition
the cache into four equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.
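
As a sketch, the four-way split above could be applied to four new resource
groups like this (assuming a single L3 domain with cache id 0)::

  # cd /sys/fs/resctrl
  # mkdir g0 g1 g2 g3
  # echo "L3:0=1f" > g0/schemata
  # echo "L3:0=3e0" > g1/schemata
  # echo "L3:0=7c00" > g2/schemata
  # echo "L3:0=f8000" > g3/schemata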

Notes on Sub-NUMA Cluster mode
==============================
When SNC mode is enabled, Linux may load balance tasks between Sub-NUMA
nodes much more readily than between regular NUMA nodes since the CPUs
on Sub-NUMA nodes share the same L3 cache and the system may report
the NUMA distance between Sub-NUMA nodes with a lower value than used
for regular NUMA nodes.

The top-level monitoring files in each "mon_L3_XX" directory provide
the sum of data across all SNC nodes sharing an L3 cache instance.
Users who bind tasks to the CPUs of a specific Sub-NUMA node can read
the "llc_occupancy", "mbm_total_bytes", and "mbm_local_bytes" in the
"mon_sub_L3_YY" directories to get node local data.

Memory bandwidth allocation is still performed at the L3 cache
level. I.e. throttling controls are applied to all SNC nodes.

L3 cache allocation bitmaps also apply to all SNC nodes. But note that
the amount of L3 cache represented by each bit is divided by the number
of SNC nodes per L3 cache. E.g. with a 100MB cache on a system with 10-bit
allocation masks each bit normally represents 10MB. With SNC mode enabled
with two SNC nodes per L3 cache, each bit only represents 5MB.
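
For example, on a hypothetical system with two SNC nodes per L3 cache, the
top-level file reports the sum of the node-local values below it (all values
are illustrative)::

  # cat mon_data/mon_L3_00/llc_occupancy
  2461696
  # cat mon_data/mon_L3_00/mon_sub_L3_00/llc_occupancy
  1230848
  # cat mon_data/mon_L3_00/mon_sub_L3_01/llc_occupancy
  1230848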

Memory bandwidth Allocation and monitoring
==========================================

For the Memory bandwidth resource, by default the user controls the resource
by indicating the percentage of total memory bandwidth.

The minimum bandwidth percentage value for each cpu model is predefined
and can be looked up through "info/MB/min_bandwidth". The bandwidth
granularity that is allocated is also dependent on the cpu model and can
be looked up at "info/MB/bandwidth_gran". The available bandwidth
control steps are: min_bw + N * bw_gran. Intermediate values are rounded
to the next control step available on the hardware.

The bandwidth throttling is a core specific mechanism on some of Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
sharing a core may result in both threads being throttled to use the
low bandwidth (see "thread_throttle_mode").

The fact that Memory bandwidth allocation (MBA) may be a core
specific mechanism whereas memory bandwidth monitoring (MBM) is done at
the package level may lead to confusion when users try to apply control
via the MBA and then monitor the bandwidth to see if the controls are
effective. Below are such scenarios:

1. User may *not* see increase in actual bandwidth when percentage
   values are increased:

This can occur when aggregate L2 external bandwidth is more than L3
external bandwidth. Consider an SKL SKU with 24 cores on a package and
where L2 external bandwidth is 10GBps (hence aggregate L2 external
bandwidth is 240GBps) and L3 external bandwidth is 100GBps. Now a workload
with '20 threads, having 50% bandwidth, each consuming 5GBps' consumes the
max L3 bandwidth of 100GBps although the percentage value specified is only
50% << 100%. Hence increasing the bandwidth percentage will not yield any
more bandwidth. This is because although the L2 external bandwidth still
has capacity, the L3 external bandwidth is fully used. Also note that
this would be dependent on the number of cores the benchmark is run on.

2. Same bandwidth percentage may mean different actual bandwidth
   depending on # of threads:

For the same SKU in #1, a 'single thread, with 10% bandwidth' and '4
threads, with 10% bandwidth' can consume up to 10GBps and 40GBps although
they have the same percentage bandwidth of 10%. This is simply because as
threads start using more cores in an rdtgroup, the actual bandwidth may
increase or vary although the user specified bandwidth percentage is the same.

In order to mitigate this and make the interface more user friendly,
resctrl added support for specifying the bandwidth in MiBps as well.  The
kernel underneath would use a software feedback mechanism or a "Software
Controller (mba_sc)" which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure::

        "actual bandwidth < user specified bandwidth".

By default, the schemata would take the bandwidth percentage values
whereas the user can switch to the "MBA software controller" mode using
a mount option 'mba_MBps'. The schemata format is specified in the below
sections.
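
For example, switching to the software controller mode requires remounting
with that option::

  # umount /sys/fs/resctrl
  # mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl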

L3 schemata file details (code and data prioritization disabled)
----------------------------------------------------------------
With CDP disabled the L3 schemata format is::

        L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L3 schemata file details (CDP enabled via mount option to resctrl)
------------------------------------------------------------------
When CDP is enabled L3 control is split into two separate resources
so you can specify independent masks for code and data like this::

        L3DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
        L3CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L2 schemata file details
------------------------
CDP is supported at L2 using the 'cdpl2' mount option. The schemata
format is either::

        L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

or::

        L2DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
        L2CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...


Memory bandwidth Allocation (default mode)
------------------------------------------

The memory b/w domain is the L3 cache.
::

        MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

Memory bandwidth Allocation specified in MiBps
----------------------------------------------

The memory bandwidth domain is the L3 cache.
::

        MB:<cache_id0>=bw_MiBps0;<cache_id1>=bw_MiBps1;...

Slow Memory Bandwidth Allocation (SMBA)
---------------------------------------
AMD hardware supports Slow Memory Bandwidth Allocation (SMBA).
CXL.memory is the only supported "slow" memory device. With the
support of SMBA, the hardware enables bandwidth allocation on
the slow memory devices. If there are multiple such devices in
the system, the throttling logic groups all the slow sources
together and applies the limit on them as a whole.

The presence of SMBA (with CXL.memory) is independent of the presence of
slow memory devices. If there are no such devices on the system, then
configuring SMBA will have no impact on the performance of the system.

The bandwidth domain for slow memory is the L3 cache. Its schemata file
is formatted as:
::

        SMBA:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

Reading/writing the schemata file
---------------------------------
Reading the schemata file will show the state of all resources
on all domains. When writing you only need to specify those values
which you wish to change.  E.g.
::

  # cat schemata
  L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
  # echo "L3DATA:2=3c0;" > schemata
  # cat schemata
  L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff

Reading/writing the schemata file (on AMD systems)
--------------------------------------------------
Reading the schemata file will show the current bandwidth limit on all
domains. The allocated resources are in multiples of one eighth GB/s.
When writing to the file, you need to specify the cache id for which you
wish to configure the bandwidth limit.

For example, to allocate a 2GB/s limit on the first cache id:

::

  # cat schemata
    MB:0=2048;1=2048;2=2048;3=2048
    L3:0=ffff;1=ffff;2=ffff;3=ffff

  # echo "MB:1=16" > schemata
  # cat schemata
    MB:0=2048;1=  16;2=2048;3=2048
    L3:0=ffff;1=ffff;2=ffff;3=ffff

Reading/writing the schemata file (on AMD systems) with SMBA feature
--------------------------------------------------------------------
Reading and writing the schemata file is the same as described in the
section above without SMBA.

For example, to allocate an 8GB/s limit on the first cache id:

::

  # cat schemata
    SMBA:0=2048;1=2048;2=2048;3=2048
      MB:0=2048;1=2048;2=2048;3=2048
      L3:0=ffff;1=ffff;2=ffff;3=ffff

  # echo "SMBA:1=64" > schemata
  # cat schemata
    SMBA:0=2048;1=  64;2=2048;3=2048
      MB:0=2048;1=2048;2=2048;3=2048
      L3:0=ffff;1=ffff;2=ffff;3=ffff

Cache Pseudo-Locking
====================
CAT enables a user to specify the amount of cache space that an
application can fill. Cache pseudo-locking builds on the fact that a
CPU can still read and write data pre-allocated outside its current
allocated area on a cache hit. With cache pseudo-locking, data can be
preloaded into a reserved portion of cache that no application can
fill, and from that point on will only serve cache hits. The cache
pseudo-locked memory is made accessible to user space where an
application can map it into its virtual address space and thus have
a region of memory with reduced average read latency.

The creation of a cache pseudo-locked region is triggered by a request
from the user to do so that is accompanied by a schemata of the region
to be pseudo-locked. The cache pseudo-locked region is created as follows:

- Create a CAT allocation CLOSNEW with a CBM matching the schemata
  from the user of the cache region that will contain the pseudo-locked
  memory. This region must not overlap with any current CAT allocation/CLOS
  on the system and no future overlap with this cache region is allowed
  while the pseudo-locked region exists.
- Create a contiguous region of memory of the same size as the cache
  region.
- Flush the cache, disable hardware prefetchers, disable preemption.
- Make CLOSNEW the active CLOS and touch the allocated memory to load
  it into the cache.
- Set the previous CLOS as active.
- At this point the closid CLOSNEW can be released - the cache
  pseudo-locked region is protected as long as its CBM does not appear in
  any CAT allocation. Even though the cache pseudo-locked region will from
  this point on not appear in any CBM of any CLOS an application running with
  any CLOS will be able to access the memory in the pseudo-locked region since
  the region continues to serve cache hits.
- The contiguous region of memory loaded into the cache is exposed to
  user-space as a character device.

Cache pseudo-locking increases the probability that data will remain
in the cache via carefully configuring the CAT feature and controlling
application behavior. There is no guarantee that data is placed in
cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
"locked" data from cache. Power management C-states may shrink or
power off cache. Deeper C-states will automatically be restricted on
pseudo-locked region creation.

It is required that an application using a pseudo-locked region runs
with affinity to the cores (or a subset of the cores) associated
with the cache on which the pseudo-locked region resides. A sanity check
within the code will not allow an application to map pseudo-locked memory
unless it runs with affinity to cores associated with the cache on which the
pseudo-locked region resides. The sanity check is only done during the
initial mmap() handling, there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.

Pseudo-locking is accomplished in two stages:

1) During the first stage the system administrator allocates a portion
   of cache that should be dedicated to pseudo-locking. At this time an
   equivalent portion of memory is allocated, loaded into the allocated
   cache portion, and exposed as a character device.
2) During the second stage a user-space application maps (mmap()) the
   pseudo-locked memory into its address space.

Cache Pseudo-Locking Interface
------------------------------
A pseudo-locked region is created using the resctrl interface as follows:

1) Create a new resource group by creating a new directory in /sys/fs/resctrl.
2) Change the new resource group's mode to "pseudo-locksetup" by writing
   "pseudo-locksetup" to the "mode" file.
3) Write the schemata of the pseudo-locked region to the "schemata" file. All
   bits within the schemata should be "unused" according to the "bit_usage"
   file.

On successful pseudo-locked region creation the "mode" file will contain
"pseudo-locked" and a new character device with the same name as the resource
group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
by user space in order to obtain access to the pseudo-locked memory region.

An example of cache pseudo-locked region creation and usage can be found below.

Cache Pseudo-Locking Debugging Interface
----------------------------------------
The pseudo-locking debugging interface is enabled by default (if
CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.

There is no explicit way for the kernel to test if a provided memory
location is present in the cache. The pseudo-locking debugging interface uses
the tracing infrastructure to provide two ways to measure cache residency of
the pseudo-locked region:

1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
   from these measurements are best visualized using a hist trigger (see
   example below). In this test the pseudo-locked region is traversed at
   a stride of 32 bytes while hardware prefetchers and preemption
   are disabled. This also provides a substitute visualization of cache
   hits and misses.
2) Cache hit and miss measurements using model specific precision counters if
   available. Depending on the levels of cache on the system the pseudo_lock_l2
   and pseudo_lock_l3 tracepoints are available.

When a pseudo-locked region is created a new debugfs directory is created for
it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
write-only file, pseudo_lock_measure, is present in this directory. The
measurement of the pseudo-locked region depends on the number written to this
debugfs file:

1:
     writing "1" to the pseudo_lock_measure file will trigger the latency
     measurement captured in the pseudo_lock_mem_latency tracepoint. See
     example below.
2:
     writing "2" to the pseudo_lock_measure file will trigger the L2 cache
     residency (cache hits and misses) measurement captured in the
     pseudo_lock_l2 tracepoint. See example below.
3:
     writing "3" to the pseudo_lock_measure file will trigger the L3 cache
     residency (cache hits and misses) measurement captured in the
     pseudo_lock_l3 tracepoint.

All measurements are recorded with the tracing infrastructure. This requires
the relevant tracepoints to be enabled before the measurement is triggered.

Example of latency debugging interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this example a pseudo-locked region named "newlock" was created. Here is
how we can measure the latency in cycles of reading from this region and
visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
is set::

  # :> /sys/kernel/tracing/trace
  # echo 'hist:keys=latency' > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
  # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
  # echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
  # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
  # cat /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/hist

  # event histogram
  #
  # trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
  #

  { latency:        456 } hitcount:          1
  { latency:         50 } hitcount:         83
  { latency:         36 } hitcount:         96
  { latency:         44 } hitcount:        174
  { latency:         48 } hitcount:        195
  { latency:         46 } hitcount:        262
  { latency:         42 } hitcount:        693
  { latency:         40 } hitcount:       3204
  { latency:         38 } hitcount:       3484

  Totals:
      Hits: 8192
      Entries: 9
    Dropped: 0

Example of cache hits/misses debugging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this example a pseudo-locked region named "newlock" was created on the L2
cache of a platform. Here is how we can obtain details of the cache hits
and misses using the platform's precision counters.
::

  # :> /sys/kernel/tracing/trace
  # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
  # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
  # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
  # cat /sys/kernel/tracing/trace

  # tracer: nop
  #
  #                              _-----=> irqs-off
  #                             / _----=> need-resched
  #                            | / _---=> hardirq/softirq
  #                            || / _--=> preempt-depth
  #                            ||| /     delay
  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #              | |       |   ||||       |         |
  pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0


Examples for RDT allocation usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1) Example 1

On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks, a minimum b/w of 10% with a memory bandwidth
granularity of 10%.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1
  # echo -e "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
  # echo -e "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Similarly, tasks that are under the control of group "p0" may use a
maximum memory b/w of 50% on socket 0 and 50% on socket 1.
Tasks in group "p1" may also use 50% memory b/w on both sockets.
Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocation specifies the maximum
b/w that the group may be able to use, and the system admin can configure
the b/w accordingly.

If resctrl is using the software controller (mba_sc) then the user can
enter the max b/w in MiBps rather than the percentage values.
::

  # echo -e "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
  # echo -e "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata

In the above example the tasks in "p1" and "p0" on socket 0 would use a max
b/w of 1024 MiBps whereas on socket 1 they would use 500 MiBps.

2) Example 2

Again two sockets, but this time with a more realistic 20-bit mask.

Two real time tasks pid=1234 running on processor 0 and pid=5678 running on
processor 1 on socket 0 of a two-socket, dual-core machine. To avoid noisy
neighbors, each of the two real-time tasks exclusively occupies one quarter
of L3 cache on socket 0.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
ordinary tasks::

  # echo -e "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata

Next we make a resource group for our first real time task and give
it access to the "top" 25% of the cache on socket 0.
::

  # mkdir p0
  # echo "L3:0=f8000;1=fffff" > p0/schemata

Finally we move our first real time task into this resource group. We
also use taskset(1) to ensure the task always runs on a dedicated CPU
on socket 0. Most uses of resource groups will also constrain which
processors tasks run on.
::

  # echo 1234 > p0/tasks
  # taskset -cp 1 1234

Ditto for the second real time task (with the remaining 25% of cache)::

  # mkdir p1
  # echo "L3:0=7c00;1=fffff" > p1/schemata
  # echo 5678 > p1/tasks
  # taskset -cp 2 5678

For the same 2 socket system with the memory b/w resource and CAT L3 the
schemata would look like the following (assuming min_bandwidth is 10 and
bandwidth_gran is 10):

For our first real time task this would request 20% memory b/w on socket 0.
::

  # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata

For our second real time task this would request another 20% memory b/w
on socket 0.
::

  # echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata

3) Example 3

A single socket system which has real-time tasks running on cores 4-7 and
non real-time workload assigned to cores 0-3. The real-time tasks share text
and data, so a per task association is not required and due to interaction
with the kernel it's desired that the kernel on these cores shares L3 with
the tasks.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
cannot be used by ordinary tasks::

  # echo -e "L3:0=3ff\nMB:0=50" > schemata

Next we make a resource group for our real time cores and give it access
to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
socket 0.
::

  # mkdir p0
  # echo -e "L3:0=ffc00\nMB:0=50" > p0/schemata

Finally we move cores 4-7 over to the new group and make sure that the
kernel and the tasks running there get 50% of the cache. They should
also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
siblings and only the real time threads are scheduled on the cores 4-7.
::

  # echo F0 > p0/cpus

4) Example 4

The resource groups in previous examples were all in the default "shareable"
mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.

In this example a new exclusive resource group will be created on a L2 CAT
system with two L2 cache instances that can be configured with an 8-bit
capacity bitmask. The new exclusive resource group will be configured to use
25% of each cache instance.
::

  # mount -t resctrl resctrl /sys/fs/resctrl/
  # cd /sys/fs/resctrl

First, we observe that the default group is configured to allocate to all L2
cache::

  # cat schemata
  L2:0=ff;1=ff

We could attempt to make the new resource group exclusive at this point, but
it will fail because of the overlap with the schemata of the default group::

  # mkdir p0
  # echo 'L2:0=0x3;1=0x3' > p0/schemata
  # cat p0/mode
  shareable
  # echo exclusive > p0/mode
  -sh: echo: write error: Invalid argument
  # cat info/last_cmd_status
  schemata overlaps

To ensure that there is no overlap with another resource group the default
resource group's schemata has to change, making it possible for the new
resource group to become exclusive.
::

  # echo 'L2:0=0xfc;1=0xfc' > schemata
  # echo exclusive > p0/mode
  # grep . p0/*
  p0/cpus:0
  p0/mode:exclusive
  p0/schemata:L2:0=03;1=03
  p0/size:L2:0=262144;1=262144

A newly created resource group will not overlap with an exclusive resource
group::

  # mkdir p1
  # grep . p1/*
  p1/cpus:0
  p1/mode:shareable
  p1/schemata:L2:0=fc;1=fc
  p1/size:L2:0=786432;1=786432

The bit_usage will reflect how the cache is used::

  # cat info/L2/bit_usage
  0=SSSSSSEE;1=SSSSSSEE

A resource group cannot be forced to overlap with an exclusive resource group::

  # echo 'L2:0=0x1;1=0x1' > p1/schemata
  -sh: echo: write error: Invalid argument
  # cat info/last_cmd_status
  overlaps with exclusive group

Example of Cache Pseudo-Locking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lock a portion of the L2 cache from cache id 1 using CBM 0x3. The
pseudo-locked region is exposed at /dev/pseudo_lock/newlock and can be
provided to an application as an argument to mmap().
::

  # mount -t resctrl resctrl /sys/fs/resctrl/
  # cd /sys/fs/resctrl

Ensure that there are bits available that can be pseudo-locked. Since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata::

  # cat info/L2/bit_usage
  0=SSSSSSSS;1=SSSSSSSS
  # echo 'L2:1=0xfc' > schemata
  # cat info/L2/bit_usage
  0=SSSSSSSS;1=SSSSSS00

Create a new resource group that will be associated with the pseudo-locked
region, indicate that it will be used for a pseudo-locked region, and
configure the requested pseudo-locked region capacity bitmask::

  # mkdir newlock
  # echo pseudo-locksetup > newlock/mode
  # echo 'L2:1=0x3' > newlock/schemata

On success the resource group's mode will change to pseudo-locked, the
bit_usage will reflect the pseudo-locked region, and the character device
exposing the pseudo-locked region will exist::

  # cat newlock/mode
  pseudo-locked
  # cat info/L2/bit_usage
  0=SSSSSSSS;1=SSSSSSPP
  # ls -l /dev/pseudo_lock/newlock
  crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock

::

  /*
   * Example code to access one page of pseudo-locked cache region
   * from user space.
   */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/mman.h>

  /*
   * It is required that the application runs with affinity to only
   * cores associated with the pseudo-locked region. Here the cpu
   * is hardcoded for convenience of example.
   */
  static int cpuid = 2;

  int main(int argc, char *argv[])
  {
    cpu_set_t cpuset;
    long page_size;
    void *mapping;
    int dev_fd;
    int ret;

    page_size = sysconf(_SC_PAGESIZE);

    CPU_ZERO(&cpuset);
    CPU_SET(cpuid, &cpuset);
    ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
    if (ret < 0) {
      perror("sched_setaffinity");
      exit(EXIT_FAILURE);
    }

    dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
    if (dev_fd < 0) {
      perror("open");
      exit(EXIT_FAILURE);
    }

    mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
            dev_fd, 0);
    if (mapping == MAP_FAILED) {
      perror("mmap");
      close(dev_fd);
      exit(EXIT_FAILURE);
    }

    /* Application interacts with pseudo-locked memory @mapping */

    ret = munmap(mapping, page_size);
    if (ret < 0) {
      perror("munmap");
      close(dev_fd);
      exit(EXIT_FAILURE);
    }

    close(dev_fd);
    exit(EXIT_SUCCESS);
  }

Locking between applications
----------------------------

Certain operations on the resctrl filesystem, composed of read/writes
to/from multiple files, must be atomic.

As an example, the allocation of an exclusive reservation of L3 cache
involves:

  1. Read the cbmmasks from each directory or the per-resource "bit_usage"
  2. Find a contiguous set of bits in the global CBM bitmask that is clear
     in any of the directory cbmmasks
  3. Create a new directory
  4. Write the bits found in step 2 to the new directory's "schemata" file

If two applications attempt to allocate space concurrently then they can
end up allocating the same bits so the reservations are shared instead of
exclusive.

To coordinate atomic operations on the resctrlfs and to avoid the problem
above, the following locking procedure is recommended:

Locking is based on flock, which is available in libc and also as a shell
script command.

Write lock:

 A) Take flock(LOCK_EX) on /sys/fs/resctrl
 B) Read/write the directory structure.
 C) Release the lock: flock(LOCK_UN)

Read lock:

 A) Take flock(LOCK_SH) on /sys/fs/resctrl
 B) If success read the directory structure.
 C) Release the lock: flock(LOCK_UN)

Example with bash::

  # Atomically read directory structure
  $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl

  # Read directory contents and create new subdirectory

  $ cat create-dir.sh
  find /sys/fs/resctrl/ > output.txt
  mask=$(function-of output.txt)
  mkdir /sys/fs/resctrl/newres/
  echo "$mask" > /sys/fs/resctrl/newres/schemata

  $ flock /sys/fs/resctrl/ ./create-dir.sh

Example with C::

  /*
   * Example code to take advisory locks
   * before accessing the resctrl filesystem
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/file.h>

  void resctrl_take_shared_lock(int fd)
  {
    int ret;

    /* take shared lock on resctrl filesystem */
    ret = flock(fd, LOCK_SH);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  void resctrl_take_exclusive_lock(int fd)
  {
    int ret;

    /* take exclusive lock on resctrl filesystem */
    ret = flock(fd, LOCK_EX);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  void resctrl_release_lock(int fd)
  {
    int ret;

    /* release lock on resctrl filesystem */
    ret = flock(fd, LOCK_UN);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  int main(void)
  {
    int fd;

    fd = open("/sys/fs/resctrl", O_DIRECTORY);
    if (fd == -1) {
      perror("open");
      exit(-1);
    }
    resctrl_take_shared_lock(fd);
    /* code to read directory contents */
    resctrl_release_lock(fd);

    resctrl_take_exclusive_lock(fd);
    /* code to read and write directory contents */
    resctrl_release_lock(fd);

    return 0;
  }

Examples for RDT Monitoring along with allocation usage
=======================================================
Reading monitored data
----------------------
Reading an event file (for example: mon_data/mon_L3_00/llc_occupancy) shows
the current snapshot of LLC occupancy of the corresponding MON
group or CTRL_MON group.


Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
------------------------------------------------------------------------
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1
  # echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
  # echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
  # echo 5678 > p1/tasks
  # echo 5679 > p1/tasks

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Create monitor groups and assign a subset of tasks to each monitor group.
::

  # cd /sys/fs/resctrl/p1/mon_groups
  # mkdir m11 m12
  # echo 5678 > m11/tasks
  # echo 5679 > m12/tasks

Fetch data (data shown in bytes)
::

  # cat m11/mon_data/mon_L3_00/llc_occupancy
  16234000
  # cat m11/mon_data/mon_L3_01/llc_occupancy
  14789000
  # cat m12/mon_data/mon_L3_00/llc_occupancy
  16789000

The parent CTRL_MON group shows the aggregated data.
::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  31234000

Example 2 (Monitor a task from its creation)
--------------------------------------------
On a two socket machine (one L3 cache per socket)::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1

An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation.
::

  # echo $$ > /sys/fs/resctrl/p1/tasks
  # <cmd>

Fetch the data::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  31789000

Example 3 (Monitor without CAT support or before creating CAT groups)
---------------------------------------------------------------------

Assume a system like HSW has only CQM and no CAT support. In this case
resctrl will still mount but cannot create CTRL_MON directories. However,
the user can create different MON groups within the root group and is
thereby able to monitor all tasks, including kernel threads.

This can also be used to profile jobs' cache size footprint before being
able to allocate them to different allocation groups.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir mon_groups/m01
  # mkdir mon_groups/m02

  # echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
  # echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks

Monitor the groups separately and also get per domain data. From the
output below it is apparent that the tasks are mostly doing work on
domain (socket) 0.
::

  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
  34555
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
  32789


Example 4 (Monitor real time tasks)
-----------------------------------

A single socket system which has real time tasks running on cores 4-7
and non real time tasks on other cpus. We want to monitor the cache
occupancy of the real time threads on these cores.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p1

Move the cpus 4-7 over to p1::

  # echo f0 > p1/cpus

View the llc occupancy snapshot::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  11234000

Intel RDT Errata
================

Intel MBM Counters May Report System Memory Bandwidth Incorrectly
-----------------------------------------------------------------

Errata SKX99 for Skylake server and BDF102 for Broadwell server.

Problem: Intel Memory Bandwidth Monitoring (MBM) counters track metrics
according to the assigned Resource Monitor ID (RMID) for that logical
core. The IA32_QM_CTR register (MSR 0xC8E), used to report these
metrics, may report incorrect system bandwidth for certain RMID values.

Implication: Due to the errata, system memory bandwidth may not match
what is reported.

Workaround: MBM total and local readings are corrected according to the
following correction factor table:

+---------------+---------------+---------------+-----------------+
|core count     |rmid count     |rmid threshold |correction factor|
+---------------+---------------+---------------+-----------------+
|1              |8              |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|2              |16             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|3              |24             |15             |0.969650         |
+---------------+---------------+---------------+-----------------+
|4              |32             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|6              |48             |31             |0.969650         |
+---------------+---------------+---------------+-----------------+
|7              |56             |47             |1.142857         |
+---------------+---------------+---------------+-----------------+
|8              |64             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|9              |72             |63             |1.185115         |
+---------------+---------------+---------------+-----------------+
|10             |80             |63             |1.066553         |
+---------------+---------------+---------------+-----------------+
|11             |88             |79             |1.454545         |
+---------------+---------------+---------------+-----------------+
|12             |96             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|13             |104            |95             |1.230769         |
+---------------+---------------+---------------+-----------------+
|14             |112            |95             |1.142857         |
+---------------+---------------+---------------+-----------------+
|15             |120            |95             |1.066667         |
+---------------+---------------+---------------+-----------------+
|16             |128            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|17             |136            |127            |1.254863         |
+---------------+---------------+---------------+-----------------+
|18             |144            |127            |1.185255         |
+---------------+---------------+---------------+-----------------+
|19             |152            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|20             |160            |127            |1.066667         |
+---------------+---------------+---------------+-----------------+
|21             |168            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|22             |176            |159            |1.454334         |
+---------------+---------------+---------------+-----------------+
|23             |184            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|24             |192            |127            |0.969744         |
+---------------+---------------+---------------+-----------------+
|25             |200            |191            |1.280246         |
+---------------+---------------+---------------+-----------------+
|26             |208            |191            |1.230921         |
+---------------+---------------+---------------+-----------------+
|27             |216            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|28             |224            |191            |1.143118         |
+---------------+---------------+---------------+-----------------+

If rmid > rmid threshold, MBM total and local values should be multiplied
by the correction factor.
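
For example, on a 28-core part (rmid count 224, rmid threshold 191) a
reading taken with rmid 200 should be multiplied by 1.143118, so a raw MBM
value of 1000000 bytes corresponds to roughly 1143118 bytes.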

See:

1. Erratum SKX99 in Intel Xeon Processor Scalable Family Specification Update:
http://web.archive.org/web/20200716124958/https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html

2. Erratum BDF102 in Intel Xeon E5-2600 v4 Processor Product Family Specification Update:
http://web.archive.org/web/20191125200531/https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf

3. The errata in Intel Resource Director Technology (Intel RDT) on 2nd Generation Intel Xeon Scalable Processors Reference Manual:
https://software.intel.com/content/www/us/en/develop/articles/intel-resource-director-technology-rdt-reference-manual.html

for further information.