.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

=====================================================
User Interface for Resource Control feature (resctrl)
=====================================================

:Copyright: |copy| 2016 Intel Corporation
:Authors: - Fenghua Yu <fenghua.yu@intel.com>
          - Tony Luck <tony.luck@intel.com>
          - Vikas Shivappa <vikas.shivappa@intel.com>


Intel refers to this feature as Intel Resource Director Technology (Intel(R) RDT).
AMD refers to this feature as AMD Platform Quality of Service (AMD QoS).

This feature is enabled by the CONFIG_X86_CPU_RESCTRL kernel config option
and is indicated by the x86 /proc/cpuinfo flag bits:

=============================================== ================================
RDT (Resource Director Technology) Allocation   "rdt_a"
CAT (Cache Allocation Technology)               "cat_l3", "cat_l2"
CDP (Code and Data Prioritization)              "cdp_l3", "cdp_l2"
CQM (Cache QoS Monitoring)                      "cqm_llc", "cqm_occup_llc"
MBM (Memory Bandwidth Monitoring)               "cqm_mbm_total", "cqm_mbm_local"
MBA (Memory Bandwidth Allocation)               "mba"
SMBA (Slow Memory Bandwidth Allocation)         ""
BMEC (Bandwidth Monitoring Event Configuration) ""
ABMC (Assignable Bandwidth Monitoring Counters) ""
=============================================== ================================

Historically, new features were made visible by default in /proc/cpuinfo. This
resulted in the feature flags becoming hard to parse by humans. Adding a new
flag to /proc/cpuinfo should be avoided if user space can obtain information
about the feature from resctrl's info directory.

To use the feature mount the file system::

  # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps][,debug]] /sys/fs/resctrl

mount options are:

"cdp":
        Enable code/data prioritization in L3 cache allocations.
"cdpl2":
        Enable code/data prioritization in L2 cache allocations.
"mba_MBps":
        Enable the MBA Software Controller (mba_sc) to specify MBA
        bandwidth in MiBps.
"debug":
        Make debug files accessible. Available debug files are annotated with
        "Available only with debug option".

L2 and L3 CDP are controlled separately.

RDT features are orthogonal. A particular system may support only
monitoring, only control, or both monitoring and control. Cache
pseudo-locking is a unique way of using cache control to "pin" or
"lock" data in the cache. Details can be found in
"Cache Pseudo-Locking".


The mount succeeds if either allocation or monitoring is present, but
only those files and directories supported by the system will be created.
For more details on the behavior of the interface during monitoring
and allocation, see the "Resource alloc and monitor groups" section.
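Before relying on any of the files described below, support can be checked
from user space. A minimal shell sketch (the flag names come from the table
above; "rdt_a" only covers allocation, while monitoring-only systems
advertise flags such as "cqm_llc" instead, and newer features without a
flag are discovered via resctrl's info directory once mounted)::

  if grep -qE 'rdt_a|cqm_llc' /proc/cpuinfo; then
          mount -t resctrl resctrl /sys/fs/resctrl
          ls /sys/fs/resctrl/info     # resources supported on this system
  else
          echo "no resctrl allocation or monitoring support" >&2
  fi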
Info directory
==============

The 'info' directory contains information about the enabled
resources. Each resource has its own subdirectory. The subdirectory
names reflect the resource names.

Each subdirectory contains the following files with respect to
allocation:

Cache resource (L3/L2) subdirectory contains the following files
related to allocation:

"num_closids":
        The number of CLOSIDs which are valid for this
        resource. The kernel uses the smallest number of
        CLOSIDs of all enabled resources as limit.
"cbm_mask":
        The bitmask which is valid for this resource.
        This mask is equivalent to 100%.
"min_cbm_bits":
        The minimum number of consecutive bits which
        must be set when writing a mask.

"shareable_bits":
        Bitmask of a resource shared with other executing
        entities (e.g. I/O). The user can use this when
        setting up exclusive cache partitions. Note that
        some platforms support devices that have their
        own settings for cache use which can override
        these bits.
"bit_usage":
        Annotated capacity bitmasks showing how all
        instances of the resource are used. The legend is:

        "0":
              Corresponding region is unused. When the system's
              resources have been allocated and a "0" is found
              in "bit_usage" it is a sign that resources are
              wasted.

        "H":
              Corresponding region is used by hardware only
              but available for software use. If a resource
              has bits set in "shareable_bits" but not all
              of these bits appear in the resource groups'
              schematas then the bits that appear in
              "shareable_bits" but in no resource group will
              be marked as "H".
        "X":
              Corresponding region is available for sharing and
              used by hardware and software. These are the
              bits that appear in "shareable_bits" as
              well as a resource group's allocation.
        "S":
              Corresponding region is used by software
              and available for sharing.
        "E":
              Corresponding region is used exclusively by
              one resource group. No sharing allowed.
        "P":
              Corresponding region is pseudo-locked. No
              sharing allowed.
"sparse_masks":
        Indicates whether non-contiguous 1s in a CBM are supported.

        "0":
              Only contiguous 1s in a CBM are supported.
        "1":
              Non-contiguous 1s in a CBM are supported.

Memory bandwidth (MB) subdirectory contains the following files
with respect to allocation:

"min_bandwidth":
        The minimum memory bandwidth percentage which
        the user can request.

"bandwidth_gran":
        The granularity in which the memory bandwidth
        percentage is allocated. The allocated
        b/w percentage is rounded off to the next
        control step available on the hardware. The
        available bandwidth control steps are:
        min_bandwidth + N * bandwidth_gran.

"delay_linear":
        Indicates if the delay scale is linear or
        non-linear. This field is purely informational.

"thread_throttle_mode":
        Indicator on Intel systems of how tasks running on threads
        of a physical core are throttled in cases where they
        request different memory bandwidth percentages:

        "max":
              the smallest percentage is applied
              to all threads
        "per-thread":
              bandwidth percentages are directly applied to
              the threads running on the core

If RDT monitoring is available there will be an "L3_MON" directory
with the following files:

"num_rmids":
        The number of RMIDs available. This is the
        upper bound for how many "CTRL_MON" + "MON"
        groups can be created.

"mon_features":
        Lists the monitoring events if
        monitoring is enabled for the resource.
        Example::

          # cat /sys/fs/resctrl/info/L3_MON/mon_features
          llc_occupancy
          mbm_total_bytes
          mbm_local_bytes

        If the system supports Bandwidth Monitoring Event
        Configuration (BMEC), then the bandwidth events will
        be configurable.
        The output will be::

          # cat /sys/fs/resctrl/info/L3_MON/mon_features
          llc_occupancy
          mbm_total_bytes
          mbm_total_bytes_config
          mbm_local_bytes
          mbm_local_bytes_config

"mbm_total_bytes_config", "mbm_local_bytes_config":
        Read/write files containing the configuration for the mbm_total_bytes
        and mbm_local_bytes events, respectively, when the Bandwidth
        Monitoring Event Configuration (BMEC) feature is supported.
        The event configuration settings are domain specific and affect
        all the CPUs in the domain. When either event configuration is
        changed, the bandwidth counters for all RMIDs of both events
        (mbm_total_bytes as well as mbm_local_bytes) are cleared for that
        domain. The next read for every RMID will report "Unavailable"
        and subsequent reads will report the valid value.

        Following are the types of events supported:

        ==== ========================================================
        Bits Description
        ==== ========================================================
        6    Dirty Victims from the QOS domain to all types of memory
        5    Reads to slow memory in the non-local NUMA domain
        4    Reads to slow memory in the local NUMA domain
        3    Non-temporal writes to non-local NUMA domain
        2    Non-temporal writes to local NUMA domain
        1    Reads to memory in the non-local NUMA domain
        0    Reads to memory in the local NUMA domain
        ==== ========================================================

        By default, the mbm_total_bytes configuration is set to 0x7f to count
        all the event types and the mbm_local_bytes configuration is set to
        0x15 to count all the local memory events.

        Examples:

        * To view the current configuration::

            # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
            0=0x7f;1=0x7f;2=0x7f;3=0x7f

            # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
            0=0x15;1=0x15;3=0x15;4=0x15

        * To change mbm_total_bytes to count only reads on domain 0,
          bits 0, 1, 4 and 5 need to be set, which is 110011b in binary
          (in hexadecimal 0x33)::

            # echo "0=0x33" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config

            # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
            0=0x33;1=0x7f;2=0x7f;3=0x7f

        * To change mbm_local_bytes to count all the slow memory reads on
          domains 0 and 1, bits 4 and 5 need to be set, which is 110000b
          in binary (in hexadecimal 0x30)::

            # echo "0=0x30;1=0x30" > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config

            # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
            0=0x30;1=0x30;3=0x15;4=0x15
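        A value like 0x33 above can be composed from the bit positions in
        the table rather than worked out by hand. A minimal shell sketch
        (the chosen bits are illustrative; the loop is not part of
        resctrl)::

          # build an event configuration from bit positions 0, 1, 4 and 5
          # (local/remote reads plus local/remote slow memory reads)
          cfg=0
          for bit in 0 1 4 5; do
                  cfg=$(( cfg | (1 << bit) ))
          done
          printf '0=0x%x\n' "$cfg" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config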
"mbm_assign_mode":
        The supported counter assignment modes. The brackets enclose the
        mode that is currently enabled. The MBM events associated with
        counters may reset when "mbm_assign_mode" is changed.
        ::

          # cat /sys/fs/resctrl/info/L3_MON/mbm_assign_mode
          [mbm_event]
          default

        "mbm_event":

        mbm_event mode allows users to assign a hardware counter to an
        RMID, event pair and monitor the bandwidth usage as long as it is
        assigned. The hardware continues to track the assigned counter
        until it is explicitly unassigned by the user. Each event within a
        resctrl group can be assigned independently.

        In this mode, a monitoring event can only accumulate data while it
        is backed by a hardware counter. Use "mbm_L3_assignments", found in
        each CTRL_MON and MON group, to specify which of the events should
        have a counter assigned. The number of counters available is
        described in the "num_mbm_cntrs" file. Changing the mode may cause
        all counters on the resource to reset.

        Moving to mbm_event counter assignment mode requires users to
        assign the counters to the events. Otherwise, the MBM event
        counters will return 'Unassigned' when read.

        The mode is beneficial for AMD platforms that support more CTRL_MON
        and MON groups than available hardware counters. By default, this
        feature is enabled on AMD platforms with the ABMC (Assignable
        Bandwidth Monitoring Counters) capability, ensuring counters remain
        assigned even when the corresponding RMID is not actively used by
        any processor.

        "default":

        In default mode, resctrl assumes there is a hardware counter for
        each event within every CTRL_MON and MON group. On AMD platforms,
        it is recommended to use the mbm_event mode, if supported, to
        prevent the reset of MBM events between reads that results from
        hardware re-allocating counters. Such re-allocation can result in
        misleading values or in reads reporting "Unavailable" if no counter
        is assigned to the event.

        * To enable "mbm_event" counter assignment mode::

            # echo "mbm_event" > /sys/fs/resctrl/info/L3_MON/mbm_assign_mode

        * To enable "default" monitoring mode::

            # echo "default" > /sys/fs/resctrl/info/L3_MON/mbm_assign_mode

"num_mbm_cntrs":
        The maximum number of counters (total of available and assigned
        counters) in each domain when the system supports mbm_event mode.

        For example, on a system with a maximum of 32 memory bandwidth
        monitoring counters in each of its L3 domains::

          # cat /sys/fs/resctrl/info/L3_MON/num_mbm_cntrs
          0=32;1=32

"available_mbm_cntrs":
        The number of counters available for assignment in each domain when
        mbm_event mode is enabled on the system.

        For example, on a system with 30 assignable hardware counters still
        available in each of its L3 domains::

          # cat /sys/fs/resctrl/info/L3_MON/available_mbm_cntrs
          0=30;1=30
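        The difference between "num_mbm_cntrs" and "available_mbm_cntrs" is
        the number of counters currently assigned. A small shell sketch to
        report that per domain (it assumes, as in the examples above, that
        both files list the domains in the same order)::

          total=$(cat /sys/fs/resctrl/info/L3_MON/num_mbm_cntrs)
          avail=$(cat /sys/fs/resctrl/info/L3_MON/available_mbm_cntrs)
          IFS=';' read -ra t <<< "$total"
          IFS=';' read -ra a <<< "$avail"
          for i in "${!t[@]}"; do
                  echo "domain ${t[$i]%%=*}: $(( ${t[$i]#*=} - ${a[$i]#*=} )) assigned"
          done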
"event_configs":
        Directory that exists when "mbm_event" counter assignment mode is
        supported. Contains a sub-directory for each MBM event that can be
        assigned to a counter.

        Two MBM events are supported by default: mbm_local_bytes and
        mbm_total_bytes. Each MBM event's sub-directory contains a file
        named "event_filter" that is used to view and modify which memory
        transactions the MBM event is configured with. The file is
        accessible only when "mbm_event" counter assignment mode is
        enabled.

        List of memory transaction types supported:

        ========================== ========================================================
        Name                       Description
        ========================== ========================================================
        dirty_victim_writes_all    Dirty Victims from the QOS domain to all types of memory
        remote_reads_slow_memory   Reads to slow memory in the non-local NUMA domain
        local_reads_slow_memory    Reads to slow memory in the local NUMA domain
        remote_non_temporal_writes Non-temporal writes to non-local NUMA domain
        local_non_temporal_writes  Non-temporal writes to local NUMA domain
        remote_reads               Reads to memory in the non-local NUMA domain
        local_reads                Reads to memory in the local NUMA domain
        ========================== ========================================================

        For example::

          # cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_total_bytes/event_filter
          local_reads,remote_reads,local_non_temporal_writes,remote_non_temporal_writes,
          local_reads_slow_memory,remote_reads_slow_memory,dirty_victim_writes_all

          # cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_local_bytes/event_filter
          local_reads,local_non_temporal_writes,local_reads_slow_memory

        The read/write "event_filter" file reflects which memory
        transactions are counted by the event. Modify the event
        configuration by writing a comma-separated list of transaction
        names to it.

        For example::

          # echo "local_reads, local_non_temporal_writes" > /sys/fs/resctrl/info/L3_MON/event_configs/mbm_total_bytes/event_filter

          # cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_total_bytes/event_filter
          local_reads,local_non_temporal_writes

"mbm_assign_on_mkdir":
        Exists when "mbm_event" counter assignment mode is supported.
        Accessible only when "mbm_event" counter assignment mode is
        enabled.

        Determines if a counter will automatically be assigned to an RMID,
        MBM event pair when its associated monitor group is created via
        mkdir. It is enabled by default at boot, and also when switching
        from "default" mode to "mbm_event" counter assignment mode. Users
        can disable this capability by writing to the interface.

        "0":
              Auto assignment is disabled.
        "1":
              Auto assignment is enabled.

        Example::

          # echo 0 > /sys/fs/resctrl/info/L3_MON/mbm_assign_on_mkdir
          # cat /sys/fs/resctrl/info/L3_MON/mbm_assign_on_mkdir
          0

"max_threshold_occupancy":
        Read/write file that provides the largest value (in
        bytes) at which a previously used LLC_occupancy
        counter can be considered for re-use.

Finally, in the top level of the "info" directory there is a file
named "last_cmd_status". This is reset with every "command" issued
via the file system (making new directories or writing to any of the
control files). If the command was successful, it will read as "ok".
If the command failed, it will provide more information than can be
conveyed in the error returns from file operations. E.g.
::

  # echo L3:0=f7 > schemata
  bash: echo: write error: Invalid argument
  # cat info/last_cmd_status
  mask f7 has non-consecutive 1-bits
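Scripts that drive resctrl can use "last_cmd_status" to report why a
command failed. A minimal shell sketch (the helper name and the failing
mask are illustrative; the mask only fails on hardware that requires
contiguous bits)::

  apply_schemata() {
          # $1 = schemata line, $2 = resource group directory
          if ! echo "$1" > "$2/schemata"; then
                  echo "write failed: $(cat /sys/fs/resctrl/info/last_cmd_status)" >&2
                  return 1
          fi
  }

  apply_schemata "L3:0=f7" /sys/fs/resctrl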
Resource alloc and monitor groups
=================================

Resource groups are represented as directories in the resctrl file
system. The default group is the root directory which, immediately
after mounting, owns all the tasks and cpus in the system and can make
full use of all resources.

On a system with RDT control features additional directories can be
created in the root directory that specify different amounts of each
resource (see "schemata" below). The root and these additional top level
directories are referred to as "CTRL_MON" groups below.

On a system with RDT monitoring the root directory and other top level
directories contain a directory named "mon_groups" in which additional
directories can be created to monitor subsets of tasks in the CTRL_MON
group that is their ancestor. These are called "MON" groups in the rest
of this document.

Removing a directory will move all tasks and cpus owned by the group it
represents to the parent. Removing one of the created CTRL_MON groups
will automatically remove all MON groups below it.

Moving MON group directories to a new parent CTRL_MON group is supported
for the purpose of changing the resource allocations of a MON group
without impacting its monitoring data or assigned tasks. This operation
is not allowed for MON groups which monitor CPUs. No other move
operation is currently allowed other than simply renaming a CTRL_MON or
MON group.

All groups contain the following files:

"tasks":
        Reading this file shows the list of all tasks that belong to
        this group. Writing a task id to the file will add a task to the
        group. Multiple tasks can be added by separating the task ids
        with commas. Tasks will be assigned sequentially. Multiple
        failures are not supported. A single failure encountered while
        attempting to assign a task will cause the operation to abort;
        tasks added before the failure will remain in the group.
        Failures will be logged to /sys/fs/resctrl/info/last_cmd_status.

        If the group is a CTRL_MON group the task is removed from
        whichever previous CTRL_MON group owned the task and also from
        any MON group that owned the task. If the group is a MON group,
        then the task must already belong to the CTRL_MON parent of this
        group. The task is removed from any previous MON group.


"cpus":
        Reading this file shows a bitmask of the logical CPUs owned by
        this group. Writing a mask to this file will add and remove
        CPUs to/from this group. As with the tasks file a hierarchy is
        maintained where MON groups may only include CPUs owned by the
        parent CTRL_MON group.
        When the resource group is in pseudo-locked mode this file will
        only be readable, reflecting the CPUs associated with the
        pseudo-locked region.


"cpus_list":
        Just like "cpus", only using ranges of CPUs instead of bitmasks.
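Putting these files together, a new group is typically populated right
after it is created. A minimal sketch (the group name, PID, and CPU range
are illustrative)::

  # mkdir /sys/fs/resctrl/grp0
  # echo 1234 > /sys/fs/resctrl/grp0/tasks
  # echo 2-3 > /sys/fs/resctrl/grp0/cpus_list
  # cat /sys/fs/resctrl/grp0/cpus
  c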
When control is enabled all CTRL_MON groups will also contain:

"schemata":
        A list of all the resources available to this group.
        Each resource has its own line and format - see below for details.

"size":
        Mirrors the display of the "schemata" file to display the size in
        bytes of each allocation instead of the bits representing the
        allocation.

"mode":
        The "mode" of the resource group dictates the sharing of its
        allocations. A "shareable" resource group allows sharing of its
        allocations while an "exclusive" resource group does not. A
        cache pseudo-locked region is created by first writing
        "pseudo-locksetup" to the "mode" file before writing the cache
        pseudo-locked region's schemata to the resource group's "schemata"
        file. On successful pseudo-locked region creation the mode will
        automatically change to "pseudo-locked".

"ctrl_hw_id":
        Available only with debug option. The identifier used by hardware
        for the control group. On x86 this is the CLOSID.

When monitoring is enabled all MON groups will also contain:

"mon_data":
        This contains a set of files organized by L3 domain and by
        RDT event. E.g. on a system with two L3 domains there will
        be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
        directories has one file per event (e.g. "llc_occupancy",
        "mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
        files provide a read out of the current value of the event for
        all tasks in the group. In CTRL_MON groups these files provide
        the sum for all tasks in the CTRL_MON group and all tasks in
        MON groups. Please see the example section for more details on
        usage.
        On systems with Sub-NUMA Cluster (SNC) enabled there are extra
        directories for each node (located within the "mon_L3_XX" directory
        for the L3 cache they occupy). These are named "mon_sub_L3_YY"
        where "YY" is the node number.

        When the 'mbm_event' counter assignment mode is enabled, reading
        an MBM event of a MON group returns 'Unassigned' if no hardware
        counter is assigned to it. For CTRL_MON groups, 'Unassigned' is
        returned if the MBM event does not have an assigned counter in the
        CTRL_MON group nor in any of its associated MON groups.

"mon_hw_id":
        Available only with debug option. The identifier used by hardware
        for the monitor group. On x86 this is the RMID.

When monitoring is enabled all MON groups may also contain:

"mbm_L3_assignments":
        Exists when "mbm_event" counter assignment mode is supported and
        lists the counter assignment states of the group.

        The assignment list is displayed in the following format:

        <Event>:<Domain ID>=<Assignment state>;<Domain ID>=<Assignment state>

        Event: A valid MBM event in the
        /sys/fs/resctrl/info/L3_MON/event_configs directory.

        Domain ID: A valid domain ID. When writing, '*' applies the changes
        to all the domains.

        Assignment states:

        _ : No counter assigned.

        e : Counter assigned exclusively.

        Example:

        To display the counter assignment states of the default group::

          # cd /sys/fs/resctrl
          # cat /sys/fs/resctrl/mbm_L3_assignments
          mbm_total_bytes:0=e;1=e
          mbm_local_bytes:0=e;1=e

        Assignments can be modified by writing to the interface.
        Examples:

        To unassign the counter associated with the mbm_total_bytes event
        on domain 0::

          # echo "mbm_total_bytes:0=_" > /sys/fs/resctrl/mbm_L3_assignments
          # cat /sys/fs/resctrl/mbm_L3_assignments
          mbm_total_bytes:0=_;1=e
          mbm_local_bytes:0=e;1=e

        To unassign the counter associated with the mbm_total_bytes event
        on all the domains::

          # echo "mbm_total_bytes:*=_" > /sys/fs/resctrl/mbm_L3_assignments
          # cat /sys/fs/resctrl/mbm_L3_assignments
          mbm_total_bytes:0=_;1=_
          mbm_local_bytes:0=e;1=e

        To assign a counter associated with the mbm_total_bytes event on
        all domains in exclusive mode::

          # echo "mbm_total_bytes:*=e" > /sys/fs/resctrl/mbm_L3_assignments
          # cat /sys/fs/resctrl/mbm_L3_assignments
          mbm_total_bytes:0=e;1=e
          mbm_local_bytes:0=e;1=e

When the "mba_MBps" mount option is used all CTRL_MON groups will also
contain:

"mba_MBps_event":
        Reading this file shows which memory bandwidth event is used
        as input to the software feedback loop that keeps memory bandwidth
        below the value specified in the schemata file. Writing the
        name of one of the supported memory bandwidth events found in
        /sys/fs/resctrl/info/L3_MON/mon_features changes the input
        event.

Resource allocation rules
-------------------------

When a task is running the following rules define which resources are
available to it:

1) If the task is a member of a non-default group, then the schemata
   for that group is used.

2) Else if the task belongs to the default group, but is running on a
   CPU that is assigned to some specific group, then the schemata for the
   CPU's group is used.

3) Otherwise the schemata for the default group is used.

Resource monitoring rules
-------------------------
1) If a task is a member of a MON group, or a non-default CTRL_MON group,
   then RDT events for the task will be reported in that group.

2) If a task is a member of the default CTRL_MON group, but is running
   on a CPU that is assigned to some specific group, then the RDT events
   for the task will be reported in that group.

3) Otherwise RDT events for the task will be reported in the root level
   "mon_data" group.


Notes on cache occupancy monitoring and control
===============================================
When moving a task from one group to another you should remember that
this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move
it to a new group and immediately check the occupancy of the old and new
groups you will likely see that the old group is still showing 3 MB and
the new group zero. When the task accesses locations still in cache from
before the move, the h/w does not update any counters. On a busy system
you will likely see the occupancy in the old group go down as cache lines
are evicted and re-used while the occupancy in the new group rises as
the task accesses memory and loads into the cache are counted based on
membership in the new group.

The same applies to cache allocation control. Moving a task to a group
with a smaller cache partition will not evict any cache lines. The
process may continue to use them from the old partition.
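The decay described above can be observed directly. A sketch, assuming a
task with PID 1234 that has just been moved from monitor group "m1" to
"m2" under the same CTRL_MON group (names and PID are illustrative)::

  old=/sys/fs/resctrl/mon_groups/m1
  new=/sys/fs/resctrl/mon_groups/m2
  echo 1234 > "$new/tasks"        # task was previously monitored in m1
  for i in 1 2 3 4 5; do
          printf 'old=%s new=%s\n' \
                  "$(cat "$old/mon_data/mon_L3_00/llc_occupancy")" \
                  "$(cat "$new/mon_data/mon_L3_00/llc_occupancy")"
          sleep 1
  done

On a busy system the "old" value should drift down as lines are evicted
while the "new" value rises with fresh allocations.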
Hardware uses a CLOSID (Class Of Service ID) and an RMID (Resource
Monitoring ID) to identify a control group and a monitoring group
respectively. Each of the resource groups is mapped to these IDs based on
the kind of group. The number of CLOSIDs and RMIDs is limited by the
hardware and hence the creation of a "CTRL_MON" directory may fail if we
run out of either CLOSIDs or RMIDs, and creation of a "MON" group may
fail if we run out of RMIDs.

max_threshold_occupancy - generic concepts
------------------------------------------

Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged to the cache lines of its previous user. Hence
such RMIDs are placed on a limbo list and checked back in when the cache
occupancy has gone down. If at some point the system has a lot of limbo
RMIDs which are not yet ready to be used, the user may see an -EBUSY
during mkdir.

max_threshold_occupancy is a user configurable value to determine the
occupancy at which an RMID can be freed.

The mon_llc_occupancy_limbo tracepoint gives the precise occupancy in bytes
for a subset of RMIDs that are not immediately available for allocation.
This can't be relied on to produce output every second; it may be necessary
to attempt to create an empty monitor group to force an update. Output may
only be produced if creation of a control or monitor group fails.

Schemata files - general concepts
---------------------------------
Each line in the file describes one resource. The line starts with
the name of the resource, followed by specific values to be applied
in each of the instances of that resource on the system.

Cache IDs
---------
On current generation systems there is one L3 cache per socket and L2
caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. We could have multiple separate L3
caches on a socket, and multiple cores could share an L2 cache. So instead
of using "socket" or "core" to define the set of logical cpus sharing
a resource we use a "Cache ID". At a given cache level this will be a
unique number across the whole system (but it isn't guaranteed to be a
contiguous sequence, there may be gaps). To find the ID for each logical
CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id

Cache Bit Masks (CBM)
---------------------
For cache resources we describe the portion of the cache that is available
for allocation using a bitmask. The maximum value of the mask is defined
by each cpu model (and may be different for different cache levels). It
is found using CPUID, but is also provided in the "info" directory of
the resctrl file system in "info/{resource}/cbm_mask". Some Intel hardware
requires that these masks have all the '1' bits in a contiguous block. So
0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. Check /sys/fs/resctrl/info/{resource}/sparse_masks
to see whether non-contiguous 1s are supported. On a system with a 20-bit
mask each bit represents 5% of the capacity of the cache. You could
partition the cache into four equal parts with masks: 0x1f, 0x3e0,
0x7c00, 0xf8000.
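When generating masks programmatically it can help to validate the
contiguity rule before writing a schemata. A minimal shell sketch (purely
illustrative; the kernel performs the authoritative check and reports
failures via "last_cmd_status")::

  is_contiguous() {
          local m=$(( $1 ))
          (( m != 0 )) || return 1
          # strip trailing zeros, then test for an all-ones value
          while (( (m & 1) == 0 )); do m=$(( m >> 1 )); done
          (( (m & (m + 1)) == 0 ))
  }

  is_contiguous 0x3 && echo "0x3 is legal"
  is_contiguous 0x5 || echo "0x5 is not"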
Notes on Sub-NUMA Cluster mode
==============================
When SNC mode is enabled, Linux may load balance tasks between Sub-NUMA
nodes much more readily than between regular NUMA nodes since the CPUs
on Sub-NUMA nodes share the same L3 cache and the system may report
the NUMA distance between Sub-NUMA nodes with a lower value than used
for regular NUMA nodes.

The top-level monitoring files in each "mon_L3_XX" directory provide
the sum of data across all SNC nodes sharing an L3 cache instance.
Users who bind tasks to the CPUs of a specific Sub-NUMA node can read
the "llc_occupancy", "mbm_total_bytes", and "mbm_local_bytes" in the
"mon_sub_L3_YY" directories to get node local data.

Memory bandwidth allocation is still performed at the L3 cache
level. I.e. throttling controls are applied to all SNC nodes.

L3 cache allocation bitmaps also apply to all SNC nodes. But note that
the amount of L3 cache represented by each bit is divided by the number
of SNC nodes per L3 cache. E.g. with a 100MB cache on a system with 10-bit
allocation masks each bit normally represents 10MB. With SNC mode enabled
with two SNC nodes per L3 cache, each bit only represents 5MB.

Memory bandwidth Allocation and monitoring
==========================================

For the memory bandwidth resource, by default the user controls the
resource by indicating the percentage of total memory bandwidth.

The minimum bandwidth percentage value for each cpu model is predefined
and can be looked up through "info/MB/min_bandwidth". The bandwidth
granularity that is allocated is also dependent on the cpu model and can
be looked up at "info/MB/bandwidth_gran". The available bandwidth
control steps are: min_bw + N * bw_gran. Intermediate values are rounded
to the next control step available on the hardware.

Bandwidth throttling is a core specific mechanism on some Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
sharing a core may result in both threads being throttled to use the
low bandwidth (see "thread_throttle_mode").

The fact that Memory bandwidth allocation (MBA) may be a core
specific mechanism whereas memory bandwidth monitoring (MBM) is done at
the package level may lead to confusion when users try to apply control
via the MBA and then monitor the bandwidth to see if the controls are
effective. Below are such scenarios:

1. The user may *not* see an increase in actual bandwidth when percentage
   values are increased:

This can occur when aggregate L2 external bandwidth is more than L3
external bandwidth. Consider an SKL SKU with 24 cores on a package and
where L2 external is 10GBps (hence aggregate L2 external bandwidth is
240GBps) and L3 external bandwidth is 100GBps. Now a workload with '20
threads, having 50% bandwidth, each consuming 5GBps' consumes the max L3
bandwidth of 100GBps although the percentage value specified is only 50%
<< 100%. Hence increasing the bandwidth percentage will not yield any
more bandwidth. This is because although the L2 external bandwidth still
has capacity, the L3 external bandwidth is fully used. Also note that
this would be dependent on the number of cores the benchmark is run on.
2. The same bandwidth percentage may mean different actual bandwidth
   depending on the number of threads:

For the same SKU in #1, a 'single thread, with 10% bandwidth' and a '4
thread, with 10% bandwidth' can consume up to 10GBps and 40GBps although
they have the same percentage bandwidth of 10%. This is simply because as
threads start using more cores in an rdtgroup, the actual bandwidth may
increase or vary although the user specified bandwidth percentage is the
same.

In order to mitigate this and make the interface more user friendly,
resctrl added support for specifying the bandwidth in MiBps as well. The
kernel underneath would use a software feedback mechanism or a "Software
Controller (mba_sc)" which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure::

        "actual bandwidth < user specified bandwidth".

By default, the schemata would take the bandwidth percentage values
whereas the user can switch to the "MBA software controller" mode using
a mount option 'mba_MBps'. The schemata format is specified in the below
sections.

L3 schemata file details (code and data prioritization disabled)
----------------------------------------------------------------
With CDP disabled the L3 schemata format is::

        L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L3 schemata file details (CDP enabled via mount option to resctrl)
------------------------------------------------------------------
When CDP is enabled L3 control is split into two separate resources
so you can specify independent masks for code and data like this::

        L3DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
        L3CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L2 schemata file details
------------------------
CDP is supported at L2 using the 'cdpl2' mount option. The schemata
format is either::

        L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

or::

        L2DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
        L2CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...


Memory bandwidth Allocation (default mode)
------------------------------------------

Memory b/w domain is L3 cache.
::

        MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

Memory bandwidth Allocation specified in MiBps
----------------------------------------------

Memory bandwidth domain is L3 cache.
::

        MB:<cache_id0>=bw_MiBps0;<cache_id1>=bw_MiBps1;...

Slow Memory Bandwidth Allocation (SMBA)
---------------------------------------
AMD hardware supports Slow Memory Bandwidth Allocation (SMBA).
CXL.memory is the only supported "slow" memory device. With the
support of SMBA, the hardware enables bandwidth allocation on
the slow memory devices. If there are multiple such devices in
the system, the throttling logic groups all the slow sources
together and applies the limit on them as a whole.

The presence of SMBA (with CXL.memory) is independent of the presence
of slow memory devices. If there are no such devices on the system, then
configuring SMBA will have no impact on the performance of the system.

The bandwidth domain for slow memory is L3 cache. Its schemata file
is formatted as:
::

        SMBA:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

Reading/writing the schemata file
---------------------------------
Reading the schemata file will show the state of all resources
on all domains.
When writing you only need to specify those values
which you wish to change. E.g.
::

  # cat schemata
  L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
  # echo "L3DATA:2=3c0;" > schemata
  # cat schemata
  L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff

Reading/writing the schemata file (on AMD systems)
--------------------------------------------------
Reading the schemata file will show the current bandwidth limit on all
domains. The allocated resources are in multiples of one eighth GB/s.
When writing to the file, you need to specify the cache id for which you
wish to configure the bandwidth limit.

For example, to set a 2GB/s limit on cache id 1 (16 * 1/8 GB/s):

::

  # cat schemata
    MB:0=2048;1=2048;2=2048;3=2048
    L3:0=ffff;1=ffff;2=ffff;3=ffff

  # echo "MB:1=16" > schemata
  # cat schemata
    MB:0=2048;1=  16;2=2048;3=2048
    L3:0=ffff;1=ffff;2=ffff;3=ffff

Reading/writing the schemata file (on AMD systems) with SMBA feature
--------------------------------------------------------------------
Reading and writing the schemata file is the same as without SMBA in
the above section.

For example, to set an 8GB/s limit on cache id 1 (64 * 1/8 GB/s):

::

  # cat schemata
    SMBA:0=2048;1=2048;2=2048;3=2048
      MB:0=2048;1=2048;2=2048;3=2048
      L3:0=ffff;1=ffff;2=ffff;3=ffff

  # echo "SMBA:1=64" > schemata
  # cat schemata
    SMBA:0=2048;1=  64;2=2048;3=2048
      MB:0=2048;1=2048;2=2048;3=2048
      L3:0=ffff;1=ffff;2=ffff;3=ffff

Cache Pseudo-Locking
====================
CAT enables a user to specify the amount of cache space that an
application can fill. Cache pseudo-locking builds on the fact that a
CPU can still read and write data pre-allocated outside its current
allocated area on a cache hit. With cache pseudo-locking, data can be
preloaded into a reserved portion of cache that no application can
fill, and from that point on will only serve cache hits. The cache
pseudo-locked memory is made accessible to user space where an
application can map it into its virtual address space and thus have
a region of memory with reduced average read latency.

The creation of a cache pseudo-locked region is triggered by a request
from the user to do so that is accompanied by a schemata of the region
to be pseudo-locked. The cache pseudo-locked region is created as follows:

- Create a CAT allocation CLOSNEW with a CBM matching the schemata
  from the user of the cache region that will contain the pseudo-locked
  memory. This region must not overlap with any current CAT allocation/CLOS
  on the system and no future overlap with this cache region is allowed
  while the pseudo-locked region exists.
- Create a contiguous region of memory of the same size as the cache
  region.
- Flush the cache, disable hardware prefetchers, disable preemption.
- Make CLOSNEW the active CLOS and touch the allocated memory to load
  it into the cache.
- Set the previous CLOS as active.
- At this point the closid CLOSNEW can be released - the cache
  pseudo-locked region is protected as long as its CBM does not appear in
  any CAT allocation.
  Even though the cache pseudo-locked region will from
  this point on not appear in any CBM of any CLOS an application running with
  any CLOS will be able to access the memory in the pseudo-locked region since
  the region continues to serve cache hits.
- The contiguous region of memory loaded into the cache is exposed to
  user-space as a character device.

Cache pseudo-locking increases the probability that data will remain
in the cache via carefully configuring the CAT feature and controlling
application behavior. There is no guarantee that data is placed in
cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
"locked" data from cache. Power management C-states may shrink or
power off cache. Deeper C-states will automatically be restricted on
pseudo-locked region creation.

It is required that an application using a pseudo-locked region runs
with affinity to the cores (or a subset of the cores) associated
with the cache on which the pseudo-locked region resides. A sanity check
within the code will not allow an application to map pseudo-locked memory
unless it runs with affinity to cores associated with the cache on which
the pseudo-locked region resides. The sanity check is only done during the
initial mmap() handling; there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.

Pseudo-locking is accomplished in two stages:

1) During the first stage the system administrator allocates a portion
   of cache that should be dedicated to pseudo-locking. At this time an
   equivalent portion of memory is allocated, loaded into the allocated
   cache portion, and exposed as a character device.
2) During the second stage a user-space application maps (mmap()) the
   pseudo-locked memory into its address space.

Cache Pseudo-Locking Interface
------------------------------
A pseudo-locked region is created using the resctrl interface as follows:

1) Create a new resource group by creating a new directory in /sys/fs/resctrl.
2) Change the new resource group's mode to "pseudo-locksetup" by writing
   "pseudo-locksetup" to the "mode" file.
3) Write the schemata of the pseudo-locked region to the "schemata" file. All
   bits within the schemata should be "unused" according to the "bit_usage"
   file.

On successful pseudo-locked region creation the "mode" file will contain
"pseudo-locked" and a new character device with the same name as the resource
group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
by user space in order to obtain access to the pseudo-locked memory region.

An example of cache pseudo-locked region creation and usage can be found below.

Cache Pseudo-Locking Debugging Interface
----------------------------------------
The pseudo-locking debugging interface is enabled by default (if
CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.

There is no explicit way for the kernel to test if a provided memory
location is present in the cache. The pseudo-locking debugging interface uses
the tracing infrastructure to provide two ways to measure cache residency of
the pseudo-locked region:

1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
   from these measurements are best visualized using a hist trigger (see
   example below).
   In this test the pseudo-locked region is traversed at
   a stride of 32 bytes while hardware prefetchers and preemption
   are disabled. This also provides a substitute visualization of cache
   hits and misses.
2) Cache hit and miss measurements using model specific precision counters if
   available. Depending on the levels of cache on the system the
   pseudo_lock_l2 and pseudo_lock_l3 tracepoints are available.

When a pseudo-locked region is created a new debugfs directory is created for
it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
write-only file, pseudo_lock_measure, is present in this directory. The
measurement of the pseudo-locked region depends on the number written to this
debugfs file:

1:
     writing "1" to the pseudo_lock_measure file will trigger the latency
     measurement captured in the pseudo_lock_mem_latency tracepoint. See
     example below.
2:
     writing "2" to the pseudo_lock_measure file will trigger the L2 cache
     residency (cache hits and misses) measurement captured in the
     pseudo_lock_l2 tracepoint. See example below.
3:
     writing "3" to the pseudo_lock_measure file will trigger the L3 cache
     residency (cache hits and misses) measurement captured in the
     pseudo_lock_l3 tracepoint.

All measurements are recorded with the tracing infrastructure. This requires
the relevant tracepoints to be enabled before the measurement is triggered.

Example of latency debugging interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this example a pseudo-locked region named "newlock" was created. Here is
how we can measure the latency in cycles of reading from this region and
visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
is set::

  # :> /sys/kernel/tracing/trace
  # echo 'hist:keys=latency' > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
  # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
  # echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
  # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
  # cat /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/hist

  # event histogram
  #
  # trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
  #

  { latency:        456 } hitcount:          1
  { latency:         50 } hitcount:         83
  { latency:         36 } hitcount:         96
  { latency:         44 } hitcount:        174
  { latency:         48 } hitcount:        195
  { latency:         46 } hitcount:        262
  { latency:         42 } hitcount:        693
  { latency:         40 } hitcount:       3204
  { latency:         38 } hitcount:       3484

  Totals:
      Hits: 8192
      Entries: 9
      Dropped: 0

Example of cache hits/misses debugging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this example a pseudo-locked region named "newlock" was created on the L2
cache of a platform. Here is how we can obtain details of the cache hits
and misses using the platform's precision counters.
::

  # :> /sys/kernel/tracing/trace
  # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
  # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
  # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
  # cat /sys/kernel/tracing/trace

  # tracer: nop
  #
  #                              _-----=> irqs-off
  #                             / _----=> need-resched
  #                            | / _---=> hardirq/softirq
  #                            || / _--=> preempt-depth
  #                            ||| /     delay
  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #              | |       |   ||||       |         |
   pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0
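The measurement flows above lend themselves to a small wrapper. A shell
sketch that takes the region name, the tracepoint and the trigger value as
arguments (paths follow the examples above; for the latency measurement
the hist trigger shown earlier can be set beforehand)::

  #!/bin/bash
  # Usage: pseudo_lock_measure <region> <event> <n>
  # e.g.:  pseudo_lock_measure newlock pseudo_lock_mem_latency 1
  #        pseudo_lock_measure newlock pseudo_lock_l2 2
  region=$1 event=$2 n=$3
  events=/sys/kernel/tracing/events/resctrl

  :> /sys/kernel/tracing/trace
  echo 1 > "$events/$event/enable"
  echo "$n" > "/sys/kernel/debug/resctrl/$region/pseudo_lock_measure"
  echo 0 > "$events/$event/enable"
  cat /sys/kernel/tracing/trace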
1068:: 1069 1070 # :> /sys/kernel/tracing/trace 1071 # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable 1072 # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure 1073 # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable 1074 # cat /sys/kernel/tracing/trace 1075 1076 # tracer: nop 1077 # 1078 # _-----=> irqs-off 1079 # / _----=> need-resched 1080 # | / _---=> hardirq/softirq 1081 # || / _--=> preempt-depth 1082 # ||| / delay 1083 # TASK-PID CPU# |||| TIMESTAMP FUNCTION 1084 # | | | |||| | | 1085 pseudo_lock_mea-1672 [002] .... 3132.860500: pseudo_lock_l2: hits=4097 miss=0 1086 1087 1088Examples for RDT allocation usage 1089~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1090 10911) Example 1 1092 1093On a two socket machine (one L3 cache per socket) with just four bits 1094for cache bit masks, minimum b/w of 10% with a memory bandwidth 1095granularity of 10%. 1096:: 1097 1098 # mount -t resctrl resctrl /sys/fs/resctrl 1099 # cd /sys/fs/resctrl 1100 # mkdir p0 p1 1101 # echo "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata 1102 # echo "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata 1103 1104The default resource group is unmodified, so we have access to all parts 1105of all caches (its schemata file reads "L3:0=f;1=f"). 1106 1107Tasks that are under the control of group "p0" may only allocate from the 1108"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1. 1109Tasks in group "p1" use the "lower" 50% of cache on both sockets. 1110 1111Similarly, tasks that are under the control of group "p0" may use a 1112maximum memory b/w of 50% on socket0 and 50% on socket 1. 1113Tasks in group "p1" may also use 50% memory b/w on both sockets. 1114Note that unlike cache masks, memory b/w cannot specify whether these 1115allocations can overlap or not. The allocations specifies the maximum 1116b/w that the group may be able to use and the system admin can configure 1117the b/w accordingly. 1118 1119If resctrl is using the software controller (mba_sc) then user can enter the 1120max b/w in MB rather than the percentage values. 1121:: 1122 1123 # echo "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata 1124 # echo "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata 1125 1126In the above example the tasks in "p1" and "p0" on socket 0 would use a max b/w 1127of 1024MB where as on socket 1 they would use 500MB. 1128 11292) Example 2 1130 1131Again two sockets, but this time with a more realistic 20-bit mask. 1132 1133Two real time tasks pid=1234 running on processor 0 and pid=5678 running on 1134processor 1 on socket 0 on a 2-socket and dual core machine. To avoid noisy 1135neighbors, each of the two real-time tasks exclusively occupies one quarter 1136of L3 cache on socket 0. 1137:: 1138 1139 # mount -t resctrl resctrl /sys/fs/resctrl 1140 # cd /sys/fs/resctrl 1141 1142First we reset the schemata for the default group so that the "upper" 114350% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by 1144ordinary tasks:: 1145 1146 # echo "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata 1147 1148Next we make a resource group for our first real time task and give 1149it access to the "top" 25% of the cache on socket 0. 1150:: 1151 1152 # mkdir p0 1153 # echo "L3:0=f8000;1=fffff" > p0/schemata 1154 1155Finally we move our first real time task into this resource group. We 1156also use taskset(1) to ensure the task always runs on a dedicated CPU 1157on socket 0. 
Most uses of resource groups will also constrain which
processors tasks run on.
::

  # echo 1234 > p0/tasks
  # taskset -cp 1 1234

Ditto for the second real time task (with the remaining 25% of cache)::

  # mkdir p1
  # echo "L3:0=7c00;1=fffff" > p1/schemata
  # echo 5678 > p1/tasks
  # taskset -cp 2 5678

For the same 2 socket system with memory b/w resource and CAT L3 the
schemata would look like this (assuming min_bandwidth is 10 and
bandwidth_gran is 10):

For our first real time task this would request 20% memory b/w on socket 0.
::

  # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata

For our second real time task this would request another 20% memory b/w
on socket 0.
::

  # echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata

3) Example 3

A single socket system which has real-time tasks running on cores 4-7 and
non real-time workload assigned to cores 0-3. The real-time tasks share text
and data, so a per task association is not required and due to interaction
with the kernel it's desired that the kernel on these cores shares L3 with
the tasks.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
cannot be used by ordinary tasks::

  # echo "L3:0=3ff\nMB:0=50" > schemata

Next we make a resource group for our real time cores and give it access
to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
socket 0.
::

  # mkdir p0
  # echo "L3:0=ffc00\nMB:0=50" > p0/schemata

Finally we move cores 4-7 over to the new group and make sure that the
kernel and the tasks running there get 50% of the cache. They should
also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
siblings and only the real time threads are scheduled on the cores 4-7.
::

  # echo f0 > p0/cpus

4) Example 4

The resource groups in previous examples were all in the default "shareable"
mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.

In this example a new exclusive resource group will be created on a L2 CAT
system with two L2 cache instances that can be configured with an 8-bit
capacity bitmask. The new exclusive resource group will be configured to use
25% of each cache instance.
::

  # mount -t resctrl resctrl /sys/fs/resctrl/
  # cd /sys/fs/resctrl

First, we observe that the default group is configured to allocate to all L2
cache::

  # cat schemata
  L2:0=ff;1=ff

We could attempt to create the new resource group at this point, but it will
fail because of the overlap with the schemata of the default group::

  # mkdir p0
  # echo 'L2:0=0x3;1=0x3' > p0/schemata
  # cat p0/mode
  shareable
  # echo exclusive > p0/mode
  -sh: echo: write error: Invalid argument
  # cat info/last_cmd_status
  schemata overlaps

To ensure that there is no overlap with another resource group the default
resource group's schemata has to change, making it possible for the new
resource group to become exclusive.
1257:: 1258 1259 # echo 'L2:0=0xfc;1=0xfc' > schemata 1260 # echo exclusive > p0/mode 1261 # grep . p0/* 1262 p0/cpus:0 1263 p0/mode:exclusive 1264 p0/schemata:L2:0=03;1=03 1265 p0/size:L2:0=262144;1=262144 1266 1267A new resource group will on creation not overlap with an exclusive resource 1268group:: 1269 1270 # mkdir p1 1271 # grep . p1/* 1272 p1/cpus:0 1273 p1/mode:shareable 1274 p1/schemata:L2:0=fc;1=fc 1275 p1/size:L2:0=786432;1=786432 1276 1277The bit_usage will reflect how the cache is used:: 1278 1279 # cat info/L2/bit_usage 1280 0=SSSSSSEE;1=SSSSSSEE 1281 1282A resource group cannot be forced to overlap with an exclusive resource group:: 1283 1284 # echo 'L2:0=0x1;1=0x1' > p1/schemata 1285 -sh: echo: write error: Invalid argument 1286 # cat info/last_cmd_status 1287 overlaps with exclusive group 1288 1289Example of Cache Pseudo-Locking 1290~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1291Lock portion of L2 cache from cache id 1 using CBM 0x3. Pseudo-locked 1292region is exposed at /dev/pseudo_lock/newlock that can be provided to 1293application for argument to mmap(). 1294:: 1295 1296 # mount -t resctrl resctrl /sys/fs/resctrl/ 1297 # cd /sys/fs/resctrl 1298 1299Ensure that there are bits available that can be pseudo-locked, since only 1300unused bits can be pseudo-locked the bits to be pseudo-locked needs to be 1301removed from the default resource group's schemata:: 1302 1303 # cat info/L2/bit_usage 1304 0=SSSSSSSS;1=SSSSSSSS 1305 # echo 'L2:1=0xfc' > schemata 1306 # cat info/L2/bit_usage 1307 0=SSSSSSSS;1=SSSSSS00 1308 1309Create a new resource group that will be associated with the pseudo-locked 1310region, indicate that it will be used for a pseudo-locked region, and 1311configure the requested pseudo-locked region capacity bitmask:: 1312 1313 # mkdir newlock 1314 # echo pseudo-locksetup > newlock/mode 1315 # echo 'L2:1=0x3' > newlock/schemata 1316 1317On success the resource group's mode will change to pseudo-locked, the 1318bit_usage will reflect the pseudo-locked region, and the character device 1319exposing the pseudo-locked region will exist:: 1320 1321 # cat newlock/mode 1322 pseudo-locked 1323 # cat info/L2/bit_usage 1324 0=SSSSSSSS;1=SSSSSSPP 1325 # ls -l /dev/pseudo_lock/newlock 1326 crw------- 1 root root 243, 0 Apr 3 05:01 /dev/pseudo_lock/newlock 1327 1328:: 1329 1330 /* 1331 * Example code to access one page of pseudo-locked cache region 1332 * from user space. 1333 */ 1334 #define _GNU_SOURCE 1335 #include <fcntl.h> 1336 #include <sched.h> 1337 #include <stdio.h> 1338 #include <stdlib.h> 1339 #include <unistd.h> 1340 #include <sys/mman.h> 1341 1342 /* 1343 * It is required that the application runs with affinity to only 1344 * cores associated with the pseudo-locked region. Here the cpu 1345 * is hardcoded for convenience of example. 
   */
  static int cpuid = 2;

  int main(int argc, char *argv[])
  {
          cpu_set_t cpuset;
          long page_size;
          void *mapping;
          int dev_fd;
          int ret;

          page_size = sysconf(_SC_PAGESIZE);

          CPU_ZERO(&cpuset);
          CPU_SET(cpuid, &cpuset);
          ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
          if (ret < 0) {
                  perror("sched_setaffinity");
                  exit(EXIT_FAILURE);
          }

          dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
          if (dev_fd < 0) {
                  perror("open");
                  exit(EXIT_FAILURE);
          }

          mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                         dev_fd, 0);
          if (mapping == MAP_FAILED) {
                  perror("mmap");
                  close(dev_fd);
                  exit(EXIT_FAILURE);
          }

          /* Application interacts with pseudo-locked memory @mapping */

          ret = munmap(mapping, page_size);
          if (ret < 0) {
                  perror("munmap");
                  close(dev_fd);
                  exit(EXIT_FAILURE);
          }

          close(dev_fd);
          exit(EXIT_SUCCESS);
  }

Locking between applications
----------------------------

Certain operations on the resctrl filesystem, composed of read/writes
to/from multiple files, must be atomic.

As an example, the allocation of an exclusive reservation of L3 cache
involves:

  1. Read the cbmmasks from each directory or the per-resource "bit_usage"
  2. Find a contiguous set of bits in the global CBM bitmask that is clear
     in any of the directory cbmmasks
  3. Create a new directory
  4. Set the bits found in step 2 to the new directory "schemata" file

If two applications attempt to allocate space concurrently then they can
end up allocating the same bits so the reservations are shared instead of
exclusive.

To coordinate atomic operations on the resctrlfs and to avoid the problem
above, the following locking procedure is recommended:

Locking is based on flock, which is available in libc and also as a shell
script command.

Write lock:

 A) Take flock(LOCK_EX) on /sys/fs/resctrl
 B) Read/write the directory structure.
 C) funlock

Read lock:

 A) Take flock(LOCK_SH) on /sys/fs/resctrl
 B) If success read the directory structure.
Example with bash::

  # Atomically read directory structure
  $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl

  # Read directory contents and create new subdirectory

  $ cat create-dir.sh
  find /sys/fs/resctrl/ > output.txt
  mask=$(function-of output.txt)
  mkdir /sys/fs/resctrl/newres/
  echo "$mask" > /sys/fs/resctrl/newres/schemata

  $ flock /sys/fs/resctrl/ ./create-dir.sh

Example with C::

  /*
   * Example code to take advisory locks
   * before accessing resctrl filesystem
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/file.h>

  void resctrl_take_shared_lock(int fd)
  {
          int ret;

          /* take shared lock on resctrl filesystem */
          ret = flock(fd, LOCK_SH);
          if (ret) {
                  perror("flock");
                  exit(EXIT_FAILURE);
          }
  }

  void resctrl_take_exclusive_lock(int fd)
  {
          int ret;

          /* take exclusive lock on resctrl filesystem */
          ret = flock(fd, LOCK_EX);
          if (ret) {
                  perror("flock");
                  exit(EXIT_FAILURE);
          }
  }

  void resctrl_release_lock(int fd)
  {
          int ret;

          /* release lock on resctrl filesystem */
          ret = flock(fd, LOCK_UN);
          if (ret) {
                  perror("flock");
                  exit(EXIT_FAILURE);
          }
  }

  int main(void)
  {
          int fd;

          fd = open("/sys/fs/resctrl", O_DIRECTORY);
          if (fd == -1) {
                  perror("open");
                  exit(EXIT_FAILURE);
          }
          resctrl_take_shared_lock(fd);
          /* code to read directory contents */
          resctrl_release_lock(fd);

          resctrl_take_exclusive_lock(fd);
          /* code to read and write directory contents */
          resctrl_release_lock(fd);

          close(fd);
          return 0;
  }

Examples for RDT Monitoring along with allocation usage
=======================================================
Reading monitored data
----------------------
Reading an event file (for example mon_data/mon_L3_00/llc_occupancy)
shows the current snapshot of the LLC occupancy of the corresponding
MON group or CTRL_MON group.


Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
------------------------------------------------------------------------
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1
  # echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
  # echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
  # echo 5678 > p1/tasks
  # echo 5679 > p1/tasks

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% of cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Create monitor groups and assign a subset of tasks to each monitor group.
::

  # cd /sys/fs/resctrl/p1/mon_groups
  # mkdir m11 m12
  # echo 5678 > m11/tasks
  # echo 5679 > m12/tasks

Fetch the data (shown in bytes)
::

  # cat m11/mon_data/mon_L3_00/llc_occupancy
  16234000
  # cat m11/mon_data/mon_L3_01/llc_occupancy
  14789000
  # cat m12/mon_data/mon_L3_00/llc_occupancy
  16789000

The parent CTRL_MON group shows the aggregated data.
::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  31234000

Example 2 (Monitor a task from its creation)
--------------------------------------------
On a two socket machine (one L3 cache per socket)::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1

An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation.
::

  # echo $$ > /sys/fs/resctrl/p1/tasks
  # <cmd>

Fetch the data::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  31789000

Example 3 (Monitor without CAT support or before creating CAT groups)
---------------------------------------------------------------------

Assume a system like HSW that has only CQM and no CAT support. In this
case resctrl will still mount, but CTRL_MON directories cannot be
created. The user can however create different MON groups within the
root group and thereby monitor all tasks, including kernel threads.

This can also be used to profile a job's cache footprint before it is
assigned to an allocation group.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir mon_groups/m01
  # mkdir mon_groups/m02

  # echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
  # echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks

Monitor the groups separately and also get per-domain data. From the
output below it is apparent that the tasks are mostly doing work on
domain (socket) 0.
::

  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
  34555
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
  32789


Example 4 (Monitor real time tasks)
-----------------------------------

A single socket system which has real time tasks running on cores 4-7
and non real time tasks on the other cpus. We want to monitor the cache
occupancy of the real time threads on these cores.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p1

Move the cpus 4-7 over to p1::

  # echo f0 > p1/cpus

View the llc occupancy snapshot::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  11234000


Examples on working with mbm_assign_mode
========================================

a. Check if MBM counter assignment mode is supported.
::

  # mount -t resctrl resctrl /sys/fs/resctrl/

  # cat /sys/fs/resctrl/info/L3_MON/mbm_assign_mode
  [mbm_event]
  default

The "mbm_event" mode is detected and enabled.

b. Check how many assignable counters are supported.
::

  # cat /sys/fs/resctrl/info/L3_MON/num_mbm_cntrs
  0=32;1=32

c. Check how many assignable counters are available for assignment in each domain.
::

  # cat /sys/fs/resctrl/info/L3_MON/available_mbm_cntrs
  0=30;1=30

d. List the default group's assign states.
::

  # cat /sys/fs/resctrl/mbm_L3_assignments
  mbm_total_bytes:0=e;1=e
  mbm_local_bytes:0=e;1=e
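In these listings each "domain_id=state" pair shows the assignment state
of one domain: "e" when a counter is assigned to the event in exclusive
mode and "_" when no counter is assigned (compare steps e. to g. below).
A minimal sketch of parsing such a line, with illustrative names only::

  #include <stdio.h>
  #include <string.h>

  /* Parse one assignment line such as "mbm_total_bytes:0=e;1=e". */
  static void parse_assign_line(const char *line)
  {
          char buf[256], *saveptr, *tok, *state;
          const char *event;

          strncpy(buf, line, sizeof(buf) - 1);
          buf[sizeof(buf) - 1] = '\0';

          event = strtok_r(buf, ":", &saveptr);
          if (!event)
                  return;

          /* The remainder is a ';'-separated list of "domain=state" */
          while ((tok = strtok_r(NULL, ";", &saveptr))) {
                  state = strchr(tok, '=');
                  if (!state)
                          continue;
                  *state++ = '\0';
                  printf("event %s: domain %s is '%s'\n", event, tok, state);
          }
  }

  int main(void)
  {
          parse_assign_line("mbm_total_bytes:0=e;1=e");
          return 0;
  }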
e. Unassign the counter associated with the mbm_total_bytes event on domain 0.
::

  # echo "mbm_total_bytes:0=_" > /sys/fs/resctrl/mbm_L3_assignments
  # cat /sys/fs/resctrl/mbm_L3_assignments
  mbm_total_bytes:0=_;1=e
  mbm_local_bytes:0=e;1=e

f. Unassign the counter associated with the mbm_total_bytes event on all domains.
::

  # echo "mbm_total_bytes:*=_" > /sys/fs/resctrl/mbm_L3_assignments
  # cat /sys/fs/resctrl/mbm_L3_assignments
  mbm_total_bytes:0=_;1=_
  mbm_local_bytes:0=e;1=e

g. Assign a counter associated with the mbm_total_bytes event on all domains in
exclusive mode.
::

  # echo "mbm_total_bytes:*=e" > /sys/fs/resctrl/mbm_L3_assignments
  # cat /sys/fs/resctrl/mbm_L3_assignments
  mbm_total_bytes:0=e;1=e
  mbm_local_bytes:0=e;1=e

h. Read the mbm_total_bytes and mbm_local_bytes events of the default group.
Reading the events is unchanged by the assignment.
::

  # cat /sys/fs/resctrl/mon_data/mon_L3_00/mbm_total_bytes
  779247936
  # cat /sys/fs/resctrl/mon_data/mon_L3_01/mbm_total_bytes
  562324232
  # cat /sys/fs/resctrl/mon_data/mon_L3_00/mbm_local_bytes
  212122123
  # cat /sys/fs/resctrl/mon_data/mon_L3_01/mbm_local_bytes
  121212144

i. Check the event configurations.
::

  # cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_total_bytes/event_filter
  local_reads,remote_reads,local_non_temporal_writes,remote_non_temporal_writes,
  local_reads_slow_memory,remote_reads_slow_memory,dirty_victim_writes_all

  # cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_local_bytes/event_filter
  local_reads,local_non_temporal_writes,local_reads_slow_memory

j. Change the event configuration for mbm_local_bytes.
::

  # echo "local_reads, local_non_temporal_writes, local_reads_slow_memory, remote_reads" > \
  /sys/fs/resctrl/info/L3_MON/event_configs/mbm_local_bytes/event_filter

  # cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_local_bytes/event_filter
  local_reads,local_non_temporal_writes,local_reads_slow_memory,remote_reads

k. Now read the local events again. The first read may return "Unavailable";
a subsequent read of mbm_local_bytes will display the current value.
::

  # cat /sys/fs/resctrl/mon_data/mon_L3_00/mbm_local_bytes
  Unavailable
  # cat /sys/fs/resctrl/mon_data/mon_L3_00/mbm_local_bytes
  2252323
  # cat /sys/fs/resctrl/mon_data/mon_L3_01/mbm_local_bytes
  Unavailable
  # cat /sys/fs/resctrl/mon_data/mon_L3_01/mbm_local_bytes
  1566565

l. Switch back to the 'default' mbm_assign_mode if required. Note that switching
the mbm_assign_mode may reset all the MBM counters (and thus all MBM events) of
all the resctrl groups.
::

  # echo "default" > /sys/fs/resctrl/info/L3_MON/mbm_assign_mode
  # cat /sys/fs/resctrl/info/L3_MON/mbm_assign_mode
  mbm_event
  [default]

m. Unmount the resctrl filesystem.
::

  # umount /sys/fs/resctrl/

Intel RDT Errata
================

Intel MBM Counters May Report System Memory Bandwidth Incorrectly
-----------------------------------------------------------------

Errata SKX99 for Skylake server and BDF102 for Broadwell server.

Problem: Intel Memory Bandwidth Monitoring (MBM) counters track metrics
according to the assigned Resource Monitor ID (RMID) for that logical
core.
The IA32_QM_CTR register (MSR 0xC8E), used to report these
metrics, may report incorrect system bandwidth for certain RMID values.

Implication: Due to the errata, system memory bandwidth may not match
what is reported.

Workaround: MBM total and local readings are corrected according to the
following correction factor table:

+---------------+---------------+---------------+-----------------+
|core count     |rmid count     |rmid threshold |correction factor|
+---------------+---------------+---------------+-----------------+
|1              |8              |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|2              |16             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|3              |24             |15             |0.969650         |
+---------------+---------------+---------------+-----------------+
|4              |32             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|6              |48             |31             |0.969650         |
+---------------+---------------+---------------+-----------------+
|7              |56             |47             |1.142857         |
+---------------+---------------+---------------+-----------------+
|8              |64             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|9              |72             |63             |1.185115         |
+---------------+---------------+---------------+-----------------+
|10             |80             |63             |1.066553         |
+---------------+---------------+---------------+-----------------+
|11             |88             |79             |1.454545         |
+---------------+---------------+---------------+-----------------+
|12             |96             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|13             |104            |95             |1.230769         |
+---------------+---------------+---------------+-----------------+
|14             |112            |95             |1.142857         |
+---------------+---------------+---------------+-----------------+
|15             |120            |95             |1.066667         |
+---------------+---------------+---------------+-----------------+
|16             |128            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|17             |136            |127            |1.254863         |
+---------------+---------------+---------------+-----------------+
|18             |144            |127            |1.185255         |
+---------------+---------------+---------------+-----------------+
|19             |152            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|20             |160            |127            |1.066667         |
+---------------+---------------+---------------+-----------------+
|21             |168            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|22             |176            |159            |1.454334         |
+---------------+---------------+---------------+-----------------+
|23             |184            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|24             |192            |127            |0.969744         |
+---------------+---------------+---------------+-----------------+
|25             |200            |191            |1.280246         |
+---------------+---------------+---------------+-----------------+
|26             |208            |191            |1.230921         |
+---------------+---------------+---------------+-----------------+
|27             |216            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|28             |224            |191            |1.143118         |
+---------------+---------------+---------------+-----------------+

If rmid > rmid threshold, MBM total and local values should be multiplied
by the correction factor.
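As an illustration of the arithmetic (the resctrl code applies this
correction internally; the struct and function here are illustrative
only, with just a few table rows shown)::

  #include <stddef.h>
  #include <stdint.h>

  struct mbm_correction {
          unsigned int core_count;
          unsigned int rmid_threshold;
          double factor;
  };

  /* A subset of the rows from the correction factor table above */
  static const struct mbm_correction corrections[] = {
          {  3,  15, 0.969650 },
          {  7,  47, 1.142857 },
          { 17, 127, 1.254863 },
          { 28, 191, 1.143118 },
  };

  /*
   * Apply the workaround to a raw MBM reading: when the RMID exceeds
   * the threshold for this core count, scale by the correction factor.
   */
  static uint64_t mbm_correct(uint64_t raw, unsigned int core_count,
                              unsigned int rmid)
  {
          size_t i;

          for (i = 0; i < sizeof(corrections) / sizeof(corrections[0]); i++) {
                  if (corrections[i].core_count != core_count)
                          continue;
                  if (rmid > corrections[i].rmid_threshold)
                          return (uint64_t)(raw * corrections[i].factor);
                  break;
          }
          return raw;     /* no correction needed */
  }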
See:

1. Erratum SKX99 in Intel Xeon Processor Scalable Family Specification Update:
http://web.archive.org/web/20200716124958/https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html

2. Erratum BDF102 in Intel Xeon E5-2600 v4 Processor Product Family Specification Update:
http://web.archive.org/web/20191125200531/https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf

3. The errata in Intel Resource Director Technology (Intel RDT) on 2nd Generation Intel Xeon Scalable Processors Reference Manual:
https://software.intel.com/content/www/us/en/develop/articles/intel-resource-director-technology-rdt-reference-manual.html

for further information.