# Chelsio T6 Factory Default configuration file.
#
# Copyright (C) 2014-2015 Chelsio Communications. All rights reserved.
#
#   DO NOT MODIFY THIS FILE UNDER ANY CIRCUMSTANCES.  MODIFICATION OF THIS FILE
#   WILL RESULT IN A NON-FUNCTIONAL ADAPTER AND MAY RESULT IN PHYSICAL DAMAGE
#   TO ADAPTERS.


# This file provides the default, power-on configuration for 2-port T6-based
# adapters shipped from the factory.  These defaults are designed to address
# the needs of the vast majority of Terminator customers.  The basic idea is to
# have a default configuration which allows a customer to plug a Terminator
# adapter in and have it work regardless of OS, driver or application, except
# in the most unusual and/or demanding customer applications.
#
# Many of the Terminator resources which are described by this configuration
# are finite.  This requires balancing the configuration/operation needs of
# device drivers across OSes and a large number of customer applications.
#
# Some of the more important resources to allocate and their constraints are:
#  1. Virtual Interfaces: 256.
#  2. Ingress Queues with Free Lists: 1024.
#  3. Egress Queues: 128K.
#  4. MSI-X Vectors: 1088.
#  5. Multi-Port Support (MPS) TCAM: 336 entries to support MAC destination
#     address matching on Ingress Packets.
#
# Some of the important OS/Driver resource needs are:
#  6. Some OS Drivers will manage all resources through a single Physical
#     Function (currently PF4 but it could be any Physical Function).
#  7. Some OS Drivers will manage different ports and functions (NIC,
#     storage, etc.) on different Physical Functions.  For example, NIC
#     functions for ports 0-1 on PF0-1, FCoE on PF4, iSCSI on PF5, etc.
#
# Some of the customer application needs which need to be accommodated:
#  8. Some customers will want to support large CPU count systems with
#     good scaling.  Thus, we'll need to accommodate a number of
#     Ingress Queues and MSI-X Vectors to allow up to some number of CPUs
#     to be involved per port and per application function.  For example,
#     in the case where all ports and application functions will be
#     managed via a single Unified PF and we want to accommodate scaling up
#     to 16 CPUs, we would want:
#
#         2 ports *
#         3 application functions (NIC, FCoE, iSCSI) per port *
#         16 Ingress Queue/MSI-X Vectors per application function
#
#     for a total of 96 Ingress Queues and MSI-X Vectors on the Unified PF.
#     (Plus a few for Firmware Event Queues, etc.)
#
#  9. Some customers will want to use PCI-E SR-IOV Capability to allow Virtual
#     Machines to directly access T6 functionality via SR-IOV Virtual Functions
#     and "PCI Device Passthrough" -- this is especially true for the NIC
#     application functionality.
#


# Global configuration settings.
#
[global]
        rss_glb_config_mode = basicvirtual
        rss_glb_config_options = tnlmapen,hashtoeplitz,tnlalllkp

        # PL_TIMEOUT register
        pl_timeout_value = 1000         # the timeout value in units of us

        # The following Scatter Gather Engine (SGE) settings assume a 4KB Host
        # Page Size and a 64B L1 Cache Line Size.  They program the
        # EgrStatusPageSize and IngPadBoundary to 64B and the PktShift to 2.
        # If a Master PF Driver finds itself on a machine with different
        # parameters, then the Master PF Driver is responsible for initializing
        # these parameters to appropriate values.
        #
        # Notes:
        #  1. The Free List Buffer Sizes below are raw and the firmware will
        #     round them up to the Ingress Padding Boundary.
        #  2. The SGE Timer Values below are expressed in microseconds.
        #     The firmware will convert these values to Core Clock Ticks when
        #     it processes the configuration parameters.
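        #
        #  Note on syntax (explanatory only; it does not change any setting):
        #  a "reg[address] = value/mask" entry below is a masked update --
        #  only the bits set in the mask are modified -- while an entry
        #  written without a mask replaces the entire register.  For example,
        #  reg[0x7d48] = 0x00000000/0x00000400 further below clears just the
        #  EnableFLMError bit of TP_PC_CONFIG.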
        #
        reg[0x1008] = 0x40810/0x21c70           # SGE_CONTROL
        reg[0x100c] = 0x22222222                # SGE_HOST_PAGE_SIZE
        reg[0x10a0] = 0x01040810                # SGE_INGRESS_RX_THRESHOLD
        reg[0x1044] = 4096                      # SGE_FL_BUFFER_SIZE0
        reg[0x1048] = 65536                     # SGE_FL_BUFFER_SIZE1
        reg[0x104c] = 1536                      # SGE_FL_BUFFER_SIZE2
        reg[0x1050] = 9024                      # SGE_FL_BUFFER_SIZE3
        reg[0x1054] = 9216                      # SGE_FL_BUFFER_SIZE4
        reg[0x1058] = 2048                      # SGE_FL_BUFFER_SIZE5
        reg[0x105c] = 128                       # SGE_FL_BUFFER_SIZE6
        reg[0x1060] = 8192                      # SGE_FL_BUFFER_SIZE7
        reg[0x1064] = 16384                     # SGE_FL_BUFFER_SIZE8
        reg[0x10a4] = 0xa000a000/0xf000f000     # SGE_DBFIFO_STATUS
        reg[0x10a8] = 0x402000/0x402000         # SGE_DOORBELL_CONTROL
        sge_timer_value = 5, 10, 20, 50, 100, 200   # SGE_TIMER_VALUE* in usecs
        reg[0x10c4] = 0x20000000/0x20000000     # GK_CONTROL, enable 5th thread

        # DBQ Timer duration = 1 cclk cycle duration * (sge_dbq_timertick+1) * sge_dbq_timer
        # SGE DBQ tick value.  All timers are multiples of this value.
#       sge_dbq_timertick = 5                   # in usecs
#       sge_dbq_timer = 1, 2, 4, 6, 8, 10, 12, 16

        # enable TP_OUT_CONFIG.IPIDSPLITMODE
        reg[0x7d04] = 0x00010000/0x00010000

        reg[0x7dc0] = 0x0e2f8849                # TP_SHIFT_CNT

        # Tick granularities in kbps
        tsch_ticks = 1000, 100, 10, 1

        # TP_VLAN_PRI_MAP to select filter tuples and enable ServerSram
        # filter control: compact, fcoemask
        # server sram   : srvrsram
        # filter tuples : fragmentation, mpshittype, macmatch, ethertype,
        #                 protocol, tos, vlan, vnic_id, port, fcoe
        # valid filterModes are described in the Terminator 5 Data Book
        filterMode = fcoemask, srvrsram, fragmentation, mpshittype, protocol, vlan, port, fcoe

        # filter tuples enforced in LE active region (equal to or subset of filterMode)
        filterMask = protocol, fcoe
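
        # Purely as an illustration of the "subset of filterMode" rule above
        # (the line below is commented out and has no effect): a mask that
        # also enforced the port tuple would be equally valid here, since
        # port is part of filterMode.
        #filterMask = protocol, port, fcoe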

        # Percentage of dynamic memory (in either the EDRAM or external MEM)
        # to use for TP RX payload
        tp_pmrx = 30

        # TP RX payload page size
        tp_pmrx_pagesize = 64K

        # TP number of RX channels
        tp_nrxch = 0            # 0 (auto) = 1

        # Percentage of dynamic memory (in either the EDRAM or external MEM)
        # to use for TP TX payload
        tp_pmtx = 50

        # TP TX payload page size
        tp_pmtx_pagesize = 64K

        # TP number of TX channels
        tp_ntxch = 0            # 0 (auto) = equal number of ports
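
        # Rough illustration of the payload memory split configured above
        # (the actual amount of TP payload memory depends on the adapter's
        # EDRAM/external memory, so the 1 GB figure is only an assumed
        # example): with 1 GB of dynamic payload memory, tp_pmrx = 30 sets
        # aside roughly 300 MB for RX payload and tp_pmtx = 50 roughly
        # 500 MB for TX payload.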

        # TP OFLD MTUs
        tp_mtus = 88, 256, 512, 576, 808, 1024, 1280, 1488, 1500, 2002, 2048, 4096, 4352, 8192, 9000, 9600

        # enable TP_OUT_CONFIG.IPIDSPLITMODE and CRXPKTENC
        reg[0x7d04] = 0x00010008/0x00010008

        # TP_GLOBAL_CONFIG
        reg[0x7d08] = 0x00000800/0x00000800     # set IssFromCplEnable

        # TP_PC_CONFIG
        reg[0x7d48] = 0x00000000/0x00000400     # clear EnableFLMError

        # TP_PARA_REG0
        reg[0x7d60] = 0x06000000/0x07000000     # set InitCWND to 6

        # LE_DB_CONFIG
        reg[0x19c04] = 0x00400000/0x00440000    # LE Server SRAM Enable,
                                                # LE IPv4 compression disabled
        # LE_DB_HASH_CONFIG
        reg[0x19c28] = 0x00800000/0x01f00000    # LE Hash bucket size 8,

        # ULP_TX_CONFIG
        reg[0x8dc0] = 0x00000104/0x00000104     # Enable ITT on PI err
                                                # Enable more error msg for ...
                                                # TPT error.

        # ULP_RX_MISC_FEATURE_ENABLE
        reg[0x1925c] = 0x01003400/0x01003400    # iscsi tag pi bit
                                                # Enable offset decrement after ...
                                                # PI extraction and before DDP
                                                # ulp insert pi source info in DIF
                                                # iscsi_eff_offset_en

        # Enable iscsi completion moderation feature
        #reg[0x1925c] = 0x000041c0/0x000031c0   # Enable offset decrement after
                                                # PI extraction and before DDP.
                                                # ulp insert pi source info in
                                                # DIF.
                                                # Enable iscsi hdr cmd mode.
                                                # iscsi force cmd mode.
                                                # Enable iscsi cmp mode.

# Some "definitions" to make the rest of this a bit more readable.  We support
# 2 ports, 3 functions (NIC, FCoE and iSCSI), scaling up to 16 "CPU Queue
# Sets" per function per port ...
#
# NMSIX = 1088                  # available MSI-X Vectors
# NVI = 256                     # available Virtual Interfaces
# NMPSTCAM = 336                # MPS TCAM entries
#
# NPORTS = 2                    # ports
# NCPUS = 16                    # CPUs we want to support scalably
# NFUNCS = 3                    # functions per port (NIC, FCoE, iSCSI)

# Breakdown of Virtual Interface/Queue/Interrupt resources for the "Unified
# PF" which many OS Drivers will use to manage most or all functions.
#
# Each Ingress Queue can use one MSI-X interrupt but some Ingress Queues can
# use Forwarded Interrupt Ingress Queues.  For these latter, an Ingress Queue
# would be created and the Queue ID of a Forwarded Interrupt Ingress Queue
# will be specified as the "Ingress Queue Asynchronous Destination Index."
# Thus, the number of MSI-X Vectors assigned to the Unified PF will be less
# than or equal to the number of Ingress Queues ...
#
# NVI_NIC = 4                   # NIC access to NPORTS
# NFLIQ_NIC = 32                # NIC Ingress Queues with Free Lists
# NETHCTRL_NIC = 32             # NIC Ethernet Control/TX Queues
# NEQ_NIC = 64                  # NIC Egress Queues (FL, ETHCTRL/TX)
# NMPSTCAM_NIC = 16             # NIC MPS TCAM Entries (NPORTS*4)
# NMSIX_NIC = 32                # NIC MSI-X Interrupt Vectors (FLIQ)
#
# NVI_OFLD = 0                  # Offload uses NIC function to access ports
# NFLIQ_OFLD = 16               # Offload Ingress Queues with Free Lists
# NETHCTRL_OFLD = 0             # Offload Ethernet Control/TX Queues
# NEQ_OFLD = 16                 # Offload Egress Queues (FL)
# NMPSTCAM_OFLD = 0             # Offload MPS TCAM Entries (uses NIC's)
# NMSIX_OFLD = 16               # Offload MSI-X Interrupt Vectors (FLIQ)
#
# NVI_RDMA = 0                  # RDMA uses NIC function to access ports
# NFLIQ_RDMA = 4                # RDMA Ingress Queues with Free Lists
# NETHCTRL_RDMA = 0             # RDMA Ethernet Control/TX Queues
# NEQ_RDMA = 4                  # RDMA Egress Queues (FL)
# NMPSTCAM_RDMA = 0             # RDMA MPS TCAM Entries (uses NIC's)
# NMSIX_RDMA = 4                # RDMA MSI-X Interrupt Vectors (FLIQ)
#
# NEQ_WD = 128                  # Wire Direct TX Queues and FLs
# NETHCTRL_WD = 64              # Wire Direct TX Queues
# NFLIQ_WD = 64                 # Wire Direct Ingress Queues with Free Lists
#
# NVI_ISCSI = 4                 # ISCSI access to NPORTS
# NFLIQ_ISCSI = 4               # ISCSI Ingress Queues with Free Lists
# NETHCTRL_ISCSI = 0            # ISCSI Ethernet Control/TX Queues
# NEQ_ISCSI = 4                 # ISCSI Egress Queues (FL)
# NMPSTCAM_ISCSI = 4            # ISCSI MPS TCAM Entries (NPORTS)
# NMSIX_ISCSI = 4               # ISCSI MSI-X Interrupt Vectors (FLIQ)
#
# NVI_FCOE = 4                  # FCOE access to NPORTS
# NFLIQ_FCOE = 34               # FCOE Ingress Queues with Free Lists
# NETHCTRL_FCOE = 32            # FCOE Ethernet Control/TX Queues
# NEQ_FCOE = 66                 # FCOE Egress Queues (FL)
# NMPSTCAM_FCOE = 32            # FCOE MPS TCAM Entries (NPORTS)
# NMSIX_FCOE = 34               # FCOE MSI-X Interrupt Vectors (FLIQ)

# Two extra Ingress Queues per function for Firmware Events and Forwarded
# Interrupts, and two extra interrupts per function for Firmware Events (or a
# Forwarded Interrupt Queue) and General Interrupts per function.
#
# NFLIQ_EXTRA = 6               # "extra" Ingress Queues 2*NFUNCS (Firmware and
#                               #   Forwarded Interrupts)
# NMSIX_EXTRA = 6               # extra interrupts 2*NFUNCS (Firmware and
#                               #   General Interrupts)

# Microsoft HyperV resources.  The HyperV Virtual Ingress Queues will have
# their interrupts forwarded to another set of Forwarded Interrupt Queues.
#
# NVI_HYPERV = 16               # VMs we want to support
# NVIIQ_HYPERV = 2              # Virtual Ingress Queues with Free Lists per VM
# NFLIQ_HYPERV = 40             # VIQs + NCPUS Forwarded Interrupt Queues
# NEQ_HYPERV = 32               # VIQs Free Lists
# NMPSTCAM_HYPERV = 16          # MPS TCAM Entries (NVI_HYPERV)
# NMSIX_HYPERV = 8              # NCPUS Forwarded Interrupt Queues

# Adding all of the above Unified PF resource needs together: (NIC + OFLD +
# RDMA + ISCSI + FCOE + EXTRA + HYPERV)
#
# NVI_UNIFIED = 28
# NFLIQ_UNIFIED = 106
# NETHCTRL_UNIFIED = 32
# NEQ_UNIFIED = 124
# NMPSTCAM_UNIFIED = 40
#
# The sum of all the MSI-X resources above is 74 MSI-X Vectors but we'll round
# that up to 128 to make sure the Unified PF doesn't run out of resources.
#
# NMSIX_UNIFIED = 128
#
# The Storage PFs could need up to NPORTS*NCPUS + NMSIX_EXTRA MSI-X Vectors
# which is 34 but they're probably safe with 32.
#
# NMSIX_STORAGE = 32

# Note: The UnifiedPF is PF4 which doesn't have any Virtual Functions
# associated with it.  Thus, the MSI-X Vector allocations we give to the
# UnifiedPF aren't inherited by any Virtual Functions.  As a result we can
# provision many more Virtual Functions than we can if the UnifiedPF were
# one of PF0-1.
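#
# As a concrete example of how the UNIFIED totals above are built up, the
# Virtual Interface count is simply the sum of the per-application entries
# listed earlier:
#
#   NVI_UNIFIED = NVI_NIC + NVI_OFLD + NVI_RDMA + NVI_ISCSI + NVI_FCOE +
#                 NVI_HYPERV
#               = 4 + 0 + 0 + 4 + 4 + 16 = 28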
#

# All of the below PCI-E parameters are actually stored in various *_init.txt
# files.  We include them below essentially as comments.
#
# For PF0-1 we assign 8 vectors each for NIC Ingress Queues of the associated
# ports 0-1.
#
# For PF4, the Unified PF, we give it an MSI-X Table Size as outlined above.
#
# For PF5-6 we assign enough MSI-X Vectors to support FCoE and iSCSI
# storage applications across all ports.
#
# Additionally, since the UnifiedPF isn't one of the per-port Physical
# Functions, we give the UnifiedPF and the PF0-1 Physical Functions
# different PCI Device IDs which will allow Unified and Per-Port Drivers
# to directly select the type of Physical Function to which they wish to be
# attached.
#
# Note that the actual values used for the PCI-E Intellectual Property will be
# 1 less than those below since that's the way it "counts" things.  For
# readability, we use the number we actually mean ...
#
# PF0_INT = 8                   # NCPUS
# PF1_INT = 8                   # NCPUS
# PF0_3_INT = 32                # PF0_INT + PF1_INT + PF2_INT + PF3_INT
#
# PF4_INT = 128                 # NMSIX_UNIFIED
# PF5_INT = 32                  # NMSIX_STORAGE
# PF6_INT = 32                  # NMSIX_STORAGE
# PF7_INT = 0                   # Nothing Assigned
# PF4_7_INT = 192               # PF4_INT + PF5_INT + PF6_INT + PF7_INT
#
# PF0_7_INT = 224               # PF0_3_INT + PF4_7_INT
#
# With the above we can get 17 VFs/PF0-3 (limited by 336 MPS TCAM entries)
# but we'll lower that to 16 to make our total 64 and a nice power of 2 ...
#
# NVF = 16
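#
# (To make the "1 less" note above concrete: the PF4 value of 128, for
# example, would be programmed into the PCI-E MSI-X Table Size field as 127,
# since that field is encoded as "table size minus one.")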


# For those OSes which manage different ports on different PFs, we need
# only enough resources to support a single port's NIC application functions
# on PF0-1.  The below assumes that we're only doing NIC with NCPUS "Queue
# Sets" for ports 0-1.  The FCoE and iSCSI functions for such OSes will be
# managed on the "storage PFs" (see below).
#

# Some OS Drivers manage all application functions for all ports via PF4.
# Thus we need to provide a large number of resources here.  For Egress
# Queues we need to account for both TX Queues as well as Free List Queues
# (because the host is responsible for producing Free List Buffers for the
# hardware to consume).
#
[function "0"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 28                # NVI_UNIFIED
        niqflint = 170          # NFLIQ_UNIFIED + NFLIQ_WD
        nethctrl = 96           # NETHCTRL_UNIFIED + NETHCTRL_WD
        neq = 252               # NEQ_UNIFIED + NEQ_WD
        nexactf = 40            # NMPSTCAM_UNIFIED
        nrawf = 2
        cmask = all             # access to all channels
        pmask = all             # access to all ports ...
        nethofld = 1024         # number of user mode ethernet flow contexts
        ncrypto_lookaside = 32
        nclip = 32              # number of clip region entries
        nfilter = 48            # number of filter region entries
        nserver = 48            # number of server region entries
        nhash = 2048            # number of hash region entries
        nhpfilter = 0           # number of high priority filter region entries
        protocol = nic_vm, ofld, rddp, rdmac, iscsi_initiator_pdu, iscsi_target_pdu, iscsi_t10dif, tlskeys, crypto_lookaside
        tp_l2t = 3072
        tp_ddp = 2
        tp_ddp_iscsi = 2
        tp_tls_key = 3
        tp_stag = 2
        tp_pbl = 5
        tp_rq = 7
        tp_srq = 128
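
# Cross-checking the "Unified PF" limits above against the definitions
# earlier in this file (plain addition; nothing here changes the
# configuration):
#
#   niqflint = 170 = NFLIQ_UNIFIED (106) + NFLIQ_WD (64)
#   nethctrl =  96 = NETHCTRL_UNIFIED (32) + NETHCTRL_WD (64)
#   neq      = 252 = NEQ_UNIFIED (124) + NEQ_WD (128)
#   nexactf  =  40 = NMPSTCAM_UNIFIED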

# We have FCoE and iSCSI storage functions on PF5 and PF6 each of which may
# need to have Virtual Interfaces on each of the ports with up to NCPUS
# "Queue Sets" each.
#
[function "1"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 4                 # NPORTS
        niqflint = 34           # NPORTS*NCPUS + NMSIX_EXTRA
        nethctrl = 32           # NPORTS*NCPUS
        neq = 64                # NPORTS*NCPUS * 2 (FL, ETHCTRL/TX)
        nexactf = 16            # (NPORTS *(no of snmc grp + 1 hw mac) + 1 anmc grp)) rounded to 16.
        cmask = all             # access to all channels
        pmask = all             # access to all ports ...
        nserver = 16
        nhash = 2048
        tp_l2t = 1020
        protocol = iscsi_initiator_fofld
        tp_ddp_iscsi = 2
        iscsi_ntask = 2048
        iscsi_nsess = 2048
        iscsi_nconn_per_session = 1
        iscsi_ninitiator_instance = 64


# The following function, 1023, is not an actual PCIE function but is used to
# configure and reserve firmware internal resources that come from the global
# resource pool.
#
[function "1023"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 4                 # NVI_UNIFIED
        cmask = all             # access to all channels
        pmask = all             # access to all ports ...
        nexactf = 8             # NPORTS + DCBX +
        nfilter = 16            # number of filter region entries
        #nhpfilter = 0          # number of high priority filter region entries


# For Virtual functions, we only allow NIC functionality and we only allow
# access to one port (1 << PF).  Note that because of limitations in the
# Scatter Gather Engine (SGE) hardware which checks writes to VF KDOORBELL
# and GTS registers, the number of Ingress and Egress Queues must be a power
# of 2.
#
[function "0/*"]                # NVF
        wx_caps = 0x82          # DMAQ | VF
        r_caps = 0x86           # DMAQ | VF | PORT
        nvi = 1                 # 1 port
        niqflint = 4            # 2 "Queue Sets" + NXIQ
        nethctrl = 2            # 2 "Queue Sets"
        neq = 4                 # 2 "Queue Sets" * 2
        nexactf = 4
        cmask = all             # access to all channels
        pmask = 0x1             # access to only one port (port 0) ...


[function "1/*"]                # NVF
        wx_caps = 0x82          # DMAQ | VF
        r_caps = 0x86           # DMAQ | VF | PORT
        nvi = 1                 # 1 port
        niqflint = 4            # 2 "Queue Sets" + NXIQ
        nethctrl = 2            # 2 "Queue Sets"
        neq = 4                 # 2 "Queue Sets" * 2
        nexactf = 4
        cmask = all             # access to all channels
        pmask = 0x2             # access to only one port (port 1) ...


# MPS features a 196608-byte ingress buffer that is used for ingress buffering
# for packets from the wire as well as the loopback path of the L2 switch.  The
# following params control how the buffer memory is distributed and the L2 flow
# control settings:
#
# bg_mem:       %-age of mem to use for port/buffer group
# lpbk_mem:     %-age of port/bg mem to use for loopback
# hwm:          high watermark; bytes available when starting to send pause
#               frames (in units of 0.1 MTU)
# lwm:          low watermark; bytes remaining when sending 'unpause' frame
#               (in units of 0.1 MTU)
# dwm:          minimum delta between high and low watermark (in units of 100
#               Bytes)
#
[port "0"]
        dcb = ppp, dcbx, b2b    # configure for DCB PPP and enable DCBX offload
        hwm = 30
        lwm = 15
        dwm = 30
        dcb_app_tlv[0] = 0x8906, ethertype, 3
        dcb_app_tlv[1] = 0x8914, ethertype, 3
        dcb_app_tlv[2] = 3260, socketnum, 5


[port "1"]
        dcb = ppp, dcbx, b2b
        hwm = 30
        lwm = 15
        dwm = 30
        dcb_app_tlv[0] = 0x8906, ethertype, 3
        dcb_app_tlv[1] = 0x8914, ethertype, 3
        dcb_app_tlv[2] = 3260, socketnum, 5
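
# To put the watermark settings above in concrete terms (hwm and lwm are in
# units of 0.1 MTU, dwm in units of 100 bytes): for both ports, hwm = 30 and
# lwm = 15 correspond to roughly 3.0 MTUs and 1.5 MTUs of buffer respectively,
# and dwm = 30 to a minimum gap of about 3000 bytes between the two
# watermarks.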

[fini]
        version = 0x1425001d
        checksum = 0x5001af51

# Total resources used by above allocations:
#   Virtual Interfaces: 104
#   Ingress Queues/w Free Lists and Interrupts: 526
#   Egress Queues: 702
#   MPS TCAM Entries: 336
#   MSI-X Vectors: 736
#   Virtual Functions: 64
#