================================
Documentation for /proc/sys/net/
================================

Copyright

Copyright (c) 1999

 - Terrehon Bowden <terrehon@pacbell.net>
 - Bodo Bauer <bb@ricochet.net>

Copyright (c) 2000

 - Jorge Nerin <comandante@zaralinux.com>

Copyright (c) 2009

 - Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.


Table: Subdirectories in /proc/sys/net

 ========= =================== = ========== ===================
 Directory Content               Directory  Content
 ========= =================== = ========== ===================
 802       E802 protocol         mptcp      Multipath TCP
 appletalk Appletalk protocol    netfilter  Network Filter
 ax25      AX25                  netrom     NET/ROM
 bridge    Bridging              rose       X.25 PLP layer
 core      General parameter     tipc       TIPC
 ethernet  Ethernet protocol     unix       Unix domain sockets
 ipv4      IP version 4          x25        X.25 protocol
 ipv6      IP version 6
 ========= =================== = ========== ===================

1. /proc/sys/net/core - Network core options
============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure that allows executing bytecode at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After a program has been
loaded through bpf(2) and has passed the in-kernel verifier, a JIT
translates the BPF instructions into native CPU instructions. There are
two flavors of JITs. The newer eBPF JIT is currently supported on:

 - x86_64
 - x86_32
 - arm64
 - arm32
 - ppc64
 - ppc32
 - sparc64
 - mips64
 - s390x
 - riscv64
 - riscv32
 - loongarch64
 - arc

The older cBPF JIT is supported on the following archs:

 - mips
 - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
classic BPF programs such as tcpdump filters and seccomp rules,
but not eBPF programs loaded through bpf(2).

Values:

 - 0 - disable the JIT (default value)
 - 1 - enable the JIT
 - 2 - enable the JIT and ask the compiler to emit debugging traces to
   the kernel log.

bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Supported are eBPF
JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.

Values:

 - 0 - disable JIT hardening (default value)
 - 1 - enable JIT hardening for unprivileged users only
 - 2 - enable JIT hardening for all users

where "privileged user" in this context means a process having
CAP_BPF or CAP_SYS_ADMIN in the root user namespace.
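Both knobs can be inspected and changed like any other sysctl. As an
illustrative sketch, the following enables the JIT and hardens it for
unprivileged users only (the values are taken from the tables above)::

    # sysctl -w net.core.bpf_jit_enable=1
    net.core.bpf_jit_enable = 1
    # sysctl -w net.core.bpf_jit_harden=1
    net.core.bpf_jit_harden = 1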
bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, the compiled images live at
addresses unknown to the kernel, meaning they show up neither in traces
nor in /proc/kallsyms. This option enables the export of these
addresses, which can be used for debugging/tracing. If bpf_jit_harden
is enabled, this feature is disabled.

Values:

 - 0 - disable JIT kallsyms export (default value)
 - 1 - enable JIT kallsyms export for privileged users only

bpf_jit_limit
-------------

This enforces a global limit on memory allocations by the BPF JIT
compiler, in order to reject unprivileged JIT requests once the limit
has been surpassed. bpf_jit_limit contains the value of the global
limit in bytes.

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware-aggregated packet is counted as one packet in this
context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the driver's registered
NAPI poll function for the per-softirq-cycle netdev_budget. This
parameter influences the proportion of the configured netdev_budget
that is spent on RPS-based packet processing during RX softirq cycles.
It is further meant to make the current dev_weight adaptable to
asymmetric CPU needs on the RX/TX sides of the network stack (see
dev_weight_tx_bias). It is effective on a per-CPU basis. The value is
derived from dev_weight by multiplication (dev_weight *
dev_weight_rx_bias).

Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX
softirq cycle. Effective on a per-CPU basis. Allows scaling of the
current dev_weight for asymmetric net stack processing needs. Be
careful to avoid making TX softirq processing a CPU hog.

The calculation is based on dev_weight (dev_weight *
dev_weight_tx_bias).

Default: 1

default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the
default queuing discipline is created without additional parameters, it
is best suited to queuing disciplines that work well without
configuration, like stochastic fair queue (sfq), CoDel (codel) or fair
queue CoDel (fq_codel). Don't use queuing disciplines like Hierarchical
Token Bucket or Deficit Round Robin, which require setting up classes
and bandwidths. Note that physical multiqueue interfaces still use mq
as the root qdisc, which in turn uses this default for its leaves.
Virtual devices (e.g. lo or veth) ignore this setting and instead
default to noqueue.

Default: pfifo_fast

busy_read
---------

Low latency busy poll timeout for socket reads (needs
CONFIG_NET_RX_BUSY_POLL). Approximate time in us to busy loop waiting
for packets on the device queue. This sets the default value of the
SO_BUSY_POLL socket option. It can be set or overridden per socket with
the SO_BUSY_POLL socket option, which is the preferred method of
enabling the feature. If you need to enable the feature globally via
sysctl, a value of 50 is recommended.

Will increase power usage.

Default: 0 (off)
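As a sketch, busy polling on reads could be enabled globally with the
recommended value from above (prefer the per-socket SO_BUSY_POLL option
where possible)::

    # sysctl -w net.core.busy_read=50
    net.core.busy_read = 50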
busy_poll
---------

Low latency busy poll timeout for poll and select (needs
CONFIG_NET_RX_BUSY_POLL). Approximate time in us to busy loop waiting
for events. The recommended value depends on the number of sockets you
poll on: for a few sockets, 50; for several hundred, 100. For more than
that you probably want to use epoll. Note that only sockets with
SO_BUSY_POLL set will be busy polled, so you want to either selectively
set SO_BUSY_POLL on those sockets or set the net.core.busy_read sysctl
globally.

Will increase power usage.

Default: 0 (off)

mem_pcpu_rsv
------------

Per-CPU reserved forward alloc cache size in page units. The default is
1MB per CPU.

bypass_prot_mem
---------------

Skip charging socket buffers to the global per-protocol memory
accounting controlled by net.ipv4.tcp_mem, net.ipv4.udp_mem, etc.

Default: 0 (off)

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

Default: 4194304

rps_default_mask
----------------

The default RPS CPU mask used on newly created network devices. An
empty mask means RPS is disabled by default.

tstamp_allow_data
-----------------

Allow processes to receive tx timestamps looped together with the
original packet contents. If disabled, transmit timestamp requests from
unprivileged processes are dropped unless the socket option
SOF_TIMESTAMPING_OPT_TSONLY is set.

Default: 1 (on)

wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

Default: 4194304

message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the
kernel log from the networking code. They enforce a rate limit to
mitigate denial-of-service attacks. A higher message_cost factor
results in fewer messages being written. message_burst controls when
messages will be dropped. The default settings limit warning messages
to one every five seconds.

warnings
--------

This sysctl is now unused.

It was used to control console messages from the networking stack that
occur because of problems on the network, like duplicate addresses or
bad checksums.

These messages are now emitted at KERN_DEBUG and can generally be
enabled and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling
cycle (NAPI poll). In one polling cycle, interfaces which are
registered for polling are probed in a round-robin manner. Also, a
polling cycle may not exceed netdev_budget_usecs microseconds, even if
netdev_budget has not been exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling will
exit when either netdev_budget_usecs have elapsed during the poll cycle
or the number of packets processed reaches netdev_budget.

netdev_max_backlog
------------------

Maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.
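As an illustration, per-CPU drops caused by a full backlog are
accounted in the second column of /proc/net/softnet_stat, and the limit
can be raised if drops are observed (the value below is only an
example)::

    # cat /proc/net/softnet_stat
    # sysctl -w net.core.netdev_max_backlog=2000
    net.core.netdev_max_backlog = 2000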
qdisc_max_burst
---------------

Maximum number of packets that can be temporarily stored before
reaching the qdisc.

Default: 1000

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a randomly generated
40-byte host key. Some user space might need to gather its content even
if drivers do not provide ethtool -x support yet.

::

  myhost:~# cat /proc/sys/net/core/netdev_rss_key
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

The file contains NUL bytes if no driver has ever called the
netdev_rss_key_fill() function.

Note:
  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
  but most drivers only use 40 bytes of it.

::

  myhost:~# ethtool -x eth0
  RX flow hash indirection table for eth0 with 8 RX ring(s):
      0:    0     1     2     3     4     5     6     7
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing,
when the target CPU processes packets. This might add some delay to the
timestamps, but permits distributing the load over several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible,
before queueing.

netdev_unregister_timeout_secs
------------------------------

Unregister network device timeout in seconds.
This option controls the timeout (in seconds) used to issue a warning
while waiting for a network device refcount to drop to 0 during device
unregistration. A lower value may be useful during bisection to detect
a leaked reference faster. A larger value may be useful to prevent
false warnings on slow/loaded systems.
Default value is 10, minimum 1, maximum 3600.

skb_defer_max
-------------

Max size (in skbs) of the per-CPU list of skbs being freed by the CPU
which allocated them.

Default: 128

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a
sequence of struct cmsghdr structures with appended data. TCP tx
zerocopy also uses optmem_max as a limit for its internal structures.

Default: 128 KB

fb_tunnels_only_for_init_net
----------------------------

Controls whether fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created. There are 3
possibilities:

 (a) value = 0; respective fallback tunnels are created when the module
     is loaded in every net namespace (backward compatible behavior).
 (b) value = 1; [kcmd value: initns] respective fallback tunnels are
     created only in the init net namespace, and no other net namespace
     will have them.
 (c) value = 2; [kcmd value: none] fallback tunnels are not created
     when a module is loaded in any net namespace.

Setting the value to "2" is pointless after boot if these modules are
built-in, so there is a kernel command-line option that can change this
default. Please refer to
Documentation/admin-guide/kernel-parameters.txt for additional details.

Not creating fallback tunnels gives userspace control to create only
what is needed and avoids creating redundant devices.

Default: 0 (for compatibility reasons)
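For example, fallback tunnel creation could be restricted to the init
namespace at runtime (a sketch; the command-line spelling of the same
policies is documented in kernel-parameters.txt as fb_tunnels=initns
and fb_tunnels=none)::

    # sysctl -w net.core.fb_tunnels_only_for_init_net=1
    net.core.fb_tunnels_only_for_init_net = 1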
devconf_inherit_init_net
------------------------

Controls whether a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net, and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from the
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values. If set to 3, both IPv4 and
IPv6 settings are forced to inherit from the current ones in the netns
where this new netns has been created.

Default: 0 (for compatibility reasons)

txrehash
--------

Controls the default hash rethink behaviour on a socket when the
SO_TXREHASH option is set to SOCK_TXREHASH_DEFAULT (i.e. not overridden
by setsockopt).

If set to 1 (default), hash rethink is performed on the listening
socket. If set to 0, hash rethink is not performed.

txq_reselection_ms
------------------

Controls how often (in ms) a busy connected flow can select another TX
queue.

A reselection is desirable when the user thread has migrated and XPS
would select a different queue. The same can occur without XPS if the
flow hash has changed.

But switching TX queues can introduce reordering, especially if the old
queue is under high pressure. Modern TCP stacks deal well with
reordering if it does not happen too often.

To disable this feature, set the value to 0.

Default: 1000

gro_normal_batch
----------------

Maximum number of segments to batch up on output of GRO. When a packet
exits GRO, either as a coalesced superframe or as an original packet
which GRO has decided not to coalesce, it is placed on a per-NAPI list.
This list is then passed to the stack when the number of segments
reaches the gro_normal_batch limit.

high_order_alloc_disable
------------------------

By default the allocator for page frags tries to use high-order pages
(order-3 on x86). While the default behavior gives good results in most
cases, some users might have hit contention in page allocation/freeing.
This was especially true on older kernels (< 5.14), when high-order
pages were not stored on per-CPU lists. This sysctl allows opting in to
order-0 allocations instead, but is now mostly of historical
importance.

Default: 0

2. /proc/sys/net/unix - Parameters for Unix domain sockets
==========================================================

There is only one file in this directory: unix_dgram_qlen limits the
maximum number of datagrams queued in a Unix domain socket's buffer. It
will not take effect unless the PF_UNIX flag is specified.


3. /proc/sys/net/ipv4 - IPv4 settings
=====================================

Please see Documentation/networking/ip-sysctl.rst for descriptions of
these entries.


4. Appletalk
============

The /proc/sys/net/appletalk directory holds the Appletalk configuration
data when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an AARP entry before expiring it. Used to
age out old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk
address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expired entries are checked.
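As an illustration, these four parameters appear as individual files
under /proc/sys/net/appletalk and can be read together (a sketch;
values vary by system and are omitted here)::

    $ grep . /proc/sys/net/appletalk/*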
The directory /proc/net/appletalk holds the list of active Appletalk
sockets on a machine.

The fields indicate the DDP type, the local address (in network:node
format), the remote address, the size of the transmit pending queue,
the size of the received queue (bytes waiting for applications to
read), the state, and the uid owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for
Appletalk. It shows the name of the interface, its Appletalk address,
the network range on that address (or network number for phase 1
networks), and the status of the interface.

/proc/net/atalk_route lists each known network route. It lists the
target (network) that the route leads to, the router (may be directly
connected), the route flags, and the device the route is using.


5. TIPC
=======

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to
tcp_rmem - i.e. a vector of 3 integers: (min, default, max)

::

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800        68043600
    #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min
values are scaled (shifted) versions of that same value. Note that the
min value is not at this point in time used in any meaningful way, but
the triplet is preserved in order to be consistent with things like
tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster,
without any form of transaction handling. This means that different
race scenarios are possible. One such scenario is that a name
withdrawal sent out by one node and received by another node may arrive
after a second, overlapping name publication has already been accepted
from a third node, although the conflicting updates may originally have
been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on
a defer queue until another event arrives that clears the error, or
until the timeout expires. The value is in milliseconds.
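As an illustration, a nonzero defer timeout could be set like any other
sysctl (a sketch; a suitable value depends on cluster size and link
latency, and 1000 is only an example)::

    # sysctl -w net.tipc.named_timeout=1000
    net.tipc.named_timeout = 1000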