1.\" Copyright (C) 2001 Matthew Dillon. All rights reserved. 2.\" Copyright (C) 2012 Eitan Adler. 3.\" 4.\" Redistribution and use in source and binary forms, with or without 5.\" modification, are permitted provided that the following conditions 6.\" are met: 7.\" 1. Redistributions of source code must retain the above copyright 8.\" notice, this list of conditions and the following disclaimer. 9.\" 2. Redistributions in binary form must reproduce the above copyright 10.\" notice, this list of conditions and the following disclaimer in the 11.\" documentation and/or other materials provided with the distribution. 12.\" 13.\" THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND 14.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 15.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 16.\" ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE 17.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 18.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 19.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 20.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 21.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 22.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 23.\" SUCH DAMAGE. 24.\" 25.\" $FreeBSD$ 26.\" 27.Dd October 11, 2022 28.Dt TUNING 7 29.Os 30.Sh NAME 31.Nm tuning 32.Nd performance tuning under FreeBSD 33.Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP 34The swap partition should typically be approximately 2x the size of 35main memory 36for systems with less than 4GB of RAM, or approximately equal to 37the size of main memory 38if you have more. 39Keep in mind future memory 40expansion when sizing the swap partition. 41Configuring too little swap can lead 42to inefficiencies in the VM page scanning code as well as create issues 43later on if you add more memory to your machine. 44On larger systems 45with multiple disks, configure swap on each drive. 46The swap partitions on the drives should be approximately the same size. 47The kernel can handle arbitrary sizes but 48internal data structures scale to 4 times the largest swap partition. 49Keeping 50the swap partitions near the same size will allow the kernel to optimally 51stripe swap space across the N disks. 52Do not worry about overdoing it a 53little, swap space is the saving grace of 54.Ux 55and even if you do not normally use much swap, it can give you more time to 56recover from a runaway program before being forced to reboot. 57.Pp 58It is not a good idea to make one large partition. 59First, 60each partition has different operational characteristics and separating them 61allows the file system to tune itself to those characteristics. 62For example, 63the root and 64.Pa /usr 65partitions are read-mostly, with very little writing, while 66a lot of reading and writing could occur in 67.Pa /var/tmp . 68By properly 69partitioning your system fragmentation introduced in the smaller more 70heavily write-loaded partitions will not bleed over into the mostly-read 71partitions. 72.Pp 73Properly partitioning your system also allows you to tune 74.Xr newfs 8 , 75and 76.Xr tunefs 8 77parameters. 78The only 79.Xr tunefs 8 80option worthwhile turning on is 81.Em softupdates 82with 83.Dq Li "tunefs -n enable /filesystem" . 84Softupdates drastically improves meta-data performance, mainly file 85creation and deletion. 
We recommend enabling softupdates on most file systems; however, there
are two limitations to softupdates that you should be aware of when
determining whether to use it on a file system.
First, softupdates guarantees file system consistency in the
case of a crash but could very easily be several seconds (even a minute!\&)
behind on pending writes to the physical disk.
If you crash you may lose more work
than otherwise.
Secondly, softupdates delays the freeing of file system
blocks.
If you have a file system (such as the root file system) which is
close to full, doing a major update of it, e.g.,\&
.Dq Li "make installworld" ,
can run it out of space and cause the update to fail.
For this reason, softupdates will not be enabled on the root file system
during a typical install.
There is no loss of performance since the root
file system is rarely written to.
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Only use this option in conjunction with
.Xr gjournal 8 ,
as it is far too dangerous on a normal file system.
A less dangerous and more
useful
.Xr mount 8
option is called
.Cm noatime .
.Ux
file systems normally update the last-accessed time of a file or
directory whenever it is accessed.
This operation is handled in
.Fx
with a delayed write and normally does not create a burden on the system.
However, if your system is accessing a huge number of files on a continuing
basis the buffer cache can wind up getting polluted with atime updates,
creating a burden on the system.
For example, if you are running a heavily
loaded web site, or a news server with lots of readers, you might want to
consider turning off atime updates on your larger partitions with this
.Xr mount 8
option.
However, you should not gratuitously turn off atime
updates everywhere.
For example, the
.Pa /var
file system customarily
holds mailboxes, and atime (in combination with mtime) is used to
determine whether a mailbox has new mail.
You might as well leave
atime turned on for mostly read-only partitions such as
.Pa /
and
.Pa /usr .
This is especially useful for
.Pa /
since some system utilities
use the atime field for reporting.
.Sh STRIPING DISKS
In larger systems you can stripe partitions from several drives together
to create a much larger overall partition.
Striping can also improve
the performance of a file system by splitting I/O operations across two
or more disks.
The
.Xr gstripe 8 ,
.Xr gvinum 8 ,
and
.Xr ccdconfig 8
utilities may be used to create simple striped file systems.
Generally
speaking, striping smaller partitions such as the root and
.Pa /var/tmp ,
or essentially read-only partitions such as
.Pa /usr
is a complete waste of time.
You should only stripe partitions that require serious I/O performance,
typically
.Pa /var , /home ,
or custom partitions used to hold databases and web pages.
Choosing the proper stripe size is also
important.
File systems tend to store meta-data on power-of-2 boundaries
and you usually want to reduce seeking rather than increase seeking.
This
means you want to use a large off-center stripe size such as 1152 sectors
so sequential I/O does not seek both disks and so meta-data is distributed
across both disks rather than concentrated on a single disk.
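.Pp
As an illustrative sketch (device names, the label, and the stripe size are
examples only), a two-disk stripe could be created and formatted with:
.Bd -literal -offset indent
# Create a stripe named "st0" with a 1152-sector (589824-byte) stripe size.
gstripe label -v -s 589824 st0 /dev/da0 /dev/da1
# Create a UFS file system with softupdates on the new device.
newfs -U /dev/stripe/st0
.Ed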
.Sh SYSCTL TUNING
.Xr sysctl 8
variables permit system behavior to be monitored and controlled at
run-time.
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
.Xr rc.conf 5 ,
but most will be set via
.Xr sysctl.conf 5 .
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
on the system.
.Pp
The
.Va vm.overcommit
sysctl defines the overcommit behaviour of the vm subsystem.
The virtual memory system always does accounting of the swap space
reservation, both total for the system and per-user.
Corresponding values
are available through the sysctls
.Va vm.swap_total ,
which gives the total bytes available for swapping, and
.Va vm.swap_reserved ,
which gives the number of bytes that may be needed to back all currently
allocated anonymous memory.
.Pp
Setting bit 0 of the
.Va vm.overcommit
sysctl causes the virtual memory system to return failure
to the process when allocation of memory causes
.Va vm.swap_reserved
to exceed
.Va vm.swap_total .
Bit 1 of the sysctl enforces the
.Dv RLIMIT_SWAP
limit
(see
.Xr getrlimit 2 ) .
Root is exempt from this limit.
Bit 2 allows counting most of the physical
memory as allocatable, except wired and free reserved pages
(accounted for by the
.Va vm.stats.vm.v_wire_count
and
.Va vm.stats.vm.v_free_target
sysctls, respectively).
.Pp
The
.Va kern.ipc.maxpipekva
loader tunable is used to set a hard limit on the
amount of kernel address space allocated to mapping of pipe buffers.
Use of the mapping allows the kernel to eliminate a copy of the
data from writer address space into the kernel, directly copying
the content of the mapped buffer to the reader.
Increasing this value to a higher setting, such as `25165824',
might improve performance on systems where space for mapping pipe buffers
is quickly exhausted.
This exhaustion is not fatal, however; it will only cause pipes
to fall back to using double-copy.
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 0 (off) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
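.Pp
As an illustrative sketch (the values shown are examples, not
recommendations), these knobs can be inspected and set at run-time with
.Xr sysctl 8 :
.Bd -literal -offset indent
# Inspect the current swap accounting.
sysctl vm.swap_total vm.swap_reserved
# Bit 0: fail allocations that would push vm.swap_reserved past vm.swap_total.
sysctl vm.overcommit=1
# Wire System V shared memory segments into physical RAM.
sysctl kern.ipc.shm_use_phys=1
.Ed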
.Pp
The
.Va vfs.vmiodirenable
sysctl defaults to 1 (on).
This parameter controls how directories are cached
by the system.
Most directories are small and use but a single fragment
(typically 2K) in the file system and even less (typically 512 bytes) in
the buffer cache.
However, when operating in the default mode the buffer
cache will only cache a fixed number of directories even if you have a huge
amount of memory.
Turning on this sysctl allows the buffer cache to use
the VM Page Cache to cache the directories.
The advantage is that all of
memory is now available for caching directories.
The disadvantage is that
the minimum in-core memory used to cache a directory is the physical page
size (typically 4K) rather than 512 bytes.
We recommend turning this option off in memory-constrained environments;
however, when on, it will substantially improve the performance of services
that manipulate a large number of files.
Such services can include web caches, large mail systems, and news systems.
Turning on this option will generally not reduce performance even with the
wasted memory but you should experiment to find out.
.Pp
The
.Va vfs.write_behind
sysctl defaults to 1 (on).
This tells the file system to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files.
The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance.
However,
this may stall processes and under certain circumstances you may wish to turn
it off.
.Pp
The
.Va vfs.hirunningspace
sysctl determines how much outstanding write I/O may be queued to
disk controllers system-wide at any given time.
It is used by the UFS file system.
The default is self-tuned and
usually sufficient, but on machines with advanced controllers and lots
of disks this may be tuned up to match what the controllers buffer.
Configuring this setting to match the tagged queuing capabilities of
controllers or drives, given the average IO size used in production, works
best (for example: 16 MiB will use 128 tags with IO requests of 128 KiB).
Note that setting too high a value
(exceeding the buffer cache's write threshold) can lead to extremely
bad clustering performance.
Do not set this value arbitrarily high!
Higher write queuing values may also add latency to reads occurring at
the same time.
.Pp
The
.Va vfs.read_max
sysctl governs VFS read-ahead and is expressed as the number of blocks
to pre-read if the heuristics algorithm decides that the reads are
issued sequentially.
It is used by the UFS, ext2fs and msdosfs file systems.
With the default UFS block size of 32 KiB, a setting of 64 will allow
speculatively reading up to 2 MiB.
This setting may be increased to get around disk I/O latencies, especially
where these latencies are large such as in virtual machine emulated
environments.
It may be tuned down in specific cases where the I/O load is such that
read-ahead adversely affects performance or where system memory is really
low.
.Pp
The
.Va vfs.ncsizefactor
sysctl defines how large the VFS namecache may grow.
The number of currently allocated entries in the namecache is provided by the
.Va debug.numcache
sysctl and the condition
debug.numcache < kern.maxvnodes * vfs.ncsizefactor
is adhered to.
.Pp
The
.Va vfs.ncnegfactor
sysctl defines how many negative entries the VFS namecache is allowed to
create.
The number of currently allocated negative entries is provided by the
.Va debug.numneg
sysctl and the condition
vfs.ncnegfactor * debug.numneg < debug.numcache
is adhered to.
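.Pp
As an illustrative sketch (the values are examples, not recommendations),
such settings can be made persistent in
.Xr sysctl.conf 5 :
.Bd -literal -offset indent
# /etc/sysctl.conf
# Allow 16 MiB of outstanding write I/O, as in the example above.
vfs.hirunningspace=16777216
# With the default 32 KiB UFS block size, 128 blocks is up to 4 MiB read-ahead.
vfs.read_max=128
.Ed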
.Pp
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying these values.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
They control the amount of send and receive buffer space
allowed for any given TCP connection.
The default sending buffer is 32K; the default receiving buffer
is 64K.
You can often
improve bandwidth utilization by increasing the default at the cost of
eating up more kernel memory for each connection.
We do not recommend
increasing the defaults if you are serving hundreds or thousands of
simultaneous connections because it is possible to quickly run the system
out of memory due to stalled connections building up.
But if you need
high bandwidth over a smaller number of connections, especially if you have
gigabit Ethernet, increasing these defaults can make a huge difference.
You can adjust the buffer size for incoming and outgoing data separately.
For example, if your machine is primarily doing web serving you may want
to decrease the recvspace in order to be able to increase the
sendspace without eating too much kernel memory.
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
.Pp
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will result
in only a marginal performance improvement unless both hosts support the
window scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC1323 support is enabled by default.
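.Pp
As a sketch, a host pushing high bandwidth over a small number of connections
might raise the defaults in
.Xr sysctl.conf 5
(the values below are illustrative only):
.Bd -literal -offset indent
# /etc/sysctl.conf
# Larger per-connection TCP buffers; window scaling must be enabled.
net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072
.Ed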
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is enabled for all applications; by setting this
sysctl to 0, only applications that specifically request keepalives
will use them.
In most environments, TCP keepalives will improve the management of
system state by expiring dead TCP connections, particularly for
systems serving dialup users who may not always terminate individual
TCP connections before disconnecting from the network.
However, in some environments, temporary network outages may be
incorrectly identified as dead sessions, resulting in unexpectedly
terminated TCP connections.
In such environments, setting the sysctl to 0 may reduce the occurrence of
TCP session disconnections.
.Pp
The
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood.
Historically speaking, this feature
was designed to allow the acknowledgement of transmitted data to be returned
along with the response.
For example, when you type over a remote shell,
the acknowledgement of the character you send can be returned along with the
data representing the echo of the character.
With delayed acks turned off,
the acknowledgement may be sent in its own packet, before the remote service
has a chance to echo the data it just received.
This same concept also
applies to any interactive protocol (e.g.,\& SMTP, WWW, POP3), and can cut the
number of tiny packets flowing across the network in half.
The
.Fx
delayed ACK implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 40ms
timeout has not yet passed.
Normally the worst a delayed ACK can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection.
While we are not sure, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks may be referring to the slow-start issue.
.Pp
The
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets.
There are three ranges: a low range, a default range, and a
high range, selectable via the
.Dv IP_PORTRANGE
.Xr setsockopt 2
call.
Most
network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
and
.Va net.inet.ip.portrange.last ,
which default to 49152 and 65535, respectively.
Bound port ranges are
used for outgoing connections, and it is possible to run the system out
of ports under certain circumstances.
This most commonly occurs when you are
running a heavily loaded web proxy.
The port range is not an issue
when running a server which handles mainly incoming connections, such as a
normal web server, or has a limited number of outgoing connections, such
as a mail relay.
For situations where you may run out of ports,
we recommend decreasing
.Va net.inet.ip.portrange.first
modestly.
A range of 10000 to 30000 ports may be reasonable.
You should also consider firewall effects when changing the port range.
Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections.
By default
.Va net.inet.ip.portrange.last
is set at the maximum allowable port number.
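.Pp
For example, a heavily loaded web proxy might widen the default range in
.Xr sysctl.conf 5
(the value below is illustrative):
.Bd -literal -offset indent
# /etc/sysctl.conf
# Start the default outgoing port range lower to make more ports available.
net.inet.ip.portrange.first=10000
.Ed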
.Pp
The
.Va kern.ipc.soacceptqueue
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
The service daemon
may itself limit the listen queue size (e.g.,\&
.Xr sendmail 8 ,
apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
.Pp
The
.Va kern.maxfiles
sysctl determines how many open files the system supports.
The default is
typically a few thousand but you may need to bump this up to ten or twenty
thousand if you are running databases or large descriptor-heavy daemons.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine the current number of open files
on the system.
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
.Va kern.maxusers
controls the scaling of a number of static system tables, including defaults
for the maximum number of open files, sizing of network memory resources, etc.
.Va kern.maxusers
is automatically sized at boot based on the amount of memory available in
the system, and may be determined at run-time by inspecting the value of the
read-only
.Va kern.maxusers
sysctl.
.Pp
The
.Va kern.dfldsiz
and
.Va kern.dflssiz
tunables set the default soft limits for process data and stack size
respectively.
Processes may increase these up to the hard limits by calling
.Xr setrlimit 2 .
The
.Va kern.maxdsiz ,
.Va kern.maxssiz ,
and
.Va kern.maxtsiz
tunables set the hard limits for process data, stack, and text size
respectively; processes may not exceed these limits.
The
.Va kern.sgrowsiz
tunable controls how much the stack segment will grow when a process
needs to allocate more stack.
.Pp
.Va kern.ipc.nmbclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
You can do a simple calculation to figure out how many you need.
If you have a web server which maxes out at 1000 simultaneous connections,
and each connection eats a 16K receive and 16K send buffer, you need
approximately 32MB worth of network buffers to deal with it.
A good rule of
thumb is to multiply by 2, so 32MBx2 = 64MB/2K = 32768.
So for this case
you would want to set
.Va kern.ipc.nmbclusters
to 32768.
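.Pp
Continuing the example above, the value would be set at boot in
.Xr loader.conf 5
(the figure comes from the illustrative calculation, not a recommendation):
.Bd -literal -offset indent
# /boot/loader.conf
# 32768 clusters x 2K = 64MB of kernel memory reserved for network buffers.
kern.ipc.nmbclusters="32768"
.Ed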
More generally, we recommend values between
1024 and 4096 for machines with moderate amounts of memory, and between 4096
and 32768 for machines with greater amounts of memory.
Under no circumstances
should you specify an arbitrarily high value for this parameter, as it could
lead to a boot-time crash.
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
.Pp
More and more programs are using the
.Xr sendfile 2
system call to transmit files over the network.
The
.Va kern.ipc.nsfbufs
sysctl controls the number of file system buffers
.Xr sendfile 2
is allowed to use to perform its work.
This parameter nominally scales
with
.Va kern.maxusers
so you should not need to modify this parameter except under extreme
circumstances.
See the
.Sx TUNING
section in the
.Xr sendfile 2
manual page for details.
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally the first thing you do when creating your own custom
kernel is to strip out all the drivers and services you do not use.
Removing things like
.Dv INET6
and drivers you do not have will reduce the size of your kernel, sometimes
by a megabyte or more, leaving more memory available for applications.
.Pp
.Dv SCSI_DELAY
may be used to reduce system boot times.
The defaults are fairly high and
can be responsible for 5+ seconds of delay in the boot process.
Reducing
.Dv SCSI_DELAY
to something below 5 seconds could work (especially with modern drives).
.Pp
There are a number of
.Dv *_CPU
options that can be commented out.
If you only want the kernel to run
on a Pentium class CPU, you can easily remove
.Dv I486_CPU ,
but only remove
.Dv I586_CPU
if you are sure your CPU is being recognized as a Pentium II or better.
Some clones may be recognized as a Pentium or even a 486 and not be able
to boot without those options.
If it works, great!
The operating system
will be able to better use higher-end CPU features for MMU, task switching,
timebase, and even device operations.
Additionally, higher-end CPUs support
4MB MMU pages, which the kernel uses to map the kernel itself into memory,
increasing its efficiency under heavy syscall loads.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU
or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
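.Pp
As a starting point, the stock monitoring tools can show where the bottleneck
lies (the invocations below are illustrative):
.Bd -literal -offset indent
# CPU, paging, interrupt, and disk activity, refreshed every 2 seconds.
systat -vmstat 2
# Per-device disk throughput and busy percentages.
iostat -x 2
# Network mbuf and cluster usage.
netstat -m
.Ed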
.Pp
Finally, you might run out of network suds.
Optimize the network path
as much as possible.
For example, in
.Xr firewall 7
we describe a firewall protecting internal hosts with a topology where
the externally visible hosts are not routed through it.
Most bottlenecks occur at the WAN link.
If expanding the link is not an option it may be possible to use the
.Xr dummynet 4
feature to implement peak shaving or other forms of traffic shaping to
prevent the overloaded service (such as web services) from affecting other
services (such as email), or vice versa.
In home installations this could
be used to give interactive traffic (your browser,
.Xr ssh 1
logins) priority
over services you export from your box (web services, email).
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr systat 1 ,
.Xr sendfile 2 ,
.Xr ata 4 ,
.Xr dummynet 4 ,
.Xr eventtimers 4 ,
.Xr login.conf 5 ,
.Xr rc.conf 5 ,
.Xr sysctl.conf 5 ,
.Xr firewall 7 ,
.Xr hier 7 ,
.Xr ports 7 ,
.Xr boot 8 ,
.Xr bsdinstall 8 ,
.Xr ccdconfig 8 ,
.Xr config 8 ,
.Xr fsck 8 ,
.Xr gjournal 8 ,
.Xr gpart 8 ,
.Xr gstripe 8 ,
.Xr gvinum 8 ,
.Xr ifconfig 8 ,
.Xr ipfw 8 ,
.Xr loader 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr route 8 ,
.Xr sysctl 8 ,
.Xr tunefs 8
.Sh HISTORY
The
.Nm
manual page was originally written by
.An Matthew Dillon
and first appeared
in
.Fx 4.3 ,
May 2001.
The manual page was greatly modified by
.An Eitan Adler Aq Mt eadler@FreeBSD.org .