1.\" Copyright (C) 2001 Matthew Dillon. All rights reserved. 2.\" 3.\" Redistribution and use in source and binary forms, with or without 4.\" modification, are permitted provided that the following conditions 5.\" are met: 6.\" 1. Redistributions of source code must retain the above copyright 7.\" notice, this list of conditions and the following disclaimer. 8.\" 2. Redistributions in binary form must reproduce the above copyright 9.\" notice, this list of conditions and the following disclaimer in the 10.\" documentation and/or other materials provided with the distribution. 11.\" 12.\" THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND 13.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 14.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 15.\" ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE 16.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 17.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 18.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 19.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 20.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 21.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 22.\" SUCH DAMAGE. 23.\" 24.\" $FreeBSD$ 25.\" 26.Dd October 30, 2017 27.Dt TUNING 7 28.Os 29.Sh NAME 30.Nm tuning 31.Nd performance tuning under FreeBSD 32.Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP 33The swap partition should typically be approximately 2x the size of 34main memory 35for systems with less than 4GB of RAM, or approximately equal to 36the size of main memory 37if you have more. 38Keep in mind future memory 39expansion when sizing the swap partition. 40Configuring too little swap can lead 41to inefficiencies in the VM page scanning code as well as create issues 42later on if you add more memory to your machine. 43On larger systems 44with multiple disks, configure swap on each drive. 45The swap partitions on the drives should be approximately the same size. 46The kernel can handle arbitrary sizes but 47internal data structures scale to 4 times the largest swap partition. 48Keeping 49the swap partitions near the same size will allow the kernel to optimally 50stripe swap space across the N disks. 51Do not worry about overdoing it a 52little, swap space is the saving grace of 53.Ux 54and even if you do not normally use much swap, it can give you more time to 55recover from a runaway program before being forced to reboot. 56.Pp 57It is not a good idea to make one large partition. 58First, 59each partition has different operational characteristics and separating them 60allows the file system to tune itself to those characteristics. 61For example, 62the root and 63.Pa /usr 64partitions are read-mostly, with very little writing, while 65a lot of reading and writing could occur in 66.Pa /var/tmp . 67By properly 68partitioning your system fragmentation introduced in the smaller more 69heavily write-loaded partitions will not bleed over into the mostly-read 70partitions. 71.Pp 72Properly partitioning your system also allows you to tune 73.Xr newfs 8 , 74and 75.Xr tunefs 8 76parameters. 77The only 78.Xr tunefs 8 79option worthwhile turning on is 80.Em softupdates 81with 82.Dq Li "tunefs -n enable /filesystem" . 83Softupdates drastically improves meta-data performance, mainly file 84creation and deletion. 
We recommend enabling softupdates on most file systems; however, there
are two limitations to softupdates that you should be aware of when
determining whether to use it on a file system.
First, softupdates guarantees file system consistency in the
case of a crash but could very easily be several seconds (even a minute!\&)
behind on pending writes to the physical disk.
If you crash you may lose more work
than otherwise.
Second, softupdates delays the freeing of file system
blocks.
If you have a file system (such as the root file system) which is
close to full, doing a major update of it, e.g.,\&
.Dq Li "make installworld" ,
can run it out of space and cause the update to fail.
For this reason, softupdates will not be enabled on the root file system
during a typical install.
There is no loss of performance since the root
file system is rarely written to.
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Only use this option in conjunction with
.Xr gjournal 8 ,
as it is far too dangerous on a normal file system.
A less dangerous and more
useful
.Xr mount 8
option is called
.Cm noatime .
.Ux
file systems normally update the last-accessed time of a file or
directory whenever it is accessed.
This operation is handled in
.Fx
with a delayed write and normally does not create a burden on the system.
However, if your system is accessing a huge number of files on a continuing
basis the buffer cache can wind up getting polluted with atime updates,
creating a burden on the system.
For example, if you are running a heavily
loaded web site, or a news server with lots of readers, you might want to
consider turning off atime updates on your larger partitions with this
.Xr mount 8
option.
However, you should not gratuitously turn off atime
updates everywhere.
For example, the
.Pa /var
file system customarily
holds mailboxes, and atime (in combination with mtime) is used to
determine whether a mailbox has new mail.
You should also leave
atime turned on for mostly read-only partitions such as
.Pa /
and
.Pa /usr .
This is especially useful for
.Pa /
since some system utilities
use the atime field for reporting.
.Sh STRIPING DISKS
In larger systems you can stripe partitions from several drives together
to create a much larger overall partition.
Striping can also improve
the performance of a file system by splitting I/O operations across two
or more disks.
The
.Xr gstripe 8 ,
.Xr gvinum 8 ,
and
.Xr ccdconfig 8
utilities may be used to create simple striped file systems.
Generally
speaking, striping smaller partitions such as the root and
.Pa /var/tmp ,
or essentially read-only partitions such as
.Pa /usr
is a complete waste of time.
You should only stripe partitions that require serious I/O performance,
typically
.Pa /var , /home ,
or custom partitions used to hold databases and web pages.
Choosing the proper stripe size is also
important.
File systems tend to store meta-data on power-of-2 boundaries
and you usually want to reduce seeking rather than increase it.
This
means you want to use a large off-center stripe size such as 1152 sectors
so sequential I/O does not seek both disks and so meta-data is distributed
across both disks rather than concentrated on a single disk.
.Sh SYSCTL TUNING
.Xr sysctl 8
variables permit system behavior to be monitored and controlled at
run-time.
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
.Xr rc.conf 5 ,
but most will be set via
.Xr sysctl.conf 5 .
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
on the system.
.Pp
The
.Va vm.overcommit
sysctl defines the overcommit behavior of the VM subsystem.
The virtual memory system always does accounting of the swap space
reservation, both total for the system and per-user.
Corresponding values
are available through the sysctls
.Va vm.swap_total ,
which gives the total bytes available for swapping, and
.Va vm.swap_reserved ,
which gives the number of bytes that may be needed to back all currently
allocated anonymous memory.
.Pp
Setting bit 0 of the
.Va vm.overcommit
sysctl causes the virtual memory system to return failure
to the process when allocation of memory causes
.Va vm.swap_reserved
to exceed
.Va vm.swap_total .
Bit 1 of the sysctl enforces the
.Dv RLIMIT_SWAP
limit
(see
.Xr getrlimit 2 ) .
Root is exempt from this limit.
Bit 2 allows counting most of the physical
memory as allocatable, except free reserved and wired pages
(accounted by the
.Va vm.stats.vm.v_free_target
and
.Va vm.stats.vm.v_wire_count
sysctls, respectively).
.Pp
The
.Va kern.ipc.maxpipekva
loader tunable is used to set a hard limit on the
amount of kernel address space allocated to mapping of pipe buffers.
Use of the mapping allows the kernel to eliminate a copy of the
data from writer address space into the kernel, directly copying
the content of the mapped buffer to the reader.
Increasing this value to a higher setting, such as 25165824, might
improve performance on systems where space for mapping pipe buffers
is quickly exhausted.
This exhaustion is not fatal, however; it will only cause pipes
to fall back to using double-copy.
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 0 (off) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
.Pp
The
.Va vfs.vmiodirenable
sysctl defaults to 1 (on).
This parameter controls how directories are cached
by the system.
Most directories are small and use but a single fragment
(typically 2K) in the file system and even less (typically 512 bytes) in
the buffer cache.
However, when operating in the default mode the buffer
cache will only cache a fixed number of directories even if you have a huge
amount of memory.
Turning on this sysctl allows the buffer cache to use
the VM Page Cache to cache the directories.
The advantage is that all of
memory is now available for caching directories.
The disadvantage is that
the minimum in-core memory used to cache a directory is the physical page
size (typically 4K) rather than 512 bytes.
We recommend turning this option off in memory-constrained environments;
however, when on, it will substantially improve the performance of services
that manipulate a large number of files.
Such services can include web caches, large mail systems, and news systems.
Turning on this option will generally not reduce performance even with the
wasted memory but you should experiment to find out.
.Pp
The
.Va vfs.write_behind
sysctl defaults to 1 (on).
This tells the file system to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files.
The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance.
However,
this may stall processes and under certain circumstances you may wish to turn
it off.
.Pp
The
.Va vfs.hirunningspace
sysctl determines how much outstanding write I/O may be queued to
disk controllers system-wide at any given time.
It is used by the UFS file system.
The default is self-tuned and
usually sufficient but on machines with advanced controllers and lots
of disks this may be tuned up to match what the controllers can buffer.
Configuring this setting to match the tagged queuing capabilities of the
controllers or drives, multiplied by the average I/O size used in production,
works best (for example: 16 MiB will use 128 tags with I/O requests of
128 KiB).
Note that setting too high a value
(exceeding the buffer cache's write threshold) can lead to extremely
bad clustering performance.
Do not set this value arbitrarily high!
Higher write queuing values may also add latency to reads occurring at
the same time.
.Pp
The
.Va vfs.read_max
sysctl governs VFS read-ahead and is expressed as the number of blocks
to pre-read if the heuristics algorithm decides that the reads are
issued sequentially.
It is used by the UFS, ext2fs and msdosfs file systems.
With the default UFS block size of 32 KiB, a setting of 64 will allow
speculatively reading up to 2 MiB.
This setting may be increased to get around disk I/O latencies, especially
where these latencies are large such as in virtual machine emulated
environments.
It may be tuned down in specific cases where the I/O load is such that
read-ahead adversely affects performance or where system memory is really
low.
.Pp
The
.Va vfs.ncsizefactor
sysctl defines how large the VFS namecache may grow.
The number of currently allocated entries in the namecache is provided by
the
.Va debug.numcache
sysctl and the condition
debug.numcache < kern.maxvnodes * vfs.ncsizefactor
is adhered to.
.Pp
The
.Va vfs.ncnegfactor
sysctl defines how many negative entries the VFS namecache is allowed to
create.
The number of currently allocated negative entries is provided by
the
.Va debug.numneg
sysctl and the condition
vfs.ncnegfactor * debug.numneg < debug.numcache
is adhered to.
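.Pp
Sysctls such as these may be changed at run-time with
.Xr sysctl 8
and made persistent across reboots via
.Xr sysctl.conf 5 .
A minimal sketch follows, using
.Va vfs.read_max
purely as an illustration; the value shown is not a recommendation:
.Bd -literal -offset indent
# change the value for the running system
sysctl vfs.read_max=128

# or persist it by adding a line to /etc/sysctl.conf
vfs.read_max=128
.Ed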
.Pp
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying these values.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
They control the amount of send and receive buffer space
allowed for any given TCP connection.
The default sending buffer is 32K; the default receiving buffer
is 64K.
You can often
improve bandwidth utilization by increasing the default at the cost of
eating up more kernel memory for each connection.
We do not recommend
increasing the defaults if you are serving hundreds or thousands of
simultaneous connections because it is possible to quickly run the system
out of memory due to stalled connections building up.
But if you need
high bandwidth over a smaller number of connections, especially if you have
gigabit Ethernet, increasing these defaults can make a huge difference.
You can adjust the buffer size for incoming and outgoing data separately.
For example, if your machine is primarily doing web serving you may want
to decrease the recvspace in order to be able to increase the
sendspace without eating too much kernel memory.
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
.Pp
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will
result in only a marginal performance improvement unless both hosts support
the window scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC1323 support is enabled by default.
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is enabled for all applications; by setting this
sysctl to 0, only applications that specifically request keepalives
will use them.
In most environments, TCP keepalives will improve the management of
system state by expiring dead TCP connections, particularly for
systems serving dialup users who may not always terminate individual
TCP connections before disconnecting from the network.
However, in some environments, temporary network outages may be
incorrectly identified as dead sessions, resulting in unexpectedly
terminated TCP connections.
In such environments, setting the sysctl to 0 may reduce the occurrence of
TCP session disconnections.
.Pp
The
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood.
Historically speaking, this feature
was designed to allow the acknowledgement of transmitted data to be returned
along with the response.
For example, when you type over a remote shell,
the acknowledgement of the character you send can be returned along with the
data representing the echo of the character.
With delayed acks turned off,
the acknowledgement may be sent in its own packet, before the remote service
has a chance to echo the data it just received.
This same concept also
applies to any interactive protocol (e.g.,\& SMTP, WWW, POP3), and can cut the
number of tiny packets flowing across the network in half.
The
.Fx
delayed ACK implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 100ms
timeout has not yet passed.
Normally the worst a delayed ACK can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection.
While we are not sure, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks may be referring to the slow-start issue.
.Pp
The
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets.
There are three ranges: a low range, a default range, and a
high range, selectable via the
.Dv IP_PORTRANGE
.Xr setsockopt 2
call.
Most
network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
and
.Va net.inet.ip.portrange.last ,
which default to 49152 and 65535, respectively.
Bound port ranges are
used for outgoing connections, and it is possible to run the system out
of ports under certain circumstances.
This most commonly occurs when you are
running a heavily loaded web proxy.
The port range is not an issue
when running a server which handles mainly incoming connections, such as a
normal web server, or has a limited number of outgoing connections, such
as a mail relay.
For situations where you may run out of ports,
we recommend decreasing
.Va net.inet.ip.portrange.first
modestly.
A range of 10000 to 30000 ports may be reasonable.
You should also consider firewall effects when changing the port range.
Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections.
By default
.Va net.inet.ip.portrange.last
is set at the maximum allowable port number.
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
The service daemon
may itself limit the listen queue size (e.g.,\&
.Xr sendmail 8 ,
apache) but will
often have a directive in its configuration file to adjust the queue size up.
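For example, such an environment might carry the following in
.Xr sysctl.conf 5 ;
the value matches the recommendation above but should be adjusted to the
workload:
.Bd -literal -offset indent
kern.ipc.somaxconn=1024
.Ed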
Larger listen queues also do a better job of fending off denial of service
attacks.
.Pp
The
.Va kern.maxfiles
sysctl determines how many open files the system supports.
The default is
typically a few thousand but you may need to bump this up to ten or twenty
thousand if you are running databases or large descriptor-heavy daemons.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine the current number of open files
on the system.
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm would.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
.Va kern.maxusers
controls the scaling of a number of static system tables, including defaults
for the maximum number of open files, sizing of network memory resources, etc.
.Va kern.maxusers
is automatically sized at boot based on the amount of memory available in
the system, and may be determined at run-time by inspecting the value of the
read-only
.Va kern.maxusers
sysctl.
.Pp
The
.Va kern.dfldsiz
and
.Va kern.dflssiz
tunables set the default soft limits for process data and stack size
respectively.
Processes may increase these up to the hard limits by calling
.Xr setrlimit 2 .
The
.Va kern.maxdsiz ,
.Va kern.maxssiz ,
and
.Va kern.maxtsiz
tunables set the hard limits for process data, stack, and text size
respectively; processes may not exceed these limits.
The
.Va kern.sgrowsiz
tunable controls how much the stack segment will grow when a process
needs to allocate more stack.
.Pp
.Va kern.ipc.nmbclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
You can do a simple calculation to figure out how many you need.
If you have a web server which maxes out at 1000 simultaneous connections,
and each connection eats a 16K receive and 16K send buffer, you need
approximately 32MB worth of network buffers to deal with it.
A good rule of
thumb is to multiply by 2, so 32MB x 2 = 64MB, and 64MB / 2K = 32768 clusters.
So for this case
you would want to set
.Va kern.ipc.nmbclusters
to 32768.
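Like the other tunables in this section, the value is set in
.Xr loader.conf 5 ,
for example, using the figure from the calculation above purely as an
illustration:
.Bd -literal -offset indent
kern.ipc.nmbclusters="32768"
.Ed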
We recommend values between
1024 and 4096 for machines with moderate amounts of memory, and between 4096
and 32768 for machines with greater amounts of memory.
Under no circumstances
should you specify an arbitrarily high value for this parameter; it could
lead to a boot-time crash.
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
.Pp
More and more programs are using the
.Xr sendfile 2
system call to transmit files over the network.
The
.Va kern.ipc.nsfbufs
sysctl controls the number of file system buffers
.Xr sendfile 2
is allowed to use to perform its work.
This parameter nominally scales
with
.Va kern.maxusers
so you should not need to modify this parameter except under extreme
circumstances.
See the
.Sx TUNING
section in the
.Xr sendfile 2
manual page for details.
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally the first thing you do when creating your own custom
kernel is to strip out all the drivers and services you do not use.
Removing things like
.Dv INET6
and drivers you do not have will reduce the size of your kernel, sometimes
by a megabyte or more, leaving more memory available for applications.
.Pp
.Dv SCSI_DELAY
may be used to reduce system boot times.
The defaults are fairly high and
can be responsible for 5+ seconds of delay in the boot process.
Reducing
.Dv SCSI_DELAY
to something below 5 seconds could work (especially with modern drives).
.Pp
There are a number of
.Dv *_CPU
options that can be commented out.
If you only want the kernel to run
on a Pentium class CPU, you can easily remove
.Dv I486_CPU ,
but only remove
.Dv I586_CPU
if you are sure your CPU is being recognized as a Pentium II or better.
Some clones may be recognized as a Pentium or even a 486 and not be able
to boot without those options.
If it works, great!
The operating system
will be able to better use higher-end CPU features for MMU, task switching,
timebase, and even device operations.
Additionally, higher-end CPUs support
4MB MMU pages, which the kernel uses to map the kernel itself into memory,
increasing its efficiency under heavy syscall loads.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU
or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
.Pp
Finally, you might run out of network capacity.
Optimize the network path
as much as possible.
For example, in
.Xr firewall 7
we describe a firewall protecting internal hosts with a topology where
the externally visible hosts are not routed through it.
Most bottlenecks occur at the WAN link.
If expanding the link is not an option it may be possible to use the
.Xr dummynet 4
feature to implement peak shaving or other forms of traffic shaping to
prevent the overloaded service (such as web services) from affecting other
services (such as email), or vice versa.
In home installations this could
be used to give interactive traffic (your browser,
.Xr ssh 1
logins) priority
over services you export from your box (web services, email).
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr systat 1 ,
.Xr sendfile 2 ,
.Xr ata 4 ,
.Xr dummynet 4 ,
.Xr eventtimers 4 ,
.Xr login.conf 5 ,
.Xr rc.conf 5 ,
.Xr sysctl.conf 5 ,
.Xr firewall 7 ,
.Xr hier 7 ,
.Xr ports 7 ,
.Xr boot 8 ,
.Xr bsdinstall 8 ,
.Xr ccdconfig 8 ,
.Xr config 8 ,
.Xr fsck 8 ,
.Xr gjournal 8 ,
.Xr gpart 8 ,
.Xr gstripe 8 ,
.Xr gvinum 8 ,
.Xr ifconfig 8 ,
.Xr ipfw 8 ,
.Xr loader 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr route 8 ,
.Xr sysctl 8 ,
.Xr tunefs 8
.Sh HISTORY
The
.Nm
manual page was originally written by
.An Matthew Dillon
and first appeared
in
.Fx 4.3 ,
May 2001.
The manual page was greatly modified by
.An Eitan Adler Aq Mt eadler@FreeBSD.org .