xref: /freebsd/share/man/man7/tuning.7 (revision 9207b4cff7b8d483f4dd3c62266c2b58819eb7f9)
.\" Copyright (c) 2001, Matthew Dillon.  Terms and conditions are those of
.\" the BSD Copyright as specified in the file "/usr/src/COPYRIGHT" in
.\" the source tree.
.\"
.\" $FreeBSD$
.\"
.Dd May 25, 2001
.Dt TUNING 7
.Os
.Sh NAME
.Nm tuning
.Nd performance tuning under FreeBSD
.Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
When using
.Xr disklabel 8
to lay out your filesystems on a hard disk it is important to remember
that hard drives can transfer data much more quickly from outer tracks
than they can from inner tracks.
To take advantage of this you should
try to pack your smaller filesystems and swap closer to the outer tracks,
follow with the larger filesystems, and end with the largest filesystems.
It is also important to size system standard filesystems such that you
will not be forced to resize them later as you scale the machine up.
I usually create, in order, a 128M root, 1G swap, 128M
.Pa /var ,
128M
.Pa /var/tmp ,
3G
.Pa /usr ,
and use any remaining space for
.Pa /home .
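.Pp
As an illustration only, the layout described above might look like the
following plan (device names and exact sizes are assumptions, not part of
this page's recommendations):

```shell
# Hypothetical layout for a single SCSI disk (da0); adjust to your hardware.
# Smaller, write-heavy partitions go first (outer tracks); /home gets the rest.
#   da0s1a  128M  /
#   da0s1b    1G  swap
#   da0s1d  128M  /var
#   da0s1e  128M  /var/tmp
#   da0s1f    3G  /usr
#   da0s1g  rest  /home
disklabel -e da0s1    # opens the label in an editor for manual entry
```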
.Pp
You should typically size your swap space to approximately 2x main memory.
If you do not have a lot of RAM, though, you will generally want a lot
more swap.
It is not recommended that you configure any less than
256M of swap on a system and you should keep in mind future memory
expansion when sizing the swap partition.
The kernel's VM paging algorithms are tuned to perform best when there is
at least 2x swap versus main memory.
Configuring too little swap can lead
to inefficiencies in the VM page scanning code as well as create issues
later on if you add more memory to your machine.
Finally, on larger systems
with multiple SCSI disks (or multiple IDE disks operating on different
controllers), we strongly recommend that you configure swap on each drive
(up to four drives).
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
Keeping
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Don't worry about overdoing it a
little; swap space is the saving grace of
.Ux
and even if you don't normally use much swap, it can give you more time to
recover from a runaway program before being forced to reboot.
.Pp
How you size your
.Pa /var
partition depends heavily on what you intend to use the machine for.
This
partition is primarily used to hold mailboxes, the print spool, and log
files.
Some people even make
.Pa /var/log
its own partition (but except for extreme cases it isn't worth the waste
of a partition ID).
If your machine is intended to act as a mail
or print server,
or you are running a heavily visited web server, you should consider
creating a much larger partition \(en perhaps a gig or more.
It is very easy
to underestimate log file storage requirements.
.Pp
Sizing
.Pa /var/tmp
depends on the kind of temporary file usage you think you will need.
128M is
the minimum we recommend.
Also note that sysinstall will create a
.Pa /tmp
directory, but it is usually a good idea to make
.Pa /tmp
a softlink to
.Pa /var/tmp
after the fact.
Dedicating a partition for temporary file storage is important for
two reasons: first, it reduces the possibility of filesystem corruption
in a crash, and second it reduces the chance of a runaway process that
fills up
.Oo Pa /var Oc Ns Pa /tmp
from blowing up more critical subsystems (mail,
logging, etc).
Filling up
.Oo Pa /var Oc Ns Pa /tmp
is a very common problem to have.
.Pp
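The softlink arrangement described above can be set up with commands along
these lines (a sketch; run as root, after verifying that
.Pa /tmp
holds nothing you need):

```shell
# Replace /tmp with a symlink into /var/tmp so both names share one partition.
rm -rf /tmp          # caution: removes any existing temporary files
ln -s /var/tmp /tmp  # /tmp now points at /var/tmp
```
.Pp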
In the old days there were differences between
.Pa /tmp
and
.Pa /var/tmp ,
but the introduction of
.Pa /var
(and
.Pa /var/tmp )
led to massive confusion
by program writers so today programs haphazardly use one or the
other and thus no real distinction can be made between the two.
So it makes sense to have just one temporary directory.
However you handle
.Pa /tmp ,
the one thing you do not want to do is leave it sitting
on the root partition where it might cause root to fill up or possibly
corrupt root in a crash/reboot situation.
.Pp
The
.Pa /usr
partition holds the bulk of the files required to support the system and
a subdirectory within it called
.Pa /usr/local
holds the bulk of the files installed from the
.Xr ports 7
hierarchy.
If you do not use ports all that much and do not intend to keep
system source
.Pq Pa /usr/src
on the machine, you can get away with
a 1 gigabyte
.Pa /usr
partition.
However, if you install a lot of ports
(especially window managers and linux-emulated binaries), we recommend
at least a 2 gigabyte
.Pa /usr
and if you also intend to keep system source
on the machine, we recommend a 3 gigabyte
.Pa /usr .
Do not underestimate the
amount of space you will need in this partition; it can creep up and
surprise you!
.Pp
The
.Pa /home
partition is typically used to hold user-specific data.
I usually size it to the remainder of the disk.
.Pp
Why partition at all?
Why not create one big
.Pa /
partition and be done with it?
Then I don't have to worry about undersizing things!
Well, there are several reasons this isn't a good idea.
First,
each partition has different operational characteristics and separating them
allows the filesystem to tune itself to those characteristics.
For example,
the root and
.Pa /usr
partitions are read-mostly, with very little writing, while
a lot of reading and writing could occur in
.Pa /var
and
.Pa /var/tmp .
By properly
partitioning your system fragmentation introduced in the smaller more
heavily write-loaded partitions will not bleed over into the mostly-read
partitions.
Additionally, keeping the write-loaded partitions closer to
the edge of the disk (i.e. before the really big partitions instead of after
in the partition table) will increase I/O performance in the partitions
where you need it the most.
Now it is true that you might also need I/O
performance in the larger partitions, but they are so large that shifting
them more towards the edge of the disk will not lead to a significant
performance improvement whereas moving
.Pa /var
to the edge can have a huge impact.
Finally, there are safety concerns.
Having a small neat root partition that
is essentially read-only gives it a greater chance of surviving a bad crash
intact.
.Pp
Properly partitioning your system also allows you to tune
.Xr newfs 8
and
.Xr tunefs 8
parameters.
Tuning
.Xr newfs 8
requires more experience but can lead to significant improvements in
performance.
There are three parameters that are relatively safe to tune:
.Em blocksize , bytes/inode ,
and
.Em cylinders/group .
.Pp
.Fx
performs best when using 8K or 16K filesystem block sizes.
The default filesystem block size is 16K,
which provides best performance for most applications,
with the exception of those that perform random access on large files
(such as database server software).
Such applications tend to perform better with a smaller block size,
although modern disk characteristics are such that the performance
gain from using a smaller block size may not be worth consideration.
Using a block size larger than 16K
can cause fragmentation of the buffer cache and
lead to lower performance.
.Pp
The defaults may be unsuitable
for a filesystem that requires a very large number of inodes
or is intended to hold a large number of very small files.
Such a filesystem should be created with an 8K or 4K block size.
This also requires you to specify a smaller
fragment size.
We recommend always using a fragment size that is 1/8
the block size (less testing has been done on other fragment size factors).
The
.Xr newfs 8
options for this would be
.Dq Li "newfs -f 1024 -b 8192 ..." .
.Pp
If a large partition is intended to be used to hold fewer, larger files, such
as database files, you can increase the
.Em bytes/inode
ratio which reduces the number of inodes (maximum number of files and
directories that can be created) for that partition.
Decreasing the number
of inodes in a filesystem can greatly reduce
.Xr fsck 8
recovery times after a crash.
Do not use this option
unless you are actually storing large files on the partition, because if you
overcompensate you can wind up with a filesystem that has lots of free
space remaining but cannot accommodate any more files.
Using 32768, 65536, or 262144 bytes/inode is recommended.
You can go higher but
it will have only incremental effects on
.Xr fsck 8
recovery times.
For example,
.Dq Li "newfs -i 32768 ..." .
.Pp
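Putting the block size, fragment size, and bytes/inode parameters together,
the invocations might look like the sketch below (the device names are
placeholders; pick values to match your workload):

```shell
# Small-file filesystem: 8K blocks with 1K fragments (1/8 of the block size).
newfs -b 8192 -f 1024 /dev/da0s1e

# Large-file filesystem (e.g. database storage): default block size,
# but far fewer inodes via a high bytes/inode ratio.
newfs -i 65536 /dev/da0s1f
```
.Pp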
.Xr tunefs 8
may be used to further tune a filesystem.
This command can be run in
single-user mode without having to reformat the filesystem.
However, this is possibly the most abused program in the system.
Many people attempt to
increase available filesystem space by setting the min-free percentage to 0.
This can lead to severe filesystem fragmentation and we do not recommend
that you do this.
Really the only
.Xr tunefs 8
option worthwhile here is turning on
.Em softupdates
with
.Dq Li "tunefs -n enable /filesystem" .
(Note: in
.Fx
5.x
softupdates can be turned on using the
.Fl U
option to
.Xr newfs 8 ) .
Softupdates drastically improves meta-data performance, mainly file
creation and deletion.
We recommend enabling softupdates on all of your
filesystems.
There are two downsides to softupdates that you should be
aware of.
First, softupdates guarantees filesystem consistency in the
case of a crash but could very easily be several seconds (even a minute!)
behind updating the physical disk.
If you crash you may lose more work
than otherwise.
Secondly, softupdates delays the freeing of filesystem
blocks.
If you have a filesystem (such as the root filesystem) which is
close to full, doing a major update of it, e.g.\&
.Dq Li "make installworld" ,
can run it out of space and cause the update to fail.
.Pp
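Enabling softupdates as described above is a one-line operation per
filesystem; this sketch assumes you are in single-user mode (the filesystem
name is an example):

```shell
# Enable softupdates on the /usr filesystem (FreeBSD 4.x style).
tunefs -n enable /usr
# On FreeBSD 5.x it can instead be enabled at creation time:
# newfs -U /dev/da0s1f
```
.Pp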
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Don't ever use it; it is far too dangerous.
A less dangerous and more
useful
.Xr mount 8
option is called
.Cm noatime .
.Ux
filesystems normally update the last-accessed time of a file or
directory whenever it is accessed.
This operation is handled in
.Fx
with a delayed write and normally does not create a burden on the system.
However, if your system is accessing a huge number of files on a continuing
basis the buffer cache can wind up getting polluted with atime updates,
creating a burden on the system.
For example, if you are running a heavily
loaded web site, or a news server with lots of readers, you might want to
consider turning off atime updates on your larger partitions with this
.Xr mount 8
option.
However, you should not gratuitously turn off atime
updates everywhere.
For example, the
.Pa /var
filesystem customarily
holds mailboxes, and atime (in combination with mtime) is used to
determine whether a mailbox has new mail.
You might as well leave
atime turned on for mostly read-only partitions such as
.Pa /
and
.Pa /usr
as well.
This is especially useful for
.Pa /
since some system utilities
use the atime field for reporting.
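.Pp
Turning off atime updates on a busy partition is usually done in
.Pa /etc/fstab ;
the entry below is a sketch for a hypothetical web-content partition (device
and mount point are examples only):

```shell
# /etc/fstab fragment: mount a large, heavily-accessed partition with noatime.
/dev/da1s1e   /usr/local/www   ufs   rw,noatime   2   2
```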
.Sh STRIPING DISKS
In larger systems you can stripe partitions from several drives together
to create a much larger overall partition.
Striping can also improve
the performance of a filesystem by splitting I/O operations across two
or more disks.
The
.Xr vinum 8
and
.Xr ccdconfig 8
utilities may be used to create simple striped filesystems.
Generally
speaking, striping smaller partitions such as the root and
.Pa /var/tmp ,
or essentially read-only partitions such as
.Pa /usr
is a complete waste of time.
You should only stripe partitions that require serious I/O performance,
typically
.Pa /var , /home ,
or custom partitions used to hold databases and web pages.
Choosing the proper stripe size is also
important.
Filesystems tend to store meta-data on power-of-2 boundaries
and you usually want to reduce seeking rather than increase seeking.
This
means you want to use a large off-center stripe size such as 1152 sectors
so sequential I/O does not seek both disks and so meta-data is distributed
across both disks rather than concentrated on a single disk.
If
you really need to get sophisticated, we recommend using a real hardware
RAID controller from the list of
.Fx
supported controllers.
.Sh SYSCTL TUNING
There are several hundred
.Xr sysctl 8
variables in the system, including many that appear to be candidates for
tuning but actually aren't.
In this document we will only cover the ones
that have the greatest effect on the system.
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 0 (off) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
.Pp
The
.Va vfs.vmiodirenable
sysctl defaults to 1 (on).
This parameter controls how directories are cached
by the system.
Most directories are small and use but a single fragment
(typically 1K) in the filesystem and even less (typically 512 bytes) in
the buffer cache.
However, when operating in the default mode the buffer
cache will only cache a fixed number of directories even if you have a huge
amount of memory.
Turning on this sysctl allows the buffer cache to use
the VM Page Cache to cache the directories.
The advantage is that all of
memory is now available for caching directories.
The disadvantage is that
the minimum in-core memory used to cache a directory is the physical page
size (typically 4K) rather than 512 bytes.
We recommend turning this option off in memory-constrained environments;
however, when on, it will substantially improve the performance of services
that manipulate a large number of files.
Such services can include web caches, large mail systems, and news systems.
Turning on this option will generally not reduce performance even with the
wasted memory but you should experiment to find out.
.Pp
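Both sysctls discussed above can be made persistent in
.Pa /etc/sysctl.conf ;
the values shown follow the discussion and are not universal recommendations:

```shell
# /etc/sysctl.conf fragment.
kern.ipc.shm_use_phys=1   # wire SysV shared memory (e.g. for a large database)
vfs.vmiodirenable=1       # cache directories through the VM page cache (default)
```
.Pp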
There are various buffer-cache and VM page cache related sysctls.
We do not recommend modifying these values.
As of
.Fx 4.3 ,
the VM system does an extremely good job tuning itself.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
These sysctls control the amount of send and receive buffer space
allowed for any given TCP connection.
The default sending buffer is 32K; the default receiving buffer
is 64K.
You can often
improve bandwidth utilization by increasing the default at the cost of
eating up more kernel memory for each connection.
We do not recommend
increasing the defaults if you are serving hundreds or thousands of
simultaneous connections because it is possible to quickly run the system
out of memory due to stalled connections building up.
But if you need
high bandwidth over a smaller number of connections, especially if you have
gigabit ethernet, increasing these defaults can make a huge difference.
You can adjust the buffer size for incoming and outgoing data separately.
For example, if your machine is primarily doing web serving you may want
to decrease the recvspace in order to be able to increase the
sendspace without eating too much kernel memory.
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
.Pp
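The buffer sizes can be changed at runtime with
.Xr sysctl 8 ;
the sketch below skews buffers toward sending, as suggested for a web
server, with values that are illustrative only:

```shell
# Favor outgoing data on a machine that mostly serves content.
sysctl -w net.inet.tcp.sendspace=65536
sysctl -w net.inet.tcp.recvspace=16384
```
.Pp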
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
.Pp
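The T1 example above might be sketched with
.Xr dummynet 4
pipes along these lines (the rule number and the 70% figure are assumptions;
a T1 is roughly 1.5Mbit/s):

```shell
# Limit outbound web traffic to about 70% of a T1 (~1080Kbit/s).
ipfw pipe 1 config bw 1080Kbit/s
ipfw add 100 pipe 1 tcp from any 80 to any out
```
.Pp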
Setting the send or receive TCP buffer to values larger than 65535 will result
in a marginal performance improvement unless both hosts support the window
scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65535 in order to obtain good performance out of
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC1323 support is enabled by default.
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is enabled for all applications; by setting this
sysctl to 0, only applications that specifically request keepalives
will use them.
In most environments, TCP keepalives will improve the management of
system state by expiring dead TCP connections, particularly for
systems serving dialup users who may not always terminate individual
TCP connections before disconnecting from the network.
However, in some environments, temporary network outages may be
incorrectly identified as dead sessions, resulting in unexpectedly
terminated TCP connections.
In such environments, setting the sysctl to 0 may reduce the occurrence of
TCP session disconnections.
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
The service daemon
may itself limit the listen queue size (e.g.\&
.Xr sendmail 8 ,
apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
.Pp
The
.Va kern.maxfiles
sysctl determines how many open files the system supports.
The default is
typically a few thousand but you may need to bump this up to ten or twenty
thousand if you are running databases or large descriptor-heavy daemons.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine the current number of open files
on the system.
.Pp
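For a heavily loaded server, the two limits above might be raised in
.Pa /etc/sysctl.conf ;
the figures follow the text and should be sized to your workload:

```shell
# /etc/sysctl.conf fragment for a busy web or database server.
kern.ipc.somaxconn=1024   # deeper listen queue for bursts of new connections
kern.maxfiles=20000       # room for descriptor-heavy daemons
```
.Pp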
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
The
.Va kern.maxusers
tunable defaults to an incredibly low value.
For most modern machines,
you probably want to increase this value to 64, 128, or 256.
We do not
recommend going above 256 unless you need a huge number of file descriptors.
Network buffers are also affected but can be controlled with a separate
kernel option.
Do not increase maxusers just to get more network mbufs.
Systems older than
.Fx 4.4
do not have this loader tunable and require that
the kernel
.Xr config 8
option
.Cd maxusers
be set instead.
.Pp
.Va kern.ipc.nmbclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
You can do a simple calculation to figure out how many you need.
If you have a web server which maxes out at 1000 simultaneous connections,
and each connection eats a 16K receive and 16K send buffer, you need
approximately 32MB worth of network buffers to deal with it.
A good rule of
thumb is to multiply by 2, so 32MB x 2 = 64MB, and 64MB / 2K = 32768.
So for this case
you would want to set
.Va kern.ipc.nmbclusters
to 32768.
We recommend values between
1024 and 4096 for machines with moderate amounts of memory, and between 4096
and 32768 for machines with greater amounts of memory.
Under no circumstances
should you specify an arbitrarily high value for this parameter; it could
lead to a boot-time crash.
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
Older versions of
.Fx
do not have this tunable and require that the
kernel
.Xr config 8
option
.Dv NMBCLUSTERS
be set instead.
.Pp
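The rule of thumb above reduces to a little arithmetic; the sketch below
recomputes the web-server example (1000 connections, 16K each way, doubled
for safety):

```shell
# Web server example: 1000 connections, each holding a 16K send
# and a 16K receive buffer -> roughly 32MB of network buffers.
# Rule of thumb: double that (64MB), then divide by the 2K cluster size.
total_bytes=$(( 64 * 1024 * 1024 ))   # 64MB after doubling
cluster_size=2048                     # each mbuf cluster is ~2K
clusters=$(( total_bytes / cluster_size ))
echo $clusters                        # prints 32768
```

The result would then go into
.Pa /boot/loader.conf
as
.Dq Li kern.ipc.nmbclusters=32768 .
.Pp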
More and more programs are using the
.Xr sendfile 2
system call to transmit files over the network.
The
.Va kern.ipc.nsfbufs
sysctl controls the number of filesystem buffers
.Xr sendfile 2
is allowed to use to perform its work.
This parameter nominally scales
with
.Va kern.maxusers
so you should not need to modify this parameter except under extreme
circumstances.
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally the first thing you do when creating your own custom
kernel is to strip out all the drivers and services you don't use.
Removing things like
.Dv INET6
and drivers you don't have will reduce the size of your kernel, sometimes
by a megabyte or more, leaving more memory available for applications.
.Pp
.Dv SCSI_DELAY
and
.Dv IDE_DELAY
may be used to reduce system boot times.
The defaults are fairly high and
can be responsible for 15+ seconds of delay in the boot process.
Reducing
.Dv SCSI_DELAY
to 5 seconds usually works (especially with modern drives).
Reducing
.Dv IDE_DELAY
also works but you have to be a little more careful.
.Pp
There are a number of
.Dv *_CPU
options that can be commented out.
If you only want the kernel to run
on a Pentium class CPU, you can easily remove
.Dv I386_CPU
and
.Dv I486_CPU ,
but only remove
.Dv I586_CPU
if you are sure your CPU is being recognized as a Pentium II or better.
Some clones may be recognized as a Pentium or even a 486 and not be able
to boot without those options.
If it works, great!
The operating system
will be able to make better use of higher-end CPU features for MMU, task
switching, timebase, and even device operations.
Additionally, higher-end CPUs support
4MB MMU pages which the kernel uses to map the kernel itself into memory,
which increases its efficiency under heavy syscall loads.
.Sh IDE WRITE CACHING
.Fx 4.3
flirted with turning off IDE write caching.
This reduced write bandwidth
to IDE disks but was considered necessary due to serious data consistency
issues introduced by hard drive vendors.
Basically the problem is that
IDE drives lie about when a write completes.
With IDE write caching turned
on, IDE hard drives will not only write data to disk out of order, they
will sometimes delay some of the blocks indefinitely when under heavy disk
loads.
A crash or power failure can result in serious filesystem
corruption.
So our default was changed to be safe.
Unfortunately, the
result was such a huge loss in performance that we caved in and changed the
default back to on after the release.
You should check the default on
your system by observing the
.Va hw.ata.wc
sysctl variable.
If IDE write caching is turned off, you can turn it back
on by setting the
.Va hw.ata.wc
kernel variable back to 1.
This must be done from the boot
.Xr loader 8
at boot time.
Attempting to do it after the kernel boots will have no effect.
Please see
.Xr ata 4
and
.Xr loader 8 .
.Pp
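Because the variable must be set before the kernel boots, it goes in
.Pa /boot/loader.conf
rather than
.Pa /etc/sysctl.conf :

```shell
# /boot/loader.conf fragment: re-enable IDE write caching at boot.
hw.ata.wc="1"
```
.Pp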
There is a new experimental feature for IDE hard drives called
.Va hw.ata.tags
(you also set this in the boot loader) which allows write caching to be safely
turned on.
This brings SCSI tagging features to IDE drives.
As of this
writing only IBM DPTA and DTLA drives support the feature.
Warning!
These
drives apparently have quality control problems and I do not recommend
purchasing them at this time.
If you need performance, go with SCSI.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU or moving to
an SMP motherboard (multiple CPUs), or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
If disk performance is an issue and you
are using IDE drives, switching to SCSI can help a great deal.
While modern
IDE drives compare with SCSI in raw sequential bandwidth, the moment you
start seeking around the disk SCSI drives usually win.
.Pp
Finally, you might run out of network suds.
The first line of defense for
improving network performance is to make sure you are using switches instead
of hubs, especially these days where switches are almost as cheap.
Hubs
have severe problems under heavy loads due to collision backoff and one bad
host can severely degrade the entire LAN.
Second, optimize the network path
as much as possible.
For example, in
.Xr firewall 7
we describe a firewall protecting internal hosts with a topology where
the externally visible hosts are not routed through it.
Use 100BaseT rather
than 10BaseT, or use 1000BaseT rather than 100BaseT, depending on your needs.
Most bottlenecks occur at the WAN link (e.g.\&
modem, T1, DSL, whatever).
If expanding the link is not an option it may be possible to use the
.Xr dummynet 4
feature to implement peak shaving or other forms of traffic shaping to
prevent the overloaded service (such as web services) from affecting other
services (such as email), or vice versa.
In home installations this could
be used to give interactive traffic (your browser,
.Xr ssh 1
logins) priority
over services you export from your box (web services, email).
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr systat 1 ,
.Xr ata 4 ,
.Xr dummynet 4 ,
.Xr login.conf 5 ,
.Xr firewall 7 ,
.Xr hier 7 ,
.Xr ports 7 ,
.Xr boot 8 ,
.Xr ccdconfig 8 ,
.Xr config 8 ,
.Xr disklabel 8 ,
.Xr fsck 8 ,
.Xr ifconfig 8 ,
.Xr ipfw 8 ,
.Xr loader 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr route 8 ,
.Xr sysctl 8 ,
.Xr tunefs 8 ,
.Xr vinum 8
.Sh HISTORY
The
.Nm
manual page was originally written by
.An Matthew Dillon
and first appeared
in
.Fx 4.3 ,
May 2001.