Lines Matching +full:higher +full:- +full:than +full:- +full:threshold
31 .Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
34 for systems with less than 4GB of RAM, or approximately equal to
63 partitions are read-mostly, with very little writing, while
68 heavily write-loaded partitions will not bleed over into the mostly-read
81 .Dq Li "tunefs -n enable /filesystem" .
82 Softupdates drastically improves meta-data performance, mainly file
91 than otherwise.
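The tunefs invocation quoted above must run against an unmounted (or read-only mounted) file system; a minimal sketch, assuming a hypothetical device name:

```shell
# Enable soft updates on /filesystem (FreeBSD).
# /dev/ada0s1f is a hypothetical device backing /filesystem.
umount /filesystem
tunefs -n enable /dev/ada0s1f
mount /filesystem
```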
103 A number of run-time
117 file systems normally update the last-accessed time of a file or
138 atime turned on for mostly read-only partitions such as
161 or essentially read-only partitions such as
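Where last-accessed times are not needed, the update can be suppressed per file system with the noatime mount option; a hypothetical /etc/fstab fragment:

```shell
# /etc/fstab fragment -- device names are hypothetical.
# noatime suppresses last-accessed time updates on read-mostly partitions.
/dev/ada0s1a   /      ufs   rw           1  1
/dev/ada0s1f   /usr   ufs   rw,noatime   2  2
```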
170 File systems tend to store meta-data on power-of-2 boundaries
171 and you usually want to reduce seeking rather than increase it.
173 means you want to use a large off-center stripe size such as 1152 sectors
174 so sequential I/O does not seek both disks and so meta-data is distributed
175 across both disks rather than concentrated on a single disk.
179 run-time.
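As a sketch of the off-center stripe suggestion: 1152 sectors of 512 bytes is 589824 bytes, deliberately not a power of 2. Assuming a GEOM stripe setup, it might look like:

```shell
# Hypothetical two-disk stripe with a 1152-sector (589824-byte) stripe size,
# so meta-data stored on power-of-2 boundaries lands on both disks rather
# than concentrating on one.
gstripe label -s 589824 st0 /dev/ada1 /dev/ada2
newfs /dev/stripe/st0
```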
195 reservation, both total for system and per-user.
232 Increasing this value to a higher setting, such as `25165824', might
236 to fall back to using double-copy.
249 of internal memory management page-tracking overhead at the cost of wiring
268 the minimum in-core memory used to cache a directory is the physical page
269 size (typically 4K) rather than 512 bytes.
270 We recommend turning this option off in memory-constrained environments;
292 disk controllers system-wide at any given time.
294 The default is self-tuned and
301 (exceeding the buffer cache's write threshold) can lead to extremely
304 Higher write queuing values may also add latency to reads occurring at
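The write-queuing limit described here is, on FreeBSD, most likely the vfs.hirunningspace sysctl; an /etc/sysctl.conf fragment with a purely illustrative value:

```shell
# /etc/sysctl.conf fragment -- illustrative value only.
# Caps write I/O queued to disk controllers system-wide; setting it too
# high can starve reads and delay dirty data reaching the platter.
vfs.hirunningspace=1048576
```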
309 sysctl governs VFS read-ahead and is expressed as the number of blocks
310 to pre-read if the heuristics algorithm decides that the reads are
319 read-ahead adversely affects performance or where system memory is really
340 There are various other buffer-cache and VM page cache related sysctls.
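The read-ahead knob mentioned above is, on FreeBSD, the vfs.read_max sysctl, expressed in blocks; an illustrative fragment:

```shell
# /etc/sysctl.conf fragment -- illustrative value only.
# Blocks to pre-read when the heuristics detect sequential access;
# lower it where read-ahead hurts performance or memory is scarce.
vfs.read_max=32
```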
369 can be used to introduce route-specific send and receive buffer size
387 Setting the send or receive TCP buffer to values larger than 65535 will result
393 to a value larger than 65536 in order to obtain good performance from
395 high-latency satellite links.
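Buffer sizes above 65535 depend on RFC 1323 window scaling being negotiated by both ends; a hypothetical sysctl.conf fragment for a high-bandwidth, high-latency path (values are only illustrative):

```shell
# /etc/sysctl.conf fragment -- illustrative values only.
net.inet.tcp.rfc1323=1          # window scaling; required for buffers > 65535
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
```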
438 slightly delay the teardown of a connection, or slightly delay the ramp-up
439 of a slow-start TCP connection.
442 turning off delayed acks may be referring to the slow-start issue.
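For reference, delayed acks are controlled by a single FreeBSD sysctl, and the point of the text is that the default (enabled) is almost always right:

```shell
# /etc/sysctl.conf fragment -- this is the default setting.
net.inet.tcp.delayed_ack=1
```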
475 may block large ranges of ports (usually low-numbered ports) and expect systems
476 to use higher ranges of ports for outgoing connections.
487 we recommend increasing this value to 1024 or higher.
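The tunable being raised here is presumably net.inet.ip.portrange.first, which sets the bottom of the range used for outgoing connections; illustrative values:

```shell
# /etc/sysctl.conf fragment -- illustrative values only.
# Keep outgoing connections above commonly firewalled low-numbered ports.
net.inet.ip.portrange.first=10000
net.inet.ip.portrange.last=65535
```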
501 thousand if you are running databases or large descriptor-heavy daemons.
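The descriptor limit in question is likely kern.maxfiles on FreeBSD, settable via sysctl or loader.conf; an illustrative fragment:

```shell
# /etc/sysctl.conf fragment -- illustrative value only.
# System-wide open-file limit for descriptor-heavy workloads.
kern.maxfiles=65536
```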
502 The read-only
518 the system, and may be determined at run-time by inspecting the value of the
519 read-only
564 lead to a boot-time crash.
591 a large-scale system.
626 will be able to better use higher-end CPU features for MMU, task switching,
628 Additionally, higher-end CPUs support