.\" Copyright (c) 1989, 1991, 1993
.\"	The Regents of the University of California.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd May 30, 2025
.Dt NFSD 8
.Os
.Sh NAME
.Nm nfsd
.Nd remote NFS server
.Sh SYNOPSIS
.Nm
.Op Fl arduteN
.Op Fl n Ar num_servers
.Op Fl h Ar bindip
.Op Fl p Ar pnfs_setup
.Op Fl m Ar mirror_level
.Op Fl P Ar pidfile
.Op Fl V Ar virtual_hostname
.Op Fl Fl maxthreads Ar max_threads
.Op Fl Fl minthreads Ar min_threads
.Sh DESCRIPTION
The
.Nm
utility runs on a server machine to service NFS requests from client machines.
At least one
.Nm
must be running for a machine to operate as a server.
.Pp
Unless otherwise specified, eight servers per CPU for UDP transport are
started.
.Pp
When
.Nm
is run in an appropriately configured vnet jail, the server is restricted
to TCP transport and no pNFS service.
Therefore, the
.Fl t
option must be specified, and none of the
.Fl u ,
.Fl p ,
and
.Fl m
options can be specified, when run in a vnet jail.
See
.Xr jail 8
for more information.
.Pp
The following options are available:
.Bl -tag -width Ds
.It Fl r
Register the NFS service with
.Xr rpcbind 8
without creating any servers.
This option can be used along with the
.Fl u
or
.Fl t
options to re-register NFS if the
.Xr rpcbind 8
server is restarted.
.It Fl d
Unregister the NFS service with
.Xr rpcbind 8
without creating any servers.
.It Fl P Ar pidfile
Specify an alternative location for the file where the PID of the master
process is stored.
The default location is
.Pa /var/run/nfsd.pid .
.It Fl V Ar virtual_hostname
Specifies a hostname to be used as a principal name, instead of
the default hostname.
.It Fl n Ar threads
This option is deprecated and is limited to a maximum of 256 threads.
The options
.Fl Fl maxthreads
and
.Fl Fl minthreads
should now be used.
The
.Ar threads
argument for
.Fl Fl minthreads
and
.Fl Fl maxthreads
may be set to the same value to avoid dynamic
changes to the number of threads.
.It Fl Fl maxthreads Ar threads
Specifies the maximum number of servers that will be kept around to service
requests.
.It Fl Fl minthreads Ar threads
Specifies the minimum number of servers that will be kept around to service
requests.
.It Fl h Ar bindip
Specifies which IP address or hostname to bind to on the local host.
This option is recommended when a host has multiple interfaces.
Multiple
.Fl h
options may be specified.
.It Fl a
Specifies that
.Nm
should bind to the wildcard IP address.
This is the default if no
.Fl h
options are given.
It may also be specified in addition to any
.Fl h
options given.
Note that NFS/UDP does not operate properly when bound to the wildcard IP
address, whether this is requested via
.Fl a
or by omitting
.Fl h .
.It Fl p Ar pnfs_setup
Enables pNFS support in the server and specifies the information that the
daemon needs to start it.
This option can only be used on one server and specifies that this server
will be the MetaData Server (MDS) for the pNFS service.
This can only be done if there is at least one
.Fx
system configured
as a Data Server (DS) for it to use.
.Pp
The
.Ar pnfs_setup
string is a set of fields separated by ',' characters.
Each of these fields specifies one DS.
It consists of a server hostname, followed by a ':'
and the directory path where the DS's data storage file system is mounted on
this MDS server.
This can optionally be followed by a '#' and the mds_path, which is the
directory path for an exported file system on this MDS.
If this is specified, it means that this DS is to be used to store data
files for this mds_path file system only.
If this optional component does not exist, the DS will be used to store data
files for all exported MDS file systems.
The DS storage file systems must be mounted on this system before
.Nm
is started with this option specified.
.Pp
For example:
.Bd -literal -offset indent
nfsv4-data0:/data0,nfsv4-data1:/data1
.Ed
.Pp
would specify two DS servers called nfsv4-data0 and nfsv4-data1 that comprise
the data storage component of the pNFS service.
These two DSs would be used to store data files for all exported file systems
on this MDS.
The directories
.Dq /data0
and
.Dq /data1
are where the data storage servers' exported
storage directories are mounted on this system (which will act as the MDS).
.Pp
In contrast, the example:
.Bd -literal -offset indent
nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export2
.Ed
.Pp
would specify two DSs as above; however, nfsv4-data0 will be used to store
data files for
.Dq /export1
and nfsv4-data1 will be used to store data files for
.Dq /export2 .
.Pp
When using IPv6 addresses for DSs,
be wary of using link local addresses.
The IPv6 address for the DS is sent to the client and there is no scope
zone in it.
As such, a link local address may not work for a pNFS client to DS
TCP connection.
When parsed,
.Nm
will only use a link local address if it is the only address returned by
.Xr getaddrinfo 3
for the DS hostname.
.It Fl m Ar mirror_level
This option is only meaningful when used with the
.Fl p
option.
It specifies the
.Dq mirror_level ,
which defines how many of the DSs will
have a copy of a file's data storage file.
The default of one implies no mirroring of data storage files on the DSs.
The
.Dq mirror_level
would normally be set to 2 to enable mirroring, but
can be as high as
.Dv NFSDEV_MAXMIRRORS .
There must be at least
.Dq mirror_level
DSs for each exported file system on the MDS, as specified in the
.Fl p
option.
This implies that, for the above example using "#/export1" and "#/export2",
mirroring cannot be done.
There would need to be two DS entries for each of "#/export1" and "#/export2"
in order to support a
.Dq mirror_level
of two.
.Pp
If mirroring is enabled, the server must use the Flexible File
layout.
If mirroring is not enabled, the server will use the File layout
by default, but this default can be changed to the Flexible File layout if the
.Xr sysctl 8
.Va vfs.nfsd.default_flexfile
is set non-zero.
.It Fl t
Serve TCP NFS clients.
.It Fl u
Serve UDP NFS clients.
.It Fl e
Ignored; included for backward compatibility.
.It Fl N
Cause
.Nm
to execute in the foreground instead of in daemon mode.
.El
.Pp
For example,
.Dq Li "nfsd -u -t --minthreads 6 --maxthreads 6"
serves UDP and TCP transports using six kernel threads (servers).
.Pp
For a system dedicated to servicing NFS RPCs, the number of
threads (servers) should be sufficient to handle the peak
client RPC load.
For systems that perform other services, the number of
threads (servers) may need to be limited, so that resources
are available for these other services.
.Pp
The
.Nm
utility listens for service requests at the port indicated in the
NFS server specification; see
.%T "Network File System Protocol Specification" ,
RFC 1094,
.%T "NFS: Network File System Version 3 Protocol Specification" ,
RFC 1813,
.%T "Network File System (NFS) Version 4 Protocol" ,
RFC 7530,
.%T "Network File System (NFS) Version 4 Minor Version 1 Protocol" ,
RFC 5661,
.%T "Network File System (NFS) Version 4 Minor Version 2 Protocol" ,
RFC 7862,
.%T "File System Extended Attributes in NFSv4" ,
RFC 8276, and
.%T "Parallel NFS (pNFS) Flexible File Layout" ,
RFC 8435.
.Pp
If
.Nm
detects that
NFS is not loaded in the running kernel, it will attempt
to load a loadable kernel module containing NFS support using
.Xr kldload 2 .
If this fails, or no NFS KLD is available,
.Nm
will exit with an error.
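.Pp
If you prefer to load the NFS server support explicitly at boot or from the
command line, rather than relying on this automatic load, a sketch along
these lines can be used (the module name
.Dq nfsd
is assumed here; verify it on your system):
.Bd -literal -offset indent
# Load the NFS server module only if it is not already
# present in the running kernel.
kldstat -q -m nfsd || kldload nfsd
.Ed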
.Pp
If
.Nm
is to be run on a host with multiple interfaces or interface aliases, use
of the
.Fl h
option is recommended.
If you do not use this option, NFS may not respond to
UDP packets from the same IP address they were sent to.
Use of this option
is also recommended when securing NFS exports on a firewalling machine such
that the NFS sockets can only be accessed by the inside interface.
The
.Xr ipfw 8
utility
would then be used to block NFS-related packets that come in on the outside
interface.
.Pp
If the server has stopped servicing clients and has generated a console message
like
.Dq Li "nfsd server cache flooded..." ,
the value of the
.Xr sysctl 8
.Va vfs.nfsd.tcphighwater
needs to be increased.
This should allow the server to again handle requests without a reboot.
Also, you may want to consider decreasing the value of
.Va vfs.nfsd.tcpcachetimeo
to several minutes (in seconds) instead of 12 hours
when this occurs.
.Pp
Unfortunately, making
.Va vfs.nfsd.tcphighwater
too large can result in the mbuf
limit being reached, as indicated by a console message
like
.Dq Li "kern.ipc.nmbufs limit reached" .
If you cannot find values of the above
.Xr sysctl 8
variables that work, you can disable the DRC cache for TCP by setting
.Va vfs.nfsd.cachetcp
to 0.
.Pp
The
.Nm
utility has to be terminated with
.Dv SIGUSR1
and cannot be killed with
.Dv SIGTERM
or
.Dv SIGQUIT .
The
.Nm
utility needs to ignore these signals in order to stay alive as long
as possible during a shutdown; otherwise, loopback mounts will
not be able to unmount.
If you have to kill
.Nm ,
just do a
.Dq Li "kill -USR1 <PID of master nfsd>" .
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr nfsstat 1 ,
.Xr kldload 2 ,
.Xr nfssvc 2 ,
.Xr nfsv4 4 ,
.Xr pnfs 4 ,
.Xr pnfsserver 4 ,
.Xr exports 5 ,
.Xr stablerestart 5 ,
.Xr gssd 8 ,
.Xr ipfw 8 ,
.Xr jail 8 ,
.Xr mountd 8 ,
.Xr nfsiod 8 ,
.Xr nfsrevoke 8 ,
.Xr nfsuserd 8 ,
.Xr rpcbind 8
.Sh HISTORY
The
.Nm
utility first appeared in
.Bx 4.4 .
.Sh BUGS
If
.Nm
is started when
.Xr gssd 8
is not running, it will service AUTH_SYS requests only.
To fix the problem, you must kill
.Nm
and then restart it after
.Xr gssd 8
is running.
.Pp
For a Flexible File Layout pNFS server,
if there are Linux clients doing NFSv4.1 or NFSv4.2 mounts, those
clients might need the
.Xr sysctl 8
.Va vfs.nfsd.flexlinuxhack
to be set to one on the MDS as a workaround.
.Pp
Linux 5.n kernels appear to have been patched such that this
.Xr sysctl 8
does not need to be set.
.Pp
For NFSv4.2, a Copy operation can take a long time to complete.
If there is a concurrent ExchangeID or DelegReturn operation
which requires the exclusive lock on all NFSv4 state, this can
result in a
.Dq stall
of the
.Nm
server.
If your storage is on ZFS without block cloning enabled,
setting the
.Xr sysctl 8
.Va vfs.zfs.dmu_offset_next_sync
to 0 can often avoid this problem.
It is also possible to set the
.Xr sysctl 8
.Va vfs.nfsd.maxcopyrange
to 10-100 megabytes to try to reduce Copy operation times.
As a last resort, setting
.Xr sysctl 8
.Va vfs.nfsd.maxcopyrange
to 0 disables the Copy operation.
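.Pp
As an illustration, the
.Xr sysctl 8
workarounds above might be applied as follows; the value shown for
.Va vfs.nfsd.maxcopyrange
is an example only (10 megabytes, within the suggested 10-100 megabyte
range) and should be tuned for your workload:
.Bd -literal -offset indent
# Avoid Copy stalls on ZFS without block cloning.
sysctl vfs.zfs.dmu_offset_next_sync=0
# Limit the range of a single Copy operation (bytes).
sysctl vfs.nfsd.maxcopyrange=10485760
.Ed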