.\" Copyright (c) 1989, 1991, 1993
.\"	The Regents of the University of California.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd March 16, 2024
.Dt NFSD 8
.Os
.Sh NAME
.Nm nfsd
.Nd remote NFS server
.Sh SYNOPSIS
.Nm
.Op Fl ardute
.Op Fl n Ar num_servers
.Op Fl h Ar bindip
.Op Fl p Ar pnfs_setup
.Op Fl m Ar mirror_level
.Op Fl V Ar virtual_hostname
.Op Fl Fl maxthreads Ar max_threads
.Op Fl Fl minthreads Ar min_threads
.Sh DESCRIPTION
The
.Nm
utility runs on a server machine to service NFS requests from client machines.
At least one
.Nm
must be running for a machine to operate as a server.
.Pp
Unless otherwise specified, eight servers per CPU for UDP transport are
started.
.Pp
When
.Nm
is run in an appropriately configured vnet jail, the server is restricted
to TCP transport and no pNFS service.
Therefore, the
.Fl t
option must be specified and none of the
.Fl u ,
.Fl p
and
.Fl m
options can be specified when run in a vnet jail.
See
.Xr jail 8
for more information.
.Pp
The following options are available:
.Bl -tag -width Ds
.It Fl r
Register the NFS service with
.Xr rpcbind 8
without creating any servers.
This option can be used along with the
.Fl u
or
.Fl t
options to re-register NFS if the rpcbind server is restarted.
.It Fl d
Unregister the NFS service with
.Xr rpcbind 8
without creating any servers.
.It Fl V Ar virtual_hostname
Specifies a hostname to be used as a principal name, instead of
the default hostname.
.It Fl n Ar threads
Specifies how many servers to create.
This option is equivalent to specifying
.Fl Fl maxthreads
and
.Fl Fl minthreads
with their respective arguments set to
.Ar threads .
.It Fl Fl maxthreads Ar threads
Specifies the maximum number of servers that will be kept around to service
requests.
.It Fl Fl minthreads Ar threads
Specifies the minimum number of servers that will be kept around to service
requests.
.It Fl h Ar bindip
Specifies which IP address or hostname to bind to on the local host.
This option is recommended when a host has multiple interfaces.
Multiple
.Fl h
options may be specified.
.It Fl a
Specifies that nfsd should bind to the wildcard IP address.
This is the default if no
.Fl h
options are given.
It may also be specified in addition to any
.Fl h
options given.
Note that NFS/UDP does not operate properly when bound to the wildcard IP
address, whether via
.Fl a
or by omitting
.Fl h .
.It Fl p Ar pnfs_setup
Enables pNFS support in the server and specifies the information that the
daemon needs to start it.
This option can only be used on one server and specifies that this server
will be the MetaData Server (MDS) for the pNFS service.
This can only be done if there is at least one
.Fx
system configured
as a Data Server (DS) for it to use.
.Pp
The
.Ar pnfs_setup
string is a set of fields separated by ',' characters.
Each of these fields specifies one DS.
It consists of a server hostname, followed by a ':'
and the directory path where the DS's data storage file system is mounted on
this MDS server.
This can optionally be followed by a '#' and the mds_path, which is the
directory path for an exported file system on this MDS.
If this is specified, it means that this DS is to be used to store data
files for this mds_path file system only.
If this optional component does not exist, the DS will be used to store data
files for all exported MDS file systems.
The DS storage file systems must be mounted on this system before the
.Nm
is started with this option specified.
.br
For example:
.sp
nfsv4-data0:/data0,nfsv4-data1:/data1
.sp
would specify two DS servers called nfsv4-data0 and nfsv4-data1 that comprise
the data storage component of the pNFS service.
These two DSs would be used to store data files for all exported file systems
on this MDS.
The directories
.Dq /data0
and
.Dq /data1
are where the data storage servers' exported
storage directories are mounted on this system (which will act as the MDS).
.br
Whereas, for the example:
.sp
nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export2
.sp
would specify two DSs as above, however nfsv4-data0 will be used to store
data files for
.Dq /export1
and nfsv4-data1 will be used to store data files for
.Dq /export2 .
.sp
When using IPv6 addresses for DSs,
be wary of using link local addresses.
The IPv6 address for the DS is sent to the client and there is no scope
zone in it.
As such, a link local address may not work for a pNFS client to DS
TCP connection.
When parsed,
.Nm
will only use a link local address if it is the only address returned by
.Xr getaddrinfo 3
for the DS hostname.
.It Fl m Ar mirror_level
This option is only meaningful when used with the
.Fl p
option.
It specifies the
.Dq mirror_level ,
which defines how many of the DSs will
have a copy of a file's data storage file.
The default of one implies no mirroring of data storage files on the DSs.
The
.Dq mirror_level
would normally be set to 2 to enable mirroring, but
can be as high as NFSDEV_MAXMIRRORS.
There must be at least
.Dq mirror_level
DSs for each exported file system on the MDS, as specified in the
.Fl p
option.
This implies that, for the above example using "#/export1" and "#/export2",
mirroring cannot be done.
There would need to be two DS entries for each of "#/export1" and "#/export2"
in order to support a
.Dq mirror_level
of two.
.Pp
If mirroring is enabled, the server must use the Flexible File
layout.
If mirroring is not enabled, the server will use the File layout
by default, but this default can be changed to the Flexible File layout if the
.Xr sysctl 8
.Va vfs.nfsd.default_flexfile
is set non-zero.
.It Fl t
Serve TCP NFS clients.
.It Fl u
Serve UDP NFS clients.
.It Fl e
Ignored; included for backward compatibility.
.El
.Pp
For example,
.Dq Li "nfsd -u -t -n 6"
serves UDP and TCP transports using six daemons.
.Pp
A server should run enough daemons to handle
the maximum level of concurrency from its clients,
typically four to six.
.Pp
The
.Nm
utility listens for service requests at the port indicated in the
NFS server specification; see
.%T "Network File System Protocol Specification" ,
RFC1094,
.%T "NFS: Network File System Version 3 Protocol Specification" ,
RFC1813,
.%T "Network File System (NFS) Version 4 Protocol" ,
RFC7530,
.%T "Network File System (NFS) Version 4 Minor Version 1 Protocol" ,
RFC5661,
.%T "Network File System (NFS) Version 4 Minor Version 2 Protocol" ,
RFC7862,
.%T "File System Extended Attributes in NFSv4" ,
RFC8276 and
.%T "Parallel NFS (pNFS) Flexible File Layout" ,
RFC8435.
.Pp
If
.Nm
detects that
NFS is not loaded in the running kernel, it will attempt
to load a loadable kernel module containing NFS support using
.Xr kldload 2 .
If this fails, or no NFS KLD is available,
.Nm
will exit with an error.
.Pp
If
.Nm
is to be run on a host with multiple interfaces or interface aliases, use
of the
.Fl h
option is recommended.
If you do not use this option, NFS may not respond to
UDP packets from the same IP address they were sent to.
Use of this option
is also recommended when securing NFS exports on a firewalling machine such
that the NFS sockets can only be accessed by the inside interface.
The
.Xr ipfw 8
utility
would then be used to block NFS-related packets that come in on the outside
interface.
.Pp
If the server has stopped servicing clients and has generated a console message
like
.Dq Li "nfsd server cache flooded..." ,
the value for
.Va vfs.nfsd.tcphighwater
needs to be increased.
This should allow the server to again handle requests without a reboot.
Also, you may want to consider decreasing the value for
.Va vfs.nfsd.tcpcachetimeo
to several minutes (in seconds) instead of 12 hours
when this occurs.
.Pp
Unfortunately, making
.Va vfs.nfsd.tcphighwater
too large can result in the mbuf
limit being reached, as indicated by a console message
like
.Dq Li "kern.ipc.nmbufs limit reached" .
If you cannot find values for the above
.Xr sysctl 8
variables that work, you can disable the DRC cache for TCP by setting
.Va vfs.nfsd.cachetcp
to 0.
.Pp
The
.Nm
utility has to be terminated with
.Dv SIGUSR1
and cannot be killed with
.Dv SIGTERM
or
.Dv SIGQUIT .
The
.Nm
utility needs to ignore these signals in order to stay alive as long
as possible during a shutdown, otherwise loopback mounts will
not be able to unmount.
If you have to kill
.Nm
just do a
.Dq Li "kill -USR1 <PID of master nfsd>" .
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr nfsstat 1 ,
.Xr kldload 2 ,
.Xr nfssvc 2 ,
.Xr nfsv4 4 ,
.Xr pnfs 4 ,
.Xr pnfsserver 4 ,
.Xr exports 5 ,
.Xr stablerestart 5 ,
.Xr gssd 8 ,
.Xr ipfw 8 ,
.Xr jail 8 ,
.Xr mountd 8 ,
.Xr nfsiod 8 ,
.Xr nfsrevoke 8 ,
.Xr nfsuserd 8 ,
.Xr rpcbind 8
.Sh HISTORY
The
.Nm
utility first appeared in
.Bx 4.4 .
.Sh BUGS
If
.Nm
is started when
.Xr gssd 8
is not running, it will service AUTH_SYS requests only.
To fix the problem you must kill
.Nm
and then restart it after the
.Xr gssd 8
is running.
.Pp
For a Flexible File Layout pNFS server,
if there are Linux clients doing NFSv4.1 or NFSv4.2 mounts, those
clients might need the
.Xr sysctl 8
.Va vfs.nfsd.flexlinuxhack
to be set to one on the MDS as a workaround.
.Pp
Linux 5.n kernels appear to have been patched such that this
.Xr sysctl 8
does not need to be set.
.Pp
For NFSv4.2, a Copy operation can take a long time to complete.
If there is a concurrent ExchangeID or DelegReturn operation
which requires the exclusive lock on all NFSv4 state, this can
result in a
.Dq stall
of the
.Nm
server.
If your storage is on ZFS without block cloning enabled,
setting the
.Xr sysctl 8
.Va vfs.zfs.dmu_offset_next_sync
to 0 can often avoid this problem.
It is also possible to set the
.Xr sysctl 8
.Va vfs.nfsd.maxcopyrange
to 10-100 megabytes to try to reduce Copy operation times.
As a last resort, setting
.Xr sysctl 8
.Va vfs.nfsd.maxcopyrange
to 0 disables the Copy operation.
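.Pp
As a sketch, the Copy operation mitigations above could be applied at
run time with
.Xr sysctl 8 ;
the values shown are illustrative only, and
.Va vfs.nfsd.maxcopyrange
is assumed to be specified in bytes:
.Bd -literal -offset indent
# For ZFS storage without block cloning enabled,
# avoid stalls during long Copy operations.
sysctl vfs.zfs.dmu_offset_next_sync=0
# Or bound the time a single Copy can take by limiting
# its range, here to roughly 32 megabytes.
sysctl vfs.nfsd.maxcopyrange=33554432
.Ed
.Pp
To make either setting persist across reboots, it would be placed in
.Xr sysctl.conf 5 .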