.\" Copyright (c) 1989, 1991, 1993
.\"	The Regents of the University of California.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd July 5, 2024
.Dt NFSD 8
.Os
.Sh NAME
.Nm nfsd
.Nd remote
NFS server
.Sh SYNOPSIS
.Nm
.Op Fl arduteN
.Op Fl n Ar num_servers
.Op Fl h Ar bindip
.Op Fl p Ar pnfs_setup
.Op Fl m Ar mirror_level
.Op Fl V Ar virtual_hostname
.Op Fl Fl maxthreads Ar max_threads
.Op Fl Fl minthreads Ar min_threads
.Sh DESCRIPTION
The
.Nm
utility runs on a server machine to service NFS requests from client machines.
At least one
.Nm
must be running for a machine to operate as a server.
.Pp
Unless otherwise specified, eight servers per CPU for UDP transport are
started.
.Pp
When
.Nm
is run in an appropriately configured vnet jail, the server is restricted
to TCP transport and no pNFS service.
Therefore, the
.Fl t
option must be specified and none of the
.Fl u ,
.Fl p
and
.Fl m
options can be specified when run in a vnet jail.
See
.Xr jail 8
for more information.
.Pp
The following options are available:
.Bl -tag -width Ds
.It Fl r
Register the NFS service with
.Xr rpcbind 8
without creating any servers.
This option can be used along with the
.Fl u
or
.Fl t
options to re-register NFS if the rpcbind server is restarted.
.It Fl d
Unregister the NFS service with
.Xr rpcbind 8
without creating any servers.
.It Fl V Ar virtual_hostname
Specifies a hostname to be used as a principal name, instead of
the default hostname.
.It Fl n Ar threads
Specifies how many servers to create.
This option is equivalent to specifying
.Fl Fl maxthreads
and
.Fl Fl minthreads
with their respective arguments to
.Ar threads .
.It Fl Fl maxthreads Ar threads
Specifies the maximum number of servers that will be kept around to service
requests.
.It Fl Fl minthreads Ar threads
Specifies the minimum number of servers that will be kept around to service
requests.
.It Fl h Ar bindip
Specifies which IP address or hostname to bind to on the local host.
This option is recommended when a host has multiple interfaces.
Multiple
.Fl h
options may be specified.
.It Fl a
Specifies that nfsd should bind to the wildcard IP address.
This is the default if no
.Fl h
options are given.
It may also be specified in addition to any
.Fl h
options given.
Note that NFS/UDP does not operate properly when bound to the wildcard IP
address, whether you use
.Fl a
or simply omit
.Fl h .
.It Fl p Ar pnfs_setup
Enables pNFS support in the server and specifies the information that the
daemon needs to start it.
This option can only be used on one server and specifies that this server
will be the MetaData Server (MDS) for the pNFS service.
This can only be done if there is at least one
.Fx
system configured
as a Data Server (DS) for it to use.
.Pp
The
.Ar pnfs_setup
string is a set of fields separated by ',' characters.
Each of these fields specifies one DS.
It consists of a server hostname, followed by a ':'
and the directory path where the DS's data storage file system is mounted on
this MDS server.
This can optionally be followed by a '#' and the mds_path, which is the
directory path for an exported file system on this MDS.
If this is specified, it means that this DS is to be used to store data
files for this mds_path file system only.
If this optional component does not exist, the DS will be used to store data
files for all exported MDS file systems.
The DS storage file systems must be mounted on this system before the
.Nm
is started with this option specified.
.br
For example:
.sp
nfsv4-data0:/data0,nfsv4-data1:/data1
.sp
would specify two DS servers called nfsv4-data0 and nfsv4-data1 that comprise
the data storage component of the pNFS service.
These two DSs would be used to store data files for all exported file systems
on this MDS.
The directories
.Dq /data0
and
.Dq /data1
are where the data storage servers' exported
storage directories are mounted on this system (which will act as the MDS).
.br
Whereas, for the example:
.sp
nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export2
.sp
would specify two DSs as above; however, nfsv4-data0 will be used to store
data files for
.Dq /export1
and nfsv4-data1 will be used to store data files for
.Dq /export2 .
.sp
When using IPv6 addresses for DSs,
be wary of using link local addresses.
The IPv6 address for the DS is sent to the client and there is no scope
zone in it.
As such, a link local address may not work for a pNFS client to DS
TCP connection.
When parsed,
.Nm
will only use a link local address if it is the only address returned by
.Xr getaddrinfo 3
for the DS hostname.
.It Fl m Ar mirror_level
This option is only meaningful when used with the
.Fl p
option.
It specifies the
.Dq mirror_level ,
which defines how many of the DSs will
have a copy of a file's data storage file.
The default of one implies no mirroring of data storage files on the DSs.
The
.Dq mirror_level
would normally be set to 2 to enable mirroring, but
can be as high as NFSDEV_MAXMIRRORS.
There must be at least
.Dq mirror_level
DSs for each exported file system on the MDS, as specified in the
.Fl p
option.
This implies that, for the above example using "#/export1" and "#/export2",
mirroring cannot be done.
There would need to be two DS entries for each of "#/export1" and "#/export2"
in order to support a
.Dq mirror_level
of two.
.Pp
If mirroring is enabled, the server must use the Flexible File
layout.
If mirroring is not enabled, the server will use the File layout
by default, but this default can be changed to the Flexible File layout if the
.Xr sysctl 8
vfs.nfsd.default_flexfile
is set non-zero.
.It Fl t
Serve TCP NFS clients.
.It Fl u
Serve UDP NFS clients.
.It Fl e
Ignored; included for backward compatibility.
.It Fl N
Cause
.Nm
to execute in the foreground instead of in daemon mode.
.El
.Pp
For example,
.Dq Li "nfsd -u -t -n 6"
serves UDP and TCP transports using six daemons.
.Pp
A server should run enough daemons to handle
the maximum level of concurrency from its clients,
typically four to six.
.Pp
The
.Nm
utility listens for service requests at the port indicated in the
NFS server specification; see
.%T "Network File System Protocol Specification" ,
RFC1094,
.%T "NFS: Network File System Version 3 Protocol Specification" ,
RFC1813,
.%T "Network File System (NFS) Version 4 Protocol" ,
RFC7530,
.%T "Network File System (NFS) Version 4 Minor Version 1 Protocol" ,
RFC5661,
.%T "Network File System (NFS) Version 4 Minor Version 2 Protocol" ,
RFC7862,
.%T "File System Extended Attributes in NFSv4" ,
RFC8276 and
.%T "Parallel NFS (pNFS) Flexible File Layout" ,
RFC8435.
.Pp
If
.Nm
detects that
NFS is not loaded in the running kernel, it will attempt
to load a loadable kernel module containing NFS support using
.Xr kldload 2 .
If this fails, or no NFS KLD is available,
.Nm
will exit with an error.
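The module can also be loaded by hand before the daemon is started, or at
boot time; a sketch, using the standard nfsd kernel module name:

```sh
# Load the NFS server module manually before starting the daemon:
kldload nfsd

# Or load it at every boot by adding this line to /boot/loader.conf:
# nfsd_load="YES"
```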
.Pp
If
.Nm
is to be run on a host with multiple interfaces or interface aliases, use
of the
.Fl h
option is recommended.
If you do not use this option, NFS may not respond to
UDP packets from the same IP address they were sent to.
Use of this option
is also recommended when securing NFS exports on a firewalling machine such
that the NFS sockets can only be accessed by the inside interface.
The
.Nm ipfw
utility
would then be used to block NFS-related packets that come in on the outside
interface.
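For instance, the server could be bound to the inside address via the
standard rc.conf(5) knobs; a sketch, where 192.0.2.1 is a purely
illustrative inside address:

```sh
# /etc/rc.conf -- start nfsd bound to the inside interface only.
# 192.0.2.1 is an illustrative address; substitute your own.
nfs_server_enable="YES"
nfs_server_flags="-t -n 6 -h 192.0.2.1"
```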
.Pp
If the server has stopped servicing clients and has generated a console message
like
.Dq Li "nfsd server cache flooded..." ,
the value for vfs.nfsd.tcphighwater needs to be increased.
This should allow the server to again handle requests without a reboot.
Also, you may want to consider decreasing the value for
vfs.nfsd.tcpcachetimeo to several minutes (in seconds) instead of 12 hours
when this occurs.
.Pp
Unfortunately making vfs.nfsd.tcphighwater too large can result in the mbuf
limit being reached, as indicated by a console message
like
.Dq Li "kern.ipc.nmbufs limit reached" .
If you cannot find settings for the above
.Xr sysctl 8
values that work, you can disable the DRC cache for TCP by setting
vfs.nfsd.cachetcp to 0.
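These tunables can be adjusted at runtime with sysctl(8); the numbers below
are illustrative starting points only, not recommended values:

```sh
# Raise the DRC high-water mark and shorten the TCP cache timeout
# (values are illustrative; tune for your workload):
sysctl vfs.nfsd.tcphighwater=100000
sysctl vfs.nfsd.tcpcachetimeo=300    # five minutes, in seconds

# Last resort: disable the DRC for TCP entirely:
# sysctl vfs.nfsd.cachetcp=0
```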
.Pp
The
.Nm
utility has to be terminated with
.Dv SIGUSR1
and cannot be killed with
.Dv SIGTERM
or
.Dv SIGQUIT .
The
.Nm
utility needs to ignore these signals in order to stay alive as long
as possible during a shutdown, otherwise loopback mounts will
not be able to unmount.
If you have to kill
.Nm
just do a
.Dq Li "kill -USR1 <PID of master nfsd>" .
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr nfsstat 1 ,
.Xr kldload 2 ,
.Xr nfssvc 2 ,
.Xr nfsv4 4 ,
.Xr pnfs 4 ,
.Xr pnfsserver 4 ,
.Xr exports 5 ,
.Xr stablerestart 5 ,
.Xr gssd 8 ,
.Xr ipfw 8 ,
.Xr jail 8 ,
.Xr mountd 8 ,
.Xr nfsiod 8 ,
.Xr nfsrevoke 8 ,
.Xr nfsuserd 8 ,
.Xr rpcbind 8
.Sh HISTORY
The
.Nm
utility first appeared in
.Bx 4.4 .
.Sh BUGS
If
.Nm
is started when
.Xr gssd 8
is not running, it will service AUTH_SYS requests only.
To fix the problem you must kill
.Nm
and then restart it, after the
.Xr gssd 8
is running.
.Pp
For a Flexible File Layout pNFS server,
if there are Linux clients doing NFSv4.1 or NFSv4.2 mounts, those
clients might need the
.Xr sysctl 8
vfs.nfsd.flexlinuxhack
to be set to one on the MDS as a workaround.
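As a sketch, the workaround would be applied on the MDS only when affected
clients are observed:

```sh
# Workaround for affected Linux NFSv4.1/NFSv4.2 clients, set on the MDS:
sysctl vfs.nfsd.flexlinuxhack=1
```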
.Pp
Linux 5.n kernels appear to have been patched such that this
.Xr sysctl 8
does not need to be set.
.Pp
For NFSv4.2, a Copy operation can take a long time to complete.
If there is a concurrent ExchangeID or DelegReturn operation
which requires the exclusive lock on all NFSv4 state, this can
result in a
.Dq stall
of the
.Nm
server.
If your storage is on ZFS without block cloning enabled,
setting the
.Xr sysctl 8
.Va vfs.zfs.dmu_offset_next_sync
to 0 can often avoid this problem.
It is also possible to set the
.Xr sysctl 8
.Va vfs.nfsd.maxcopyrange
to 10-100 megabytes to try to reduce Copy operation times.
As a last resort, setting
.Xr sysctl 8
.Va vfs.nfsd.maxcopyrange
to 0 disables the Copy operation.
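The mitigations above can be sketched in increasing order of severity; the
100-megabyte figure is illustrative, and the value is given in bytes:

```sh
# On ZFS without block cloning, avoid slow SEEK_HOLE/SEEK_DATA syncs:
sysctl vfs.zfs.dmu_offset_next_sync=0

# Bound the work done per Copy operation (illustrative value, in bytes):
sysctl vfs.nfsd.maxcopyrange=104857600

# Last resort: disable the NFSv4.2 Copy operation entirely:
# sysctl vfs.nfsd.maxcopyrange=0
```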
382