1.\" Copyright (c) 1989, 1991, 1993
2.\"	The Regents of the University of California.  All rights reserved.
3.\"
4.\" Redistribution and use in source and binary forms, with or without
5.\" modification, are permitted provided that the following conditions
6.\" are met:
7.\" 1. Redistributions of source code must retain the above copyright
8.\"    notice, this list of conditions and the following disclaimer.
9.\" 2. Redistributions in binary form must reproduce the above copyright
10.\"    notice, this list of conditions and the following disclaimer in the
11.\"    documentation and/or other materials provided with the distribution.
12.\" 3. Neither the name of the University nor the names of its contributors
13.\"    may be used to endorse or promote products derived from this software
14.\"    without specific prior written permission.
15.\"
16.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
17.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
18.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
19.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
20.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
21.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
22.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
23.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
24.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
25.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
26.\" SUCH DAMAGE.
27.\"
.Dd February 21, 2025
.Dt NFSD 8
.Os
.Sh NAME
.Nm nfsd
.Nd remote
NFS server
.Sh SYNOPSIS
.Nm
.Op Fl arduteN
.Op Fl n Ar num_servers
.Op Fl h Ar bindip
.Op Fl p Ar pnfs_setup
.Op Fl m Ar mirror_level
.Op Fl V Ar virtual_hostname
.Op Fl Fl maxthreads Ar max_threads
.Op Fl Fl minthreads Ar min_threads
.Sh DESCRIPTION
The
.Nm
utility runs on a server machine to service NFS requests from client machines.
At least one
.Nm
must be running for a machine to operate as a server.
.Pp
Unless otherwise specified, eight servers per CPU for UDP transport are
started.
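.Pp
On
.Fx
the
.Nm
utility is normally started at boot time via
.Xr rc.conf 5 .
A minimal sketch follows; the flags and thread counts shown are
illustrative choices, not defaults:
.Bd -literal -offset indent
nfs_server_enable="YES"
nfs_server_flags="-u -t --minthreads 4 --maxthreads 32"
.Ed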
.Pp
When
.Nm
is run in an appropriately configured vnet jail, the server is restricted
to TCP transport and no pNFS service.
Therefore, the
.Fl t
option must be specified and none of the
.Fl u ,
.Fl p
and
.Fl m
options can be specified when run in a vnet jail.
See
.Xr jail 8
for more information.
.Pp
The following options are available:
.Bl -tag -width Ds
.It Fl r
Register the NFS service with
.Xr rpcbind 8
without creating any servers.
This option can be used along with the
.Fl u
or
.Fl t
options to re-register NFS if the rpcbind server is restarted.
.It Fl d
Unregister the NFS service with
.Xr rpcbind 8
without creating any servers.
.It Fl V Ar virtual_hostname
Specifies a hostname to be used as a principal name, instead of
the default hostname.
.It Fl n Ar threads
This option is deprecated and is limited to a maximum of 256 threads.
The options
.Fl Fl maxthreads
and
.Fl Fl minthreads
should now be used.
The
.Ar threads
argument for
.Fl Fl minthreads
and
.Fl Fl maxthreads
may be set to the same value to avoid dynamic
changes to the number of threads.
.It Fl Fl maxthreads Ar threads
Specifies the maximum number of servers that will be kept around to
service requests.
.It Fl Fl minthreads Ar threads
Specifies the minimum number of servers that will be kept around to
service requests.
.It Fl h Ar bindip
Specifies which IP address or hostname to bind to on the local host.
This option is recommended when a host has multiple interfaces.
Multiple
.Fl h
options may be specified.
.It Fl a
Specifies that nfsd should bind to the wildcard IP address.
This is the default if no
.Fl h
options are given.
It may also be specified in addition to any
.Fl h
options given.
Note that NFS/UDP does not operate properly when bound to the wildcard IP
address, whether that is requested via
.Fl a
or by omitting
.Fl h .
.It Fl p Ar pnfs_setup
Enables pNFS support in the server and specifies the information that the
daemon needs to start it.
This option can only be used on one server and specifies that this server
will be the MetaData Server (MDS) for the pNFS service.
This can only be done if there is at least one
.Fx
system configured
as a Data Server (DS) for it to use.
.Pp
The
.Ar pnfs_setup
string is a set of fields separated by ',' characters.
Each of these fields specifies one DS.
It consists of a server hostname, followed by a ':'
and the directory path where the DS's data storage file system is mounted on
this MDS server.
This can optionally be followed by a '#' and the mds_path, which is the
directory path for an exported file system on this MDS.
If this is specified, it means that this DS is to be used to store data
files for this mds_path file system only.
If this optional component does not exist, the DS will be used to store data
files for all exported MDS file systems.
The DS storage file systems must be mounted on this system before the
.Nm
is started with this option specified.
.br
For example:
.sp
nfsv4-data0:/data0,nfsv4-data1:/data1
.sp
would specify two DS servers called nfsv4-data0 and nfsv4-data1 that comprise
the data storage component of the pNFS service.
These two DSs would be used to store data files for all exported file systems
on this MDS.
The directories
.Dq /data0
and
.Dq /data1
are where the data storage servers' exported
storage directories are mounted on this system (which will act as the MDS).
.br
Whereas, for the example:
.sp
nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export2
.sp
would specify two DSs as above, however nfsv4-data0 will be used to store
data files for
.Dq /export1
and nfsv4-data1 will be used to store data files for
.Dq /export2 .
.sp
When using IPv6 addresses for DSs,
be wary of using link local addresses.
The IPv6 address for the DS is sent to the client and there is no scope
zone in it.
As such, a link local address may not work for a pNFS client to DS
TCP connection.
When parsed,
.Nm
will only use a link local address if it is the only address returned by
.Xr getaddrinfo 3
for the DS hostname.
.It Fl m Ar mirror_level
This option is only meaningful when used with the
.Fl p
option.
It specifies the
.Dq mirror_level ,
which defines how many of the DSs will
have a copy of a file's data storage file.
The default of one implies no mirroring of data storage files on the DSs.
The
.Dq mirror_level
would normally be set to 2 to enable mirroring, but
can be as high as NFSDEV_MAXMIRRORS.
There must be at least
.Dq mirror_level
DSs for each exported file system on the MDS, as specified in the
.Fl p
option.
This implies that, for the above example using "#/export1" and "#/export2",
mirroring cannot be done.
There would need to be two DS entries for each of "#/export1" and "#/export2"
in order to support a
.Dq mirror_level
of two.
.Pp
If mirroring is enabled, the server must use the Flexible File
layout.
If mirroring is not enabled, the server will use the File layout
by default, but this default can be changed to the Flexible File layout if the
.Xr sysctl 8
.Va vfs.nfsd.default_flexfile
is set non-zero.
.It Fl t
Serve TCP NFS clients.
.It Fl u
Serve UDP NFS clients.
.It Fl e
Ignored; included for backward compatibility.
.It Fl N
Cause
.Nm
to execute in the foreground instead of in daemon mode.
.El
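.Pp
The
.Ar pnfs_setup
grammar described under
.Fl p
(DS fields separated by ',', each a hostname and mount path separated by
':', with an optional '#' introducing the mds_path) can be illustrated
with a short POSIX shell sketch; the hostnames and paths are the
hypothetical ones from the examples above:

```shell
# Illustrative only: split a pnfs_setup string the way the fields are
# described above.  ',' separates DS entries, ':' separates the DS
# hostname from its mount path, and an optional '#' introduces the
# mds_path restricting the DS to one exported file system.
setup="nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export2"
IFS=','
for ds in $setup; do
    host=${ds%%:*}                 # DS hostname
    rest=${ds#*:}                  # mount-path[#mds_path]
    dspath=${rest%%#*}             # where the DS file system is mounted
    case $rest in
    *#*) mdspath=${rest#*#} ;;     # DS stores data for this export only
    *)   mdspath="(all)"    ;;     # DS stores data for all exports
    esac
    printf '%s stores %s, mounted at %s\n' "$host" "$mdspath" "$dspath"
done
```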
.Pp
For example,
.Dq Li "nfsd -u -t --minthreads 6 --maxthreads 6"
serves UDP and TCP transports using six kernel threads (servers).
.Pp
For a system dedicated to servicing NFS RPCs, the number of
threads (servers) should be sufficient to handle the peak
client RPC load.
For systems that perform other services, the number of
threads (servers) may need to be limited, so that resources
are available for these other services.
.Pp
The
.Nm
utility listens for service requests at the port indicated in the
NFS server specification; see
.%T "Network File System Protocol Specification" ,
RFC1094,
.%T "NFS: Network File System Version 3 Protocol Specification" ,
RFC1813,
.%T "Network File System (NFS) Version 4 Protocol" ,
RFC7530,
.%T "Network File System (NFS) Version 4 Minor Version 1 Protocol" ,
RFC5661,
.%T "Network File System (NFS) Version 4 Minor Version 2 Protocol" ,
RFC7862,
.%T "File System Extended Attributes in NFSv4" ,
RFC8276 and
.%T "Parallel NFS (pNFS) Flexible File Layout" ,
RFC8435.
.Pp
If
.Nm
detects that
NFS is not loaded in the running kernel, it will attempt
to load a loadable kernel module containing NFS support using
.Xr kldload 2 .
If this fails, or no NFS KLD is available,
.Nm
will exit with an error.
.Pp
If
.Nm
is to be run on a host with multiple interfaces or interface aliases, use
of the
.Fl h
option is recommended.
If you do not use this option, NFS may not respond to
UDP packets from the same IP address to which they were sent.
Use of this option
is also recommended when securing NFS exports on a firewalling machine such
that the NFS sockets can only be accessed by the inside interface.
The
.Xr ipfw 8
utility
would then be used to block NFS-related packets that come in on the outside
interface.
.Pp
If the server has stopped servicing clients and has generated a console message
like
.Dq Li "nfsd server cache flooded..." ,
the value for the
.Xr sysctl 8
.Va vfs.nfsd.tcphighwater
needs to be increased.
This should allow the server to again handle requests without a reboot.
Also, you may want to consider decreasing the value for
.Va vfs.nfsd.tcpcachetimeo
to several minutes (in seconds) instead of 12 hours
when this occurs.
.Pp
Unfortunately, making
.Va vfs.nfsd.tcphighwater
too large can result in the mbuf
limit being reached, as indicated by a console message
like
.Dq Li "kern.ipc.nmbufs limit reached" .
If you cannot find values of the above
.Xr sysctl 8
values that work, you can disable the DRC cache for TCP by setting
.Va vfs.nfsd.cachetcp
to 0.
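.Pp
The DRC tuning described above is done with
.Xr sysctl 8 .
A sketch follows; the numeric values are examples only, not recommended
settings, and should be sized to the workload:
.Bd -literal -offset indent
# raise the TCP DRC limit and shorten the cache timeout (seconds)
sysctl vfs.nfsd.tcphighwater=150000
sysctl vfs.nfsd.tcpcachetimeo=300
# last resort: disable the DRC for TCP entirely
sysctl vfs.nfsd.cachetcp=0
.Ed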
.Pp
The
.Nm
utility has to be terminated with
.Dv SIGUSR1
and cannot be killed with
.Dv SIGTERM
or
.Dv SIGQUIT .
The
.Nm
utility needs to ignore these signals in order to stay alive as long
as possible during a shutdown, otherwise loopback mounts will
not be able to unmount.
If you have to kill
.Nm
just do a
.Dq Li "kill -USR1 <PID of master nfsd>" .
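.Pp
For example, assuming the master
.Nm
is the oldest nfsd process on the system (which
.Xr pgrep 1
.Fl o
selects), it can be told to exit with:
.Bd -literal -offset indent
kill -USR1 $(pgrep -o nfsd)
.Ed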
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr nfsstat 1 ,
.Xr kldload 2 ,
.Xr nfssvc 2 ,
.Xr nfsv4 4 ,
.Xr pnfs 4 ,
.Xr pnfsserver 4 ,
.Xr exports 5 ,
.Xr stablerestart 5 ,
.Xr gssd 8 ,
.Xr ipfw 8 ,
.Xr jail 8 ,
.Xr mountd 8 ,
.Xr nfsiod 8 ,
.Xr nfsrevoke 8 ,
.Xr nfsuserd 8 ,
.Xr rpcbind 8
.Sh HISTORY
The
.Nm
utility first appeared in
.Bx 4.4 .
.Sh BUGS
If
.Nm
is started when
.Xr gssd 8
is not running, it will service AUTH_SYS requests only.
To fix the problem you must kill
.Nm
and then restart it, after the
.Xr gssd 8
is running.
.Pp
For a Flexible File Layout pNFS server,
if there are Linux clients doing NFSv4.1 or NFSv4.2 mounts, those
clients might need the
.Xr sysctl 8
.Va vfs.nfsd.flexlinuxhack
to be set to one on the MDS as a workaround.
.Pp
Linux 5.n kernels appear to have been patched such that this
.Xr sysctl 8
does not need to be set.
.Pp
For NFSv4.2, a Copy operation can take a long time to complete.
If there is a concurrent ExchangeID or DelegReturn operation
which requires the exclusive lock on all NFSv4 state, this can
result in a
.Dq stall
of the
.Nm
server.
If your storage is on ZFS without block cloning enabled,
setting the
.Xr sysctl 8
.Va vfs.zfs.dmu_offset_next_sync
to 0 can often avoid this problem.
It is also possible to set the
.Xr sysctl 8
.Va vfs.nfsd.maxcopyrange
to 10-100 megabytes to try to reduce Copy operation times.
As a last resort, setting the
.Xr sysctl 8
.Va vfs.nfsd.maxcopyrange
to 0 disables the Copy operation.
392