==========
NFS Client
==========

The NFS client
==============

The NFS version 2 protocol was first documented in RFC1094 (March 1989).
Since then two more major releases of NFS have been published, with NFSv3
being documented in RFC1813 (June 1995), and NFSv4 in RFC3530 (April
2003).

The Linux NFS client currently supports all the above published versions,
and work is in progress on adding support for minor version 1 of the NFSv4
protocol.

The purpose of this document is to provide information on some of the
special features of the NFS client that can be configured by system
administrators.


The nfs4_unique_id parameter
============================

NFSv4 requires clients to identify themselves to servers with a unique
string.  File open and lock state shared between one client and one server
is associated with this identity.  To support robust NFSv4 state recovery
and transparent state migration, this identity string must not change
across client reboots.

Without any other intervention, the Linux client uses a string that contains
the local system's node name.  System administrators, however, often do not
take care to ensure that node names are fully qualified and do not change
over the lifetime of a client system.  Node names may also be subject to
other administrative constraints that make them unsuitable as part of an
nfs_client_id4 string.

The nfs.nfs4_unique_id boot parameter specifies a unique string that can be
used together with a system's node name when an NFS client identifies itself
to a server.  Thus, if the system's node name is not unique, its
nfs.nfs4_unique_id can help prevent collisions with other clients.

The nfs.nfs4_unique_id string is typically a UUID, though it can contain
anything that is believed to be unique across all NFS clients.  An
nfs4_unique_id string should be chosen when a client system is installed,
just as a system's root file system gets a fresh UUID in its label at
install time.

The string should remain fixed for the lifetime of the client.  It can be
changed safely only if the client shuts down cleanly and all outstanding
NFSv4 state has expired; otherwise NFSv4 state may be lost.

This string can be stored in an NFS client's grub.conf, or it can be provided
via a net boot facility such as PXE.  It may also be specified as an nfs.ko
module parameter.
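
As a concrete sketch, the value could be wired in either way as follows
(the UUID and the modprobe file name below are placeholders, not defaults):

.. code-block:: sh

    # Pick one stable value at install time; this UUID is just a placeholder.
    # Either append it to the kernel command line in the boot loader:
    #
    #     nfs.nfs4_unique_id=0e5ee300-79a5-4505-9f3f-b24db569a9f3
    #
    # or persist it as a module option (the file name is arbitrary):
    echo "options nfs nfs4_unique_id=0e5ee300-79a5-4505-9f3f-b24db569a9f3" \
        > /etc/modprobe.d/nfs-unique-id.conf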

This uniquifier string will be the same for all NFS clients running in
containers unless it is overridden by a value written to
/sys/fs/nfs/net/nfs_client/identifier; that value is local to the network
namespace of the process that writes it.
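
A per-namespace override can be sketched as follows (assuming uuidgen from
util-linux is available; the write must happen inside the container's
network namespace, before its first NFS mount):

.. code-block:: sh

    # Generate a fresh uniquifier; any stable, unique string would do.
    id="$(uuidgen)"

    # Writing it to the pseudo-file affects only the network namespace
    # of the writing process:
    #     echo "${id}" > /sys/fs/nfs/net/nfs_client/identifier
    printf '%s\n' "${id}"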


The DNS resolver
================

NFSv4 allows for one server to refer the NFS client to data that has been
migrated onto another server by means of the special "fs_locations"
attribute. See `RFC3530 Section 6: Filesystem Migration and Replication`_ and
`Implementation Guide for Referrals in NFSv4`_.

.. _RFC3530 Section 6\: Filesystem Migration and Replication: https://tools.ietf.org/html/rfc3530#section-6
.. _Implementation Guide for Referrals in NFSv4: https://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00

The fs_locations information can take the form of either an IP address and
a path, or a DNS hostname and a path. The latter requires the NFS client to
do a DNS lookup in order to mount the new volume, hence the need for an
upcall that allows userland to provide this service.

Assuming that the user has the 'rpc_pipefs' filesystem mounted at the usual
location, /var/lib/nfs/rpc_pipefs, the upcall consists of the following steps:

   (1) The process checks the dns_resolve cache to see if it contains a
       valid entry. If so, it returns that entry and exits.

   (2) If no valid entry exists, the helper script '/sbin/nfs_cache_getent'
       (which may be overridden using the 'nfs.cache_getent' kernel boot
       parameter) is run with two arguments:

       - the cache name, "dns_resolve"
       - the hostname to resolve

   (3) After looking up the corresponding IP address, the helper script
       writes the result into the rpc_pipefs pseudo-file
       '/var/lib/nfs/rpc_pipefs/cache/dns_resolve/channel'
       in the following (text) format:

		"<ip address> <hostname> <ttl>\n"

       Where <ip address> is in the usual IPv4 (e.g. 192.0.2.55) or IPv6
       (ffee:ddcc:bbaa:9988:7766:5544:3322:1100, ffee::1100, ...) format,
       <hostname> is identical to the second argument of the helper
       script, and <ttl> is the 'time to live' of this cache entry (in
       units of seconds).

       .. note::
            If <ip address> is invalid, say the string "0", then a negative
            entry is created, which will cause the kernel to treat the hostname
            as having no valid DNS translation.
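
To illustrate step (3), a successful lookup for a hypothetical host could be
written to the channel as follows (192.0.2.55 is a documentation-only
address, and the hostname is made up):

.. code-block:: sh

    # Compose a dns_resolve entry in the "<ip address> <hostname> <ttl>"
    # format described above.
    ip="192.0.2.55"             # resolved address
    host="server.example.com"   # second argument given to the helper
    ttl=600                     # entry lifetime, in seconds
    entry="${ip} ${host} ${ttl}"
    printf '%s\n' "${entry}"
    # In real use the line is redirected to
    # /var/lib/nfs/rpc_pipefs/cache/dns_resolve/channel instead.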


A basic sample /sbin/nfs_cache_getent
=====================================

.. code-block:: sh

    #!/bin/bash
    #
    # Time to live (in seconds) reported back for each cache entry.
    ttl=600
    #
    cut=/usr/bin/cut
    getent=/usr/bin/getent
    head=/usr/bin/head
    rpc_pipefs=/var/lib/nfs/rpc_pipefs
    #
    die()
    {
        echo "Usage: $0 cache_name entry_name"
        exit 1
    }

    [ $# -lt 2 ] && die
    cachename="$1"
    cache_path="${rpc_pipefs}/cache/${cachename}/channel"

    case "${cachename}" in
        dns_resolve)
            name="$2"
            # Use only the first matching line so that a multi-homed host
            # does not produce a malformed multi-line result.
            result="$(${getent} hosts "${name}" | ${head} -n1 | ${cut} -f1 -d' ')"
            # An empty result means the lookup failed; writing "0" creates
            # a negative cache entry in the kernel.
            [ -z "${result}" ] && result="0"
            ;;
        *)
            die
            ;;
    esac
    echo "${result} ${name} ${ttl}" > "${cache_path}"