1.\" Copyright (c) 2010 The FreeBSD Foundation
2.\" All rights reserved.
3.\"
4.\" This software was developed by Pawel Jakub Dawidek under sponsorship from
5.\" the FreeBSD Foundation.
6.\"
7.\" Redistribution and use in source and binary forms, with or without
8.\" modification, are permitted provided that the following conditions
9.\" are met:
10.\" 1. Redistributions of source code must retain the above copyright
11.\"    notice, this list of conditions and the following disclaimer.
12.\" 2. Redistributions in binary form must reproduce the above copyright
13.\"    notice, this list of conditions and the following disclaimer in the
14.\"    documentation and/or other materials provided with the distribution.
15.\"
16.\" THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
17.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
18.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
19.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
20.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
21.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
22.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
23.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
24.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
25.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
26.\" SUCH DAMAGE.
27.\"
28.Dd December 21, 2019
29.Dt HASTD 8
30.Os
31.Sh NAME
32.Nm hastd
33.Nd "Highly Available Storage daemon"
34.Sh SYNOPSIS
35.Nm
36.Op Fl dFh
37.Op Fl c Ar config
38.Op Fl P Ar pidfile
39.Sh DESCRIPTION
The
.Nm
daemon is responsible for managing highly available GEOM providers.
.Pp
.Nm
allows the transparent storage of data on two physically separated machines
connected over a TCP/IP network.
Only one machine (cluster node) can actively use storage provided by
.Nm .
This machine is called primary.
The
.Nm
daemon operates on the block level, which makes it transparent to file
systems and applications.
.Pp
There is one main
.Nm
daemon which starts a new worker process as soon as the role for a given
resource is changed to primary, or as soon as the role is changed to
secondary and the remote (primary) node successfully connects to it.
Every worker process gets a new process title (see
.Xr setproctitle 3 ) ,
which describes its role and the resource it controls.
The exact format is:
.Bd -literal -offset indent
hastd: <resource name> (<role>)
.Ed
.Pp
If (and only if)
.Nm
operates in the primary role for the given resource, a corresponding
.Pa /dev/hast/<name>
disk-like device (GEOM provider) is created.
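.Pp
For example, a worker process handling a resource named
.Nm shared
(an illustrative name, matching the
.Sx EXAMPLES
section below) in the primary role would carry the title:
.Bd -literal -offset indent
hastd: shared (primary)
.Ed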
File systems and applications can send I/O requests to this provider.
Every write, delete and flush operation
.Dv ( BIO_WRITE , BIO_DELETE , BIO_FLUSH )
is sent to the local component and replicated on the remote (secondary) node
if it is available.
Read operations
.Dv ( BIO_READ )
are handled locally unless an I/O error occurs or the local version of the data
is not up-to-date yet (synchronization is in progress).
.Pp
The
.Nm
daemon uses the GEOM Gate class to receive I/O requests from the
in-kernel GEOM infrastructure.
The
.Nm geom_gate.ko
module is loaded automatically if the kernel was not compiled with the
following option:
.Bd -ragged -offset indent
.Cd "options GEOM_GATE"
.Ed
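.Pp
To check whether the module is currently loaded, or to load it manually,
commands such as the following can be used:
.Bd -literal -offset indent
kldstat | grep geom_gate
kldload geom_gate
.Ed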
.Pp
The connection between two
.Nm
daemons is always initiated from the one running as primary to the one
running as secondary.
When the primary
.Nm
is unable to connect or the connection fails, it will try to re-establish
the connection every few seconds.
Once the connection is established, the primary
.Nm
will synchronize every extent that was modified during the connection outage
to the secondary
.Nm .
.Pp
In the case of a connection outage between the nodes, it is possible that
the primary role for the given resource will be configured on both nodes.
This in turn leads to incompatible data modifications.
Such a condition is called a split-brain and cannot be automatically
resolved by the
.Nm
daemon, as that would most likely lead to data corruption or loss of
important changes.
Even though it cannot be fixed by
.Nm
itself, it will be detected, and a further connection between independently
modified nodes will not be possible.
Once this situation is manually resolved by an administrator, the resource
on one of the nodes can be initialized (erasing local data), which makes
a connection to the remote node possible again.
Connection of the freshly initialized component will trigger full resource
synchronization.
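.Pp
For example, assuming the resource name
.Nm shared
and node names from the
.Sx EXAMPLES
section below, and that the administrator has decided to discard the data on
.Nm nodeB ,
the resource there can be reinitialized with
.Xr hastctl 8 :
.Bd -literal -offset indent
nodeB# hastctl role init shared
nodeB# hastctl create shared
nodeB# hastctl role secondary shared
.Ed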
.Pp
A
.Nm
daemon never picks its role automatically.
The role has to be configured with the
.Xr hastctl 8
control utility by additional software like
.Nm ucarp
or
.Nm heartbeat
that can reliably manage role separation and switch the secondary node to
the primary role in case of the primary node's failure.
.Pp
The
.Nm
daemon can be started with the following command line arguments:
.Bl -tag -width ".Fl P Ar pidfile"
.It Fl c Ar config
Specify an alternative location of the configuration file.
The default location is
.Pa /etc/hast.conf .
.It Fl d
Print or log debugging information.
This option can be specified multiple times to raise the verbosity
level.
.It Fl F
Start the
.Nm
daemon in the foreground.
By default
.Nm
starts in the background.
.It Fl h
Print the
.Nm
usage message.
.It Fl P Ar pidfile
Specify an alternative location of the file where the main process PID
will be stored.
The default location is
.Pa /var/run/hastd.pid .
.El
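.Pp
For example, to run the daemon in the foreground with increased debugging
verbosity and an alternative configuration file (the path below is purely
illustrative):
.Bd -literal -offset indent
hastd -F -dd -c /usr/local/etc/hast.conf
.Ed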
.Sh FILES
.Bl -tag -width ".Pa /var/run/hastd.pid" -compact
.It Pa /etc/hast.conf
The configuration file for
.Nm
and
.Xr hastctl 8 .
.It Pa /var/run/hastctl
Control socket used by the
.Xr hastctl 8
control utility to communicate with
.Nm .
.It Pa /var/run/hastd.pid
The default location of the
.Nm
PID file.
.El
.Sh EXIT STATUS
Exit status is 0 on success, or one of the values described in
.Xr sysexits 3
on failure.
.Sh EXAMPLES
Launch
.Nm
on both nodes.
Set the role for resource
.Nm shared
to primary on
.Nm nodeA
and to secondary on
.Nm nodeB .
Create a file system on the
.Pa /dev/hast/shared
provider and mount it.
.Bd -literal -offset indent
nodeB# hastd
nodeB# hastctl role secondary shared

nodeA# hastd
nodeA# hastctl role primary shared
nodeA# newfs -U /dev/hast/shared
nodeA# mount -o noatime /dev/hast/shared /shared
.Ed
.Sh SEE ALSO
.Xr sysexits 3 ,
.Xr geom 4 ,
.Xr hast.conf 5 ,
.Xr ggatec 8 ,
.Xr ggated 8 ,
.Xr ggatel 8 ,
.Xr hastctl 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr g_bio 9
.Sh HISTORY
The
.Nm
utility appeared in
.Fx 8.1 .
.Sh AUTHORS
The
.Nm
daemon was developed by
.An Pawel Jakub Dawidek Aq Mt pjd@FreeBSD.org
under sponsorship of the FreeBSD Foundation.