.\" Copyright (c) 2010 The FreeBSD Foundation
.\"
.\" This software was developed by Pawel Jakub Dawidek under sponsorship from
.\" the FreeBSD Foundation.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd December 21, 2019
.Dt HASTD 8
.Os
.Sh NAME
.Nm hastd
.Nd "Highly Available Storage daemon"
.Sh SYNOPSIS
.Nm
.Op Fl dFh
.Op Fl c Ar config
.Op Fl P Ar pidfile
.Sh DESCRIPTION
The
.Nm
daemon is responsible for managing highly available GEOM providers.
.Pp
.Nm
allows the transparent storage of data on two physically separated machines
connected over a TCP/IP network.
Only one machine (cluster node) can actively use storage provided by
.Nm .
This machine is called primary.
The
.Nm
daemon operates on the block level, which makes it transparent to file
systems and applications.
.Pp
There is one main
.Nm
daemon, which starts a new worker process as soon as the role for a given
resource is changed to primary, or as soon as the role for a given
resource is changed to secondary and the remote (primary) node
successfully connects to it.
Every worker process gets a new process title (see
.Xr setproctitle 3 ) ,
which describes its role and the resource it controls.
The exact format is:
.Bd -literal -offset indent
hastd: <resource name> (<role>)
.Ed
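.Pp
For example, on a node which is primary for a hypothetical resource named
.Nm shared ,
the worker process can be inspected with
.Xr ps 1 :
.Bd -literal -offset indent
# ps ax -o command | grep '^hastd:'
hastd: shared (primary)
.Ed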
.Pp
If (and only if)
.Nm
operates in the primary role for the given resource, a corresponding
.Pa /dev/hast/<name>
disk-like device (GEOM provider) is created.
File systems and applications can send I/O requests to this provider.
Every write, delete and flush operation
.Dv ( BIO_WRITE , BIO_DELETE , BIO_FLUSH )
is sent to the local component and replicated on the remote (secondary) node
if it is available.
Read operations
.Dv ( BIO_READ )
are handled locally unless an I/O error occurs or the local version of the data
is not yet up-to-date (synchronization is in progress).
.Pp
The
.Nm
daemon uses the GEOM Gate class to receive I/O requests from the
in-kernel GEOM infrastructure.
The
.Nm geom_gate.ko
module is loaded automatically if the kernel was not compiled with the
following option:
.Bd -ragged -offset indent
.Cd "options GEOM_GATE"
.Ed
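.Pp
On a kernel built without this option, the module can also be loaded
manually and its presence verified, for example:
.Bd -literal -offset indent
# kldload geom_gate
# kldstat -m g_gate
.Ed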
.Pp
The connection between two
.Nm
daemons is always initiated from the one running as primary to the one
running as secondary.
When the primary
.Nm
is unable to connect or the connection fails, it will try to re-establish
the connection every few seconds.
Once the connection is established, the primary
.Nm
will synchronize every extent that was modified during the connection outage
to the secondary
.Nm .
.Pp
It is possible that in the case of a connection outage between the nodes the
.Nm
primary role for the given resource will be configured on both nodes.
This in turn leads to incompatible data modifications.
Such a condition is called split-brain and cannot be automatically
resolved by the
.Nm
daemon, as doing so would most likely lead to data corruption or to the
loss of important changes.
Even though it cannot be fixed by
.Nm
itself, it will be detected and a further connection between the
independently modified nodes will not be possible.
Once this situation is manually resolved by an administrator, the resource
on one of the nodes can be initialized (erasing local data), which makes
a connection to the remote node possible again.
Connection of the freshly initialized component will trigger full resource
synchronization.
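.Pp
For example, one possible way to recover from a split-brain on a
hypothetical resource named
.Nm shared
is to discard the data on
.Nm nodeB
and re-initialize the resource there (which node's data to discard is the
administrator's decision):
.Bd -literal -offset indent
nodeB# hastctl role init shared
nodeB# hastctl create shared
nodeB# hastctl role secondary shared
.Ed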
.Pp
A
.Nm
daemon never picks its role automatically.
The role has to be configured with the
.Xr hastctl 8
control utility by additional software like
.Nm ucarp
or
.Nm heartbeat
that can reliably manage role separation and switch the secondary node to
the primary role in case of the primary's failure.
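.Pp
For example, a minimal
.Nm ucarp
up-script (a sketch only; the resource name
.Nm shared
is hypothetical) could promote the local node when the virtual IP address
is acquired:
.Bd -literal -offset indent
#!/bin/sh
# Invoked by ucarp when this node becomes master.
hastctl role primary shared
.Ed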
.Pp
The
.Nm
daemon can be started with the following command line arguments:
.Bl -tag -width ".Fl P Ar pidfile"
.It Fl c Ar config
Specify an alternative location of the configuration file.
The default location is
.Pa /etc/hast.conf .
.It Fl d
Print or log debugging information.
This option can be specified multiple times to raise the verbosity
level.
.It Fl F
Start the
.Nm
daemon in the foreground.
By default
.Nm
starts in the background.
.It Fl h
Print the
.Nm
usage message.
.It Fl P Ar pidfile
Specify an alternative location of the file where the main process PID
will be stored.
The default location is
.Pa /var/run/hastd.pid .
.El
.Sh FILES
.Bl -tag -width ".Pa /var/run/hastd.pid" -compact
.It Pa /etc/hast.conf
The configuration file for
.Nm
and
.Xr hastctl 8 .
.It Pa /var/run/hastctl
Control socket used by the
.Xr hastctl 8
control utility to communicate with
.Nm .
.It Pa /var/run/hastd.pid
The default location of the
.Nm
PID file.
.El
.Sh EXIT STATUS
Exit status is 0 on success, or one of the values described in
.Xr sysexits 3
on failure.
.Sh EXAMPLES
Launch
.Nm
on both nodes.
Set the role for resource
.Nm shared
to primary on
.Nm nodeA
and to secondary on
.Nm nodeB .
Create a file system on the
.Pa /dev/hast/shared
provider and mount it.
.Bd -literal -offset indent
nodeB# hastd
nodeB# hastctl role secondary shared

nodeA# hastd
nodeA# hastctl role primary shared
nodeA# newfs -U /dev/hast/shared
nodeA# mount -o noatime /dev/hast/shared /shared
.Ed
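.Pp
The connection state and synchronization progress of the resource can then
be checked with
.Xr hastctl 8 ,
for example:
.Bd -literal -offset indent
nodeA# hastctl status shared
.Ed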
.Sh SEE ALSO
.Xr sysexits 3 ,
.Xr geom 4 ,
.Xr hast.conf 5 ,
.Xr ggatec 8 ,
.Xr ggated 8 ,
.Xr ggatel 8 ,
.Xr hastctl 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr g_bio 9
.Sh HISTORY
The
.Nm
utility appeared in
.Fx 8.1 .
.Sh AUTHORS
The
.Nm
was developed by
.An Pawel Jakub Dawidek Aq Mt pjd@FreeBSD.org
under sponsorship from the FreeBSD Foundation.
235