# ZFS Test Suite README

### 1) Building and installing the ZFS Test Suite

The ZFS Test Suite runs under the test-runner framework.  This framework
is built alongside the standard ZFS utilities and is included as part of
the zfs-test package.  The zfs-test package can be built from source as
follows:

    $ ./configure
    $ make pkg-utils

The resulting packages can be installed using the rpm or dpkg command as
appropriate for your distribution.  Alternately, if you have installed
ZFS from a distribution's repository (not from source) the zfs-test package
may be provided for your distribution.

    - Installed from source (use rpm or dpkg as appropriate)
    $ rpm -ivh ./zfs-test*.rpm
    $ dpkg -i ./zfs-test*.deb

    - Installed from package repository
    $ yum install zfs-test
    $ apt-get install zfs-test
### 2) Running the ZFS Test Suite

The prerequisites for running the ZFS Test Suite are:

  * Three scratch disks
    * Specify the disks you wish to use in the $DISKS variable, as a
      space delimited list like this: DISKS='vdb vdc vdd'.  By default
      the zfs-tests.sh script will construct three loopback devices to
      be used for testing: DISKS='loop0 loop1 loop2'.
  * A non-root user with a full set of basic privileges and the ability
    to sudo(8) to root without a password to run the test.
  * Specify any pools you wish to preserve as a space delimited list in
    the $KEEP variable.  All pools detected at the start of testing are
    added automatically.
  * The ZFS Test Suite will add users and groups to the test machine to
    verify functionality.  Therefore it is strongly advised that a
    dedicated test machine, which can be a VM, be used for testing.

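Before launching the suite, the variables above can simply be exported from
the shell.  A minimal sketch (the disk names and the pool name are
assumptions; substitute devices you can safely overwrite and pools from your
own system):

```shell
# Scratch disks the suite may overwrite; never list disks holding needed data.
export DISKS='vdb vdc vdd'
# Pools to preserve, in addition to those auto-detected when testing starts.
export KEEP='rpool'
echo "DISKS=$DISKS"
echo "KEEP=$KEEP"
```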
Once the prerequisites are satisfied, simply run the zfs-tests.sh script:

    $ /usr/share/zfs/zfs-tests.sh

Alternately, the zfs-tests.sh script can be run from the source tree to allow
developers to rapidly validate their work.  In this mode the ZFS utilities and
modules from the source tree will be used (rather than those installed on the
system).  In order to avoid certain types of failures you will need to ensure
the ZFS udev rules are installed.  This can be done manually or by ensuring
some version of ZFS is installed on the system.

    $ ./scripts/zfs-tests.sh
The following zfs-tests.sh options are supported:

    -v          Verbose zfs-tests.sh output.  When specified, additional
                information describing the test environment will be logged
                prior to invoking test-runner.  This includes the runfile
                being used, the DISKS targeted, pools to keep, etc.

    -q          Quiet test-runner output.  When specified it is passed to
                test-runner(1) which causes output to be written to the
                console only for tests that do not pass and the results
                summary.

    -x          Remove all testpools, dm, lo, and files (unsafe).  When
                specified the script will attempt to remove any leftover
                configuration from a previous test run.  This includes
                destroying any pools named testpool, unused DM devices,
                and loopback devices backed by file-vdevs.  This operation
                can be DANGEROUS because it is possible that the script
                will mistakenly remove a resource not related to the testing.

    -k          Disable cleanup after test failure.  When specified the
                zfs-tests.sh script will not perform any additional cleanup
                when test-runner exits.  This is useful when the results of
                a specific test need to be preserved for further analysis.

    -f          Use sparse files directly instead of loopback devices for
                the testing.  When running in this mode certain tests will
                be skipped which depend on real block devices.

    -c          Only create and populate the constrained path

    -I NUM      Number of iterations

    -d DIR      Create sparse files for vdevs in the DIR directory.  By
                default these files are created under /var/tmp/.

    -s SIZE     Use vdevs of SIZE (default: 4G)

    -r RUNFILES Run tests in RUNFILES (default: common.run,linux.run)

    -t PATH     Run a single test at PATH relative to the test suite

    -T TAGS     Comma separated list of tags (default: 'functional')

    -u USER     Run a single test as USER (default: root)

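Options can be combined.  For instance, the following illustrative invocation
logs the verbose configuration, removes leftovers from a previous run, and
creates 2G sparse file vdevs under /tmp/test (a directory chosen for this
example):

    $ /usr/share/zfs/zfs-tests.sh -v -x -d /tmp/test -s 2G
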
The ZFS Test Suite allows the user to specify a subset of the tests via a
runfile or list of tags.

The format of the runfile is explained in test-runner(1), and
the files that zfs-tests.sh uses are available for reference under
/usr/share/zfs/runfiles.  To specify a custom runfile, use the -r option:

    $ /usr/share/zfs/zfs-tests.sh -r my_tests.run

Alternatively, the user can set the needed tags to run only specific tests.

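For example, to run only the tests carrying a given tag (the tag name below is
illustrative; the tags actually defined can be found in the runfiles under
/usr/share/zfs/runfiles):

    $ /usr/share/zfs/zfs-tests.sh -T zvol
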
### 3) Test results

While the ZFS Test Suite is running, one informational line is printed at the
end of each test, and a results summary is printed at the end of the run.  The
results summary includes the location of the complete logs, which is logged in
the form `/var/tmp/test_results/[ISO 8601 date]`.  A normal test run launched
with the `zfs-tests.sh` wrapper script will look something like this:

    $ /usr/share/zfs/zfs-tests.sh -v -d /tmp/test

    --- Configuration ---
    Runfile:         /usr/share/zfs/runfiles/linux.run
    STF_TOOLS:       /usr/share/zfs/test-runner
    STF_SUITE:       /usr/share/zfs/zfs-tests
    STF_PATH:        /var/tmp/constrained_path.G0Sf
    FILEDIR:         /tmp/test
    FILES:           /tmp/test/file-vdev0 /tmp/test/file-vdev1 /tmp/test/file-vdev2
    LOOPBACKS:       /dev/loop0 /dev/loop1 /dev/loop2
    DISKS:           loop0 loop1 loop2
    NUM_DISKS:       3
    FILESIZE:        4G
    ITERATIONS:      1
    TAGS:            functional
    Keep pool(s):    rpool


    /usr/share/zfs/test-runner/bin/test-runner.py  -c /usr/share/zfs/runfiles/linux.run \
        -T functional -i /usr/share/zfs/zfs-tests -I 1
    Test: /usr/share/zfs/zfs-tests/tests/functional/arc/setup (run as root) [00:00] [PASS]
    ...more than 1100 additional tests...
    Test: /usr/share/zfs/zfs-tests/tests/functional/zvol/zvol_swap/cleanup (run as root) [00:00] [PASS]

    Results Summary
    SKIP	  52
    PASS	 1129

    Running Time:	02:35:33
    Percent passed:	95.6%
    Log directory:	/var/tmp/test_results/20180515T054509

### 4) Example of adding and running a test-case (zpool_example)

  This broadly boils down to 5 steps:
  1. Create/Set password-less sudo for the user running the test case.
  2. Edit configure.ac and Makefile.am appropriately.
  3. Create/Modify .run files.
  4. Create the actual test-scripts.
  5. Run the test case.

  We will look at each of these in depth.

  * Set password-less sudo for the 'Test' user, as the test script cannot be
    run as root
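    One hypothetical way to grant this is a sudoers drop-in (the user name
    "tester" is an assumption; substitute the account actually used):
    ~~~~
      tester ALL=(ALL) NOPASSWD: ALL
    ~~~~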
  * Edit file **configure.ac** and include the following line under the
    AC_CONFIG_FILES section.
    ~~~~
      tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile
    ~~~~
  * Edit file **tests/runfiles/Makefile.am** and add the line *zpool_example.run*.
    ~~~~
      pkgdatadir = $(datadir)/@PACKAGE@/runfiles
      dist_pkgdata_DATA = \
        zpool_example.run \
        common.run \
        freebsd.run \
        linux.run \
        longevity.run \
        perf-regression.run \
        sanity.run \
        sunos.run
    ~~~~
  * Create file **tests/runfiles/zpool_example.run**.  This defines the most
    common properties used when running with test-runner.py or zfs-tests.sh.
    ~~~~
      [DEFAULT]
      timeout = 600
      outputdir = /var/tmp/test_results
      tags = ['functional']

      tests = ['zpool_example_001_pos']
    ~~~~
    If adding a test-case to an already existing suite, the runfile will
    already be present and only needs to be updated.  For example, to add
    **zpool_example_002_pos** to the above runfile, only the **"tests ="**
    section of the runfile needs to be updated, as shown below.
    ~~~~
      [DEFAULT]
      timeout = 600
      outputdir = /var/tmp/test_results
      tags = ['functional']

      tests = ['zpool_example_001_pos', 'zpool_example_002_pos']
    ~~~~

  * Edit **tests/zfs-tests/tests/functional/cli_root/Makefile.am** and add a
    line under SUBDIRS.  (Make sure to escape the line end, as other folder
    names will follow.)
    ~~~~
      zpool_example \
    ~~~~
  * Create a new file **tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile.am**;
    its contents could be as below.  This declares that we now have a test
    case *zpool_example_001_pos.ksh*.
    ~~~~
      pkgdatadir = $(datadir)/@PACKAGE@/zfs-tests/tests/functional/cli_root/zpool_example
      dist_pkgdata_SCRIPTS = \
        zpool_example_001_pos.ksh
    ~~~~
  * We can now create our test-case zpool_example_001_pos.ksh under
    **tests/zfs-tests/tests/functional/cli_root/zpool_example/**.
    ~~~~
	#!/bin/ksh -p
	#
	# DESCRIPTION:
	#	zpool_example Test
	#
	# STRATEGY:
	#	1. Demo a very basic test case
	#

	. $STF_SUITE/include/libtest.shlib

	DISKS_DEV1="/dev/loop0"
	DISKS_DEV2="/dev/loop1"
	TESTPOOL=EXAMPLE_POOL

	function cleanup
	{
		# Destroy the pool and remove the backing devices
		destroy_pool $TESTPOOL
		log_must rm -f $DISKS_DEV1
		log_must rm -f $DISKS_DEV2
	}

	log_assert "zpool_example"
	# Run function "cleanup" on exit
	log_onexit cleanup

	# Prep backend devices
	log_must dd if=/dev/zero of=$DISKS_DEV1 bs=512 count=140000
	log_must dd if=/dev/zero of=$DISKS_DEV2 bs=512 count=140000

	# Create pool
	log_must zpool create $TESTPOOL $DISKS_DEV1 $DISKS_DEV2

	log_pass "zpool_example"
    ~~~~
  * Run the test case, which can be done in two ways (described in detail
    above in section 2):
    * test-runner.py (this takes a run file as input; see *zpool_example.run*)
    * zfs-tests.sh (can execute the run file or individual tests)
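
Assuming the package was installed as in section 1 (the paths below follow the
layout shown in section 3 and are illustrative; adjust them to your system),
the two invocations could look like:

    $ /usr/share/zfs/test-runner/bin/test-runner.py \
        -c /usr/share/zfs/runfiles/zpool_example.run -i /usr/share/zfs/zfs-tests
    $ /usr/share/zfs/zfs-tests.sh -r zpool_example.run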
258