.. SPDX-License-Identifier: GPL-2.0

============================
Tips For Running KUnit Tests
============================

Using ``kunit.py run`` ("kunit tool")
=====================================

Running from any directory
--------------------------

It can be handy to create a bash function like:

.. code-block:: bash

	function run_kunit() {
	  ( cd "$(git rev-parse --show-toplevel)" && ./tools/testing/kunit/kunit.py run "$@" )
	}

.. note::
	Early versions of ``kunit.py`` (before 5.6) didn't work unless run from
	the kernel root, hence the use of a subshell and ``cd``.

Running a subset of tests
-------------------------

``kunit.py run`` accepts an optional glob argument to filter tests. The format
is ``"<suite_glob>[.test_glob]"``.

Say we wanted to run the sysctl tests; we could do so via:

.. code-block:: bash

	$ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
	$ ./tools/testing/kunit/kunit.py run 'sysctl*'

We can filter down to just the "write" tests via:

.. code-block:: bash

	$ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
	$ ./tools/testing/kunit/kunit.py run 'sysctl*.*write*'

We're paying the cost of building more tests than we need this way, but it's
easier than fiddling with ``.kunitconfig`` files or commenting out
``kunit_suite``'s.

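The glob handling described above can be modeled in a few lines. This is an
illustrative sketch only; the ``matches`` helper is made up for this example
and is not kunit tool's actual code:

.. code-block:: python

	from fnmatch import fnmatch

	def matches(filter_glob, suite, test):
		"""Split "<suite_glob>[.test_glob]" and match both parts."""
		if "." in filter_glob:
			suite_glob, test_glob = filter_glob.split(".", 1)
		else:
			suite_glob, test_glob = filter_glob, "*"
		return fnmatch(suite, suite_glob) and fnmatch(test, test_glob)

	print(matches("sysctl*", "sysctl_test", "sysctl_test_dointvec"))         # True
	print(matches("sysctl*.*write*", "sysctl_test", "sysctl_test_dointvec")) # False

That is, a filter without a ``.`` selects every test in the matching suites.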
However, if we wanted to define a set of tests in a less ad hoc way, the next
tip is useful.

Defining a set of tests
-----------------------

``kunit.py run`` (along with ``build`` and ``config``) supports a
``--kunitconfig`` flag. So if you have a set of tests that you want to run on a
regular basis (especially if they have other dependencies), you can create a
specific ``.kunitconfig`` for them.

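For instance, a subsystem could check in a fragment that enables its tests
along with their dependencies; the ``MY_SUBSYSTEM`` option names below are
hypothetical, purely for illustration:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_MY_SUBSYSTEM=y
	CONFIG_MY_SUBSYSTEM_KUNIT_TEST=y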
E.g. kunit has one for its tests:

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit/.kunitconfig

Alternatively, if you're following the convention of naming your
file ``.kunitconfig``, you can just pass in the dir, e.g.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit

.. note::
	This is a relatively new feature (5.12+) so we don't have any
	conventions yet about what files should be checked in versus just
	kept around locally. It's up to you and your maintainer to decide if a
	config is useful enough to submit (and therefore have to maintain).

.. note::
	Having ``.kunitconfig`` fragments in a parent and child directory is
	iffy. There's discussion about adding an "import" statement in these
	files to make it possible to have a top-level config run tests from all
	child directories. But that would mean ``.kunitconfig`` files are no
	longer just simple .config fragments.

	One alternative would be to have kunit tool recursively combine configs
	automagically, but tests could theoretically depend on incompatible
	options, so handling that would be tricky.

Setting kernel commandline parameters
-------------------------------------

You can use ``--kernel_args`` to pass arbitrary kernel arguments, e.g.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kernel_args=param=42 --kernel_args=param2=false


Generating code coverage reports under UML
------------------------------------------

.. note::
	TODO(brendanhiggins@google.com): There are various issues with UML and
	versions of gcc 7 and up. You're likely to run into missing ``.gcda``
	files or compile errors.

This is different from the "normal" way of getting coverage information that is
documented in Documentation/dev-tools/gcov.rst.

Instead of enabling ``CONFIG_GCOV_KERNEL=y``, we can set these options:

.. code-block:: none

	CONFIG_DEBUG_KERNEL=y
	CONFIG_DEBUG_INFO=y
	CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
	CONFIG_GCOV=y


Putting it together into a copy-pastable sequence of commands:

.. code-block:: bash

	# Append coverage options to the current config
	$ ./tools/testing/kunit/kunit.py run --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config
	# Extract the coverage information from the build dir (.kunit/)
	$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/

	# From here on, it's the same process as with CONFIG_GCOV_KERNEL=y
	# E.g. can generate an HTML report in a tmp dir like so:
	$ genhtml -o /tmp/coverage_html coverage.info


If your installed version of gcc doesn't work, you can tweak the steps:

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --make_options=CC=/usr/bin/gcc-6
	$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/ --gcov-tool=/usr/bin/gcov-6

Alternatively, LLVM-based toolchains can also be used:

.. code-block:: bash

	# Build with LLVM and append coverage options to the current config
	$ ./tools/testing/kunit/kunit.py run --make_options LLVM=1 --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config
	$ llvm-profdata merge -sparse default.profraw -o default.profdata
	$ llvm-cov export --format=lcov .kunit/vmlinux -instr-profile default.profdata > coverage.info
	# The coverage.info file is in lcov-compatible format and it can be used to e.g. generate HTML report
	$ genhtml -o /tmp/coverage_html coverage.info


Running tests manually
======================

Running tests without using ``kunit.py run`` is also an important use case.
Currently it's your only option if you want to test on architectures other than
UML.

As running the tests under UML is fairly straightforward (configure and compile
the kernel, run the ``./linux`` binary), this section will focus on testing
non-UML architectures.


Running built-in tests
----------------------

When setting tests to ``=y``, the tests will run as part of boot and print
results to dmesg in TAP format. So you just need to add your tests to your
``.config``, build and boot your kernel as normal.

So if we compiled our kernel with:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=y

Then we'd see output like this in dmesg signaling the test ran and passed:

.. code-block:: none

	TAP version 14
	1..1
	    # Subtest: example
	    1..1
	    # example_simple_test: initializing
	    ok 1 - example_simple_test
	ok 1 - example

Running tests as modules
------------------------

Depending on the tests, you can build them as loadable modules.

For example, we'd change the config options from before to

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=m

Then after booting into our kernel, we can run the test via

.. code-block:: none

	$ modprobe kunit-example-test

This will then cause it to print TAP output to stdout.

.. note::
	The ``modprobe`` will *not* have a non-zero exit code if any test
	failed (as of 5.13). But ``kunit.py parse`` would, see below.

.. note::
	You can set ``CONFIG_KUNIT=m`` as well, however, some features will not
	work and thus some tests might break. Ideally tests would specify they
	depend on ``KUNIT=y`` in their ``Kconfig``'s, but this is an edge case
	most test authors won't think about.
	As of 5.13, the only difference is that ``current->kunit_test`` will
	not exist.

Pretty-printing results
-----------------------

You can use ``kunit.py parse`` to parse dmesg for test output and print out
results in the same familiar format that ``kunit.py run`` does.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py parse /var/log/dmesg


Retrieving per suite results
----------------------------

Regardless of how you're running your tests, you can enable
``CONFIG_KUNIT_DEBUGFS`` to expose per-suite TAP-formatted results:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=m
	CONFIG_KUNIT_DEBUGFS=y

The results for each suite will be exposed under
``/sys/kernel/debug/kunit/<suite>/results``.
So using our example config:

.. code-block:: bash

	$ modprobe kunit-example-test > /dev/null
	$ cat /sys/kernel/debug/kunit/example/results
	... <TAP output> ...

	# After removing the module, the corresponding files will go away
	$ modprobe -r kunit-example-test
	$ cat /sys/kernel/debug/kunit/example/results
	/sys/kernel/debug/kunit/example/results: No such file or directory

Generating code coverage reports
--------------------------------

See Documentation/dev-tools/gcov.rst for details on how to do this.

The only vaguely KUnit-specific advice here is that you probably want to build
your tests as modules. That way you can isolate the coverage of your tests from
other code executed during boot, e.g.

.. code-block:: bash

	# Reset coverage counters before running the test.
	$ echo 0 > /sys/kernel/debug/gcov/reset
	$ modprobe kunit-example-test


Test Attributes and Filtering
=============================

Test suites and cases can be marked with test attributes, such as speed of
test. These attributes will later be printed in test output and can be used to
filter test execution.

Marking Test Attributes
-----------------------

Tests are marked with an attribute by including a ``kunit_attributes`` object
in the test definition.

Test cases can be marked using the ``KUNIT_CASE_ATTR(test_name, attributes)``
macro to define the test case instead of ``KUNIT_CASE(test_name)``.

.. code-block:: c

	static const struct kunit_attributes example_attr = {
		.speed = KUNIT_VERY_SLOW,
	};

	static struct kunit_case example_test_cases[] = {
		KUNIT_CASE_ATTR(example_test, example_attr),
	};

.. note::
	To mark a test case as slow, you can also use ``KUNIT_CASE_SLOW(test_name)``.
	This is a helpful macro as the slow attribute is the most commonly used.

Test suites can be marked with an attribute by setting the "attr" field in the
suite definition.

.. code-block:: c

	static const struct kunit_attributes example_attr = {
		.speed = KUNIT_VERY_SLOW,
	};

	static struct kunit_suite example_test_suite = {
		...,
		.attr = example_attr,
	};

.. note::
	Not all attributes need to be set in a ``kunit_attributes`` object. Unset
	attributes will remain uninitialized and act as though the attribute is set
	to 0 or NULL. Thus, if an attribute is set to 0, it is treated as unset.
	These unset attributes will not be reported and may act as a default value
	for filtering purposes.

Reporting Attributes
--------------------

When a user runs tests, attributes will be present in the raw kernel output (in
KTAP format). Note that attributes will be hidden by default in kunit.py output
for all passing tests but the raw kernel output can be accessed using the
``--raw_output`` flag. This is an example of how test attributes for test cases
will be formatted in kernel output:

.. code-block:: none

	# example_test.speed: slow
	ok 1 example_test

This is an example of how test attributes for test suites will be formatted in
kernel output:

.. code-block:: none

	  KTAP version 2
	  # Subtest: example_suite
	  # module: kunit_example_test
	  1..3
	  ...
	ok 1 example_suite

Additionally, users can output a full attribute report of tests with their
attributes, using the command line flag ``--list_tests_attr``:

.. code-block:: bash

	kunit.py run "example" --list_tests_attr

.. note::
	This report can be accessed when running KUnit manually by passing in the
	module_param ``kunit.action=list_attr``.

Filtering
---------

Users can filter tests using the ``--filter`` command line flag when running
tests. As an example:

.. code-block:: bash

	kunit.py run --filter speed=slow


You can also use the following operations on filters: "<", ">", "<=", ">=",
"!=", and "=". Example:

.. code-block:: bash

	kunit.py run --filter "speed>slow"

This example will run all tests with speeds faster than slow. Note that the
characters < and > are often interpreted by the shell, so they may need to be
quoted or escaped, as above.

Additionally, you can use multiple filters at once. Simply separate filters
using commas. Example:

.. code-block:: bash

	kunit.py run --filter "speed>slow, module=kunit_example_test"

.. note::
	You can use this filtering feature when running KUnit manually by passing
	the filter as a module param: ``kunit.filter="speed>slow, speed<=normal"``.

Filtered tests will not run or show up in the test output. You can use the
``--filter_action=skip`` flag to skip filtered tests instead. These tests will
be shown as skipped in the test output but will not run. To use this feature
when running KUnit manually, use the module param ``kunit.filter_action=skip``.

Rules of Filtering Procedure
----------------------------

Since both suites and test cases can have attributes, there may be conflicts
between attributes during filtering. The process of filtering follows these
rules:

- Filtering always operates at a per-test level.

- If a test has an attribute set, then the test's value is filtered on.

- Otherwise, the value falls back to the suite's value.

- If neither is set, the attribute's global "default" value is used.
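
These rules, combined with the comparison operators described under Filtering,
can be sketched as follows. The helper names are hypothetical; this only models
the semantics and is not kunit's implementation:

.. code-block:: python

	# Speeds are ordered: very_slow < slow < normal (higher = faster).
	SPEED_ORDER = {"very_slow": 0, "slow": 1, "normal": 2}

	def effective_speed(case_speed, suite_speed, default="normal"):
		"""Per-test value wins; else the suite's; else the global default."""
		return case_speed or suite_speed or default

	def passes_filter(speed, op, value):
		"""Evaluate a filter such as speed>slow against a resolved speed."""
		a, b = SPEED_ORDER[speed], SPEED_ORDER[value]
		return {"=": a == b, "!=": a != b, ">": a > b,
			"<": a < b, ">=": a >= b, "<=": a <= b}[op]

	# A case with no explicit speed inherits its suite's "slow" marking,
	# so a "speed>slow" filter excludes it:
	print(passes_filter(effective_speed(None, "slow"), ">", "slow"))  # False

So a test with no explicit speed in a suite marked "slow" is excluded by
``speed>slow``, while an unmarked test in an unmarked suite defaults to
"normal" and is included.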

List of Current Attributes
--------------------------

``speed``

This attribute indicates the speed of a test's execution (how slow or fast the
test is).

This attribute is saved as an enum with the following categories: "normal",
"slow", or "very_slow". The assumed default speed for tests is "normal". This
indicates that the test takes a relatively trivial amount of time (less than
1 second), regardless of the machine it is running on. Any test slower than
this could be marked as "slow" or "very_slow".

The macro ``KUNIT_CASE_SLOW(test_name)`` can be easily used to set the speed
of a test case to "slow".

``module``

This attribute indicates the name of the module associated with the test.

This attribute is automatically saved as a string and is printed for each suite.
Tests can also be filtered using this attribute.

``is_init``

This attribute indicates whether the test uses init data or functions.

This attribute is automatically saved as a boolean and tests can also be
filtered using this attribute.
449