5 Now that you have read the [GoogleTest Primer](primer.md) and learned how to
6 write tests using GoogleTest, it's time to learn some new tricks. This document
9 use various flags with your tests.
14 assertions.
18 See [Explicit Success and Failure](reference/assertions.md#success-failure) in
19 the Assertions Reference.
23 See [Exception Assertions](reference/assertions.md#exceptions) in the Assertions
24 Reference.
29 as it's neither possible nor a good idea to anticipate all scenarios a user might
30 run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check a
31 complex expression, for lack of a better macro. This has the problem of not
33 understand what went wrong. As a workaround, some users choose to construct the
34 failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this
36 evaluate.
44 assertion* to get the function arguments printed for free. See
45 [`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) in the Assertions
46 Reference for details.
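
For instance, a minimal sketch of a two-argument predicate assertion might look like the following (the `MutuallyPrime` helper and the values are illustrative, not part of GoogleTest):

```c++
#include <numeric>  // std::gcd (C++17)

// Returns true if m and n have no common divisor other than 1.
bool MutuallyPrime(int m, int n) { return std::gcd(m, n) == 1; }

TEST(PredicateAssertionTest, MutuallyPrime) {
  const int a = 3, b = 4, c = 10;
  EXPECT_PRED2(MutuallyPrime, a, b);  // Succeeds.
  EXPECT_PRED2(MutuallyPrime, b, c);  // Fails and prints the values of b and c.
}
```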
52 feels more like Lisp than C++. The `::testing::AssertionResult` class solves
53 this problem.
56 a success or a failure, and an associated message). You can create an
63 // succeeded.
67 // failed.
74 object.
76 To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
77 write a predicate function that returns `AssertionResult` instead of `bool`.
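
For example, a short sketch along these lines (the `IsEven` helper is illustrative, not part of GoogleTest):

```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0) return testing::AssertionSuccess();
  return testing::AssertionFailure() << n << " is odd";
}

TEST(NumberTest, Evenness) {
  EXPECT_TRUE(IsEven(4));  // Succeeds.
  EXPECT_TRUE(IsEven(5));  // Fails; the message contains "5 is odd" instead of
                           // just reporting that the expression is false.
}
```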
138 [`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) and
139 [`EXPECT_TRUE`](reference/assertions.md#EXPECT_TRUE) unsatisfactory, or some
142 message is formatted. See
143 [`EXPECT_PRED_FORMAT*`](reference/assertions.md#EXPECT_PRED_FORMAT) in the
144 Assertions Reference for details.
148 See [Floating-Point Comparison](reference/assertions.md#floating-point) in the
149 Assertions Reference.
153 Some floating-point operations are useful, but not that often used. In order to
156 [`EXPECT_PRED_FORMAT2`](reference/assertions.md#EXPECT_PRED_FORMAT), for
162 ...
168 `val2`.
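
A minimal self-contained sketch of these predicate-format functions in use (the values are illustrative):

```c++
using ::testing::DoubleLE;
using ::testing::FloatLE;

TEST(FloatingPointTest, LessOrApproximatelyEqual) {
  const float f1 = 1.0f, f2 = 1.0f;
  const double d1 = 2.0, d2 = 2.5;
  EXPECT_PRED_FORMAT2(FloatLE, f1, f2);   // Succeeds: f1 is approximately equal to f2.
  EXPECT_PRED_FORMAT2(DoubleLE, d1, d2);  // Succeeds: d1 is less than d2.
}
```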
172 See [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) in the Assertions
173 Reference.
178 you haven't.)
180 You can use the gMock [string matchers](reference/matchers.md#string-matchers)
181 with [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) to do more string
182 comparison tricks (sub-string, prefix, suffix, regular expression, etc.).
188 ...
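
For example, a sketch of such checks using hypothetical strings `foo_string` and `bar_string` (these matchers require `gmock/gmock.h`):

```c++
using ::testing::MatchesRegex;
using ::testing::StartsWith;

TEST(StringCheckTest, UsesGMockStringMatchers) {
  const std::string foo_string = "Hello world!";
  const std::string bar_string = "Line 42: done";
  EXPECT_THAT(foo_string, StartsWith("Hello"));
  EXPECT_THAT(bar_string, MatchesRegex("Line \\d+.*"));
}
```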
195 See [Windows HRESULT Assertions](reference/assertions.md#HRESULT) in the
196 Assertions Reference.
206 to assert that types `T1` and `T2` are the same. The function does nothing if
207 the assertion is satisfied. If the types are different, the function call will
210 values of `T1` and `T2`. This is mainly useful inside template code.
214 instantiated. For example, given:
230 instantiated. Instead, you need:
233 void Test2() { Foo<bool> foo; foo.Bar(); }
236 to cause a compiler error.
240 You can use assertions in any C++ function. In particular, it doesn't have to be
241 a method of the test fixture class. The one constraint is that assertions that
243 void-returning functions. This is a consequence of Google's not using
244 exceptions. By placing it in a non-void function you'll get a confusing compile
247 `"error: no viable conversion from 'void' to 'string'"`.
250 option is to make the function return the value in an out parameter instead. For
251 example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
253 function returns prematurely. As the function now returns `void`, you can use
254 any assertion inside of it.
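
A short sketch of this rewrite (the `GetEvenNumber` helper is hypothetical):

```c++
// Instead of `int GetEvenNumber(int candidate)`, return the value through an
// out parameter so that the function is void and may use fatal assertions.
void GetEvenNumber(int candidate, int* result) {
  ASSERT_EQ(candidate % 2, 0) << candidate << " is not even";
  *result = candidate;
}

TEST(OutParameterTest, HelperMayUseFatalAssertions) {
  int value = 0;
  GetEvenNumber(4, &value);
  EXPECT_EQ(value, 4);
}
```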
257 that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.
259 {: .callout .note}
262 assertions in them; you'll get a compilation error if you try. Instead, either
265 [constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp)
267 {: .callout .warning}
273 `SetUp`/`TearDown` instead.
278 execution at runtime with the `GTEST_SKIP()` macro. This is useful when you need
280 tests in a meaningful way.
283 of classes derived from either `::testing::Environment` or `::testing::Test`.
299 // Tests for SkipFixture won't be executed.
305 As with assertion macros, you can stream a custom message into `GTEST_SKIP()`.
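
For example, a test can skip itself and record a reason at the same time:

```c++
TEST(SkipTest, DoesSkip) {
  GTEST_SKIP() << "Skipping single test";
  EXPECT_EQ(0, 1);  // Won't fail; it won't be executed at all.
}
```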
310 values to help you debug. It does this using a user-extensible value printer.
313 containers, and any type that supports the `<<` operator. For other types, it
314 prints the raw bytes in the value and hopes that you, the user, can figure it out.
316 As mentioned earlier, the printer is *extensible*. That means you can teach it
317 to do a better job at printing your particular type than to dump the bytes. To
324 class Point { // We want GoogleTest to be able to print instances of this.
325 ...
326 // Provide a friend overload.
329 absl::Format(&sink, "(%d, %d)", point.x, point.y);
337 // AbslStringify overload is defined in the SAME namespace that defines Point.
338 // C++'s look-up rules rely on that.
349 {: .callout .note}
351 string. For more information about supported operations on `AbslStringify()`'s
352 sink, see go/abslstringify.
355 within its own format strings to perform type deduction. `Point` above could be
356 formatted as `"(%v, %v)"` for example, and deduce the `int` values as `%d`.
359 types with extra debugging information for testing purposes only. If so, you can
368 ...
370 *os << "(" << point.x << "," << point.y << ")";
378 // is defined in the SAME namespace that defines Point. C++'s look-up rules
379 // rely on that.
381 *os << "(" << point.x << "," << point.y << ")";
388 used by GoogleTest. This allows you to customize how the value appears in
390 `AbslStringify()`.
393 `AbslStringify()`, the latter will be used for GoogleTest printing.
406 libraries, see go/abslstringify.
411 a condition is not met. These consistency checks, which ensure that the program
412 is in a known good state, are there to fail at the earliest possible time after
413 some program state is corrupted. If the assertion checks the wrong condition,
415 corruption, security holes, or worse. Hence it is vitally important to test that
416 such assertion statements work as expected.
419 _death tests_. More generally, any test that checks that a program terminates
420 (except by throwing an exception) in an expected fashion is also a death test.
424 exception and avoid the crash. If you want to verify exceptions thrown by your
425 code, see [Exception Assertions](#ExceptionAssertions).
428 ["Catching" Failures](#catching-failures).
432 GoogleTest provides assertion macros to support death tests. See
433 [Death Assertions](reference/assertions.md#death) in the Assertions Reference
434 for details.
436 To write a death test, simply use one of the macros inside your test function.
441 // This death test uses a compound statement.
445 }, "Error on line .* of Foo()");
463 * calling `KillProcess()` kills the process with signal `SIGKILL`.
466 necessary.
470 1. does `statement` abort or exit the process?
471 2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
474 3. does the stderr output match `matcher`?
478 the process.
482 {: .callout .important}
485 demonstrated in the above example. The
486 [Death Tests And Threads](#death-tests-and-threads) section below explains why.
493 class FooTest : public testing::Test { ... };
509 [RE2](https://github.com/google/re2/wiki/Syntax) syntax. Otherwise, for POSIX
511 [POSIX extended regular expression](https://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_cha…
512 syntax. To learn about POSIX syntax, you may want to read this
513 [Wikipedia entry](https://en.wikipedia.org/wiki/Regular_expression#POSIX_extended).
515 On Windows, GoogleTest uses its own simple regular expression implementation. It
516 lacks many features. For example, we don't support union (`"x|y"`), grouping
518 others. Below is what we do support (`A` denotes a literal character, period
519 (`.`), or a single `\\ ` escape sequence; `x` and `y` denote regular
520 expressions.):
537 `.` | matches any single character except `\n`
546 defines macros to govern which regular expression it is using. The macros are:
547 `GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
549 limited syntax only.
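
For instance, a death test that wants to stay portable might branch on those macros; this sketch reuses the hypothetical `Foo()` from the earlier example:

```c++
TEST(MyDeathTest, MatchesPortably) {
#if GTEST_USES_POSIX_RE
  // The full POSIX extended syntax is available.
  EXPECT_DEATH(Foo(5), "Error on line [0-9]+ of Foo\\(\\)");
#else
  // The simple implementation understands \d but not brackets or grouping.
  EXPECT_DEATH(Foo(5), "Error on line \\d+ of Foo\\(\\)");
#endif
}
```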
553 See [Death Assertions](reference/assertions.md#death) in the Assertions
554 Reference.
558 The reason for the two death test styles has to do with thread safety. Due to
560 be run in a single-threaded context. Sometimes, however, it isn't feasible to
561 arrange that kind of environment. For example, statically-initialized modules
562 may start threads before main is ever reached. Once threads have been created,
563 it may be difficult or impossible to clean them up.
565 GoogleTest has three features intended to raise awareness of threading issues.
567 1. A warning is emitted if multiple threads are running when a death test is
568 encountered.
569 2. Test suites with a name ending in "DeathTest" are run before all other
570 tests.
571 3. It uses `clone()` instead of `fork()` to spawn the child process on Linux
573 to cause the child to hang when the parent process has multiple threads.
576 executed in a separate process and cannot affect the parent.
581 risks of testing in a possibly multithreaded environment. It trades increased
582 test execution time (potentially dramatically so) for improved thread safety.
584 The automated testing framework does not set the style flag. You can choose a
592 or in individual tests. Recall that flags are saved before running each test and
593 restored afterwards, so you need not do that yourself. For example:
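
A sketch of overriding the style in a single test, using `GTEST_FLAG_SET` (available in recent GoogleTest releases); `ThisShouldDie()` is a hypothetical function:

```c++
TEST(MyDeathTest, TestOne) {
  GTEST_FLAG_SET(death_test_style, "threadsafe");
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the default, "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```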
616 The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
618 exception, the death test is considered to have failed. Some GoogleTest macros
619 may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid
620 them in `statement`.
622 Since `statement` runs in the child process, any in-memory side effect (e.g.
624 in the parent process. In particular, if you release memory in a death test,
626 memory reclaimed. To solve this problem, you can
628 1. try not to free memory in a death test;
629 2. free the memory again in the parent process; or
630 3. do not use the heap checker in your program.
634 message.
638 handlers registered with `pthread_atfork(3)`.
642 {: .callout .note}
645 [a custom GMock matcher](gmock_cook_book.md#NewMatchers) instead. This lets you
647 issues described below.
653 from. You can alleviate this problem using extra logging or custom failure
654 messages, but that usually clutters up your tests. A better solution is to use
665 where `message` can be anything streamable to `std::ostream`. `SCOPED_TRACE`
667 added in every failure message. `ScopedTrace` accepts explicit file name and
668 line number in arguments, which is useful for writing test helpers. The effect
669 will be undone when the control leaves the current lexical scope.
682 18: // every failure in this scope.
685 21: // Now it won't.
693 path/to/foo_test.cc:11: Failure
698 path/to/foo_test.cc:17: A
700 path/to/foo_test.cc:12: Failure
707 `Sub1()` the two failures come from respectively. (You could add an extra
709 tedious.)
713 1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the
714 beginning of a sub-routine, instead of at each call site.
715 2. When calling sub-routines inside a loop, make the loop iterator part of the
717 is from.
718 3. Sometimes the line number of the trace point is enough for identifying the
719 particular invocation of a sub-routine. In this case, you don't have to
720 choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
721 4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer
722 scope. In this case, all active trace points will be included in the failure
723 messages, in the reverse order in which they are encountered.
724 5. The trace dump is clickable in Emacs - hit `return` on a line number and
730 when they fail they only abort the _current function_, not the entire test. For
735 // Generates a fatal failure and aborts the current function.
738 // The following won't be executed.
739 ...
744 // in Subroutine() to abort the entire test.
746 // The actual behavior: the function goes on after Subroutine() returns.
752 To alleviate this, GoogleTest provides three different solutions. You could use
754 `HasFatalFailure()` function. They are described in the following
755 subsections.
764 if (result.type() == testing::TestPartResult::kFatalFailure) {
770 ...
771 testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
777 they won't see failed `OnTestPartResult`.
782 in it, the test will continue after the subroutine returns. This may not be what
783 you want.
785 Often people want fatal failures to propagate like exceptions. For that
790 …L_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread.
793 the result of this type of assertion. If `statement` creates new threads,
794 failures in these threads are ignored.
807 Assertions from multiple threads are currently not supported on Windows.
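
A minimal sketch of these assertions (the `Foo()` and `Bar()` subroutines are hypothetical):

```c++
TEST(SubroutineTest, DoesNotGenerateFatalFailures) {
  ASSERT_NO_FATAL_FAILURE(Foo());

  int i = 0;
  EXPECT_NO_FATAL_FAILURE({
    i = Bar();
  });
}
```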
812 assertion in the current test has suffered a fatal failure. This allows
813 functions to catch fatal failures in a sub-routine and return early.
818 ...
829 // Aborts if Subroutine() had a fatal failure.
832 // The following won't be executed.
833 ...
846 test has at least one failure of either kind.
851 information, where `value` can be either a string or an `int`. The *last* value
853 [XML output](#generating-an-xml-report) if you specify one. For example, the
866 ...
867 …<testcase name="MinAndMaxWidgets" file="test.cpp" line="1" status="run" time="0.006" classname="Wi…
868 ...
871 {: .callout .note}
874 > * `RecordProperty()` is a static member of the `Test` class. Therefore it
876 > `TEST` body and the test fixture class.
879 > `type_param`, and `value_param`).
880 > * Calling `RecordProperty()` outside of the lifespan of a test is allowed.
883 > attributed to the XML element for the test suite. If it's called outside
884 > of all test suites (e.g. in a test environment), it will be attributed to
885 > the top-level XML element.
890 tests independent and easier to debug. However, sometimes tests use resources
892 expensive.
895 single resource copy. So, in addition to per-test set-up/tear-down, GoogleTest
896 also supports per-test-suite set-up/tear-down. To use it:
898 1. In your test fixture class (say `FooTest` ), declare as `static` some member
899 variables to hold the shared resources.
900 2. Outside your test fixture class (typically just below it), define those
901 member variables, optionally giving them initial values.
902 3. In the same test fixture class, define a public member function `static void
905 TearDownTestSuite()` function to tear them down.
908 *first test* in the `FooTest` test suite (i.e. before creating the first
910 in it (i.e. after deleting the last `FooTest` object). In between, the tests can
911 use the shared resources.
914 preceding or following another. Also, the tests must either not modify the state
916 state to its original value before passing control to the next test.
920 body to be run only once. Also, derived classes still have access to shared
923 properly cleaned up in `TearDownTestSuite()`.
930 // Per-test-suite set-up.
931 // Called before the first test in this test suite.
932 // Can be omitted if not needed.
934 shared_resource_ = new ...;
938 // in subclasses of FooTest and lead to a memory leak.
941 // shared_resource_ = new ...;
945 // Per-test-suite tear-down.
946 // Called after the last test in this test suite.
947 // Can be omitted if not needed.
953 // You can define per-test set-up logic as usual.
954 void SetUp() override { ... }
956 // You can define per-test tear-down logic as usual.
957 void TearDown() override { ... }
959 // Some expensive resource shared by all tests.
966 ... you can refer to shared_resource_ here ...
969 TEST_F(FooTest, Test2) {
970 ... you can refer to shared_resource_ here ...
974 {: .callout .note}
977 `TEST_P`.
982 level, you can also do it at the test program level. Here's how.
992 // Override this to define how to set up the environment.
995 // Override this to define how to tear down the environment.
1007 Now, when `RUN_ALL_TESTS()` is invoked, it first calls the `SetUp()` method. The
1009 fatal failures and `GTEST_SKIP()` has not been invoked. Finally, `TearDown()` is
1010 called.
1013 test to be performed. Importantly, `TearDown()` is executed even if the test is
1014 not run due to a fatal failure or `GTEST_SKIP()`.
1017 `gtest_recreate_environments_when_repeating`. `SetUp()` and `TearDown()` are
1019 iteration. However, if test environments are not recreated for each iteration,
1021 on the last iteration.
1023 It's OK to register multiple environment objects. In this case, their `SetUp()`
1025 called in the reverse order.
1027 Note that GoogleTest takes ownership of the registered environment objects.
1028 Therefore **do not delete them** by yourself.
1031 probably in `main()`. If you use `gtest_main`, you need to call this before
1032 `main()` starts for it to take effect. One way to do this is to define a global
1045 in which global variables from different translation units are initialized).
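
A sketch of such a `main()`, reusing the `FooEnvironment` class described above:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // GoogleTest takes ownership of the registered environment.
  testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```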
1050 parameters without writing multiple copies of the same test. This is useful in a
1054 command-line flags. You want to make sure your code performs correctly for
1055 various values of those flags.
1056 * You want to test different implementations of an OO interface.
1057 * You want to test your code over various inputs (a.k.a. data-driven testing).
1058 This feature is easy to abuse, so please exercise your good sense when doing
1063 To write value-parameterized tests, first you should define a fixture class. It
1066 values. For convenience, you can just derive the fixture class from
1068 and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a
1070 values.
1072 {: .callout .note}
1075 `TEST_P`.
1080 // You can implement all the usual fixture class members here.
1082 // TestWithParam<T>.
1087 ...
1091 ...
1096 as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
1097 prefer to think of it as.
1103 EXPECT_TRUE(foo.Blah(GetParam()));
1104 ...
1108 ...
1113 test suite with any set of parameters you want. GoogleTest defines a number of
1115 [`INSTANTIATE_TEST_SUITE_P`](reference/testing.md#INSTANTIATE_TEST_SUITE_P) in
1116 the Testing Reference.
1120 [`Values`](reference/testing.md#param-generators) parameter generator:
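
Such an instantiation might look like this sketch (the prefix and values match the generated test names listed below):

```c++
INSTANTIATE_TEST_SUITE_P(MeenyMinyMoe,
                         FooTest,
                         testing::Values("meeny", "miny", "moe"));
```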
1128 {: .callout .note}
1130 function scope.
1133 instantiation of the test suite. The next argument is the name of the test
1135 [parameter generator](reference/testing.md#param-generators).
1138 initialized (via `InitGoogleTest()`). Any prior initialization done in the
1140 the results of flag parsing.
1144 actual test suite name. Remember to pick unique prefixes for different
1145 instantiations. The tests from the instantiation above will have these names:
1147 * `MeenyMinyMoe/FooTest.DoesBlah/0` for `"meeny"`
1148 * `MeenyMinyMoe/FooTest.DoesBlah/1` for `"miny"`
1149 * `MeenyMinyMoe/FooTest.DoesBlah/2` for `"moe"`
1150 * `MeenyMinyMoe/FooTest.HasBlahBlah/0` for `"meeny"`
1151 * `MeenyMinyMoe/FooTest.HasBlahBlah/1` for `"miny"`
1152 * `MeenyMinyMoe/FooTest.HasBlahBlah/2` for `"moe"`
1154 You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).
1158 [`ValuesIn`](reference/testing.md#param-generators) parameter generator:
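
For instance, a sketch that feeds an array of values to `ValuesIn` (assuming `FooTest` takes a string-like parameter as above):

```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_SUITE_P(Pets, FooTest, testing::ValuesIn(pets));
```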
1167 * `Pets/FooTest.DoesBlah/0` for `"cat"`
1168 * `Pets/FooTest.DoesBlah/1` for `"dog"`
1169 * `Pets/FooTest.HasBlahBlah/0` for `"cat"`
1170 * `Pets/FooTest.HasBlahBlah/1` for `"dog"`
1174 `INSTANTIATE_TEST_SUITE_P` statement.
1178 `GoogleTestVerification`. If you have a test suite where that omission is not an
1187 You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples.
1189 [sample7_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample7_un…
1190 [sample8_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample8_un…
1194 In the above, we define and instantiate `FooTest` in the *same* source file.
1196 other people instantiate them later. This pattern is known as *abstract tests*.
1200 pass. When someone implements the interface, they can instantiate your suite to
1201 get all the interface-conformance tests for free.
1205 1. Put the definition of the parameterized test fixture class (e.g. `FooTest`)
1206 in a header file, say `foo_param_test.h`. Think of this as *declaring* your
1207 abstract tests.
1208 2. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
1209 `foo_param_test.h`. Think of this as *implementing* your abstract tests.
1211 Once they are defined, you can instantiate them by including `foo_param_test.h`,
1213 contains `foo_param_test.cc`. You can instantiate the same abstract test suite
1214 multiple times, possibly in different source files.
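
For example, an implementation's test file might instantiate the abstract suite like this sketch (the file name `bar_param_test.cc` and the parameter values are hypothetical):

```c++
// bar_param_test.cc
#include "foo_param_test.h"

INSTANTIATE_TEST_SUITE_P(Bar, FooTest, testing::Values("a", "b"));
```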
1220 the test parameters. The function should accept one argument of type
1221 `testing::TestParamInfo<class ParamType>`, and return `std::string`.
1224 returns the value of `testing::PrintToString(GetParam())`. It does not work for
1225 `std::string` or C strings.
1227 {: .callout .note}
1229 alphanumeric characters. In particular, they
1230 [should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-unde…
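
A sketch of the built-in generator in use (`MyIntSuite` is hypothetical and must be parameterized over a type whose printed form is a valid name, e.g. `int`):

```c++
INSTANTIATE_TEST_SUITE_P(Numbers, MyIntSuite,
                         testing::Values(1, 5, 10),
                         testing::PrintToStringParamName());
// Test names end with the printed value, e.g. Numbers/MyIntSuite.SomeTest/10.
```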
1246 generate helpful parameter names (e.g. strings as demonstrated above). The
1248 and a string, and also demonstrates how to combine generators. It uses a lambda
1264 std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar",
1265 std::get<1>(info.param));
1274 sure that all of them satisfy some common requirements. Or, you may have defined
1276 verify it. In both cases, you want the same test logic repeated for different
1277 types.
1282 types, you'll end up writing `m*n` `TEST`s.
1284 *Typed tests* allow you to repeat the same test logic over a list of types. You
1286 when writing typed tests. Here's how you do it:
1288 First, define a fixture class template. It should be parameterized by a type.
1295 ...
1311 macro to parse correctly. Otherwise the compiler will think that each comma in
1312 the type list introduces a new macro argument.
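
The alias described here might look like the following sketch (the type list is illustrative):

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```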
1315 test suite. You can repeat this as many times as you want:
1320 // parameter. Since we are inside a derived class template, C++ requires
1321 // us to visit the members of FooTest via 'this'.
1325 // prefix.
1329 // prefix. The 'typename' is required to satisfy the compiler.
1332 values.push_back(n);
1333 ...
1336 TYPED_TEST(FooTest, HasPropertyA) { ... }
1339 You can see [sample6_unittest.cc] for a complete example.
1341 [sample6_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample6_un…
1346 you to know the list of types ahead of time. Instead, you can define the test
1347 logic first and instantiate it with different type lists later. You can even
1348 instantiate it more than once in the same program.
1352 the interface/concept should have. Then, the author of each implementation can
1354 the requirements, without having to write similar tests repeatedly. Here's an
1363 ...
1373 Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
1378 // Inside a test, refer to TypeParam to get the type parameter.
1381 // You will need to use `this` explicitly to refer to fixture members.
1383 ...
1386 TYPED_TEST_P(FooTest, HasPropertyA) { ... }
1390 `REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first
1399 Finally, you are free to instantiate the pattern with the types you want. If you
1401 source files and instantiate it multiple times.
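
An instantiation might then look like this sketch (the `My` prefix and the type list are illustrative):

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes);
```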
1410 actual test suite name. Remember to pick unique prefixes for different
1411 instances.
1414 that type directly without `::testing::Types<...>`, like this:
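
A sketch of such a single-type instantiation, reusing the `My` prefix and the `FooTest` suite from above:

```c++
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int);
```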
1420 You can see [sample6_unittest.cc] for a complete example.
1425 break as long as the change is not observable by users. Therefore, **per the
1427 its public interfaces.**
1430 consider if there's a better design.** The desire to test internal
1431 implementation is often a sign that the class is doing too much. Consider
1432 extracting an implementation class, and testing it. Then use that implementation
1433 class in the original class.
1435 If you absolutely have to test non-public interface code though, you can. There
1445 are only visible within the same translation unit. To test them, you can
1446 `#include` the entire `.cc` file being tested in your `*_test.cc` file.
1447 (#including `.cc` files is not a good way to reuse code - you should not do
1452 normally uses, and put the private declarations in a `*-internal.h` file.
1453 Your production `.cc` files and your tests are allowed to include this
1454 internal header, but your clients are not. This way, you can fully test your
1455 internal implementation without leaking it to your clients.
1458 friends. To access a class' private members, you can declare your test
1459 fixture as a friend to the class and define accessors in your fixture. Tests
1461 class via the accessors in the fixture. Note that even though your fixture
1464 fixture.
1467 implementation class, which is then declared in a `*-internal.h` file. Your
1468 clients aren't allowed to include this header but your tests can. Such is
1470 …[Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-…
1471 (Private Implementation) idiom.
1483 // foo.h
1485 ...
1492 // foo_test.cc
1493 ...
1496 EXPECT_EQ(foo.Bar(NULL), 0); // Uses Foo's private member Bar().
1500 Pay special attention when your class is defined in a namespace. If you want
1502 defined in the exact same namespace (no anonymous or inline namespaces).
1513 ... definition of the class Foo ...
1526 ...
1529 TEST_F(FooTest, Bar) { ... }
1530 TEST_F(FooTest, Baz) { ... }
1538 your utility. What framework would you use to test it? GoogleTest, of course.
1540 The challenge is to verify that your testing utility reports failures correctly.
1542 the exception and assert on it. But GoogleTest doesn't use exceptions, so how do
1545 `"gtest/gtest-spi.h"` contains some constructs to do this.
1552 to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
1559 if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
1562 type of expectation. If `statement` creates new threads, failures in these
1563 threads are also ignored. If you want to catch failures in other threads as
1571 {: .callout .note}
1572 NOTE: Assertions from multiple threads are currently not supported on Windows.
1576 1. You cannot stream a failure message to either macro.
1578 2. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
1579 local non-static variables or non-static members of `this` object.
1581 3. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
1582 value.
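
A minimal sketch of both macros (they are declared in `gtest/gtest-spi.h`; the expected substrings only need to appear somewhere in the failure message):

```c++
#include "gtest/gtest-spi.h"

TEST(FailureCatchingTest, CatchesExpectedFailures) {
  EXPECT_FATAL_FAILURE(ASSERT_EQ(1, 2), "Expected equality");
  EXPECT_NONFATAL_FAILURE(EXPECT_EQ(2, 3), "Expected equality");
}
```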
1587 where runtime registration logic is required. For those cases, the framework
1589 tests dynamically.
1591 This is an advanced API only to be used when the `TEST` macros are insufficient.
1593 complexity of calling this function.
1605 function pointer that creates a new instance of the Test object. It hands
1606 ownership to the caller. The signature of the callable is `Fixture*()`, where
1607 `Fixture` is the test fixture class for the test. All tests registered with the
1608 same `test_suite_name` must return the same fixture type. This is checked at
1609 runtime.
1612 `SetUpTestSuite` and `TearDownTestSuite` for it.
1615 undefined.
1622 // All of these optional, just like in regular macro usage.
1623 static void SetUpTestSuite() { ... }
1624 static void TearDownTestSuite() { ... }
1625 void SetUp() override { ... }
1626 void TearDown() override { ... }
1632 void TestBody() override { ... }
1641 "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
1642 std::to_string(v).c_str(),
1644 // Important to use the fixture type as the return type here.
1648 ...
1653 ...
1660 Sometimes a function may need to know the name of the currently running test.
1662 the golden file name based on which test is running. The
1663 [`TestInfo`](reference/testing.md#TestInfo) class has this information.
1666 `current_test_info()` on the [`UnitTest`](reference/testing.md#UnitTest)
1670 // Gets information about the currently running test.
1671 // Do NOT delete the returned object - it's managed by the UnitTest class.
1675 printf("We are in test %s of test suite %s.\n",
1680 `current_test_info()` returns a null pointer if no test is running. In
1683 functions called from them.
1688 about the progress of a test program and test failures. The events you can
1690 method, among others. You may use this API to augment or replace the standard
1692 of output, such as a GUI or a database. You can also use test events as
1693 checkpoints to implement a resource leak checker, for example.
1698 [`testing::TestEventListener`](reference/testing.md#TestEventListener) or
1699 [`testing::EmptyTestEventListener`](reference/testing.md#EmptyTestEventListener)
1702 `OnTestStart()` method will be called). The latter provides an empty
1704 to override the methods it cares about.
1707 argument. The following argument types are used:
1713 * `TestPartResult` represents the result of a test assertion.
1716 interesting information about the event and the test program's state.
1722 // Called before a test starts.
1724 printf("*** Test %s.%s starting.\n",
1725 test_info.test_suite_name(), test_info.name());
1728 // Called after a failed assertion or a SUCCESS().
1731 test_part_result.failed() ? "*** Failure" : "Success",
1732 test_part_result.file_name(),
1733 test_part_result.line_number(),
1734 test_part_result.summary());
1737 // Called after a test ends.
1739 printf("*** Test %s.%s ending.\n",
1740 test_info.test_suite_name(), test_info.name());
1749 [`TestEventListeners`](reference/testing.md#TestEventListeners) - note the "s"
1756 // Gets hold of the event listener list.
1759 // Adds a listener to the end. GoogleTest takes the ownership.
1760 listeners.Append(new MinimalistPrinter);
1766 its output will mingle with the output from your minimalist printer. To suppress
1767 the default printer, just release it from the event listener list and delete it.
1771 ...
1772 delete listeners.Release(listeners.default_result_printer());
1773 listeners.Append(new MinimalistPrinter);
1777 Now, sit back and enjoy a completely different output from your tests. For more
1778 details, see [sample9_unittest.cc].
1780 [sample9_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample9_un…
1782 You may append more than one listener to the list. When an `On*Start()` or
1786 first). An `On*End()` event will be received by the listeners in the *reverse*
1787 order. This allows output by listeners added later to be framed by output from
1788 listeners added earlier.
1793 when processing an event. There are some restrictions:
1795 1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will
1796 cause `OnTestPartResult()` to be called recursively).
1797 2. A listener that handles `OnTestPartResult()` is not allowed to generate any
1798 failure.
1801 handle `OnTestPartResult()` *before* listeners that can generate failures. This
1803 by the former.
1805 See [sample10_unittest.cc] for an example of a failure-raising listener.
1807 [sample10_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample10_…
1811 GoogleTest test programs are ordinary executables. Once built, you can run them
1813 and/or command line flags. For the flags to work, your programs must call
1814 `::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.
1817 with the `--help` flag.
1820 latter takes precedence.
1827 running them so that a filter may be applied if needed. Including the flag
1832 TestSuite1.
1835 TestSuite2.
1839 None of the tests listed are actually run if the flag is provided. There is no
1840 corresponding environment variable for this flag.
1844 By default, a GoogleTest program runs all tests the user has defined. Sometimes,
1845 you want to run only a subset of the tests (e.g. for debugging or quickly
1846 verifying a change). If you set the `GTEST_FILTER` environment variable or the
1848 whose full names (in the form of `TestSuiteName.TestName`) match the filter.
1852 '`:`'-separated pattern list (called the *negative patterns*). A test matches
1854 match any of the negative patterns.
1857 character). For convenience, the filter `'*-NegativePatterns'` can also be
1858 written as `'-NegativePatterns'`.
1862 * `./foo_test` Has no flag, and thus runs all its tests.
1863 * `./foo_test --gtest_filter=*` Also runs everything, due to the single
1864 match-everything `*` value.
1865 * `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite
1866 `FooTest` .
1867 * `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
1868 name contains either `"Null"` or `"Constructor"` .
1869 * `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
1870 * `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
1871 suite `FooTest` except `FooTest.Bar`.
1872 * `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
1873 everything in test suite `FooTest` except `FooTest.Bar` and everything in
1874 test suite `BarTest` except `BarTest.Foo`.
1878 By default, a GoogleTest program runs all tests the user has defined. In some
1879 cases (e.g. iterative test development & execution) it may be desirable to stop
1880 test execution upon first failure (trading improved latency for completeness).
1882 the test runner will stop execution as soon as the first test failure is found.
1887 `DISABLED_` prefix to its name. This will exclude it from execution. This is
1889 still compiled (and thus won't rot).
1893 the test suite name.
1899 // Tests that Foo does Abc.
1900 TEST(FooTest, DISABLED_DoesAbc) { ... }
1902 class DISABLED_BarTest : public testing::Test { ... };
1904 // Tests that Bar does Xyz.
1905 TEST_F(DISABLED_BarTest, DoesXyz) { ... }
1908 {: .callout .note}
1909 NOTE: This feature should only be used for temporary pain-relief. You still have
1910 to fix the disabled tests at a later date. As a reminder, GoogleTest will print
1911 a banner warning you if a test program contains any disabled tests.
1913 {: .callout .tip}
1915 `grep`. This number can be used as a metric for
1916 improving your test quality.
1922 `GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
1924 disabled tests to run.
1928 Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
1930 a debugger. This can be a major source of frustration.
1933 a program many times. Hopefully, a flaky test will eventually fail and give you
1934 a chance to debug. Here's how to use it:
1938 Repeat foo_test 1000 times and don't stop at failures.
1941 A negative count means repeating forever.
1944 Repeat foo_test 1000 times, stopping at the first failure. This
1947 variables and stacks.
1949 $ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
1950 Repeat the tests whose name matches the filter 1000 times.
1955 repeated in each iteration as well, as the flakiness may be in it. To avoid
1957 `--gtest_recreate_environments_when_repeating=false`{.nowrap}.
1960 variable.
1965 environment variable to `1`) to run the tests in a program in a random order.
1966 This helps to reveal bad dependencies between tests.
1968 By default, GoogleTest uses a random seed calculated from the current time.
1969 Therefore you'll get a different order every time. The console output includes
1971 later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
1973 integer in the range [0, 99999]. The seed value 0 is special: it tells
1975 time.
1978 random seed and re-shuffle the tests in each iteration.
1983 want to run the test functions in parallel and get the result faster. We call
1984 this technique *sharding*, where each machine is called a *shard*.
1986 GoogleTest is compatible with test sharding. To take advantage of this feature,
1989 1. Allocate a number of machines (shards) to run the tests.
1990 1. On each shard, set the `GTEST_TOTAL_SHARDS` environment variable to the total
1991 number of shards. It must be the same for all shards.
1992 1. On each shard, set the `GTEST_SHARD_INDEX` environment variable to the index
1993 of the shard. Different shards must be assigned different indices, which
1994 must be in the range `[0, GTEST_TOTAL_SHARDS - 1]`.
1995 1. Run the same test program on all shards. When GoogleTest sees the above two
1996 environment variables, it will select a subset of the test functions to run.
1998 once.
1999 1. Wait for all shards to finish, then collect and report the results.
2002 understand this protocol. In order for your test runner to figure out which test
2004 to a non-existent file path. If a test program supports sharding, it will create
2005 this file to acknowledge that fact; otherwise it will not create it. The actual
2007 useful information in it in the future.
2009 Here's an example to make it clear. Suppose you have a test program `foo_test`
2020 Suppose you have 3 machines at your disposal. To run the test functions in
2022 `GTEST_SHARD_INDEX` to 0, 1, and 2 on the machines respectively. Then you would
2023 run the same `foo_test` on each machine.
2028 * Machine #0 runs `A.V` and `B.X`.
2029 * Machine #1 runs `A.W` and `B.Y`.
2030 * Machine #2 runs `B.Z`.
2039 <pre>...
2041 <font color="green">[ RUN ]</font> FooTest.DoesAbc
2042 <font color="green">[ OK ]</font> FooTest.DoesAbc
2044 <font color="green">[ RUN ]</font> BarTest.HasXyzProperty
2045 <font color="green">[ OK ]</font> BarTest.HasXyzProperty
2046 <font color="green">[ RUN ]</font> BarTest.ReturnsTrueOnSuccess
2047 ... some error messages ...
2048 <font color="red">[ FAILED ]</font> BarTest.ReturnsTrueOnSuccess
2049 ...
2050 <font color="green">[==========]</font> 30 tests from 14 test suites ran.
2051 <font color="green">[ PASSED ]</font> 28 tests.
2053 <font color="red">[ FAILED ]</font> BarTest.ReturnsTrueOnSuccess
2054 <font color="red">[ FAILED ]</font> AnotherTest.DoesXyz
2061 disable colors, or let GoogleTest decide. When the value is `auto`, GoogleTest
2063 platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.
2068 passed or failed. To show only test failures, run the test program with
2069 `--gtest_brief=1`, or set the GTEST_BRIEF environment variable to `1`.
2073 By default, GoogleTest prints the time it takes to run each test. To disable
2075 set the GTEST_PRINT_TIME environment variable to `0`.
2081 they contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
2084 environment variable to `0`.
2089 textual output. The report contains the duration of each test, and thus can help
2090 you identify slow tests.
2094 create the file at the given location. You can also just use the string `"xml"`,
2095 in which case the output can be found in the `test_detail.xml` file in the
2096 current directory.
2100 that directory, named after the test executable (e.g. `foo_test.xml` for test
2101 program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
2102 over from a previous run), GoogleTest will pick a different name (e.g.
2103 `foo_test_1.xml`) to avoid overwriting it.
2105 The report is based on the `junitreport` Ant task. Since that format was
2110 <testsuites name="AllTests" ...>
2111 <testsuite name="test_case_name" ...>
2112 <testcase name="test_name" ...>
2113 <failure message="..."/>
2114 <failure message="..."/>
2115 <failure message="..."/>
2121 * The root `<testsuites>` element corresponds to the entire test program.
2122 * `<testsuite>` elements correspond to GoogleTest test suites.
2123 * `<testcase>` elements correspond to GoogleTest test functions.
2128 TEST(MathTest, Addition) { ... }
2129 TEST(MathTest, Subtraction) { ... }
2130 TEST(LogicTest, NonContradiction) { ... }
2139 <testcase name="Addition" file="test.cpp" line="1" status="run" time="0.007" classname="">
2140 <failure message="Value of: add(1, 1)
 Actual: 3
Expected: 2" type="">...</failure>
2141 … <failure message="Value of: add(1, -1)
 Actual: 1
Expected: 0" type="">...</failure>
2143 <testcase name="Subtraction" file="test.cpp" line="2" status="run" time="0.005" classname="">
2147 … <testcase name="NonContradiction" file="test.cpp" line="3" status="run" time="0.005" classname="">
2157 `failures` attribute tells how many of them failed.
2160 entire test program in seconds.
2163 execution.
2166 test was defined.
2169 assertion.
2173 GoogleTest can also emit a JSON report as an alternative format to XML. To
2176 create the file at the given location. You can also just use the string
2177 `"json"`, in which case the output can be found in the `test_detail.json` file
2178 in the current directory.
2184 "$schema": "https://json-schema.org/schema#",
2253 [JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):
2260 import "google/protobuf/timestamp.proto";
2261 import "google/protobuf/duration.proto";
2268 google.protobuf.Timestamp timestamp = 5;
2269 google.protobuf.Duration time = 6;
2280 google.protobuf.Duration time = 6;
2293 google.protobuf.Duration time = 3;
2306 TEST(MathTest, Addition) { ... }
2307 TEST(MathTest, Subtraction) { ... }
2308 TEST(LogicTest, NonContradiction) { ... }
2331 "file": "test.cpp",
2349 "file": "test.cpp",
2366 "file": "test.cpp",
2378 {: .callout .important}
2379 IMPORTANT: The exact format of the JSON document is subject to change.
2386 catch any kind of unexpected exits of test programs. Upon start, Google Test
2388 finished. Then, the test runner can check if this file exists. In case the file
2389 remains undeleted, the inspected test has exited prematurely.
2392 variable has been set.
2398 mode. GoogleTest's *break-on-failure* mode supports this behavior.
2401 other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
2402 command line flag.
2406 GoogleTest can be used either with or without exceptions enabled. If a test
2409 test method. This maximizes the coverage of a test run. Also, on Windows an
2411 you to run the tests automatically.
2415 exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
2417 running the tests.
2422 [Undefined Behavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html),
2423 [Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer),
2425 [Thread Sanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)
2427 when they detect sanitizer errors, such as creating a reference from `nullptr`.
2446 test triggers a sanitizer error, GoogleTest will report that it failed.