// Copyright (C) 2022 The Qt Company Ltd.
// Copyright (C) 2016 Intel Corporation.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR GFDL-1.3-no-invariants-only
\page qtest-overview.html
\title Qt Test Overview
\brief Overview of the Qt unit testing framework.

\ingroup frameworks-technologies
\ingroup qt-basic-concepts
Qt Test is a framework for unit testing Qt based applications and libraries.
Qt Test provides all the functionality commonly found in unit testing
frameworks as well as extensions for testing graphical user interfaces.

Qt Test is designed to ease the writing of unit tests for Qt
based applications and libraries:
\table
\header \li Feature \li Details
\row
    \li \b {Lightweight}
    \li Qt Test consists of about 6000 lines of code and 60 exported symbols.
\row
    \li \b {Self-contained}
    \li Qt Test requires only a few symbols from the Qt Core module.
\row
    \li \b {Rapid testing}
    \li Qt Test needs no special test-runners; no special
        registration for tests.
\row
    \li \b {Data-driven testing}
    \li A test can be executed multiple times with different test data.
\row
    \li \b {Basic GUI testing}
    \li Qt Test offers functionality for mouse and keyboard simulation.
\row
    \li \b {Benchmarking}
    \li Qt Test supports benchmarking and provides several measurement back-ends.
\row
    \li \b {IDE friendly}
    \li Qt Test outputs messages that can be interpreted by Qt Creator, Visual
        Studio, and KDevelop.
\row
    \li \b {Thread safety}
    \li The error reporting is thread safe and atomic.
\row
    \li \b {Type safety}
    \li Extensive use of templates prevents errors introduced by
        implicit type casting.
\row
    \li \b {Easily extendable}
    \li Custom types can easily be added to the test data and test output.
\endtable

You can use a Qt Creator wizard to create a project that contains Qt tests
and build and run them directly from Qt Creator. For more information, see
\l {Qt Creator: Running Autotests}{Running Autotests}.

\section1 Creating a Test

To create a test, subclass QObject and add one or more private slots to it. Each
private slot is a test function in your test. QTest::qExec() can be used to execute
all test functions in the test object.

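As a minimal sketch (the class, function, and file names here are illustrative,
not taken from the Qt examples), a complete test can look like this:

\code
#include <QTest>

class TestArithmetic : public QObject
{
    Q_OBJECT

private slots:
    void addition();        // each private slot is executed as a test function
};

void TestArithmetic::addition()
{
    QCOMPARE(1 + 1, 2);     // logs a failure and returns early if the values differ
}

QTEST_MAIN(TestArithmetic)    // expands to a main() that runs all test functions
#include "testarithmetic.moc" // adjust to match the name of this .cpp file
\endcode
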
In addition, you can define the following private slots that are \e not
treated as test functions. When present, they will be executed by the
testing framework and can be used to initialize and clean up either the
entire test or the current test function.

\list
\li \c{initTestCase()} will be called before the first test function is executed.
\li \c{initTestCase_data()} will be called to create a global test data table.
\li \c{cleanupTestCase()} will be called after the last test function was executed.
\li \c{init()} will be called before each test function is executed.
\li \c{cleanup()} will be called after every test function.
\endlist

Use \c initTestCase() for preparing the test. Every test should leave the
system in a usable state, so it can be run repeatedly. Cleanup operations
should be handled in \c cleanupTestCase(), so they get run even if the test
fails and exits early.

Use \c init() for preparing a test function. Every test function should
leave the system in a usable state, so it can be run repeatedly. Cleanup
operations should be handled in \c cleanup(), so they get run even if the
test function fails and exits early.

Alternatively, you can use RAII (resource acquisition is initialization),
with cleanup operations called in destructors, to ensure they happen when
the test function returns and the object moves out of scope.

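For example, a sketch of RAII-style cleanup (the test class and slot names
are illustrative; QTemporaryDir and QFile come from Qt Core):

\code
// Assumes a test class TestFiles that declares this private slot.
void TestFiles::writesConfig()
{
    QTemporaryDir dir;      // the destructor removes the directory again
    QVERIFY(dir.isValid());

    QFile file(dir.filePath("config.ini"));
    QVERIFY(file.open(QIODevice::WriteOnly));
    QVERIFY(file.write("key=value\n") > 0);

    // Even if one of the QVERIFY checks above fails and returns early,
    // the QTemporaryDir destructor still cleans up the files it created.
}
\endcode
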
If \c{initTestCase()} fails, no test function will be executed. If \c{init()} fails,
the following test function will not be executed; the test will proceed to the next
test function.

\snippet code/doc_src_qtestlib.cpp 0

Finally, if the test class has a static public \c{void initMain()} method,
it is called by the QTEST_MAIN macros before the QApplication object
is instantiated. This was added in Qt 5.14.
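
A sketch of how this can be used, for instance to set an application attribute
that must be configured before the application object exists (the class name
and the attribute chosen here are illustrative):

\code
class TestRendering : public QObject
{
    Q_OBJECT

public:
    static void initMain()
    {
        // Runs before QTEST_MAIN creates the application object.
        QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    }

private slots:
    void drawsScene();
};
\endcode
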
For more examples, refer to the \l{Qt Test Tutorial}.

\section1 Increasing Test Function Timeout

QtTest limits the run-time of each test to catch infinite loops and similar
bugs. By default, any test function call will be interrupted after five
minutes. For data-driven tests, this applies to each call with a distinct
data-tag. This timeout can be configured by setting the \c QTEST_FUNCTION_TIMEOUT
environment variable to the maximum number of milliseconds that is acceptable
for a single call to take. If a test takes longer than the configured timeout,
it is interrupted, and \c qFatal() is called. As a result, the test aborts by
default, as if it had crashed.

To set \c QTEST_FUNCTION_TIMEOUT from the command line on Linux or macOS, enter:

\badcode
QTEST_FUNCTION_TIMEOUT=900000
export QTEST_FUNCTION_TIMEOUT
\endcode

To set it from the command line on Windows, enter:

\badcode
SET QTEST_FUNCTION_TIMEOUT=900000
\endcode

Then run the test inside this environment.

Alternatively, you can set the environment variable programmatically in the
test code itself, for example by calling, from the
\l{Creating a Test}{initMain()} special method of your test class:

\code
qputenv("QTEST_FUNCTION_TIMEOUT", "900000");
\endcode

To calculate a suitable value for the timeout, see how long the test usually
takes and decide how much longer it can take without that being a symptom of
some problem. Convert that longer time to milliseconds to get the timeout value.
For example, if you decide that a test that takes several minutes could
reasonably take up to twenty minutes, for example on a slow machine,
multiply \c{20 * 60 * 1000 = 1200000} and set the environment variable to
\c 1200000 instead of the \c 900000 above.

\if !defined(qtforpython)
\section1 Building a Test

You can build an executable that contains one test class that typically
tests one class of production code. However, usually you would want to
test several classes in a project by running one command.

See \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test} for a step by
step explanation.

\section2 Building with CMake and CTest

You can use \l {Building with CMake and CTest} to create a test.
\l{https://cmake.org/cmake/help/latest/manual/ctest.1.html}{CTest} enables
you to include or exclude tests based on a regular expression that is
matched against the test name. You can further apply the \c LABELS property
to a test and CTest can then include or exclude tests based on those labels.
All labeled targets will be run when the \c {test} target is called on the
command line.

\note On Android, if you have one connected device or emulator, tests will
run on that device. If you have more than one device connected, set the
environment variable \c {ANDROID_DEVICE_SERIAL} to the
\l {Android: Query for devices}{ADB serial number} of the device that
you want to run tests on.

There are several other advantages with CMake. For example, the result of
a test run can be published on a web server using CDash with virtually no
effort.

CTest scales to very different unit test frameworks, and works out of the
box with QTest.

The following is an example of a CMakeLists.txt file that specifies the
project name and the language used (here, \e mytest and C++), the Qt
modules required for building the test (Qt5Test), and the files that are
included in the test (\e tst_mytest.cpp).

\quotefile code/doc_src_cmakelists.txt

For more information about the options you have, see \l {Build with CMake}.

\section2 Building with qmake

If you are using \c qmake as your build tool, just add the
following to your project file:

\snippet code/doc_src_qtestlib.pro 1

If you would like to run the test via \c{make check}, add the
additional line:

\snippet code/doc_src_qtestlib.pro 2

To prevent the test from being installed to your target, add the
additional line:

\snippet code/doc_src_qtestlib.pro 3

See the \l{Building a Testcase}{qmake manual} for
more information about \c{make check}.

\section2 Building with Other Tools

If you are using other build tools, make sure that you add the location
of the Qt Test header files to your include path (usually \c{include/QtTest}
under your Qt installation directory). If you are using a release build
of Qt, link your test to the \c QtTest library. For debug builds, use
the debug version of the library.
\endif

\section1 Qt Test Command Line Arguments

The syntax to execute an autotest takes the following simple form:

\snippet code/doc_src_qtestlib.qdoc 2

Substitute \c testname with the name of your executable.
\c testfunctions can contain names of test functions to be
executed. If no \c testfunctions are passed, all tests are run. If you
append the name of an entry in \c testdata, the test function will be
run only with that test data.

Examples:

\snippet code/doc_src_qtestlib.qdoc 3

Runs the test function called \c toUpper with all available test data.

\snippet code/doc_src_qtestlib.qdoc 4

Runs the \c toUpper test function with all available test data,
and the \c toInt test function with the test data row called \c
zero (if the specified test data doesn't exist, the associated test
will fail and the available data tags are reported).

\snippet code/doc_src_qtestlib.qdoc 5

Runs the \c testMyWidget function test, outputs every signal
emission and waits 500 milliseconds after each simulated
mouse/keyboard event.

\section3 Logging Options

The following command line options determine how test results are reported:

\list
\li \c -o \e{filename,format} \br
    Writes output to the specified file, in the specified format (one
    of \c txt, \c csv, \c junitxml, \c xml, \c lightxml, \c teamcity
    or \c tap). Use the special filename \c{-} (hyphen) to log to
    standard output.
\li \c -o \e filename \br
    Writes output to the specified file.
\li \c -txt \br
    Outputs results in plain text.
\li \c -csv \br
    Outputs results as comma-separated values (CSV) suitable for
    import into spreadsheets. This mode is only suitable for
    benchmarks, since it suppresses normal pass/fail messages.
\li \c -junitxml \br
    Outputs results as a \l{JUnit XML} document.
\li \c -xml \br
    Outputs results as an XML document.
\li \c -lightxml \br
    Outputs results as a stream of XML tags.
\li \c -teamcity \br
    Outputs results in \l{TeamCity} format.
\li \c -tap \br
    Outputs results in \l{Test Anything Protocol} (TAP) format.
\endlist

The first version of the \c -o option may be repeated in order to log
test results in multiple formats, but no more than one instance of this
option can log test results to standard output.

If the first version of the \c -o option is used, neither the second version
of the \c -o option nor the \c -txt, \c -xml, \c -lightxml, \c -teamcity,
\c -junitxml or \c -tap options should be used.

If neither version of the \c -o option is used, test results will be logged to
standard output. If no format option is used, test results will be logged in
plain text.

\section3 Test Log Detail Options

The following command line options control how much detail is reported
in the test log:

\list
\li \c -silent \br
    Silent output; only shows fatal errors, test failures and minimal status
    messages.
\li \c -v1 \br
    Verbose output; shows when each test function is entered.
    (This option only affects plain text output.)
\li \c -v2 \br
    Extended verbose output; shows each \l QCOMPARE() and \l QVERIFY().
    (This option affects all output formats and implies \c -v1 for plain text output.)
\li \c -vs \br
    Shows all signals that get emitted and the slot invocations resulting from
    those signals.
    (This option affects all output formats.)
\endlist

\section3 Testing Options

The following command-line options influence how tests are run:

\list
\li \c -functions \br
    Outputs all test functions available in the test, then quits.
\li \c -datatags \br
    Outputs all data tags available in the test.
    A global data tag is preceded by ' __global__ '.
\li \c -eventdelay \e ms \br
    If no delay is specified for keyboard or mouse simulation
    (\l QTest::keyClick(),
    \l QTest::mouseClick() etc.), the value from this parameter
    (in milliseconds) is substituted.
\li \c -keydelay \e ms \br
    Like -eventdelay, but only influences keyboard simulation and not mouse
    simulation.
\li \c -mousedelay \e ms \br
    Like -eventdelay, but only influences mouse simulation and not keyboard
    simulation.
\li \c -maxwarnings \e number \br
    Sets the maximum number of warnings to output. 0 for unlimited, defaults to
    2000.
\li \c -nocrashhandler \br
    Disables the crash handler on Unix platforms.
    On Windows, it re-enables the Windows Error Reporting dialog, which is
    turned off by default. This is useful for debugging crashes.
\li \c -platform \e name \br
    This command line argument applies to all Qt applications, but might be
    especially useful in the context of auto-testing. By using the "offscreen"
    platform plugin (-platform offscreen) it's possible to have tests that use
    QWidget or QWindow run without showing anything on the screen. Currently
    the offscreen platform plugin is only fully supported on X11.
\endlist

\section3 Benchmarking Options

The following command line options control benchmark testing:

\list
\li \c -callgrind \br
    Uses Callgrind to time benchmarks (Linux only).
\li \c -tickcounter \br
    Uses CPU tick counters to time benchmarks.
\li \c -eventcounter \br
    Counts events received during benchmarks.
\li \c -minimumvalue \e n \br
    Sets the minimum acceptable measurement value.
\li \c -minimumtotal \e n \br
    Sets the minimum acceptable total for repeated executions of a test function.
\li \c -iterations \e n \br
    Sets the number of accumulation iterations.
\li \c -median \e n \br
    Sets the number of median iterations.
\li \c -vb \br
    Outputs verbose benchmarking information.
\endlist

\section3 Miscellaneous Options

\list
\li \c -help \br
    Outputs the possible command line arguments and gives some useful help.
\endlist

\section1 Qt Test Environment Variables

You can set certain environment variables in order to affect
the execution of an autotest:

\list
\li \c QTEST_DISABLE_CORE_DUMP \br
    Setting this variable to a non-zero value will disable the generation
    of core dumps.
\li \c QTEST_DISABLE_STACK_DUMP \br
    Setting this variable to a non-zero value will prevent Qt Test from
    printing a stacktrace in case an autotest times out or crashes.
\li \c QTEST_FATAL_FAIL \br
    Setting this variable to a non-zero value will cause a failure in
    an autotest to immediately abort the entire autotest. This is useful
    to e.g. debug an unstable or intermittent failure in a test, by
    launching the test in a debugger. Support for this variable was
    added in Qt 6.1.
\endlist

\section1 Creating a Benchmark

To create a benchmark, follow the instructions for creating a test and then add a
\l QBENCHMARK macro or \l QTest::setBenchmarkResult() to the test function that
you want to benchmark. In the following code snippet, the macro is used:

\snippet code/doc_src_qtestlib.cpp 12

A test function that measures performance should contain either a single
\c QBENCHMARK macro or a single call to \c setBenchmarkResult(). Multiple
occurrences make no sense, because only one performance result can be
reported per test function, or per data tag in a data-driven setup.

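As an illustrative sketch of the \c setBenchmarkResult() variant (the test
class and the code under test are hypothetical):

\code
void TestParser::parseThroughput()
{
    QElapsedTimer timer;
    timer.start();
    parseLargeDocument();   // hypothetical code under test

    // Report the single performance result for this test function.
    QTest::setBenchmarkResult(timer.elapsed(), QTest::WalltimeMilliseconds);
}
\endcode
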
Avoid changing the test code that forms (or influences) the body of a
\c QBENCHMARK macro, or the test code that computes the value passed to
\c setBenchmarkResult(). Differences in successive performance results
should ideally be caused only by changes to the product you are testing.
Changes to the test code can potentially result in a misleading report of
a change in performance. If you do need to change the test code, make
that clear in the commit message.

In a performance test function, the \c QBENCHMARK or \c setBenchmarkResult()
should be followed by a verification step using \l QCOMPARE(), \l QVERIFY(),
and so on. You can then flag a performance result as \e invalid if another
code path than the intended one was measured. A performance analysis tool
can use this information to filter out invalid results.
For example, an unexpected error condition will typically cause the program
to bail out prematurely from the normal program execution, and thus falsely
show a dramatic performance increase.

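A sketch of this pattern (the container setup and helper function are
illustrative; requires \c{<algorithm>}):

\code
void TestSorting::sortLargeList()
{
    QList<int> data = makeUnsortedData(100000); // hypothetical helper

    QBENCHMARK {
        std::sort(data.begin(), data.end());
    }

    // If this check fails, the benchmark result is flagged as invalid,
    // indicating that the intended code path was not what got measured.
    QVERIFY(std::is_sorted(data.cbegin(), data.cend()));
}
\endcode
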
\section2 Selecting the Measurement Back-end

The code inside the QBENCHMARK macro will be measured, and possibly also repeated
several times in order to get an accurate measurement. This depends on the selected
measurement back-end. Several back-ends are available. They can be selected on the
command line:

\target testlib-benchmarking-measurement

\table
\header \li Name
        \li Command-line Argument
        \li Availability
\row \li Walltime
     \li (default)
     \li All platforms
\row \li CPU tick counter
     \li -tickcounter
     \li Windows, \macos, Linux, many UNIX-like systems.
\row \li Event Counter
     \li -eventcounter
     \li All platforms
\row \li Valgrind Callgrind
     \li -callgrind
     \li Linux (if installed)
\row \li Linux Perf
     \li -perf
     \li Linux
\endtable

In short, walltime is always available but requires many repetitions to
get a useful result.
Tick counters are usually available and can provide
results with fewer repetitions, but can be susceptible to CPU frequency
scaling issues.
Valgrind provides exact results, but does not take
I/O waits into account, and is only available on a limited number of
platforms.
Event counting is available on all platforms and it provides the number of events
that were received by the event loop before they are sent to their corresponding
targets (this might include non-Qt events).

The Linux Performance Monitoring solution is available only on Linux and
provides many different counters, which can be selected by passing an
additional option \c {-perfcounter countername}, such as \c {-perfcounter
cache-misses}, \c {-perfcounter branch-misses}, or \c {-perfcounter
l1d-load-misses}. The default counter is \c {cpu-cycles}. The full list of
counters can be obtained by running any benchmark executable with the
option \c -perfcounterlist.

\note
\list
    \li Using the performance counter may require enabling access to non-privileged
        applications.
    \li Devices that do not support high-resolution timers default to
        one-millisecond granularity.
\endlist

See \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark} in the Qt Test
Tutorial for more benchmarking examples.

\section1 Using Global Test Data

You can define \c{initTestCase_data()} to set up a global test data table.
Each test is run once for each row in the global test data table. When the
test function itself \l{Chapter 2: Data Driven Testing}{is data-driven},
it is run for each local data row, for each global data row. So, if there
are \c g rows in the global data table and \c d rows in the test's own
data-table, the number of runs of this test is \c g times \c d.

Global data is fetched from the table using the \l QFETCH_GLOBAL() macro.

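As a sketch (the class, column, and backend names are illustrative):

\code
void TestQueries::initTestCase_data()
{
    QTest::addColumn<QString>("backend");
    QTest::newRow("sqlite") << QStringLiteral("QSQLITE");
    QTest::newRow("postgres") << QStringLiteral("QPSQL");
}

void TestQueries::selectAll()
{
    QFETCH_GLOBAL(QString, backend);    // one run of this test per global row
    // ... open a database using the chosen backend and exercise it ...
}
\endcode
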
The following are typical use cases for global test data:

\list
    \li Selecting among the available database backends in QSql tests to run
        every test against every database.
    \li Doing all networking tests with and without SSL (HTTP versus HTTPS).
    \li Testing a timer with a high precision clock and with a coarse one.
    \li Selecting whether a parser shall read from a QByteArray or from a
        QIODevice.
\endlist

For example, to test each number provided by \c {roundTripInt_data()} with
each locale provided by \c {initTestCase_data()}:

\snippet code/src_qtestlib_qtestcase_snippet.cpp 31

On the command-line of a test you can pass the name of a function (with no
test-class-name prefix) to run only that one function's tests. If the test
class has global data, or the function is data-driven, you can append a data
tag, after a colon, to run only that tag's data-set for the function. To
specify both a global tag and a tag specific to the test function, combine
them with a colon between, putting the global data tag first. For example

\snippet code/doc_src_qtestlib.qdoc 6

will run the \c zero test-case of the \c roundTripInt() test above (assuming
its \c TestQLocale class has been compiled to an executable \c testqlocale)
in each of the locales specified by \c initTestCase_data(), while

\snippet code/doc_src_qtestlib.qdoc 7

will run all three test-cases of \c roundTripInt() only in the C locale and

\snippet code/doc_src_qtestlib.qdoc 8

will only run the \c zero test-case in the C locale.

Providing such fine-grained control over which tests are to be run can make
it considerably easier to debug a problem, as you only need to step through
the one test-case that has been seen to fail.

\page qtest-tutorial.html
\brief A short introduction to testing with Qt Test.
\nextpage {Chapter 1: Writing a Unit Test}{Chapter 1}
\ingroup best-practices

\title Qt Test Tutorial

This tutorial introduces some of the features of the Qt Test framework. It
is divided into six chapters:

\list 1
    \li \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test}
    \li \l {Chapter 2: Data Driven Testing}{Data Driven Testing}
    \li \l {Chapter 3: Simulating GUI Events}{Simulating GUI Events}
    \li \l {Chapter 4: Replaying GUI Events}{Replaying GUI Events}
    \li \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark}
    \li \l {Chapter 6: Skipping Tests with QSKIP}{Skipping Tests}
\endlist

\note You can build and execute the tests from each chapter using the
available source code, which is linked to at the end of each chapter.

\previouspage {Qt Test Tutorial}{Qt Test Tutorial Overview}
\nextpage {Chapter 2: Data Driven Testing}{Chapter 2}

\title Chapter 1: Writing a Unit Test
\brief How to write a unit test.

This first chapter demonstrates how to write a simple unit test and how to
run the test case as a stand-alone executable.

\section1 Writing a Test

Let's assume you want to test the behavior of our QString class.
First, you need a class that contains your test functions. This class
has to inherit from QObject:

\snippet tutorial1/testqstring.cpp 0

\note You need to include the QTest header and declare the test functions as
private slots so the test framework finds and executes them.

Then you need to implement the test function itself. The
implementation could look like this:

\snippet code/doc_src_qtestlib.cpp 8

The \l QVERIFY() macro evaluates the expression passed as its
argument. If the expression evaluates to true, the execution of
the test function continues. Otherwise, a message describing the
failure is appended to the test log, and the test function stops
executing.

But if you want a more verbose output to the test log, you should
use the \l QCOMPARE() macro instead:

\snippet tutorial1/testqstring.cpp 1

If the strings are not equal, the contents of both strings are
appended to the test log, making it immediately visible why the
comparison failed.

\section1 Preparing the Stand-Alone Executable

Finally, to make our test case a stand-alone executable, the
following two lines are needed:

\snippet tutorial1/testqstring.cpp 2

The \l QTEST_MAIN() macro expands to a simple \c main()
method that runs all the test functions. Note that if both the
declaration and the implementation of our test class are in a \c
.cpp file, we also need to include the generated moc file to make
Qt's introspection work.

\section1 Building the Executable

\include {building-examples.qdocinc} {building the executable} {tutorial1}

\note If you're using Windows, replace \c make with \c
nmake or whatever build tool you use.

\section1 Running the Executable

Running the resulting executable should give you the following
output:

\snippet code/doc_src_qtestlib.qdoc 10

Congratulations! You just wrote and executed your first unit test
using the Qt Test framework.

\previouspage {Chapter 1: Writing a Unit Test}{Chapter 1}
\nextpage {Chapter 3: Simulating GUI Events}{Chapter 3}

\title Chapter 2: Data Driven Testing
\brief How to create data driven tests.

This chapter demonstrates how to execute a test multiple times with
different test data.

So far, we have hard coded the data we wanted to test into our
test function. If we add more test data, the function might look like
this:

\snippet code/doc_src_qtestlib.cpp 11

To prevent the function from being cluttered with repetitive code, Qt Test
supports adding test data to a test function. All we need is to add another
private slot to our test class:

\snippet tutorial2/testqstring.cpp 0

\section1 Writing the Data Function

A test function's associated data function has \c _data appended to its
name. Our data function looks like this:

\snippet tutorial2/testqstring.cpp 1

First, we define the two elements of our test table using the \l
QTest::addColumn() function: a test string and the
expected result of applying the QString::toUpper() function to
the test string.

Then, we add some data to the table using the \l QTest::newRow()
function. We can also use \l QTest::addRow() if we need to format some data
in the row name, for example when generating many data rows iteratively.
Each row of data will become a separate row in the test table.

\l QTest::newRow() takes one argument: a name that will be associated with
the data set and used in the test log to identify the data row. \l
QTest::addRow() takes a (\c{printf}-style) format string followed by the
parameters to be represented in place of the formatting tokens in the format
string. Then, we stream the data set into the new table row. First an
arbitrary string, and then the expected result of applying the
QString::toUpper() function to that string.

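For illustration only (not part of the tutorial sources), \l QTest::addRow()
is convenient when rows are generated in a loop:

\code
void TestLetters::toUpperAll_data()
{
    QTest::addColumn<QString>("string");
    QTest::addColumn<QString>("result");

    const QString letters = QStringLiteral("abc");
    for (int i = 0; i < letters.size(); ++i) {
        // Produces rows named "letter 0", "letter 1", ...
        QTest::addRow("letter %d", i)
            << QString(letters.at(i)) << QString(letters.at(i)).toUpper();
    }
}
\endcode
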
You can think of the test data as a two-dimensional table. In
our case, it has two columns called \c string and \c result and
three rows. In addition, a name and an index are associated
with each row.

When data is streamed into the row, each datum is asserted to match
the type of the column whose value it supplies. If any assertion fails,
the test is aborted.

The names of rows and columns, in a given test function's data table, should
be unique: if two rows share a name, or two columns share a name, a warning
will (since Qt 6.5) be produced. See \l qWarning() for how you can cause
warnings to be treated as errors and \l {Test for Warnings} for how to get
your tests clear of other warnings.

\section1 Rewriting the Test Function

Our test function can now be rewritten:

\snippet tutorial2/testqstring.cpp 2

The TestQString::toUpper() function will be executed three times,
once for each entry in the test table that we created in the
associated TestQString::toUpper_data() function.

First, we fetch the two elements of the data set using the \l
QFETCH() macro. \l QFETCH() takes two arguments: the data type of
the element and the element name. Then, we perform the test using
the \l QCOMPARE() macro.

This approach makes it very easy to add new data to the test
without modifying the test itself.

\section1 Preparing the Stand-Alone Executable

And again, to make our test case a stand-alone executable,
the following two lines are needed:

\snippet tutorial2/testqstring.cpp 3

As before, the QTEST_MAIN() macro expands to a simple main()
method that runs all the test functions, and since both the
declaration and the implementation of our test class are in a .cpp
file, we also need to include the generated moc file to make Qt's
introspection work.

\section1 Building the Executable

\include {building-examples.qdocinc} {building the executable} {tutorial2}

\section1 Running the Executable

Running the resulting executable should give you the following
output:

\snippet code/doc_src_qtestlib.qdoc 11

\previouspage {Chapter 2: Data Driven Testing}{Chapter 2}
\nextpage {Chapter 4: Replaying GUI Events}{Chapter 4}

\title Chapter 3: Simulating GUI Events
\brief How to simulate GUI events.

Qt Test features some mechanisms to test graphical user
interfaces. Instead of simulating native window system events,
Qt Test sends internal Qt events. That means there are no
side-effects on the machine the tests are running on.

This chapter demonstrates how to write a simple GUI test.

\section1 Writing a GUI Test

This time, let's assume you want to test the behavior of our
QLineEdit class. As before, you will need a class that contains
your test function:

\snippet tutorial3/testgui.cpp 0

The only difference is that you need to include the Qt GUI class
definitions in addition to the QTest namespace.

\snippet tutorial3/testgui.cpp 1

In the implementation of the test function, we first create a
QLineEdit. Then, we simulate writing "hello world" in the line edit
using the \l QTest::keyClicks() function.

\note The widget must also be shown in order to correctly test keyboard
shortcuts.

QTest::keyClicks() simulates clicking a sequence of keys on a
widget. Optionally, a keyboard modifier can be specified as well
as a delay (in milliseconds) of the test after each key click. In
a similar way, you can use the QTest::keyClick(),
QTest::keyPress(), QTest::keyRelease(), QTest::mouseClick(),
QTest::mouseDClick(), QTest::mouseMove(), QTest::mousePress()
and QTest::mouseRelease() functions to simulate the associated
GUI events.

Finally, we use the \l QCOMPARE() macro to check if the line edit's
text is as expected.

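Mouse interactions can be simulated in the same way. For illustration
(a hypothetical fragment, not taken from the tutorial sources):

\code
// Requires <QPushButton> and <QSignalSpy>.
QPushButton button;
button.show();

QSignalSpy spy(&button, &QPushButton::clicked);
QTest::mouseClick(&button, Qt::LeftButton); // clicks the center of the widget
QCOMPARE(spy.count(), 1);
\endcode
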
\section1 Preparing the Stand-Alone Executable

As before, to make our test case a stand-alone executable, the
following two lines are needed:

\snippet tutorial3/testgui.cpp 2

The QTEST_MAIN() macro expands to a simple main() method that
runs all the test functions, and since both the declaration and
the implementation of our test class are in a .cpp file, we also
need to include the generated moc file to make Qt's introspection
work.

\section1 Building the Executable

\include {building-examples.qdocinc} {building the executable} {tutorial3}

\section1 Running the Executable

Running the resulting executable should give you the following
output:

\snippet code/doc_src_qtestlib.qdoc 12

\previouspage {Chapter 3: Simulating GUI Events}{Chapter 3}
\nextpage {Chapter 5: Writing a Benchmark}{Chapter 5}

\title Chapter 4: Replaying GUI Events
\brief How to replay GUI events.

In this chapter, we will show how to simulate a GUI event,
and how to store a series of GUI events as well as replay them on
a widget.

The approach to storing a series of events and replaying them is
quite similar to the approach explained in \l {Chapter 2:
Data Driven Testing}{chapter 2}. All you need to do is to add a data
function to your test class:

\snippet tutorial4/testgui.cpp 0

\section1 Writing the Data Function

As before, a test function's associated data function carries the
same name, appended by \c{_data}.

\snippet tutorial4/testgui.cpp 1

First, we define the elements of the table using the
QTest::addColumn() function: a list of GUI events, and the
expected result of applying the list of events on a QWidget. Note
that the type of the first element is \l QTestEventList.

A QTestEventList can be populated with GUI events that can be
stored as test data for later usage, or be replayed on any
widget.

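For illustration (a hypothetical fragment, not from the tutorial), an event
list can also be built up and replayed directly on a widget:

\code
QTestEventList events;
events.addKeyClick('h');
events.addKeyClick('i');
events.addDelay(250);               // wait 250 ms after the previous event

QLineEdit lineEdit;
events.simulate(&lineEdit);         // replay the recorded events on the widget
QCOMPARE(lineEdit.text(), QString("hi"));
\endcode
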
In our current data function, we create two \l
{QTestEventList} elements. The first list consists of a single click to
the 'a' key. We add the event to the list using the
QTestEventList::addKeyClick() function. Then we use the
QTest::newRow() function to give the data set a name, and
stream the event list and the expected result into the table.

The second list consists of two key clicks: an 'a' with a
following 'backspace'. Again we use the
QTestEventList::addKeyClick() to add the events to the list, and
QTest::newRow() to put the event list and the expected
result into the table with an associated name.

\section1 Rewriting the Test Function

Our test can now be rewritten:

\snippet tutorial4/testgui.cpp 2

The TestGui::testGui() function will be executed two times,
once for each entry in the test data that we created in the
associated TestGui::testGui_data() function.

First, we fetch the two elements of the data set using the \l
QFETCH() macro. \l QFETCH() takes two arguments: the data type of
the element and the element name. Then we create a QLineEdit, and
apply the list of events on that widget using the
QTestEventList::simulate() function.

Finally, we use the QCOMPARE() macro to check if the line edit's
text is as expected.

\section1 Preparing the Stand-Alone Executable

As before, to make our test case a stand-alone executable,
the following two lines are needed:

\snippet tutorial4/testgui.cpp 3

The QTEST_MAIN() macro expands to a simple main() method that
runs all the test functions, and since both the declaration and
the implementation of our test class are in a .cpp file, we also
need to include the generated moc file to make Qt's introspection
work.

\section1 Building the Executable

\include {building-examples.qdocinc} {building the executable} {tutorial4}

\section1 Running the Executable

Running the resulting executable should give you the following
output:

\snippet code/doc_src_qtestlib.qdoc 13

\previouspage {Chapter 4: Replaying GUI Events}{Chapter 4}
\nextpage {Chapter 6: Skipping Tests with QSKIP}{Chapter 6}

\title Chapter 5: Writing a Benchmark
\brief How to write a benchmark.

This chapter demonstrates how to write benchmarks using Qt Test.

\section1 Writing a Benchmark
To create a benchmark, we extend a test function with a QBENCHMARK macro.
A benchmark test function will then typically consist of setup code and
a QBENCHMARK macro that contains the code to be measured. This test
function benchmarks QString::localeAwareCompare().

\snippet tutorial5/benchmarking.cpp 0

Setup can be done at the beginning of the function. At this point, the clock
is not running. The code inside the QBENCHMARK macro will be
measured, and possibly repeated several times in order to get an
accurate measurement.

Several \l {testlib-benchmarking-measurement}{back-ends} are available
and can be selected on the command line.

\section1 Data Functions

Data functions are useful for creating benchmarks that compare
multiple data inputs, for example locale aware compare against standard
compare.

\snippet tutorial5/benchmarking.cpp 1

The test function then uses the data to determine what to benchmark.

\snippet tutorial5/benchmarking.cpp 2

The \c{if (useLocaleCompare)} switch is placed outside the QBENCHMARK
macro to avoid measuring its overhead. Each benchmark test function
can have one active QBENCHMARK macro.

\section1 Building the Executable

\include {building-examples.qdocinc} {building the executable} {tutorial5}

\section1 Running the Executable

Running the resulting executable should give you the following
output:

\snippet code/doc_src_qtestlib.qdoc 14

\page qttestlib-tutorial6.html

\previouspage {Chapter 5: Writing a Benchmark}{Chapter 5}

\title Chapter 6: Skipping Tests with QSKIP
\brief How to skip tests in certain cases.

\section2 Using QSKIP(\a description) in a test function

If the QSKIP() macro is called from a test function, it stops
the execution of the test without adding a failure to the test log.
It can be used to skip tests that are certain to fail. The text in
the QSKIP \a description parameter is appended to the test log,
and should explain why the test was not carried out.

QSKIP can be used to skip testing when the implementation is not yet
complete or not supported on a certain platform. When there are known
failures, QEXPECT_FAIL is recommended, as it supports running the rest
of the test, when possible.

Example of QSKIP in a test function:

\snippet code/doc_src_qtqskip_snippet.cpp 0

In a data-driven test, each call to QSKIP() skips only the current
row of test data. If the data-driven test contains an unconditional
call to QSKIP, it produces a skip message for each row of test data.

\section2 Using QSKIP in a _data function

If called from a _data function, the QSKIP() macro stops
execution of the _data function. This prevents execution of the
associated test function.

See below for an example:

\snippet code/doc_src_qtqskip.cpp 1

\section2 Using QSKIP from initTestCase() or initTestCase_data()

If called from \c initTestCase() or \c initTestCase_data(), the
QSKIP() macro will skip all test and _data functions.

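As an illustrative sketch (the test class and the environment variable are
hypothetical):

\code
void TestNetworkedService::initTestCase()
{
    // Skipping here prevents every test and _data function from running.
    if (!qEnvironmentVariableIsSet("TEST_SERVER_URL"))
        QSKIP("TEST_SERVER_URL is not set; skipping all networked tests.");
}
\endcode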