Diffstat (limited to 'lib/common_test/doc/src')
-rw-r--r--  lib/common_test/doc/src/common_test_app.xml    |  11
-rw-r--r--  lib/common_test/doc/src/cover_chapter.xml      | 112
-rw-r--r--  lib/common_test/doc/src/ct_hooks_chapter.xml   |  27
-rw-r--r--  lib/common_test/doc/src/ct_run.xml             |   5
-rw-r--r--  lib/common_test/doc/src/notes.xml              | 291
-rw-r--r--  lib/common_test/doc/src/run_test_chapter.xml   | 816
-rw-r--r--  lib/common_test/doc/src/write_test_chapter.xml |  52
7 files changed, 911 insertions, 403 deletions
diff --git a/lib/common_test/doc/src/common_test_app.xml b/lib/common_test/doc/src/common_test_app.xml index b6d4a633cb..151159ad69 100644 --- a/lib/common_test/doc/src/common_test_app.xml +++ b/lib/common_test/doc/src/common_test_app.xml @@ -4,7 +4,7 @@ <erlref> <header> <copyright> - <year>2003</year><year>2012</year> + <year>2003</year><year>2013</year> <holder>Ericsson AB. All Rights Reserved.</holder> </copyright> <legalnotice> @@ -170,7 +170,9 @@ <v> UserData = term()</v> <v> Conns = [atom()]</v> <v> CSSFile = string()</v> - <v> CTHs = [CTHModule | {CTHModule, CTHInitArgs} | {CTHModule, CTHInitArgs, CTHPriority}]</v> + <v> CTHs = [CTHModule |</v> + <v> {CTHModule, CTHInitArgs} |</v> + <v> {CTHModule, CTHInitArgs, CTHPriority}]</v> <v> CTHModule = atom()</v> <v> CTHInitArgs = term()</v> </type> @@ -297,8 +299,9 @@ <v> UserData = term()</v> <v> Conns = [atom()]</v> <v> CSSFile = string()</v> - <v> CTHs = [CTHModule | {CTHModule, CTHInitArgs} | - {CTHModule, CTHInitArgs, CTHPriority}]</v> + <v> CTHs = [CTHModule |</v> + <v> {CTHModule, CTHInitArgs} |</v> + <v> {CTHModule, CTHInitArgs, CTHPriority}]</v> <v> CTHModule = atom()</v> <v> CTHInitArgs = term()</v> </type> diff --git a/lib/common_test/doc/src/cover_chapter.xml b/lib/common_test/doc/src/cover_chapter.xml index 803a71de07..4fa92d5583 100644 --- a/lib/common_test/doc/src/cover_chapter.xml +++ b/lib/common_test/doc/src/cover_chapter.xml @@ -108,6 +108,33 @@ specifications</seealso>).</p> </section> + <marker id="cover_stop"></marker> + <section> + <title>Stopping the cover tool when tests are completed</title> + <p>By default, the Cover tool is automatically stopped when the + tests are completed. This causes the original (non-cover + compiled) modules to be loaded back into the test node.
If a + process at this point is still running old code of any of the + modules that are cover compiled, meaning that it has not done + any fully qualified function call after the cover compilation, + the process will now be killed. To avoid this, it is possible to + set the value of the <c>cover_stop</c> option to + <c>false</c>. This means that the modules will stay cover + compiled, and it is therefore only recommended if the Erlang + node(s) under test are terminated after the test is completed + or if cover can be manually stopped.</p> + + <p>The option can be set by using the <c>-cover_stop</c> flag with + <c>ct_run</c>, by adding <c>{cover_stop,true|false}</c> to the + Opts argument to <c><seealso + marker="ct#run_test-1">ct:run_test/1</seealso></c>, or by adding + a <c>cover_stop</c> term in your test specification (see chapter + about <seealso + marker="run_test_chapter#test_specifications">test + specifications</seealso>).</p> + + </section> + <section> <title>The cover specification file</title> <p>These are the terms allowed in a cover specification file:</p> @@ -148,6 +175,11 @@ %% Specific modules to exclude in cover. {excl_mods, Mods}. + + %% Cross cover compilation + %% Tag = atom(), an identifier for a test run + %% Mods = [atom()], modules to compile for accumulated analysis + {cross,[{Tag,Mods}]}. </pre> <p>The <c>incl_dirs_r</c> and <c>excl_dirs_r</c> terms tell Common @@ -163,6 +195,81 @@ specification file for Common Test).</p> </section> + <marker id="cross_cover"/> + <section> + <title>Cross cover analysis</title> + <p>The cross cover mechanism allows cover analysis of modules + across multiple tests. It is useful if some code, e.g.
a library + module, is used by many different tests and the accumulated cover + result is desirable.</p> + + <p>This can of course also be achieved in a more customized way by + using the <c>export</c> parameter in the cover specification and + analysing the result off-line, but the cross cover mechanism is a + built-in solution which also provides the logging.</p> + + <p>The mechanism is most easily explained with an example:</p> + + <p>Let's say that there are two systems, <c>s1</c> and <c>s2</c>, + which are tested in separate test runs. System <c>s1</c> contains + a library module <c>m1</c> which is tested by the <c>s1</c> test + run and is included in <c>s1</c>'s cover specification:</p> + +<code type="none"> +s1.cover: + {incl_mods,[m1]}.</code> + + <p>When analysing code coverage, the result for <c>m1</c> can be + seen in the cover log in the <c>s1</c> test result.</p> + + <p>Now, let's imagine that since <c>m1</c> is a library module, it + is also used quite a bit by system <c>s2</c>. The <c>s2</c> test + run does not specifically test <c>m1</c>, but it might still be + interesting to see which parts of <c>m1</c> are actually covered by + the <c>s2</c> tests. To do this, <c>m1</c> could also be included + in <c>s2</c>'s cover specification:</p> + +<code type="none"> +s2.cover: + {incl_mods,[m1]}.</code> + + <p>This would give an entry for <c>m1</c> also in the cover log + for the <c>s2</c> test run. The problem is that this would only + reflect the coverage by <c>s2</c> tests, not the accumulated + result over <c>s1</c> and <c>s2</c>. And this is where the cross + cover mechanism comes in handy.</p> + + <p>If instead the cover specification for <c>s2</c> were like + this:</p> + +<code type="none"> +s2.cover: + {cross,[{s1,[m1]}]}.</code> + + <p>then <c>m1</c> would be cover compiled in the <c>s2</c> test + run, but not shown in the coverage log.
Instead, if + <c>ct_cover:cross_cover_analyse/2</c> is called after both + <c>s1</c> and <c>s2</c> test runs are completed, the accumulated + result for <c>m1</c> would be available in the cross cover log for + the <c>s1</c> test run.</p> + + <p>The call to the analyse function must be like this:</p> + +<code type="none"> +ct_cover:cross_cover_analyse(Level, [{s1,S1LogDir},{s2,S2LogDir}]).</code> + + <p>where <c>S1LogDir</c> and <c>S2LogDir</c> are the directories + named <c><TestName>.logs</c> for each test respectively.</p> + + <p>Note the tags <c>s1</c> and <c>s2</c> which are used in the + cover specification file and in the call to + <c>ct_cover:cross_cover_analyse/2</c>. The sole purpose of these tags + is to map the modules specified in the cover specification to the log + directory specified in the call to the analyse function. The name + of the tag has no meaning beyond this.</p> + + </section> + <section> <title>Logging</title> <p>To view the result of a code coverage test, follow the @@ -170,6 +277,11 @@ takes you to the code coverage overview page. If you have successfully performed a detailed coverage analysis, you find links to each individual module coverage page here.</p> + + <p>If cross cover analysis has been performed, and there are + accumulated coverage results for the current test, then the + "Coverdata collected over all tests" link will take you to these + results.</p> </section> </chapter> diff --git a/lib/common_test/doc/src/ct_hooks_chapter.xml b/lib/common_test/doc/src/ct_hooks_chapter.xml index 27d56fd47d..fe871eb516 100644 --- a/lib/common_test/doc/src/ct_hooks_chapter.xml +++ b/lib/common_test/doc/src/ct_hooks_chapter.xml @@ -4,7 +4,7 @@ <chapter> <header> <copyright> - <year>2011</year><year>2012</year> + <year>2011</year><year>2013</year> <holder>Ericsson AB.
All Rights Reserved.</holder> </copyright> <legalnotice> @@ -439,14 +439,14 @@ terminate(State) -> <table> <row> - <cell><em>CTH Name</em></cell> - <cell><em>Is Built-in</em></cell> - <cell><em>Description</em></cell> + <cell align="left"><em>CTH Name</em></cell> + <cell align="left"><em>Is Built-in</em></cell> + <cell align="left"><em>Description</em></cell> </row> <row> - <cell>cth_log_redirect</cell> - <cell>yes</cell> - <cell>Captures all error_logger and SASL logging events and prints them + <cell align="left">cth_log_redirect</cell> + <cell align="left">yes</cell> + <cell align="left">Captures all error_logger and SASL logging events and prints them to the current test case log. If an event can not be associated with a testcase it will be printed in the common test framework log. This will happen for testcases which are run in parallel and events which occur @@ -455,9 +455,9 @@ terminate(State) -> using the normal SASL mechanisms. </cell> </row> <row> - <cell>cth_surefire</cell> - <cell>no</cell> - <cell><p>Captures all test results and outputs them as surefire + <cell align="left">cth_surefire</cell> + <cell align="left">no</cell> + <cell align="left"><p>Captures all test results and outputs them as surefire XML into a file. The file which is created is by default called junit_report.xml. The file name can be changed by setting the <c>path</c> option for this hook, e.g.</p> @@ -467,13 +467,14 @@ terminate(State) -> <p>If the <c>url_base</c> option is set, an additional attribute named <c>url</c> will be added to each <c>testsuite</c> and <c>testcase</c> XML element. 
The value will - be a constructed from the <c>url_base</c> and a relative path + be constructed from the <c>url_base</c> and a relative path to the test suite or test case log respectively, e.g.</p> - <code>-ct_hooks cth_surefire [{url_base,"http://myserver.com/"}]</code> + <code>-ct_hooks cth_surefire [{url_base, "http://myserver.com/"}]</code> <p>will give a url attribute value similar to</p> - <code>"http://myserver.com/[email protected]_11.19.39/x86_64-unknown-linux-gnu.my_test.logs/run.2012-12-12_11.19.39/suite.log.html"</code> + <code>"http://myserver.com/[email protected]_11.19.39/ +x86_64-unknown-linux-gnu.my_test.logs/run.2012-12-12_11.19.39/suite.log.html"</code> <p>Surefire XML can for instance be used by Jenkins to display test results.</p></cell> diff --git a/lib/common_test/doc/src/ct_run.xml b/lib/common_test/doc/src/ct_run.xml index c6749d6960..d871908952 100644 --- a/lib/common_test/doc/src/ct_run.xml +++ b/lib/common_test/doc/src/ct_run.xml @@ -4,7 +4,7 @@ <comref> <header> <copyright> - <year>2007</year><year>2012</year> + <year>2007</year><year>2013</year> <holder>Ericsson AB. All Rights Reserved.</holder> </copyright> <legalnotice> @@ -104,6 +104,7 @@ [-silent_connections [ConnType1 ConnType2 .. ConnTypeN]] [-stylesheet CSSFile] [-cover CoverCfgFile] + [-cover_stop Bool] [-event_handler EvHandler1 EvHandler2 .. EvHandlerN] | [-event_handler_init EvHandler1 InitArg1 and EvHandler2 InitArg2 and .. EvHandlerN InitArgN] @@ -125,6 +126,7 @@ <title>Run tests using test specification</title> <pre> ct_run -spec TestSpec1 TestSpec2 .. TestSpecN + [-join_specs] [-config ConfigFile1 ConfigFile2 .. ConfigFileN] [-userconfig CallbackModule1 ConfigString1 and CallbackModule2 ConfigString2 and .. and CallbackModuleN ConfigStringN] @@ -138,6 +140,7 @@ [-silent_connections [ConnType1 ConnType2 .. ConnTypeN]] [-stylesheet CSSFile] [-cover CoverCfgFile] + [-cover_stop Bool] [-event_handler EvHandler1 EvHandler2 .. 
EvHandlerN] | [-event_handler_init EvHandler1 InitArg1 and EvHandler2 InitArg2 and .. EvHandlerN InitArgN] diff --git a/lib/common_test/doc/src/notes.xml b/lib/common_test/doc/src/notes.xml index 8c3b13951d..5d2c065385 100644 --- a/lib/common_test/doc/src/notes.xml +++ b/lib/common_test/doc/src/notes.xml @@ -4,7 +4,7 @@ <chapter> <header> <copyright> - <year>2004</year><year>2012</year> + <year>2004</year><year>2013</year> <holder>Ericsson AB. All Rights Reserved.</holder> </copyright> <legalnotice> @@ -32,6 +32,295 @@ <file>notes.xml</file> </header> +<section><title>Common_Test 1.7.1</title> + + <section><title>Fixed Bugs and Malfunctions</title> + <list> + <item> + <p> + If an event handler installed in the CT Master event + manager took too long to respond during the termination + phase, CT Master crashed because of a timeout after 5 + secs. This would leave the system in a bad state. The + problem has been solved by means of a 30 min timeout + value and if CT Master gets a timeout after that time, it + now kills the event manager and shuts down properly.</p> + <p> + Own Id: OTP-10634 Aux Id: kunagi-347 [258] </p> + </item> + <item> + <p> + Printing with any of the ct printout functions from an + event handler installed by Common Test, would cause a + deadlock. This problem has been solved.</p> + <p> + Own Id: OTP-10826 Aux Id: seq12250 </p> + </item> + <item> + <p> + Using the force_stop flag/option to interrupt a test run + caused a crash in Common Test. 
This problem has been + solved.</p> + <p> + Own Id: OTP-10832</p> + </item> + </list> + </section> + + + <section><title>Improvements and New Features</title> + <list> + <item> + <p> + Removed the deprecated run_test program; use ct_run instead.</p> + <p> + *** POTENTIAL INCOMPATIBILITY ***</p> + <p> + Own Id: OTP-9052</p> + </item> + </list> + </section> + + + <section><title>Known Bugs and Problems</title> + <list> + <item> + <p> + Test case execution time increases with size of test run.</p> + <p> + Own Id: OTP-10855</p> + </item> + </list> + </section> + +</section> + +<section><title>Common_Test 1.7</title> + + <section><title>Fixed Bugs and Malfunctions</title> + <list> + <item> + <p> + Severe errors detected by <c>test_server</c> (e.g. if log + file directories cannot be created) will now be reported + to <c>common_test</c> and noted in the <c>common_test</c> + logs.</p> + <p> + Own Id: OTP-9769 Aux Id: kunagi-202 [113] </p> + </item> + <item> + <p> + The earlier undocumented cross cover feature for + accumulating cover data over multiple tests has now been + fixed and documented.</p> + <p> + Own Id: OTP-9870 Aux Id: kunagi-206 [117] </p> + </item> + <item> + <p> + If a busy test case generated lots of error messages, + cth_log_redirect:post_end_per_testcase would crash with a + timeout while waiting for the error logger to finish + handling all error reports. The default timer was 5 + seconds. This has now been extended to 5 minutes.</p> + <p> + Own Id: OTP-10040 Aux Id: kunagi-173 [84] </p> + </item> + <item> + <p>When a test case failed because of a timetrap + timeout, the <c>Config</c> data for the case was lost in the + following call to <c>end_per_testcase/2</c>, and also in + calls to the CT Hook function + <c>post_end_per_testcase/4</c>.
This problem has been + solved and the <c>Config</c> data is now correctly passed + to the above functions after a timetrap timeout + failure.</p> + <p> + Own Id: OTP-10070 Aux Id: kunagi-175 [86] </p> + </item> + <item> + <p> + Some calls to deprecated and removed functions in snmp + are removed from ct_snmp.</p> + <p> + Own Id: OTP-10088 Aux Id: kunagi-176 [87] </p> + </item> + <item> + <p>In test_server, the same process would supervise the + currently running test case and be group leader (and IO + server) for the test case. Furthermore, when running + parallel test cases, new temporary supervisor/group + leader processes were spawned and the process that was + group leader for sequential test cases would not be + active. That would lead to several problems:</p> + <p>* Processes started by init_per_suite will inherit the + group leader of the init_per_suite process (and that + group leader would not process IO requests when parallel + test cases were running). If later a parallel test case + caused such a process to print using (for example) + io:format/2, the caller would hang.</p> + <p>* Similarly, if a process was spawned from a parallel + test case, it would inherit the temporary group leader + for that parallel test case. If that spawned process + later - when the group of parallel tests has finished - + attempted to print something, its group leader would be + dead and there would be a <c>badarg</c> exception.</p> + <p>Those problems have been solved by having group + leaders separate from the processes that supervise the + test cases, and keeping the temporary group leader processes + for parallel test cases alive until no more processes in + the system use them as group leaders.</p> + <p>Also, a new <c>unexpected_io.log</c> log file + (reachable from the summary page of each test suite) has + been introduced.
All unexpected IO will be printed into + it(for example, IO to a group leader for a parallel test + case that has finished).</p> + <p> + Own Id: OTP-10101 Aux Id: OTP-10125 </p> + </item> + <item> + <p> + Some bugfixes in <c>ct_snmp:</c></p> + <p> + <list> <item> ct_snmp will now use the value of the + 'agent_vsns' config variable when setting the 'variables' + parameter to snmp application agent configuration. + Earlier this had to be done separately - i.e. the + supported versions had to be specified twice. </item> + <item> Snmp application failed to write notify.conf since + ct_snmp gave the notify type as a string instead of an + atom. This has been corrected. </item> </list></p> + <p> + Own Id: OTP-10432</p> + </item> + <item> + <p> + Some bugfixes in <c>ct_snmp</c>:</p> + <p> + <list> <item> Functions <c>register_users/2</c>, + <c>register_agents/2</c> and <c>register_usm_users/2</c>, + and the corresponding <c>unregister_*/1</c> functions + were not executable. These are corrected/rewritten. + </item> <item> Function <c>update_usm_users/2</c> is + removed, and an unregister function is added instead. + Update can now be done with unregister_usm_users and then + register_usm_users. </item> <item> Functions + <c>unregister_*/2</c> are added, so specific + users/agents/usm users can be unregistered. </item> + <item> Function <c>unload_mibs/1</c> is added for + completeness. </item> <item> Overriding configuration + files did not work, since the files were written in + priv_dir instead of in the configuration dir + (priv_dir/conf). This has been corrected. </item> <item> + Arguments to <c>register_usm_users/2</c> were faulty + documented. This has been corrected. 
</item> </list></p> + <p> + Own Id: OTP-10434 Aux Id: kunagi-264 [175] </p> + </item> + <item> + <p> + Faulty exported specs in common test have been corrected + to <c>ct_netconfc:hook_options/0</c> and + <c>inet:hostname/0</c>.</p> + <p> + Own Id: OTP-10601</p> + </item> + <item> + <p> + The netconf client in common_test did not adjust the + window after receiving data. Due to this, the client + stopped receiving data after a while. This has been + corrected.</p> + <p> + Own Id: OTP-10646</p> + </item> + </list> + </section> + + + <section><title>Improvements and New Features</title> + <list> + <item> + <p>It is now possible to let a test specification include + other test specifications. Included specs can either be + joined with the source spec (and all other joined specs), + resulting in one single test run, or they can be executed + in separate test runs. Also, a start flag/option, + <c>join_specs</c>, has been introduced, to be used in + combination with the <c>spec</c> option. With + <c>join_specs</c>, Common Test can be told to either join + multiple test specifications, or run them separately. + Without <c>join_specs</c>, the latter behaviour is the + default. Note that this is a change compared to earlier + versions of Common Test, where specifications could only + be joined. More information can be found in the Running + Tests chapter in the User's Guide (see the Test + Specifications section).</p> + <p> + *** POTENTIAL INCOMPATIBILITY ***</p> + <p> + Own Id: OTP-9881 Aux Id: kunagi-350 [261] </p> + </item> + <item> + <p> + The <c>ct_slave:start/3</c> function now supports an + <c>{env,[{Var,Value}]}</c> option to extend the environment + for the slave node.</p> + <p> + Own Id: OTP-10469 Aux Id: kunagi-317 [228] </p> + </item> + <item> + <p> Some examples overflowing the width of PDF pages have + been corrected.
</p> + <p> + Own Id: OTP-10665</p> + </item> + <item> + <p> + Update common test modules to handle unicode <list> + <item> Use UTF-8 encoding for all HTML files, except the + HTML version of the test suite generated with + erl2html2:convert, which will have the same encoding as + the original test suite (.erl) file. </item> <item> + Encode link targets in HTML files with + test_server_ctrl:uri_encode/1. </item> <item> Use unicode + modifier 't' with ~s when appropriate. </item> <item> Use + unicode:characters_to_list and + unicode:characters_to_binary for conversion between + binaries and strings instead of binary_to_list and + list_to_binary. </item> </list></p> + <p> + Own Id: OTP-10783</p> + </item> + </list> + </section> + + + <section><title>Known Bugs and Problems</title> + <list> + <item> + <p> + CT drops error reason when groups/0 crashes.</p> + <p> + Own Id: OTP-10631 Aux Id: kunagi-345 [256] </p> + </item> + <item> + <p> + Event handler on a ct_master node causes hanging.</p> + <p> + Own Id: OTP-10634 Aux Id: kunagi-347 [258] </p> + </item> + <item> + <p> + CT fails to open telnet conn after a timetrap timeout.</p> + <p> + Own Id: OTP-10648 Aux Id: seq12212 </p> + </item> + </list> + </section> + +</section> + <section><title>Common_Test 1.6.3.1</title> <section><title>Known Bugs and Problems</title> diff --git a/lib/common_test/doc/src/run_test_chapter.xml b/lib/common_test/doc/src/run_test_chapter.xml index a0b2c96006..35f89153d3 100644 --- a/lib/common_test/doc/src/run_test_chapter.xml +++ b/lib/common_test/doc/src/run_test_chapter.xml @@ -4,7 +4,7 @@ <chapter> <header> <copyright> - <year>2003</year><year>2012</year> + <year>2003</year><year>2013</year> <holder>Ericsson AB. 
All Rights Reserved.</holder> </copyright> <legalnotice> @@ -155,6 +155,8 @@ <item><c><![CDATA[-stylesheet <css_file>]]></c>, points out a user HTML style sheet (see below).</item> <item><c><![CDATA[-cover <cover_cfg_file>]]></c>, to perform code coverage test (see <seealso marker="cover_chapter#cover">Code Coverage Analysis</seealso>).</item> + <item><c><![CDATA[-cover_stop <bool>]]></c>, to specify if the cover tool shall be stopped after the test is completed (see + <seealso marker="cover_chapter#cover_stop">Code Coverage Analysis</seealso>).</item> <item><c><![CDATA[-event_handler <event_handlers>]]></c>, to install <seealso marker="event_handler_chapter#event_handling">event handlers</seealso>.</item> <item><c><![CDATA[-event_handler_init <event_handlers>]]></c>, to install @@ -528,369 +530,469 @@ <marker id="test_specifications"></marker> <section> <title>Test Specifications</title> - - <p>The most flexible way to specify what to test, is to use a so - called test specification. A test specification is a sequence of - Erlang terms. The terms are normally declared in a text file (see - <c><seealso marker="ct#run_test-1">ct:run_test/1</seealso></c>), but - may also be passed to Common Test on the form of a list (see - <c><seealso marker="ct#run_testspec-1">ct:run_testspec/1</seealso></c>). - There are two general types of terms: configuration terms and test - specification terms.</p> - <p>With configuration terms it is possible to e.g. 
label the test - run (similar to <c>ct_run -label</c>), evaluate arbitrary expressions - before starting the test, import configuration data (similar to - <c>ct_run -config/-userconfig</c>), specify the top level HTML log - directory (similar to <c>ct_run -logdir</c>), enable code coverage - analysis (similar to <c>ct_run -cover</c>), install Common Test Hooks - (similar to <c>ct_run -ch_hooks</c>), install event_handler plugins - (similar to <c>ct_run -event_handler</c>), specify include directories - that should be passed to the compiler for automatic compilation - (similar to <c>ct_run -include</c>), disable the auto compilation - feature (similar to <c>ct_run -no_auto_compile</c>), set verbosity - levels (similar to <c>ct_run -verbosity</c>), and more.</p> - <p>Configuration terms can be combined with <c>ct_run</c> start flags, - or <c>ct:run_test/1</c> options. The result will for some flags/options - and terms be that the values are merged (e.g. configuration files, - include directories, verbosity levels, silent connections), and for - others that the start flags/options override the test specification - terms (e.g. log directory, label, style sheet, auto compilation).</p> - <p>With test specification terms it is possible to state exactly - which tests should run and in which order. A test term specifies - either one or more suites, one or more test case groups (possibly nested), - or one or more test cases in a group (or in multiple groups) or in a suite.</p> - <p>An arbitrary number of test terms may be declared in sequence. - Common Test will by default compile the terms into one or more tests - to be performed in one resulting test run. Note that a term that - specifies a set of test cases will "swallow" one that only - specifies a subset of these cases. E.g. the result of merging - one term that specifies that all cases in suite S should be - executed, with another term specifying only test case X and Y in - S, is a test of all cases in S. 
However, if a term specifying - test case X and Y in S is merged with a term specifying case Z - in S, the result is a test of X, Y and Z in S. To disable this - behaviour, i.e. to instead perform each test sequentially in a "script-like" - manner, the term <c>merge_tests</c> can be set to <c>false</c> in - the test specification.</p> - <p>A test term can also specify one or more test suites, groups, - or test cases to be skipped. Skipped suites, groups and cases - are not executed and show up in the HTML log files as - SKIPPED.</p> - <p>When a test case group is specified, the resulting test - executes the <c>init_per_group</c> function, followed by all test - cases and sub groups (including their configuration functions), and - finally the <c>end_per_group</c> function. Also if particular - test cases in a group are specified, <c>init_per_group</c> - and <c>end_per_group</c> for the group in question are - called. If a group which is defined (in <c>Suite:group/0</c>) to - be a sub group of another group, is specified (or if particular test - cases of a sub group are), Common Test will call the configuration - functions for the top level groups as well as for the sub group - in question (making it possible to pass configuration data all - the way from <c>init_per_suite</c> down to the test cases in the - sub group).</p> - <p>The test specification utilizes the same mechanism for specifying - test case groups by means of names and paths, as explained in the - <seealso marker="run_test_chapter#group_execution">Group Execution</seealso> - section above, with the addition of the <c>GroupSpec</c> element - described next.</p> - <p>The <c>GroupSpec</c> element makes it possible to specify - group execution properties that will override those in the - group definition (i.e. in <c>groups/0</c>). Execution properties for - sub-groups may be overridden as well. 
This feature makes it possible to - change properties of groups at the time of execution, - without even having to edit the test suite. The very same - feature is available for <c>group</c> elements in the <c>Suite:all/0</c> - list. Therefore, more detailed documentation, and examples, can be - found in the <seealso marker="write_test_chapter#test_case_groups"> - Test case groups</seealso> chapter.</p> - - <p>Below is the test specification syntax. Test specifications can - be used to run tests both in a single test host environment and - in a distributed Common Test environment (Large Scale - Testing). The node parameters in the <c>init</c> term are only - relevant in the latter (see the - <seealso marker="ct_master_chapter#test_specifications">Large - Scale Testing</seealso> chapter for information). For more information - about the various terms, please see the corresponding sections in the - User's Guide, such as e.g. the - <seealso marker="run_test_chapter#ct_run"><c>ct_run</c> - program</seealso> for an overview of available start flags - (since most flags have a corresponding configuration term), and - more detailed explanation of e.g. - <seealso marker="write_test_chapter#logging">Logging</seealso> - (for the <c>verbosity</c>, <c>stylesheet</c> and <c>basic_html</c> terms), - <seealso marker="config_file_chapter#top">External Configuration Data</seealso> - (for the <c>config</c> and <c>userconfig</c> terms), - <seealso marker="event_handler_chapter#event_handling">Event - Handling</seealso> (for the <c>event_handler</c> term), - <seealso marker="ct_hooks_chapter#installing">Common Test Hooks</seealso> - (for the <c>ct_hooks</c> term), etc.</p> - <p>Config terms:</p> - <pre> - {merge_tests, Bool}. - - {define, Constant, Value}. - - {node, NodeAlias, Node}. - - {init, InitOptions}. - {init, [NodeAlias], InitOptions}. - - {label, Label}. - {label, NodeRefs, Label}. - - {verbosity, VerbosityLevels}. - {verbosity, NodeRefs, VerbosityLevels}. 
- - {stylesheet, CSSFile}. - {stylesheet, NodeRefs, CSSFile}. - - {silent_connections, ConnTypes}. - {silent_connections, NodeRefs, ConnTypes}. - - {multiply_timetraps, N}. - {multiply_timetraps, NodeRefs, N}. - - {scale_timetraps, Bool}. - {scale_timetraps, NodeRefs, Bool}. - - {cover, CoverSpecFile}. - {cover, NodeRefs, CoverSpecFile}. - - {include, IncludeDirs}. - {include, NodeRefs, IncludeDirs}. - - {auto_compile, Bool}, - {auto_compile, NodeRefs, Bool}, - - {config, ConfigFiles}. - {config, ConfigDir, ConfigBaseNames}. - {config, NodeRefs, ConfigFiles}. - {config, NodeRefs, ConfigDir, ConfigBaseNames}. - - {userconfig, {CallbackModule, ConfigStrings}}. - {userconfig, NodeRefs, {CallbackModule, ConfigStrings}}. - - {logdir, LogDir}. - {logdir, NodeRefs, LogDir}. - - {logopts, LogOpts}. - {logopts, NodeRefs, LogOpts}. - - {create_priv_dir, PrivDirOption}. - {create_priv_dir, NodeRefs, PrivDirOption}. - - {event_handler, EventHandlers}. - {event_handler, NodeRefs, EventHandlers}. - {event_handler, EventHandlers, InitArgs}. - {event_handler, NodeRefs, EventHandlers, InitArgs}. - - {ct_hooks, CTHModules}. - {ct_hooks, NodeRefs, CTHModules}. - - {enable_builtin_hooks, Bool}. - - {basic_html, Bool}. - {basic_html, NodeRefs, Bool}. + <section> + <title>General description</title> + <p>The most flexible way to specify what to test is to use a so-called + test specification. A test specification is a sequence of + Erlang terms. The terms are normally declared in one or more text files + (see <c><seealso marker="ct#run_test-1">ct:run_test/1</seealso></c>), but + may also be passed to Common Test in the form of a list (see + <c><seealso marker="ct#run_testspec-1">ct:run_testspec/1</seealso></c>). + There are two general types of terms: configuration terms and test + specification terms.</p> + <p>With configuration terms it is possible to e.g.
label the test + run (similar to <c>ct_run -label</c>), evaluate arbitrary expressions + before starting the test, import configuration data (similar to + <c>ct_run -config/-userconfig</c>), specify the top level HTML log + directory (similar to <c>ct_run -logdir</c>), enable code coverage + analysis (similar to <c>ct_run -cover</c>), install Common Test Hooks + (similar to <c>ct_run -ch_hooks</c>), install event_handler plugins + (similar to <c>ct_run -event_handler</c>), specify include directories + that should be passed to the compiler for automatic compilation + (similar to <c>ct_run -include</c>), disable the auto compilation + feature (similar to <c>ct_run -no_auto_compile</c>), set verbosity + levels (similar to <c>ct_run -verbosity</c>), and more.</p> + <p>Configuration terms can be combined with <c>ct_run</c> start flags, + or <c>ct:run_test/1</c> options. The result will for some flags/options + and terms be that the values are merged (e.g. configuration files, + include directories, verbosity levels, silent connections), and for + others that the start flags/options override the test specification + terms (e.g. log directory, label, style sheet, auto compilation).</p> + <p>With test specification terms it is possible to state exactly + which tests should run and in which order. A test term specifies + either one or more suites, one or more test case groups (possibly nested), + or one or more test cases in a group (or in multiple groups) or in a suite.</p> + <p>An arbitrary number of test terms may be declared in sequence. + Common Test will by default compile the terms into one or more tests + to be performed in one resulting test run. Note that a term that + specifies a set of test cases will "swallow" one that only + specifies a subset of these cases. E.g. the result of merging + one term that specifies that all cases in suite S should be + executed, with another term specifying only test case X and Y in + S, is a test of all cases in S. 
However, if a term specifying + test case X and Y in S is merged with a term specifying case Z + in S, the result is a test of X, Y and Z in S. To disable this + behaviour, i.e. to instead perform each test sequentially in a "script-like" + manner, the term <c>merge_tests</c> can be set to <c>false</c> in + the test specification.</p> + <p>A test term can also specify one or more test suites, groups, + or test cases to be skipped. Skipped suites, groups and cases + are not executed and show up in the HTML log files as + SKIPPED.</p> + </section> + <section> + <title>Using multiple test specification files</title> + + <p>When multiple test specification files are given at startup (either + with <c>ct_run -spec file1 file2 ...</c> or + <c>ct:run_test([{spec, [File1,File2,...]}])</c>), + Common Test will either execute one test run per specification file, or + join the files and perform all tests within one single test run. The first + behaviour is the default one. The latter requires that the start + flag/option <c>join_specs</c> is provided, e.g. + <c>ct_run -spec ./my_tests1.ts ./my_tests2.ts -join_specs</c>.</p> + + <p>Joining a number of specifications, or running them separately, can + also be accomplished with (and may be combined with) test specification + file inclusion, described next.</p> + </section> + <section> + <title>Test specification file inclusion</title> + <p>With the <c>specs</c> term (see syntax below), it's possible to have + a test specification include other specifications. An included + specification may either be joined with the source specification, + or used to produce a separate test run (like with the <c>join_specs</c> + start flag/option above). Example:</p> + <pre> + %% In specification file "a.spec" + {specs, join, ["b.spec", "c.spec"]}. + {specs, separate, ["d.spec", "e.spec"]}. 
+ %% Config and test terms follow + ...</pre> + <p>In this example, the test terms defined in files "b.spec" and "c.spec" + will be joined with the terms in the source specification "a.spec" + (if any). The inclusion of specifications "d.spec" and + "e.spec" will result in two separate, and independent, test runs (i.e. + one for each included specification).</p> + <p>Note that the <c>join</c> option does not imply that the test terms + will be merged (see <c>merge_tests</c> above), only that all tests are + executed in one single test run.</p> + <p>Joined specifications share common configuration settings, such as + the list of <c>config</c> files or <c>include</c> directories. + For configuration that can not be combined, such as settings for <c>logdir</c> + or <c>verbosity</c>, it is up to the user to ensure there are no clashes + when the test specifications are joined. Specifications included with + the <c>separate</c> option, do not share configuration settings with the + source specification. This is useful e.g. if there are clashing + configuration settings in included specifications, making it impossible + to join them.</p> + <p>If <c>{merge_tests,true}</c> is set in the source specification + (which is the default setting), terms in joined specifications will be + merged with terms in the source specification (according to the + description of <c>merge_tests</c> above).</p> + <p>Note that it is always the <c>merge_tests</c> setting in the source + specification that is used when joined with other specifications. + Say e.g. that a source specification A, with tests TA1 and TA2, has + <c>{merge_tests,false}</c> set, and it includes another specification, + B, with tests TB1 and TB2, that has <c>{merge_tests,true}</c> set. + The result will be that the test series: <c>TA1,TA2,merge(TB1,TB2)</c>, + is executed. 
The opposite <c>merge_tests</c> settings would result in the + following test series: <c>merge(merge(TA1,TA2),TB1,TB2)</c>.</p> + <p>The <c>specs</c> term may of course be used to nest specifications, + i.e. have one specification include other specifications, which in turn + include others, etc.</p> + </section> + <section> + <title>Test case groups</title> + + <p>When a test case group is specified, the resulting test + executes the <c>init_per_group</c> function, followed by all test + cases and sub groups (including their configuration functions), and + finally the <c>end_per_group</c> function. Also if particular + test cases in a group are specified, <c>init_per_group</c> + and <c>end_per_group</c> for the group in question are + called. If a group which is defined (in <c>Suite:groups/0</c>) to + be a sub group of another group, is specified (or if particular test + cases of a sub group are), Common Test will call the configuration + functions for the top level groups as well as for the sub group + in question (making it possible to pass configuration data all + the way from <c>init_per_suite</c> down to the test cases in the + sub group).</p> + <p>The test specification utilizes the same mechanism for specifying + test case groups by means of names and paths, as explained in the + <seealso marker="run_test_chapter#group_execution">Group Execution</seealso> + section above, with the addition of the <c>GroupSpec</c> element + described next.</p> + <p>The <c>GroupSpec</c> element makes it possible to specify + group execution properties that will override those in the + group definition (i.e. in <c>groups/0</c>). Execution properties for + sub-groups may be overridden as well. This feature makes it possible to + change properties of groups at the time of execution, + without even having to edit the test suite. The very same + feature is available for <c>group</c> elements in the <c>Suite:all/0</c> + list. 
Therefore, more detailed documentation, and examples, can be + found in the <seealso marker="write_test_chapter#test_case_groups"> + Test case groups</seealso> chapter.</p> + </section> - {release_shell, Bool}.</pre> + <section> + <title>Test specification syntax</title> + + <p>Below is the test specification syntax. Test specifications can + be used to run tests both in a single test host environment and + in a distributed Common Test environment (Large Scale + Testing). The node parameters in the <c>init</c> term are only + relevant in the latter (see the + <seealso marker="ct_master_chapter#test_specifications">Large + Scale Testing</seealso> chapter for information). For more information + about the various terms, please see the corresponding sections in the + User's Guide, such as e.g. the + <seealso marker="run_test_chapter#ct_run"><c>ct_run</c> + program</seealso> for an overview of available start flags + (since most flags have a corresponding configuration term), and + more detailed explanation of e.g. + <seealso marker="write_test_chapter#logging">Logging</seealso> + (for the <c>verbosity</c>, <c>stylesheet</c> and <c>basic_html</c> terms), + <seealso marker="config_file_chapter#top">External Configuration Data</seealso> + (for the <c>config</c> and <c>userconfig</c> terms), + <seealso marker="event_handler_chapter#event_handling">Event + Handling</seealso> (for the <c>event_handler</c> term), + <seealso marker="ct_hooks_chapter#installing">Common Test Hooks</seealso> + (for the <c>ct_hooks</c> term), etc.</p> + </section> + <p>Config terms:</p> + <pre> + {merge_tests, Bool}. + + {define, Constant, Value}. + + {specs, InclSpecsOption, TestSpecs}. + + {node, NodeAlias, Node}. + + {init, InitOptions}. + {init, [NodeAlias], InitOptions}. + + {label, Label}. + {label, NodeRefs, Label}. + + {verbosity, VerbosityLevels}. + {verbosity, NodeRefs, VerbosityLevels}. + + {stylesheet, CSSFile}. + {stylesheet, NodeRefs, CSSFile}. + + {silent_connections, ConnTypes}. 
+ {silent_connections, NodeRefs, ConnTypes}. + + {multiply_timetraps, N}. + {multiply_timetraps, NodeRefs, N}. + + {scale_timetraps, Bool}. + {scale_timetraps, NodeRefs, Bool}. + + {cover, CoverSpecFile}. + {cover, NodeRefs, CoverSpecFile}. + + {cover_stop, Bool}. + {cover_stop, NodeRefs, Bool}. + + {include, IncludeDirs}. + {include, NodeRefs, IncludeDirs}. + + {auto_compile, Bool}, + {auto_compile, NodeRefs, Bool}, + + {config, ConfigFiles}. + {config, ConfigDir, ConfigBaseNames}. + {config, NodeRefs, ConfigFiles}. + {config, NodeRefs, ConfigDir, ConfigBaseNames}. + + {userconfig, {CallbackModule, ConfigStrings}}. + {userconfig, NodeRefs, {CallbackModule, ConfigStrings}}. + + {logdir, LogDir}. + {logdir, NodeRefs, LogDir}. + + {logopts, LogOpts}. + {logopts, NodeRefs, LogOpts}. + + {create_priv_dir, PrivDirOption}. + {create_priv_dir, NodeRefs, PrivDirOption}. + + {event_handler, EventHandlers}. + {event_handler, NodeRefs, EventHandlers}. + {event_handler, EventHandlers, InitArgs}. + {event_handler, NodeRefs, EventHandlers, InitArgs}. + + {ct_hooks, CTHModules}. + {ct_hooks, NodeRefs, CTHModules}. + + {enable_builtin_hooks, Bool}. + + {basic_html, Bool}. + {basic_html, NodeRefs, Bool}. + + {release_shell, Bool}.</pre> + <p>Test terms:</p> - <pre> - {suites, Dir, Suites}. - {suites, NodeRefs, Dir, Suites}. - - {groups, Dir, Suite, Groups}. - {groups, NodeRefs, Dir, Suite, Groups}. - - {groups, Dir, Suite, Groups, {cases,Cases}}. - {groups, NodeRefs, Dir, Suite, Groups, {cases,Cases}}. - - {cases, Dir, Suite, Cases}. - {cases, NodeRefs, Dir, Suite, Cases}. - - {skip_suites, Dir, Suites, Comment}. - {skip_suites, NodeRefs, Dir, Suites, Comment}. - - {skip_groups, Dir, Suite, GroupNames, Comment}. - {skip_groups, NodeRefs, Dir, Suite, GroupNames, Comment}. - - {skip_cases, Dir, Suite, Cases, Comment}. - {skip_cases, NodeRefs, Dir, Suite, Cases, Comment}.</pre> - + <pre> + {suites, Dir, Suites}. + {suites, NodeRefs, Dir, Suites}. + + {groups, Dir, Suite, Groups}. 
+ {groups, NodeRefs, Dir, Suite, Groups}. + + {groups, Dir, Suite, Groups, {cases,Cases}}. + {groups, NodeRefs, Dir, Suite, Groups, {cases,Cases}}. + + {cases, Dir, Suite, Cases}. + {cases, NodeRefs, Dir, Suite, Cases}. + + {skip_suites, Dir, Suites, Comment}. + {skip_suites, NodeRefs, Dir, Suites, Comment}. + + {skip_groups, Dir, Suite, GroupNames, Comment}. + {skip_groups, NodeRefs, Dir, Suite, GroupNames, Comment}. + + {skip_cases, Dir, Suite, Cases, Comment}. + {skip_cases, NodeRefs, Dir, Suite, Cases, Comment}.</pre> + <p>Types:</p> - <pre> - Bool = true | false - Constant = atom() - Value = term() - NodeAlias = atom() - Node = node() - NodeRef = NodeAlias | Node | master - NodeRefs = all_nodes | [NodeRef] | NodeRef - InitOptions = term() - Label = atom() | string() - VerbosityLevels = integer() | [{Category,integer()}] - Category = atom() - CSSFile = string() - ConnTypes = all | [atom()] - N = integer() - CoverSpecFile = string() - IncludeDirs = string() | [string()] - ConfigFiles = string() | [string()] - ConfigDir = string() - ConfigBaseNames = string() | [string()] - CallbackModule = atom() - ConfigStrings = string() | [string()] - LogDir = string() - LogOpts = [term()] - PrivDirOption = auto_per_run | auto_per_tc | manual_per_tc - EventHandlers = atom() | [atom()] - InitArgs = [term()] - CTHModules = [CTHModule | {CTHModule, CTHInitArgs} | {CTHModule, CTHInitArgs, CTHPriority}] - CTHModule = atom() - CTHInitArgs = term() - Dir = string() - Suites = atom() | [atom()] | all - Suite = atom() - Groups = GroupPath | [GroupPath] | GroupSpec | [GroupSpec] | all - GroupPath = [GroupName] - GroupSpec = GroupName | {GroupName,Properties} | {GroupName,Properties,GroupSpec} - GroupName = atom() - GroupNames = GroupName | [GroupName] - Cases = atom() | [atom()] | all - Comment = string() | ""</pre> - - <p>The difference between the <c>config</c> terms above, is that with - <c>ConfigDir</c>, <c>ConfigBaseNames</c> is a list of base names, - i.e. 
without directory paths. <c>ConfigFiles</c> must be full names, - including paths. E.g, these two terms have the same meaning:</p> - <pre> - {config, ["/home/testuser/tests/config/nodeA.cfg", - "/home/testuser/tests/config/nodeB.cfg"]}. - - {config, "/home/testuser/tests/config", ["nodeA.cfg","nodeB.cfg"]}.</pre> - - <note><p>Any relative paths specified in the test specification, will be - relative to the directory which contains the test specification file, if - <c>ct_run -spec TestSpecFile ...</c> or - <c>ct:run:test([{spec,TestSpecFile},...])</c> - executes the test. The path will be relative to the top level log directory, if - <c>ct:run:testspec(TestSpec)</c> executes the test.</p></note> - - <p>The <c>define</c> term introduces a constant, which is used to - replace the name <c>Constant</c> with <c>Value</c>, wherever it's found in - the test specification. This replacement happens during an initial iteration - through the test specification. Constants may be used anywhere in the test - specification, e.g. in arbitrary lists and tuples, and even in strings - and inside the value part of other constant definitions! A constant can - also be part of a node name, but that is the only place where a constant - can be part of an atom.</p> - - <note><p>For the sake of readability, the name of the constant must always - begin with an upper case letter, or a <c>$</c>, <c>?</c>, or <c>_</c>. - This also means that it must always be single quoted (obviously, since - the constant name is actually an atom, not text).</p></note> - - <p>The main benefit of constants is that they can be used to reduce the size - (and avoid repetition) of long strings, such as file paths. Compare these - terms:</p> - - <pre> - %% 1a. no constant - {config, "/home/testuser/tests/config", ["nodeA.cfg","nodeB.cfg"]}. - {suites, "/home/testuser/tests/suites", all}. - - %% 1b. with constant - {define, 'TESTDIR', "/home/testuser/tests"}. - {config, "'TESTDIR'/config", ["nodeA.cfg","nodeB.cfg"]}. 
- {suites, "'TESTDIR'/suites", all}. - - %% 2a. no constants - {config, [testnode@host1, testnode@host2], "../config", ["nodeA.cfg","nodeB.cfg"]}. - {suites, [testnode@host1, testnode@host2], "../suites", [x_SUITE, y_SUITE]}. - - %% 2b. with constants - {define, 'NODE', testnode}. - {define, 'NODES', ['NODE'@host1, 'NODE'@host2]}. - {config, 'NODES', "../config", ["nodeA.cfg","nodeB.cfg"]}. - {suites, 'NODES', "../suites", [x_SUITE, y_SUITE]}.</pre> - - <p>Constants make the test specification term <c>alias</c>, in previous - versions of Common Test, redundant. This term has been deprecated but will - remain supported in upcoming Common Test releases. Replacing <c>alias</c> - terms with <c>define</c> is strongly recommended though! Here's an example - of such a replacement:</p> + <pre> + Bool = true | false + Constant = atom() + Value = term() + InclSpecsOption = join | separate + TestSpecs = string() | [string()] + NodeAlias = atom() + Node = node() + NodeRef = NodeAlias | Node | master + NodeRefs = all_nodes | [NodeRef] | NodeRef + InitOptions = term() + Label = atom() | string() + VerbosityLevels = integer() | [{Category,integer()}] + Category = atom() + CSSFile = string() + ConnTypes = all | [atom()] + N = integer() + CoverSpecFile = string() + IncludeDirs = string() | [string()] + ConfigFiles = string() | [string()] + ConfigDir = string() + ConfigBaseNames = string() | [string()] + CallbackModule = atom() + ConfigStrings = string() | [string()] + LogDir = string() + LogOpts = [term()] + PrivDirOption = auto_per_run | auto_per_tc | manual_per_tc + EventHandlers = atom() | [atom()] + InitArgs = [term()] + CTHModules = [CTHModule | + {CTHModule, CTHInitArgs} | + {CTHModule, CTHInitArgs, CTHPriority}] + CTHModule = atom() + CTHInitArgs = term() + Dir = string() + Suites = atom() | [atom()] | all + Suite = atom() + Groups = GroupPath | [GroupPath] | GroupSpec | [GroupSpec] | all + GroupPath = [GroupName] + GroupSpec = GroupName | {GroupName,Properties} | 
{GroupName,Properties,GroupSpec} + GroupName = atom() + GroupNames = GroupName | [GroupName] + Cases = atom() | [atom()] | all + Comment = string() | ""</pre> + + <section> + <p>The difference between the <c>config</c> terms above is that with + <c>ConfigDir</c>, <c>ConfigBaseNames</c> is a list of base names, + i.e. without directory paths. <c>ConfigFiles</c> must be full names, + including paths. E.g. these two terms have the same meaning:</p> + <pre> + {config, ["/home/testuser/tests/config/nodeA.cfg", + "/home/testuser/tests/config/nodeB.cfg"]}. + + {config, "/home/testuser/tests/config", ["nodeA.cfg","nodeB.cfg"]}.</pre> + + <note><p>Any relative paths specified in the test specification will be + relative to the directory which contains the test specification file, if + <c>ct_run -spec TestSpecFile ...</c> or + <c>ct:run_test([{spec,TestSpecFile},...])</c> + executes the test. The path will be relative to the top level log directory, if + <c>ct:run_testspec(TestSpec)</c> executes the test.</p></note> + </section> - <pre> - %% using the old alias term - {config, "/home/testuser/tests/config/nodeA.cfg"}. - {alias, suite_dir, "/home/testuser/tests/suites"}. - {groups, suite_dir, x_SUITE, group1}. - - %% replacing with constants - {define, 'TestDir', "/home/testuser/tests"}. - {define, 'CfgDir', "'TestDir'/config"}. - {define, 'SuiteDir', "'TestDir'/suites"}. - {config, 'CfgDir', "nodeA.cfg"}. - {groups, 'SuiteDir', x_SUITE, group1}.</pre> - - <p>Actually, constants could well replace the <c>node</c> term too, but - this still has declarative value, mainly when used in combination - with <c>NodeRefs == all_nodes</c> (see types above).</p> - - <p>Here follows a simple test specification example:</p> - <pre> - {define, 'Top', "/home/test"}. - {define, 'T1', "'Top'/t1"}. - {define, 'T2', "'Top'/t2"}. - {define, 'T3', "'Top'/t3"}. - {define, 'CfgFile', "config.cfg"}. 
+ <section> + <title>Constants</title> + + <p>The <c>define</c> term introduces a constant, which is used to + replace the name <c>Constant</c> with <c>Value</c>, wherever it's found in + the test specification. This replacement happens during an initial iteration + through the test specification. Constants may be used anywhere in the test + specification, e.g. in arbitrary lists and tuples, and even in strings + and inside the value part of other constant definitions! A constant can + also be part of a node name, but that is the only place where a constant + can be part of an atom.</p> + + <note><p>For the sake of readability, the name of the constant must always + begin with an upper case letter, or a <c>$</c>, <c>?</c>, or <c>_</c>. + This also means that it must always be single quoted (obviously, since + the constant name is actually an atom, not text).</p></note> + + <p>The main benefit of constants is that they can be used to reduce the size + (and avoid repetition) of long strings, such as file paths. Compare these + terms:</p> + + <pre> + %% 1a. no constant + {config, "/home/testuser/tests/config", ["nodeA.cfg","nodeB.cfg"]}. + {suites, "/home/testuser/tests/suites", all}. + + %% 1b. with constant + {define, 'TESTDIR', "/home/testuser/tests"}. + {config, "'TESTDIR'/config", ["nodeA.cfg","nodeB.cfg"]}. + {suites, "'TESTDIR'/suites", all}. + + %% 2a. no constants + {config, [testnode@host1, testnode@host2], "../config", ["nodeA.cfg","nodeB.cfg"]}. + {suites, [testnode@host1, testnode@host2], "../suites", [x_SUITE, y_SUITE]}. + + %% 2b. with constants + {define, 'NODE', testnode}. + {define, 'NODES', ['NODE'@host1, 'NODE'@host2]}. + {config, 'NODES', "../config", ["nodeA.cfg","nodeB.cfg"]}. + {suites, 'NODES', "../suites", [x_SUITE, y_SUITE]}.</pre> + + <p>Constants make the test specification term <c>alias</c>, in previous + versions of Common Test, redundant. This term has been deprecated but will + remain supported in upcoming Common Test releases. 
Replacing <c>alias</c> + terms with <c>define</c> is strongly recommended though! Here's an example + of such a replacement:</p> + + <pre> + %% using the old alias term + {config, "/home/testuser/tests/config/nodeA.cfg"}. + {alias, suite_dir, "/home/testuser/tests/suites"}. + {groups, suite_dir, x_SUITE, group1}. + + %% replacing with constants + {define, 'TestDir', "/home/testuser/tests"}. + {define, 'CfgDir', "'TestDir'/config"}. + {define, 'SuiteDir', "'TestDir'/suites"}. + {config, 'CfgDir', "nodeA.cfg"}. + {groups, 'SuiteDir', x_SUITE, group1}.</pre> + + <p>Actually, constants could well replace the <c>node</c> term too, but + this still has declarative value, mainly when used in combination + with <c>NodeRefs == all_nodes</c> (see types above).</p> + </section> - {logdir, "'Top'/logs"}. - - {config, ["'T1'/'CfgFile'", "'T2'/'CfgFile'", "'T3'/'CfgFile'"]}. - - {suites, 'T1', all}. - {skip_suites, 'T1', [t1B_SUITE,t1D_SUITE], "Not implemented"}. - {skip_cases, 'T1', t1A_SUITE, [test3,test4], "Irrelevant"}. - {skip_cases, 'T1', t1C_SUITE, [test1], "Ignore"}. - - {suites, 'T2', [t2B_SUITE,t2C_SUITE]}. - {cases, 'T2', t2A_SUITE, [test4,test1,test7]}. - - {skip_suites, 'T3', all, "Not implemented"}.</pre> + <section> + <title>Example</title> + + <p>Here follows a simple test specification example:</p> + <pre> + {define, 'Top', "/home/test"}. + {define, 'T1', "'Top'/t1"}. + {define, 'T2', "'Top'/t2"}. + {define, 'T3', "'Top'/t3"}. + {define, 'CfgFile', "config.cfg"}. + + {logdir, "'Top'/logs"}. + + {config, ["'T1'/'CfgFile'", "'T2'/'CfgFile'", "'T3'/'CfgFile'"]}. + + {suites, 'T1', all}. + {skip_suites, 'T1', [t1B_SUITE,t1D_SUITE], "Not implemented"}. + {skip_cases, 'T1', t1A_SUITE, [test3,test4], "Irrelevant"}. + {skip_cases, 'T1', t1C_SUITE, [test1], "Ignore"}. + + {suites, 'T2', [t2B_SUITE,t2C_SUITE]}. + {cases, 'T2', t2A_SUITE, [test4,test1,test7]}. 
+ + {skip_suites, 'T3', all, "Not implemented"}.</pre> + + <p>The example specifies the following:</p> + <list> + <item>The specified logdir directory will be used for storing + the HTML log files (in subdirectories tagged with node name, + date and time).</item> + <item>The variables in the specified test system config files will be + imported for the test.</item> + <item>The first test to run includes all suites for system t1. Excluded from + the test are however the t1B and t1D suites. Also test cases test3 and + test4 in t1A as well as the test1 case in t1C are excluded from + the test.</item> + <item>Secondly, the test for system t2 should run. The included suites are + t2B and t2C. Included are also test cases test4, test1 and test7 in suite + t2A. Note that the test cases will be executed in the specified order.</item> + <item>Lastly, all suites for system t3 are to be completely skipped and this + should be explicitly noted in the log files.</item> + </list> + </section> - <p>The example specifies the following:</p> - <list> - <item>The specified logdir directory will be used for storing - the HTML log files (in subdirectories tagged with node name, - date and time).</item> - <item>The variables in the specified test system config files will be - imported for the test.</item> - <item>The first test to run includes all suites for system t1. Excluded from - the test are however the t1B and t1D suites. Also test cases test3 and - test4 in t1A as well as the test1 case in t1C are excluded from - the test.</item> - <item>Secondly, the test for system t2 should run. The included suites are - t2B and t2C. Included are also test cases test4, test1 and test7 in suite - t2A. 
Note that the test cases will be executed in the specified order.</item> - <item>Lastly, all suites for systems t3 are to be completely skipped and this - should be explicitly noted in the log files.</item> - </list> - <p>With the <c>init</c> term it's possible to specify initialization options - for nodes defined in the test specification. Currently, there are options - to start the node and/or to evaluate any function on the node. - See the <seealso marker="ct_master_chapter#ct_slave">Automatic startup of - the test target nodes</seealso> chapter for details.</p> - <p>It is possible for the user to provide a test specification that - includes (for Common Test) unrecognizable terms. If this is desired, - the <c>-allow_user_terms</c> flag should be used when starting tests with - <c>ct_run</c>. This forces Common Test to ignore unrecognizable terms. - Note that in this mode, Common Test is not able to check the specification - for errors as efficiently as if the scanner runs in default mode. - If <c><seealso marker="ct#run_test-1">ct:run_test/1</seealso></c> is used for starting the tests, the relaxed scanner - mode is enabled by means of the tuple: <c>{allow_user_terms,true}</c></p> + <section> + <title>The init term</title> + <p>With the <c>init</c> term it's possible to specify initialization options + for nodes defined in the test specification. Currently, there are options + to start the node and/or to evaluate any function on the node. + See the <seealso marker="ct_master_chapter#ct_slave">Automatic startup of + the test target nodes</seealso> chapter for details.</p> + </section> + <section> + <title>User specific terms</title> + <p>It is possible for the user to provide a test specification that + includes (for Common Test) unrecognizable terms. If this is desired, + the <c>-allow_user_terms</c> flag should be used when starting tests with + <c>ct_run</c>. This forces Common Test to ignore unrecognizable terms. 
+ Note that in this mode, Common Test is not able to check the specification + for errors as efficiently as if the scanner runs in default mode. + If <c><seealso marker="ct#run_test-1">ct:run_test/1</seealso></c> is used + for starting the tests, the relaxed scanner + mode is enabled by means of the tuple: <c>{allow_user_terms,true}</c>.</p> + </section> </section> <section> diff --git a/lib/common_test/doc/src/write_test_chapter.xml b/lib/common_test/doc/src/write_test_chapter.xml index 248d7de8b6..cc8d913994 100644 --- a/lib/common_test/doc/src/write_test_chapter.xml +++ b/lib/common_test/doc/src/write_test_chapter.xml @@ -4,7 +4,7 @@ <chapter> <header> <copyright> - <year>2003</year><year>2012</year> + <year>2003</year><year>2013</year> <holder>Ericsson AB. All Rights Reserved.</holder> </copyright> <legalnotice> @@ -982,38 +982,36 @@ <p>Example:</p> <pre> + Some printouts during test case execution: - Some printouts during test case execution: + io:format("1. Standard IO, importance = ~w~n", [?STD_IMPORTANCE]), + ct:log("2. Uncategorized, importance = ~w", [?STD_IMPORTANCE]), + ct:log(info, "3. Categorized info, importance = ~w", [?STD_IMPORTANCE]), + ct:log(info, ?LOW_IMPORTANCE, "4. Categorized info, importance = ~w", [?LOW_IMPORTANCE]), + ct:log(error, "5. Categorized error, importance = ~w", [?HI_IMPORTANCE]), + ct:log(error, ?HI_IMPORTANCE, "6. 
Categorized error, importance = ~w", [?MAX_IMPORTANCE]), + If starting the test without specifying any verbosity levels: - If starting the test without specifying any verbosity levels: + $ ct_run ... - $ ct_run ... + the following gets printed: - the following gets printed: - - 1. Standard IO, importance = 50 - 2. Uncategorized, importance = 50 - 3. Categorized info, importance = 50 - 5. Categorized error, importance = 75 - 6. Categorized error, importance = 99 - - If starting the test with: - - $ ct_run -verbosity 1 and info 75 - - the following gets printed: + 1. Standard IO, importance = 50 + 2. Uncategorized, importance = 50 + 3. Categorized info, importance = 50 + 5. Categorized error, importance = 75 + 6. Categorized error, importance = 99 + + If starting the test with: + + $ ct_run -verbosity 1 and info 75 + + the following gets printed: - 3. Categorized info, importance = 50 - 4. Categorized info, importance = 25 - 6. Categorized error, importance = 99 - </pre> + 3. Categorized info, importance = 50 + 4. Categorized info, importance = 25 + 6. Categorized error, importance = 99</pre> <p>How categories can be mapped to CSS tags is documented in the <seealso marker="run_test_chapter#html_stylesheet">Running Tests</seealso> |