Diffstat (limited to 'lib/common_test/doc/src/run_test_chapter.xml')
-rw-r--r--  lib/common_test/doc/src/run_test_chapter.xml  | 188
1 file changed, 134 insertions(+), 54 deletions(-)
diff --git a/lib/common_test/doc/src/run_test_chapter.xml b/lib/common_test/doc/src/run_test_chapter.xml
index d731d18783..1efff25f5b 100644
--- a/lib/common_test/doc/src/run_test_chapter.xml
+++ b/lib/common_test/doc/src/run_test_chapter.xml
@@ -4,7 +4,7 @@
<chapter>
<header>
<copyright>
- <year>2003</year><year>2009</year>
+ <year>2003</year><year>2010</year>
<holder>Ericsson AB. All Rights Reserved.</holder>
</copyright>
<legalnotice>
@@ -13,15 +13,15 @@
compliance with the License. You should have received a copy of the
Erlang Public License along with this software. If not, it can be
retrieved online at http://www.erlang.org/.
-
+
Software distributed under the License is distributed on an "AS IS"
basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
the License for the specific language governing rights and limitations
under the License.
-
+
</legalnotice>
- <title>Running Test Suites</title>
+ <title>Running Tests</title>
<prepared>Peter Andersson, Kenneth Lundin</prepared>
<docno></docno>
<date></date>
@@ -97,25 +97,32 @@
the <c><![CDATA[{auto_compile,false}]]></c> option with
<c><![CDATA[ct:run_test/1]]></c>. With automatic compilation
disabled, the user is responsible for compiling the test suite modules
- (and any help modules) before the test run. Common Test will only verify
- that the specified test suites exist before starting the tests.</p>
+      (and any help modules) before the test run. If the modules cannot be loaded
+ from the local file system during startup of Common Test, the user needs to
+ pre-load the modules before starting the test. Common Test will only verify
+ that the specified test suites exist (i.e. that they are, or can be, loaded).
+ This is useful e.g. if the test suites are transferred and loaded as binaries via
+ RPC from a remote node.</p>
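As a rough sketch of the pre-loading case described above (module and file names are made up for illustration), a suite could be compiled to a binary, loaded on the node, and then run with automatic compilation disabled:
<pre>
  %% Illustration only: compile the suite to a binary, load it, and run it
  %% without automatic compilation (so no source is needed on the test node).
  {ok, my_SUITE, Bin} = compile:file("my_SUITE.erl", [binary]),
  {module, my_SUITE} = code:load_binary(my_SUITE, "my_SUITE.erl", Bin),
  ct:run_test([{suite, my_SUITE}, {auto_compile, false}]).
</pre>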
</section>
<section>
- <title>Running tests from the UNIX command line</title>
+ <title>Running tests from the OS command line</title>
- <p>The script <c>run_test</c> can be used for running tests from
- the Unix/Linux command line, e.g.
+ <p>The <c>run_test</c> program can be used for running tests from
+ the OS command line, e.g.
</p>
<list>
<item><c><![CDATA[run_test -config <configfilenames> -dir <dirs>]]></c></item>
<item><c><![CDATA[run_test -config <configfilenames> -suite <suiteswithfullpath>]]></c>
</item>
+ <item><c><![CDATA[run_test -userconfig <callbackmodulename> <configfilenames> -suite <suiteswithfullpath>]]></c>
+ </item>
<item><c><![CDATA[run_test -config <configfilenames> -suite <suitewithfullpath>
-group <groupnames> -case <casenames>]]></c></item>
</list>
<p>Examples:</p>
<p><c>$ run_test -config $CFGS/sys1.cfg $CFGS/sys2.cfg -dir $SYS1_TEST $SYS2_TEST</c></p>
+ <p><c>$ run_test -userconfig ct_config_xml $CFGS/sys1.xml $CFGS/sys2.xml -dir $SYS1_TEST $SYS2_TEST</c></p>
<p><c>$ run_test -suite $SYS1_TEST/setup_SUITE $SYS2_TEST/config_SUITE</c></p>
<p><c>$ run_test -suite $SYS1_TEST/setup_SUITE -case start stop</c></p>
<p><c>$ run_test -suite $SYS1_TEST/setup_SUITE -group installation -case start stop</c></p>
@@ -123,6 +130,8 @@
<p>Other flags that may be used with <c>run_test</c>:</p>
<list>
<item><c><![CDATA[-logdir <dir>]]></c>, specifies where the HTML log files are to be written.</item>
+ <item><c><![CDATA[-label <name_of_test_run>]]></c>, associates the test run with a name that gets printed
+ in the overview HTML log files.</item>
<item><c>-refresh_logs</c>, refreshes the top level HTML index files.</item>
<item><c>-vts</c>, start web based GUI (see below).</item>
<item><c>-shell</c>, start interactive shell mode (see below).</item>
@@ -136,8 +145,14 @@
<seealso marker="cover_chapter#cover">Code Coverage Analysis</seealso>).</item>
<item><c><![CDATA[-event_handler <event_handlers>]]></c>, to install
<seealso marker="event_handler_chapter#event_handling">event handlers</seealso>.</item>
+ <item><c><![CDATA[-event_handler_init <event_handlers>]]></c>, to install
+ <seealso marker="event_handler_chapter#event_handling">event handlers</seealso> including start arguments.</item>
<item><c><![CDATA[-include]]></c>, specifies include directories (see above).</item>
<item><c><![CDATA[-no_auto_compile]]></c>, disables the automatic test suite compilation feature (see above).</item>
+ <item><c><![CDATA[-multiply_timetraps <n>]]></c>, extends <seealso marker="write_test_chapter#timetraps">timetrap
+ timeout</seealso> values.</item>
+ <item><c><![CDATA[-scale_timetraps <bool>]]></c>, enables automatic <seealso marker="write_test_chapter#timetraps">timetrap
+ timeout</seealso> scaling.</item>
<item><c><![CDATA[-repeat <n>]]></c>, tells Common Test to repeat the tests n times (see below).</item>
<item><c><![CDATA[-duration <time>]]></c>, tells Common Test to repeat the tests for duration of time (see below).</item>
<item><c><![CDATA[-until <stop_time>]]></c>, tells Common Test to repeat the tests until stop_time (see below).</item>
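Since these flags map onto <c>ct:run_test/1</c> options, a hedged example of the <c>-multiply_timetraps</c> and <c>-scale_timetraps</c> flags above in API form (the suite name and values are examples only):
<pre>
  %% Illustration only: multiply all timetrap timeouts by 10 and let
  %% Common Test scale them automatically as well.
  ct:run_test([{suite, "./my_SUITE"},
               {multiply_timetraps, 10},
               {scale_timetraps, true}]).
</pre>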
@@ -165,7 +180,7 @@
the current working directory of the Erlang Runtime System during the test run!</p>
</note>
- <p>For details on how to generate the <c>run_test</c> script, see the
+ <p>For more information about the <c>run_test</c> program, see the
<seealso marker="install_chapter#general">Installation</seealso> chapter.
</p>
</section>
@@ -174,7 +189,7 @@
<title>Running tests from the Web based GUI</title>
<p>The web based GUI, VTS, is started with the <c>run_test</c>
- script. From the GUI you can load config files, and select
+ program. From the GUI you can load config files, and select
directories, suites and cases to run. You can also state the
config files, directories, suites and cases on the command line
when starting the web based GUI.
@@ -198,10 +213,11 @@
<p>Example:</p>
<p><c><![CDATA[$ run_test -vts -browser 'firefox&']]></c></p>
<p>Note that the browser must run as a separate OS process or VTS will hang!</p>
- <p>If no specific browser start command is specified, netscape will
+ <p>If no specific browser start command is specified, Firefox will
be the default browser on Unix platforms and Internet Explorer on Windows.
- If Common Test fails to start a browser automatically, start your
- favourite browser manually instead and type in the URL that Common Test
+ If Common Test fails to start a browser automatically, or <c>'none'</c> is
+ specified as the value for -browser (i.e. <c>-browser none</c>), start your
+ favourite browser manually and type in the URL that Common Test
displays in the shell.</p>
</section>
@@ -211,7 +227,7 @@
<p>Common Test provides an Erlang API for running tests. The main (and most
flexible) function for specifying and executing tests is called
<c>ct:run_test/1</c>. This function takes the same start parameters as
- the <c>run_test</c> script described above, only the flags are instead
+ the <c>run_test</c> program described above, only the flags are instead
given as options in a list of key-value tuples. E.g. a test specified
with <c>run_test</c> like:</p>
<p><c>$ run_test -suite ./my_SUITE -logdir ./results</c></p>
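In the Erlang shell the same test can then be specified with the flags given as key-value tuples, roughly:
<pre>
  1> ct:run_test([{suite,"./my_SUITE"}, {logdir,"./results"}]).
</pre>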
@@ -237,13 +253,14 @@
manually and call <c>ct:install/1</c> to install any configuration
data you might need (use <c>[]</c> as argument otherwise), then
call <c>ct:start_interactive/0</c> to start Common Test. If you use
- the <c>run_test</c> script, you may start the Erlang shell and Common Test
+ the <c>run_test</c> program, you may start the Erlang shell and Common Test
in the same go by using the <c>-shell</c> and, optionally, the <c>-config</c>
- flag:
+ and/or <c>-userconfig</c> flag. Examples:
</p>
<list>
<item><c>run_test -shell</c></item>
- <item><c><![CDATA[run_test -shell -config <configfilename>]]></c></item>
+ <item><c><![CDATA[run_test -shell -config cfg/db.cfg]]></c></item>
+ <item><c><![CDATA[run_test -shell -userconfig db_login testuser x523qZ]]></c></item>
</list>
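For the manual alternative described above, a minimal sketch (the config file name is only an example):
<pre>
  %% Install configuration data (or pass [] if none is needed), then
  %% start Common Test in interactive mode from a running Erlang shell.
  ct:install([{config, ["cfg/db.cfg"]}]),
  ct:start_interactive().
</pre>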
<p>If no config file is given with the <c>run_test</c> command,
@@ -268,7 +285,8 @@
2> ct_telnet:open(unix_telnet).
{ok,&lt;0.105.0&gt;}
4> ct_telnet:cmd(unix_telnet, "ls .").
- {ok,["ls .","file1 ...",...]}</pre>
+ {ok,["ls .","file1 ...",...]}
+ </pre>
<p>Everything that Common Test normally prints in the test case logs,
will in the interactive mode be written to a log named
@@ -319,54 +337,99 @@
<marker id="test_specifications"></marker>
<title>Using test specifications</title>
- <p>The most expressive way to specify what to test is to use a so
- called test specification. A test specification is a sequence of
- Erlang terms. The terms may be declared in a text file or passed
- to the test server at runtime as a list (see <c>run_testspec/1</c>
- in the manual page for <c>ct</c>). There are two general types
- of terms: configuration terms and test specification terms.</p>
- <p>With configuration terms it is possible to import configuration
- data (similar to <c>run_test -config</c>), specify HTML log
- directories (similar to <c>run_test -logdir</c>), give aliases
- to test nodes and test directories (to make a specification
- easier to read and maintain), enable code coverage analysis
- (see the <seealso marker="cover_chapter#cover">Code Coverage
- Analysis</seealso> chapter) and specify event_handler plugins
+      <p>The most flexible way to specify what to test is to use a
+      so-called test specification. A test specification is a sequence of
+ Erlang terms. The terms may be declared in a text file or passed
+ to the test server at runtime as a list
+ (see <c>run_testspec/1</c> in the manual page
+ for <c>ct</c>). There are two general types of terms:
+ configuration terms and test specification terms.</p>
+ <p>With configuration terms it is possible to e.g. label the test
+ run (similar to <c>run_test -label</c>), evaluate arbitrary expressions
+ before starting a test, import configuration
+ data (similar to
+ <c>run_test -config/-userconfig</c>), specify HTML log directories (similar
+ to
+ <c>run_test -logdir</c>), give aliases to test nodes and test
+ directories (to make a specification easier to read and
+ maintain), enable code coverage analysis (see
+ the <seealso marker="cover_chapter#cover">Code Coverage
+ Analysis</seealso> chapter) and specify event_handler plugins
(see the <seealso marker="event_handler_chapter#event_handling">
- Event Handling</seealso> chapter). There is also a term
- for specifying include directories that should be passed on
- to the compiler when automatic compilation is performed
- (similar to <c>run_test -include</c>, see above).</p>
- <p>With test specification terms it is possible to state exactly which
- tests should run and in which order. A test term specifies either
- one or more suites or one or more test cases. An arbitrary number of test
- terms may be declared in sequence. A test term can also specify one or
- more test suites or test cases to be skipped. Skipped suites and cases
- are not executed and show up in the HTML test log as SKIPPED.</p>
-
- <note><p>It is not yet possible to specify test case groups in
- test specifications. This will be supported in a soon upcoming
- release.</p></note>
+ Event Handling</seealso> chapter). There is also a term for
+ specifying include directories that should be passed on to the
+ compiler when automatic compilation is performed (similar
+ to <c>run_test -include</c>, see above).</p>
+ <p>With test specification terms it is possible to state exactly
+ which tests should run and in which order. A test term specifies
+ either one or more suites, one or more test case groups, or one
+ or more test cases in a group or suite.</p>
+ <p>An arbitrary number of test terms may be declared in sequence.
+ Common Test will compile the terms into one or more tests to be
+ performed in one resulting test run. Note that a term that
+ specifies a set of test cases will "swallow" one that only
+ specifies a subset of these cases. E.g. the result of merging
+ one term that specifies that all cases in suite S should be
+ executed, with another term specifying only test case X and Y in
+ S, is a test of all cases in S. However, if a term specifying
+ test case X and Y in S is merged with a term specifying case Z
+ in S, the result is a test of X, Y and Z in S.</p>
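A hypothetical spec fragment illustrating the merging rule (directory, suite and case names are made up):
<pre>
  {suites, "./sys1_test", [s_SUITE]}.
  {cases,  "./sys1_test", s_SUITE, [x, y]}.
  %% => all test cases in s_SUITE run (the cases term is "swallowed")

  {cases, "./sys1_test", s_SUITE, [x, y]}.
  {cases, "./sys1_test", s_SUITE, [z]}.
  %% => test cases x, y and z in s_SUITE run
</pre>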
+ <p>A test term can also specify one or more test suites, groups,
+ or test cases to be skipped. Skipped suites, groups and cases
+ are not executed and show up in the HTML test log files as
+ SKIPPED.</p>
+ <p>When a test case group is specified, the resulting test
+ executes the
+ <c>init_per_group</c> function, followed by all test cases and
+ sub groups (including their configuration functions), and
+      finally the <c>end_per_group</c> function. This also applies if only
+      particular test cases in a group are specified: <c>init_per_group</c>
+      and <c>end_per_group</c> for the group in question are still
+      called. If a group which is defined (in <c>Suite:groups/0</c>) to
+ be a sub group of another group, is specified (or particular test
+ cases of a sub group are), Common Test will call the configuration
+ functions for the top level groups as well as for the sub group
+ in question (making it possible to pass configuration data all
+ the way from <c>init_per_suite</c> down to the test cases in the
+ sub group).</p>
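A hypothetical groups term matching the behaviour just described (all names are made up):
<pre>
  %% Run only tc1 and tc2 in group g1 of x_SUITE; init_per_group and
  %% end_per_group for g1 (and for any group above it) still execute.
  {groups, "./sys1_test", x_SUITE, g1, {cases, [tc1, tc2]}}.
</pre>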
<p>Below is the test specification syntax. Test specifications can
- be used to run tests both in a single test host environment and in
- a distributed Common Test environment. Node parameters are only relevant in the
- latter (see the chapter about running Common Test in distributed mode for information).
- For details on the event_handler term, see the
- <seealso marker="event_handler_chapter#event_handling">Event Handling</seealso>
- chapter.</p>
+ be used to run tests both in a single test host environment and
+ in a distributed Common Test environment (Large Scale
+ Testing). The node parameters in the init term are only
+ relevant in the latter (see the
+ <seealso marker="ct_master_chapter#test_specifications">Large
+ Scale Testing</seealso> chapter for information). For details on
+ the event_handler term, see the
+ <seealso marker="event_handler_chapter#event_handling">Event
+ Handling</seealso> chapter.</p>
<p>Config terms:</p>
<pre>
{node, NodeAlias, Node}.
+
+ {init, InitOptions}.
+ {init, [NodeAlias], InitOptions}.
+
+ {label, Label}.
+ {label, NodeRefs, Label}.
+
+ {multiply_timetraps, N}.
+ {multiply_timetraps, NodeRefs, N}.
+
+ {scale_timetraps, Bool}.
+ {scale_timetraps, NodeRefs, Bool}.
{cover, CoverSpecFile}.
- {cover, NodeRef, CoverSpecFile}.
+ {cover, NodeRefs, CoverSpecFile}.
{include, IncludeDirs}.
{include, NodeRefs, IncludeDirs}.
{config, ConfigFiles}.
{config, NodeRefs, ConfigFiles}.
+
+ {userconfig, {CallbackModule, ConfigStrings}}.
+ {userconfig, NodeRefs, {CallbackModule, ConfigStrings}}.
{alias, DirAlias, Dir}.
@@ -383,6 +446,12 @@
{suites, DirRef, Suites}.
{suites, NodeRefs, DirRef, Suites}.
+ {groups, DirRef, Suite, Groups}.
+      {groups, NodeRefs, DirRef, Suite, Groups}.
+
+ {groups, DirRef, Suite, Group, {cases,Cases}}.
+      {groups, NodeRefs, DirRef, Suite, Group, {cases,Cases}}.
+
{cases, DirRef, Suite, Cases}.
{cases, NodeRefs, DirRef, Suite, Cases}.
@@ -395,9 +464,12 @@
<p>Types:</p>
<pre>
NodeAlias = atom()
+ InitOptions = term()
Node = node()
NodeRef = NodeAlias | Node | master
NodeRefs = all_nodes | [NodeRef] | NodeRef
+ N = integer()
+ Bool = true | false
CoverSpecFile = string()
IncludeDirs = string() | [string()]
ConfigFiles = string() | [string()]
@@ -408,6 +480,9 @@
InitArgs = [term()]
DirRef = DirAlias | Dir
Suites = atom() | [atom()] | all
+ Suite = atom()
+ Groups = atom() | [atom()] | all
+ Group = atom()
Cases = atom() | [atom()] | all
Comment = string() | ""
</pre>
@@ -449,9 +524,14 @@
<item>Secondly, the test for system t2 should run. The included suites are
t2B and t2C. Included are also test cases test4, test1 and test7 in suite
t2A. Note that the test cases will be executed in the specified order.</item>
- <item>Lastly, all suites for systems t3 are to be completely skipped and this
+ <item>Lastly, all suites for systems t3 are to be completely skipped and this
should be explicitly noted in the log files.</item>
</list>
+ <p>It is possible to specify initialization options for nodes defined in the
+ test specification. Currently, there are options to start the node and/or to
+ evaluate any function on the node.
+ See the <seealso marker="ct_master_chapter#ct_slave">Automatic startup of
+ the test target nodes</seealso> chapter for details.</p>
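A hedged sketch of such an init term; the <c>node_start</c> and <c>eval</c> option names follow the Large Scale Testing chapter, but treat the exact values here as assumptions:
<pre>
  %% Assumed example: start the node behind alias node1 with a startup
  %% timeout and evaluate a function on it before the tests begin.
  {init, [node1], [{node_start, [{startup_timeout, 10}]},
                   {eval, [{io, format, ["node started~n"]}]}]}.
</pre>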
<p>It is possible for the user to provide a test specification that
includes (for Common Test) unrecognizable terms. If this is desired,
the <c>-allow_user_terms</c> flag should be used when starting tests with