From 2ef840647acadb489d54332f6a218dcf2e629ff9 Mon Sep 17 00:00:00 2001 From: tmanevik Date: Wed, 18 Nov 2015 18:24:10 +0100 Subject: Common Test: Editorial changes 1 Conflicts: lib/common_test/doc/src/ct_hooks_chapter.xml lib/common_test/doc/src/event_handler_chapter.xml lib/common_test/doc/src/run_test_chapter.xml --- lib/common_test/doc/src/write_test_chapter.xml | 1342 ++++++++++++------------ 1 file changed, 687 insertions(+), 655 deletions(-) (limited to 'lib/common_test/doc/src/write_test_chapter.xml') diff --git a/lib/common_test/doc/src/write_test_chapter.xml b/lib/common_test/doc/src/write_test_chapter.xml index 1f5650651f..fcc77f1231 100644 --- a/lib/common_test/doc/src/write_test_chapter.xml +++ b/lib/common_test/doc/src/write_test_chapter.xml @@ -32,84 +32,88 @@
- Support for test suite authors + Support for Test Suite Authors -

The ct module provides the main interface for writing - test cases. This includes e.g:

+

The ct module provides the main + interface for writing test cases. This includes, for example, the following:

- + Functions for printing and logging Functions for reading configuration data Function for terminating a test case with error reason Function for adding comments to the HTML overview page -

Please see the reference manual for the ct - module for details about these functions.

+

For details about these functions, see module ct.

-

The CT application also includes other modules named - ]]> that +

The Common Test application also includes other modules named + ]]>, which provide various support, mainly simplified use of communication - protocols such as rpc, snmp, ftp, telnet, etc.

+ protocols such as RPC, SNMP, FTP, Telnet, and others.

- Test suites + Test Suites

A test suite is an ordinary Erlang module that contains test cases. It is recommended that the module have a name on the form *_SUITE.erl. Otherwise, the directory and auto compilation - function in CT will not be able to locate it (at least not per default). + function in Common Test cannot locate it (at least not by default).

It is also recommended that the ct.hrl header file is included in all test suite modules.

-

Each test suite module must export the function all/0 +

Each test suite module must export function + all/0, which returns the list of all test case groups and test cases to be executed in that module.
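To make these conventions concrete, a minimal suite skeleton could look as follows. This is only a sketch; the module name example_SUITE and the test case name simple_test are hypothetical examples, not names from this chapter.

```erlang
%% Minimal sketch of a test suite module, following the recommended
%% *_SUITE.erl naming convention. Module and test case names are
%% hypothetical examples.
-module(example_SUITE).
-include_lib("common_test/include/ct.hrl").

-export([all/0, simple_test/1]).

%% all/0 must be exported; it lists the test case groups and
%% test cases to be executed in this module.
all() ->
    [simple_test].

simple_test(_Config) ->
    1 = 1.   %% returning any value counts as success
```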

-

The callback functions that the test suite should implement, and - which will be described in more detail below, are - all listed in the common_test - reference manual page. +

The callback functions to be implemented by the test suite are + all listed in module common_test + . They are also described in more detail later in this User's Guide.

- Init and end per suite + Init and End per Suite -

Each test suite module may contain the optional configuration functions - init_per_suite/1 and end_per_suite/1. If the init function - is defined, so must the end function be. +

Each test suite module can contain the optional configuration functions + init_per_suite/1 + and end_per_suite/1. + If the init function is defined, so must the end function be.

-

If it exists, init_per_suite is called initially before the - test cases are executed. It typically contains initializations that are - common for all test cases in the suite, and that are only to be - performed once. It is recommended to be used for setting up and - verifying state and environment on the SUT (System Under Test) and/or - the CT host node, so that the test cases in the suite will execute - correctly. Examples of initial configuration operations: Opening a connection - to the SUT, initializing a database, running an installation script, etc. +

If init_per_suite exists, it is called initially before the + test cases are executed. It typically contains initializations common + for all test cases in the suite, which are only to be performed once. + init_per_suite is recommended for setting up and verifying state + and environment on the System Under Test (SUT) or the Common Test + host node, or both, so that the test cases in the suite execute correctly. + The following are examples of initial configuration operations:

+ + Opening a connection to the SUT + Initializing a database + Running an installation script +

end_per_suite is called as the final stage of the test suite execution (after the last test case has finished). The function is meant to be used for cleaning up after init_per_suite.
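As a sketch, a suite-level setup and cleanup pair could look as follows. The connection name sut and the use of ct_telnet here are illustrative assumptions only; any of the listed operations (database initialization, installation scripts) would follow the same pattern.

```erlang
%% Sketch of a suite-level configuration pair. The connection name sut
%% is a hypothetical target assumed to be defined in a configuration file.
init_per_suite(Config) ->
    {ok, Handle} = ct_telnet:open(sut),    %% e.g. open a connection to the SUT
    [{sut_handle, Handle} | Config].       %% make the handle available to all cases

end_per_suite(Config) ->
    Handle = proplists:get_value(sut_handle, Config),
    ct_telnet:close(Handle).               %% clean up what init_per_suite set up
```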

-

init_per_suite and end_per_suite will execute on dedicated +

init_per_suite and end_per_suite execute on dedicated Erlang processes, just like the test cases do. The result of these functions - is however not included in the test run statistics of successful, failed and + is however not included in the test run statistics of successful, failed, and skipped cases.

-

The argument to init_per_suite is Config, the +

The argument to init_per_suite is Config, that is, the same key-value list of runtime configuration data that each test case takes as input argument. init_per_suite can modify this parameter with information that the test cases need. The possibly modified Config @@ -117,671 +121,683 @@

If init_per_suite fails, all test cases in the test - suite will be skipped automatically (so called auto skipped), + suite are skipped automatically (so called auto skipped), including end_per_suite.

-

Note that if init_per_suite and end_per_suite do not exist - in the suite, Common Test calls dummy functions (with the same names) - instead, so that output generated by hook functions may be saved to the log - files for these dummies - (see the Common Test Hooks - chapter for more information). +

Notice that if init_per_suite and end_per_suite do not exist + in the suite, Common Test calls dummy functions (with the same names) + instead, so that output generated by hook functions can be saved to the log + files for these dummies. For details, see + Common Test Hooks.

- Init and end per test case + Init and End per Test Case

Each test suite module can contain the optional configuration functions - init_per_testcase/2 and end_per_testcase/2. If the init function - is defined, so must the end function be.

+ init_per_testcase/2 + and end_per_testcase/2. + If the init function is defined, so must the end function be.

-

If it exists, init_per_testcase is called before each - test case in the suite. It typically contains initialization which - must be done for each test case (analogue to init_per_suite for the +

If init_per_testcase exists, it is called before each + test case in the suite. It typically contains initialization that + must be done for each test case (analogous to init_per_suite for the suite).

end_per_testcase/2 is called after each test case has - finished, giving the opportunity to perform clean-up after - init_per_testcase.

+ finished, enabling cleanup after init_per_testcase.

The first argument to these functions is the name of the test case. This value can be used with pattern matching in function clauses or conditional expressions to choose different initialization and cleanup - routines for different test cases, or perform the same routine for a number of, + routines for different test cases, or perform the same routine for many, or all, test cases.

The second argument is the Config key-value list of runtime configuration data, which has the same value as the list returned by - init_per_suite. init_per_testcase/2 may modify this - parameter or return it as is. The return value of init_per_testcase/2 - is passed as the Config parameter to the test case itself.

+ init_per_suite. init_per_testcase/2 can modify this + parameter or return it "as is". The return value of init_per_testcase/2 + is passed as parameter Config to the test case itself.

The return value of end_per_testcase/2 is ignored by the test server, with exception of the - save_config + save_config and fail tuple.
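The pattern matching on the test case name described above can be sketched as follows. The test case name db_write_test and the helper module my_db are hypothetical examples used only to show the clause structure.

```erlang
%% Sketch of per-test-case setup and cleanup. The test case name
%% db_write_test and the my_db helper module are hypothetical.
init_per_testcase(db_write_test, Config) ->
    {ok, Conn} = my_db:connect(),          %% setup needed by this case only
    [{db_conn, Conn} | Config];            %% returned Config is passed to the case
init_per_testcase(_TestCase, Config) ->
    Config.                                %% same (empty) routine for all other cases

end_per_testcase(db_write_test, Config) ->
    my_db:disconnect(proplists:get_value(db_conn, Config));
end_per_testcase(_TestCase, _Config) ->
    ok.
```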

-

It is possible in end_per_testcase to check if the - test case was successful or not (which consequently may determine - how cleanup should be performed). This is done by reading the value - tagged with tc_status from Config. The value is either - ok, {failed,Reason} (where Reason is timetrap_timeout, - info from exit/1, or details of a run-time error), or - {skipped,Reason} (where Reason is a user specific term). +

end_per_testcase can check if the test case was successful + (which in turn can determine how cleanup is to be performed). + This is done by reading the value tagged with tc_status from + Config. The value is one of the following:

- -

The end_per_testcase/2 function is called even after a - test case terminates due to a call to ct:abort_current_testcase/1, - or after a timetrap timeout. However, end_per_testcase - will then execute on a different process than the test case - function, and in this situation, end_per_testcase will - not be able to change the reason for test case termination by - returning {fail,Reason}, nor will it be able to save data with - {save_config,Data}.

- -

If init_per_testcase crashes, the test case itself gets skipped - automatically (so called auto skipped). If init_per_testcase - returns a tuple {skip,Reason}, also then the test case gets skipped - (so called user skipped). It is also possible, by returning a tuple - {fail,Reason} from init_per_testcase, to mark the test case - as failed without actually executing it. + + +

ok

+ + +

{failed,Reason}

+

where Reason is timetrap_timeout, information from exit/1, + or details of a runtime error

+ +

{skipped,Reason}

+

where Reason is a user-specific term

+ + +

Function end_per_testcase/2 is called even if a + test case terminates because of a call to + ct:abort_current_testcase/1, + or after a timetrap time-out. However, end_per_testcase + then executes on a different process than the test case + function. In this situation, end_per_testcase cannot + change the reason for test case termination by returning {fail,Reason} + or save data with {save_config,Data}.

+ +

The test case is skipped in the following two cases:

+ + If init_per_testcase crashes (called auto skipped). + If init_per_testcase returns a tuple {skip,Reason} + (called user skipped). + +

The test case can also be marked as failed without executing it + by returning a tuple {fail,Reason} from init_per_testcase.

+

If init_per_testcase crashes, or returns {skip,Reason} - or {fail,Reason}, the end_per_testcase function is not called. + or {fail,Reason}, function end_per_testcase is not called.

If it is determined during execution of end_per_testcase that - the status of a successful test case should be changed to failed, - end_per_testcase may return the tuple: {fail,Reason} + the status of a successful test case is to be changed to failed, + end_per_testcase can return the tuple {fail,Reason} (where Reason describes why the test case fails).
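Reading tc_status in end_per_testcase, as described above, can be sketched like this. The diagnostic printout is a hypothetical example of outcome-dependent cleanup.

```erlang
%% Sketch of outcome-dependent cleanup using the tc_status value in Config.
end_per_testcase(_TestCase, Config) ->
    case proplists:get_value(tc_status, Config) of
        ok ->
            ok;                             %% test case succeeded, normal cleanup
        {failed, _Reason} ->
            ct:pal("Test case failed, collecting extra diagnostics");
        {skipped, _Reason} ->
            ok
    end.
```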

-

init_per_testcase and end_per_testcase execute on the - same Erlang process as the test case and printouts from these - configuration functions can be found in the test case log file.

+

As init_per_testcase and end_per_testcase execute on the + same Erlang process as the test case, printouts from these + configuration functions are included in the test case log file.

- Test cases + Test Cases

The smallest unit that the test server is concerned with is a - test case. Each test case can actually test many things, for - example make several calls to the same interface function with + test case. Each test case can test many things, for + example, make several calls to the same interface function with different parameters.

-

It is possible to choose to put many or few tests into each test - case. What exactly each test case does is of course up to the - author, but here are some things to keep in mind: +

The author can choose to put many or few tests into each test + case. Some things to keep in mind follow:

- -

Having many small test cases tend to result in extra, and possibly + +

Many small test cases tend to result in extra, and possibly duplicated code, as well as slow test execution because of - large overhead for initializations and cleanups. Duplicated - code should be avoided, e.g. by means of common help functions, or - the resulting suite will be difficult to read and understand, and + large overhead for initializations and cleanups. Avoid duplicated + code, for example, by using common help functions. Otherwise, + the resulting suite becomes difficult to read and understand, and expensive to maintain. -

- -

Larger test cases make it harder to tell what went wrong if it - fails, and large portions of test code will potentially be skipped - when errors occur. Furthermore, readability and maintainability suffers - when test cases become too large and extensive. Also, the resulting log - files may not reflect very well the number of tests that have - actually been performed. -

+

+

Larger test cases make it harder to tell what went wrong if one + fails. Also, large portions of test code risk being skipped + when errors occur.

+
+

Readability and maintainability suffer + when test cases become too large and extensive. It is not certain + that the resulting log files reflect very well the number of tests + performed. +

+

The test case function takes one argument, Config, which contains configuration information such as data_dir and - priv_dir. (See Data and - Private Directories for more information about these). - The value of Config at the time of the call, is the same - as the return value from init_per_testcase, see above. + priv_dir. (For details about these, see section + Data and Private Directories.) + The value of Config at the time of the call is the same + as the return value from init_per_testcase, mentioned earlier.

-

The test case function argument Config should not be - confused with the information that can be retrieved from +

The test case function argument Config is not to be + confused with the information that can be retrieved from the configuration files (using - ct:get_config/1/2). The Config argument - should be used for runtime configuration of the test suite and the - test cases, while configuration files should typically contain data + ct:get_config/1/2). The test case argument Config + is to be used for runtime configuration of the test suite and the + test cases, while configuration files are to contain data related to the SUT. These two types of configuration data are handled - differently!

+ differently.

-

Since the Config parameter is a list of key-value tuples, i.e. - a data type generally called a property list, it can be handled by means of the - proplists module in the OTP stdlib. A value can for example - be searched for and returned with the proplists:get_value/2 function. - Also, or alternatively, you might want to look in the general lists module, - also in stdlib, for useful functions. Normally, the only operations you - ever perform on Config is insert (adding a tuple to the head of the list) - and lookup. Common Test provides a simple macro named ?config, which returns - a value of an item in Config given the key (exactly like +

As parameter Config is a list of key-value tuples, that is, + a data type called a property list, it can be handled by the + stdlib:proplists module. + A value can, for example, be searched for and returned with function + proplists:get_value/2. + Alternatively, the general stdlib:lists + module contains useful functions. Normally, the only operations + performed on Config are insert (adding a tuple to the head of the list) + and lookup. Common Test provides a simple macro named ?config, + which returns a value of an item in Config given the key (exactly like + proplists:get_value). Example: PrivDir = ?config(priv_dir, Config).
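A complete test case using this lookup pattern could be sketched as follows. The test case name and the data file name "input.txt" are hypothetical examples.

```erlang
%% Sketch of a test case reading runtime configuration data from Config.
read_data_test(Config) ->
    %% proplists:get_value/2, or equivalently the ?config macro
    DataDir = proplists:get_value(data_dir, Config),
    {ok, Bin} = file:read_file(filename:join(DataDir, "input.txt")),
    true = byte_size(Bin) > 0.   %% any returned value means success
```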

If the test case function crashes or exits purposely, it is considered - failed. If it returns a value (no matter what actual value) it is + failed. If it returns a value (no matter what value), it is considered successful. An exception to this rule is the return value {skip,Reason}. If this tuple is returned, the test case is considered - skipped and gets logged as such.

+ skipped and is logged as such.

If the test case returns the tuple {comment,Comment}, the case - is considered successful and Comment is printed out in the overview - log file. This is by the way equal to calling ct:comment(Comment). + is considered successful and Comment is printed in the overview + log file. This is equal to calling + ct:comment(Comment).

- Test case info function + Test Case Information Function -

For each test case function there can be an additional function - with the same name but with no arguments. This is the test case - info function. The test case info function is expected to return a - list of tagged tuples that specifies various properties regarding the - test case. +

For each test case function there can be an extra function + with the same name but without arguments. This is the test case + information function. It is expected to return a list of tagged + tuples that specifies various properties regarding the test case.

The following tags have special meaning:

- timetrap + timetrap

- Set the maximum time the test case is allowed to execute. If - the timetrap time is exceeded, the test case fails with - reason timetrap_timeout. Note that init_per_testcase + Sets the maximum time the test case is allowed to execute. If + this time is exceeded, the test case fails with + reason timetrap_timeout. Notice that init_per_testcase and end_per_testcase are included in the timetrap time. - Please see the Timetrap - section for more details. + For details, see section + Timetrap Time-Outs.

- userdata + userdata

- Use this to specify arbitrary data related to the testcase. This - data can be retrieved at any time using the ct:userdata/3 + Specifies any data related to the test case. This + data can be retrieved at any time using the + ct:userdata/3 utility function.

- silent_connections + silent_connections

- Please see the - Silent Connections - chapter for details. + For details, see section + Silent Connections.

- require + require

- Use this to specify configuration variables that are required by the + Specifies configuration variables required by the test case. If the required configuration variables are not found in any of the test system configuration files, the test case is skipped.

- It is also possible to give a required variable a default value that will + A required variable can also be given a default value to be used if the variable is not found in any configuration file. To specify - a default value, add a tuple on the form: - {default_config,ConfigVariableName,Value} to the test case info list + a default value, add a tuple on the form + {default_config,ConfigVariableName,Value} to the test case information list (the position in the list is irrelevant). - Examples:

+

+

Examples:

-	    testcase1() -> 
-	        [{require, ftp},
-	         {default_config, ftp, [{ftp, "my_ftp_host"},
-	                                {username, "aladdin"},
-	                                {password, "sesame"}]}}].
+ testcase1() -> + [{require, ftp}, + {default_config, ftp, [{ftp, "my_ftp_host"}, + {username, "aladdin"}, + {password, "sesame"}]}}].
-	    testcase2() -> 
-	        [{require, unix_telnet, unix},
-		 {require, {unix, [telnet, username, password]}},
-	         {default_config, unix, [{telnet, "my_telnet_host"},
-	                                 {username, "aladdin"},
-	                                 {password, "sesame"}]}}].
+ testcase2() -> + [{require, unix_telnet, unix}, + {require, {unix, [telnet, username, password]}}, + {default_config, unix, [{telnet, "my_telnet_host"}, + {username, "aladdin"}, + {password, "sesame"}]}}].
-

See the Config files - chapter and the - ct:require/1/2 function in the - ct reference manual for more information about - require.

+

For more information about require, see section + + Requiring and Reading Configuration Data + in section External Configuration Data and function + ct:require/1/2.

Specifying a default value for a required variable can result - in a test case always getting executed. This might not be a desired behaviour!

+ in a test case always getting executed. This might not be a desired behavior.

-

If timetrap and/or require is not set specifically for - a particular test case, default values specified by the suite/0 - function are used. +

If timetrap or require, or both, is not set specifically for + a particular test case, default values specified by function + suite/0 + are used.

-

Other tags than the ones mentioned above will simply be ignored by - the test server. +

Tags other than those mentioned earlier are ignored by the test server.

- Example of a test case info function: + An example of a test case information function follows:

-	reboot_node() ->
-	    [
-	     {timetrap,{seconds,60}},
-	     {require,interfaces},
-	     {userdata,
-	         [{description,"System Upgrade: RpuAddition Normal RebootNode"},
-	          {fts,"http://someserver.ericsson.se/test_doc4711.pdf"}]}                  
-            ].
+ reboot_node() -> + [ + {timetrap,{seconds,60}}, + {require,interfaces}, + {userdata, + [{description,"System Upgrade: RpuAddition Normal RebootNode"}, + {fts,"http://someserver.ericsson.se/test_doc4711.pdf"}]} + ].
- Test suite info function - -

The suite/0 function can be used in a test suite - module to e.g. set a default timetrap value and to - require external configuration data. If a test case-, or - group info function also specifies any of the info tags, it - overrides the default values set by suite/0. See the test - case info function above, and group info function below, for more - details. + Test Suite Information Function + +

Function suite/0 + can, for example, be used in a test suite module to set a default + timetrap value and to require external configuration data. + If a test case or group information function also specifies any of the information tags, it + overrides the default values set by suite/0. For details, + see + Test Case Information Function and + Test Case Groups.

-

Other options that may be specified with the suite info list are:

- +

The following options can also be specified with the suite information list:

+ stylesheet, - see HTML Style Sheets. + see HTML Style Sheets userdata, - see Test case info function. + see Test Case Information Function silent_connections, - see Silent Connections. + see Silent Connections

- Example of the suite info function: + An example of the suite information function follows:

-	suite() ->
-	    [
-	     {timetrap,{minutes,10}},
-	     {require,global_names},
-	     {userdata,[{info,"This suite tests database transactions."}]},
-	     {silent_connections,[telnet]},
-	     {stylesheet,"db_testing.css"}
-            ].
+ suite() -> + [ + {timetrap,{minutes,10}}, + {require,global_names}, + {userdata,[{info,"This suite tests database transactions."}]}, + {silent_connections,[telnet]}, + {stylesheet,"db_testing.css"} + ].
- Test case groups -

A test case group is a set of test cases that share configuration + Test Case Groups +

A test case group is a set of test cases sharing configuration functions and execution properties. Test case groups are defined by - means of the groups/0 function according to the following syntax:

+ function + groups/0 + according to the following syntax:

-      groups() -> GroupDefs
+ groups() -> GroupDefs
 
-      Types:
+ Types:
 
-      GroupDefs = [GroupDef]
-      GroupDef = {GroupName,Properties,GroupsAndTestCases}
-      GroupName = atom()
-      GroupsAndTestCases = [GroupDef | {group,GroupName} | TestCase]
-      TestCase = atom()
+ GroupDefs = [GroupDef] + GroupDef = {GroupName,Properties,GroupsAndTestCases} + GroupName = atom() + GroupsAndTestCases = [GroupDef | {group,GroupName} | TestCase] + TestCase = atom() -

GroupName is the name of the group and should be unique within - the test suite module. Groups may be nested, and this is accomplished - simply by including a group definition within the GroupsAndTestCases - list of another group. Properties is the list of execution - properties for the group. The possible values are:

+

GroupName is the name of the group and must be unique within + the test suite module. Groups can be nested by including a group definition + within the GroupsAndTestCases list of another group. + Properties is the list of execution + properties for the group. The possible values are as follows:

-      Properties = [parallel | sequence | Shuffle | {RepeatType,N}]
-      Shuffle = shuffle | {shuffle,Seed}
-      Seed = {integer(),integer(),integer()}
-      RepeatType = repeat | repeat_until_all_ok | repeat_until_all_fail |
-                   repeat_until_any_ok | repeat_until_any_fail
-      N = integer() | forever
- -

If the parallel property is specified, Common Test will execute - all test cases in the group in parallel. If sequence is specified, - the cases will be executed in a sequence, as described in the chapter - Dependencies between - test cases and suites. If shuffle is specified, the cases - in the group will be executed in random order. The repeat property - orders Common Test to repeat execution of the cases in the group a given - number of times, or until any, or all, cases fail or succeed.

- -

Example:

+ Properties = [parallel | sequence | Shuffle | {RepeatType,N}] + Shuffle = shuffle | {shuffle,Seed} + Seed = {integer(),integer(),integer()} + RepeatType = repeat | repeat_until_all_ok | repeat_until_all_fail | + repeat_until_any_ok | repeat_until_any_fail + N = integer() | forever + +

Explanations:

+ + parallel +

Common Test executes all test cases in the group in parallel.

+ sequence +

The cases are executed in a sequence as described in section + Sequences in section + Dependencies Between Test Cases and Suites.

+ shuffle +

The cases in the group are executed in random order.

+ repeat +

Orders Common Test to repeat execution of the cases in the + group a given number of times, or until any, or all, cases fail or succeed.

+
+ +

Example:

-      groups() -> [{group1, [parallel], [test1a,test1b]},
-                   {group2, [shuffle,sequence], [test2a,test2b,test2c]}].
+ groups() -> [{group1, [parallel], [test1a,test1b]}, + {group2, [shuffle,sequence], [test2a,test2b,test2c]}]. -

To specify in which order groups should be executed (also with respect - to test cases that are not part of any group), tuples on the form - {group,GroupName} should be added to the all/0 list. Example:

+

To specify in which order groups are to be executed (also with respect + to test cases that are not part of any group), add tuples on the form + {group,GroupName} to the all/0 list.

+

Example:

-      all() -> [testcase1, {group,group1}, testcase2, {group,group2}].
+ all() -> [testcase1, {group,group1}, testcase2, {group,group2}]. -

It is also possible to specify execution properties with a group - tuple in all/0: {group,GroupName,Properties}. These - properties will override those specified in the group definition (see - groups/0 above). This way, it's possible to run the same set of tests, +

Execution properties can also be specified with a group tuple in + all/0: {group,GroupName,Properties}. + These properties override those specified in the group definition (see + groups/0 earlier). This way, the same set of tests can be run, but with different properties, without having to make copies of the group definition in question.

-

If a group contains sub-groups, the execution properties for these may +

If a group contains subgroups, the execution properties for these can also be specified in the group tuple: - {group,GroupName,Properties,SubGroups}, where SubGroups - is a list of tuples, {GroupName,Properties}, or - {GroupName,Properties,SubGroups}, representing the sub-groups. - Any sub-groups defined in group/0 for a group, that are not specified - in the SubGroups list, will simply execute with their pre-defined + {group,GroupName,Properties,SubGroups}, + where SubGroups is a list of tuples, {GroupName,Properties} or + {GroupName,Properties,SubGroups}, representing the subgroups. + Any subgroups defined in groups/0 for a group, that are not specified + in the SubGroups list, execute with their predefined properties.

-

Example:

+

Example:

-      groups() -> {tests1, [], [{tests2, [], [t2a,t2b]},
-                                {tests3, [], [t31,t3b]}]}.
-

To execute group 'tests1' twice with different properties for 'tests2' + groups() -> {tests1, [], [{tests2, [], [t2a,t2b]}, + {tests3, [], [t31,t3b]}]}. +

To execute group tests1 twice with different properties for tests2 each time:

-      all() ->
-         [{group, tests1, default, [{tests2, [parallel]}]},
-          {group, tests1, default, [{tests2, [shuffle,{repeat,10}]}]}].
-

Note that this is equivalent to this specification:

+ all() -> + [{group, tests1, default, [{tests2, [parallel]}]}, + {group, tests1, default, [{tests2, [shuffle,{repeat,10}]}]}]. +

This is equivalent to the following specification:

-      all() ->
-         [{group, tests1, default, [{tests2, [parallel]},
-                                    {tests3, default}]},
-          {group, tests1, default, [{tests2, [shuffle,{repeat,10}]},
-                                    {tests3, default}]}].
-

The value default states that the pre-defined properties - should be used.

-

Here's an example of how to override properties in a scenario + all() -> + [{group, tests1, default, [{tests2, [parallel]}, + {tests3, default}]}, + {group, tests1, default, [{tests2, [shuffle,{repeat,10}]}, + {tests3, default}]}]. +

Value default states that the predefined properties + are to be used.

+

The following example shows how to override properties in a scenario with deeply nested groups:

-      groups() ->
-         [{tests1, [], [{group, tests2}]},
-          {tests2, [], [{group, tests3}]},
-          {tests3, [{repeat,2}], [t3a,t3b,t3c]}].
-
-      all() ->
-         [{group, tests1, default, 
-           [{tests2, default,
-             [{tests3, [parallel,{repeat,100}]}]}]}].
- -

The syntax described above may also be used in Test Specifications - in order to change properties of groups at the time of execution, - without even having to edit the test suite (please see the - Test - Specifications chapter for more info).

- -

As illustrated above, properties may be combined. If e.g. - shuffle, repeat_until_any_fail and sequence - are all specified, the test cases in the group will be executed + groups() -> + [{tests1, [], [{group, tests2}]}, + {tests2, [], [{group, tests3}]}, + {tests3, [{repeat,2}], [t3a,t3b,t3c]}]. + + all() -> + [{group, tests1, default, + [{tests2, default, + [{tests3, [parallel,{repeat,100}]}]}]}]. + +

The described syntax can also be used in test specifications + to change group properties at the time of execution, + without having to edit the test suite. For more information, see + section Test + Specifications in section Running Tests and Analyzing Results.

+ +

As illustrated, properties can be combined. If, for example, + shuffle, repeat_until_any_fail, and sequence + are all specified, the test cases in the group are executed repeatedly, and in random order, until a test case fails. Then - execution is immediately stopped and the rest of the cases skipped.

+ execution is immediately stopped and the remaining cases are skipped.

Before execution of a group begins, the configuration function - init_per_group(GroupName, Config) is called. The list of tuples - returned from this function is passed to the test cases in the usual - manner by means of the Config argument. init_per_group/2 - is meant to be used for initializations common for the test cases in the - group. After execution of the group is finished, the - end_per_group(GroupName, Config function is called. This function - is meant to be used for cleaning up after init_per_group/2.
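The group-level configuration pair described above can be sketched as follows, matching on the group name in the same way init_per_testcase matches on the test case name. The group name group2 follows the earlier groups/0 example; the tuple added to Config is a hypothetical example.

```erlang
%% Sketch of group-level configuration, matching on the group name.
init_per_group(group2, Config) ->
    [{group_note, "cases in this group run shuffled"} | Config];
init_per_group(_GroupName, Config) ->
    Config.                        %% no extra setup for other groups

end_per_group(_GroupName, _Config) ->
    ok.                            %% clean up after init_per_group/2
```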

+ init_per_group(GroupName, Config) + is called. The list of tuples returned from this function is passed to the + test cases in the usual manner by argument Config. + init_per_group/2 is meant to be used for initializations common + for the test cases in the group. After execution of the group is finished, function + end_per_group(GroupName, Config) + is called. This function is meant to be used for cleaning up after + init_per_group/2.

Whenever a group is executed, if init_per_group and - end_per_group do not exist in the suite, Common Test calls + end_per_group do not exist in the suite, Common Test calls dummy functions (with the same names) instead. Output generated by - hook functions will be saved to the log files for these dummies - (see the Common Test - Hooks chapter for more information). + hook functions are saved to the log files for these dummies. + For more information, see section + Manipulating Tests + in section Common Test Hooks.

init_per_testcase/2 and end_per_testcase/2 are always called for each individual test case, no matter if the case belongs to a group or not.
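A minimal sketch of the group configuration functions described above (the group name tests3 and the config key group_data are illustrative only, not part of the text above):

```erlang
%% Hypothetical sketch of group configuration functions.
init_per_group(tests3, Config) ->
    %% Initializations common to all test cases in the group; the
    %% returned list is passed to each test case as Config.
    [{group_data, some_value} | Config];
init_per_group(_GroupName, Config) ->
    Config.

end_per_group(tests3, Config) ->
    %% Clean up whatever init_per_group/2 set up.
    some_value = proplists:get_value(group_data, Config),
    ok;
end_per_group(_GroupName, _Config) ->
    ok.
```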

-

The properties for a group is always printed on the top of the HTML log - for init_per_group/2. Also, the total execution time for a group - can be found at the bottom of the log for end_per_group/2.

+

The properties for a group are always printed at the top of the HTML log
      for init_per_group/2. The total execution time for a group is
      included at the bottom of the log for end_per_group/2.

-

Test case groups may be nested so that sets of groups can be +

Test case groups can be nested so sets of groups can be configured with the same init_per_group/2 and end_per_group/2 - functions. Nested groups may be defined by including a group definition, - or a group name reference, in the test case list of another group. Example:

+ functions. Nested groups can be defined by including a group definition, + or a group name reference, in the test case list of another group.

+

Example:

-      groups() -> [{group1, [shuffle], [test1a,
-                                        {group2, [], [test2a,test2b]},
-                                        test1b]},
-                   {group3, [], [{group,group4},
-                                 {group,group5}]},
-                   {group4, [parallel], [test4a,test4b]},
-                   {group5, [sequence], [test5a,test5b,test5c]}].
- -

In the example above, if all/0 would return group name references - in this order: [{group,group1},{group,group3}], the order of the - configuration functions and test cases will be the following (note that + groups() -> [{group1, [shuffle], [test1a, + {group2, [], [test2a,test2b]}, + test1b]}, + {group3, [], [{group,group4}, + {group,group5}]}, + {group4, [parallel], [test4a,test4b]}, + {group5, [sequence], [test5a,test5b,test5c]}]. + +

In the previous example, if all/0 returns group name references + in the order [{group,group1},{group,group3}], the order of the + configuration functions and test cases becomes the following (notice that init_per_testcase/2 and end_per_testcase/2: are also always called, but not included in this example for simplification):

--      init_per_group(group1, Config) -> Config1  (*)
-
---          test1a(Config1)
-
---	    init_per_group(group2, Config1) -> Config2
-
----              test2a(Config2), test2b(Config2)
-
---          end_per_group(group2, Config2)
-
---          test1b(Config1)
-
--      end_per_group(group1, Config1) 
-
--      init_per_group(group3, Config) -> Config3
-
---          init_per_group(group4, Config3) -> Config4
-
----              test4a(Config4), test4b(Config4)  (**)
-
---          end_per_group(group4, Config4)
-
---	    init_per_group(group5, Config3) -> Config5
-
----              test5a(Config5), test5b(Config5), test5c(Config5)
-
---          end_per_group(group5, Config5)
-
--      end_per_group(group3, Config3)
-
-
-    (*) The order of test case test1a, test1b and group2 is not actually
-        defined since group1 has a shuffle property.
-
-    (**) These cases are not executed in order, but in parallel.
- -

Properties are not inherited from top level groups to nested - sub-groups. E.g, in the example above, the test cases in group2 - will not be executed in random order (which is the property of - group1).

+ init_per_group(group1, Config) -> Config1 (*) + test1a(Config1) + init_per_group(group2, Config1) -> Config2 + test2a(Config2), test2b(Config2) + end_per_group(group2, Config2) + test1b(Config1) + end_per_group(group1, Config1) + init_per_group(group3, Config) -> Config3 + init_per_group(group4, Config3) -> Config4 + test4a(Config4), test4b(Config4) (**) + end_per_group(group4, Config4) + init_per_group(group5, Config3) -> Config5 + test5a(Config5), test5b(Config5), test5c(Config5) + end_per_group(group5, Config5) + end_per_group(group3, Config3) + +

(*) The order of test case test1a, test1b, and group2 is + undefined, as group1 has a shuffle property.

+

(**) These cases are not executed in order, but in parallel.

+

Properties are not inherited from top-level groups to nested + subgroups. For instance, in the previous example, the test cases in group2 + are not executed in random order (which is the property of group1).

- The parallel property and nested groups -

If a group has a parallel property, its test cases will be spawned - simultaneously and get executed in parallel. A test case is not allowed - to execute in parallel with end_per_group/2 however, which means - that the time it takes to execute a parallel group is equal to the + Parallel Property and Nested Groups +

If a group has a parallel property, its test cases are spawned + simultaneously and get executed in parallel. However, a test case is not + allowed to execute in parallel with end_per_group/2, which means + that the time to execute a parallel group is equal to the execution time of the slowest test case in the group. A negative side effect of running test cases in parallel is that the HTML summary pages - are not updated with links to the individual test case logs until the - end_per_group/2 function for the group has finished.

+ are not updated with links to the individual test case logs until function + end_per_group/2 for the group has finished.

-

A group nested under a parallel group will start executing in parallel +

A group nested under a parallel group starts executing in parallel
      with previous (parallel) test cases (no matter what properties the nested
      group has). However, as test cases are never executed in parallel with
      init_per_group/2 or end_per_group/2 of the same group, it is
      only after a nested group has finished that any remaining parallel cases
      in the previous group are spawned.

+ group has). However, as test cases are never executed in parallel with + init_per_group/2 or end_per_group/2 of the same group, it is + only after a nested group has finished that remaining parallel cases + in the previous group become spawned.

- Parallel test cases and IO -

A parallel test case has a private IO server as its group leader. - (Please see the Erlang Run-Time System Application documentation for - a description of the group leader concept). The - central IO server process that handles the output from regular test - cases and configuration functions, does not respond to IO messages + Parallel Test Cases and I/O +

A parallel test case has a private I/O server as its group leader. + (For a description of the group leader concept, see + ERTS). + The central I/O server process, which handles the output from + regular test cases and configuration functions, does not respond to I/O messages during execution of parallel groups. This is important to understand - in order to avoid certain traps, like this one:

-

If a process, P, is spawned during execution of e.g. - init_per_suite/1, it will inherit the group leader of the - init_per_suite process. This group leader is the central IO server - process mentioned above. If, at a later time, during parallel test case + to avoid certain traps, like the following:

+

If a process, P, is spawned during execution of, for example, + init_per_suite/1, it inherits the group leader of the + init_per_suite process. This group leader is the central I/O server + process mentioned earlier. If, at a later time, during parallel test case execution, some event triggers process P to call - io:format/1/2, that call will never return (since the group leader - is in a non-responsive state) and cause P to hang. + io:format/1/2, that call never returns (as the group leader + is in a non-responsive state) and causes P to hang.

- Repeated groups + Repeated Groups -

A test case group may be repeated a certain number of times +

A test case group can be repeated a certain number of times
      (specified by an integer) or indefinitely (specified by forever).
      The repetition can also be stopped prematurely if any or all cases
      fail or succeed, that is, if any of the properties repeat_until_any_fail,
      repeat_until_any_ok, repeat_until_all_fail, or
      repeat_until_all_ok is used. If the basic repeat
      property is used, the status of the test cases is irrelevant for the
      repeat operation.
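As a sketch, a group defined as follows is repeated until any of its cases fails, but at most 10 times (the group and test case names are hypothetical):

```erlang
%% Hypothetical groups/0 sketch: repeat the group until a test case
%% fails, at most 10 times.
groups() ->
    [{stress_tests, [{repeat_until_any_fail,10}], [tc1,tc2]}].
```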

-

It is possible to return the status of a sub-group (ok or - failed), to affect the execution of the group on the level above. +

The status of a subgroup can be returned (ok or + failed), to affect the execution of the group on the level above. This is accomplished by, in end_per_group/2, looking up the value of tc_group_properties in the Config list and checking the - result of the test cases in the group. If status failed should be - returned from the group as a result, end_per_group/2 should return - the value {return_group_result,failed}. The status of a sub-group - is taken into account by Common Test when evaluating if execution of a - group should be repeated or not (unless the basic repeat + result of the test cases in the group. If status failed is to be + returned from the group as a result, end_per_group/2 is to return + the value {return_group_result,failed}. The status of a subgroup + is taken into account by Common Test when evaluating if execution of a + group is to be repeated or not (unless the basic repeat property is used).

-

The tc_group_properties value is a list of status tuples, - each with the key ok, skipped and failed. The - value of a status tuple is a list containing names of test cases +

The value of tc_group_properties is a list of status tuples
      with the keys ok, skipped, and failed. The
      value of a status tuple is a list of names of test cases
      that have been executed with the corresponding status as result.

-

Here's an example of how to return the status from a group:

+

The following is an example of how to return the status from a group:

-      end_per_group(_Group, Config) ->
-          Status = ?config(tc_group_result, Config),
-          case proplists:get_value(failed, Status) of
-              [] ->                                   % no failed cases 
-	          {return_group_result,ok};
-	      _Failed ->                              % one or more failed
-	          {return_group_result,failed}
-          end.
- -

It is also possible in end_per_group/2 to check the status of - a sub-group (maybe to determine what status the current group should also - return). This is as simple as illustrated in the example above, only the - name of the group is stored in a tuple {group_result,GroupName}, - which can be searched for in the status lists. Example:

+ end_per_group(_Group, Config) -> + Status = ?config(tc_group_result, Config), + case proplists:get_value(failed, Status) of + [] -> % no failed cases + {return_group_result,ok}; + _Failed -> % one or more failed + {return_group_result,failed} + end. + +

It is also possible, in end_per_group/2, to check the status of
      a subgroup (for example, to determine what status the current group is to
      return). This is as simple as illustrated in the previous example; the
      only difference is that the group name is stored in a tuple
      {group_result,GroupName}, which can be searched for in the
      status lists.

+

Example:

-      end_per_group(group1, Config) ->
-          Status = ?config(tc_group_result, Config),
-          Failed = proplists:get_value(failed, Status),
-          case lists:member({group_result,group2}, Failed) of
-	        true ->
-		    {return_group_result,failed};
-                false ->                                                    
-	            {return_group_result,ok}
-          end; 
-      ...
+ end_per_group(group1, Config) -> + Status = ?config(tc_group_result, Config), + Failed = proplists:get_value(failed, Status), + case lists:member({group_result,group2}, Failed) of + true -> + {return_group_result,failed}; + false -> + {return_group_result,ok} + end; + ...

When a test case group is repeated, the configuration - functions, init_per_group/2 and end_per_group/2, are + functions init_per_group/2 and end_per_group/2 are also always called with each repetition.

- Shuffled test case order -

The order that test cases in a group are executed, is under normal + Shuffled Test Case Order +

The order in which test cases in a group are executed is under normal circumstances the same as the order specified in the test case list - in the group definition. With the shuffle property set, however, - Common Test will instead execute the test cases in random order.

+ in the group definition. With property shuffle set, however, + Common Test instead executes the test cases in random order.

-

The user may provide a seed value (a tuple of three integers) with - the shuffle property: {shuffle,Seed}. This way, the same shuffling +

You can provide a seed value (a tuple of three integers) with + the shuffle property {shuffle,Seed}. This way, the same shuffling order can be created every time the group is executed. If no seed value - is given, Common Test creates a "random" seed for the shuffling operation + is specified, Common Test creates a "random" seed for the shuffling operation (using the return value of erlang:now()). The seed value is always printed to the init_per_group/2 log file so that it can be used to recreate the same execution order in a subsequent test run.

-

If a shuffled test case group is repeated, the seed will not - be reset in between turns.

+

If a shuffled test case group is repeated, the seed is not + reset between turns.

-

If a sub-group is specified in a group with a shuffle property, - the execution order of this sub-group in relation to the test cases - (and other sub-groups) in the group, is also random. The order of the - test cases in the sub-group is however not random (unless, of course, the - sub-group also has a shuffle property).

+

If a subgroup is specified in a group with a shuffle property,
      the execution order of this subgroup in relation to the test cases
      (and other subgroups) in the group is also random. The order of the
      test cases in the subgroup is, however, not random (unless the
      subgroup also has a shuffle property).
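For example, the following sketch (with hypothetical group and case names) shuffles the position of inner_group among the cases of outer_group, using a fixed seed, while tc3 and tc4 still run in their listed order:

```erlang
%% Hypothetical sketch: the fixed seed {1,2,3} makes the shuffling
%% order of outer_group reproducible; inner_group has no shuffle
%% property, so tc3 and tc4 run in the listed order.
groups() ->
    [{outer_group, [{shuffle,{1,2,3}}], [tc1, {group,inner_group}, tc2]},
     {inner_group, [], [tc3,tc4]}].
```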

- Group info function + Group Information Function -

The test case group info function, group(GroupName), - serves the same purpose as the suite- and test case info - functions previously described in this chapter. The scope for - the group info, however, is all test cases and sub-groups in the +

The test case group information function, group(GroupName),
      serves the same purpose as the suite- and test case information
      functions previously described. However, the scope of the group
      information function is all test cases and subgroups in the
      group in question (GroupName).

-

Example:

+

Example:

-	group(connection_tests) ->
-	   [{require,login_data},
-	    {timetrap,1000}].
+ group(connection_tests) -> + [{require,login_data}, + {timetrap,1000}]. -

The group info properties override those set with the - suite info function, and may in turn be overridden by test - case info properties. Please see the test case info - function above for a list of valid info properties and more - general information.

+

The group information properties override those set with the + suite information function, and can in turn be overridden by test + case information properties. For a list of valid information properties + and more general information, see the + Test Case Information Function. +

- Info functions for init- and end-configuration -

It is possible to use info functions also for the init_per_suite, - end_per_suite, init_per_group, and end_per_group - functions, and it works the same way as with info functions - for test cases (see above). This is useful e.g. for setting - timetraps and requiring external configuration data relevant - only for the configuration function in question (without - affecting properties set for groups and test cases in the suite).

- -

The info function init/end_per_suite() is called for - init/end_per_suite(Config), and info function + Information Functions for Init- and End-Configuration +

Information functions can also be used for functions init_per_suite, + end_per_suite, init_per_group, and end_per_group, + and they work the same way as with the + Test Case Information Function. + This is useful, for example, for setting timetraps and requiring + external configuration data relevant only for the configuration + function in question (without affecting properties set for groups + and test cases in the suite).

+ +

The information function init/end_per_suite() is called for + init/end_per_suite(Config), and information function init/end_per_group(GroupName) is called for - init/end_per_group(GroupName,Config). Info functions - can not be used with init/end_per_testcase(TestCase, Config), - however, since these configuration functions execute on the test case process - and will use the same properties as the test case (i.e. the properties - set by the test case info function, TestCase()). Please see the test case - info function above for a list of valid info properties and more - general information. + init/end_per_group(GroupName,Config). However, information functions + cannot be used with init/end_per_testcase(TestCase, Config), + as these configuration functions execute on the test case process + and use the same properties as the test case (that is, the properties + set by the test case information function, TestCase()). For a list + of valid information properties and more general information, see the + Test Case Information Function.

@@ -789,77 +805,78 @@ Data and Private Directories -

The data directory, data_dir, is the directory where the - test module has its own files needed for the testing. The name - of the data_dir is the the name of the test suite followed - by "_data". For example, - "some_path/foo_SUITE.beam" has the data directory +

In the data directory, data_dir, the test module has
      its own files needed for the testing. The name of data_dir
      is the name of the test suite followed by "_data".
      For example, "some_path/foo_SUITE.beam" has the data directory
      "some_path/foo_SUITE_data/". Use this directory for portability,
      that is, to avoid hardcoding directory names in your suite. As the data
      directory is stored in the same directory as your test suite, you can
      rely on its existence at runtime, even if the path to your
      test suite directory has changed between test suite implementation
      and execution.
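A test case can locate its data directory through the Config argument; a sketch (the file name input.txt is hypothetical, and the ?config macro requires -include_lib("common_test/include/ct.hrl")):

```erlang
%% Hypothetical sketch: reading a file from the suite's data directory.
read_input(Config) ->
    DataDir = ?config(data_dir, Config),
    {ok, _Bin} = file:read_file(filename:join(DataDir, "input.txt")),
    ok.
```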

priv_dir is the private directory for the test cases. - This directory may be used whenever a test case (or configuration function) + This directory can be used whenever a test case (or configuration function) needs to write something to file. The name of the private directory is - generated by Common Test, which also creates the directory. + generated by Common Test, which also creates the directory.

-

By default, Common Test creates one central private directory - per test run that all test cases share. This may not always be suitable, - especially if the same test cases are executed multiple times during - a test run (e.g. if they belong to a test case group with repeat - property), and there's a risk that files in the private directory get - overwritten. Under these circumstances, it's possible to configure - Common Test to create one dedicated private directory per - test case and execution instead. This is accomplished by means of - the flag/option: create_priv_dir (to be used with the - ct_run program, the ct:run_test/1 function, or +

By default, Common Test creates one central private directory
      per test run, shared by all test cases. This is not always suitable,
      especially if the same test cases are executed multiple times during
      a test run (that is, if they belong to a test case group with property
      repeat), as there is a risk that files in the private directory get
      overwritten. Under these circumstances, Common Test can be
      configured to create one dedicated private directory per
      test case and execution instead. This is accomplished with
      the flag/option create_priv_dir (to be used with the
      ct_run program, the
      ct:run_test/1 function, or
      as test specification term). There are three possible values
      for this option as follows:

- + auto_per_run auto_per_tc manual_per_tc

- The first value indicates the default priv_dir behaviour, i.e. + The first value indicates the default priv_dir behavior, that is, one private directory created per test run. The two latter - values tell Common Test to generate a unique test directory name + values tell Common Test to generate a unique test directory name per test case and execution. If the auto version is used, all - private directories will be created automatically. This can obviously - become very inefficient for test runs with many test cases and/or - repetitions. Therefore, in case the manual version is instead used, the - test case must tell Common Test to create priv_dir when it needs it. - It does this by calling the function ct:make_priv_dir/0. + private directories are created automatically. This can become very + inefficient for test runs with many test cases or repetitions, or both. + Therefore, if the manual version is used instead, the test case must tell + Common Test to create priv_dir when it needs it. + It does this by calling the function + ct:make_priv_dir/0.
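A sketch of a test case writing a scratch file to the private directory (the ct:make_priv_dir/0 call is needed only when create_priv_dir is set to manual_per_tc; the file name is hypothetical):

```erlang
%% Hypothetical sketch: writing to priv_dir from a test case.
write_scratch_file(Config) ->
    ok = ct:make_priv_dir(),                 % only needed with manual_per_tc
    PrivDir = ?config(priv_dir, Config),
    ok = file:write_file(filename:join(PrivDir, "scratch.txt"),
                         <<"scratch data">>).
```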

-

You should not depend on current working directory for - reading and writing data files since this is not portable. All +

Do not depend on the current working directory for + reading and writing data files, as this is not portable. All scratch files are to be written in the priv_dir and all - data files should be located in data_dir. Note also that - the Common Test server sets current working directory to the test case - log directory at the start of every case. + data files are to be located in data_dir. Also, + the Common Test server sets the current working directory to + the test case log directory at the start of every case.

- Execution environment + Execution Environment

Each test case is executed by a dedicated Erlang process. The process is spawned when the test case starts, and terminated when @@ -876,236 +893,251 @@

- Timetrap timeouts + Timetrap Time-Outs

The default time limit for a test case is 30 minutes, unless a timetrap is specified either by the suite-, group-, - or test case info function. The timetrap timeout value defined by - suite/0 is the value that will be used for each test case - in the suite (as well as for the configuration functions + or test case information function. The timetrap time-out value defined by + suite/0 is the value that is used for each test case + in the suite (and for the configuration functions init_per_suite/1, end_per_suite/1, init_per_group/2, and end_per_group/2). A timetrap value defined by group(GroupName) overrides one defined by suite() - and will be used for each test case in group GroupName, and any - of its sub-groups. If a timetrap value is defined by group/1 - for a sub-group, it overrides that of its higher level groups. Timetrap - values set by individual test cases (by means of the test case info + and is used for each test case in group GroupName, and any + of its subgroups. If a timetrap value is defined by group/1 + for a subgroup, it overrides that of its higher level groups. Timetrap + values set by individual test cases (by the test case information function) override both group- and suite- level timetraps.

-

It is also possible to dynamically set/reset a timetrap during the - excution of a test case, or configuration function. This is done by calling - ct:timetrap/1. This function cancels the current timetrap - and starts a new one (that stays active until timeout, or end of the - current function).

+

A timetrap can also be set or reset dynamically during the + execution of a test case, or configuration function. + This is done by calling + ct:timetrap/1. + This function cancels the current timetrap and starts a new one + (that stays active until time-out, or end of the current function).
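For example, a test case that knows it is about to enter a slow phase can extend its own timetrap (the helper function is hypothetical):

```erlang
%% Hypothetical sketch: ct:timetrap/1 cancels the current timetrap
%% and starts a new one that covers the slow phase.
long_running_case(Config) ->
    ct:timetrap({minutes,10}),
    run_the_slow_part(Config).   % hypothetical helper
```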

Timetrap values can be extended with a multiplier value specified at - startup with the multiply_timetraps option. It is also possible - to let the test server decide to scale up timetrap timeout values - automatically, e.g. if tools such as cover or trace are running during - the test. This feature is disabled by default and can be enabled with - the scale_timetraps start option.

+ startup with option multiply_timetraps. It is also possible + to let the test server decide to scale up timetrap time-out values + automatically. That is, if tools such as cover or trace + are running during the test. This feature is disabled by default and + can be enabled with start option scale_timetraps.

If a test case needs to suspend itself for a time that also gets
      multiplied by multiply_timetraps (and possibly also scaled up if
      scale_timetraps is enabled), the function
      ct:sleep/1
      can be used (instead of, for example, timer:sleep/1).

+ scale_timetraps is enabled), the function + ct:sleep/1 + can be used (instead of, for example, timer:sleep/1).

-

A function (fun/0 or MFA) may be specified as - timetrap value in the suite-, group- and test case info function, as - well as argument to the ct:timetrap/1 function. Examples:

+

A function (fun/0 or {Mod,Func,Args} (MFA) tuple) can be + specified as timetrap value in the suite-, group- and test case information + function, and as argument to function + ct:timetrap/1.

+

Examples:

{timetrap,{my_test_utils,timetrap,[?MODULE,system_start]}}

ct:timetrap(fun() -> my_timetrap(TestCaseName, Config) end)

-

The user timetrap function may be used for two things:

- - To act as a timetrap - the timeout is triggered when the +

The user timetrap function can be used for two things as follows:

+ + To act as a timetrap. The time-out is triggered when the function returns. To return a timetrap time value (other than a function).

Before execution of the timetrap function (which is performed - on a parallel, dedicated timetrap process), Common Test cancels + on a parallel, dedicated timetrap process), Common Test cancels any previously set timer for the test case or configuration function. - When the timetrap function returns, the timeout is triggered, unless + When the timetrap function returns, the time-out is triggered, unless the return value is a valid timetrap time, such as an integer, - or a {SecMinOrHourTag,Time} tuple (see the - common_test reference manual for - details). If a time value is returned, a new timetrap is started - to generate a timeout after the specified time.
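A sketch of such a user timetrap function (the helper that waits for the external condition is hypothetical):

```erlang
%% Hypothetical sketch of a user timetrap function: it blocks until
%% some external condition holds, then returns a time value. The
%% effective timetrap is the waiting time plus the returned time.
my_timetrap(_TestCase, _Config) ->
    wait_until_system_ready(),   % hypothetical helper
    {seconds,30}.
```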

+ or a {SecMinOrHourTag,Time} tuple (for details, see module + common_test). If a time value + is returned, a new timetrap is started to generate a time-out after + the specified time.

-

The user timetrap function may of course return a time value after a delay, - and if so, the effective timetrap time is the delay time plus the +

The user timetrap function can return a time value after a delay. + The effective timetrap time is then the delay time plus the returned time.

- Logging - categories and verbosity levels -

Common Test provides three main functions for printing strings:

- + Logging - Categories and Verbosity Levels +

Common Test provides the following three main functions for + printing strings:

+ ct:log(Category, Importance, Format, Args) ct:print(Category, Importance, Format, Args) ct:pal(Category, Importance, Format, Args) -

The log/1/2/3/4 function will print a string to the test case - log file. The print/1/2/3/4 function will print the string to screen, - and the pal/1/2/3/4 function will print the same string both to file and - screen. (The functions are documented in the ct reference manual).

- -

The optional Category argument may be used to categorize the - log printout, and categories can be used for two things:

- +

The log/1,2,3,4 function + prints a string to the test case log file. + The print/1,2,3,4 function + prints the string to screen. + The pal/1,2,3,4 function + prints the same string both to file and screen. The functions are described + in module ct. +

+ +

The optional Category argument can be used to categorize the + log printout. Categories can be used for two things as follows:

+ To compare the importance of the printout to a specific - verbosity level, and - to format the printout according to a user specific HTML + verbosity level. + To format the printout according to a user-specific HTML Style Sheet (CSS). -

The Importance argument specifies a level of importance - which, compared to a verbosity level (general and/or set per category), - determines if the printout should be visible or not. Importance - is an arbitrary integer in the range 0..99. Pre-defined constants +

Argument Importance specifies a level of importance + that, compared to a verbosity level (general and/or set per category), + determines if the printout is to be visible. Importance + is any integer in the range 0..99. Predefined constants exist in the ct.hrl header file. The default importance level, - ?STD_IMPORTANCE (used if the Importance argument is not - provided), is 50. This is also the importance used for standard IO, e.g. - from printouts made with io:format/2, io:put_chars/1, etc.

+ ?STD_IMPORTANCE (used if argument Importance is not + provided), is 50. This is also the importance used for standard I/O, + for example, from printouts made with io:format/2, + io:put_chars/1, and so on.

-

Importance is compared to a verbosity level set by means of the +

Importance is compared to a verbosity level set by the verbosity start flag/option. The verbosity level can be set per - category and/or generally. The default verbosity level, ?STD_VERBOSITY, - is 50, i.e. all standard IO gets printed. If a lower verbosity level is set, - standard IO printouts will be ignored. Common Test performs the following test:

-
Importance >= (100-VerbosityLevel)
+ category or generally, or both. The default verbosity level, + ?STD_VERBOSITY, is 50, that is, all standard I/O gets printed. + If a lower verbosity level is set, standard I/O printouts are ignored. + Common Test performs the following test:

+
+ Importance >= (100-VerbosityLevel)

This also means that verbosity level 0 effectively turns all logging off
    (except for printouts made by Common Test itself).

+ (except from printouts made by Common Test itself).

The general verbosity level is not associated with any particular - category. This level sets the threshold for the standard IO printouts, - uncategorized ct:log/print/pal printouts, as well as + category. This level sets the threshold for the standard I/O printouts, + uncategorized ct:log/print/pal printouts, and printouts for categories with undefined verbosity level.

-

Example:

-
-   Some printouts during test case execution:
-
-     io:format("1. Standard IO, importance = ~w~n", [?STD_IMPORTANCE]),
-     ct:log("2. Uncategorized, importance = ~w", [?STD_IMPORTANCE]),
-     ct:log(info, "3. Categorized info, importance = ~w", [?STD_IMPORTANCE]]),
-     ct:log(info, ?LOW_IMPORTANCE, "4. Categorized info, importance = ~w", [?LOW_IMPORTANCE]),
-     ct:log(error, "5. Categorized error, importance = ~w", [?HI_IMPORTANCE]),
-     ct:log(error, ?HI_IMPORTANCE, "6. Categorized error, importance = ~w", [?MAX_IMPORTANCE]),
-
-   If starting the test without specifying any verbosity levels:
-
-     $ ct_run ...
-
-   the following gets printed:
-
-     1. Standard IO, importance = 50
-     2. Uncategorized, importance = 50
-     3. Categorized info, importance = 50
-     5. Categorized error, importance = 75
-     6. Categorized error, importance = 99
-
-   If starting the test with:
-
-     $ ct_run -verbosity 1 and info 75
-
-   the following gets printed:
-
-     3. Categorized info, importance = 50
-     4. Categorized info, importance = 25
-     6. Categorized error, importance = 99
- -

How categories can be mapped to CSS tags is documented in the - Running Tests - chapter.

- -

The Format and Args arguments in ct:log/print/pal are - always passed on to the io:format/3 function in stdlib - (please see the io manual page for details).

+

Examples:

+

Some printouts during test case execution:

+
  
+ io:format("1. Standard IO, importance = ~w~n", [?STD_IMPORTANCE]),
+ ct:log("2. Uncategorized, importance = ~w", [?STD_IMPORTANCE]),
+ ct:log(info, "3. Categorized info, importance = ~w", [?STD_IMPORTANCE]),
+ ct:log(info, ?LOW_IMPORTANCE, "4. Categorized info, importance = ~w", [?LOW_IMPORTANCE]),
+ ct:log(error, "5. Categorized error, importance = ~w", [?HI_IMPORTANCE]),
+ ct:log(error, ?HI_IMPORTANCE, "6. Categorized error, importance = ~w", [?MAX_IMPORTANCE]),
+ +

If starting the test without specifying any verbosity levels as follows:

+
+ $ ct_run ...
+

the following is printed:

+
+ 1. Standard IO, importance = 50
+ 2. Uncategorized, importance = 50
+ 3. Categorized info, importance = 50
+ 5. Categorized error, importance = 75
+ 6. Categorized error, importance = 99
+ +

If starting the test with:

+
+ $ ct_run -verbosity 1 and info 75
+

the following is printed:

+
+ 3. Categorized info, importance = 50
+ 4. Categorized info, importance = 25
+ 6. Categorized error, importance = 99
+ +

How categories can be mapped to CSS tags is documented in section + HTML Style Sheets + in chapter Running Tests and Analyzing Results.

+ +

The arguments Format and Args in ct:log/print/pal are + always passed on to function stdlib:io:format/3 (for details, see the + stdlib:io manual page).

-

For more information about log files, please see the - Running Tests chapter.

+

For more information about log files, see section + Log Files + in chapter Running Tests and Analyzing Results.

- Illegal dependencies + Illegal Dependencies

Even though it is highly efficient to write test suites with - the Common Test framework, there will surely be mistakes made, - mainly due to illegal dependencies. Noted below are some of the + the Common Test framework, mistakes can be made, + mainly because of illegal dependencies. Some of the more frequent mistakes from our own experience with running the - Erlang/OTP test suites.

+ Erlang/OTP test suites follow:

- - Depending on current directory, and writing there:

+ +

Depending on current directory, and writing there:

This is a common error in test suites. It is assumed that - the current directory is the same as what the author used as + the current directory is the same as the one the author used as current directory when the test case was developed. Many test cases even try to write scratch files to this directory. Instead - data_dir and priv_dir should be used to locate + data_dir and priv_dir are to be used to locate data and for writing scratch files.
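As a sketch of this advice (the file names input.txt and scratch.txt are hypothetical), a test case can fetch both directories from Config using the ?config macro from ct.hrl:

```erlang
%% Sketch with hypothetical file names: read input data from data_dir
%% and write scratch files to priv_dir, never to the current directory.
my_test_case(Config) ->
    DataDir = ?config(data_dir, Config),
    PrivDir = ?config(priv_dir, Config),
    {ok, Bin} = file:read_file(filename:join(DataDir, "input.txt")),
    ok = file:write_file(filename:join(PrivDir, "scratch.txt"), Bin).
```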

- Depending on execution order:

+

Depending on execution order:

-

During development of test suites, no assumption should preferrably - be made about the execution order of the test cases or suites. - E.g. a test case should not assume that a server it depends on, - has already been started by a previous test case. There are - several reasons for this: -

-

Firstly, the user/operator may specify the order at will, and maybe - a different execution order is more relevant or efficient on - some particular occasion. Secondly, if the user specifies a whole - directory of test suites for his/her test, the order the suites are - executed will depend on how the files are listed by the operating - system, which varies between systems. Thirdly, if a user - wishes to run only a subset of a test suite, there is no way - one test case could successfully depend on another. +

During development of test suites, make no assumptions about the + execution order of the test cases or suites. For example, a test + case must not assume that a server it depends on is already + started by a previous test case. Reasons for this follow:

+ + The user/operator can specify the order at will, and maybe + a different execution order is sometimes more relevant or + efficient. + If the user specifies a whole directory of test suites + for the test, the execution order of the suites depends on + how the files are listed by the operating system, which varies + between systems. + If a user wants to run only a subset of a test suite, + there is no way one test case could successfully depend on + another. +
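Instead of letting one test case depend on a server started by another, such setup can go into a configuration function. A sketch, where my_server is a hypothetical module:

```erlang
%% Sketch (my_server is hypothetical): start the shared server once in
%% init_per_suite/1 so that no test case depends on execution order.
init_per_suite(Config) ->
    {ok, Pid} = my_server:start_link(),
    [{server_pid, Pid} | Config].

end_per_suite(Config) ->
    ok = my_server:stop(?config(server_pid, Config)),
    Config.
```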
- Depending on Unix:

+

Depending on Unix:

-

Running unix commands through os:cmd are likely - not to work on non-unix platforms. +

Running Unix commands through os:cmd is likely + not to work on non-Unix platforms.
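A portable test case can check the OS type first and skip on other platforms (a sketch; the ls command is only an example):

```erlang
%% Sketch: run a Unix-only command, and skip the test case on other
%% platforms instead of failing.
unix_only_case(_Config) ->
    case os:type() of
        {unix, _} ->
            Output = os:cmd("ls"),
            ct:log("Command output: ~s", [Output]);
        _ ->
            {skip, "This test case requires Unix"}
    end.
```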

- Nested test cases:

+

Nested test cases:

-

Invoking a test case from another not only tests the same - thing twice, but also makes it harder to follow what exactly - is being tested. Also, if the called test case fails for some - reason, so will the caller. This way one error gives cause to - several error reports, which is less than ideal. +

Starting a test case from another not only tests the same + thing twice, but also makes it harder to follow what is being + tested. Also, if the called test case fails for some + reason, so does the caller. This way, one error gives rise to + several error reports, which is to be avoided.

-

Functionality common for many test case functions may be implemented - in common help functions. If these functions are useful for test cases - across suites, put the help functions into common help modules. +

Functionality common to many test case functions can be + implemented in common help functions. If these functions are + useful for test cases across suites, put the help functions + into common help modules.

- Failure to crash or exit when things go wrong:

+

Failure to crash or exit when things go wrong:

Making requests without checking that the return value - indicates success may be ok if the test case will fail at a - later stage, but it is never acceptable just to print an error - message (into the log file) and return successfully. Such test cases - do harm since they create a false sense of security when overviewing - the test results. + indicates success can be OK if the test case fails + later, but it is never acceptable just to print an error + message (into the log file) and return successfully. Such test + cases do harm, as they create a false sense of security when + overviewing the test results.

- Messing up for subsequent test cases:

+

Messing up for subsequent test cases:

-

Test cases should restore as much of the execution - environment as possible, so that the subsequent test cases will - not crash because of execution order of the test cases. - The function end_per_testcase is suitable for this. +

Test cases are to restore as much of the execution + environment as possible, so that subsequent test cases + do not crash because of their execution order. + The function + end_per_testcase + is suitable for this.
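A sketch of such cleanup (my_server and the server_pid config key are hypothetical):

```erlang
%% Sketch (my_server is hypothetical): stop a server started during the
%% test case, so that subsequent test cases start from a known state.
end_per_testcase(_TestCase, Config) ->
    case ?config(server_pid, Config) of
        undefined -> ok;
        Pid -> my_server:stop(Pid)
    end,
    ok.
```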

-- cgit v1.2.3