Copyright © 2003-2013 Ericsson AB. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Writing Test Suites
Siri Hansen, Peter Andersson
Support for test suite authors
The ct module provides the main interface for writing
test cases. This includes, for example:
- Functions for printing and logging
- Functions for reading configuration data
- Function for terminating a test case with error reason
- Function for adding comments to the HTML overview page
Please see the reference manual for the ct
module for details about these functions.
The CT application also includes other modules, with names on
the form ct_<component>, that provide various support, mainly
simplified use of communication protocols such as rpc, snmp, ftp,
telnet, etc.
Test suites
A test suite is an ordinary Erlang module that contains test
cases. It is recommended that the module has a name of the form
*_SUITE.erl. Otherwise, the directory and auto-compilation
functions in CT will not be able to locate it (at least not by default).
It is also recommended that the ct.hrl header file is included
in all test suite modules.
Each test suite module must export the function all/0
which returns the list of all test case groups and test cases
to be executed in that module.
The callback functions that the test suite should implement, and
which will be described in more detail below, are
all listed in the common_test
reference manual page.
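To make these conventions concrete, here is a minimal sketch of a test suite (the module and test case names are hypothetical):

```erlang
-module(example_SUITE).

%% Recommended: include the Common Test header file.
-include_lib("common_test/include/ct.hrl").

%% Mandatory: all/0 lists the test case groups and test cases to execute.
-export([all/0, my_first_case/1]).

all() ->
    [my_first_case].

%% A test case takes the runtime Config property list as its only argument.
my_first_case(_Config) ->
    2 = 1 + 1,  % a crash (e.g. a badmatch) would mean failure
    ok.         % returning any value means success
```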
Init and end per suite
Each test suite module may contain the optional configuration functions
init_per_suite/1 and end_per_suite/1. If the init function
is defined, so must the end function be.
If it exists, init_per_suite is called initially before the
test cases are executed. It typically contains initializations that are
common for all test cases in the suite, and that are only to be
performed once. It is recommended to be used for setting up and
verifying state and environment on the SUT (System Under Test) and/or
the CT host node, so that the test cases in the suite will execute
correctly. Examples of initial configuration operations: Opening a connection
to the SUT, initializing a database, running an installation script, etc.
end_per_suite is called as the final stage of the test suite execution
(after the last test case has finished). The function is meant to be used
for cleaning up after init_per_suite.
init_per_suite and end_per_suite will execute on dedicated
Erlang processes, just like the test cases do. The result of these functions
is however not included in the test run statistics of successful, failed and
skipped cases.
The argument to init_per_suite is Config, the
same key-value list of runtime configuration data that each test case takes
as input argument. init_per_suite can modify this parameter with
information that the test cases need. The possibly modified Config
list is the return value of the function.
If init_per_suite fails, all test cases in the test
suite will be skipped automatically (so called auto skipped),
including end_per_suite.
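As an illustration, a suite that opens a connection to the SUT once and shares the handle with all test cases might be sketched as follows (connect_to_sut/0 and disconnect/1 are hypothetical helper functions):

```erlang
init_per_suite(Config) ->
    {ok, Conn} = connect_to_sut(),   % hypothetical helper; a crash here
                                     % causes all cases to be auto skipped
    [{conn, Conn} | Config].         % make the handle available to every case

end_per_suite(Config) ->
    Conn = proplists:get_value(conn, Config),
    disconnect(Conn).                % hypothetical helper; return value ignored
```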
Note that if init_per_suite and end_per_suite do not exist
in the suite, Common Test calls dummy functions (with the same names)
instead, so that output generated by hook functions may be saved to the log
files for these dummies
(see the Common Test Hooks
chapter for more information).
Init and end per test case
Each test suite module can contain the optional configuration functions
init_per_testcase/2 and end_per_testcase/2. If the init function
is defined, so must the end function be.
If it exists, init_per_testcase is called before each
test case in the suite. It typically contains initialization which
must be done for each test case (analogous to init_per_suite for the
suite).
end_per_testcase/2 is called after each test case has
finished, giving the opportunity to perform clean-up after
init_per_testcase.
The first argument to these functions is the name of the test
case. This value can be used with pattern matching in function clauses
or conditional expressions to choose different initialization and cleanup
routines for different test cases, or perform the same routine for a number of,
or all, test cases.
The second argument is the Config key-value list of runtime
configuration data, which has the same value as the list returned by
init_per_suite. init_per_testcase/2 may modify this
parameter or return it as is. The return value of init_per_testcase/2
is passed as the Config parameter to the test case itself.
The return value of end_per_testcase/2 is ignored by the
test server, with the exception of the
save_config
and fail tuples.
It is possible in end_per_testcase to check if the
test case was successful or not (which consequently may determine
how cleanup should be performed). This is done by reading the value
tagged with tc_status from Config. The value is either
ok, {failed,Reason} (where Reason is timetrap_timeout,
info from exit/1, or details of a run-time error), or
{skipped,Reason} (where Reason is a user specific term).
The end_per_testcase/2 function is called even after a
test case terminates due to a call to ct:abort_current_testcase/1,
or after a timetrap timeout. However, end_per_testcase
will then execute on a different process than the test case
function, and in this situation, end_per_testcase will
not be able to change the reason for test case termination by
returning {fail,Reason}, nor will it be able to save data with
{save_config,Data}.
If init_per_testcase crashes, the test case itself gets skipped
automatically (so called auto skipped). If init_per_testcase
returns a tuple {skip,Reason}, the test case also gets skipped
(so called user skipped). It is also possible to mark the test case
as failed without actually executing it, by returning a tuple
{fail,Reason} from init_per_testcase.
If init_per_testcase crashes, or returns {skip,Reason}
or {fail,Reason}, the end_per_testcase function is not called.
If it is determined during execution of end_per_testcase that
the status of a successful test case should be changed to failed,
end_per_testcase may return the tuple: {fail,Reason}
(where Reason describes why the test case fails).
init_per_testcase and end_per_testcase execute on the
same Erlang process as the test case and printouts from these
configuration functions can be found in the test case log file.
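A sketch combining these mechanisms (db_case, start_db/0, and cleanup_db/0 are hypothetical names used for illustration):

```erlang
init_per_testcase(db_case, Config) ->
    [{db, start_db()} | Config];      % extra setup for one specific test case
init_per_testcase(_TestCase, Config) ->
    Config.

end_per_testcase(_TestCase, Config) ->
    case proplists:get_value(tc_status, Config) of
        ok           -> ok;           % test case succeeded, nothing to do
        {failed, _}  -> cleanup_db(); % hypothetical extra cleanup on failure
        {skipped, _} -> ok
    end.
```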
Test cases
The smallest unit that the test server is concerned with is a
test case. Each test case can actually test many things, for
example make several calls to the same interface function with
different parameters.
It is possible to choose to put many or few tests into each test
case. What exactly each test case does is of course up to the
author, but here are some things to keep in mind:
Having many small test cases tends to result in extra, and possibly
duplicated, code, as well as slow test execution because of the
large overhead for initializations and cleanups. Duplicated
code should be avoided, e.g. by means of common help functions, or
the resulting suite will be difficult to read and understand, and
expensive to maintain.
Larger test cases make it harder to tell what went wrong if one
fails, and large portions of test code will potentially be skipped
when errors occur. Furthermore, readability and maintainability suffer
when test cases become too large and extensive. Also, the resulting log
files may not reflect very well the number of tests that have
actually been performed.
The test case function takes one argument, Config, which
contains configuration information such as data_dir and
priv_dir. (See Data and
Private Directories for more information about these).
The value of Config at the time of the call, is the same
as the return value from init_per_testcase, see above.
The test case function argument Config should not be
confused with the information that can be retrieved from
configuration files (using
ct:get_config/1/2). The Config argument
should be used for runtime configuration of the test suite and the
test cases, while configuration files should typically contain data
related to the SUT. These two types of configuration data are handled
differently!
Since the Config parameter is a list of key-value tuples, i.e.
a data type generally called a property list, it can be handled by means of the
proplists module in the OTP stdlib. A value can for example
be searched for and returned with the proplists:get_value/2 function.
Alternatively, you might find useful functions in the general lists
module, also in stdlib. Normally, the only operations you
ever perform on Config are insert (adding a tuple to the head of the list)
and lookup. Common Test provides a simple macro named ?config, which returns
the value of an item in Config given the key (exactly like
proplists:get_value). Example: PrivDir = ?config(priv_dir, Config).
If the test case function crashes or exits purposely, it is considered
failed. If it returns a value (no matter what actual value) it is
considered successful. An exception to this rule is the return value
{skip,Reason}. If this tuple is returned, the test case is considered
skipped and gets logged as such.
If the test case returns the tuple {comment,Comment}, the case
is considered successful and Comment is printed out in the overview
log file. This is by the way equal to calling ct:comment(Comment).
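These return value conventions can be illustrated with a few hypothetical test cases:

```erlang
ok_case(_Config) ->
    done.                                       % any return value means success

skip_case(Config) ->
    case proplists:get_value(ipv6_supported, Config, false) of
        false -> {skip, "IPv6 not available"};  % logged as skipped
        true  -> test_ipv6()                    % hypothetical helper
    end.

comment_case(_Config) ->
    {comment, "all 2048 packets verified"}.     % success; comment shown in overview log
```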
Test case info function
For each test case function there can be an additional function
with the same name but with no arguments. This is the test case
info function. The test case info function is expected to return a
list of tagged tuples that specifies various properties regarding the
test case.
The following tags have special meaning:
- timetrap -
Set the maximum time the test case is allowed to execute. If
the timetrap time is exceeded, the test case fails with
reason timetrap_timeout. Note that init_per_testcase
and end_per_testcase are included in the timetrap time.
Please see the Timetrap
section for more details.
- userdata -
Use this to specify arbitrary data related to the testcase. This
data can be retrieved at any time using the ct:userdata/3
utility function.
- silent_connections -
Please see the
Silent Connections
chapter for details.
- require -
Use this to specify configuration variables that are required by the
test case. If the required configuration variables are not
found in any of the test system configuration files, the test case is
skipped.
It is also possible to give a required variable a default value that will
be used if the variable is not found in any configuration file. To specify
a default value, add a tuple of the form
{default_config,ConfigVariableName,Value} to the test case info list
(the position in the list is irrelevant).
Examples:
testcase1() ->
    [{require, ftp},
     {default_config, ftp, [{ftp, "my_ftp_host"},
                            {username, "aladdin"},
                            {password, "sesame"}]}].
testcase2() ->
    [{require, unix_telnet, unix},
     {require, {unix, [telnet, username, password]}},
     {default_config, unix, [{telnet, "my_telnet_host"},
                             {username, "aladdin"},
                             {password, "sesame"}]}].
See the Config files
chapter and the
ct:require/1/2 function in the
ct reference manual for more information about
require.
Specifying a default value for a required variable can result
in a test case always getting executed. This might not be a desired behaviour!
If timetrap and/or require is not set specifically for
a particular test case, default values specified by the suite/0
function are used.
Other tags than the ones mentioned above will simply be ignored by
the test server.
Example of a test case info function:
reboot_node() ->
[
{timetrap,{seconds,60}},
{require,interfaces},
{userdata,
[{description,"System Upgrade: RpuAddition Normal RebootNode"},
{fts,"http://someserver.ericsson.se/test_doc4711.pdf"}]}
].
Test suite info function
The suite/0 function can be used in a test suite
module to e.g. set a default timetrap value and to
require external configuration data. If a test case-, or
group info function also specifies any of the info tags, it
overrides the default values set by suite/0. See the test
case info function above, and group info function below, for more
details.
Other options that may be specified with the suite info list are:
- stylesheet,
see HTML Style Sheets.
- userdata,
see Test case info function.
- silent_connections,
see Silent Connections.
Example of the suite info function:
suite() ->
[
{timetrap,{minutes,10}},
{require,global_names},
{userdata,[{info,"This suite tests database transactions."}]},
{silent_connections,[telnet]},
{stylesheet,"db_testing.css"}
].
Test case groups
A test case group is a set of test cases that share configuration
functions and execution properties. Test case groups are defined by
means of the groups/0 function according to the following syntax:
groups() -> GroupDefs
Types:
GroupDefs = [GroupDef]
GroupDef = {GroupName,Properties,GroupsAndTestCases}
GroupName = atom()
GroupsAndTestCases = [GroupDef | {group,GroupName} | TestCase]
TestCase = atom()
GroupName is the name of the group and should be unique within
the test suite module. Groups may be nested, and this is accomplished
simply by including a group definition within the GroupsAndTestCases
list of another group. Properties is the list of execution
properties for the group. The possible values are:
Properties = [parallel | sequence | Shuffle | {RepeatType,N}]
Shuffle = shuffle | {shuffle,Seed}
Seed = {integer(),integer(),integer()}
RepeatType = repeat | repeat_until_all_ok | repeat_until_all_fail |
repeat_until_any_ok | repeat_until_any_fail
N = integer() | forever
If the parallel property is specified, Common Test will execute
all test cases in the group in parallel. If sequence is specified,
the cases will be executed in a sequence, as described in the chapter
Dependencies between
test cases and suites. If shuffle is specified, the cases
in the group will be executed in random order. The repeat property
orders Common Test to repeat execution of the cases in the group a given
number of times, or until any, or all, cases fail or succeed.
Example:
groups() -> [{group1, [parallel], [test1a,test1b]},
{group2, [shuffle,sequence], [test2a,test2b,test2c]}].
To specify in which order groups should be executed (also with respect
to test cases that are not part of any group), tuples of the form
{group,GroupName} should be added to the all/0 list. Example:
all() -> [testcase1, {group,group1}, testcase2, {group,group2}].
It is also possible to specify execution properties with a group
tuple in all/0: {group,GroupName,Properties}. These
properties will override those specified in the group definition (see
groups/0 above). This way, it's possible to run the same set of tests,
but with different properties, without having to make copies of the group
definition in question.
If a group contains sub-groups, the execution properties for these may
also be specified in the group tuple:
{group,GroupName,Properties,SubGroups}, where SubGroups
is a list of tuples, {GroupName,Properties}, or
{GroupName,Properties,SubGroups}, representing the sub-groups.
Any sub-groups defined in groups/0 for a group, that are not specified
in the SubGroups list, will simply execute with their pre-defined
properties.
Example:
groups() -> [{tests1, [], [{tests2, [], [t2a,t2b]},
                           {tests3, [], [t31,t3b]}]}].
To execute group 'tests1' twice with different properties for 'tests2'
each time:
all() ->
[{group, tests1, default, [{tests2, [parallel]}]},
{group, tests1, default, [{tests2, [shuffle,{repeat,10}]}]}].
Note that this is equivalent to this specification:
all() ->
[{group, tests1, default, [{tests2, [parallel]},
{tests3, default}]},
{group, tests1, default, [{tests2, [shuffle,{repeat,10}]},
{tests3, default}]}].
The value default states that the pre-defined properties
should be used.
Here's an example of how to override properties in a scenario
with deeply nested groups:
groups() ->
[{tests1, [], [{group, tests2}]},
{tests2, [], [{group, tests3}]},
{tests3, [{repeat,2}], [t3a,t3b,t3c]}].
all() ->
[{group, tests1, default,
[{tests2, default,
[{tests3, [parallel,{repeat,100}]}]}]}].
The syntax described above may also be used in Test Specifications
in order to change properties of groups at the time of execution,
without even having to edit the test suite (please see the
Test
Specifications chapter for more info).
As illustrated above, properties may be combined. If e.g.
shuffle, repeat_until_any_fail and sequence
are all specified, the test cases in the group will be executed
repeatedly, and in random order, until a test case fails. Then
execution is immediately stopped and the rest of the cases skipped.
Before execution of a group begins, the configuration function
init_per_group(GroupName, Config) is called. The list of tuples
returned from this function is passed to the test cases in the usual
manner by means of the Config argument. init_per_group/2
is meant to be used for initializations common for the test cases in the
group. After execution of the group is finished, the
end_per_group(GroupName, Config) function is called. This function
is meant to be used for cleaning up after init_per_group/2.
Whenever a group is executed, if init_per_group and
end_per_group do not exist in the suite, Common Test calls
dummy functions (with the same names) instead. Output generated by
hook functions will be saved to the log files for these dummies
(see the Common Test
Hooks chapter for more information).
init_per_testcase/2 and end_per_testcase/2
are always called for each individual test case, no matter if the case
belongs to a group or not.
The properties for a group are always printed at the top of the HTML log
for init_per_group/2. Also, the total execution time for a group
can be found at the bottom of the log for end_per_group/2.
Test case groups may be nested so that sets of groups can be
configured with the same init_per_group/2 and end_per_group/2
functions. Nested groups may be defined by including a group definition,
or a group name reference, in the test case list of another group. Example:
groups() -> [{group1, [shuffle], [test1a,
{group2, [], [test2a,test2b]},
test1b]},
{group3, [], [{group,group4},
{group,group5}]},
{group4, [parallel], [test4a,test4b]},
{group5, [sequence], [test5a,test5b,test5c]}].
In the example above, if all/0 would return group name references
in this order: [{group,group1},{group,group3}], the order of the
configuration functions and test cases will be the following (note that
init_per_testcase/2 and end_per_testcase/2 are also
always called, but are not included in this example for simplification):
- init_per_group(group1, Config) -> Config1 (*)
-- test1a(Config1)
-- init_per_group(group2, Config1) -> Config2
--- test2a(Config2), test2b(Config2)
-- end_per_group(group2, Config2)
-- test1b(Config1)
- end_per_group(group1, Config1)
- init_per_group(group3, Config) -> Config3
-- init_per_group(group4, Config3) -> Config4
--- test4a(Config4), test4b(Config4) (**)
-- end_per_group(group4, Config4)
-- init_per_group(group5, Config3) -> Config5
--- test5a(Config5), test5b(Config5), test5c(Config5)
-- end_per_group(group5, Config5)
- end_per_group(group3, Config3)
(*) The order of test case test1a, test1b and group2 is not actually
defined since group1 has a shuffle property.
(**) These cases are not executed in order, but in parallel.
Properties are not inherited from top level groups to nested
sub-groups. E.g. in the example above, the test cases in group2
will not be executed in random order (which is the property of
group1).
The parallel property and nested groups
If a group has a parallel property, its test cases will be spawned
simultaneously and get executed in parallel. A test case is not allowed
to execute in parallel with end_per_group/2 however, which means
that the time it takes to execute a parallel group is equal to the
execution time of the slowest test case in the group. A negative side
effect of running test cases in parallel is that the HTML summary pages
are not updated with links to the individual test case logs until the
end_per_group/2 function for the group has finished.
A group nested under a parallel group will start executing in parallel
with previous (parallel) test cases (no matter what properties the nested
group has). Since, however, test cases are never executed in parallel with
init_per_group/2 or end_per_group/2 of the same group, it's
only after a nested group has finished that any remaining parallel cases
in the previous group get spawned.
Parallel test cases and IO
A parallel test case has a private IO server as its group leader.
(Please see the Erlang Run-Time System Application documentation for
a description of the group leader concept). The
central IO server process that handles the output from regular test
cases and configuration functions, does not respond to IO messages
during execution of parallel groups. This is important to understand
in order to avoid certain traps, like this one:
If a process, P, is spawned during execution of e.g.
init_per_suite/1, it will inherit the group leader of the
init_per_suite process. This group leader is the central IO server
process mentioned above. If, at a later time, during parallel test case
execution, some event triggers process P to call
io:format/1/2, that call will never return (since the group leader
is in a non-responsive state) and cause P to hang.
Repeated groups
A test case group may be repeated a certain number of times
(specified by an integer) or indefinitely (specified by forever).
The repetition may also be stopped prematurely if any or all cases
fail or succeed, i.e. if the property repeat_until_any_fail,
repeat_until_any_ok, repeat_until_all_fail, or
repeat_until_all_ok is used. If the basic repeat
property is used, status of test cases is irrelevant for the repeat
operation.
It is possible to return the status of a sub-group (ok or
failed), to affect the execution of the group on the level above.
This is accomplished by, in end_per_group/2, looking up the value
of tc_group_result in the Config list and checking the
result of the test cases in the group. If status failed should be
returned from the group as a result, end_per_group/2 should return
the value {return_group_result,failed}. The status of a sub-group
is taken into account by Common Test when evaluating if execution of a
group should be repeated or not (unless the basic repeat
property is used).
The tc_group_result value is a list of status tuples,
with the keys ok, skipped, and failed. The
value of a status tuple is a list containing the names of the test cases
that have been executed with the corresponding status as result.
Here's an example of how to return the status from a group:
end_per_group(_Group, Config) ->
Status = ?config(tc_group_result, Config),
case proplists:get_value(failed, Status) of
[] -> % no failed cases
{return_group_result,ok};
_Failed -> % one or more failed
{return_group_result,failed}
end.
It is also possible in end_per_group/2 to check the status of
a sub-group (maybe to determine what status the current group should also
return). This is as simple as illustrated in the example above, only the
name of the group is stored in a tuple {group_result,GroupName},
which can be searched for in the status lists. Example:
end_per_group(group1, Config) ->
Status = ?config(tc_group_result, Config),
Failed = proplists:get_value(failed, Status),
case lists:member({group_result,group2}, Failed) of
true ->
{return_group_result,failed};
false ->
{return_group_result,ok}
end;
...
When a test case group is repeated, the configuration
functions, init_per_group/2 and end_per_group/2, are
also always called with each repetition.
Shuffled test case order
The order in which test cases in a group are executed is, under normal
circumstances, the same as the order specified in the test case list
in the group definition. With the shuffle property set, however,
Common Test will instead execute the test cases in random order.
The user may provide a seed value (a tuple of three integers) with
the shuffle property: {shuffle,Seed}. This way, the same shuffling
order can be created every time the group is executed. If no seed value
is given, Common Test creates a "random" seed for the shuffling operation
(using the return value of erlang:now()). The seed value is always
printed to the init_per_group/2 log file so that it can be used to
recreate the same execution order in a subsequent test run.
If a shuffled test case group is repeated, the seed will not
be reset in between turns.
If a sub-group is specified in a group with a shuffle property,
the execution order of this sub-group in relation to the test cases
(and other sub-groups) in the group, is also random. The order of the
test cases in the sub-group is however not random (unless, of course, the
sub-group also has a shuffle property).
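For example, to always shuffle a group with a fixed seed (the seed value here is arbitrary), so that the same order is recreated in every run:

```erlang
groups() ->
    [{my_shuffled_group, [{shuffle,{1,2,3}}], [tc1,tc2,tc3]}].
```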
Group info function
The test case group info function, group(GroupName),
serves the same purpose as the suite- and test case info
functions previously described in this chapter. The scope for
the group info, however, is all test cases and sub-groups in the
group in question (GroupName).
Example:
group(connection_tests) ->
[{require,login_data},
{timetrap,1000}].
The group info properties override those set with the
suite info function, and may in turn be overridden by test
case info properties. Please see the test case info
function above for a list of valid info properties and more
general information.
Info functions for init- and end-configuration
It is possible to use info functions also for the init_per_suite,
end_per_suite, init_per_group, and end_per_group
functions, and it works the same way as with info functions
for test cases (see above). This is useful e.g. for setting
timetraps and requiring external configuration data relevant
only for the configuration function in question (without
affecting properties set for groups and test cases in the suite).
The info function init/end_per_suite() is called for
init/end_per_suite(Config), and info function
init/end_per_group(GroupName) is called for
init/end_per_group(GroupName,Config). Info functions
can not be used with init/end_per_testcase(TestCase, Config),
however, since these configuration functions execute on the test case process
and will use the same properties as the test case (i.e. the properties
set by the test case info function, TestCase()). Please see the test case
info function above for a list of valid info properties and more
general information.
Data and Private Directories
The data directory, data_dir, is the directory where the
test module has its own files needed for the testing. The name
of the data_dir is the name of the test suite followed
by "_data". For example,
"some_path/foo_SUITE.beam" has the data directory
"some_path/foo_SUITE_data/". Use this directory for portability,
i.e. to avoid hardcoding directory names in your suite. Since the data
directory is stored in the same directory as your test suite, you should
be able to rely on its existence at runtime, even if the path to your
test suite directory has changed between test suite implementation and
execution.
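A test case would typically locate its data files like this (input.txt is a hypothetical file placed in the _data directory):

```erlang
read_input_case(Config) ->
    DataDir = proplists:get_value(data_dir, Config),
    {ok, Bin} = file:read_file(filename:join(DataDir, "input.txt")),
    check_contents(Bin).   % hypothetical helper verifying the data
```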
priv_dir is the private directory for the test cases.
This directory may be used whenever a test case (or configuration function)
needs to write something to file. The name of the private directory is
generated by Common Test, which also creates the directory.
By default, Common Test creates one central private directory
per test run that all test cases share. This may not always be suitable,
especially if the same test cases are executed multiple times during
a test run (e.g. if they belong to a test case group with repeat
property), and there's a risk that files in the private directory get
overwritten. Under these circumstances, it's possible to configure
Common Test to create one dedicated private directory per
test case and execution instead. This is accomplished by means of
the flag/option: create_priv_dir (to be used with the
ct_run program, the ct:run_test/1 function, or
as test specification term). There are three possible values
for this option:
- auto_per_run
- auto_per_tc
- manual_per_tc
The first value indicates the default priv_dir behaviour, i.e.
one private directory created per test run. The two latter
values tell Common Test to generate a unique test directory name
per test case and execution. If the auto version is used, all
private directories will be created automatically. This can obviously
become very inefficient for test runs with many test cases and/or
repetitions. Therefore, in case the manual version is instead used, the
test case must tell Common Test to create priv_dir when it needs it.
It does this by calling the function ct:make_priv_dir/0.
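With create_priv_dir set to manual_per_tc, a test case that writes a scratch file might be sketched as:

```erlang
write_result_case(Config) ->
    ok = ct:make_priv_dir(),   % create the private directory on demand
    PrivDir = proplists:get_value(priv_dir, Config),
    ok = file:write_file(filename:join(PrivDir, "result.txt"), <<"data">>).
```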
You should not depend on current working directory for
reading and writing data files since this is not portable. All
scratch files are to be written in the priv_dir and all
data files should be located in data_dir. Note also that
the Common Test server sets current working directory to the test case
log directory at the start of every case.
Execution environment
Each test case is executed by a dedicated Erlang process. The
process is spawned when the test case starts, and terminated when
the test case is finished. The configuration functions
init_per_testcase and end_per_testcase execute on the
same process as the test case.
The configuration functions init_per_suite and
end_per_suite execute, like test cases, on dedicated Erlang
processes.
Timetrap timeouts
The default time limit for a test case is 30 minutes, unless a
timetrap is specified either by the suite-, group-,
or test case info function. The timetrap timeout value defined by
suite/0 is the value that will be used for each test case
in the suite (as well as for the configuration functions
init_per_suite/1, end_per_suite/1, init_per_group/2,
and end_per_group/2). A timetrap value defined by
group(GroupName) overrides one defined by suite()
and will be used for each test case in group GroupName, and any
of its sub-groups. If a timetrap value is defined by group/1
for a sub-group, it overrides that of its higher level groups. Timetrap
values set by individual test cases (by means of the test case info
function) override both group- and suite- level timetraps.
It is also possible to dynamically set/reset a timetrap during the
execution of a test case, or configuration function. This is done by calling
ct:timetrap/1. This function cancels the current timetrap
and starts a new one (that stays active until timeout, or the end of the
current function).
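For example, a test case that knows it needs more time than the suite default could reset its own timetrap (perform_long_operation/1 is a hypothetical helper):

```erlang
long_running_case(Config) ->
    ct:timetrap({minutes, 30}),   % cancel the current timetrap, start a new one
    perform_long_operation(Config).
```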
Timetrap values can be extended with a multiplier value specified at
startup with the multiply_timetraps option. It is also possible
to let the test server decide to scale up timetrap timeout values
automatically, e.g. if tools such as cover or trace are running during
the test. This feature is disabled by default and can be enabled with
the scale_timetraps start option.
If a test case needs to suspend itself for a time that also gets
multiplied by multiply_timetraps (and possibly also scaled up if
scale_timetraps is enabled), the function ct:sleep/1
may be used (instead of e.g. timer:sleep/1).
A function (fun/0 or MFA) may be specified as
timetrap value in the suite-, group- and test case info function, as
well as argument to the ct:timetrap/1 function. Examples:
{timetrap,{my_test_utils,timetrap,[?MODULE,system_start]}}
ct:timetrap(fun() -> my_timetrap(TestCaseName, Config) end)
The user timetrap function may be used for two things:
- To act as a timetrap - the timeout is triggered when the
function returns.
- To return a timetrap time value (other than a function).
Before execution of the timetrap function (which is performed
on a parallel, dedicated timetrap process), Common Test cancels
any previously set timer for the test case or configuration function.
When the timetrap function returns, the timeout is triggered, unless
the return value is a valid timetrap time, such as an integer,
or a {SecMinOrHourTag,Time} tuple (see the
common_test reference manual for
details). If a time value is returned, a new timetrap is started
to generate a timeout after the specified time.
The user timetrap function may of course return a time value after a delay,
and if so, the effective timetrap time is the delay time plus the
returned time.
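A user timetrap function along the lines above might look like this
(the function and test case names are illustrative):

```erlang
%% Runs on a dedicated, parallel timetrap process. The delay below
%% counts towards the effective timetrap time; the returned tuple
%% then starts a new timetrap for the remaining time.
my_timetrap(TestCaseName, _Config) ->
    wait_for_system_start(),         % hypothetical blocking call
    case TestCaseName of
        heavy_case -> {minutes,10};
        _          -> {seconds,30}
    end.
```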
Logging - categories and verbosity levels
Common Test provides three main functions for printing strings:
- ct:log(Category, Importance, Format, Args)
- ct:print(Category, Importance, Format, Args)
- ct:pal(Category, Importance, Format, Args)
The log/1/2/3/4 function will print a string to the test case
log file. The print/1/2/3/4 function will print the string to screen,
and the pal/1/2/3/4 function will print the same string both to file and
screen. (The functions are documented in the ct reference manual).
The optional Category argument may be used to categorize the
log printout, and categories can be used for two things:
- To compare the importance of the printout to a specific
verbosity level, and
- to format the printout according to a user specific HTML
Style Sheet (CSS).
The Importance argument specifies a level of importance
which, compared to a verbosity level (general and/or set per category),
determines if the printout should be visible or not. Importance
is an arbitrary integer in the range 0..99. Pre-defined constants
exist in the ct.hrl header file. The default importance level,
?STD_IMPORTANCE (used if the Importance argument is not
provided), is 50. This is also the importance used for standard IO, e.g.
from printouts made with io:format/2, io:put_chars/1, etc.
Importance is compared to a verbosity level set by means of the
verbosity start flag/option. The verbosity level can be set per
category and/or generally. The default verbosity level, ?STD_VERBOSITY,
is 50, i.e. all standard IO gets printed. If a lower verbosity level is set,
standard IO printouts will be ignored. Common Test performs the following test:
Importance >= (100-VerbosityLevel)
This also means that verbosity level 0 effectively turns all logging off
(with the exception of printouts made by Common Test itself).
The general verbosity level is not associated with any particular
category. This level sets the threshold for the standard IO printouts,
uncategorized ct:log/print/pal printouts, as well as
printouts for categories with undefined verbosity level.
Example:
Some printouts during test case execution:
io:format("1. Standard IO, importance = ~w~n", [?STD_IMPORTANCE]),
ct:log("2. Uncategorized, importance = ~w", [?STD_IMPORTANCE]),
ct:log(info, "3. Categorized info, importance = ~w", [?STD_IMPORTANCE]),
ct:log(info, ?LOW_IMPORTANCE, "4. Categorized info, importance = ~w", [?LOW_IMPORTANCE]),
ct:log(error, "5. Categorized error, importance = ~w", [?HI_IMPORTANCE]),
ct:log(error, ?HI_IMPORTANCE, "6. Categorized error, importance = ~w", [?MAX_IMPORTANCE]),
If starting the test without specifying any verbosity levels:
$ ct_run ...
the following gets printed:
1. Standard IO, importance = 50
2. Uncategorized, importance = 50
3. Categorized info, importance = 50
5. Categorized error, importance = 75
6. Categorized error, importance = 99
If starting the test with:
$ ct_run -verbosity 1 and info 75
the following gets printed:
3. Categorized info, importance = 50
4. Categorized info, importance = 25
6. Categorized error, importance = 99
How categories can be mapped to CSS tags is documented in the
Running Tests
chapter.
The Format and Args arguments in ct:log/print/pal are
always passed on to the io:format/3 function in stdlib
(please see the io manual page for details).
For more information about log files, please see the
Running Tests chapter.
Illegal dependencies
Even though it is highly efficient to write test suites with
the Common Test framework, mistakes will surely be made, mainly
due to illegal dependencies. Listed below are some of the more
frequent mistakes, based on our own experience with running the
Erlang/OTP test suites.
- Depending on current directory, and writing there:
This is a common error in test suites. It is assumed that
the current directory is the same as the one the author used as
current directory when the test case was developed. Many test
cases even try to write scratch files to this directory. Instead,
data_dir should be used for locating test data, and priv_dir
for writing scratch files.
- Depending on execution order:
During development of test suites, no assumptions should be
made about the execution order of test cases or suites.
For example, a test case should not assume that a server it
depends on has already been started by a previous test case. There are
several reasons for this:
Firstly, the user/operator may specify the order at will, and
a different execution order may be more relevant or efficient on
some particular occasion. Secondly, if the user specifies a whole
directory of test suites, the order in which the suites are
executed depends on how the files are listed by the operating
system, which varies between systems. Thirdly, if a user
wishes to run only a subset of a test suite, there is no way
one test case could successfully depend on another.
- Depending on Unix:
Unix commands run through os:cmd/1 are likely
not to work on non-Unix platforms.
- Nested test cases:
Invoking one test case from another not only tests the same
thing twice, but also makes it harder to follow what exactly
is being tested. Also, if the called test case fails for some
reason, so does the caller. One error thus gives rise to
several error reports, which is less than ideal.
Functionality common for many test case functions may be implemented
in common help functions. If these functions are useful for test cases
across suites, put the help functions into common help modules.
- Failure to crash or exit when things go wrong:
Making requests without checking that the return value
indicates success may be acceptable if the test case fails at a
later stage, but it is never acceptable to just print an error
message (into the log file) and return successfully. Such test cases
do harm, since they create a false sense of security when the
test results are reviewed.
- Messing up for subsequent test cases:
Test cases should restore as much of the execution
environment as possible, so that subsequent test cases do
not fail because of changes an earlier test case has made to the
environment. The function end_per_testcase/2 is suitable for this.
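A sketch of such a cleanup, assuming a test case that changes an
application environment variable (my_app and mode are hypothetical):

```erlang
init_per_testcase(_TC, Config) ->
    %% Save the original value so it can be restored afterwards.
    Orig = application:get_env(my_app, mode),
    [{orig_mode,Orig} | Config].

end_per_testcase(_TC, Config) ->
    %% Restore the environment, regardless of test case outcome.
    case proplists:get_value(orig_mode, Config) of
        {ok,Val}  -> application:set_env(my_app, mode, Val);
        undefined -> application:unset_env(my_app, mode)
    end,
    ok.
```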