When creating test suites, it is strongly recommended not to create dependencies between test cases, that is, letting test cases depend on the result of previous test cases. There are various reasons for this, such as the following:

- It makes it impossible to run test cases individually.
- It makes it impossible to run test cases in a different order.
- It makes debugging difficult (a fault in a test case can be the result of a problem in a previous test case).
- There are no good and explicit ways to declare dependencies, so it can be difficult to see and understand these in test suite code and in test logs.
- Extending, restructuring, and maintaining test suites is difficult.
There are often sufficient means to work around the need for test case dependencies. Generally, the problem is related to the state of the System Under Test (SUT). The action of one test case can change the system state. For some other test case to run properly, this new state must be known.
Instead of passing data between test cases, it is recommended that the test cases read the state from the SUT and perform assertions (that is, let the test case run if the state is as expected, otherwise reset or fail). It is also recommended to use the state to set variables necessary for the test case to execute properly. Common actions can often be implemented as library functions for test cases to call to set the SUT in a required state. (Such common actions can also be separately tested, if necessary, to ensure that they work as expected). It is sometimes also possible, but not always desirable, to group tests together in one test case, that is, let a test case perform a "scenario" test (a test consisting of subtests).
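The "read the state, assert, and set up" pattern above can be sketched as a small library function for test cases to call. In this sketch, the module my_server and its functions get_state/0 and configure/1 are hypothetical stand-ins for a real SUT interface, not part of Common Test:

```erlang
-module(sut_lib).
-export([ensure_configured/1]).

%% Bring the SUT to the "configured" state, or fail the calling
%% test case with an informative reason.
ensure_configured(CfgData) ->
    case my_server:get_state() of
        configured ->
            ok;                                 % already in the required state
        started ->
            ok = my_server:configure(CfgData);  % move to the required state
        Other ->
            ct:fail({unexpected_sut_state,Other})
    end.
```

A test case (or its init_per_testcase/2) can then call sut_lib:ensure_configured(CfgData) instead of relying on a previous test case having configured the server.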
Consider, for example, a server application under test. The following functionality is to be tested:

- Starting the server
- Configuring the server
- Connecting a client to the server
- Disconnecting a client from the server
- Stopping the server
There are obvious dependencies between the listed functions. The server cannot be configured if it has not first been started, a client cannot be connected until the server is properly configured, and so on. If we want one test case for each function, we might be tempted to always run the test cases in the stated order and carry data (identities, handles, and so on) between the cases, thereby introducing dependencies between them.
To avoid this, we can consider starting and stopping the server for every test. We can thus implement the start and stop actions as common functions to be called from init_per_testcase and end_per_testcase. (Remember the possibility to test the start and stop functionality separately, if needed.) The configuration can also be implemented as a common function, maybe grouped with the start function. Finally, connecting and disconnecting a client can be tested in one test case. The resulting suite can look as follows:
-module(my_server_SUITE).
-compile(export_all).
-include_lib("common_test/include/ct.hrl").

%%% init and end functions...

suite() -> [{require,my_server_cfg}].

init_per_testcase(start_and_stop, Config) ->
    Config;

init_per_testcase(config, Config) ->
    [{server_pid,start_server()} | Config];

init_per_testcase(_, Config) ->
    ServerPid = start_server(),
    configure_server(ServerPid),
    [{server_pid,ServerPid} | Config].

end_per_testcase(start_and_stop, _) ->
    ok;

end_per_testcase(_, Config) ->
    ServerPid = ?config(server_pid, Config),
    stop_server(ServerPid).

%%% test cases...

all() -> [start_and_stop, config, connect_and_disconnect].

%% test that starting and stopping works
start_and_stop(_) ->
    ServerPid = start_server(),
    stop_server(ServerPid).

%% configuration test
config(Config) ->
    ServerPid = ?config(server_pid, Config),
    configure_server(ServerPid).

%% test connecting and disconnecting client
connect_and_disconnect(Config) ->
    ServerPid = ?config(server_pid, Config),
    {ok,SessionId} = my_server:connect(ServerPid),
    ok = my_server:disconnect(ServerPid, SessionId).

%%% common functions...

start_server() ->
    {ok,ServerPid} = my_server:start(),
    ServerPid.

stop_server(ServerPid) ->
    ok = my_server:stop(ServerPid),
    ok.

configure_server(ServerPid) ->
    ServerCfgData = ct:get_config(my_server_cfg),
    ok = my_server:configure(ServerPid, ServerCfgData),
    ok.
Sometimes it is impossible, or infeasible, to implement independent test cases. Maybe it is not possible to read the SUT state. Maybe resetting the SUT is impossible and restarting the system takes too long. In situations where test case dependency is necessary, CT offers a structured way to carry data from one test case to the next. The same mechanism can also be used to carry data from one test suite to the next.
The mechanism for passing data is called save_config. The idea is that one test case (or suite) can save the current value of Config, or any list of key-value tuples, so that the next executed test case (or suite) can read it. The configuration data is not saved permanently but can only be passed from one case (or suite) to the next.

To save Config data, return the tuple {save_config,ConfigList} from end_per_testcase or from the main test case function.

To read data saved by a previous test case, use macro ?config with a saved_config tuple:

    {Saver,ConfigList} = ?config(saved_config, Config)

Saver (an atom) is the name of the previous test case, where the data was saved.

To pass data from one test suite to another, the same mechanism is used. The data
is to be saved by function end_per_suite and read by function init_per_suite in the suite that follows. When passing data between suites, Saver carries the name of the test suite.
Example:
-module(server_b_SUITE).
-compile(export_all).
-include_lib("common_test/include/ct.hrl").

%%% init and end functions...

init_per_suite(Config) ->
    %% read config saved by previous test suite
    {server_a_SUITE,OldConfig} = ?config(saved_config, Config),
    %% extract server identity (comes from server_a_SUITE)
    ServerId = ?config(server_id, OldConfig),
    SessionId = connect_to_server(ServerId),
    [{ids,{ServerId,SessionId}} | Config].

end_per_suite(Config) ->
    %% save config for server_c_SUITE (session_id and server_id)
    {save_config,Config}.

%%% test cases...

all() -> [allocate, deallocate].

allocate(Config) ->
    {ServerId,SessionId} = ?config(ids, Config),
    {ok,Handle} = allocate_resource(ServerId, SessionId),
    %% save handle for deallocation test
    NewConfig = [{handle,Handle}],
    {save_config,NewConfig}.

deallocate(Config) ->
    {ServerId,SessionId} = ?config(ids, Config),
    {allocate,OldConfig} = ?config(saved_config, Config),
    Handle = ?config(handle, OldConfig),
    ok = deallocate_resource(ServerId, SessionId, Handle).
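For orientation, the saving side in server_a_SUITE could look roughly as follows. This is a sketch; start_server/0 is a hypothetical helper returning the server identity, not part of the example above:

```erlang
%% In server_a_SUITE:
init_per_suite(Config) ->
    ServerId = start_server(),
    [{server_id,ServerId} | Config].

end_per_suite(Config) ->
    %% make server_id (and the rest of Config) available to server_b_SUITE
    {save_config,Config}.
```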
To save Config data from a test case that is to be skipped, return the tuple {skip_and_save,Reason,ConfigList}.
The result is that the test case is skipped with Reason printed to the log file, and ConfigList is saved for the next test case. ConfigList can be read using ?config(saved_config, Config), as described earlier. skip_and_save can also be returned from init_per_suite. In this case, the saved data can be read by init_per_suite in the suite that follows.
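As a sketch, a test case can skip itself when an optional configuration variable is missing, but still pass a default value on to the next case. The variable name optional_cfg and the key cfg are hypothetical:

```erlang
config_check(_Config) ->
    case ct:get_config(optional_cfg) of
        undefined ->
            %% skip this case, but still save data for the next one
            {skip_and_save,"optional_cfg not defined",[{cfg,default}]};
        Cfg ->
            {save_config,[{cfg,Cfg}]}
    end.
```

The next test case reads the saved list through ?config(saved_config, Config) regardless of whether config_check ran or was skipped.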
Sometimes test cases depend on each other so that
if one case fails, the following tests are not to be executed.
Typically, if the save_config facility is used and a test case that is expected to save data fails, the following case cannot run. CT offers a way to declare such dependencies, called sequences.

A sequence of test cases is defined as a test case group
with a sequence property. Test case groups are defined through function groups/0 in the test suite.
For example, to ensure that if alloc in server_b_SUITE fails, dealloc is skipped, the following sequence can be defined:
groups() -> [{alloc_and_dealloc, [sequence], [alloc,dealloc]}].
Assume that the suite also contains the test case get_resource_status, which is independent of the other two cases. Function all can then look as follows:
all() -> [{group,alloc_and_dealloc}, get_resource_status].
If alloc succeeds, dealloc is also executed. If alloc fails, however, dealloc is not executed but marked as SKIPPED in the HTML log. get_resource_status runs no matter what happens to the alloc_and_dealloc cases.
Test cases in a sequence are executed in order until all succeed or one fails. If one fails, all following cases in the sequence are skipped. The cases in the sequence that have succeeded up to that point are reported as successful in the log. Any number of sequences can be specified.
Example:
groups() -> [{scenarioA, [sequence], [testA1, testA2]},
             {scenarioB, [sequence], [testB1, testB2, testB3]}].

all() -> [test1, test2, {group,scenarioA}, test3, {group,scenarioB}, test4].
A sequence group can have subgroups. Such subgroups can have
any property, that is, they are not required to also be sequences. If you want the
status of the subgroup to affect the sequence on the level above, return {return_group_result,Status} from end_per_group/2. A failed subgroup (Status == failed) causes the execution of a sequence to fail in the same way a test case does.
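A minimal sketch of such an end_per_group/2 follows. The group name my_subgroup is hypothetical; tc_group_result is the status entry that Common Test adds to Config for group end functions:

```erlang
end_per_group(my_subgroup, Config) ->
    Status = ?config(tc_group_result, Config),
    case proplists:get_value(failed, Status) of
        [] ->                            % no failed test cases in the subgroup
            {return_group_result,ok};
        _Failed ->                       % one or more test cases failed
            {return_group_result,failed}
    end;
end_per_group(_Group, _Config) ->
    ok.
```

Returning failed for my_subgroup makes the enclosing sequence skip its remaining cases, exactly as if a test case in the sequence had failed.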