<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE chapter SYSTEM "chapter.dtd">
<chapter>
<header>
<copyright>
<year>2006</year><year>2013</year>
<holder>Ericsson AB. All Rights Reserved.</holder>
</copyright>
<legalnotice>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
</legalnotice>
<title>Dependencies between Test Cases and Suites</title>
<prepared>Peter Andersson</prepared>
<docno></docno>
<date></date>
<rev></rev>
<file>dependencies_chapter.xml</file>
</header>
<section>
<title>General</title>
<p>When creating test suites, it is strongly recommended not to
create dependencies between test cases, that is, not to let test cases
depend on the result of previous test cases. There are several
reasons for this, including the following:</p>
<list type="bulleted">
<item>It makes it impossible to run test cases individually.</item>
<item>It makes it impossible to run test cases in a different order.</item>
<item>It makes debugging difficult (as a fault can be
the result of a problem in a different test case than the one failing).</item>
<item>There are no good and explicit ways to declare dependencies, so
it can be difficult to see and understand these in test suite
code and in test logs.</item>
<item>Extending, restructuring, and maintaining test suites with
test case dependencies is difficult.</item>
</list>
<p>There are often sufficient means to work around the need for test
case dependencies. Generally, the problem is related to the state of
the System Under Test (SUT). The action of one test case can change the
system state. For some other test case to run properly, this new state
must be known.</p>
<p>Instead of passing data between test cases, it is recommended
that the test cases read the state from the SUT and perform assertions
(that is, let the test case run if the state is as expected, otherwise reset or fail).
It is also recommended to use the state to set variables necessary for the
test case to execute properly. Common actions can often be implemented as
library functions for test cases to call to set the SUT in a required state.
(Such common actions can also be separately tested, if necessary,
to ensure that they work as expected). It is sometimes also possible,
but not always desirable, to group tests together in one test case, that is,
let a test case perform a "scenario" test (a test consisting of subtests).</p>
<p>Consider, for example, a server application under test. The following
functionality is to be tested:</p>
<list type="bulleted">
<item>Starting the server</item>
<item>Configuring the server</item>
<item>Connecting a client to the server</item>
<item>Disconnecting a client from the server</item>
<item>Stopping the server</item>
</list>
<p>There are obvious dependencies between the listed functions. The server cannot
be configured if it has not first been started, a client cannot be connected until
the server is properly configured, and so on. If we want to have one test
case for each function, we might be tempted to always run the
test cases in the stated order and carry data (identities, handles,
and so on) between the cases, thereby introducing dependencies between them.</p>
<p>To avoid this, we can consider starting and stopping the server for every test.
We can thus implement the start and stop action as common functions to be
called from
<seealso marker="common_test_app#Module:init_per_testcase-2"><c>init_per_testcase</c></seealso> and
<seealso marker="common_test_app#Module:end_per_testcase-2"><c>end_per_testcase</c></seealso>.
(Remember to test the start and stop functionality separately.)
The configuration can also be implemented as a common function, maybe grouped
with the start function. Finally, the testing of connecting and disconnecting a
client can be grouped into one test case. The resulting suite can look as
follows:</p>
<pre>
 -module(my_server_SUITE).
 -compile(export_all).
 -include_lib("ct.hrl").

 %%% init and end functions...

 suite() -> [{require,my_server_cfg}].

 init_per_testcase(start_and_stop, Config) ->
     Config;

 init_per_testcase(config, Config) ->
     [{server_pid,start_server()} | Config];

 init_per_testcase(_, Config) ->
     ServerPid = start_server(),
     configure_server(ServerPid),
     [{server_pid,ServerPid} | Config].

 end_per_testcase(start_and_stop, _Config) ->
     ok;

 end_per_testcase(_, Config) ->
     ServerPid = ?config(server_pid, Config),
     stop_server(ServerPid).

 %%% test cases...

 all() -> [start_and_stop, config, connect_and_disconnect].

 %% Test that starting and stopping works.
 start_and_stop(_Config) ->
     ServerPid = start_server(),
     stop_server(ServerPid).

 %% Configuration test.
 config(Config) ->
     ServerPid = ?config(server_pid, Config),
     configure_server(ServerPid).

 %% Test connecting and disconnecting a client.
 connect_and_disconnect(Config) ->
     ServerPid = ?config(server_pid, Config),
     {ok,SessionId} = my_server:connect(ServerPid),
     ok = my_server:disconnect(ServerPid, SessionId).

 %%% common functions...

 start_server() ->
     {ok,ServerPid} = my_server:start(),
     ServerPid.

 stop_server(ServerPid) ->
     ok = my_server:stop(ServerPid).

 configure_server(ServerPid) ->
     ServerCfgData = ct:get_config(my_server_cfg),
     ok = my_server:configure(ServerPid, ServerCfgData).</pre>
</section>
<section>
<marker id="save_config"></marker>
<title>Saving Configuration Data</title>
<p>Sometimes it is impossible, or infeasible, to
implement independent test cases. Maybe the SUT state cannot be read.
Maybe the SUT cannot be reset, and restarting the system takes too long.
In situations where test case dependency is necessary,
<c>Common Test</c> offers a structured way to carry data from one test case to the next. The
same mechanism can also be used to carry data from one test suite to the next.</p>
<p>The mechanism for passing data is called <c>save_config</c>. The idea is that
one test case (or suite) can save the current value of <c>Config</c>, or any list of
key-value tuples, so that the next executing test case (or test suite) can read it.
The configuration data is not saved permanently but can only be passed from one
case (or suite) to the next.</p>
<p>To save <c>Config</c> data, return tuple <c>{save_config,ConfigList}</c>
from <c>end_per_testcase</c> or from the main test case function.</p>
<p>To read data saved by a previous test case, use macro <c>config</c> with a
<c>saved_config</c> key as follows:</p>
<p><c>{Saver,ConfigList} = ?config(saved_config, Config)</c></p>
<p><c>Saver</c> (<c>atom()</c>) is the name of the previous test case (where the
data was saved). The <c>config</c> macro can be used to extract particular data
also from the recalled <c>ConfigList</c>. It is strongly recommended that
<c>Saver</c> is always matched to the expected name of the saving test case.
This way, problems because of restructuring of the test suite can be avoided.
Also, it makes the dependency more explicit and the test suite easier to read
and maintain.</p>
<p>To pass data from one test suite to another, the same mechanism is used. The data
is to be saved by function
<seealso marker="common_test_app#Module:end_per_suite-1"><c>end_per_suite</c></seealso>
and read by function
<seealso marker="common_test_app#Module:init_per_suite-1"><c>init_per_suite</c></seealso>
in the suite that follows. When passing data between suites, <c>Saver</c> carries the
name of the test suite.</p>
<p><em>Example:</em></p>
<pre>
 -module(server_b_SUITE).
 -compile(export_all).
 -include_lib("ct.hrl").

 %%% init and end functions...

 init_per_suite(Config) ->
     %% read config saved by previous test suite
     {server_a_SUITE,OldConfig} = ?config(saved_config, Config),
     %% extract server identity (comes from server_a_SUITE)
     ServerId = ?config(server_id, OldConfig),
     SessionId = connect_to_server(ServerId),
     [{ids,{ServerId,SessionId}} | Config].

 end_per_suite(Config) ->
     %% save config for server_c_SUITE (session_id and server_id)
     {save_config,Config}.

 %%% test cases...

 all() -> [allocate, deallocate].

 allocate(Config) ->
     {ServerId,SessionId} = ?config(ids, Config),
     {ok,Handle} = allocate_resource(ServerId, SessionId),
     %% save handle for deallocation test
     NewConfig = [{handle,Handle}],
     {save_config,NewConfig}.

 deallocate(Config) ->
     {ServerId,SessionId} = ?config(ids, Config),
     {allocate,OldConfig} = ?config(saved_config, Config),
     Handle = ?config(handle, OldConfig),
     ok = deallocate_resource(ServerId, SessionId, Handle).</pre>
<p>To save <c>Config</c> data from a test case that is to be
skipped, return tuple
<c>{skip_and_save,Reason,ConfigList}</c>.</p>
<p>The result is that the test case is skipped with <c>Reason</c> printed to
the log file (as described earlier) and <c>ConfigList</c> is saved
for the next test case. <c>ConfigList</c> can be read using
<c>?config(saved_config, Config)</c>, as described earlier. <c>skip_and_save</c>
can also be returned from <c>init_per_suite</c>. In this case, the saved data can
be read by <c>init_per_suite</c> in the suite that follows.</p>
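<p>As an illustration, the following is a minimal sketch of <c>skip_and_save</c>
from <c>init_per_suite</c> (the configuration variable <c>my_resource_cfg</c>
and the skip reason are hypothetical names, not part of any real suite):</p>
<pre>
 init_per_suite(Config) ->
     %% my_resource_cfg is a hypothetical configuration variable
     case ct:get_config(my_resource_cfg) of
         undefined ->
             %% Skip the suite, but still pass Config on to
             %% init_per_suite/1 in the suite that follows.
             {skip_and_save,"my_resource_cfg not available",Config};
         _ResourceCfg ->
             Config
     end.</pre>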
</section>
<section>
<marker id="sequences"></marker>
<title>Sequences</title>
<p>Sometimes test cases depend on each other so that
if one case fails, the following tests are not to be executed.
Typically, if the <c>save_config</c> facility is used and a test
case that is expected to save data crashes, the following
case cannot run. <c>Common Test</c> offers a way to declare such dependencies,
called sequences.</p>
<p>A sequence of test cases is defined as a test case group
with a <c>sequence</c> property. Test case groups are defined
through function <c>groups/0</c> in the test suite (for details, see section
<seealso marker="write_test_chapter#test_case_groups">Test Case Groups</seealso>).</p>
<p>For example, to ensure that if <c>allocate</c>
in <c>server_b_SUITE</c> crashes, <c>deallocate</c> is skipped,
the following sequence can be defined:</p>
<pre>
 groups() -> [{alloc_and_dealloc, [sequence], [allocate,deallocate]}].</pre>
<p>Assume that the suite also contains the test case <c>get_resource_status</c>,
which is independent of the other two cases. Function <c>all/0</c> can then
look as follows:</p>
<pre>
 all() -> [{group,alloc_and_dealloc}, get_resource_status].</pre>
<p>If <c>allocate</c> succeeds, <c>deallocate</c> is also executed. If <c>allocate</c>
fails, however, <c>deallocate</c> is not executed but is marked as <c>SKIPPED</c> in the
HTML log. <c>get_resource_status</c> runs regardless of what happens to the
<c>alloc_and_dealloc</c> cases.</p>
<p>Test cases in a sequence are executed in order until all succeed or
one fails. If one fails, all following cases in the sequence are skipped.
The cases in the sequence that have succeeded up to that point are reported as
successful in the log. Any number of sequences can be specified.</p>
<p><em>Example:</em></p>
<pre>
 groups() -> [{scenarioA, [sequence], [testA1, testA2]},
              {scenarioB, [sequence], [testB1, testB2, testB3]}].

 all() -> [test1,
           test2,
           {group,scenarioA},
           test3,
           {group,scenarioB},
           test4].</pre>
<p>A sequence group can have subgroups. Such subgroups can have
any property, that is, they are not required to also be sequences. If you want the
status of the subgroup to affect the sequence on the level above, return
<c>{return_group_result,Status}</c> from
<seealso marker="common_test_app#Module:end_per_group-2"><c>end_per_group/2</c></seealso>,
as described in section
<seealso marker="write_test_chapter#repeated_groups">Repeated Groups</seealso>
in Writing Test Suites.
A failed subgroup (<c>Status == failed</c>) causes the execution of a
sequence to fail in the same way a test case does.</p>
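<p>The following sketch (with hypothetical group and test case names) shows how a
subgroup can propagate its status to a surrounding sequence. It reads the
<c>tc_group_result</c> entry that <c>Common Test</c> provides in <c>Config</c>
for <c>end_per_group/2</c>:</p>
<pre>
 groups() -> [{main_seq, [sequence], [setup, {group,sub_tests}, cleanup]},
              {sub_tests, [parallel], [test1, test2]}].

 end_per_group(sub_tests, Config) ->
     Status = ?config(tc_group_result, Config),
     case proplists:get_value(failed, Status) of
         [] ->
             %% no failed cases in the subgroup
             {return_group_result,ok};
         _Failed ->
             %% fail the surrounding sequence, so cleanup is skipped
             {return_group_result,failed}
     end;
 end_per_group(_, _Config) ->
     ok.</pre>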
</section>
</chapter>