<?xml version="1.0" encoding="latin1" ?>
<!DOCTYPE chapter SYSTEM "chapter.dtd">

<chapter>
  <header>
    <copyright>
      <year>2002</year><year>2009</year>
      <holder>Ericsson AB. All Rights Reserved.</holder>
    </copyright>
    <legalnotice>
      The contents of this file are subject to the Erlang Public License,
      Version 1.1, (the "License"); you may not use this file except in
      compliance with the License. You should have received a copy of the
      Erlang Public License along with this software. If not, it can be
      retrieved online at http://www.erlang.org/.
    
      Software distributed under the License is distributed on an "AS IS"
      basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
      the License for the specific language governing rights and limitations
      under the License.
    
    </legalnotice>

    <title>Writing Test Suites</title>
    <prepared>Siri Hansen</prepared>
    <docno></docno>
    <date></date>
    <rev></rev>
    <file>write_test_chapter.xml</file>
  </header>

  <section>
    <title>Support for test suite authors</title>
    <p>The <c>test_server</c> module provides some useful functions
      to support the test suite author. These include:
      </p>
    <list type="bulleted">
      <item>Starting and stopping slave or peer nodes</item>
      <item>Capturing and checking stdout output</item>
      <item>Retrieving and flushing the process message queue</item>
      <item>Watchdog timers</item>
      <item>Checking that a function crashes</item>
      <item>Checking that a function succeeds at least m out of n times</item>
      <item>Checking .app files</item>
    </list>
    <p>Please turn to the reference manual for the <c>test_server</c>
      module for details about these functions.
      </p>
  </section>

  <section>
    <title>Test suites</title>
    <p>A test suite is an ordinary Erlang module that contains test
      cases. It is recommended that the module has a name of the form
      <c>*_SUITE.erl</c>. Otherwise, the directory function will not find
      the module (by default).
      </p>
    <p>For some of the test server support, the test server include
      file <c>test_server.hrl</c> must be included. Never include it
      with the full path, for portability reasons. Use the compiler 
      include directive instead.
      </p>
    <p>The special function <c>all(suite)</c> in each module is called
      to get the test specification for that module. The function
      typically returns a list of test cases in that module, but any
      test specification could be returned. Please see the chapter
      about test specifications for details about this.
      </p>
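    <p>As an illustration of the two points above, a minimal suite
      skeleton could look as follows (the module name <c>foo_SUITE</c>
      and the test case names are examples only):
      </p>
    <code type="none">
-module(foo_SUITE).
-include("test_server.hrl").  %% found via the compiler include path

-export([all/1]).
-export([case1/1, case2/1]).

%% Return the test specification for this module - here simply
%% a list of all test cases in the suite.
all(suite) ->
    [case1, case2].
    </code>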
  </section>

  <section>
    <title>Init per test case</title>
    <p>In each test suite module, the functions
      <c>init_per_testcase/2</c> and <c>end_per_testcase/2</c> must be
      implemented.
      </p>
    <p><c>init_per_testcase</c> is called before each test case in the
      test suite, giving a (limited) possibility for initialization.
      </p>
    <p><c>end_per_testcase/2</c> is called after each test case is
      completed, giving a possibility to clean up.
      </p>
    <p>The first argument to these functions is the name of the test
      case. This can be used to do individual initialization and cleanup
      for each test case.
      </p>
    <p>The second argument is a list of tuples called
      <c>Config</c>. The first element in a <c>Config</c> tuple
      should be an atom - a key to be used for searching.
      <c>init_per_testcase/2</c> may modify the <c>Config</c>
      parameter or just return it as is. Whatever is returned by
      <c>init_per_testcase/2</c> is given as the <c>Config</c> parameter
      to the test case itself.
      </p>
    <p>The return value of <c>end_per_testcase/2</c> is ignored by the
      test server.
      </p>
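    <p>The following sketch shows individual initialization keyed on
      the test case name (the case name <c>file_open</c> and the
      <c>tmp_file</c> key are hypothetical):
      </p>
    <code type="none">
init_per_testcase(file_open, Config) ->
    %% Setup needed by the file_open test case only.
    [{tmp_file,"file_open.tmp"} | Config];
init_per_testcase(_Case, Config) ->
    %% All other test cases get Config unchanged.
    Config.

end_per_testcase(_Case, _Config) ->
    %% The return value is ignored by the test server.
    ok.
    </code>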
  </section>

  <section>
    <title>Test cases</title>
    <p>The smallest unit that the test server is concerned with is a
      test case. Each test case can in turn test many things, for
      example make several calls to the same interface function with
      different parameters.
      </p>
    <p>It is possible to put many or few tests into each test
      case. How many things each test case tests is up to the author,
      but here are some things to keep in mind.
      </p>
    <p>Very small test cases often lead to more code, since
      initialization has to be duplicated. More code, especially with
      a lot of duplication, increases the maintenance effort and reduces
      readability.
      </p>
    <p>Large test cases make it harder to tell what went wrong if one
      fails, and force larger portions of test code to be skipped when a
      specific part fails. These effects are accentuated when running on
      multiple platforms, because test cases often have to be skipped.
      </p>
    <p>A test case generally consists of three parts: the
      documentation part, the specification part, and the execution
      part. These are implemented as three clauses of the same function.
      </p>
    <p>The documentation clause matches the argument '<c>doc</c>' and
      returns a list of strings describing what the test case tests.
      </p>
    <p>The specification clause matches the argument '<c>suite</c>'
      and returns the test specification for this particular test
      case. If the test specification is an empty list, this indicates
      that the test case is a leaf test case, i.e. one to be executed.
      </p>
    <p><em>Note that the specification clause of a test case is
      executed on the test server controller host. This means that if
      the target is remote, the specification clause is probably
      executed on a different platform than the one being tested.</em></p>
    <p>The execution clause implements the actual test case. It takes
      one argument, <c>Config</c>, which contains configuration
      information like <c>data_dir</c> and <c>priv_dir</c>. See <seealso marker="#data_priv_dir">Data and Private Directories</seealso> for
      more information about these.
      </p>
    <p>The <c>Config</c> variable can also contain the
      <c>nodenames</c> key, if requested by the <c>require_nodenames</c>
      command in the test suite specification file. All <c>Config</c>
      items should be extracted using the <c>?config</c> macro. This is
      to ensure future compatibility if the <c>Config</c> format
      changes. See the reference manual for <c>test_server</c> for
      details about this macro.
      </p>
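    <p>For example, inside an execution clause, the data and private
      directories can be fetched from <c>Config</c> like this:
      </p>
    <code type="none">
DataDir = ?config(data_dir, Config),
PrivDir = ?config(priv_dir, Config)
    </code>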
    <p>If the execution clause crashes or exits, it is considered a
      failure. If it returns <c>{skip,Reason}</c>, the test case is
      considered skipped. If it returns <c>{comment,String}</c>,
      the string is added to the 'Comment' field on the HTML
      result page. If the execution clause returns anything else, it is
      considered a success, unless it is <c>{'EXIT',Reason}</c> or
      <c>{'EXIT',Pid,Reason}</c>, which cannot be distinguished from a
      crash and thus is considered a failure.
      </p>
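    <p>Putting the three clauses together, a complete leaf test case
      could look like this (the test case name and the tested
      expressions are examples only):
      </p>
    <code type="none">
bignum_add(doc) ->
    ["Checks that addition of large integers works."];
bignum_add(suite) ->
    %% An empty list marks this as a leaf test case, to be executed.
    [];
bignum_add(Config) when is_list(Config) ->
    Big = 16#FFFFFFFFFFFFFFFF,
    Sum = Big + Big,
    %% A failed match crashes the process, and thus fails the case.
    Sum = 2 * Big,
    ok.
    </code>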
  </section>

  <section>
    <marker id="data_priv_dir"></marker>
    <title>Data and Private Directories</title>
    <p>The data directory (<c>data_dir</c>) is the directory where the test
      module has its own files needed for the testing. A compiler test
      case may have source files to feed into the compiler, a release
      upgrade test case may have an old and a new release of
      something, a graphics test case may have some icons, and a test
      case doing a lot of math with bignums might store the correct
      answers there. The name of the <c>data_dir</c> is the name of
      the test suite followed by <c>"_data"</c>. For example,
      <c>"some_path/foo_SUITE.beam"</c> has the data directory
      <c>"some_path/foo_SUITE_data/"</c>.
      </p>
    <p>The <c>priv_dir</c> is the test suite's private directory. This
      directory should be used when a test case needs to write to
      files. The name of the private directory is generated by the test
      server, which also creates the directory.
      </p>
    <p><em>Warning:</em> Do not depend on the current directory being
      writable, or pointing to anything in particular. All scratch files
      are to be written in the <c>priv_dir</c>, and all data files found
      in <c>data_dir</c>. If the current directory has to be something
      specific, it must be set with <c>file:set_cwd/1</c>.
      </p>
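    <p>The following sketch shows the intended use of the two
      directories: data files are read from <c>data_dir</c> and all
      scratch files are written to <c>priv_dir</c> (the file names are
      hypothetical):
      </p>
    <code type="none">
copy_input(Config) when is_list(Config) ->
    DataDir = ?config(data_dir, Config),
    PrivDir = ?config(priv_dir, Config),
    %% Read input from the (read-only) data directory ...
    {ok,Bin} = file:read_file(filename:join(DataDir, "input.txt")),
    %% ... and write all output to the private directory.
    ok = file:write_file(filename:join(PrivDir, "output.txt"), Bin).
    </code>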
  </section>

  <section>
    <title>Execution environment</title>
    <p>Each time a test case is about to be executed, a new process is
      created with <c>spawn_link</c>. This ensures that the test case
      has no dependencies on earlier tests, with respect to process flags,
      process links, messages in the queue, other processes having registered
      the process, etc. As little as possible is done to change the initial
      context of the process (compared to one created by a plain
      <c>spawn</c>). Here is a list of the differences:
      </p>
    <list type="bulleted">
      <item>It has a link to the test server. If this link is removed,
       the test server will not know when the test case is finished and
       will just wait indefinitely.
      </item>
      <item>It often holds a few items in the process dictionary, all
       with names starting with '<c>test_server_</c>'. These are used to
       keep track of whether and where a test case fails.
      </item>
      <item>There is a top-level catch. All of the test case code is
       run within a catch, so that the location of a crash can be
       reported back to the test server. If the test case process is
       killed by another process (so that the catch code is never
       executed), the test server is not able to tell where the test
       case was executing.
      </item>
      <item>It has a special group leader implemented by the test
       server. This way the test server is able to capture the I/O that
       the test case produces. This is also used by some of the test
       server support functions.
      </item>
    </list>
    <p>There is no time limit for a test case, unless the test case
      itself imposes such a limit, for example by calling
      <c>test_server:timetrap/1</c>. The call can be made
      in each test case, or in the <c>init_per_testcase/2</c>
      function. Make sure to call the corresponding
      <c>test_server:timetrap_cancel/1</c> function as well, e.g. in the
      <c>end_per_testcase/2</c> function, or else the test cases will
      always fail.
      </p>
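    <p>A common pattern is to set the timetrap in
      <c>init_per_testcase/2</c> and cancel it in
      <c>end_per_testcase/2</c>, storing the handle under a key of our
      own choosing (here <c>watchdog</c>):
      </p>
    <code type="none">
init_per_testcase(_Case, Config) ->
    Dog = test_server:timetrap(test_server:minutes(5)),
    [{watchdog,Dog} | Config].

end_per_testcase(_Case, Config) ->
    Dog = ?config(watchdog, Config),
    test_server:timetrap_cancel(Dog).
    </code>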
  </section>

</chapter>