<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE chapter SYSTEM "chapter.dtd">

<chapter>
  <header>
    <copyright>
      <year>2007</year><year>2013</year>
      <holder>Ericsson AB. All Rights Reserved.</holder>
    </copyright>
    <legalnotice>
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
 
          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License.

    </legalnotice>

    <title>Getting Started</title>
    <prepared>Peter Andersson</prepared>
    <docno></docno>
    <date>2011-12-12</date>
    <rev></rev>
    <file>getting_started_chapter.xml</file>
  </header>

  <section>
    <title>Are you new around here?</title>
    <p>
      The purpose of this short chapter is to give the newcomer, with a
      "learning by example" approach, a chance to quickly get started writing
      and executing some first simple tests. The chapter introduces some of the
      basics, but leaves most explanations and details for the later
      chapters in this User's Guide. Hopefully though, after this chapter, you
      will be inspired and unintimidated enough to go on and get into the
      nitty-gritty that follows in this rather heavy User's Guide! If you're
      not much into "learning by example" and prefer more technical
      detail right away, go ahead and skip to the next chapter. Again, the basics
      presented here will be covered in detail in later chapters.
    </p>
    <p>
      This chapter also tries to demonstrate how dead simple it actually is
      to write a very basic (yet, for many module testing purposes, perfectly
      sufficient) test suite, and to execute its test cases. This is not
      necessarily obvious when you read the rest of the chapters in the
      User's Guide.
    </p>
    <p>
      A quick note before we start: In order to understand what's discussed and
      exemplified here, it is recommended that you first read through the
      opening <seealso marker="basics_chapter#basics">Common Test Basics</seealso>
      chapter.
    </p>
    </section>

    <section>
    <title>Test case execution</title>
    <p>Execution of test cases is handled this way:</p>
    
    <image file="tc_execution.gif">
      <icaption>
	Successful vs unsuccessful test case execution.
      </icaption>
    </image>

    <p>For each test case that Common Test is told to execute, it spawns a
      dedicated process on which the test case function in question starts
      running. (In parallel with the test case process, an idle waiting timer
      process is started and linked to the test case process. If the timer
      process runs out of waiting time, it sends an exit signal that terminates
      the test case process. This mechanism is called a <em>timetrap</em>).
    </p>
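    <p>
      Timetraps are covered in detail in a later chapter, but as a quick
      preview, a suite-wide timetrap may for example be set by means of the
      optional <c>suite/0</c> info function, here sketched with a 30 second
      limit per test case:
    </p>
    <pre>
      %% Optional info function; this timetrap applies to every
      %% test case in the suite.
      suite() ->
          [{timetrap,{seconds,30}}].</pre>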
    <p>In scenario 1, the test case process terminates normally after case A has
      finished executing its test code without detecting any errors. The test
      case function simply returns a value and Common Test logs the test case as
      successful.
    </p>
    <p>In scenario 2, an error is detected during test case execution
      which causes the test case B function to generate an exception.
      This causes the test case process to exit with a reason
      other than normal, and as a result, Common Test will log this as an
      unsuccessful test case.
    </p>
    <p>As you can understand from the illustration above, Common Test requires
      that a test case generates a runtime error to indicate failure (e.g.
      by causing a bad match error or by calling <c>exit/1</c>, preferably
      through the <seealso marker="ct#fail-1"><c>ct:fail/1,2</c></seealso> helper function). A successful execution is
      indicated by means of a normal return from the test case function.
    </p>
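    <p>
      In code, the difference looks like this (a minimal sketch; the test case
      names are made up for illustration):
    </p>
    <pre>
      %% Fails: the badmatch exception terminates the test case process.
      this_case_fails(_Config) ->
          1 = 2.

      %% Also fails, explicitly, preferably via ct:fail/1:
      this_case_also_fails(_Config) ->
          ct:fail(deliberate_failure).

      %% Succeeds: any normal return value means success.
      this_case_passes(_Config) ->
          ok.</pre>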
    </section>

    <section>
    <title>A simple test suite</title>
    <p>As you've seen in the basics chapter, the test suite module implements
      <seealso marker="common_test">callback functions</seealso>
      (mandatory or optional) for various purposes, e.g. the following
      (the configuration callbacks are sketched right after this list):
    </p>
      <list>
	<item>Init/end configuration function for the test suite</item>
	<item>Init/end configuration function for a test case</item>
	<item>Init/end configuration function for a test case group</item>
	<item>Test cases</item>
      </list>
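    <p>
      For reference, the optional configuration callbacks have the following
      shapes. This is a minimal pass-through sketch; each init function simply
      hands <c>Config</c> on unchanged:
    </p>
    <pre>
      %% Runs once before/after all test cases in the suite:
      init_per_suite(Config) -> Config.
      end_per_suite(_Config) -> ok.

      %% Runs before/after each test case group:
      init_per_group(_GroupName, Config) -> Config.
      end_per_group(_GroupName, _Config) -> ok.

      %% Runs before/after each test case:
      init_per_testcase(_TestCase, Config) -> Config.
      end_per_testcase(_TestCase, _Config) -> ok.</pre>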
    <p>
      The configuration functions are optional, and if you don't need them for
      your test, a test suite with one simple test case could look like this:
    </p>
    <pre>
      -module(my1st_SUITE).
      -compile(export_all).

      %% Mandatory callback: returns the list of all test cases in the suite.
      all() ->
          [mod_exists].

      %% The test case. Fails with a badmatch if mymod cannot be loaded.
      mod_exists(_) ->
          {module,mymod} = code:load_file(mymod).</pre>
    <p>
      In this example we check that the <c>mymod</c> module exists (i.e. can be
      successfully loaded by the code server). If the operation fails, we will
      get a bad match error which terminates the test case.
    </p>
    </section>

    <section>
    <title>A test suite with configuration functions</title>
    <p>
      If we need to perform configuration operations in order to run our test, we
      implement configuration functions in our suite. The result from a
      configuration function is configuration data, or simply <em><c>Config</c></em>.
      This is a list of key-value tuples which get passed from the configuration
      function to the test cases (possibly through configuration functions on a
      "lower level"). The data flow looks like this:
    </p>

    <image file="config.gif">
      <icaption>
	Config data flow in the suite.
      </icaption>
    </image>
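    <p>
      In code, passing and reading <c>Config</c> is plain list manipulation.
      A minimal sketch (the <c>ip_addr</c> key is made up for illustration):
    </p>
    <pre>
      %% Add a key-value tuple in a configuration function:
      init_per_testcase(_TestCase, Config) ->
          [{ip_addr,"127.0.0.1"} | Config].

      %% Read it back in the test case:
      my_case(Config) ->
          "127.0.0.1" = proplists:get_value(ip_addr, Config).</pre>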

    <p>
      Here's an example of a test suite which uses configuration functions
      to open and close a log file for the test cases (an operation that would
      be unnecessary and irrelevant for each test case to perform on its own):
    </p>
    <pre>
      -module(check_log_SUITE).
      -export([all/0, init_per_suite/1, end_per_suite/1]).
      -export([check_restart_result/1, check_no_errors/1]).
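      %% Note: open_log/0, close_log/1, read_log/2 and search_for/2 are
      %% local helper functions, not shown here (a sketch follows after
      %% the suite).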
      
      -define(value(Key,Config), proplists:get_value(Key,Config)).

      all() -> [check_restart_result, check_no_errors].

      init_per_suite(InitConfigData) ->
          [{logref,open_log()} | InitConfigData].

      end_per_suite(ConfigData) ->
          close_log(?value(logref, ConfigData)).

      check_restart_result(ConfigData) ->
          TestData = read_log(restart, ?value(logref, ConfigData)),
          {match,_Line} = search_for("restart successful", TestData).
      
      check_no_errors(ConfigData) ->
          TestData = read_log(all, ?value(logref, ConfigData)),
          case search_for("error", TestData) of
              {match,Line} -> ct:fail({error_found_in_log,Line});
              nomatch -> ok
          end.</pre>
    <p>
      In this example we have test cases that verify, by parsing a
      log file, that our SUT (system under test) has performed a successful
      restart and that no unexpected errors have been printed.
    </p>
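    <p>
      The helper functions used above (<c>open_log/0</c>, <c>close_log/1</c>,
      <c>read_log/2</c> and <c>search_for/2</c>) are not part of Common Test.
      Assuming the SUT writes its log to a plain text file, they could, purely
      as a sketch, look something like this (the file name <c>sut.log</c> is
      made up for the example):
    </p>
    <pre>
      %% Hypothetical helpers; the log is assumed to be the plain text
      %% file "sut.log" in the current working directory.
      open_log() ->
          {ok,LogRef} = file:open("sut.log", [read]),
          LogRef.

      close_log(LogRef) ->
          ok = file:close(LogRef).

      %% For this sketch, both 'restart' and 'all' read the whole log;
      %% a real helper would extract only the restart section.
      read_log(_Section, LogRef) ->
          {ok,_} = file:position(LogRef, bof),
          read_lines(LogRef, []).

      read_lines(LogRef, Acc) ->
          case file:read_line(LogRef) of
              {ok,Line} -> read_lines(LogRef, [Line|Acc]);
              eof       -> lists:reverse(Acc)
          end.

      search_for(Pattern, Lines) ->
          Matches = lists:filter(fun(Line) ->
                                     string:find(Line, Pattern) =/= nomatch
                                 end, Lines),
          case Matches of
              [First|_] -> {match,First};
              []        -> nomatch
          end.</pre>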

    <p>To execute the test cases in the test suite above, we could type this on
      the Unix/Linux command line (assuming for this example that the suite module
      is in the current working directory):
    </p>
    <pre>
      $ ct_run -dir .</pre>
    <p>or</p>
    <pre>
      $ ct_run -suite check_log_SUITE</pre>

    <p>If we want to use the Erlang shell to run our test, we could evaluate this call:
    </p>
    <pre>
      1> ct:run_test([{dir, "."}]).</pre>
    <p>or</p>
    <pre>
      1> ct:run_test([{suite, "check_log_SUITE"}]).</pre>
    <p>
      The result from running our test is printed in log files in HTML format
      (stored in unique log directories on different levels). This illustration
      shows the log file structure:
    </p>

    <image file="html_logs.gif">
      <icaption>
	HTML log file structure.
      </icaption>
    </image>
  </section>

  <section>
    <title>What happens next?</title>

    <p>Well, you might already be asking yourself questions such as:</p>
      
      <list>
	<item>"How and where can I specify variable data for my tests that mustn't
	  be hard-coded in the test suites (such as host names, addresses,
	  user login data, etc)?" The
	  <seealso marker="config_file_chapter#top">External Configuration Data</seealso>
	  chapter will give you that information.
	</item>
	<item>"Is there a way to declare a number of different tests and run them
	  in one session without having to write my own scripts? And can such
	  declarations be used for regression testing?" The 
	  <seealso marker="run_test_chapter#test_specifications">Test Specifications</seealso>
	  chapter answers these questions.
	</item>
	<item>"Can test cases and/or test runs be automatically repeated?" Learn more about
	  <seealso marker="write_test_chapter#test_case_groups">Test Case Groups</seealso>
	  and also read about start flags/options in the
	  <seealso marker="run_test_chapter#ct_run">Running Tests</seealso> chapter and
	  the Reference Manual.
	</item>
	<item>"Will Common Test execute my test cases in sequence or in parallel?" The
	  <seealso marker="write_test_chapter#test_case_groups">Test Case Groups</seealso>
	  section in the Running Tests chapter will give you the answer.
	</item>
	<item>"What's the syntax for timetraps (mentioned above), and how do I set them?"
	  This is explained in the
	  <seealso marker="write_test_chapter#timetraps">Timetrap Timeouts</seealso>
	  part of the Writing Test Suites chapter.
	</item>
	<item>"What functions are available for logging and printing?" Check the
	  <seealso marker="write_test_chapter#logging">Logging</seealso>
	  section in the Writing Test Suites chapter.
	</item>
	<item>"I need data files for my tests. Where do I store them preferrably?"
	  You should read about 
	  <seealso marker="write_test_chapter#data_priv_dir">Data and Private
	  Directories</seealso> for information about this.
	</item>
	<item>"May I start with a test suite example, please?"
	  <seealso marker="example_chapter#top">Sure!</seealso>
	</item>
      </list>
      <p>You will probably want to get started on your own first test suites now, while
      at the same time digging deeper into the Common Test User's Guide and Reference Manual.
      You will find that there's lots more to learn about the things that have been introduced
      in this chapter. You will of course also be presented with many more useful features,
      such as the ones listed above. Have fun!
      </p>
    </section>
</chapter>