<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE chapter SYSTEM "chapter.dtd">

<chapter>
  <header>
    <copyright>
      <year>2006</year><year>2013</year>
      <holder>Ericsson AB. All Rights Reserved.</holder>
    </copyright>
    <legalnotice>
      The contents of this file are subject to the Erlang Public License,
      Version 1.1, (the "License"); you may not use this file except in
      compliance with the License. You should have received a copy of the
      Erlang Public License along with this software. If not, it can be
      retrieved online at http://www.erlang.org/.
    
      Software distributed under the License is distributed on an "AS IS"
      basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
      the License for the specific language governing rights and limitations
      under the License.
    
    </legalnotice>

    <title>Dependencies between Test Cases and Suites</title>
    <prepared>Peter Andersson</prepared>
    <docno></docno>
    <date></date>
    <rev></rev>
    <file>dependencies_chapter.xml</file>
  </header>

  <section>
    <title>General</title>
    <p>When creating test suites, it is strongly recommended not to
      create dependencies between test cases, i.e. not to let test cases
      depend on the result of previous test cases. There are several
      reasons for this, for example:</p>

    <list>
      <item>It makes it impossible to run test cases individually.</item>
      <item>It makes it impossible to run test cases in a different order.</item>
      <item>It makes debugging very difficult (since a fault could be
	the result of a problem in a different test case than the one failing).</item>
      <item>There is no good and explicit way to declare dependencies, so
	it may be very difficult to see and understand them in the test suite
	code and in the test logs.</item>
      <item>Extending, restructuring and maintaining test suites with 
	test case dependencies is difficult.</item>      
    </list>
    
    <p>There are often sufficient means to work around the need for test
      case dependencies. Generally, the problem is related to the state of
      the system under test (SUT). The action of one test case may alter the
      state of the system, and for some other test case to run properly, this
      new state must be known.</p>

    <p>Instead of passing data between test cases, it is recommended
      that the test cases read the state from the SUT and perform assertions
      (i.e. let the test case run if the state is as expected, otherwise reset or fail)
      and/or use the state to set variables necessary for the test case to execute
      properly. Common actions can often be implemented as library functions for
      test cases to call to set the SUT in a required state. (Such common actions
      may of course also be tested separately if necessary, to ensure they
      work as expected.) It is sometimes also possible, but not always desirable,
      to group tests together in one test case, i.e. let a test case perform a
      "scenario" test (a test that consists of subtests).</p>

    <p>Consider for example a server application under test. The following 
      functionality is to be tested:</p>
    
    <list>
      <item>Starting the server.</item>
      <item>Configuring the server.</item>
      <item>Connecting a client to the server.</item>
      <item>Disconnecting a client from the server.</item>
      <item>Stopping the server.</item>
    </list>

    <p>There are obvious dependencies between the listed functions. We can't
      configure the server if it hasn't first been started, we can't connect a
      client until the server has been properly configured, etc. If we want to
      have one test case for each of the functions, we might be tempted to
      always run the test cases in the stated order and carry data (identities,
      handles, etc.) between the cases, thereby introducing dependencies between
      them. To avoid this, we could instead start and stop the server for every
      test, implementing the start and stop actions as common functions that can
      be called from <c>init_per_testcase</c> and <c>end_per_testcase</c>. (We
      would of course test the start and stop functionality separately.) The
      configuration could perhaps also be implemented as a common function,
      maybe grouped with the start function. Finally, the testing of connecting
      and disconnecting a client may be grouped into one test case. The
      resulting suite would look something like this:</p>


    <pre>      
      -module(my_server_SUITE).
      -compile(export_all).
      -include_lib("common_test/include/ct.hrl").

      %%% init and end functions...

      suite() -> [{require,my_server_cfg}].

      init_per_testcase(start_and_stop, Config) ->
          Config;

      init_per_testcase(config, Config) ->
          [{server_pid,start_server()} | Config];

      init_per_testcase(_, Config) ->
          ServerPid = start_server(),
          configure_server(ServerPid),
          [{server_pid,ServerPid} | Config].

      end_per_testcase(start_and_stop, _) ->
          ok;

      end_per_testcase(_, Config) ->
          ServerPid = ?config(server_pid, Config),
          stop_server(ServerPid).

      %%% test cases...

      all() -> [start_and_stop, config, connect_and_disconnect].

      %% test that starting and stopping works
      start_and_stop(_) ->
          ServerPid = start_server(),
          stop_server(ServerPid).

      %% configuration test
      config(Config) ->
          ServerPid = ?config(server_pid, Config),
          configure_server(ServerPid).

      %% test connecting and disconnecting client
      connect_and_disconnect(Config) ->
          ServerPid = ?config(server_pid, Config),
          {ok,SessionId} = my_server:connect(ServerPid),
          ok = my_server:disconnect(ServerPid, SessionId).

      %%% common functions...

      start_server() ->
          {ok,ServerPid} = my_server:start(),
          ServerPid.

      stop_server(ServerPid) ->
          ok = my_server:stop(ServerPid),
          ok.

      configure_server(ServerPid) ->
          ServerCfgData = ct:get_config(my_server_cfg),
          ok = my_server:configure(ServerPid, ServerCfgData),
          ok.
      </pre>
  </section>

  <section>
    <marker id="save_config"></marker>
    <title>Saving configuration data</title>

      <p>There might be situations where it is impossible, or at least
	infeasible, to implement independent test cases. Perhaps it is simply
	not possible to read the SUT state, or resetting the SUT is impossible
	and restarting the system takes far too long. In situations where test
	case dependencies are necessary, CT offers a structured way to carry
	data from one test case to the next. The same mechanism may also be
	used to carry data from one test suite to the next.</p>

      <p>The mechanism for passing data is called <c>save_config</c>. The idea is that
	one test case (or suite) may save the current value of Config - or any list of
	key-value tuples - so that it can be read by the next executing test case 
	(or test suite). The configuration data is not saved permanently but can only 
	be passed from one case (or suite) to the next.</p>

      <p>To save <c>Config</c> data, return the tuple:</p>

      <p><c>{save_config,ConfigList}</c></p>
      
      <p>from <c>end_per_testcase</c> or from the main test case function. To read data
	saved by a previous test case, use the <c>config</c> macro with a
	<c>saved_config</c> key:</p>
      
      <p><c>{Saver,ConfigList} = ?config(saved_config, Config)</c></p>

      <p><c>Saver</c> (<c>atom()</c>) is the name of the previous test case (where the
	data was saved). The <c>config</c> macro may also be used to extract particular
	data from the recalled <c>ConfigList</c>. It is strongly recommended that
	<c>Saver</c> is always matched to the expected name of the saving test case.
	This way, problems due to restructuring of the test suite may be avoided. It
	also makes the dependency more explicit and the test suite easier to read
	and maintain.</p>

      <p>To pass data from one test suite to another, the same mechanism is used. The data
	should be saved by the <c>end_per_suite</c> function and read by <c>init_per_suite</c>
	in the suite that follows. When passing data between suites, <c>Saver</c> carries the 
	name of the test suite.</p>

      <p>Example:</p>
      
      <pre>
	-module(server_b_SUITE).
	-compile(export_all).
	-include_lib("common_test/include/ct.hrl").

	%%% init and end functions...

	init_per_suite(Config) ->
	    %% read config saved by previous test suite
	    {server_a_SUITE,OldConfig} = ?config(saved_config, Config),
	    %% extract server identity (comes from server_a_SUITE)
	    ServerId = ?config(server_id, OldConfig),
	    SessionId = connect_to_server(ServerId),
	    [{ids,{ServerId,SessionId}} | Config].

	end_per_suite(Config) ->
	    %% save config for server_c_SUITE (session_id and server_id)
	    {save_config,Config}.

	%%% test cases...

	all() -> [allocate, deallocate].

	allocate(Config) ->
	    {ServerId,SessionId} = ?config(ids, Config),
	    {ok,Handle} = allocate_resource(ServerId, SessionId),
	    %% save handle for deallocation test
	    NewConfig = [{handle,Handle}],
	    {save_config,NewConfig}.

	deallocate(Config) ->
	    {ServerId,SessionId} = ?config(ids, Config),
	    {allocate,OldConfig} = ?config(saved_config, Config),
	    Handle = ?config(handle, OldConfig),
	    ok = deallocate_resource(ServerId, SessionId, Handle). 
	</pre>

      <p>It is also possible to save <c>Config</c> data from a test case that is to be
        skipped. To accomplish this, return the following tuple:</p>

      <p><c>{skip_and_save,Reason,ConfigList}</c></p>

      <p>The result will be that the test case is skipped with <c>Reason</c> printed to
      the log file (as described in previous chapters), and <c>ConfigList</c> is saved 
      for the next test case. <c>ConfigList</c> may be read by means of 
      <c>?config(saved_config, Config)</c>, as described above. <c>skip_and_save</c>
      may also be returned from <c>init_per_suite</c>, in which case the saved data can
      be read by <c>init_per_suite</c> in the suite that follows.</p>
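
      <p>For example (a minimal sketch, where <c>my_server:alloc/0</c> and its
	return values are hypothetical):</p>

      <pre>
	alloc_resource(_Config) ->
	    case my_server:alloc() of
		{error,not_configured} ->
		    %% skip this case, but still save data for the next one
		    {skip_and_save,"server not configured",[{handle,none}]};
		{ok,Handle} ->
		    %% save the handle for the next test case to use
		    {save_config,[{handle,Handle}]}
	    end.
	</pre>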
  </section>

  <section>
    <marker id="sequences"></marker>
    <title>Sequences</title>

      <p>It is possible that test cases depend on each other so that
	if one case fails, the following test(s) should not be executed.
        Typically, if the <c>save_config</c> facility is used and a test 
	case that is expected to save data crashes, the following 
	case cannot run. CT offers a way to declare such dependencies,
	called sequences.</p>

      <p>A sequence of test cases is defined as a test case group
        with a <c>sequence</c> property. Test case groups are defined by 
	means of the <c>groups/0</c> function in the test suite (see the 
	<seealso marker="write_test_chapter#test_case_groups">Test case groups</seealso>
	chapter for details).</p>

      <p>For example, if we would like to make sure that if <c>allocate</c>
	in <c>server_b_SUITE</c> (above) crashes, <c>deallocate</c> is skipped,
	we may define a sequence like this:</p>
      
      <pre>
	groups() -> [{alloc_and_dealloc, [sequence], [allocate,deallocate]}].</pre>

      <p>Let's also assume the suite contains the test case <c>get_resource_status</c>,
	which is independent of the other two cases. The <c>all</c> function could
	then look like this:</p>

      <pre>
	all() -> [{group,alloc_and_dealloc}, get_resource_status].</pre>

      <p>If <c>allocate</c> succeeds, <c>deallocate</c> is also executed. If
	<c>allocate</c> fails however, <c>deallocate</c> is not executed but marked
	as SKIPPED in the HTML log. <c>get_resource_status</c> will run no matter
	what happens to the <c>alloc_and_dealloc</c> cases.</p>

      <p>Test cases in a sequence will be executed in order until they have all succeeded or 
	until one case fails. If one fails, all following cases in the sequence are skipped.
	The cases in the sequence that have succeeded up to that point are reported as successful
	in the log. An arbitrary number of sequences may be specified. Example:</p>

      <pre>
	groups() -> [{scenarioA, [sequence], [testA1, testA2]}, 
	             {scenarioB, [sequence], [testB1, testB2, testB3]}].

	all() -> [test1, 
	          test2, 
	          {group,scenarioA}, 
	          test3, 
	          {group,scenarioB}, 
	          test4].</pre>

      <p>It is possible to have sub-groups in a sequence group. Such sub-groups can have
	any property, i.e. they are not required to also be sequences. If you want the status
	of the sub-group to affect the sequence on the level above, return
	<c>{return_group_result,Status}</c> from <c>end_per_group/2</c>, as described in the
	<seealso marker="write_test_chapter#repeated_groups">Repeated groups</seealso>
	chapter. A failed sub-group (<c>Status == failed</c>) will cause the execution of a
	sequence to fail in the same way a test case does.</p>
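
      <p>The following is a minimal sketch of such an <c>end_per_group/2</c>
	function, along the lines described in the Repeated groups chapter
	(the group name <c>sub_group</c> is an assumed example):</p>

      <pre>
	end_per_group(sub_group, Config) ->
	    %% forward the status of this sub-group to the sequence above
	    Status = ?config(tc_group_result, Config),
	    case proplists:get_value(failed, Status) of
		[] ->                                   % no failed cases
		    {return_group_result,ok};
		_Failed ->                              % one or more failed
		    {return_group_result,failed}
	    end;
	end_per_group(_Group, _Config) ->
	    ok.
	</pre>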
  </section>
</chapter>