<?xml version="1.0" encoding="latin1" ?>
<!DOCTYPE chapter SYSTEM "chapter.dtd">

<chapter>
  <header>
    <copyright>
      <year>2002</year><year>2009</year>
      <holder>Ericsson AB. All Rights Reserved.</holder>
    </copyright>
    <legalnotice>
      The contents of this file are subject to the Erlang Public License,
      Version 1.1, (the "License"); you may not use this file except in
      compliance with the License. You should have received a copy of the
      Erlang Public License along with this software. If not, it can be
      retrieved online at http://www.erlang.org/.
    
      Software distributed under the License is distributed on an "AS IS"
      basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
      the License for the specific language governing rights and limitations
      under the License.
    
    </legalnotice>

    <title>Test Structure and Test Specifications</title>
    <prepared>Siri Hansen</prepared>
    <docno></docno>
    <date></date>
    <rev></rev>
    <file>test_spec_chapter.xml</file>
  </header>

  <section>
    <title>Test structure</title>
    <p>A test consists of a set of test cases. Each test case is
      implemented as an Erlang function. An Erlang module implementing
      one or more test cases is called a test suite.
      </p>
  </section>

  <section>
    <title>Test specifications</title>
    <p>A test specification is a specification of which test suites
      and test cases to run and which to skip. A test specification can
      also group several test cases into conf cases with init and
      cleanup functions (see the section about configuration cases
      below). In a test, there can be test specifications on three
      different levels:
      </p>
    <p>The top level is a test specification file which roughly
      specifies what to test for a whole application. The test
      specification in such a file is encapsulated in a topcase
      command. 
      </p>
    <p>Then there is a test specification for each test suite,
      specifying which test cases to run within the suite. The test
      specification for a test suite is returned from the
      <c>all(suite)</c> function in the test suite module. 
      </p>
    <p>And finally there can be a test specification per test case,
      specifying sub test cases to run. The test specification for a
      test case is returned from the specification clause of the test
      case.
      </p>
    <p>When a test starts, the total test specification is built in a
      tree fashion, starting from the top level test specification.
      </p>
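    <p>As a minimal sketch, an <c>all(suite)</c> function returning the
      test specification for a suite (the case names are hypothetical)
      could look like this:
      </p>
    <pre>
      all(suite) ->
          [case1, case2, case3].</pre>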
    <p>The following are the valid elements of a test
      specification. The specification can be one of these elements or a
      list with any combination of the elements:
      </p>
    <taglist>
      <tag><c>{Mod, Case}</c></tag>
      <item>This specifies the test case <c>Mod:Case/1</c>.
      </item>
      <tag><c>{dir, Dir}</c></tag>
      <item>This specifies all modules <c>*_SUITE</c> in the directory
      <c>Dir</c>.</item>
      <tag><c>{dir, Dir, Pattern}</c></tag>
      <item>This specifies all modules <c>Pattern*</c> in the
       directory <c>Dir</c>.</item>
      <tag><c>{conf, Init, TestSpec, Fin}</c></tag>
      <item>This is a configuration case. In a test specification
       file, <c>Init</c> and <c>Fin</c> must be
      <c>{Mod,Func}</c>. Inside a module they can also be just
      <c>Func</c>. See the section named Configuration Cases below for
       more information about this.
      </item>
      <tag><c>{conf, Properties, Init, TestSpec, Fin}</c></tag>
      <item>This is a configuration case as explained above, but
       which also takes a list of execution properties for its group
       of test cases and nested sub-groups.
      </item>
      <tag><c>{make, Init, TestSpec, Fin}</c></tag>
      <item>This is a special version of a conf case which is only
       used by the test server framework <c>ts</c>. <c>Init</c> and
      <c>Fin</c> are make and unmake functions for a data
       directory. <c>TestSpec</c> is the test specification for the
       test suite owning the data directory in question. If the make
       function fails, all tests in the test suite are skipped. The
       difference between this "make case" and a normal conf case is
       that for the make case, <c>Init</c> and <c>Fin</c> are given with
       arguments (<c>{Mod,Func,Args}</c>), and that they are executed
       on the controller node (i.e. not on target).
      </item>
      <tag><c>Case</c></tag>
      <item>This can only be used inside a module, i.e. not in a test
       specification file. It specifies the test case
      <c>CurrentModule:Case</c>.
      </item>
    </taglist>
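    <p>For illustration, a test specification inside a test suite
      module, combining some of these elements (all module, function,
      and directory names are hypothetical), could look as follows:
      </p>
    <pre>
      [{my_SUITE, simple_case},
       {dir, "../other_test"},
       {conf, init_db, [{db_SUITE, read}, {db_SUITE, write}], stop_db}]</pre>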
  </section>

  <section>
    <title>Test Specification Files</title>
    <p>A test specification file is a text file containing the top
      level test specification (a topcase command), and possibly one or
      more additional commands. A "command" in a test specification file
      means a key-value tuple ended by a dot-newline sequence.
      </p>
    <p>The following commands are valid:
      </p>
    <taglist>
      <tag><c>{topcase, TestSpec}</c></tag>
      <item>This command is mandatory in all test specification
       files. <c>TestSpec</c> is the top level test specification of a
       test.
      </item>
      <tag><c>{skip, {Mod, Comment}}</c></tag>
      <item>This specifies that all cases in the module <c>Mod</c>
       shall be skipped. <c>Comment</c> is a string.
      </item>
      <tag><c>{skip, {Mod, Case, Comment}}</c></tag>
      <item>This specifies that the case <c>Mod:Case</c> shall be
       skipped.
      </item>
      <tag><c>{skip, {Mod, CaseList, Comment}}</c></tag>
      <item>This specifies that all cases <c>Mod:Case</c>, where
      <c>Case</c> is in <c>CaseList</c>, shall be skipped.
      </item>
      <tag><c>{nodes, Nodes}</c></tag>
      <item><c>Nodes</c> is a list of node names (atoms) available to
       the test suite. It will be added to the <c>Config</c> argument
       to all test cases.
      </item>
      <tag><c>{require_nodenames, Num}</c></tag>
      <item>Specifies how many nodenames the test suite will
       need. These will be automatically generated and inserted into the
      <c>Config</c> argument to all test cases. <c>Num</c> is an
       integer.
      </item>
      <tag><c>{hosts, Hosts}</c></tag>
      <item>This is a list of available hosts on which to start slave
       nodes. It is used when the <c>{remote, true}</c> option is given
       to the <c>test_server:start_node/3</c> function. Also, if
      <c>{require_nodenames, Num}</c> is contained in a test
       specification file, the generated nodenames will be spread over
       all hosts given in this <c>Hosts</c> list. The hostnames are
       atoms or strings.
      </item>
      <tag><c>{diskless, true}</c></tag>
      <item>Adds <c>{diskless, true}</c> to the <c>Config</c> argument
       to all test cases. This is kept for backwards compatibility and
       should not be used. Use a configuration case instead.
      </item>
      <tag><c>{ipv6_hosts, Hosts}</c></tag>
      <item>Adds <c>{ipv6_hosts, Hosts}</c> to the <c>Config</c>
       argument to all test cases.</item>
    </taglist>
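    <p>As an illustration, a small test specification file (with
      hypothetical suite, case, and directory names) could contain:
      </p>
    <pre>
      {topcase, {dir, "../my_app_test"}}.
      {skip, {my_SUITE, not_implemented_case, "Not yet implemented"}}.
      {require_nodenames, 2}.</pre>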
    <p>All test specification files shall have the extension
      ".spec". If special test specification files are needed for
      the Windows or VxWorks platforms, additional files with the
      extensions ".spec.win" and ".spec.vxworks" shall be
      used. This is useful e.g. if some test cases shall be skipped on
      these platforms.
      </p>
    <p>Some examples for test specification files can be found in the
      Examples section of this user's guide.
      </p>
  </section>

  <section>
    <title>Configuration cases</title>
    <p>If a group of test cases needs the same initialization, a
      so-called <em>configuration</em> or <em>conf</em> case can be
      used. A conf case consists of an initialization function, the
      group of test cases needing this initialization, and a cleanup
      or finalization function.
      </p>
    <p>If the init function in a conf case fails or returns
      <c>{skip,Comment}</c>, the rest of the test cases in the conf case
      (including the cleanup function) are skipped. If the init function
      succeeds, the cleanup function will always be called, even if some
      of the test cases in between failed.
      </p>
    <p>Both the init function and the cleanup function in a conf case
      get the <c>Config</c> parameter as their only argument. This parameter
      can be modified or returned as is. Whatever is returned by the
      init function is given as <c>Config</c> parameter to the rest of
      the test cases in the conf case, including the cleanup function.
      </p>
    <p>If the <c>Config</c> parameter is changed by the init function,
      it must be restored by the cleanup function. Whatever is returned
      by the cleanup function will be given to the next test case called.
      </p>
    <p>The optional <c>Properties</c> list can be used to specify
      execution properties for the test cases and possibly nested
      sub-groups of the configuration case. The available properties are:</p>
    <pre>
      Properties = [parallel | sequence | Shuffle | {RepeatType,N}]
      Shuffle = shuffle | {shuffle,Seed}
      Seed = {integer(),integer(),integer()}
      RepeatType = repeat | repeat_until_all_ok | repeat_until_all_fail |
                   repeat_until_any_ok | repeat_until_any_fail
      N = integer() | forever</pre>

    <p>If the <c>parallel</c> property is specified, Test Server will execute
    all test cases in the group in parallel. If <c>sequence</c> is specified,
    the cases will be executed in a sequence, meaning if one case fails, all
    following cases will be skipped. If <c>shuffle</c> is specified, the cases
    in the group will be executed in random order. The <c>repeat</c> property
    orders Test Server to repeat execution of the cases in the group a given
    number of times, or until any, or all, cases fail or succeed.</p>

    <p>Properties may be combined so that e.g. if <c>shuffle</c>, 
    <c>repeat_until_any_fail</c> and <c>sequence</c> are all specified, the test 
    cases in the group will be executed repeatedly and in random order until
    a test case fails, at which point execution is immediately stopped and
    the rest of the cases skipped.</p>
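    <p>As a hypothetical sketch, a conf case that executes its group of
    test cases in parallel, and repeats the group until a case fails,
    could be specified like this inside a test suite module:</p>
    <pre>
      {conf, [parallel, repeat_until_any_fail],
       init_group, [case1, case2, case3], end_group}</pre>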

    <p>The properties for a conf case are always printed at the top of the 
    HTML log for the group's init function. Also, the total execution time for 
    a conf case can be found at the bottom of the log for the group's end 
    function.</p>

    <p>Configuration cases may be nested so that sets of grouped cases can be 
    configured with the same init and end functions.</p>
  </section>

  <section>
    <title>The parallel property and nested configuration cases</title>
    <p>If a conf case has a parallel property, its test cases will be spawned
    simultaneously and executed in parallel. However, a test case is not
    allowed to execute in parallel with the end function, which means
    that the time it takes to execute a set of parallel cases is equal to the
    execution time of the slowest test case in the group. A negative side
    effect of running test cases in parallel is that the HTML summary pages
    are not updated with links to the individual test case logs until the 
    end function for the conf case has finished.</p>

    <p>A conf case nested under a parallel conf case will start executing in 
    parallel with previous (parallel) test cases, no matter what properties the 
    nested conf case has. However, since test cases are never executed in 
    parallel with the init or end function of their own conf case, it is only 
    after a nested group of cases has finished that any remaining parallel 
    cases in the previous conf case get spawned.</p>
  </section>

  <section>
    <title>Repeated execution of test cases</title>
    <marker id="repeated_cases"></marker>
    <p>A conf case may be repeated a certain number of times
    (specified by an integer) or indefinitely (specified by <c>forever</c>).
    The repetition may also be stopped prematurely if any or all cases
    fail or succeed, i.e. if the property <c>repeat_until_any_fail</c>,
    <c>repeat_until_any_ok</c>, <c>repeat_until_all_fail</c>, or 
    <c>repeat_until_all_ok</c> is used. If the basic <c>repeat</c>
    property is used, status of test cases is irrelevant for the repeat 
    operation.</p>
    
    <p>It is possible to return the status of a conf case (ok or
    failed), to affect the execution of the conf case on the level above. 
    This is accomplished by, in the end function, looking up the value
    of <c>tc_group_result</c> in the <c>Config</c> list and checking the
    result of the finished test cases. If status <c>failed</c> should be
    returned from the conf case as a result, the end function should return
    the value <c>{return_group_result,failed}</c>. The status of a nested conf
    case is taken into account by Test Server when deciding if execution
    should be repeated or not (unless the basic <c>repeat</c> property is used).</p>

    <p>The <c>tc_group_result</c> value is a list of status tuples with 
    the keys <c>ok</c>, <c>skipped</c>, and <c>failed</c>. The
    value of each status tuple is a list containing the names of test cases 
    that have been executed with the corresponding status as result.</p>

    <p>Here's an example of how to return the status from a conf case:</p>
    <pre>
      conf_end_function(Config) ->
          Status = ?config(tc_group_result, Config),
          case proplists:get_value(failed, Status) of
              [] ->                          % no failed cases
                  {return_group_result,ok};
              _Failed ->                     % one or more failed
                  {return_group_result,failed}
          end.</pre>

    <p>It is also possible in the end function to check the status of
    a nested conf case (for example, to determine what status the current 
    conf case should return). This works as in the example above, except that 
    the name of the end function of the nested conf case is stored in a tuple 
    <c>{group_result,EndFunc}</c>, which can be searched for in the status lists. 
    Example:</p>
    <pre>
      conf_end_function_X(Config) ->
          Status = ?config(tc_group_result, Config),
          Failed = proplists:get_value(failed, Status),
          case lists:member({group_result,conf_end_function_Y}, Failed) of
              true ->
                  {return_group_result,failed};
              false ->
                  {return_group_result,ok}
          end;
      ...</pre>

    <note><p>When a conf case is repeated, the init- and end functions
      are also always called with each repetition.</p></note>
  </section>

  <section>
    <title>Shuffled test case order</title>
    <p>The order in which test cases in a conf case are executed is, under
    normal circumstances, the same as the order defined in the test
    specification. With the <c>shuffle</c> property set, however, Test Server 
    will instead execute the test cases in random order.</p>

    <p>The user may provide a seed value (a tuple of three integers) with
    the shuffle property: <c>{shuffle,Seed}</c>. This way, the same shuffling
    order can be created every time the conf case is executed. If no seed value
    is given, Test Server creates a "random" seed for the shuffling operation 
    (using the return value of <c>erlang:now()</c>). The seed value is always
    printed to the log file of the init function so that it can be used to
    recreate the same execution order in subsequent test runs.</p>
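    <p>A sketch of a conf case with a fixed shuffle seed (the function and
    case names are hypothetical) could look like this:</p>
    <pre>
      {conf, [{shuffle,{124,16,2946}}],
       init_group, [case1, case2, case3], end_group}</pre>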

    <note><p>If execution of a conf case with shuffled test cases is repeated,
      the seed will not be reset in between turns.</p></note>

    <p>If a nested conf case is specified in a conf case with a <c>shuffle</c> 
    property, the execution order of the nested conf case in relation to the 
    test cases (and other conf cases) is also random. The order of the test 
    cases within the nested conf case is, however, not random (unless, of 
    course, that conf case also has a <c>shuffle</c> property).</p>
  </section>

  <section>
    <title>Skipping test cases</title>
    <p>It is possible to skip certain test cases, for example if you
      know beforehand that a specific test case fails. This might be
      functionality which isn't yet implemented, a bug that is known but
      not yet fixed, or some functionality which doesn't work or isn't
      applicable on a specific platform.
      </p>
    <p>There are several different ways to state that a test case
      should be skipped:</p>
    <list type="bulleted">
      <item>Using the <c>{skip,What}</c> command in a test
       specification file
      </item>
      <item>Returning <c>{skip,Reason}</c> from the
      <c>init_per_testcase/2</c> function (see the sketch below)
      </item>
      <item>Returning <c>{skip,Reason}</c> from the specification
       clause of the test case
      </item>
      <item>Returning <c>{skip,Reason}</c> from the execution clause
       of the test case
      </item>
    </list>
    <p>The latter of course means that the execution clause is
      actually called, so the author must make sure that the test case
      is not run. For more information about the different clauses in a
      test case, see the chapter about writing test cases.
      </p>
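    <p>As a sketch, skipping one particular test case from
      <c>init_per_testcase/2</c> (the case name and reason are
      hypothetical) could look like this:
      </p>
    <pre>
      init_per_testcase(graphics_case, _Config) ->
          {skip, "No display available on this host"};
      init_per_testcase(_Case, Config) ->
          Config.</pre>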
    <p>When a test case is skipped, it will be noted as <c>SKIPPED</c>
      in the HTML log.
      </p>
  </section>
</chapter>