<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE erlref SYSTEM "erlref.dtd">

<erlref>
  <header>
    <copyright>
      <year>2007</year>
      <year>2013</year>
      <holder>Ericsson AB, All Rights Reserved</holder>
    </copyright>
    <legalnotice>
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
 
      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.

  The Initial Developer of the Original Code is Ericsson AB.
    </legalnotice>

    <title>test_server</title>
    <prepared>Siri Hansen</prepared>
    <responsible></responsible>
    <docno></docno>
    <approved></approved>
    <checked></checked>
    <date></date>
    <rev></rev>
    <file>test_server_ref.sgml</file>
  </header>
  <module>test_server</module>
  <modulesummary>This module provides support for test suite authors.</modulesummary>
  <description>
    <p>The <c>test_server</c> module aids the test suite author by providing
      various support functions. The supported functionality includes:
      </p>
    <list type="bulleted">
      <item>Logging and timestamping
      </item>
      <item>Capturing output to stdout
      </item>
      <item>Retrieving and flushing the message queue of a process
      </item>
      <item>Watchdog timers, process sleep, time measurement and unit
       conversion
      </item>
      <item>Private scratch directory for all test suites
      </item>
      <item>Start and stop of slave or peer nodes</item>
    </list>
    <p>For more information on how to write test cases and for
      examples, please see the Test Server User's Guide. 
      </p>
  </description>

  <section>
    <title>TEST SUITE SUPPORT FUNCTIONS</title>
    <p>The following functions are supposed to be used inside a test
      suite.
      </p>
  </section>
  <funcs>
    <func>
      <name>os_type() -> OSType</name>
      <fsummary>Returns the OS type of the target node</fsummary>
      <type>
        <v>OSType = term()</v>
        <d>This is the same as returned from <c>os:type/0</c></d>
      </type>
      <desc>
        <p>This function is equivalent to <c>os:type/0</c>. It is kept
        for backwards compatibility.</p>
      </desc>
    </func>
    <func>
      <name>fail()</name>
      <name>fail(Reason)</name>
      <fsummary>Makes the test case fail.</fsummary>
      <type>
        <v>Reason = term()</v>
        <d>The reason why the test case failed.</d>
      </type>
      <desc>
        <p>This will make the test case fail with the given reason, or
          with <c>suite_failed</c> if no reason was given. Use this
          function when you want to terminate a test case, as it makes
          the log and HTML files easier to read. <c>Reason</c>
          will appear in the comment field in the HTML log.</p>
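        <p>For example (a minimal sketch; <c>mymod:connect/0</c> is a
          hypothetical function under test):</p>
        <code type="none"><![CDATA[
case mymod:connect() of
    {ok,Conn}      -> Conn;
    {error,Reason} -> test_server:fail({connect_failed,Reason})
end]]></code>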
      </desc>
    </func>
    <func>
      <name>timetrap(Timeout) -> Handle</name>
      <fsummary>Sets a timetrap on the current process.</fsummary>
      <type>
        <v>Timeout = integer() | {hours,H} | {minutes,M} | {seconds,S}</v>
        <v>H = M = S = integer()</v>
      </type>
      <desc>
        <p>Sets up a time trap for the current process. An expired
          timetrap kills the process with reason
          <c>timetrap_timeout</c>. The returned handle is to be given
          as argument to <c>timetrap_cancel</c> before the timetrap
          expires.  If <c>Timeout</c> is an integer, it is expected to
          be milliseconds.</p>
        <note>
          <p>If the current process is trapping exits, it will not be killed
            by the exit signal with reason <c>timetrap_timeout</c>.
            If this happens, the process will be sent an exit signal
            with reason <c>kill</c> 10 seconds later which will kill the
            process. Information about the timetrap timeout will in
            this case not be found in the test logs. However, the
            error_logger will be sent a warning.</p>
        </note>
      </desc>
    </func>
    <func>
      <name>timetrap_cancel(Handle) -> ok</name>
      <fsummary>Cancels a timetrap.</fsummary>
      <type>
        <v>Handle = term()</v>
        <d>Handle returned from <c>timetrap</c></d>
      </type>
      <desc>
        <p>This function cancels a timetrap. This must be done before
          the timetrap expires.</p>
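        <p>A timetrap is typically set in <c>init_per_testcase/2</c> and
          cancelled in <c>end_per_testcase/2</c>. A minimal sketch (the
          <c>watchdog</c> key is just an example name):</p>
        <code type="none"><![CDATA[
init_per_testcase(_Case, Config) ->
    Dog = test_server:timetrap(test_server:minutes(5)),
    [{watchdog,Dog}|Config].

end_per_testcase(_Case, Config) ->
    Dog = ?config(watchdog, Config),
    test_server:timetrap_cancel(Dog),
    ok.]]></code>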
      </desc>
    </func>
    <func>
      <name>timetrap_scale_factor() -> ScaleFactor</name>
      <fsummary>Returns the scale factor for timeouts.</fsummary>
      <type>
        <v>ScaleFactor = integer()</v>
      </type>
      <desc>
        <p>This function returns the scale factor by which all timetraps
	are scaled. It is normally 1, but can be greater than 1 if the
	test_server is running <c>cover</c>, using more scheduler
	threads than there are logical processors on the system, or
	running under purify, valgrind or in a debug-compiled emulator.
	Use the scale factor if you need to scale your own timeouts in
	test cases by the same factor as the test_server uses.</p>
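        <p>For example, to scale a receive timeout of your own by the
          same factor (a minimal sketch):</p>
        <code type="none"><![CDATA[
Timeout = 5000 * test_server:timetrap_scale_factor(),
receive
    {result,Result} -> Result
after Timeout ->
    test_server:fail(no_result)
end]]></code>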
      </desc>
    </func>
    <func>
      <name>sleep(MSecs) -> ok</name>
      <fsummary>Suspends the calling process for a specified time.</fsummary>
      <type>
        <v>MSecs = integer() | float() | infinity</v>
        <d>The number of milliseconds to sleep</d>
      </type>
      <desc>
        <p>This function suspends the calling process for at least the
          supplied number of milliseconds. There are two major reasons
          why you should use this function instead of
          <c>timer:sleep</c>: first, the module <c>timer</c> may be
          unavailable at the time the test suite is run, and second,
          this function also accepts floating point numbers.</p>
      </desc>
    </func>
    <func>
      <name>adjusted_sleep(MSecs) -> ok</name>
      <fsummary>Suspends the calling process for a specified time.</fsummary>
      <type>
        <v>MSecs = integer() | float() | infinity</v>
        <d>The default number of milliseconds to sleep</d>
      </type>
      <desc>
        <p>This function suspends the calling process for at least the
          supplied number of milliseconds. The function behaves the same
	  way as <c>test_server:sleep/1</c>, except that <c>MSecs</c>
	  will be multiplied by the 'multiply_timetraps' value, if set,
	  and also automatically scaled up if 'scale_timetraps' is set
	  to true (which it is by default).</p>
      </desc>
    </func>
    <func>
      <name>hours(N) -> MSecs</name>
      <name>minutes(N) -> MSecs</name>
      <name>seconds(N) -> MSecs</name>
      <fsummary>Converts hours, minutes or seconds to milliseconds.</fsummary>
      <type>
        <v>N = integer()</v>
        <d>Value to convert to milliseconds.</d>
      </type>
      <desc>
        <p>These functions convert <c>N</c> hours, minutes
          or seconds into milliseconds.
          </p>
        <p>Use these functions when you want
          <c>test_server:sleep/1</c> to sleep for a number of seconds,
          minutes or hours.</p>
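        <p>For example (a minimal sketch):</p>
        <code type="none"><![CDATA[
%% Sleep for five minutes:
ok = test_server:sleep(test_server:minutes(5)),
%% Sleep for half a second:
ok = test_server:sleep(500)]]></code>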
      </desc>
    </func>
    <func>
      <name>format(Format) -> ok</name>
      <name>format(Format, Args)</name>
      <name>format(Pri, Format)</name>
      <name>format(Pri, Format, Args)</name>
      <fsummary>Formats output and sends it to a logfile.</fsummary>
      <type>
        <v>Format = string()</v>
        <d>Format as described for <c>io:format</c>.</d>
        <v>Args = list()</v>
        <d>List of arguments to format.</d>
      </type>
      <desc>
        <p>Formats output just like <c>io:format</c> but sends the
          formatted string to a logfile. If the urgency value,
          <c>Pri</c>, is lower than some threshold value, it will also
          be written to the test person's console. Default urgency is
          50, default threshold for display on the console is 1.
          </p>
        <p>Typically, the user doesn't want to see everything a
          test suite outputs, but is merely interested in whether the
          test cases succeeded or not, which the test server reports. To
          see more, the threshold values can be changed manually using
          the <c>test_server_ctrl:set_levels/3</c>
          function.</p>
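        <p>For example (a minimal sketch):</p>
        <code type="none"><![CDATA[
%% Written to the logfile with the default urgency (50):
test_server:format("Got result: ~p~n", [Result]),
%% Urgency 0 is below the default console threshold (1), so this
%% is also written to the console:
test_server:format(0, "Starting subtest: ~p~n", [Subtest])]]></code>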
      </desc>
    </func>
    <func>
      <name>capture_start() -> ok</name>
      <name>capture_stop() -> ok</name>
      <name>capture_get() -> list()</name>
      <fsummary>Captures all output to stdout for a process.</fsummary>
      <desc>
        <p>These functions make it possible to capture all output to
          stdout from a process started by the test suite. The list of
          captured characters is retrieved (and purged) with
          <c>capture_get</c>.</p>
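        <p>For example (a minimal sketch, assuming the captured output
          is returned as one string per output operation):</p>
        <code type="none"><![CDATA[
test_server:capture_start(),
io:format("hello~n"),
test_server:capture_stop(),
["hello\n"] = test_server:capture_get()]]></code>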
      </desc>
    </func>
    <func>
      <name>messages_get() -> list()</name>
      <fsummary>Empties and returns the message queue.</fsummary>
      <desc>
        <p>This function will empty and return all the messages
          currently in the calling process' message queue.</p>
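        <p>For example (a minimal sketch):</p>
        <code type="none"><![CDATA[
self() ! {msg,1},
self() ! {msg,2},
[{msg,1},{msg,2}] = test_server:messages_get()]]></code>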
      </desc>
    </func>
    <func>
      <name>timecall(M, F, A) -> {Time, Value}</name>
      <fsummary>Measures the time needed to call a function.</fsummary>
      <type>
        <v>M = atom()</v>
        <d>The name of the module where the function resides.</d>
        <v>F = atom()</v>
        <d>The name of the function to call in the module.</d>
        <v>A = list()</v>
        <d>The arguments to supply the called function.</d>
        <v>Time = integer()</v>
        <d>The number of seconds it took to call the function.</d>
        <v>Value = term()</v>
        <d>Value returned from the called function.</d>
      </type>
      <desc>
        <p>This function measures the time (in seconds) it takes to
          call a certain function. The function call is <em>not</em>
          caught within a catch.</p>
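        <p>For example (a minimal sketch):</p>
        <code type="none"><![CDATA[
{Time,_Value} = test_server:timecall(lists, seq, [1, 100000]),
test_server:format("lists:seq/2 took ~p seconds~n", [Time])]]></code>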
      </desc>
    </func>
    <func>
      <name>do_times(N, M, F, A) -> ok</name>
      <name>do_times(N, Fun)</name>
      <fsummary>Calls MFA or Fun N times.</fsummary>
      <type>
        <v>N = integer()</v>
        <d>Number of times to call MFA.</d>
        <v>M = atom()</v>
        <d>Module name where the function resides.</d>
        <v>F = atom()</v>
        <d>Function name to call.</d>
        <v>A = list()</v>
        <d>Arguments to M:F.</d>
      </type>
      <desc>
        <p>Calls MFA or Fun N times. Useful for extensive testing of a
          sensitive function.</p>
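        <p>For example (a minimal sketch; <c>mymod:roundtrip/1</c> is a
          hypothetical function under test):</p>
        <code type="none"><![CDATA[
%% Call mymod:roundtrip("ping") a hundred times:
ok = test_server:do_times(100, mymod, roundtrip, ["ping"]),
%% The same, with a fun:
test_server:do_times(100, fun() -> mymod:roundtrip("ping") end)]]></code>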
      </desc>
    </func>
    <func>
      <name>m_out_of_n(M, N, Fun) -> ok | exit({m_out_of_n_failed, {R,left_to_do}})</name>
      <fsummary>Fault tolerant <c>do_times</c>.</fsummary>
      <type>
        <v>N = integer()</v>
        <d>Number of times to call the Fun.</d>
        <v>M = integer()</v>
        <d>Number of times to require a successful return.</d>
      </type>
      <desc>
        <p>Repeatedly evaluates the given function until it succeeds
          (doesn't crash) M times. If, after N attempts, M successful
          attempts have not been accomplished, the process crashes with
          reason {m_out_of_n_failed, {R,left_to_do}}, where R indicates
          how many successful attempts were still left to be done.
          </p>
        <p>For example:
          </p>
        <p><c>m_out_of_n(1,4,fun() -> tricky_test_case() end)</c>          <br></br>
Tries to run tricky_test_case() up to 4 times, and is
          happy if it succeeds once.
          </p>
        <p><c>m_out_of_n(7,8,fun() -> clock_sanity_check() end)</c>          <br></br>
Tries running clock_sanity_check() up to 8 times, and
          allows the function to fail once. This might be useful if
          clock_sanity_check/0 is known to fail if the clock crosses an
          hour boundary during the test (and the up to 8 test runs could
          never cross 2 boundaries).</p>
      </desc>
    </func>
    <func>
      <name>call_crash(M, F, A) -> Result</name>
      <name>call_crash(Time, M, F, A) -> Result</name>
      <name>call_crash(Time, Crash, M, F, A) -> Result</name>
      <fsummary>Calls MFA and succeeds if it crashes.</fsummary>
      <type>
        <v>Result = ok | exit(call_crash_timeout) | exit({wrong_crash_reason, Reason})</v>
        <v>Crash = term()</v>
        <d>Crash return from the function.</d>
        <v>Time = integer()</v>
        <d>Timeout in milliseconds.</d>
        <v>M = atom()</v>
        <d>Module name where the function resides.</d>
        <v>F = atom()</v>
        <d>Function name to call.</d>
        <v>A = list()</v>
        <d>Arguments to M:F.</d>
      </type>
      <desc>
        <p>Spawns a new process that calls MFA. The call is considered
          successful if it crashes with the given reason
          (<c>Crash</c>), or with any reason if none is specified. The
          call must terminate within the given time (default
          <c>infinity</c>), or it is considered a failure.</p>
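        <p>For example (a minimal sketch; <c>mymod:must_crash/1</c> is a
          hypothetical function expected to exit with reason
          <c>badarg</c>):</p>
        <code type="none"><![CDATA[
ok = test_server:call_crash(5000, badarg, mymod, must_crash, [some_arg])]]></code>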
      </desc>
    </func>
    <func>
      <name>temp_name(Stem) -> Name</name>
      <fsummary>Returns a unique filename.</fsummary>
      <type>
        <v>Stem = string()</v>
      </type>
      <desc>
        <p>Returns a unique filename starting with <c>Stem</c> with
          enough extra characters appended to make up a unique
          filename. The filename returned is guaranteed not to exist in
          the filesystem at the time of the call.</p>
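        <p>For example (a minimal sketch):</p>
        <code type="none"><![CDATA[
PrivDir = ?config(priv_dir, Config),
TmpFile = test_server:temp_name(filename:join(PrivDir, "tmp.")),
ok = file:write_file(TmpFile, <<"scratch data">>)]]></code>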
      </desc>
    </func>
    <func>
      <name>break(Comment) -> ok</name>
      <fsummary>Cancel all timetraps and wait for call to continue/0.</fsummary>
      <type>
        <v>Comment = string()</v>
      </type>
      <desc>
        <p><c>Comment</c> is a string which will be written in
          the shell, e.g. explaining what to do.</p>
        <p>This function will cancel all timetraps and pause the
          execution of the test case until the user executes the
          <c>continue/0</c> function. It gives the user the opportunity
          to interact with the erlang node running the tests, e.g. for
          debugging purposes or for manually executing a part of the
          test case.</p>
        <p>When the <c>break/1</c> function is called, the shell will
          look something like this:</p>
        <code type="none"><![CDATA[
   --- SEMIAUTOMATIC TESTING ---
   The test case executes on process <0.51.0>


   "Here is a comment, it could e.g. instruct to pull out a card"


   -----------------------------

   Continue with --> test_server:continue().        ]]></code>
        <p>The user can now interact with the erlang node, and when
          ready call <c>test_server:continue().</c></p>
        <p>Note that this function cannot be used if the test is
          executed with <c>ts:run/0/1/2/3/4</c> in <c>batch</c> mode.</p>
      </desc>
    </func>
    <func>
      <name>continue() -> ok</name>
      <fsummary>Continue after break/1.</fsummary>
      <desc>
        <p>This function must be called in order to continue after a
          test case has called <c>break/1</c>.</p>
      </desc>
    </func>
    <func>
      <name>run_on_shielded_node(Fun, CArgs) -> term()</name>
      <fsummary>Executes a function on a shielded node.</fsummary>
      <type>
        <v>Fun = function() (arity 0)</v>
        <d>Function to execute on the shielded node.</d>
        <v>CArgs = string()</v>
        <d>Extra command line arguments to use when starting the shielded node.</d>
      </type>
      <desc>
        <p><c>Fun</c> is executed in a process on a temporarily created
          hidden node with a proxy for communication with the test server
          node. The node is called a shielded node (it should arguably have
          been called a shield node). If <c>Fun</c> is successfully executed,
          the result is returned. A peer node (see <c>start_node/3</c>)
          started from the shielded node will be shielded from the test
          server node, i.e. they will not be aware of each other. This is
          useful when you want to start nodes of earlier OTP releases than
          the OTP release of the test server node.</p>
        <p>Nodes from an earlier OTP release can normally not be started
          if the test server hasn't been started in compatibility mode
          (see the <c>+R</c> flag in the <c>erl(1)</c> documentation) of
          an earlier release. If a shielded node is started in compatibility
          mode of an earlier OTP release than the OTP release of the test
          server node, the shielded node can start nodes of an earlier OTP
          release.</p>
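        <p>For example (a minimal sketch):</p>
        <code type="none"><![CDATA[
Result = test_server:run_on_shielded_node(
           fun() ->
                   %% This runs on the shielded node; start peer
                   %% nodes of earlier releases from here.
                   node()
           end, "")]]></code>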
        <note>
          <p>You <em>must</em> make sure that nodes started by the shielded
            node never communicate directly with the test server node.</p>
        </note>
        <note>
          <p>Slave nodes always communicate with the test server node;
            therefore, <em>never</em> start <em>slave nodes</em> from the
            shielded node, <em>always</em> start <em>peer nodes</em>.</p>
        </note>
      </desc>
    </func>
    <func>
      <name>start_node(Name, Type, Options) -> {ok, Node} | {error, Reason}</name>
      <fsummary>Start a node.</fsummary>
      <type>
        <v>Name = atom() | string()</v>
        <d>Name of the node to start (as given to -sname or -name)</d>
        <v>Type = slave | peer</v>
        <d>The type of node to start.</d>
        <v>Options = [{atom(), term()}]</v>
        <d>List of option tuples</d>
      </type>
      <desc>
        <p>This function starts a node, possibly on a remote machine,
          and guarantees cross architecture transparency. <c>Type</c> is
          set to either <c>slave</c> or <c>peer</c>.
          </p>
        <p><c>slave</c> means that the new node will have a master,
          i.e. the slave node will terminate if the master terminates,
          TTY output produced on the slave will be sent back to the
          master node and file I/O is done via the master. The master is
          normally the target node unless the target is itself a slave.
          </p>
        <p><c>peer</c> means that the new node is an independent node
          with no master.
          </p>
        <p><c>Options</c> is a list of tuples which can contain one or
          more of the following:
          </p>
        <taglist>
          <tag><c>{remote, true}</c></tag>
          <item>Start the node on a remote host. If not specified, the
           node will be started on the local host.  Test cases that
           require a remote host will fail with a reasonable comment if
           no remote hosts are available at the time they are run.
          </item>
          <tag><c>{args, Arguments}</c></tag>
          <item>Arguments passed directly to the node. This is
           typically a string appended to the command line.
          </item>
          <tag><c>{wait, false}</c></tag>
          <item>Don't wait until the node is up. By default, this
           function does not return until the node is up and running,
           but this option makes it return as soon as the node start
           command is given.
                    <br></br>
Only valid for peer nodes.
          </item>
          <tag><c>{fail_on_error, false}</c></tag>
          <item>Returns <c>{error, Reason}</c> rather than failing the
           test case.
                    <br></br>
Only valid for peer nodes. Note that slave nodes always
           act as if they had <c>fail_on_error=false</c>.</item>
          <tag><c>{erl, ReleaseList}</c></tag>
          <item>Use an Erlang emulator determined by ReleaseList when
           starting nodes, instead of the same emulator as the test
           server is running. ReleaseList is a list of specifiers,
           where a specifier is either {release, Rel}, {prog, Prog}, or
           'this'. Rel is either the name of a release, e.g., "r12b_patched"
           or 'latest'. 'this' means using the same emulator as the test
           server. Prog is the name of an emulator executable.  If the
           list has more than one element, one of them is picked
           randomly.  (Only works on Solaris and Linux, and the test server 
           gives warnings when it notices that nodes are not of the same
           version as itself.)
                    <br></br>
          <br></br>

           When specifying this option to run a previous release, use
           the <c>is_release_available/1</c> function to test if the given
           release is available, and skip the test case if it is not.
                    <br></br>
          <br></br>

           In order to avoid compatibility problems (may not appear right
           away), use a shielded node (see <c>run_on_shielded_node/2</c>)
           when starting nodes from different OTP releases than the test
           server.
          </item>
          <tag><c>{cleanup, false}</c></tag>
          <item>Tells the test server not to kill this node if it is
           still alive after the test case is completed. This is useful
           if the same node is to be used by a group of test cases.
          </item>
          <tag><c>{env, Env}</c></tag>
          <item><c>Env</c> should be a list of tuples <c>{Name, Val}</c>,
           where <c>Name</c> is the name of an environment variable, and
          <c>Val</c> is the value it is to have in the started node.
           Both <c>Name</c> and <c>Val</c> must be strings. The one
           exception is <c>Val</c> being the atom <c>false</c> (in
           analogy  with  <c>os:getenv/1</c>),  which  removes  the
           environment variable. Only valid for peer nodes. Not
           available on VxWorks.</item>
	  <tag><c>{start_cover, false}</c></tag>
	  <item>By default the test server will start cover on all nodes
	   when the test is run with code coverage analysis. To make
	   sure cover is not started on a new node, set this option to
	   <c>false</c>. This can be necessary if the connection to
	   the node at some point will be broken but the node is
	   expected to stay alive. The reason is that a remote cover
	   node can not continue to run without its main node. Another
	   solution would be to explicitly stop cover on the node
	   before breaking the connection, but in some situations (if
	   old code resides in one or more processes) this is not
	   possible.</item>
        </taglist>
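        <p>For example, starting a peer node on the local host with an
          extra code path (a minimal sketch):</p>
        <code type="none"><![CDATA[
{ok,Node} = test_server:start_node(my_peer, peer,
                                   [{args, "-pa ../ebin"}]),
%% ... run the test against Node ...
true = test_server:stop_node(Node)]]></code>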
      </desc>
    </func>
    <func>
      <name>stop_node(NodeName) -> bool()</name>
      <fsummary>Stops a node</fsummary>
      <type>
        <v>NodeName = term()</v>
        <d>Name of the node to stop</d>
      </type>
      <desc>
        <p>This function stops a node previously started with
          <c>start_node/3</c>. Use this function to stop any node you
          start; otherwise the test server will produce a warning message
          in the test logs and kill the node automatically, unless it was
          started with the <c>{cleanup, false}</c> option.</p>
      </desc>
    </func>
    <func>
      <name>is_commercial() -> bool()</name>
      <fsummary>Tests whether the emulator is commercially supported</fsummary>
      <desc>
        <p>This function tests whether the emulator is a commercially
	supported emulator. Tests run on a commercially supported emulator
	can be more stringent (for instance, a commercial release should
	always contain documentation for all applications).</p>
      </desc>
    </func>

    <func>
      <name>is_release_available(Release) -> bool()</name>
      <fsummary>Tests whether a release is available</fsummary>
      <type>
        <v>Release = string() | atom()</v>
        <d>Release to test for</d>
      </type>
      <desc>
        <p>This function tests whether the release given by
          <c>Release</c> (for instance, "r12b_patched") is available
          on the computer that the test_server controller is running on.
          Typically, you should skip the test case if it is not.</p>
        <p>Caution: This function may not be called from the <c>suite</c>
          clause of a test case, as the test_server will deadlock.</p>
      </desc>
    </func>
    <func>
      <name>is_native(Mod) -> bool()</name>
      <fsummary>Checks whether the module is natively compiled or not</fsummary>
      <type>
        <v>Mod = atom()</v>
        <d>A module name</d>
      </type>
      <desc>
        <p>Checks whether the module is natively compiled or not.</p>
      </desc>
    </func>
    <func>
      <name>app_test(App) -> ok | test_server:fail()</name>
      <name>app_test(App,Mode)</name>
      <fsummary>Checks an application's .app file for obvious errors</fsummary>
      <type>
        <v>App = term()</v>
        <d>The name of the application to test</d>
        <v>Mode = pedantic | tolerant</v>
        <d>Default is pedantic</d>
      </type>
      <desc>
        <p>Checks an application's .app file for obvious errors.
          The following is checked:
          </p>
        <list type="bulleted">
          <item>required fields
          </item>
          <item>that all specified modules actually exist
          </item>
          <item>that all required applications exist
          </item>
          <item>that no module included in the application has export_all
          </item>
          <item>that all modules in the ebin/ dir are included (if
          <c>Mode==tolerant</c> this only produces a warning, as not all
           modules have to be included)</item>
        </list>
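        <p>For example (a minimal sketch):</p>
        <code type="none"><![CDATA[
ok = test_server:app_test(kernel),
ok = test_server:app_test(mnesia, tolerant)]]></code>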
      </desc>
    </func>
    <func>
      <name>appup_test(App) -> ok | test_server:fail()</name>
      <fsummary>Checks an application's .appup file for obvious errors</fsummary>
      <type>
        <v>App = term()</v>
        <d>The name of the application to test</d>
      </type>
      <desc>
        <p>Checks an application's .appup file for obvious errors.
          The following is checked:
          </p>
        <list type="bulleted">
          <item>syntax
          </item>
          <item>that .app file version and .appup file version match
          </item>
          <item>for non-library applications: validity of high-level upgrade
           instructions, specifying no instructions is explicitly allowed
           (in this case the application is not upgradeable)</item>
          <item>for library applications: that there is exactly one wildcard
           regexp clause restarting the application when upgrading or
           downgrading from any version</item>
        </list>
      </desc>
    </func>
    <func>
      <name>comment(Comment) -> ok</name>
      <fsummary>Print a comment on the HTML result page</fsummary>
      <type>
        <v>Comment = string()</v>
      </type>
      <desc>
        <p>The given <c>Comment</c> string will occur in the comment
          field of the table on the HTML result page. If called several
          times, only the last comment is printed. <c>comment/1</c> is
          also overwritten by the return value <c>{comment,Comment}</c>
          from a test case or by <c>fail/1</c> (which prints
          <c>Reason</c> as a comment).</p>
      </desc>
    </func>
  </funcs>

  <section>
    <title>TEST SUITE EXPORTS</title>
    <p>The following functions must be exported from a test suite
      module.
      </p>
  </section>
  <funcs>
    <func>
      <name>all(suite) -> TestSpec | {skip, Comment}</name>
      <fsummary>Returns the module's test specification</fsummary>
      <type>
        <v>TestSpec = list()</v>
        <v>Comment = string()</v>
        <d>This comment will be printed on the HTML result page</d>
      </type>
      <desc>
        <p>This function must return the test specification for the
          test suite module. The syntax of a test specification is
          described in the Test Server User's Guide.</p>
      </desc>
    </func>
    <func>
      <name>init_per_suite(Config0) -> Config1 | {skip, Comment}</name>
      <fsummary>Test suite initialization</fsummary>
      <type>
        <v>Config0 = Config1 = [tuple()]</v>
        <v>Comment = string()</v>
	<d>Describes why the suite is skipped</d>
      </type>
      <desc>
        <p>This function is called before all other test cases in the
          suite. <c>Config</c> is the configuration which can be modified
          here. Whatever is returned from this function is given as
          <c>Config</c> to the test cases.
          </p>
        <p>If this function fails, all test cases in the suite will be
          skipped.</p>
      </desc>
    </func>
    <func>
      <name>end_per_suite(Config) -> void()</name>
      <fsummary>Test suite finalization</fsummary>
      <type>
        <v>Config = [tuple()]</v>
      </type>
      <desc>
        <p>This function is called after the last test case in the
          suite, and can be used to clean up whatever the test cases
          have done. The return value is ignored.</p>
      </desc>
    </func>
    <func>
      <name>init_per_testcase(Case, Config0) -> Config1 | {skip, Comment}</name>
      <fsummary>Test case initialization</fsummary>
      <type>
        <v>Case = atom()</v>
        <v>Config0 = Config1 = [tuple()]</v>
        <v>Comment = string()</v>
	<d>Describes why the test case is skipped</d>
      </type>
      <desc>
        <p>This function is called before each test case. The
          <c>Case</c> argument is the name of the test case, and
          <c>Config</c> is the configuration which can be modified
          here. Whatever is returned from this function is given as
          <c>Config</c> to the test case.</p>
      </desc>
    </func>
    <func>
      <name>end_per_testcase(Case, Config) -> void()</name>
      <fsummary>Test case finalization</fsummary>
      <type>
        <v>Case = atom()</v>
        <v>Config = [tuple()]</v>
      </type>
      <desc>
        <p>This function is called after each test case, and can be
          used to clean up whatever the test case has done. The return
          value is ignored.</p>
      </desc>
    </func>
    <func>
      <name>Case(doc) -> [Description]</name>
      <name>Case(suite) -> [] | TestSpec | {skip, Comment}</name>
      <name>Case(Config) -> {skip, Comment} | {comment, Comment} | Ok</name>
      <fsummary>A test case</fsummary>
      <type>
        <v>Description = string()</v>
        <d>Short description of the test case</d>
        <v>TestSpec = list()</v>
        <v>Comment = string()</v>
        <d>This comment will be printed on the HTML result page</d>
        <v>Ok = term()</v>
        <v>Config = [tuple()]</v>
        <d>Elements from the Config parameter can be read with the ?config macro, see section about test suite support macros</d>
      </type>
      <desc>
        <p>The <em>documentation clause</em> (argument <c>doc</c>) can
          be used for automatic generation of test documentation or test
          descriptions.
          </p>
        <p>The <em>specification clause</em> (argument <c>suite</c>)
          shall return an empty list, the test specification for the
          test case or <c>{skip,Comment}</c>. The syntax of a test
          specification is described in the Test Server User's Guide.
          </p>
        <p>The <em>execution clause</em> (argument <c>Config</c>) is
          only called if the specification clause returns an empty list.
          The execution clause is the real test case. Here you must call
          the functions you want to test, and do whatever you need to
          check the result. If something fails, make sure the process
          crashes or call <c>test_server:fail/0/1</c> (which also will
          cause the process to crash).
          </p>
        <p>You can return <c>{skip,Comment}</c> if you decide not to
          run the test case after all, e.g. if it is not applicable on
          this platform.
          </p>
        <p>You can return <c>{comment,Comment}</c> if you wish to
          print some information in the 'Comment' field on the HTML
          result page.
          </p>
        <p>If the execution clause returns anything else, it is
          considered a success, unless it is <c>{'EXIT',Reason}</c> or
          <c>{'EXIT',Pid,Reason}</c> which can't be distinguished from a
          crash, and thus will be considered a failure.
          </p>
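        <p>A minimal test case sketch showing the three clauses
          (<c>mymod:double/1</c> is a hypothetical function under
          test):</p>
        <code type="none"><![CDATA[
my_case(doc) -> ["Checks that mymod:double/1 doubles its argument"];
my_case(suite) -> [];
my_case(Config) when is_list(Config) ->
    4 = mymod:double(2),
    {comment, "double/1 ok"}.]]></code>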
        <p>A <em>conf test case</em> is a group of test cases with an
          init and a cleanup function. The init and cleanup functions
          are also test cases, but they have special rules:</p>
	  <list type="bulleted">
	  <item>They do not need a specification clause.</item>
	  <item>They must always have the execution clause.</item>
	  <item>They must return the <c>Config</c> parameter, a modified
          version of it or <c>{skip,Comment}</c> from the execution clause.</item>
	  <item>The cleanup function may also return a tuple 
	  <c>{return_group_result,Status}</c>, which is used to return the
	  status of the conf case to Test Server and/or to a conf case on a
	  higher level. (<c>Status = ok | skipped | failed</c>).</item>
	  <item><c>init_per_testcase</c> and <c>end_per_testcase</c> are
          not called before and after these functions.</item>
	  </list>
      </desc>
    </func>
  </funcs>


  <section>
    <title>TEST SUITE SUPPORT MACROS</title>
    <p>There are some macros defined in <c>test_server.hrl</c>
      that are quite useful for test suite programmers:
      </p>
    <p>The <em>config</em> macro is used to
      retrieve information from the <c>Config</c> variable sent to all
      test cases. It is used with two arguments; the first is the
      name of the configuration variable you wish to retrieve, and the
      second is the <c>Config</c> variable supplied to the test case
      from the test server.
      </p>
    <p>Possible configuration variables include:</p>
    <list type="bulleted">
      <item><c>data_dir</c>  - Data file directory.</item>
      <item><c>priv_dir</c>  - Scratch file directory.</item>
      <item><c>nodes</c>     - Nodes specified in the spec file.</item>
      <item><c>nodenames</c> - Generated nodenames.</item>
      <item>Whatever is added by conf test cases or
      <c>init_per_testcase/2</c>.</item>
    </list>
    <p>Examples of the <c>config</c> macro can be seen in the Examples chapter
      in the user's guide.</p>
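    <p>A typical use of the <c>config</c> macro, assuming the suite
      includes <c>test_server.hrl</c> (a minimal sketch):</p>
    <code type="none"><![CDATA[
-include_lib("test_server/include/test_server.hrl").

my_case(Config) when is_list(Config) ->
    PrivDir = ?config(priv_dir, Config),
    File = filename:join(PrivDir, "scratch.txt"),
    ok = file:write_file(File, <<"data">>).]]></code>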
    <p>The <em>line</em> and <em>line_trace</em> macros are deprecated, see
      below.</p>
  </section>

  <section>
    <title>TEST SUITE LINE NUMBERS</title>
    <p>In the past, ERTS did not produce line numbers when generating
      stacktraces, so test_server was unable to provide them when reporting
      test failures. Instead, it had two different mechanisms for this: the
      <c>line</c> macro and the <c>test_server_line</c> parse
      transform. Both are deprecated and should not be used in new tests
      anymore.</p>
  </section>
</erlref>