Diffstat (limited to 'lib/test_server/doc/src/test_server_ctrl.xml')
-rw-r--r--  lib/test_server/doc/src/test_server_ctrl.xml | 844 ----------------
1 file changed, 0 insertions(+), 844 deletions(-)
diff --git a/lib/test_server/doc/src/test_server_ctrl.xml b/lib/test_server/doc/src/test_server_ctrl.xml
deleted file mode 100644
index 2762997ece..0000000000
--- a/lib/test_server/doc/src/test_server_ctrl.xml
+++ /dev/null
@@ -1,844 +0,0 @@
-<?xml version="1.0" encoding="utf-8" ?>
-<!DOCTYPE erlref SYSTEM "erlref.dtd">
-
-<erlref>
- <header>
- <copyright>
- <year>2007</year>
- <year>2013</year>
- <holder>Ericsson AB, All Rights Reserved</holder>
- </copyright>
- <legalnotice>
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
- The Initial Developer of the Original Code is Ericsson AB.
- </legalnotice>
-
- <title>The Test Server Controller</title>
- <prepared>Siri Hansen, Peter Andersson</prepared>
- <responsible></responsible>
- <docno></docno>
- <approved></approved>
- <checked></checked>
- <date></date>
- <rev></rev>
- <file>test_server_ctrl_ref.sgml</file>
- </header>
- <module>test_server_ctrl</module>
- <modulesummary>This module provides a low-level interface to the Test Server.</modulesummary>
- <description>
- <p>The <c>test_server_ctrl</c> module provides a low-level
- interface to the Test Server. This interface is normally
- not used directly by the tester, but through a framework built
- on top of <c>test_server_ctrl</c>.
- </p>
- <p>Common Test is such a framework, well suited for automated
- black box testing of target systems of any kind (not necessarily
- implemented in Erlang). Common Test is also a very useful tool for
- white box testing Erlang programs and OTP applications.
- Please see the Common Test User's Guide and reference manual for
- more information.
- </p>
- <p>If you want to write your own framework, some more information
- can be found in the chapter "Writing your own test server
- framework" in the Test Server User's Guide. Details about the
- interface provided by <c>test_server_ctrl</c> follow below.
- </p>
- </description>
- <funcs>
- <func>
- <name>start() -> Result</name>
- <fsummary>Starts the test server.</fsummary>
- <type>
- <v>Result = ok | {error, {already_started, pid()}}</v>
- </type>
- <desc>
- <p>This function starts the test server.</p>
- </desc>
- </func>
- <func>
- <name>stop() -> ok</name>
- <fsummary>Stops the test server immediately.</fsummary>
- <desc>
- <p>This stops the test server and
- all its activity. The running test suite (if any) will be
- halted.</p>
- </desc>
- </func>
- <func>
- <name>add_dir(Name, Dir) -> ok</name>
- <name>add_dir(Name, Dir, Pattern) -> ok</name>
- <name>add_dir(Name, [Dir|Dirs]) -> ok</name>
- <name>add_dir(Name, [Dir|Dirs], Pattern) -> ok</name>
- <fsummary>Add a directory to the job queue.</fsummary>
- <type>
- <v>Name = term()</v>
- <d>The jobname for this directory.</d>
- <v>Dir = term()</v>
- <d>The directory to scan for test suites.</d>
- <v>Dirs = [term()]</v>
- <d>List of directories to scan for test suites.</d>
- <v>Pattern = term()</v>
- <d>Suite match pattern. Directories will be scanned for Pattern_SUITE.erl files.</d>
- </type>
- <desc>
- <p>Puts a collection of suites (modules matching <c>*_SUITE</c>) in the given
- directories into the job queue. <c>Name</c> is an arbitrary
- name for the job; it can be any Erlang term. If <c>Pattern</c>
- is given, only modules matching <c>Pattern*</c> will be added.</p>
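- <p>For illustration, the job and directory names below are
- hypothetical; any directory containing <c>*_SUITE.erl</c>
- files can be used:</p>
- <code type="none">
-ok = test_server_ctrl:start(),
-%% queue all *_SUITE modules found in one directory
-ok = test_server_ctrl:add_dir(my_job, "/ldisk/tests"),
-%% queue only suites matching app*_SUITE in two directories
-ok = test_server_ctrl:add_dir(app_job,
-                              ["/ldisk/tests/a", "/ldisk/tests/b"],
-                              "app").</code>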
- </desc>
- </func>
- <func>
- <name>add_module(Mod) -> ok</name>
- <name>add_module(Name, [Mod|Mods]) -> ok</name>
- <fsummary>Add a module to the job queue with or without a given name.</fsummary>
- <type>
- <v>Mod = atom()</v>
- <v>Mods = [atom()]</v>
- <d>The name(s) of the module(s) to add.</d>
- <v>Name = term()</v>
- <d>Name for the job.</d>
- </type>
- <desc>
- <p>This function adds a module, or a list of modules, to the
- test server's job queue. <c>Name</c> may be any Erlang
- term. When <c>Name</c> is not given, the job gets the name of
- the module.</p>
- </desc>
- </func>
- <func>
- <name>add_case(Mod, Case) -> ok</name>
- <fsummary>Adds one test case to the job queue.</fsummary>
- <type>
- <v>Mod = atom()</v>
- <d>Name of the module the test case is in.</d>
- <v>Case = atom() </v>
- <d>Function name of the test case to add.</d>
- </type>
- <desc>
- <p>This function will add one test case to the job queue. The
- job will be given the module's name.</p>
- </desc>
- </func>
- <func>
- <name>add_case(Name, Mod, Case) -> ok</name>
- <fsummary>Equivalent to add_case/2, but with specified name.</fsummary>
- <type>
- <v>Name = string()</v>
- <d>Name to use for the test job.</d>
- </type>
- <desc>
- <p>Equivalent to <c>add_case/2</c>, but the test job will get
- the specified name.</p>
- </desc>
- </func>
- <func>
- <name>add_cases(Mod, Cases) -> ok</name>
- <fsummary>Adds a list of test cases to the job queue.</fsummary>
- <type>
- <v>Mod = atom()</v>
- <d>Name of the module the test case is in.</d>
- <v>Cases = [Case] </v>
- <v>Case = atom() </v>
- <d>Function names of the test cases to add.</d>
- </type>
- <desc>
- <p>This function will add one or more test cases to the job
- queue. The job will be given the module's name.</p>
- </desc>
- </func>
- <func>
- <name>add_cases(Name, Mod, Cases) -> ok</name>
- <fsummary>Equivalent to add_cases/2, but with specified name.</fsummary>
- <type>
- <v>Name = string()</v>
- <d>Name to use for the test job.</d>
- </type>
- <desc>
- <p>Equivalent to <c>add_cases/2</c>, but the test job will get
- the specified name.</p>
- </desc>
- </func>
- <func>
- <name>add_spec(TestSpecFile) -> ok | {error, nofile}</name>
- <fsummary>Adds a test specification file to the job queue.</fsummary>
- <type>
- <v>TestSpecFile = string()</v>
- <d>Name of the test specification file</d>
- </type>
- <desc>
- <p>This function will add the content of the given test
- specification file to the job queue. The job will be given the
- name of the test specification file, e.g. if the file is
- called <c>test.spec</c>, the job will be called <c>test</c>.
- </p>
- <p>See the reference manual for the test server application
- for details about the test specification file.</p>
- </desc>
- </func>
- <func>
- <name>add_dir_with_skip(Name, [Dir|Dirs], Skip) -> ok</name>
- <name>add_dir_with_skip(Name, [Dir|Dirs], Pattern, Skip) -> ok</name>
- <name>add_module_with_skip(Mod, Skip) -> ok</name>
- <name>add_module_with_skip(Name, [Mod|Mods], Skip) -> ok</name>
- <name>add_case_with_skip(Mod, Case, Skip) -> ok</name>
- <name>add_case_with_skip(Name, Mod, Case, Skip) -> ok</name>
- <name>add_cases_with_skip(Mod, Cases, Skip) -> ok</name>
- <name>add_cases_with_skip(Name, Mod, Cases, Skip) -> ok</name>
- <fsummary>Same purpose as functions listed above, but with extra Skip argument.</fsummary>
- <type>
- <v>Skip = [SkipItem]</v>
- <d>List of items to be skipped from the test.</d>
- <v>SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment}</v>
- <v>Mod = atom()</v>
- <d>Test suite name.</d>
- <v>Comment = string()</v>
- <d>Reason why suite or case is being skipped.</d>
- <v>Cases = [Case]</v>
- <v>Case = atom()</v>
- <d>Name of test case function.</d>
- </type>
- <desc>
- <p>These functions add test jobs just like the add_dir, add_module,
- add_case and add_cases functions above, but carry an additional
- argument, Skip. Skip is a list of items that should be skipped
- in the current test run. Test job items that occur in the Skip
- list will be logged as SKIPPED with the associated Comment.</p>
- </desc>
- </func>
- <func>
- <name>add_tests_with_skip(Name, Tests, Skip) -> ok</name>
- <fsummary>Adds different types of jobs to the run queue.</fsummary>
- <type>
- <v>Name = term()</v>
- <d>The name of the test job.</d>
- <v>Tests = [TestItem]</v>
- <d>List of jobs to add to the run queue.</d>
- <v>TestItem = {Dir,all,all} | {Dir,Mods,all} | {Dir,Mod,Cases}</v>
- <v>Dir = term()</v>
- <d>The directory to scan for test suites.</d>
- <v>Mods = [Mod]</v>
- <v>Mod = atom()</v>
- <d>Test suite name.</d>
- <v>Cases = [Case]</v>
- <v>Case = atom()</v>
- <d>Name of test case function.</d>
- <v>Skip = [SkipItem]</v>
- <d>List of items to be skipped from the test.</d>
- <v>SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment}</v>
- <v>Comment = string()</v>
- <d>Reason why suite or case is being skipped.</d>
- </type>
- <desc>
- <p>This function adds various test jobs to the test_server_ctrl
- job queue. These jobs can be of different types (all or specific suites
- in one directory, all or specific cases in one suite, etc). It is also
- possible to get particular items skipped by passing them along in the
- Skip list (see the add_*_with_skip functions above).</p>
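- <p>For illustration, the following (hypothetical) call runs all
- suites in one directory and two specific suites in another,
- while skipping one of them:</p>
- <code type="none">
-Tests = [{"/ldisk/tests/app1", all, all},
-         {"/ldisk/tests/app2", [x_SUITE, y_SUITE], all}],
-Skip = [{y_SUITE, "Not applicable on this platform"}],
-ok = test_server_ctrl:add_tests_with_skip(my_job, Tests, Skip).</code>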
- </desc>
- </func>
- <func>
- <name>abort_current_testcase(Reason) -> ok | {error,no_testcase_running}</name>
- <fsummary>Aborts the test case currently executing.</fsummary>
- <type>
- <v>Reason = term()</v>
- <d>The reason for stopping the test case, which will be printed in the log.</d>
- </type>
- <desc>
- <p>When calling this function, the currently executing test case will be aborted.
- It is the user's responsibility to know for sure which test case is currently
- executing. The function is therefore only safe to call from a function which
- has been called (or synchronously invoked) by the test case.</p>
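- <p>A minimal sketch of safe usage: a helper function, called
- directly by the running test case, aborts that same case when a
- precondition fails (the helper name is hypothetical):</p>
- <code type="none">
-check_precondition(true) ->
-    ok;
-check_precondition(false) ->
-    %% aborts the test case that called this helper
-    test_server_ctrl:abort_current_testcase(precondition_failed).</code>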
- </desc>
- </func>
- <func>
- <name>set_levels(Console, Major, Minor) -> ok</name>
- <fsummary>Sets the levels of I/O.</fsummary>
- <type>
- <v>Console = integer()</v>
- <d>Level for I/O to be sent to console.</d>
- <v>Major = integer()</v>
- <d>Level for I/O to be sent to the major logfile.</d>
- <v>Minor = integer()</v>
- <d>Level for I/O to be sent to the minor logfile.</d>
- </type>
- <desc>
- <p>Determines where I/O from test suites/test server will
- go. All text output from test suites and the test server is
- tagged with a priority value which ranges from 0 to 100, where 100
- is the most detailed (see the section about log files in
- the Test Server User's Guide). Output from the test cases (using
- <c>io:format/2</c>) has a detail level of 50. Depending on the
- levels set by this function, this I/O may be sent to the
- console, the major log file (for the whole test suite) or to
- the minor logfile (separate for each test case).
- </p>
- <p>All output with detail level:</p>
- <list type="bulleted">
- <item>Less than or equal to <c>Console</c> is displayed on
- the screen (default 1)
- </item>
- <item>Less than or equal to <c>Major</c> is logged in the
- major log file (default 19)
- </item>
- <item>Greater than or equal to <c>Minor</c> is logged in the
- minor log files (default 10)
- </item>
- </list>
- <p>To view the currently set thresholds, use the
- <c>get_levels/0</c> function.</p>
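- <p>For example, to set the default levels explicitly and read
- them back:</p>
- <code type="none">
-ok = test_server_ctrl:set_levels(1, 19, 10),
-{1, 19, 10} = test_server_ctrl:get_levels().</code>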
- </desc>
- </func>
- <func>
- <name>get_levels() -> {Console, Major, Minor}</name>
- <fsummary>Returns the current levels.</fsummary>
- <desc>
- <p>Returns the current levels. See <c>set_levels/3</c> for
- types.</p>
- </desc>
- </func>
- <func>
- <name>jobs() -> JobQueue</name>
- <fsummary>Returns the job queue.</fsummary>
- <type>
- <v>JobQueue = [{list(), pid()}]</v>
- </type>
- <desc>
- <p>This function will return all the jobs currently in the job
- queue.</p>
- </desc>
- </func>
- <func>
- <name>multiply_timetraps(N) -> ok</name>
- <fsummary>All timetraps started after this will be multiplied by N.</fsummary>
- <type>
- <v>N = integer() | infinity</v>
- </type>
- <desc>
- <p>This function should be called before starting a test
- that requires extended timetraps, e.g. if extensive tracing
- is used. All timetraps started after this call will be
- multiplied by <c>N</c>.</p>
- </desc>
- </func>
- <func>
- <name>scale_timetraps(Bool) -> ok</name>
- <fsummary>Enables or disables automatic scaling of timetraps.</fsummary>
- <type>
- <v>Bool = true | false</v>
- </type>
- <desc>
- <p>This function should be called before a test is started.
- The parameter specifies whether test_server should attempt
- to automatically scale the timetrap value in order to compensate
- for delays caused by e.g. the cover tool.</p>
- </desc>
- </func>
- <func>
- <name>get_timetrap_parameters() -> {N,Bool} </name>
- <fsummary>Read the parameter values that affect timetraps.</fsummary>
- <type>
- <v>N = integer() | infinity</v>
- <v>Bool = true | false</v>
- </type>
- <desc>
- <p>This function may be called to read the values set by
- <c>multiply_timetraps/1</c> and <c>scale_timetraps/1</c>.</p>
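- <p>A small sketch combining the three timetrap functions above:</p>
- <code type="none">
-%% extend all subsequent timetraps by a factor of 10 and
-%% enable automatic scaling, then read the settings back
-ok = test_server_ctrl:multiply_timetraps(10),
-ok = test_server_ctrl:scale_timetraps(true),
-{10, true} = test_server_ctrl:get_timetrap_parameters().</code>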
- </desc>
- </func>
- <func>
- <name>cover(Application,Analyse) -> ok</name>
- <name>cover(CoverFile,Analyse) -> ok</name>
- <name>cover(App,CoverFile,Analyse) -> ok</name>
- <fsummary>Informs the test_server controller that next test shall run with code coverage analysis.</fsummary>
- <type>
- <v>Application = atom()</v>
- <d>OTP application to cover compile</d>
- <v>CoverFile = string()</v>
- <d>Name of file listing modules to exclude from or include in cover compilation. The filename must include full path to the file.</d>
- <v>Analyse = details | overview</v>
- </type>
- <desc>
- <p>This function informs the test_server controller that the next
- test shall run with code coverage analysis. All timetraps will
- automatically be multiplied by 10 when cover is run.
- </p>
- <p><c>Application</c> and <c>CoverFile</c> indicate what to
- cover compile. If <c>Application</c> is given, the default is
- that all modules in the <c>ebin</c> directory of the
- application will be cover compiled. The <c>ebin</c> directory
- is found by adding <c>ebin</c> to
- <c>code:lib_dir(Application)</c>.
- </p>
- <p>A <c>CoverFile</c> can have the following entries:</p>
- <code type="none">
-{exclude, all | ExcludeModuleList}.
-{include, IncludeModuleList}.
-{cross, CrossCoverInfo}.</code>
- <p>Note that each line must end with a full
- stop. <c>ExcludeModuleList</c> and <c>IncludeModuleList</c>
- are lists of atoms, where each atom is a module name.
- </p>
-
- <p><c>CrossCoverInfo</c> is used when collecting cover data
- over multiple tests. Modules listed here are compiled, but
- they will not be analysed when the test is finished. See
- <seealso
- marker="#cross_cover_analyse-2">cross_cover_analyse/2</seealso>
- for more information about the cross cover mechanism and the
- format of <c>CrossCoverInfo</c>.
- </p>
- <p>If both an <c>Application</c> and a <c>CoverFile</c> are
- given, all modules in the application are cover compiled,
- except for the modules listed in <c>ExcludeModuleList</c>. The
- modules in <c>IncludeModuleList</c> are also cover compiled.
- </p>
- <p>If a <c>CoverFile</c> is given, but no <c>Application</c>,
- only the modules in <c>IncludeModuleList</c> are cover
- compiled.
- </p>
- <p><c>Analyse</c> indicates the detail level of the cover
- analysis. If <c>Analyse = details</c>, each cover compiled
- module will be analysed with
- <c>cover:analyse_to_file/1</c>. If <c>Analyse = overview</c>
- an overview of all cover compiled modules is created, listing
- the number of covered and not covered lines for each module.
- </p>
- <p>If the test following this call starts any slave or peer
- nodes with <c>test_server:start_node/3</c>, the same cover
- compiled code will be loaded on all nodes. If the loading
- fails, e.g. if the node runs an old version of OTP, the node
- will simply not be a part of the coverage analysis. Note that
- slave or peer nodes must be stopped with
- <c>test_server:stop_node/1</c> for the node to be part of the
- coverage analysis, else the test server will not be able to
- fetch coverage data from the node.
- </p>
- <p>When the test is finished, the coverage analysis is
- automatically completed, logs are created and the cover
- compiled modules are unloaded. If another test is to be run
- with coverage analysis, <c>test_server_ctrl:cover/2/3</c> must
- be called again.
- </p>
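- <p>For illustration (the application name and cover file path
- are hypothetical):</p>
- <code type="none">
-%% cover compile all modules in my_app except those excluded in
-%% the cover file, and produce a detailed analysis
-ok = test_server_ctrl:cover(my_app, "/ldisk/tests/my_app.cover", details).</code>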
- </desc>
- </func>
- <func>
- <name>cross_cover_analyse(Level, Tests) -> ok</name>
- <fsummary>Analyse cover data collected from multiple tests</fsummary>
- <type>
- <v>Level = details | overview</v>
- <v>Tests = [{Tag,LogDir}]</v>
- <v>Tag = atom()</v>
- <d>Test identifier.</d>
- <v>LogDir = string()</v>
- <d>Log directory for the test identified by <c>Tag</c>. This
- can either be the <c>run.&lt;timestamp&gt;</c> directory or
- the parent directory of this (in which case the latest
- <c>run.&lt;timestamp&gt;</c> directory is chosen).</d>
- </type>
- <desc>
- <p>Analyse cover data collected from multiple tests. The modules
- analysed are the ones listed in <c>cross</c> statements in
- the cover files. These are modules that are heavily used by
- tests other than the one they belong to, or are explicitly
- tested by. They should then be listed as cross modules in the
- cover file for the test where they are used but do not
- belong. See the example below.</p>
- <p>This function should be run after all tests are completed,
- and the result will be stored in a file called
- <c>cross_cover.html</c> in the <c>run.&lt;timestamp&gt;</c>
- directory of the test the modules belong to.</p>
- <p>Note that the function can be executed on any node, and it
- does not require <c>test_server_ctrl</c> to be started first.</p>
- <p>The <c>cross</c> statement in the cover file must be like this:</p>
- <code type="none">
-{cross,[{Tag,Modules}]}.</code>
- <p>where <c>Tag</c> is the same as <c>Tag</c> in the
- <c>Tests</c> parameter to this function and <c>Modules</c> is a
- list of module names (atoms).</p>
- <p><em>Example:</em></p>
- <p>If the module <c>m1</c> belongs to system <c>s1</c> but is
- heavily used also in the tests for another system <c>s2</c>,
- then the cover files for the two systems' tests could be like
- this:</p>
-<code type="none">
-s1.cover:
- {include,[m1]}.
-
-s2.cover:
- {include,[....]}. % modules belonging to system s2
- {cross,[{s1,[m1]}]}.</code>
- <p>When the tests for both <c>s1</c> and <c>s2</c> are completed, run</p>
-<code type="none">
-test_server_ctrl:cross_cover_analyse(Level,[{s1,S1LogDir},{s2,S2LogDir}])
-</code>
-
- <p>and the accumulated cover data for <c>m1</c> will be written to
- <c>S1LogDir/[run.&lt;timestamp&gt;/]cross_cover.html</c>.</p>
- <p>Note that the <c>m1</c> module will also be presented in the
- normal coverage log for <c>s1</c> (due to the include statement in
- <c>s1.cover</c>), but that only includes the coverage achieved by the
- <c>s1</c> test itself.</p>
- <p>The Tag in the <c>cross</c> statement in the cover file has
- no other purpose than mapping the list of modules
- (<c>[m1]</c> in the example above) to the correct log
- directory where it should be included in the
- <c>cross_cover.html</c> file (<c>S1LogDir</c> in the example
- above). I.e. the value of <c>Tag</c> has no meaning; it
- could be <c>foo</c> as well as <c>s1</c> above, as long as
- the same <c>Tag</c> is used in the cover file and in the
- call to this function.</p>
- </desc>
- </func>
- <func>
- <name>trc(TraceInfoFile) -> ok | {error, Reason}</name>
- <fsummary>Starts call trace on target and slave nodes</fsummary>
- <type>
- <v>TraceInfoFile = atom() | string()</v>
- <d>Name of a file defining which functions to trace and how</d>
- </type>
- <desc>
- <p>This function starts call trace on target and on slave or
- peer nodes that are started or will be started by the test
- suites.
- </p>
- <p>Timetraps are not extended automatically when tracing is
- used. Use <c>multiply_timetraps/1</c> if necessary.
- </p>
- <p>Note that the trace support in the test server is at a very
- early stage of implementation, and thus not yet as
- powerful as one might wish.
- </p>
- <p>The trace information file specified by the
- <c>TraceInfoFile</c> argument is a text file containing one or
- more of the following elements:
- </p>
- <list type="bulleted">
- <item><c>{SetTP,Module,Pattern}.</c></item>
- <item><c>{SetTP,Module,Function,Pattern}.</c></item>
- <item><c>{SetTP,Module,Function,Arity,Pattern}.</c></item>
- <item><c>ClearTP.</c></item>
- <item><c>{ClearTP,Module}.</c></item>
- <item><c>{ClearTP,Module,Function}.</c></item>
- <item><c>{ClearTP,Module,Function,Arity}.</c></item>
- </list>
- <taglist>
- <tag><c>SetTP = tp | tpl</c></tag>
- <item>This maps to the corresponding functions in the
- <c>ttb</c> module in the <c>observer</c>
- application. <c>tp</c> means set trace pattern on global
- function calls. <c>tpl</c> means set trace pattern on local
- and global function calls.
- </item>
- <tag><c>ClearTP = ctp | ctpl | ctpg</c></tag>
- <item>This maps to the corresponding functions in the
- <c>ttb</c> module in the <c>observer</c>
- application. <c>ctp</c> means clear trace pattern (i.e. turn
- off) on global and local function calls. <c>ctpl</c> means
- clear trace pattern on local function calls only and <c>ctpg</c>
- means clear trace pattern on global function calls only.
- </item>
- <tag><c>Module = atom()</c></tag>
- <item>The module to trace
- </item>
- <tag><c>Function = atom()</c></tag>
- <item>The name of the function to trace
- </item>
- <tag><c>Arity = integer()</c></tag>
- <item>The arity of the function to trace
- </item>
- <tag><c>Pattern = [] | match_spec()</c></tag>
- <item>The trace pattern to set for the module or
- function. For a description of the <c>match_spec()</c> syntax,
- please turn to the User's Guide for the runtime system
- (ERTS). The chapter "Match Specification in Erlang" explains
- the general match specification language.
- </item>
- </taglist>
- <p>The trace result will be logged in a (binary) file called
- <c>NodeName-test_server</c> in the current directory of the
- test server controller node. The log must be formatted using
- <c>ttb:format/1/2</c>.
- </p>
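- <p>For illustration, a small trace information file could look
- like this (module and function names are hypothetical):</p>
- <code type="none">
-{tpl, my_module, []}.
-{tp, my_module, my_function, 2, []}.
-{ctp, my_module, init}.</code>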
- </desc>
- </func>
- <func>
- <name>stop_trace() -> ok | {error, not_tracing}</name>
- <fsummary>Stops tracing on target and slave nodes.</fsummary>
- <desc>
- <p>This function stops tracing on target, and on slave or peer
- nodes that are currently running. New slave or peer nodes will
- no longer be traced after this.</p>
- </desc>
- </func>
- </funcs>
-
- <section>
- <title>FUNCTIONS INVOKED FROM COMMAND LINE</title>
- <p>The following functions are supposed to be invoked from the
- command line using the <c>-s</c> option when starting the Erlang
- node.</p>
- </section>
- <funcs>
- <func>
- <name>run_test(CommandLine) -> ok</name>
- <fsummary>Runs the tests specified on the command line.</fsummary>
- <type>
- <v>CommandLine = FlagList</v>
- </type>
- <desc>
- <p>This function is supposed to be invoked from the
- command line. It starts the test server, interprets the
- arguments supplied on the command line, runs the tests
- specified, and when all tests are done, stops the test server
- and returns to the Erlang prompt.
- </p>
- <p>The <c>CommandLine</c> argument is a list of command line
- flags, typically <c>['KEY1', Value1, 'KEY2', Value2, ...]</c>.
- The valid command line flags are listed below.
- </p>
- <p>Under a UNIX command prompt, this function can be invoked like this:
- <br></br>
-<c>erl -noshell -s test_server_ctrl run_test KEY1 Value1 KEY2 Value2 ... -s erlang halt</c></p>
- <p>Or make an alias (this is for UNIX/tcsh) <br></br>
-<c>alias erl_test 'erl -noshell -s test_server_ctrl run_test \!* -s erlang halt'</c></p>
- <p>And then use it like this <br></br>
-<c>erl_test KEY1 Value1 KEY2 Value2 ...</c> <br></br>
-</p>
- <p>The valid command line flags are</p>
- <taglist>
- <tag><c>DIR dir</c></tag>
- <item>Adds all test modules in the directory <c>dir</c> to
- the job queue.
- </item>
- <tag><c>MODULE mod</c></tag>
- <item>Adds the module <c>mod</c> to the job queue.
- </item>
- <tag><c>CASE mod case</c></tag>
- <item>Adds the case <c>case</c> in module <c>mod</c> to the
- job queue.
- </item>
- <tag><c>SPEC spec</c></tag>
- <item>Runs the test specification file <c>spec</c>.
- </item>
- <tag><c>SKIPMOD mod</c></tag>
- <item>Skips all test cases in the module <c>mod</c>.</item>
- <tag><c>SKIPCASE mod case</c></tag>
- <item>Skips the test case <c>case</c> in module <c>mod</c>.
- </item>
- <tag><c>NAME name</c></tag>
- <item>Gives the test job a name other than the
- default one. This does not apply to <c>SPEC</c>, which keeps
- its own name.
- </item>
- <tag><c>COVER app cover_file analyse</c></tag>
- <item>Indicates that the test should be run with cover
- analysis. <c>app</c>, <c>cover_file</c> and <c>analyse</c>
- correspond to the parameters to
- <c>test_server_ctrl:cover/3</c>. If no cover file is used,
- the atom <c>none</c> should be given.
- </item>
- <tag><c>TRACE traceinfofile</c></tag>
- <item>Specifies a trace information file. When this option
- is given, call tracing is started on the target node and all
- slave or peer nodes that are started. The trace information
- file specifies which modules and functions to trace. See the
- function <c>trc/1</c> above for more information about the
- syntax of this file.
- </item>
- </taglist>
- </desc>
- </func>
- </funcs>
-
- <section>
- <title>FRAMEWORK CALLBACK FUNCTIONS</title>
- <p>A test server framework can be defined by setting the
- environment variable <c>TEST_SERVER_FRAMEWORK</c> to a module
- name. This module will then be the framework callback module, and it
- must export the following functions:</p>
- </section>
- <funcs>
- <func>
- <name>get_suite(Mod,Func) -> TestCaseList</name>
- <fsummary>Get subcases.</fsummary>
- <type>
- <v>Mod = atom()</v>
- <d>Test suite name.</d>
- <v>Func = atom()</v>
- <d>Name of test case.</d>
- <v>TestCaseList = [SubCase]</v>
- <d>List of test cases.</d>
- <v>SubCase = atom()</v>
- <d>Name of a case.</d>
- </type>
- <desc>
- <p>This function is called before a test case is started. The
- purpose is to retrieve a list of subcases. The default
- behaviour of this function should be to call
- <c>Mod:Func(suite)</c> and return the result from this call.</p>
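- <p>A minimal sketch of the default behaviour:</p>
- <code type="none">
-get_suite(Mod, Func) ->
-    Mod:Func(suite).</code>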
- </desc>
- </func>
- <func>
- <name>init_tc(Mod,Func,Args0) -> {ok,Args1} | {skip,ReasonToSkip} | {auto_skip,ReasonToSkip} | {fail,ReasonToFail}</name>
- <fsummary>Preparation for a test case or configuration function.</fsummary>
- <type>
- <v>Mod = atom()</v>
- <d>Test suite name.</d>
- <v>Func = atom()</v>
- <d>Name of test case or configuration function.</d>
- <v>Args0 = Args1 = [tuple()]</v>
- <d>Normally Args = [Config]</d>
- <v>ReasonToSkip = term()</v>
- <d>Reason to skip the test case or configuration function.</d>
- <v>ReasonToFail = term()</v>
- <d>Reason to fail the test case or configuration function.</d>
- </type>
- <desc>
- <p>This function is called before a test case or configuration
- function starts. It is called on the process executing the function
- <c>Mod:Func</c>. Typical use of this function can be to alter
- the input parameters to the test case function (<c>Args</c>) or
- to set properties for the executing process.</p>
- <p>By returning <c>{skip,Reason}</c>, <c>Func</c> gets skipped.
- <c>Func</c> also gets skipped if <c>{auto_skip,Reason}</c> is returned,
- but then gets an auto skipped status (rather than user skipped).</p>
- <p>To fail <c>Func</c> immediately instead of executing it, return
- <c>{fail,ReasonToFail}.</c></p>
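- <p>A minimal sketch of a callback that passes the arguments
- through unchanged, except that it skips every function on a
- (hypothetical) unsupported platform:</p>
- <code type="none">
-init_tc(_Mod, _Func, Args) ->
-    case os:type() of
-        {win32, _} -> {skip, "Not supported on Windows"};
-        _ -> {ok, Args}
-    end.</code>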
- </desc>
- </func>
- <func>
- <name>end_tc(Mod,Func,Status) -> ok | {fail,ReasonToFail}</name>
- <fsummary>Cleanup after a test case or configuration function.</fsummary>
- <type>
- <v>Mod = atom()</v>
- <d>Test suite name.</d>
- <v>Func = atom()</v>
- <d>Name of test case or configuration function.</d>
- <v>Status = {Result,Args} | {TCPid,Result,Args}</v>
- <d>The status of the test case or configuration function.</d>
- <v>ReasonToFail = term()</v>
- <d>Reason to fail the test case or configuration function.</d>
- <v>Result = ok | Skip | Fail</v>
- <d>The final result of the test case or configuration function.</d>
- <v>TCPid = pid()</v>
- <d>Pid of the process executing Func</d>
- <v>Skip = {skip,SkipReason}</v>
- <v>SkipReason = term() | {failed,{Mod,init_per_testcase,term()}}</v>
- <d>Reason why the function was skipped.</d>
- <v>Fail = {error,term()} | {'EXIT',term()} | {timetrap_timeout,integer()} |
- {testcase_aborted,term()} | testcase_aborted_or_killed |
- {failed,term()} | {failed,{Mod,end_per_testcase,term()}}</v>
- <d>Reason why the function failed.</d>
- <v>Args = [tuple()]</v>
- <d>Normally Args = [Config]</d>
- </type>
- <desc>
- <p>This function is called when a test case, or a configuration function,
- is finished. It is normally called on the process where the function
- <c>Mod:Func</c> has been executing, but if not, the pid of the test
- case process is passed with the <c>Status</c> argument.</p>
- <p>Typical use of the <c>end_tc/3</c> function can be to clean up
- after <c>init_tc/3</c>.</p>
- <p>If <c>Func</c> is a test case, it is possible to analyse the value of
- <c>Result</c> to verify that <c>init_per_testcase/2</c> and
- <c>end_per_testcase/2</c> executed successfully.</p>
- <p>It is possible with <c>end_tc/3</c> to fail an otherwise successful
- test case, by returning <c>{fail,ReasonToFail}</c>. The test case <c>Func</c>
- will be logged as failed with the provided term as reason.</p>
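- <p>A minimal sketch that accepts the status unchanged, whichever
- of the two <c>Status</c> forms is passed:</p>
- <code type="none">
-end_tc(_Mod, _Func, {_Result, _Args}) ->
-    ok;
-end_tc(_Mod, _Func, {_TCPid, _Result, _Args}) ->
-    ok.</code>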
- </desc>
- </func>
- <func>
- <name>report(What,Data) -> ok</name>
- <fsummary>Progress report for test.</fsummary>
- <type>
- <v>What = atom()</v>
- <v>Data = term()</v>
- </type>
- <desc>
- <p>This function is called in order to keep the framework up-to-date with
- the progress of the test. This is useful e.g. if the
- framework implements a GUI where the progress information is
- constantly updated. The following can be reported:
- </p>
- <p><c>What = tests_start, Data = {Name,NumCases}</c><br></br>
- <c>What = loginfo, Data = [{topdir,TestRootDir},{rundir,CurrLogDir}]</c><br></br>
- <c>What = tests_done, Data = {Ok,Failed,{UserSkipped,AutoSkipped}}</c><br></br>
- <c>What = tc_start, Data = {{Mod,{Func,GroupName}},TCLogFile}</c><br></br>
- <c>What = tc_done, Data = {Mod,{Func,GroupName},Result}</c><br></br>
- <c>What = tc_user_skip, Data = {Mod,{Func,GroupName},Comment}</c><br></br>
- <c>What = tc_auto_skip, Data = {Mod,{Func,GroupName},Comment}</c><br></br>
- <c>What = framework_error, Data = {{FWMod,FWFunc},Error}</c></p>
- <p>Note that for a test case function that doesn't belong to a group,
- <c>GroupName</c> has the value <c>undefined</c>; otherwise it is the name of the test
- case group.</p>
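- <p>A minimal sketch of a callback that prints test case progress
- to the console and ignores all other events:</p>
- <code type="none">
-report(tc_start, {{Mod, FuncOrGroup}, _LogFile}) ->
-    io:format("starting ~p:~p~n", [Mod, FuncOrGroup]);
-report(tc_done, {Mod, FuncOrGroup, Result}) ->
-    io:format("~p:~p -> ~p~n", [Mod, FuncOrGroup, Result]);
-report(_What, _Data) ->
-    ok.</code>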
- </desc>
- </func>
- <func>
- <name>error_notification(Mod, Func, Args, Error) -> ok</name>
- <fsummary>Inform framework of crashing testcase or configuration function.</fsummary>
- <type>
- <v>Mod = atom()</v>
- <d>Test suite name.</d>
- <v>Func = atom()</v>
- <d>Name of test case or configuration function.</d>
- <v>Args = [tuple()]</v>
- <d>Normally Args = [Config]</d>
- <v>Error = {Reason,Location}</v>
- <v>Reason = term()</v>
- <d>Reason for termination.</d>
- <v>Location = unknown | [{Mod,Func,Line}]</v>
- <d>Last known position in Mod before termination.</d>
- <v>Line = integer()</v>
- <d>Line number in file Mod.erl.</d>
- </type>
- <desc>
- <p>This function is called as the result of function <c>Mod:Func</c> failing
- with Reason at Location. The function is intended mainly to aid
- specific logging or error handling in the framework application. Note
- that for Location to have relevant values (i.e. other than unknown),
- the <c>line</c> macro or <c>test_server_line</c> parse transform must
- be used. For details, please see the section about test suite line numbers
- in the <c>test_server</c> reference manual page.</p>
- </desc>
- </func>
- <func>
- <name>warn(What) -> boolean()</name>
- <fsummary>Ask framework if test server should issue a warning for What.</fsummary>
- <type>
- <v>What = processes | nodes</v>
- </type>
- <desc>
- <p>The test server checks the number of processes and nodes
- before and after the test is executed. This function asks
- the framework whether the test server should warn when
- the number of processes or nodes has changed during the test
- execution. If <c>true</c> is returned, a warning will be written
- in the test case minor log file.</p>
- </desc>
- </func>
- <func>
- <name>target_info() -> InfoStr</name>
- <fsummary>Print info about the target system to the test case log.</fsummary>
- <type>
- <v>InfoStr = string() | ""</v>
- </type>
- <desc>
- <p>The test server will ask the framework for information about
- the test target system and print InfoStr in the test case
- log file below the host information.</p>
- </desc>
- </func>
- </funcs>
-</erlref>
-