From 84adefa331c4159d432d22840663c38f155cd4c1 Mon Sep 17 00:00:00 2001 From: Erlang/OTP Date: Fri, 20 Nov 2009 14:54:40 +0000 Subject: The R13B03 release. --- lib/test_server/doc/src/test_server_ctrl.xml | 771 +++++++++++++++++++++++++++ 1 file changed, 771 insertions(+) create mode 100644 lib/test_server/doc/src/test_server_ctrl.xml (limited to 'lib/test_server/doc/src/test_server_ctrl.xml') diff --git a/lib/test_server/doc/src/test_server_ctrl.xml b/lib/test_server/doc/src/test_server_ctrl.xml new file mode 100644 index 0000000000..3d95813c14 --- /dev/null +++ b/lib/test_server/doc/src/test_server_ctrl.xml @@ -0,0 +1,771 @@ + + + + +
+ + 2007 + 2008 + Ericsson AB, All Rights Reserved + + + The contents of this file are subject to the Erlang Public License, + Version 1.1, (the "License"); you may not use this file except in + compliance with the License. You should have received a copy of the + Erlang Public License along with this software. If not, it can be + retrieved online at http://www.erlang.org/. + + Software distributed under the License is distributed on an "AS IS" + basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See + the License for the specific language governing rights and limitations + under the License. + + The Initial Developer of the Original Code is Ericsson AB. + + + The Test Server Controller + Siri Hansen + + + + + + + test_server_ctrl_ref.sgml +
+ test_server_ctrl + This module provides a low level interface to the Test Server. + +

The test_server_ctrl module provides a low level + interface to the Test Server. This interface is normally + not used directly by the tester, but through a framework built + on top of test_server_ctrl. +

+

Common Test is such a framework, well suited for automated + black box testing of target systems of any kind (not necessarily + implemented in Erlang). Common Test is also a very useful tool for + white box testing Erlang programs and OTP applications. + Please see the Common Test User's Guide and reference manual for + more information. +

+

If you want to write your own framework, more information can be found in the chapter "Writing your own test server framework" in the Test Server User's Guide. Details about the interface provided by test_server_ctrl follow below.

+
+
+ start() -> Result
+ start(ParameterFile) -> Result
+ Starts the test server.
+
+ Result = ok | {error, {already_started, pid()}}
+ ParameterFile = atom() | string()
+
+

This function starts the test server. If the parameter file + is given, it indicates that the target is remote. In that case + the target node is started and a socket connection is + established between the controller and the target node. +

+

The parameter file is a text file containing key-value + tuples. Each tuple must be followed by a dot-newline + sequence. The following key-value tuples are allowed: +

+ + {type,PlatformType} + This is an atom indicating the target platform type, + currently supported: PlatformType = vxworks

+Mandatory +
+ {target,TargetHost} + This is the name of the target host, can be atom or + string. +

+Mandatory +
+ {slavetargets,SlaveTargets} + This is a list of available hosts where slave nodes + can be started. The hostnames are given as atoms or strings. +

+Optional, default SlaveTargets = []
+ {longnames,Bool} + This indicates if longnames shall be used, i.e. if the + -name option should be used for the target node + instead of -sname

+Optional, default Bool = false
+ {master, {MasterHost, MasterCookie}} + If target is remote and the target node is started as + a slave node, this option indicates which master and + cookie to use. The given master + will also be used as master for slave nodes started with + test_server:start_node/3. It is expected that the + erl_boot_server is started on the master node before + the test_server_ctrl:start/1 function is called. +

+Optional, if not given the test server controller node + is used as master and the erl_boot_server is + automatically started.
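As an illustration only (the host names below are made up), a parameter file for a remote VxWorks target could look like this:

{type, vxworks}.                              % mandatory: target platform type
{target, my_target_host}.                     % mandatory: target host name
{slavetargets, [slave_host1, slave_host2]}.   % optional: hosts for slave nodes
{longnames, false}.                           % optional: use -sname for the target node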
+
+
+
+ + stop() -> ok + Stops the test server immediately. + +

This stops the test server (both controller and target) and + all its activity. The running test suite (if any) will be + halted.

+
+
+ + add_dir(Name, Dir) -> ok + add_dir(Name, Dir, Pattern) -> ok + add_dir(Name, [Dir|Dirs]) -> ok + add_dir(Name, [Dir|Dirs], Pattern) -> ok + Add a directory to the job queue. + + Name = term() + The jobname for this directory. + Dir = term() + The directory to scan for test suites. + Dirs = [term()] + List of directories to scan for test suites. + Pattern = term() + Suite match pattern. Directories will be scanned for Pattern_SUITE.erl files. + + +

Puts a collection of suites (modules matching *_SUITE) from the given directories into the job queue. Name is an arbitrary name for the job; it can be any Erlang term. If Pattern is given, only modules matching Pattern* will be added.
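As a purely illustrative sketch (the directory and job names below are made up), the server could be started and suites queued like this:

ok = test_server_ctrl:start(),
%% queue every *_SUITE module found in one directory
ok = test_server_ctrl:add_dir(my_job, "/ldisk/tests/my_app_test"),
%% queue only suites matching my_app* from two directories
ok = test_server_ctrl:add_dir(my_app_job, ["../app_test", "../lib_test"], "my_app").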

+
+
+ + add_module(Mod) -> ok + add_module(Name, [Mod|Mods]) -> ok + Add a module to the job queue with or without a given name. + + Mod = atom() + Mods = [atom()] + The name(s) of the module(s) to add. + Name = term() + Name for the job. + + +

This function adds a module, or a list of modules, to the test server's job queue. Name may be any Erlang term. When Name is not given, the job gets the name of the module.

+
+
+ + add_case(Mod, Case) -> ok + Adds one test case to the job queue. + + Mod = atom() + Name of the module the test case is in. + Case = atom() + Function name of the test case to add. + + +

This function will add one test case to the job queue. The + job will be given the module's name.

+
+
+ + add_case(Name, Mod, Case) -> ok + Equivalent to add_case/2, but with specified name. + + Name = string() + Name to use for the test job. + + +

Equivalent to add_case/2, but the test job will get + the specified name.

+
+
+ + add_cases(Mod, Cases) -> ok + Adds a list of test cases to the job queue. + + Mod = atom() + Name of the module the test case is in. + Cases = [Case] + Case = atom() + Function names of the test cases to add. + + +

This function will add one or more test cases to the job + queue. The job will be given the module's name.

+
+
+ + add_cases(Name, Mod, Cases) -> ok + Equivalent to add_cases/2, but with specified name. + + Name = string() + Name to use for the test job. + + +

Equivalent to add_cases/2, but the test job will get + the specified name.

+
+
+ + add_spec(TestSpecFile) -> ok | {error, nofile} + Adds a test specification file to the job queue. + + TestSpecFile = string() + Name of the test specification file + + +

This function will add the content of the given test + specification file to the job queue. The job will be given the + name of the test specification file, e.g. if the file is + called test.spec, the job will be called test. +

+

See the reference manual for the test server application + for details about the test specification file.

+
+
+ + add_dir_with_skip(Name, [Dir|Dirs], Skip) -> ok + add_dir_with_skip(Name, [Dir|Dirs], Pattern, Skip) -> ok + add_module_with_skip(Mod, Skip) -> ok + add_module_with_skip(Name, [Mod|Mods], Skip) -> ok + add_case_with_skip(Mod, Case, Skip) -> ok + add_case_with_skip(Name, Mod, Case, Skip) -> ok + add_cases_with_skip(Mod, Cases, Skip) -> ok + add_cases_with_skip(Name, Mod, Cases, Skip) -> ok + Same purpose as functions listed above, but with extra Skip argument. + + Skip = [SkipItem] + List of items to be skipped from the test. + SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment} + Mod = atom() + Test suite name. + Comment = string() + Reason why suite or case is being skipped. + Cases = [Case] + Case = atom() + Name of test case function. + + +

These functions add test jobs just like the add_dir, add_module, + add_case and add_cases functions above, but carry an additional + argument, Skip. Skip is a list of items that should be skipped + in the current test run. Test job items that occur in the Skip + list will be logged as SKIPPED with the associated Comment.

+
+
+ + add_tests_with_skip(Name, Tests, Skip) -> ok + Adds different types of jobs to the run queue. + + Name = term() + The jobname for this directory. + Tests = [TestItem] + List of jobs to add to the run queue. + TestItem = {Dir,all,all} | {Dir,Mods,all} | {Dir,Mod,Cases} + Dir = term() + The directory to scan for test suites. + Mods = [Mod] + Mod = atom() + Test suite name. + Cases = [Case] + Case = atom() + Name of test case function. + Skip = [SkipItem] + List of items to be skipped from the test. + SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment} + Comment = string() + Reason why suite or case is being skipped. + + +

This function adds various test jobs to the test_server_ctrl job queue. These jobs can be of different types (all or specific suites in one directory, all or specific cases in one suite, etc.). It is also possible to have particular items skipped by passing them along in the Skip list (see the add_*_with_skip functions above).
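The following sketch is only an illustration (the directory names, suite names and comments are made up) of how the Tests and Skip arguments could be combined:

Tests = [{"/ldisk/tests/app1_test", all, all},
         {"/ldisk/tests/app2_test", [app2_SUITE], all},
         {"/ldisk/tests/app3_test", app3_SUITE, [store_tc, fetch_tc]}],
Skip  = [{app2_SUITE, "Not applicable on this platform"},
         {app3_SUITE, [fetch_tc], "Known limitation"}],
ok = test_server_ctrl:add_tests_with_skip(nightly_job, Tests, Skip).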

+
+
+ + abort_current_testcase(Reason) -> ok | {error,no_testcase_running} + Aborts the test case currently executing. + + Reason = term() + The reason for stopping the test case, which will be printed in the log. + + +

When this function is called, the currently executing test case is aborted. It is the user's responsibility to know for sure which test case is currently executing. The function is therefore only safe to call from a function which has been called (or synchronously invoked) by the test case.

+
+
+ + set_levels(Console, Major, Minor) -> ok + Sets the levels of I/O. + + Console = integer() + Level for I/O to be sent to console. + Major = integer() + Level for I/O to be sent to the major logfile. + Minor = integer() + Level for I/O to be sent to the minor logfile. + + +

Determines where I/O from test suites and the test server will go. All text output from test suites and the test server is tagged with a priority value ranging from 0 to 100, where 100 is the most detailed (see the section about log files in the Test Server User's Guide). Output from the test cases (using io:format/2) has a detail level of 50. Depending on the levels set by this function, this I/O may be sent to the console, the major log file (for the whole test suite) or to the minor log file (separate for each test case).

+

All output with detail level:

+ + Less than or equal to Console is displayed on + the screen (default 1) + + Less than or equal to Major is logged in the + major log file (default 19) + + Greater than or equal to Minor is logged in the + minor log files (default 10) + + +

To view the currently set thresholds, use the + get_levels/0 function.
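For example (a sketch only), to also show io:format/2 output from test cases (detail level 50) on the console while keeping the default log file levels:

ok = test_server_ctrl:set_levels(50, 19, 10),
{50, 19, 10} = test_server_ctrl:get_levels().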

+
+
+ + get_levels() -> {Console, Major, Minor} + Returns the current levels. + +

Returns the current levels. See set_levels/3 for + types.

+
+
+ + jobs() -> JobQueue + Returns the job queue. + + JobQueue = [{list(), pid()}] + + +

This function will return all the jobs currently in the job + queue.

+
+
+ + multiply_timetraps(N) -> ok + All timetraps started after this will be multiplied by N. + + N = integer() | infinity + + +

This function should be called before starting a test that requires extended timetraps, e.g. if extensive tracing is used. All timetraps started after this call will be multiplied by N.

+
+
+ + cover(Application,Analyse) -> ok + cover(CoverFile,Analyse) -> ok + cover(App,CoverFile,Analyse) -> ok + Informs the test_server controller that next test shall run with code coverage analysis. + + Application = atom() + OTP application to cover compile + CoverFile = string() + Name of file listing modules to exclude from or include in cover compilation. The filename must include full path to the file. + Analyse = details | overview + + +

This function informs the test_server controller that the next test shall run with code coverage analysis. All timetraps will automatically be multiplied by 10 when cover is run.

+

Application and CoverFile indicate what to cover compile. If Application is given, the default is that all modules in the ebin directory of the application will be cover compiled. The ebin directory is found by adding ebin to code:lib_dir(Application).

+

A CoverFile can have the following entries:

+ +{exclude, all | ExcludeModuleList}. +{include, IncludeModuleList}. +

Note that each line must end with a full + stop. ExcludeModuleList and IncludeModuleList + are lists of atoms, where each atom is a module name. +
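As an illustration only (the application and module names below are made up), a cover file could look like this:

%% exclude two helper modules from cover compilation
{exclude, [my_app_debug, my_app_test_lib]}.
%% additionally cover compile one module from outside the application
{include, [my_shared_lib]}.

Together with a call such as test_server_ctrl:cover(my_app, CoverFile, details), all modules in my_app except the two excluded ones, plus my_shared_lib, would then be cover compiled.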

+

If both an Application and a CoverFile are given, all modules in the application are cover compiled, except for the modules listed in ExcludeModuleList. The modules in IncludeModuleList are also cover compiled.

+

If a CoverFile is given, but no Application, + only the modules in IncludeModuleList are cover + compiled. +

+

Analyse indicates the detail level of the cover analysis. If Analyse = details, each cover compiled module will be analysed with cover:analyse_to_file/1. If Analyse = overview, an overview of all cover compiled modules is created, listing the number of covered and not covered lines for each module.

+

If the test following this call starts any slave or peer + nodes with test_server:start_node/3, the same cover + compiled code will be loaded on all nodes. If the loading + fails, e.g. if the node runs an old version of OTP, the node + will simply not be a part of the coverage analysis. Note that + slave or peer nodes must be stopped with + test_server:stop_node/1 for the node to be part of the + coverage analysis, else the test server will not be able to + fetch coverage data from the node. +

+

When the test is finished, the coverage analysis is + automatically completed, logs are created and the cover + compiled modules are unloaded. If another test is to be run + with coverage analysis, test_server_ctrl:cover/2/3 must + be called again. +

+
+
+ + cross_cover_analyse(Level) -> ok + Analyse cover data collected from all tests + + Level = details | overview + + +

Analyse cover data collected from all tests. The modules + analysed are the ones listed in the cross cover file + cross.cover in the current directory of the test + server.

+

The modules listed in the cross.cover file are modules that are heavily used by applications other than the one they belong to. This function should be run after all tests are completed, and the result will be stored in a file called cross_cover.html in the run.<timestamp> directory of the application the modules belong to.

+

The cross.cover file contains elements like this:

+
+{App,Modules}.        
+

where App can be an application name or the atom + all. The application (or all applications) will cover + compile the listed Modules. +

+
+
+ + trc(TraceInfoFile) -> ok | {error, Reason} + Starts call trace on target and slave nodes + + TraceInfoFile = atom() | string() + Name of a file defining which functions to trace and how + + +

This function starts call trace on target and on slave or + peer nodes that are started or will be started by the test + suites. +

+

Timetraps are not extended automatically when tracing is + used. Use multiply_timetraps/1 if necessary. +

+

Note that the trace support in the test server is in a very + early stage of the implementation, and thus not yet as + powerful as one might wish for. +

+

The trace information file specified by the + TraceInfoFile argument is a text file containing one or + more of the following elements: +

+
+ {SetTP,Module,Pattern}.
+ {SetTP,Module,Function,Pattern}.
+ {SetTP,Module,Function,Arity,Pattern}.
+ ClearTP.
+ {ClearTP,Module}.
+ {ClearTP,Module,Function}.
+ {ClearTP,Module,Function,Arity}.
+
+
+ SetTP = tp | tpl
+ This maps to the corresponding functions in the
+ ttb module in the observer
+ application. tp means set trace pattern on global
+ function calls. tpl means set trace pattern on local
+ and global function calls.
+
+ ClearTP = ctp | ctpl | ctpg
+ This maps to the corresponding functions in the
+ ttb module in the observer
+ application. ctp means clear trace pattern (i.e. turn
+ off) on global and local function calls. ctpl means
+ clear trace pattern on local function calls only and ctpg
+ means clear trace pattern on global function calls only.
+
+ Module = atom()
+ The module to trace
+
+ Function = atom()
+ The name of the function to trace
+
+ Arity = integer()
+ The arity of the function to trace
+
+ Pattern = [] | match_spec()
+ The trace pattern to set for the module or
+ function. For a description of the match_spec() syntax,
+ please turn to the User's Guide for the runtime system
+ (erts). The chapter "Match Specification in Erlang" explains
+ the general match specification language.
+
+
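As a sketch only (the module and function names below are made up), a trace information file could contain:

%% trace global calls to all functions in the lists module
{tp, lists, []}.
%% trace local and global calls to my_mod:my_fun/2 and add return values
{tpl, my_mod, my_fun, 2, [{'_', [], [{return_trace}]}]}.
%% turn off tracing of global calls to the lists module again
{ctpg, lists}.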

The trace result will be logged in a (binary) file called + NodeName-test_server in the current directory of the + test server controller node. The log must be formatted using + ttb:format/1/2. +

+

This is valid for all targets except the OSE/Delta target + for which all nodes will be logged and automatically formatted + in one single text file called allnodes-test_server.

+
+
+ + stop_trace() -> ok | {error, not_tracing} + Stops tracing on target and slave nodes. + +

This function stops tracing on target, and on slave or peer + nodes that are currently running. New slave or peer nodes will + no longer be traced after this.

+
+
+
+ +
+ FUNCTIONS INVOKED FROM COMMAND LINE +

The following functions are supposed to be invoked from the command line using the -s option when starting the Erlang node.

+
+ + + run_test(CommandLine) -> ok + Runs the tests specified on the command line. + + CommandLine = FlagList + + +

This function is supposed to be invoked from the command line. It starts the test server, interprets the arguments supplied from the command line, runs the specified tests and, when all tests are done, stops the test server and returns to the Erlang prompt.

+

The CommandLine argument is a list of command line + flags, typically ['KEY1', Value1, 'KEY2', Value2, ...]. + The valid command line flags are listed below. +

+

Under a UNIX command prompt, this function can be invoked like this: +

+erl -noshell -s test_server_ctrl run_test KEY1 Value1 KEY2 Value2 ... -s erlang halt

+

Or make an alias (this is for UNIX/tcsh)

+alias erl_test 'erl -noshell -s test_server_ctrl run_test \\!* -s erlang halt'

+

And then use it like this

+erl_test KEY1 Value1 KEY2 Value2 ...

+

+

The valid command line flags are

+
+ DIR dir
+ Adds all test modules in the directory dir to
+ the job queue.
+
+ MODULE mod
+ Adds the module mod to the job queue.
+
+ CASE mod case
+ Adds the case case in module mod to the
+ job queue.
+
+ SPEC spec
+ Runs the test specification file spec.
+
+ SKIPMOD mod
+ Skips all test cases in the module mod.
+ SKIPCASE mod case
+ Skips the test case case in module mod.
+
+ NAME name
+ Names the test suite to something other than the
+ default name. This does not apply to SPEC, which keeps
+ its names.
+
+ PARAMETERS parameterfile
+ Specifies the parameter file to use when starting a
+ remote target.
+
+ COVER app cover_file analyse
+ Indicates that the test should be run with cover
+ analysis. app, cover_file and analyse
+ correspond to the parameters to
+ test_server_ctrl:cover/3. If no cover file is used,
+ the atom none should be given.
+
+ TRACE traceinfofile
+ Specifies a trace information file. When this option
+ is given, call tracing is started on the target node and all
+ slave or peer nodes that are started. The trace information
+ file specifies which modules and functions to trace. See the
+ function trc/1 above for more information about the
+ syntax of this file.
+
+
+
+
+ +
+ FRAMEWORK CALLBACK FUNCTIONS +

A test server framework can be defined by setting the environment variable TEST_SERVER_FRAMEWORK to a module name. This module will then be the framework callback module, and it must export the functions described below. A minimal sketch of such a callback module is shown at the end of this section.

+
+
+
+ get_suite(Mod,Func) -> TestCaseList
+ Get subcases.
+
+ Mod = atom()
+ Func = atom()
+ TestCaseList = [SubCase]
+
+

This function is called before a test case is started. The + purpose is to retrieve a list of subcases. The default + behaviour of this function should be to call + Mod:Func(suite) and return the result from this call.

+
+
+ + init_tc(Mod,Func,Args) -> {ok,Args} + Preparation for a test case. + + Mod = atom() + Func = atom() + Args = [tuple()] + Normally Args = [Config] + + +

This function is called when a test case is started. It is + called on the process executing the test case function + (Mod:Func). Typical use of this function can be to alter + the input parameters to the test case function (Args) or + to set properties for the executing process.

+
+
+ + end_tc(Mod,Func,Args) -> ok + Cleanup after a test case. + + Mod = atom() + Func = atom() + Args = [tuple()] + Normally Args = [Config] + + +

This function is called when a test case is completed. It is + called on the process where the test case function + (Mod:Func) was executed. Typical use of this function can + be to clean up stuff done by init_tc/3.

+
+
+ + report(What,Data) -> ok + Progress report for test. + + What = atom() + Data = term() + + +

This function is called in order to keep the framework up to date about the progress of the test. This is useful e.g. if the framework implements a GUI where the progress information is constantly updated. The following can be reported:

+

What = tests_start, Data = {Name,NumCases}

+What = tests_done, Data = {Ok,Failed,Skipped}

+What = tc_start, Data = {Mod,Func}

+What = tc_done, Data = {Mod,Func,Result}

+
+
+ + error_notification(Mod, Case, Args, Error) -> ok + Inform framework of crashing testcase. + + Mod = atom() + Test suite name. + Case = atom() + Name of test case function. + Args = [tuple()] + Normally Args = [Config] + Error = {Reason,Location} + Reason = term() + Reason for termination. + Location = unknown | [{Mod,Case,Line}] + Last known position in Mod before termination. + Line = integer() + Line number in file Mod.erl. + + +

This function is called as the result of testcase Mod:Case failing + with Reason at Location. The function is intended mainly to aid + specific logging or error handling in the framework application. Note + that for Location to have relevant values (i.e. other than unknown), + the line macro or test_server_line parse transform must + be used. For details, please see the section about test suite line numbers + in the test_server reference manual page.

+
+
+ + warn(What) -> boolean() + Ask framework if test server should issue a warning for What. + + What = processes | nodes + + +

The test server checks the number of processes and nodes before and after the test is executed. This function asks the framework whether the test server should warn when the number of processes or nodes has changed during the test execution. If true is returned, a warning will be written in the test case minor log file.

+
+
+ + target_info() -> InfoStr + Print info about the target system to the test case log. + + InfoStr = string() | "" + + +

The test server will ask the framework for information about + the test target system and print InfoStr in the test case + log file below the host information.
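As an illustration only, a minimal callback module could look roughly like the sketch below. The module name my_framework is made up, and each function simply provides the default behaviour described above:

-module(my_framework).
-export([get_suite/2, init_tc/3, end_tc/3, report/2,
         error_notification/4, warn/1, target_info/0]).

%% return the list of subcases (default: ask the suite itself)
get_suite(Mod, Func) ->
    Mod:Func(suite).

%% called on the test case process before Mod:Func(Args) is executed
init_tc(_Mod, _Func, Args) ->
    {ok, Args}.

%% called on the test case process after Mod:Func has completed
end_tc(_Mod, _Func, _Args) ->
    ok.

%% progress reports, e.g. {tc_start, {Mod,Func}} or {tc_done, {Mod,Func,Result}}
report(_What, _Data) ->
    ok.

%% called when a test case fails with {Reason,Location}
error_notification(_Mod, _Case, _Args, {_Reason, _Location}) ->
    ok.

%% warn about changed number of processes or nodes after a test case
warn(_What) ->
    true.

%% information string printed below the host information in the test case log
target_info() ->
    "".

The framework would then be selected by setting TEST_SERVER_FRAMEWORK=my_framework in the environment before starting the node.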

+
+
+
+