2006-2011 Ericsson AB. All Rights Reserved. The contents of this file are subject to the Erlang Public License, Version 1.1, (the "License"); you may not use this file except in compliance with the License. You should have received a copy of the Erlang Public License along with this software. If not, it can be retrieved online at http://www.erlang.org/. Software distributed under the License is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License.

Event Handling

Peter Andersson
General

It is possible for the operator of a Common Test system to receive event notifications continuously during a test run. For example, it is reported when a test case starts and stops, and what the current count of successful, failed and skipped cases is. This information can be used for various purposes, such as logging progress and results in a format other than HTML, saving statistics to a database for report generation, and test system supervision.

Common Test has a framework for event handling which is based on the OTP event manager concept and gen_event behaviour. When the Common Test server starts, it spawns an event manager. During test execution the manager gets a notification from the server every time something of potential interest happens. Any event handler plugged into the event manager can match on events of interest, take action, or simply pass the information on. Event handlers are Erlang modules implemented by the Common Test user according to the gen_event behaviour (see the OTP User's Guide and Reference Manual for more information).

As already described, a Common Test server always starts an event manager. The server also plugs in a default event handler which has as its only purpose to relay notifications to a globally registered CT Master event manager (if a CT Master server is running in the system). The CT Master also spawns an event manager at startup. Event handlers plugged into this manager will receive the events from all the test nodes as well as information from the CT Master server itself.

Usage

Event handlers may be installed by means of an event_handler start flag (ct_run) or option (ct:run_test/1), where the argument specifies the names of one or more event handler modules. Example:

$ ct_run -suite test/my_SUITE -event_handler handlers/my_evh1 handlers/my_evh2 -pa $PWD/handlers

Use the event_handler_init start flag (or option) instead of event_handler to pass start arguments to the event handler init function.

All event handler modules must implement the gen_event behaviour. Note also that these modules must be precompiled, and that their locations must be added explicitly to the Erlang code server search path (as in the example above).
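As an illustration, the following is a minimal sketch of such an event handler module. The module name my_evh1 matches the example above; the printout format is our own choice and not mandated by Common Test:

```erlang
%% my_evh1.erl -- minimal sketch of a Common Test event handler
-module(my_evh1).
-behaviour(gen_event).

-export([init/1, handle_event/2, handle_call/2,
         handle_info/2, terminate/2, code_change/3]).

%% InitArgs is [] unless start arguments are passed explicitly.
init(InitArgs) ->
    {ok, InitArgs}.

%% Print every event; a real handler would match on specific events.
handle_event(Event, State) ->
    io:format("Event received: ~p~n", [Event]),
    {ok, State}.

handle_call(_Request, State) ->
    {ok, ok, State}.

handle_info(_Info, State) ->
    {ok, State}.

terminate(_Arg, _State) ->
    ok.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.
```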

An event_handler tuple in the argument Opts has the following definition (see also ct:run_test/1 in the reference manual):

    {event_handler,EventHandlers}

    EventHandlers = EH | [EH]
    EH = atom() | {atom(),InitArgs} | {[atom()],InitArgs}
    InitArgs = [term()]

Example:

    1> ct:run_test([{suite,"test/my_SUITE"},{event_handler,[my_evh1,{my_evh2,[node()]}]}]).

This will install two event handlers for the my_SUITE test. Event handler my_evh1 is started with [] as argument to the init function. Event handler my_evh2 is started with the name of the current node in the init argument list.

Event handlers can also be plugged in by means of test specification terms:

{event_handler, EventHandlers}, or

{event_handler, EventHandlers, InitArgs}, or

{event_handler, NodeRefs, EventHandlers}, or

{event_handler, NodeRefs, EventHandlers, InitArgs}

EventHandlers is a list of module names. Before a test session starts, the init function of each plugged in event handler is called (with the InitArgs list as argument or [] if no start arguments are given).

To plug a handler into the CT Master event manager, specify master as the node in NodeRefs.

For an event handler to be able to match on events, the module must include the header file ct_event.hrl. An event is a record with the following definition:

#event{name, node, data}

name is the label (type) of the event. node is the name of the node the event has originated from (only relevant for CT Master event handlers). data is specific for the particular event.
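A handle_event/2 clause that matches on a particular event could be sketched like this, assuming the header file is included as described above (the printout is our own choice):

```erlang
-include_lib("common_test/include/ct_event.hrl").

%% Match tc_start events; let everything else pass through.
handle_event(#event{name = tc_start, node = Node,
                    data = {Suite, FuncOrGroup}}, State) ->
    io:format("~w: ~w:~w starting~n", [Node, Suite, FuncOrGroup]),
    {ok, State};
handle_event(#event{}, State) ->
    {ok, State}.
```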

General events:

#event{name = start_logging, data = LogDir}

LogDir = string(), top level log directory for the test run.

Indicates that the logging process of Common Test has started successfully and is ready to receive IO messages.

#event{name = stop_logging, data = []}

Indicates that the logging process of Common Test has been shut down at the end of the test run.

#event{name = test_start, data = {StartTime,LogDir}}

StartTime = {date(),time()}, test run start date and time.

LogDir = string(), top level log directory for the test run.

This event indicates that Common Test has finished initial preparations and will begin executing test cases.

#event{name = test_done, data = EndTime}

EndTime = {date(),time()}, date and time the test run finished.

This indicates that the last test case has been executed and Common Test is shutting down.

#event{name = start_info, data = {Tests,Suites,Cases}}

Tests = integer(), the number of tests.

Suites = integer(), the total number of suites.

Cases = integer() | unknown, the total number of test cases.

Initial test run information that can be interpreted as: "This test run will execute Tests separate tests, in total containing Cases number of test cases, in Suites number of suites". Note that if a test case group with a repeat property exists in any test, the total number of test cases cannot be calculated (unknown).

#event{name = tc_start, data = {Suite,FuncOrGroup}}

Suite = atom(), name of the test suite.

FuncOrGroup = Func | {Conf,GroupName,GroupProperties}

Func = atom(), name of test case or configuration function.

Conf = init_per_group | end_per_group, group configuration function.

GroupName = atom(), name of the group.

GroupProperties = list(), list of execution properties for the group.

This event informs about the start of a test case, or a group configuration function. The event is also sent for init_per_suite and end_per_suite, but not for init_per_testcase and end_per_testcase. If a group configuration function is starting, the group name and execution properties are also given.

#event{name = tc_done, data = {Suite,FuncOrGroup,Result}}

Suite = atom(), name of the suite.

FuncOrGroup = Func | {Conf,GroupName,GroupProperties}

Func = atom(), name of test case or configuration function.

Conf = init_per_group | end_per_group, group configuration function.

GroupName = unknown | atom(), name of the group (unknown if init- or end function times out).

GroupProperties = list(), list of execution properties for the group.

Result = ok | {skipped,SkipReason} | {failed,FailReason}, the result.

SkipReason = {require_failed,RequireInfo} | {require_failed_in_suite0,RequireInfo} | {failed,{Suite,init_per_testcase,FailInfo}} | UserTerm, the reason why the case has been skipped.

FailReason = {error,FailInfo} | {error,{RunTimeError,StackTrace}} | {timetrap_timeout,integer()} | {failed,{Suite,end_per_testcase,FailInfo}}, reason for failure.

RequireInfo = {not_available,atom()}, why require has failed.

FailInfo = {timetrap_timeout,integer()} | {RunTimeError,StackTrace} | UserTerm, detailed information about an error.

RunTimeError = term(), a run-time error, e.g. badmatch, undef, etc.

StackTrace = list(), list of function calls preceding a run-time error.

UserTerm = term(), arbitrary data specified by user, or exit/1 info.

This event informs about the end of a test case or a configuration function (see the tc_start event for details on the FuncOrGroup element). With this event comes the final result of the function in question. At the top level of Result it is possible to determine whether the function was successful, skipped (by the user), or failed. It is of course possible to dig deeper and also perform pattern matching on the various reasons for skipped or failed results. Note that {'EXIT',Reason} tuples have been translated into {error,Reason}. Note also that if a {failed,{Suite,end_per_testcase,FailInfo}} result is received, it actually means the test case was successful, but that end_per_testcase for the case failed.
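Classifying tc_done results in a handler clause could be sketched as follows. The printouts are our own choice, and this is only one way to branch on the Result term:

```erlang
%% Sketch: branching on the Result element of a tc_done event.
handle_event(#event{name = tc_done,
                    data = {Suite, Func, Result}}, State) ->
    case Result of
        ok ->
            io:format("~w:~w ok~n", [Suite, Func]);
        {skipped, {require_failed, {not_available, Key}}} ->
            io:format("~w:~w skipped, ~w not available~n",
                      [Suite, Func, Key]);
        {skipped, Reason} ->
            io:format("~w:~w skipped: ~p~n", [Suite, Func, Reason]);
        {failed, {Suite, end_per_testcase, _FailInfo}} ->
            %% the test case itself was successful
            io:format("~w:~w ok, but end_per_testcase failed~n",
                      [Suite, Func]);
        {failed, Reason} ->
            io:format("~w:~w failed: ~p~n", [Suite, Func, Reason])
    end,
    {ok, State};
handle_event(_Event, State) ->
    {ok, State}.
```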

#event{name = tc_auto_skip, data = {Suite,Func,Reason}}

Suite = atom(), the name of the suite.

Func = atom(), the name of the test case or configuration function.

Reason = {failed,FailReason} | {require_failed_in_suite0,RequireInfo}, reason for auto skipping Func.

FailReason = {Suite,ConfigFunc,FailInfo} | {Suite,FailedCaseInSequence}, reason for failure.

RequireInfo = {not_available,atom()}, why require has failed.

ConfigFunc = init_per_suite | init_per_group

FailInfo = {timetrap_timeout,integer()} | {RunTimeError,StackTrace} | bad_return | UserTerm, detailed information about an error.

FailedCaseInSequence = atom(), name of a case that has failed in a sequence.

RunTimeError = term(), a run-time error, e.g. badmatch, undef, etc.

StackTrace = list(), list of function calls preceding a run-time error.

UserTerm = term(), arbitrary data specified by user, or exit/1 info.

This event gets sent for every test case or configuration function that Common Test has skipped automatically because of either a failed init_per_suite or init_per_group, a failed require in suite/0, or a failed test case in a sequence. Note that this event is never received as a result of a test case getting skipped because of init_per_testcase failing, since that information is carried with the tc_done event.

#event{name = tc_user_skip, data = {Suite,TestCase,Comment}}

Suite = atom(), name of the suite.

TestCase = atom(), name of the test case.

Comment = string(), reason for skipping the test case.

This event specifies that a test case has been skipped by the user. It is only ever received if the skip was declared in a test specification. Otherwise, user skip information is received as a {skipped,SkipReason} result in the tc_done event for the test case.
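A user skip of this kind originates from a skip term in the test specification, for example (the directory and comment below are made up for illustration):

```erlang
%% Skip a whole suite, with a comment that ends up in the event:
{skip_suites, "test", my_SUITE, "not relevant on this platform"}.
```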

#event{name = test_stats, data = {Ok,Failed,Skipped}}

Ok = integer(), the current number of successful test cases.

Failed = integer(), the current number of failed test cases.

Skipped = {UserSkipped,AutoSkipped}

UserSkipped = integer(), the current number of user skipped test cases.

AutoSkipped = integer(), the current number of auto skipped test cases.

This is a statistics event with the current count of successful, skipped and failed test cases so far. This event gets sent after the end of each test case, immediately following the tc_done event.
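A handler clause that reports running progress from this event could be sketched like this (the printout format is our own choice):

```erlang
%% Sketch: printing a running total from test_stats events.
handle_event(#event{name = test_stats,
                    data = {Ok, Failed, {UserSkipped, AutoSkipped}}},
             State) ->
    io:format("progress: ~w ok, ~w failed, ~w skipped~n",
              [Ok, Failed, UserSkipped + AutoSkipped]),
    {ok, State};
handle_event(_Event, State) ->
    {ok, State}.
```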

Internal events:

#event{name = start_make, data = Dir}

Dir = string(), running make in this directory.

An internal event saying that Common Test will start compiling modules in directory Dir.

#event{name = finished_make, data = Dir}

Dir = string(), finished running make in this directory.

An internal event saying that Common Test is finished compiling modules in directory Dir.

#event{name = start_write_file, data = FullNameFile}

FullNameFile = string(), full name of the file.

An internal event used by the Common Test Master process to synchronize particular file operations.

#event{name = finished_write_file, data = FullNameFile}

FullNameFile = string(), full name of the file.

An internal event used by the Common Test Master process to synchronize particular file operations.

The events are also documented in ct_event.erl. This module may serve as an example of what an event handler for the CT event manager can look like.

To ensure that printouts to standard out (or printouts made with ct:log/2/3 or ct:pal/2/3) get written to the test case log file, and not to the Common Test framework log, you can synchronize with the Common Test server by matching on the tc_start and tc_done events. In the period between these events, all IO gets directed to the test case log file. These events are sent synchronously to avoid potential timing problems (e.g. that the test case log file gets closed just before an IO message from an external process gets through). Because of this, you need to be careful that your handle_event/2 callback function does not stall the test execution, possibly causing unexpected behaviour as a result.
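One way to avoid stalling these synchronous notifications is to hand expensive work off to a separate process and return from handle_event/2 immediately. The worker process in this sketch is our own invention, not part of the Common Test API:

```erlang
%% Offload slow work (e.g. database writes) to a worker process so
%% that the synchronous tc_start/tc_done notifications return quickly.
init(_InitArgs) ->
    Worker = spawn_link(fun loop/0),
    {ok, Worker}.

handle_event(Event = #event{name = tc_done}, Worker) ->
    Worker ! {store, Event},    %% asynchronous send, returns immediately
    {ok, Worker};
handle_event(_Event, Worker) ->
    {ok, Worker}.

loop() ->
    receive
        {store, _Event} ->
            %% the time-consuming operation would go here
            loop()
    end.
```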