As a first step toward removing test_server as its own separate
application, change the inclusion of test_server.hrl to an inclusion
of ct.hrl and remove the inclusion of test_server_line.hrl.
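In a suite module the change amounts to roughly the following; a minimal sketch, assuming the file only needs the common_test header afterwards:

    %% Before:
    %% -include_lib("test_server/include/test_server.hrl").
    %% -include_lib("test_server/include/test_server_line.hrl").

    %% After: only the common_test header is included.
    -include_lib("common_test/include/ct.hrl").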
|
To troubleshoot the failed test case trace_resumed_after_node_restart
on Windows.
|
Some tests fail every now and then (mostly on Windows) with too few
trace messages in the log. Extend the timer from 200 to 500 ms to see
if this is the reason.
Also remove a compiler warning in ttb_SUITE.
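As an illustration only, with a hypothetical helper name, the timer change amounts to waiting a bit longer before the trace log is inspected:

    %% Give the tracer more time to flush messages to the log before the
    %% test case reads it back; 200 ms was occasionally too short.
    wait_for_trace_flush() ->
        timer:sleep(500).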
|
There is a problem with long paths on Windows, which causes some of
the ttb logs in this suite not to be created. To work around this, the
original priv_dir from the Config is no longer used for writing the
logs. Instead, a new priv_dir is created under the data_dir, which
makes the path much shorter.
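A minimal sketch of the idea, assuming the usual common_test Config proplist; the real suite may name the directory differently:

    shorten_priv_dir(Config) ->
        %% Create a short-path priv dir under data_dir and use it instead
        %% of the long default priv_dir from Config.
        DataDir = proplists:get_value(data_dir, Config),
        PrivDir = filename:join(DataDir, "priv"),
        ok = filelib:ensure_dir(filename:join(PrivDir, "dummy")),
        [{priv_dir, PrivDir} | proplists:delete(priv_dir, Config)].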
There is also a problem caused by the lower resolution of the system
clock on Windows, which makes the test cases for sorting trace
messages fail. To get around this, a sleep of 2 ms is added in
"appropriate places", and the messages sent between client and server
when creating the trace log for these test cases are now better
synchronized.
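For the clock-resolution part, a sketch of the kind of 2 ms gap that keeps consecutive trace events on distinct timestamps; the traced function is a stand-in:

    traced_function(X) -> X.

    generate_trace_events(0) ->
        ok;
    generate_trace_events(N) ->
        traced_function(N),
        timer:sleep(2),   % 2 ms gap gives distinct timestamps on Windows
        generate_trace_events(N - 1).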
The cleanup functions, which terminate slave nodes, were called in
end_per_testcase. However, there seems to be a bug in test_server
which causes this to hang if the test case failed with a
timetrap_timeout. The workaround is to do the cleanup in
init_per_testcase instead, i.e. make sure that nodes which are to be
started by the test case are not already alive when the test case
starts.
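A hedged sketch of the workaround; the node names and the ttb:stop/0 call are illustrative of what such an init_per_testcase might do, not the suite's exact code:

    init_per_testcase(_Case, Config) ->
        %% Make sure nodes left over from a previous, timetrapped run are
        %% gone before this case tries to start them again.
        catch ttb:stop(),
        {ok, Host} = inet:gethostname(),
        [slave:stop(list_to_atom(atom_to_list(Name) ++ "@" ++ Host))
         || Name <- [client, server]],
        Config.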
|
Slave nodes were earlier stopped inside each test case. If a test case
failed before that point, a slave node would survive and might
interfere with the next test case, causing multiple failures. This
commit moves the stopping of slave nodes out to a separate function
per test case, called from end_per_testcase.
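A minimal sketch of the pattern, with hypothetical test case and node names:

    end_per_testcase(Case, _Config) ->
        cleanup(Case),
        ok.

    %% One clause per test case, so slave nodes are stopped even when the
    %% case itself failed before reaching its own cleanup code.
    cleanup(two_nodes) -> stop_nodes([client, server]);
    cleanup(_Case)     -> ok.

    stop_nodes(Names) ->
        {ok, Host} = inet:gethostname(),
        [slave:stop(list_to_atom(atom_to_list(N) ++ "@" ++ Host))
         || N <- Names],
        ok.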
A minor correction is also made in ttb:ensure_no_overloaded_nodes -
the reply message sent back from the ttb process is now tagged, so
that only the expected message is picked from the message queue.
Otherwise, for instance, nodedown messages from the test cases'
monitoring of slave nodes could be received here.
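The selective receive that the tagging enables can be sketched as below; the message tuples are illustrative, not ttb's actual protocol:

    ensure_no_overloaded_nodes_sketch(TtbPid) ->
        Ref = make_ref(),
        TtbPid ! {check_overloaded, self(), Ref},
        receive
            %% Only the reply carrying our tag matches; stray messages
            %% such as {nodedown, Node} stay in the mailbox.
            {overload_status, Ref, Overloaded} ->
                Overloaded
        after 5000 ->
            timeout
        end.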
Finally, the sleep timer used when waiting for trace messages to
arrive over TCP/IP is extended a bit, since test cases sometimes
failed here with missing trace messages.