|
Ensure that we cannot get any dangling pointers into code that
has been purged. This is done by a two-phase purge. In the first
phase, all fun entries pointing into the code to purge are marked
for purge. Any process trying to call one of these funs is
suspended, and by this we avoid getting new direct references
into the code. When all processes have been checked, the
suspended processes are resumed.
The new purge strategy also completely ignores the existence of
indirect references to the code (funs). If such references exist,
they will cause badfun exceptions in the caller, but they will
not prevent a soft purge or cause a process holding such a live
reference to be killed during a hard purge. This is because it is
impossible to guarantee that no processes in the system have such
indirect references. Even when the system is completely clean of
such references, new ones can appear via distribution and/or disk.
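A minimal sketch of the resulting behavior (module and function
names are hypothetical): a fun referring to old code no longer
blocks a soft purge, and calling it afterwards raises badfun:
    F = old_mod:make_fun(),             %% fun into the current code
    {module,old_mod} = code:load_file(old_mod), %% current becomes old
    true = code:soft_purge(old_mod),    %% succeeds despite live F
    {'EXIT',{{badfun,F},_}} = (catch F()).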
|
|
Those clauses are obsolete and never used by common_test.
|
|
The macro ?t is deprecated. Replace its use with 'test_server'.
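For example (a hypothetical call site; ?t expands to the
test_server module in test_server.hrl):
    %% Before, using the deprecated macro:
    ?t:sleep(1000),
    %% After, calling the module directly:
    test_server:sleep(1000),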
|
|
|
The expression in a bit string comprehension is limited to a
literal bit string expression. That is, the following code
is legal:
<< <<X>> || X <- List >>
but not this code:
<< foo(X) || X <- List >>
The limitation is annoying. For one thing, tools that transform
the abstract format must be careful not to produce code such as:
<< begin
     %% Some instrumentation code.
     <<X>>
   end || X <- List >>
One reason for the limitation could be that we'll get
reduce/reduce conflicts if we try to allow an arbitrary
expression in a bit string comprehension:
binary_comprehension -> '<<' expr '||' lc_exprs '>>' :
{bc,?anno('$1'),'$2','$4'}.
Unfortunately, there does not seem to be an easy way to work
around that problem. The best we can do is to allow 'expr_max'
expressions (as in the binary syntax):
binary_comprehension -> '<<' expr_max '||' lc_exprs '>>' :
{bc,?anno('$1'),'$2','$4'}.
That will work, but function calls must be enclosed in
parentheses:
<< (foo(X)) || X <- List >>
|
|
As a first step towards removing the test_server application as
its own separate application, change the inclusion of
test_server.hrl to an inclusion of ct.hrl and remove the
inclusion of test_server_line.hrl.
|
|
cover:compile_beam and cover:compile_beam_directory crashed when
trying to compile a beam file without a 'file' attribute. This has
been corrected, so an error is returned instead.
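A sketch of the corrected behavior (the exact error reason shown
is an assumption, not taken from this commit):
    case cover:compile_beam("no_file_attr.beam") of
        {ok, _Module}   -> ok;
        {error, Reason} -> io:format("cover error: ~p~n", [Reason])
    end,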
|
|
Functions in this module are never called, and some functions are
outdated. In order to avoid confusion (and the need for updates),
the module is now reduced to a simple dummy module.
|
|
If a module includes eunit.hrl, a parse transform adds the function
test/0 on line 0 in the module. A bug in OTP-18.0 caused
cover:analyse_to_file/1 to fail to insert cover data in the output
file when line 0 existed in the cover data table. The bug is corrected
by the commit "Fix cover output file".
This commit adds a test which checks that the bug is not introduced
again.
|
|
It is possible that not only the source but even the beam file of
a module is unavailable when calling analyse_to_file, for example
when cover data is imported from an old file and the module has
since been removed.
Before this fix, cover:analyse_to_file/3 could hang forever
because a helper process crashed with error:undef and never
replied to the caller.
The helper process is now also linked to the cover_server so that
any further error will not leave the caller waiting indefinitely.
|
|
Add functions for cover compilation and analysis on multiple
files. This allows for more parallelisation.
All functions for cover compilation can now take a list of
modules/files.
cover:analyse/analyze and cover:analyse_to_file/analyze_to_file can be
called without the Modules argument in order to analyse all cover
compiled and imported modules, or with a list of modules.
The number of lookups in ets tables is also reduced, which improves
performance when analysing and resetting cover data.
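A sketch of the extended API (module names are hypothetical and
the return shapes are assumptions based on the description above):
    %% Cover compile several modules in one call:
    Results = cover:compile_beam([m1, m2, m3]),
    %% Analyse all cover compiled and imported modules:
    {result, _Ok, _Fail} = cover:analyse(),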
|
|
The reconnect/1 test starts another node and takes down the connection
to it for a while. However, it has been observed in our daily builds
that sometimes a "spontaneous" re-connection occurs.
I think that this is because of the call to rpc:cast/3, which
internally will set the group leader for the remote process to a
process on the test_server node. It seems that sometimes the
group_leader/2 call will re-establish the connection. Unfortunately,
I have not been able to force this to happen by inserting delays in
the rpc module, so it is still just a hypothesis.
However, letting the other node do the disconnection does seem to fix
the problem, and intuitively that should be a safer way to do it
because the group_leader/2 call and the disconnection will be
executed sequentially on the same node instead of concurrently from
two different nodes.
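A minimal sketch of the safer approach (Node is illustrative):
ask the remote node to disconnect itself, so that the
group_leader/2 call and the disconnection run sequentially there:
    rpc:call(Node, erlang, disconnect_node, [node()]),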
|
|
We used to skip the entire test suite if the cover server had already
been started by common_test, but that means that we will get bad
coverage for the cover module.
Modify the test suite to run all test cases that don't explicitly
start or stop the cover server to increase the coverage. In addition,
add a special coverage_analysis/1 test case that only runs when
the cover server is already running.
|
|
Similarly to cover compiling from source (where some user-specified
compiler options are allowed), when cover compiling from an existing
beam file, take a filtered list of compiler options from the beam
file. This way, e.g. export_all can be preserved. See the use case
in eb02beb1c3.
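A sketch of reading the recorded compiler options from a beam
file (the filtering shown here is illustrative, not the actual
filter used by cover):
    {ok,{_Mod,[{compile_info,Info}]}} =
        beam_lib:chunks("m.beam", [compile_info]),
    Opts = proplists:get_value(options, Info, []),
    Filtered = [O || O <- Opts, O =:= export_all],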
|
|
When cover:stop(Node) was called on a non-existing node, a process
waiting for cover data from the node would hang forever. This has been
corrected.
|
|
Whenever a module is compiled via compile:forms/2, the source is
set to the current directory unless a source option is passed to
compile. This commit ensures that cover passes the source
information to compile:forms/2, so that the source information is
not modified after the module is cover compiled.
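A minimal sketch, assuming Forms is the abstract code and
SourcePath the original source file (both names illustrative):
    {ok,_Mod,_Bin} =
        compile:forms(Forms, [binary, {source, SourcePath}]),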
|
|
Prior to this commit, cover relied on a simple heuristic that
traverses directories from the beam file's location to find a
source file. The heuristic is kept with this patch but, if it
fails, cover falls back to the source value in the module's
compile info.
To illustrate how it works, one of the tests that previously could
not find its source now passes successfully (showing that the
source lookup is more robust).
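A sketch of the fallback (Mod is illustrative): the source path
recorded at compile time is available in the module's compile
info:
    Info = Mod:module_info(compile),
    Source = proplists:get_value(source, Info),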
|
|
If one test failed, the next would sometimes also fail since cover was
not stopped.
|
|
Adding a timer:sleep between these two lines:
net_kernel:disconnect(N1),
[] = cover:which_nodes(),
This is to make sure the disconnect is detected by cover before
checking that the node is gone.
|
|
A node that was stopped with cover:stop/1 while marked as lost would
not be removed from the list of lost nodes. Therefore, if a nodeup was
later received for a node with the same name, it would be
reconnected. This has been corrected.
|
|
Nodes that were stopped with cover:stop/1 were marked as lost and
would be reconnected if a nodeup was later received for a node with
the same name. This has been corrected.
|
|
OTP-10523
Earlier, if the connection to a remote cover node was lost, all cover
data was lost and the cover_server on the remote node would die. This
would cause problems if there were cover compiled modules that would
still be executed since they would attempt to write to the no longer
existing ets tables belonging to the cover_server.
This commit changes this behavior so that the cover_server on the
remote node will survive connection loss and continue collecting cover
data. If the connection is re-established then the main node will sync
with the remote node again and cover data will not be lost (unless the
node was down).
|
|
OTP-10523
* Added cover:flush(Nodes), which will fetch data from remote nodes
without stopping cover on those nodes.
* Added cover:get_main_node(), which returns the node name of the main
node. This is used by test_server to avoid {error,not_main_node}
when a slave starts another slave (e.g. in test_server's own tests).
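A sketch of the new calls (node names are illustrative):
    ok = cover:flush([Node1, Node2]), %% fetch data, keep cover running
    MainNode = cover:get_main_node(),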
|
|
Funs are identified by a triple, <Module,Uniq,Index>, where Module is
the module name, Uniq is a 27 bit hash value of some intermediate
representation of the code for the fun, and Index is a small integer.
When a fun is loaded, the triple for the fun will be compared to
previously loaded funs. If all elements in the triple in the newly
loaded fun are the same, the newly loaded fun will replace the previous
fun. The idea is that if Uniq are the same, the code for the fun is also
the same.
The problem is that Uniq is only based on the intermediate representation
of the fun itself. If the fun calls local functions in the same module,
Uniq may remain the same even if the behavior of the fun has been changed.
See
http://erlang.org/pipermail/erlang-bugs/2007-June/000368.html
for an example.
As a long-term plan to fix this problem, the NewIndex and NewUniq
fields were added to each fun in the R8 release (where NewUniq is
the MD5 of the BEAM code for the module). Unfortunately, it turns
out that the compiler does not assign a unique value to NewIndex
(if it isn't tested, it doesn't work), so we cannot use the
<Module,NewUniq,NewIndex> triple as identification.
It would be possible to use <Module,NewUniq,Index>, but that seems
ugly. Therefore, fix the problem by making Uniq more unique by
taking 27 bits from the MD5 for the BEAM code. That only requires
a change to the compiler.
Also update a test case for cover, which now fails because of the
stronger Uniq calculation. (The comment in test case about why the
Pid2 process survived is not correct.)
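A hypothetical illustration of the problem (module and names
invented): the fun below is unchanged between two versions of the
module, but the local function it calls is not, so the old Uniq,
hashed only from the fun's own intermediate code, could stay the
same even though the fun's behavior changed:
    -module(m).
    -export([f/0]).
    f() -> fun() -> helper() end.
    %% Changing only this body does not change the fun's own code:
    helper() -> old_value.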
|
|
The cover module now handles files as raw in export and import.
Assert that the counts of ports are the same at the beginning
and at the end of the test case.
|
|
* lukas/tools/cover_mem_footprint/OTP-9043:
Fix spelling on analyse
Add short sleep to prevent timing issues on slow machines
Update cover tests which depend on compiled files to be skipped if the compile testcase is skipped
Conflicts:
lib/tools/test/cover_SUITE.erl
|
|
Update cover tests which depend on compiled files to be skipped if
the compile testcase is skipped
|
|
* lukas/tools/cover_mem_footprint/OTP-9043:
Update testcases which need crypto to be skipped on platforms which do not have crypto
Update internal pmap to have a process limit
Add write concurrency to the cover master's ?COVER_TABLE
Update documentation to reflect performance enhancement changes of cover
Add async_analyse_to_file function to cover
Split the cover ets tables into two tables, one with the clause info and one with the bump info. This will make it faster to search the tables when analyzing and exporting data.
Add process debug tags
Update remote collect to handle multiple requests at once
Remove io printout warnings when exporting an imported module
Make the call to cover parallel so that the test_server takes advantage of the new cool parallel cover features.
Update cover to allow multiple analyse and analyze_to_file calls at the same time. For each call a separate process will be spawned to handle the request.
Refactor cover to prepare it for making analysis parallel
Update remote loading to only load a certain number of modules at a time to prevent memory usage explosion
Conflicts:
lib/tools/test/cover_SUITE.erl
|
|
Update testcases which need crypto to be skipped on platforms which
do not have crypto
|