Similarly to when cover compiling from source
(where some user-specified compiler options are allowed),
when cover compiling from an existing beam file,
take a filtered list of compiler options from the beam file.
This way, options such as export_all can be preserved. See use case in eb02beb1c3
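A minimal sketch of reading and filtering those options from the beam
file; the whitelist below is illustrative, not cover's actual filter:

    %% Sketch: read the compile options recorded in a beam file and
    %% keep only a whitelisted subset (whitelist is illustrative).
    filter_options_from_beam(BeamFile) ->
        {ok, {_Mod, [{compile_info, Info}]}} =
            beam_lib:chunks(BeamFile, [compile_info]),
        Options = proplists:get_value(options, Info, []),
        [Opt || Opt <- Options,
                lists:member(Opt, [export_all, debug_info])].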
|
When cover:stop(Node) was called on a non-existing node, a process
waiting for cover data from the node would hang forever. This has been
corrected.
|
Whenever a module is compiled via compile:forms/2,
the source is set to the current directory unless a source
option is passed to compile. This commit ensures that
cover passes the source information to compile:forms/2
so that the source won't be modified after the module
is cover compiled.
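A sketch of what passing the source along might look like (the
function name and surrounding handling are illustrative):

    %% Sketch: keep the original source in module_info(compile) by
    %% passing it explicitly to compile:forms/2.
    cover_compile(Forms, SourceFile) ->
        {ok, Module, Binary} =
            compile:forms(Forms, [binary, debug_info, {source, SourceFile}]),
        {Module, Binary}.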
|
Prior to this commit, cover relied on a simple heuristic that
traverses directories from the beam file to find a source file.
This patch keeps the heuristic but, if it fails,
falls back to the source value in the module compile info.
To illustrate how it works, one of the tests that
could not find its source now passes successfully (showing the
source lookup is more robust).
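A sketch of the lookup order, where source_from_heuristic/1 is a
hypothetical stand-in for the existing directory traversal:

    %% Sketch: try the directory heuristic first, then fall back to
    %% the source recorded in the module compile info.
    find_source(Module, BeamFile) ->
        case source_from_heuristic(BeamFile) of  % hypothetical helper
            {ok, Source} ->
                Source;
            error ->
                proplists:get_value(source, Module:module_info(compile))
        end.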
|
If one test failed, the next would sometimes also fail since cover was
not stopped.
|
Add a timer:sleep between these two lines:
net_kernel:disconnect(N1),
[] = cover:which_nodes(),
This makes sure the disconnect is detected by cover before
checking that the node is gone.
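The resulting sequence looks like this (the sleep length here is an
arbitrary choice for illustration):

    net_kernel:disconnect(N1),
    %% Give cover time to handle the nodedown before checking.
    timer:sleep(100),
    [] = cover:which_nodes(),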
|
A node that was stopped with cover:stop/1 while marked as lost would
not be removed from the list of lost nodes. Therefore, if a nodeup was
later received for a node with the same name, it would be
reconnected. This has been corrected.
|
Nodes that were stopped with cover:stop/1 were marked as lost and
would be reconnected if a nodeup was later received for a node with
the same name. This has been corrected.
|
OTP-10523
Earlier, if the connection to a remote cover node was lost, all cover
data was lost and the cover_server on the remote node would die. This
caused problems if cover-compiled modules were still being executed,
since they would attempt to write to the no longer existing ets tables
belonging to the cover_server.
This commit changes this behavior so that the cover_server on the
remote node will survive connection loss and continue collecting cover
data. If the connection is re-established, the main node will sync
with the remote node again and cover data will not be lost (unless the
node was down).
|
OTP-10523
* Added cover:flush(Nodes), which will fetch data from remote nodes
without stopping cover on those nodes.
* Added cover:get_main_node(), which returns the node name of the main
node. This is used by test_server to avoid {error,not_main_node}
when a slave starts another slave (e.g. in test_server's own tests).
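A usage sketch of the two new functions, assuming Nodes is a list of
remote nodes already running cover:

    %% Sketch: fetch data from the remote nodes without stopping
    %% cover on them, then ask for the cover main node.
    flush_and_check(Nodes) ->
        ok = cover:flush(Nodes),
        cover:get_main_node().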
|
Funs are identified by a triple, <Module,Uniq,Index>, where Module is
the module name, Uniq is a 27 bit hash value of some intermediate
representation of the code for the fun, and Index is a small integer.
When a fun is loaded, its triple is compared to those of previously
loaded funs. If all elements of the triple are the same, the newly
loaded fun replaces the previous fun. The idea is that if Uniq is the
same, the code for the fun is also the same.
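For illustration, the elements of this triple can be inspected at
runtime with erlang:fun_info/2:

    %% Sketch: inspect the identifying triple of a fun.
    inspect_triple(F) when is_function(F) ->
        {module, Module} = erlang:fun_info(F, module),
        {uniq, Uniq} = erlang:fun_info(F, uniq),
        {index, Index} = erlang:fun_info(F, index),
        {Module, Uniq, Index}.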
The problem is that Uniq is only based on the intermediate representation
of the fun itself. If the fun calls local functions in the same module,
Uniq may remain the same even if the behavior of the fun has been changed.
See
http://erlang.org/pipermail/erlang-bugs/2007-June/000368.html
for an example.
As a long-term plan to fix this problem, the NewIndex and NewUniq
fields were added to each fun in the R8 release (where NewUniq is the
MD5 of the BEAM code for the module). Unfortunately, it turns
out that the compiler does not assign unique values to NewIndex (if it
isn't tested, it doesn't work), so we cannot use the
<Module,NewUniq,NewIndex> triple as identification.
It would be possible to use <Module,NewUniq,Index>, but that seems
ugly. Therefore, fix the problem by making Uniq more unique by
taking 27 bits from the MD5 for the BEAM code. That only requires
a change to the compiler.
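One way to derive such a value, assuming the 27 bits are simply taken
from the front of the digest (which bits the compiler actually uses is
not spelled out here):

    %% Sketch: take 27 bits from the MD5 of the BEAM code.
    uniq_from_beam_code(BeamCode) when is_binary(BeamCode) ->
        <<Uniq:27, _/bitstring>> = erlang:md5(BeamCode),
        Uniq.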
Also update a test case for cover, which now fails because of the
stronger Uniq calculation. (The comment in the test case about why the
Pid2 process survived is not correct.)
|
Make the cover module handle files as raw in export and import.
Assert that the counts of ports are the same at the beginning
and at the end of the test case.
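A sketch of that assertion, with the export file name as a
placeholder:

    %% Sketch: assert that cover export/import do not leak ports.
    port_leak_check(ExportFile) ->
        N0 = length(erlang:ports()),
        ok = cover:export(ExportFile),
        ok = cover:import(ExportFile),
        N0 = length(erlang:ports()),  % badmatch here means a leak
        ok.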
|
* lukas/tools/cover_mem_footprint/OTP-9043:
Fix spelling on analyse
Add short sleep to prevent timing issues on slow machines
Update cover tests which depend on compiled files to be skipped if the compile testcase is skipped
Conflicts:
lib/tools/test/cover_SUITE.erl
|
Update cover tests which depend on compiled files to be skipped if the compile testcase is skipped
|
* lukas/tools/cover_mem_footprint/OTP-9043:
Update testcases which need crypto to be skipped on platforms which do not have crypto
Update internal pmap to have a process limit (see the sketch after this list)
Add write concurrency to cover masters ?COVER_TABLE
Update documentation to reflect performance enhancement changes of cover
Add async_analyse_to_file function to cover
Split the cover ets tables into two tables, one with the clause info and one with the bump info. This will make it faster to search the tables when analyzing and exporting data.
Add process debug tags
Update remote collect to handle multiple requests at once
Remove io printout warnings when exporting an imported module
Make the call to cover parallel so that the test_server takes advantage of the new cool parallel cover features.
Update cover to allow multiple analyse and analyse_to_file calls at the same time. For each call a separate process will be spawned to handle the request.
Refactor cover to prepare it for making analysis parallel
Update remote loading to only load a certain number of modules at a time to prevent memory usage explosion
Conflicts:
lib/tools/test/cover_SUITE.erl
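The pmap item above refers to a process limit; a bounded parallel map
could look roughly like this sketch (an illustration of the idea, not
cover's actual implementation):

    %% Sketch: map over List in parallel, running at most Limit
    %% workers at once, by processing the list in chunks of Limit.
    pmap(Fun, List, Limit) when Limit > 0 ->
        pmap_chunks(Fun, List, Limit, []).

    pmap_chunks(_Fun, [], _Limit, Acc) ->
        lists:append(lists:reverse(Acc));
    pmap_chunks(Fun, List, Limit, Acc) ->
        {Chunk, Rest} =
            if length(List) =< Limit -> {List, []};
               true -> lists:split(Limit, List)
            end,
        Refs = [element(2, spawn_monitor(fun() -> exit({ok, Fun(X)}) end))
                || X <- Chunk],
        Results = [receive {'DOWN', Ref, process, _, {ok, R}} -> R end
                   || Ref <- Refs],
        pmap_chunks(Fun, Rest, Limit, [Results | Acc]).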
|
Update testcases which need crypto to be skipped on platforms which do not have crypto
|
Split the cover ets tables into two tables, one with the clause info
and one with the bump info. This will make it faster to search the
tables when analyzing and exporting data.
Also made cover export more parallel in how data is collected from the
different nodes and how data is read from ets. This should make the
performance of cover much better on machines with multiple CPUs.
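A sketch of such a split; the table names and options here are
illustrative, not cover's actual ones (write_concurrency on the bump
table matches the merge listed above):

    %% Sketch: one table for clause info, one for bump counters.
    init_tables() ->
        ClauseTab = ets:new(cover_clause_table, [set, protected]),
        BumpTab = ets:new(cover_table,
                          [set, public, {write_concurrency, true}]),
        {ClauseTab, BumpTab}.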
|