Either rely on the default 30-minute timetrap, or set the timeout
using the methods supported by common_test.
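For reference, a minimal sketch of the supported ways to set a timetrap in a
common_test suite; the module and test case names (example_SUITE, my_case) are
illustrative only:

    -module(example_SUITE).
    -include_lib("common_test/include/ct.hrl").
    -export([suite/0,all/0,my_case/0,my_case/1]).

    %% Default timetrap for every test case in the suite.
    suite() ->
        [{timetrap,{minutes,5}}].

    all() ->
        [my_case].

    %% Per-testcase timetrap, set via the test case info function.
    my_case() ->
        [{timetrap,{seconds,30}}].

    my_case(_Config) ->
        ok.
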
As a first step towards removing the test_server application as
its own separate application, change the inclusion of
test_server.hrl to an inclusion of ct.hrl and remove the
inclusion of test_server_line.hrl.
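In a typical suite the change amounts to roughly the following (the exact
include forms varied between suites; this is a sketch):

    %% Before:
    -include("test_server.hrl").
    -include("test_server_line.hrl").

    %% After:
    -include_lib("common_test/include/ct.hrl").
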
I have spent too much time lately waiting for 'cover' to finish,
so now it's time to optimize the running time of the test suites
in coverage mode.

Basically, when 'cover' is running, the test suites do not run any
tests in parallel, because measurements showed that using too many
parallel processes under 'cover' was slower than running the cases
sequentially. But those measurements were made several years ago,
and the parallelism of the run-time system has improved considerably
since then.

Experimenting with the test_lib:p_run/2 function, I found that
increasing the number of parallel processes would speed up the
self_compile test cases in compilation_SUITE. The difference
between using 3 and 4 processes was slight, though, so it seems
that we should not use more than 4 processes when running 'cover'.

We don't want to change test_lib:parallel/0, because there is
no way to limit the number of test cases that common_test will run
in parallel. However, there are test suites (such as andor_SUITE)
that don't invoke the compiler at run-time. The cases in such test
suites can run in parallel even when 'cover' is running.
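A rough sketch of the worker-count logic described above; p_run_workers/0 is a
hypothetical name, not an existing function in test_lib, and the actual
implementation may differ:

    %% Pick the number of worker processes for test_lib:p_run/2,
    %% capping it at 4 when running under 'cover', since more
    %% processes gave no further speed-up in the measurements above.
    p_run_workers() ->
        N = erlang:system_info(schedulers),
        case test_server:is_cover() of
            true -> min(N, 4);
            false -> N
        end.
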
This complements 933e701 (OTP-10209). Without this patch, the test cases
in_guard/1 and coerce_to_float/1 in bs_construct_SUITE fail.

The added lines in bs_construct_SUITE cover all branches that were not
covered before (small and big numbers when BIT_OFFSET(erts_bin_offset) != 0).
Running test cases in parallel will make the test suite run slightly
faster. Another reason for this change is that we want more testing
of the parallel test case support in common_test.
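For reference, a minimal sketch of how a common_test suite declares a parallel
group (the group and case names here are illustrative):

    all() ->
        [{group,p}].

    groups() ->
        [{p,[parallel],[case_one,case_two,case_three]}].
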
The code generator uses conservative liveness information. Therefore
the number of live registers in allocation instructions (such as
test_heap/2) may be too high. Use the actual liveness information
to lower the number of live registers when it is too high.

The main reason we want to do this is to enable more optimizations
that depend on liveness analysis, such as the beam_bool and beam_dead
passes.
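As a hand-written illustration (not actual compiler output), a test_heap
instruction that claims three live registers can be rewritten when the
liveness analysis shows that only {x,0} is actually live:

    {test_heap,2,3}    %% claims {x,0},{x,1},{x,2} are live
          =>
    {test_heap,2,1}    %% liveness shows only {x,0} is used afterwards
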
The compiler would silently accept, and Dialyzer would crash on,
code like:

  <<X:(2.5)>>

It is never acceptable for Dialyzer to crash. The compiler should
at least generate a warning for such code. It is tempting to let
the compiler generate an error, but that would mean that code like:

  Sz = 42.0,
  <<X:Sz>>.

would be possible to compile with optimizations disabled, but not
with optimizations enabled.

Dialyzer crashes because it calls cerl:bitstr_bitsize/1, which
crashes if the size of the segment has an invalid type. The easiest
way to avoid that crash is to extend the sanity checks in v3_core
to also cover the size field of binary segments. That will cause
the compiler to issue a warning and to replace the bad binary
construction with a call to erlang:error/1. (It also means that
Dialyzer will not issue a warning for bad size fields.)
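For illustration (the module name and the exact warning text are not from the
commit), a module like the following now compiles with a warning, and calling
bad/1 raises an error at run time instead of building a binary:

    -module(bad_size).
    -export([bad/1]).

    %% The size expression evaluates to a float, which is not a valid
    %% segment size. The sanity check in v3_core makes the compiler warn
    %% and replace the construction with a call to erlang:error/1.
    bad(X) ->
        <<X:(2.5)>>.
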
In 3d0f4a3085f11389e5b22d10f96f0cbf08c9337f (an update to conform
with common_test), ?MODULE was changed to the actual name of the
module in all test_lib:recompile(?MODULE) calls. In cloned modules
such as record_no_opt_SUITE, that causes test_lib:recompile/1 to
compile the module with the incorrect compiler options, resulting
in worse coverage.
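An illustrative sketch (placing the call in init_per_suite/1 is an assumption;
the suites may call test_lib:recompile/1 elsewhere): using the macro makes the
cloned record_no_opt_SUITE recompile itself with its own options rather than
with those of the original record_SUITE:

    init_per_suite(Config) ->
        %% ?MODULE expands to record_no_opt_SUITE in the clone, so the
        %% clone's own compiler options are used; a hard-coded
        %% test_lib:recompile(record_SUITE) would use the original's.
        test_lib:recompile(?MODULE),
        Config.
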
Code such as
foo(A) -> <<A:0>>.
would cause a compiler crash.