|
|
|
v3_kernel may produce unwanted and confusing warnings for code that
has been inlined with the new inliner (cerl_inline). Consider this
code:
    -compile(inline).

    compute1(X) ->
        add(X, 0).

    compute2(X, Y) ->
        add(X, Y).

    add(1, 0) ->
        1;
    add(1, Y) ->                  %% "this clause cannot match..."
        1 + Y;
    add(X, Y) ->
        X + Y.
v3_kernel warns because add/2 has been inlined into compute1/1 and only
the first clause in add/2 will match. But the other clauses are needed
when add/2 is inlined into compute2/2, so the user cannot do anything
to eliminate the warning (short of manually inlining add/2, defeating the
purpose of the 'inline' option).
The warning would be reasonable if compute2/2 didn't exist, but it
would be too complicated for the compiler to figure out whether a
warning makes sense or not.
Therefore, suppress all warnings generated by v3_kernel if cerl_inline
has been run.
ERL-301
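For illustration only, a minimal sketch of the idea; the actual check in
the compiler may be structured differently:

    %% Hypothetical helper: drop the warnings from v3_kernel when the new
    %% inliner has been run (i.e. when the 'inline' option is given), since
    %% they may be spurious for inlined copies of a function.
    maybe_suppress_warnings(Ws, Opts) ->
        case lists:member(inline, Opts) of
            true -> [];
            false -> Ws
        end.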
|
|
Funs must not be created in guards. The instruction for creating
a fun clobbers all X registers, which is a bad thing to do in
a guard.
|
|
The following code causes a compiler failure:
    first_after(Data, Offset) ->
        case byte_size(Data) > Offset of
            false ->
                {First, Rest} = {ok, ok},
                ok;
            true ->
                <<_:Offset/binary, Rest/binary>> = Data,
                %% 'Rest' saved in y(0) before the call.
                {First, _} = match_first(Data, Rest),
                %% When beam_bsm sees the code, the following line
                %% which uses y(0) has been optimized away.
                {First, Rest} = {First, Rest},
                First
        end.

    match_first(_, <<First:1/binary, Rest/binary>>) ->
        {First, Rest}.
Here is the error message from beam_validator:
    t: function first_after/2+15:
      Internal consistency check failed - please report this bug.
      Instruction: {call,2,{f,7}}
      Error: {multiple_match_contexts,[{x,1},0]}:
Basically, what happens is that at the time of code generation,
the variable 'Rest' is needed after the call to match_first/2
and is therefore saved in y(0). When beam_bsm (a late optimization
pass) sees the code, the use of y(0) following the call
to match_first/2 has been optimized away. beam_bsm therefore
assumes that the delayed sub-binary creation is safe. (Actually,
it is safe, but beam_validator does not realize it.)
The bug was caused by two separate commits:
e199e2471a reduced the number of special cases to handle in BEAM
optimization passes by breaking apart the tail-recursive call
instructions (call_only and call_last) into separate instructions.
Unfortunately, the special handling for tail calls was lost, which
resulted in worse code (i.e. the delayed sub-binary creation
optimization could not be applied).
e1aa422290 tried to compensate, but did so in a way that was not
always safe.
Teaching beam_validator that this kind of code is safe would be
expensive.
Instead, we will undo the damage caused by the two
commits. Re-introduce the special handling of tail-recursive calls in
beam_bsm that was lost in the first commit. (Effectively) revert the
change in the second commit.
ERL-268
|
|
c2035ebb8b restricted the get_map_elements instruction so that it
could only occur at the beginning of a block. It turns out that
including it anywhere in a block is unsafe.
Therefore, never put the get_map_elements instruction in blocks.
(Also remove the beam_utils:join_even/2 function since it is no
longer used.)
ERL-266
|
|
ab03678e introduced an optimization in the beam_z pass that could
introduce unreachable code in BEAM files (a 'jump' instruction is
removed after a 'raise' instruction, but the code following the
target of the 'jump' is not removed).
Since this situation happens very rarely, there is no point in adding
another pass that can remove unreachable code after beam_z. Instead we
will make sure that beam_validator can skip the unreachable code.
Skipping unreachable code is already done in valfun_1/2 (for
historical reasons), but we will also need to do it in val_dsetel/2.
|
|
Moving a fun into a guard may result in code that is not accepted
by beam_validator.
|
|
It is not safe to share code between 'catch' blocks.
|
|
beam_block has an optimization that is only safe when it is applied
immediately after code generation. That is pointed out in a comment:

    NOTE: Moving allocation instructions is only safe because it is done
    immediately after code generation so that we KNOW that if {x,X} is
    initialized, all x registers with lower numbers are also initialized.
That assumption may not be true after other optimizations, such as
the beam_utils:live_opt/1 optimization.
The new beam_reorder pass added in OTP 19 runs before beam_block.
Therefore, the optimization is potentially unsafe. The optimization
is also unsafe if compilation is started from assembly code in a
.S file.
Rewrite the optimization to make it safe. See the newly added comment
for details.
ERL-202
|
|
The following regression was introduced in 19.0:
foo(bar, <<"x">>) -> 1;
foo(_, <<"x">>) -> 2;
foo(_, <<"y">>) -> 3;
foo(_, _) -> fail.
The call foo(bar,<<"y">>) would errorneous return 'fail' instead of 3.
A testcase in match_SUITE has been added to verify this.
|
|
Retrieve the value of the environment variable ERL_COMPILER_OPTIONS
in the same manner as used by file/2, forms/2, and output_generated/2.
|
|
|
Add more filename/line number annotations while translating to
Core Erlang in v3_core, and ensure that sys_core_fold retains
existing annotations. The goal is to prevent sys_core_fold from
generating warnings with "no_file" instead of a filename.
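A sketch of the idea using the cerl API (the actual sys_core_fold code
manipulates the Core Erlang records directly, so the real code differs):

    %% Carry over the annotations (which include file/line information)
    %% so that later warnings do not end up with "no_file".
    retain_anno(Old, New) ->
        cerl:set_ann(New, cerl:get_ann(Old)).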
|
|
beam_validator would complain that x(1) is uninitialized
in a test_heap instruction when attempting to compile
the following code with sys_core_fold turned off:
    foo(M) when not (M#{true := 0}); [M] ->
        ok.
Simplified, the generated BEAM assembly code looked like
this:
      test is_map BadMap x(0)
      put_map_exact Fail x(0) => x(1) ...
      jump BooleanStuff
    BadMap:
      move ok => x(1)
      jump Fail
    BooleanStuff:
      ...
      move Boolean => x(2)
      jump Build
    Fail:
      move false => x(2)
    Build:
      test_heap 2 3    %% x(0), x(1), x(2) must be live.
      ...
That is, if put_map_exact failed, control would transfer
to the label Fail without initializing x(1).
Fix that by making sure that x(1) is initialized even if
put_map_exact fails:
      test is_map BadMap x(0)
      put_map_exact BadLbl x(0) => x(1) ...
      jump OkLbl
    BadLbl:
      move ok => x(1)
      jump Fail
    OkLbl:
      jump BooleanStuff
    BadMap:
      move ok => x(1)
      jump Fail
    BooleanStuff:
      ...
      move Boolean => x(2)
      jump Build
    Fail:
      move false => x(2)
    Build:
      test_heap 2 3    %% x(0), x(1), x(2) must be live.
      ...
Note that this situation is rare, and that other optimization passes
(beam_dead and beam_jump in particular) will clean up this mess.
|
|
Somewhat simplified, beam_block would rewrite the target for
the first instruction in this code sequence:
    move x(0) => y(1)
    gc_bif '+' 1 x(0) => y(0)
    move y(1) => x(1)
    move nil => x(0)
    call 2 local_function/2
The resulting code would be:
    move x(0) => x(1)    %% Changed target.
    gc_bif '+' 1 x(0) => y(0)
    move x(1) => y(1)    %% Operands swapped (see 02d6135813).
    move nil => x(0)
    call 2 local_function/2

The resulting code is not safe because x(1) will be killed
by the gc_bif instruction.
7a47b20c3a cleaned up move optimizations and would reject the
optimization if the target was an X register and an allocating
instruction was found. To avoid this bug, the optimization must be
rejected even if the target is a Y register.
|
|
|
|
If the Core Erlang optimizations were turned off (using no_copt),
the optimization passes for BEAM assembly could generate unsafe
code that did not initialize all Y registers before (for example)
a call instruction.
To fix this, beam_dead should not attempt to remove stores to Y
registers. That is not safe if there is an exception-generating
instruction inside a try...catch block.
|
|
beam_validator wrongly complained that the following was
not safe because it didn't know that is_bitstring/1 is safe:
    food(Curriculum) ->
        [try
             is_bitstring(functions)
         catch _ ->
             0
         end, Curriculum].
While we are at it, also add a new bif_SUITE test suite to cover some
more code in beam_validator.
|
|
In complicated code with many indirect jumps to the func_info label,
a label could get lost.
|
|
A negative allocation could be calculated if a block had multiple
allocations. Make sure to process the block in the right order
so that the correct allocation amount is calculated. Also add an assertion.
This bug was often not noticed because beam_type usually silently
recalculates the allocation amount in test_heap/2 instructions.
|
|
Generate code that not only is safe, but can easily be seen by
beam_validator to be safe.
|
|
Sometimes v3_codegen would generate unsafe code when there was
a call to error/1 in a guard.
|
|
|
|
|
|
By first adding a call to error/1 to each uncovered line,
QuickCheck could find test cases that would cover the lines.
|
|
Make sure we don't optimize code such as:
    is_tuple Fail Src
    test_arity Fail Src Arity
    get_tuple_element Src Pos Dst
    is_map Fail Src
If we were to reorder the instructions like this:
    is_tuple Fail Src
    test_arity Fail Src Arity
    is_map Fail Src
    get_tuple_element Src Pos Dst
beam_validator would complain that the type for Src is a map
instead of a tuple. Since the code has problems to begin with,
there is no need to do the optimization.
|
|
When calculating the sets of registers that must be killed or
unused, registers set in a {protected,_,_,_} block were not
considered. That could result in a crash in the
assertion in beam_utils:live_opt_block/4.
|
|
a3ec2644f5 attempted to teach v3_core not to generate code with
unbound variables. The approach taken in that commit is to
discard all expressions following a badmatch. That does not
work if the badmatch is nested:
    {[V] = [] = foo,V},
    V
That would be rewritten to:
    {error({badmatch,foo})},
    V
where V is unbound.
If we were to follow the same approach, the tuple construction
code would have to look out for a badmatch. As would list construction,
begin...end, and so on.
Therefore, as it is impractical to discard all expressions that
follow a badmatch, the only other solution is to ensure that the
variables that the pattern binds will somehow be bound. That can
be arranged by rewriting the pattern to a pattern that binds the
same variables. Thus:
    error({badmatch,foo}),
    E = foo,
    case E of
        {[V],[]} ->
            V;
        Other ->
            error({badmatch,Other})
    end
|
|
The following code:
    simple() ->
        case try 0 after [] end of
            0 -> college;
            1 -> 0
        end.
would crash the compiler like this:
crash reason: {case_clause,
{'EXIT',
{function_clause,
[{beam_type,simplify_select_val_int,
[{select,select_val,
{x,0},
{f,7},
[{integer,1},{f,9},{integer,0},{f,8}]},
0],
[{file,"beam_type.erl"},{line,169}]},
{beam_type,simplify_basic_1,3,
[{file,"beam_type.erl"},{line,155}]},
{beam_type,opt,3,[{file,"beam_type.erl"},{line,57}]},
{beam_type,function,1,[{file,"beam_type.erl"},{line,36}]},
{beam_type,'-module/2-lc$^0/1-0-',1,
[{file,"beam_type.erl"},{line,30}]},
{beam_type,module,2,[{file,"beam_type.erl"},{line,30}]},
{compile,'-select_passes/2-anonymous-2-',2,
[{file,"compile.erl"},{line,521}]},
{compile,'-internal_comp/4-anonymous-1-',2,
[{file,"compile.erl"},{line,306}]}]}}}
The root cause is that the type representation is not well-defined.
Integers could be represented in three different ways:
    integer
    {integer,{1,10}}
    {integer,0}
However, only the first two forms were handled.
To avoid similar problems in the future:
* Make the type representation stricter. Make sure that integers are
  only represented as 'integer' or {integer,{Min,Max}}.
* Call verify_type/1 whenever a new type is added (not only when
  merging types) to ensure that only the supported types are added
  to the type database.
(ERL-150)
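For illustration, a hypothetical check showing what the stricter
invariant for integers amounts to (the real verify_type/1 covers every
type in the database):

    %% Only two representations of integers are allowed.
    verify_integer_type(integer) ->
        ok;
    verify_integer_type({integer,{Min,Max}})
      when is_integer(Min), is_integer(Max), Min =< Max ->
        ok;
    verify_integer_type(Other) ->
        error({bad_integer_type,Other}).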
|
|
We must be careful how we treat the type info for the result of:
    setelement(Index, Tuple, NewValue)
If Tuple had type information, the result of setelement/3 (in x(0))
would be assigned the same type information. But that is not safe
for:
    setelement(1, Tuple, NewValue)
since the type for the first element will be changed.
Therefore, we must take care to remove the type information for
the first element of the tuple if it might have been modified by
setelement/3.
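For illustration (plain Erlang, not compiler code), element 1 is exactly
the element whose recorded type becomes stale:

    T0 = {ok,42},
    T1 = setelement(1, T0, error),   %% element(1, T1) is 'error', so any
                                     %% type recorded for element 1 is stale
    T2 = setelement(2, T0, 43).      %% element(1, T2) is still 'ok'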
|
|
* bjorn/compiler/beam_bool/ERL-143:
    Eliminate crash in beam_bool
    Add beam_bool_SUITE
    Add missing test cases in andor_SUITE and beam_block_SUITE
|
|
beam_bool would crash when attempting to optimize BEAM code similar
to this code:
    bif '=:=' Reg1 SomeValue => y(0)
    bif '=:=' Reg2 {atom,true} => x(2)
    bif '=:=' Reg3 {atom,true} => x(3)
    bif 'or' x(2) x(3) => x(2)
    is_eq_exact Fail x(2) {atom,true}
The problem is the first instruction, which assigns a value to a Y
register: beam_bool:ssa_assign/2 will not accept a Y register
argument.
We could change ssa_assign/2 to accept a Y register, but that would
only cause the entire optimization to be rejected later because the Y
register is alive in the code that follows. Therefore, a better
solution is to modify extend_block/3 so that instructions that
assign to Y registers are not added to the block. That is, the
optimizer will only operate on the following code:
    bif '=:=' Reg2 {atom,true} => x(2)
    bif '=:=' Reg3 {atom,true} => x(3)
    bif 'or' x(2) x(3) => x(2)
    is_eq_exact Fail x(2) {atom,true}
Usually the optimization will succeed, rewriting the four instructions
to a select_val instruction.
Assembly code such as the above can be produced by code similar to:
    Y = Something == SomethingElse,
    case Y of
        Condition; OtherCondition ->
            . . .
    end,
    . . .,
    Y.
Reported-by: http://bugs.erlang.org/browse/ERL-143
Reported-by: José Valim
|
|
It's time that we have a dedicated test suite for beam_bool.
|
|
Two test cases were not actually run. Even if their main purpose is to
ensure that the compiler doesn't crash, we always try to run the
test case (when practical) to also ensure that the generated code is
correct.
|
|
The inliner generates variables whose names are numeric. Run
the inliner to cover one more line in core_pp.
|
|
Several test cases in compile_SUITE (e.g. core/1) extract the
abstract code from a BEAM file and run the compiler on it.
It is a waste of time to use the abstract code from
cloned versions of test case modules. That is, use record_SUITE,
but don't use record_no_opt_SUITE, record_post_opt_SUITE, or
record_inline_SUITE since they all contain essentially the same
abstract code.
|
|
Add is_cloned_mod(Mod) to determine whether Mod is the original
name for a module (e.g. record_SUITE) or a cloned module
(e.g. record_no_opt_SUITE).
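For illustration only, a sketch of the idea (the real helper in the test
support code may use a different check):

    %% A module is considered cloned if its name has one of the suffixes
    %% used for the no_opt/post_opt/inline clones.
    is_cloned_mod(Mod) ->
        Name = atom_to_list(Mod),
        lists:any(fun(Suffix) -> lists:suffix(Suffix, Name) end,
                  ["_no_opt_SUITE","_post_opt_SUITE","_inline_SUITE"]).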
|
|
Recent spring cleaning in the test suite left a line in sys_core_fold
dealing with an unnecessary catch uncovered.
|
|
Correct calculation of the number of parallel processes to use
when cover is used. It was supposed not to exceed 4 (according to
the comment), but it was calculated as the number of schedulers.
On my computer, which has 8 schedulers, this change made the compiler
test suite with cover run almost twice as fast.
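For illustration, a minimal sketch of the intended calculation (the
function name is made up):

    %% Use at most 4 parallel test processes when running under cover,
    %% instead of one per scheduler.
    parallelism_under_cover() ->
        min(4, erlang:system_info(schedulers)).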
|
|
|
|
v3_core would generate unsafe code for the following example:
    f() ->
        {ok={error,E}} = foo(),
        E.
Internally, the code would look similar to:
    f() ->
        Var = foo(),
        error({badmatch,Var}),
        E.
That is, there would remain a reference to an unbound variable.
Normally, sys_core_fold would remove the reference to 'E', but if
optimizations were disabled the compiler would crash.
|
|
With 'export_all' it is easy to add a new test case function
and forget to add its name to the list of test cases to run.
While we are at it, remove unused functions and add the forgotten
test case on_load_inline/1.
|
|
It has not been used for ages.
|
|
|
|
|
|
|
|
long_string/1 was written to test that long strings were handled
efficiently in beam_asm. Strings used to be stored in the string
table chunk, but are currently literals.
It does not seem likely that this test case will find any bugs.
|
|
|
|
missing_testheap/1 is no longer relevant for the following reasons:
1) Because of the literal pool introduced in R12, no test_heap
instructions are needed in the guards.
2) beam_validator would abort the compilation if any needed test_heap
instructions were missing.
|
|
The integer_encoding/1 test is supposed to be run with a tighter
timetrap to ensure that encoding of integers in BEAM files is
efficient enough.
|