The compiler would crash when compiling a function with two
receive statements.
https://bugs.erlang.org/browse/ERL-703
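A hedged sketch of the shape described, for illustration only (the
actual reproducer is in the ERL-703 report):

    two_receives(Pid) ->
        Pid ! {self(), ping},
        receive
            pong -> ok
        end,
        receive
            done -> done
        after 0 -> timeout
        end.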
|
|
Consider the following functions:
foo() -> bar(not_a_map).
bar(M) when not is_map_key(a, M) -> ok;
bar(_) -> error.
What will `foo/0` return? It depends. If the module is compiled
with the default compiler options, the return value will be
`ok`. If the module is compiled with the `inline` option,
the return value will be `error`.
The correct value is `error`, because the call to `is_map_key/2`
when the second argument is not a map should fail the entire
guard. That is the way other failing guard BIFs are handled.
For example:
foo() -> bar(not_a_tuple).
bar(T) when not element(1, T) -> ok;
bar(_) -> error.
`foo/0` always returns `error` (whether the code is inlined
or not).
This bug can be fixed by changing the classification of `is_map_key/2`
in the `erl_internal` module. It is currently classified as a type test,
which is incorrect because type tests are not supposed to fail.
Reclassifying it as a plain guard BIF corrects the bug.
This correction also fixes the internal consistency check
failure which was reported in:
https://bugs.erlang.org/browse/ERL-699
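A minimal way to exercise the fix (sketch; the module name is made up),
compiling the example above both with and without `inline` and
expecting `error` in both cases:

    -module(map_key_guard).
    -export([foo/0]).

    foo() -> bar(not_a_map).

    bar(M) when not is_map_key(a, M) -> ok;
    bar(_) -> error.

    %% In the shell:
    %%   c(map_key_guard), map_key_guard:foo().           %% error
    %%   c(map_key_guard, [inline]), map_key_guard:foo(). %% error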
|
|
bjorng/bjorn/compiler/binary-syntax/ERL-689/OTP-15219
Fix bug in binary matching
|
|
'john/compiler/fix-deterministic-include-paths/OTP-15204/ERL-679' into maint
* john/compiler/fix-deterministic-include-paths/OTP-15204/ERL-679:
Omit include path debug info for +deterministic builds
|
|
Compiling the same file with different include paths resulted in
different output files under the `+deterministic` flag, even when
everything but the paths was identical. This was caused by the
absolute path of each include directory being unconditionally
included in a debug information chunk.
This commit fixes that by only including this information in
non-deterministic builds.
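A sketch of how the fix can be observed (the source file name and
include paths are made up); compile:file/2 with the `binary` option
returns the object code, so the two results can be compared directly:

    %% Same source, two different include paths, both deterministic.
    {ok, Mod, Bin1} = compile:file("foo.erl",
                                   [binary, deterministic,
                                    {i, "/build/a/include"}]),
    {ok, Mod, Bin2} = compile:file("foo.erl",
                                   [binary, deterministic,
                                    {i, "/build/b/include"}]),
    %% With this fix the two binaries are expected to be identical.
    true = (Bin1 =:= Bin2).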
|
|
bjorng/bjorn/compiler/letrec-side-effect-fix/ERL-658/OTP-15188
Fix side-effect optimization when compiling from Core Erlang
|
|
The compiler generates incorrect code for the following example:
decode_binary(_, <<Length, Data/binary>>) ->
    case {Length, Data} of
        {0, _} ->
            %% When converting the match context back to a binary,
            %% Data will be set to the entire original binary,
            %% that is, to <<0>> instead of <<>>.
            {{0, 0, 0}, Data};
        {4, <<Y:16/little, M, D, Rest/binary>>} ->
            {{Y, M, D}, Rest}
    end.
The problem is the delayed sub binary creation optimization, which
is not safe to do in this case.
This commit introduces a heuristic that will disable the delayed
sub binary creation optimization for this example. Unfortunately, the
heuristic may turn off the optimization when it would actually be
safe. In the OTP codebase, the optimization is turned off in two
instances, once in string.erl and once in dets_v9.erl.
https://bugs.erlang.org/browse/ERL-689
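The expected behaviour after the fix, expressed as a small check
(sketch), follows directly from the comment in the example above:
with Length = 0 the returned tail is the empty rest of the binary,
not the whole original binary:

    %% <<0>> has Length = 0 and an empty Data part.
    {{0, 0, 0}, <<>>} = decode_binary(anything, <<0>>).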
|
|
When an expression is only used for its side effects, we try to
remove everything that doesn't tie into a side-effect, but we
went a bit too far when we applied the optimization to funs
defined in such a context. Consider the following:
do letrec 'f'/0 = fun () -> ... whatever ...
in call 'side':'effect'(apply 'f'/0())
'ok'
When f/0 is optimized under the assumption that its return value
is unused, side:effect/1 will be fed the result of the last
side-effecting expression in f/0 instead of its actual result.
https://bugs.erlang.org/browse/ERL-658
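A hypothetical Erlang-level shape that can end up as a fun defined in
such an effect-only context; `side:effect/1` is taken from the Core
Erlang snippet above, everything else is invented for illustration:

    log_result() ->
        F = fun() -> io:format("working~n"), 42 end,
        %% The result of side:effect(F()) is unused, but the result of
        %% F() is used as its argument. Pre-fix, F() could be replaced
        %% by the value of its last side-effecting expression instead
        %% of 42.
        side:effect(F()),
        ok.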
Co-authored-by: Björn Gustavsson <[email protected]>
|
|
When nesting a receive in a try/catch, there could be a false
diagnostic that a fragile term is used.
https://bugs.erlang.org/browse/ERL-684
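A hedged sketch of the nesting described (the actual reproducer is in
ERL-684); names are illustrative:

    wait_for(Ref) ->
        try
            receive
                {Ref, Reply} -> Reply
            after 5000 -> timeout
            end
        catch
            _:_ -> failed
        end.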
|
|
Referencing a matched-out variable in a size expression makes it
impossible to calculate the size of the result based on the size of
the matched binary. The compiler would still generate code to do
this, however, which would crash since the variable isn't defined
at the point of the size calculation.
|
|
Call test_lib:recompile/1 from init_per_suite/1 instead of
from all/0. That makes it easy to find the log from the
compilation in the log file for the init_per_suite/1 test
case.
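A sketch of the arrangement described, assuming a standard common_test
suite layout and that the argument to test_lib:recompile/1 is the suite
module (the test case name is made up):

    init_per_suite(Config) ->
        test_lib:recompile(?MODULE),
        Config.

    all() ->
        %% No call to test_lib:recompile/1 here any more.
        [some_test_case].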
|
|
maint-21
* bjorn/compiler/fix-beam_jump-crash/ERL-660/OTP-15166:
Eliminate a crash in the beam_jump pass
|
|
* bjorn/compiler/fix-map_get/OTP-15157:
Fix internal compiler error for map_get/2
|
|
maint-21
* bjorn/compiler/fix-skipped-matching/ERL-655/OTP-15156:
beam_type: Fix unsafe optimization
|
|
https://bugs.erlang.org/browse/ERL-660
|
|
Code such as the following:
Val = map_get(a, Map),
Map#{a:=z} %Could be any map update
would incorrectly cause an internal consistency check failure:
Internal consistency check failed - please report this bug.
Instruction: {put_map_exact,{f,0},{x,0},{x,0},1,{list,[{atom,a},{atom,z}]}}
Error: {bad_type,{needed,map},{actual,term}}:
Update beam_validator so that it understands that the second
argument for map_get/2 is a map.
|
|
beam_type assumed that the operand for the bs_context_to_binary
instruction must be a binary. That is not correct;
bs_context_to_binary accepts anything. Based on the incorrect
assumption, beam_type would remove other test instructions.
The bug was introduced in eee8655788d2, which was supposed
to be just a refactoring commit.
https://bugs.erlang.org/browse/ERL-655
|
|
The compiler would crash when compiling code such as:
serialize(#{tag := value, id := Id, domain := Domain}) ->
    [case Id of
         nil ->
             error(id({required, id}));
         _ ->
             <<10, 1:16/signed, Id:16/signed>>
     end,
     case Domain of
         nil ->
             error(id({required, domain}));
         _ ->
             <<8, 2:16/signed, Domain:32/signed>>
     end].
The crash would look like this:
Function: serialize/1
t.erl: internal error in block2;
crash reason: {badmatch,false}
in function beam_utils:live_opt/4 (beam_utils.erl, line 861)
in call from beam_utils:live_opt/1 (beam_utils.erl, line 285)
in call from beam_block:function/2 (beam_block.erl, line 47)
in call from beam_block:'-module/2-lc$^0/1-0-'/2 (beam_block.erl, line 33)
in call from beam_block:'-module/2-lc$^0/1-0-'/2 (beam_block.erl, line 33)
in call from beam_block:module/2 (beam_block.erl, line 33)
in call from compile:block2/2 (compile.erl, line 1358)
in call from compile:'-internal_comp/5-anonymous-1-'/3 (compile.erl, line 349)
The reason for the crash is an assertion failure caused by a previous
unsafe optimization. Here is the code before the unsafe optimization:
.
.
.
{bs_init2,{f,0},7,0,0,{field_flags,[]},{x,1}}.
{bs_put_string,3,{string,[8,0,2]}}.
{bs_put_integer,{f,0},{integer,32},1,{field_flags,[signed,big]},{y,1}}.
{move,{x,1},{x,0}}.
{test_heap,4,1}.
.
.
.
beam_block:move_allocate/1 moved up the test_heap/2 instruction past the
move/2 instruction, adjusting the number of live registers at the same
time:
.
.
.
{bs_init2,{f,0},7,0,0,{field_flags,[]},{x,1}}.
%% Only x1 is live now.
{bs_put_string,3,{string,[8,0,2]}}.
{bs_put_integer,{f,0},{integer,32},1,{field_flags,[signed,big]},{y,1}}.
{test_heap,4,2}. %Unsafe. x0 is dead.
{move,{x,1},{x,0}}.
.
.
.
This optimization is unsafe because the bs_init2 instruction killed
x0.
The bug is in beam_utils:anno_defs/1, which adds annotations indicating
the registers that are defined at the beginning of each block. The
annotation before the move/2 instruction incorrectly indicated that
x0 was defined.
https://bugs.erlang.org/browse/ERL-650
https://github.com/elixir-lang/elixir/issues/7782
|
|
|
|
sys_core_fold could do unsafe transformations on the
code from the old inliner (invoked using the compiler
option `{inline,[{F,A}]}` to request inlining of specific
functions).
To explain the bug, let's first look at an example that
sys_core_fold handles correctly. Consider this code:
'foo'/2 =
fun (Arg1,Arg2) ->
let <B> = Arg2
in let <A,B> = <B,Arg1>
in {A,B}
In this example, the lets can be completely eliminated,
since the arguments for the lets are variables (as opposed
to expressions). Since the variable B is rebound in the
inner let, `sys_core_fold` must take special care when
doing the substitutions.
Here is the correct result:
'foo'/2 =
fun (Arg1, Arg2) ->
{Arg2,Arg1}
Consider a slight modification of the example:
'bar'/2 =
fun (Arg1,Arg2) ->
let <B> = [Arg2]
in let <A,B> = <B,[Arg1]>
in {A,B}
Here some of the arguments for the lets are expressions, so
the lets must be kept. sys_core_fold does not handle this
example correctly:
'bar'/2 =
fun (Arg1,Arg2) ->
let <B> = [Arg2]
in let <B> = [Arg1]
in {B,B}
In the inner let, the variable A has been eliminated and
replaced with the variable B in the body (the first B in
the tuple). Since the B in the outer let is never used,
the outer let will be eliminated, giving:
'bar'/2 =
fun (Arg1,Arg2) ->
let <B> = [Arg1]
in {B,B}
To handle this example correctly, sys_core_fold must
rename the variable B in the inner let like this to
avoid capturing B:
'bar'/2 =
fun (Arg1,Arg2) ->
let <B> = [Arg2]
in let <NewName> = [Arg1]
in {B,NewName}
(Note: The `v3_kernel` pass already handles these examples correctly
when `sys_core_fold` is disabled.)
|
|
|
|
Introduce is_map_key/2 guard BIF
OTP-15037
|
|
This complements the `map_get/2` guard BIF introduced in #1784.
Rationale.
`map_get/2` allows accessing map fields in guards, but it might be
problematic in more complex guard expressions, for example:
foo(X) when (map_get(a, X) =:= 1) or is_list(X) -> ...
The `is_list/1` part of the guard could never succeed since the
`map_get/2` guard would fail the whole guard expression. In this
particular case, that could be solved by using `;` instead of `or` to
separate the guards, but that is not possible in every case.
To solve this, this PR proposes an `is_map_key/2` guard BIF that allows
checking whether a map has a given key inside a guard, before trying to
access that key. When combined with `is_map/1`, this makes it possible
to construct a purely boolean guard expression that tests the value of
a key in a map.
Implementation.
Given the use case motivating the introduction of this function, the PR
contains compiler optimisations that produce optimal code for the
following guard expression:
foo(X) when is_map(X) and is_map_key(a, X) and (map_get(a, X) =:= 1) -> ok;
foo(_) -> error.
Given that all three tests share the failure label, the `is_map_key/2`
and `is_map/1` tests are optimised away.
As with `map_get/2` the `is_map_key/2` BIF is allowed in match specs.
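A usage sketch of the combined guard described above (the function
name, key, and return values are invented):

    %% All three tests share the failure path: if Shape is not a map,
    %% lacks the key, or the value fails the comparison, the second
    %% clause is taken.
    radius(Shape) when is_map(Shape) andalso is_map_key(radius, Shape)
                       andalso map_get(radius, Shape) > 0 ->
        {ok, map_get(radius, Shape)};
    radius(_) ->
        undefined.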
|
|
When an exception is handled, the stack will be scanned. Therefore
all Y registers must be initialized.
|
|
Rewrite a call of a literal external fun to a direct call
OTP-15044
|
|
Rewrite calls such as:
(fun erlang:abs/1)(-42)
to:
erlang:abs(-42)
While we are at it, also add rewriting of apply/2 with a fixed
number of arguments to a direct call of the fun. For example:
apply(F, [A,B])
would be rewritten to:
F(A, B)
https://bugs.erlang.org/browse/ERL-614
|
|
sys_core_fold would crash when attempting to optimize this code:
t() when (#{})#{} ->
    c.
|
|
* 'map-get-bif' of git://github.com/michalmuskala/otp:
Introduce map_get guard-safe function
OTP-15037
|
|
Rationale
Today all compound data types except for maps can be deconstructed in guards.
For tuples we have `element/2` and for lists `hd/1` and `tl/1`. Maps are
completely opaque to guards. This means matching on maps can't be
abstracted into macros, which is often done with repetitive guards. It
also means that maps always have to be selected whole from ETS tables,
even when only one field would be enough, which creates a potential
efficiency issue.
This PR introduces an `erlang:map_get/2` guard-safe function that allows
extracting a map field in guards. An alternative to this function would be
to introduce the syntax for extracting a value from a map that was planned
in the original EEP: `Map#{Key}`.
Even outside of guards, since this function is a guard BIF it is more
efficient than using `maps:get/2` (since it does not need to set up the
stack), and more convenient than pattern matching on the map (compare
`#{key := Value} = Map, Value` to `map_get(key, Map)`).
Performance considerations
A common concern against adding this function is the notion that "guards
have to be fast" and ideally execute in constant time. While there are
some counterexamples (`length/1`), what is more important is that adding
these functions does not change the time complexity of pattern matching
in any way - it is already possible to match on map fields directly in
patterns - adding this ability to guards will neither slow down nor
speed up the execution; it will only make certain programs more
convenient to write.
This first version is very naive and does not perform any optimizations.
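A usage sketch of the kind of guard abstraction the rationale mentions
(the macro, key, and function names are invented):

    %% A guard macro over a map field; not expressible without a
    %% guard-safe map accessor.
    -define(IS_ADULT(Person), (map_get(age, Person) >= 18)).

    classify(Person) when ?IS_ADULT(Person) -> adult;
    classify(_) -> minor.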
|
|
Keys in map patterns are input variables, not pattern variables.
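An illustration of the distinction (sketch): the key below must already
be bound when the map pattern is matched; it is an input to the match
rather than a variable bound by it.

    find(Key, Map) ->
        case Map of
            #{Key := Value} -> {ok, Value};
            _ -> error
        end.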
|
|
Waiting messages for a process may be stored in a queue
outside of any heap or heap fragment belonging to the process.
This is an optimization added in a recent major release to
avoid garbage collecting messages again and again when there
is a long message queue.
Until such a message has been matched and accepted by
the remove_message/0 instruction, the message must not be
included in the root set for a garbage collection, as that
would corrupt the message. The loop_rec/2 instruction explicitly
turns off garbage collection of the process as long as messages
are being matched.
However, if the compiler were to put references to a message
outside of the heap in a Y register (on the stack) and there
happened to be a GC when the process had been scheduled out,
the message would be corrupted and the runtime system would
crash sooner or later.
To ensure that doesn't happen, teach beam_validator to check
for references on the stack to messages outside of the heap.
|
|
beam_record would make an unsafe optimization for the
not_used_p/4 function added to beam_utils_SUITE in this
commit. The bug is in beam_utils, which would falsely
report that {x,4} was unused when it in fact was used.
The bug was in the function not_used/1. The purpose of
not_used/1 is to return a 'not_used' result unless the
actual result is 'used'. Unfortunately it was not
implemented in that way. It would let a 'transparent'
result slip through, which the caller in this case would
convert to 'killed' (because the register was killed on
all other paths).
Reported-by: Richard Carlsson
|
|
When a generator in a list comprehension was given some
other term than a list, the wrong line could be pointed
out in the exception. Here is an example:
bad_generator() ->
    [I ||                       %% This line would be pointed out.
     I <- not_a_list].
https://bugs.erlang.org/browse/ERL-572
|
|
A literal negative size in binary construction would cause a crash.
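A hedged sketch of the kind of construction described; whether this
exact form was the reported trigger is an assumption:

    bad_size() ->
        <<1:(-1)>>.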
|
|
Refactor and fix minor bugs in beam_type
|
|
|
|
Every catch or try/catch must use a lower Y register number than any
enclosing catch or try/catch. That ensures that, when the stack is
scanned after an exception occurs, the innermost try/catch tag is
found first.
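A sketch of the nesting in question (names invented); the tag for the
inner handler must end up in a lower-numbered Y register than the tag
for the outer one:

    outer(F) ->
        try
            try F() catch error:inner -> recovered_inner end
        catch
            error:outer -> recovered_outer
        end.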
|
|
Eliminate get_list/3 internally in the compiler
|
|
Fix incorrect handling of floating point instructions
|
|
* maint:
Fix incorrect type inference of integer ranges
Conflicts:
lib/compiler/src/beam_type.erl
|
|
Commit 1a029efd1ad47f started to run the beam_block pass a second time.
Since it is run after the introduction of the optimized floating point
instructions, it must handle those instructions correctly.
In particular, it must be careful when hoisting allocation
instructions. For example, the following code:
{test_heap,{alloc,[{words,0},{floats,1}]},5}.
.
.
.
{fmove,{fr,2},{x,0}}.
{allocate_zero,1,4}.
must not be rewritten to:
{test_heap,{alloc,[{words,0},{floats,1}]},5}.
.
.
.
{allocate_zero,1,4}.
{fmove,{fr,2},{x,0}}.
because beam_validator will not consider it safe. (The code may
actually be safe depending on what the code between the two allocation
instructions does.)
https://bugs.erlang.org/browse/ERL-555
|
|
|
|
Instructions that produce more than one result complicate
optimizations. get_list/3 is one of two instructions that
produce multiple results (get_map_elements/3 is the other).
Introduce the get_hd/2 and get_tl/2 instructions
that return the head and tail of a cons cell, respectively,
and use them internally in all optimization passes.
For efficiency, we still want to use get_list/3 if both
head and tail are used, so we will translate matching pairs
of get_hd and get_tl back to get_list instructions.
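An illustration (sketch) of the two situations: in first/1 only the
head is used, so a single-result get_hd suffices; in both/1 the head
and the tail are both used, so the matching get_hd/get_tl pair can be
translated back to a get_list instruction.

    first([H|_]) -> H.

    both([H|T]) -> {H, T}.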
|
|
misc_SUITE:integer_encoding/1 was written to make sure
that big integers were encoded correctly in a reasonable
amount of time. Now that beam_asm will encode big integers
as literals, we can reduce the scope of integer_encoding/1.
That will make it significantly faster, especially when
cover is running.
|
|
Do local common sub expression elimination (CSE)
|
|
Optimize away unnecessary test_unit instructions that verify that
binaries are byte-aligned. In a tight loop, eliminating an
instruction can give a small but measurable improvement in
execution time.
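A sketch of the kind of tight loop where such a redundant
byte-alignment check can appear between iterations (the function is
invented); the `T/binary` tail is already byte-aligned by the match
itself:

    sum(<<X, T/binary>>, Acc) -> sum(T, Acc + X);
    sum(<<>>, Acc) -> Acc.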
|
|
Extend an existing optimization in beam_dead to avoid
creating a match context when matching an empty binary.
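An illustration, under the assumption that a clause matching the empty
binary like the first one below is the case the extended optimization
covers (names invented):

    is_empty(<<>>) -> true;
    is_empty(Bin) when is_binary(Bin) -> false.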
|
|
Eliminate repeated evaluation of guard BIFs and building of cons cells
in blocks. This optimization is applicable in more places than might be
expected, because code generation for binaries and records can generate
common sub expressions not visible in the original source code.
For example, consider this function:
make_binary(Term) ->
    Bin = term_to_binary(Term),
    Size = byte_size(Bin),
    <<Size:32,Bin/binary>>.
The compiler inserts a call to byte_size/1 to calculate the size of
the binary being built:
{function, make_binary, 1, 2}.
{label,1}.
{line,...}.
{func_info,{atom,t},{atom,make_binary},1}.
{label,2}.
{allocate,0,1}.
{line,...}.
{call_ext,1,{extfunc,erlang,term_to_binary,1}}.
{line,...}.
{gc_bif,byte_size,{f,0},1,[{x,0}],{x,1}}. %Present in original code.
{line,...}.
{gc_bif,byte_size,{f,0},2,[{x,0}],{x,2}}. %Inserted by compiler.
{bs_add,{f,0},[{x,2},{integer,4},1],{x,2}}.
{bs_init2,{f,0},{x,2},0,2,{field_flags,[]},{x,2}}.
{bs_put_integer,{f,0},{integer,32},1,{field_flags,[unsigned,big]},{x,1}}.
{bs_put_binary,{f,0},{atom,all},8,{field_flags,[unsigned,big]},{x,0}}.
{move,{x,2},{x,0}}.
{deallocate,0}.
return.
Common sub expression elimination (CSE) eliminates the second call to
byte_size/1:
{function, make_binary, 1, 2}.
{label,1}.
{line,...}.
{func_info,{atom,t},{atom,make_binary},1}.
{label,2}.
{allocate,0,1}.
{line,...}.
{call_ext,1,{extfunc,erlang,term_to_binary,1}}.
{line,...}.
{gc_bif,byte_size,{f,0},1,[{x,0}],{x,1}}.
{move,{x,1},{x,2}}.
{bs_add,{f,0},[{x,2},{integer,4},1],{x,2}}.
{bs_init2,{f,0},{x,2},0,2,{field_flags,[]},{x,2}}.
{bs_put_integer,{f,0},{integer,32},1,{field_flags,[unsigned,big]},{x,1}}.
{bs_put_binary,{f,0},{atom,all},8,{field_flags,[unsigned,big]},{x,0}}.
{move,{x,2},{x,0}}.
{deallocate,0}.
return.
Note: A possible future optimization would be to include binary
construction instructions in blocks. If that is done, the
{move,{x,1},{x,2}} instruction could also be eliminated.
|