* maint:
Fix bug when beam_jump removes put_tuple instructions
Conflicts:
lib/compiler/src/beam_jump.erl
lib/compiler/test/beam_jump_SUITE.erl
beam_except: Generalize translation to func_info instructions
* bjorn/compiler/diffable:
scripts/diffable: Correct the number of modules being compiled
scripts/diffable: Use the diffable compiler option
Add a diffable option for the compiler
scripts/diffable: Refactor option parsing
`beam_jump` could remove a `put_tuple` instruction when the
tuple would never be used, but it would leave behind the `put`
instructions that followed it. Make sure they are removed as well.
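A hypothetical instruction sequence showing the shape of the problem
(the registers and operands here are made up for illustration):

    %% If the tuple in {x,1} is never used, beam_jump could remove
    %% the put_tuple instruction but leave the two put instructions:
    {put_tuple,2,{x,1}}.
    {put,{atom,ok}}.
    {put,{x,0}}.
    %% The fix removes all three instructions together.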
https://bugs.erlang.org/browse/ERL-759
The `beam_except` pass replaces some calls to `erlang:error/1` or
`erlang:error/2` with specialized instructions in order to reduce the
size of the BEAM code.
In functions that do binary matching, `beam_except` would fail to
translate the instructions that generate a `function_clause`
exception. Here is an example:
    bsum(<<H:16,T/binary>>, Acc) ->
        bsum(T, Acc+H);
    bsum(<<>>, Acc) ->
        Acc.
The BEAM code that generates the `function_clause` exception looks
like this:
    {label,4}.
    {test_heap,2,3}.
    {put_list,{x,1},nil,{x,1}}.
    %% The following two instructions prevent the translation.
    {bs_set_position,{x,2},{x,0}}.
    {bs_get_tail,{x,2},{x,0},3}.
    {test_heap,2,2}.
    {put_list,{x,0},{x,1},{x,1}}.
    {move,{atom,function_clause},{x,0}}.
    {line,...}.
    {call_ext,2,{extfunc,erlang,error,2}}.
Make the translation of `function_clause` exceptions smarter, so
that the following code will be produced:
    {label,4}.
    {bs_set_position,{x,2},{x,0}}.
    {bs_get_tail,{x,2},{x,0},3}.
    {jump,{f,1}}. %Jump to `func_info` instruction.
(This issue was noticed when looking at the code generated by
https://github.com/tomas-abrahamsson/gpb.)
Add the `diffable` option to produce more diff-friendly output in
a .S file. In a diffable .S file, label numbers start from 1 in each
function, and local function calls use function names instead of labels.
scripts/diffable produces diff-friendly files, but only for a hard-coded
subset of files in OTP. Having the option directly in the compiler
makes it easier to diff the BEAM code for arbitrary source modules.
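A sketch of how the option might be invoked through the compile
module (the module name is made up; combining it with the 'S' option
to get a .S file is an assumption):

    %% Writes a diff-friendly foo.S in the current directory.
    compile:file("foo.erl", [diffable, 'S']).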
The `beam_jump` pass could eliminate `move` instructions when it was
not safe to do so. See the new test case `unsafe_move_elimination/1`
for an example.
Reported-by: Michał Muskała
* maint:
beam_utils: Handle bs_start_match2 in anno_defs
Merge branch 'john/compiler/bs_match-anno-liveness-fix/OTP-15353/ERL-753' into maint
* john/compiler/bs_match-anno-liveness-fix/OTP-15353/ERL-753:
beam_utils: Handle bs_start_match2 in anno_defs
* maint:
compiler: Forward +source flag to epp and fix bug in +deterministic
epp: Allow user to set source name independently of input file name
Fixes a crash during code generation of the following code:
    call_atom() ->
        fun({send = Send}) ->
            Send()
        end.
The source file path as given to `erlc` was included in an implicit
file attribute inserted by epp, even when the +source flag was
set to something else, which was a bit surprising. It was also
included when +deterministic was specified, breaking that flag's
promise.
This commit forwards the +source flag to epp so that it inserts the
right information, and if +deterministic is given, the path will be
shaved to just the base name of the file, guaranteeing the same
result regardless of how the input file is reached.
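A sketch of the resulting behavior, assuming the +source flag
corresponds to the {source,Name} compiler option (the paths are made
up):

    %% The file attribute inserted by epp now reflects the
    %% overridden source name rather than the real input path.
    compile:file("/build/tree/foo.erl", [{source,"foo.erl"}]).
    %% With deterministic, only the base name of the input file is
    %% recorded, however the file is reached.
    compile:file("/build/tree/foo.erl", [deterministic]).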
Co-authored-by: John Högberg <[email protected]>
* maint:
Fix rare bug in binary matching (again)
Conflicts:
lib/compiler/src/beam_bsm.erl
lib/compiler/src/sys_core_bsm.erl
OTP 22 extends the binary instructions to support a Y register
destination. When giving an option to compile for an earlier
release, make sure that binary instructions don't use a Y register
destination, by rewriting the binary instructions to use an X
register destination and adding a `move` instruction to move
the value to the Y register.
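A hypothetical before/after sketch of the rewrite (the instruction
and its operands are chosen for illustration only):

    %% OTP 22 can store the matched-out value directly in {y,0}:
    {bs_get_binary2,{f,1},{x,0},2,{atom,all},8,0,{y,0}}.
    %% When compiling for an earlier release, an X register is used
    %% and the value is then moved to the Y register:
    {bs_get_binary2,{f,1},{x,0},2,{atom,all},8,0,{x,1}}.
    {move,{x,1},{y,0}}.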
Rewrite BSM optimizations in the new SSA-based intermediate format
This commit improves the bit-syntax match optimization pass,
leveraging the new SSA intermediate format to perform much more
aggressive optimizations. Some highlights:
* Match contexts can be reused even after being passed to a
function or being used in a try block.
* Sub-binaries are no longer eagerly extracted, making it far
easier to keep "happy paths" free from binary creation.
* Trivial wrapper functions no longer disable context reuse.
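For the last point, a sketch of what is meant by a trivial wrapper
(the function names are made up):

    %% parse/1 only sets up the accumulator; with the new pass the
    %% match context created for Bin can still be reused in parse_1/2.
    parse(Bin) -> parse_1(Bin, []).

    parse_1(<<H,T/binary>>, Acc) -> parse_1(T, [H|Acc]);
    parse_1(<<>>, Acc) -> lists:reverse(Acc).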
2e40d8d1c51a attempted to fix a bug in binary matching, but it only
fixed the bug for the minimized test case.
This commit removes the previous fix and fixes the bug in a more
effective way. See the comments in the new code in `sys_core_bsm`
for an explanation.
This commit restores the optimizations in string.erl and dets_v9.erl
that the previous fix disabled.
I have not found any code where this commit will disable optimizations
that are actually safe. There are some changes to the code
in ssl_cipher.erl, in that some bs_start_match2 instructions do not
reuse the binary register for the match context, but the delayed
sub-binary optimization was never applied to that code in the first
place.
https://bugs.erlang.org/browse/ERL-689
The following code broke because aliases weren't tracked for hd/1:
    bug(Bool) ->
        Bug = remote:call(),
        if
            Bool ->                %% Branch of some kind.
                _ = hd(Bug),
                remote:call(),
                hd(Bug)
        end.
Related to 1f221b27f1336e747f7409692f260055dd3ddf79
Most of the optimizations in beam_dead have been superseded
by the optimizations in beam_ssa_dead.
The forward/1 pass of beam_dead has been moved to beam_jump.
The beam_split pass splits blocks that contain instructions with
non-zero labels. Because there are no optimizations left that optimize
instructions within blocks, beam_block never needs to put such
instructions into blocks in the first place. beam_split also moved
'move' instructions out of blocks to help beam_dead. That is no longer
necessary since beam_dead no longer exists.
Add beam_ssa_dead to perform the main optimizations done
by beam_dead:
* Shortcut branches that jump to another block with a branch.
If it can be seen that the second branch will always branch
to a specific block, replace the target of the first branch
(see the sketch below).
* Combine nested sequences of '=:=' tests and switch
instructions operating on the same variable into a single switch.
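A minimal sketch of the branch shortcutting, in the SSA-like notation
used elsewhere in this log (the block numbers are made up):

    %% Block 7 does nothing but branch onwards to block 9 ...
    7:
      br label 9
    %% ... so a predecessor block that ends in `br label 7` can be
    %% retargeted to `br label 9` directly.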
Diffing the compiler output, it seems that beam_ssa_dead finds
many more opportunities for optimizations than beam_dead,
although it does not find all opportunities that beam_dead does.
In total, beam_ssa_dead is such an improvement over beam_dead that
there is no reason to keep beam_dead as well as beam_ssa_dead.
Note that beam_ssa_dead does not attempt to optimize away
redundant bs_context_binary instructions, because that instruction
will be superseded by new instructions in the near future.
This optimization, which works on the SSA format, will replace
the similar optimization in beam_dead. See the comment
for an explanation of what the new optimization does.
Add more instructions to the list of instructions that can be safely
removed if their values are not used. This is necessary for
correctness when doing more aggressive optimizations. Without this
change, the 'succeeded' instruction could be optimized away, leaving
just the instruction it tested followed by an unconditional branch,
which beam_ssa_codegen does not know how to handle. Here is an example:
    _3 = bs_start_match _1
    br label 13
By adding bs_start_match to the list, the bs_start_match instruction
will be removed too. (If the result of bs_start_match is actually
used, the succeeded instruction would not be removed.)
While we are at it, rename the misnamed function is_pure/1 to
no_side_effect/1 and move it to beam_ssa. is_pure/1 is a bad name
because bif:get has no side effect, but it is not pure: its result
depends on the contents of the process dictionary.
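A small illustration of the distinction (the key is made up):

    illustrate() ->
        put(key, 1),
        A = get(key),    %% A =:= 1
        put(key, 2),
        B = get(key),    %% B =:= 2: the same call returns different
                         %% results, so get/1 is not pure. Neither
                         %% call to get/1 changes any state, though,
                         %% so it has no side effect and is safe to
                         %% remove when its result is unused.
        {A, B}.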
Sometimes when building a tuple, there is no way to avoid an
extra `move` instruction. Consider this code:
    make_tuple(A) -> {ok,A}.
The corresponding BEAM code looks like this:
    {test_heap,3,1}.
    {put_tuple,2,{x,1}}.
    {put,{atom,ok}}.
    {put,{x,0}}.
    {move,{x,1},{x,0}}.
    return.
To avoid overwriting the source register `{x,0}`, a `move`
instruction is necessary.
The problem doesn't exist when building a list:
    %% build_list(A) -> [A].
    {test_heap,2,1}.
    {put_list,{x,0},nil,{x,0}}.
    return.
Introduce a new `put_tuple2` instruction that builds a tuple in a
single instruction, so that the `move` instruction can be eliminated:
    %% make_tuple(A) -> {ok,A}.
    {test_heap,3,1}.
    {put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
    return.
Note that the BEAM loader already combines `put_tuple` and `put`
instructions into an internal instruction similar to `put_tuple2`.
Therefore the introduction of the new instruction will not speed up
execution of tuple building itself, but it will be less work for
the loader to load the new instruction.
* bjorn/compiler/ssa:
Travis CI: Run the SSA linter in the Linux64SmokeTest build
Remove retired compiler passes
Introduce a new SSA-based intermediate format
hipe_beam_to_icode: Correct translation of get_map_elements
beam_dead: Remove shortcut of binary matching instruction
beam_bs: Remove optimizations that are easier done on SSA format
Don't run unsafe compiler passes
Simplify optimizations by introducing is_nil late
beam_utils: Make is_tagged_tuple a pure test
beam_except: Enhance recognition of function_clause exceptions
beam_validator: Infer the types of copies in a smarter way
beam_validator: Improve merge of cons and literal list
beam_validator: Strengthen validation of func_info
beam_validator: Allow get_tuple_element before dsetelement
beam_validator: Don't transfer state to labels that can't be reached
beam_validator: Improve type analysis for tuples
beam_validator: Be more careful when updating try/catch state
beam_trim: Handle an empty list of instructions
v3_core: Number argument variables in ascending order
Teach binary instructions to use Y registers as destination
OTP-14894
v3_codegen is replaced by three new passes:
* beam_kernel_to_ssa which translates the Kernel Erlang format
to a new SSA-based intermediate format.
* beam_ssa_pre_codegen which prepares the SSA-based format
for code generation, including register allocation. Registers
are allocated using the linear scan algorithm.
* beam_ssa_codegen which generates BEAM assembly code from the
SSA-based format.
It is easier and more effective to optimize the SSA-based format before
X and Y registers have been assigned. The current optimization passes
constantly have to make sure that no "holes" in the X register
assignments are created (that is, that no X register that an
allocation instruction depends on becomes undefined).
This commit also introduces the following optimizations:
* Replacement of tuple matching of records with the is_tagged_tuple
instruction. (Replacing beam_record; see the sketch after this list.)
* Sinking of get_tuple_element instructions to just before the first
use of the extracted values. As well as potentially avoiding the
extraction of tuple elements that are not actually used on all
execution paths, this optimization can also reduce the number of
values that need to be stored in Y registers. (Similar to
beam_reorder, but more effective.)
* Live optimizations, removing the definition of a variable that is
not subsequently used (provided that the operation has no side
effects), as well as strength reduction of binary matching by
replacing the extraction of a value from a binary with a skip
instruction. (Used to be done by beam_block, beam_utils, and
v3_codegen.)
* Removal of redundant bs_restore2 instructions. (Formerly done
by beam_bs.)
* Type-based optimizations across branches. More effective than
the old beam_type pass that only did type-based optimizations in
basic blocks.
* Optimization of floating point instructions. (Formerly done
by beam_type.)
* Optimization of receive statements to introduce recv_mark and
recv_set instructions. More effective with far fewer restrictions
on what instructions are allowed between creating the reference
and entering the receive statement.
* Common subexpression elimination. (Formerly done by beam_block.)
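As a sketch of the first optimization in the list above, matching a
record compiles to a single is_tagged_tuple test instead of separate
tuple, arity, and tag tests (the record is made up for illustration):

    -record(state, {count}).

    get_count(#state{count=N}) -> N.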
* maint:
map_SUITE: Test is_map_key/2 followed by a map update
beam_validator: Infer the type of the map argument for is_map_key/2
map_SUITE: Cover map_get optimizations in beam_dead
As a preparation for replacing v3_codegen with a new code generator,
remove unsafe optimization passes. Especially the older compiler
passes have implicit assumptions about how the code is generated.
Remove the optimizations in beam_block (keep the code that creates
blocks) because they are unsafe. beam_block also calls
beam_utils:live_opt/1, which is unsafe.
Remove beam_type because it calls beam_utils:live_opt/1, and also
because it recalculates the number of heap words and the number of
live registers in allocation instructions, thus potentially hiding
bugs in other passes.
Remove beam_receive because it is unsafe.
Remove beam_record because it is the only remaining user
of beam_utils:anno_defs/1.
Remove beam_reorder because it makes much more sense to run it
as an early SSA-based optimization pass.
Remove the now unused functions in beam_utils:
    anno_defs/1
    delete_annos/1
    is_killed_block/2
    live_opt/1
    usage/3
Note that the following test cases will fail because of the
removed optimizations:
    compile_SUITE:optimized_guards/1
    compile_SUITE:bc_options/1
    receive_SUITE:ref_opt/1
* maint:
Fix compiler crash when compiling double receives
erts: Delete fd from poll-set when closing fd_driver port
The compiler would crash when compiling a function with two
receive statements.
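A minimal sketch of the shape that triggered the crash (the actual
reproducer in the bug report may differ):

    two_receives() ->
        receive
            {first, A} -> A
        end,
        receive
            {second, B} -> B
        end.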
https://bugs.erlang.org/browse/ERL-703
* maint:
Correct error behavior of is_map_key/2 in guards
Consider the following functions:
    foo() -> bar(not_a_map).
    bar(M) when not is_map_key(a, M) -> ok;
    bar(_) -> error.
What will `foo/0` return? It depends. If the module is compiled
with the default compiler options, the return value will be
`ok`. If the module is compiled with the `inline` option,
the return value will be `error`.
The correct value is `error`, because the call to `is_map_key/2`
when the second argument is not a map should fail the entire
guard. That is the way other failing guard BIFs are handled.
For example:
    foo() -> bar(not_a_tuple).
    bar(T) when not element(1, T) -> ok;
    bar(_) -> error.
`foo/0` always returns `error` (whether the code is inlined
or not).
This bug can be fixed by changing the classification of `is_map_key/2`
in the `erl_internal` module. It is currently classified as a type
test, which is incorrect, because type tests should not fail.
Reclassifying it as a plain guard BIF corrects the bug.
This correction also fixes the internal consistency check
failure which was reported in:
https://bugs.erlang.org/browse/ERL-699
* maint:
Fix bug in binary matching
bjorng/bjorn/compiler/binary-syntax/ERL-689/OTP-15219
Fix bug in binary matching
* maint:
Omit include path debug info for +deterministic builds
Merge branch 'john/compiler/fix-deterministic-include-paths/OTP-15204/ERL-679' into maint
* john/compiler/fix-deterministic-include-paths/OTP-15204/ERL-679:
Omit include path debug info for +deterministic builds
Compiling the same file with different include paths resulted in
different files with the `+deterministic` flag, even if everything
but the paths was identical. This was caused by the absolute path
of each include directory being unconditionally included in a
debug information chunk.
This commit fixes this by only including this information in
non-deterministic builds.
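Illustratively, these two compilations should now produce identical
BEAM files when nothing but the include paths differs (the paths are
made up):

    compile:file("foo.erl", [deterministic, {i,"/build/a/include"}]),
    compile:file("foo.erl", [deterministic, {i,"/build/b/include"}]).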
* maint:
Fix side-effect optimization when compiling from Core Erlang
Conflicts:
lib/compiler/src/sys_core_fold.erl
bjorng/bjorn/compiler/letrec-side-effect-fix/ERL-658/OTP-15188
Fix side-effect optimization when compiling from Core Erlang
The compiler generates incorrect code for the following example:
    decode_binary(_, <<Length, Data/binary>>) ->
        case {Length, Data} of
            {0, _} ->
                %% When converting the match context back to a binary,
                %% Data will be set to the entire original binary,
                %% that is, to <<0>> instead of <<>>.
                {{0, 0, 0}, Data};
            {4, <<Y:16/little, M, D, Rest/binary>>} ->
                {{Y, M, D}, Rest}
        end.
The problem is the delayed sub binary creation optimization, which
is not safe to do in this case.
This commit introduces a heuristic that will disable the delayed
sub binary creation optimization for this example. Unfortunately, the
heuristic may turn off the optimization when it would actually be
safe. In the OTP codebase, the optimization is turned off in two
instances, once in string.erl and once in dets_v9.erl.
https://bugs.erlang.org/browse/ERL-689
* maint:
Eliminate double computation of next var
beam_validator: Fix false diagnostic for a receive nested in a try
When an expression is only used for its side effects, we try to
remove everything that doesn't tie into a side-effect, but we
went a bit too far when we applied the optimization to funs
defined in such a context. Consider the following:
    do  letrec 'f'/0 = fun () -> ... whatever ...
        in call 'side':'effect'(apply 'f'/0())
        'ok'
When f/0 is optimized under the assumption that its return value
is unused, side:effect/1 will be fed the result of the last
side-effecting expression in f/0 instead of its actual result.
https://bugs.erlang.org/browse/ERL-658
Co-authored-by: Björn Gustavsson <[email protected]>