Reduce redundant moves and register shuffling
|
|
Try to eliminate short blocks that start with a phi node
and end in a return. For example:

    Result = phi { Res1, 4 }, { literal true, 5 }
    Ret = put_tuple literal ok, Result
    ret Ret

The code in this block can be inserted at the end of blocks 4 and
5. Thus, the following code can be inserted into block 4:

    Ret:1 = put_tuple literal ok, Res1
    ret Ret:1

And the following code into block 5:

    Ret:2 = put_tuple literal ok, literal true
    ret Ret:2

The code in block 5 can then be further simplified to:

    ret literal {ok, true}
This transformation may lead to more code improvements, for example:
* Stack trimming
* Fewer test_heap instructions
* Smaller stack frames
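At the source level, a block of this shape typically comes from code
such as the following (a hypothetical example, not from the commit):

    %% The case arms become blocks 4 and 5, the phi node merges Res1
    %% and 'true' into Result, and the {ok, _} tuple is built in a
    %% short block that ends in a return.
    wrap(X) ->
        Result = case X of
                     [] -> true;
                     _ -> hd(X)
                 end,
        {ok, Result}.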
|
|
Apply type optimizations across local function calls
|
|
This commit lets the type optimization pass work across functions,
tracking return and argument types to eliminate redundant tests.
|
|
This serves as a base for the upcoming module-level type
optimization, but may come in handy for other passes like
beam_ssa_funs and beam_ssa_bsm that have their own ad-hoc
implementations.
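As a hedged illustration (my own example, not from the commit),
consider a local function whose argument and return types become
known from its call sites:

    %% Assuming g/1 is not exported, every call passes a map, so the
    %% is_map/1 test in g/1 is redundant once argument types are
    %% tracked across local calls; the caller likewise learns that
    %% the return value is a non-negative integer.
    f(Map) when is_map(Map) -> g(Map).

    g(Map) when is_map(Map) -> map_size(Map).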
|
|
Consider this function and its corresponding BEAM code:
    foo(Map, Key) ->
        Val = case Map of
                  #{Key:=Val0} -> Val0;
                  _ -> default
              end,
        bar(1, 2, Val).

    {label,2}.
    {test,is_map,{f,3},[{x,0}]}.
    {get_map_elements,{f,3},{x,0},{list,[{x,1},{x,0}]}}.
                                               ^^^^^
    {jump,{f,4}}.
    {label,3}.
    {move,{atom,default},{x,0}}.
                         ^^^^^
    {label,4}.
    {move,{integer,2},{x,1}}.
    {move,{x,0},{x,2}}.
          ^^^^^
    {move,{integer,1},{x,0}}.
    {call_only,3,{f,6}}.
Note that the value of the variable `Val` will first be
placed in `{x,0}` and then moved to `{x,2}` where it needs
to be when calling the `bar/3` function.
The reason for the extra `move` instruction is that the
register allocator picks the lowest numbered available register
when choosing a register to put a variable in. In this case,
`{x,0}` will be chosen.
If only we could give the register allocator a hint that it would be
better to put `Val` in `{x,2}`, the extra `move` would disappear:

    {label,2}.
    {test,is_map,{f,3},[{x,0}]}.
    {get_map_elements,{f,3},{x,0},{list,[{x,1},{x,2}]}}.
    {jump,{f,4}}.
    {label,3}.
    {move,{atom,default},{x,2}}.
    {label,4}.
    {move,{integer,2},{x,1}}.
    {move,{integer,1},{x,0}}.
    {call_only,3,{f,6}}.
There is an existing sub pass (`reserve_regs`) in
`beam_ssa_pre_codegen` that, among other things, tries to give the
register allocator hints that some variables should be placed in
specific registers, if possible. However, the existing hinting
mechanism is limited, essentially only working within a single SSA
block.
This commit extends the hinting mechanism, allowing hints to be passed
across SSA blocks, eliminating `move` instructions and register
shuffling in many places. (494 modules out of a sample of 1236 modules
were changed by this commit.)
|
|
Use lists:splitwith/2 instead of lists:partition/2 for splitting out
phi nodes. Since phi nodes are always the first instructions in a
block, the result will be the same, but splitwith/2 is faster.
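Schematically, the split looks like this (a sketch using the
`#b_set{}` record from `beam_ssa.hrl`; the variable names are made
up):

    %% Phi nodes always come first in a block, so splitwith/2 can
    %% stop at the first non-phi instruction instead of scanning the
    %% whole instruction list as partition/2 does.
    {Phis, Rest} = lists:splitwith(fun(#b_set{op=Op}) -> Op =:= phi end, Is)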
|
|
If the match instruction was already marked as a skip, we'd ruin
its argument list.
|
|
A switch is equivalent to a series of '=:=', so we have to subtract
each value individually from the type. Subtracting a join risks
removing too much type information, and managed to narrow "number"
into "float" in the attached test case.
|
|
* bjorn/compiler/misc:
  beam_ssa_type: Simplify is_singleton_type/1
  beam_ssa_opt: Run ssa_opt_tuple_size early
  beam_ssa_codegen: Remove forgotten and unreachable clause
  beam_ssa_opt: Run the type optimization pass twice
  beam_ssa_type: Eliminate redundant 'succeeded' instructions
|
|
* bjorn/compiler/fix-inlined-funs:
  sys_core_inline: Kill *all* fun annotations when inlining
|
|
* bjorn/compiler/beam_validator/ERL-832:
  Make the beam_validator smarter (again)
|
|
Running ssa_opt_tuple_size early will give more opportunities
for optimizations.
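For context, ssa_opt_tuple_size rewrites a comparison of tuple_size/1
with a literal integer into an is_tuple test followed by a cheaper
arity test, so guards such as the following (my example) benefit:

    %% The guard can be compiled to is_tuple + test_arity instead of
    %% materializing the result of tuple_size/1 and comparing it.
    is_pair(T) when tuple_size(T) =:= 2 -> true;
    is_pair(_) -> false.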
|
|
fd682dd3b1dc corrected label generation for 'or', but forgot to
remove the old incorrect clause (that can no longer be reached).
|
|
The code will be significantly improved by running the
type optimization pass twice.
The ssa_opt_misc pass can be eliminated because everything it does
is also done by the type optimization pass.
|
|
Needed because of the optimizations in 48f20bd589fa69.
https://bugs.erlang.org/browse/ERL-832
|
|
sys_core_inline didn't kill fun annotations in variables,
which could lead to duplicated wrapper functions for funs.
That was basically harmless because the duplicated functions
were eventually discarded.
|
|
The beam_ssa_type pass would leave redundant 'succeeded' instructions,
and depend on the live optimization pass to remove them.
Update beam_ssa_type to remove redundant 'succeeded' instructions.
This will not improve the generated code, but will improve compilation
times since it eliminates instructions and variables.
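Schematically (a hand-written sketch in the same SSA notation, not
compiler output), a redundant 'succeeded' looks like this:

    %% When _0 is already known to be a number, bif:'+' cannot fail,
    %% so @ssa_bool is always true; the branch can be rewritten to an
    %% unconditional one and the 'succeeded' instruction removed.
    _1 = bif:'+' _0, literal 1
    @ssa_bool = succeeded _1
    br @ssa_bool, label 5, label 3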
|
|
Do the optimizations of bs_put* instructions in beam_ssa_opt
and remove the beam_bs pass. This can lead to a slight improvement
of compilation times.
|
|
There are two instructions that take string operands:

    {bs_put_string,Fail,NumberOfBytes,{string,String}}
    {bs_match_string,Fail,Register,NumberOfBits,{string,String}}

In the canonical BEAM code that is passed to beam_asm, the string
String is currently represented as a list. (The string in
bs_match_string is a bitstring before the beam_z compiler pass.) That
is wasteful, because there will be unnecessary conversions between
lists and binaries.

Change the representation of String to be a binary.
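For example (with made-up operands), a three-byte string operand
changes like this:

    {bs_put_string,{f,0},3,{string,"abc"}}      %% old: list
    {bs_put_string,{f,0},3,{string,<<"abc">>}}  %% new: binary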
Furthermore, bs_put_string is an optimization of a bs_put_binary
instruction with a literal binary operand. Currently, the
bs_put_string instruction is introduced in beam_bs. Delay the
introduction of bs_put_string to the beam_z pass. That will simplify
optimizations and allow us to do the optimization currently done in
beam_bs in an SSA pass in a future commit.
|
|
The optimization can be applied in a few more places if done
before ssa_opt_bsm_shortcut (for example, in unicode:cbv/2).
|
|
Improve type optimizations
|
|
Introduce subtraction of types to allow some redundant tests to be
eliminated.
Consider this function:
    foo(L) when is_list(L) ->
        case L of
            [_|_] -> non_empty;
            [] -> empty
        end.
After entering the body of the function, L is known to be either
a cons cell or nil (otherwise the is_list/1 guard would have failed).
If L is not a cons cell, it must be nil. Therefore, the test
for nil in the second clause of the case can be eliminated.
Here is the SSA code for the function before the optimization,
with some additional comments:
    function t:foo(_0) {
    0:
      @ssa_bool = bif:is_list _0
      br @ssa_bool, label 4, label 3

    4:
      %% _0 is now a list (cons or nil).
      @ssa_bool:8 = is_nonempty_list _0
      br @ssa_bool:8, label 9, label 7

    9:
      ret literal non_empty

    7:
      %% _0 is not a cons (or we wouldn't be here).
      %% Subtracting cons from the previously known type list
      %% gives that _0 must be nil.
      @ssa_bool:10 = bif:'=:=' _0, literal []
      br @ssa_bool:10, label 11, label 6

    11:
      ret literal empty

    6:
      _6 = put_tuple literal case_clause, _0
      %% t.erl:5
      @ssa_ret:12 = call remote (literal erlang):(literal error)/1, _6
      ret @ssa_ret:12

    3:
      _9 = put_list _0, literal []
      %% t.erl:4
      @ssa_ret:13 = call remote (literal erlang):(literal error)/2, literal function_clause, _9
      ret @ssa_ret:13
    }
Type subtraction gives us that _0 must be nil in block 7, allowing us to
remove the comparison of _0 with nil. The code for the function can be simplified
to:
    function t:foo(_0) {
    0:
      @ssa_bool = bif:is_list _0
      br @ssa_bool, label 4, label 3

    4:
      @ssa_bool:8 = is_nonempty_list _0
      br @ssa_bool:8, label 9, label 11

    9:
      ret literal non_empty

    11:
      ret literal empty

    3:
      _9 = put_list _0, literal []
      %% t.erl:4
      @ssa_ret:13 = call remote (literal erlang):(literal error)/2, literal function_clause, _9
      ret @ssa_ret:13
    }
|
|
Code generation for 'or' with {z,0} destination could generate duplicate
new labels. The bug was introduced in eb571f8951bd.
|
|
There is no easy way to convert xor or is_record/2 to test operations.
|
|
When an instruction has been eliminated, the corresponding
'succeeded' instruction must also be eliminated.
|
|
As beam_ssa_type is about to get smarter, beam_validator must
be smarter too.
|
|
* maint:
  Fix unsafe optimization of stack trace building
|
|
beam_ssa_pre_codegen: Fix an internal consistency failure
|
|
The `sys_core_fold` pass of the compiler would optimize
away the building of the stacktrace in code such as:
    try
        ...
    catch
        C:R:Stk ->
            erlang:raise(C, {R,Stk}, Stk)
    end
That optimization is unsafe and would cause a crash in a later compiler
pass.
|
|
The following function:
    is_two_tuple(Arg) ->
        case is_tuple(Arg) of
            false -> false;
            true -> tuple_size(Arg) == 2
        end.

would cause an internal consistency failure:

    Internal consistency check failed - please report this bug.
    Instruction: {bif,tuple_size,{f,0},[{x,0}],{z,0}}
    Error: {invalid_store,{z,0},{integer,[]}}:
|
|
Be more careful when updating types so that fragility is not lost.
|
|
The previous optimizations caused some code in beam_jump to
become uncovered. Add tests to cover more code. Also remove
a clause in beam_jump:opt/3 that does not seem possible to
cover anymore (this is safe, because the clause was an
optimization).
|
|
Eliminate a jump to a return sequence, replacing the jump with
the return sequence. This optimization always saves execution time
and may also save code space.
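Schematically (hand-written BEAM code, not compiler output):

    %% Before: the jump leads straight to a return sequence.
    {jump,{f,5}}.
    ...
    {label,5}.
    {deallocate,1}.
    return.

    %% After: the return sequence is copied to the jump site.
    {deallocate,1}.
    return.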
|
|
181cfc4ef9d1 stopped using #st.index.
|
|
Some lines in beam_peep were no longer covered when the sharing optimization
was added to beam_ssa_opt. Also remove some code from beam_peep that no
longer seems possible to cover.
|
|
Share code for semantically equivalent blocks referred to by `br`
and `switch` instructions.
A similar optimization is done in `beam_jump`, but doing it here as
well is beneficial as it may enable other optimizations. Also, if
there are many semantically equivalent clauses, this optimization can
substantially decrease compilation times.
|
|
Sort sequences of `move` instructions on the Y register.
When moving from X registers to Y registers, having the instructions
sorted on Y registers gives the loader more opportunities to use
`move_window{3,4,5}` instructions. For example, the following five
instructions:

    move_xy x(2) y(0)
    move_xy x(1) y(1)
    move_xy x(0) y(2)
    move_xy x(5) y(3)
    move_xy x(4) y(4)

can be replaced with:

    move_window5_xxxxxy x(2) x(1) x(0) x(5) x(4) y(0)
When the Y registers are not ordered so that `move_window5` can be
used, the loader would typically combine the first three moves to a
`move3_xyxyxy` instruction and the last two moves to a
`move2_par_xyxy` instruction.
When moving from Y registers to X registers, sorting on the Y
registers could potentially be more cache-friendly. It could also
be worthwhile investigating a new `move_window` instruction in
the BEAM interpreter that could move values from contiguous Y registers
to X registers.
Note that `scripts/diffable` can generate diffable disassembly files
for the loaded BEAM code:

    $ scripts/diffable --dis 0
    $ scripts/diffable --dis 1
    $ diff -u 0 1
|
|
There could be an internal consistency failure when using is_function/2,
because an optimization did not take into account that is_function/2 can fail.
https://bugs.erlang.org/browse/ERL-778
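For context (my example): unlike most type tests, is_function/2 can
raise badarg in body context when the arity argument is not a
non-negative integer, so its 'succeeded' test cannot simply be
discarded:

    %% is_function(F, Arity) fails with badarg outside a guard when
    %% Arity is, say, negative or not an integer at all.
    check(F, Arity) ->
        case is_function(F, Arity) of
            true -> function;
            false -> other
        end.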
|
|
Enhance the copy_retval/1 optimization to allow Y registers to be
reused in more circumstances. Reusing Y registers can often reduce
the size of the stack frame.
|
|
There could be `allocate_zero` instructions where `allocate` would
suffice, or superfluous `init` instructions, because not all possible
initializations of Y registers were taken into account.
While at it, also add some more comments.
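Schematically (hand-written BEAM code): `allocate_zero` initializes
every Y register in the new stack frame to NIL, while plain
`allocate` is enough when each Y register is certainly written before
anything that can trigger a garbage collection or raise an exception:

    %% Before:
    {allocate_zero,2,2}.
    {move,{x,0},{y,0}}.
    {move,{x,1},{y,1}}.

    %% After: both Y registers are initialized immediately, so the
    %% cheaper allocate suffices.
    {allocate,2,2}.
    {move,{x,0},{y,0}}.
    {move,{x,1},{y,1}}.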
|
|
The `get_map_element` instruction has no side effects, and should be
removed if its value is not used.
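Schematically (hand-written SSA): once the extracted value is dead,
the lookup itself can be dropped:

    %% _2 is never used below and get_map_element has no side
    %% effects, so dead-code elimination can remove the instruction.
    _2 = get_map_element _0, literal key
    ret literal ok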
|