* bjorn/compiler/misc:
beam_ssa_type: Simplify is_singleton_type/1
beam_ssa_opt: Run ssa_opt_tuple_size early
beam_ssa_codegen: Remove forgotten and unreachable clause
beam_ssa_opt: Run the type optimization pass twice
beam_ssa_type: Eliminate redundant 'succeeded' instructions
|
|
* bjorn/compiler/fix-inlined-funs:
sys_core_inline: Kill *all* fun annotations when inlining
|
|
* bjorn/compiler/beam_validator/ERL-832:
Make the beam_validator smarter (again)
|
|
|
|
Running ssa_opt_tuple_size early will give more opportunities
for optimizations.
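An illustrative example (mine, not taken from the commit) of the kind of
code the pass targets: a comparison of tuple_size/1 with a small literal
integer is rewritten into an is_tuple test followed by an arity test, and
running that rewrite early lets the later passes work on the simpler tests:

    is_pair(T) when tuple_size(T) =:= 2 -> true;
    is_pair(_) -> false.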
|
|
fd682dd3b1dc corrected label generation for 'or', but forgot to
remove the old incorrect clause (that can no longer be reached).
|
|
The code will be significantly improved by running the
type optimization pass twice.
The ssa_opt_misc pass can be eliminated because everything it does
is also done by the type optimization pass.
|
|
Needed because of the optimizations in 48f20bd589fa69.
https://bugs.erlang.org/browse/ERL-832
|
|
sys_core_inline didn't kill fun annotations in variables,
which could lead to duplicated wrapper functions for funs.
That was basically harmless because the duplicated functions
were eventually discarded.
|
|
The beam_ssa_type pass would leave redundant 'succeeded' instructions,
and depend on the live optimization pass to remove them.
Update beam_ssa_type to remove redundant 'succeeded' instructions.
This will not improve the generated code, but will improve compilation
times since it eliminates instructions and variables.
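A hand-written sketch of the pattern (not taken from the commit): if the
type pass knows that _1 is a nonempty list, bif:tl cannot fail, so the
branch below is rewritten to go directly to label 6; with this change the
then-unused 'succeeded' instruction is removed at the same time instead of
being left behind for the live optimization pass:

    _2 = bif:tl _1
    @ssa_bool = succeeded _2
    br @ssa_bool, label 6, label 3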
|
|
Do the optimizations of bs_put* instructions in beam_ssa_opt
and remove the beam_bs pass. This can lead to a slight improvement
of compilation times.
|
|
There are two instructions that take string operands:
    {bs_put_string,Fail,NumberOfBytes,{string,String}}
    {bs_match_string,Fail,Register,NumberOfBits,{string,String}}
In the canonical BEAM code that is passed to beam_asm, the string String
is currently represented as a list. (The string in bs_match_string is
a bitstring before the beam_z compiler pass.) That is wasteful,
because there will be unnecessary conversions between lists and
binaries.
Change the representation of String to be a binary.
Furthermore, bs_put_string is an optimization of a bs_put_binary
instruction with a literal binary operand. Currently, the
bs_put_string instruction is introduced in beam_bs. Delay the
introduction of bs_put_string to the beam_z pass. That will simplify
optimizations and allow us to do the optimization currently done
in beam_bs in an SSA pass in a future commit.
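For illustration (my example, not from the commit), a construction with a
literal string segment such as:

    f(Bin) when is_binary(Bin) ->
        <<"abc", Bin/binary>>.

is where a bs_put_binary instruction with a literal operand can be turned
into a single bs_put_string instruction.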
|
|
The optimization can be applied in a few more places if done
before ssa_opt_bsm_shortcut (for example, in unicode:cbv/2).
|
|
Improve type optimizations
|
|
The compiler would crash when compiling code such as:
    (A / B) band 16#ff
The type for the expression would be 'none' (the '/' operator always
returns a float, while 'band' only accepts integers, so the expression
can never succeed), but beam_type:verified_type/1 did not handle 'none'.
https://bugs.erlang.org/browse/ERL-829
|
|
Introduce subtraction of types to allow some redundant tests to be
eliminated.
Consider this function:
    foo(L) when is_list(L) ->
        case L of
            [_|_] -> non_empty;
            [] -> empty
        end.
After entering the body of the function, L is known to be either
a cons cell or nil (otherwise the is_list/1 guard would have failed).
If L is not a cons cell, it must be nil. Therefore, the test
for nil in the second clause of the case can be eliminated.
Here is the SSA code with some additional comments for the function
before the optimization:
function t:foo(_0) {
0:
  @ssa_bool = bif:is_list _0
  br @ssa_bool, label 4, label 3

4:
  %% _0 is now a list (cons or nil).
  @ssa_bool:8 = is_nonempty_list _0
  br @ssa_bool:8, label 9, label 7

9:
  ret literal non_empty

7:
  %% _0 is not a cons (or we wouldn't be here).
  %% Subtracting cons from the previously known type list
  %% gives that _0 must be nil.
  @ssa_bool:10 = bif:'=:=' _0, literal []
  br @ssa_bool:10, label 11, label 6

11:
  ret literal empty

6:
  _6 = put_tuple literal case_clause, _0
  %% t.erl:5
  @ssa_ret:12 = call remote (literal erlang):(literal error)/1, _6
  ret @ssa_ret:12

3:
  _9 = put_list _0, literal []
  %% t.erl:4
  @ssa_ret:13 = call remote (literal erlang):(literal error)/2, literal function_clause, _9
  ret @ssa_ret:13
}
Type subtraction gives us that _0 must be nil in block 7, allowing us to
remove the comparison of _0 with nil. The code for the function can be simplified
to:
function t:foo(_0) {
0:
  @ssa_bool = bif:is_list _0
  br @ssa_bool, label 4, label 3

4:
  @ssa_bool:8 = is_nonempty_list _0
  br @ssa_bool:8, label 9, label 11

9:
  ret literal non_empty

11:
  ret literal empty

3:
  _9 = put_list _0, literal []
  %% t.erl:4
  @ssa_ret:13 = call remote (literal erlang):(literal error)/2, literal function_clause, _9
  ret @ssa_ret:13
}
|
|
|
|
Code generation for 'or' with {z,0} destination could generate duplicate
new labels. The bug was introduced in eb571f8951bd.
|
|
There is no easy way to convert xor or is_record/2 to test operations.
|
|
|
|
When an instruction has been eliminated, the corresponding 'succeeded'
instruction must be eliminated too.
|
|
|
|
As beam_ssa_type is about to get smarter, beam_validator must
be smarter too.
|
|
b89044a800c4 introduced an optimization that tries to delay creation
of stack frames. It turns out that this optimization is not always
safe. (See the new test case for an example.)
Since the code generator is completely rewritten in the `master`
branch for the upcoming OTP 22 release, it does not make sense to try
to mend this optimization. It is better to remove it. Out of a sample
of about 1000 modules in OTP, about 50 are changed as a result
of removing this optimization.
The compiler in OTP 22 will do the same optimization in a cleaner,
safer, and more effective way.
https://bugs.erlang.org/browse/ERL-807
|
|
* maint:
Fix unsafe optimization of stack trace building
|
|
beam_ssa_pre_codegen: Fix an internal consistency failure
|
|
The `sys_core_fold` pass of the compiler would optimize
away the building of the stacktrace in code such as:
    try
        ...
    catch
        C:R:Stk ->
            erlang:raise(C, {R,Stk}, Stk)
    end
That optimization is unsafe and would cause a crash in a later compiler
pass.
|
|
The following function:
    is_two_tuple(Arg) ->
        case is_tuple(Arg) of
            false -> false;
            true -> tuple_size(Arg) == 2
        end.

would cause an internal consistency failure:

    Internal consistency check failed - please report this bug.
    Instruction: {bif,tuple_size,{f,0},[{x,0}],{z,0}}
    Error: {invalid_store,{z,0},{integer,[]}}:
|
|
Be more careful when updating types so that fragility is not lost.
|
|
The previous optimizations caused some code in beam_jump to
become uncovered. Add tests to cover more code. Also remove
a clause in beam_jump:opt/3 that does not seem possible to
cover anymore (this is safe, because the clause was an
optimization).
|
|
Eliminate a jump to a return sequence, replacing the jump with
the return sequence. This optimization always saves execution time
and may also save code space.
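A sketch in hand-written BEAM assembly (labels and frame size made up, not
from the commit): a jump such as

    {jump,{f,4}}.

where label 4 starts a return sequence

    {label,4}.
    {deallocate,1}.
    return.

can be replaced by the return sequence itself:

    {deallocate,1}.
    return.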
|
|
181cfc4ef9d1 stopped using #st.index.
|
|
Some lines in beam_peep were no longer covered when the sharing optimization
was added to beam_ssa_opt. Also remove some code from beam_peep that no
longer seems possible to cover.
|
|
Share code for semantically equivalent blocks referred to by `br`
and `switch` instructions.
A similar optimization is done in `beam_jump`, but doing it here as
well is beneficial as it may enable other optimizations. Also, if
there are many semantically equivalent clauses, this optimization can
substantially decrease compilation times.
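An illustrative example (mine, not from the commit) where the switch
generated for the argument has several branches ending in semantically
equivalent blocks that can be shared:

    classify(1) -> small;
    classify(2) -> small;
    classify(3) -> small;
    classify(_) -> big.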
|
|
Sort sequences of `move` instructions on the Y register.
When moving from X registers to Y registers, having the instructions
sorted on Y registers gives the loader more opportunities to use
`move_window{3,4,5}` instructions. For example, the following five
instructions:
    move_xy x(2) y(0)
    move_xy x(1) y(1)
    move_xy x(0) y(2)
    move_xy x(5) y(3)
    move_xy x(4) y(4)

can be replaced with:

    move_window5_xxxxxy x(2) x(1) x(0) x(5) x(4) y(0)
When the Y registers are not ordered so that `move_window5` can be
used, the loader would typically combine the first three moves to a
`move3_xyxyxy` instruction and the last two moves to a
`move2_par_xyxy` instruction.
When moving from Y registers to X registers, sorting on the Y
registers could potentially be more cache-friendly. It could also
be worthwhile investigating a new `move_window` instruction in
the BEAM interpreter that could move values from contiguous Y registers
to X registers.
Note that `scripts/diffable` can generate diffable disassembly files for
the loaded BEAM code:
$ scripts/diffable --dis 0
$ scripts/diffable --dis 1
$ diff -u 0 1
|
|
There could be an internal consistency failure when using is_function/2,
because an optimization did not take into account that is_function/2 can fail.
https://bugs.erlang.org/browse/ERL-778
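For illustration (my example, not the code from the bug report): outside
of a guard, is_function/2 is not a test that always succeeds; it raises
badarg when the arity argument is not a non-negative integer:

    check(F, Arity) ->
        %% Raises badarg (rather than returning false) if Arity is,
        %% for example, an atom.
        is_function(F, Arity).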
|
|
Enhance the copy_retval/1 optimization to allow Y registers to be
reused in more circumstances. Reusing Y registers can often reduce
the size of the stack frame.
|
|
There could be `allocate_zero` instructions where `allocate` would
suffice, or superfluous `init` instructions, because not all possible
initializations of Y registers were taken into account.
While at it, also add some more comments.
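A sketch of the first case in hand-written BEAM assembly (registers and
sizes made up, not from the commit):

    {allocate_zero,1,2}.    %% initializes {y,0} to NIL
    {move,{x,0},{y,0}}.     %% ...but {y,0} is overwritten right away,
                            %% so {allocate,1,2} would have sufficed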
|
|
The `get_map_element` instruction has no side effects, and should be
removed if its value is not used.
|
|
`beam_ssa_dead` can waste a lot of time trying to optimize
an unoptimizable `switch` instruction.
By being a little bit smarter when optimizing `switch` instructions,
the runtime for the beam_ssa_dead pass was reduced approximately by
half on my computer for this module:
https://github.com/aggelgian/cuter/blob/master/src/cuter_binlib.erl
Noticed-by: Kostis Sagonas
|
|
The linear scan algorithm keeps unhandled intervals in two separate
lists: one for intervals with reserved registers and one for intervals
without reserved registers. When collecting the available free registers
all unhandled intervals with reserved registers must be checked for
overlap.
Unhandled intervals that had a preferred register were put into the
list of intervals with reserved registers, leading to a lot of
unnecessary overlap checking if there were many intervals with
preferred registers. Changing the partition code to put intervals with
preferred registers into the general list of unhandled intervals will
reduce the compilation time if there are many preferred registers.
On my computer, this change reduced the time of the linear scan pass
from about 20 seconds down to about 0.5 seconds for this module:
https://github.com/aggelgian/cuter/blob/master/src/cuter_binlib.erl
Noticed-by: Kostis Sagonas
|
|
The type analysis pass (`beam_ssa_type`) keeps the type information
for all variables that are in scope. For huge functions, the
`join_types/2` function could get really slow when joining two
maps with thousands of variables in each.
Use a conservative approach to discard type information for
variables only used once by a `br` or `switch` in the same
block as the variable is defined in.
This approach reduces the runtime for the `beam_ssa_type` pass
from a few minutes down to a few seconds for this module:
https://github.com/aggelgian/cuter/blob/master/src/cuter_binlib.erl
Rejected approach: Calculate liveness information for all variables
and discard type information for any variable that would not be
used again. That turned out to not work because some optimizations
would invalidate the liveness (in particular, substitutions could
lengthen the lifetime for a variable). Trying to update the
liveness information when doing the optimizations would be tricky.
Noticed-by: Kostis Sagonas
|
|
Also rename a few functions in an attempt to make the code clearer.
|
|
Recognize more safe labels to enable stack trimming in more
circumstances.
|
|
|
|
Remove the now unused beam_utils:is_not_used/3 and
beam_utils:is_killed/3 functions and friends.
Starting out as simple functions a long time ago, those functions have
grown and grown to support more optimizations. The number of bugs
found and fixed in beam_utils has also grown over time.
|
|
Eliminate the use of beam_utils:is_not_used/3 by implementing a simple
is_not_used() function in beam_trim itself. The new version actually
makes trimming possible in more circumstances, because
beam_utils:is_not_used/3 was too conservative for the purpose of stack
trimming (it was previously used for optimizations where it was
necessary to be more conservative).
Alternatives considered: I tried to implement stack trimming in
beam_ssa_codegen but it turned out to be a total mess. Not
surprisingly, it turns out that an optimization that renumbers
Y registers is hard to do on an intermediate representation that
still uses variables instead of BEAM registers.
|
|
Prior to this commit, the optimizations using beam_utils:is_killed/3
were only executed a few times in the entire compiler test suite.
|
|
This will help investigation of compiler bugs.
|
|
|