|
The previous optimizations caused some code in beam_jump to
become uncovered. Add tests to cover more code. Also remove
a clause in beam_jump:opt/3 that does not seem possible to
cover anymore (this is safe, because the clause was an
optimization).
|
|
Eliminate a jump to a return sequence, replacing the jump with
the return sequence. This optimization always saves execution time
and may also save code space.
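A schematic example of the rewrite (the label number and deallocation
amount are made up for illustration):
    Before:
        {jump,{f,10}}.       %% label 10 holds: {deallocate,1}. return.
    After:
        {deallocate,1}.
        return.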
|
|
Commit 181cfc4ef9d1 stopped using #st.index.
|
|
Some lines in beam_peep were no longer covered when the sharing optimization
was added to beam_ssa_opt. Also remove some code from beam_peep that no
longer seems possible to cover.
|
|
Share code for semantically equivalent blocks referred to by `br`
and `switch` instructions.
A similar optimization is done in `beam_jump`, but doing it here as
well is beneficial as it may enable other optimizations. Also, if
there are many semantically equivalent clauses, this optimization can
substantially decrease compilation times.
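As a hedged illustration (the function below is made up, not taken from
the commit), many clauses ending in the same expression give rise to
semantically equivalent blocks that can now be shared:
    classify(1) -> {error,badarg};
    classify(2) -> {error,badarg};
    classify(3) -> {error,badarg};
    classify(N) -> N.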
|
|
Sort sequences of `move` instructions on the Y register.
When moving from X registers to Y registers, having the instructions
sorted on Y registers gives the loader more opportunities to use
`move_window{3,4,5}` instructions. For example, the following five
instructions:
    move_xy x(2) y(0)
    move_xy x(1) y(1)
    move_xy x(0) y(2)
    move_xy x(5) y(3)
    move_xy x(4) y(4)
can be replaced with:
    move_window5_xxxxxy x(2) x(1) x(0) x(5) x(4) y(0)
When the Y registers are not ordered so that `move_window5` can be
used, the loader would typically combine the first three moves into a
`move3_xyxyxy` instruction and the last two moves into a
`move2_par_xyxy` instruction.
When moving from Y registers to X registers, sorting on the Y
registers could potentially be more cache-friendly. It could also
be worthwhile investigating a new `move_window` instruction in
the BEAM interpreter that could move values from contiguous Y registers
to X registers.
Note that `scripts/diffable` can generate diffable disassembly files for
the loaded BEAM code:
$ scripts/diffable --dis 0
$ scripts/diffable --dis 1
$ diff -u 0 1
|
|
Enhance the copy_retval/1 optimization to allow Y registers to be
reused in more circumstances. Reusing Y registers can often reduce
the size of the stack frame.
|
|
There could be `allocate_zero` instructions where `allocate` would
suffice, or superfluous `init` instructions, because not all possible
initializations of Y registers were taken into account.
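A schematic example of the first case (stack size and registers are made
up): when every Y register is written before anything that could trigger
a garbage collection, plain `allocate` is enough:
    Before:
        {allocate_zero,1,2}.
        {move,{x,0},{y,0}}.
    After:
        {allocate,1,2}.
        {move,{x,0},{y,0}}.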
While at it, also add some more comments.
|
|
The `get_map_element` instruction has no side effects, and should be
removed if its value is not used.
|
|
`beam_ssa_dead` can waste a lot of time trying to optimize
an unoptimizable `switch` instruction.
Being a little bit smarter when optimizing `switch` instructions
reduced the runtime of the beam_ssa_dead pass by approximately
half on my computer for this module:
https://github.com/aggelgian/cuter/blob/master/src/cuter_binlib.erl
Noticed-by: Kostis Sagonas
|
|
The linear scan algorithm keeps unhandled intervals in two separate
lists: one for intervals with reserved registers and one for intervals
without reserved registers. When collecting the available free registers,
all unhandled intervals with reserved registers must be checked for
overlap.
Unhandled intervals that had a preferred register were put into the
list of intervals with reserved registers, leading to a lot of
unnecessary overlap checking if there were many intervals with
preferred registers. Changing the partition code to put intervals with
preferred registers into the general list of unhandled intervals will
reduce the compilation time if there are many preferred registers.
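A minimal sketch of the new partitioning criterion (the record and field
names are hypothetical, not the actual compiler code):
    -record(ival, {reg=none}).   %% none | {prefer,Reg} | {reserved,Reg}

    %% Only intervals with a truly reserved register go into the
    %% reserved list; intervals that merely prefer a register stay
    %% in the general list of unhandled intervals.
    partition_intervals(Intervals) ->
        lists:partition(fun(#ival{reg={reserved,_}}) -> true;
                           (#ival{}) -> false
                        end, Intervals).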
On my computer, this change reduced the time of the linear scan pass
from about 20 seconds down to about 0.5 seconds for this module:
https://github.com/aggelgian/cuter/blob/master/src/cuter_binlib.erl
Noticed-by: Kostis Sagonas
|
|
The type analysis pass (`beam_ssa_type`) keeps the type information
for all variables that are in scope. For huge functions, the
`join_types/2` function could get really slow when joining two
maps with thousands of variables in each.
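A minimal sketch of why the join is expensive (this is not the actual
beam_ssa_type code; the type representation and join/2 are placeholders):
the work grows with the number of variables whose types are kept in scope.
    join_types(Ts0, Ts1) ->
        maps:fold(fun(V, T0, Acc) ->
                          case Ts1 of
                              #{V := T1} -> Acc#{V => join(T0, T1)};
                              #{} -> Acc
                          end
                  end, #{}, Ts0).

    join(T, T) -> T;
    join(_, _) -> any.           %% placeholder for the real type join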
Use a conservative approach and discard the type information for
variables that are used only once, by a `br` or `switch` in the
same block as the one in which the variable is defined.
This approach reduces the runtime of the `beam_ssa_type` pass
from a few minutes down to a few seconds for this module:
https://github.com/aggelgian/cuter/blob/master/src/cuter_binlib.erl
Rejected approach: Calculate liveness information for all variables
and discard type information for any variable that would not be
used again. That turned out not to work, because some optimizations
would invalidate the liveness information (in particular, substitutions
could lengthen the lifetime of a variable). Trying to update the
liveness information while doing the optimizations would be tricky.
Noticed-by: Kostis Sagonas
|
|
Also rename a few functions in an attempt to make things clearer.
|
|
Recognize more safe labels to enable stack trimming in more
circumstances.
|
|
|
|
Remove the now unused beam_utils:is_not_used/3 and
beam_utils:is_killed/3 functions and friends.
Starting out as simple functions a long time ago, those functions have
grown and grown to support more optimizations. The number of bugs
found and fixed in beam_utils has also grown over time.
|
|
Eliminate the use of beam_utils:is_not_used/3 by implementing a simple
is_not_used() function in beam_trim itself. The new version actually
makes trimming possible in more circumstances, because
beam_utils:is_not_used/3 was too conservative for the purpose of stack
trimming (it was previously used for optimizations where it was
necessary to be more conservative).
Alternatives considered: I tried to implement stack trimming in
beam_ssa_codegen but it turned out to be a total mess. Not
surprisingly, it turns out that an optimization that renumbers
Y registers is hard to do on an intermediate representation that
still uses variables instead of BEAM registers.
|
|
Prior to this commit, the optimizations using beam_utils:is_killed/3
were only executed a few times in the entire compiler test suite.
|
|
This will help investigation of compiler bugs.
|
|
|
|
* maint:
Fix bug when beam_jump removes put_tuple instructions
Conflicts:
lib/compiler/src/beam_jump.erl
lib/compiler/test/beam_jump_SUITE.erl
|
|
beam_except: Generalize translation to func_info instructions
|
|
* bjorn/compiler/diffable:
scripts/diffable: Correct the number of modules being compiled
scripts/diffable: Use the diffable compiler option
Add a diffable option for the compiler
scripts/diffable: Refactor option parsing
|
|
`beam_jump` could remove a `put_tuple` instruction when the
tuple would not be used, but it would leave the following
`put` instructions. Make sure they are removed.
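A schematic illustration (registers and values made up): when the tuple
built into x(1) turns out to be unused, all three of these instructions
must be deleted, not only the `put_tuple`:
    {put_tuple,2,{x,1}}.
    {put,{x,0}}.
    {put,{atom,ok}}.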
https://bugs.erlang.org/browse/ERL-759
|
|
The `beam_except` pass replaces some calls to `erlang:error/1` or
`erlang:error/2` with specialized instructions in order to reduce the
size of the BEAM code.
In functions that do binary matching, `beam_except` would fail to
translate the instructions that generate a `function_clause`
exception. Here is an example:
    bsum(<<H:16,T/binary>>, Acc) ->
        bsum(T, Acc+H);
    bsum(<<>>, Acc) ->
        Acc.
The BEAM code that generates the `function_clause` exception looks
like this:
    {label,4}.
    {test_heap,2,3}.
    {put_list,{x,1},nil,{x,1}}.
    %% The following two instructions prevent the translation.
    {bs_set_position,{x,2},{x,0}}.
    {bs_get_tail,{x,2},{x,0},3}.
    {test_heap,2,2}.
    {put_list,{x,0},{x,1},{x,1}}.
    {move,{atom,function_clause},{x,0}}.
    {line,...}.
    {call_ext,2,{extfunc,erlang,error,2}}.
Make the translation of `function_clause` exceptions smarter, so
that the following code will be produced:
    {label,4}.
    {bs_set_position,{x,2},{x,0}}.
    {bs_get_tail,{x,2},{x,0},3}.
    {jump,{f,1}}. %Jump to `func_info` instruction.
(This issue was noticed when looking at the code generated by
https://github.com/tomas-abrahamsson/gpb.)
|
|
Add the `diffable` option to produce a more diff-friendly output in
a .S file. In a diffable .S file, label numbers start from 1 in each
function, and local function calls use function names instead of labels.
scripts/diffable produces diff-friendly files, but only for a hard-coded
subset of files in OTP. Having the option directly in the compiler
makes it easier to diff the BEAM code for arbitrary source modules.
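A hedged usage example (assuming the option is passed to erlc with the
usual `+` syntax; the module name is made up):
    $ erlc +diffable -S foo.erl
    $ mv foo.S foo.S.orig
    $ # ... change foo.erl or the compiler, then recompile ...
    $ erlc +diffable -S foo.erl
    $ diff -u foo.S.orig foo.S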
|
|
The `beam_jump` pass could eliminate `move` instructions when it was
not safe to do so. See the new test case `unsafe_move_elimination/1`
for an example.
Reported-by: Michał Muskała
|
|
|
|
into maint
* john/compiler/bs_match-anno-liveness-fix/OTP-15353/ERL-753:
beam_utils: Handle bs_start_match2 in anno_defs
|
|
|
|
* maint:
compiler: Forward +source flag to epp and fix bug in +deterministic
epp: Allow user to set source name independently of input file name
|
|
Fixes a crash during code generation of the following code:
    call_atom() ->
        fun({send = Send}) ->
                Send()
        end.
|
|
The source file path as given to `erlc` was included in an implicit
file attribute inserted by epp, even when the +source flag was
set to something else, which was a bit surprising. It was also
included when +deterministic was specified, breaking that flag's
promise.
This commit forwards the +source flag to epp so that it inserts the
right information, and if +deterministic is given, the path will be
shaved down to just the base name of the file, guaranteeing the same
result regardless of how the input file is reached.
|
|
* bjorn/compiler/misc-fixes:
beam_ssa: Remove unnecessary beam_ssa: prefixes
beam_ssa_bsm: Fix replacement of variables in a remote call
|
|
|
|
Co-authored-by: John Högberg <[email protected]>
|
|
jhogberg/john/compiler/improve-named-funs/OTP-15273/ERL-639
Optimize named funs and fun-wrapped macros
|
|
Introduce the no_spawn_compiler_process option
|
|
If a fun is defined locally and only used for calls, it can be replaced
with direct calls to the relevant function. This greatly speeds up "named
functions" (which rely on make_fun to recreate themselves) and macros that
wrap their body in a fun.
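A hedged illustration (made-up function): the named fun below is only
ever called, so its calls can be rewritten into direct calls instead of
going through a fun that recreates itself via make_fun:
    fact(N) ->
        F = fun Fact(0) -> 1;
                Fact(M) -> M * Fact(M - 1)
            end,
        F(N).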
|
|
By default, all code is compiled in a separate process
which is terminated at the end of compilation. However,
some tools, like Dialyzer or compilers for other BEAM languages,
may already manage their own worker processes, and spawning
an extra process may slow the compilation down.
In such scenarios, you can pass this option to stop the
compiler from spawning an additional process.
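A hedged usage sketch (file name made up; the option is passed like any
other compile:file/2 option):
    compile:file("foo.erl", [no_spawn_compiler_process, report]).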
This generalizes commit 657760e18087b0cdbaecc5e96e46f6f66bc9497a.
|
|
* bjorn/compiler/fix-r21-option:
Fix code generation of binary instructions with the r21 option
|
|
Minor cleanups and bug fixes of the compiler
|
|
OTP 22 extends the binary instructions to support a Y register
destination. When giving an option to compile for an earlier
release, make sure that binary instructions don't use a Y register
destination, by rewriting the binary instructions to use an X
register destination and adding a `move` instruction to move
the value to the Y register.
|
|
Rewrite BSM optimizations in the new SSA-based intermediate format
|
|
This has been superseded by bs_get_tail/3. Note that it is NOT
removed from the emulator or beam_disasm, as old modules are still
legal.
|
|
Remove the variable aliasing support that was needed for the
old beam_bsm pass.
|
|
The beam_ssa_bsm pass welds chained matches together, but the match
expressions themselves are unchanged, and if there is a tail
alignment check, it will be done each time. This subpass figures out
the checks we've already done and deletes the redundant ones.
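A hedged illustration (made-up function): after the two matches below
are welded together, the byte-alignment check implied by the second
`/binary` tail only needs to be done once:
    to_pair(<<A:8, Rest/binary>>) ->
        <<B:8, _/binary>> = Rest,
        {A, B}.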
|
|
This commit improves the bit-syntax match optimization pass,
leveraging the new SSA intermediate format to perform much more
aggressive optimizations. Some highlights:
* Match contexts can be reused even after being passed to a
function or being used in a try block (see the sketch after
this list).
* Sub-binaries are no longer eagerly extracted, making it far
easier to keep "happy paths" free from binary creation.
* Trivial wrapper functions no longer disable context reuse.
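A hedged sketch of the first two points (made-up functions): the match
context created for the argument of decode/1 can now be passed on to
decode_rest/2 and reused there, instead of eagerly extracting Rest as a
sub-binary:
    decode(<<Size:8, Rest/binary>>) ->
        decode_rest(Size, Rest).

    decode_rest(Size, <<Payload:Size/binary, _/binary>>) ->
        Payload.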
|
|
2e40d8d1c51a attempted to fix a bug in binary matching, but it only
fixed the bug for the minimized test case.
This commit removes the previous fix and fixes the bug in a more
effective way. See the comments in the new code in `sys_core_bsm`
for an explanation.
This commit restores the optimizations in string.erl and dets_v9.erl
that the previous fix disabled.
I have not found any code where this commit will disable optimizations
that are actually safe. There are some changes to the code generated
for ssl_cipher.erl, in that some bs_start_match2 instructions no longer
reuse the binary register for the match context, but the delayed
sub-binary optimization was never applied to that code in the first
place.
https://bugs.erlang.org/browse/ERL-689
|
|
Moving this optimization out of beam_block makes beam_block do one
thing and one thing only: creating blocks.
|