Age | Commit message | Author |
|
The expansion of record field updates, when more than one field is
updated, but not a majority of the fields, will create a sequence of
calls to `erlang:setelement(Index, Value, Tuple)` where Tuple in the
first call is the original record tuple, and in the subsequent calls
Tuple is the result of the previous call. Furthermore, all Index
values are constant positive integers, and the first call to
`setelement` will have the greatest index. Thus none of the following
calls actually needs to test at run-time whether Tuple is a tuple, nor
whether the index is within the bounds of the tuple.
Since OTP R7, the `sys_core_dsetel` pass, run as the very last Core
Erlang pass, has optimized this sequence of `setelement` calls to use
a special destructive version of `setelement` (called
`set_tuple_element`) for all but the very first `setelement` in the
sequence.
It turns out that the presence of the `set_tuple_element` in SSA code
is awkward and can prevent or complicate type analysis and aggressive
optimizations.
Therefore, this commit removes the `sys_core_dsetel` pass and
reimplements it for SSA code. The optimization will be done in the
`beam_ssa_pre_codegen` pass (that is, just before code generation and
after running all other SSA code optimization passes).
In most cases, the resulting BEAM code is identical to previous
code. For a few modules, the BEAM code is actually slightly better,
with smaller stack frames.
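As a hypothetical illustration (the record and field names below are
invented), an update of two fields out of five:
    -record(r, {a,b,c,d,e}).

    update(R) ->
        R#r{d = 1, e = 2}.
is expanded to roughly the following nested calls (omitting the record
test that raises badrecord on a mismatch):
    update(R) ->
        setelement(5, setelement(6, R, 2), 1).
Only the first call (index 6, applied to the original tuple) needs the
run-time tests; the pass turns the remaining calls, here the outer one
with index 5, into the destructive `set_tuple_element`.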
|
|
The function `get_call_order_po/2` did not always include all
functions in a module. The method it used was to first find
all leaf functions (functions not calling any other function),
and then find all other functions that called any of the leaf
functions either directly or indirectly.
Functions that did not call a leaf function (directly or indirectly)
would not be included in the list of functions to optimize.
Reimplement `get_call_order_po/2` to use the standard algorithm
for constructing a list of nodes in reverse postorder and then
reverse that list.
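A minimal sketch of that approach, assuming the call graph is a map
from each function to the list of functions it calls (the names below
are invented and do not match the actual code):
    %% Returns all functions reachable from Roots in reverse postorder.
    rpo(Roots, CallGraph) ->
        {Order, _Seen} = rpo(Roots, CallGraph, sets:new(), []),
        Order.

    rpo([F|Fs], G, Seen0, Acc0) ->
        case sets:is_element(F, Seen0) of
            true ->
                rpo(Fs, G, Seen0, Acc0);
            false ->
                Seen1 = sets:add_element(F, Seen0),
                {Acc1, Seen2} = rpo(maps:get(F, G, []), G, Seen1, Acc0),
                rpo(Fs, G, Seen2, [F|Acc1])
        end;
    rpo([], _G, Seen, Acc) ->
        {Acc, Seen}.
Every function reachable from the root functions is visited exactly
once, so no function can be left out; reversing the returned list then
gives the final call order as described above.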
|
|
Optimize ssa_opt_sink for huge functions
|
|
The ssa_opt_sink optimization of beam_ssa_opt could get very slow
for certain huge functions. 9a190cae9bd7 partly addressed this issue
by terminating the optimization early if there happened to be no
get_tuple_element instructions at all in the function.
This commit addresses the issue more directly by making the dominator
calculation in beam_ssa:dominators/1 more efficient. The same
algorithm as before is used, but it is implemented in a more efficient
way based on the ideas in "A Simple, Fast Dominance Algorithm"
(http://www.hipersoft.rice.edu/grads/publications/dom14.pdf).
As well as being more efficient, the new implementation also gives
an explicit representation of the dominator tree, which makes it
possible to simplify and optimize the ssa_opt_sink optimization.
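The key step in that algorithm is intersecting two dominator-tree
paths using reverse postorder numbers. A minimal sketch, with invented
names that do not match the actual beam_ssa code:
    %% Idom maps a block label to its current immediate dominator;
    %% Num maps a block label to its reverse postorder number
    %% (the entry block has the lowest number).
    intersect(B, B, _Idom, _Num) ->
        B;
    intersect(B1, B2, Idom, Num) ->
        case maps:get(B1, Num) > maps:get(B2, Num) of
            true ->
                %% B1 is deeper; move it up to its immediate dominator.
                intersect(maps:get(B1, Idom), B2, Idom, Num);
            false ->
                intersect(B1, maps:get(B2, Idom), Idom, Num)
        end.
The immediate dominator map built this way is itself an explicit
representation of the dominator tree.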
|
|
|
|
Two helper functions in beam_ssa_opt and beam_ssa_dead
are body-recursive for no good reason. Rewrite them to be
tail-recursive.
While at it, add some clarifying comments to the functions.
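To illustrate the difference in general (this is not one of the actual
helpers), a body-recursive list sum:
    sum([H|T]) -> H + sum(T);
    sum([]) -> 0.
uses one stack frame per element, while the tail-recursive version
with an accumulator runs in constant stack space:
    sum(L) -> sum(L, 0).

    sum([H|T], Acc) -> sum(T, Acc + H);
    sum([], Acc) -> Acc.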
|
|
Avoiding calls usually reduces the size of the stack frame and reduces
register shuffling.
|
|
Save a little time by using gb_sets:delete/2 instead of
gb_sets:delete_any/2 when the key is known to be in the set.
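Illustrative only (not the code in the commit):
    S0 = gb_sets:from_list([1,2,3]),
    %% delete/2 requires the element to be present (and fails if it
    %% is not), so it can skip the handling of a missing element
    %% that delete_any/2 has to do.
    S1 = gb_sets:delete(2, S0).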
|
|
Compilation will be much faster if there are many blocks, but no
get_tuple_element instructions.
Reported-by: Michał Muskała
|
|
Speed up the compiler when compiling the idna package
|
|
It is never possible to merge a block ending in a switch with the
next block, so it is not necessary to call `beam_ssa:successors/1` in
that case. Avoiding the call slightly improves compilation speeds
for switches with many branches.
|
|
If compilation failed, the name of the current function *and*
all previously compiled functions would be printed because
phase/4 was not tail-recursive.
https://bugs.erlang.org/browse/ERL-840
|
|
Try to eliminate short blocks that start with a phi node
and end in a return. For example:
Result = phi { Res1, 4 }, { literal true, 5 }
Ret = put_tuple literal ok, Result
ret Ret
The code in this block can be inserted at the end of blocks 4 and
5. Thus, the following code can be inserted into block 4:
Ret:1 = put_tuple literal ok, Res1
ret Ret:1
And the following code into block 5:
Ret:2 = put_tuple literal ok, literal true
ret Ret:2
Which can be further simplified to:
ret literal {ok, true}
This transformation may lead to more code improvements, for example:
* Stack trimming
* Fewer test_heap instructions
* Smaller stack frames
|
|
This commit lets the type optimization pass work across functions,
tracking return and argument types to eliminate redundant tests.
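A hypothetical example of the kind of test that can now be removed:
    pos(X) when is_integer(X), X >= 0 -> X.

    f(Y) ->
        Z = pos(Y),
        case is_integer(Z) of
            true -> Z + 1;
            false -> error
        end.
With return types tracked across functions, pos/1 is known to return
only a non-negative integer, so the is_integer/1 test in f/1 can be
recognized as always true and the case eliminated.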
|
|
This serves as a base for the upcoming module-level type
optimization, but may come in handy for other passes like
beam_ssa_funs and beam_ssa_bsm that have their own ad-hoc
implementations.
|
|
If the match instruction was already marked as a skip, we'd ruin
its argument list.
|
|
Running ssa_opt_tuple_size early will give more opportunities
for optimizations.
|
|
The code will be significantly improved by running the
type optimization pass twice.
The ssa_opt_misc pass can be eliminated because everything it does
is also done by the type optimization pass.
|
|
The beam_ssa_type pass would leave redundant 'succeeded' instructions,
and depend on the live optimization pass to remove them.
Update beam_ssa_type to remove redundant 'succeeded' instructions.
This will not improve the generated code, but will improve compilation
times since it eliminates instructions and variables.
|
|
Do the optimizations of bs_put* instructions in beam_ssa_opt
and remove the beam_bs pass. This can lead to a slight improvement
of compilation times.
|
|
The optimization can be applied in a few more places if done
before ssa_opt_bsm_shortcut (for example, in unicode:cbv/2).
|
|
Share code for semantically equivalent blocks referred to by `br`
and `switch` instructions.
A similar optimization is done in `beam_jump`, but doing it here as
well is beneficial as it may enable other optimizations. Also, if
there are many semantically equivalent clauses, this optimization can
substantially decrease compilation times.
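For example (hypothetical code), in a function where several clauses
have identical bodies:
    classify(a) -> {ok, small};
    classify(b) -> {ok, small};
    classify(c) -> {ok, small};
    classify(d) -> {ok, large}.
the blocks constructed for the first three clauses are semantically
equivalent, so after sharing, the switch on the argument only needs
two distinct target blocks.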
|
|
The beam_ssa_bsm pass welds chained matches together, but the match
expressions themselves are unchanged and if there's a tail
alignment check it will be done each time. This subpass figures out
the checks we've already done and deletes the redundant ones.
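For example (hypothetical code), a loop that matches one byte at a
time:
    count(<<_, Rest/binary>>, N) ->
        count(Rest, N + 1);
    count(<<>>, N) ->
        N.
Matching `Rest/binary` requires the remaining data to be a whole
number of bytes, which is checked at run-time in every iteration. Once
the matches have been welded into one match context, that check only
has to succeed once, and this subpass can delete the repeated checks
in the later iterations.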
|
|
We chose to refer to variables through their var_name() because we
anticipated the need to annotate them, but it turned out we didn't
really need that, and many things become a lot cleaner if the
entire #b_var{} is used to represent variables.
|
|
Add beam_ssa_dead to perform the main optimizations done
by beam_dead:
* Shortcut branches that jump to another block with a branch.
If it can be seen that the second branch will always branch
to a specific block, replace the target of the first branch.
* Combine nested sequences of '=:=' tests and switch
instructions operating on the same variable into a single
switch (see the example below).
Diffing the compiler output, it seems that beam_ssa_dead finds
many more opportunities for optimizations than beam_dead,
although it does not find all opportunities that beam_dead does.
In total, beam_ssa_dead is such an improvement over beam_dead that
there is no reason to keep beam_dead as well as beam_ssa_dead.
Note that beam_ssa_dead does not attempt to optimize away
redundant bs_context_binary instructions, because that instruction
will be superseded by new instructions in the near future.
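As an example of the second optimization (hypothetical code), a chain
of '=:=' tests on the same variable:
    f(X) ->
        if
            X =:= a -> 1;
            X =:= b -> 2;
            X =:= c -> 3;
            true -> 0
        end.
compiles to a sequence of nested two-way branches on X, which
beam_ssa_dead can combine into a single switch with one branch per
atom and the last clause as the failure label.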
|
|
The floating point optimization relies heavily on the block
order in the linearized representation. A new optimization
could easily break it, for example by causing no
`fcheckerror` instructions to be emitted.
Rewrite the optimization to avoid dependencies on the linear
block order.
|
|
Not doing CSE for tuple_size/1 seems to generate slightly better
code in most cases.
|
|
|
|
Phi nodes with only literals are fairly common, so it's worthwhile
to optimize this case.
|
|
This optimization working on the SSA format will replace
the similar optimization in beam_dead. See the comment
for an explanation of what the new optimization does.
|
|
When the argument for a #b_switch{} comes from a phi node
with only literal values, the switch list can be pruned to
contain only the possible values. It may also be possible
to eliminate the failure label.
Also simplify a switch with a single-value list, or a switch that
can be replaced with an is_boolean test.
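For example (hypothetical code):
    f(Bool) ->
        X = case Bool of
                true -> a;
                false -> b
            end,
        case X of
            a -> 1;
            b -> 2;
            c -> 3
        end.
The phi node defining X has only the literals a and b, so the c
branch of the second switch can never be taken and can be pruned;
since the remaining values then cover everything X can be, the
failure label can be eliminated as well.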
|
|
Nested cases can lead to code such as this:
10:
_1 = phi {literal value1, label 8}, {Var, label 9}
br 11
11:
_2 = phi {_1, label 10}, {literal false, label 3}
The phi nodes can be coalesced like this:
11:
_2 = phi {literal value1, label 8}, {Var, label 9},
{literal false, label 3}
Coalescing can help other optimizations, and can in some cases reduce
register shuffling (if the phi variables for two phi nodes happen to
be allocated to different registers).
|
|
Add more instructions to the list of instructions that can be safely
removed if their values are not used. This is necessary for
correctness when doing more aggressive optimizations. Without this
change, the 'succeeded' instruction could be optimized away leaving
just the instruction followed by an unconditional branch, which the
beam_ssa_codegen does not know how to handle. Here is an example:
_3 = bs_start_match _1
br label 13
By adding bs_start_match to the list, the bs_start_match instruction
will be removed too. (If the result of bs_start_match is actually
used, the succeeded instruction would not be removed.)
While we are at it, rename the misnamed function is_pure/1 to
no_side_effect/1 and move it to beam_ssa. is_pure/1 is a bad name
because bif:get has no side effect, but is not pure.
|
|
|
|
v3_codegen is replaced by three new passes:
* beam_kernel_to_ssa which translates the Kernel Erlang format
to a new SSA-based intermediate format.
* beam_ssa_pre_codegen which prepares the SSA-based format
for code generation, including register allocation. Registers
are allocated using the linear scan algorithm.
* beam_ssa_codegen which generates BEAM assembly code from the
SSA-based format.
It is easier and more effective to optimize the SSA-based format before X
and Y registers have been assigned. The current optimization passes
constantly have to make sure no "holes" in the X register assignments
are created (that is, that no X register becomes undefined that an
allocation instruction depends on).
This commit also introduces the following optimizations:
* Replacement of tuple matching of records with the is_tagged_tuple
instruction; see the example after this list. (Replacing beam_record.)
* Sinking of get_tuple_element instructions to just before the first
use of the extracted values. As well as potentially avoiding
extracting tuple elements when they are not actually used on all
execution paths, this optimization could also reduce the number
of values that will need to be stored in Y registers. (Similar to
beam_reorder, but more effective.)
* Live optimizations, removing the definition of a variable that is
not subsequently used (provided that the operation has no side
effects), as well as strength reduction of binary matching by
replacing the extraction of a value from a binary with a skip
instruction. (Used
to be done by beam_block, beam_utils, and v3_codegen.)
* Removal of redundant bs_restore2 instructions. (Formerly done
by beam_bs.)
* Type-based optimizations across branches. More effective than
the old beam_type pass that only did type-based optimizations in
basic blocks.
* Optimization of floating point instructions. (Formerly done
by beam_type.)
* Optimization of receive statements to introduce recv_mark and
recv_set instructions. More effective with far fewer restrictions
on what instructions are allowed between creating the reference
and entering the receive statement.
* Common subexpression elimination. (Formerly done by beam_block.)
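As an example of the first of the optimizations listed above
(hypothetical record and code):
    -record(point, {x, y}).

    x_coord(#point{x = X}) -> X.
Matching the record in the clause head requires testing that the
argument is a tuple, that its arity is 3, and that its first element
is the atom point; the is_tagged_tuple instruction performs all of
that in a single test.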
|