Age | Commit message | Author |
|
We chose to refer to variables through their var_name() because we
anticipated the need to annotate them, but it turned out we didn't
really need that, and many things become a lot cleaner if the
entire #b_var{} is used to represent variables.
|
|
Add beam_ssa_dead to perform the main optimizations done
by beam_dead:
* Shortcut branches that jump to another block with a branch.
If it can be seen that the second branch will always branch
to a specific block, replace the target of the first branch.
* Combine nested sequences of '=:=' tests and switch
instructions operating on the same variable into a single
switch (see the sketch below).
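As a sketch (the textual SSA format here is simplified, and all
labels and variable names are invented for illustration),
shortcutting rewrites a branch whose target block does nothing but
branch onward:

  %% schematic only, not actual compiler output
  1:
    br _1, label 2, label 3
  2:
    br label 4

Since block 2 always continues to block 4, the first branch can be
retargeted:

  1:
    br _1, label 4, label 3

Similarly, a nested sequence of '=:=' tests on the same variable:

  1:
    _2 = bif:'=:=' Var, literal a
    br _2, label 10, label 2
  2:
    _3 = bif:'=:=' Var, literal b
    br _3, label 11, label 3

can be combined into a single switch:

  1:
    switch Var, label 3, [ {literal a, label 10},
                           {literal b, label 11} ]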
Diffing the compiler output, it seems that beam_ssa_dead finds
many more opportunities for optimizations than beam_dead,
although it does not find all opportunities that beam_dead does.
In total, beam_ssa_dead is such an improvement over beam_dead
that there is no reason to keep beam_dead as well.
Note that beam_ssa_dead does not attempt to optimize away
redundant bs_context_to_binary instructions, because that
instruction will be superseded by new instructions in the near
future.
|
|
The floating point optimization relies heavily on the block
order in the linearized representation. A new optimization
could easily break it, for example by preventing any
`fcheckerror` instructions from being emitted.
Rewrite the optimization to avoid dependencies on the linear
block order.
|
|
Not doing CSE for tuple_size/1 seems to generate slightly better
code in most cases.
|
|
|
|
Phi nodes with only literals are fairly common, so it's worthwhile
to optimize this case.
|
|
This optimization, which works on the SSA format, will replace
the similar optimization in beam_dead. See the comment
for an explanation of what the new optimization does.
|
|
When the argument for a #b_switch{} comes from a phi node
with only literal values, the switch list can be pruned to
contain only the possible values. It may also be possible
to eliminate the failure label.
Also simplify a switch with a single-value list, or a switch
that can be replaced with an is_boolean test.
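As a schematic example (simplified SSA text; labels and values are
invented):

  7:
    %% _1 can only be one of the literals a, b, or c.
    _1 = phi {literal a, label 4}, {literal b, label 5},
         {literal c, label 6}
    switch _1, label 3, [ {literal a, label 10},
                          {literal b, label 11},
                          {literal x, label 12} ]

The {literal x, label 12} clause can never match and can be pruned.
If every value the phi node can produce appears in the switch list,
the failure label is unreachable and can be eliminated as well. When
only a single {value, label} pair remains, the switch can be turned
into a plain '=:=' test, and a switch over exactly 'true' and
'false' can be turned into an is_boolean test.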
|
|
Nested cases can lead to code such as this:
  10:
    _1 = phi {literal value1, label 8}, {Var, label 9}
    br 11

  11:
    _2 = phi {_1, label 10}, {literal false, label 3}
The phi nodes can be coalesced like this:
  11:
    _2 = phi {literal value1, label 8}, {Var, label 9},
             {literal false, label 3}
Coalescing can help other optimizations, and can in some cases reduce
register shuffling (if the phi variables for two phi nodes happen to
be allocated to different registers).
|
|
Add more instructions to the list of instructions that can be safely
removed if their values are not used. This is necessary for
correctness when doing more aggressive optimizations. Without this
change, the 'succeeded' instruction could be optimized away leaving
just the instruction followed by an unconditional branch, which the
beam_ssa_codegen does not know how to handle. Here is an example:
  _3 = bs_start_match _1
  br label 13
By adding bs_start_match to the list, the bs_start_match instruction
will be removed too. (If the result of bs_start_match is actually
used, the succeeded instruction will not be removed.)
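Schematically (simplified SSA text; label 3 is invented for the
failure path), the sequence starts out as:

  %% schematic only, not actual compiler output
  _3 = bs_start_match _1
  _4 = succeeded _3
  br _4, label 13, label 3

If a more aggressive optimization removes the succeeded test (for
example, because the operation is known to always succeed), the
conditional branch becomes unconditional, stranding the unused
bs_start_match as shown in the example above. With bs_start_match on
the list, the whole sequence collapses to:

  br label 13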
While we are at it, rename the misnamed function is_pure/1 to
no_side_effect/1 and move it to beam_ssa. is_pure/1 is a bad name
because bif:get has no side effect, but is not pure.
|
|
|
|
v3_codegen is replaced by three new passes:
* beam_kernel_to_ssa which translates the Kernel Erlang format
to a new SSA-based intermediate format.
* beam_ssa_pre_codegen which prepares the SSA-based format
for code generation, including register allocation. Registers
are allocated using the linear scan algorithm.
* beam_ssa_codegen which generates BEAM assembly code from the
SSA-based format.
It is easier and more effective to optimize the SSA-based format before X
and Y registers have been assigned. The current optimization passes
constantly have to make sure that no "holes" in the X register
assignments are created (that is, that no X register that an
allocation instruction depends on becomes undefined).
This commit also introduces the following optimizations:
* Replacement of tuple matching of records with the is_tagged_tuple
instruction. (Replacing beam_record; see the first sketch after
this list.)
* Sinking of get_tuple_element instructions to just before the first
use of the extracted values. As well as potentially avoiding
extraction of tuple elements that are not actually used on all
execution paths, this optimization can also reduce the number of
values that need to be stored in Y registers. (Similar to
beam_reorder, but more effective; see the second sketch after this
list.)
* Live optimizations, removing the definition of a variable that is
not subsequently used (provided that the operation has no side
effects), as well as strength reduction of binary matching by
replacing the extraction of a value from a binary with a skip
instruction. (Used to be done by beam_block, beam_utils, and
v3_codegen.)
* Removal of redundant bs_restore2 instructions. (Formerly done
by beam_bs.)
* Type-based optimizations across branches. More effective than
the old beam_type pass that only did type-based optimizations in
basic blocks.
* Optimization of floating point instructions. (Formerly done
by beam_type.)
* Optimization of receive statements to introduce recv_mark and
recv_set instructions. More effective with far fewer restrictions
on what instructions are allowed between creating the reference
and entering the receive statement.
* Common subexpression elimination. (Formerly done by beam_block.)
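Two schematic examples (simplified SSA text; the variables, labels,
and record shape are invented for illustration). First, the
is_tagged_tuple rewrite: matching a record #rec{} with two fields
starts out as separate tuple, arity, and tag tests:

  %% schematic only, not actual compiler output
  _1 = bif:is_tuple Rec
  br _1, label 4, label 3
  4:
    _2 = bif:tuple_size Rec
    _3 = bif:'=:=' _2, literal 3
    br _3, label 5, label 3
  5:
    _4 = get_tuple_element Rec, literal 0
    _5 = bif:'=:=' _4, literal rec
    br _5, label 6, label 3

and is collapsed into a single test:

  _1 = is_tagged_tuple Rec, literal 3, literal rec
  br _1, label 6, label 3

Second, sinking of get_tuple_element: an extraction far from its
use,

  1:
    _1 = get_tuple_element Tuple, literal 1
    br Bool, label 2, label 3
  2:
    ... many instructions, possibly including calls ...
    ret _1

is moved to just before the first use:

  1:
    br Bool, label 2, label 3
  2:
    ... many instructions, possibly including calls ...
    _1 = get_tuple_element Tuple, literal 1
    ret _1

If label 3 does not use _1, the element is no longer extracted on
that path, and _1 no longer needs to be preserved in a Y register
across any calls between the old and the new position.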
|