Some lines in beam_peep were no longer covered when the sharing optimization
was added to beam_ssa_opt. Also remove some code from beam_peep that it no
longer seems possible to cover.
|
A select_val instruction that tests whether a register contains a boolean
like this:
{select_val,Reg,{f,Fail},{list,[{atom,true},Lbl,{atom,false},Lbl]}}.
can be replaced with an is_boolean test:
{test,is_boolean,{f,Fail},[Reg]}.
{jump,{f,Lbl}}.
This optimization is currently done in beam_dead. However, doing it in
beam_peep instead catches more opportunities for the optimization,
because by the time beam_peep runs, beam_jump has already coalesced
labels that used to be different.
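As a hedged illustration (the function and atoms below are made up), source
code of roughly this shape can end up as such a select_val; once beam_jump has
shared the two identical clause bodies, both boolean entries point at the same
label and beam_peep can apply the rewrite:
f(X) ->
    case X of
        true -> yes;   %% identical body...
        false -> yes;  %% ...shared by beam_jump
        _ -> no
    end.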
|
v3_codegen is replaced by three new passes:
* beam_kernel_to_ssa which translates the Kernel Erlang format
to a new SSA-based intermediate format.
* beam_ssa_pre_codegen which prepares the SSA-based format
for code generation, including register allocation. Registers
are allocated using the linear scan algorithm.
* beam_ssa_codegen which generates BEAM assembly code from the
SSA-based format.
It is easier and more effective to optimize the SSA-based format before X
and Y registers have been assigned. The current optimization passes
constantly have to make sure no "holes" in the X register assignments
are created (that is, that no X register becomes undefined that an
allocation instruction depends on).
This commit also introduces the following optimizations:
* Replacing tuple matching of records with the is_tagged_tuple
instruction. (Replaces beam_record.)
* Sinking of get_tuple_element instructions to just before the first
use of the extracted values. Besides potentially avoiding the
extraction of tuple elements that are not actually used on all
execution paths, this optimization can also reduce the number of
values that need to be stored in Y registers. (Similar to
beam_reorder, but more effective. See the sketch after this list.)
* Live optimizations, removing the definition of a variable that is
not subsequently used (provided that the operation has no side
effects), as well as strength reduction of binary matching by replacing
the extraction of a value from a binary with a skip instruction. (Used
to be done by beam_block, beam_utils, and v3_codegen.)
* Removal of redundant bs_restore2 instructions. (Formerly done
by beam_bs.)
* Type-based optimizations across branches. More effective than
the old beam_type pass that only did type-based optimizations in
basic blocks.
* Optimization of floating point instructions. (Formerly done
by beam_type.)
* Optimization of receive statements to introduce recv_mark and
recv_set instructions. More effective with far fewer restrictions
on what instructions are allowed between creating the reference
and entering the receive statement.
* Common subexpression elimination. (Formerly done by beam_block.)
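To illustrate the get_tuple_element sinking mentioned above, here is a hedged
sketch (f/2 and g/1 are made-up names):
f({X, Y}, Flag) ->
    case g(Flag) of
        true  -> X + Y;
        false -> 0
    end.
Without sinking, X and Y are extracted in the function head and both have to
be saved in Y registers across the call to g/1. With the extraction sunk to
just before the addition, only the tuple itself needs to survive the call, and
on the false path no elements are extracted at all.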
|
This complements the `map_get/2` guard BIF introduced in #1784.
Rationale.
`map_get/2` allows accessing map fields in guards, but it might be
problematic in more complex guard expressions, for example:
foo(X) when map_get(a, X) =:= 1 or is_list(X) -> ...
The `is_list/1` part of the guard could never succeed since the
`map_get/2` guard would fail the whole guard expression. In this
situation, this could be solved by using `;` instead of `or` to separate
the guards, but it is not possible in every case.
To solve this situation, this PR proposes an `is_map_key/2` guard BIF that
makes it possible to check whether a map has a key inside a guard before
trying to access that key. Combined with `is_map/1`, this allows constructing
a purely boolean guard expression that tests the value of a key in a map.
Implementation.
Given the use case motivating the introduction of this function, the PR
contains compiler optimisations that produce optimal code for the
following guard expression:
foo(X) when is_map(X) and is_map_key(a, X) and map_get(a, X) =:= 1 -> ok;
foo(_) -> error.
Given that all three tests share the failure label, the `is_map_key/2` and
`is_map/1` tests are optimised away.
As with `map_get/2`, the `is_map_key/2` BIF is allowed in match specs.
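As a hedged example (not taken from the PR itself), the problematic guard
from the rationale above could now be written as a purely boolean expression:
foo(X) when (is_map(X) andalso is_map_key(a, X) andalso map_get(a, X) =:= 1)
            orelse is_list(X) -> ...
If X is not a map, `is_map/1` yields false, the andalso chain short-circuits,
and orelse goes on to evaluate `is_list/1` instead of failing the whole guard.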
|
When cleaning selects, it might happen that we're left with only one pair. In
such a case, convert the select to a regular test + jump.
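A hedged sketch of what such a conversion might look like, in the notation
used elsewhere in this log (Reg, Fail, and Lbl are placeholders):
{select_val,Reg,{f,Fail},{list,[{atom,a},{f,Lbl}]}}.
could become:
{test,is_eq_exact,{f,Fail},[Reg,{atom,a}]}.
{jump,{f,Lbl}}.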
|
* bjorn/compiler/misc-opt:
v3_kernel: Construct literal lists properly
Use the register map in %live in beam_utils:is_killed_block/2
Teach beam_utils to check liveness for put_map instructions
beam_peep: Help out beam_jump
|
beam_jump fails to optimize the following:
jump 2
label 1
label 2
Since this situation is rare, instead of complicating beam_jump,
add the optimization to beam_peep. There it will always succeed, since
adjacent labels have already been coalesced by the time beam_peep runs.
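A hedged sketch of why the peephole version always succeeds: once labels 1
and 2 above have been coalesced into a single label, the code looks like
jump 1
label 1
and a jump to the immediately following label can simply be removed.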
|
There is an optimization in beam_clean that will remove values
having the same label as the failure label in a select_val
instruction. Conceptually, this optimization is in the wrong
module since ideally beam_clean is a mandatory pass that should
not do optimizations. Furthermore, this part of beam_clean is
called three times (from beam_dead, beam_peep, and as a compiler
pass from the 'compile' module), but it only does useful work one of
the times it is called.
Therefore, move this optimization to the beam_peep pass.
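A hedged illustration in the same notation as above (Reg and the labels are
placeholders): a pair whose label is the same as the failure label is
redundant, so
{select_val,Reg,{f,Fail},{list,[{atom,a},{f,Lbl},{atom,b},{f,Fail}]}}.
can be reduced to:
{select_val,Reg,{f,Fail},{list,[{atom,a},{f,Lbl}]}}.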
The same optimization is done in beam_dead, but unfortunately it
misses some opportunities for optimization because the code sharing
optimization in beam_jump (share/1) runs after beam_dead. It would
be more satisfactory to have this optimization only in beam_dead,
but it turned out not to be trivial. If we try to run
beam_jump:share/1 before beam_dead, some optimizations will no
longer work in beam_dead because fallthroughs have been eliminated.
For the moment, the possible solutions to this problem seem to
involve more work and more complicated code than would be justified
by the gain from eliminating the duplicated optimization.
|
We can rewrite more instances of select_val to is_boolean because
it is not necessary that a particular label follows the select_val.
|
Eliminate some code bloat.
|
* bg/compiler:
beam_peep: Remove optimization already done by beam_dead
beam_dead: Combine is_eq_exact instructions into select_val instructions
Evaluate is_record/3 at compile-time using type information
Evaluate element/2 at compile-time using type information
erl_expand_records: Replace is_record() with matching
OTP-8668 bg/compiler
The compiler optimizes record operations better.