Replace beam_dead with beam_ssa_dead
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
Fix include-path regression caused by dd0a39c
|
|
* maint-20:
Updated OTP version
Update release notes
Update version numbers
Fix include-path regression caused by dd0a39c
|
|
Most of the optimizations in beam_dead have been superseded
by the optimizations in beam_ssa_dead.
The forward/1 pass of beam_dead has been moved to beam_jump.
The beam_split pass splits blocks that contain instructions with
non-zero labels. Because there are no optimizations left that optimize
instructions within blocks, beam_block never needs to put such
instructions into blocks in the first place. beam_split also moved
'move' instructions out of blocks to help beam_dead. That is no longer
necessary since beam_dead no longer exists.
|
|
Add beam_ssa_dead to perform the main optimizations done
by beam_dead:
* Shortcut branches that jump to another block with a branch.
If it can be seen that the second branch will always branch
to a specific block, replace the target of the first branch.
* Combine nested sequences of '=:=' tests and switch
instructions operating on the same variable into a single switch.
Diffing the compiler output, it seems that beam_ssa_dead finds
many more opportunities for optimizations than beam_dead,
although it does not find all opportunities that beam_dead does.
In total, beam_ssa_dead is such an improvement over beam_dead that
there is no reason to keep beam_dead alongside beam_ssa_dead.
Note that beam_ssa_dead does not attempt to optimize away
redundant bs_context_to_binary instructions, because that instruction
will be superseded by new instructions in the near future.
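As an illustration of the second optimization, consider a small
hypothetical function (the module and function names are made up
for this example; it is a sketch, not a case from the test suite):

    -module(example).
    -export([classify/1]).

    %% The outer '=:=' test and the inner case both operate on X,
    %% so beam_ssa_dead can combine them into a single switch over
    %% the values a, b, and c.
    classify(X) ->
        case X =:= a of
            true -> one;
            false ->
                case X of
                    b -> two;
                    c -> three;
                    _ -> other
                end
        end.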
|
|
Omitting `kill` instructions before BIFs that throw exceptions
will reduce the code size. This optimization supersedes the
corresponding optimization in beam_dead.
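A minimal sketch of the idea, with an assumed BEAM sequence (the
exact instructions are illustrative, not taken from real compiler
output):

    {kill,{y,0}}.                            %% can be omitted, since
    {call_ext,1,{extfunc,erlang,error,1}}.   %% this call always throws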
|
|
In beam_ssa_type, do substitutions similar to what ssa_opt_misc does
to get rid of variables that evaluate to constant values. That
somewhat simplifies the code of beam_ssa_type, and could improve
performance of the compiler since instructions and variables are
eliminated, reducing the amount of work for later passes.
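A hypothetical SSA sketch of such a substitution (syntax
approximated from the other examples in this log): if _2 is known
to evaluate to the literal 42, its uses are substituted and its
defining instruction disappears:

    _2 = bif:'+' literal 40, literal 2
    ret _2

becomes:

    ret literal 42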
|
|
Remove the following clause from the `fun` clauses in arith_op_types/2
because it cannot possibly match:
(_, any) -> number;
Here is why it cannot match:
The second argument is the accumulator for lists:foldl/3.
Its initial value is `unknown`.
None of the clauses will update the accumulator to `any`, including
the clause that matches `any` in the first argument -- it will set the
accumulator to `number`.
Thus, the accumulator (second argument) can never be `any` and
the clause can never match. QED.
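For concreteness, a minimal sketch of the shape of that foldl
(simplified and assumed; join/2 and ArgTypes are hypothetical
names, not the actual OTP source):

    lists:foldl(fun(any, _Acc) -> number;    %% sets the acc to number
                   (T, unknown) -> T;        %% initial accumulator
                   (T, Acc) -> join(T, Acc)  %% join/2 is hypothetical
                end, unknown, ArgTypes)

The first clause catches `any` before the others can see it, so no
clause ever produces `any` as the accumulator.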
|
|
The floating point optimization relies heavily on the block
order in the linearized representation. A new optimization
could easily break it, for example by preventing any
`fcheckerror` instructions from being emitted.
Rewrite the optimization to avoid dependencies on the linear
block order.
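For context, a small assumed example of code that the pass turns
into float instructions; with both arguments known to be floats,
the addition can be compiled to fmove/fadd followed by an
`fcheckerror` instruction:

    add(A, B) when is_float(A), is_float(B) ->
        A + B.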
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
erts: Fix "Prevent inconsistent node lists" fix
Fix include-path regression caused by dd0a39c
Restore default SIGTERM behaviour for port programs
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
erts: Fix "Prevent inconsistent node lists" fix
Fix include-path regression caused by dd0a39c
Restore default SIGTERM behaviour for port programs
|
|
Not doing CSE for tuple_size/1 seems to generate slightly better
code in most cases.
|
|
Phi nodes with only literals are fairly common, so it's worthwhile
to optimize this case.
|
|
This optimization working on the SSA format will replace
the similar optimization in beam_dead. See the comment
for an explanation of what the new optimization does.
|
|
When the argument for a #b_switch{} comes from a phi node
with only literal values, the switch list can be pruned to
contain only the possible values. It may also be possible
to eliminate the failure label.
Also simplify a switch with a single-value list, or a switch
that can be replaced with an is_boolean test.
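A hypothetical sketch of the first case (SSA syntax approximated
from the other examples in this log):

    _1 = phi {literal true, label 4}, {literal false, label 5}
    switch _1, label 99, [{literal true, label 10},
                          {literal false, label 11},
                          {literal undefined, label 12}]

Since _1 can only be `true` or `false`, the `undefined` entry and
the failure label 99 can be removed, leaving a two-way branch:

    br _1, label 10, label 11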
|
|
Nested cases can lead to code such as this:
10:
_1 = phi {literal value1, label 8}, {Var, label 9}
br 11
11:
_2 = phi {_1, label 10}, {literal false, label 3}
The phi nodes can be coalesced like this:
11:
_2 = phi {literal value1, label 8}, {Var, label 9},
{literal false, label 3}
Coalescing can help other optimizations, and can in some cases reduce
register shuffling (if the phi variables for two phi nodes happen to
be allocated to different registers).
|
|
Extend the list of instructions that can be safely removed
if their values are not used. This is necessary for
correctness when doing more aggressive optimizations. Without this
change, the 'succeeded' instruction could be optimized away leaving
just the instruction followed by an unconditional branch, which
beam_ssa_codegen does not know how to handle. Here is an example:
_3 = bs_start_match _1
br label 13
By adding bs_start_match to the list, the bs_start_match instruction
will be removed too. (If the result of bs_start_match is actually
used, the succeeded instruction would not be removed.)
While we are at it, rename the misnamed function is_pure/1 to
no_side_effect/1 and move it to beam_ssa. is_pure/1 is a bad name
because bif:get has no side effects but is not pure.
|
|
A select_val instruction that tests whether a register is a boolean
like this:
{select_val,Reg,{f,Fail},{list,[{atom,true},Lbl,{atom,false},Lbl]}}.
can be replaced with an is_boolean test:
{test,is_boolean,{f,Fail},[Reg]}.
{jump,{f,Lbl}}.
This optimization is currently done in beam_dead. However, if done in
beam_peep, it can catch more opportunities to do the optimization,
because after beam_jump has been run, labels that were different have
been coalesced.
|
|
Those optimizations are unsafe if beam_dead has been run before.
|
|
This functionality will soon be needed.
|
|
The 'move' instruction can be eliminated in code such as:
{test,is_eq_exact,{f,42},[{x,0},{atom,value}]}.
{move,{atom,value},{x,0}}.
Move that optimization from beam_dead to beam_a. The optimization
will be simpler because the 'move' instruction has not yet
been moved into a block. Getting rid of 'move' earlier will
also save work for later passes.
Also move from beam_dead to beam_a the optimization that
eliminates instructions such as this one:
{test,is_eq_exact,{f,42},[{x,0},{x,0}]}.
|
|
It is faster to use cerl_sets instead of gb_sets to keep track of
seen blocks.
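For reference, cerl_sets mirrors the relevant part of the sets
API, so the swap is mechanical (a usage sketch with illustrative
variable names):

    Seen0 = cerl_sets:new(),
    Seen = case cerl_sets:is_element(L, Seen0) of
               true -> Seen0;
               false -> cerl_sets:add_element(L, Seen0)
           end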
|
|
Add trim_unreachable/1 to remove unreachable blocks
and adjust phi nodes.
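Assumed usage (a sketch; the function lives alongside the other
beam_ssa utilities):

    %% Remove blocks that cannot be reached from block 0 and
    %% rewrite phi nodes that refer to removed predecessors.
    Blocks = beam_ssa:trim_unreachable(Blocks0)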
|
|
Since beam_ssa:linearize/1 may remove blocks that are unreachable,
adjust phi nodes to make sure that they don't refer to discarded
blocks or to blocks that no longer branch to the phi node in
question.
|
|
|
Add normalize/1 to simplify optimizations.
|
|
When optimizations get more powerful, beam_validator
must keep up.
|
|
|
When creating a phi node for the common exit block of a receive,
the code failed to take into account that there could be more
than one predecessor to the exit block for each remove_message.
Rename exit_predessor/3 to exit_predessors/3 and make it return a
list of the predecessors.
|
|
Include paths don't actually affect code generation in any way, but
it's reasonable for a build tool like rebar3 to recompile when the
include paths change. This commit restores the old behavior without
the +deterministic flag.
|
|
Include paths don't actually affect code generation in any way, but
it's reasonable for a build tool like rebar3 to recompile when the
include paths change. This commit restores the old behavior without
the +deterministic flag.
|
|
Sometimes when building a tuple, there is no way to avoid an
extra `move` instruction. Consider this code:
make_tuple(A) -> {ok,A}.
The corresponding BEAM code looks like this:
{test_heap,3,1}.
{put_tuple,2,{x,1}}.
{put,{atom,ok}}.
{put,{x,0}}.
{move,{x,1},{x,0}}.
return.
To avoid overwriting the source register `{x,0}`, a `move`
instruction is necessary.
The problem doesn't exist when building a list:
%% build_list(A) -> [A].
{test_heap,2,1}.
{put_list,{x,0},nil,{x,0}}.
return.
Introduce a new `put_tuple2` instruction that builds a tuple in a
single instruction, so that the `move` instruction can be eliminated:
%% make_tuple(A) -> {ok,A}.
{test_heap,3,1}.
{put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
return.
Note that the BEAM loader already combines `put_tuple` and `put`
instructions into an internal instruction similar to `put_tuple2`.
Therefore the introduction of the new instruction will not speed up
execution of tuple building itself, but it will be less work for
the loader to load the new instruction.
|
|
* hasse/dialyzer/improve_guards/OTP-15268/ERL-680:
dialyzer: Improve handling of complex guards
|
|
* uabboli/hasse/dialyzer/improve_guards/OTP-15268/ERL-680:
dialyzer: Improve handling of complex guards
|
|
See also https://bugs.erlang.org/browse/ERL-680.
The right-associative short circuit expressions 'andalso' and 'orelse'
are expanded by the compiler (see v3_core) into 'case' expressions. If
parentheses are used to enforce left associativity, variables are
introduced, and the time needed by Dialyzer increases exponentially.
Rather than trying to fix Dialyzer itself, v3_core now rewrites
repeated use of 'andalso' ('orelse') into right-associative
expressions before creating the 'case' expressions.
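A worked example of the rewrite (the expression shapes are
assumed for illustration):

    %% Left-associated by explicit parentheses:
    ((A andalso B) andalso C) andalso D
    %% is rewritten by v3_core to the right-associative form:
    A andalso (B andalso (C andalso D))
    %% before the expansion into nested 'case' expressions.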
|
|
* bjorn/compiler/ssa:
Travis CI: Run the SSA linter in the Linux64SmokeTest build
Remove retired compiler passes
Introduce a new SSA-based intermediate format
hipe_beam_to_icode: Correct translation of get_map_elements
beam_dead: Remove shortcut of binary matching instruction
beam_bs: Remove optimizations that are easier done on SSA format
Don't run unsafe compiler passes
Simplify optimizations by introducing is_nil late
beam_utils: Make is_tagged_tuple a pure test
beam_except: Enhance recognition of function_clause exceptions
beam_validator: Infer the types of copies in a smarter way
beam_validator: Improve merge of cons and literal list
beam_validator: Strengthen validation of func_info
beam_validator: Allow get_tuple_element before dsetelement
beam_validator: Don't transfer state to labels that can't be reached
beam_validator: Improve type analysis for tuples
beam_validator: Be more careful when updating try/catch state
beam_trim: Handle an empty list of instructions
v3_core: Number argument variables in ascending order
Teach binary instructions to use Y registers as destination
OTP-14894
|
|
v3_codegen is replaced by three new passes:
* beam_kernel_to_ssa which translates the Kernel Erlang format
to a new SSA-based intermediate format.
* beam_ssa_pre_codegen which prepares the SSA-based format
for code generation, including register allocation. Registers
are allocated using the linear scan algorithm.
* beam_ssa_codegen which generates BEAM assembly code from the
SSA-based format.
It is easier and more effective to optimize the SSA-based format before X
and Y registers have been assigned. The current optimization passes
constantly have to make sure no "holes" in the X register assignments
are created (that is, that no X register becomes undefined that an
allocation instruction depends on).
This commit also introduces the following optimizations:
* Replacement of tuple matching of records with the is_tagged_tuple
instruction; a sketch is shown after this list. (Replacing beam_record.)
* Sinking of get_tuple_element instructions to just before the first
use of the extracted values. As well as potentially avoiding
extracting tuple elements when they are not actually used on all
execution paths, this optimization could also reduce the number of
values that will need to be stored in Y registers. (Similar to
beam_reorder, but more effective.)
* Live optimizations, removing the definition of a variable that is
not subsequently used (provided that the operation has no side
effects), as well as strength reduction of binary matching by replacing
the extraction of a value from a binary with a skip instruction. (Used
to be done by beam_block, beam_utils, and v3_codegen.)
* Removal of redundant bs_restore2 instructions. (Formerly done
by beam_bs.)
* Type-based optimizations across branches. More effective than
the old beam_type pass that only did type-based optimizations in
basic blocks.
* Optimization of floating point instructions. (Formerly done
by beam_type.)
* Optimization of receive statements to introduce recv_mark and
recv_set instructions. More effective with far fewer restrictions
on what instructions are allowed between creating the reference
and entering the receive statement.
* Common subexpression elimination. (Formerly done by beam_block.)
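As a sketch of the first new optimization in the list above (the
record name, arity, and fail label are hypothetical):

    %% Matching f(#rec{}) -> ok. is now tested with a single
    %% instruction instead of separate is_tuple, test_arity, and
    %% element-comparison instructions:
    {test,is_tagged_tuple,{f,Fail},[{x,0},3,{atom,rec}]}.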
|