|
In 1f0ae04d374, a complete register map was introduced in the %live
instructions that are added by beam_utils:live_opt/1.
Use the register map to improve beam_utils:is_killed_block/2.
|
|
|
|
|
|
When matching tuples, the pattern matching compiler would generate
code that would fetch all elements of the tuple that will ultimately
be used, *before* testing that (for example) the first element is the
correct record tag. For example:
is_tuple Fail {x,0}
test_arity Fail {x,0} 3
get_tuple_element {x,0} 0 {x,1}
get_tuple_element {x,0} 1 {x,2}
get_tuple_element {x,0} 2 {x,3}
is_eq_exact Fail {x,1} some_tag
If {x,2} and {x,3} are not used at label Fail, we can re-arrange the
code like this:
is_tuple Fail {x,0}
test_arity Fail {x,0} 3
get_tuple_element {x,0} 0 {x,1}
is_eq_exact Fail {x,1} some_tag
get_tuple_element {x,0} 1 {x,2}
get_tuple_element {x,0} 2 {x,3}
Doing that may be beneficial in two ways.
If the branch is taken, we have eliminated the execution of two
unnecessary instructions.
Even if the branch is never or rarely taken, there is the possibility
of further optimizations following the is_eq_exact instruction.
For example, imagine that the code looks like this:
get_tuple_element {x,0} 1 {x,2}
get_tuple_element {x,0} 2 {x,3}
move {x,2} {y,0}
move {x,3} {y,1}
Assuming that {x,2} and {x,3} have no further uses in the code
that follows, that can be rewritten to:
get_tuple_element {x,0} 1 {y,0}
get_tuple_element {x,0} 2 {y,1}
When should we perform this optimization?
At the very latest, it must be done before opt_blocks/1 in
beam_block, which eliminates unnecessary moves.
Actually, we want to do the optimization before the blocks have
been established, since moving instructions out of one block
into another is cumbersome.
Therefore, we will do the optimization in a new pass that is
run before beam_block. A new pass will make debugging easier,
and beam_block already has a fair number of sub passes.
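As a purely hypothetical source-level illustration (not taken from the
commit), a record match along these lines gives rise to the kind of
code shown in the first example, where both record fields are fetched
before the tag is checked:
-record(rec, {a,b}).

%% The first clause matches the record; the second is the failure path.
f(#rec{a=A,b=B}) ->
    {A,B};
f(Other) ->
    {error,Other}.
The second clause uses only {x,0}, so {x,2} and {x,3} are unused at
the failure label and the two get_tuple_element instructions can be
moved past the is_eq_exact test.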
|
|
|
|
In 8470558, the drop_labels/1 function was added to beam_utils
as a minor optimization. Since the function is already available,
we might as well use it in index_label/3 too.
|
|
Understanding get_map_elements improves the stack trimming done
by beam_trim.
|
|
beam_utils used to be overly conservative about liveness for
exit instructions such as:
call_ext erlang:exit/1
beam_utils would consider all y registers to be used, to avoid
overwriting a catch or try tag. That does not seem to be a real
risk.
However, we miss opportunities for stack trimming if we consider
all y registers to be used by an exit instruction.
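A hypothetical illustration (g/1 and h/2 are placeholder functions,
not from the commit):
f(A, B) ->
    case g(A) of
        ok -> h(A, B);
        error -> exit({bad_input,A})
    end.
B must be saved in a y register across the call to g/1, but on the
error path it is never read again; not treating call_ext erlang:exit/1
as using every y register exposes that fact to the stack trimming.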
|
|
The execution time for beam_utils:index_labels_1/2 is among
the longest in the beam_bool, beam_bsm, beam_receive, and
beam_trim compiler passes. Therefore it is worthwhile to do
the minor optimization of replacing a call to lists:dropwhile/2
with a special-purpose drop_labels function.
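A minimal sketch of such a function (the actual code in beam_utils may
differ in details):
%% Skip the labels at the head of the instruction list.
drop_labels([{label,_}|Is]) -> drop_labels(Is);
drop_labels(Is) -> Is.
Compared with calling lists:dropwhile/2 with a fun, this avoids one
fun application per skipped instruction.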
|
|
As a preparation for fixing a bug, introduce a complete register
map in the '%live' annotations.
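For illustration only (the exact representation is an implementation
detail and may differ from this sketch): where the old annotation only
recorded how many x registers are live, the extended annotation also
carries a bitmap telling which ones:
{'%live',2}        %% old: two x registers are live
{'%live',2,2#11}   %% sketch: two live registers, namely {x,0} and {x,1}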
|
|
The has_map_fields test was not recognized in is_pure_test/1,
because beam_a has rewritten the {list,_} part of the instruction.
|
|
* egil/fix-maps-compiler-coverage/OTP-12425:
compiler: Rename util function to adhere to name policy
compiler: Remove get_map_elements label check in blocks
compiler: Remove unnecassary guard for get_map_elements
compiler: Remove dead code in beam_flatten
compiler: Increase Maps code coverage
|
|
beam_utils:live_opt() is only invoked on code that has been
blockified by beam_block. Therefore the allocate/3 and
allocate_heap/4 instructions only occur in their transformed
form inside a block.
While we are at it, correct a comment. 'asm' has been replaced
by 'from_asm'.
|
|
* beam_utils:joineven/1 -> beam_utils:join_even/1
* beam_utils:spliteven/1 -> beam_utils:split_even/1
|
|
|
|
Reported-by: Ulf Norell
|
|
* Combine multiple get-value operations into one instruction
* Combine multiple check-key operations into one instruction
|
|
To add a type-testing guard BIF, the following steps are needed:
* The BIF itself is added to bif.tab (note that it should be declared
using "ubif", not "bif"), and its implementation to erl_bif_op.c.
* erl_internal must be modified in three places: the type test must be
recognized as a guard BIF and as a type test, and it must be
auto-imported (see the sketch after this list).
* There must be an instruction that implements the same type test as
the BIF (it will be used in guards). beam_utils:bif_to_test/3 must
be updated to recognize the new guard BIF.
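As a sketch only, using a made-up test is_shape/1: the erl_internal
changes amount to new clauses in guard_bif/2, type_test/2 and bif/2
(the functions that answer whether a BIF is a guard BIF, a type test,
and auto-imported, respectively):
guard_bif(is_shape, 1) -> true;   %% 1. recognized as a guard BIF
type_test(is_shape, 1) -> true;   %% 2. recognized as a type test
bif(is_shape, 1) -> true;         %% 3. auto-imported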
|
|
|
|
|
|
The live optimization in beam_utils:live_opt/4 did not take into
account that the wait/1 instruction *never* falls through to
the next instruction (it has the same effect on the control flow
as the jump/1 instruction).
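For example (a source-level illustration, not taken from the commit),
a plain receive such as:
wait_for_go() ->
    receive
        go -> ok
    end.
compiles to a message loop that ends with a wait instruction: when no
matching message is available the process is suspended, and when a new
message arrives execution continues at the loop label, never at the
instruction that follows wait.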
|
|
beam_utils:is_not_used_at/3 could be very slow for complex guards,
because the cached results for previously encountered labels were
neither used nor updated within blocks.
Reported-by: Magnus Müller
|
|
|
|
Somewhat reduce the code bloat by eliminating special cases.
|
|
Somewhat reduce code bloat.
|
|
Eliminate some code bloat.
|
|
Rewrite the five binary creation instructions to a bs_init
instruction, in order to somewhat reduce code bloat.
|
|
We can remove some code bloat by handling the special instructions
as BIF instructions in the optimization passes. Also note that
bs_utf*_size was not handled by beam_utils:check_liveness/3
(meaning the conservative answer instead of the correct answer
would be returned).
|
|
Seven bs_put_* instructions can be combined into one generic bs_put
instruction to avoid some code bloat. That will also improve some
optimizations (such as beam_trim) that did not handle all bs_put*
variants.
|
|
The usage calculation only looked at the allocation in GC BIFs, not
at the source and destination registers. Also, if there is a failure
label, make sure that we test whether the register can be used there.
|
|
The liveness at the failure label should be ignored, because if
there is an exception, all x registers will be killed.
|
|
The code generator uses conservative liveness information. Therefore
the number of live registers in allocation instructions (such as
test_heap/2) may be too high. Use the actual liveness information
to lower the number of live registers if it is too high.
The main reason we want to do this is to enable more optimizations
that depend on liveness analysis, such as the beam_bool and beam_dead
passes.
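A hypothetical before/after (the numbers are made up for illustration):
{test_heap,6,3}   %% conservative: claims three x registers are live
{test_heap,6,1}   %% after live_opt: only {x,0} is actually live
Any x register at or above the live count may be overwritten, so a
lower count gives the following passes more precise information.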
|
|
Less conservative liveness analysis allows more optimizations
to be applied (such as the ones in beam_bool).
|
|
Liveness for the try_case_end/1 instruction should be calculated
in the same way as for the case_end/1 instruction.
|
|
|
|
In the following code excerpt, the instruction marked below was
incorrectly removed:
.
.
.
{'try',{y,2},{f,TryCaseLabel}}.
{bif,get,{f,0},[{x,0}],{x,0}}.
{move,{x,1},{y,0}}.
{move,{x,3},{y,1}}. <======= Incorrectly removed
{jump,{f,TryEndLabel}}.
{label,TryEndLabel}.
{try_end,{y,2}}.
{deallocate,3}.
return.
{label,TryCaseLabel}.
{try_case,{y,2}}.
.
.
.
beam_utils indicated that {y,1} was not used at TryEndLabel,
which by itself is correct. But it is still not safe to remove
the instruction, because {y,1} might be used at TryCaseLabel
if an exception occurs.
Noticed-by: Eric Merritt
|
|
|
|
|
|
Sometimes the beam_bool pass wants to know whether a
y register will be killed by the code that follows, and will
(effectively) do:
beam_utils:is_killed({y,Y}, Code, L)
When asked to calculate the liveness for a y register,
beam_utils:is_killed/3 will loop forever if the code
includes a receive loop.
Since this rarely occurs, fix the problem in the simplest
and most conservative way.
Reported-by: Christopher Williams
|
|
When gc_bif instructions occurred outside of a block,
beam_utils:check_liveness/3 did not take into account
that the instruction could do a garbage collection, and
could falsely report that an x register would be killed.
That could cause the beam_dead pass to make the code
unsafe by removing the assignment to an x register that
would subsequently be referenced by the garbage collector.
Reported-by: Christopher Williams
|
|
* bg/compiler-cover-and-clean:
v3_life: Remove clause that cannot match in match_fail/3
v3_life tests: Cover exception handling code in v3_life:function/1
beam_type: Remove redundant clause
v3_core tests: Cover make_bool_switch_guard/5
v3_core tests: Cover handling of pattern aliases
v3_core: Remove a clause in is_simple/1 that cannot match
v3_core: Remove unused support for generating compilation errors
Remove stray support for the put_literal/2 instruction
Remove stray support for the bs_bits_to_bytes2/2 instruction
Remove the bs_bits_to_bytes/3 instruction
Cover handling of 'math' BIFs
beam_bool: Remove a clause in live_regs/1 that cannot match
beam_bool: Cover handling of bs_context_to_binary in initialized_regs/2
beam_bool: Remove a clause in initialized_regs/2 that cannot match
beam_block: Remove a clause that will never be executed
erts: Stop supporting non-literal empty tuples
compile: Remove code that is only executed on Solaris
Do not cover-analyze core_scan
core_SUITE_data: Don't ignore *.core files in this directory
OTP-8636 bg/compiler-cover-and-clean
|
|
bs_bits_to_bytes2/2 was an experimental instruction added in R11,
but was removed in R12. Although the beam_disasm and beam_validator
modules do support instructions from older releases, there is
no reason to have them support experimental instructions.
|
|
|