The following function:

    is_two_tuple(Arg) ->
        case is_tuple(Arg) of
            false -> false;
            true -> tuple_size(Arg) == 2
        end.

would cause an internal consistency failure:

    Internal consistency check failed - please report this bug.
    Instruction: {bif,tuple_size,{f,0},[{x,0}],{z,0}}
    Error:       {invalid_store,{z,0},{integer,[]}}:
---
Enhance the copy_retval/1 optimization to allow Y registers to be
reused in more circumstances. Reusing Y registers can often reduce
the size of the stack frame.
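For illustration, here is a contrived example of the general idea (the
code and the helper functions b/0, c/1, and d/0 are ours, not from the
commit):

    %% A must be saved in a Y register across the call to b/0. After
    %% the call c(A), A is dead, so the return value C can reuse A's
    %% stack slot instead of being given a slot of its own, keeping
    %% the frame at two Y registers instead of three.
    f(A) ->
        B = b(),
        C = c(A),
        D = d(),
        {B, C, D}.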
---
The linear scan algorithm keeps unhandled intervals in two separate
lists: one for intervals with reserved registers and one for intervals
without reserved registers. When collecting the available free
registers, all unhandled intervals with reserved registers must be
checked for overlap.

Unhandled intervals that had a preferred register were put into the
list of intervals with reserved registers, leading to a lot of
unnecessary overlap checking if there were many intervals with
preferred registers. Changing the partition code to put intervals with
preferred registers into the general list of unhandled intervals will
reduce the compilation time if there are many preferred registers.
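A minimal sketch of the new partitioning (the record and function
names are illustrative, not the actual code of the pass):

    -record(interval, {reserved=none, prefer=none, range}).

    %% Only intervals with a truly reserved register go into the list
    %% that must be scanned for overlap; intervals that merely have a
    %% preferred register stay in the general unhandled list.
    partition_intervals(Intervals) ->
        lists:partition(fun(#interval{reserved=R}) -> R =/= none end,
                        Intervals).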
On my computer, this change reduced the time of the linear scan pass
from about 20 seconds down to about 0.5 seconds for this module:
https://github.com/aggelgian/cuter/blob/master/src/cuter_binlib.erl
Noticed-by: Kostis Sagonas
---
Fixes a crash during code generation of the following code:

    call_atom() ->
        fun({send = Send}) ->
                Send()
        end.
---
* bjorn/compiler/fix-r21-option:
  Fix code generation of binary instructions with the r21 option
---
Minor cleanups and bug fixes of the compiler
---
OTP 22 extends the binary instructions to support a Y register
destination. When an option to compile for an earlier release is
given, make sure that binary instructions don't use a Y register
destination, by rewriting such binary instructions to use an X
register destination and adding a `move` instruction to move
the value to the Y register.
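Schematically (this listing is an illustration with placeholder
operands, not actual compiler output), the rewrite turns

    {bs_init2,{f,0},Sz,Words,Live,{field_flags,[]},{y,0}}.

into

    {bs_init2,{f,0},Sz,Words,Live,{field_flags,[]},{x,1}}.
    {move,{x,1},{y,0}}.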
---
Rewrite BSM optimizations in the new SSA-based intermediate format
---
Remove the variable aliasing support that was needed for the
old beam_bsm pass.
---
This commit improves the bit-syntax match optimization pass,
leveraging the new SSA intermediate format to perform much more
aggressive optimizations. Some highlights:
* Match contexts can be reused even after being passed to a
  function or being used in a try block.
* Sub-binaries are no longer eagerly extracted, making it far
  easier to keep "happy paths" free from binary creation (see
  the sketch after this list).
* Trivial wrapper functions no longer disable context reuse.
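As a sketch of the sub-binary point above (the example is ours, not
from the commit):

    %% The match context created for the argument of parse/1 can be
    %% reused inside parse_body/2 even though the rest of the binary
    %% is passed across a function call; no sub-binary needs to be
    %% created for Rest on the happy path.
    parse(<<Vsn:8,Rest/binary>>) ->
        parse_body(Vsn, Rest).

    parse_body(1, <<Len:16,Body:Len/binary,_/binary>>) -> {ok,Body};
    parse_body(_, _) -> {error,bad_version}.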
---
The source map for `get_map_element` is not supposed to be a literal,
and the sanitization code is supposed to ensure that.
The sanitization incorrectly translated this code:

    Map = get_tuple_element literal {ok,#{key=>value}}, literal 1
    Var = get_map_element Map, literal {a,key}

to this code:

    Var = get_map_element literal #{key=>value}, literal {a,key}

Make sure to substitute the arguments for `get_map_element` before
looking for a literal source map.
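A minimal sketch of the corrected order (the function name and the
term shapes are illustrative; the real code lives in the sanitization
pass):

    sanitize_get_map_element(Args0, Sub) ->
        %% Apply the variable substitution first...
        Args = [maps:get(A, Sub, A) || A <- Args0],
        %% ...and only then check whether the source map is a
        %% literal that must be rewritten into a register.
        case Args of
            [{literal,Map},_Key] when is_map(Map) ->
                {must_rewrite,Args};
            _ ->
                {ok,Args}
        end.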
---
We chose to refer to variables through their var_name() because we
anticipated the need to annotate them, but it turned out we didn't
really need that, and many things become a lot cleaner if the
entire #b_var{} is used to represent variables.
---
When creating a phi node for the common exit block of a receive,
the code failed to take into account that there could be more
than one predecessor to the exit block for each remove_message.
Rename exit_predecessor/3 to exit_predecessors/3 and make it
return a list of the predecessors.
---
Sometimes when building a tuple, there is no way to avoid an
extra `move` instruction. Consider this code:

    make_tuple(A) -> {ok,A}.

The corresponding BEAM code looks like this:

    {test_heap,3,1}.
    {put_tuple,2,{x,1}}.
    {put,{atom,ok}}.
    {put,{x,0}}.
    {move,{x,1},{x,0}}.
    return.

To avoid overwriting the source register `{x,0}`, a `move`
instruction is necessary.

The problem doesn't exist when building a list:

    %% build_list(A) -> [A].
    {test_heap,2,1}.
    {put_list,{x,0},nil,{x,0}}.
    return.

Introduce a new `put_tuple2` instruction that builds a tuple in a
single instruction, so that the `move` instruction can be eliminated:

    %% make_tuple(A) -> {ok,A}.
    {test_heap,3,1}.
    {put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
    return.

Note that the BEAM loader already combines `put_tuple` and `put`
instructions into an internal instruction similar to `put_tuple2`.
Therefore, the new instruction will not make execution of tuple
building any faster, but it will be less work for the loader to
load the new instruction.
---
v3_codegen is replaced by three new passes:

* beam_kernel_to_ssa which translates the Kernel Erlang format
  to a new SSA-based intermediate format.
* beam_ssa_pre_codegen which prepares the SSA-based format
  for code generation, including register allocation. Registers
  are allocated using the linear scan algorithm.
* beam_ssa_codegen which generates BEAM assembly code from the
  SSA-based format.
It is easier and more effective to optimize the SSA-based format
before X and Y registers have been assigned. The current optimization
passes constantly have to make sure that no "holes" in the X register
assignments are created (that is, that no X register that an
allocation instruction depends on becomes undefined).
This commit also introduces the following optimizations:
* Replacement of tuple matching of records with the is_tagged_tuple
  instruction. (Replacing beam_record. See the example after this
  list.)
* Sinking of get_tuple_element instructions to just before the first
  use of the extracted values. As well as potentially avoiding
  extracting tuple elements when they are not actually used on all
  execution paths, this optimization could also reduce the number of
  values that will need to be stored in Y registers. (Similar to
  beam_reorder, but more effective.)
* Live optimizations, removing the definition of a variable that is
  not subsequently used (provided that the operation has no side
  effects), as well as strength reduction of binary matching by
  replacing the extraction of a value from a binary with a skip
  instruction. (Used to be done by beam_block, beam_utils, and
  v3_codegen.)
* Removal of redundant bs_restore2 instructions. (Formerly done
  by beam_bs.)
* Type-based optimizations across branches. More effective than
  the old beam_type pass that only did type-based optimizations in
  basic blocks.
* Optimization of floating point instructions. (Formerly done
  by beam_type.)
* Optimization of receive statements to introduce recv_mark and
  recv_set instructions. More effective with far fewer restrictions
  on what instructions are allowed between creating the reference
  and entering the receive statement.
* Common subexpression elimination. (Formerly done by beam_block.)
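As an example of the is_tagged_tuple optimization mentioned above
(the code is ours, not from the commit):

    -record(state, {count}).

    %% The record match compiles to a single is_tagged_tuple test,
    %% checking the arity and the 'state' tag in one instruction,
    %% instead of separate is_tuple, test_arity, and tag-comparison
    %% instructions.
    get_count(#state{count=Count}) -> Count.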