* maint:
  Fix missing filename and line number in warning

Conflicts:
    lib/compiler/test/bs_match_SUITE.erl
|
|
When the 'bin_opt_info' option is given, warnings without filenames
and line numbers could sometimes be produced:

  no_file: Warning: INFO: matching non-variables after
  a previous clause matching a variable will prevent delayed
  sub binary optimization

The reason for the missing information is that #c_alias{} records lack
location information. There are several ways to fix the problem. The
easiest seems to be to get the location information from the
surrounding code.

Noticed-by: José Valim
|
|
* bjorn/cleanup:
  beam_validator: Don't allow an 'undefined' entry label in a function
  beam_validator: Remove obsolete DEBUG support
  v3_kernel: Speed up compilation of modules with many funs
  beam_dict: Speed up storage of funs
  beam_asm: Speed up assembly for modules with many exports
  sys_core_dsetel: Use a map instead of a dict
  sys_pre_expand: Cover coerce_to_float/2
  Cover code for callbacks in sys_pre_expand
  Cover sys_pre_expand:pattern/2
  sys_pre_expand: Remove uncovered clause in pat_bit_size/2
  sys_pre_expand: Clean up data structures
  sys_pre_expand: Remove vestiges of variable usage tracking
  sys_pre_expand: Remove imports of ordsets functions
  sys_pre_expand: Remove unnecessary inclusion of erl_bits.hrl
  io: Make a fast code path for i/o requests
|
|
Before 912fea0b, beam_validator could validate disassembled files.
That is probably why the entry label was allowed to be 'undefined'.
|
|
No one has used the debug support in many years. Also, the debug
support is not free. There are calls to lists:foreach/2 that will be
executed even when debug support is turned off.
|
|
Using a map to store the number of free variables for funs instead of
an orddict will speed up the v3_kernel pass for modules with a huge
number of funs (such as NBAP-PDU-Contents in the asn1 test suite).
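
A minimal sketch of the difference, with hypothetical names (not the
actual v3_kernel code):

  -module(fvs_sketch).
  -export([store_orddict/3, store_map/3]).

  %% Before: orddict:store/3 walks an ordered list, so each
  %% insertion is linear in the number of funs already stored.
  store_orddict(Name, FreeVars, Dict) ->
      orddict:store(Name, length(FreeVars), Dict).

  %% After: a single map update avoids the linear scan.
  store_map(Name, FreeVars, Map) ->
      Map#{Name => length(FreeVars)}.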
|
|
For huge modules with many funs (such as NBAP-PDU-Contents in the asn1
test suite), the call to length/1 in beam_dict:lambda/3 will dominate
the running time of the beam_asm pass.
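
A simplified sketch of the problem and the fix (the real beam_dict
state holds more fields; the names here are hypothetical):

  -module(lambda_sketch).
  -export([add_slow/2, add_fast/2]).

  %% Before: the index of each new fun entry was computed with
  %% length/1 on the accumulated list, so adding N funs is O(N^2).
  add_slow(Lambda, Lambdas) ->
      Index = length(Lambdas),
      {Index, [{Index,Lambda}|Lambdas]}.

  %% After: carry a running count along with the list, making
  %% each addition O(1).
  add_fast(Lambda, {Count,Lambdas}) ->
      {Count, {Count+1, [{Count,Lambda}|Lambdas]}}.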
|
|
Eliminate searching in the list of exported functions in favor of
using a map. For modules with a huge number of exported functions
(such as NBAP-PDU-Contents in the asn1 test suite), that will mean a
significant speed-up.
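
A minimal sketch of the change, with hypothetical names (not the
actual beam_asm code):

  -module(exports_sketch).
  -export([member_slow/2, member_fast/2]).

  %% Before: a linear search through the export list.
  member_slow(FA, Exports) ->
      lists:member(FA, Exports).

  %% After: a single map lookup.
  member_fast(FA, ExportMap) ->
      maps:is_key(FA, ExportMap).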
|
|
For large modules, a map is significantly faster than a dict.
|
|
The atom 'all' can never occur in a size field before sys_pre_expand
has been run.
|
|
The handling of non-remote calls is messy, with several lookups
to determine whether the call is local or to some imported module.
We can simplify the code if we keep a map that immediately gives
us the answer. Here is an example of what the map entries look
like:

  {f,1} => local
  {foldl,3} => {imported,lists}

That is, a call to f/1 is a local call, and a call to foldl/3 is
really a remote call to lists:foldl/3.
Note that there is no longer any need to keep the set of all defined
functions in the state record.
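
With such a map, classifying a call becomes a single lookup. A
minimal sketch, assuming a hypothetical call_type/3 helper (not the
actual sys_pre_expand code):

  -module(calltype_sketch).
  -export([call_type/3]).

  %% One lookup tells us whether the call is local, imported,
  %% or unknown.
  call_type(Name, Arity, CallTypes) ->
      maps:get({Name,Arity}, CallTypes, undefined).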
|
|
Before the Core Erlang passes were introduced a long time ago,
sys_pre_expand used to track used and new variables in order to
do lambda lifting (i.e. transform funs into ordinary Erlang
functions). Lambda lifting is now done in v3_kernel.
Remove the few remaining vestiges of variable tracking in the
comments and the code.
|
|
Importing from_list/1 and union/2 from the 'ordsets' module, while
at the same time making explicit calls to the functions with the
same names in the 'gb_sets' module, is confusing. Make all calls
to 'ordsets' explicit.
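
For example, instead of a bare from_list(Vs) call that relies on the
import, every call is module-qualified (illustrative sketch only):

  -module(ordsets_sketch).
  -export([add_vars/2]).

  %% Explicitly naming 'ordsets' makes it impossible to confuse
  %% these calls with gb_sets:from_list/1 or gb_sets:union/2.
  add_vars(Vs, Used0) ->
      ordsets:union(ordsets:from_list(Vs), Used0).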
|
|
A record field type has been modified due to commit 8ce35b2:
"Take out automatic insertion of 'undefined' from typed record fields".
|
|
c288ab87 added beam_reorder to move get_tuple_element instructions.
Compiling code such as the following would crash the compiler:

  alloc(_U1, _U2, R) ->
      V = R#alloc.version,
      Res = id(V),
      _ = id(0),
      Res.

The crash would occur because the following two instructions:

  {get_tuple_element,{x,2},1,{x,1}}.
  {allocate_zero,1,2}.

were swapped and rewritten to:

  {allocate_zero,1,1}.
  {get_tuple_element,{x,2},1,{x,1}}.

That transformation is not safe because the allocate_zero instruction
would kill {x,2}, which is the register that is holding the reference
to the tuple. Only do the transformation when the tuple reference is
in an x register with a lower number than the destination register.
|
|
There is an optimization in beam_clean that will remove values
having the same label as the failure label in a select_val
instruction. Conceptually, this optimization is in the wrong
module, since ideally beam_clean is a mandatory pass that should
not do optimizations. Furthermore, this part of beam_clean is
called three times (from beam_dead, beam_peep, and as a compiler
pass from the 'compile' module), but it only does anything useful
one of the times it is called.
Therefore, move this optimization to the beam_peep pass.
The same optimization is done in beam_dead, but unfortunately it
misses some opportunities for optimization because the code sharing
optimization in beam_jump (share/1) runs after beam_dead. It would
be more satisfactory to have this optimization only in beam_dead,
but it turned out not to be trivial. If we try to run
beam_jump:share/1 before beam_dead, some optimizations will no
longer work in beam_dead because fallthroughs have been eliminated.
For the moment, the possible solutions to this problem seem to
involve more work and more complicated code than would be gained
by eliminating the duplicated optimization.
|
|
There is an optimization in beam_block to simplify a select_val
on a known boolean value. We can implement this optimization
in a cleaner way in beam_type and it will also be applicable
in more situations. (When I added the optimization to beam_type
without removing the optimization from beam_block, the optimization
was applied 66 times.)
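
Here is a source-level illustration of one situation where the
optimization can apply (the rewrite itself happens on the
select_val instruction, not on the Erlang code):

  -module(bool_sketch).
  -export([f/1]).

  %% X is known to be 'true' or 'false' here, so the three-way
  %% dispatch generated for this case (true, false, failure)
  %% can be reduced to a single two-way test.
  f(X) when is_boolean(X) ->
      case X of
          true -> yes;
          false -> no
      end.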
|
|
The ASN.1 compiler often generates code similar to:

  f(<<0:1,...>>) -> ...;
  f(<<1:1,...>>) -> ....

Internally that will be rewritten to (conceptually):

  f(<<B:1,Tail/binary>>) ->
      case B of
          0 ->
              case Tail of ... end;
          1 ->
              case Tail of ... end;
          _ ->
              error(case_clause)
      end.

Since B comes from a bit field of one bit, we know that the only
possible values are 0 and 1. Therefore the error clause can be
eliminated like this:

  f(<<B:1,Tail/binary>>) ->
      case B of
          0 ->
              case Tail of ... end;
          _ ->
              case Tail of ... end
      end.

Similarly, we can also deduce the range of an integer from a
'band' operation with a literal integer.
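
For example (illustrative):

  -module(band_sketch).
  -export([parity/1]).

  %% X band 1 is always in the range 0..1, so the case below
  %% cannot fail and no case_clause error code is needed.
  parity(X) when is_integer(X) ->
      case X band 1 of
          0 -> even;
          1 -> odd
      end.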
While we are at it, also add a test case to improve the coverage.
|
|
The clause cannot possibly match, because there will always be
a {bif,...} clause that will match before reaching the fclearerror
instruction.
|
|
30cc5c90 changed the internal representation of catch and
try...catch, but beam_type was not updated in one place.
|
|
When the bit syntax is used to match a single binary literal, the bit
syntax instructions will be replaced with a comparison to a binary
literal. The only problem is that the bs_context_to_binary instruction
will not be eliminated. Example:

  f(<<"string">>) ->
      ok.

This function would be translated to:

  {function, f, 1, 2}.
  {label,1}.
  {line,...}.
  {func_info,...}.
  {label,2}.
  {test,is_eq_exact,{f,3},[{x,0},{literal,<<"string">>}]}.
  {move,{atom,ok},{x,0}}.
  return.
  {label,3}.
  {bs_context_to_binary,{x,0}}.
  {jump,{f,1}}.

The bs_context_to_binary instruction serves no useful purpose,
since {x,0} can never be a match context. After eliminating the
instruction, the resulting code is:

  {function, f, 1, 2}.
  {label,1}.
  {line,...}.
  {func_info,...}.
  {label,2}.
  {test,is_eq_exact,{f,1},[{x,0},{literal,<<"string">>}]}.
  {move,{atom,ok},{x,0}}.
  return.
|
|
In a select_val instruction, values associated with a label
which is the same as the failure label can be removed. We
already do this optimization in beam_clean, but it is better
to do this sort of optimization before the beam_jump pass.
Also rewrite a select_val instruction with a single value
to an is_eq_exact instruction followed by a jump instruction.
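
A sketch of the single-value rewrite; the instruction format is
simplified compared to the real BEAM instructions:

  -module(select_sketch).
  -export([rewrite/1]).

  %% A select_val with a single value is an equality test in
  %% disguise: branch to the failure label if the value does not
  %% match, otherwise jump to the value's label.
  rewrite({select_val,Reg,Fail,[Val,Lbl]}) ->
      [{test,is_eq_exact,Fail,[Reg,Val]},{jump,Lbl}];
  rewrite(I) ->
      [I].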
|
|
In the future we might want to add more bit syntax optimizations,
but beam_block is already sufficiently complicated. Therefore, move
the bit syntax optimizations out of beam_block into a separate
compiler pass called beam_bs.
|
|
Knowing that a BIF returns an integer makes it possible to
replace '==' with the cheaper '=:=' test.
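
For example (illustrative):

  -module(eq_sketch).
  -export([is_empty/1]).

  %% length/1 always returns an integer, so comparing it to the
  %% integer literal 0 with '==' gives exactly the same result
  %% as the cheaper exact test '=:='.
  is_empty(L) when length(L) == 0 -> true;
  is_empty(_) -> false.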
|
|
Consider the following function:

  f(Bin, Bool) ->
      case Bin of
          <<Val:16/binary,_/binary>> when Bool ->
              Val
      end.

Simplified, the generated code looks like:

  bs_start_match2 Fail Live Bin => Bin
  bs_get_integer2 Fail Live Bin size=Sz unit=1 => Val
  bs_skip_bits2 Fail Bin size=all unit=8
  is_eq_exact Fail Bool true

The code generator will replace the bs_skip_bits2 instruction with
a bs_test_unit instruction if it can be clearly seen that the
context register will not be used again. In this case, it is not
obvious without looking at the code at the Fail label.
However, it turns out that the bs_test_unit instruction is always
safe because of the way v3_kernel compiles pattern matching.
It doesn't matter whether the match context will be used again.
If it will be used again, the position in it will *not* be used.
Instead, a bs_restore2 instruction will restore one of the saved
positions.
|
|
d0784035ab fixed a problem with register corruption. Because of
that, opt_moves/2 will never be asked to optimize instructions with
more than two destination registers. Therefore, to regain full
coverage of beam_block, remove the final clause in opt_moves/2.
|
|
* bjorn/compiler/remove-deprecated/OTP-12979:
  core_lib: Remove previously deprecated functions
|
|
* c-rack/fix-typo3:
  Fix typo in call_last/3 spec
  Fix typo
  Fix typo: message to send is in x(1) not x(0)
  Fix another small typo
  Fix typo
|
|
The get_map_elements instruction might destroy target registers when
the fail label is taken. This was only seen for patterns with two,
and only two, target registers. Specifically: we copy one register
and then jump. Consider:

  foo(A, #{a := V1, b := V2}) -> ...;
  foo(A, #{b := V}) -> ...

called as foo(value, #{a=>whops, c=>42}). The corresponding
assembler:

  {test,is_map,{f,5},[{x,1}]}.
  {get_map_elements,{f,7},{x,1},{list,[{atom,a},{x,1},{atom,b},{x,2}]}}.
  %% if 'a' exists but not 'b', {x,1} is overwritten; jump to {f,7}
  {move,{integer,1},{x,0}}.
  {call_only,3,{f,10}}.
  {label,7}.
  {get_map_elements,{f,8},{x,1},{list,[{atom,b},{x,2}]}}.
  %% {x,1} (src) is read with a corrupt value
  {move,{x,0},{x,1}}.
  {move,{integer,2},{x,0}}.
  {call_only,3,{f,10}}.

The fix is to skip the 'opt_moves' optimization for a
get_map_elements instruction with two or more destinations.

Reported-by: Valery Tikhonov
|
|
In 45f469ca0890, the BEAM loader started to use x(1023) as a scratch
register for some instructions. Therefore we should not allow
x(1023) to be used in code emitted by the compiler.
|
|
When translating guards to Core Erlang, it is sometimes necessary
to add an is_boolean/1 guard test. Here is an example where it is
necessary:

  o(A, B) when A or B ->
      ok.

That would be translated to something like:

  o(A, B) when ((A =:= true) or (B =:= true)) and
               is_boolean(A) and is_boolean(B) ->
      ok.

The is_boolean/1 tests are necessary to ensure that the guard
fails for calls such as:

  o(true, not_boolean)

However, because of a bug in v3_core, is_boolean/1 tests were
added when they were not necessary. Here is an example:

  f(B) when not B -> ok.

That would be translated to:

  f(B) when (B =:= false) and is_boolean(B) -> ok.

The following translation works just as well:

  f(B) when B =:= false -> ok.

Correct the bug to suppress those unnecessary is_boolean/1 tests.
|
|
We can rewrite more instances of select_val to is_boolean because
it is not necessary that a particular label follows the select_val.
|
|
Put 'try' instructions inside blocks to improve the optimization
of allocation instructions. Currently, the compiler only looks
at initialization of y registers inside blocks when determining
which y registers will be "naturally" initialized.
|
|
Simplify further optimizations by moving safe instructions to
before the 'try' or 'catch' instruction.
|
|
When matching tuples, the pattern matching compiler would generate
code that would fetch all elements of the tuple that will ultimately
be used, *before* testing that (for example) the first element is the
correct record tag. For example:

  is_tuple Fail {x,0}
  test_arity Fail {x,0} 3
  get_tuple_element {x,0} 0 {x,1}
  get_tuple_element {x,0} 1 {x,2}
  get_tuple_element {x,0} 2 {x,3}
  is_eq_exact Fail {x,1} some_tag

If {x,2} and {x,3} are not used at label Fail, we can re-arrange the
code like this:

  is_tuple Fail {x,0}
  test_arity Fail {x,0} 3
  get_tuple_element {x,0} 0 {x,1}
  is_eq_exact Fail {x,1} some_tag
  get_tuple_element {x,0} 1 {x,2}
  get_tuple_element {x,0} 2 {x,3}

Doing that may be beneficial in two ways.
If the branch is taken, we have eliminated the execution of two
unnecessary instructions.
Even if the branch is never or rarely taken, there is the possibility
for more optimizations following the is_eq_exact instruction.
For example, imagine that the code looks like this:

  get_tuple_element {x,0} 1 {x,2}
  get_tuple_element {x,0} 2 {x,3}
  move {x,2} {y,0}
  move {x,3} {y,1}

Assuming that {x,2} and {x,3} have no further uses in the code
that follows, that can be rewritten to:

  get_tuple_element {x,0} 1 {y,0}
  get_tuple_element {x,0} 2 {y,1}

When should we perform this optimization?
At the very latest, it must be done before opt_blocks/1 in
beam_block, which does the elimination of unnecessary moves.
Actually, we want to do the optimization before the blocks have
been established, since moving instructions out of one block
into another is cumbersome.
Therefore, we will do the optimization in a new pass that is
run before beam_block. A new pass will make debugging easier,
and beam_block already has a fair number of sub passes.
|
|
Here is an example of a move instruction that could not be optimized
away because the {x,2} register was not killed:

  get_tuple_element Reg Pos {x,2}
  ...
  move {x,2} {y,0}
  put_list {x,2} nil Any

We can do the optimization if we replace all occurrences of the {x,2}
register as a source with {y,0}:

  get_tuple_element Reg Pos {y,0}
  ...
  put_list {y,0} nil Dst
|
|
The 'move' optimization was relatively clean until GC BIFs
were introduced. Instead of re-thinking the implementation,
the existing code was fixed and patched.
The current code unsuccessfully attempts to eliminate 'move'
instructions across GC BIF and allocation instructions. We can
simplify the code if we give up as soon as we encounter any
instruction that allocates.
|
|
opt_alloc/1 makes a redundant call to opt/1. It is redundant because
the opt/1 function has already been applied to the instruction
sequence prior to calling opt_alloc/1.
|