Age | Commit message | Author |
|
Eliminate some code bloat.
|
|
Rewrite the five binary creation instructions into a single bs_init
instruction, in order to somewhat reduce code bloat.
|
|
We can remove some code bloat by handling the special instructions
as BIF instructions in the optimization passes. Also note that
bs_utf*_size was not handled by beam_utils:check_liveness/3
(meaning the conservative answer instead of the correct answer
would be returned).
|
|
Seven bs_put_* instructions can be combined into one generic bs_put
instruction to avoid some code bloat. That will also improve some
optimizations (such as beam_trim) that did not handle all bs_put*
variants.
|
|
Since we always want to remove unused labels directly after code
generation (whether we'll run the optimization passes or not),
we can simplify the code by doing it in beam_a.
|
|
Introduce the mandatory beam_a pass that will be run directly after code
generation, and the mandatory beam_z pass that will be run just before
beam_asm. Since these passes surround the optimizations, beam_a can
(for example) do instruction renaming to simplify the optimization
passes and beam_z can undo those renamings.
|
|
|
|
|
|
Don't throw the parse tree in the face of the user.
OTP-8707
|
|
beam_jump moves short code sequences ending in an instruction that causes
an exception to the end of the function, in the hope that a jump around
the moved block can be replaced with a fallthrough. Therefore, moving
a block that is entered via a fallthrough defeats the purpose of the
optimization.
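For illustration, a hypothetical fragment (labels and operands made up)
before the move:
{jump,{f,4}}.
{label,3}.
{case_end,{x,0}}.
{label,4}.
After beam_jump has moved the short block at label 3 to the end of the
function, label 4 directly follows the jump, so the jump can be replaced
with a fallthrough.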
Also add two more test cases for the beam_receive module to ensure that
all lines are still covered.
|
|
A test instruction with the same target label as a jump that immediately
follows it was supposed to be removed, but it was kept anyway.
Fix that optimization, but also make sure that the test instruction
is kept if the test instruction may have side effects (such as
a bit syntax matching instruction).
While at it, make the code cleaner by breaking it up into two clauses
and don't remove the jump instruction if it is redundant (removal of
redundant jump instructions is already handled in another place).
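An illustrative fragment (label and operands made up):
{test,is_eq_exact,{f,10},[{x,0},{atom,ok}]}.
{jump,{f,10}}.
Whether the test succeeds or fails, execution continues at label 10, so
the test can be dropped and only the jump kept, unless the test has side
effects such as advancing a match position.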
|
|
When a matched-out variable is used as a size field in multiple clauses,
as in:
foo(<<L:8,A:L>>) -> A;
foo(<<L:8,A:L,B:8>>) -> {A,B}.
the match tree would branch out before the segment that used the
matched-out variable (in this example, the tree would branch out before
the matching of A:L). That happens because the pattern matching
compiler did not take variable substitutions into account when
grouping clauses that match the same value.
That is, the generated code would work similarly to this code:
foo(<<L:8,T/binary>>) ->
    case T of
        <<A:L>> ->
            A;
        _ ->
            case T of
                <<A:L,B:8>> ->    %% A matched out again!
                    {A,B}
            end
    end.
We would like the matching to work more like:
foo(<<L,A:L,T/binary>>) ->
    case T of
        <<>> -> A;
        <<B:8>> -> {A,B}
    end.
Fix the problem by taking the substitutions into account when grouping
clauses that match out the same value.
|
|
The bs_match_string instruction is used to speed up matching of
binary literals. For example, given this source code:
foo1(<<1,2,3>>) -> ok.
The matching part of the code will look like:
{test,bs_start_match2,{f,1},1,[{x,0},0],{x,0}}.
{test,bs_match_string,{f,3},[{x,0},24,{string,[1,2,3]}]}.
{test,bs_test_tail2,{f,3},[{x,0},0]}.
Nice. However, if we do a simple change to the source code:
foo2(<<1,2,3>>) -> ok;
foo2(<<>>) -> error.
the resulting matching code will look like (slightly simplified):
{test,bs_start_match2,{f,4},1,[{x,0},0],{x,0}}.
{test,bs_get_integer2,{f,7},1,[{x,0},{integer,8},1,Flags],{x,1}}.
{test,is_eq_exact,{f,8},[{x,1},{integer,1}]}.
{test,bs_match_string,{f,6},[{x,0},16,{string,[2,3]}]}.
{test,bs_test_tail2,{f,6},[{x,0},0]}.
{move,{atom,ok},{x,0}}.
return.
{label,6}.
{bs_restore2,{x,0},{atom,start}}.
{label,7}.
{test,bs_test_tail2,{f,8},[{x,0},0]}.
That is, matching of the first byte is not combined into the
bs_match_string instruction that follows.
Fix this problem by allowing a bs_match_string instruction to be
used if all clauses will match either the same integer literal or
the empty binary.
|
|
In modules with huge functions containing many bs_match_string
instructions, we can speed up the compilation by combining
adjacent bs_match_string instructions in v3_codegen (as opposed
to in beam_block, where we used to do it).
For instance, on my computer v3_codegen became more than
twice as fast when compiling the re_testoutput1_split_test module
in the STDLIB test suite.
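As a rough illustration (operands made up), two adjacent instructions
such as:
{test,bs_match_string,{f,1},[{x,0},8,{string,[1]}]}.
{test,bs_match_string,{f,1},[{x,0},16,{string,[2,3]}]}.
can now be emitted directly as the combined instruction:
{test,bs_match_string,{f,1},[{x,0},24,{string,[1,2,3]}]}.
(Combining is only possible when the failure label and match context
are the same, as they are here.)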
|
|
The optimizer of boolean expressions can often reject optimizations
that it does not recognize as safe. For instance, if a boolean
expression was preceded by 'move' instructions that saved x registers
into y registers, it would almost certainly reject the optimization
because it required that the y register not be used in the code that
follows.
Fix this problem by allowing identical 'move' instructions that assign
to y registers at the beginning of the old and the optimized code.
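For example (hypothetical registers), a prologue such as:
{move,{x,1},{y,0}}.
{move,{x,2},{y,1}}.
may now appear, unchanged, at the start of both the original and the
optimized code without making the optimizer reject the rewrite.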
While at it, correct the spelling of "preceding".
|
|
The usage calculation only looked at the allocation in GC BIFs, not
at the source and destination registers. Also, if there is a failure
label, make sure that we test whether the register can be used there.
|
|
The liveness at the failure label should be ignored, because if
there is an exception, all x registers will be killed.
|
|
The code generator uses conservative liveness information. Therefore
the number of live registers in allocation instructions (such as
test_heap/2) may be too high. Use the actual liveness information
to lower the number of live registers if it's too high.
The main reason we want to do this is to enable more optimizations
that depend on liveness analysis, such as the beam_bool and beam_dead
passes.
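As a sketch (numbers made up): if the code generator emitted
{test_heap,6,3}.
but the liveness information shows that only {x,0} is live at that
point, the instruction can be lowered to
{test_heap,6,1}.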
|
|
Less conservative liveness analysis allows more optimizations
to be applied (such as the ones in beam_bool).
|
|
Add some comments to make the code generation for binary
construction somewhat clearer. Also get rid of the cg_bo_newregs/2
function that serves no useful purpose. (It was probably
intended to undo the effect of cg_live/2, but note that the
value passed to cg_bo_newregs/2 has not passed through cg_live/2.)
|
|
|
|
|
|
Conflicts:
lib/diameter/autoconf/vxworks/sed.general
xcomp/README.md
|
|
|
|
* maint:
compiler: Warn if the size of a binary segment is invalid
|
|
* bjorn/compiler/illegal-size/OTP-10197:
compiler: Warn if the size of a binary segment is invalid
|
|
* maint:
Revert "Merge branch 'nox/compile-column-numbers' into maint"
|
|
Column numbers were merged without understanding the whole
story. See mail on erlang-patches for details.
This reverts commit df8e67e203b83f95d1e098fec88ad5d0ad840069, reversing
changes made to 0c9d90f314f364e5b1301ec89d762baabc57c7aa.
|
|
The compiler would silently accept and Dialyzer would crash on
code like:
<<X:(2.5)>>
It is never acceptable for Dialyzer to crash. The compiler should
at least generate a warning for such code. It is tempting to let
the compiler generate an error, but that would mean that code like:
Sz = 42.0,
<<X:Sz>>.
would be possible to compile with optimizations disabled, but not
with optimizations enabled.
Dialyzer crashes because it calls cerl:bitstr_bitsize/1, which
crashes if the type of size for the segment is invalid. The easiest
way to avoid that crash is to extend the sanity checks in v3_core
to also include the size field of binary segments. That will cause
the compiler to issue a warning and to replace the bad binary
construction with a call to erlang:error/1. (It also means that
Dialyzer will not issue a warning for bad size fields.)
|
|
Tuple funs were deprecated in R15B (in commit a4029940e309518f5500).
|
|
* bjorn/compiler/minor-fixes/OTP-10185:
erl_lint: Add a deprecated warning for literal tuple funs
beam_utils:live_opt/1: Correct handling of try_case_end/1
Correct guard_SUITE_tuple_size.S
beam_type: Print the offending function if this pass crashes
beam_validator: Validate the size operand in bs_init_bits and bs_init2
|
|
Liveness for the try_case_end/1 instruction should be calculated
in the same way as for the case_end/1 instruction.
|
|
|
|
|
|
|
|
* nox/compile-column-numbers:
Fix messages ordering with column numbers
Fix type compile:err_info/0
Test column number reporting in error_SUITE
Fix printing of errors with column numbers
Create a new "column" option in compile
Allow setting of initial position in epp
Export type erl_scan:location/0
|
|
If a process traps exits, calling the compiler would leave an EXIT
message in the message queue of the calling process, because the
compiler spawns a temporary work process. Eliminate the EXIT message
by monitoring the temporary process instead of linking to it.
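A minimal sketch of the technique (made-up names, not the compiler's
actual code):
run(Fun) ->
    {Pid, Ref} = spawn_monitor(fun() -> exit({ok, Fun()}) end),
    receive
        {'DOWN', Ref, process, Pid, {ok, Result}} ->
            {ok, Result};
        {'DOWN', Ref, process, Pid, Reason} ->
            {error, Reason}
    end.
Because the worker is monitored rather than linked, its termination is
reported as a 'DOWN' message and never as an 'EXIT' message, even when
the caller traps exits.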
Reported-by: Jeremy Heater
|
|
Commit a612e99fb5aaa934fe5a8591db0f083d7fa0b20a turned module attributes from
2-tuples to 3-tuples but forgot to update get_base/1, breaking BASE for
parametric modules.
|
|
* jv/forms-source:
Allow the source to be set when compiling forms
OTP-10150
|
|
Use a gb_set instead of an ordset to store the set of defined
functions in the module to avoid quadratic time complexity.
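A sketch of the idea (Functions is a made-up list of {Name,Arity}
pairs): insertion into a gb_set is O(log N), whereas insertion into an
ordset is O(N), which made building the whole set quadratic.
Defined = lists:foldl(fun(FA, Set) -> gb_sets:add_element(FA, Set) end,
                      gb_sets:empty(), Functions),
gb_sets:is_element({foo,1}, Defined).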
|
|
With L1, L2, C1, C2 integers such that L1 < L2 and C1 < C2, locations
are ordered like this:
L1 < {L1, C1} < {L1, C2} < L2
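A sketch of a comparison realizing this ordering (hypothetical helpers,
not actual epp or erl_scan functions); it relies on columns starting
at 1, so a bare line number sorts before any {Line,Column} location on
the same line:
loc_key(Line) when is_integer(Line) -> {Line, 0};
loc_key({Line, Column}) -> {Line, Column}.

loc_lt(Loc1, Loc2) -> loc_key(Loc1) < loc_key(Loc2).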
|
|
|
|
OTP-10106
OTP-10107
|
|
There was a colon missing after the column number; it must be an error,
as it is not missing in list_errors/2.
|
|
If set, compile will call epp with a full location {1, 1} instead of 1,
thus making it keep the column numbers in the parsed AST.
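For illustration (using erl_scan directly rather than the compiler's
internal call), the start location decides the shape of the locations
in the result:
{ok, Toks1, _} = erl_scan:string("a.", 1),      %% locations are plain line numbers
{ok, Toks2, _} = erl_scan:string("a.", {1, 1}). %% locations are {Line,Column} tuples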
|
|
This commit adds a source option to compile:forms() that
sets the source value returned by module_info(compile).
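A usage sketch (Forms is assumed to be a list of abstract forms; the
file name is made up):
{ok, Module, Bin} = compile:forms(Forms, [{source, "src/original.erl"}]),
%% After the module is loaded, Module:module_info(compile) is expected
%% to include {source, "src/original.erl"}.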
|
|
|
|
* bjorn/compiler/coverage-and-minor-fixes/OTP-9982:
v3_life: Use common code for guards and bodies
v3_core: Don't put negative line numbers in annotations
v3_kernel: Dig out the line number only when generating a warning
v3_kernel: Clean up handling of guards
v3_kernel: Introduce is_in_guard/1
v3_kernel: Removed unreached clause for #k_bin_int{} in sub_size_var/1
v3_kernel: Remove unreached handling of #k_bin_int{} in arg_con/1
v3_codegen: Eliminate the special case of 'put' without destination
v3_kernel: Don't attempt to share identical literals
v3_kernel: Handle sequences in guards
v3_kernel: Remove clauses that are never executed in arg_val/1
v3_kernel.hrl: Remove unused record #k_string{}
v3_kernel.erl: Remove unused define of EXPENSIVE_BINARY_LIMIT
sys_core_fold: Refactor previous_ctx_to_binary/2 to cover it completely
sys_core_fold: Fix opt_guard_try/1
sys_core_fold: Simplify opt_bool_not() to cover it completely
sys_core_fold: Remove duplicate optimization
|
|
|
|
In Core Erlang and later passes, compiler-generated code can be
indicated in two different ways: by negative line numbers and by
a 'compiler_generated' annotation.
Simplify the code and improve coverage by turning negative line
numbers positive and adding a 'compiler_generated' annotation in
the v3_core pass. That means that Core Erlang and later passes
do not have to deal with negative line numbers.
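Roughly the idea, expressed as a hypothetical helper (v3_core's real
code differs):
normalize_anno(Line) when is_integer(Line), Line < 0 ->
    [-Line, compiler_generated];
normalize_anno(Anno) ->
    Anno.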
|