Age | Commit message | Author |
|
* bjorn/remove-parameterized-modules/OTP-10616:
  Remove support for parameterized modules
  xref_SUITE: Don't test parameterized modules
  shell_SUITE: Don't test parameterized modules
  erl_expand_records_SUITE: Don't test parameterized modules
  erl_eval: Don't test parameterized modules
|
|
|
|
|
|
* nox/rm-reverse-eta-conversion/OTP-10682:
  Don't use fun references in cprof_SUITE
  Make trace_local_SUITE work without the reverse eta conversion
  Remove the reverse eta-conversion from v3_kernel
|
|
* nox/promote-inline_list_funcs/OTP-10690:
  Raise a function_clause error with the right arguments when inlining
  Properly guard against badly-typed arguments when inlining
  Make inlined list functions fail with function_clause
  Document compiler option 'inline_list_funcs'
  Silence some wrong warnings triggered by inline_list_funcs
|
|
|
|
Local fun variables are disallowed in both Erlang and Core Erlang guards
but core_lint doesn't detect this kind of error, making the compilation
fail later in the BEAM assembly generation. A guard is added to only
allow #c_var{} terms where the name is an atom or an integer, which is
the type used by the inliner when introducing new variables.
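For illustration only (the function name below is made up; this is a
sketch of the check described above, not code from the patch):
    %% #c_var{} is the Core Erlang variable record (core_parse.hrl).
    %% Accept only variables whose name is an atom or an integer --
    %% the names the inliner uses for variables it introduces.
    is_inliner_var(#c_var{name=Name}) when is_atom(Name); is_integer(Name) ->
        true;
    is_inliner_var(_) ->
        false.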
|
|
The inlined lists functions raised an error that contained only the
list argument instead of all of the arguments they were given.
|
|
The inlining code for inline_list_funcs silenced the function_clause
error that should occur when calling lists:map(3.5, []).
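As a sketch of the intended behaviour (the module below is made up for
illustration; it is not part of the commit), the inlined call should
fail exactly like the ordinary lists:map/2 would:
    -module(bad_map).
    -export([t/0]).
    -compile(inline_list_funcs).
    %% Should raise error:function_clause with the arguments [3.5, []],
    %% not an error mentioning only the list.
    t() -> lists:map(3.5, []).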
|
|
The function_clause errors produced by inline_list_funcs should properly
be annotated with their function names to avoid v3_kernel turning them
into case_clause errors. See v3_kernel:translate_match_fail_1/4.
|
|
|
|
|
|
|
|
Expect modifications, additions and corrections.
There is a kludge in file_io_server and
erl_scan:continuation_location() that's not so pleasing.
|
|
Local function references should be handled directly as a make_fun
internal BIF call instead of creating an extra lambda function every
time they are used.
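For illustration (a made-up module, not part of the commit), a local
function reference is simply `fun f/A` used within its own module:
    -module(example).
    -export([doubles/1]).
    %% `fun double/1` is a local fun reference. It can be built with a
    %% single make_fun operation instead of wrapping the call in an
    %% extra anonymous fun such as fun(X) -> double(X) end.
    doubles(L) -> lists:map(fun double/1, L).
    double(X) -> 2 * X.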
|
|
The handling of bs_start_match2 was both too conservative and too
careless. It was too conservative in that it would not do the
optimization if there were copies of the match state in other
registers. It was careless in that it did not consider the
failure branch.
Reorganize the code and fix both these issues. Add a test case
to test that the failure branch is considered.
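For reference (the operands here are invented; the instruction format
is the same as in the listings further down in this log),
bs_start_match2 carries an explicit failure label as its first operand:
    {test,bs_start_match2,{f,5},2,[{x,0},0],{x,0}}.
Any reasoning about reusing the register that holds the match state
also has to take the code reachable through that failure label into
account.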
|
|
|
|
When determining whether the delayed creation of sub-binaries
optimization is applicable, this module sometimes tests whether the
register containing the match state is killed. That is actually a
stronger condition than necessary; since the register is initialized,
it suffices to test whether the register is unused.
|
|
Generate slightly smaller and faster code.
|
|
|
|
|
|
* maint:
Fix compiler crash for binary matching and a complicated guard
|
|
Commit c4375a62cfaabfd8de757f59714623ba1a8cb915 added a parallel
group, but incorrectly, so no test cases at all were run in
receive_SUITE.
|
|
The compiler would crash when attempting to compile a function
head that did binary matching and had a complex expression using
'andalso' and 'not'.
Noticed-by: José Valim
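A hedged sketch of the kind of code involved (this function is invented
for illustration; the actual code that triggered the crash is not shown
in this log):
    %% Binary matching in the head combined with a guard that uses
    %% both 'andalso' and 'not'.
    f(<<Sz:8,Data:Sz/binary>>, Flag) when not (Flag andalso Sz > 4) ->
        Data.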
|
|
Code like `lists:map(fun(X) -> X end, ?C10k), ok` triggers the following
warning:
no_file:none: Warning: a term is constructed, but never used
|
|
It should be beam_except_SUITE, since it tests the beam_except
module (introduced in 726f6e4c7afe8ce37b30eedbebe583e7b9bfc51b).
|
|
Running test cases in parallel will make the test suite run slightly
faster. Another reason for this change is that we want more testing
of the parallel test case support in common_test.
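For reference (a generic sketch with placeholder names, not the actual
suite code), a parallel group in a common_test suite looks roughly
like this:
    all() ->
        [{group,p}].
    groups() ->
        [{p,[parallel],[test_case_1,test_case_2,test_case_3]}].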
|
|
We were too conservative when handling a call when there were copies of
the match context in both x and y registers. Don't give up if there
are copies of the match context in y registers, as long as those
copies are killed by the code that follows the call.
|
|
Somewhat reduce the code bloat by eliminating special cases.
|
|
Somewhat reduce code bloat.
|
|
Eliminate some code bloat.
|
|
Rewrite the five binary creation instructions to a bs_init
instruction, in order to somewhat reduce code bloat.
|
|
We can remove some code bloat by handling the special instructions
as BIF instructions in the optimization passes. Also note that
bs_utf*_size was not handled by beam_utils:check_liveness/3
(meaning the conservative answer instead of the correct answer
would be returned).
|
|
Seven bs_put_* instructions can be combined into one generic bs_put
instruction to avoid some code bloat. That will also improve some
optimizations (such as beam_trim) that did not handle all bs_put*
variants.
|
|
Since we always want to remove unused labels directly after code
generation (whether we'll run the optimization passes or not),
we can simplify the code by doing it in beam_a.
|
|
Introduce the mandatory beam_a pass that will be run directly after code
generation, and the mandatory beam_z pass that will be run just before
beam_asm. Since these passes surround the optimizations, beam_a can
(for example) do instruction renaming to simplify the optimization
passes and beam_z can undo those renamings.
|
|
|
|
|
|
Don't throw the parse tree in the face of the user.
OTP-8707
|
|
beam_jump moves short code sequences ending in an instruction that causes
an exception to the end of the function, in the hope that a jump around
the moved block can be replaced with a fallthrough. Therefore, moving
a block that is entered via a fallthrough defeats the purpose of the
optimization.
Also add two more test cases for the beam_receive module to ensure that
all lines are still covered.
|
|
A test instruction with the same target label as a jump instruction
immediately following it was supposed to be removed, but it was kept
anyway. Fix that optimization, but also make sure that the test
instruction is kept if it may have side effects (such as a bit syntax
matching instruction).
While at it, make the code cleaner by breaking it up into two clauses,
and don't remove the jump instruction if it is redundant (removal of
redundant jump instructions is already handled in another place).
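For illustration (labels and operands invented; only the shape of the
code matters), the pattern in question is a test whose failure label is
the same as the target of the jump that follows it:
    {test,is_integer,{f,10},[{x,0}]}.
    {jump,{f,10}}.
Whether the test succeeds (falling through to the jump) or fails,
execution ends up at label 10, so the test can be removed -- unless it
has side effects, such as a bit syntax matching instruction that
advances the match position.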
|
|
When a matched-out variable is used as a size field in multiple clauses,
as in:
    foo(<<L:8,A:L>>) -> A;
    foo(<<L:8,A:L,B:8>>) -> {A,B}.
the match tree would branch out before the segment that used the
matched-out variable (in this example, the tree would branch out before
the matching of A:L). That happens because the pattern matching
compiler did not take variable substitutions into account when
grouping clauses that match the same value.
That is, the generated code would work similarly to this code:
    foo(<<L:8,T/binary>>) ->
        case T of
            <<A:L>> ->
                A;
            _ ->
                case T of
                    <<A:L,B:8>> -> %% A matched out again!
                        {A,B}
                end
        end.
We would like the matching to work more like:
    foo(<<L,A:L,T/binary>>) ->
        case T of
            <<>> -> A;
            <<B:8>> -> {A,B}
        end.
Fix the problem by taking the substitutions into account when grouping
clauses that match out the same value.
|
|
The bs_match_string instruction is used to speed up matching of
binary literals. For example, given this source code:
    foo1(<<1,2,3>>) -> ok.
The matching part of the code will look like:
    {test,bs_start_match2,{f,1},1,[{x,0},0],{x,0}}.
    {test,bs_match_string,{f,3},[{x,0},24,{string,[1,2,3]}]}.
    {test,bs_test_tail2,{f,3},[{x,0},0]}.
Nice. However, if we make a simple change to the source code:
    foo2(<<1,2,3>>) -> ok;
    foo2(<<>>) -> error.
the resulting matching code will look like (slightly simplified):
    {test,bs_start_match2,{f,4},1,[{x,0},0],{x,0}}.
    {test,bs_get_integer2,{f,7},1,[{x,0},{integer,8},1,Flags],{x,1}}.
    {test,is_eq_exact,{f,8},[{x,1},{integer,1}]}.
    {test,bs_match_string,{f,6},[{x,0},16,{string,[2,3]}]}.
    {test,bs_test_tail2,{f,6},[{x,0},0]}.
    {move,{atom,ok},{x,0}}.
    return.
    {label,6}.
    {bs_restore2,{x,0},{atom,start}}.
    {label,7}.
    {test,bs_test_tail2,{f,8},[{x,0},0]}.
That is, matching of the first byte is not combined into the
bs_match_string instruction that follows.
Fix this problem by allowing a bs_match_string instruction to be
used if all clauses will match either the same integer literal or
the empty binary.
|
|
In modules with huge functions containing many bs_match_string
instructions, we can speed up the compilation by combining
adjacent bs_match_string instructions in v3_codegen (as opposed
to in beam_block, where we used to do it).
For instance, on my computer the v3_codegen pass became more than
twice as fast when compiling the re_testoutput1_split_test module
in the STDLIB test suites.
|
|
The optimizer of boolean expressions can often reject optimizations
that it does not recognize as safe. For instance, if a boolean
expression was preceded by 'move' instructions that saved x registers
into y registers, it would almost certainly reject the optimization
because it required that the y register not be used in the code that
follows.
Fix this problem by allowing identical 'move' instructions that assign
to y registers at the beginning of the old and the optimized code.
While at it, correct the spelling of "preceding".
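For illustration (registers invented), the kind of prologue described is:
    {move,{x,0},{y,0}}.
    {move,{x,1},{y,1}}.
If the original code and the rewritten boolean code both start with the
same 'move' instructions into y registers, the optimization can now be
accepted instead of being rejected.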
|
|
The usage calculation only looked at the allocation in GC BIFs, not
at the source and destination registers. Also, if there is a failure
label, make sure that we test whether the register can be used there.
|
|
The liveness at the failure label should be ignored, because if
there is an exception, all x registers will be killed.
|
|
The code generator uses conservative liveness information. Therefore
the number of live registers in allocation instructions (such as
test_heap/2) may be too high. Use the actual liveness information
to lower the number of live registers if it is too high.
The main reason we want to do this is to enable more optimizations
that depend on liveness analysis, such as the beam_bool and beam_dead
passes.
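For illustration (the operands are made up), the second operand of
test_heap/2 is the number of live x registers:
    {test_heap,6,2}.   %% needs 6 heap words; assumes {x,0} and {x,1} are live
If the actual liveness information shows that only {x,0} holds a live
value at this point, the instruction can be lowered to {test_heap,6,1},
which gives passes such as beam_bool and beam_dead more freedom.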
|
|
Less conservative liveness analysis allows more optimizations
to be applied (such as the ones in beam_bool).
|
|
Add some comments to make the code generation for binary
construction somewhat clearer. Also get rid of the cg_bo_newregs/2
function that serves no useful purpose. (It was probably
intended to undo the effect of cg_live/2, but note that the
value passed to cg_bo_newregs/2 has not passed through cg_live/2.)
|