This should be slightly more efficient than converting to/from
lists for large sets.
|
|
* bjorn/compiler/use-lists-types:
beam_ssa_type: Use types from some 'lists' functions
|
|
This commit lets the compiler know about the return
type of some of the functions in the `lists` module.
Knowing the return types will allow the compiler to emit
fewer type test instructions, and also fewer instructions
for throwing `case_clause` or `badmatch` exceptions, thus
producing slightly faster and more compact code.
This change makes the `lists` module a part of the language, but it
could be argued that it already is because several functions
(e.g. `member/2` and `keymember/3`) are implemented as BIFs in the
runtime system. Therefore, a user cannot simply change the
`lists` module and expect everything to continue working as before.
The compiler will now know the return types for the following
functions:
all/2
any/2
keymember/3
member/2
prefix/2
suffix/2
dropwhile/2
duplicate/2
filter/2
flatten/1
map/2
mapfoldl/3
mapfoldr/3
partition/2
reverse/1
sort/1
splitwith/2
takewhile/2
unzip/1
usort/1
zip/2
zipwith/3
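As a hedged illustration (not part of the commit itself): since the
compiler now knows that `lists:map/2` always returns a list, the
guard test in code like the following can be dropped and no
`case_clause` code needs to be emitted:

    squares(Ns) ->
        case lists:map(fun(N) -> N * N end, Ns) of
            Squares when is_list(Squares) -> Squares
        end.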
|
|
`sys_core_fold` has an optimization of repeated pattern matching.
For example, when a record is matched the first time, the pattern
is remembered. When the same record is matched again, the matching
does not need to be repeated, but the variables bound in the first
matching can be re-used.
It turns out that there is a name capture problem when the old
inliner is used. The old inliner is used when explicitly inlining
certain functions, and by the compiler test suites for testing the
compiler.
The name capture problem could be eliminated by more aggressive
variable renaming when inlining.
But, fortunately, given the new SSA passes, this optimization is no
longer as essential as it used to be. Removing the optimization
turns out to be mostly beneficial, leading to a smaller stack
frame in many cases.
Also remove the optimizations of `element/2`, `is_record/3`, and
`setelement/3` from `sys_core_fold`. Because matched patterns are no
longer remembered, those optimizations can very rarely be applied any
more. (Those same optimizations are already done in `beam_ssa_type`.)
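A hedged sketch of the kind of code the removed optimization targeted
(the tuple shape is invented for illustration); the second match of
`Rec` repeats the pattern of the first one:

    get_name_and_age(Rec) ->
        Name = case Rec of
                   {person, N, _} -> N
               end,
        %% The same term is matched against the same pattern again;
        %% sys_core_fold used to reuse the bindings from the first
        %% match instead of repeating the matching.
        Age = case Rec of
                  {person, _, A} -> A
              end,
        {Name, Age}.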
|
|
For some reason, a `get_tuple_element` instruction was not deemed
suitable for local common subexpression elimination.
It turns out that enabling CSE for `get_tuple_element` is beneficial.
It will also be even more beneficial in a future commit where some
of the optimizations in `sys_core_fold` are removed.
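A hedged illustration (hypothetical code, not from the commit): both
calls to `element(1, Tuple)` below become the same `get_tuple_element`
instruction in SSA code, so local CSE can keep just one of them:

    sum_and_first(Tuple) when tuple_size(Tuple) >= 2 ->
        First = element(1, Tuple),
        Sum = element(1, Tuple) + element(2, Tuple),
        {First, Sum}.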
|
|
* john/compiler/more-validator-cuddling:
beam_validator: Refactor call argument validation
beam_validator: Refactor liveness/stack initialization checks
beam_validator: Refactor try/catch handling
beam_validator: Remember definitions on assignment
beam_validator: Refactor stack trimming
beam_validator: Track definitions of all terms
beam_validator: Remove special handling of map_get/is_map_key
beam_validator: Refactor select_tuple_arity
beam_validator: Treat select_val as a series of '=:='
beam_validator: Treat all bs_get instructions as extractions
beam_validator: Separate BIF/call types more clearly
beam_validator: Assert that no tuple elements are out of bounds
beam_validator: Get rid of the last uses of set_aliased_type
beam_validator: Minor cosmetic refactoring
|
|
Neither can be used for type subtraction, so the default BIF
handler suits them just fine.
|
|
While this is strictly only relevant for bs_get_binary2, we should
never build anything while matching a message, so it ought to be
safe to remove this last raw use of propagate_fragility.
|
|
Granted, it's replaced with a thin wrapper, but it'll simplify
migration to the new type format.
|
|
Part of the switch optimization done by `ssa_opt_sw` can be better
done in `beam_ssa_type`.
|
|
Refactor the optimization of `Bool =:= true` to make the code
somewhat simpler and to potentially apply the optimization
in more cases.
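A hedged example of code that the optimization targets: because
`Bool` below is known to be a boolean, the `=:= true` comparison is
redundant and the value can be switched on directly:

    classify(X) ->
        Bool = X > 0,
        case Bool =:= true of
            true -> positive;
            false -> zero_or_negative
        end.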
|
|
Always rewriting left-associative andalso/orelse to right-associative
will not change the code (except in very rare cases), but it will
make sure that the transformation is tested.
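For illustration (hypothetical source), a left-associative expression
such as this one is rewritten to its right-associative form, which
evaluates in exactly the same way:

    check(A, B, C) ->
        (A andalso B) andalso C.    %% becomes A andalso (B andalso C)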
|
|
The expansion of record field updates, when more than one field is
updated, but not a majority of the fields, will create a sequence of
calls to `erlang:setelement(Index, Value, Tuple)` where Tuple in the
first call is the original record tuple, and in the subsequent calls
Tuple is the result of the previous call. Furthermore, all Index
values are constant positive integers, and the first call to
`setelement` will have the greatest index. Thus the subsequent calls
do not actually need to test at run-time whether Tuple is a tuple,
nor whether the index is within the tuple bounds.
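A hedged example of the expansion (the record and field names are
invented for illustration):

    -record(r, {a, b, c, d, e}).

    update(R) ->
        R#r{b = 2, d = 4}.

    %% is expanded to roughly the following, greatest index first:
    %%
    %%     update(R) ->
    %%         T = erlang:setelement(5, R, 4),
    %%         erlang:setelement(3, T, 2).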
Since OTP R7, the `sys_core_dsetel` pass, run as the very last Core
Erlang pass, has optimized this sequence of `setelement` calls to use
a special destructive version of `setelement` (called
`set_tuple_element`) for all but the very first `setelement` in the
sequence.
It turns out that the presence of the `set_tuple_element` in SSA code
is awkward and can prevent or complicate type analysis and aggressive
optimizations.
Therefore, this commit removes the `sys_core_dsetel` pass and
reimplements it for SSA code. The optimization will be done in the
`beam_ssa_pre_codegen` pass (that is, just before code generation and
after running all other SSA code optimization passes).
In most cases, the resulting BEAM code is identical to previous
code. For a few modules, the BEAM code is actually slightly better,
with smaller stack frames.
|
|
* john/compiler/cuddle-validator:
compiler: Allow disabling SSA passes in -compile() directives
beam_validator: Infer types from result of all type test BIFs
beam_validator: Infer BIF argument types
beam_validator: type_test in BIFs that only fail on invalid types
beam_validator: Simplify complex branches
beam_validator: Explain why verify_get_map wipes dst registers
beam_validator: fconv means we have a number
beam_validator: Simplify update_ne/eq_types
beam_validator: Remove pointless fragility propagation
beam_validator: Misc cosmetic refactoring
|
|
This fixes compiling against older OTP versions with the +r?VSN
options, which often expand to disabling certain SSA passes to
avoid new instructions.
|
|
The compiler will usually optimize these into test instructions,
but they'll nevertheless pop up in some cases.
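A hedged example (hypothetical code) of a type test BIF that survives
as a plain BIF call because its result is used as a value, not only as
a branch condition:

    describe(X) ->
        IsAtom = is_atom(X),
        case IsAtom of
            true -> {IsAtom, atom_to_list(X)};   %% X known to be an atom here
            false -> {IsAtom, X}
        end.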
|
|
If we know that a BIF will badarg unless its arguments have certain
types, we can infer that we have at least those types on success.
Note that we can't do this in the general case as the BIF could
fail for reasons other than bad arguments.
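A hedged example (hypothetical code): `length/1` fails with `badarg`
unless its argument is a list, so once the call has succeeded the
validator may assume that `L` is a list:

    count(L) ->
        N = length(L),       %% did not fail, so L must be a list
        {is_list(L), N}.     %% is_list/1 is now known to return true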
|
|
The sole purpose of inline_SUITE:coverage/1 is to ensure
that all lines are covered in sys_core_inline. Do that
in a cheaper way.
|
|
This test case does not test anything unique that is not
tested by other test cases.
|
|
This makes sure that the SSA optimizations are not essential and
may help to cover more code in beam_ssa_pre_codegen and
beam_ssa_codegen.
|
|
eb0b8da6e816 started to use a binary instead of a string in
bs_match_string instructions.
Remove a clause that attempts to handle the old form of
bs_match_string from old .S files. This is pointless, because an old
.S file is likely to contain a bs_context_to_binary instruction as
well. The bs_context_to_binary instruction is no longer recognized by
`beam_validator`, so those old .S files will not work anyway.
|
|
With the recent update to cover to use the `counters` module,
there is no longer any reason to limit the number of parallel
processes when running `cover`.
|
|
* bjorn/compiler/beam_ssa_opt:
Make sure that beam_ssa_opt optimizes all functions
|
|
* maint:
Add persistent_term:get(Key, DefaultValue)
Make dialyzer faster for left-associative andalso/orelse expressions
|
|
The function `get_call_order_po/2` did not always include all
functions in a module. The method it used was to first find
all leaf functions (functions not calling any other function),
and then find all other functions that called any of the leaf
functions either directly or indirectly.
Functions that did not call a leaf function (directly or indirectly)
would not be included in the list of functions to optimize.
Reimplement `get_call_order_po/2` to use the standard algorithm
for constructing a list of nodes in reverse postorder and then
reverse that list.
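A minimal sketch of the standard construction (hypothetical helper
names; the call graph is assumed to be a map from each function to the
list of functions it calls). A depth-first search that adds a node
after all of its callees have been visited produces a reverse
postorder directly; the pass then reverses that list:

    reverse_postorder(Root, CallGraph) ->
        {Order, _Seen} = dfs(Root, CallGraph, {[], sets:new()}),
        Order.

    dfs(F, CallGraph, {Acc, Seen}) ->
        case sets:is_element(F, Seen) of
            true ->
                {Acc, Seen};
            false ->
                Callees = maps:get(F, CallGraph, []),
                {Acc1, Seen1} =
                    lists:foldl(fun(Callee, State) ->
                                        dfs(Callee, CallGraph, State)
                                end,
                                {Acc, sets:add_element(F, Seen)},
                                Callees),
                %% F is added after all of its callees have been visited.
                {[F | Acc1], Seen1}
        end.

To avoid the original problem, the search would be started from every
function in the module, not from a single root.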
|
|
* bjorn/compiler/improve-move-elimination:
beam_jump: Improve elimination of redundant moves
|
|
https://bugs.erlang.org/browse/ERL-851
|
|
This change saves about a minute when running the
cover-compiled compiler tests.
|
|
beam_ssa_type: Propagate the 'none' type from calls
|
|
Consider this pseudo code:
    f(...) ->
        Val = case Expr of
                  ... ->
                      ... ;
                  ... ->
                      ... ;
                  ... ->
                      my_abort(something_went_wrong)
              end,
        %% Here follows code that uses Val.
        .
        .
        .

    my_abort(Reason) ->
        throw({error,Reason}).
The first two clauses in the case will probably provide some
information about the type of the variable `Val`, information
that would be useful for optimizing the code that follows the
case.
However, the third clause would ruin everything. The call
to `my_abort/1` could return anything, and thus `Val` could
also have any type.
294d66a295f6 introduced module-level type analysis, which will in
general keep track of the return type of a local function
call. However, it does not improve the optimization for this specific
function. When a function never returns, that is, when its type is
`none`, it does not propagate the `none` type, but instead pretends
that the return type is `any`.
This commit extends the handling of functions that don't return to
properly handle the `none` type. Any instructions that directly
follows the function that does not return will be discarded, and the
call will be rewritten to a tail-recursive call.
For this specific example, it means that the type for `Val` deduced
from the first two clauses will be retained and can be used for
optimizing the code after the case.
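After the optimization, the third clause behaves as if it had been
written as follows (a hedged sketch; the rewrite actually happens on
the SSA code, not on the Erlang source):

    ... ->
        my_abort(something_went_wrong)   %% tail call; nothing follows it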
|
|
|