Age | Commit message | Author |
|
Prior to 294d66a295f6c2101fe3c2da630979ad4e736c08 there wasn't much
point to keeping track of tuple element types; they were only known
when we had inserted or extracted values from a tuple, and in
neither case was it likely that we'd extract the same values again.
It makes a lot more sense to do so now that type optimizations are
applied across functions; if we return a tuple it's very likely
that its elements will be extracted soon after, and knowing their
types lets us eliminate more type checks.
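A sketch of the kind of code that benefits (the function and the
literal below are made up for illustration):

    make() -> {ok, 42}.

    use() ->
        {ok, N} = make(),
        %% With element types tracked across the call, N is already
        %% known to be an integer here, so the type check before the
        %% addition can be eliminated.
        N + 1.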
Co-authored-by: Björn Gustavsson <[email protected]>
|
|
|
|
We haven't seen any related bugs so far, but all instructions that
place one term inside another ought to reject fragile inputs. It
can't hurt to check.
|
|
Our current type management (based on set_type_reg etc) is rather
error-prone, often requiring special cases on a per-instruction
basis. This commit replaces nearly all ad-hoc mechanisms with more
general abstractions:
* assign - Moves a term.
* create_term - Creates a new term.
* extract_term - Extracts a term from another, maintaining
fragility as required.
* update_type - Adds more type information about a register.
* type_test - Helper function for type tests that subtracts on
failure and meets on success (see the sketch below).
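The intent behind type_test can be illustrated with a toy lattice
(a minimal sketch only, not the actual beam_validator code):

    %% 'term' is the top of the toy lattice and 'none' its bottom;
    %% meet/2 and subtract/2 are simplified stand-ins.
    meet(term, T) -> T;
    meet(T, term) -> T;
    meet(T, T) -> T;
    meet(_, _) -> none.

    subtract(T, T) -> none;
    subtract(Current, _) -> Current.

    %% On the success path the known type is met with the tested
    %% type; on the failure path the tested type is subtracted.
    type_test(Current, Tested) ->
        {meet(Current, Tested), subtract(Current, Tested)}.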
|
|
The fix in f9ea85611faca82c7494449ddb8bcb1ef1d194cb didn't consider
that the tested register could be aliased.
|
|
|
|
|
|
|
|
Needed because of the optimizations in 48f20bd589fa69.
https://bugs.erlang.org/browse/ERL-832
|
|
|
|
As beam_ssa_type is about to get smarter, beam_validator must
be smarter too.
|
|
Be more careful when updating types so that fragility is not lost.
|
|
Minor cleanups and bug fixes of the compiler
|
|
Rewrite BSM optimizations in the new SSA-based intermediate format
|
|
This has been superseded by bs_get_tail/3. Note that it is NOT
removed from the emulator or beam_disasm, as old modules are still
legal.
|
|
This commit improves the bit-syntax match optimization pass,
leveraging the new SSA intermediate format to perform much more
aggressive optimizations. Some highlights:
* Watch contexts can be reused even after being passed to a
function or being used in a try block.
* Sub-binaries are no longer eagerly extracted, making it far
easier to keep "happy paths" free from binary creation.
* Trivial wrapper functions no longer disable context reuse.
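A sketch of the kind of code that now benefits (illustrative only;
the names are made up):

    %% count_ones/1 is a trivial wrapper, so it no longer disables
    %% context reuse, and since sub-binaries are not eagerly
    %% extracted the recursive helper can keep matching on the same
    %% match context without building a new binary for Rest.
    count_ones(Bin) -> count_ones(Bin, 0).

    count_ones(<<1, Rest/binary>>, N) -> count_ones(Rest, N + 1);
    count_ones(<<_, Rest/binary>>, N) -> count_ones(Rest, N);
    count_ones(<<>>, N) -> N.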
|
|
Disallow a literal map source for get_map_elements. There is currently
runtime support for get_map with a literal map source, but by
forbidding it in OTP 22, the runtime support could be removed in a
future release (perhaps OTP 24).
Also verify that the source arguments for get_list, get_hd, get_tl,
and get_tuple_element are not literals. Literals are not supported for
those instructions in the runtime system; verifying it in
beam_validator is a convenience so that this kind of bug will
be detected already during compilation.
|
|
The following code broke because aliases weren't tracked for hd/1:
    bug(Bool) ->
        Bug = remote:call(),
        if
            Bool -> %% Branch of some kind.
                _ = hd(Bug),
                remote:call(),
                hd(Bug)
        end.
Related to 1f221b27f1336e747f7409692f260055dd3ddf79
|
|
When optimizations get more powerful, beam_validator
must keep up.
|
|
|
|
|
|
Sometimes when building a tuple, there is no way to avoid an
extra `move` instruction. Consider this code:
    make_tuple(A) -> {ok,A}.
The corresponding BEAM code looks like this:
    {test_heap,3,1}.
    {put_tuple,2,{x,1}}.
    {put,{atom,ok}}.
    {put,{x,0}}.
    {move,{x,1},{x,0}}.
    return.
To avoid overwriting the source register `{x,0}`, a `move`
instruction is necessary.
The problem doesn't exist when building a list:
    %% build_list(A) -> [A].
    {test_heap,2,1}.
    {put_list,{x,0},nil,{x,0}}.
    return.
Introduce a new `put_tuple2` instruction that builds a tuple in a
single instruction, so that the `move` instruction can be eliminated:
    %% make_tuple(A) -> {ok,A}.
    {test_heap,3,1}.
    {put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
    return.
Note that the BEAM loader already combines `put_tuple` and `put`
instructions into an internal instruction similar to `put_tuple2`.
Therefore the introduction of the new instruction will not speed up
execution of tuple building itself, but it will be less work for
the loader to load the new instruction.
|
|
* bjorn/compiler/ssa:
Travis CI: Run the SSA linter in the Linux64SmokeTest build
Remove retired compiler passes
Introduce a new SSA-based intermediate format
hipe_beam_to_icode: Correct translation of get_map_elements
beam_dead: Remove shortcut of binary matching instruction
beam_bs: Remove optimizations that are easier done on SSA format
Don't run unsafe compiler passes
Simplify optimizations by introducing is_nil late
beam_utils: Make is_tagged_tuple a pure test
beam_except: Enhance recognition of function_clause exceptions
beam_validator: Infer the types of copies in a smarter way
beam_validator: Improve merge of cons and literal list
beam_validator: Strengthen validation of func_info
beam_validator: Allow get_tuple_element before dsetelement
beam_validator: Don't transfer state to labels that can't be reached
beam_validator: Improve type analysis for tuples
beam_validator: Be more careful when updating try/catch state
beam_trim: Handle an empty list of instructions
v3_core: Number argument variables in ascending order
Teach binary instructions to use Y registers as destination
OTP-14894
|
|
* maint:
map_SUITE: Test is_map_key/2 followed by a map update
beam_validator: Infer the type of the map argument for is_map_key/2
map_SUITE: Cover map_get optimizations in beam_dead
|
|
Make sure that beam_validator considers a call to is_map_key/2
followed by an update of the same map without an is_map/1 test
safe. (This situation will probably not be encountered when
using the compiler in OTP 21, but better safe than sorry.)
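For example (illustrative code only):

    update_if_present(K, M) ->
        case is_map_key(K, M) of
            true -> M#{K := updated};
            false -> M
        end.

In the true branch M must be a map (is_map_key/2 would have raised
an exception otherwise), so the update is safe without a separate
is_map/1 test.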
|
|
Smarter code generation means that beam_validator must
be smarter too. In the following example, beam_validator
must be able to infer that y0 refers to a map:
    move x0 y0
    test is_map L1 x0
    %% Here the type for y0 must be 'map'.
|
|
|
|
The func_info instruction does not expect a stack frame. There will
be an assertion failure in the debug-compiled runtime system.
|
|
|
|
If we were to transfer state to labels that can't be reached,
that state could taint other labels.
|
|
Since the compiler will start optimizing more aggressively, beam_validator
must keep up and improve its recognition of tuples and maps.
|
|
The new code generator will more aggressively reuse registers,
so we must be more careful about updating the state for try/catch.
In particular, an "empty" try/catch that can't throw an
exception must not update the try/catch state.
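For example (illustrative only), the body of this try cannot raise
an exception, so validating it must not change the try/catch state:

    cannot_fail(X) ->
        try
            X
        catch
            _:_ -> X
        end.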
|
|
* maint:
Eliminate double computation of next var
beam_validator: Fix false diagnostic for a receive nested in a try
|
|
When nesting a receive in a try/catch, there could be a false
diagnostic that a fragile term is used.
https://bugs.erlang.org/browse/ERL-684
|
|
I did not find any legitimate use of "can not"; however, I skipped
changing e.g. RFCs archived in the source tree.
|
|
Code such as the following:
    Val = map_get(a, Map),
    Map#{a:=z}              %Could be any map update
would incorrectly cause an internal consistency check failure:
    Internal consistency check failed - please report this bug.
    Instruction: {put_map_exact,{f,0},{x,0},{x,0},1,{list,[{atom,a},{atom,z}]}}
    Error: {bad_type,{needed,map},{actual,term}}:
Update beam_validator so that it understands that the second
argument for map_get/2 is a map.
|
|
|
|
When an exception is handled, the stack will be scanned. Therefore
all Y registers must be initialized.
|
|
Help us find more compiler bugs.
|
|
|
|
|
|
1ee21858db7e strengthened validation of GC instructions, but
forgot the following instructions:
    bs_start_match2/5
    bs_get_binary2/7
    bs_get_float2/7
    bs_get_integer2/7
    bs_get_utf8/5
    bs_get_utf16/5
    bs_get_utf32/5
    bs_skip_utf8/4
    bs_skip_utf16/4
    bs_skip_utf32/4
|
|
Waiting messages for a process may be stored in a queue
outside of any heap or heap fragment belonging to the process.
This is an optimization added in a recent major release to
avoid garbage collecting messages again and again when there
is a long message queue.
Until such a message has been matched and accepted by
the remove_message/0 instruction, the message must not be
included in the root set for a garbage collection, as that
would corrupt the message. The loop_rec/2 instruction explicitly
turns off garbage collection of the process as long as
messages are being matched.
However, if the compiler were to put references to a message
outside of the heap in a Y register (on the stack) and there
happened to be a GC when the process had been scheduled out,
the message would be corrupted and the runtime system would
crash sooner or later.
To ensure that doesn't happen, teach beam_validator to check
for references on the stack to messages outside of the heap.
|
|
Every catch or try/catch must use a lower Y register number than any
enclosing catch or try/catch. That will ensure that when the stack
is scanned after an exception occurs, the innermost try/catch tag is
found first.
|
|
* maint:
Check that the stack is initialized when an exception may occur
|
|
Strengthen beam_validator to check that the stack is initialized
when an instruction with an {f,0} operand is executed.
For example, the following code sequence:
    {allocate,0,1}.
    {bif,element,{f,0},[{integer,1},{x,0}],{x,0}}.
should not be accepted because the stack may be scanned if
element/2 fails. That could cause a crash or other undefined
behavior if garbage on the stack looks like a catch tag.
|
|
Instructions that produce more than one result complicate
optimizations. get_list/3 is one of two instructions that
produce multiple results (get_map_elements/3 is the other).
Introduce the get_hd/2 and get_tl/2 instructions
that return the head and tail of a cons cell, respectively,
and use them internally in all optimization passes.
For efficiency, we still want to use get_list/3 if both
head and tail are used, so we will translate matching pairs
of get_hd and get_tl back to get_list instructions.
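A sketch of the effect at the source level (illustrative names
only): a function such as

    first(L) -> hd(L).

only needs a get_hd instruction, whereas

    split([H|T]) -> {H,T}.

uses both head and tail, so its matched get_hd/get_tl pair is
translated back into a single get_list instruction.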
|
|
Do local common subexpression elimination (CSE)
|
|
Make sure that there is the correct number of put/1 instructions
following put_tuple/2. Also make it illegal to reference the
register for the tuple being built in a put/1 instruction.
That is, beam_validator will now issue a diagnostic for the
following code:
    {put_tuple,1,{x,0}}.
    {put,{x,0}}.
|
|
Consider the following function:
    function({function,Name,Arity,CLabel,Is0}, Lc0) ->
        try
            %% Optimize the code for the function.
        catch
            Class:Error:Stack ->
                io:format("Function: ~w/~w\n", [Name,Arity]),
                erlang:raise(Class, Error, Stack)
        end.
The stacktrace is retrieved, but it is only used in the call
to erlang:raise/3. There is no need to build a stacktrace
in this function. We can avoid the building if we introduce
an instruction called raw_raise/3 that works exactly like
the erlang:raise/3 BIF except that its third argument must
be a raw stacktrace.
|