|
When a generator in a list comprehension was given a term other
than a list, the exception could point to the wrong line. Here is
an example:
bad_generator() ->
    [I || %% This line would be pointed out.
          I <- not_a_list].
https://bugs.erlang.org/browse/ERL-572
|
|
A literal negative size in binary construction would cause a crash.
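A minimal sketch of how such a size can arise (hypothetical
example, not taken from the original report; it assumes constant
propagation turns the bound variable into a literal):
bad(Bin) ->
    Sz = -1,
    <<Bin:Sz/binary>>.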
|
|
The missing support for renumbering labels in recv_mark
and recv_set did not seem to cause any problems, probably because
the instructions are introduced late and their labels keep
their numbers. But there will definitely be a problem if the
recv_mark and recv_set instructions are ever introduced much earlier.
|
|
For unclear reasons, v3_kernel attempts to guarantee that #k_try{}
always has at least one return value, even if it will never be
used. I said "attempts", because the handler block that is executed
when an exception is caught does not have the same guarantee. That
means that if an exception is thrown, the return value will not
actually be set.
In practice, however, this is not a problem for the existing code
generator (v3_codegen). The generated code will still be safe.
If we are to rewrite the code generator to generate an SSA-based
intermediate format, this inconsistency *will* cause problems
when creating phi nodes.
While at it, also remove an unnecessary creation of new variables
in the generation of #k_try_enter{}.
|
|
|
|
* hasse/dialyzer/extra-range/OTP-14970:
ssl: Correct some specs
os_mon: Correct a spec
Fix broken spec in beam_asm
Dialyzer should not throw away spec information because of overspec
|
|
Refactor and fix minor bugs in beam_type
|
|
Fix beam_utils bugs that could cause problems in the future
|
|
Make sure to kill all dependencies when a register is killed. For example,
in the following code, when the type information for {x,0} is killed in
the last instruction, there will still be type information for {x,1} referring
to {x,0}:
{get_tuple_element,{x,0},0,{x,1}}.
{test,is_eq_exact,{f,5},[{x,1},{atom,tag}]}.
{get_tuple_element,{x,0},1,{x,2}}.
{get_tuple_element,{x,0},2,{x,0}}.
This does not seem to have caused any problems in the past, but it may
cause problems in the future with a register allocator that reuses
registers more aggressively.
|
|
|
|
The function tdb_update/2 is problematic. It does not distinguish
between assigning a new value to a register and updating information
about a register that is used as a source in a test instruction.
That was not a problem in practice when there were very few types,
but bugs started to be noticed as more types were added. (For example,
when a register was overwritten with a new value, the type for the
old value stored in the same register could linger in some cases.)
Introduce separate functions tdb_store/3 and tdb_meet/3 for assigning
a new value to a register and for updating type information for a
register referenced as a source, respectively. Also strengthen the
verification of the types that get stored into the type database.
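As an illustration of the intended distinction, here is a minimal
sketch, assuming a map from registers to types (simplified names and
representation; this is not the actual beam_type code):
%% The register is overwritten: discard any previously known type.
tdb_store(Reg, Type, Db) ->
    Db#{Reg => Type}.
%% The register is used as a source in a test: the new information
%% can only narrow what is already known about the register.
tdb_meet(Reg, Type, Db) ->
    Old = maps:get(Reg, Db, any),
    Db#{Reg => meet(Old, Type)}.
meet(any, T) -> T;
meet(T, any) -> T;
meet(T, T) -> T;
meet(_, _) -> none.    %% contradictory information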
|
|
In the following code:
{get_tuple_element,{x,0},0,{x,1}}.
{put_tuple,2,{x,1}}.
{put,{atom,badmap}}.
{put,{x,0}}.
{move,{x,1},{x,0}}.
beam_block would move the get_tuple_element/3 instruction and eliminate the
move/2 instruction:
{put_tuple,2,{x,1}}.
{put,{atom,badmap}}.
{put,{x,0}}.
{get_tuple_element,{x,0},0,{x,0}}.
That is not correct, since the result of the tuple building in {x,1} is
now ignored.
|
|
live_opt_block/4 could overestimate the number of live
registers for a GC BIF and trigger an assertion. This does not
seem to be a problem when generating code using v3_codegen,
but only when using a new experimental code generator; therefore
there is no need to include this correction in maint.
|
|
is_killed/3 and is_killed_at/3 could return 'true' even if the
register was referenced by an allocation instruction. Somehow, that
does not seem to have caused any problems yet.
|
|
Every catch or try/catch must use a lower Y register number than any
enclosing catch or try/catch. That will ensure that when the stack
is scanned when an exception occurs, the innermost try/catch tag is
found first.
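For example, in the following (hypothetical) function:
nested(F) ->
    try
        try F() catch error:_ -> inner end
    catch
        throw:_ -> outer
    end.
the tag for the outer try/catch could be stored in {y,1} and the tag
for the inner one in {y,0}; since lower Y registers are closer to the
top of the stack, a stack scan finds the inner tag first.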
|
|
|
|
1a029efd1ad47f started to run the beam_block pass a second time,
but it did not attempt to combine adjacent blocks.
Combining adjacent blocks leads to many more opportunities for
optimizations.
After diffing the generated code, it turns out that there is no
benefit in having beam_split split out line instructions
from blocks. It seems that the only reason it was done was to
slightly simplify the implementation of the no_line_info option
in beam_clean.
|
|
As a preparation for combining blocks before running beam_block
for the second time, disable CSE for floating point operations
because applying CSE to them would generate invalid code.
|
|
* maint:
Check that the stack is initialized when an exception may occur
|
|
The more aggressive optimizations of 'allocate_zero' introduced
in cb6fc15c35c7e could produce unsafe code such as the following:
{allocate,0,1}.
{bif,element,{f,0},[{integer,1},{x,0}],{x,0}}.
The code is not safe because if element/2 fails, the runtime
system may scan the stack and find garbage that looks like a
catch tag, and would most probably crash.
Fix the problem by making beam_utils:is_killed/3 more conservative
when asked whether a Y register will be killed.
Also fix an unsafe move upwards of an allocation instruction
in beam_block.
|
|
Strengthen beam_validator to check that the stack is initialized
when an instruction with an {f,0} operand is executed.
For example, the following code sequence:
{allocate,0,1}.
{bif,element,{f,0},[{integer,1},{x,0}],{x,0}}.
should not be accepted because the stack may be scanned if
element/2 fails. That could cause a crash or other undefined
behavior if garbage on the stack looks like a catch tag.
|
|
Eliminate get_list/3 internally in the compiler
|
|
Fix incorrect handling of floating point instructions
|
|
* maint:
Fix incorrect type inference of integer ranges
Conflicts:
lib/compiler/src/beam_type.erl
|
|
1a029efd1ad47f started to run the beam_block pass a second time.
Since it is run after introduction of the optimized floating point
instructions, it must handle those instructions correctly.
In particular, it must be careful when hoisting allocation
instructions. For example, the following code:
{test_heap,{alloc,[{words,0},{floats,1}]},5}.
.
.
.
{fmove,{fr,2},{x,0}}.
{allocate_zero,1,4}.
must not be rewritten to:
{test_heap,{alloc,[{words,0},{floats,1}]},5}.
.
.
.
{allocate_zero,1,4}.
{fmove,{fr,2},{x,0}}.
because beam_validator will not consider it safe. (The code may
actually be safe depending on what the code between the two allocation
instructions does.)
https://bugs.erlang.org/browse/ERL-555
|
|
|
|
Instructions that produce more than one result complicate
optimizations. get_list/3 is one of two instructions that
produce multiple results (get_map_elements/3 is the other).
Introduce the get_hd/2 and get_tl/2 instructions
that return the head and tail of a cons cell, respectively,
and use them internally in all optimization passes.
For efficiency, we still want to use get_list/3 if both
head and tail are used, so we will translate matching pairs
of get_hd and get_tl back to get_list instructions.
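A minimal illustration, assuming typical code generation
(hypothetical function names):
hd_only([H|_]) -> H.       %% only the head is used: get_hd
tl_only([_|T]) -> T.       %% only the tail is used: get_tl
both([H|T]) -> {H,T}.      %% matching get_hd/get_tl pair: get_list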
|
|
Numbers that clearly are not smalls can be encoded as
literals. Conservatively, we assume that integers whose
absolute value is greater than 1 bsl 128 are bignums and
that they can be encoded as literals.
Literals are slightly easier for the loader to handle than
huge integers.
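For example (illustrative only):
big() -> 1 bsl 200.    %% |value| > 1 bsl 128: assumed to be a bignum,
                       %% so it can be encoded as a literal
small() -> 42.         %% fits in a small: encoded directly in the
                       %% instruction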
|
|
Do local common sub expression elimination (CSE)
|
|
Optimize away unnecessary test_unit instructions that verify that
binaries are byte-aligned. In a tight loop, eliminating an
instruction can give a small but measurable improvement in
execution time.
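For example, in a loop such as this (hypothetical):
count(<<_,T/binary>>, N) -> count(T, N + 1);
count(<<>>, N) -> N.
the tail T is always byte-aligned after a whole byte has been matched
out, so a test_unit instruction verifying the alignment of T is
redundant.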
|
|
Separate the simplification of instructions from the updating of the
type database.
|
|
Extend an existing optimization in beam_dead to avoid
creating a match context when matching an empty binary.
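For illustration (hypothetical example):
is_empty(<<>>) -> true;
is_empty(Bin) when is_binary(Bin) -> false.
With the extended optimization, the first clause can be handled
without setting up a match context (for example, by comparing
against the empty binary literal).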
|
|
Eliminate repeated evaluation of guard BIFs and building of cons cells
in blocks. This optimization is applicable in more places than might be
expected, because code generation for binaries and records can generate
common sub expressions not visible in the original source code.
For example, consider this function:
make_binary(Term) ->
Bin = term_to_binary(Term),
Size = byte_size(Bin),
<<Size:32,Bin/binary>>.
The compiler inserts a call to byte_size/1 to calculate the size of
the binary being built:
{function, make_binary, 1, 2}.
{label,1}.
{line,...}.
{func_info,{atom,t},{atom,make_binary},1}.
{label,2}.
{allocate,0,1}.
{line,...}.
{call_ext,1,{extfunc,erlang,term_to_binary,1}}.
{line,...}.
{gc_bif,byte_size,{f,0},1,[{x,0}],{x,1}}. %Present in original code.
{line,...}.
{gc_bif,byte_size,{f,0},2,[{x,0}],{x,2}}. %Inserted by compiler.
{bs_add,{f,0},[{x,2},{integer,4},1],{x,2}}.
{bs_init2,{f,0},{x,2},0,2,{field_flags,[]},{x,2}}.
{bs_put_integer,{f,0},{integer,32},1,{field_flags,[unsigned,big]},{x,1}}.
{bs_put_binary,{f,0},{atom,all},8,{field_flags,[unsigned,big]},{x,0}}.
{move,{x,2},{x,0}}.
{deallocate,0}.
return.
Common sub expression elimination (CSE) eliminates the second call to
byte_size/1:
{function, make_binary, 1, 2}.
{label,1}.
{line,...}.
{func_info,{atom,t},{atom,make_binary},1}.
{label,2}.
{allocate,0,1}.
{line,...}.
{call_ext,1,{extfunc,erlang,term_to_binary,1}}.
{line,...}.
{gc_bif,byte_size,{f,0},1,[{x,0}],{x,1}}.
{move,{x,1},{x,2}}.
{bs_add,{f,0},[{x,2},{integer,4},1],{x,2}}.
{bs_init2,{f,0},{x,2},0,2,{field_flags,[]},{x,2}}.
{bs_put_integer,{f,0},{integer,32},1,{field_flags,[unsigned,big]},{x,1}}.
{bs_put_binary,{f,0},{atom,all},8,{field_flags,[unsigned,big]},{x,0}}.
{move,{x,2},{x,0}}.
{deallocate,0}.
return.
Note: A possible future optimization would be to include binary
construction instructions in blocks. If that is done, the
{move,{x,1},{x,2}} instruction could also be eliminated.
|
|
The following sequence in a block:
{move,{x,1},{x,2}}.
{move,{x,2},{x,2}}.
would be incorrectly rewritten to:
{move,{x,2},{x,2}}.
(Which in turn would be optimized away a little bit later.)
|
|
When attempting to eliminate the move/2 instruction in the following
code:
{bif,self,{f,0},[],{x,0}}.
{move,{x,0},{x,1}}.
.
.
.
{put_tuple,2,{x,1}}.
{put,{atom,ok}}.
{put,{x,0}}.
beam_block would produce the following unsafe code:
{bif,self,{f,0},[],{x,1}}.
.
.
.
{put_tuple,2,{x,1}}.
{put,{atom,ok}}.
{put,{x,1}}.
It is unsafe because the tuple is self-referential.
The following code:
{put_list,{y,6},nil,{x,4}}.
{move,{x,4},{x,5}}.
{put_list,{y,1},{x,5},{x,5}}.
.
.
.
{put_tuple,2,{x,6}}.
{put,{x,4}}.
{put,{x,5}}.
would be incorrectly transformed to:
{put_list,{y,6},nil,{x,5}}.
{put_list,{y,1},{x,5},{x,5}}.
.
.
.
{put_tuple,2,{x,6}}.
{put,{x,5}}.
{put,{x,5}}.
(Both elements in the built tuple get the same value.)
|
|
Make sure that there is the correct number of put/1 instructions
following put_tuple/2. Also make it illegal to reference the
register for the tuple being built in a put/1 instruction.
That is, beam_validator will now issue a diagnostic for the
following code:
{put_tuple,1,{x,0}}.
{put,{x,0}}.
|
|
Consider the following function:
function({function,Name,Arity,CLabel,Is0}, Lc0) ->
try
%% Optimize the code for the function.
catch
Class:Error:Stack ->
io:format("Function: ~w/~w\n", [Name,Arity]),
erlang:raise(Class, Error, Stack)
end.
The stacktrace is retrieved, but it is only used in the call
to erlang:raise/3. There is no need to build a stacktrace
in this function. We can avoid the building if we introduce
an instruction called raw_raise/3 that works exactly like
the erlang:raise/3 BIF except that its third argument must
be a raw stacktrace.
|
|
Argument order can prevent the delayed sub binary creation.
Here is an example directly from the Efficiency Guide:
non_opt_eq([H|T1], <<H,T2/binary>>) ->
non_opt_eq(T1, T2);
non_opt_eq([_|_], <<_,_/binary>>) ->
false;
non_opt_eq([], <<>>) ->
true.
When compiling with the bin_opt_info option, there will be a
suggestion to change the argument order.
It turns out that sys_core_bsm can itself change the order: not the
order of the arguments themselves, but the order in which the
arguments are matched. Here is how it can be rewritten in
pseudo Core Erlang code:
non_opt_eq(Arg1, Arg2) ->
case < Arg2,Arg1 > of
< <<H1,T2/binary>>, [H2|T1] > when H1 =:= H2 ->
non_opt_eq(T1, T2);
< <<_,T2/binary>>, [_|T1] > ->
false;
< <<>>, [] > ->
true
end.
When rewritten like this, the bs_start_match2 instruction will be
the first instruction in the function and it will be possible to
store the match context in the same register as the binary
({x,1} in this case) and to delay the creation of sub binaries.
The switching of matching order also enables many other simplifications
in sys_core_bsm, since there is no longer any need to pass the position
of the pattern as an argument.
We will update the Efficiency Guide in a separate branch before the
release of OTP 21.
|
|
Run beam_block a second time
|
|
Clean up and improve sys_core_fold optimizations
|
|
Running beam_block again after the other optimizations have run will
give it more opportunities for optimizations. In particular, more
allocate_zero/2 instructions can be turned into allocate/2
instructions, and more get_tuple_element/3 instructions can store the
retrieved value into the correct register at once.
Out of a sample of about 700 modules in OTP, 64 modules were improved
by this commit.
|
|
If possible, when adding move/2 instructions, try to insert
them into a block. That could potentially allow them to
be optimized.
|
|
beam_utils:live_opt/1 is currently only run early (from
beam_block). Prepare it to be run after beam_split when
instructions with failure labels have been taken out of
blocks.
While we are at it, also improve check_liveness/3. That will
improve the optimizations in beam_record (replacing tuple
matching instructions with an is_tagged_tuple instruction).
|
|
Since the select_val instruction never transfers directly to the next
instruction, the incoming live registers should be ignored. This
bug has not caused any problems yet, but it will in the future
if we are to run the liveness optimizations again after
the optimizations in beam_dead and beam_jump.
|
|
In a guard, reorder two consecutive calls to the element/2 BIF that
access the same tuple and have the same failure label so that the
highest index is fetched first. That will allow the second element/2
to be replaced with the slightly cheaper get_tuple_element/3
instruction.
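For example, given a guard such as this (hypothetical):
f(T) when element(1, T) =:= x, element(2, T) =:= y -> ok.
fetching element(2, T) first also verifies that T is a tuple with at
least two elements, so the subsequent access of element 1 can use
get_tuple_element/3 instead of a full element/2 call.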
|
|
Consider a 'case' that exports variables and whose return
value is ignored:
foo(N) ->
case N of
1 ->
Res = one;
2 ->
Res = two
end,
{ok,Res}.
That code will be translated to the following Core Erlang code:
'foo'/1 =
fun (_@c0) ->
let <_@c5,Res> =
case _@c0 of
<1> when 'true' ->
<'one','one'>
<2> when 'true' ->
<'two','two'>
<_@c3> when 'true' ->
primop 'match_fail'({'case_clause',_@c3})
end
in
{'ok',Res}
The exported variables have been rewritten to explicit return
values. Note that the original return value from the 'case' is bound to
the variable _@c5, which is unused.
The corresponding BEAM assembly code looks like this:
{function, foo, 1, 2}.
{label,1}.
{line,[...]}.
{func_info,{atom,t},{atom,foo},1}.
{label,2}.
{test,is_integer,{f,6},[{x,0}]}.
{select_val,{x,0},{f,6},{list,[{integer,2},{f,3},{integer,1},{f,4}]}}.
{label,3}.
{move,{atom,two},{x,1}}.
{move,{atom,two},{x,0}}.
{jump,{f,5}}.
{label,4}.
{move,{atom,one},{x,1}}.
{move,{atom,one},{x,0}}.
{label,5}.
{test_heap,3,2}.
{put_tuple,2,{x,0}}.
{put,{atom,ok}}.
{put,{x,1}}.
return.
{label,6}.
{line,[...]}.
{case_end,{x,0}}.
Because of the test_heap instruction following label 5, the assignment
to {x,0} cannot be optimized away by the passes that optimize BEAM assembly
code.
Refactor the optimizations of 'let' in sys_core_fold to eliminate the
unused variable. Thus:
'foo'/1 =
fun (_@c0) ->
let <Res> =
case _@c0 of
<1> when 'true' ->
'one'
<2> when 'true' ->
'two'
<_@c3> when 'true' ->
primop 'match_fail'({'case_clause',_@c3})
end
in
{'ok',Res}
The resulting BEAM code will look like:
{function, foo, 1, 2}.
{label,1}.
{line,[...]}.
{func_info,{atom,t},{atom,foo},1}.
{label,2}.
{test,is_integer,{f,6},[{x,0}]}.
{select_val,{x,0},{f,6},{list,[{integer,2},{f,3},{integer,1},{f,4}]}}.
{label,3}.
{move,{atom,two},{x,0}}.
{jump,{f,5}}.
{label,4}.
{move,{atom,one},{x,0}}.
{label,5}.
{test_heap,3,1}.
{put_tuple,2,{x,1}}.
{put,{atom,ok}}.
{put,{x,0}}.
{move,{x,1},{x,0}}.
return.
{label,6}.
{line,[...]}.
{case_end,{x,0}}.
|
|
Improve handling of #c_seq{}, making sure to simplify a #c_seq{}
as much as possible. With that improvement, we can remove some
special-case code from opt_simple_let_2/6.
|
|
|
|
|
|
The annotations in the optimizing passes currently look like this:
{'%live',NumRegistersUsed,RegistersUsedBitmap}
{'%def',RegistersDefinedBitmap}
(NumRegistersUsed is no longer used.)
When I attempted to extend some optimizations, I found that I had to
add additional clauses to tolerate/handle both types of
annotations. That problem would only get worse if any more annotations
are added in the future.
To simplify annotation handling, this commit wraps both types of
annotations in a {'%anno',_} tuple:
{'%anno',{used,RegistersUsedBitmap}}
{'%anno',{def,RegistersDefinedBitmap}}
The '%live' annotation has been renamed to 'used' to make it somewhat
clearer what it means, and the unused NumRegistersUsed part of the
old annotation has been removed.
Alternatives considered: My first attempt was to wrap the annotation
in a 'set' tuple so that there would only be 'set' tuples in a block.
For example:
{set,[],[],{anno,{live,RegistersUsedBitmap}}}
It was not as convenient as expected. Annotations often need to be
handled specially from other instructions in a block. When they are
wrapped in a 'set' tuple, they can very easily be handled incorrectly
or passed on to the next pass. That causes subtle errors or worse
code, and it can be difficult to debug.
Therefore, my conclusion is that annotations should be distinct from
other instructions, to make it obvious when one has forgotten to
handle an annotation.
|