path: root/lib/compiler
2018-01-22  Don't build a stacktrace if it's only passed to erlang:raise/3  (Björn Gustavsson)

Consider the following function:

    function({function,Name,Arity,CLabel,Is0}, Lc0) ->
        try
            %% Optimize the code for the function.
        catch
            Class:Error:Stack ->
                io:format("Function: ~w/~w\n", [Name,Arity]),
                erlang:raise(Class, Error, Stack)
        end.

The stacktrace is retrieved, but it is only used in the call to
erlang:raise/3. There is no need to build a stacktrace in this
function. We can avoid building it by introducing an instruction
called raw_raise/3 that works exactly like the erlang:raise/3 BIF,
except that its third argument must be a raw stacktrace.
2018-01-16  sys_core_bsm: Rearrange arguments to enable delayed sub binary creation  (Björn Gustavsson)

Argument order can prevent the delayed sub binary creation. Here is
an example directly from the Efficiency Guide:

    non_opt_eq([H|T1], <<H,T2/binary>>) ->
        non_opt_eq(T1, T2);
    non_opt_eq([_|_], <<_,_/binary>>) ->
        false;
    non_opt_eq([], <<>>) ->
        true.

When compiling with the bin_opt_info option, there will be a
suggestion to change the argument order. It turns out that
sys_core_bsm can itself change the order: not the order of the
arguments themselves, but the order in which the arguments are
matched. Here is how the function can be rewritten in pseudo Core
Erlang code:

    non_opt_eq(Arg1, Arg2) ->
        case < Arg2,Arg1 > of
            < <<H1,T2/binary>>, [H2|T1] > when H1 =:= H2 ->
                non_opt_eq(T1, T2);
            < <<_,T2/binary>>, [_|T1] > ->
                false;
            < <<>>, [] > ->
                true
        end.

When rewritten like this, the bs_start_match2 instruction will be the
first instruction in the function, and it will be possible to store
the match context in the same register as the binary ({x,1} in this
case) and to delay the creation of sub binaries.

The switching of the matching order also enables many other
simplifications in sys_core_bsm, since there is no longer any need to
pass the position of the pattern as an argument.

We will update the Efficiency Guide in a separate branch before the
release of OTP 21.
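For reference, the manual fix that the bin_opt_info suggestion points
to is to swap the argument order at the source level. A sketch of that
rewritten function (the opt_eq name follows the Efficiency Guide's
naming convention; the sketch is mine, not part of the commit):

    opt_eq(<<H,T1/binary>>, [H|T2]) ->
        opt_eq(T1, T2);
    opt_eq(<<_,_/binary>>, [_|_]) ->
        false;
    opt_eq(<<>>, []) ->
        true.

With the binary as the first argument, no reordering by sys_core_bsm
is needed.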
2018-01-12  beam_match_SUITE: Eliminate warnings for unused variables  (Björn Gustavsson)
2018-01-12  bs_match_SUITE: Add test cases written when walking into a dead end  (Björn Gustavsson)

Add some test cases written when attempting some new optimizations
that turned out to be unsafe.
2018-01-12  Merge pull request #1680 from bjorng/bjorn/compiler/beam_block  (Björn Gustavsson)
Run beam_block a second time
2018-01-12  Merge pull request #1679 from bjorng/bjorn/compiler/sys_core_fold  (Björn Gustavsson)
Clean up and improve sys_core_fold optimizations
2018-01-11  Run beam_block again after other optimizations have been run  (Björn Gustavsson)
Running beam_block again after the other optimizations have run will give it more opportunities for optimizations. In particular, more allocate_zero/2 instructions can be turned into allocate/2 instructions, and more get_tuple_element/3 instructions can store the retrieved value into the correct register at once. Out of a sample of about 700 modules in OTP, 64 modules were improved by this commit.
2018-01-11  beam_bsm: Insert introduced 'move' instructions into block  (Björn Gustavsson)
If possible, when adding move/2 instructions, try to insert them into a block. That could potentially allow them to be optimized.
2018-01-11  Prepare beam_utils to run again after beam_split  (Björn Gustavsson)

beam_utils:live_opt/1 is currently only run early (from beam_block).
Prepare it to be run after beam_split, when instructions with failure
labels have been taken out of blocks. While we are at it, also
improve check_liveness/3. That will improve the optimizations in
beam_record (replacing tuple matching instructions with an
is_tagged_tuple instruction).
2018-01-11  beam_utils: Correct handling of liveness for select_val  (Björn Gustavsson)

Since the select_val instruction never transfers control directly to
the next instruction, the incoming live registers should be ignored.
This bug has not caused any problems yet, but it will in the future
if we are to run the liveness optimizations again after the
optimizations in beam_dead and beam_jump.
2018-01-11  beam_block: Reorder element/2 calls in guards  (Björn Gustavsson)

In a guard, reorder two consecutive calls to the element/2 BIF that
access the same tuple and have the same failure label, so that the
highest index is fetched first. That will allow the second element/2
call to be replaced with the slightly cheaper get_tuple_element/3
instruction.
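As an illustration (my own example, not taken from the commit), a
guard like the following qualifies: both element/2 calls access T and
fail to the same next clause, so the compiler may fetch element 5
first. Once that call has succeeded, the tuple is known to have at
least five elements, and the element(2, T) call can no longer fail:

    f(T) when element(2, T) =:= a, element(5, T) =:= b -> ok;
    f(_) -> error.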
2018-01-11  Improve code generation for a 'case' with exported variables  (Björn Gustavsson)

Consider a 'case' that exports variables and whose return value is
ignored:

    foo(N) ->
        case N of
            1 -> Res = one;
            2 -> Res = two
        end,
        {ok,Res}.

That code will be translated to the following Core Erlang code:

    'foo'/1 =
        fun (_@c0) ->
            let <_@c5,Res> =
                case _@c0 of
                    <1> when 'true' -> <'one','one'>
                    <2> when 'true' -> <'two','two'>
                    <_@c3> when 'true' ->
                        primop 'match_fail'({'case_clause',_@c3})
                end
            in {'ok',Res}

The exported variables have been rewritten to explicit return values.
Note that the original return value from the 'case' is bound to the
variable _@c5, which is unused.

The corresponding BEAM assembly code looks like this:

    {function, foo, 1, 2}.
      {label,1}.
        {line,[...]}.
        {func_info,{atom,t},{atom,foo},1}.
      {label,2}.
        {test,is_integer,{f,6},[{x,0}]}.
        {select_val,{x,0},{f,6},{list,[{integer,2},{f,3},{integer,1},{f,4}]}}.
      {label,3}.
        {move,{atom,two},{x,1}}.
        {move,{atom,two},{x,0}}.
        {jump,{f,5}}.
      {label,4}.
        {move,{atom,one},{x,1}}.
        {move,{atom,one},{x,0}}.
      {label,5}.
        {test_heap,3,2}.
        {put_tuple,2,{x,0}}.
        {put,{atom,ok}}.
        {put,{x,1}}.
        return.
      {label,6}.
        {line,[...]}.
        {case_end,{x,0}}.

Because of the test_heap instruction following label 5, the
assignment to {x,0} cannot be optimized away by the passes that
optimize BEAM assembly code.

Refactor the optimizations of 'let' in sys_core_fold to eliminate the
unused variable. Thus:

    'foo'/1 =
        fun (_@c0) ->
            let <Res> =
                case _@c0 of
                    <1> when 'true' -> 'one'
                    <2> when 'true' -> 'two'
                    <_@c3> when 'true' ->
                        primop 'match_fail'({'case_clause',_@c3})
                end
            in {'ok',Res}

The resulting BEAM code will look like this:

    {function, foo, 1, 2}.
      {label,1}.
        {line,[...]}.
        {func_info,{atom,t},{atom,foo},1}.
      {label,2}.
        {test,is_integer,{f,6},[{x,0}]}.
        {select_val,{x,0},{f,6},{list,[{integer,2},{f,3},{integer,1},{f,4}]}}.
      {label,3}.
        {move,{atom,two},{x,0}}.
        {jump,{f,5}}.
      {label,4}.
        {move,{atom,one},{x,0}}.
      {label,5}.
        {test_heap,3,1}.
        {put_tuple,2,{x,1}}.
        {put,{atom,ok}}.
        {put,{x,0}}.
        {move,{x,1},{x,0}}.
        return.
      {label,6}.
        {line,[...]}.
        {case_end,{x,0}}.
2018-01-11  Remove special cases in optimization of a simple let  (Björn Gustavsson)
Improve handling of #c_seq{}, making sure to simplify a #c_seq{} as much as possible. With that improvement, we can remove some special-case code from opt_simple_let_2/6.
2018-01-11  sys_core_fold: Make it clear what part of Sub is used  (Björn Gustavsson)
2018-01-11  sys_core_fold: Simplify usage of move_case_into_arg/2  (Björn Gustavsson)
2018-01-11  Refactor '%live' and '%def' annotations  (Björn Gustavsson)

The annotations in the optimizing passes currently look like this:

    {'%live',NumRegistersUsed,RegistersUsedBitmap}
    {'%def',RegistersDefinedBitmap}

(NumRegistersUsed is no longer used.)

When I attempted to extend some optimizations, I found that I had to
add additional clauses to tolerate/handle both types of annotations.
That problem would only get worse if any more annotations are added
in the future.

To simplify annotation handling, this commit wraps both types of
annotations in a {'%anno',_} tuple:

    {'%anno',{used,RegistersUsedBitmap}}
    {'%anno',{def,RegistersDefinedBitmap}}

The '%live' annotation has been renamed to 'used' to make it somewhat
clearer what it means, and the unused NumRegistersUsed part of the
old annotation has been removed.

Alternatives considered: My first attempt was to wrap the annotation
in a 'set' tuple so that there would only be 'set' tuples in a block.
For example:

    {set,[],[],{anno,{live,RegistersUsedBitmap}}}

It was not as convenient as expected. Annotations often need to be
handled specially from other instructions in a block. When they are
wrapped in a 'set' tuple, they can very easily be handled incorrectly
or passed on to the next pass. That causes subtle errors or worse
code, and it can be difficult to debug. Therefore, my conclusion is
that annotations should be distinct from other instructions, to make
it obvious when an annotation has not been handled.
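A minimal sketch of the payoff (my own illustration, not code from
the commit): with the single {'%anno',_} wrapper, one clause is
enough to recognize every annotation in a block, for example when
stripping them before the next pass:

    %% Hypothetical helper: drop all annotations from a block.
    strip_annos([{'%anno',_}|Is]) -> strip_annos(Is);
    strip_annos([I|Is]) -> [I|strip_annos(Is)];
    strip_annos([]) -> [].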
2018-01-11  Merge pull request #1678 from jhogberg/john/compiler/reintroduce-tuple-arity-optimizations/OTP-14857  (John Högberg)

Reintroduce the tuple arity optimizations removed in PR #1673
2018-01-10  beam_block: Improve optimization of allocate_zero instructions  (Björn Gustavsson)
Turn more allocate_zero instructions into allocate instructions.
2018-01-10  beam_type: Enhance coalescing of allocation instructions  (Björn Gustavsson)

An 'allocate' or 'allocate_zero' instruction should not be shortly
followed by a 'test_heap' instruction. For example, we don't want
this type of code:

    {allocate_zero,3,4}.
    {line,...}.
    {test_heap,7,4}.
    {bif,element,{f,0},...,...}.

While the code is safe because 'allocate_zero' has initialized the
stack frame, it is wasteful. Also note that the code would become
unsafe if the 'allocate_zero' instruction were to be replaced with an
'allocate' instruction. What we want to see is this:

    {allocate_heap_zero,3,7,4}.
    {line,...}.
    {bif,element,{f,0},...,...}.
2018-01-10  Correct beam_utils:combine_heap_needs/2  (Björn Gustavsson)
In 21dd6e55877832, beam_utils:combine_heap_needs/2 stopped wrapping an allocation list in an {alloc,...} tuple. That was not noticed because the faulty heap need created in beam_block was discarded by beam_type.
2018-01-10  Correct beam_utils:is_killed/3  (Björn Gustavsson)
beam_utils:is_killed/3 could incorrectly indicate that a register was killed, when in fact it was referenced by an instruction that did a GC.
2018-01-10  Merge branch 'maint'  (Björn Gustavsson)

* maint:
  beam_validator: Strengthen validation of GC instructions
2018-01-10  Merge pull request #1674 from bjorng/bjorn/compiler/beam_validator  (Björn Gustavsson)

beam_validator: Strengthen validation of GC instructions

OTP-14863
2018-01-08  Reintroduce the arity optimization removed in OTP-14855  (John Högberg)

We can safely tell when a test_arity or is_record instruction is
superfluous by keeping track of whether the size is exactly known or
not.
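To illustrate the distinction (my own examples, not from the commit):
a pattern match pins the exact tuple size, while element/2 only
establishes a minimum, which is what made the removed optimization
unsafe:

    %% The match fixes tuple_size(T) to exactly 3, so a later arity
    %% test against 3 on T is superfluous and can be removed.
    exact({_,_,_}=T) ->
        setelement(1, T, a).

    %% element/2 only guarantees tuple_size(T) >= 2, so the arity
    %% test implied by the {_,_} clause below must be kept.
    minimum(T) ->
        _ = element(2, T),
        case T of
            {_,_} -> pair;
            _ -> bigger
        end.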
2018-01-08  Merge branch 'maint'  (John Högberg)
2018-01-08  beam_validator: Strengthen validation of GC instructions  (Björn Gustavsson)

beam_validator did not verify that the Y registers were initialized
before executing the following instructions that could cause a GC:

    bs_append/8
    bs_init2/6
    bs_init_bits/6
    gc_bif1/5
    gc_bif2/6
    gc_bif3/7
    test_heap/2

That means that, for example, an incorrect optimization that replaced
an 'allocate_zero' instruction with an 'allocate' instruction when it
was not safe would not be rejected by beam_validator, but would
instead cause a crash or other undefined behavior at runtime.

Also fix a minor bug in beam_type exposed by the stronger checking:
when compiling from .S files, beam_type did not handle the init/1
instruction and could produce unsafe code.
2018-01-04  Remove unsafe is_record/test_arity optimizations  (John Högberg)
The type optimizations for is_record and test_arity checked whether the arity was equal to the size stored in the type information, which is incorrect since said size is the *minimum* size of the tuple (as determined by previous instructions) and not its exact size. A future patch to the 'master' branch will restore these optimizations in a safe manner.
2017-12-20  Reduce register shuffling in receive clauses  (Björn Gustavsson)
Handle a few more instructions in beam_utils. That will allow beam_reorder to reorder more instructions, delaying get_tuple_element instructions and reducing register shuffling in receive clauses.
2017-12-18  v3_codegen: Don't let exit BIFs force a stack frame  (Björn Gustavsson)

This is an enhancement of the optimization added in 2e5d6201bb044,
where we tried to avoid forcing a stack frame for functions that
don't really need one. That optimization would not suppress the stack
frame for this function:

    f(A) ->
        Res = case A of
                  a -> x;
                  b -> y
              end,
        {ok,Res}.

The reason is that internally the compiler would rewrite the code to
something like this:

    f(A) ->
        Res = case A of
                  a -> x;
                  b -> y;
                  Other -> error({case_clause,Other})
              end,
        {ok,Res}.

The call to error/1 would force the creation of a stack frame, even
though it is not really needed because error/1 causes an exception.
Handle calls to exit BIFs specially to allow suppressing the stack
frame.
2017-12-18  Merge pull request #1658 from bjorng/bjorn/compiler/delay-stackframe  (Björn Gustavsson)
Delay creation of stack frames
2017-12-15  Merge branch 'bjorn/compiler/coverage'  (Björn Gustavsson)

* bjorn/compiler/coverage:
  beam_utils: Refactor combine_alloc_lists() to cover more lines
  map_SUITE: Cover beam_utils:bif_to_test/3
  beam_disasm: Remove support for obsolete instructions
  guard_SUITE: Test is_bitstring/1 and is_map/1 on literals
2017-12-15  v3_codegen: Delay creation of stack frames  (Björn Gustavsson)

v3_codegen currently wraps a stack frame around each clause in a
function (unless the clause is simple, without any 'case' or other
complex constructions). Consider this function:

    f({a,X}) ->
        A = abs(X),
        case A of
            0 -> {result,"0"};
            _ -> {result,integer_to_list(A)}
        end;
    f(_) ->
        error.

The first clause needs a stack frame because there is a function call
to integer_to_list/1 not in the tail position. v3_codegen currently
wraps the entire first clause in a stack frame. We can delay the
creation of the stack frame and create one in each arm of the 'case'
(if needed):

    f({a,X}) ->
        A = abs(X),
        case A of
            0 ->
                %% Don't create a stack frame here.
                {result,"0"};
            _ ->
                %% Create a stack frame here.
                {result,integer_to_list(A)}
        end;
    f(_) ->
        error.

There are pros and cons of this approach. The cons are that the code
size may increase if there are many 'case' clauses and each needs its
own stack frame. The allocation instructions may also interfere with
other optimizations, but the new optimizations introduced in previous
commits will mitigate most of those issues. The pros are the
following:

* For some clauses in a 'case', there is no need to create a stack
  frame at all.

* Often, when moving an allocation instruction into a 'case' clause,
  the slightly cheaper 'allocate' instruction can be used instead of
  'allocate_zero'. There is also the possibility that the allocate
  instruction can be combined with a 'test_heap' instruction.

* Each stack frame for each arm of the 'case' will have exactly as
  many slots as needed.
2017-12-15  beam_record: Try harder to avoid fetching the tag element  (Björn Gustavsson)

When rewriting tuple matching of the first element of a tuple to an
is_tagged_tuple instruction, the get_tuple_element instruction that
fetches the tag will be left behind unless the register into which
the tag is fetched is subsequently killed.

We can do better than that. If the register is referenced in an
allocating instruction, but its value is never actually used, we can
do one of two things: if the value is known to be defined earlier
(using annotations added by beam_utils:anno_defs/1), the instruction
can be removed altogether; if not, it can be replaced with a
'move nil TagRegister' instruction.
2017-12-15  beam_block: Improve moving of allocations  (Björn Gustavsson)
Use annotations added by beam_utils:anno_defs/1 to move more allocations upwards in the instruction stream. That in turn allows us to optimize away more 'move' instructions.
2017-12-15  beam_utils: Add usage/3  (Björn Gustavsson)
To avoid having to call both is_killed/3 and is_not_used/3, add usage/3 to answer both questions in one call.
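A hedged sketch of a call site (the return atoms used here are my
assumption about the API; the commit message only names the
function):

    %% Hypothetical caller: one traversal answers both questions that
    %% previously required separate is_killed/3 and is_not_used/3
    %% calls.
    can_drop_result(Reg, Is, D) ->
        case beam_utils:usage(Reg, Is, D) of
            killed   -> true;   %% register is dead afterwards
            not_used -> true;   %% referenced, but value never used
            used     -> false
        end.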
2017-12-15  beam_utils: Add anno_defs/1  (Björn Gustavsson)

Add beam_utils:anno_defs/1, which will add an annotation to the
beginning of each block indicating which X registers are defined.
Having that information can improve some optimizations.
2017-12-15  beam_utils: Improve precision for is_not_used/3  (Björn Gustavsson)
2017-12-14  Merge pull request #1653 from tonyrog/makedep_side_effect  (Björn Gustavsson)

Add -MMD option to erlc

OTP-14830
2017-12-14  beam_utils: Refactor combine_alloc_lists() to cover more lines  (Björn Gustavsson)

There are four uncovered lines in combine_heap_needs/2 and
combine_alloc_lists/2 that there is no way to reach starting from
Erlang source code using the standard compiler. However, they can be
reached starting from BEAM assembly code, so we don't want to remove
them. We could add a test case that covers the lines using assembly
code, but an easier solution is to rewrite the code in a more generic
way using sofs so that it can be covered with the existing test
cases.
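A minimal sketch of the sofs approach (the [{Tag,Count}] list
representation and the summing are my assumptions, not taken from the
commit):

    %% Hypothetical helper: merge several allocation lists by summing
    %% the counts per tag.
    combine(AllocLists) ->
        Rel = sofs:relation(lists:append(AllocLists)),
        Fam = sofs:to_external(sofs:relation_to_family(Rel)),
        [{Tag,lists:sum(Counts)} || {Tag,Counts} <- Fam].

For example, combine([[{words,2}], [{words,3},{floats,1}]]) would
yield [{floats,1},{words,5}].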
2017-12-13  map_SUITE: Cover beam_utils:bif_to_test/3  (Björn Gustavsson)
2017-12-13  Merge branch 'bjorn/compiler/use-stacktrace-syntax'  (Björn Gustavsson)

* bjorn/compiler/use-stacktrace-syntax:
  Use the new syntax for retrieving stack traces
2017-12-13  beam_utils: Fix bug in is_not_used/3  (Björn Gustavsson)

01835845579e9 fixed some problems but introduced a bug where
is_not_used/3 would report that a register was not used when it in
fact was.
2017-12-13  Merge branch 'maint'  (Henrik Nord)
2017-12-09  v3_codegen: Eliminate unused function arguments  (Björn Gustavsson)
758712d6294 changed the need_heap/2 function so that it stopped using its second argument. Remove the second argument from need_heap(), and update all callers to similarly remove unused arguments.
2017-12-08  beam_disasm: Remove support for obsolete instructions  (Björn Gustavsson)
2017-12-08  guard_SUITE: Test is_bitstring/1 and is_map/1 on literals  (Björn Gustavsson)
2017-12-08  Use the new syntax for retrieving stack traces  (Björn Gustavsson)
2017-12-08  Update release notes  (Erlang/OTP)
2017-12-08  Update version numbers  (Erlang/OTP)
2017-12-08  Merge pull request #1634 from bjorng/bjorn/get_stacktrace-syntax/OTP-14692  (Björn Gustavsson)
Add syntax in try/catch to retrieve the stacktrace directly
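The new syntax binds the stacktrace in the head of the catch clause,
instead of it being fetched with erlang:get_stacktrace/0. A minimal
example of my own (not from the commit):

    safe_apply(F) ->
        try
            F()
        catch
            Class:Reason:Stacktrace ->
                {error,{Class,Reason,Stacktrace}}
        end.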