path: root/lib/compiler/test
Age  Commit message  Author
2018-09-28  Fix code generation of binary instructions with the r21 option  (Björn Gustavsson)
OTP 22 extends the binary instructions to support a Y register destination. When an option is given to compile for an earlier release, make sure that binary instructions don't use a Y register destination, by rewriting them to use an X register destination and adding a `move` instruction to move the value to the Y register.
2018-09-28  Merge pull request #1958 from jhogberg/john/compiler/ssa-bsm-opt  (John Högberg)
Rewrite BSM optimizations in the new SSA-based intermediate format
2018-09-28  Improve coverage of 21 compatibility  (Björn Gustavsson)
2018-09-28  Rewrite BSM optimizations in the new SSA-based intermediate format  (John Högberg)
This commit improves the bit-syntax match optimization pass, leveraging the new SSA intermediate format to perform much more aggressive optimizations. Some highlights:

    * Match contexts can be reused even after being passed to a function
      or being used in a try block.
    * Sub-binaries are no longer eagerly extracted, making it far easier
      to keep "happy paths" free from binary creation.
    * Trivial wrapper functions no longer disable context reuse.
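A minimal, hypothetical sketch of the kind of code these changes help; the function names and the packet layout are invented for illustration, not taken from the commit:

    %% Illustrative only: with the SSA-based pass, the match context
    %% created for the argument of parse/1 may survive the call to
    %% payload/2, so no sub-binary has to be built for Rest on the
    %% happy path.
    parse(<<Tag, Rest/binary>>) ->
        payload(Tag, Rest).

    payload(0, <<Len, Data:Len/binary, _/binary>>) ->
        {short, Data};
    payload(_, Bin) ->
        {raw, Bin}.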
2018-09-25  beam_validator: Use set_aliased_type in more operations  (John Högberg)
The following code broke because aliases weren't tracked for hd/1:

    bug(Bool) ->
        Bug = remote:call(),
        if
            Bool ->
                %% Branch of some kind.
                _ = hd(Bug),
                remote:call(),
                hd(Bug)
        end.

Related to 1f221b27f1336e747f7409692f260055dd3ddf79
2018-09-21  Merge branch 'maint'  (Henrik Nord)
2018-09-21  Update copyright year  (Henrik Nord)
2018-09-17  Remove the beam_dead and beam_split passes  (Björn Gustavsson)
Most of the optimizations in beam_dead have been superseded by the optimizations in beam_ssa_dead. The forward/1 pass of beam_dead has been moved to beam_jump.

The beam_split pass splits blocks that contain instructions with non-zero labels. Because there are no optimizations left that optimize instructions within blocks, beam_block never needs to put such instructions into blocks in the first place. beam_split also moved 'move' instructions out of blocks to help beam_dead; that is no longer necessary since beam_dead no longer exists.
2018-09-17  Add beam_ssa_dead.erl  (Björn Gustavsson)
Add beam_ssa_dead to perform the main optimizations done by beam_dead:

    * Shortcut branches that jump to another block with a branch. If it
      can be seen that the second branch will always branch to a specific
      block, replace the target of the first branch.

    * Combine nested sequences of '=:=' tests and switch instructions
      operating on the same variable into a single switch.

Diffing the compiler output, it seems that beam_ssa_dead finds many more opportunities for optimizations than beam_dead, although it does not find all opportunities that beam_dead does. In total, beam_ssa_dead is such an improvement over beam_dead that there is no reason to keep beam_dead as well as beam_ssa_dead.

Note that beam_ssa_dead does not attempt to optimize away redundant bs_context_to_binary instructions, because that instruction will be superseded by new instructions in the near future.
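A minimal sketch, invented for illustration, of nested tests on the same variable of the kind that can be combined into a single switch:

    %% Illustrative only: the '=:=' test and the inner case both branch
    %% on X, which is the pattern that can be merged into one switch.
    kind(X) ->
        case X =:= add of
            true -> arithmetic;
            false ->
                case X of
                    sub -> arithmetic;
                    mul -> arithmetic;
                    _ -> other
                end
        end.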
2018-09-17  Cover more code in beam_ssa_type  (Björn Gustavsson)
2018-09-12  beam_ssa_opt: Add an optimization of tuple_size/1  (Björn Gustavsson)
This optimization, which works on the SSA format, will replace the similar optimization in beam_dead. See the comment in the source code for an explanation of what the new optimization does.
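As a rough, invented illustration of where tuple_size/1 typically shows up in guards (the function itself is not from the commit):

    %% Illustrative only: a tuple_size/1 comparison against a literal in
    %% a guard is the kind of code the tuple_size/1 optimization is
    %% concerned with.
    dimensions(T) when tuple_size(T) =:= 2 -> {ok, two_d};
    dimensions(T) when tuple_size(T) =:= 3 -> {ok, three_d};
    dimensions(_) -> error.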
2018-09-12  beam_ssa_opt: Fix liveness optimization  (Björn Gustavsson)
Add more instructions to the list of functions that can be safely removed if their values are not used. This is necessary for correctness when doing more aggressive optimizations. Without this change, the 'succeeded' instruction could be optimized away, leaving just the instruction followed by an unconditional branch, which beam_ssa_codegen does not know how to handle. Here is an example:

    _3 = bs_start_match _1
    br label 13

By adding bs_start_match to the list, the bs_start_match instruction will be removed too. (If the result of bs_start_match is actually used, the succeeded instruction would not be removed.)

While we are at it, rename the misnamed function is_pure/1 to no_side_effect/1 and move it to beam_ssa. is_pure/1 is a bad name because bif:get has no side effect, but is not pure.
2018-09-12  Optimize 'and' and 'or' instructions  (Björn Gustavsson)
2018-09-03  Introduce a put_tuple2 instruction  (Björn Gustavsson)
Sometimes when building a tuple, there is no way to avoid an extra `move` instruction. Consider this code:

    make_tuple(A) -> {ok,A}.

The corresponding BEAM code looks like this:

    {test_heap,3,1}.
    {put_tuple,2,{x,1}}.
    {put,{atom,ok}}.
    {put,{x,0}}.
    {move,{x,1},{x,0}}.
    return.

To avoid overwriting the source register `{x,0}`, a `move` instruction is necessary. The problem doesn't exist when building a list:

    %% build_list(A) -> [A].
    {test_heap,2,1}.
    {put_list,{x,0},nil,{x,0}}.
    return.

Introduce a new `put_tuple2` instruction that builds a tuple in a single instruction, so that the `move` instruction can be eliminated:

    %% make_tuple(A) -> {ok,A}.
    {test_heap,3,1}.
    {put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
    return.

Note that the BEAM loader already combines `put_tuple` and `put` instructions into an internal instruction similar to `put_tuple2`. Therefore the introduction of the new instruction will not speed up execution of tuple building itself, but it will be less work for the loader to load the new instruction.
2018-08-24  Merge branch 'bjorn/compiler/ssa'  (Björn Gustavsson)
* bjorn/compiler/ssa:
    Travis CI: Run the SSA linter in the Linux64SmokeTest build
    Remove retired compiler passes
    Introduce a new SSA-based intermediate format
    hipe_beam_to_icode: Correct translation of get_map_elements
    beam_dead: Remove shortcut of binary matching instruction
    beam_bs: Remove optimizations that are easier done on SSA format
    Don't run unsafe compiler passes
    Simplify optimizations by introducing is_nil late
    beam_utils: Make is_tagged_tuple a pure test
    beam_except: Enhance recognition of function_clause exceptions
    beam_validator: Infer the types of copies in a smarter way
    beam_validator: Improve merge of cons and literal list
    beam_validator: Strengthen validation of func_info
    beam_validator: Allow get_tuple_element before dsetelement
    beam_validator: Don't transfer state to labels that can't be reached
    beam_validator: Improve type analysis for tuples
    beam_validator: Be more careful when updating try/catch state
    beam_trim: Handle an empty list of instructions
    v3_core: Number argument variables in ascending order
    Teach binary instructions to use Y registers as destination

OTP-14894
2018-08-24  Introduce a new SSA-based intermediate format  (Björn Gustavsson)
v3_codegen is replaced by three new passes:

    * beam_kernel_to_ssa, which translates the Kernel Erlang format to a
      new SSA-based intermediate format.

    * beam_ssa_pre_codegen, which prepares the SSA-based format for code
      generation, including register allocation. Registers are allocated
      using the linear scan algorithm.

    * beam_ssa_codegen, which generates BEAM assembly code from the
      SSA-based format.

It is easier and more effective to optimize the SSA-based format before X and Y registers have been assigned. The current optimization passes constantly have to make sure no "holes" in the X register assignments are created (that is, that no X register becomes undefined that an allocation instruction depends on).

This commit also introduces the following optimizations:

    * Replacing tuple matching of records with the is_tagged_tuple
      instruction. (Replacing beam_record.)

    * Sinking of get_tuple_element instructions to just before the first
      use of the extracted values. As well as potentially avoiding
      extracting tuple elements when they are not actually used on all
      execution paths, this optimization could also reduce the number of
      values that will need to be stored in Y registers. (Similar to
      beam_reorder, but more effective.)

    * Live optimizations, removing the definition of a variable that is
      not subsequently used (provided that the operation has no side
      effects), as well as strength reduction of binary matching by
      replacing the extraction of a value from a binary with a skip
      instruction. (Used to be done by beam_block, beam_utils, and
      v3_codegen.)

    * Removal of redundant bs_restore2 instructions. (Formerly done by
      beam_bs.)

    * Type-based optimizations across branches. More effective than the
      old beam_type pass that only did type-based optimizations in basic
      blocks.

    * Optimization of floating point instructions. (Formerly done by
      beam_type.)

    * Optimization of receive statements to introduce recv_mark and
      recv_set instructions. More effective with far fewer restrictions
      on what instructions are allowed between creating the reference
      and entering the receive statement.

    * Common subexpression elimination. (Formerly done by beam_block.)
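A small, invented example of the kind of record match that the is_tagged_tuple replacement targets (the record and function names are illustrative only, not from the commit):

    -module(rec_example).
    -export([radius/1]).

    -record(circle, {radius, color}).

    %% Illustrative only: matching #circle{} checks both the tag and the
    %% arity of the tuple; on the SSA format this can be expressed with
    %% the single is_tagged_tuple instruction instead of separate tests.
    radius(#circle{radius = R}) -> {ok, R};
    radius(_) -> error.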
2018-08-23  Merge branch 'maint'  (Björn Gustavsson)
* maint:
    map_SUITE: Test is_map_key/2 followed by a map update
    beam_validator: Infer the type of the map argument for is_map_key/2
    map_SUITE: Cover map_get optimizations in beam_dead
2018-08-22  map_SUITE: Test is_map_key/2 followed by a map update  (Björn Gustavsson)
2018-08-22  map_SUITE: Cover map_get optimizations in beam_dead  (Björn Gustavsson)
2018-08-17  Don't run unsafe compiler passes  (Björn Gustavsson)
As a preparation for replacing v3_codegen with a new code generator, remove unsafe optimization passes. Especially the older compiler passes have implicit assumptions about how the code is generated.

Remove the optimizations in beam_block (keep the code that creates blocks) because they are unsafe. beam_block also calls beam_utils:live_opt/1, which is unsafe.

Remove beam_type because it calls beam_utils:live_opt/1, and also because it recalculates the number of heap words and the number of live registers in allocation instructions, thus potentially hiding bugs in other passes.

Remove beam_receive because it is unsafe.

Remove beam_record because it is the only remaining user of beam_utils:anno_defs/1.

Remove beam_reorder because it makes much more sense to run it as an early SSA-based optimization pass.

Remove the now unused functions in beam_utils:

    anno_def/1
    delete_annos/1
    is_killed_block/2
    live_opt/1
    usage/3

Note that the following test cases will fail because of the removed optimizations:

    compile_SUITE:optimized_guards/1
    compile_SUITE:bc_options/1
    receive_SUITE:ref_opt/1
2018-08-15  Merge branch 'maint'  (Björn Gustavsson)
* maint:
    Fix compiler crash when compiling double receives
    erts: Delete fd from poll-set when closing fd_driver port
2018-08-14  Fix compiler crash when compiling double receives  (Björn Gustavsson)
The compiler would crash when compiling a function with two receive statements. https://bugs.erlang.org/browse/ERL-703
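A minimal sketch of a function with two receive statements, the shape of code that triggered the crash (this is not the actual reproducer from ERL-703):

    %% Illustrative only: two consecutive receive statements in the
    %% same function.
    two_replies(Ref1, Ref2) ->
        receive
            {Ref1, Reply1} -> ok
        end,
        receive
            {Ref2, Reply2} -> {Reply1, Reply2}
        end.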
2018-08-13  Merge branch 'maint'  (Björn Gustavsson)
* maint:
    Correct error behavior of is_map_key/2 in guards
2018-08-13  Correct error behavior of is_map_key/2 in guards  (Björn Gustavsson)
Consider the following functions:

    foo() -> bar(not_a_map).

    bar(M) when not is_map_key(a, M) -> ok;
    bar(_) -> error.

What will `foo/0` return? It depends. If the module is compiled with the default compiler options, the return value will be `ok`. If the module is compiled with the `inline` option, the return value will be `error`.

The correct value is `error`, because the call to `is_map_key/2` when the second argument is not a map should fail the entire guard. That is the way other failing guard BIFs are handled. For example:

    foo() -> bar(not_a_tuple).

    bar(T) when not element(1, T) -> ok;
    bar(_) -> error.

`foo/0` always returns `error` (whether the code is inlined or not).

This bug can be fixed by changing the classification of `is_map_key/2` in the `erl_internal` module. It is currently classified as a type test, which is incorrect because type tests should not fail. Reclassifying it as a plain guard BIF corrects the bug.

This correction also fixes the internal consistency check failure which was reported in:

https://bugs.erlang.org/browse/ERL-699
2018-08-10  Merge branch 'maint'  (Björn Gustavsson)
* maint:
    Fix bug in binary matching
2018-08-10  Merge pull request #1911 from bjorng/bjorn/compiler/binary-syntax/ERL-689/OTP-15219  (Björn Gustavsson)
Fix bug in binary matching
2018-08-09  Merge branch 'maint'  (Rickard Green)
* maint:
    Omit include path debug info for +deterministic builds
2018-08-09  Merge branch 'john/compiler/fix-deterministic-include-paths/OTP-15204/ERL-679' into maint  (Rickard Green)
* john/compiler/fix-deterministic-include-paths/OTP-15204/ERL-679:
    Omit include path debug info for +deterministic builds
2018-08-09  Omit include path debug info for +deterministic builds  (John Högberg)
Compiling the same file with different include paths resulted in different files with the `+deterministic` flag, even though everything but the paths was identical. This was caused by the absolute path of each include directory being unconditionally included in a debug information chunk. This commit fixes the issue by only including this information in non-deterministic builds.
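A hedged sketch of how the fix can be observed; the file and directory names are invented, while `deterministic`, `{i,Dir}` and `{outdir,Dir}` are ordinary compile:file/2 options:

    %% Illustrative only: compiling the same source with different
    %% include paths. With the 'deterministic' option the two .beam
    %% files should now be byte-identical, since the include paths are
    %% no longer stored in the debug information chunk.
    {ok, foo} = compile:file("foo.erl",
                             [deterministic, {i, "/path/a"}, {outdir, "a"}]),
    {ok, foo} = compile:file("foo.erl",
                             [deterministic, {i, "/path/b"}, {outdir, "b"}]).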
2018-08-09  Merge branch 'maint'  (Björn Gustavsson)
* maint:
    Fix side-effect optimization when compiling from Core Erlang

Conflicts:
    lib/compiler/src/sys_core_fold.erl
2018-08-09  Merge pull request #1910 from bjorng/bjorn/compiler/letrec-side-effect-fix/ERL-658/OTP-15188  (Björn Gustavsson)
Fix side-effect optimization when compiling from Core Erlang
2018-08-08  Fix bug in binary matching  (Björn Gustavsson)
The compiler generates incorrect code for the following example:

    decode_binary(_, <<Length, Data/binary>>) ->
        case {Length, Data} of
            {0, _} ->
                %% When converting the match context back to a binary,
                %% Data will be set to the entire original binary,
                %% that is, to <<0>> instead of <<>>.
                {{0, 0, 0}, Data};
            {4, <<Y:16/little, M, D, Rest/binary>>} ->
                {{Y, M, D}, Rest}
        end.

The problem is the delayed sub binary creation optimization, which is not safe to do in this case. This commit introduces a heuristic that will disable the delayed sub binary creation optimization for this example. Unfortunately, the heuristic may turn off the optimization when it would actually be safe. In the OTP codebase, the optimization is turned off in two instances, once in string.erl and once in dets_v9.erl.

https://bugs.erlang.org/browse/ERL-689
2018-08-08  Merge branch 'maint'  (Björn Gustavsson)
* maint:
    Eliminate double computation of next var
    beam_validator: Fix false diagnostic for a receive nested in a try
2018-08-08  Fix side-effect optimization when compiling from Core Erlang  (John Högberg)
When an expression is only used for its side effects, we try to remove everything that doesn't tie into a side-effect, but we went a bit too far when we applied the optimization to funs defined in such a context. Consider the following:

    do
        letrec 'f'/0 = fun () -> ... whatever ...
        in  call 'side':'effect'(apply 'f'/0())
    'ok'

When f/0 is optimized under the assumption that its return value is unused, side:effect/1 will be fed the result of the last side-effecting expression in f/0 instead of its actual result.

https://bugs.erlang.org/browse/ERL-658

Co-authored-by: Björn Gustavsson <[email protected]>
2018-08-06  beam_validator: Fix false diagnostic for a receive nested in a try  (Björn Gustavsson)
When nesting a receive in a try/catch, there could be a false diagnostic that a fragile term is used. https://bugs.erlang.org/browse/ERL-684
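A minimal, invented sketch of the shape of code involved, a receive nested inside a try (this is not the actual reproducer from ERL-684):

    %% Illustrative only: a receive inside a try could make
    %% beam_validator report a fragile term even though the code is
    %% safe.
    wait_for(Ref, Timeout) ->
        try
            receive
                {Ref, Reply} -> {ok, Reply}
            after Timeout ->
                timeout
            end
        catch
            _:_ -> error
        end.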
2018-07-18  Merge branch 'maint'  (John Högberg)
* maint:
    Abort size calculation when a matched-out variable is used
2018-07-16  Abort size calculation when a matched-out variable is used  (John Högberg)
Referencing a matched-out variable in a size expression makes it impossible to calculate the size of the result based on the size of the matched binary. The compiler would still generate code to do this, however, and that code would crash since the variable isn't defined at the point of the size calculation.
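A rough sketch of the kind of construct this concerns, assuming (as an illustration, not taken from the commit) a binary comprehension whose segment size is matched out of the input:

    %% Illustrative only: the segment size N is matched out of Bin, so
    %% the size of the result cannot be derived up front from the size
    %% of the matched binary.
    strip_lengths(Bin) ->
        << <<Payload/binary>> || <<N, Payload:N/binary>> <= Bin >>.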
2018-07-06  Merge branch 'maint'  (Björn Gustavsson)
* maint:
    Call test_lib:recompile/1 from init_per_suite/1
    beam_debug: Fix printing of floating point registers
2018-07-06  Call test_lib:recompile/1 from init_per_suite/1  (Björn Gustavsson)
Call test_lib:recompile/1 from init_per_suite/1 instead of from all/0. That makes it easy to find the log from the compilation in the log file for the init_per_suite/1 test case.
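A hedged sketch of what such a callback may look like in a compiler test suite (the surrounding suite is hypothetical; test_lib:recompile/1 is the helper named in the commit):

    %% Illustrative only: recompiling from init_per_suite/1 instead of
    %% all/0 puts the compilation output in the init_per_suite log.
    init_per_suite(Config) ->
        test_lib:recompile(?MODULE),
        Config.

    end_per_suite(_Config) ->
        ok.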
2018-07-04  Merge branch 'maint'  (John Högberg)
* maint:
    Updated OTP version
    Update release notes
    Update version numbers
    Eliminate a crash in the beam_jump pass
    stdlib: Fix a 'chars_limit' bug
    Fix a race condition when generating async operation ids
    Fix internal compiler error for map_get/2
    beam_type: Fix unsafe optimization
    public_key: Remove moduli 5121 and 7167
      Those were added by 598629aeba9de98e8cdf5637043eb34e5d407751 but
      are not universally supported.
2018-06-29  Merge branch 'bjorn/compiler/fix-beam_jump-crash/ERL-660/OTP-15166' into maint-21  (Erlang/OTP)
* bjorn/compiler/fix-beam_jump-crash/ERL-660/OTP-15166:
    Eliminate a crash in the beam_jump pass
2018-06-29  Merge branch 'bjorn/compiler/fix-map_get/OTP-15157' into maint-21  (Erlang/OTP)
* bjorn/compiler/fix-map_get/OTP-15157:
    Fix internal compiler error for map_get/2
2018-06-29  Merge branch 'bjorn/compiler/fix-skipped-matching/ERL-655/OTP-15156' into maint-21  (Erlang/OTP)
* bjorn/compiler/fix-skipped-matching/ERL-655/OTP-15156:
    beam_type: Fix unsafe optimization
2018-06-29  Eliminate a crash in the beam_jump pass  (Björn Gustavsson)
https://bugs.erlang.org/browse/ERL-660
2018-06-27  Fix internal compiler error for map_get/2  (Björn Gustavsson)
Code such as the following:

    Val = map_get(a, Map),
    Map#{a:=z}             %Could be any map update

would incorrectly cause an internal consistency check failure:

    Internal consistency check failed - please report this bug.
    Instruction: {put_map_exact,{f,0},{x,0},{x,0},1,{list,[{atom,a},{atom,z}]}}
    Error: {bad_type,{needed,map},{actual,term}}:

Update beam_validator so that it understands that the second argument for map_get/2 is a map.
2018-06-27  beam_type: Fix unsafe optimization  (Björn Gustavsson)
beam_type assumed that the operand for the bs_context_to_binary instruction must be a binary. That is not correct; bs_context_to_binary accepts anything. Based on the incorrect assumption, beam_type would remove other test instructions. The bug was introduced in eee8655788d2, which was supposed to be just a refactoring commit. https://bugs.erlang.org/browse/ERL-655
2018-06-27  Merge pull request #1717 from michalmuskala/is-function-pure  (Björn Gustavsson)
Fold is_function/1,2 during compilation
2018-06-25  Fix unsafe optimization when running beam_block the second time  (Björn Gustavsson)
The compiler would crash when compiling code such as:

    serialize(#{tag := value, id := Id, domain := Domain}) ->
        [case Id of
             nil -> error(id({required, id}));
             _ -> <<10, 1:16/signed, Id:16/signed>>
         end,
         case Domain of
             nil -> error(id({required, domain}));
             _ -> <<8, 2:16/signed, Domain:32/signed>>
         end].

The crash would look like this:

    Function: serialize/1
    t.erl: internal error in block2;
    crash reason: {badmatch,false}
      in function  beam_utils:live_opt/4 (beam_utils.erl, line 861)
      in call from beam_utils:live_opt/1 (beam_utils.erl, line 285)
      in call from beam_block:function/2 (beam_block.erl, line 47)
      in call from beam_block:'-module/2-lc$^0/1-0-'/2 (beam_block.erl, line 33)
      in call from beam_block:'-module/2-lc$^0/1-0-'/2 (beam_block.erl, line 33)
      in call from beam_block:module/2 (beam_block.erl, line 33)
      in call from compile:block2/2 (compile.erl, line 1358)
      in call from compile:'-internal_comp/5-anonymous-1-'/3 (compile.erl, line 349)

The reason for the crash is an assertion failure caused by a previous unsafe optimization. Here is the code before the unsafe optimization:

    .
    .
    .
    {bs_init2,{f,0},7,0,0,{field_flags,[]},{x,1}}.
    {bs_put_string,3,{string,[8,0,2]}}.
    {bs_put_integer,{f,0},{integer,32},1,{field_flags,[signed,big]},{y,1}}.
    {move,{x,1},{x,0}}.
    {test_heap,4,1}.
    .
    .
    .

beam_block:move_allocate/1 moved up the test_heap/2 instruction past the move/2 instruction, adjusting the number of live registers at the same time:

    .
    .
    .
    {bs_init2,{f,0},7,0,0,{field_flags,[]},{x,1}}.
    %% Only x1 is live now.
    {bs_put_string,3,{string,[8,0,2]}}.
    {bs_put_integer,{f,0},{integer,32},1,{field_flags,[signed,big]},{y,1}}.
    {test_heap,4,2}.       %Unsafe. x0 is dead.
    {move,{x,1},{x,0}}.
    .
    .
    .

This optimization is unsafe because the bs_init2 instruction killed x0. The bug is in beam_utils:anno_defs/1, which adds annotations indicating the registers that are defined at the beginning of each block. The annotation before the move/2 instruction incorrectly indicated that x0 was live.

https://bugs.erlang.org/browse/ERL-650
https://github.com/elixir-lang/elixir/issues/7782
2018-06-18  Update copyright year  (Henrik Nord)
2018-06-06  Fold is_function/1,2 during compilation  (Michał Muskała)
This can often appear in code after inlining some higher-order functions.

    * mark is_function/2 as pure
    * track function types in sys_core_fold
    * use those types to eval is_function/1,2 at compile-time when possible
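A small, invented example of a check that can be folded once the type of the fun is known (the function is illustrative, not from the commit):

    %% Illustrative only: F is known to be a fun of arity 1 at compile
    %% time, so is_function(F, 1) can be evaluated to 'true' during
    %% compilation.
    make_adder(N) ->
        F = fun(X) -> X + N end,
        case is_function(F, 1) of
            true -> F;
            false -> error(badarg)
        end.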