path: root/lib/compiler/src/beam_ssa.erl
Date | Commit message | Author
2019-06-26 | Fix slow compilation of huge functions | Björn Gustavsson
Some huge functions would compile very slowly because of a bottleneck in `beam_ssa:def_used/2`. One example is the `cuter_binlib` module in https://github.com/cuter-testing/cuter. On my computer, this commit reduces the compilation time for `cuter_binlib` from more than 4 minutes to 45 seconds.

Noticed-by: Kostis Sagonas
2019-06-24 | Eliminate dialyzer warnings | Björn Gustavsson
Eliminate the Dialyzer warnings shown when the limits in lib/cerl/erl_types.erl were raised as follows:

    -define(TUPLE_TAG_LIMIT, 10).
    -define(TUPLE_ARITY_LIMIT, 10).
    -define(SET_LIMIT, 64).
2019-02-19 | Do the destructive setelement optimization in SSA | Björn Gustavsson
The expansion of record field updates, when more than one field is updated but not a majority of the fields, creates a sequence of calls to `erlang:setelement(Index, Value, Tuple)` where Tuple in the first call is the original record tuple, and in the subsequent calls Tuple is the result of the previous call. Furthermore, all Index values are constant positive integers, and the first call to `setelement` has the greatest index. Thus the subsequent calls do not actually need to test at run-time that Tuple is a tuple, nor that the index is within the tuple bounds.

Since OTP R7, the `sys_core_dsetel` pass, run as the very last Core Erlang pass, has optimized this sequence of `setelement` calls to use a special destructive version of `setelement` (called `set_tuple_element`) for all but the very first `setelement` in the sequence.

It turns out that the presence of `set_tuple_element` in SSA code is awkward and can prevent or complicate type analysis and aggressive optimizations. Therefore, this commit removes the `sys_core_dsetel` pass and reimplements the optimization for SSA code. It is now done in the `beam_ssa_pre_codegen` pass (that is, just before code generation and after all other SSA code optimization passes have run).

In most cases, the resulting BEAM code is identical to the previous code. For a few modules, the BEAM code is actually slightly better, with smaller stack frames.
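To make the shape of the sequence concrete, here is a hypothetical record update of the kind described above (the module, record, and field names are made up for illustration):

    -module(setel_example).
    -export([update/1]).

    -record(r, {a,b,c,d,e}).

    %% Updating two of five fields: more than one, but not a majority.
    update(R) ->
        R#r{c=3, e=5}.

The expansion is conceptually `T = erlang:setelement(6, R, 5), erlang:setelement(4, T, 3)`, with the greatest index first: only the first call has to verify at run-time that R is a tuple of sufficient size, so the second call is the one that can become the destructive set_tuple_element instruction.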
2019-02-06 | Optimize ssa_opt_sink for huge functions | Björn Gustavsson
The ssa_opt_sink optimization of beam_ssa_opt could get very slow for certain huge functions. 9a190cae9bd7 partly addressed this issue by terminating the optimization early if there happened to be no get_tuple_element instructions at all in the function.

This commit addresses the issue more directly by making the dominator calculation in beam_ssa:dominators/1 more efficient. The same algorithm as before is used, but it is implemented in a more efficient way based on the ideas in "A Simple, Fast Dominance Algorithm" (http://www.hipersoft.rice.edu/grads/publications/dom14.pdf).

As well as being more efficient, the new implementation also gives an explicit representation of the dominator tree, which makes it possible to simplify and optimize the ssa_opt_sink optimization.
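A minimal sketch (not the actual beam_ssa:dominators/1 code) of the central step of that paper: the immediate dominator of a block with several predecessors is found by repeatedly intersecting the predecessors' dominators, where the intersection walks two blocks up the immediate-dominator tree until they meet. IDom is assumed here to be a map from block label to immediate dominator, and Order a map from block label to its reverse postorder number (lower numbers are closer to the entry block); both names are made up for this sketch.

    -module(dom_sketch).
    -export([intersect/4]).

    %% Walk B1 and B2 towards the entry block along the immediate-dominator
    %% tree; the first block they have in common dominates both.
    intersect(B, B, _Order, _IDom) ->
        B;
    intersect(B1, B2, Order, IDom) ->
        case maps:get(B1, Order) > maps:get(B2, Order) of
            true  -> intersect(maps:get(B1, IDom), B2, Order, IDom);
            false -> intersect(B1, maps:get(B2, IDom), Order, IDom)
        end.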
2019-02-01 | Optimize beam_ssa:def_used/2 | Björn Gustavsson
beam_ssa:def_used/2 is used by beam_ssa_pre_codegen when reserving Y registers. Do the following optimizations:

* Use an ordset instead of a gb_set (see the sketch after this list). When the only operation performed on a set is union/2, an ordset will usually be faster, especially when the result is an ordset.

* Use a cerl_set instead of a gb_set for the set of all possible predecessors. cerl_sets is usually faster than gb_sets.
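A minimal sketch (hypothetical function names, not the def_used/2 code itself) of the first point: when a fold does nothing but union/2 and the caller wants an ordered list anyway, the ordsets version avoids building and then flattening a balanced tree.

    -module(union_sketch).
    -export([union_ordsets/1, union_gb_sets/1]).

    %% Accumulate the union of many small lists as an ordset.
    union_ordsets(Lists) ->
        lists:foldl(fun(L, Acc) ->
                            ordsets:union(ordsets:from_list(L), Acc)
                    end, ordsets:new(), Lists).

    %% The same accumulation using gb_sets, converting back to a list at the end.
    union_gb_sets(Lists) ->
        S = lists:foldl(fun(L, Acc) ->
                                gb_sets:union(gb_sets:from_list(L), Acc)
                        end, gb_sets:empty(), Lists),
        gb_sets:to_list(S).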
2019-02-01 | Prefer map syntax and guard BIFs over the maps module | Björn Gustavsson
Avoiding calls usually reduces the size of the stack frame and reduces register shuffling.
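A small sketch (hypothetical function) of the style this commit prefers: the first clause calls into the maps module, the second uses map syntax together with the is_map_key/2 and map_get/2 guard BIFs; avoiding the remote calls tends to shrink the stack frame and reduce register shuffling.

    -module(map_style).
    -export([bump_with_calls/2, bump_with_syntax/2]).

    %% Using the maps module: every maps:... call is a remote call.
    bump_with_calls(Key, Map) ->
        case maps:is_key(Key, Map) of
            true  -> maps:put(Key, maps:get(Key, Map) + 1, Map);
            false -> maps:put(Key, 1, Map)
        end.

    %% Using map syntax and guard BIFs: no remote calls are needed.
    bump_with_syntax(Key, Map) when is_map_key(Key, Map) ->
        Map#{Key := map_get(Key, Map) + 1};
    bump_with_syntax(Key, Map) ->
        Map#{Key => 1}.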
2018-11-18 | Add get_map_element to beam_ssa:no_side_effect/1 | Björn Gustavsson
The `get_map_element` instruction has no side effects, and should be removed if its value is not used.
2018-10-04 | Merge branch 'bjorn/compiler/misc-fixes' | Björn Gustavsson
* bjorn/compiler/misc-fixes:
  beam_ssa: Remove unnecessary beam_ssa: prefixes
  beam_ssa_bsm: Fix replacement of variables in a remote call
2018-10-04 | beam_ssa: Remove unnecessary beam_ssa: prefixes | Björn Gustavsson
2018-10-03 | Optimize named funs and fun-wrapped macros | John Högberg
If a fun is defined locally and only used for calls, it can be replaced with direct calls to the relevant function. This greatly speeds up "named functions" (which rely on make_fun to recreate themselves) and macros that wrap their body in a fun.
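Hypothetical examples of the two patterns mentioned above. The named fun recreates itself with make_fun on every recursive step, and the macro wraps its body in an immediately-applied fun only to get a separate variable scope; when such a fun is defined locally and only ever called, it can now be turned into direct calls to a compiler-generated function.

    -module(fun_patterns).
    -export([fact/1, double_scoped/1]).

    %% A macro that wraps its body in an immediately-applied fun
    %% (a common trick to give the body its own variable scope).
    -define(SCOPED(Expr), (fun() -> Expr end)()).

    %% A "named function": the fun refers to itself by name and is only
    %% ever called, so it can be replaced with direct calls.
    fact(N) ->
        F = fun Fact(0) -> 1;
                Fact(M) -> M * Fact(M - 1)
            end,
        F(N).

    double_scoped(X) ->
        ?SCOPED(X + X).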
2018-09-28 | Remove unused instruction bs_context_to_binary from the compiler | John Högberg
This has been superseded by bs_get_tail/3. Note that it is NOT removed from the emulator or beam_disasm, as old modules are still legal.
2018-09-24 | beam_ssa: Add helper functions and export more types | John Högberg
get_anno/3: as get_anno but with a default value
definitions/1-2: returns a map of variable definitions (#b_set{})
uses/1-2: returns a map of all uses of a given variable
mapfold_blocks_rpo/4: mapfolds over blocks
2018-09-20 | Consistently use #b_var{} instead of var_name() | John Högberg
We chose to refer to variables through their var_name() because we anticipated the need to annotate them, but it turned out we didn't really need that, and many things become a lot cleaner if the entire #b_var{} is used to represent variables.
2018-09-12 | beam_ssa_opt: Fix liveness optimization | Björn Gustavsson
Add more instructions to the list of instructions that can be safely removed if their values are not used. This is necessary for correctness when doing more aggressive optimizations. Without this change, the 'succeeded' instruction could be optimized away, leaving just the instruction it tests followed by an unconditional branch, which beam_ssa_codegen does not know how to handle. Here is an example:

    _3 = bs_start_match _1
    br label 13

By adding bs_start_match to the list, the bs_start_match instruction will be removed too. (If the result of bs_start_match is actually used, the 'succeeded' instruction would not be removed.)

While we are at it, rename the misnamed function is_pure/1 to no_side_effect/1 and move it to beam_ssa. is_pure/1 is a bad name because bif:get has no side effect, but is not pure.
2018-09-12 | beam_ssa: Optimize linearize/1 and rpo/2 | Björn Gustavsson
It is faster to use cerl_sets instead of gb_sets to keep track of seen blocks.
2018-09-12 | beam_ssa: Add trim_unreachable/1 | Björn Gustavsson
Add trim_unreachable/1 to remove unreachable blocks and adjust phi nodes.
2018-09-12 | beam_ssa: Extend linearize/1 to also adjust phi nodes | Björn Gustavsson
Since beam_ssa:linearize/1 may remove blocks that are unreachable, adjust phi nodes to make sure that they don't refer to discarded blocks or to blocks that no longer branch to the phi node in question.
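A minimal sketch (not the actual linearize/1 code) of what adjusting a phi node involves. In beam_ssa, a phi instruction's arguments are {Value, PredecessorLabel} pairs, so every pair whose predecessor has been discarded must be dropped. The cut-down #b_set{} record and the Reachable ordset below are assumptions made for this sketch.

    -module(phi_sketch).
    -export([prune_phi/2]).

    %% Cut-down version of the #b_set{} record from beam_ssa.hrl (illustration only).
    -record(b_set, {anno=#{}, dst, op, args=[]}).

    %% Keep only the phi arguments whose predecessor block is still reachable.
    %% Reachable is assumed to be an ordset of the remaining block labels.
    prune_phi(#b_set{op=phi,args=Args0}=Phi, Reachable) ->
        Args = [{Val,Pred} || {Val,Pred} <- Args0,
                              ordsets:is_element(Pred, Reachable)],
        Phi#b_set{args=Args}.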
2018-09-12 | beam_ssa: Add normalize/1 | Björn Gustavsson
Add normalize/1 to simplify optimizations.
2018-08-24 | Introduce a new SSA-based intermediate format | Björn Gustavsson
v3_codegen is replaced by three new passes:

* beam_kernel_to_ssa, which translates the Kernel Erlang format to a new SSA-based intermediate format.

* beam_ssa_pre_codegen, which prepares the SSA-based format for code generation, including register allocation. Registers are allocated using the linear scan algorithm.

* beam_ssa_codegen, which generates BEAM assembly code from the SSA-based format.

It is easier and more effective to optimize the SSA-based format before X and Y registers have been assigned. The current optimization passes constantly have to make sure that no "holes" in the X register assignments are created (that is, that no X register that an allocation instruction depends on becomes undefined).

This commit also introduces the following optimizations:

* Replacement of tuple matching of records with the is_tagged_tuple instruction. (Replaces beam_record; see the example after this list.)

* Sinking of get_tuple_element instructions to just before the first use of the extracted values. As well as potentially avoiding extraction of tuple elements that are not actually used on all execution paths, this optimization can also reduce the number of values that need to be stored in Y registers. (Similar to beam_reorder, but more effective.)

* Live optimizations: removing the definition of a variable that is not subsequently used (provided that the operation has no side effects), as well as strength reduction of binary matching by replacing the extraction of a value from a binary with a skip instruction. (Used to be done by beam_block, beam_utils, and v3_codegen.)

* Removal of redundant bs_restore2 instructions. (Formerly done by beam_bs.)

* Type-based optimizations across branches. More effective than the old beam_type pass, which only did type-based optimizations within basic blocks.

* Optimization of floating point instructions. (Formerly done by beam_type.)

* Optimization of receive statements to introduce recv_mark and recv_set instructions. More effective, with far fewer restrictions on which instructions are allowed between creating the reference and entering the receive statement.

* Common subexpression elimination. (Formerly done by beam_block.)
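A hypothetical source-level example of the first optimization in the list above: matching a record pattern like the one below can be compiled to a single is_tagged_tuple test (tuple, arity, and tag checked at once) directly in the SSA passes, instead of relying on the separate beam_record pass to combine the individual tests afterwards.

    -module(tagged_tuple_example).
    -export([name/1]).

    -record(person, {name, age}).

    %% The #person{} match is the kind of pattern that becomes an
    %% is_tagged_tuple test.
    name(#person{name=Name}) -> Name.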