Due to a bug in Dialyzer, unknown types have been introduced.
* sverker/refactor:
erts: Introduce struct binary_internals
erts: Introduce erts_bin_release
erts: Init refc=1 in erts_bin_drv_alloc*
erts: Init refc=1 in erts_bin_nrml_alloc
erts: Remove deliberate leak of hipe fun entries
erts: Remove hipe_bifs:remove_refs_from/1
Refactor hipe specific code to use ErtsCodeInfo
erts: Refactor ErtsCodeInfo.native
hipe_bifs:remove_refs_from/1 serves no purpose after all the
hipe load&purge fixes merged at
32729cab75325de58bf127e6e8836348071b8682
By having ErLLVM explicitly tell LLVM which architecture we expect it to
compile for, we remove the risk of LLVM generating amd64 code for an
x86 VM.
HiPE: Fix ERL-278: Fix range analysis miscompilation bug
hipe: Fix alignment of byte-sized constants
HiPE: Fix off-by-one bug in register allocators
Fix for PR-1380
HiPE's range analysis would not update the arguments of a callee when
the result of the call was ignored.
Fixes ERL-278.
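A hypothetical example (not taken from the ERL-278 report) of the shape
that exercises the fixed path: the result of inc/1 is ignored, but range
analysis must still propagate N's range into inc/1's argument.

    -module(ignored_result_example).
    -export([go/1]).

    go(N) when is_integer(N), N >= 0, N < 256 ->
        _ = inc(N),     %% call whose result is ignored
        ok.

    inc(X) -> X + 1.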
erl_bif_types contains a fixed and improved copy of
hipe_icode_range:range_rem/2 (the copy-paste is obvious from the dead
Max_range2_leq_zero if branches). For now, delete the dead code and
propagate the fixes and improvements back to hipe_icode_range.
hipe_regalloc_loop considers SpillLimit to be an inclusive lower bound,
while the allocators considered it to be an exclusive lower bound. The
allocators are changed to also treat it as an inclusive lower bound.
The mismatch caused the register allocators to occasionally spill the
first "unspillable" temporary, which in turn caused a failure in a newly
added assertion when hipe-compiling dets_v9 on x86.
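A schematic reading of the description above, as a hypothetical helper
(not the actual allocator code):

    -module(spill_limit_example).
    -export([is_unspillable/2]).

    %% With SpillLimit as an inclusive lower bound of the "unspillable"
    %% temporaries, the first unspillable temporary is the one whose
    %% number equals SpillLimit, so the check must use >= rather than >.
    is_unspillable(TempNr, SpillLimit) ->
        TempNr >= SpillLimit.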
These pseudo instructions are added to all backends and allow
spill-slot-to-spill-slot move coalescing in a clean way.
They have regular move semantics, but contain an additional scratch
register to be used if both source and destination are spilled and the
move could not be coalesced.
Additionally, a register allocator callback
Target:is_spill_move(Instr, Context) is added, which allows the spill
slot allocators to check for these instructions and try to coalesce the
spill slots that the two temporaries are allocated to.
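A minimal sketch, assuming a hypothetical backend module and record
names (this is not the actual HiPE instruction representation), of how
a spill-move pseudo instruction and the is_spill_move/2 callback could
look:

    -module(example_spill_moves).
    -export([mk_pseudo_spill_move/3, is_spill_move/2]).

    %% Src and Dst are temporaries; Scratch is only needed if both of
    %% them end up in spill slots and the move could not be coalesced.
    -record(pseudo_spill_move, {src, dst, scratch}).

    mk_pseudo_spill_move(Src, Dst, Scratch) ->
        #pseudo_spill_move{src = Src, dst = Dst, scratch = Scratch}.

    %% Register allocator callback: lets the spill slot allocator
    %% recognise spill moves and try to put both temporaries in the same
    %% spill slot.
    is_spill_move(#pseudo_spill_move{}, _Context) -> true;
    is_spill_move(_OtherInstr, _Context) -> false.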
hipe_range_split is a complex live range splitter, more sophisticated
than hipe_restore_reuse, but still targeted specifically at temporaries
forced onto the stack by being live over call instructions.
hipe_range_split partitions the control flow graph at call instructions,
like hipe_regalloc_prepass. Splitting decisions are made on a
per-partition and per-temporary basis.
There are three different ways in which hipe_range_split may choose to
split a temporary in a program partition:
* Mode1: Spill the temp before calls, and restore it after them
* Mode2: Spill the temp after definitions, restore it after calls
* Mode3: Spill the temp after definitions, restore it before uses
To pick which of these should be used for each temp×partition pair,
hipe_range_split uses a cost function. The cost is simply the sum of the
cost of all expected stack accesses, and the cost for an individual
stack access is based on the probability weight of the basic block that
it resides in. This biases the range splitter so that it attempts to
move stack accesses from a function's hot path to its cold path.
hipe_bb_weights is used to compute the probability weights.
Mode3 is effectively the same as what hipe_restore_reuse does. Because
of this, hipe_range_split reuses the analysis pass of
hipe_restore_reuse in order to compute the minimal needed set of spills
and restores. The reason mode3 was introduced to hipe_range_split rather
than simply composing it with hipe_restore_reuse (by running both) is
that such a composition resulted in poor register allocation results due
to insufficiently strong move coalescing in the register allocator.
The cost function heuristic has a couple of tuning knobs:
* {range_split_min_gain, Gain} (default: 1.1, range: [0.0, inf))
The minimum proportional improvement that the cost of all stack
accesses to a temp must display in order for that temp to be split.
* {range_split_mode1_fudge, Factor} (default: 1.1, range: [0.0, inf))
Costs for mode1 are multiplied by this factor in order to discourage
it when it provides marginal benefits. The justification is that
mode1 causes temps to be live for longest, thus leading to higher
register pressure.
* {range_split_weight_power, Factor} (default: 2, range: (0.0, inf))
Adjusts how much effect the basic block weights have on the cost of a
stack access. A stack access in a block with weight 1.0 has cost 1.0,
a stack access in a block with weight 0.01 has cost 1/Factor.
Additionally, the option range_split_weights chooses whether the basic
block weights are used at all.
In the case that the input is very big, hipe_range_split automatically
falls back to hipe_restore_reuse only, in order to keep compile times
under control. This is not only because hipe_range_split itself is slow
on such input, but also because the resulting program is slow to
register allocate and is less partitionable by hipe_regalloc_prepass.
hipe_restore_reuse, on the other hand, does not affect the program's
partitionability.
The hipe_range_split pass is controlled by a new option ra_range_split.
ra_range_split is added to o2, and ra_restore_reuse is disabled in o2.
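A hedged usage sketch (the module name is hypothetical; the option names
are the ones described above) of how the pass and its tuning knobs could
be selected when HiPE-compiling a module:

    %% Compile my_mod at o2 with range splitting enabled and slightly
    %% more conservative tuning than the defaults described above.
    hipe:c(my_mod, [o2,
                    ra_range_split,
                    {range_split_min_gain, 1.2},
                    {range_split_mode1_fudge, 1.5},
                    {range_split_weight_power, 2},
                    range_split_weights]).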
hipe_bb_weights computes basic block weights by using the branch
probability predictions as the coefficients in a linear equation system.
This linear equation system is then solved using Gauss-Jordan
Elimination.
The equation system representation is picked to be efficient for highly
sparse data. During triangularisation, the remaining equations are
dynamically reordered in order to prevent the equations from growing in
the common case, preserving the benefit of the sparse equation
representation.
In the case that the input is very big, hipe_bb_weights automatically
falls back to a rough approximation in order to keep compile times under
control.
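A minimal, self-contained sketch of the underlying idea (not the
hipe_bb_weights code, and using a dense matrix rather than the sparse
representation described above): block weights are the solution of a
linear system whose coefficients are the branch probabilities. For a
CFG with edges entry -> loop (probability 1.0), loop -> loop (0.9) and
loop -> exit (0.1), Gauss-Jordan elimination yields W(entry) = 1,
W(loop) = 10, W(exit) = 1.

    -module(bb_weights_sketch).
    -export([solve_example/0]).

    %% Augmented rows [A1, A2, A3 | B] of A*W = B, variable order
    %% [entry, loop, exit]:
    %%   W(entry)                    = 1
    %%   -W(entry) + 0.1*W(loop)     = 0   (W(loop) = W(entry) + 0.9*W(loop))
    %%   -0.1*W(loop) + W(exit)      = 0   (W(exit) = 0.1*W(loop))
    solve_example() ->
        A = [[ 1.0,  0.0, 0.0, 1.0],
             [-1.0,  0.1, 0.0, 0.0],
             [ 0.0, -0.1, 1.0, 0.0]],
        gauss_jordan(A).   %% returns [1.0, 10.0, 1.0]

    gauss_jordan(Rows) ->
        Reduced = lists:foldl(fun eliminate/2, Rows,
                              lists:seq(1, length(Rows))),
        [lists:last(Row) || Row <- Reduced].

    %% Normalise the pivot row for column Col and cancel that column in
    %% all other rows (no pivoting; good enough for this tiny example).
    eliminate(Col, Rows) ->
        PivotRow = lists:nth(Col, Rows),
        PivotVal = lists:nth(Col, PivotRow),
        Norm = [X / PivotVal || X <- PivotRow],
        [case I of
             Col -> Norm;
             _ ->
                 Row = lists:nth(I, Rows),
                 F = lists:nth(Col, Row),
                 [X - F * Y || {X, Y} <- lists:zip(Row, Norm)]
         end || I <- lists:seq(1, length(Rows))].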
Adds a new register allocator callback
Target:branch_preds(Instr, Context) which, for a control flow
instruction Instr, returns a list of tuples {Target, Probability} for
each label name Target that Instr may branch to. Probability is a float
between 0.0 and 1.0 and corresponds to the predicted probability that
control flow branches to the corresponding target. The probabilities may
sum to at most 1.0 (rounding errors aside). Note that a sum less than
1.0 is valid.
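A hedged sketch, using hypothetical instruction records for an imaginary
backend, of what an implementation of this callback might look like:

    -module(example_branch_preds).
    -export([branch_preds/2]).

    -record(cond_branch, {true_label, false_label, taken_prob}).
    -record(goto, {label}).

    %% Conditional branch: predicted probability for each target label.
    branch_preds(#cond_branch{true_label = T, false_label = F,
                              taken_prob = P}, _Context)
      when P >= 0.0, P =< 1.0 ->
        [{T, P}, {F, 1.0 - P}];
    %% Unconditional branch: always taken.
    branch_preds(#goto{label = L}, _Context) ->
        [{L, 1.0}];
    %% Instructions that do not branch contribute nothing; a sum below
    %% 1.0 (e.g. for a call with an unlikely fail label) is also valid.
    branch_preds(_Instr, _Context) ->
        [].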
hipe_restore_reuse is a simplistic range splitter that splits temps that
are forced onto the stack by being live over call instructions. In
particular, it attempts to avoid cases where there are several accesses
to such stack allocated temps in straight-line code, uninterrupted by
any calls. In order to achieve this it splits temps between just before
the first access(es) and just after the last access(es) in such
straight-line code groups.
The hipe_restore_reuse pass is controlled by a new option
ra_restore_reuse.
ra_restore_reuse is added to o1.
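A source-level illustration (a hypothetical example; the splitting
itself of course happens on compiler temporaries, not Erlang variables)
of the kind of pattern this targets:

    -module(restore_reuse_example).
    -export([f/1]).

    %% X is live across the first call to work/1, so it ends up on the
    %% stack.  Between the two calls it is read several times in
    %% straight-line code, uninterrupted by any call, so the pass can
    %% restore it once before the first of those reads rather than once
    %% per read.
    f(X) ->
        A = work(X),
        Y = X * X + X,      %% several consecutive accesses to X
        B = work(Y),
        {A, B}.

    work(N) -> N + 1.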
In addition to the temporary name rewriting that hipe_regalloc_prepass
does, range splitters also need to be able to insert move instructions,
as well as insert new basic blocks into the control flow graph. The
following four callbacks are added for that purpose:
* Target:mk_move(Src, Dst, Context)
Returns a move instruction from the temporary (not just register
number) Src to Dst.
* Target:mk_goto(Label, Context)
Returns an unconditional control flow instruction that branches to the
label with name Label.
* Target:redirect_jmp(Instr, ToOld, ToNew, Context)
Modifies the control flow instruction Instr so that any control flow
that would go to a label with name ToOld instead goes to the label
with name ToNew.
* Target:new_label(Context)
Returns a fresh label name that does not belong to any existing block
in the current function, and is to be used to create a new basic
block in the control flow graph by calling Target:update_bb/4 with
this new name.
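A hedged sketch, for an imaginary backend with hypothetical instruction
records (this is not any actual HiPE backend module), of how these four
callbacks might be implemented:

    -module(example_target).
    -export([mk_move/3, mk_goto/2, redirect_jmp/4, new_label/1]).

    -record(move, {src, dst}).
    -record(goto, {label}).
    -record(branch, {true_label, false_label}).

    %% Move between two temporaries (not just register numbers).
    mk_move(Src, Dst, _Context) ->
        #move{src = Src, dst = Dst}.

    %% Unconditional jump to the label named Label.
    mk_goto(Label, _Context) ->
        #goto{label = Label}.

    %% Retarget any control flow in the instruction that goes to ToOld so
    %% that it goes to ToNew instead.
    redirect_jmp(I = #goto{label = ToOld}, ToOld, ToNew, _Context) ->
        I#goto{label = ToNew};
    redirect_jmp(I = #branch{true_label = T, false_label = F},
                 ToOld, ToNew, _Context) ->
        Redir = fun(L) when L =:= ToOld -> ToNew; (L) -> L end,
        I#branch{true_label = Redir(T), false_label = Redir(F)};
    redirect_jmp(I, _ToOld, _ToNew, _Context) ->
        I.

    %% Fresh label name; a real backend would use its own label
    %% generator, and the caller then creates the block with
    %% Target:update_bb/4.
    new_label(_Context) ->
        erlang:unique_integer([positive]).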
Two tests are added, primarily aimed at the range splitters.
* test_float_spills, which exercises the rare case of high floating
point register pressure, including spill slot move coalescing.
* test_infinite_loops, which tests that various infinite loops are
properly compiled and do contain reduction tests (otherwise they
would permanently hog their scheduler and not notice being sent an
exit signal).
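For the second test, a hypothetical sketch of the kind of loop it
exercises: without a reduction test in the generated native code, such
a loop would hog its scheduler forever and never notice an incoming
exit signal.

    -module(infinite_loop_example).
    -export([spin/0]).

    %% The simplest possible infinite loop: no allocation, no calls
    %% other than the tail-recursive self-call.  The compiled code must
    %% still contain a reduction test so the process can be preempted.
    spin() ->
        spin().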
* maint:
Updated OTP version
Prepare release
Conflicts:
OTP_VERSION
lib/typer/doc/src/notes.xml
lib/typer/vsn.mk
* hasse/hipe/remove_work_around:
hipe: Remove work around for Dialyzer bug
* maint:
Fix xml warnings in old release notes
The bug in Dialyzer is fixed in commit 5ac2943.
HiPE: Various small code cleanups and codegen improvements
* Omit the bounds check in more cases.
A test case that needs this change in order to omit the bounds check is
added.
* Improve code generation by reformulating the bounds check to decrease
register pressure.
This is useful for generating shorter code for closures created with
the fun F/A construct.
* maint:
dialyzer: Improve a warning
dialyzer: Fix a weird warning
dialyzer: Fix an opaque bug
dialyzer: Minor fix
Conflicts:
lib/dialyzer/src/dialyzer_dataflow.erl
With this change, both the "matches" and "does not match" cases have
fast paths that do not need to call primops.