Age | Commit message | Author |
|
|
|
|
|
Conflicts:
erts/emulator/beam/beam_bif_load.c
erts/emulator/beam/beam_load.c
and added macro DBG_TRACE_MFA_P in beam_load.h
|
|
Did not work with purge and was made worse by the new purge strategy.
It also yielded terrible performance when a fun thing is created *before*
the fun code is loaded, for example when receiving a not-yet-loaded fun
from another node. The cached 'native_address' in ErlFunThing
will not be updated, leading to a mode switch and the error_handler
being called for every call to the fun from native code.
|
|
This commit adds two new structs to be used to represent
Erlang code in erts.
ErtsCodeInfo is used to describe the i_func_info header
that is part of all Export entries and the prelude of
each function. This replaces all the BeamInstr * that
were previously used to point to these locations.
After this change the code should never use BeamInstr *
with offsets to figure out different parts of the
func_info header.
ErtsCodeMFA is a struct that is used to describe an
MFA in code. It is used within ErtsCodeInfo and also
in Process->current.
All functions that previously took Eterm * or BeamInstr *
to identify an MFA now use ErtsCodeMFA or ErtsCodeInfo
where appropriate.
The code has been tested to work when adding a new field to the
ErtsCodeInfo struct, but some updates are needed in ops.tab to
make it work.
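As a rough illustration, a hedged sketch of the two structs based on the description above; the real erts definitions may carry additional members (for example native-code or breakpoint pointers), and the prototype at the end is purely hypothetical.

    /* Hedged sketch based on the commit text, not the exact erts definitions. */
    typedef struct {
        Eterm module;      /* atom */
        Eterm function;    /* atom */
        Uint  arity;
    } ErtsCodeMFA;

    typedef struct {
        BeamInstr   op;    /* the i_func_info instruction word */
        ErtsCodeMFA mfa;   /* replaces BeamInstr* plus magic offsets */
    } ErtsCodeInfo;

    /* Functions that used to take Eterm* or BeamInstr* now take these, e.g.: */
    void sketch_trace_call(Process *p, const ErtsCodeInfo *ci);  /* hypothetical */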
|
|
Yes, this is an ugly workaround. One approach for a better
solution could be to introduce an internal secret atom,
tagged as an atom with a unique index but impossible
to find by string hash lookup/insert.
|
|
Second merge of this branch, with some bug fixes.
|
|
Provoked by nif_SUITE:nif_binary_to_term.
If we fail to decode an immediate (an unsafe atom, for example) with
a dummy factory, then hp and factory->hp will both be uninitialized
and valgrind will complain about comparing them.
|
|
Must skip 3 extra bytes after node name atom.
|
|
* henrik/update-copyrightyear:
update copyright-year
|
|
from future nodes.
|
|
Instead of INTERNAL_CREATION (255), use the empty atom as node name
to mean the local node (regardless of actual node name or creation).
The purpose is to get rid of the special value 255, to allow future expansion
of creation to 32 bits.
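A hedged decode-side sketch of the new convention; is_empty_atom() is a hypothetical helper, and erts_this_node / erts_find_or_insert_node() are used only for illustration.

    /* Hedged sketch: an empty node-name atom now refers to the local node,
     * replacing the old INTERNAL_CREATION (255) special creation value.
     * is_empty_atom() is a hypothetical helper. */
    static ErlNode *sketch_node_for(Eterm sysname, Uint32 creation)
    {
        if (is_empty_atom(sysname))          /* '' means the local node */
            return erts_this_node;
        return erts_find_or_insert_node(sysname, creation);
    }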
|
|
|
|
* Accept a raw data buffer instead of ErlNifBinary
* Accept option ERL_NIF_BIN2TERM_SAFE
* Return number of read bytes
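For reference, a hedged usage sketch of the reworked call; the wrapper function and its error handling are illustrative, not part of the change.

    #include <erl_nif.h>

    /* enif_binary_to_term() takes a raw buffer, honours ERL_NIF_BIN2TERM_SAFE,
     * and returns the number of bytes it consumed (0 on failure). */
    static ERL_NIF_TERM decode_prefix(ErlNifEnv *env,
                                      const unsigned char *buf, size_t len)
    {
        ERL_NIF_TERM term;
        size_t read = enif_binary_to_term(env, buf, len, &term,
                                          ERL_NIF_BIN2TERM_SAFE);
        if (read == 0)
            return enif_make_badarg(env);   /* bad or unsafe external data */
        /* 'read' bytes were consumed; trailing data may follow in buf */
        return term;
    }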
|
|
|
|
|
|
This is mostly a pure refactoring,
except for the buggy cases when calling erlang:halt() with a positive
integer in the range -(INT_MIN+2) to -INT_MIN, which got confused with
ERTS_ABORT_EXIT, ERTS_DUMP_EXIT and ERTS_INTR_EXIT.
Outcome            OLD erl_exit(n, ...)     NEW erts_exit(n, ...)
----------------   --------------------     -----------------------
exit(Status)       n = -Status <= 0         n = Status >= 0
crashdump+abort    n > 0, ignore n          n = ERTS_ERROR_EXIT < 0
The outcome of the old ERTS_ABORT_EXIT, ERTS_INTR_EXIT and
ERTS_DUMP_EXIT are the same as before (even though their values have
changed).
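To make the mapping concrete, a minimal sketch restating the table above; the empty format strings are placeholders for the real varargs arguments.

    /* OLD: a normal exit status was negated on the way in (n = -Status <= 0) */
    static void old_halt(int status)   { erl_exit(-status, ""); }

    /* NEW: the status is passed as-is (n = Status >= 0); special exits use
     * negative constants such as ERTS_ERROR_EXIT */
    static void new_halt(int status)   { erts_exit(status, ""); }
    static void crash_dump_abort(void) { erts_exit(ERTS_ERROR_EXIT, ""); }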
|
|
Conflicts:
erts/emulator/beam/external.c
|
|
Decoding a term with a large (HAMT) map in a small (FLAT) map could cause
a critical error if the external format was not produced by beam.
|
|
* Removed COMPRESS_POINTER and EXPAND_POINTER
|
|
|
|
|
|
by adding a dynamic heap factory.
"binary_to_term" is now a hybrid solution with both
a call to decoded_size() to calculate the needed heap space
AND possible dynamic allocation of more heap space
if needed for big maps.
The heap size returned from decoded_size() is guaranteed
to be sufficient for all term heap data except for hashmap
nodes. All hashmap nodes are created at the end of dec_term()
by invoking the heap factory interface, which may allocate more
heap space on the process heap or in fragments.
With this commit it is no longer guaranteed that a message
is confined to only one heap fragment.
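A hedged sketch of the hybrid flow described above; erts_factory_proc_prealloc_init()/erts_factory_close() are real factory entry points, while decoded_size() and dec_term() are simplified stand-ins for the more elaborate internal interfaces.

    /* Hedged sketch of the hybrid scheme, not the actual erts code paths. */
    Eterm sketch_binary_to_term(Process *p, byte *ext, Uint ext_size)
    {
        ErtsHeapFactory factory;
        Sint heap_size = decoded_size(ext, ext_size);  /* enough for everything
                                                          except hashmap nodes */
        erts_factory_proc_prealloc_init(&factory, p, heap_size);
        Eterm term = dec_term(&factory, ext);          /* hashmap nodes built last,
                                                          may grow into fragments */
        erts_factory_close(&factory);
        return term;
    }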
|
|
to handle the "start of list" case in one place instead of seven.
Note that this commit reverts (47d6fd3ccf35) back to using WSTACK
and pushing raw pointers. We disable GC while yielding, so this should not
be a problem.
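A hedged sketch (not the actual dec_term code) of why pushing raw pointers on a WSTACK is acceptable here: a WSTACK holds plain machine words, and the pointers stay valid only because GC is disabled while the decode yields.

    static void sketch_walk(Eterm *objp)
    {
        DECLARE_WSTACK(stack);

        WSTACK_PUSH(stack, (UWord) objp);             /* raw pointer as a word */
        while (!WSTACK_ISEMPTY(stack)) {
            Eterm *p = (Eterm *) WSTACK_POP(stack);   /* still valid: no GC ran */
            (void) p;                                 /* ... build terms here ... */
        }
        DESTROY_WSTACK(stack);
    }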
|
|
* sverk/hamt-encode-size-bug/OTP-12585:
erts: Fix bug in term_to_binary size estimation for hamt
erts: Optimize term_to_binary size estimation
|
|
|
|
for tuples and maps containing ASCII strings (lists).
|
|
|
|
This will also fix a bug in term_to_binary
treating full nodes as tuples and emitting LIST_EXT for leaf nodes.
|
|
Must save hamt_list in context.
|
|
* sverk/dec_term-bin-overhead/OTP-12554:
erts: Add missing binary offheap overhead in binary_to_term
|
|
|
|
Adding ERTS_SWORD_MAX to a pointer does not work
as a way to disable a bounds check.
Remove hp_end from ErtsHeapFactory, as it isn't really used anyway.
|
|
flatmap: Small map
hashmap: Large map
map: flatmap or hashmap
|
|
Strategy: Calculate an overestimation of the heap size that gives
such a low probability of overflow that "it will not happen".
Scary assumption 1: Uniformly distributed hash values.
Scary assumption 2: Tree size is normally distributed (right?)
|
|
|
|
with overestimation of the heap size.
|
|
|
|
|
|
|
|
Binary offheap overhead is used to trigger GC when a process is
referring to "too much" binary offheap data.
Offheap binaries created from the external format (binary_to_term,
distributed messages or compacted ets tables) were not accounted for.
Example: A process receiving a lot of binary data in distributed messages,
while not building many terms on its own heap, could cause extensive
memory consumption from garbage binaries.
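A hedged sketch of the kind of accounting that was missing; OH_OVERHEAD and the ProcBin/ErlOffHeap fields exist in erts, but this helper is illustrative rather than the literal patch.

    /* Hedged sketch: when decoding creates an off-heap binary, count its size
     * toward the off_heap overhead so the GC also sees external-format data. */
    static void link_decoded_binary(ErlOffHeap *off_heap, ProcBin *pb)
    {
        pb->next = off_heap->first;
        off_heap->first = (struct erl_off_heap_header *) pb;
        /* previously missing: add the binary's size to the GC pressure */
        OH_OVERHEAD(off_heap, pb->size / sizeof(Eterm));
    }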
|
|
Conflicts:
erts/emulator/hipe/hipe_bif0.c
|
|
Bignums are artificially restricted in size. Arithmetic and logical
operations check the sizes of resulting bignums, and turn oversize
results into system_limit exceptions.
However, this check is not performed when bignums are constructed by
binary matching. The consequence is that such matchings can construct
oversize bignums that satisfy is_integer/1 yet don't work. Performing
arithmetic such as Term - 0 fails with a system_limit exception. Worse,
performing a logical operation such as Term band Term results in [].
The latter occurs because the size checking (e.g. in erts_band()) is
a simple ASSERT(is_not_nil(...)) on the result of the bignum operation,
which internally is [] (NIL) in the case of oversize results. However,
ASSERT is a no-op in release builds, so the error goes unnoticed and []
is returned as the result of the band/2.
This patch addresses the problem by preventing oversize bignums from
entering the VM via binary matching:
- the internal bytes_to_big() procedure is augmented to return NIL for
oversize results, just like big_norm()
- callers of bytes_to_big() are augmented to check for NIL returns and
signal errors in those cases
- erts_bs_get_integer_2() can only fail with badmatch, so that is the
Erlang-level result of oversize bignums from binary matches
- big_SUITE.erl is extended with a test case that fails without this
fix (no error signalled) and passes with it (badmatch occurs)
Credit goes to Nico Kruber for the initial bug report.
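A hedged sketch of the caller-side check described in the list above; the bytes_to_big() signature is approximated and the wrapper function is illustrative.

    /* Hedged sketch, not the literal patch. */
    static Eterm sketch_matched_integer(byte *bytes, Uint nbytes,
                                        int negative, Eterm *hp)
    {
        /* bytes_to_big() now returns NIL for oversize results, like big_norm() */
        Eterm big = bytes_to_big(bytes, nbytes, negative, hp);
        if (is_nil(big))
            return THE_NON_VALUE;  /* erts_bs_get_integer_2() turns this into badmatch */
        return big;
    }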
|
|
* sverk/yielding-distr-send/OTP-12232:
erts: Add constant TERM_TO_BINARY_MEMCPY_FACTOR
erts: Optimize some repeated calls to {E,W}STACK_PUSH
erts: Yield in term_to_binary when encoding big maps
erts: Remove unnecessary goto for fun encoding
erts: Yield in term_to_binary while copying large binaries
erts: Implement yielding for distributed send of large messages
|
|
and do not piggyback on B2T_MEMCPY_FACTOR
|
|
* sverk/hipe-inline-reserve-trap-frame:
erts: Extend usage of ASM macro to avoid including asm macros in C code
erts: Make hipe_{un}reserve_beam_trap_frame INLINE
|
|
|
|
except the reference counter 'refc', as different callers
have different strategies regarding the lifetime of the binary.
|
|
|
|
|