|
* maint:
Improve trapping in lists:reverse/2
Fix unsafe use of lists:reverse/1
|
|
* john/erts/improve-list-reverse-trapping/OTP-15199:
Improve trapping in lists:reverse/2
Fix unsafe use of lists:reverse/1
|
|
If the process had more free space than reductions, it could run far
longer than it was supposed to. It also did not honor the number of
reductions it had left on entry, nor did it bump reductions when
returning its result or erroring out.
This commit also removes this function from a work function in
scheduler_SUITE, as the test is extremely sensitive to the number of
reductions spent, which caused equal_and_high_with_part_time_max to fail
on some machines.
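For reference, lists:reverse/2 reverses its first argument onto its
second. A pure-Erlang equivalent of what the BIF computes (a sketch for
illustration, not the BIF's actual C implementation):
%% reverse_onto(List, Tail) behaves like lists:reverse(List, Tail):
reverse_onto([H|T], Acc) -> reverse_onto(T, [H|Acc]);
reverse_onto([], Acc) -> Acc.
For example, reverse_onto([1,2,3], [9,9]) returns [3,2,1,9,9].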
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
erts: Fix "Prevent inconsistent node lists" fix
Fix include-path regression caused by dd0a39c
Restore default SIGTERM behaviour for port programs
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
erts: Fix "Prevent inconsistent node lists" fix
Fix include-path regression caused by dd0a39c
Restore default SIGTERM behaviour for port programs
|
|
* sverker/enif-cancel-select/OTP-15095:
erts: Add ERL_NIF_SELECT_CANCEL flag for enif_select
|
|
Uses port-specific data out of bounds.
|
|
done in a31216200bdee2c04b3fb3ae5e26607674715c8a
that could cause a new pending connection to be incorrectly aborted.
|
|
"(void)result" will silence warning about unused variable
and compiler will optimize away such unused variables.
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
kernel: Fix missing abort_connection in net_kernel
Prevent inconsistent node lists
Fix an endless rescheduling loop when a process is executing process_info(self(), ...)
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
kernel: Fix missing abort_connection in net_kernel
Prevent inconsistent node lists
Fix an endless rescheduling loop when a process is executing process_info(self(), ...)
|
|
* maint:
Fix an endless rescheduling loop when a process is executing process_info(self(), ...)
|
|
Fix an endless rescheduling loop when a process is executing process_info(self(), ...)
OTP-15275
|
|
The current ETS ordered_set implementation can quickly become a
scalability bottleneck on multicore machines when an application updates
an ordered_set table from concurrent processes [1][2]. The current
implementation is based on an AVL tree protected from concurrent writes
by a single readers-writer lock. Furthermore, the current implementation
has an optimization, called the stack optimization [3], that can improve
the performance when only a single process accesses a table but can
cause bad scalability even in read-only scenarios. It has always been
possible to pass the option {write_concurrency, true} to ets:new/2 when
creating an ETS table of type ordered_set, but before this commit the
option had no effect on such tables. The new ETS ordered_set
implementation, added by this commit, is only activated when one passes
both the ordered_set option and {write_concurrency, true} to the
ets:new/2 function. Thus, the previous ordered_set implementation (from here on
called the default implementation) can still be used in applications
that do not benefit from the new implementation. The benchmark results
on the following web page show that the new implementation is many times
faster than the old implementation in some scenarios and that the old
implementation is still better than the new implementation in some
scenarios.
http://winsh.me/ets_catree_benchmark/ets_ca_tree_benchmark_results.html
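A minimal usage sketch of how the new implementation is activated (the
table name and the inserted tuple are illustrative):
%% Both options are required to activate the new implementation:
T = ets:new(example_table, [ordered_set, public, {write_concurrency, true}]),
true = ets:insert(T, {some_key, some_value}),
[{some_key, some_value}] = ets:lookup(T, some_key).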
The new implementation is expected to scale better than the default
implementation when concurrent processes use the following ETS
operations to operate on a table:
delete/2, delete_object/2, first/1, insert/2 (single object),
insert_new/2 (single object), lookup/2, lookup_element/2, member/2,
next/2, take/2 and update_element/3 (single object).
Currently, the new implementation does not have scalable support for the
other operations (e.g., select/2). However, when these operations are
used infrequently, the new implementation may still scale better than
the default implementation, as the benchmark results at the URL above
show.
Description of the New Implementation
-------------------------------------
The new implementation is based on a data structure which is called the
contention adapting search tree (CA tree for short). The following
publication contains a detailed description of the CA tree:
A Contention Adapting Approach to Concurrent Ordered Sets
Journal of Parallel and Distributed Computing, 2018
Kjell Winblad and Konstantinos Sagonas
https://doi.org/10.1016/j.jpdc.2017.11.007
http://www.it.uu.se/research/group/languages/software/ca_tree/catree_proofs.pdf
A discussion of how the CA tree can be used as an ETS back-end can be
found in another publication [1]. The CA tree is a data structure that
dynamically changes its synchronization granularity based on detected
contention. Internally, the CA tree uses instances of a sequential data
structure to store items. The CA tree implementation contained in this
commit uses the same AVL tree implementation as is used for the default
ordered set implementation. Reusing this AVL tree implementation means
that much of the existing code implementing the ETS operations can be
shared.
Tests
-----
The ETS tests in `lib/stdlib/test/ets_SUITE.erl` have been extended to
also test the new ordered_set implementation. The function
ets_SUITE:throughput_benchmark/0 has also been added to this file. This
function can be used to measure and compare the performance of the
different ETS table types and options. This function writes benchmark
data to standard output that can be visualized by the HTML page
`lib/stdlib/test/ets_SUITE_data/visualize_throughput.html`.
[1]
More Scalable Ordered Set for ETS Using Adaptation.
In Thirteenth ACM SIGPLAN workshop on Erlang (2014).
Kjell Winblad and Konstantinos Sagonas.
https://doi.org/10.1145/2633448.2633455
http://www.it.uu.se/research/group/languages/software/ca_tree/erlang_paper.pdf
[2]
On the Scalability of the Erlang Term Storage
In Twelfth ACM SIGPLAN workshop on Erlang (2013)
Kjell Winblad, David Klaftenegger and Konstantinos Sagonas
https://doi.org/10.1145/2505305.2505308
http://winsh.me/papers/erlang_workshop_2013.pdf
[3]
The stack optimization works by keeping one preallocated stack instance
in every ordered_set table. This stack is updated so that it contains
the search path in some read operations (e.g., ets:next/2). This makes
it possible for a subsequent ets:next/2 to avoid traversing some nodes
in some cases. Unfortunately, the preallocated stack needs to be flagged
so that it is not updated concurrently by several threads, which causes
bad scalability.
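For illustration, the kind of ordered traversal whose consecutive
ets:next/2 calls the cached search path can speed up (a sketch; the
function name is made up):
%% Each ets:next/2 call may reuse the search path cached by the
%% previous call when the stack optimization is active.
print_all(Table) -> print_all(Table, ets:first(Table)).
print_all(_Table, '$end_of_table') -> ok;
print_all(Table, Key) ->
    io:format("~p~n", [Key]),
    print_all(Table, ets:next(Table, Key)).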
|
|
* rickard/dist-entry-gc-fix/OTP-15279:
Prevent inconsistent node lists
|
|
If net_kernel "forgets" to abort a connection (as it currently might),
the garbage collection of a distribution entry could cause node lists
to enter an inconsistent state.
|
|
process_info(self(), ...)
It is possible that a process has to yield before completing the process_info BIF when it runs out of reductions. If the BIF is called by the process itself, it does not send a signal but executes in the context of the calling process. If it has to yield, it turns the F_LOCAL_SIGS_ONLY flag on, which means new signals won't be fetched from the outer message queue.
When the same process needs to execute dirty system code (e.g. dirty GC), it has to run on a dirty scheduler. However, signals enqueued into the outer queue cause it to be rescheduled on a normal scheduler. F_LOCAL_SIGS_ONLY prevents delivery of outer-queue signals, creating an endless rescheduling loop.
This commit disengages F_LOCAL_SIGS_ONLY if the process needs to execute dirty code, in order to complete signal delivery and allow the process to be moved to a dirty run queue.
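For reference, the call shapes involved (the item list is illustrative;
per the text above, only the non-self variant is signal-based):
Other = spawn(fun() -> receive stop -> ok end end),
%% Self-directed: executes in the caller's own context, no signal sent.
SelfInfo = process_info(self(), [messages, memory]),
%% Directed at another process: goes through the signal machinery.
OtherInfo = process_info(Other, [messages, memory]).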
|
|
causing erlang:memory to report too much ets memory.
|
|
Sometimes when building a tuple, there is no way to avoid an
extra `move` instruction. Consider this code:
make_tuple(A) -> {ok,A}.
The corresponding BEAM code looks like this:
{test_heap,3,1}.
{put_tuple,2,{x,1}}.
{put,{atom,ok}}.
{put,{x,0}}.
{move,{x,1},{x,0}}.
return.
To avoid overwriting the source register `{x,0}`, a `move`
instruction is necessary.
The problem doesn't exist when building a list:
%% build_list(A) -> [A].
{test_heap,2,1}.
{put_list,{x,0},nil,{x,0}}.
return.
Introduce a new `put_tuple2` instruction that builds a tuple in a
single instruction, so that the `move` instruction can be eliminated:
%% make_tuple(A) -> {ok,A}.
{test_heap,3,1}.
{put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
return.
Note that the BEAM loader already combines `put_tuple` and `put`
instructions into an internal instruction similar to `put_tuple2`.
Therefore the introduction of the new instruction will not speed up
execution of tuple building itself, but it will be less work for
the loader to load the new instruction.
|
|
* maint:
ops.tab: Fix potentially unsafe optimization of raise/2
|
|
The operands for the raise/2 instruction are almost always in x(2) and
x(1). Therefore the loader translates the raise/2 instruction to an
i_raise/0 instruction which uses the values in x(2) and x(1). If the
operands happen to be in other registers, the loader inserts move/2
instructions to move them to x(2) and x(1).
The problem is that x(3) is used as a temporary register when
generating the move/2 instructions. That is unsafe if the
Value operand for raise/2 is x(3).
Thus:
raise x(0) x(3)
will be translated to:
move x(0) x(3)
move x(3) x(1)
move x(3) x(2)
i_raise
The Trace will be written to both x(2) and x(1).
The current compiler will never use x(3) for the Value operand,
so there is no need to patch previous releases. But a future compiler
version might allocate registers differently.
|
|
process_info(self(), ...)
It is possible that a process has to yield before completing the process_info BIF when it runs out of reductions. If the BIF is called by the process itself, it does not send a signal but executes in the context of the calling process. If it has to yield, it turns the F_LOCAL_SIGS_ONLY flag on, which means new signals won't be fetched from the outer message queue.
When the same process needs to execute dirty system code (e.g. dirty GC), it has to run on a dirty scheduler. However, signals enqueued into the outer queue cause it to be rescheduled on a normal scheduler. F_LOCAL_SIGS_ONLY prevents delivery of outer-queue signals, creating an endless rescheduling loop.
This commit disengages F_LOCAL_SIGS_ONLY if the process needs to execute dirty code, in order to complete signal delivery and allow the process to be moved to a dirty run queue.
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
Fix missing 'in' trace events during 'running' trace
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
Fix missing 'in' trace events during 'running' trace
|
|
* rickard/running-trace-fix/ERL-713/OTP-15269:
Fix missing 'in' trace events during 'running' trace
|
|
'in' trace events could be lost when a process had to be
rescheduled on another scheduler type (normal <-> dirty).
|
|
* bjorn/compiler/ssa:
Travis CI: Run the SSA linter in the Linux64SmokeTest build
Remove retired compiler passes
Introduce a new SSA-based intermediate format
hipe_beam_to_icode: Correct translation of get_map_elements
beam_dead: Remove shortcut of binary matching instruction
beam_bs: Remove optimizations that are easier done on SSA format
Don't run unsafe compiler passes
Simplify optimizations by introducing is_nil late
beam_utils: Make is_tagged_tuple a pure test
beam_except: Enhance recognition of function_clause exceptions
beam_validator: Infer the types of copies in a smarter way
beam_validator: Improve merge of cons and literal list
beam_validator: Strengthen validation of func_info
beam_validator: Allow get_tuple_element before dsetelement
beam_validator: Don't transfer state to labels that can't be reached
beam_validator: Improve type analysis for tuples
beam_validator: Be more careful when updating try/catch state
beam_trim: Handle an empty list of instructions
v3_core: Number argument variables in ascending order
Teach binary instructions to use Y registers as destination
OTP-14894
|
|
Do not allocate good and bad shifts for single-byte lookups
|
|
* max-au/dist_msg_too_long:
Cleanup unused dist output buf immediately instead of at GC
Throw 'system_limit' when distribution message size exceeds INT_MAX instead of crashing the emulator with 'Absurdly large distribution data buffer'
|
|
* maint:
Fix incoming suspend monitor down
|
|
* rickard/fix-suspend-monitor-down/OTP-15237/ERL-704:
Fix incoming suspend monitor down
|
|
An incoming suspend monitor down wasn't handled correctly when the
local monitor half had been removed, resulting in an emulator crash.
|
|
The new code generator will use Y registers as destinations for
binary construction and matching instructions. v3_codegen would
always first store terms in an X register, leaving it to the
optimization passes to optimize away the extra moves.
|
|
Single-byte lookups always rely on `memchr` and never really use
the good and bad shift arrays.
|
|
Optimize binary match, from 10% faster up to 70x faster
|
|
* rickard/full-cache-nif-env/OTP-15223/ERL-695:
Fix caching of NIF environment when executing dirty
# Conflicts:
# erts/emulator/beam/erl_nif.c
|
|
* dotsimon/ref_ordering_bug/OTP-15225:
Fixed #Ref ordering bug
Test #Ref ordering in lists and ets
|
|
* maint:
Fix caching of NIF environment when executing dirty
|
|
* rickard/full-cache-nif-env/OTP-15223/ERL-695:
Fix caching of NIF environment when executing dirty
|
|
* maint:
Fixed #Ref ordering bug
Test #Ref ordering in lists and ets
|
|
The idea is to use memchr on the first lookup for
binary:match/2 and also after every match on binary:matches/2.
We only use memchr in these cases because benchmarks
showed that using memchr even when we had false positives
could negatively affect performance.
This speeds up binary matching and binary splitting by 4x
in some cases and by 70x in other scenarios (when the last
character in the needle does not occur in the subject).
The reason to use memchr is that it is highly optimized
on most modern operating systems, often using SIMD
operations.
The implementation uses the reduction count to figure out
how many bytes should be read with memchr. We could increase
those numbers but they do not seem to make a large difference.
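The Erlang-visible operations this affects, with illustrative values:
{2,3} = binary:match(<<"aaxyzbb">>, <<"xyz">>),
[{0,1},{2,1},{4,1}] = binary:matches(<<"ababa">>, <<"a">>),
[<<"key">>,<<"value">>] = binary:split(<<"key=value">>, <<"=">>).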
|
|
Do not allocate a new map when the value is the same
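If the subject line means what it suggests, updating a map key with the
value it already holds should hand back the original map. A hedged
sketch using erts_debug:same/2 (which tests whether two terms are the
same in-memory object); the expected result is an assumption based on
the subject line, not confirmed by this log:
M0 = #{key => value},
M1 = M0#{key := value}, %% same value: presumably no new map allocated
true = erts_debug:same(M0, M1). %% assumed outcome after this change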
|
|
A lot of erts-internal messages, used behind APIs to implement
non-blocking calls (e.g. port_command), would cause the seq_trace
token to be cleared from the caller when it should not be.
This commit fixes that and adds asserts that make sure that all
messages sent have the correct token set.
Fixes: ERL-602
|