The current ETS ordered_set implementation can quickly become a
scalability bottleneck on multicore machines when an application updates
an ordered_set table from concurrent processes [1][2]. The current
implementation is based on an AVL tree protected from concurrent writes
by a single readers-writer lock. Furthermore, the current implementation
has an optimization, called the stack optimization [3], that can improve
the performance when only a single process accesses a table but can
cause bad scalability even in read-only scenarios. It is possible to
pass the option {write_concurrency, true} to ets:new/2 when creating an
ETS table of type ordered_set, but before this commit the option had no
effect on such tables. The new ETS ordered_set implementation, added by
this commit, is only activated when one passes both the ordered_set
option and the {write_concurrency, true} option to ets:new/2. Thus, the
previous ordered_set implementation (from here on called the default
implementation) can still be used by applications that do not benefit
from the new one. The benchmark results on the following web page show
that the new implementation is many times faster than the old
implementation in some scenarios, while the old implementation is still
faster in others.
http://winsh.me/ets_catree_benchmark/ets_ca_tree_benchmark_results.html
The new implementation is expected to scale better than the default
implementation when concurrent processes use the following ETS
operations to operate on a table:
delete/2, delete_object/2, first/1, insert/2 (single object),
insert_new/2 (single object), lookup/2, lookup_element/2, member/2,
next/2, take/2 and update_element/3 (single object).
Currently, the new implementation does not have scalable support for the
other operations (e.g., select/2). However, when these operations are
used infrequently, the new implementation may still scale better than
the default implementation, as the benchmark results at the URL above
show.
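As an illustration of the intended usage (the table name and workload
below are made up for this example and are not taken from the
benchmark), the new implementation is selected by combining the two
options:

    %% Select the new implementation by combining ordered_set and
    %% {write_concurrency, true}. The table is public so that the
    %% spawned processes are allowed to write to it.
    T = ets:new(example_table, [ordered_set, public, {write_concurrency, true}]),
    [spawn(fun() ->
               [ets:insert(T, {N, self()}) || N <- lists:seq(1, 1000)]
           end) || _ <- lists:seq(1, 8)].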
Description of the New Implementation
-------------------------------------
The new implementation is based on a data structure which is called the
contention adapting search tree (CA tree for short). The following
publication contains a detailed description of the CA tree:
A Contention Adapting Approach to Concurrent Ordered Sets
Journal of Parallel and Distributed Computing, 2018
Kjell Winblad and Konstantinos Sagonas
https://doi.org/10.1016/j.jpdc.2017.11.007
http://www.it.uu.se/research/group/languages/software/ca_tree/catree_proofs.pdf
A discussion of how the CA tree can be used as an ETS back-end can be
found in another publication [1]. The CA tree is a data structure that
dynamically changes its synchronization granularity based on detected
contention. Internally, the CA tree uses instances of a sequential data
structure to store items. The CA tree implementation contained in this
commit uses the same AVL tree implementation as is used for the default
ordered_set implementation. Reusing this AVL tree means that much of the
existing code implementing the ETS operations can be shared.
Tests
-----
The ETS tests in `lib/stdlib/test/ets_SUITE.erl` have been extended to
also test the new ordered_set implementation. The function
ets_SUITE:throughput_benchmark/0 has also been added to this file. It
measures and compares the performance of the different ETS table types
and options, and writes benchmark data to standard output that can be
visualized by the HTML page
`lib/stdlib/test/ets_SUITE_data/visualize_throughput.html`.
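Assuming ets_SUITE is compiled and on the code path, the benchmark can
be run from an Erlang shell as sketched below; the printed lines are
then pasted into the HTML page mentioned above:

    %% Runs the throughput benchmark and prints the benchmark data
    %% that visualize_throughput.html can plot.
    ets_SUITE:throughput_benchmark().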
[1]
More Scalable Ordered Set for ETS Using Adaptation.
In Thirteenth ACM SIGPLAN workshop on Erlang (2014).
Kjell Winblad and Konstantinos Sagonas.
https://doi.org/10.1145/2633448.2633455
http://www.it.uu.se/research/group/languages/software/ca_tree/erlang_paper.pdf
[2]
On the Scalability of the Erlang Term Storage
In Twelfth ACM SIGPLAN workshop on Erlang (2013)
Kjell Winblad, David Klaftenegger and Konstantinos Sagonas
https://doi.org/10.1145/2505305.2505308
http://winsh.me/papers/erlang_workshop_2013.pdf
[3]
The stack optimization works by keeping one preallocated stack instance
in every ordered_set table. This stack is updated so that it contains
the search path in some read operations (e.g., ets:next/2). This makes
it possible for a subsequent ets:next/2 to avoid traversing some nodes
in some cases. Unfortunately, the preallocated stack has to be flagged
so that it is not updated concurrently by several threads, which causes
bad scalability.
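For context, the access pattern that the preallocated stack speeds up is
an ordinary key-order traversal, sketched below (the function names are
illustrative):

    %% Traverse an ordered_set table in key order. The cached search
    %% path lets a following ets:next/2 call avoid traversing part of
    %% the tree.
    traverse(Table) ->
        traverse_loop(Table, ets:first(Table)).

    traverse_loop(_Table, '$end_of_table') ->
        ok;
    traverse_loop(Table, Key) ->
        %% ... use Key ...
        traverse_loop(Table, ets:next(Table, Key)).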
|
|
|
|
* sverker/erts/ets-memstat-false-leak/ERL-720/OTP-15278:
erts: Refactor ets FixedDeletion allocations
erts: Fix ets memstat false leak of FixedDeletion
|
|
Introduce a put_tuple2 instruction
|
|
|
|
Fix bug in compact representation of float_to_list/2
|
|
|
|
causing erlang:memory to report too much ets memory.
|
|
Sometimes when building a tuple, there is no way to avoid an
extra `move` instruction. Consider this code:
make_tuple(A) -> {ok,A}.
The corresponding BEAM code looks like this:
{test_heap,3,1}.
{put_tuple,2,{x,1}}.
{put,{atom,ok}}.
{put,{x,0}}.
{move,{x,1},{x,0}}.
return.
To avoid overwriting the source register `{x,0}`, a `move`
instruction is necessary.
The problem doesn't exist when building a list:
%% build_list(A) -> [A].
{test_heap,2,1}.
{put_list,{x,0},nil,{x,0}}.
return.
Introduce a new `put_tuple2` instruction that builds a tuple in a
single instruction, so that the `move` instruction can be eliminated:
%% make_tuple(A) -> {ok,A}.
{test_heap,3,1}.
{put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
return.
Note that the BEAM loader already combines `put_tuple` and `put`
instructions into an internal instruction similar to `put_tuple2`.
Therefore, the introduction of the new instruction will not speed up
tuple building itself, but it will mean less work for the loader when
loading the new instruction.
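Listings like the ones above can be reproduced by asking the compiler
for a BEAM assembly listing; this is standard compiler functionality and
the module name is only an example:

    %% Writes make_tuple.S containing the BEAM assembly for the module.
    compile:file("make_tuple.erl", ['S']).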
|
|
* maint:
ops.tab: Fix potentially unsafe optimization of raise/2
|
|
The operands for the raise/2 instruction are almost always in x(2) and
x(1). Therefore the loader translates the raise/2 instruction to an
i_raise/0 instruction that uses the values in x(2) and x(1). If the
operands happen to be in other registers, the loader inserts move/2
instructions to move them to x(2) and x(1).
The problem is that x(3) is used as a temporary register when
generating the move/2 instructions. That is unsafe if the
Value operand for raise/2 is x(3).
Thus:
raise x(0) x(3)
will be translated to:
move x(0) x(3)
move x(3) x(1)
move x(3) x(2)
i_raise
The Trace will be written to both x(2) and x(1).
The current compiler will never use x(3) for the Value operand,
so there is no need to patch previous releases. But a future compiler
version might allocate registers differently.
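For reference, one place where the compiler typically emits raise/2 is
the implicit re-throw of an exception in a try ... after expression; the
sketch below uses made-up function names:

    %% If do_work/0 raises an exception, cleanup/0 runs and the
    %% exception is then re-raised, which is where a raise instruction
    %% is emitted.
    with_cleanup() ->
        try
            do_work()
        after
            cleanup()
        end.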
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
Fix missing 'in' trace events during 'running' trace
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
Fix missing 'in' trace events during 'running' trace
|
|
|
|
|
|
* rickard/running-trace-fix/ERL-713/OTP-15269:
Fix missing 'in' trace events during 'running' trace
|
|
* rickard/crypto-configure/OTP-15129:
Fix crypto configure on Darwin
|
|
|
|
'in' trace events could be lost when a process had to be
rescheduled on another scheduler type (normal <-> dirty).
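For context, the 'in'/'out' events come from the 'running' trace flag,
which is enabled roughly as in the sketch below (the function name is
illustrative):

    %% Enables scheduling trace for Pid; 'in' and 'out' trace messages
    %% are sent to the calling process.
    trace_running(Pid) ->
        erlang:trace(Pid, true, [running, {tracer, self()}]),
        ok.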
|
|
|
|
* bjorn/compiler/ssa:
Travis CI: Run the SSA linter in the Linux64SmokeTest build
Remove retired compiler passes
Introduce a new SSA-based intermediate format
hipe_beam_to_icode: Correct translation of get_map_elements
beam_dead: Remove shortcut of binary matching instruction
beam_bs: Remove optimizations that are easier done on SSA format
Don't run unsafe compiler passes
Simplify optimizations by introducing is_nil late
beam_utils: Make is_tagged_tuple a pure test
beam_except: Enhance recognition of function_clause exceptions
beam_validator: Infer the types of copies in a smarter way
beam_validator: Improve merge of cons and literal list
beam_validator: Strengthen validation of func_info
beam_validator: Allow get_tuple_element before dsetelement
beam_validator: Don't transfer state to labels that can't be reached
beam_validator: Improve type analysis for tuples
beam_validator: Be more careful when updating try/catch state
beam_trim: Handle an empty list of instructions
v3_core: Number argument variables in ascending order
Teach binary instructions to use Y registers as destination
OTP-14894
|
|
Do not allocate good and bad shifts for single byte lookups
|
|
* rickard/crypto-configure/OTP-15129:
Move configuration of crypto to crypto application from erts
|
|
* rickard/parallel-configure/OTP-14625:
Parallel configure
Remove undocumented and unused lazy configure
|
|
In order to be able to handle the runtime library path in crypto, the
DED parts were also broken out into a macro.
|
|
|
|
* max-au/dist_msg_too_long:
Cleanup unused dist output buf immediately instead of at GC
Throw 'system_limit' when distribution message size exceeds INT_MAX instead of crashing the emulator with 'Absurdly large distribution data buffer'
|
|
|
|
* maint:
Fix incoming suspend monitor down
|
|
* rickard/fix-suspend-monitor-down/OTP-15237/ERL-704:
Fix incoming suspend monitor down
|
|
* maint:
erts/time_correction.xml: remove extra closing parenthesis
use ssl:handshake/1 function
|
|
An incoming suspend monitor down wasn't handled correctly when the
local monitor half had been removed, resulting in an emulator crash.
|
|
The new code generator will use Y registers as the destination for
binary construction and matching instructions. v3_codegen would always
store terms in an X register first, leaving it to the optimization
passes to optimize away the extra moves.
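As an illustration (the helper function below is made up), a binary
whose value must survive a later call is kept in a Y register, and the
new code generator can build it there directly instead of building it
in an X register and moving it:

    %% Bin lives across the call to other_work/0 and therefore ends up
    %% in a Y register (stack slot).
    build(A) ->
        Bin = <<A:32>>,
        ok = other_work(),
        Bin.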
|
|
|
|
The single-byte lookups always rely on `memchr` and never really use
the good and bad shift arrays.
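For context, a single-byte lookup is a binary search whose pattern is
exactly one byte, for example:

    %% One-byte pattern: this case is handled with memchr and does not
    %% need the Boyer-Moore shift tables.
    binary:match(<<"key=value">>, <<"=">>).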
|
|
Optimize binary match from 10% up to 70x
|
|
* maint:
Fix compiler crash when compiling double receives
erts: Delete fd from poll-set when closing fd_driver port
|
|
into maint
erts: Delete fd from poll-set when closing fd_driver port
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
syntax_tools: Fix a bug regarding reverting map types.
|
|
* maint-19:
Updated OTP version
Update release notes
Update version numbers
syntax_tools: Fix a bug regarding reverting map types.
|
|
|
|
|
|
* rickard/full-cache-nif-env/OTP-15223/ERL-695:
Fix caching of NIF environment when executing dirty
# Conflicts:
# erts/emulator/beam/erl_nif.c
|
|
* dotsimon/ref_ordering_bug/OTP-15225:
Fixed #Ref ordering bug
Test #Ref ordering in lists and ets
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
crypto: Fix crash in compute_key(ecdh, ...) on badarg
Relax add_table_copy restriction
Fixed #Ref ordering bug
Test #Ref ordering in lists and ets
Do NOT disc_load from ram_copies when master_node is set
ssl: Make sure that a correct cipher suite is selected
ssl: Correct handling of empty server SNI extension
|
|
* maint-20:
Updated OTP version
Update release notes
Update version numbers
crypto: Fix crash in compute_key(ecdh, ...) on badarg
Relax add_table_copy restriction
Fixed #Ref ordering bug
Test #Ref ordering in lists and ets
Do NOT disc_load from ram_copies when master_node is set
ssl: Make sure that a correct cipher suite is selected
ssl: Correct handling of empty server SNI extension
|
|