Age | Commit message | Author |
|
* maint:
compiler: Forward +source flag to epp and fix bug in +deterministic
epp: Allow user to set source name independently of input file name
|
|
Note that this does *not* affect -include()'d files or the -file()
directive.
|
|
* sverker/ets-test-cuddle:
ets_SUITE: Remove more redundant option combos
ets_SUITE: Try to avoid redundant option combos
ets_SUITE: Reduce table type combos by removing "void"
ets_SUITE: Optimize throughput_benchmark
|
|
meta_wb
smp_insert
smp_fixed_delete
smp_select_delete
|
|
to reduce test times
|
|
Avoid repeating the same tests for [] and [set].
Test case 'default' verifies 'set' to be the default type.
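What the 'default' test case checks can be illustrated directly in the shell; a minimal sketch, not the actual test code (the table name is made up):

    T = ets:new(some_table, []),   % no type option given
    set = ets:info(T, type).       % 'set' is the default table type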
|
|
by populating the table with the help of a random number generator
that creates a series of unique integers.
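A minimal sketch of such a population scheme, assuming a fixed seed for reproducibility; the helper functions and seed values are made up and are not the actual ets_SUITE code:

    %% Populate Table with N entries keyed by a reproducible series of
    %% unique integers from a seeded generator (sketch only).
    populate(Table, N) ->
        _ = rand:seed(exrop, {123, 456, 789}),   % fixed seed => same series every run
        populate_loop(Table, N, #{}).

    populate_loop(_Table, 0, _Seen) ->
        ok;
    populate_loop(Table, N, Seen) ->
        Int = rand:uniform(1 bsl 60),            % large range makes duplicates rare
        case maps:is_key(Int, Seen) of
            true  -> populate_loop(Table, N, Seen);          % retry on duplicate
            false -> true = ets:insert(Table, {Int, N}),
                     populate_loop(Table, N - 1, Seen#{Int => true})
        end.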
|
|
Add a more scalable ETS ordered_set implementation
|
|
OTP-14737
* raimo/stdlib/gen_statem-cleanup:
Improve user's guide on time-outs
Clean up and optimize code and doc
|
|
OTP-14461 - New 'rand' algorithm: Xoroshiro928** also for 'crypto'
Implement a new 'rand' algorithm named 'exro928ss' and a new 'crypto' plugin for 'rand' named 'crypto_aes'.
Both are based on Xoroshiro928**, which is derived from Xoroshiro1024**, modified to use 58-bit words for performance reasons in the Erlang VM. Xoroshiro1024** combines the Xoroshiro1024 generator with the StarStar scrambler from the 2018 paper "Scrambled Linear Pseudorandom Number Generators" by David Blackman and Sebastiano Vigna.
This generator and scrambler combination shows no systematic weaknesses in standard statistical tests such as TestU01 (BigCrush) and PractRand, unlike the * and + scramblers previously used in the 'rand' module, which exhibit statistical weaknesses in the lowest bits.
The 'crypto' plugin uses AES-256 as the scrambler and Xoroshiro928 as the generator, which gives the same very long period and jump functions as Xoroshiro928**, but with a cryptographically secure scrambler there are no detectable statistical weaknesses regardless of how the generated numbers are used.
The speed of 'exro928ss' is only about 30-50% slower than the default fast 'rand' algorithm, but its state is roughly twice as large and it produces about 8 times as much garbage per iteration.
The (amortized) speed of 'crypto_aes' is about half that of the default fast 'rand' algorithm; it achieves this by doing the encryption in batches and caching the results, which is also why its state is much larger.
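A hedged usage sketch via the documented 'rand' API; the commented-out 'crypto' plugin call reflects my reading of this commit and may differ in detail:

    %% Seed the process-local 'rand' state with the new algorithm and draw
    %% some numbers:
    _ = rand:seed(exro928ss),
    F = rand:uniform(),           % float in [0.0, 1.0)
    N = rand:uniform(1000000),    % integer in [1, 1000000]
    %% The cryptographically strong variant would be enabled through the
    %% 'crypto' plugin, e.g. (assumption about the exact call):
    %% _ = crypto:rand_seed_alg(crypto_aes, SeedBinary),
    {F, N}.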
|
|
Conflicts:
lib/crypto/doc/src/crypto.xml
lib/crypto/src/crypto.erl
|
|
Conflicts:
lib/crypto/src/crypto.erl
|
|
* maint:
Update PCRE from version 8.41 to version 8.42
|
|
* rickard/pcre-8.42/OTP-15217:
Update PCRE from version 8.41 to version 8.42
|
|
of repeated table opts
and waiting for workers
|
|
The current ETS ordered_set implementation can quickly become a
scalability bottleneck on multicore machines when an application updates
an ordered_set table from concurrent processes [1][2]. The current
implementation is based on an AVL tree protected from concurrent writes
by a single readers-writer lock. Furthermore, the current implementation
has an optimization, called the stack optimization [3], that can improve
the performance when only a single process accesses a table but can
cause bad scalability even in read-only scenarios. It has always been
possible to pass the option {write_concurrency, true} to ets:new/2 when
creating an ETS table of type ordered_set, but without this commit the
option has no effect on such tables. The new ETS ordered_set
implementation, added by this commit, is only activated when one passes
the options ordered_set and {write_concurrency, true} to the ets:new/2
function. Thus, the previous ordered_set implementation (from here on
called the default implementation) can still be used in applications
that do not benefit from the new implementation. The benchmark results
on the following web page show that the new implementation is many times
faster than the old implementation in some scenarios and that the old
implementation is still better than the new implementation in some
scenarios.
http://winsh.me/ets_catree_benchmark/ets_ca_tree_benchmark_results.html
The new implementation is expected to scale better than the default
implementation when concurrent processes use the following ETS
operations to operate on a table:
delete/2, delete_object/2, first/1, insert/2 (single object),
insert_new/2 (single object), lookup/2, lookup_element/2, member/2,
next/2, take/2 and update_element/3 (single object).
Currently, the new implementation does not have scalable support for the
other operations (e.g., select/2). However, when these operations are
used infrequently, the new implementation may still scale better than the
default implementation, as the benchmark results at the URL above show.
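A minimal usage sketch of the option combination that activates the new implementation; the table name, keys and worker count are made up:

    %% Create an ordered_set that may use the new, more scalable backend and
    %% update it from several concurrent processes.
    Tab = ets:new(prices, [ordered_set, public, {write_concurrency, true}]),
    Parent = self(),
    Workers = [spawn_link(fun() ->
                              [true = ets:insert(Tab, {Key, Key * Key})
                               || Key <- lists:seq(I, 10000, 8)],
                              Parent ! {done, self()}
                          end)
               || I <- lists:seq(1, 8)],
    [receive {done, W} -> ok end || W <- Workers],
    10000 = ets:info(Tab, size).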
Description of the New Implementation
-------------------------------------
The new implementation is based on a data structure which is called the
contention adapting search tree (CA tree for short). The following
publication contains a detailed description of the CA tree:
A Contention Adapting Approach to Concurrent Ordered Sets
Journal of Parallel and Distributed Computing, 2018
Kjell Winblad and Konstantinos Sagonas
https://doi.org/10.1016/j.jpdc.2017.11.007
http://www.it.uu.se/research/group/languages/software/ca_tree/catree_proofs.pdf
A discussion of how the CA tree can be used as an ETS back-end can be
found in another publication [1]. The CA tree is a data structure that
dynamically changes its synchronization granularity based on detected
contention. Internally, the CA tree uses instances of a sequential data
structure to store items. The CA tree implementation contained in this
commit uses the same AVL tree implementation as the default ordered_set
implementation, so that much of the existing code implementing the ETS
operations can be reused.
Tests
-----
The ETS tests in `lib/stdlib/test/ets_SUITE.erl` have been extended to
also test the new ordered_set implementation. The function
ets_SUITE:throughput_benchmark/0 has also been added to this file. This
function can be used to measure and compare the performance of the
different ETS table types and options. This function writes benchmark
data to standard output that can be visualized by the HTML page
`lib/stdlib/test/ets_SUITE_data/visualize_throughput.html`.
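With ets_SUITE compiled and on the code path, the benchmark mentioned above can be started from an Erlang shell (sketch; build steps and paths omitted):

    %% Writes benchmark data to standard output; the output can then be
    %% pasted into visualize_throughput.html.
    ets_SUITE:throughput_benchmark().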
[1]
More Scalable Ordered Set for ETS Using Adaptation.
In Thirteenth ACM SIGPLAN workshop on Erlang (2014).
Kjell Winblad and Konstantinos Sagonas.
https://doi.org/10.1145/2633448.2633455
http://www.it.uu.se/research/group/languages/software/ca_tree/erlang_paper.pdf
[2]
On the Scalability of the Erlang Term Storage
In Twelfth ACM SIGPLAN workshop on Erlang (2013)
Kjell Winblad, David Klaftenegger and Konstantinos Sagonas
https://doi.org/10.1145/2505305.2505308
http://winsh.me/papers/erlang_workshop_2013.pdf
[3]
The stack optimization works by keeping one preallocated stack instance
in every ordered_set table. This stack is updated so that it contains
the search path in some read operations (e.g., ets:next/2). This makes
it possible for a subsequent ets:next/2 to avoid traversing some nodes
in some cases. Unfortunately, the preallocated stack needs to be flagged
so that it is not updated concurrently by several threads, which causes
bad scalability.
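For reference, a typical traversal pattern that benefits from this stack optimization; plain ets API, nothing new here:

    %% Full in-order traversal using first/next; consecutive next/2 calls can
    %% reuse the preallocated search-path stack in the default implementation.
    traverse(Tab) ->
        traverse(Tab, ets:first(Tab)).

    traverse(_Tab, '$end_of_table') ->
        ok;
    traverse(Tab, Key) ->
        %% ... use Key here ...
        traverse(Tab, ets:next(Tab, Key)).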
|
|
causing erlang:memory to report too much ets memory.
|
|
* maint:
Updated OTP version
Update release notes
stdlib: Let dets:open_file() crash when given raw file name
Fix kernel_app doc logger_level default from info to notice
Change-Id: I581946ac5cec6574ed79017e2987039c1fdcf80a
|
|
uabboli/hb/stdlib/fix_dets_file_name/OTP-15253/ERL-555
stdlib: Let dets:open_file() crash when given raw file name
|
|
* maint:
stdlib: Correct contracts in module io_lib_format
stdlib: Improve error handling in module io_lib
|
|
See also ERL-55 and OTP-13229.
|
|
Optimize binary match from 10% up to 70x
|
|
The idea is to use memchr on the first lookup for
binary:match/2 and also after every match on binary:matches/2.
We only use memchr in these cases because benchmarks
showed that using memchr even when we had false positives
could negatively affect performance.
This speeds up binary matching and binary splitting by 4x
in some cases and by 70x in other scenarios (when the last
character in the needle does not occur in the subject).
The reason to use memchr is that it is highly optimized
on most modern operating systems, often using SIMD operations.
The implementation uses the reduction count to figure out
how many bytes should be read with memchr. We could increase
those numbers, but doing so does not seem to make a large difference.
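The change is transparent to callers; a small illustration of the two scenarios mentioned above, using the standard binary API (subject and needles are made up):

    %% Subject contains only the bytes a..h, repeated.
    Subject = binary:copy(<<"abcdefgh">>, 100000),
    %% A needle whose last byte never occurs in the subject: the scenario
    %% with the largest speedup for binary:match/2.
    nomatch = binary:match(Subject, <<"hx">>),
    %% binary:matches/2 also benefits, since memchr is used after every match:
    100000 = length(binary:matches(Subject, <<"gh">>)).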
|
|
I did not find any legitimate use of "can not"; however, I skipped
changing, for example, RFCs archived in the source tree.
|
|
Optimise creation of anonymous functions
|
|
This introduces a similar optimisation for normal funs
to what was introduced for external funs in #1725.
It is possible to allocate the fun as a literal, if it does not capture
the environment (i.e. it does not close over any variables).
Unfortunately it's not possible to do this in the compiler due to
problems with representation of such functions in the `.beam` files.
Fortunately, we can do this in the loader.
Simple evaluation shows that functions that don't capture the
environment constitute over 60% of all funs in the source code of
Erlang/OTP itself.
The only downside is that we lose a meaningful value in the `pid` field
of the fun. The goal of this field, beyond debugging, was to be
able to identify the original node of a function. To be able to still do
this, the functions that are created in the loader are assigned the init
pid as the creator.
To solve issues during startup, `erts_init_process_id` is initially set
to `ERTS_INVALID_PID`, and the described optimisation is skipped while
the value is still uninitialised.
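An illustration of the distinction in plain Erlang (function and variable names are made up): only the first fun below is eligible, since the second closes over a variable:

    make_funs(X) ->
        NoEnv   = fun(A, B) -> A + B end,   % no free variables: can become a literal
        Closure = fun(A) -> A + X end,      % captures X: needs a heap-allocated env
        {NoEnv, Closure}.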
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
Eliminate a crash in the beam_jump pass
stdlib: Fix a 'chars_limit' bug
Fix a race condition when generating async operation ids
Fix internal compiler error for map_get/2
beam_type: Fix unsafe optimization
public_key: Remove moduli 5121 and 7167. Those were added by 598629aeba9de98e8cdf5637043eb34e5d407751 but are not universally supported.
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
Eliminate a crash in the beam_jump pass
stdlib: Fix a 'chars_limit' bug
Fix a race condition when generating async operation ids
Fix internal compiler error for map_get/2
beam_type: Fix unsafe optimization
public_key: Remove moduli 5121 and 7167. Those were added by 598629aeba9de98e8cdf5637043eb34e5d407751 but are not universally supported.
|
|
jhogberg/john/erts/cross-type-carrier-migration/OTP-15063
Allow carrier migration between different allocator types
|
|
uri_string: support key without value in query string
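A hedged sketch of what this means for the uri_string query functions; the exact representation of a valueless key shown in the comments is my assumption:

    %% Dissecting a query where "verbose" carries no value (assumed to come
    %% back as the atom 'true'), and composing it back:
    Pairs = uri_string:dissect_query("lang=erlang&verbose"),
    %% Pairs is expected to be [{"lang","erlang"},{"verbose",true}]
    Query = uri_string:compose_query(Pairs).
    %% Query is expected to be "lang=erlang&verbose"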
|
|