|
For some outgoing messages (not responses) the following
errors have been corrected:
* encrypted: logged incorrectly; the v3 header and the scoped PDU
should have been written, but the message was logged as-is (encrypted),
making conversion impossible.
* unencrypted: the messages were not logged at all.
OTP-15287 (ERIERL-206)
|
|
If conversion of an Audit Trail Log (ATL) entry failed,
this could result in an abort of the entire conversion,
not just the one entry. This has now been improved so
that such a failure instead results in an error message
being written to the conversion stream. Furthermore, we now
keep track of the number of entries that we succeed and fail to convert.
OTP-15287 (ERIERL-206)
|
|
|
|
* maint:
public_key: Remove strange and unused(?) DSAPrivateKey from verify/5
|
|
* hans/public_key/DSAPrivateKey_in_verify/OTP-15284:
public_key: Remove strange and unused(?) DSAPrivateKey from verify/5
|
|
* maint:
crypto: Bug fix - crypto:next_iv regarding aes_ige256
crypto: Bug fix - blowfish_cbc allowed in crypto:next_iv
|
|
|
|
|
|
|
|
ERL-724: "During a 'gentle' shutdown, supervisors unlink from their
children before sending shutdown signals to them. This can lead to a
race condition in supervision trees, when the timeout for gentle
shutdown of a parent supervisor expires and it kills a child
supervisor that has just unlinked from a child of its own, leaving the
child supervisor's own child still running after its supervisor is
killed."
This commit adds a warning about this in the documentation.
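As a rough illustration (the module names here are made up), the warning
concerns nested trees like the one sketched below, where the parent's
shutdown timeout for a child supervisor can expire while that child
supervisor is still stopping its own children:
%% In the parent supervisor's init/1 (illustration only, not the fix):
init([]) ->
    ChildSup = #{id => child_sup,
                 start => {child_sup, start_link, []},
                 type => supervisor,
                 %% A finite timeout means the parent may kill child_sup
                 %% while child_sup is still shutting down its own children;
                 %% the documented recommendation for supervisor children
                 %% is shutdown => infinity.
                 shutdown => 5000},
    {ok, {#{strategy => one_for_one}, [ChildSup]}}.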
|
|
Implementations of TLS 1.3 which choose to support prior versions of
TLS SHOULD support TLS 1.2. That is, a TLS 1.3 ClientHello shall
advertise support for TLS 1.2 ciphers in order to be able to connect
to TLS 1.2 servers.
This commit changes the list of the advertised cipher suites to
include old TLS 1.2 ciphers.
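As a small usage sketch (the host name is made up), this is the
interoperability case the change is about: a client with TLS 1.3 support
connecting to a server that only negotiates TLS 1.2:
{ok, _} = application:ensure_all_started(ssl),
%% The ClientHello must also offer TLS 1.2 cipher suites for this to work.
{ok, Sock} = ssl:connect("tls12.example.com", 443, [], 5000),
{ok, Info} = ssl:connection_information(Sock, [protocol, selected_cipher_suite]),
%% Info is expected to report protocol 'tlsv1.2' here.
ok = ssl:close(Sock).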
Change-Id: Iaece3ac4b66a59dfbe97068b682d6010d74522b8
|
|
TLS_AES_128_GCM_SHA256 = {0x13,0x01}
TLS_AES_256_GCM_SHA384 = {0x13,0x02}
TLS_CHACHA20_POLY1305_SHA256 = {0x13,0x03}
Change-Id: I3406aaedac812fc43519ff31e5f00d26e375c5d5
|
|
* peterdmv/ssl/add_signature_algorithms:
ssl: Use 'HighestVersion' instead of extra function call
ssl: Add new extension with encode/decode functions
ssl: Format code in handle options
Change-Id: Iba3600edc86dc646a7bbabf550d88e7884877e18
|
|
* ingela/ssl/property-tests:
ssl: Correct compression decoding
ssl: Add property tests framework
ssl: Fix typo
|
|
* maint:
Update PCRE from version 8.41 to version 8.42
|
|
* rickard/pcre-8.42/OTP-15217:
Update PCRE from version 8.41 to version 8.42
|
|
* maint:
Updated OTP version
Update release notes
Update version numbers
kernel: Fix missing abort_connection in net_kernel
Prevent inconsistent node lists
Fix an endless rescheduling loop when a process is executing process_info(self(), ...)
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
kernel: Fix missing abort_connection in net_kernel
Prevent inconsistent node lists
Fix an endless rescheduling loop when a process is executing process_info(self(), ...)
|
|
|
|
Correct dialyzer spec for key option
OTP-15281
|
|
Property testing revealed a decoding error of "compression_methods"
in the ClientHello. As we do not implement any compression methods,
this has no practical impact.
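For reference, a minimal sketch (a made-up helper, not the decoder in the
ssl application) of how the compression_methods vector is laid out on the
wire: a one-byte length followed by that many method bytes, where clients
normally send only the null method (0):
decode_compression_methods(<<Len, Methods:Len/binary, Rest/binary>>) ->
    {binary_to_list(Methods), Rest}.
%% Example: <<1, 0, RestOfHello/binary>> decodes to {[0], RestOfHello}.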
|
|
|
|
Change-Id: I7521cd4e83f881d3caeae8faf2dd8108db15aa7e
|
|
Change-Id: I8a5c11b3503b44cfc6cbd6e4fd8ff3005a8669dd
|
|
|
|
of repeated table opts
and waiting for workers
|
|
The current ETS ordered_set implementation can quickly become a
scalability bottleneck on multicore machines when an application updates
an ordered_set table from concurrent processes [1][2]. The current
implementation is based on an AVL tree protected from concurrent writes
by a single readers-writer lock. Furthermore, the current implementation
has an optimization, called the stack optimization [3], that can improve
the performance when only a single process accesses a table but can
cause bad scalability even in read-only scenarios. It is possible to
pass the option {write_concurrency, true} to ets:new/2 when creating an
ETS table of type ordered_set, but without this commit that option has
no effect for tables of type ordered_set. The new ETS ordered_set
implementation, added by this commit, is only activated when one passes
the options ordered_set and {write_concurrency, true} to the ets:new/2
function. Thus, the previous ordered_set implementation (from here on
called the default implementation) can still be used in applications
that do not benefit from the new implementation. The benchmark results
on the following web page show that the new implementation is many times
faster than the old implementation in some scenarios and that the old
implementation is still better than the new implementation in some
scenarios.
http://winsh.me/ets_catree_benchmark/ets_ca_tree_benchmark_results.html
The new implementation is expected to scale better than the default
implementation when concurrent processes use the following ETS
operations to operate on a table:
delete/2, delete_object/2, first/1, insert/2 (single object),
insert_new/2 (single object), lookup/2, lookup_element/2, member/2,
next/2, take/2 and update_element/3 (single object).
Currently, the new implementation does not have scalable support for the
other operations (e.g., select/2). However, when these operations are
used infrequently, the new implementation may still scale better than the
default implementation, as the benchmark results at the URL above show.
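For illustration, a table using the new implementation would be created as
sketched below (the table name is made up); the operations listed above can
then be used from concurrent processes as usual:
%% Only the combination ordered_set + {write_concurrency, true}
%% activates the new implementation.
T = ets:new(hypothetical_table, [ordered_set, public,
                                 {write_concurrency, true}]),
true = ets:insert(T, {some_key, some_value}),       %% single object
[{some_key, some_value}] = ets:lookup(T, some_key),
true = ets:delete(T, some_key).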
Description of the New Implementation
-------------------------------------
The new implementation is based on a data structure which is called the
contention adapting search tree (CA tree for short). The following
publication contains a detailed description of the CA tree:
A Contention Adapting Approach to Concurrent Ordered Sets
Journal of Parallel and Distributed Computing, 2018
Kjell Winblad and Konstantinos Sagonas
https://doi.org/10.1016/j.jpdc.2017.11.007
http://www.it.uu.se/research/group/languages/software/ca_tree/catree_proofs.pdf
A discussion of how the CA tree can be used as an ETS back-end can be
found in another publication [1]. The CA tree is a data structure that
dynamically changes its synchronization granularity based on detected
contention. Internally, the CA tree uses instances of a sequential data
structure to store items. The CA tree implementation contained in this
commit uses the same AVL tree implementation as is used for the default
ordered set implementation. This AVL tree implementation is reused so
that much of the existing code to implement the ETS operations can be
reused.
Tests
-----
The ETS tests in `lib/stdlib/test/ets_SUITE.erl` have been extended to
also test the new ordered_set implementation. The function
ets_SUITE:throughput_benchmark/0 has also been added to this file. This
function can be used to measure and compare the performance of the
different ETS table types and options. This function writes benchmark
data to standard output that can be visualized by the HTML page
`lib/stdlib/test/ets_SUITE_data/visualize_throughput.html`.
[1]
More Scalable Ordered Set for ETS Using Adaptation.
In Thirteenth ACM SIGPLAN workshop on Erlang (2014).
Kjell Winblad and Konstantinos Sagonas.
https://doi.org/10.1145/2633448.2633455
http://www.it.uu.se/research/group/languages/software/ca_tree/erlang_paper.pdf
[2]
On the Scalability of the Erlang Term Storage
In Twelfth ACM SIGPLAN workshop on Erlang (2013)
Kjell Winblad, David Klaftenegger and Konstantinos Sagonas
https://doi.org/10.1145/2505305.2505308
http://winsh.me/papers/erlang_workshop_2013.pdf
[3]
The stack optimization works by keeping one preallocated stack instance
in every ordered_set table. This stack is updated so that it contains
the search path in some read operations (e.g., ets:next/2). This makes
it possible for a subsequent ets:next/2 to avoid traversing some nodes
in some cases. Unfortunately, the preallocated stack needs to be flagged
so that it is not updated concurrently by several threads, which causes
bad scalability.
|
|
|
|
|
|
|
|
|
|
* sverker/erts/ets-memstat-false-leak/ERL-720/OTP-15278:
erts: Refactor ets FixedDeletion allocations
erts: Fix ets memstat false leak of FixedDeletion
|
|
|
|
Introduce a put_tuple2 instruction
|
|
Change-Id: I997fa8808eaf48aad24a7097b82571be9f0ee252
|
|
* ingela/ssl/initial-tls-1.3-functions:
ssl: Initial cipher suites adoption for TLS-1.3
ssl: Add new TLS-1.3 Alerts
ssl: Add initial TLS 1.3 hanshake encode/decode support
|
|
This commit filters out cipher suites that are not to be used in TLS-1.3.
We still need to add new cipher suites for TLS-1.3 and possibly
add new information to the suite data structure.
|
|
|
|
|
|
|
|
Erl compare ext lists bug (ERL-705)
|
|
|
|
|
|
|
|
Implement socket options recvtclass, recvtos, recvttl and pktoptions.
Document the implemented socket options, new types and message formats.
The options recvtclass, recvtos and recvttl are boolean options that
when activated (true) for a socket will cause ancillary data to be
received through recvmsg(). This applies to packet-oriented sockets
(UDP and SCTP).
The required options for this feature were recvtclass and recvtos;
recvttl was only added to test that the ancillary data parsing
handled multiple data items in one message correctly.
These options do not work on Windows, since ancillary data
is not handled by the Winsock2 API.
For stream sockets (TCP) there is no clear connection between
a received packet and what is returned when reading data from
the socket, so recvmsg() is not useful. It is possible to get
the same ancillary data through a getsockopt() call with
the IPv6 socket option IPV6_PKTOPTIONS, on Linux named
IPV6_2292PKTOPTIONS after the now obsoleted RFC where it originated.
(Unfortunately, RFC 3542, which obsoletes it, explicitly undefines
this way to get packet ancillary data from a stream socket.)
Linux also has got a way to get packet ancillary data for IPv4
TCP sockets through a getsockopt() call with IP_PKTOPTIONS,
which appears to be Linux specific.
This implementation uses a flag field in the inet_drv.c socket
internal data that records if any setsockopt() call with recvtclass,
recvtos or recvttl (IPV6_RECVTCLASS, IP_RECVTOS or IP_RECVTTL)
has been activated. If so recvmsg() is used instead of recvfrom().
Ancillary data is delivered to the application by a new return
tuple format from gen_udp:recv/2,3 containing a list of
ancillary data tuples [{tclass,TCLASS} | {tos,TOS} | {ttl,TTL}],
as returned by recvmsg(). For a socket in active mode a new
message format, containing the ancillary data list, delivers
the data in the same way.
For gen_sctp the ancillary data is delivered in the same way,
except that the gen_sctp return tuple format already contained
an ancillary data list so there are just more possible elements
when using these socket options. Note that the active mode
message format has got an extra tuple level for the ancillary
data compared to what is now implemented in gen_udp.
The gen_sctp active mode format was considered to be the odd one
- now all tuples containing ancillary data are flat,
except for gen_sctp active mode.
Note that testing has not shown that Linux SCTP sockets deliver
any ancillary data for these socket options, so it is probably
not implemented yet. Remains to be seen what FreeBSD does...
For gen_tcp inet:getopts([pktoptions]) will deliver the latest
received ancillary data for any activated socket option recvtclass,
recvtos or recvttl, on platforms where IP_PKTOPTIONS is defined
for an IPv4 socket, or where IPV6_PKTOPTIONS or IPV6_2292PKTOPTIONS
is defined for an IPv6 socket. It will be delivered as a
list of ancillary data items in the same way as for gen_udp
(and gen_sctp).
On some platforms, e.g. the BSDs, when you activate IP_RECVTOS
you get ancillary data tagged IP_RECVTOS with the TOS value,
but on Linux you get ancillary data tagged IP_TOS with the
TOS value. Linux follows the style of RFC 2292, and the BSDs
use an older notion. For RFC 2292, which defines the IP_PKTOPTIONS
socket option, it is more logical to tag each item with the tag
that belongs to the item itself than with the tag used to request
the item. Therefore this implementation translates all
BSD style ancillary data tags to the corresponding Linux style
data tags, so the application will only see the tags 'tclass',
'tos' and 'ttl' on all platforms.
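As a rough sketch of the gen_udp usage described above (assuming a platform
where recvtos is supported; actual values are platform dependent):
{ok, S} = gen_udp:open(0, [binary, {active, false}, {recvtos, true}]),
case gen_udp:recv(S, 0, 5000) of
    {ok, {Addr, Port, AncData, Packet}} ->
        %% AncData is a list such as [{tos, Tos}] since recvtos is active
        io:format("~p:~p anc=~p data=~p~n", [Addr, Port, AncData, Packet]);
    {ok, {Addr, Port, Packet}} ->
        %% Three-element form: no ancillary data options were active
        io:format("~p:~p data=~p~n", [Addr, Port, Packet]);
    {error, Reason} ->
        io:format("recv failed: ~p~n", [Reason])
end,
gen_udp:close(S).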
|
|
Fix type spec of ms_transform:parse_trans/2
|
|
causing erlang:memory to report too much ets memory.
|
|
Sometimes when building a tuple, there is no way to avoid an
extra `move` instruction. Consider this code:
make_tuple(A) -> {ok,A}.
The corresponding BEAM code looks like this:
{test_heap,3,1}.
{put_tuple,2,{x,1}}.
{put,{atom,ok}}.
{put,{x,0}}.
{move,{x,1},{x,0}}.
return.
To avoid overwriting the source register `{x,0}`, a `move`
instruction is necessary.
The problem doesn't exist when building a list:
%% build_list(A) -> [A].
{test_heap,2,1}.
{put_list,{x,0},nil,{x,0}}.
return.
Introduce a new `put_tuple2` instruction that builds a tuple in a
single instruction, so that the `move` instruction can be eliminated:
%% make_tuple(A) -> {ok,A}.
{test_heap,3,1}.
{put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
return.
Note that the BEAM loader already combines `put_tuple` and `put`
instructions into an internal instruction similar to `put_tuple2`.
Therefore the introduction of the new instruction will not speed up
execution of tuple building itself, but it will be less work for
the loader to load the new instruction.
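To reproduce listings like the ones above, one can compile a small module
(the module name here is made up) to BEAM assembly; the exact instructions
shown depend on the compiler version:
%% make_tuple_demo.erl
-module(make_tuple_demo).
-export([make_tuple/1, build_list/1]).
make_tuple(A) -> {ok,A}.
build_list(A) -> [A].
%% Compile with "erlc -S make_tuple_demo.erl" and inspect the generated
%% BEAM assembly in make_tuple_demo.S.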
|
|
* maint:
crypto: Let otp_test_engine only add what is needed OpenSSL_add_all_algorithms hangs on some test machines
|
|
* hans/crypto/init_test_engine_fix:
crypto: Let otp_test_engine only add what is needed OpenSSL_add_all_algorithms hangs on some test machines
|