Age | Commit message | Author |
|
* Remove out-commented code
* Fix obvious typos and bad grammar
* Adhere to the conventions for when to use "%" and "%%".
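For illustration (a made-up snippet, not taken from the asn1 sources), the convention is "%%" for comments on a line of their own and a single "%" for comments trailing code:

    %% Increment the counter (a comment on a line of its own uses "%%").
    bump(N) ->
        N + 1.    % a trailing comment on a code line uses a single "%"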
|
|
Remove blank lines between clauses; use matching instead of
is_list/1 guards.
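For example (hypothetical names, not the actual asn1 code), a clause such as

    emit_all(Args) when is_list(Args) ->
        lists:foreach(fun emit_one/1, Args).

can instead let the clause heads match the list:

    emit_all([]) ->
        ok;
    emit_all([A|As]) ->
        emit_one(A),
        emit_all(As).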
|
|
|
|
Stop using asn1ct_gen:emit/1 with a tuple instead of a list.
Also remove the remaining uses of asn1ct_gen:demit/1.
|
|
The debug option no longer serves any useful purpose.
|
|
That will make code slightly easier to read.
|
|
The code is not covered. The code is also not present
in the PER backend.
Here is a somewhat more formal proof that the code cannot
be reached:
asn1ct_gen_ber_bin_v2:gen_encode_user/3 calls
asn1ct_gen:gen_encode_constructed/4 where Typename is a list
of one element.
asn1ct_gen:gen_encode_constructed/4 will call
asn1ct_gen_ber_bin_v2:gen_encode/3 via asn1ct_gen:gen_types/4.
Note that if InnerType in asn1ct_gen:gen_encode_constructed/4
is 'SEQUENCE OF' or 'SET OF', Typename will be extended to a
list with two elements.
If InnerType in asn1ct_gen:gen_encode_constructed/4 is
'SET', 'SEQUENCE', or 'CHOICE', then asn1ct_gen_ber_bin_v2:gen_encode/3
will be called with the last argument being a #'ComponentType'{}.
asn1ct_gen_ber_bin_v2:gen_encode/3 will in that case extend
Typename before calling itself recursively.
Therefore, Typename is always a list with at least two elements
when the removed code is called.
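Expressed as code, the argument amounts to the following (an illustrative sketch with made-up names, not the actual asn1 source): every path into the inner encode function extends Typename first, so a clause matching a one-element list can never match.

    %% Typename always has at least two elements here, so a clause such
    %% as gen_inner([Single], Type) -> ... would be dead code.
    gen_inner([_, _ | _] = Typename, Type) ->
        do_encode(Typename, Type).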
|
|
ce431409d0daba broke generation of dialyzer suppressions
for per and uper.
While we are at it, add type tests to asn1ct_func:is_used/1
to avoid similar problems in the future.
|
|
Tag numbers above 16383 were not decoded correctly in
ber_decode_tag().
We could fix the problem, but there does not seem to be any need.
First, the only way that high tag numbers can be created is with
manual tagging; after 1994 manual tagging is no longer recommended.
Second, the ASN.1 playground (http://asn1-playground.oss.com) only
supports tags up to 16383 (the same is presumably true for OSS
Nokalva's other tools).
Therefore, clean up the existing code and make it an explicit
'invalid_tag' error when tags above 16383 are encountered
(instead of an implicit 'wrong_tag' error).
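For reference, decoding the high-tag-number form with such a limit can look roughly like this (an illustrative sketch, not the actual ber_decode_tag() code); the tag number is carried in 7-bit groups with a continuation bit, so restricting it to 16383 means accepting at most two such octets:

    decode_tag_number(<<0:1, N:7, Rest/binary>>, Acc) ->
        {(Acc bsl 7) bor N, Rest};                       % last tag octet
    decode_tag_number(<<1:1, N:7, Rest/binary>>, Acc) when Acc < 16#80 ->
        decode_tag_number(Rest, (Acc bsl 7) bor N);      % more octets follow
    decode_tag_number(_, _) ->
        exit({error, invalid_tag}).                      % tag number > 16383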
|
|
* maint:
Fix xml warnings in old release notes
|
|
* hasse/dialyzer/fix_plt_suite:
dialyzer: Correct a test case
|
|
* egil/tools/fix-makefile:
tools: Remove percept from makefile
|
|
Conflicts:
lib/kernel/src/kernel.appup.src
lib/stdlib/src/stdlib.appup.src
|
|
* siri/appups-19.3:
Update appups in kernel and stdlib for OTP-19.3
|
|
HiPE: Various small code cleanups and codegen improvements
|
|
|
|
* siri/typer/remove-application/OTP-14251:
Remove typer application
|
|
|
|
The application now has its own repository, https://github.com/erlang/typer
|
|
|
|
* anders/diameter/capx_strictness/OTP-14257:
Add transport_opt() capx_strictness
|
|
|
|
|
|
* anders/diameter/19.3/OTP-14252:
vsn -> 1.12.2
Update appup for 19.3
|
|
* anders/diameter/19.2/failover/OTP-14206:
Avoid sending large terms between nodes unnecessarily
Don't use request table for answer routing
Fix/redo failover optimization
|
|
|
|
* ingela/ssl/next-maint-version:
ssl: Version update
|
|
* ingela/ssl/dtls-cont:
dtls: Only test this for TLS for now
dtls: Avoid mixup of protocol to test
dtls: 'dtlsv1.2' corresponds to 'tlsv1.2'
dtls: Correct dialyzer spec and postpone inclusion of test
dtls: Erlang distribution over DTLS is not supported
dtls: Enable some DTLS tests in ssl_to_openssl_SUITE
dtls: Enable DTLS test in ssl_certificate_verify_SUITE
dtls: Hibernation and retransmit timers
dtls: Make sure retransmission timers are run
dtls: DTLS specific handling of socket and ciphers
|
|
We want to avoid failing test cases but still be able to merge
DTLS progress for 19.3
|
|
Allow the Peer State Machine requirement that only the expected
capabilities exchange message be received in the relevant state to be
relaxed. If {capx_strictness, false} is configured, then anything but the
expected CER/CEA is ignored.
This is non-standard behaviour and thus far undocumented. Use at your
own risk.
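A possible use looks like this (sketch only; assumes a service SvcName has already been started and diameter_tcp is the transport module):

    diameter:add_transport(SvcName,
                           {connect, [{transport_module, diameter_tcp},
                                      {transport_config, [{raddr, {192,0,2,1}},
                                                          {rport, 3868}]},
                                      {capx_strictness, false}]}).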
|
|
When relaying outgoing requests through a transport on a remote node,
terms that were stripped when sending to the transport process weren't
stripped when spawning a process on the remote node.
Also, don't save the request to the process dictionary in a process that
just relays an answer.
|
|
The table has existed forever, to route incoming answers to a waiting
request process: each outgoing request writes to the table, and each
incoming answer reads. However, this has been seen to suffer from lock
contention at high load, so this commit moves the routing into the
diameter_peer_fsm processes that are diameter's conduit to transport
processes. The request table is still used for failover detection, but
entries are only written when a watchdog state transition leaves or
enters state OKAY.
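In simplified form (hypothetical names; not diameter's actual implementation), the idea is to keep the outstanding requests in the state of the peer process itself rather than in a table shared by every request:

    %% Outgoing request: remember the caller under its hop-by-hop id.
    handle_call({send, HopByHopId, Req}, From, Pending) ->
        send_to_transport(Req),
        {noreply, maps:put(HopByHopId, From, Pending)}.

    %% Incoming answer: route it to the waiting caller, no shared table.
    handle_info({answer, HopByHopId, Ans}, Pending) ->
        case maps:take(HopByHopId, Pending) of
            {From, Rest} -> gen_server:reply(From, Ans), {noreply, Rest};
            error -> {noreply, Pending}
        end.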
|
|
|