|
See commits d3829525 (unordered), c591056b (packet), eadf4efc (sender),
636a7199 (tcp message_cb), 373cd07c (sctp message_cb)
|
|
Changing the default in the parent commit is possibly a bit dangerous,
even if the motivation still holds. Take a step back and make unordered
delivery a matter of configuration, without changing the default:
configuration is {unordered, boolean() | pos_integer()}, with false the
default, and N equivalent to OS =< N, where OS is the number of outbound
streams negotiated on the association in question.
A user can mess with this by configuring an sctp_default_send_param of
their own, but unordered sending then applies from the start, not only
after the second message has been received.
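For illustration, a minimal sketch of the configuration (service name and
addresses are hypothetical):

listen(SvcName) ->
    diameter:add_transport(SvcName,
                           {listen,
                            [{transport_module, diameter_sctp},
                             {transport_config,
                              [{ip, {127,0,0,1}},
                               {port, 3868},
                               %% false (default): always ordered,
                               %% true: always unordered,
                               %% N: unordered once OS =< N
                               {unordered, 2}]}]}).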
|
|
There's no reason for sending ordered since request handling is
concurrent: different processes handling incoming requests can't know in
which order they were received on the transport, and different processes
sending requests can't know the order in which they're sent.
|
|
For the same reason that unordered delivery is delayed in the grandparent
commit. Delivery is ordered only within a stream, so until a second
message is received from the peer, there's no guarantee that a second
outgoing request won't be received before the initial capabilities
exchange message. Rotation begins upon reception of a second message
from the peer, messages being sent on stream 0 until then.
|
|
By randomly setting the number of outbound streams on associations.
|
|
RFC 6733 says the following about head-of-line blocking and SCTP, the
suggestion of unordered sending being new compared to RFC 3588.
Until now, all delivery in diameter_sctp has been ordered, round-robining
over the available streams unless the user passes a stream identifier in
an outgoing diameter_packet record. This commit changes this to use
unordered delivery when there's only a single outbound stream to choose
from.
The special case at capabilities exchange is handled by only starting to
send unordered after the second message from the peer has been received.
(First message after capabilities exchange, as the RFC probably means.)
The special case at DPR isn't handled, since there's no knowing the
order in which a peer will answer: a node that sends DPR while it has
other requests outstanding can't expect that the latter will be answered
before DPR, even if delivery is ordered, since incoming requests are
handled concurrently. If it wants a guarantee then it simply has to wait
for answers before sending DPR.
A user can force all delivery to be unordered by specifying
{sctp_default_send_param, #sctp_sndrcvinfo{flags = [unordered]}}
as a config option to diameter_sctp, but in this case there's no
handling of a request being sent directly after CEA since there's no
ordered flag to override the default.
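For illustration, a sketch of such a transport configuration (addresses
and service name are hypothetical); #sctp_sndrcvinfo{} comes from
kernel's inet_sctp.hrl, and diameter_sctp passes the socket option
through to gen_sctp:

-include_lib("kernel/include/inet_sctp.hrl").

connect(SvcName) ->
    diameter:add_transport(SvcName,
                           {connect,
                            [{transport_module, diameter_sctp},
                             {transport_config,
                              [{raddr, {127,0,0,1}},
                               {rport, 3868},
                               %% all outgoing messages unordered
                               {sctp_default_send_param,
                                #sctp_sndrcvinfo{flags = [unordered]}}]}]}).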
RFC 6733:
2.1.1. SCTP Guidelines
Diameter messages SHOULD be mapped into SCTP streams in a way that
avoids head-of-the-line (HOL) blocking. Among different ways of
performing the mapping that fulfill this requirement it is
RECOMMENDED that a Diameter node send every Diameter message (request
or response) over stream zero with the unordered flag set. However,
Diameter nodes MAY select and implement other design alternatives for
avoiding HOL blocking such as using multiple streams with the
unordered flag cleared (as originally instructed in RFC 3588). On
the receiving side, a Diameter entity MUST be ready to receive
Diameter messages over any stream, and it is free to return responses
over a different stream. This way, both sides manage the available
streams in the sending direction, independently of the streams chosen
by the other side to send a particular Diameter message. These
messages can be out-of-order and belong to different Diameter
sessions.
Out-of-order delivery has special concerns during a connection
establishment and termination. When a connection is established, the
responder side sends a CEA message and moves to R-Open state as
specified in Section 5.6. If an application message is sent shortly
after the CEA and delivered out-of-order, the initiator side, still
in Wait-I-CEA state, will discard the application message and close
the connection. In order to avoid this race condition, the receiver
side SHOULD NOT use out-of-order delivery methods until the first
message has been received from the initiator, proving that it has
moved to I-Open state. To trigger such a message, the receiver side
could send a DWR immediately after sending a CEA. Upon reception of
the corresponding DWA, the receiver side should start using out-of-
order delivery methods to counter the HOL blocking.
Another race condition may occur when DPR and DPA messages are used.
Both DPR and DPA are small in size; thus, they may be delivered to
the peer faster than application messages when an out-of-order
delivery mechanism is used. Therefore, it is possible that a DPR/DPA
exchange completes while application messages are still in transit,
resulting in a loss of these messages. An implementation could
mitigate this race condition, for example, using timers, and wait for
a short period of time for pending application level messages to
arrive before proceeding to disconnect the transport connection.
Eventually, lost messages are handled by the retransmission mechanism
described in Section 5.5.4.
|
|
Since these can make sense per peer. The remaining service-only options
either belong there or make little sense being configured per transport.
|
|
Only a default spawn_opt has been possible to configure on a service,
but there's no reason why most other transport options should need to be
configured per transport.
Those options that still only make sense on a transport are
transport_module/config (because of the semantics of multiple values),
applications/capabilities (since these override service options), and
private (since it's only to allow user-specific options in a
backwards-compatible way).
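A sketch of the resulting usage (service, callback module, and values
are hypothetical): a transport option passed to diameter:start_service/2
acts as a default for the service's transports.

start() ->
    ok = diameter:start_service(my_service,
                                [{'Origin-Host', "node.example.com"},
                                 {'Origin-Realm', "example.com"},
                                 {'Vendor-Id', 0},
                                 {'Product-Name', "example"},
                                 {application, [{alias, app},
                                                {dictionary, diameter_gen_base_rfc6733},
                                                {module, my_diameter_cb}]},
                                 %% transport option as a service-level default
                                 {watchdog_timer, 15000}]).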
|
|
Export the old type as a synonym for backwards compatibility. The name
evaluable is a bit too awkward.
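A sketch of the kind of synonym meant (the type body here is assumed,
not diameter's exact definition):

-type eval() :: {module(), atom(), list()} | fun().
-type evaluable() :: eval().                  %% old name kept as a synonym
-export_type([eval/0, evaluable/0]).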
|
|
|
|
To follow the naming of options like strict_mbit and more. Still accept
capx_strictness since this is known to be used.
Introduced in commit e4f28f3b.
|
|
To be able to disable the counting of messages for which application
callbacks take place. Messages sent/handled by diameter itself are
always counted.
|
|
That dialyzer hasn't noticed is broken.
|
|
Despite what the Efficiency Guide says about match being more efficient,
split_binary appears to be slightly faster. (Although this one
extraction is a drop in the bucket.) Binary optimizations aren't an
issue during decode.
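A sketch of the comparison (not diameter's code): two ways of reading
the 24-bit Message Length that follows the version byte of a Diameter
header.

%% Bit-syntax match on the full binary.
len_match(<<_Version, Len:24, _/binary>>) ->
    Len.

%% split_binary/2, matching only the 4 split-off bytes.
len_split(Bin) ->
    {Head, _Rest} = split_binary(Bin, 4),
    <<_Version, Len:24>> = Head,
    Len.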
|
|
Since messages are accumulated by appending binaries as of three
commits back, the accumulated binary is prone to being copied, as
discussed in the Efficiency Guide. Matching the Message Length header
as bytes are being accumulated is one cause of this, so work around it
by splitting the binary and extracting the length without a match.
This doesn't feel like something that should be necessary: that matching
a binary would cause an append to copy isn't obvious. The first attempt
at simplifying the accumulation was simply to append an incoming binary
to the current fragment, match against <<_, Len:24, _/binary>> to
extract the length, and then test if there are enough bytes for a
message. This led to horrible performance (response times for 2 MB
messages approximately 100 times worse than previously), and it wasn't
at all obvious that the reason was the accumulated binary being copied
with each append as a result of the match. Using split_binary avoids the
match context that forces the copying.
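A sketch of the approach (function and variable names invented, not
diameter_tcp's own): append incoming data to the fragment, and read the
length via split_binary/2 so that the accumulated binary itself is never
matched.

recv(Data, Frag) ->
    acc(<<Frag/binary, Data/binary>>).

acc(Bin) when byte_size(Bin) < 20 ->          %% less than a Diameter header
    {more, Bin};
acc(Bin) ->
    {Head, _} = split_binary(Bin, 4),
    <<_, Len:24>> = Head,                     %% Message Length from the header
    if byte_size(Bin) < Len ->
            {more, Bin};
       true ->
            {Msg, Rest} = split_binary(Bin, Len),
            {msg, Msg, Rest}
    end.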
|
|
Not with each setopts to re-activate the socket.
|
|
Only reset the fragment after all messages have been extracted.
|
|
Appending to a binary is efficient, so just append message fragments.
Only match out the length once per message, since doing so for every
packet from TCP causes the binary to be copied.
|
|
Create less garbage.
|
|
|
|
Which has had no negative effect.
|
|
That dialyzer hasn't noticed is broken.
|
|
Also inline incr/8 and associated functions that were needed for the
compiler to accept the optimization, since this does make for a
measurable improvement.
|
|
With ERL_COMPILER_OPTIONS=bin_opt_info, before:
base/diameter_codec.erl:508: Warning: NOT OPTIMIZED: the binary matching instruction that follows in the same function have problems that prevent delayed sub binary optimization (probably indicated by INFO warnings)
And after:
base/diameter_codec.erl:508: Warning: OPTIMIZED: creation of sub binary delayed
This has a surprisingly large impact on the performance of
diameter_codec:collect_avps/2: about 15% faster in one testcase on a RAR
with 7 AVPs.
|
|
The index field in record diameter_avp was previously used to enumerate
AVPs so that the list could be returned in some cases, since
diameter_codec:collect_avps/2 (now /1) reversed the order. That's no
longer the case as of the grandparent commit, so use the field to
enumerate instances of the same AVP instead, and only when arities are
being checked. This saves having to look them up in the map when
checking for 5009 errors, or having to count AVPs at all in
diameter_codec:collect_avps/1.
|
|
Extract strict_arities once and pass it through as an argument.
|
|
Decode has previously been two passes: first chunk the message into a
reversed list of toplevel diameter_avp records, then fold through the
reversed list to build the full result. Various workarounds have made it
a bit more convoluted than it should be, however. Rework it completely,
turning the previous 2-pass tail-recursive implementation into a 1-pass
body-recursive one.
The relay decode still exists in diameter_codec, as a stripped down
version of the full decode in diameter_gen.
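A generic sketch of the shape of the change (not diameter's code), on a
simplified chunking problem:

%% Before: tail recursion, accumulating in reverse and then fixing up.
chunk_2pass(<<Len, Data:Len/binary, Rest/binary>>, Acc) ->
    chunk_2pass(Rest, [Data | Acc]);
chunk_2pass(<<>>, Acc) ->
    lists:reverse(Acc).                       %% the second pass

%% After: body recursion, building the result in order in one pass.
chunk_1pass(<<Len, Data:Len/binary, Rest/binary>>) ->
    [Data | chunk_1pass(Rest)];
chunk_1pass(<<>>) ->
    [].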
|
|
|
|
|
|
To be able to disable the relatively expensive check that the number of
AVPs received in a message or grouped AVP agrees with the dictionary in
question. The check may well be easier for the user to perform in
handle_request/answer callbacks, when digesting the received message,
and in some cases may not be important.
The check at encode can also be disabled, allowing messages that don't
agree with the dictionary in question to be sent, which can be useful in
test (at least).
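A sketch of the resulting configuration (capabilities and callback
module are placeholders; false assumes the boolean form described above,
disabling the check at both decode and encode):

start(CapsOpts) ->
    ok = diameter:start_service(my_service,
                                [{strict_arities, false},
                                 {application, [{alias, app},
                                                {dictionary, diameter_gen_base_rfc6733},
                                                {module, my_diameter_cb}]}
                                 | CapsOpts]).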
|
|
As noted in the parent commit, the wrong AVPs were reported, being
counted from the back of the message instead of the front. Both 5005 and
5009 errors are now detected after the message is decoded.
|
|
Aka DIAMETER_AVP_OCCURS_TOO_MANY_TIMES.
This reveals a fault. The RFC says this:
A message was received that included an AVP that appeared more
often than permitted in the message definition. The Failed-AVP
AVP MUST be included and contain a copy of the first instance of
the offending AVP that exceeded the maximum number of occurrences.
The list of AVPs is reversed when diameter checks arities, so Failed-AVP
contains the wrong AVP, causing the new testcase to fail.
|
|
Stop counting when there can be no arity errors.
|
|
|
|
Since the number is now under 50K again. Also make testing of individual
groups or testcases easier.
|
|
Count as AVPs are encoded instead.
|
|
Use the same [MsgName | Avps] representation as for the list decode, but
with Avps a map instead of an AVP name/values list. As a result, don't
set the message/AVP name on an additional key in the map, which felt a
bit odd. Messages are [MsgName :: atom() | map()], grouped AVPs are just
map().
Fix at least one problem in the traffic suite along the way: with
decode_format false, the suite's own decode in to_map/2 didn't know
whether or not to decode strings, resulting in some failures.
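For illustration (AVP names and value shapes are invented; only the
outer structure follows the description above):

%% A decoded message: improper list of message name and map.
msg_example() ->
    ['DWR' | #{'Origin-Host'  => [<<"client.example.com">>],
               'Origin-Realm' => [<<"example.com">>]}].

%% A decoded grouped AVP: just a map.
grouped_example() ->
    #{'Vendor-Id' => [193],
      'Auth-Application-Id' => [0]}.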
|
|
For slightly better readability in the ct logs.
|
|
To reduce the number of combinations tested, as in the parent commit.
|
|
To reduce the number of config combinations that are tested. The
encoding is the format in which messages are provided to diameter for
encode (to binary), and if there is any difference in the end result
then the peer will detect this at decode, independently of its encoding
format.
|
|
Undocumented, for transforming a map decode to record.
The record decode becomes more expensive the larger the number of AVPs
in the message definition in question, since the record is recreated
each time an AVP value is set in it. The map decode can potentially do
better.
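A sketch of the cost in question (the record stands in for a generated
message record, not diameter's own):

-record(msg, {avp1, avp2, avp3}).     %% real message records can be much wider

%% Every field update builds a new tuple of the full record size, so
%% setting N AVP values costs O(N * record size).
set_avp1(Rec, Value) ->
    Rec#msg{avp1 = Value}.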
|
|
{record_decode, map} is a bit too quirky.
|
|
One for each server decoding/encoding/container combination is overkill.
Just want a few from which one can be chosen in the pick_peer callback.
|
|
|
|
Instead of to lists, to simplify matching.
|
|
|
|
That is, decode to the same format that encode already accepts. Only a
message has its name at the head of the list since AVPs are already
name/value pairs.
|
|
With {record_decode, map}. The option name is arguably a bit misleading
now, but not too objectionable given that the encode/decode in question
has historically only been of records.
One advantage of the map decode is that the map only contains values for
those AVPs existing in the message or grouped AVP in question. The name
of the message or grouped AVP is stored with the key ':name', the
leading colon ensuring that the key isn't a Diameter name.
Decoding to maps makes the hrl files generated from dictionary files
largely irrelevant. There are value defines generated into these, but
they're typically so long as to be unusable.
|
|
To control whether or not messages and grouped AVPs are decoded to
records, in #diameter_packet.msg and #diameter_avp.value respectively.
The decode became unnecessary for diameter's needs in the parent commit,
which decoupled it from the checking of AVP arities.
|
|
Instead of after, during the check that AVPs have sufficient arity.
This makes the arity checks independent of the record decode, which
will allow the latter to be made optional.
|