|
To solve the problem, discussed in the parent commit, of not being able
to send messages to a peer that hasn't advertised support for the
application in question. diameter:call/4 can now be passed 'peer'
options to identify candidates, and the only requirement is that an
appropriate dictionary be configured for encode. Filters are applied as
if the candidates had been selected by advertised application.
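A minimal sketch of the resulting usage, assuming a service with the
common application configured under the alias common (the 'peer' option
name follows this commit; all other names and values are illustrative):
%% Send a DPR to a specific peer, identified by its peer_ref(),
%% regardless of what applications the peer advertised.
send_dpr(SvcName, PeerRef) ->
    DPR = ['DPR', {'Origin-Host', "client.example.com"},
                  {'Origin-Realm', "example.com"},
                  {'Disconnect-Cause', 0}],   %% 0 = REBOOTING
    diameter:call(SvcName, common, DPR, [{peer, PeerRef}]).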
|
|
The problem we have currently is that peer selection is based on the
support advertised by the peer. The application id of an outgoing
request is used to look up peers that have advertised support, so if the
peer hasn't advertised support for Diameter common messages then the
user won't be able to send DPR, among other things: diameter:call/4 will
just return {error, no_connection}. This commit doesn't solve the
problem.

It is tempting to regard remote support for the common application as
implicit, but that leads to the problems noted, and a node could never
then expect non-intersecting application support to result in 5010
(DIAMETER_NO_COMMON_APPLICATION). It probably can't anyway, given the
different ways the RFC's intent can be interpreted, but it's not
unreasonable that a node should be able to advertise a single Diameter
application and get 5010 if the peer doesn't support it.
|
|
|
|
To remove the requirement that dictionary modules be recompiled whenever
the encode/decode implementation changes. The included diameter_gen.hrl
now only contains trivial functions that call into diameter_gen.erl.
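A hedged sketch of the new shape of the included file (the function
names and the exact diameter_gen signatures here are illustrative):
%% diameter_gen.hrl now just forwards to diameter_gen.erl, so a
%% changed implementation doesn't force dictionary recompilation.
encode_avps(Name, Vals, Opts) ->
    diameter_gen:encode_avps(Name, Vals, Opts#{module => ?MODULE}).

decode_avps(Name, Avps, Opts) ->
    diameter_gen:decode_avps(Name, Avps, Opts#{module => ?MODULE}).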
|
|
To pass the options map through the encode. This is not backwards
compatible, and dictionaries supporting @custom_types or @codecs will
need to be updated.
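A sketch of the sort of update required, assuming the callbacks simply
gain the options map as a final argument (module and AVP names made up):
-module(my_custom_types).  %% hypothetical
-export(['OctetString'/4]).

%% Previously 'OctetString'/3; the options map is the new argument.
'OctetString'(encode, 'Framed-IP-Address', {A, B, C, D}, _Opts) ->
    <<A, B, C, D>>;
'OctetString'(decode, 'Framed-IP-Address', <<A, B, C, D>>, _Opts) ->
    {A, B, C, D}.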
|
|
As in commit fb14eac9, but for outgoing answers.
|
|
To simplify the call chains and intermediate terms, which had become a
little convoluted over time.
|
|
The documentation has been out of date since the string_decode option
was added in commit 1590920c. The optionless decode/2 was removed in the
commit that removed the use of the process dictionary in decode.
|
|
To allow list-valued messages to be encoded in the specified order,
instead of in the dictionary order by first converting the list to a
record. This is not yet exposed in configuration.
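For reference, a hedged example of such a list-valued message (AVP
values made up), with its AVPs listed in a non-dictionary order that
the encode can now preserve:
dwr() ->
    ['DWR', {'Origin-State-Id', 1},    %% not the dictionary's order
            {'Origin-Host', "client.example.com"},
            {'Origin-Realm', "example.com"}].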
|
|
The parent commit removed the convenience of setting something like the
following in the errors field of the diameter_packet of an answer
message.
[#diameter_avp{} = A2, {5001, #diameter_avp{} = A1}]
This results in Result-Code = 5001 and Failed-AVP = [A1,A2], but is
currently undocumented. Probably useful, so restore it.
Also accept {RC, [#diameter_avp{}]} at encode, which is probably more
useful; e.g. [{5001, [A || {5001, A} <- Errors]}]
Anyone who wants full control can set errors = false and formulate
Result-Code/Failed-AVP themselves. (As opposed to not setting a value
explicitly, which results in setting from the decoded errors list. A bit
quirky, but documented and historical.)
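A hedged sketch of the restored convenience in a handle_request
callback (the answer AVPs are made up):
handle_request(#diameter_packet{errors = Errors}, _SvcName, _Peer) ->
    Ans = ['STA', {'Session-Id', diameter:session_id("server")}],
    %% 5001 into Result-Code, the matching AVPs into Failed-AVP:
    {reply, #diameter_packet{msg = Ans,
                             errors = [{5001, [A || {5001, A} <- Errors]}]}}.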
|
|
When setting the Result-Code/Failed-AVP of an outgoing answer from an
errors list either returned from or not discarded by a handle_request
callback, more than the AVP paired with the Result-Code in question
could be set in Failed-AVP.
RFC 6733:
7.5. Failed-AVP AVP
The Failed-AVP AVP (AVP Code 279) is of type Grouped and provides
debugging information in cases where a request is rejected or not
fully processed due to erroneous information in a specific AVP. The
value of the Result-Code AVP will provide information on the reason
for the Failed-AVP AVP. A Diameter answer message SHOULD contain an
instance of the Failed-AVP AVP that corresponds to the error
indicated by the Result-Code AVP. For practical purposes, this
Failed-AVP would typically refer to the first AVP processing error
that a Diameter node encounters.
|
|
In this case the diameter_packet of an answer message for encode. The
record itself could be avoided, but that requires a new interface in
diameter_codec, probably for little gain.
|
|
On the theme of the previous two commits, creating the required
diameter_header of the diameter_packet record only once.
|
|
As in the parent commit, recreating the options record is relatively
costly.
|
|
The old construction is approximately two to four times slower, from
the best case (no elements modified) to the worst (all modified), while
the new construction runs at constant speed.
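A hedged guess at the kind of contrast being measured (the record and
option names are hypothetical): updating a default record element by
element costs per modification, while a single construction does not.
-record(opts, {a = 0, b = 0, c = 0}).

old(Opts) ->                       %% per-option record updates
    lists:foldl(fun({a, V}, R) -> R#opts{a = V};
                   ({b, V}, R) -> R#opts{b = V};
                   ({c, V}, R) -> R#opts{c = V}
                end,
                #opts{},
                Opts).

new(A, B, C) ->                    %% one construction, constant cost
    #opts{a = A, b = B, c = C}.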
|
|
|
|
Replace old macro-based implementation with something more readable.
|
|
|
|
The tuple is returned from and passed to callbacks, so retain the tuple
instead of its elements.
|
|
By passing additional arguments through it.
|
|
Folded when I should have mapped.
|
|
|
|
Converting with list_to_binary/1 appears to be faster than the
equivalent binary comprehension:
<< (z(F,A)) || {F,A} <- avp_arity(Name) >>
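The two constructions side by side, in self-contained form (z/2 and
avp_arity/1 are stand-ins for the generated dictionary's functions):
avp_arity(_Name) -> [{'Origin-Host', 1}, {'Origin-Realm', 1}].

z(_Field, _Arity) -> <<0>>.

old(Name) ->                       %% binary comprehension
    << <<(z(F, A))/binary>> || {F, A} <- avp_arity(Name) >>.

new(Name) ->                       %% list_to_binary/1: measured faster
    list_to_binary([z(F, A) || {F, A} <- avp_arity(Name)]).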
|
|
Recursing over the entire list of arities and values is faster than
retrieving them one at a time.
|
|
|
|
|
|
This and subsequent commits are destined for OTP 20.0.
|
|
|
|
|
|
In particular, allow {Name, Value} and {Dict, Name, Value} without
requiring a diameter_avp wrapper.
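An illustration of the accepted forms (values made up), alongside the
record that was previously required:
-include_lib("diameter/include/diameter.hrl").

avps() ->
    [#diameter_avp{data = <<0>>},                              %% as before
     {'Proxy-Host', "HOST"},                                   %% {Name, Value}
     {diameter_gen_base_rfc6733, 'Proxy-State', "STATE"}].     %% {Dict, Name, Value}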
|
|
Since value is ignored.
|
|
Which is the equivalent of what was done with '#new-'/1 and '#set-'/2.
|
|
|
|
base/diameter_codec.erl:716: Warning: OPTIMIZED: creation of sub binary delayed
|
|
base/diameter_codec.erl:545: Warning: OPTIMIZED: creation of sub binary delayed
|
|
base/diameter_codec.erl:600: Warning: OPTIMIZED: creation of sub binary delayed
|
|
|
|
|
|
|
|
Dict:avp(encode, Value, Name) no longer needs to return a binary, only
an iolist(). Message encode runs list_to_binary/1 to convert accumulated
lists into a message binary.
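For example, a callback of the following form is now acceptable (the
AVP name is made up):
avp(encode, {A, B, C, D}, 'Framed-IP-Address') ->
    [<<A>>, [<<B>>, <<C>>], <<D>>];    %% an iolist(), not a flat binary
avp(decode, <<A, B, C, D>>, 'Framed-IP-Address') ->
    {A, B, C, D}.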
|
|
This is a special case to allow encode of something other than an
iolist. E.g.
#diameter_avp{data = {diameter_gen_base_rfc6733,
                      'Proxy-Info',
                      [{'Proxy-Host', "HOST"}, {'Proxy-State', "STATE"}]}}
This previously only worked as expected for AVPs of type other than
Grouped.
|
|
As when detecting missing AVPs, extract a list of field/value pairs in
one step, which looks to be slightly more efficient. Flattening the list
was unnecessary since the result is passed to list_to_binary.
|
|
On the same theme as the parent commit, building binaries in fewer
steps.
|
|
Prepend the header in a single step.
Before:
{[{{diameter_codec,pack_avp,1}, 7000, 126.074, 51.058}],
{ {diameter_codec,pack_avp,2}, 7000, 126.074, 51.058}, %
[{{diameter_codec,pack_avp,5}, 7000, 51.144, 25.758},
{{diameter_codec,pad,2}, 7000, 23.844, 23.570},
{suspend, 1, 0.028, 0.000}]}.
After:
{[{{diameter_codec,pack_avp,1}, 7000, 78.563, 26.986}],
{ {diameter_codec,pack_avp,2}, 7000, 78.563, 26.986}, %
[{{diameter_codec,pack_avp,6}, 7000, 51.459, 26.381},
{suspend, 4, 0.118, 0.000}]}.
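A hedged sketch of the single-step construction (vendor flag handling
omitted): header, data and padding are emitted together, removing the
separate pad/2 call seen in the first trace.
pack_avp(Code, Flags, Data) ->
    Sz = iolist_size(Data),
    Pad = (4 - (Sz rem 4)) rem 4,    %% pad the AVP to a 4-byte boundary
    [<<Code:32, Flags:8, (8 + Sz):24>>, Data, <<0:Pad/unit:8>>].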
|
|
Which appears to be about an order of magnitude slower than just
creating a binary of the desired size.
|
|
|
|
By using the existing '#get-'/1 in generated dictionary modules to
retrieve fields and values at the same time.
Before:
{[{{diameter_gen_base_rfc6733,missing,3}, 1000, 211.722, 8.741},
{{diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',4},12000, 0.000, 95.764}],
{ {diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',4},13000, 211.722, 104.505}, %
[{{diameter_gen_base_rfc6733,'#get-',2}, 12000, 49.917, 28.221},
{{diameter_gen_base_rfc6733,has_arity,2}, 12000, 31.811, 23.442},
{{diameter_gen_base_rfc6733,avp_arity,2}, 12000, 21.076, 20.975},
{garbage_collect, 457, 3.918, 3.918},
{suspend, 31, 0.495, 0.000},
{{diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',4},12000, 0.000, 95.764}]}.
After:
{[{{diameter_gen_base_rfc6733,missing,3}, 1000, 134.098, 2.402},
{{diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',3},13000, 0.000, 77.327}],
{ {diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',3},14000, 134.098, 79.729}, %
[{{diameter_gen_base_rfc6733,has_arity,2}, 12000, 31.084, 22.913},
{{diameter_gen_base_rfc6733,avp_arity,2}, 12000, 20.526, 20.440},
{garbage_collect, 253, 2.504, 2.504},
{suspend, 17, 0.255, 0.000},
{{diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',3},13000, 0.000, 77.327}]}.
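A hedged sketch of the idea (arity handling simplified): one '#get-'/1
call yields every field/value pair, avoiding a lookup per field.
missing(Rec, Name, Dict) ->
    [F || {F, V} <- Dict:'#get-'(Rec),
          V == undefined,                    %% value absent ...
          1 == Dict:avp_arity(Name, F)].     %% ... but the AVP mandatory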
|
|
Instead of the slower sets. Bump application dependencies to 17.5, even
though earlier versions may do fine.
|
|
Profiling with fprof showed the following prior to this commit:
{[{{diameter_codec,decode,3}, 1000, 231.122, 4.092},
{{diameter_codec,collect_avps,1}, 1000, 0.000, 3.929}],
{ {diameter_codec,collect_avps,1}, 2000, 231.122, 8.021}, %
[{{diameter_codec,collect_avps,3}, 1000, 222.932, 11.644},
{garbage_collect, 19, 0.169, 0.169},
{{diameter_codec,collect_avps,1}, 1000, 0.000, 3.929}]}.
{[{{diameter_codec,collect_avps,1}, 1000, 222.932, 11.644},
{{diameter_codec,collect_avps,3}, 7000, 0.000, 68.186}],
{ {diameter_codec,collect_avps,3}, 8000, 222.932, 79.830}, %
[{{diameter_codec,split_avp,1}, 7000, 120.886, 72.382},
{{erlang,setelement,3}, 7000, 21.830, 21.830},
{garbage_collect, 48, 0.386, 0.386},
{{diameter_codec,collect_avps,3}, 7000, 0.000, 68.186}]}.
Note the time consumed in split_avp/1 and erlang:setelement/3. This
commit does more matching in one go, without intermediate results,
giving this:
{[{{diameter_codec,decode,3}, 1000, 42.512, 3.701},
{{diameter_codec,collect_avps,1}, 1000, 0.000, 3.594}],
{ {diameter_codec,collect_avps,1}, 2000, 42.512, 7.295}, %
[{{diameter_codec,collect_avps,3}, 1000, 35.217, 4.577},
{{diameter_codec,collect_avps,1}, 1000, 0.000, 3.594}]}.
{[{{diameter_codec,collect_avps,1}, 1000, 35.217, 4.577},
{{diameter_codec,collect_avps,3}, 7000, 0.000, 27.754}],
{ {diameter_codec,collect_avps,3}, 8000, 35.217, 32.331}, %
[{garbage_collect, 262, 2.647, 2.647},
{suspend, 9, 0.239, 0.000},
{{diameter_codec,collect_avps,3}, 7000, 0.000, 27.754}]}.
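A simplified sketch of the "more matching in one go" idea (vendor bit
ignored, no error handling): the header fields, data and padding are
extracted in single binary patterns, with no split_avp/setelement
intermediaries.
collect(<<Code:32, Flags:8, Len:24, Rest/binary>>, Acc) ->
    DataSz = Len - 8,                        %% 8-byte header assumed
    Pad = (4 - (Len rem 4)) rem 4,
    <<Data:DataSz/binary, _:Pad/binary, T/binary>> = Rest,
    collect(T, [{Code, Flags, Data} | Acc]);
collect(<<>>, Acc) ->
    lists:reverse(Acc).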
|
|
Don't call a function when we know the result, and consistently return a
binary.
|