Age | Commit message | Author |
|
Converting with list_to_binary/1 appears to be faster than the
equivalent binary comprehension:
<< (z(F,A)) || {F,A} <- avp_arity(Name) >>
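For reference, a minimal sketch of the list_to_binary/1 alternative,
with z/2 and avp_arity/1 as in the comprehension above:
  list_to_binary([z(F, A) || {F, A} <- avp_arity(Name)])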
|
|
Recursing over the entire list of arities and values is faster than
retrieving them one at a time.
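A hedged illustration of the idea, with hypothetical names (enc_avp/3
standing in for whatever encodes one AVP): walk the arities and values
together in a single recursion instead of doing a lookup per field.
  enc([], []) ->
      [];
  enc([{Field, Arity} | As], [Value | Vs]) ->
      [enc_avp(Field, Arity, Value) | enc(As, Vs)].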
|
|
|
|
|
|
This and subsequent commits are destined for OTP 20.0.
|
|
|
|
|
|
In particular, allow {Name, Value} and {Dict, Name, Value} without
requiring a diameter_avp wrapper.
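For illustration (dictionary and values chosen arbitrarily), the forms
now accepted alongside the existing record wrapper:
  #diameter_avp{data = {diameter_gen_base_rfc6733, 'Origin-Host', "HOST"}}
  {diameter_gen_base_rfc6733, 'Origin-Host', "HOST"}
  {'Origin-Host', "HOST"}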
|
|
Since value is ignored.
|
|
Which is the equivalent of what was done with '#new-'/1 and '#set-'/2.
|
|
|
|
base/diameter_codec.erl:716: Warning: OPTIMIZED: creation of sub binary delayed
|
|
base/diameter_codec.erl:545: Warning: OPTIMIZED: creation of sub binary delayed
|
|
base/diameter_codec.erl:600: Warning: OPTIMIZED: creation of sub binary delayed
|
|
|
|
|
|
|
|
Dict:avp(encode, Value, Name) no longer needs to return a binary, only
an iolist(). Message encode runs list_to_binary/1 to convert accumulated
lists into a message binary.
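A hedged sketch with a made-up 'Example' AVP: the per-AVP callback can
now return a nested iolist, with the accumulated results flattened once
by list_to_binary/1 when the message itself is encoded.
  avp(encode, Value, 'Example') ->
      [<<"prefix">>, Value].    %% an iolist(); no per-AVP flattening needed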
|
|
This is a special case to allow encode of something other than an
iolist. For example,
  #diameter_avp{data = {diameter_gen_base_rfc6733,
                        'Proxy-Info',
                        [{'Proxy-Host', "HOST"}, {'Proxy-State', "STATE"}]}}
only worked as expected for AVPs of a type other than Grouped.
|
|
As when detecting missing AVPs, extract a list of field/value pairs in
one step, which looks to be slightly more efficient. Flattening the list
was unnecessary since the result is passed to list_to_binary.
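Indeed, list_to_binary/1 accepts arbitrarily deep iolists, so no
explicit flatten is needed; for example:
  1> list_to_binary([<<"a">>, ["b", [<<"c">>]]]).
  <<"abc">>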
|
|
On the same theme as the parent commit, building binaries in fewer
steps.
|
|
Prepend the header in a single step.
Before:
{[{{diameter_codec,pack_avp,1}, 7000, 126.074, 51.058}],
{ {diameter_codec,pack_avp,2}, 7000, 126.074, 51.058}, %
[{{diameter_codec,pack_avp,5}, 7000, 51.144, 25.758},
{{diameter_codec,pad,2}, 7000, 23.844, 23.570},
{suspend, 1, 0.028, 0.000}]}.
After:
{[{{diameter_codec,pack_avp,1}, 7000, 78.563, 26.986}],
{ {diameter_codec,pack_avp,2}, 7000, 78.563, 26.986}, %
[{{diameter_codec,pack_avp,6}, 7000, 51.459, 26.381},
{suspend, 4, 0.118, 0.000}]}.
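A hedged sketch of a single-step construction (AVP header layout per
RFC 6733; the V flag/Vendor-ID case and the real function's arguments
are omitted):
  pack(Code, Flags, Data) ->
      Len = 8 + iolist_size(Data),         %% header + data, excluding padding
      Pad = (4 - (Len rem 4)) rem 4,       %% pad data to a 32-bit boundary
      <<Code:32, Flags:8, Len:24, (iolist_to_binary(Data))/binary, 0:(8*Pad)>>.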
|
|
Which appears to be about an order of magnitude slower than just
creating a binary of the desired size.
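The construction being replaced isn't named in this excerpt, but a
binary of the desired size can be created directly in a single
expression; for example, a zero-filled one:
  1> Sz = 3, <<0:(8*Sz)>>.
  <<0,0,0>>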
|
|
|
|
By using the existing '#get-'/1 in generated dictionary modules to
retrieve fields and values at the same time.
Before:
{[{{diameter_gen_base_rfc6733,missing,3}, 1000, 211.722, 8.741},
{{diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',4},12000, 0.000, 95.764}],
{ {diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',4},13000, 211.722, 104.505}, %
[{{diameter_gen_base_rfc6733,'#get-',2}, 12000, 49.917, 28.221},
{{diameter_gen_base_rfc6733,has_arity,2}, 12000, 31.811, 23.442},
{{diameter_gen_base_rfc6733,avp_arity,2}, 12000, 21.076, 20.975},
{garbage_collect, 457, 3.918, 3.918},
{suspend, 31, 0.495, 0.000},
{{diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',4},12000, 0.000, 95.764}]}.
After:
{[{{diameter_gen_base_rfc6733,missing,3}, 1000, 134.098, 2.402},
{{diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',3},13000, 0.000, 77.327}],
{ {diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',3},14000, 134.098, 79.729}, %
[{{diameter_gen_base_rfc6733,has_arity,2}, 12000, 31.084, 22.913},
{{diameter_gen_base_rfc6733,avp_arity,2}, 12000, 20.526, 20.440},
{garbage_collect, 253, 2.504, 2.504},
{suspend, 17, 0.255, 0.000},
{{diameter_gen_base_rfc6733,'-missing/3-lc$^0/1-0-',3},13000, 0.000, 77.327}]}.
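A hedged sketch of the shape of the change, with Fields standing for
the record's field names and the real comprehensions differing in
detail:
  %% Before: the value of each field is looked up separately
  [F || F <- Fields, not has_arity(avp_arity(Name, F), '#get-'(F, Rec))]
  %% After: fields and values come back together from '#get-'/1
  [F || {F, V} <- '#get-'(Rec), not has_arity(avp_arity(Name, F), V)]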
|
|
Instead of the slower sets. Bump application dependencies to 17.5, even
though earlier versions may do fine.
|
|
Profiling with fprof showed the following prior to this commit:
{[{{diameter_codec,decode,3}, 1000, 231.122, 4.092},
{{diameter_codec,collect_avps,1}, 1000, 0.000, 3.929}],
{ {diameter_codec,collect_avps,1}, 2000, 231.122, 8.021}, %
[{{diameter_codec,collect_avps,3}, 1000, 222.932, 11.644},
{garbage_collect, 19, 0.169, 0.169},
{{diameter_codec,collect_avps,1}, 1000, 0.000, 3.929}]}.
{[{{diameter_codec,collect_avps,1}, 1000, 222.932, 11.644},
{{diameter_codec,collect_avps,3}, 7000, 0.000, 68.186}],
{ {diameter_codec,collect_avps,3}, 8000, 222.932, 79.830}, %
[{{diameter_codec,split_avp,1}, 7000, 120.886, 72.382},
{{erlang,setelement,3}, 7000, 21.830, 21.830},
{garbage_collect, 48, 0.386, 0.386},
{{diameter_codec,collect_avps,3}, 7000, 0.000, 68.186}]}.
Note the time consumed in split_avp/1 and erlang:setelement/3. This
commit does more matching in one go, without intermediate results,
giving this:
{[{{diameter_codec,decode,3}, 1000, 42.512, 3.701},
{{diameter_codec,collect_avps,1}, 1000, 0.000, 3.594}],
{ {diameter_codec,collect_avps,1}, 2000, 42.512, 7.295}, %
[{{diameter_codec,collect_avps,3}, 1000, 35.217, 4.577},
{{diameter_codec,collect_avps,1}, 1000, 0.000, 3.594}]}.
{[{{diameter_codec,collect_avps,1}, 1000, 35.217, 4.577},
{{diameter_codec,collect_avps,3}, 7000, 0.000, 27.754}],
{ {diameter_codec,collect_avps,3}, 8000, 35.217, 32.331}, %
[{garbage_collect, 262, 2.647, 2.647},
{suspend, 9, 0.239, 0.000},
{{diameter_codec,collect_avps,3}, 7000, 0.000, 27.754}]}.
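A hedged sketch of matching in one go (wire format per RFC 6733; flag
and Vendor-ID handling, error cases and the real accumulator are
omitted):
  collect(<<Code:32, Flags:8, Len:24, Rest/binary>>, Acc) ->
      DataSz = Len - 8,                    %% Length covers the 8-byte header + data
      Pad = (4 - (Len rem 4)) rem 4,       %% data is padded to a 32-bit boundary
      <<Data:DataSz/binary, _:Pad/binary, T/binary>> = Rest,
      collect(T, [{Code, Flags, Data} | Acc]);
  collect(<<>>, Acc) ->
      lists:reverse(Acc).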
|
|
Don't call a function when we know the result, and consistently return a
binary.
|
|
Do nothing, but convenient for adding trace.
|
|
|
|
Decode on both ends or not, since the choice doesn't affect the peer.
|
|
To determine the wrapping of messages passed to recv callbacks and into
diameter. The default passing of the input stream in transport_data is
probably of no practical use, but has been set since time immemorial.
|
|
Corresponding to diameter_tcp callbacks a few commits back. Exercise the
callbacks in the traffic suite.
|
|
To let a recv callback for an incoming request set transport_data and
have it returned in a send callback.
|
|
Since the number of configuration variants tested makes for (too) many
testcases. Randomly select a subset of testcases in each configuration
group.
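One way to pick a random subset, as a sketch only (not necessarily how
the suite does it):
  subset(Cases) ->
      [C || C <- Cases, rand:uniform(2) == 1].   %% keep each case with probability 1/2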
|
|
|
|
|
|
|
|
From the receiver process, which can return binaries to send/receive
and stop the transport process from reading on the socket.
This is still undocumented, and may change.
|
|
With sends still from the receiving process by default, since changing
the default behaviour may well have negative effects. For one thing, a
separate sender probably implies a greater need for some form of load
regulation, since a blocking send would no longer stop incoming
messages from being received. Dealing with this could result in the
same deadlock that the separate sending process is intended to avoid,
but the user should be in control of how/when incoming traffic is
regulated.
|
|
Added in commit 2afd1fe5. Only rename variables in diameter_tcp, no
functional change.
|
|
This:
diameter_tcp.erl:241: Record construction #transport{parent::'false',ssl::boolean() | maybe_improper_list(),frag::<<>>,tref::'false',flush::'false',pending::0,reset::{1 | 4,0 | 2},throttled::boolean(),q::{0,queue:queue(_)},monitor::'undefined' | pid()} violates the declared type of field parent::pid()
The problem isn't #transport.pid at all, it's #monitor.pid, and the only
relation is that the pid that's assigned to the latter is also (later)
assigned to the former. There is no record construction that assigns
false to #transport.parent.
Introduced in commit 33a535e4.
|
|
What's interesting when implementing some form of load regulation is
when an incoming request has been answered or discarded. Acknowledge
exactly this, not the identity of handler processes as previously. A
transport process can request acks of nonforthcoming answers by sending
{diameter, ack} to the parent peer_fsm; a handler process identifies
itself with a {handler, pid()} message; and the peer_fsm monitors the
handler in order to notify the transport if the handler dies before
sending an answer.
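A minimal sketch of the exchange, with the message terms from the text
and everything else (function names, state shape) hypothetical:
  %% transport side: opt in to acks of answers (or their absence)
  request_acks(Parent) ->
      Parent ! {diameter, ack}.
  %% peer_fsm side: monitor a handler that has identified itself, so the
  %% transport can be notified if it dies before answering
  handle({handler, Pid}, Handlers) ->
      [{Pid, erlang:monitor(process, Pid)} | Handlers].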
|
|
Commits starting at 472a080c added a throttle_cb option to diameter_tcp
to let a callback apply backpressure when it decides that additional
requests should not be read. However, it provided no hook for knowing
that an answer has been sent, which is needed when sends no longer take
place in the receiver process, and it is more complicated than it
should be. Strip it all away, in preparation for a simpler incarnation.
|
|
Shutdown events have been seen to get a different association id.
For example, first incoming message with association id = 0:
+ {trace_ts,<6421.268.0>,call,
{diameter_sctp,handle_info,
[{sctp,#Port<6421.2588>,
{10,67,16,179},
44159,
{[{sctp_sndrcvinfo,0,0,[],0,0,0,269950872,269950872,0}],
<<1,0,0,156,128,0,1,1,0,0,0,0,6,193,40,137,6,193,40,137,0,0,
1,8,64,0,0,30,67,45,49,51,52,50,49,55,52,52,49,46,101,114,
108,97,110,103,46,111,114,103,0,0,0,0,1,40,64,0,0,18,101,
114,108,97,110,103,46,111,114,103,0,0,0,0,1,1,64,0,0,14,0,
1,127,0,0,1,0,0,0,0,1,10,64,0,0,12,0,0,48,57,0,0,1,13,0,0,
0,20,79,84,80,47,100,105,97,109,101,116,101,114,0,0,1,22,
64,0,0,12,0,0,0,1,0,0,1,2,64,0,0,12,0,0,0,0,0,0,1,3,64,0,0,
12,0,0,0,3>>}},
{transport,<6421.252.0>,accept,#Port<6421.2588>,true,undefined,
{32,32},
0,undefined}]},
{1493,21505,577938}}
Later, a shutdown event with association id 1536:
+ {trace_ts,<6421.268.0>,call,
{diameter_sctp,handle_info,
[{sctp,#Port<6421.2588>,
{10,67,16,179},
44159,
{[],{sctp_shutdown_event,1536}}},
{transport,<6421.252.0>,accept,#Port<6421.2588>,0,undefined,
{32,32},
2,<6421.304.0>}]},
{1493,21505,746929}}
Both this and the grandparent commit concern behaviour seen on this
platform:
$ uname -a
SunOS beren 5.10 Generic_118833-33 sun4v sparc SUNW,Netra-T2000
|
|
Faster than lists:duplicate/2.
|
|
In particular, that the association id received in messages on a
one-to-one socket after peeloff may be different from the id received on
the listen socket at comm_up.
This seems odd, since it's then not possible to send until the id is
discovered by reception of an SCTP message containing it, but it's
unclear if this is a bug or a feature, or if it's specific to certain
platforms. Treat it as a feature in this commit, and get the association
id as mentioned, an incoming CER being expected before anything is sent.
Commit da3e5d67 has more history.
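A hedged sketch of discovering the id on the peeled-off socket (the
#sctp_sndrcvinfo{} record is from kernel's inet_sctp.hrl; the
surrounding state handling is hypothetical):
  -include_lib("kernel/include/inet_sctp.hrl").

  %% First delivery after peeloff: remember the association id for sends.
  assoc_id({sctp, _Sock, _Addr, _Port, {[#sctp_sndrcvinfo{assoc_id = Id} | _], _Data}},
           undefined) ->
      Id;
  assoc_id(_, Id) ->
      Id.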
|
|
By explicitly skipping instead of omitting testcases from groups.
|
|
Autoskip traffic testcases if transport isn't established instead of
having traffic cases run and fail.
|
|
The testcase is already run elsewhere in the suite.
|