Age | Commit message | Author |
|
* maint: (26 commits)
genop.tab: Add documentation for many BEAM instructions
asn1ct_constructed_per: Directly call asn1ct_gen_per
Clean up handling of .asn1db files
PER, UPER: Fix encoding/decoding of open types greater than 16K
PER, UPER: Optimize table constraints
PER, UPER: Optimize encoding using an intermediate format
Refactor decoding of components of SEQUENCE OF / SET OF
PER,UPER: Get rid of unused 'telltype' argument in decoding functions
Optimize the generated encode/2 function
UPER: Optimize complete/1
Clean up checking of objects
Improve tests of deep table constraints
BER: Handle multiple optional SEQUENCE fields with table constraints
Test OPTIONAL and DEFAULT for open types
PER/UPER: Fix encoding of an object set with multiple inlined constructs
Remove broken support for multiple UNIQUE
Extend the test for parameterized information objects
asn1_SUITE: Remove off-topic (and slow) smp/1 test case
SeqOf: Add more tricky SEQUENCE OF tests
Clean up handling of extension addition groups
...
|
|
|
|
There is (different) code for reading .asn1db files in both
asn1ct and asn1_db. Consolidate the reading into one routine
in asn1_db.
Another problem is that the encoding rule that the .asn1db
file was created for is not stored in the .asn1db file, but
only in the generated Erlang module. It is much easier and safer
to put the encoding rule in the .asn1db file itself. We will also
put the version number of the asn1 application into the file,
to ensure that we don't use an old .asn1db file that could
potentially be incompatible.
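A sketch of the kind of check this enables when reading a .asn1db
file (the term format shown here is invented for illustration; the
real file contents may differ):
    %% Accept the .asn1db file only if it was written by the same asn1
    %% version and for the same encoding rule as the current compilation.
    valid_asn1db({asn1db, Vsn, Erule}, Vsn, Erule) -> true;
    valid_asn1db(_, _, _) -> false.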
|
|
|
|
The generated code for table constraints has several problems:
* For each object set, a function for getting an encoding or decoding
fun is generated, regardless of whether it is actually used. In many
specifications, the object set actually used is the union of several
other object sets. That means that the code can become a lot bulkier
than it would need to be.
* The funs are not necessary; they just add to the code bloat
and generate unnecessary garbage at run-time. Also, one of the
arguments of each fun is the name of the class field, which is
already known at compile-time, and the decoding fun has unused
arguments.
How to fix the problems:
At each call site where an open type should be encoded/decoded, call
a generated function specialized for the actual object set and the
name of the field in the class. When generating the specialized
functions, make sure that we re-use a previously generated function
whenever possible.
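A hedged sketch of the idea (all names here are invented; they are
not taken from the actual generated code):
    %% Specialized for one object set and one class field; the call
    %% site calls it directly with the UNIQUE value and the open type
    %% value, so no fun needs to be created at run-time.
    enc_os_MyObjectSet_type(1, Val) -> enc_TypeA(Val);
    enc_os_MyObjectSet_type(2, Val) -> enc_TypeB(Val).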
|
|
There are some minor incompatibilities for BIT STRING:
{bit,Position} is now only supported for a named
BIT STRING type.
Values longer than the maximum size for the BIT STRING type
used to be silently truncated; they now cause an exception.
|
|
As a preparation for rewriting the handling of table constraints,
we must make sure that the code for decoding a SEQUENCE OF / SET OF
can be contained in a single clause of a function; thus, we
must not output the helper function for decoding each component
directly after the code that calls it. Use asn1ct_func:call_gen/3
to delay outputting the helper function.
|
|
|
|
Use 'try' instead of 'catch', and don't match anything that
cannot actually be returned from the generated encoding code.
|
|
|
|
|
|
|
|
Also extend the test suite with more tests of inlined constructs
in object sets.
|
|
According to the ASN.1 standard, having multiple UNIQUE fields in
a class is allowed. For example:
C ::= CLASS {
    &id1 INTEGER UNIQUE,
    &id2 INTEGER UNIQUE
}
In practice, no one uses multiple UNIQUE.
The ASN.1 compiler will crash if a class with multiple UNIQUE
is used, but the backends have half-hearted support for multiple
UNIQUE in that they generate helper functions similar to:
getenc_OBJECT_SET(id1, 42) ->
    fun enc_XXX/3;
...
Since we have no plans to implement support for multiple UNIQUE
(no one seems to have missed it), simplify the helper functions
like this:
getenc_OBJECT_SET(42) ->
    fun enc_XXX/3;
...
|
|
Break out the code to a separate function to make it more readable.
Also avoid hard-coding "Val1" as the name of the value variable,
since that may not hold in the future.
Instead of using a list comprehension like this:
case [X || X <- [element(5, Val),element(6, Val)],
           X =/= asn1_NOVALUE] of
    [] -> ...;
    _ -> ...
end
use an orelse chain:
case element(5, Val) =/= asn1_NOVALUE orelse
     element(6, Val) =/= asn1_NOVALUE of
    false -> ...;
    true -> ...
end
|
|
To facilitate optimizing PER encoding using an intermediate
format, we must change asn1rtt_real_common:encode_real/1 so that
it only returns the encoded binary.
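A hedged illustration of the interface change (this wrapper is
invented, not part of the actual code): a caller that still needs
the length can compute it from the returned binary.
    encode_real_with_length(Real) ->
        Bin = asn1rtt_real_common:encode_real(Real),
        {Bin, byte_size(Bin)}.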
|
|
|
|
The first clause of gen_enc_line() allows us to pass in [] as
the value for Element; if we modify the only caller that passes
[] to pass an actual expression, we can remove the first clause.
Furthermore, since the Pos argument was only used by the first
clause, we can remove the Pos argument.
We can also remove the first clause in gen_enc_component_optional(),
since the code in its body is exactly the same as in the following
clause.
|
|
A field in a class that references an object or object set is not
allowed to be referenced directly from within a SEQUENCE.
|
|
Using a list comprehension will simplify both the code generator
and the generated code. Also, if there is an ObjFun argument in
the host function, the BEAM compiler will make sure it is only
passed to the generated list comprehension function if it is
actually used.
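An illustrative sketch (function names invented) of such a generated
encoder for a SEQUENCE OF:
    %% The BEAM compiler lifts the comprehension into a local function
    %% and passes ObjFun along only if the component encoder uses it.
    enc_SeqOf(Val, ObjFun) ->
        [enc_Component(Comp, ObjFun) || Comp <- Val].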
|
|
Break out the rules for determining whether a string should be
aligned, so that they can be reused for encoding.
|
|
Both crypto and asn1 are supported.
|
|
|
|
* bjorn/asn1/not-small-bugs/OTP-11153:
PER/UPER: Correct decoding of SEQUENCEs with more than 64 extensions
testConstraints: Improve tests of semi-constrained INTEGERs
Test ENUMERATED with many extended values
UPER: Correct encoding of ENUMERATED with more than 63 extended values
Add asn1_test_lib:hex_to_bin/1
|
|
|
|
When a SEQUENCE was defined inline inside an extension addition
group like this:
InlinedSeq ::= SEQUENCE {
    ...,
    [[
        s SEQUENCE {
            a INTEGER,
            b BOOLEAN
        }
    ]]
}
the decoding code would return the contents of the SEQUENCE in a
record named 'InlinedSeq_ExtAddGroup1_s', while the record definition
in the generated HRL file would be 'InlinedSeq_s'.
Since there is no reason to use the longer record name (there is no
risk of ambiguity), correct the name in the decoding code.
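For reference, the record definition for the example above in the
generated HRL file would be along the lines of:
    -record('InlinedSeq_s', {a, b}).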
|
|
|
|
The Per argument is no longer used; it is only passed around.
|
|
Simplify the backends by letting asn1ct_check replace the
reference with the actual type.
|
|
Since fbcb7fe589edbfe79d10d7fe01be8a9f77926b89, the 'enumval'
variable is no longer used.
|
|
Given:
Semi ::= INTEGER (Lb..MAX, ...)
where Lb is an arbitrary integer, attempting to encode an
integer less than Lb would cause the encoder to enter an
infinite loop.
|
|
A semi-constrained INTEGER with a non-zero lower bound would be
incorrectly decoded. This bug was introduced in R16.
|
|
|
|
For the PER backends, generate code for accessing deep table
constraints at compile-time in the same way as is done for BER.
While at it, remove the complicated indentation code.
Also modernize the test suite and add a test for a more deeply
nested constraint.
|
|
The name of the referenced object set in #simpletableattributes{}
would be an atom when used by INSTANCE OF, but a {Module,ObjectSetName}
tuple in all other cases. Simplify the code by always using the
latter format.
|
|
Most types don't have any validation functions that do anything
useful, so it is sufficient to call normalize_value/4 for them.
|
|
Unify the code for checking an enumeration value named in a
DEFAULT and in an ENUMERATED value. There is no need to handle
those cases differently. That will also make sure that
the following works:
E ::= ENUMERATED { x, ..., y }
e E ::= x
(Extensible ENUMERATEDs were not handled when defining values.)
Always generate an error when an unknown enumeration value is
given (previously, when it was used in a DEFAULT, only a message
would be printed and the compilation would still succeed). Also make
sure that we always include the line number for the incorrect
enumeration value.
Write a new test case and remove the extremely rudimentary
value_bad_enum_test/1 test case.
|
|
|
|
Those functions have no reason to be synchronous since they don't
have a useful return value.
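A generic illustration (names invented; not the actual asn1_db code)
of turning such a synchronous request into an asynchronous one:
    %% Before, the caller sent a request and waited for a reply that it
    %% never used. Now the request is just sent and the caller goes on.
    cast(Server, Request) ->
        Server ! {asn1db, Request},
        ok.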
|
|
Also replace unused code with assertions.
While at it, let reply/2 return 'ok' to silence Dialyzer
warnings for unmatched returns.
|
|
|
|
|
|
|
|
Capture the common pattern of checking a list of named ASN.1
items in a check_fold/3 function.
Clean up checkt/3 using it, replacing the old-style catch
with a try..catch.
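A hedged sketch of what such a helper could look like (not the actual
asn1ct_check code):
    %% Apply Check to every named item and collect any errors, so that
    %% all errors for the list can be reported in one go.
    check_fold(S, [Name|Names], Check) ->
        case Check(S, Name) of
            ok -> check_fold(S, Names, Check);
            Error -> [Error|check_fold(S, Names, Check)]
        end;
    check_fold(_S, [], _Check) -> [].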
|
|
As a preparation for a future clean-up of the error handling, we
need to take control over how asn1ct runs the different
compiler passes.
|
|
An ENUMERATED is always represented as a two-tuple, never as a
three-tuple.
|
|
asn1ct_constructed_per:gen_encode_prim_wrapper() no longer serves
any useful purpose, as it is easier to call
asn1ct_per:gen_encode_prim() directly. Also, the DoTag argument
for asn1ct_per:gen_encode_prim() is never actually used, so it can
be eliminated at the same time.
|
|
asn1ct_check does not pass #pobjectdef{} records on to the backends
(all the original #pobjectdef{} records have been instantiated and
changed to #objectdef{} records).
|
|
Dialyzer issued two new warnings when the 'catch' was removed in
the previous commit.
|
|
The last clause in asn1ct_gen:type/1 calls type2/1 within a catch.
If type2/1 fails, {notype,X} is returned.
Since the body of type2/1 essentially is:
case lists:member(X, [...]) of
    true ->
        {primitive,bif};
    false ->
        case lists:member(X, [...]) of
            true ->
                {constructed,bif};
            false ->
                {undefined,user}
        end
end
there is no way that type2/1 can fail. Therefore, we can eliminate
the catch and put the body of type2/1 into the last clause of
type/1. We can also eliminate the code in the callers of type/1
that match {notype,X}.
|