Age | Commit message | Author |
|
Simplify the backends by letting asn1ct_check replace a
with the actual type.
|
|
Since fbcb7fe589edbfe79d10d7fe01be8a9f77926b89, the 'enumval'
variable is no longer used.
|
|
Given:
Semi ::= INTEGER (Lb..MAX, ...)
where Lb is an arbitrary integer, attempting to encode an
integer less than Lb would cause the encoder to enter an
infinite loop.
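A minimal sketch of the failing call (the file name, module name and
Lb = 1 are made up for illustration; the generated encode/2 API is
assumed):
    ok = asn1ct:compile("Semi.asn1", [per]),
    'Semi':encode('Semi', 0).
    %% 0 is below Lb; before the fix this call never returned, now it
    %% fails instead of looping.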
|
|
A semi-constrained INTEGER with a non-zero lower bound would be
incorrectly decoded. This bug was introduced in R16.
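A round-trip sketch of the kind of value that was mishandled
(hypothetical type T ::= INTEGER (5..MAX) in a module 'T', with the
documented {ok, Result} convention of the generated API assumed):
    {ok, Bin} = 'T':encode('T', 7),
    {ok, 7} = 'T':decode('T', Bin).
    %% Before the fix, the decoded value could differ from the value
    %% that was encoded.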
|
|
While at it, also test more integer values.
|
|
|
|
For the PER backends, generate code for accessing deep table
constraints at compile-time in the same way as is done for BER.
While at it, remove the complicated indentation code.
Also modernize the test suite and add a test for a deeper nested
constraint.
|
|
The name of the referenced object set in #simpletableattributes{}
would be an atom when used by INSTANCE OF, but a
{Module,ObjectSetName} tuple in all other cases. Simplify the code
by always using the latter format.
|
|
Most types don't have any validation functions that do anything
useful, so it is sufficient to call normalize_value/4 for them.
|
|
|
|
Unify the code for checking an enumeration value named in a
DEFAULT and in an ENUMERATED value. There is no need to handle
those cases differently. That will also make sure that
the following works:
E ::= ENUMERATED { x, ..., y }
e E ::= x
(Extensible ENUMERATEDs were not handled when defining values.)
Always generate an error when an unknown enumeration value is
given (previously, when used in a DEFAULT, only a message would be
printed and the compilation would still succeed). Also make sure
that we always include the line number for the incorrect enumeration.
Write a new test case and remove the extremely rudimentary
value_bad_enum_test/1 test case.
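For example, with E as above, a value assignment that names an
unknown enumeration value is now rejected with an error that
includes the line number (the names e2 and z below are made up):
e2 E ::= z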
|
|
|
|
Those functions have no reason to be synchronous since they don't
have a useful return value.
|
|
Also replace unused code with assertions.
While at it, let reply/2 return 'ok' to silence Dialyzer
warnings for unmatched returns.
|
|
|
|
|
|
|
|
Capture the common pattern of checking a list of named ASN.1
items in a check_fold/3 function.
Clean up checkt/3 using it, replacing the old-style catch
with a try..catch.
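A minimal sketch of the pattern being captured (illustrative only,
not the actual implementation; the fun's return convention is
assumed):
    %% Check every named item with the supplied fun, collecting any
    %% errors; an empty list means that all items passed.
    check_fold(S, [Name|Names], Check) ->
        case Check(S, Name) of
            ok -> check_fold(S, Names, Check);
            Error -> [Error|check_fold(S, Names, Check)]
        end;
    check_fold(_S, [], _Check) -> [].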
|
|
As a preparation for a future cleanup of the error handling, we
will need to take control of how asn1ct runs the different
compiler passes.
|
|
An ENUMERATED is always represented as a two-tuple, never as a
three-tuple.
|
|
asn1ct_constructed_per:gen_encode_prim_wrapper() no longer serves
any useful purpose, as it is easier to call
asn1ct_per:gen_encode_prim() directly. Also, the DoTag argument
for asn1ct_per:gen_encode_prim() is never actually used, so it can
be eliminated at the same time.
|
|
The list of files to remove was obtained by running "ls -lut"
after running the test cases and comparing the atime for each
file with the start of the test run.
|
|
|
|
asn1ct_check does not pass #pobjectdef{} records on to the backends
(all the original #pobjectdef{} records have been instantiated and
changed to #objectdef{} records).
|
|
Dialyzer issued two new warnings when the 'catch' was removed in
the previous commit.
|
|
The last clause in asn1ct_gen:type/1 calls type2/1 within a catch.
If type2/1 fails, {notype,X} is returned.
Since the body of type2/1 essentially is:
case lists:member(X, [...]) of
true ->
{primitive,bif};
false ->
case lists:member(X, [...]) of
true ->
{constructed,bif};
false ->
{undefined,user}
end
end
there is no way that type2/1 can fail. Therefore, we can eliminate
the catch and put the body of type2/1 into the last clause of
type/1. We can also eliminate the code in the callers of type/1
that match {notype,X}.
|
|
|
|
|
|
Stop exporting functions that are not called from outside their
module. If the functions are not used at all, remove the functions
too.
The unused exports were found by running:
xref:start(s).
xref:add_application(s, code:lib_dir(asn1)).
io:format("~p\n", [xref:analyze(s, exports_not_used)]).
|
|
ObjectDescriptor, UTCTime, and GeneralizedTime are not special and
can be handled in the same way as all other restricted string types.
|
|
This slight optimization will also eliminate some Dialyzer warnings.
|
|
This change brings down the execution time on my computer for the
entire asn1 test suite from about 340 seconds to 310 seconds.
|
|
asn1ct_gen:emit/1 used to make one call to io:put_chars/2 for each
part of the term passed to emit/1. By collecting all output into one
iolist for each call to emit/1, the time for running the entire asn1
test suite is reduced from about 460 seconds to 340 seconds on my
computer.
|
|
In the 64-bit run-time system, about 40 MB of old code remains
after running the asn1 test suite. Since it is very easy to reclaim
that memory, let's do it. Also get rid of the unnecessary call
to code:soft_purge/1 just before loading the new code.
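A sketch of the reclaim step (hypothetical helper; the test suite's
actual code may differ):
    %% code:purge/1 removes a module's old code, freeing the memory it
    %% holds; unlike code:soft_purge/1 it does not give up when
    %% processes are still running the old code.
    purge_old(Mod) ->
        code:purge(Mod).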
|
|
Almost always, encode_open_type/1 is called with the return value
from complete/1, which is always a binary. In the rare situation
that encode_open_type/1 is called directly with data from the
user application, call iolist_to_binary/1 before calling
encode_open_type/1.
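That is, a hypothetical call site receiving data from the
application would look like:
    %% Val may be any iodata from the application; normalize it to a
    %% binary before passing it on.
    encode_open_type(iolist_to_binary(Val))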
|
|
Hiding the details of decoding an external type will facilitate
changing the calling convention in a future commit.
|
|
Eliminate use of line macros and asn1_wrapper.
|
|
|
|
For the example from X.691, make sure to test that our encoded
value is the same as given in X.691.
Run the test for BER, too.
|
|
Ensure that compilation errors for Erlang code get printed.
If compilation of an ASN.1 specification goes wrong, print out
the filename of the offending specification. It can be seen
in the test case log, but because most test cases run in a
parallel group, the test case log is not available until all
test cases in the group have finished.
|
|
The code does record operations one step at a time and checks
for conditions that cannot happen.
|
|
asn1ct_check has translated all occurrences of 'ANY' to 'ASN1_OPEN_TYPE'.
|
|
|
|
The encoder wrongly assumed that a known multiplier string (such as
IA5String) encoded as exactly 16 bits did not need to be aligned to an
octet boundary. X.691 (07/2002) 27.5.7 says that it does. Since an
OCTET STRING encoded to 16 bits (two octets) should not be aligned to
an octet boundary, that means that asnct_imm:dec_string() needs an
additional parameter to determine whether a string of a given length
needs to be aligned.
Furthermore, there is another subtle rule difference: an OCTET STRING
which does not have a fixed length is always aligned (in PER), but
a known multiplier string is aligned if its upper bound is greater
than or equal to 16.
In encoding, make sure that short known multiplier strings and
OCTET STRINGs with extensible sizes are not aligned when they are
below the appropriate limit.
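To illustrate the two 16-bit cases described above (the type names
are made up):
Str2 ::= IA5String (SIZE (2))     -- 16 bits; must be octet-aligned
Oct2 ::= OCTET STRING (SIZE (2))  -- 16 bits; must not be octet-aligned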
|
|
Given the type:
S ::= IA5String (SIZE (5, ...))
attempting to encode (to PER/UPER) a string shorter than 5 characters
would fail. Similarly, attempting to decode such a string in the
BER format would fail.
In the case of BER, we can do no range checks if the size constraint
is extensible.
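A round-trip sketch for PER/UPER (assuming the type above lives in a
hypothetical module 'S' and the default string-as-list decoding):
    %% "abc" is shorter than the root size 5, so its length falls in
    %% the extension; encoding this used to fail.
    {ok, Bin} = 'S':encode('S', "abc"),
    {ok, "abc"} = 'S':decode('S', Bin).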
|
|
Consider a type with a size constraint with an extension marker
such as:
S ::= OCTET STRING (SIZE (0..10, ...))
For a length outside the root range (e.g. 42), the PER/UPER encoder
will encode the length field in the same way as it would the type
INTEGER (0..MAX) (i.e., as a semi-constrained whole number), while the
decoder would decode the length in the same way as a length field
without any constraint.
Clearly, either the encoder or the decoder is wrong. But which one?
Dubuisson's [1] book (page 442) says that the length should be encoded
as a semi-constrained whole number if the length is outside the root
range.
The X.691 standard document [2] also says (e.g. in 15.11) that length
fields should be a semi-constrained number, but it gives a reference
to section 10.9, "General rules for encoding a length determinant",
and not to 10.7, "Encoding of a semi-constrained whole number".
Reading the standard that way should imply that a length outside the
root range should be encoded in the same way as an unconstrained
length, and that the decoder does the right thing.
Further support for that interpretation:
- Larmouth's book [3], page 303.
- The ASN.1 playground. [4]
References:
[1] http://www.oss.com/asn1/resources/books-whitepapers-pubs/dubuisson-asn1-book.PDF
[2] http://www.itu.int/ITU-T/studygroups/com17/languages/X.691-0207.pdf
[3] http://www.oss.com/asn1/resources/books-whitepapers-pubs/larmouth-asn1-book.pdf
[4] http://asn1-playground.oss.com
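A round-trip sketch for the example above (hypothetical module 'S';
how the decoded octets are represented depends on the OTP release
and compiler options):
    Val = binary:copy(<<0>>, 42),   %% 42 is outside the root range 0..10
    {ok, Enc} = 'S':encode('S', Val),
    {ok, _Dec} = 'S':decode('S', Enc).
    %% With the fix, the length of Val is encoded as an unconstrained
    %% length determinant, which is what the decoder already expected.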
|
|
For what seems to be historical reasons,
asn1rtt_per:encode_octet_string/3 has an ExtensionMarker argument
that is no longer used. The extension marker (if any) is included in
the constraints argument.
|
|
|
|
The compiler would crash when given code such as the following:
Type ::= INTEGER (lower<..<upper)
lower INTEGER ::= 0
upper INTEGER ::= 42
|
|
The record #constraint{} is almost unused outside of the parser
except for two places in asn1ct_check.
The only correct usage of the record is in instance_of_constraints/2.
Eliminate that usage by updating the parser to pass that constraint
in the same way as all other constraints.
In check_integer_range/2, the record is used incorrectly. A
constraint for an integer will never be a list of #constraint{}
records. Therefore, the list comprehension will always produce
an empty list, and check_constr/2 will not actually check anything
(which is kind of lucky, since the 'ValueRange' range constraint
is incorrectly written - the lower and upper bounds should be in
a tuple).
For now, we will not attempt to actually start validating integer
ranges. Firstly (obviously) we will need to be sure that we
correctly handle all forms of constraints, and secondly we will
need to consider whether we need to produce a warning rather than an
error for compatibility reasons.
|