Age | Commit message | Author
|
I did not find any legitimate use of "can not"; however, I skipped
changing e.g. RFCs archived in the source tree.
|
|
ssl_pkix_db should not hard-code names. On the other hand, the names
are nicer as <Prefix>_dist than <Prefix>dist.
|
|
Commit 256e01ce80b3aadd63f303b9bda5722ad313220f was a misunderstanding
that actually broke the implementation.
It is not so important to keep a specific max; rather, max is a threshold
at which the table should be shrunk so that it does not grow indefinitely.
New sessions are created when the id is created and may be short-lived
if they are not registered for reuse due to handshake failure.
|
|
|
|
Use a map instead of a large tuple, which was not an option when the code
was originally written. More simplifications along these lines may
be done to the state record later.
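A minimal, hypothetical illustration of the idea (the function and field names below are invented for the sketch, not the actual ssl state):

    %% Before: state kept as a large positional tuple,
    %%   State = {Role, Socket, Opts, SessionCache, ...}
    %% After: a map with named keys, readable and extensible.
    init_state(Role, Socket, Opts, SessionCache) ->
        #{role => Role,
          socket => Socket,
          opts => Opts,
          session_cache => SessionCache}.

    %% Fields are then accessed and updated by name:
    role(#{role := Role}) -> Role.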
|
|
The move of the PEM cache to its own process was flawed and not all PEM files
were cached properly. We must properly handle both the distributed
and the normal mode of the ssl application.
|
|
The PEM cache handling has proven to be too disruptive of the manager process.
|
|
The current SSL implementation has a PEM cache running through the ssl
manager process, whose primary role is caching CA chains from files on
disk. This is intended as a way to save on disk operations when the
requested certificates are often the same; the cached values are
both time-bound and reference-counted. The code path also includes
caching the Erlang-formatted certificate as decoded by the public_key
application.
The same code path is used for DER-encoded certificates, which are
passed in memory and do not require file access. These certificates are
cached, but neither reference-counted nor shared across
connections.
For heavy usage of DER-encoded certificates, the PEM cache becomes a
central bottleneck for a server, forcing the decoding of every one of
them individually through a single critical process. It is also not
clear whether the cache remains useful for disk certificates in all cases.
This commit adds a configuration variable for the ssl application
(bypass_pem_cache = true | false) which allows files to be opened and
certificates decoded in the calling connection process rather than in the
manager. When this option is enabled, the operations that cache and return
data are replaced by operations that strictly return data.
To provide transparent behaviour, the 'CacheDbRef' used to keep track
of the certificates in the cache is replaced by the certificates themselves,
and all further lookup functions or folds can be done locally.
This has proven in benchmarks to more than triple the performance of
the SSL application under load (once the session cache had also been
disabled).
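A minimal sketch of enabling the option described above, either through sys.config or programmatically (set before any TLS connections are established):

    %% sys.config:
    [{ssl, [{bypass_pem_cache, true}]}].

    %% or at runtime:
    ok = application:set_env(ssl, bypass_pem_cache, true).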
|
|
|
|
* ingela/ssl-max-session-table/OTP-13490:
ssl: Adjust max table to work as expected from documentation
|
|
ssl already used crypto:strong_rand_bytes/1 for most operations, as
its use cases are mostly cryptographic. Now crypto:strong_rand_bytes/1
will be used everywhere.
However, crypto:rand_bytes/1 was used as a fallback if
crypto:strong_rand_bytes/1 threw low_entropy; this
will no longer be the case. This is a potential incompatibility.
The fallback was introduced a long time ago for interoperability reasons.
Nowadays this should not be a problem, and if it is, the security
compromise is not acceptable anyway.
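A minimal before/after sketch of the change (the function names are hypothetical; the crypto calls are the ones named above):

    %% Previously: fall back to the weaker generator if the strong one
    %% reports lack of entropy.
    random_bytes_old(N) ->
        try crypto:strong_rand_bytes(N)
        catch
            error:low_entropy -> crypto:rand_bytes(N)
        end.

    %% Now: always use the cryptographically strong generator, so a
    %% low_entropy error is no longer masked.
    random_bytes(N) ->
        crypto:strong_rand_bytes(N).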
|
|
The session table max size should be the configurable value Max and
not Max + 1.
|
|
|
|
If the session table is big, the validator may not have finished before
the validation interval is up; in this case we should not start a new
validator, adding to the CPU load.
|
|
If the upper limit is reached, invalidate the current cache entries, i.e. the session
lifetime is the max time a session will be kept, but it may be invalidated
earlier if the max limit for the table is reached. This will keep the ssl
manager process well behaved, not exhausting memory. Invalidating the entries
will incrementally empty the cache to make room for fresh session entries.
|
|
|
|
The previous commit, 7b93f5d8a224a0a076a420294c95a666a763ee60, fixed the macro
in only one place.
|
|
|
|
|
|
|
|
|
|
For comparison with file timestamps, os:timestamp makes more sense
and is present in OTP 17 as well as 18.
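A minimal sketch (helper name hypothetical) of comparing an os:timestamp() value with a file's modification time, both expressed as POSIX seconds:

    -include_lib("kernel/include/file.hrl").

    %% true if File was modified after the given os:timestamp() value
    file_newer_than(File, {MegaSecs, Secs, _MicroSecs}) ->
        Then = MegaSecs * 1000000 + Secs,
        {ok, #file_info{mtime = MTime}} =
            file:read_file_info(File, [{time, posix}]),
        MTime > Then.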
|
|
Conflicts:
lib/ssl/doc/src/ssl_app.xml
lib/ssl/src/ssl_manager.erl
|
|
The PEM cache is now validated by a background process, instead of
always keeping it if it is small enough and clearing it otherwise.
That strategy required that small caches were cleared by an API function
if a file changed on disk.
However, keep the clearing API function documented as it can still be useful.
|
|
Even though in the most common case an Erlang node will not be both client
and server, it may happen (for instance when running the Erlang distribution
over TLS).
Also try to mitigate the effect of dumb clients that could cause a
very large session cache on the client side, which can cause long delays
in the client. The server will have other means to handle a large
session table and will not do any select operations on it anyhow.
|
|
This reverts commit fcc6a756277c8f041aae1b2aa431e43f9285c368.
|
|
* qrilka/ssl-seconds-in-24h:
ssl: Fix incorrect number of seconds in 24 hours
|
|
24 hours should be equal to 86400 seconds and 86400000 milliseconds
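For reference, the corrected constants (macro names hypothetical):

    -define(SECONDS_PER_DAY, 86400).           %% 24 * 60 * 60
    -define(MILLISECONDS_PER_DAY, 86400000).   %% 24 * 60 * 60 * 1000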
|
|
|
|
Errors discovered using `erldocs`:
a superfluous @hidden tag would exit the edoc application;
'Multiple @spec tag': appended a @clear tag after the macro condition;
'@spec arity does not match': added the missing argument.
|
|
relative to the module name of the ssl_manager.
This can be beneficial when making tools that rename modules for internal
processing in the tool.
|
|
|
|
Conflicts:
lib/ssl/src/ssl.app.src
lib/ssl/src/ssl_manager.erl
|
|
Use the functions in crypto that we want to keep in the API.
|
|
|
|
Certificate db cleaning messages were sent to the wrong process after
restructuring to avoid bottlenecks.
It is possible that the ssl manager process gets two cleaning messages
for the same entry, i.e. the first cleaning message is sent, and before it
is processed a new reference is allocated and released again for the
entry, generating a second cleaning message.
Also, in ssl_manager:handle_info/2 it is possible that there exists a
new reference to an "old" file name with potentially new content.
|
|
|
|
|
|
|
|
A general case clause was put before a less general one, so that the less
general clause would never match.
|
|
Avoid cache validation with file:file_info/2 as this is too expensive and
causes a bottleneck in the file server. Instead we expose a new API function,
ssl:clear_pem_cache/0, to deal with the problem. As we think it will be
of occasional use, and the normal case is that the cache will be valid, we
think this is the right thing to do.
Convert file paths to binary representation in the ssl API module to
avoid unnecessary calls to the file module later on.
Also add sanity checks for openssl versions in the test suite due to new
openssl bugs.
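Using the new function is a single call, typically after a certificate file has been replaced on disk:

    %% Empty the PEM cache so that changed files are re-read.
    ok = ssl:clear_pem_cache().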
|
|
|
|
Only use ssl_manager for selecting new ids to guarantee uniqueness;
the reuse check does not need to be performed by the manager.
|
|
Do not use the ssl_manager process for selecting an id. It is unnecessary
to involve the manager process at all on the client side.
|
|
Check the last delay timer for both the client and server side to avoid
timing issues.
|
|
* ia/ssl/ets-next-problem/OTP-9703:
Replaced ets:next traversal with ets:foldl and throw
|
|
ets:next needs an explicit safe_fixtable call to be safe; instead we
use ets:foldl and throw to get out of the fold when we find the
correct entry.
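A minimal sketch of the pattern (function and predicate names hypothetical):

    %% Fold over the table and throw as soon as the wanted entry is found,
    %% instead of walking it with ets:first/1 and ets:next/2, which would
    %% need ets:safe_fixtable/2 to be safe against concurrent updates.
    select_entry(Tab, Pred) ->
        try
            ets:foldl(fun(Entry, Acc) ->
                              case Pred(Entry) of
                                  true  -> throw({found, Entry});
                                  false -> Acc
                              end
                      end, undefined, Tab)
        catch
            throw:{found, Entry} -> Entry
        end.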
|
|
Added session status "new" to mark sessions that are
in the session database to reserve the session id
but are not yet resumable, and that we want to keep separate from
sessions that have been invalidated for further reuse.
|
|
* ia/ssl/dist/OTP-7053:
First fully working version
Use ssl instead of being a proxy command
Connect from both sides works now
|
|
|