|
Given any Vt1 and Vt2 values, vtmerge(vtnew(Vt1, Vt2), vtold(Vt1, Vt2)) is always
equal to Vt1.
|
|
* siri/cover-perf/OTP-12330:
[test_server] Use new, more efficient functions in cover
[cover] Improve performance
|
|
|
|
Add functions for cover compilation and analysis on multiple
files. This allows for more parallelisation.
All functions for cover compilation can now take a list of
modules/files.
cover:analyse/analyze and cover:analyse_to_file/analyze_to_file can be
called without the Modules argument in order to analyse all cover
compiled and imported modules, or with a list of modules.
Also, the number of lookups in ets tables is reduced, which has also
improved the performance when analysing and resetting cover data.
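A usage sketch of the extended interface (module names m1..m3 are
placeholders, and the calls assume the cover API as extended here):

cover:start(),
cover:compile_beam([m1, m2, m3]),     %% cover-compile several modules at once
Res = cover:analyse(),                %% no Modules argument: analyse every
                                      %% cover-compiled and imported module
cover:analyse_to_file([m1, m2]),      %% or give an explicit list of modules
cover:stop().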
|
|
* raimo/infinite-loop-gethostbyname/OTP-12133:
Remove infinite loop in inet:gethostbyname_tm/4
|
|
|
|
|
|
The behaviour of the character types \d, \w and \s has always been to not match
characters with values above 255 (not 128), i.e. they are limited to ISO-Latin-1
rather than ASCII.
|
|
|
|
* bjorn/compiler/beam_jump:
beam_jump: Eliminate pathologically slow compilation
|
|
* bjorn/compiler/beam_validator:
beam_validator: Exit immediately on crashes
beam_validator: Remove the file/1 and files/1 functions
beam_validator: Remove support for all other unsupported instructions
beam_validator: Remove support for unsupported bit syntax instructions
|
|
* bjorn/compiler/maps:
beam_validator: Tighten and simplify map validation code
beam_utils: Correct test for has_map_fields in is_pure_test/1
map_SUITE: Cover comparisons of 'nil' in v3_codegen
|
|
649d6e73 simplified opt_simple_let_2/6 a little bit too much,
so that some list comprehensions in effect context were not
properly tail-recursive.
|
|
José Valim noticed that code such as:
match(1) -> 1;
match(2) -> 2;
match(3) -> 3;
...
match(1000) -> 1000.
would compile very slowly. The culprit is opt/3 in beam_jump.
What happens is that opt/3 will rewrite this code:
select_val ...
label 1
jump 1000
label 2
jump 1000
...
label 999
jump 1000
label 1000
return
very slowly to this code:
select_val ...
label 1
label 2
...
label 999
label 1000
return
The reason for the slowness is that when opt/3 sees this
sequence:
label 1
jump 1000
...
it will remove the label (storing it in a dictionary),
and pick up the previously processed instruction from
the accumulator:
select_val ...
jump 1000
label 2
jump 1000
...
That is done in order to process all labels before the
jump and also to get rid of the jump instruction if the
previous instruction is an "unreachable after". In this
case, re-processing the sequence will remove the now
unreachable jump instruction:
select_val ...
label 2
jump 1000
...
The problem is that re-processing the select_val instruction is
expensive. The instruction has a list of 1000 labels, all of which
will be added (again) to the set of referenced labels. The
select_val instruction will be re-processed again and again
until all labels and jumps have been gobbled up.
In the original version of beam_jump, opt/3 was not called
repeatedly until a fixpoint was found, but was expected to do
all its optimizations in one pass. The fixpoint iteration was
added later.
Since we now have the fixpoint iteration, there is no need
to do everything in a single pass. When we encounter a jump, we will
collect all previously seen labels and put them into the dictionary,
and then we will move on.
As a further optimization, we will look for sequences like this:
jump X
label ...
jump X
and replace them with:
label ...
jump X
In the example above, that will avoid 1000 updates of the dictionary.
After applying this optimization, compilation of the
pattern went from roughly 55 s to 0.1 s for the example
above but with 10000 clauses.
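A self-contained sketch of the jump-collapsing idea (instruction tuples
and function names are illustrative, not the actual beam_jump code):

collapse([{jump,To}=J | Is]) ->
    case drop_labels(Is) of
        [{jump,To} | _] ->
            %% The labels are followed by a jump to the same target,
            %% so the first jump is redundant and can be dropped.
            collapse(Is);
        _ ->
            [J | collapse(Is)]
    end;
collapse([I | Is]) ->
    [I | collapse(Is)];
collapse([]) ->
    [].

drop_labels([{label,_} | Is]) -> drop_labels(Is);
drop_labels(Is) -> Is.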
Reported-by: José Valim
|
|
|
|
* ethercrow/export_gen_udp_socket:
Export type gen_udp:socket/0
|
|
The code sharing optimization could produce a jump into the
middle of a 'try' block. beam_validator would reject the code.
Reported-by: Ulf Norell
|
|
Where it's less important to do so, but it has to be done at some point
since erlang:now/0 is deprecated. As in the parent commit, continue to
use the old api if the new one is unavailable.
|
|
In particular, deal with the deprecation of erlang:now/0 in OTP 18. Be
backwards compatible with older releases: the new api is only used when
available.
The test suites have not been modified.
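A sketch of the compatibility pattern (function name and arithmetic are
illustrative, not the actual diameter code):

now_compat_ms() ->
    try
        erlang:monotonic_time(millisecond)        %% OTP 18 time API
    catch
        error:undef ->                            %% older release: fall back
            {MS, S, US} = erlang:now(),
            (MS * 1000000 + S) * 1000 + US div 1000
    end.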
|
|
Commit 24993fc2 modified the state even when the new pool_size option
that the change was introduced to support was not used.
Doing so made downgrade impossible since old code would not be prepared
for the modified state. Retain a compatible state, so that simple code
replacement is enough.
|
|
On OS X at least. The testcase opens a listening socket, spawns 8
processes that call gen_tcp:accept/1, waits a couple of seconds, and
then spawns 8 processes that call gen_tcp:connect/3. Some of these
occasionally return {error, econnreset}.
|
|
Using the fact that transport processes can now be started concurrently.
The suite itself serialized starts when pretending to be diameter
starting a transport process.
|
|
Commit 9a671bf0 removed the need for diameter_sctp to send outgoing
messages through the listening process. That was prior to R5B02, so the
clause isn't needed for any upgrade case.
|
|
Stops were aborted at the first failure.
|
|
Which hasn't received any attention for some time. Clean it up, rename
the poorly named peer.erl (it's Diameter *nodes* that are implemented),
and make it possible to specify arbitrary transport configuration.
|
|
In particular, do away with unnecessary articles in the first sentence
of item lists.
|
|
With testcases that use restrict_connections and pool_size config to
establish multiple connections between two Diameter nodes, checking for
the expected number of transport processes using
diameter:service_info/2.
|
|
In particular, that starts for the same transport reference can now be
concurrent. Looking up a listener process and starting a new one if not
found didn't handle this (more than one process could find no listener),
and diameter_sctp assumed there could only be one transport process
waiting for an association.
|
|
Transport processes are started by diameter one at a time. In the
listening case, a transport process accepts a connection, tells the
peer_fsm process, which tells its watchdog process, which tells its
service process, which then starts a new watchdog, which starts a new
peer_fsm, which starts a new transport process, which (finally) goes
about accepting another connection. In other words, not particularly
aggressive in accepting new connections. This behaviour doesn't do
particularly well with a large number of concurrent connections: with
TCP and 250 connecting peers we see connections being refused.
This commit adds the possibility of configuring a pool of accepting
processes, by way of a new transport option, pool_size. Instead of
diameter:add_transport/2 starting just a single process, it now starts
the configured number, so that instead of a single process waiting for a
connection there's now a pool.
The option is even available for connecting processes, which provides an
alternative to adding multiple transports when multiple connections to the
same peer are required. In practice this also means configuring
{restrict_connections, false}: this is not implicit.
For backwards compatibility, the form of
diameter:service_info(_,transport) differs in the connecting case,
depending on whether or not pool_size is configured.
Note that transport processes for the same transport_ref() can be
started concurrently when pool_size > 1. This places additional
requirements on diameter_{tcp,sctp}, which will be dealt with in a
subsequent commit.
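A configuration sketch (service name, addresses and pool size are
placeholders): listen with a pool of 8 accepting transport processes.

{ok, _Ref} =
    diameter:add_transport(my_service,
        {listen, [{transport_module, diameter_tcp},
                  {transport_config, [{ip, {127,0,0,1}},
                                      {port, 3868}]},
                  {pool_size, 8}]}).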
|
|
|
|
* dgud/erlang/pr/116:
observer: Add SASL log view for processes
|
|
Add a new menu to toggle log view.
Disabled by default.
Disabled if rb_server is already started on the observed node,
in order not to interfere with somebody else's use of it.
If enabled, add a tab in the process view where log entries
related to the process are shown.
Needs an observed node running at least R16B2, due to the use
of the new capability of rb to write to a file descriptor
(on the observing node).
|
|
Stop traversing all segments and buckets of empty
dictionaries by adding a clause that checks the
dictionary size.
This improved compilation of an Erlang project from
7.5 to 5.5 seconds when trying out a sample
implementation of erl_lint that uses dicts.
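A sketch of the added clause (the record field and the fold_segments
helper are illustrative, not the actual dict.erl internals):

fold_dict(_Fun, Acc, #dict{size = 0}) ->
    %% Empty dictionary: return the accumulator without visiting
    %% any segments or buckets.
    Acc;
fold_dict(Fun, Acc, #dict{} = D) ->
    fold_segments(Fun, Acc, D).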
|
|
|
|
* erland/diameter/time/OTP-12439:
otp_SUITE: Ignore diameter undefined function errors
|
|
|
|
* ia/ssl/delete-vs-remove:
ssl: remove -> delete
|
|
Perfctr is a Linux kernel extension that allows programmatic access
to the performance monitoring counters found in most current CPUs.
However, development of perfctr ceased after 2010, and it cannot be
used with Linux kernels newer than 2.6.32.
Therefore the perfctr support code in the Erlang VM is effectively
dead code, so this patch removes it.
|
|
|
|
|
|
The beam_validator catches all exceptions and collects them.
It makes more sense to not catch 'error' and 'exit' exceptions,
but to just print out the name of the current function and pass
on the exception, just as all other compilation passes do. Those
kinds of exceptions are symptoms of the kind of severe but
easily caught bugs that occur during development.
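A sketch of the intended behaviour (names are illustrative, not the
actual beam_validator code):

validate(Name, Arity, Fun) ->
    try
        Fun()
    catch
        throw:Error ->
            %% Genuine validation failures are thrown and still collected.
            {error, Error};
        Class:Reason:Stack ->
            %% 'error' and 'exit' indicate a bug in the compiler itself:
            %% report which function was being validated and re-raise.
            io:format("Function: ~w/~w\n", [Name, Arity]),
            erlang:raise(Class, Reason, Stack)
    end.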
|
|
Before the beam_validator was added as a compiler pass, it was a
standalone module that could analyse existing .beam files and .S
files.
Even though beam_validator has been part of the compiler for many
releases, it still supports the analysis of .beam and .S files.
To reduce the code bloat and to improve coverage of beam_validator,
remove the file/1 and files/1 functions and all associated help
functions. We'll need to update the test suite, since some of the
checked in .S files have errors that beam_validator ignores, but
that will not be accepted when running them through the compiler
using the 'from_asm' option. In particular, we will need to export
all functions that should be validated (since the beam_clean pass
will remove any function that is not possible to call).
|
|
|
|
|
|
The assert_strict_literal_termorder/1 function is used to validate the
get_map_elements and has_map_fields instructions. In neither case is
it useful to allow an empty list of fields, so we should no longer
allow an empty list.
The mmap/2 function is cute, but it is used in only one place, so it
is much simpler to write a special-purpose function to extract the
keys from the list of map pairs.
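A sketch of such a helper (the name is illustrative): the pair list
alternates keys and values, so the keys can be picked out directly.

extract_map_keys([Key, _Val | T]) ->
    [Key | extract_map_keys(T)];
extract_map_keys([]) ->
    [].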
|
|
The has_map_fields test was not recognized in is_pure_test/1,
because beam_a has rewritten the {list,_} part of the instruction.
|
|
|
|
* bjorn/stdlib/string-tokens/OTP-12422:
Optimize string:tokens/2
Modernize and strengthen the test case for string:tokens/2
|
|
We can save some time by reversing the original string
before starting the tokenization.
When there is only one separator, we can save even more time
by treating that case specially so that we don't have to call
lists:member/2 for each character.
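A sketch of the single-separator path (illustrative, not the actual
string.erl code): the string is reversed once up front, so tokens can be
built by consing and still come out in the original order.

tokens_single(S, Sep) ->
    tokens_skip(lists:reverse(S), Sep, []).

%% Skip separators between tokens.
tokens_skip([Sep | S], Sep, Toks) -> tokens_skip(S, Sep, Toks);
tokens_skip([Ch | S], Sep, Toks) -> tokens_tok(S, Sep, [Ch], Toks);
tokens_skip([], _Sep, Toks) -> Toks.

%% Collect one token; since the input is reversed, consing restores order.
tokens_tok([Sep | S], Sep, Tok, Toks) -> tokens_skip(S, Sep, [Tok | Toks]);
tokens_tok([Ch | S], Sep, Tok, Toks) -> tokens_tok(S, Sep, [Ch | Tok], Toks);
tokens_tok([], _Sep, Tok, Toks) -> [Tok | Toks].

For example, tokens_single("a,,b,c", $,) gives ["a","b","c"].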
|
|
|