* maint:
Updated OTP version
Update release notes
Update version numbers
Fix missing 'in' trace events during 'running' trace
|
|
* maint-21:
Updated OTP version
Update release notes
Update version numbers
Fix missing 'in' trace events during 'running' trace
|
|
* rickard/running-trace-fix/ERL-713/OTP-15269:
Fix missing 'in' trace events during 'running' trace
|
|
'in' trace events could be lost when a process had to be
rescheduled on another scheduler type (normal <-> dirty).
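For reference, a minimal Erlang sketch of how such a 'running' trace is set up; the module name and printouts are illustrative, while erlang:trace/3 with the 'running' flag and the {trace, Pid, in | out, MFA} message shapes are the documented interface the fix concerns:

    -module(running_trace_sketch).
    -export([trace_running/1]).

    %% Enable 'running' trace on Pid; the calling process becomes the
    %% tracer and receives the 'in'/'out' events as plain messages.
    trace_running(Pid) ->
        1 = erlang:trace(Pid, true, [running]),
        collect().

    collect() ->
        receive
            {trace, Pid, in, MFA} ->
                io:format("~p scheduled in at ~p~n", [Pid, MFA]),
                collect();
            {trace, Pid, out, MFA} ->
                io:format("~p scheduled out at ~p~n", [Pid, MFA]),
                collect()
        after 1000 ->
            ok
        end.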
|
|
* bjorn/compiler/ssa:
Travis CI: Run the SSA linter in the Linux64SmokeTest build
Remove retired compiler passes
Introduce a new SSA-based intermediate format
hipe_beam_to_icode: Correct translation of get_map_elements
beam_dead: Remove shortcut of binary matching instruction
beam_bs: Remove optimizations that are easier done on SSA format
Don't run unsafe compiler passes
Simplify optimizations by introducing is_nil late
beam_utils: Make is_tagged_tuple a pure test
beam_except: Enhance recognition of function_clause exceptions
beam_validator: Infer the types of copies in a smarter way
beam_validator: Improve merge of cons and literal list
beam_validator: Strengthen validation of func_info
beam_validator: Allow get_tuple_element before dsetelement
beam_validator: Don't transfer state to labels that can't be reached
beam_validator: Improve type analysis for tuples
beam_validator: Be more careful when updating try/catch state
beam_trim: Handle an empty list of instructions
v3_core: Number argument variables in ascending order
Teach binary instructions to use Y registers as destination
OTP-14894
|
|
Do not allocate good and bad shifts for single byte lookups
|
|
* max-au/dist_msg_too_long:
Cleanup unused dist output buf immediately instead of at GC
Throw 'system_limit' when distribution message size exceeds INT_MAX instead of crashing the emulator with 'Absurdly large distribution data buffer'
|
|
* maint:
Fix incoming suspend monitor down
|
|
* rickard/fix-suspend-monitor-down/OTP-15237/ERL-704:
Fix incoming suspend monitor down
|
|
An incoming suspend monitor down wasn't handled correctly when the
local monitor half had already been removed, resulting in an emulator crash.
|
|
The new code generator will use Y registers as destinations for
binary construction and matching instructions. v3_codegen would
always store terms in an X register first, leaving it to the
optimization passes to eliminate the extra moves.
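A hedged sketch of the kind of code affected; the module and function names are made up. Because B must survive the call to helper/1, it has to live in a stack slot (Y register), which the new code generator can now use directly as the destination of the binary construction:

    -module(y_reg_sketch).
    -export([build_and_use/1]).

    build_and_use(A) when is_integer(A) ->
        B = <<A:32, "tail">>,     %% constructed binary must survive the call below
        ok = helper(A),           %% the call forces B into a Y register
        {B, byte_size(B)}.

    helper(_) ->
        ok.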
|
|
Single-byte lookups always rely on `memchr` and
never actually use the good and bad shift arrays.
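For reference, a single-byte pattern like the one below is exactly the case that memchr alone can serve; the {Position, Length} value shown is the documented return of binary:match/2:

    1> binary:match(<<"a,b,c">>, <<",">>).
    {1,1}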
|
|
Optimize binary match from 10% up to 70x
|
|
* maint:
Fix compiler crash when compiling double receives
erts: Delete fd from poll-set when closing fd_driver port
|
|
into maint
erts: Delete fd from poll-set when closing fd_driver port
|
|
* rickard/full-cache-nif-env/OTP-15223/ERL-695:
Fix caching of NIF environment when executing dirty
# Conflicts:
# erts/emulator/beam/erl_nif.c
|
|
* dotsimon/ref_ordering_bug/OTP-15225:
Fixed #Ref ordering bug
Test #Ref ordering in lists and ets
|
|
* maint:
Fix caching of NIF environment when executing dirty
|
|
* rickard/full-cache-nif-env/OTP-15223/ERL-695:
Fix caching of NIF environment when executing dirty
|
|
* maint:
Fixed #Ref ordering bug
Test #Ref ordering in lists and ets
|
|
The idea is to use memchr for the first lookup in binary:match/2
and also after every match in binary:matches/2. We use memchr only
in these cases because benchmarks showed that relying on it even
when we had false positives could negatively affect performance.

This speeds up binary matching and binary splitting by 4x in some
cases and by 70x in other scenarios (when the last character in the
needle does not occur in the subject).

The reason to use memchr is that it is highly specialized in most
modern operating systems, often implemented with SIMD instructions.

The implementation uses the reduction count to figure out how many
bytes should be read with memchr. We could increase those numbers,
but they do not seem to make a large difference.
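A small sketch of the scenario described above; the module name and data are made up, and the '@' byte never occurs in the subject, which is the case where the speedup is largest:

    -module(bin_match_sketch).
    -export([bench/0]).

    bench() ->
        Subject = binary:copy(<<"abcdefgh">>, 100000),
        Pattern = binary:compile_pattern(<<"efg@">>),
        %% '@' is absent from Subject, so the search can skip ahead in
        %% memchr-sized chunks and the result is nomatch.
        {MicroSecs, nomatch} = timer:tc(binary, match, [Subject, Pattern]),
        MicroSecs.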
|
|
Do not allocate a new map when the value is the same
|
|
A lot of erts-internal messages, used behind APIs that implement
non-blocking calls such as port_command, would cause the seq_trace
token to be cleared from the caller when it should not have been.
This commit fixes that and adds asserts that make sure all
messages sent have the correct token set.
Fixes: ERL-602
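A minimal sketch of the scenario this fixes; the spawned command and label value are illustrative, while seq_trace:set_token/2, port_command/2 and seq_trace:get_token/1 are the documented APIs involved:

    token_survives_port_command() ->
        seq_trace:set_token(label, 17),
        Port = open_port({spawn, "cat"}, [binary]),
        true = port_command(Port, <<"ping">>),
        %% Before the fix, the internal messages behind port_command/2
        %% could clear the caller's token; it must still be set here.
        {label, 17} = seq_trace:get_token(label),
        port_close(Port).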
|
|
* lukas/erts/fix_udp_realloc_bug:
erts: Limit the automatic max buffer for UDP to 2^16
erts: Free udp buffer when getting EAGAIN
|
|
* lukas/erts/etoomanyrefs_forker/OTP-15210:
erts: Handle EMFILE errors in forker_driver for write
|
|
|
|
erl_alloc: align ErtsAllocatorState_t
|
|
There is no reason to have a larger buffer than this as
the recvmsg call will never return more data.
OTP-15206
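For context, a related user-visible knob is the 'buffer' socket option; a minimal sketch, with option values chosen only to illustrate the 2^16 bound:

    {ok, Socket} = gen_udp:open(0, [binary, {active, false}]),
    {ok, Opts} = inet:getopts(Socket, [buffer, recbuf]),
    ok = inet:setopts(Socket, [{buffer, 65536}]),   %% 2^16, the new upper bound
    ok = gen_udp:close(Socket).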
|
|
Before this change, if a write to the UDS failed due to EMFILE or
ETOOMANYREFS, the entire VM would crash. This change makes it so
that a SIGCHLD is simulated, so that the error is propagated to
the user instead of terminating the VM.
|
|
* lukas/erts/win_break_poll_thread_fix/OTP-15205:
erts: Fix bug where break would not trigger on windows
|
|
* RaimoNiskanen/raimo/can_not-should-mostly-be-cannot:
'can not' should mostly be 'cannot'
OTP-14282
|
|
I did not find any legitimate use of "can not"; however, I skipped
changing e.g. RFCs archived in the source tree.
|
|
After this whitespace modification there should be no occurrences of
"can not" split across a line break anywhere in the OTP repository,
so a simple git grep will find them all.
|
|
This patch optimizes map operations so that no new map is allocated
when a key's value is being replaced by the exact same value in memory.
Imagine this very common idiom:

    Map#{key := compute_new_value(Value, Condition)}

where:

    compute_new_value(X, true) -> X + 1;
    compute_new_value(X, false) -> X.

In many cases we are not changing the value under `key`, yet the code
prior to this patch would still allocate a new array for the map
values. This patch changes that.

The cost of the optimization is minimal: in the worst case it adds
only a pointer comparison and a boolean check. The major benefit is
reduced GC pressure, since no new data is allocated.

Below we list the operations we have changed alongside the benchmark
results. The benchmarks create a map and perform the same operation
roughly 20000 times, once replacing the key with the same value and
once with a different value.

* Map#{Key := Value}
For a map with 4 keys, replacing the fourth key 20000 times went from
718us to 539us.
For a map with 8 keys, replacing the fourth key 20000 times went from
976us to 555us.

* maps:update/3
For a map with 4 keys, replacing the fourth key 20000 times went from
673us to 575us.
For a map with 8 keys, replacing the fourth key 20000 times went from
827us to 585us.

* maps:put/3
For a map with 4 keys, replacing the fourth key 20000 times went from
763us to 553us.
For a map with 8 keys, replacing the fourth key 20000 times went from
788us to 561us.

Note that we have ported some optimizations found in maps:update/3
to maps:put/3 while creating this patch.
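A hedged way to observe the effect, using the internal debug BIF erts_debug:same/2 (pointer equality); the function name and map contents are arbitrary:

    same_value_update() ->
        Map0 = #{a => 1, b => 2, c => 3, d => [x, y]},
        %% The value fetched from Map0 is the exact same term in memory,
        %% so after this patch the update is expected to hand back Map0 itself.
        Same = Map0#{d := maps:get(d, Map0)},
        {erts_debug:same(Map0, Same),               %% expected: true after this patch
         erts_debug:same(Map0, Map0#{d := [x]})}.   %% a different value still builds a new map: false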
|
|
maps:new/0 is no longer a BIF
|
|
|