Age | Commit message | Author |
|
|
|
* lukas/stdlib/maps_iterators/OTP-14012:
erts: Limit size of first iterator for hashmaps
Update primary bootstrap
Update preloaded modules
erts: Remove erts_internal:maps_to_list/2
stdlib: Make io_lib and io_lib_pretty use maps iterator
erts: Implement batching maps:iterator
erts: Implement maps path iterator
erts: Implement map iterator using a stack
stdlib: Introduce maps iterator API
Conflicts:
bootstrap/lib/stdlib/ebin/io_lib.beam
bootstrap/lib/stdlib/ebin/io_lib_pretty.beam
erts/emulator/beam/bif.tab
erts/preloaded/ebin/erlang.beam
erts/preloaded/ebin/erts_internal.beam
erts/preloaded/ebin/zlib.beam
|
|
This iterator implementation fetches multiple elements to
iterate over in one call to erts_internal:maps_next instead
of one at a time. This means that the memory usage of the
iterator goes up, as elements are buffered, but the usage is
still bounded: in this implementation the maximum memory
usage is 1000 words.
This approach makes the iterator as fast as using
maps:to_list, so maps:iterator/2 has been removed.
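As an illustration (not part of the original commit), a minimal sketch of
walking a map with the new stdlib API, maps:iterator/1 and maps:next/1; the
batching happens inside the VM and is invisible to the caller:
-module(iter_example).
-export([to_pairs/1]).
%% Convert a map to a [{Key, Value}] list using the iterator API.
%% maps:next/1 returns {Key, Value, NextIterator}, or 'none' when
%% the iteration is finished.
to_pairs(Map) when is_map(Map) ->
    next(maps:iterator(Map)).
next(Iter) ->
    case maps:next(Iter) of
        {K, V, NextIter} -> [{K, V} | next(NextIter)];
        none -> []
    end.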
|
|
Magic references are *intentionally* indistinguishable from ordinary
references to Erlang software. Magic references do not change
the language, and are intended as a pure runtime-internal optimization.
An ordinary reference is typically used as a key in some table. A
magic reference instead holds a direct pointer to a reference-counted
magic binary. This makes it possible to implement various things without
having to do lookups in a table, and instead access the data directly.
Besides very fast lookups, this can also improve scalability by
removing a potentially contended table. A couple of examples of
planned future uses of magic references are ETS table identifiers
and BIF timer identifiers.
Besides enabling future optimizations, magic references should also
make it possible to replace the exposed magic binary kludge, that is,
magic binaries that are exposed as empty binaries to Erlang software.
|
|
* egil/20/erts/signal-service/OTP-14186:
kernel: Document signal server
erts: Use os module instead of erts_internal for set_signal/2
erts: Do not handle SIGILL
erts: Fix thread suspend in crashdump
erts: Do not enable SIGINT
erts: Use generic signal handler
erts: Add OS signal tests
erts: Handle SIGUSR1 via signal service instead
erts: Handle SIGTERM via signal service instead
kernel: Add gen_event signal server and default handler
erts: Add SIGHUP signal handler
erts: Remove whitespace errors
Conflicts:
erts/emulator/beam/bif.tab
|
|
A SIGHUP signal received by beam generates a '{notify, sighup}' message
to the registered process 'erl_signal_server', which is a gen_event
process.
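As a hedged sketch (module name and logging are illustrative, not part of
the commit), a handler can be attached to erl_signal_server with
gen_event:add_handler/3, and signal delivery enabled with os:set_signal/2:
-module(sighup_handler).
-behaviour(gen_event).
-export([init/1, handle_event/2, handle_call/2,
         handle_info/2, terminate/2, code_change/3]).
%% Attach with:  gen_event:add_handler(erl_signal_server, sighup_handler, [])
%% Enable with:  os:set_signal(sighup, handle)
init(_Args) -> {ok, #{}}.
%% The VM's {notify, sighup} message is dispatched by gen_event as the
%% event 'sighup' to each installed handler.
handle_event(sighup, State) ->
    io:format("SIGHUP received~n"),
    {ok, State};
handle_event(_Other, State) ->
    {ok, State}.
handle_call(_Request, State) -> {ok, ok, State}.
handle_info(_Info, State) -> {ok, State}.
terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.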
|
|
* maint:
erts: Add nif_SUITE:t_on_load
erts: Improve nif_SUITE:upgrade test
Don't leak old code when loading a module with an on_load function
Conflicts:
erts/preloaded/ebin/erts_code_purger.beam
erts/preloaded/ebin/erts_internal.beam
erts/preloaded/src/erts_code_purger.erl
|
|
Normally, calling code:delete/1 before re-loading the code for a
module is unnecessary but causes no problem.
But there will be problems if the new code has an on_load function.
Code with an on_load function is always loaded as old code
to allow it to be easily purged if the on_load function fails.
If the on_load function succeeds, the old and current code are
swapped.
So in the scenario where code:delete/1 has been called explicitly,
there is old code but no current code. Loading code with an
on_load function will then cause the reference to the old code to be
overwritten. That will at best cause a memory leak, and at worst
an emulator crash (especially if NIFs are involved).
To avoid that situation, we now put the code with the on_load
function in a special, third slot in Module.
ERL-240
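For reference, a minimal sketch (module name and body are illustrative) of a
module with an on_load function; re-loading such a module after an explicit
code:delete/1 is exactly the scenario described above:
-module(onload_example).
-on_load(init/0).
-export([id/1]).
%% Runs automatically when the module is loaded. If it returns anything
%% other than 'ok', the load fails and the newly loaded code is purged.
init() ->
    ok.
id(X) -> X.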
|
|
* kvakvs/erts/gc_minor_option/OTP-11695:
erts: Fix req_system_task gc typespec
Fix process_SUITE system_task_blast and no_priority_inversion2
Option to erlang:garbage_collect to request minor (generational) GC
Conflicts:
erts/emulator/beam/erl_process.c
erts/preloaded/src/erts_internal.erl
|
|
* rickard/time-unit/OTP-13735:
Update test-cases to use new symbolic time units
Replace misspelled symbolic time units
Conflicts:
erts/doc/src/erlang.xml
erts/emulator/test/long_timers_test.erl
|
|
Ensure that we cannot get any dangling pointers into code that
has been purged. This is done by a two-phase purge. In the first
phase, all fun entries pointing into the code to purge are marked
for purge. All processes trying to call these funs are suspended,
and by this we avoid getting new direct references into the code.
When all processes have been checked, the suspended processes are
resumed.
The new purge strategy also completely ignores the existence of
indirect references to the code (funs). If such references exist,
they will cause badfun exceptions for the caller, but they will not
prevent a soft purge or cause a process holding such live references
to be killed during a hard purge. This is because it is impossible
to guarantee that no processes in the system have such indirect
references: even when the system is completely clean of such
references, new ones can appear via distribution and/or disk.
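A hedged illustration of the visible effect: a soft purge is no longer
blocked by lingering funs into old code, only by processes directly
executing it.
-module(purge_example).
-export([try_soft_purge/1]).
%% code:soft_purge/1 returns false only if some process is directly
%% executing old code of Module; funs referring to the old code no
%% longer block the purge, and calling such a fun afterwards raises
%% a badfun exception.
try_soft_purge(Module) ->
    case code:soft_purge(Module) of
        true  -> purged;
        false -> old_code_still_running
    end.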
|
|
Besides dropping the two-word form of 'milliseconds' et al., the
symbolic time units are also changed from plural to singular.
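As an example (a sketch, not from the commit), the new one-word singular
unit names as accepted by erlang:convert_time_unit/3:
-module(unit_example).
-export([to_millis/1]).
%% 'second' and 'millisecond' are the new singular symbolic units.
to_millis(Seconds) ->
    erlang:convert_time_unit(Seconds, second, millisecond).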
|
|
Note: the minor GC option is a hint, and the GC may still decide to run a fullsweep.
Test case for major and minor GC on self.
Test case for major and minor GC on some other process + async GC test check.
Docs fix.
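A hedged sketch of the option as exposed through erlang:garbage_collect/2
(the request id handling below is illustrative):
-module(gc_example).
-export([minor_gc/1, async_major_gc/1]).
%% Request a minor (generational) collection; the VM treats the 'minor'
%% type as a hint and may still run a full sweep.
minor_gc(Pid) ->
    erlang:garbage_collect(Pid, [{type, minor}]).
%% Request an asynchronous major collection; the reply arrives as
%% {garbage_collect, RequestId, Result}.
async_major_gc(Pid) ->
    RequestId = make_ref(),
    async = erlang:garbage_collect(Pid, [{async, RequestId}, {type, major}]),
    receive
        {garbage_collect, RequestId, Result} -> Result
    end.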
|
|
* rickard/rm-mqd-mixed/OTP-13366:
Remove the 'message_queue_data' option 'mixed'
|
|
The extra trace data has been moved to the opts map in order
for the tracer to be able to distinguish between extra
trace data 'undefined' and no extra trace data. In the same
commit, all opts associations have been changed so that if
the tracer should not use them, the key is left unassociated
instead of being set to 'undefined'. This should give a
small performance gain and also makes the API easier to work
with.
|
|
The max_heap_size process flag can be used to limit the
growth of a process heap by killing the process before it becomes
too large to handle. The maximum can be set
using the `erl +hmax` option, `system_flag(max_heap_size, ...)`,
`spawn_opt(Fun, [{max_heap_size, ...}])` and
`process_flag(max_heap_size, ...)`.
It is possible to configure the behaviour of the process
when the maximum heap size is reached: the process may be
sent an untrappable exit signal with reason kill, and/or an
error_logger message with details on the process state may be
emitted. A new trace event called gc_max_heap_size is
also triggered for the garbage_collection trace flag
when the heap grows larger than the configured size.
If kill and error_logger are disabled, it is still
possible to see that the maximum has been reached by
doing garbage collection tracing on the process.
The heap size is defined as the sum of the heap memory
that the process is currently using. This includes
all generational heaps, the stack, any messages that
are considered to be part of the heap and any extra
memory the garbage collector may need during collection.
In the current implementation this means that when a process
uses the on_heap message queue data mode, the messages
in the internal message queue are counted towards
this value. For off_heap, only matched messages count towards
the size of the heap. For mixed, it depends on race conditions
within the VM whether or not a message is part of the heap.
Below is an example run of the new behaviour:
Below is an example run of the new behaviour:
Eshell V8.0 (abort with ^G)
1> f(P),P = spawn_opt(fun() -> receive ok -> ok end end, [{max_heap_size, 512}]).
<0.60.0>
2> erlang:trace(P, true, [garbage_collection, procs]).
1
3> [P ! lists:duplicate(M,M) || M <- lists:seq(1,15)],ok.
ok
4>
=ERROR REPORT==== 26-Apr-2016::16:25:10 ===
Process: <0.60.0>
Context: maximum heap size reached
Max heap size: 512
Total heap size: 723
Kill: true
Error Logger: true
GC Info: [{old_heap_block_size,0},
{heap_block_size,609},
{mbuf_size,145},
{recent_size,0},
{stack_size,9},
{old_heap_size,0},
{heap_size,211},
{bin_vheap_size,0},
{bin_vheap_block_size,46422},
{bin_old_vheap_size,0},
{bin_old_vheap_block_size,46422}]
flush().
Shell got {trace,<0.60.0>,gc_start,
[{old_heap_block_size,0},
{heap_block_size,233},
{mbuf_size,145},
{recent_size,0},
{stack_size,9},
{old_heap_size,0},
{heap_size,211},
{bin_vheap_size,0},
{bin_vheap_block_size,46422},
{bin_old_vheap_size,0},
{bin_old_vheap_block_size,46422}]}
Shell got {trace,<0.60.0>,gc_max_heap_size,
[{old_heap_block_size,0},
{heap_block_size,609},
{mbuf_size,145},
{recent_size,0},
{stack_size,9},
{old_heap_size,0},
{heap_size,211},
{bin_vheap_size,0},
{bin_vheap_block_size,46422},
{bin_old_vheap_size,0},
{bin_old_vheap_block_size,46422}]}
Shell got {trace,<0.60.0>,exit,killed}
|
|
Replace 'gc_start' and 'gc_end' with
* 'gc_minor_start'
* 'gc_minor_end'
* 'gc_major_start'
* 'gc_major_end'
|
|
OTP-13497
This trace event is triggered when a process is created, and it is
generated from the perspective of the process that was created.
|
|
This commit completes the tracing for processes so that
all messages sent by a process (via NIFs or otherwise) will
be traced.
The commit also adds tracing of all types of events from ports.
When enabling tracing using erlang:trace, the 'all' flag now also
enables tracing on all ports.
OTP-13496
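A small hedged sketch of the widened 'all' flag:
-module(port_trace_example).
-export([trace_everything/0]).
%% 'all' now enables tracing on all ports as well as all processes.
trace_everything() ->
    erlang:trace(all, true, [send, 'receive']).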
|
|
Add the possibility to use modules as trace data receivers. The functions
in the module have to be NIFs, as otherwise complex trace probes would be
very hard to handle (complex meaning, for example, trace probes for ports).
This commit changes the ptab->tracer field from always being an immediate
to now being NIL if no tracer is present, or else the tuple
{TracerModule, TracerState}, where TracerModule is an atom that is later
used to look up the appropriate tracer callbacks to call and TracerState
is just passed to the tracer callback. The default process and
port tracers have been rewritten to use the new API.
This commit also changes the order in which trace messages are delivered
to the potential tracer process. Any enif_send done in a tracer module may
be delayed indefinitely because of lock order issues. If a message is
delayed, any other trace message sent from that process is also delayed,
so that order is preserved for each traced entity. This means that for
some trace events (i.e. send/receive) the events may arrive in an
unintuitive order (receive before send) at the trace receiver. Timestamps
are taken when the trace message is generated, so trace messages from
different processes may arrive with timestamps out of order.
Both erlang:trace and seq_trace:set_system_tracer accept the new tracer
module tracers as well as the backwards compatible arguments.
OTP-10267
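A hedged usage sketch; 'my_tracer' stands for a hypothetical tracer callback
module whose callbacks are implemented as NIFs, as required above, and #{}
is an arbitrary tracer state:
-module(trace_setup_example).
-export([install/1]).
%% Deliver trace events for Pid to a tracer module instead of a tracer
%% process or port.
install(Pid) ->
    erlang:trace(Pid, true, [send, 'receive', {tracer, my_tracer, #{}}]).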
|
|
* henrik/update-copyrightyear:
update copyright-year
|
|
- The calling process is now suspended while synchronizing
scheduler suspends via erlang:system_flag(schedulers_online, _)
and erlang:system_flag(multi_scheduling, _), instead of blocking
the scheduler thread in the BIF call while waiting for the operation
to synchronize. Besides releasing the scheduler for other work
(or immediate suspend), this also makes it possible to abort the
operation by killing the process.
- erlang:system_flag(schedulers_online, _) now only waits for normal
schedulers to complete before it returns, since it may take
a very long time before all dirty schedulers suspend.
- erlang:system_flag(multi_scheduling, block_normal|unblock_normal),
which only operates on normal schedulers, has been introduced,
since there are use cases where suspension of dirty schedulers is
not of interest (the hipe loader, for example).
- erlang:system_flag(multi_scheduling, block) still blocks all
dirty schedulers as well as all normal schedulers except one, since
it is hard to redefine what a multi-scheduling block means.
- The three operations:
- changing the number of schedulers online
- blocking/unblocking normal multi-scheduling
- blocking/unblocking full multi-scheduling
can now be done in parallel. This is important since otherwise
a full multi-scheduling block could potentially delay the other
operations for a very long time.
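A minimal sketch of the new block_normal/unblock_normal operations (the
wrapped fun is illustrative):
-module(msb_example).
-export([with_normal_blocked/1]).
%% Block only the normal schedulers while Fun runs; dirty schedulers
%% keep running.
with_normal_blocked(Fun) ->
    erlang:system_flag(multi_scheduling, block_normal),
    try
        Fun()
    after
        erlang:system_flag(multi_scheduling, unblock_normal)
    end.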
|
|
* bjorn/multiple-load/OTP-13111:
code: Add functions that can load multiple modules
Refactor post_beam_load handling
Simplify and robustify code_server:all_loaded/1
Update preloaded modules
Add erl_prim_loader:get_modules/3
Add has_prepared_code_on_load/1 BIF
Allow erlang:finish_loading/1 to load more than one module
beam_load.c: Add a function to check for an on_load function
|
|
Conflicts:
erts/emulator/beam/erl_alloc.types
erts/emulator/beam/erl_bif_info.c
erts/emulator/beam/erl_process.c
erts/preloaded/ebin/erts_internal.beam
|
|
The BIFs prepare_loading/2 and finish_loading/1 have been
designed to allow fast loading of many modules in parallel.
Because of the complications with on_load functions,
the initial implementation of finish_loading/1 only allowed
a single element in the list of prepared modules.
finish_loading/1 does not suspend other processes, but it must wait
for all schedulers to pass a write barrier ("thread progress"). The
time for all schedulers to pass the write barrier is highly variable,
depending on what kind of code they are executing. Therefore, allowing
finish_loading/1 to finish the loading of more than one module before
passing the write barrier can potentially be much faster than
calling finish_loading/1 multiple times.
The test case many/1, run on my computer, shows that under "heavy load",
finishing the loading of 100 modules in parallel is almost 50 times
faster than loading them sequentially. Under "light load", the gain is
still almost 10 times.
Here follows an actual sample of the output from the test case on
my computer (a 2012 iMac):
Light load
==========
Sequential: 22361 µs
Parallel: 2586 µs
Ratio: 9
Heavy load
==========
Sequential: 254512 µs
Parallel: 5246 µs
Ratio: 49
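On the user-facing side, a hedged sketch using code:atomic_load/1 from the
"code: Add functions that can load multiple modules" commit above:
-module(multi_load_example).
-export([load_all/1]).
%% Load several modules in one operation; either all of them are
%% loaded, or none of them are.
load_all(Modules) ->
    case code:atomic_load(Modules) of
        ok -> ok;
        {error, Reasons} -> {error, Reasons}
    end.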
|
|
This commit implements erts_internal:system_check(schedulers) with the
intent of providing a basic responsiveness check of the schedulers.
|
|
Microstate accounting is a way to track which state the
different threads within ERTS are in. The main usage area
is to pinpoint performance bottlenecks by checking which
states the threads are in and then, from there, figuring out
why and where to optimize.
Since checking whether microstate accounting is on or off is
relatively expensive if done in a short loop, only a few of the
states are enabled by default; more states can be enabled
through configure.
I've done some benchmarking: the overhead with it turned off
is not noticeable, and with it turned on it is a fraction of a percent.
If you enable the extra states, depending on the benchmark,
the overhead when turned off is about 1% and when turned on
somewhere between 5-15%.
OTP-12345
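A brief usage sketch (the workload fun is illustrative): turn microstate
accounting on, run something, and read the per-thread counters back.
-module(msacc_example).
-export([sample/1]).
%% Returns a list with one entry per thread, describing the time spent
%% in each accounted state while Fun ran.
sample(Fun) ->
    erlang:system_flag(microstate_accounting, reset),
    erlang:system_flag(microstate_accounting, true),
    Fun(),
    erlang:system_flag(microstate_accounting, false),
    erlang:statistics(microstate_accounting).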
|
|
* maint:
Introduce time management in native APIs
Introduce time warp safe replacement for safe_fixed option
Introduce time warp safe trace timestamp formats
Conflicts:
erts/emulator/beam/erl_bif_trace.c
erts/emulator/beam/erl_driver.h
erts/emulator/beam/erl_nif.h
erts/emulator/beam/erl_trace.c
erts/preloaded/ebin/erlang.beam
|
|
* lukas/erts/gc_info/OTP-13265:
erts: Add garbage_collection_info to process_info/2
Conflicts:
erts/emulator/beam/erl_bif_info.c
|
|
New timestamp options for trace, sequential trace, and
system profile:
- monotonic_timestamp
- strict_monotonic_timestamp
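A hedged sketch of selecting one of the new timestamp formats for ordinary
tracing:
-module(ts_trace_example).
-export([trace_sends/1]).
%% Trace message sends from Pid using time warp safe, strictly
%% monotonic timestamps instead of the default timestamp flag.
trace_sends(Pid) ->
    erlang:trace(Pid, true, [send, strict_monotonic_timestamp]).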
|
|
* maint:
Light weight statistics of run queue lengths
Conflicts:
erts/preloaded/ebin/erlang.beam
|