* Literals are not copied between processes for messages or spawn
Increases the performance of message sending and process spawning when
literals are involved in the messages or arguments.
Conflicts:
erts/emulator/Makefile.in
This type of statistics is now available through the microstate
accounting API.
Add the BIF type "gcbif" in bif.tab for defining GC BIFs. This will
eliminate some of the hand-written administrative code for handling
GC BIFs, saving developer time.
Dirty schedulers only execute NIFs, so having them execute the full
process_main function isn't necessary. Add dirty_process_main for
dirty schedulers to execute instead.
Add erts_pre_dirty_nif(), called when preparing to execute a dirty
NIF.
Add more dirty NIF tests to verify that activities requiring the
process main lock can succeed when the process is executing a dirty
NIF.
* mikpe/otp-19-erts-integer-truncation-bugs/PR-1045/OTP-13606:
erl_unicode.c: fix integer truncation problems
do not limit heap fragments to 4 giga-words
erts_new_mso_binary(): do not truncate len
Conflicts:
erts/emulator/beam/erl_nif.c
- Termination of a process...
- Modify trace flags of process...
- Process info on process...
- Register/unregister of name on process...
- Set group leader on process...
... while it is executing a dirty NIF.
also simplified the interface to run PAM from trace
and non-call-trace.
This is the easy way out to avoid difficult locking
scenarios when accessing tracing flags on another process.
erts_block/unblock_fpe should only be called at entry to/exit from
native user code.
Add the possibility to use modules as trace data receivers. The functions
in the module have to be NIFs, as otherwise complex trace probes (for
example, trace probes for ports) would be very hard to handle.
This commit changes the way the ptab->tracer field works: instead of always
being an immediate, it is now NIL if no tracer is present, or else the
tuple {TracerModule, TracerState}, where TracerModule is an atom that is
later used to look up the appropriate tracer callbacks to call and
TracerState is just passed to the tracer callback. The default process and
port tracers have been rewritten to use the new API.
This commit also changes the order in which trace messages are delivered to
the potential tracer process. Any enif_send done in a tracer module may be
delayed indefinitely because of lock order issues. If a message is delayed,
any other trace message sent from that process is also delayed, so that order
is preserved for each traced entity. This means that for some trace events
(e.g. send/receive) the events may arrive at the trace receiver in an
unintuitive order (receive before send). Timestamps are taken when the trace
message is generated, so trace messages from different processes may arrive
with their timestamps out of order.
Both erlang:trace and seq_trace:set_system_tracer accept the new tracer
module tracers as well as the backwards compatible arguments.
OTP-10267
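A minimal sketch of installing such a tracer from Erlang; my_tracer is a
hypothetical module whose callbacks are implemented as NIFs, as required
above, and the state term is arbitrary:

    %% my_tracer is a placeholder tracer module; State is handed back
    %% to every tracer callback unchanged.
    State = #{},
    Pid = spawn(fun() -> receive stop -> ok end end),
    erlang:trace(Pid, true, [send, {tracer, my_tracer, State}]),
    %% The same {Module, State} form is accepted as system tracer:
    seq_trace:set_system_tracer({my_tracer, State}).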
* sverk/master/halt-INT_MIN:
erts: Make erlang:halt() accept bignums as Status
erts: Change erl_exit into erts_exit
kernel: Remove calls to erl_exit
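As a hedged illustration of the first item, a bignum is now accepted as
exit status from the Erlang side (the operating system may still truncate
the value it actually reports):

    %% The Status argument may now be a bignum; previously such a
    %% value was rejected. The OS may truncate the reported code.
    erlang:halt(1 bsl 70).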
The BIFs prepare_loading/2 and finish_loading/1 have been
designed to allow fast loading in parallel of many modules.
Because of the complications with on_load functions,
the initial implementation of finish_loading/1 only allowed
a single element in the list of prepared modules.
finish_loading/1 does not suspend other processes, but it must wait
for all schedulers to pass a write barrier ("thread progress"). The
time for all schedulers to pass the write barrier is highly variable,
depending on what kind of code they are executing. Therefore, allowing
finish_loading/1 to finish the loading for more than one module before
passing the write barrier could potentially be much faster than
calling finish_loading/1 multiple times.
The test case many/1, run on my computer, shows that with "heavy load"
finishing the loading of 100 modules in parallel is almost 50 times faster
than loading them sequentially. With "light load", the gain is still
almost 10 times.
Here follows an actual sample of the output from the test case on
my computer (a 2012 iMac):
Light load
==========
Sequential: 22361 µs
Parallel: 2586 µs
Ratio: 9
Heavy load
==========
Sequential: 254512 µs
Parallel: 5246 µs
Ratio: 49
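A minimal sketch (error handling omitted; the module list and the lookup
via code:which/1 are assumptions) of how these BIFs can be combined so
that many modules share a single thread-progress wait:

    %% Prepare every module first, then commit them all with one
    %% finish_loading/1 call, i.e. one wait for thread progress.
    load_all(Modules) ->
        Prepared = [begin
                        {ok, Code} = file:read_file(code:which(Mod)),
                        erlang:prepare_loading(Mod, Code)
                    end || Mod <- Modules],
        ok = erlang:finish_loading(Prepared).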
This is mostly a pure refactoring, except for the buggy cases where
calling erlang:halt() with a positive integer in the range
-(INT_MIN+2) to -INT_MIN got confused with ERTS_ABORT_EXIT,
ERTS_DUMP_EXIT and ERTS_INTR_EXIT.
Outcome            OLD erl_exit(n, )     NEW erts_exit(n, )
-----------------  --------------------  -----------------------
exit(Status)       n = -Status <= 0      n = Status >= 0
crashdump+abort    n > 0, ignore n       n = ERTS_ERROR_EXIT < 0
The outcome of the old ERTS_ABORT_EXIT, ERTS_INTR_EXIT and
ERTS_DUMP_EXIT is the same as before (even though their values have
changed).
We will need a way to check whether a prepared BEAM module has
an on_load function.
* sverk/fix-list-length-int/OTP-13288:
erts: Fix error cases in enif_get_list_length
erts: Use Sint instead of int for list lengths
* sverk/thread-unsafe-alloc:
erts: Fix faulty assert for non-smp
erts: Add checks for thread safe allocation
This avoids potential integer arithmetic overflow for very large lists.
Microstate accounting is a way to track which state the
different threads within ERTS are in. The main usage area
is to pinpoint performance bottlenecks by checking which
states the threads are in and then, from there, figuring out
why and where to optimize.
Since checking whether microstate accounting is on or off is
relatively expensive if done in a short loop, only a few of the
states are enabled by default; more states can be enabled
through configure.
I've done some benchmarking: the overhead with it turned off
is not noticeable, and with it on it is a fraction of a percent.
If you enable the extra states, depending on the benchmark,
the overhead when turned off is about 1% and when turned on
somewhere between 5% and 15%.
OTP-12345
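A hedged sketch of turning the accounting on and reading the counters
from Erlang (do_some_work/0 is a placeholder for whatever load is being
examined):

    %% Enable microstate accounting, run some load, read the
    %% per-thread counters, then turn it off again.
    erlang:system_flag(microstate_accounting, true),
    do_some_work(),
    Stats = erlang:statistics(microstate_accounting),
    erlang:system_flag(microstate_accounting, false),
    %% Stats is a list with one map per thread, holding the thread
    %% type and id plus a map of counters per state.
    io:format("~p~n", [Stats]).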
Assert that the thread-unsafe allocator is only created on non-smp
and only called by the main thread.
Removed the test of the unsafe allocator in a custom thread.
by ignoring literals.
erts_internal:check_process_code will be called again anyway
(with option {copy_literals, true}) before the module is actually purged.
No need to check literals twice.
* The same process must do enable/disable.
* A system process will force it and never gets 'aborted'.
* lukas/erts/forker: (28 commits)
erts: Never abort in the forked child
erts: Mend ASSERT makro for erl_child_setup
erts: Allow enomem failures in port_SUITE
erts: iter_port sleep longer on freebsd
erts: Allow one dangling fd if there is a gethost port
erts: Only use forker StackAck on freebsd
erts: It is not possible to exit the forker driver
erts: Add forker StartAck for port start flowcontrol
erts: Fix large open_port arg segfault for win32
erts: Fix memory leak at async open port
kernel: Remove cmd server for unix os:cmd
erts: Add testcase for huge port environment
erts: Move os_pid to port hash to child setup
erts: Handle all EINTR and EAGAIN cases in child setup
erts: Make child_setup work with large environments
erts: Fix forker driver ifdefs for win32
erts: Fix uds socket handling for os x
erts: Fix dereferencing of unaligned integer for sparc
erts: Flatten too long io vectors in uds write
erts: Add fd count test for spawn_driver
...
Conflicts:
erts/emulator/beam/erl_node_tables.c
erts/preloaded/src/erts_internal.erl
Instead of forking from the beam process, we create a separate
process in which all forks are done. This has several advantages:
1) performance:
   * we don't have to close all fd's in the world
   * fork only has to copy stuff from a small process
   * the work is done in a completely separate process
   * a 3x performance increase has been measured, and it can be made
     even greater (10x) if we cache the environment in child setup
2) stability
   * the exec is done in a process other than beam, which means that
     if the file we exec is on an NFS mount that is not available
     right now, we will not block a scheduler until the NFS returns
3) simplicity
   * we don't have to deal with SIGCHLD in erts
Unfortunately, this solution also implies some badness.
1) There will always be a separate process running together with
   beam on unix. This could be confusing and undesirable.
2) We have to transfer the entire environment to child_setup
   for each command.
OTP-13088
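For illustration only, a couple of ordinary calls whose external programs
are now started via the forker/child setup process on unix (the echo
commands are arbitrary examples):

    %% Both os:cmd/1 and open_port/2 with spawn_executable end up
    %% asking the forker process to do the fork and exec.
    Output = os:cmd("echo hello"),
    Port = open_port({spawn_executable, "/bin/echo"},
                     [{args, ["hello"]}, exit_status]),
    receive {Port, {exit_status, Status}} -> Status end.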
The TMPBUF option is no longer needed due to the is_literal test, and NONE
was only used for initial debugging, so we remove the entire option.
This is very verbose; you have been warned.
It should work with the copy-spy.py script, which may be a bit outdated.
This commit is just for debugging purposes and will probably be reverted.
It comes with the erts_debug:copy_shared/1 BIF. If SHCOPY_DISABLE
is defined, SHCOPY starts disabled and is dynamically enabled the first
time that the BIF is called.
Add functions size_shared, copy_shared_calculate and copy_shared_perform.
Add the infrastructure for making these communicate with each other.
Add debug information to other places in the VM, to watch interaction
with the sharing-preserving copy.
CAUTION: If you define the SHCOPY_DEBUG macro (after SHCOPY is actually
used in the VM) and make the whole OTP, there will be a lot of debugging
messages during make (it will also be enabled in erlc). You have been
warned...
* The youngest generation of the heap can now consist of multiple
  blocks. Heap fragments and message fragments are added to the
  youngest generation when needed without triggering a GC. After
  a GC the youngest generation is contained in one single block.
* The off_heap_message_queue process flag has been added. When
  enabled, all message data in the queue is kept off heap. When
  a message is selected from the queue, the message fragment (or
  heap fragment) containing the actual message is attached to the
  youngest generation. Message data stored off heap is not part of
  the GC.
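A minimal sketch of using the flag from a process, assuming the flag name
introduced in this commit message (released OTP versions expose the same
functionality as process_flag(message_queue_data, off_heap | on_heap)):

    %% Keep all message data for this process off heap, so a large
    %% message queue does not have to be copied around by the GC.
    process_flag(off_heap_message_queue, true),
    %% ... receive and handle a large volume of messages here ...
    process_flag(off_heap_message_queue, false).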