Introduce a wrapper API for lttng.
|
|
Floating-point exception support on MacOS X has never been especially
reliable, and has therefore been disabled by default for a long time.
The fpe support is now broken.
Therefore, take out the unnecessary test for modern mcontext in
configure (whatever that means) and the associated code in sys_float.c.
Add #error directives to sys_float.c to make it clear that
fpe is not supported.
It seems too risky to mess with the tangle of #ifdefs, so we will
not attempt to remove all fpe support code for MacOS X.
|
|
* egil/fix-fdatasync-mac/OTP-13411:
erts: Use fcntl(fd, F_FULLFSYNC) instead of fdatasync on Mac OSX
|
|
The syscall fdatasync does not work as intended on Mac OSX.
Both fsync and fdatasync now use fcntl(fd, F_FULLFSYNC) on Mac OSX.
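A minimal sketch of the user-visible effect from Erlang, assuming
file:sync/1 and file:datasync/1 map down to these fixed primitives
(the file name is made up):

    %% On Mac OSX both sync calls below now end up doing
    %% fcntl(fd, F_FULLFSYNC), which asks the drive to flush its
    %% cache to permanent storage (plain fsync/fdatasync may not).
    {ok, Fd} = file:open("wal.log", [raw, binary, append]),
    ok = file:write(Fd, <<"committed">>),
    ok = file:datasync(Fd),
    ok = file:close(Fd).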
|
|
* egil/extend-ttsl_drv-logging:
erts: Increase ttsl_drv logging capabilities
|
|
Those clauses are obsolete and never used by common_test.
|
|
The macro ?t is deprecated. Replace its use with 'test_server'.
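For illustration, assuming the usual -define(t, test_server) alias
from test_server.hrl:

    %% Before: uses the deprecated ?t macro
    ?t:fail({unexpected, Value}),
    %% After: call the module directly
    test_server:fail({unexpected, Value}),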
|
|
* vinoski/ds-avoid-lock:
Skip run queue lock check for dirty schedulers
|
|
* rickard/ds-sched-suspend:
Improved scheduler suspend functionality
|
|
- The calling process is now suspended while synchronizing scheduler
  suspends via erlang:system_flag(schedulers_online, _) and
  erlang:system_flag(multi_scheduling, _), instead of blocking the
  scheduler thread in the BIF call while waiting for the operation to
  synchronize. Besides releasing the scheduler for other work (or
  immediate suspend), this also makes it possible to abort the
  operation by killing the process.
- erlang:system_flag(schedulers_online, _) now only waits for normal
  schedulers to complete before it returns, since it may take a very
  long time before all dirty schedulers suspend.
- erlang:system_flag(multi_scheduling, block_normal|unblock_normal),
  which only operates on normal schedulers, has been introduced, since
  there are use cases where suspending dirty schedulers is not of
  interest (e.g., the hipe loader). See the sketch after this list.
- erlang:system_flag(multi_scheduling, block) still blocks all dirty
  schedulers as well as all normal schedulers except one, since it is
  hard to redefine what a multi-scheduling block means.
- The three operations:
  - changing the number of schedulers online
  - blocking/unblocking normal multi-scheduling
  - blocking/unblocking full multi-scheduling
  can now be done in parallel. This is important since otherwise a
  full multi-scheduling block could potentially delay the other
  operations for a very long time.
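A minimal usage sketch of the operations described above (module and
function names are made up; the return values of system_flag/2 depend
on what other processes have blocked, so they are not pattern-matched):

    -module(sched_ctl).
    -export([run_with_normal_block/1, with_schedulers_online/2]).

    %% block_normal leaves the dirty schedulers running.
    run_with_normal_block(Fun) ->
        _ = erlang:system_flag(multi_scheduling, block_normal),
        try Fun()
        after
            _ = erlang:system_flag(multi_scheduling, unblock_normal)
        end.

    %% schedulers_online now returns as soon as the normal schedulers
    %% have completed the change.
    with_schedulers_online(N, Fun) ->
        Old = erlang:system_flag(schedulers_online, N),
        try Fun()
        after
            erlang:system_flag(schedulers_online, Old)
        end.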
|
|
No point in checking tmp_alloc instance 0
as any non-scheduler thread could race us.
|
|
* sverk/literal-alloc-polish:
erts: Add emulator flag +MIscs for literal super carrier size
erts: Refactor init of erts_literal_mmapper
erts: Make literal_alloc documented and configurable
|
|
One little (unsigned long) left behind.
|
|
* sverk/master/halt-INT_MIN:
erts: Make erlang:halt() accept bignums as Status
erts: Change erl_exit into erts_exit
kernel: Remove calls to erl_exit
|
|
OTP-13251
* sverk/halt-INT_MIN:
erts: Make erlang:halt() accept bignums as Status
erts: Change erl_exit into erts_exit
kernel: Remove calls to erl_exit
|
|
* bjorn/multiple-load/OTP-13111:
code: Add functions that can load multiple modules
Refactor post_beam_load handling
Simplify and robustify code_server:all_loaded/1
Update preloaded modules
Add erl_prim_loader:get_modules/3
Add has_prepared_code_on_load/1 BIF
Allow erlang:finish_loading/1 to load more than one module
beam_load.c: Add a function to check for an on_load function
|
|
Conflicts:
erts/emulator/beam/erl_alloc.types
erts/emulator/beam/erl_bif_info.c
erts/emulator/beam/erl_process.c
erts/preloaded/ebin/erts_internal.beam
|
|
The BIFs prepare_loading/2 and finish_loading/1 have been
designed to allow fast parallel loading of many modules.
Because of the complications with on_load functions,
the initial implementation of finish_loading/1 only allowed
a single element in the list of prepared modules.
finish_loading/1 does not suspend other processes, but it must wait
for all schedulers to pass a write barrier ("thread progress"). The
time for all schedulers to pass the write barrier is highly variable,
depending on what kind of code they are executing. Therefore, allowing
finish_loading/1 to finish the loading for more than one module before
passing the write barrier could potentially be much faster than
calling finish_loading/1 multiple times.
The test case many/1, run on my computer, shows that with "heavy
load", finishing the loading of 100 modules in parallel is almost
50 times faster than loading them sequentially. With "light load",
the gain is still almost 10 times. A usage sketch of the BIFs
follows the sample output below.
Here follows an actual sample of the output from the test case on
my computer (a 2012 iMac):
Light load
==========
Sequential: 22361 µs
Parallel: 2586 µs
Ratio: 9
Heavy load
==========
Sequential: 254512 µs
Parallel: 5246 µs
Ratio: 49
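A sketch of how the extended BIF can be used (module and function
names are made up; assumes none of the modules has an on_load
function):

    -module(multi_load).
    -export([load_all/1]).

    %% Load many modules but pay the wait for "thread progress" only
    %% once. Beams is a list of {Module, BeamCode} pairs.
    load_all(Beams) ->
        Prepared = [case erlang:prepare_loading(Mod, Code) of
                        {error, Reason} -> error({prepare, Mod, Reason});
                        P -> P
                    end || {Mod, Code} <- Beams],
        ok = erlang:finish_loading(Prepared).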
|
|
Just mask away the high bits to get a more tolerant erlang:halt
that behaves the same on 32- and 64-bit architectures.
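An illustration (exactly which bits survive the masking is an
assumption here, and the OS may truncate the status further):

    %% A Status wider than a machine int no longer fails or collides
    %% with the internal exit codes; the high bits are masked away,
    %% so 32- and 64-bit emulators behave the same:
    erlang:halt(16#100000000 + 42).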
|
|
This is mostly a pure refactoring, except for the buggy cases where
calling erlang:halt() with a positive integer in the range
-(INT_MIN+2) to -INT_MIN got confused with ERTS_ABORT_EXIT,
ERTS_DUMP_EXIT and ERTS_INTR_EXIT.
Outcome          OLD erl_exit(n, )   NEW erts_exit(n, )
---------------  ------------------  -----------------------
exit(Status)     n = -Status <= 0    n = Status >= 0
crashdump+abort  n > 0, ignore n     n = ERTS_ERROR_EXIT < 0
The outcomes of the old ERTS_ABORT_EXIT, ERTS_INTR_EXIT and
ERTS_DUMP_EXIT are the same as before (even though their values
have changed).
|
|
* maint:
Do not wait for main lock when looking up process not running
|
|
Except it cannot be disabled and cannot be multi-threaded.
The bit-vector 'erts_literal_vspace_map' on 32-bit is currently only
protected by the literal allocator mutex. We could allow multiple
instances on 64-bit (I think), but what would be the point?
|
|
* sverk/proc-lock-check-fix:
erts: Fix lock checker for process locks
|
|
The wake_scheduler function asserts that the run queue is not locked,
but this assertion sometimes fails for dirty schedulers (in January
2016 a user in erlang-questions reported a dirty schedulers problem
related to this). After discussing it with Rickard, we decided
modifying the assertion was the most practical way to address the
problem.
|
|
We will need a way to check whether a prepared BEAM module has
an on_load function.
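A sketch of the check this enables, assuming the BIF is exposed as
erlang:has_prepared_code_on_load/1 (per the merge earlier in this
log):

    -module(on_load_check).
    -export([partition_on_load/1]).

    %% Split {Module, BeamCode} pairs into those whose prepared code
    %% has an on_load function (must be loaded one at a time) and the
    %% rest (can be committed with a single finish_loading/1).
    partition_on_load(Beams) ->
        lists:partition(
          fun({Mod, Code}) ->
                  Prepared = erlang:prepare_loading(Mod, Code),
                  erlang:has_prepared_code_on_load(Prepared)
          end, Beams).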
|
|
* rickard/rq-state-bug/OTP-13298:
Fix bug causing run-queue mask to become inconsistent
|
|
Do lock order check *before* trying to seize lock... duh!
|