Age | Commit message | Author |
|
Besides being noisy, they were already defined by a global Unix-
specific header, causing the Windows build to fail if one forgot to
define them.
|
|
* john/erts/runtime-lcnt:
Document rt_mask and add warnings about copy_save
Add an emulator test suite for lock counting
Break erts_debug:lock_counters/1 into separate BIFs
Allow toggling lock counting at runtime
Move lock flags to a common header
Enable register_SUITE for lcnt builds
Enable lcnt smoke test on all builds that have lcnt enabled
Make lock counter info independent of the locks being counted
OTP-14412
OTP-13170
OTP-14413
|
|
The implementation is still hidden behind ERTS_ENABLE_LOCK_COUNT, and
all categories are still enabled by default, but the actual counting can be
toggled at will.
OTP-13170
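A minimal sketch of that split, with invented names (not the ERTS source): the counting code is compiled in only under ERTS_ENABLE_LOCK_COUNT, while a runtime mask decides whether a given lock category is actually counted.

    #ifdef ERTS_ENABLE_LOCK_COUNT
    /* Hypothetical sketch: compile-time gate plus runtime per-category mask. */
    #define LCNT_CAT_ALLOCATOR  (1u << 0)   /* category bits invented for the example */
    #define LCNT_CAT_PROCESS    (1u << 1)

    static unsigned lcnt_rt_mask = ~0u;     /* all categories enabled by default */

    static void lcnt_record_acquire(unsigned category, unsigned long *counter)
    {
        if (lcnt_rt_mask & category)        /* counting can be toggled at runtime */
            (*counter)++;
    }
    #endif /* ERTS_ENABLE_LOCK_COUNT */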
|
|
Instead of passing around a file descriptor, use a function pointer to
facilitate more advanced backend write logic such as size limitation or
compression.
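The shape of such an interface can be sketched as follows; the names are invented here and the actual ERTS types are not shown in the commit.

    /* Hypothetical sketch: a write callback plus context instead of a raw fd. */
    #include <stddef.h>
    #include <unistd.h>

    typedef int (*write_fn)(void *ctx, const void *buf, size_t len);

    typedef struct {
        write_fn write;   /* plain write, size-limited write, compressing write, ... */
        void    *ctx;
    } sink_t;

    /* One possible backend: a thin wrapper around write(2). */
    static int fd_write(void *ctx, const void *buf, size_t len)
    {
        int fd = *(int *)ctx;
        return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
    }

    static int emit(sink_t *s, const void *buf, size_t len)
    {
        return s->write(s->ctx, buf, len);   /* callers never see a descriptor */
    }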
|
|
For non-amd64 it's a "normal" allocator with a
wrapper around mseg_alloc to call mprotect(PROT_EXEC).
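The wrapper idea can be illustrated in isolation (this is not the actual exec_alloc code): allocate a segment the usual way, then flip on PROT_EXEC.

    /* Illustration only: map a segment, then mark it executable. */
    #include <stddef.h>
    #include <sys/mman.h>

    static void *alloc_exec_segment(size_t size)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        if (mprotect(p, size, PROT_READ | PROT_WRITE | PROT_EXEC) != 0) {
            munmap(p, size);
            return NULL;
        }
        return p;
    }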
|
|
on 32-bit, as the granularity of the literal bit vector
is super-alignment.
|
|
Make the callbacks more general to be usable for any allocator
that uses its own ErtsMemMapper.
|
|
* henrik/update-copyrightyear:
update copyright-year
|
|
* carrier_create
* carrier_destroy
* carrier_pool_put
* carrier_pool_get
|
|
Problem #1: Goodfit was crippled by the fact that destroying_mbc()
was called _before_ the carrier was unlinked from mbc_list.
Problem #2: destroying_mbc() was called for carriers that could later be
resurrected from dc_list without a matching call to creating_mbc().
This was mostly a practical problem for the new test case
alloc_SUITE:migration, which uses the callbacks to create/destroy a mutex.
Solution:
destroying_mbc() is now only called just before a carrier is
destroyed (deallocated or put in the mseg cache).
remove_mbc() is called both when a carrier is inserted into the cpool
(as before) and now also when its last block is freed and the mbc is
scheduled for destruction but may later be resurrected from dc_list.
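A self-contained sketch of the corrected ordering, with simplified types (the real code works on Allctr_t/Carrier_t in erl_alloc_util.c): remove_mbc() fires whenever a carrier leaves the strategy's data structures, while destroying_mbc() fires only once the carrier is really going away and has already been unlinked.

    /* Simplified sketch, not the ERTS source; only the callback ordering matters. */
    typedef struct carrier {
        struct carrier *next;
    } carrier_t;

    typedef struct allocator {
        carrier_t *mbc_list;
        carrier_t *dc_list;                    /* emptied carriers, may be resurrected */
        void (*creating_mbc)(struct allocator *, carrier_t *);
        void (*destroying_mbc)(struct allocator *, carrier_t *);
        void (*remove_mbc)(struct allocator *, carrier_t *);
    } allocator_t;

    /* Last block freed: pull the carrier out of the strategy but keep it around.
     * If it is resurrected from dc_list, a matching creating/add call follows. */
    static void park_empty_mbc(allocator_t *a, carrier_t *c)
    {
        a->remove_mbc(a, c);
        c->next = a->dc_list;
        a->dc_list = c;
    }

    /* Carrier really goes away (deallocated or cached): unlink first, then notify. */
    static void destroy_mbc(allocator_t *a, carrier_t *c, carrier_t **prev_next)
    {
        *prev_next = c->next;                  /* unlink from mbc_list ...     */
        a->destroying_mbc(a, c);               /* ... before the callback runs */
    }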
|
|
And remove the old case of using only page alignment (12 bits).
|
|
The switch "+Musac <boolean>" controls whether sys_alloc carriers
are allowed.
|
|
rickard/aligned-sys_alloc-carriers_maint/OTP-11318
Conflicts:
erts/emulator/beam/erl_alloc.c
erts/emulator/beam/erl_alloc_util.c
erts/emulator/beam/erl_alloc_util.h
|
|
erts_sys_aligned_alloc() is currently implemented using posix_memalign if
it exists, or using _aligned_malloc on Windows.
If erts_sys_aligned_alloc() exists, allocators will create sys_alloc
carriers similar to how this was done pre-R16.
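A self-contained sketch of that platform split (the function name is invented here, not the ERTS one):

    #include <stddef.h>
    #include <stdlib.h>
    #ifdef _WIN32
    #include <malloc.h>                /* _aligned_malloc */
    #endif

    static void *aligned_alloc_compat(size_t alignment, size_t size)
    {
    #ifdef _WIN32
        return _aligned_malloc(size, alignment);
    #else
        void *p = NULL;
        if (posix_memalign(&p, alignment, size) != 0)
            return NULL;
        return p;
    #endif
    }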
|
|
A cleanup after SBMBC was removed.
|
|
and add new callbacks add_mbc(), remove_mbc() and largest_fblk_in_mbc()
for carrier migration.
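A guessed sketch of what such a callback table might look like (the signatures are assumptions, not the actual erl_alloc_util interface); largest_fblk_in_mbc() lets the migration logic judge whether a carrier is worth abandoning or adopting.

    #include <stddef.h>

    typedef struct allctr  allctr_t;    /* allocator instance (opaque here) */
    typedef struct carrier carrier_t;   /* multiblock carrier (opaque here) */

    typedef struct {
        void   (*add_mbc)(allctr_t *, carrier_t *);              /* carrier adopted     */
        void   (*remove_mbc)(allctr_t *, carrier_t *);           /* carrier abandoned   */
        size_t (*largest_fblk_in_mbc)(allctr_t *, carrier_t *);  /* migration heuristic */
    } mbc_callbacks_t;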
|
|
by putting blocks from different carriers into separate search trees.
Carriers are currently organized in a naive linked list ordered by address.
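A small sketch of the layout being described, with invented names: one free-block search tree per carrier, and the carriers themselves on an address-ordered list.

    struct free_block {                 /* node in a per-carrier search tree */
        struct free_block *left, *right;
        unsigned long size;
    };

    struct carrier {
        struct carrier *next;           /* naive address-ordered linked list */
        struct free_block *free_tree;   /* free blocks of this carrier only  */
    };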
|
|
This is a modified partial revert of 2ab1d972f6fd37c17b05
|
|
No allocator strategy is using customized carrier headers anyway.
|
|
by making use of the new block header scheme to find the carrier header
and thereby the allocator.
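The lookup idea, illustrated with assumed field names and an assumed masking scheme (the real header layout is not shown here): a block's address is masked down to its carrier header, and the carrier header points at the owning allocator.

    #include <stdint.h>

    struct allocator;                              /* opaque */

    struct carrier_hdr {
        struct allocator *owner;                   /* allocator owning this carrier */
    };

    #define CARRIER_ALIGN ((uintptr_t)1 << 18)     /* assumed carrier alignment */

    static struct allocator *allocator_of_block(void *block)
    {
        struct carrier_hdr *c =
            (struct carrier_hdr *)((uintptr_t)block & ~(CARRIER_ALIGN - 1));
        return c->owner;
    }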
|
|
to allow realloc to determine the block size (in MBC or SBC)
without having to read the footer of the previous block,
which might be written to by a concurrent thread.
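The point can be shown with a tiny sketch (the header encoding below is assumed, not the real one): the size lives in the block's own header word, so realloc never reads the neighbouring block's footer that another thread may be writing.

    #include <stdint.h>

    typedef struct {
        uintptr_t hdr;                      /* size | flags, written only by the owner */
    } block_t;

    #define BLK_FLG_MASK ((uintptr_t)0x7)   /* e.g. SBC/MBC and allocation flags */

    static uintptr_t block_size(const block_t *b)
    {
        /* No access to the previous block's footer is needed. */
        return b->hdr & ~BLK_FLG_MASK;
    }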
|
|
- Document barrier semantics
- Introduce ddrb suffix on atomic ops
- Barrier macros for both the non-SMP and SMP cases
- Make the thread progress API a bit more intuitive
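For readers unfamiliar with the suffix: "ddrb" denotes a data-dependency read barrier. The snippet below is not the ERTS atomics API; it uses plain C11 atomics to show the kind of ordering such an operation provides.

    #include <stdatomic.h>
    #include <stddef.h>

    struct node { int value; };

    static _Atomic(struct node *) shared;

    static int read_dependent(void)
    {
        /* memory_order_consume: loads that depend on p are ordered after the
         * load of p itself, i.e. a data-dependency read barrier. */
        struct node *p = atomic_load_explicit(&shared, memory_order_consume);
        return p ? p->value : 0;
    }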
|
|
Almost all uses of the 'long' datatype are removed from the VM and tests
The emulator tests now run w/o drivers crashing
Nasty abs bug fixed in the VM, as well as type errors in allocator debug functions
Still one allocator test fails; domain knowledge is needed to fix that.
Fix type inconsistency in beam_load causing crashes
|
|
A number of memory allocation optimizations have been implemented. Most
optimizations reduce contention caused by synchronization between
threads during allocation and deallocation of memory. Most notably:
* Synchronization of memory management in scheduler-specific allocator
instances has been rewritten to use lock-free synchronization.
* Synchronization of memory management in scheduler-specific
pre-allocators has been rewritten to use lock-free synchronization.
* The 'mseg_alloc' memory segment allocator now uses scheduler-specific
instances instead of one instance. Apart from reducing contention,
this also ensures that memory allocators always create memory
segments on the local NUMA node on a NUMA system.
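As a rough sketch of what scheduler-specific instances mean in practice (names invented here, not the mseg_alloc source): each scheduler thread indexes its own instance, so the hot path stays scheduler-local.

    #define MAX_SCHEDULERS 1024

    struct mseg_instance {
        int placeholder;                   /* scheduler-local segment cache, counters, ... */
    };

    static struct mseg_instance instances[MAX_SCHEDULERS + 1];   /* [0]: common fallback */

    static struct mseg_instance *instance_for(unsigned scheduler_id)
    {
        return &instances[scheduler_id];   /* no shared lock on the owner's fast path */
    }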
|
|
Also add a 'low' field in system_info(allocator).
SHORT_LIVED is still in low memory.
|