Age | Commit message | Author |
|
|
|
as it sometimes fails with a timeout.
|
|
|
|
* vinoski/ds-enif-send:
enable enif_send to work from a dirty scheduler
|
|
* bjorn/erts/fix-lingering-tracer:
Teach the call_time trace to notice when the tracer dies (non-SMP system)
|
|
The call_time trace is a special kind of tracing that requires
a tracer process just like ordinary call trace, but it never
actually sends anything to the tracer. It merely uses the existence
of a tracer process (and call trace flags) as an indication that
call_time tracing is active for the process.
If the tracer dies in a non-SMP run-time system, processes with
call_time tracing would not notice that the tracer had
died. Furthermore, if the set_on_spawn flag was active, the dead
tracer could be propagated to newly spawned processes.
Before accumulating trace information in a non-SMP system, always
validate the tracer process. (In an SMP system, a reference to a
dead tracer will be cleared away each time a process is scheduled.)
While we could put all of the new code in beam_bp.c, we have chosen
to make a function call from beam_bp.c to a function in erl_trace.c
for clarity's sake and to ease further maintenance. In the future,
we might want to handle tracing in more similar ways in the SMP and
non-SMP systems.
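A hedged sketch, from the Erlang shell, of the setup this refers to (the traced
function is just an example): a tracer process and the call trace flag are
required, but the call_time counts are read back with erlang:trace_info/2 and
nothing is ever sent to the tracer.

    %% The tracer only marks that tracing is active; it receives no messages.
    Tracer = spawn(fun() -> receive stop -> ok end end),
    erlang:trace(self(), true, [call, {tracer, Tracer}]),
    erlang:trace_pattern({lists, seq, 2}, true, [call_time]),
    lists:seq(1, 100),
    %% Accumulated counts are fetched from the VM, not from the tracer.
    {call_time, [_|_]} = erlang:trace_info({lists, seq, 2}, call_time).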
|
|
* sverk/maps-erl_interface:
erts: Add distribution capability flag for maps DFLAG_MAP_TAG
erts: Change external format for maps
erts: Document external format for maps (MAP_EXT)
erl_interface: Add test for ei_skip_term of container terms
erl_interface: Add map support in ei_skip_term
erl_interface: Fix mem leak in ei_decode_encode_test
erl_interface: test decode/encode of maps
erl_interface: Add ei encode/decode for maps
erl_interface: test decode_encode of tuples and lists
erl_interface: refactor ei_decode_encode_test.c
|
|
to be:      116,Arity, K1,V1,K2,V2,...,Kn,Vn
instead of: 116,Arity, K1,K2,...,Kn, V1,V2,...,Vn
We think this will be better for future internal map structures
such as HAMT. It would be bad if we needed to iterate twice over a
HAMT in term_to_binary, once for keys and once for values.
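A hedged shell check of the resulting layout (the encoding of the individual
key and value bytes depends on their types):

    %% 131 is the version tag, 116 is MAP_EXT, then a 4-byte arity,
    %% then the K1,V1,...,Kn,Vn pairs.
    <<131, 116, Arity:32, _Pairs/binary>> = term_to_binary(#{a => 1}),
    1 = Arity.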
|
|
* sverk/valgrind-leaks:
erts: Suppress false leak in hipe_thread_signal_init
erts: Fix leak in nif_SUITE:resource_takeover (again)
|
|
|
|
|
|
The expression
#{} =:= M
where M is any map, would always result in 'true'.
This commit fixes the issue by first comparing the sizes of
both terms and then checking for size zero.
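A hedged illustration of the corrected behaviour:

    %% Before the fix, both comparisons evaluated to 'true'.
    true  = (#{} =:= #{}),
    false = (#{} =:= #{a => 1}).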
|
|
* lukas/erts/float_encoding/OTP-11738:
erts: Set default external enc to use new float scheme
|
|
* vinoski/ds2:
further enhancements for dirty schedulers
allow optional whitespace in dirty scheduler erl options
|
|
Add support for setting the number of dirty CPU schedulers online via
erlang:system_flag/2. Assuming the emulator is built with dirty schedulers
enabled, the number of dirty CPU schedulers online may not be less than 1,
nor greater than the number of dirty CPU schedulers available, nor greater
than the number of normal schedulers online. Dirty CPU scheduler threads
that are taken offline via system_flag/2 are suspended. The number of dirty
CPU schedulers online may be adjusted independently of the number of normal
schedulers online, but if system_flag/2 is used to set the number of normal
schedulers online to a value less than the current number of normal
schedulers online, the number of dirty CPU schedulers online is decreased
proportionally. Likewise, if the number of normal schedulers online is
increased, the number of dirty CPU schedulers online is increased
proportionally. For example, if 8 normal schedulers and 4 dirty CPU
schedulers are online, and system_flag/2 is called to set the number of
normal schedulers online to 4, the number of dirty CPU schedulers online is
also decreased by half, to 2. Subsequently setting the number of normal
schedulers online back to 8 also sets the number of dirty CPU schedulers
online back to 4. Augment the system_flag/2 documentation in the erlang man
page to explain this relationship between schedulers_online and
dirty_cpu_schedulers_online.
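A hedged sketch of that example, assuming an emulator started with 8 normal
and 4 dirty CPU schedulers online:

    %% Halving the normal schedulers online also halves the dirty CPU
    %% schedulers online; restoring one restores the other.
    erlang:system_flag(schedulers_online, 4),
    2 = erlang:system_info(dirty_cpu_schedulers_online),
    erlang:system_flag(schedulers_online, 8),
    4 = erlang:system_info(dirty_cpu_schedulers_online).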
Also ensure that all dirty CPU and I/O schedulers are suspended when
multi-scheduling is blocked via system_flag/2, and brought back online when
multi-scheduling is unblocked.
Add Rickard Green's rewritten check_enqueue_in_prio_queue() function that
inspects the process state more thoroughly to determine whether to enqueue it
and, if so, on which queue, including dirty queues when appropriate.
Make sure dirty NIF jobs do not trigger erlang:system_monitor long_schedule
messages.
Add more dirty scheduler testing to the scheduler test suite.
Remove the erts_no_dirty_cpu_schedulers_online global variable, since it's
no longer needed.
Execute dirty NIFs on a normal scheduler thread while multi-scheduling
blocking is in effect. Evacuate any dirty jobs residing in the dirty run
queues over to a normal run queue when multi-scheduling is blocked.
Allow dirty schedulers to execute aux work.
Set the dirty run queues halt_in_progress flag when halting the normal
schedulers.
Change dirty scheduler numbers to a structure including both scheduler
number and type, either dirty CPU or dirty I/O. Add some assertions to
ensure that dirty CPU schedulers operate only on dirty CPU scheduler
process flags, and the same for dirty I/O schedulers.
|
|
This change was triggered by the OSE float printing function
not working exactly the same way as on Linux/Win32. But it is
also a good change in general, as it cuts the encoded size of
floats by more than half.
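A hedged shell sketch of the size difference (tag 70 is NEW_FLOAT_EXT with an
8-byte IEEE 754 payload, tag 99 is the old 31-byte string-based FLOAT_EXT):

    %% New default encoding: 10 bytes in total.
    <<131, 70, _:8/binary>> = term_to_binary(1.5),
    %% Old encoding, still available via minor_version 0: 33 bytes in total.
    <<131, 99, _:31/binary>> = term_to_binary(1.5, [{minor_version, 0}]).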
|
|
|
|
* sverk/test_server/openbsd-dynlink:
erts: Skip driver_SUITE:thr_free_drv for VM without threads
erts: Fix driver_SUITE:otp_9302 for VM without threads
test_server: Fix dynlib link command for openbsd
|
|
OTP-11722
OTP-11724
* sverk/crypto/hmac-context-reuse-bug:
crypto: Fix bug when using old hmac context
erts: Fix NIF bug when load/upgrade fails after enif_open_resource_type
Conflicts:
erts/emulator/test/nif_SUITE.erl
|
|
Faulty test for maps update
|
|
..has been successfully called.
Opened resource types (created or taken over) were left "hanging",
leading both to memory leaks and to stranger, more serious misbehavior.
Now a proper rollback is done.
|
|
|
|
|
|
* Add tests for maps:merge/2
* Add tests for maps:update/3
* Test more corner cases
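A hedged sketch of the operations exercised by the new tests:

    %% merge/2 combines two maps; update/3 replaces a key that must exist.
    #{a := 1, b := 2} = maps:merge(#{a => 1}, #{b => 2}),
    #{a := 2} = maps:update(a, 2, #{a => 1}).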
|
|
|
|
erl_drv_output_term() and erl_drv_send_term() can send messages
containing maps with the use of the new ERL_DRV_MAP.
The driver API minor version is updated as new functionality is added.
|
|
|
|
|
|
and simplify the code by ignoring h_limit, which is always zero.
|
|
|
|
|
|
|
|
(foo())#{ k1 := V1, k2 => V2 }
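A hedged sketch of the two update operators applied to an arbitrary expression
(names are illustrative): ':=' updates a key that must already exist, while
'=>' inserts or updates.

    M0 = #{k1 => 1},
    #{k1 := 10, k2 := 2} = (M0)#{k1 := 10, k2 => 2}.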
|
|
* maps:without/2
* maps:foldl/3
* maps:foldr/3
* maps:map/2
* maps:size/1
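A hedged usage sketch of the functions listed above (foldl/3 and foldr/3 are
the names used at this point; the argument order is assumed to be
Fun(Key, Value, Acc)):

    M = #{a => 1, b => 2, c => 3},
    #{b := 2, c := 3} = maps:without([a], M),
    6 = maps:foldl(fun(_K, V, Acc) -> Acc + V end, 0, M),
    6 = maps:foldr(fun(_K, V, Acc) -> Acc + V end, 0, M),
    #{a := 2, b := 4, c := 6} = maps:map(fun(_K, V) -> V * 2 end, M),
    3 = maps:size(M).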
|
|
Name conforms to EEP.
|
|
|
|
|
|
|
|
In the current iteration of Maps we should deny *any* variables in
Map keys.
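A hedged illustration of the restriction as it applies to map patterns:

    %% Rejected in this iteration: a variable in the key position,
    %% e.g. #{K := _} = #{k => 1}.
    %% Legal: a literal key in the pattern.
    #{k := V} = #{k => 1}.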
|
|
|
|
|
|
- Update map_SUITE with hash and encode tests
- Update map_SUITE with external format decode tests
- Update map_SUITE with maps:to_list/1 and maps:from_list/1 tests
|
|
|
|
|
|
|
|
* vinoski/ds:
initial support for dirty schedulers and dirty NIFs
|
|
Add initial support for dirty schedulers.
There are two types of dirty schedulers: CPU schedulers and I/O
schedulers. By default, there are as many dirty CPU schedulers as there are
normal schedulers and as many dirty CPU schedulers online as normal
schedulers online. There are 10 dirty I/O schedulers (similar to the choice
of 10 as the default for async threads).
By default, dirty schedulers are disabled and conditionally compiled
out. To enable them, you must pass --enable-dirty-schedulers to the
top-level configure script when building Erlang/OTP.
Current dirty scheduler support requires the emulator to be built with SMP
support. This restriction will be lifted in the future.
You can specify the number of dirty schedulers with the command-line
options +SDcpu (for dirty CPU schedulers) and +SDio (for dirty I/O
schedulers). The +SDcpu option is similar to the +S option in that it takes
two numbers separated by a colon: C1:C2, where C1 specifies the number of
dirty schedulers available and C2 specifies the number of dirty schedulers
online. The +SDPcpu option allows numbers of dirty CPU schedulers available
and dirty CPU schedulers online to be specified as percentages, similar to
the existing +SP option for normal schedulers. The number of dirty CPU
schedulers created and dirty CPU schedulers online may not exceed the
number of normal schedulers created and normal schedulers online,
respectively. The +SDio option takes only a single number specifying the
number of dirty I/O schedulers available and online. There is no support
yet for programmatically changing at run time the number of dirty CPU
schedulers online via erlang:system_flag/2. Also, changing the number of
normal schedulers online via erlang:system_flag(schedulers_online,
NewSchedulersOnline) should ensure that there are no more dirty CPU
schedulers than normal schedulers, but this is not yet implemented. You can
retrieve the number of dirty schedulers by passing dirty_cpu_schedulers,
dirty_cpu_schedulers_online, or dirty_io_schedulers to
erlang:system_info/1.
Currently only NIFs are able to access dirty scheduler
functionality. Neither drivers nor BIFs currently support dirty
schedulers. This restriction will be addressed in the future.
If dirty scheduler support is present in the runtime, the initial status
line Erlang prints before presenting its interactive prompt will include
the indicator "[ds:C1:C2:I]" where "ds" indicates "dirty schedulers", "C1"
indicates the number of dirty CPU schedulers available, "C2" indicates the
number of dirty CPU schedulers online, and "I" indicates the number of
dirty I/O schedulers.
Document the dirty NIF API in the erl_nif man page. The API closely follows
Rickard Green's presentation slides from his talk "Future Extensions to the
Native Interface", presented at the 2011 Erlang Factory held in the San
Francisco Bay Area. Rickard's slides are available online at
http://bit.ly/1m34UHB .
Document the new erl command-line options, the additions to
erlang:system_info/1, and also add the erlang:system_flag/2 dirty scheduler
documentation even though it's not yet implemented.
To determine whether the dirty NIF API is available, native code can check
to see whether the C preprocessor macro ERL_NIF_DIRTY_SCHEDULER_SUPPORT is
defined. To check if dirty schedulers are available at run time, native
code can call the boolean enif_have_dirty_schedulers() function, and Erlang
code can call erlang:system_info(dirty_cpu_schedulers), which raises
badarg if no dirty scheduler support is available.
Add a simple dirty NIF test to the emulator NIF suite.
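A hedged sketch of the run-time check from the Erlang side, relying on the
badarg behaviour described above:

    %% Returns true if the emulator was built with dirty scheduler support.
    has_dirty_schedulers() ->
        try erlang:system_info(dirty_cpu_schedulers) of
            N when is_integer(N) -> true
        catch
            error:badarg -> false
        end.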
|
|
OTP-11628
* vinoski/rm-drv-async-cancel:
remove deprecated driver_async_cancel function
|
|
|
|
* sverk/term2bin-simplify:
erts: Refactor ESTACK & WSTACK to use a struct easy to "export"
erts: Fix benign ESTACK/WSTACK typo
erts: Fix compiler warnings for NO_JUMP_TABLE
erts: Run binary_SUITE:trapping even for 32bit
erts: Extend binary_SUITE:ttb_trap to also cover binary_to_term
erts: Remove the extra_root feature from the process structure
erts: Simplify term_to_binary by removing saved ESTACK from root set
|