At the start of the VM a poll-set that the schedulers will check
is created. Fds that have triggered many times (at the moment,
"many" means 10) without being deselected in between are moved
into it. In this scheduler-specific poll-set fds do not use
ONESHOT, which means that the number of syscalls goes down
dramatically for such fds.
This poll-set is introduced in order to handle fds that are used
by the Erlang distribution and that never change their state
from {active, true}.
This poll-set only handles ready_input events; ready_output is
still handled by the poll threads.
During overload, the scheduler poll-set is polled on a 10ms
timer.
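A minimal sketch of the kind of fd this targets (the module, host and
port are made up for illustration): a connection left in {active, true}
that keeps triggering ready_input and is never deselected, so it ends
up in the scheduler poll-set.

    -module(active_true_sketch).
    -export([start/2]).

    %% Hypothetical distribution-style connection kept in {active, true}.
    %% Its fd triggers ready_input over and over without being deselected,
    %% so after enough triggers (currently 10) the emulator moves it to
    %% the scheduler poll-set, where no ONESHOT re-arming syscalls are
    %% needed.
    start(Host, Port) ->
        {ok, Sock} = gen_tcp:connect(Host, Port, [binary, {active, true}]),
        loop(Sock).

    loop(Sock) ->
        receive
            {tcp, Sock, Data} ->
                io:format("got ~p bytes~n", [byte_size(Data)]),
                loop(Sock);
            {tcp_closed, Sock} ->
                ok
        end.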
If the process had more free space than reductions, it could run a
lot longer than it was supposed to. It did not honor the number of
reductions going in either, nor did it bump reductions when
returning its result or erroring out.
This commit also removes this function from a work function in
scheduler_SUITE, as it is extremely sensitive to the number of
reductions spent in the test, causing
equal_and_high_with_part_time_max to fail on some machines.
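A small sketch of how reduction accounting can be observed from Erlang
(process_info/2 with the reductions item is a standard API; the measured
fun and module name are stand-ins): a call that does a lot of work but
reports very few reductions is the kind of unfairness this commit
addresses.

    -module(reds_sketch).
    -export([reductions_of/1]).

    %% Return the number of reductions the calling process spends in Fun().
    %% A BIF that neither honors nor bumps reductions shows a suspiciously
    %% small count here relative to the work it performs.
    reductions_of(Fun) ->
        {reductions, R0} = process_info(self(), reductions),
        _ = Fun(),
        {reductions, R1} = process_info(self(), reductions),
        R1 - R0.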
* rickard/time-unit/OTP-13735:
Update test-cases to use new symbolic time units
Replace misspelled symbolic time units
Conflicts:
erts/doc/src/erlang.xml
erts/emulator/test/long_timers_test.erl
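For reference, a sketch of the new symbolic time units in use (singular
unit names such as millisecond replace the older underscored plural
spellings; the module below is made up):

    -module(time_unit_sketch).
    -export([uptime_ms/0]).

    %% Uses the new singular symbolic units (millisecond, microsecond, ...)
    %% instead of the replaced spellings (milli_seconds, micro_seconds, ...).
    uptime_ms() ->
        Native = erlang:monotonic_time() - erlang:system_info(start_time),
        erlang:convert_time_unit(Native, native, millisecond).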
* henrik/update-copyrightyear:
update copyright-year
The macro ?t is deprecated. Replace its use with 'test_server'.
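A minimal before/after sketch (the old test_server.hrl defined ?t as an
alias for the test_server module; the module and function below are made
up):

    -module(t_macro_sketch).
    -export([wait_and_fail/1]).

    %% Before, with the deprecated macro:
    %%   ?t:sleep(1000),
    %%   ?t:fail(Reason)
    %% After, calling the module by its real name:
    wait_and_fail(Reason) ->
        test_server:sleep(1000),
        test_server:fail(Reason).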
- The calling process is now suspended while synchronizing
  scheduler suspends via erlang:system_flag(schedulers_online, _)
  and erlang:system_flag(multi_scheduling, _), instead of blocking
  the scheduler thread in the BIF call while waiting for the
  operation to synchronize. Besides releasing the scheduler for
  other work (or immediate suspend), this also makes it possible
  to abort the operation by killing the process.
- erlang:system_flag(schedulers_online, _) now only waits for
  normal schedulers to complete before it returns, since it may
  take a very long time before all dirty schedulers suspend.
- erlang:system_flag(multi_scheduling, block_normal|unblock_normal),
  which operates only on normal schedulers, has been introduced,
  since there are use cases where suspending dirty schedulers is
  not of interest (the hipe loader, for example). A sketch of its
  use follows after this list.
- erlang:system_flag(multi_scheduling, block) still blocks all
  dirty schedulers as well as all normal schedulers except one,
  since it is hard to redefine what a multi-scheduling block means.
- The three operations:
  - changing the number of schedulers online
  - blocking/unblocking normal multi-scheduling
  - blocking/unblocking full multi-scheduling
  can now be done in parallel. This is important since otherwise
  a full multi-scheduling block could potentially delay the other
  operations for a very long time.
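A sketch of the new normal-only block in use (the wrapper function and
module name are made up; block_normal/unblock_normal are the flags
introduced here):

    -module(msb_sketch).
    -export([with_normal_msb_blocked/1]).

    %% Run Fun with normal multi-scheduling blocked. Dirty schedulers are
    %% left alone, which is the point of block_normal; the previous state
    %% is restored afterwards.
    with_normal_msb_blocked(Fun) ->
        erlang:system_flag(multi_scheduling, block_normal),
        try
            Fun()
        after
            erlang:system_flag(multi_scheduling, unblock_normal)
        end.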
* bjorn/remove-test_server/OTP-12705:
Remove test_server as a standalone application
Erlang mode for Emacs: Include ct.hrl instead of test_server.hrl
Remove out-commented references to the test_server applications
Makefiles: Remove test_server from include path and code path
Eliminate use of test_server.hrl and test_server_line.hrl
As a first step toward removing the test_server application as
its own separate application, change the inclusion of
test_server.hrl to an inclusion of ct.hrl and remove the
inclusion of test_server_line.hrl.
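In a test suite the change amounts to swapping the include_lib lines
(the suite module name below is made up):

    -module(some_SUITE).
    %% Before:
    %%   -include_lib("test_server/include/test_server.hrl").
    %%   -include_lib("test_server/include/test_server_line.hrl").
    %% After:
    -include_lib("common_test/include/ct.hrl").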
In scheduler_SUITE, add a new test that runs a single dirty I/O
scheduler and launches a number of dirty I/O NIF calls that each sleep
for 3 seconds. Given the single scheduler, the first of these will run
while the rest queue up. Then start killing these processes, and
verify that they all exit correctly.
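A rough Erlang-side sketch of the shape of that test (the NIF name and
module are stand-ins; the real test loads a NIF that sleeps on a dirty
I/O scheduler):

    -module(dirty_kill_sketch).
    -export([run/1]).

    %% Spawn N processes that each make a long-running dirty I/O call,
    %% kill them, and check that every one of them terminates.
    run(N) ->
        Pids = [spawn(fun() -> dirty_io_sleep_nif(3000) end)
                || _ <- lists:seq(1, N)],
        Refs = [begin
                    Ref = erlang:monitor(process, Pid),
                    exit(Pid, kill),
                    Ref
                end || Pid <- Pids],
        [receive {'DOWN', Ref, process, _, killed} -> ok end || Ref <- Refs],
        ok.

    %% Stand-in for the real dirty NIF.
    dirty_io_sleep_nif(Ms) ->
        timer:sleep(Ms).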
Add support for setting the number of dirty CPU schedulers online via
erlang:system_flag/2. Assuming the emulator is built with dirty schedulers
enabled, the number of dirty CPU schedulers online may not be less than 1,
nor greater than the number of dirty CPU schedulers available, nor greater
than the number of normal schedulers online. Dirty CPU scheduler threads
that are taken offline via system_flag/2 are suspended. The number of dirty
CPU schedulers online may be adjusted independently of the number of normal
schedulers online, but if system_flag/2 is used to set the number of normal
schedulers online to a value less than the current number of normal
schedulers online, the number of dirty CPU schedulers online is decreased
proportionally. Likewise, if the number of normal schedulers online is
increased, the number of dirty CPU schedulers online is increased
proportionally. For example, if 8 normal schedulers and 4 dirty CPU
schedulers are online, and system_flag/2 is called to set the number of
normal schedulers online to 4, the number of dirty CPU schedulers online is
also decreased by half, to 2. Subsequently setting the number of normal
schedulers online back to 8 also sets the number of dirty CPU schedulers
online back to 4. Augment the system_flag/2 documentation in the erlang man
page to explain this relationship between schedulers_online and
dirty_cpu_schedulers_online.
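The worked example above, as it would look in the shell (system_info and
system_flag calls as documented; the starting values assume a build with
8 normal and 4 dirty CPU schedulers online):

    1> erlang:system_info(schedulers_online).
    8
    2> erlang:system_info(dirty_cpu_schedulers_online).
    4
    3> erlang:system_flag(schedulers_online, 4).   %% returns the old value
    8
    4> erlang:system_info(dirty_cpu_schedulers_online).
    2
    5> erlang:system_flag(schedulers_online, 8).
    4
    6> erlang:system_info(dirty_cpu_schedulers_online).
    4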
Also ensure that all dirty CPU and I/O schedulers are suspended when
multi-scheduling is blocked via system_flag/2, and brought back online when
multi-scheduling is unblocked.

Add Rickard Green's rewritten check_enqueue_in_prio_queue() function that
inspects process state more thoroughly to determine whether to enqueue the
process and, if so, on which queue, including the dirty queues when
appropriate.

Make sure dirty NIF jobs do not trigger erlang:system_monitor long_schedule
messages.

Add more dirty scheduler testing to the scheduler test suite.

Remove the erts_no_dirty_cpu_schedulers_online global variable, since it's
no longer needed.

Execute dirty NIFs on a normal scheduler thread while multi-scheduling
blocking is in effect. Evacuate any dirty jobs residing in the dirty run
queues over to a normal run queue when multi-scheduling is blocked.

Allow dirty schedulers to execute aux work.

Set the dirty run queues' halt_in_progress flag when halting the normal
schedulers.

Change dirty scheduler numbers to a structure including both scheduler
number and type, either dirty CPU or dirty I/O. Add some assertions to
ensure that dirty CPU schedulers operate only on dirty CPU scheduler
process flags, and the same for dirty I/O schedulers.
For applications where measurements show enhanced performance from the use
of a non-default number of emulator scheduler threads, having to accurately
set the right number of scheduler threads across multiple hosts, each with
different numbers of logical processors, is difficult because the erl +S
option requires absolute numbers of scheduler threads and scheduler threads
online to be specified.

To address this issue, add a +SP option to erl, similar to the existing +S
option but allowing the number of scheduler threads and scheduler threads
online to be set as percentages of logical processors configured and
logical processors available, respectively. For example, "+SP 50:25" sets
the number of scheduler threads to 50% of the logical processors
configured, and the number of scheduler threads online to 25% of the
logical processors available. The +SP option also interacts with any
settings specified with the +S option, such that the combination of options
"+S 4:4 +SP 50:50" (in either order) results in 2 scheduler threads and 2
scheduler threads online.

Add documentation for the +SP option.

Add tests for the +SP option to scheduler_SUITE.

Add tests and documentation for two existing features of the +S option: +S
0:0 resets the scheduler thread count and scheduler threads online count to
their defaults, and specifying negative numbers for +S results in those
values being subtracted from the default values for the host.
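A way to check the result from inside the node (the erl command line and
the 50:50 arithmetic come from the example above; the module name is made
up):

    -module(sp_sketch).
    -export([check/0]).

    %% Started as:  erl +S 4:4 +SP 50:50
    %% With both flags given, the percentages apply on top of the +S values,
    %% so per the combination rule above this should return {2,2}.
    check() ->
        {erlang:system_info(schedulers),
         erlang:system_info(schedulers_online)}.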
Unlink the test-proc instead of monitoring it and waiting for it to
terminate before stopping the node. This is because an unlink is
faster, simpler, and in this case more stable.
Large parts of the ethread library have been rewritten. The
ethread library is an Erlang runtime system internal, portable
thread library used by the runtime system itself.
The most notable improvement is a reader-optimized rwlock
implementation that dramatically improves the performance of
read-lock/read-unlock operations on multiprocessor systems by
avoiding ping-ponging of the rwlock cache lines. The reader
optimized rwlock implementation is used by miscellaneous
rwlocks in the runtime system that are known to be read-locked
frequently, and can be enabled on ETS tables by passing the
`{read_concurrency, true}' option upon table creation. See the
documentation of `ets:new/2' for more information.
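Enabling the reader-optimized rwlock on an ETS table is just a table
option, as described above (the table name and module are arbitrary):

    -module(read_conc_sketch).
    -export([new_cache/0]).

    %% A table that is read far more often than it is written:
    %% {read_concurrency, true} makes ETS protect it with the
    %% reader-optimized rwlock described above.
    new_cache() ->
        Tab = ets:new(lookup_cache, [set, public, {read_concurrency, true}]),
        true = ets:insert(Tab, {key, value}),
        [{key, value}] = ets:lookup(Tab, key),
        Tab.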
The ethread library can now also use the libatomic_ops library
for atomic memory accesses. This makes it possible for the
Erlang runtime system to utilize optimized atomic operations
on more platforms than before. Use the
`--with-libatomic_ops=PATH' configure command line argument
to specify where the libatomic_ops installation is
located. The libatomic_ops library can be downloaded from:
http://www.hpl.hp.com/research/linux/atomic_ops/
The changed API of the ethread library has also caused
modifications in the Erlang runtime system. Preparations for
the upcoming "delayed deallocation" feature have also been made,
since it depends on the ethread library.
Note: When building for x86, the ethread library will now use
instructions that first appeared on the Pentium 4 processor. If
you want the runtime system to be compatible with older
processors (back to the 486) you need to pass the
`--enable-ethread-pre-pentium4-compatibility' configure command
line argument when configuring the system.