Age | Commit message | Author |
|
Avoid suspending the fun caller not only if the purge is already
done, but also if a purge of another module has started. Another
purge of the same module cannot happen, since the current-to-old
transition includes waiting for thread progress.
|
|
Ensure that we cannot get any dangling pointers into code that
has been purged. This is done with a two-phase purge. In the first
phase, all fun entries pointing into the code to be purged are
marked for purge. All processes trying to call these funs are
suspended, which prevents new direct references into the code from
being created. When all processes have been checked, the suspended
processes are resumed.
The new purge strategy also completely ignores the existence of
indirect references to the code (funs). If such references exist,
they will cause badfun exceptions in the caller, but they will not
prevent a soft purge or cause a process holding such live references
to be killed during a hard purge. This is because it is impossible
to guarantee that no process in the system holds such an indirect
reference. Even when the system is completely clean of such
references, new ones can appear via distribution and/or disk.
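A minimal sketch of the two-phase idea, using simplified, hypothetical
data structures and function names (not the actual ERTS code), might
look like this:

    /* Hypothetical sketch of a two-phase fun purge. */
    #include <stddef.h>

    struct fun_entry {
        void *code;                /* address inside the module's code area  */
        int marked_for_purge;      /* set in phase one, before code is freed */
        struct fun_entry *next;
    };

    struct module_code {
        void *start;
        size_t size;
        struct fun_entry *funs;    /* fun entries pointing into this code */
    };

    /* Phase one: mark every fun entry pointing into the code to be
     * purged.  Processes calling marked funs are suspended elsewhere,
     * so no new direct references into the code can be created. */
    static void purge_phase_mark(struct module_code *mod)
    {
        for (struct fun_entry *f = mod->funs; f != NULL; f = f->next)
            f->marked_for_purge = 1;
    }

    /* Phase two: once all processes have been checked and resumed,
     * the code area itself can be released.  Calls through marked
     * funs now raise badfun instead of following dangling pointers. */
    static void purge_phase_release(struct module_code *mod)
    {
        mod->start = NULL;         /* the real code frees the area here */
        mod->size = 0;
    }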
|
|
|
|
|
|
* henrik/update-copyrightyear:
update copyright-year
|
|
Generic instructions have a min_window field. Its purpose is to
avoid calling transform_engine() when there are too few instructions
in the current "transformation window" for a transformation to
succeed.
Currently it does not do much good, since the window size is
decremented by one before being used. The reason for the subtraction
is probably that, in some circumstances in the past, the loader could
read past the end of the BEAM module while attempting to fetch
instructions to increase the window size. Therefore, it would not be
safe to simply remove the subtraction.
The simplest and safest solution seems to be to ensure that there
are always at least TWO instructions in the window when calling
transform_engine(). That is safe as long as a BEAM module always
ends with an int_code_end/0 instruction that is not involved in any
transformation.
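As a rough illustration of the intended behavior (all names below are
made up for the example, not the actual loader code):

    /* Hypothetical sketch: keep fetching instructions until the window
     * holds at least two, then run the transformation engine. */
    #include <stdio.h>

    #define MIN_WINDOW 2

    struct window {
        int size;        /* instructions currently in the window */
        int remaining;   /* instructions left in the module      */
    };

    static int fetch_instruction(struct window *w)
    {
        if (w->remaining == 0)
            return 0;
        w->remaining--;
        w->size++;
        return 1;
    }

    static void transform_engine(struct window *w)
    {
        printf("transforming a window of %d instructions\n", w->size);
    }

    int main(void)
    {
        struct window w = { 0, 5 };

        /* The trailing int_code_end/0 guarantees that a module never
         * runs out of instructions before the window reaches size two. */
        while (fetch_instruction(&w)) {
            if (w.size >= MIN_WINDOW) {
                transform_engine(&w);
                w.size = 0;        /* pretend the window was consumed */
            }
        }
        return 0;
    }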
|
|
|
|
to use real C struct with correct types
|
|
to use a real C struct instead of array.
|
|
Since 'd' operands can only be either an X register or a Y register,
we only need a single bit to distinguish them. Furthermore, we can
pre-multiply the register number by the word size to speed up
address calculation.
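A small, self-contained illustration of such an encoding (a made-up
packing, not the actual beam_emu representation):

    /* Hypothetical 'd' operand encoding: the low bit selects X or Y,
     * the remaining bits hold the register number pre-multiplied by
     * the word size, i.e. a ready-to-use byte offset. */
    #include <stdint.h>
    #include <stdio.h>

    typedef uintptr_t Operand;

    #define REG_Y_BIT ((Operand)1)   /* 0 = X register, 1 = Y register */

    static Operand encode_x(unsigned n) { return (Operand)n * sizeof(uintptr_t); }
    static Operand encode_y(unsigned n) { return ((Operand)n * sizeof(uintptr_t)) | REG_Y_BIT; }

    /* The stored byte offset is added directly to the register array
     * base; no multiplication is needed at run time. */
    static uintptr_t *resolve(Operand op, uintptr_t *xregs, uintptr_t *yregs)
    {
        uintptr_t *base = (op & REG_Y_BIT) ? yregs : xregs;
        return (uintptr_t *)((char *)base + (op & ~REG_Y_BIT));
    }

    int main(void)
    {
        uintptr_t x[8] = {0}, y[8] = {0};
        *resolve(encode_x(3), x, y) = 42;
        *resolve(encode_y(2), x, y) = 17;
        printf("x[3]=%lu y[2]=%lu\n", (unsigned long)x[3], (unsigned long)y[2]);
        return 0;
    }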
|
|
|
|
|
|
|
|
Add initial support for dirty schedulers.
There are two types of dirty schedulers: CPU schedulers and I/O
schedulers. By default, there are as many dirty CPU schedulers as there are
normal schedulers and as many dirty CPU schedulers online as normal
schedulers online. There are 10 dirty I/O schedulers (similar to the choice
of 10 as the default for async threads).
By default, dirty schedulers are disabled and conditionally compiled
out. To enable them, you must pass --enable-dirty-schedulers to the
top-level configure script when building Erlang/OTP.
Current dirty scheduler support requires the emulator to be built with SMP
support. This restriction will be lifted in the future.
You can specify the number of dirty schedulers with the command-line
options +SDcpu (for dirty CPU schedulers) and +SDio (for dirty I/O
schedulers). The +SDcpu option is similar to the +S option in that it takes
two numbers separated by a colon: C1:C2, where C1 specifies the number of
dirty schedulers available and C2 specifies the number of dirty schedulers
online. The +SDPcpu option allows numbers of dirty CPU schedulers available
and dirty CPU schedulers online to be specified as percentages, similar to
the existing +SP option for normal schedulers. The number of dirty CPU
schedulers created and dirty CPU schedulers online may not exceed the
number of normal schedulers created and normal schedulers online,
respectively. The +SDio option takes only a single number specifying the
number of dirty I/O schedulers available and online. There is no support
yet for programmatically changing at run time the number of dirty CPU
schedulers online via erlang:system_flag/2. Also, changing the number of
normal schedulers online via erlang:system_flag(schedulers_online,
NewSchedulersOnline) should ensure that there are no more dirty CPU
schedulers than normal schedulers, but this is not yet implemented. You can
retrieve the number of dirty schedulers by passing dirty_cpu_schedulers,
dirty_cpu_schedulers_online, or dirty_io_schedulers to
erlang:system_info/1.
Currently only NIFs are able to access dirty scheduler
functionality. Neither drivers nor BIFs currently support dirty
schedulers. This restriction will be addressed in the future.
If dirty scheduler support is present in the runtime, the initial status
line Erlang prints before presenting its interactive prompt will include
the indicator "[ds:C1:C2:I]" where "ds" indicates "dirty schedulers", "C1"
indicates the number of dirty CPU schedulers available, "C2" indicates the
number of dirty CPU schedulers online, and "I" indicates the number of
dirty I/O schedulers.
Document the dirty NIF API in the erl_nif man page. The API closely follows
Rickard Green's presentation slides from his talk "Future Extensions to the
Native Interface", presented at the 2011 Erlang Factory held in the San
Francisco Bay Area. Rickard's slides are available online at
http://bit.ly/1m34UHB .
Document the new erl command-line options, the additions to
erlang:system_info/1, and also add the erlang:system_flag/2 dirty scheduler
documentation even though it's not yet implemented.
To determine whether the dirty NIF API is available, native code can check
to see whether the C preprocessor macro ERL_NIF_DIRTY_SCHEDULER_SUPPORT is
defined. To check if dirty schedulers are available at run time, native
code can call the boolean enif_have_dirty_schedulers() function, and Erlang
code can call erlang:system_info(dirty_cpu_schedulers), which raises
badarg if no dirty scheduler support is available.
Add a simple dirty NIF test to the emulator NIF suite.
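As a minimal sketch of how native code might use the compile-time
macro and run-time check described above (module and function names
here are made up for the example):

    /* dirty_check.c -- report whether dirty schedulers are available. */
    #include "erl_nif.h"

    static ERL_NIF_TERM have_dirty(ErlNifEnv *env, int argc,
                                   const ERL_NIF_TERM argv[])
    {
        (void)argc; (void)argv;
    #ifdef ERL_NIF_DIRTY_SCHEDULER_SUPPORT
        /* Compiled with dirty scheduler support; also check at run time. */
        return enif_make_atom(env, enif_have_dirty_schedulers() ? "true"
                                                                : "false");
    #else
        /* This emulator build has no dirty scheduler support at all. */
        return enif_make_atom(env, "false");
    #endif
    }

    static ErlNifFunc funcs[] = {
        {"have_dirty", 0, have_dirty}
    };

    ERL_NIF_INIT(dirty_check, funcs, NULL, NULL, NULL, NULL)

From Erlang code, erlang:system_info(dirty_cpu_schedulers) can be wrapped
in a try/catch for the same purpose, since it raises badarg when no dirty
scheduler support is available.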
|
|
|
|
Calls to erlang:trace_pattern/3 will no longer block all
other schedulers.
We will still go to single-scheduler mode when new code is loaded
for a module that is traced, or when loading code when there is a
default trace pattern set. That is not impossible to fix, but that
requires much closer cooperation between tracing BIFs and the loader
BIFs.
|
|
|
|
This is a refactoring in preparation for adding a counter to the
Module struct for export entry tracing. It is nicer if the two are
kept together.
|
|
Having the entire implementation of range handling (the address
table) in one source file will help when, in the next commit, we
need to update the ranges without stopping all schedulers.
|
|
|
|
To simplify the implementation of literal pools (constant pools)
for the R12 release, a shortcut was taken regarding binaries --
all binaries would be stored as heap binaries regardless of size.
To allow a module containing literals to be unloaded, literal
terms are copied when sent to another process. That means that
huge literal binaries will also be copied if they are sent to
another process, which could be surprising.
Another problem is that the arity field in the header for the heap
object may not be wide enough to handle big binaries.
Therefore, bite the bullet and allow refc binaries to be stored
in literal pools. In short, the following need to be changed:
* Each loaded module needs an MSO list, linking all refc binaries
in the literal pool.
* When check_process_code/2 copies literals to a process heap,
it must link each referenced binary into the MSO list for the
process and increment the reference counter for the binary.
* purge_module/1 must decrement the reference counter for each
refc binary in the literal pool.
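A rough sketch of the bookkeeping listed above, using simplified,
hypothetical structures rather than the actual ERTS types:

    /* Hypothetical MSO (off-heap) bookkeeping for literal refc binaries. */
    #include <stddef.h>

    struct refc_binary {
        long refc;                    /* reference count                    */
    };

    struct binary_ref {               /* one off-heap reference to a binary */
        struct refc_binary *bin;
        struct binary_ref *next;      /* next entry in an MSO list          */
    };

    struct mso_list {                 /* per-module or per-process list     */
        struct binary_ref *first;
    };

    /* check_process_code/2: when a literal refc binary is copied to a
     * process heap, link the new reference into the process MSO list
     * and increment the reference counter. */
    static void link_literal_binary(struct mso_list *proc_mso,
                                    struct binary_ref *ref)
    {
        ref->bin->refc++;
        ref->next = proc_mso->first;
        proc_mso->first = ref;
    }

    /* purge_module/1: decrement the reference counter for each refc
     * binary in the literal pool; the last reference frees the binary. */
    static void unlink_literal_binaries(struct mso_list *module_mso)
    {
        for (struct binary_ref *r = module_mso->first; r != NULL; r = r->next) {
            if (--r->bin->refc == 0) {
                /* free the binary's storage here */
            }
        }
        module_mso->first = NULL;
    }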
|
|
Break apart code loading into the three functions:
erts_alloc_loader_state()
erts_prepare_loading()
erts_finish_loading()
erts_alloc_loader_state() and erts_prepare_loading() can be
executed with all schedulers running. Only erts_finish_loading()
needs to run in single-scheduler mode.
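The intended calling sequence, sketched with assumed signatures and
assumed block/unblock helpers (only the three erts_* names come from
this commit; everything else is illustrative):

    #include <stddef.h>

    typedef struct loader_state LoaderState;

    /* Signatures are assumptions made for this sketch. */
    extern LoaderState *erts_alloc_loader_state(void);
    extern int erts_prepare_loading(LoaderState *st,
                                    const void *beam_code, size_t size);
    extern int erts_finish_loading(LoaderState *st);
    extern void block_schedulers(void);     /* assumed helper */
    extern void unblock_schedulers(void);   /* assumed helper */

    static int load_module_sketch(const void *beam_code, size_t size)
    {
        /* Steps 1 and 2 may run with all schedulers active. */
        LoaderState *st = erts_alloc_loader_state();
        if (erts_prepare_loading(st, beam_code, size) != 0)
            return -1;

        /* Step 3 must run in single-scheduler mode. */
        block_schedulers();
        int res = erts_finish_loading(st);
        unblock_schedulers();
        return res;
    }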
|
|
There is no reason to have erts_load_module() return integer values,
only to have the caller convert the values to atoms. Return the
appropriate atom directly from the place where the error is generated
instead. Return NIL if the module was successfully loaded.
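For illustration, the return convention looks roughly like this (the
types and atom names below are stand-ins, not the real ERTS
definitions):

    typedef unsigned long Eterm;

    #define NIL ((Eterm)0)       /* stand-in for the real NIL value  */
    extern Eterm am_badfile;     /* stand-in error atoms             */
    extern Eterm am_on_load;

    static Eterm load_module_result(int header_ok, int has_on_load)
    {
        if (!header_ok)
            return am_badfile;   /* error atom returned at the source */
        if (has_on_load)
            return am_on_load;
        return NIL;              /* NIL means the module was loaded   */
    }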
|
|
|
|
* pan/otp_8332_halfword:
Teach testcase in driver_suite the new prototype for driver_async
wx: Correct usage of driver callbacks from wx thread
Adopt the new (R13B04) Nif functionality to the halfword codebase
Support monitoring and demonitoring from driver threads
Fix further test-suite problems
Correct the VM to work for more test suites
Teach {wordsize,internal|external} to system_info/1
Make tracing and distribution work
Turn on instruction packing in the loader and virtual machine
Add the BeamInstr data type for loaded BEAM code
Fix the BEAM disassembler for the half-word emulator
Store pointers to heap data in 32-bit words
Add a custom mmap wrapper to force heaps into the lower address range
Fit all heap data into the 32-bit address range
|
|
For cleanliness, use BeamInstr instead of the UWord
data type for any machine-sized words that are used
for BEAM instructions. Only use UWord for untyped
words in general.
|
|
Store Erlang terms in 32-bit entities on the heap, expanding the
pointers to 64-bit when needed. This works because all terms are stored
on addresses in the 32-bit address range (the 32 most significant bits
of pointers to term data are always 0).
Introduce a new datatype called UWord (along with its companion SWord),
which is an integer having the exact same size as the machine word
(a void *), but might be larger than Eterm/Uint.
Store code as machine words, as the instructions are pointers to
executable code which might reside outside the 32-bit address range.
Continuation pointers are stored on the 32-bit stack and hence must
point to addresses in the low range, which means that loaded BEAM
code must be placed in the low 32-bit address range (but, as said
earlier, the instructions themselves are full words).
No Erlang term data can be stored on C stacks (enforced by an
earlier commit).
This version gives a prompt, but test cases still fail (and dump core).
The loader (and emulator loop) has instruction packing disabled.
The main issues have been in rewriting the loader and the actual
virtual machine. Subsystems (like distribution) do not work yet.
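A tiny, self-contained illustration of the pointer-compression idea
(using Linux's x86-64 MAP_32BIT as a stand-in for the custom mmap
wrapper mentioned above; the types and helpers are illustrative, not
the actual emulator definitions):

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <assert.h>
    #include <sys/mman.h>

    typedef uint32_t  Eterm;      /* term slot: 32 bits on the heap      */
    typedef uintptr_t UWord;      /* machine word, same size as void *   */
    typedef intptr_t  SWord;      /* signed machine word                 */
    typedef UWord     BeamInstr;  /* loaded code uses full machine words */

    static Eterm compress_ptr(void *p)   /* pointer -> 32-bit term field */
    {
        UWord w = (UWord)p;
        assert(w <= (UWord)UINT32_MAX);  /* high 32 bits must be zero    */
        return (Eterm)w;
    }

    static void *expand_ptr(Eterm t)     /* 32-bit term field -> pointer */
    {
        return (void *)(UWord)t;
    }

    int main(void)
    {
        /* Allocate a small "heap" forced into the low 32-bit range. */
        Eterm *heap = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
        assert(heap != MAP_FAILED);

        Eterm packed = compress_ptr(&heap[2]);       /* fits in 32 bits */
        assert(expand_ptr(packed) == (void *)&heap[2]);
        printf("compressed heap pointer: 0x%08x\n", (unsigned)packed);

        munmap(heap, 4096);
        return 0;
    }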
|
|
|