Age | Commit message | Author |
|
Instead of passing around a file descriptor, use a function
pointer to facilitate more advanced backend write logic such as
size limitation or compression.
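As a rough C sketch of the idea (the names below are illustrative,
not the actual erts interface): the backend gets a write callback plus
an opaque argument instead of a raw fd, so size limiting or compression
can hide behind the same call.

    #include <stddef.h>
    #include <unistd.h>

    typedef int (*WriteFn)(void *arg, const void *buf, size_t len);

    typedef struct {
        WriteFn write;  /* called for each chunk instead of write(fd, ...) */
        void *arg;      /* backend state: an fd, a gzFile, a byte counter  */
    } Writer;

    /* plain file-descriptor backend */
    static int fd_write(void *arg, const void *buf, size_t len)
    {
        int fd = *(int *)arg;
        return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
    }

    /* a size-limiting or compressing backend supplies its own WriteFn
       without the calling code having to change */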
|
|
Normally, calling code:delete/1 before re-loading the code for a
module is unnecessary but causes no problem.
But there will be problems if the new code has an on_load function.
Code with an on_load function will always be loaded as old code
to allow it to be easily purged if the on_load function should fail.
If the on_load function succeeds, the old and current code will be
swapped.
So in the scenario where code:delete/1 has been called explicitly,
there is old code but no current code. Loading code with an
on_load function will cause the reference to the old code to be
overwritten. That will at best cause a memory leak, and at worst
an emulator crash (especially if NIFs are involved).
To avoid that situation, we will put the code with the on_load
function in a special, third slot in Module.
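Roughly, the third slot amounts to something like the following
(a simplified sketch with assumed field names; the real erts
declaration has more members):

    /* forward declaration, just for the sketch */
    typedef struct BeamCodeHeader BeamCodeHeader;

    struct erl_module_instance {
        BeamCodeHeader *code_hdr;   /* loaded code for this instance */
        int code_length;
        /* catches, NIF library handle, ... */
    };

    typedef struct erl_module {
        int module;                        /* atom index of the module name */
        struct erl_module_instance curr;   /* current code                  */
        struct erl_module_instance old;    /* old code, waiting for purge   */
        struct erl_module_instance *on_load; /* code whose on_load is still
                                                running: promoted to curr on
                                                success, freed on failure,
                                                never clobbering old         */
    } Module;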
ERL-240
|
|
|
|
|
|
The BIFs prepare_loading/2 and finish_loading/1 have been
designed to allow fast parallel loading of many modules.
Because of the complications with on_load functions,
the initial implementation of finish_loading/1 only allowed
a single element in the list of prepared modules.
finish_loading/1 does not suspend other processes, but it must wait
for all schedulers to pass a write barrier ("thread progress"). The
time for all schedulers to pass the write barrier is highly variable,
depending on what kind of code they are executing. Therefore, allowing
finish_loading/1 to finish the loading for more than one module before
passing the write barrier could potentially be much faster than
calling finish_loading/1 multiple times.
The test case many/1, run on my computer, shows that with "heavy load",
finish loading of 100 modules in parallel is almost 50 times faster
than loading them sequentially. With "light load", the gain is still
almost 10 times.
Here follows an actual sample of the output from the test case on
my computer (a 2012 iMac):
Light load
==========
Sequential: 22361 µs
Parallel: 2586 µs
Ratio: 9
Heavy load
==========
Sequential: 254512 µs
Parallel: 5246 µs
Ratio: 49
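Schematically, the gain comes from paying the thread-progress wait
once per finish_loading/1 call instead of once per module; in rough C
pseudocode (the function names are illustrative stubs, not the actual
erts internals):

    #include <stddef.h>

    typedef struct { void *code; } Prepared;   /* one prepared module */

    /* illustrative stubs; the real staging/commit steps live in erts */
    static void stage_module(Prepared *p)  { (void)p; }
    static void commit_module(Prepared *p) { (void)p; }
    static void wait_thread_progress(void) { /* schedulers pass barrier */ }

    static void finish_loading_batched(Prepared *mods, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            stage_module(&mods[i]);  /* stage new code for each module     */
        wait_thread_progress();      /* the expensive, variable wait: once */
        for (i = 0; i < n; i++)
            commit_module(&mods[i]); /* activate the staged code           */
    }

With one module per call the wait is instead paid 100 times, which is
consistent with the roughly 49x ratio measured under heavy load above.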
|
|
to use a real C struct instead of an array.
|
|
|
|
|
|
Move implementation from beam_load into new files code_ix.c and module.c
and make some functions inline.
|
|
|
|
This is a refactoring in preparation for adding a counter to the Module
struct for export entry tracing. It is nicer if the two are kept together.
|
|
Staging is a better and more general name, as it does not necessarily
need to involve code loading (it can be deletion, tracing, etc).
|
|
|
|
|
|
For cleanliness, use BeamInstr instead of the UWord
data type for any machine-sized words that are used
for BEAM instructions. Only use UWord for untyped
words in general.
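In sketch form (simplified; the real erts typedefs carry platform
conditionals), the convention is:

    #include <stdint.h>

    typedef uintptr_t UWord;   /* untyped machine-sized word               */
    typedef UWord BeamInstr;   /* machine word that is part of BEAM code:
                                  an instruction or an instruction operand */

    /* loaded BEAM code is then declared as BeamInstr*, not UWord*,
       which makes its role explicit at the type level */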
|
|
Store Erlang terms in 32-bit entities on the heap, expanding the
pointers to 64-bit when needed. This works because all terms are stored
on addresses in the 32-bit address range (the 32 most significant bits
of pointers to term data are always 0).
Introduce a new datatype called UWord (along with its companion SWord),
which is an integer having the exact same size as the machine word
(a void *), but might be larger than Eterm/Uint.
Store code as machine words, as the instructions are pointers to
executable code which might reside outside the 32-bit address range.
Continuation pointers are stored on the 32-bit stack and hence must
point to addresses in the low range, which means that loaded beam code
must be placed in the low 32-bit address range (but, as said earlier,
the instructions themselves are full words).
No Erlang term data can be stored on C stacks (enforced by an
earlier commit).
This version gives a prompt, but test cases still fail (and dump core).
The loader (and emulator loop) has instruction packing disabled.
The main issues have been in rewriting the loader and the actual
virtual machine. Subsystems (like distribution) do not work yet.
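A compressed sketch of the representation described above
(illustrative only; the real halfword emulator's tags and macros are
more involved): terms stay 32 bits, machine words are 64 bits, and
widening a term into a C pointer is safe because all term data lives
below the 4 GB boundary.

    #include <stdint.h>

    typedef uint32_t  Eterm;     /* tagged term: 32 bits on 64-bit hosts  */
    typedef uintptr_t UWord;     /* unsigned word, same size as void *    */
    typedef intptr_t  SWord;     /* signed companion of UWord             */
    typedef UWord     BeamInstr; /* code words remain full machine words  */

    #define TAG_MASK 0x3u        /* hypothetical low tag bits, for the sketch */

    /* expand a 32-bit boxed term into a real heap pointer; the upper
       32 bits are known to be zero because all term data is allocated
       in the low 32-bit address range */
    static inline Eterm *term_to_ptr(Eterm t)
    {
        return (Eterm *)(UWord)(t & ~TAG_MASK);
    }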
|
|
|
|
|