path: root/erts/preloaded
author    Björn Gustavsson <[email protected]>  2015-11-30 15:35:47 +0100
committer Björn Gustavsson <[email protected]>  2016-02-25 14:50:48 +0100
commit    8f4c278b69fe4d613a0b865a2edac43231cad913 (patch)
tree      3e55347065faba3a2ae5950935da336fc8ffe7ab /erts/preloaded
parent    e1be12434b06fb2594af5cdafc5efc5b9182d8b6 (diff)
Allow erlang:finish_loading/1 to load more than one module
The BIFs prepare_loading/2 and finish_loading/1 have been designed to allow fast loading of many modules in parallel. Because of the complications with on_load functions, the initial implementation of finish_loading/1 only allowed a single element in the list of prepared modules.

finish_loading/1 does not suspend other processes, but it must wait for all schedulers to pass a write barrier ("thread progress"). The time for all schedulers to pass the write barrier is highly variable, depending on what kind of code they are executing. Therefore, allowing finish_loading/1 to finish the loading of more than one module before passing the write barrier could potentially be much faster than calling finish_loading/1 multiple times.

The test case many/1, run on my computer, shows that with "heavy load", finishing the loading of 100 modules in parallel is almost 50 times faster than loading them sequentially. With "light load", the gain is still almost 10 times.

Here follows an actual sample of the output from the test case on my computer (a 2012 iMac):

Light load
==========
Sequential: 22361 µs
  Parallel: 2586 µs
     Ratio: 9

Heavy load
==========
Sequential: 254512 µs
  Parallel: 5246 µs
     Ratio: 49
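As a rough illustration of the intended usage pattern (a minimal sketch, not the many/1 test case itself; the function name load_batch and the shape of Beams are assumptions, and error handling for prepare_loading/2 failures is omitted):

    %% Sketch: prepare a batch of modules first, then commit them all with
    %% a single finish_loading/1 call, so that only one thread-progress
    %% wait is paid instead of one per module.
    %% Beams is assumed to be a list of {Module, BeamBinary} pairs.
    load_batch(Beams) ->
        Prepared = [erlang:prepare_loading(Mod, Bin) || {Mod, Bin} <- Beams],
        ok = erlang:finish_loading(Prepared).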
Diffstat (limited to 'erts/preloaded')
0 files changed, 0 insertions, 0 deletions