path: root/system/doc/efficiency_guide/processes.xml
author     Björn Gustavsson <[email protected]>    2015-03-12 15:35:13 +0100
committer  Björn Gustavsson <[email protected]>    2015-03-12 15:38:25 +0100
commit     6513fc5eb55b306e2b1088123498e6c50b9e7273 (patch)
tree       986a133cb88ddeaeb0292f99af67e4d1015d1f62 /system/doc/efficiency_guide/processes.xml
parent     42a0387e886ddbf60b0e2cb977758e2ca74954ae (diff)
Update Efficiency Guide
Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Björn Gustavsson.
Diffstat (limited to 'system/doc/efficiency_guide/processes.xml')
-rw-r--r--  system/doc/efficiency_guide/processes.xml  159
1 files changed, 82 insertions, 77 deletions
diff --git a/system/doc/efficiency_guide/processes.xml b/system/doc/efficiency_guide/processes.xml
index 86951e2dcc..3bdc314235 100644
--- a/system/doc/efficiency_guide/processes.xml
+++ b/system/doc/efficiency_guide/processes.xml
@@ -30,15 +30,15 @@
</header>
<section>
- <title>Creation of an Erlang process</title>
+ <title>Creating an Erlang Process</title>
- <p>An Erlang process is lightweight compared to operating
- systems threads and processes.</p>
+ <p>An Erlang process is lightweight compared to threads and
+ processes in operating systems.</p>
<p>A newly spawned Erlang process uses 309 words of memory
in the non-SMP emulator without HiPE support. (SMP support
- and HiPE support will both add to this size.) The size can
- be found out like this:</p>
+ and HiPE support both add to this size.) The size can
+ be found as follows:</p>
<pre>
Erlang (BEAM) emulator version 5.6 [async-threads:0] [kernel-poll:false]
@@ -51,11 +51,11 @@ Eshell V5.6 (abort with ^G)
3> <input>Bytes div erlang:system_info(wordsize).</input>
309</pre>
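<p>As a rough, hedged sketch (the shell steps that produce <c>Bytes</c>
   fall outside the hunk shown above), the measurement can be reproduced
   like this; the idle fun is only an example:</p>
<code type="erl">
%% Spawn a process that just waits, ask the emulator for its memory
%% use in bytes, and convert the result to machine words.
Fun = fun() -> receive after infinity -> ok end end,
{memory, Bytes} = erlang:process_info(spawn(Fun), memory),
Bytes div erlang:system_info(wordsize).</code>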
- <p>The size includes 233 words for the heap area (which includes the stack).
- The garbage collector will increase the heap as needed.</p>
+ <p>The size includes 233 words for the heap area (which includes the
+ stack). The garbage collector increases the heap as needed.</p>
<p>The main (outer) loop for a process <em>must</em> be tail-recursive.
- If not, the stack will grow until the process terminates.</p>
+ Otherwise, the stack grows until the process terminates.</p>
<p><em>DO NOT</em></p>
<code type="erl">
@@ -74,7 +74,7 @@ loop() ->
<p>The call to <c>io:format/2</c> will never be executed, but a
return address will still be pushed to the stack each time
<c>loop/0</c> is called recursively. The correct tail-recursive
- version of the function looks like this:</p>
+ version of the function looks as follows:</p>
<p><em>DO</em></p>
<code type="erl">
@@ -90,92 +90,98 @@ loop() ->
end.</code>
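<p>The bodies of both examples fall outside the hunks shown above; a
   typical tail-recursive receive loop in the spirit of the <em>DO</em>
   version looks roughly like the following sketch (the helper functions
   are hypothetical):</p>
<code type="erl">
loop() ->
    receive
        {sys, Msg} ->
            handle_sys_msg(Msg),      %% hypothetical helper
            loop();
        {From, Msg} ->
            Reply = handle_msg(Msg),  %% hypothetical helper
            From ! Reply,
            loop()   %% the recursive call is in tail position, so no
    end.             %% return address is pushed onto the stack</code>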
<section>
- <title>Initial heap size</title>
+ <title>Initial Heap Size</title>
<p>The default initial heap size of 233 words is quite conservative
- in order to support Erlang systems with hundreds of thousands or
- even millions of processes. The garbage collector will grow and
- shrink the heap as needed.</p>
+ to support Erlang systems with hundreds of thousands or
+ even millions of processes. The garbage collector grows and
+ shrinks the heap as needed.</p>
     <p>In a system that uses comparatively few processes, performance
- <em>might</em> be improved by increasing the minimum heap size using either
- the <c>+h</c> option for
+ <em>might</em> be improved by increasing the minimum heap size
+ using either the <c>+h</c> option for
<seealso marker="erts:erl">erl</seealso> or on a process-per-process
basis using the <c>min_heap_size</c> option for
<seealso marker="erts:erlang#spawn_opt/4">spawn_opt/4</seealso>.</p>
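<p>As an illustration only (not part of the guide's text), the
   emulator-wide default can be raised at start-up with the <c>+h</c>
   option; the value 1000 words below is an arbitrary example, not a
   recommendation:</p>
<pre>
$ <input>erl +h 1000</input></pre>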
- <p>The gain is twofold: Firstly, although the garbage collector will
- grow the heap, it will grow it step by step, which will be more
- costly than directly establishing a larger heap when the process
- is spawned. Secondly, the garbage collector may also shrink the
- heap if it is much larger than the amount of data stored on it;
- setting the minimum heap size will prevent that.</p>
-
- <warning><p>The emulator will probably use more memory, and because garbage
- collections occur less frequently, huge binaries could be
+ <p>The gain is twofold:</p>
+ <list type="bulleted">
+ <item>Although the garbage collector grows the heap, it grows it
+ step-by-step, which is more costly than directly establishing a
+ larger heap when the process is spawned.</item>
+ <item>The garbage collector can also shrink the heap if it is
+ much larger than the amount of data stored on it;
+ setting the minimum heap size prevents that.</item>
+ </list>
+
+ <warning><p>The emulator probably uses more memory, and because garbage
+ collections occur less frequently, huge binaries can be
kept much longer.</p></warning>
<p>In systems with many processes, computation tasks that run
- for a short time could be spawned off into a new process with
- a higher minimum heap size. When the process is done, it will
- send the result of the computation to another process and terminate.
- If the minimum heap size is calculated properly, the process may not
- have to do any garbage collections at all.
- <em>This optimization should not be attempted
+ for a short time can be spawned off into a new process with
+ a higher minimum heap size. When the process is done, it sends
+ the result of the computation to another process and terminates.
+ If the minimum heap size is calculated properly, the process might
+ not have to do any garbage collections at all.
+ <em>This optimization is not to be attempted
without proper measurements.</em></p>
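<p>A minimal sketch of the spawn-off pattern described above; the
   worker function and the heap size value are hypothetical and would
   have to be chosen based on measurements:</p>
<code type="erl">
%% Spawn a short-lived worker with a larger minimum heap size so that
%% it can (ideally) finish without any garbage collection, send its
%% result back, and terminate.
Parent = self(),
spawn_opt(fun() ->
                  Parent ! {result, heavy_computation()}  %% hypothetical work
          end,
          [{min_heap_size, 20000}]).</code>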
</section>
-
</section>
<section>
- <title>Process messages</title>
+ <title>Process Messages</title>
- <p>All data in messages between Erlang processes is copied, with
- the exception of
+ <p>All data in messages between Erlang processes is copied,
+ except for
<seealso marker="binaryhandling#refc_binary">refc binaries</seealso>
on the same Erlang node.</p>
<p>When a message is sent to a process on another Erlang node,
- it will first be encoded to the Erlang External Format before
- being sent via an TCP/IP socket. The receiving Erlang node decodes
- the message and distributes it to the right process.</p>
+ it is first encoded to the Erlang External Format before
+ being sent through a TCP/IP socket. The receiving Erlang node decodes
+ the message and distributes it to the correct process.</p>
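<p>As a hedged aside (not in the original text), the external format
   mentioned above is essentially the format produced by
   <c>term_to_binary/1</c>, so the encoding cost of a given term can be
   estimated in the shell:</p>
<code type="erl">
%% Encode a term to the Erlang External Format and check its size.
Bin = term_to_binary({hello, [1,2,3]}),
byte_size(Bin).</code>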
<section>
- <title>The constant pool</title>
+ <title>Constant Pool</title>
<p>Constant Erlang terms (also called <em>literals</em>) are now
kept in constant pools; each loaded module has its own pool.
- The following function</p>
+      The following function no longer builds the tuple every time
+      it is called (only to have it discarded the next time the garbage
+      collector is run); instead, the tuple is located in the module's
+      constant pool:</p>
<p><em>DO</em> (in R12B and later)</p>
<code type="erl">
days_in_month(M) ->
- element(M, {31,28,31,30,31,30,31,31,30,31,30,31}).</code>
-
- <p>will no longer build the tuple every time it is called (only
- to have it discarded the next time the garbage collector was run), but
- the tuple will be located in the module's constant pool.</p>
+ element(M, {31,28,31,30,31,30,31,31,30,31,30,31}).</code>
<p>But if a constant is sent to another process (or stored in
- an ETS table), it will be <em>copied</em>.
- The reason is that the run-time system must be able
- to keep track of all references to constants in order to properly
- unload code containing constants. (When the code is unloaded,
- the constants will be copied to the heap of the processes that refer
+ an Ets table), it is <em>copied</em>.
+ The reason is that the runtime system must be able
+ to keep track of all references to constants to unload code
+ containing constants properly. (When the code is unloaded,
+ the constants are copied to the heap of the processes that refer
to them.) The copying of constants might be eliminated in a future
- release.</p>
+ Erlang/OTP release.</p>
</section>
<section>
- <title>Loss of sharing</title>
+ <title>Loss of Sharing</title>
- <p>Shared sub-terms are <em>not</em> preserved when a term is sent
- to another process, passed as the initial process arguments in
- the <c>spawn</c> call, or stored in an ETS table.
- That is an optimization. Most applications do not send messages
- with shared sub-terms.</p>
+ <p>Shared subterms are <em>not</em> preserved in the following
+ cases:</p>
+ <list type="bulleted">
+ <item>When a term is sent to another process</item>
+ <item>When a term is passed as the initial process arguments in
+ the <c>spawn</c> call</item>
+ <item>When a term is stored in an Ets table</item>
+ </list>
+ <p>That is an optimization. Most applications do not send messages
+ with shared subterms.</p>
- <p>Here is an example of how a shared sub-term can be created:</p>
+ <p>The following example shows how a shared subterm can be created:</p>
<code type="erl">
kilo_byte() ->
@@ -186,32 +192,32 @@ kilo_byte(0, Acc) ->
kilo_byte(N, Acc) ->
kilo_byte(N-1, [Acc|Acc]).</code>
- <p><c>kilo_byte/0</c> creates a deep list. If we call
- <c>list_to_binary/1</c>, we can convert the deep list to a binary
- of 1024 bytes:</p>
+    <p><c>kilo_byte/0</c> creates a deep list.
+ If <c>list_to_binary/1</c> is called, the deep list can be
+ converted to a binary of 1024 bytes:</p>
<pre>
1> <input>byte_size(list_to_binary(efficiency_guide:kilo_byte())).</input>
1024</pre>
- <p>Using the <c>erts_debug:size/1</c> BIF we can see that the
+ <p>Using the <c>erts_debug:size/1</c> BIF, it can be seen that the
deep list only requires 22 words of heap space:</p>
<pre>
2> <input>erts_debug:size(efficiency_guide:kilo_byte()).</input>
22</pre>
- <p>Using the <c>erts_debug:flat_size/1</c> BIF, we can calculate
- the size of the deep list if sharing is ignored. It will be
+ <p>Using the <c>erts_debug:flat_size/1</c> BIF, the size of the
+ deep list can be calculated if sharing is ignored. It becomes
the size of the list when it has been sent to another process
- or stored in an ETS table:</p>
+ or stored in an Ets table:</p>
<pre>
3> <input>erts_debug:flat_size(efficiency_guide:kilo_byte()).</input>
4094</pre>
- <p>We can verify that sharing will be lost if we insert the
- data into an ETS table:</p>
+ <p>It can be verified that sharing will be lost if the data is
+ inserted into an Ets table:</p>
<pre>
4> <input>T = ets:new(tab, []).</input>
@@ -223,21 +229,21 @@ true
7> <input>erts_debug:flat_size(element(2, hd(ets:lookup(T, key)))).</input>
4094</pre>
- <p>When the data has passed through an ETS table,
+ <p>When the data has passed through an Ets table,
<c>erts_debug:size/1</c> and <c>erts_debug:flat_size/1</c>
return the same value. Sharing has been lost.</p>
- <p>In a future release of Erlang/OTP, we might implement a
- way to (optionally) preserve sharing. We have no plans to make
- preserving of sharing the default behaviour, since that would
+    <p>In a future Erlang/OTP release, a way to (optionally) preserve
+      sharing might be implemented. There are no plans to make
+      preserving sharing the default behaviour, as that would
penalize the vast majority of Erlang applications.</p>
</section>
</section>
<section>
- <title>The SMP emulator</title>
+ <title>SMP Emulator</title>
- <p>The SMP emulator (introduced in R11B) will take advantage of a
+ <p>The SMP emulator (introduced in R11B) takes advantage of a
multi-core or multi-CPU computer by running several Erlang scheduler
threads (typically, the same as the number of cores). Each scheduler
thread schedules Erlang processes in the same way as the Erlang scheduler
@@ -247,11 +253,11 @@ true
<em>must have more than one runnable Erlang process</em> most of the time.
Otherwise, the Erlang emulator can still only run one Erlang process
 at a time, but you must still pay the overhead for locking. Although
- we try to reduce the locking overhead as much as possible, it will never
- become exactly zero.</p>
+ Erlang/OTP tries to reduce the locking overhead as much as possible,
+ it will never become exactly zero.</p>
- <p>Benchmarks that may seem to be concurrent are often sequential.
- The estone benchmark, for instance, is entirely sequential. So is also
+ <p>Benchmarks that appear to be concurrent are often sequential.
+ The estone benchmark, for example, is entirely sequential. So is
the most common implementation of the "ring benchmark"; usually one process
is active, while the others wait in a <c>receive</c> statement.</p>
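<p>As a hedged example (not part of the guide's text), the number of
   scheduler threads in a running system can be inspected from the
   shell; the result depends on the machine and on emulator flags such
   as <c>+S</c>:</p>
<code type="erl">
%% Number of scheduler threads currently online (typically one per core).
erlang:system_info(schedulers_online).</code>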
@@ -259,6 +265,5 @@ true
can be used to profile your application to see how much potential (or lack
thereof) it has for concurrency.</p>
</section>
-
</chapter>