From e2ad0e63077cc08c14edebae54925c50828cde3a Mon Sep 17 00:00:00 2001
From: Tuncer Ayaz

A good start when programming efficiently is to have knowledge about
how much memory different data types and operations require. It is
implementation-dependent how much memory the Erlang data types and
- other items consume, but here are some figures for
+ other items consume, but here are some figures for the
erts-5.2 system (OTP release R9B). (There have been no significant
changes in R13.)
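A rough way to cross-check such figures for an individual term is the undocumented but long-standing erts_debug module; a minimal sketch, assuming erts_debug:flat_size/1 and erts_debug:size/1 are available in the release at hand:

    %% Both return the size in machine words (4 bytes on a 32-bit emulator,
    %% 8 bytes on a 64-bit one); flat_size/1 ignores sharing between
    %% sub-terms, while size/1 takes sharing into account.
    erts_debug:flat_size({a, [1, 2, 3], <<"abc">>}).
    erts_debug:size({a, [1, 2, 3], <<"abc">>}).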
- In R11B, a match context was only using during a binary matching
+ In R11B, a match context was only used during a binary matching
operation.
In R12B, the compiler tries to avoid generating code that
@@ -205,7 +205,7 @@ Bin4 = <
Therefore, certain operations on a binary will mark it so that
@@ -291,7 +291,7 @@ my_binary_to_list(<<>>) -> [].]]>
that initializes the matching operation will basically do nothing
when it sees that it was passed a match context instead of a binary.
- When the end of the binary is reached and second clause matches,
+ When the end of the binary is reached and the second clause matches,
the match context will simply be discarded (removed in the next
garbage collection, since there is no longer any reference to it).
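For context, the my_binary_to_list function referred to in the hunk above would look roughly as follows (a reconstructed sketch based on the fragment visible in the hunk header; the surrounding markup is omitted):

    %% The first clause matches one byte and the rest of the binary. From
    %% R12B on, the compiler can keep a single match context alive across
    %% the recursive calls instead of creating a sub-binary for T each time.
    my_binary_to_list(<<H,T/binary>>) ->
        [H|my_binary_to_list(T)];
    %% When the end of the binary is reached, this clause matches and the
    %% match context is simply dropped.
    my_binary_to_list(<<>>) -> [].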
diff --git a/system/doc/efficiency_guide/drivers.xml b/system/doc/efficiency_guide/drivers.xml
index 9fe54fb19a..1967fd7ada 100644
--- a/system/doc/efficiency_guide/drivers.xml
+++ b/system/doc/efficiency_guide/drivers.xml
@@ -40,7 +40,7 @@
any code in a driver. By default, that lock will be at the
driver level, meaning that
- if several ports has been opened to the same driver, only code for
+ if several ports have been opened to the same driver, only code for
one port at the same time can be running.
A driver can be configured to instead have one lock for each port.
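To make that concrete, here is a hypothetical sketch of opening two ports to the same driver from Erlang (the driver name my_drv and its location are made up for illustration; per-port locking itself is something the driver requests, via the ERL_DRV_FLAG_USE_PORT_LOCKING flag in its driver_entry):

    %% With the default driver-level lock, driver callbacks made on behalf
    %% of P1 and P2 are serialized; with per-port locking they may run
    %% concurrently on different schedulers.
    start() ->
        ok = erl_ddll:load_driver(".", my_drv),   %% assumes my_drv.so/.dll in cwd
        P1 = open_port({spawn_driver, "my_drv"}, [binary]),
        P2 = open_port({spawn_driver, "my_drv"}, [binary]),
        {P1, P2}.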
diff --git a/system/doc/efficiency_guide/functions.xml b/system/doc/efficiency_guide/functions.xml
index fe14a4f000..6be49dd7c9 100644
--- a/system/doc/efficiency_guide/functions.xml
+++ b/system/doc/efficiency_guide/functions.xml
@@ -127,7 +127,7 @@
map_pairs2(_Map, [_|_]=Xs, [] ) ->
    Xs;
map_pairs2(Map, [X|Xs], [Y|Ys]) ->
    [Map(X, Y)|map_pairs2(Map, Xs, Ys)].]]>
- the compiler is free rearrange the clauses. It will generate code
+ the compiler is free to rearrange the clauses. It will generate code
similar to this
DO NOT (already done by the compiler)
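For illustration, hand-written clause dispatch of the kind meant here would look roughly like this (a sketch, not necessarily the guide's exact listing):

    %% Explicitly nesting case expressions buys nothing: the compiler
    %% already generates this kind of pattern-matching code from the
    %% plain map_pairs2/3 clauses above.
    explicit_map_pairs(Map, Xs0, Ys0) ->
        case Xs0 of
            [X|Xs] ->
                case Ys0 of
                    [Y|Ys] ->
                        [Map(X, Y)|explicit_map_pairs(Map, Xs, Ys)];
                    [] ->
                        Xs0
                end;
            [] ->
                Ys0
        end.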
diff --git a/system/doc/efficiency_guide/processes.xml b/system/doc/efficiency_guide/processes.xml
index a25ec53370..b75be7d531 100644
--- a/system/doc/efficiency_guide/processes.xml
+++ b/system/doc/efficiency_guide/processes.xml
@@ -105,7 +105,7 @@ loop() ->
The gain is twofold: Firstly, although the garbage collector will
- grow the heap, it will it grow it step by step, which will be more
+ grow the heap, it will grow it step by step, which will be more
costly than directly establishing a larger heap when the process
is spawned. Secondly, the garbage collector may also shrink the heap
if it is much larger than the amount of data stored on it;
@@ -172,7 +172,7 @@ days_in_month(M) ->
Shared sub-terms are not preserved when a term is sent
to another process, passed as the initial process arguments in
the
Here is an example of how a shared sub-term can be created:
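For illustration, one way to create such a term, in the spirit of the guide's kilo_byte example (a reconstructed sketch; the original listing may differ in detail):

    %% Each step builds [Acc|Acc], so the head and the tail of the new cons
    %% cell refer to the same sub-term. On the local heap the term stays
    %% small thanks to that sharing, but when it is sent to another process
    %% the sharing is lost and a fully expanded copy is built.
    kilo_byte() ->
        kilo_byte(10, [42]).

    kilo_byte(0, Acc) ->
        Acc;
    kilo_byte(N, Acc) ->
        kilo_byte(N-1, [Acc|Acc]).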
@@ -237,8 +237,8 @@
true
- The SMP emulator (introduced in R11B) will take advantage of
- multi-core or multi-CPU computer by running several Erlang schedulers
+ The SMP emulator (introduced in R11B) will take advantage of a
+ multi-core or multi-CPU computer by running several Erlang scheduler
threads (typically, the same as the number of cores). Each scheduler
thread schedules Erlang processes in the same way as the Erlang
scheduler in the non-SMP emulator.
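For reference, the number of scheduler threads can be inspected at runtime with erlang:system_info/1; a small sketch (the schedulers_online item may not be available in the oldest releases covered here):

    %% In the SMP emulator on a typical four-core machine both calls return
    %% 4; the non-SMP emulator reports a single scheduler.
    erlang:system_info(schedulers).
    erlang:system_info(schedulers_online).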
diff --git a/system/doc/efficiency_guide/profiling.xml b/system/doc/efficiency_guide/profiling.xml
index 65d13408bc..8be1c7175d 100644
--- a/system/doc/efficiency_guide/profiling.xml
+++ b/system/doc/efficiency_guide/profiling.xml
@@ -74,7 +74,7 @@
When analyzing the result file from the profiling activity
you should look for functions that are called many
- times and have a long "own" execution time (time excluded calls
+ times and have a long "own" execution time (time excluding calls
to other functions). Functions that just are called very
many times can also be interesting, as even small things can add
up to quite a bit if they are repeated often. Then you need to
@@ -87,7 +87,7 @@
It is probably a good idea to do both wall-clock measurements and CPU time measurements.
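One minimal way to take both measurements around a single call is sketched below, assuming a release where timer:tc/1 exists (older releases only have timer:tc/3); the measure/1 wrapper name is made up here:

    %% Wall-clock time comes from timer:tc/1 (microseconds); CPU time from
    %% statistics(runtime), whose second element is the runtime in
    %% milliseconds since the previous call.
    measure(Fun) when is_function(Fun, 0) ->
        statistics(runtime),                 %% reset the "since last call" counter
        {WallUs, Result} = timer:tc(Fun),
        {_TotalMs, CpuMs} = statistics(runtime),
        {Result, WallUs, CpuMs}.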
@@ -239,18 +239,18 @@
Some additional advice:
A simple solution would be to use the