From 6513fc5eb55b306e2b1088123498e6c50b9e7273 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= Date: Thu, 12 Mar 2015 15:35:13 +0100 Subject: Update Efficiency Guide MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Björn Gustavsson. --- system/doc/efficiency_guide/binaryhandling.xml | 359 +++++++++++++------------ 1 file changed, 188 insertions(+), 171 deletions(-) (limited to 'system/doc/efficiency_guide/binaryhandling.xml') diff --git a/system/doc/efficiency_guide/binaryhandling.xml b/system/doc/efficiency_guide/binaryhandling.xml index 4ba1378059..0ac1a7ee32 100644 --- a/system/doc/efficiency_guide/binaryhandling.xml +++ b/system/doc/efficiency_guide/binaryhandling.xml @@ -23,7 +23,7 @@ The Initial Developer of the Original Code is Ericsson AB. - Constructing and matching binaries + Constructing and Matching Binaries Bjorn Gustavsson 2007-10-12 @@ -31,10 +31,10 @@ binaryhandling.xml -

In R12B, the most natural way to write binary construction and matching is now +

In R12B, the most natural way to construct and match binaries is significantly faster than in earlier releases.

-

To construct at binary, you can simply write

+

To construct a binary, you can simply write as follows:

DO (in R12B) / REALLY DO NOT (in earlier releases)

my_list_to_binary([], Acc) -> Acc.]]> -

In releases before R12B, Acc would be copied in every iteration. - In R12B, Acc will be copied only in the first iteration and extra - space will be allocated at the end of the copied binary. In the next iteration, - H will be written in to the extra space. When the extra space runs out, - the binary will be reallocated with more extra space.

- -

The extra space allocated (or reallocated) will be twice the size of the +

In releases before R12B, Acc is copied in every iteration. + In R12B, Acc is copied only in the first iteration and extra + space is allocated at the end of the copied binary. In the next iteration, + H is written into the extra space. When the extra space runs out, + the binary is reallocated with more extra space. The extra space allocated + (or reallocated) is twice the size of the existing binary data, or 256, whichever is larger.
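The following sketch is not part of the original guide; it only illustrates the kind of loop that benefits from this behavior, and the function name and byte value are arbitrary:

<code type="erl"><![CDATA[
%% Illustrative only: appends one byte per iteration.
%% In R12B, Acc is copied in the first iteration only; later appends
%% write into the extra space allocated at the end of the binary.
build_zeroes(0, Acc) ->
    Acc;
build_zeroes(N, Acc) when N > 0 ->
    build_zeroes(N - 1, <<Acc/binary,0>>).]]></code>

Calling build_zeroes(100000, <<>>) therefore runs in time roughly proportional to the number of bytes produced, instead of copying the accumulator in every iteration.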

The most natural way to match binaries is now the fastest:

@@ -64,55 +63,79 @@
my_binary_to_list(<<H,T/binary>>) ->
    [H|my_binary_to_list(T)];
my_binary_to_list(<<>>) -> [].]]>
- How binaries are implemented + How Binaries are Implemented

Internally, binaries and bitstrings are implemented in the same way. - In this section, we will call them binaries since that is what + In this section, they are called binaries because that is what they are called in the emulator source code.

-

There are four types of binary objects internally. Two of them are - containers for binary data and two of them are merely references to - a part of a binary.

- -

The binary containers are called refc binaries - (short for reference-counted binaries) and heap binaries.

+

Four types of binary objects are available internally:

+ +

Two are containers for binary data and are called:

+ + Refc binaries (short for + reference-counted binaries) + Heap binaries +
+

Two are merely references to a part of a binary and + are called:

+ + sub binaries + match contexts +
+
-

Refc binaries - consist of two parts: an object stored on - the process heap, called a ProcBin, and the binary object itself - stored outside all process heaps.

+
+ + Refc Binaries +

Refc binaries consist of two parts:

+ + An object stored on the process heap, called a + ProcBin + The binary object itself, stored outside all process + heaps +

The binary object can be referenced by any number of ProcBins from any - number of processes; the object contains a reference counter to keep track + number of processes. The object contains a reference counter to keep track of the number of references, so that it can be removed when the last reference disappears.

All ProcBin objects in a process are part of a linked list, so that the garbage collector can keep track of them and decrement the reference counters in the binary when a ProcBin disappears.
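As an illustration (not taken from the original guide; the size and function names are arbitrary), the following sketch sends a large refc binary to another process:

<code type="erl"><![CDATA[
%% Illustrative only. Only the small ProcBin is copied to the heap of
%% the receiving process; the one megabyte of binary data stays outside
%% all process heaps and its reference counter is incremented.
share_big_binary() ->
    Big = binary:copy(<<0>>, 1024*1024),
    Receiver = spawn(fun() ->
                             receive
                                 Bin -> io:format("received ~p bytes~n",
                                                  [byte_size(Bin)])
                             end
                     end),
    Receiver ! Big,
    ok.]]></code>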

+
-

Heap binaries are small binaries, - up to 64 bytes, that are stored directly on the process heap. - They will be copied when the process - is garbage collected and when they are sent as a message. They don't +

+ + Heap Binaries +

Heap binaries are small binaries, up to 64 bytes, and are stored + directly on the process heap. They are copied when the process is + garbage-collected and when they are sent as a message. They do not require any special handling by the garbage collector.
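As a small illustration (not from the original guide), the limit described above means that the following two binaries are stored differently, although nothing in the code itself shows it:

<code type="erl"><![CDATA[
%% Illustrative only; the classification is an internal detail that
%% cannot be observed directly from Erlang code.
small_bin() -> list_to_binary(lists:duplicate(64, $a)).  %% 64 bytes: heap binary
large_bin() -> list_to_binary(lists:duplicate(65, $a)).  %% 65 bytes: refc binary]]></code>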

+
-

There are two types of reference objects that can reference part of - a refc binary or heap binary. They are called sub binaries and - match contexts.

+
+ Sub Binaries +

The reference objects sub binaries and + match contexts can reference part of + a refc binary or heap binary.

A sub binary is created by split_binary/2 and when a binary is matched out in a binary pattern. A sub binary is a reference - into a part of another binary (refc or heap binary, never into a another + into a part of another binary (refc or heap binary, but never into another sub binary). Therefore, matching out a binary is relatively cheap because the actual binary data is never copied.
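The following sketch is not part of the original guide; it uses binary:referenced_byte_size/1 from the binary module to show that a matched-out sub binary still references the original data:

<code type="erl"><![CDATA[
%% Illustrative only.
sub_binary_demo() ->
    Big = binary:copy(<<1>>, 1000),
    {Head1,_Tail} = split_binary(Big, 10),       %% sub binary
    <<Head2:10/binary,_/binary>> = Big,          %% also a sub binary
    io:format("byte sizes: ~p and ~p~n",
              [byte_size(Head1), byte_size(Head2)]),
    %% Typically prints 1000: the 10-byte head keeps the whole
    %% 1000-byte binary alive.
    io:format("referenced: ~p~n",
              [binary:referenced_byte_size(Head2)]).]]></code>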

+
-

A match context is - similar to a sub binary, but is optimized - for binary matching; for instance, it contains a direct pointer to the binary - data. For each field that is matched out of a binary, the position in the - match context will be incremented.

+
+ Match Context + +

A match context is similar to a sub binary, but is + optimized for binary matching. For example, it contains a direct + pointer to the binary data. For each field that is matched out of + a binary, the position in the match context is incremented.

In R11B, a match context was only used during a binary matching operation.

@@ -122,27 +145,28 @@ my_binary_to_list(<<>>) -> [].]]> context and discard the sub binary. Instead of creating a sub binary, the match context is kept.

-

The compiler can only do this optimization if it can know for sure +

The compiler can only do this optimization if it knows that the match context will not be shared. If it were shared, the functional properties (also called referential transparency) of Erlang would break.

+
- Constructing binaries - -

In R12B, appending to a binary or bitstring

+ Constructing Binaries +

In R12B, appending to a binary or bitstring + is specially optimized by the runtime system:

> <>]]> -

is specially optimized by the run-time system. - Because the run-time system handles the optimization (instead of +

As the runtime system handles the optimization (instead of the compiler), there are very few circumstances in which the optimization - will not work.

+ does not work.

-

To explain how it works, we will go through this code

+

To explain how it works, let us examine the following code line + by line:

Bin0 = <<0>>,                      %% 1
@@ -152,81 +176,81 @@
Bin1 = <<Bin0/binary,1,2,3>>,      %% 2
Bin2 = <<Bin1/binary,4,5,6>>,      %% 3
Bin3 = <<Bin2/binary,7,8,9>>,      %% 4
Bin4 = <<Bin1/binary,17>>,         %% 5 !!!
{Bin4,Bin3}                        %% 6]]> -

line by line.

- -

The first line (marked with the %% 1 comment), assigns + + Line 1 (marked with the %% 1 comment), assigns a heap binary to - the variable Bin0.

+ the Bin0 variable. -

The second line is an append operation. Since Bin0 + Line 2 is an append operation. As Bin0 has not been involved in an append operation, a new refc binary - will be created and the contents of Bin0 will be copied - into it. The ProcBin part of the refc binary will have + is created and the contents of Bin0 is copied + into it. The ProcBin part of the refc binary has its size set to the size of the data stored in the binary, while - the binary object will have extra space allocated. - The size of the binary object will be either twice the + the binary object has extra space allocated. + The size of the binary object is either twice the size of Bin0 or 256, whichever is larger. In this case - it will be 256.

+ it is 256. -

It gets more interesting in the third line. + Line 3 is more interesting. Bin1 has been used in an append operation, - and it has 255 bytes of unused storage at the end, so the three new bytes - will be stored there.

+ and it has 255 bytes of unused storage at the end, so the 3 new + bytes are stored there. -

Same thing in the fourth line. There are 252 bytes left, - so there is no problem storing another three bytes.

+ Line 4. The same applies here. There are 252 bytes left, + so there is no problem storing another 3 bytes. -

But in the fifth line something interesting happens. - Note that we don't append to the previous result in Bin3, - but to Bin1. We expect that Bin4 will be assigned - the value <<0,1,2,3,17>>. We also expect that + Line 5. Here, something interesting happens. Notice + that the result is not appended to the previous result in Bin3, + but to Bin1. It is expected that Bin4 will be assigned + the value <<0,1,2,3,17>>. It is also expected that Bin3 will retain its value (<<0,1,2,3,4,5,6,7,8,9>>). - Clearly, the run-time system cannot write the byte 17 into the binary, + Clearly, the runtime system cannot write byte 17 into the binary, because that would change the value of Bin3 to - <<0,1,2,3,4,17,6,7,8,9>>.

- -

What will happen?

+ <<0,1,2,3,4,17,6,7,8,9>>. + -

The run-time system will see that Bin1 is the result +

The runtime system sees that Bin1 is the result from a previous append operation (not from the latest append operation), - so it will copy the contents of Bin1 to a new binary - and reserve extra storage and so on. (We will not explain here how the - run-time system can know that it is not allowed to write into Bin1; + so it copies the contents of Bin1 to a new binary, + reserves extra storage, and so on. (It is not explained here how the + runtime system knows that it is not allowed to write into Bin1; it is left as an exercise to the curious reader to figure out how it is done by reading the emulator sources, primarily erl_bits.c.)
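To check the values, the six numbered lines can be wrapped in a function as follows (this listing is not part of the original guide):

<code type="erl"><![CDATA[
%% Illustrative only: the values predicted above are checked by
%% pattern matching.
append_walkthrough() ->
    Bin0 = <<0>>,                      %% 1
    Bin1 = <<Bin0/binary,1,2,3>>,      %% 2
    Bin2 = <<Bin1/binary,4,5,6>>,      %% 3
    Bin3 = <<Bin2/binary,7,8,9>>,      %% 4
    Bin4 = <<Bin1/binary,17>>,         %% 5 !!!
    <<0,1,2,3,17>> = Bin4,             %% Bin4 gets the expected value...
    <<0,1,2,3,4,5,6,7,8,9>> = Bin3,    %% ...and Bin3 keeps its value
    {Bin4,Bin3}.                       %% 6]]></code>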

- Circumstances that force copying + Circumstances That Force Copying

The optimization of the binary append operation requires that there is a single ProcBin and a single reference to the ProcBin for the binary. The reason is that the binary object can be - moved (reallocated) during an append operation, and when that happens + moved (reallocated) during an append operation, and when that happens, the pointer in the ProcBin must be updated. If there would be more than one ProcBin pointing to the binary object, it would not be possible to find and update all of them.

-

Therefore, certain operations on a binary will mark it so that +

Therefore, certain operations on a binary mark it so that any future append operation will be forced to copy the binary. In most cases, the binary object will be shrunk at the same time to reclaim the extra space allocated for growing.

-

When appending to a binary

+

When appending to a binary as follows, only the binary returned + from the latest append operation will support further cheap append + operations:

>]]> -

only the binary returned from the latest append operation will - support further cheap append operations. In the code fragment above, +

In the code fragment in the beginning of this section, appending to Bin will be cheap, while appending to Bin0 will force the creation of a new binary and copying of the contents of Bin0.

If a binary is sent as a message to a process or port, the binary will be shrunk and any further append operation will copy the binary - data into a new binary. For instance, in the following code fragment

+ data into a new binary. For example, in the following code fragment + Bin1 will be copied in the third line:

>, @@ -234,12 +258,12 @@ PortOrPid ! Bin1, Bin = <> %% Bin1 will be COPIED ]]> -

Bin1 will be copied in the third line.

- -

The same thing happens if you insert a binary into an ets - table or send it to a port using erlang:port_command/2 or pass it to +

The same happens if you insert a binary into an Ets + table, send it to a port using erlang:port_command/2, or + pass it to enif_inspect_binary in a NIF.

+

Matching a binary will also cause it to shrink and the next append operation will copy the binary data:

@@ -249,22 +273,23 @@ Bin1 = <>, Bin = <> %% Bin1 will be COPIED ]]> -

The reason is that a match context +

The reason is that a + match context contains a direct pointer to the binary data.
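Sending and matching are illustrated together in the following sketch, which is not part of the original guide; the byte values and the message to self() are arbitrary:

<code type="erl"><![CDATA[
%% Illustrative only.
force_copy_demo() ->
    Bin0 = <<1,2,3>>,
    Bin1 = <<Bin0/binary,4,5,6>>,
    self() ! Bin1,                 %% sending shrinks Bin1...
    Bin2 = <<Bin1/binary,7>>,      %% ...so this append copies Bin1's data
    receive _ -> ok end,           %% drop the message again
    <<A,_/binary>> = Bin2,         %% matching marks Bin2 as well...
    Bin3 = <<Bin2/binary,A>>,      %% ...so this append copies once more
    Bin3.]]></code>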

-

If a process simply keeps binaries (either in "loop data" or in the process - dictionary), the garbage collector may eventually shrink the binaries. - If only one such binary is kept, it will not be shrunk. If the process later - appends to a binary that has been shrunk, the binary object will be reallocated - to make place for the data to be appended.

+

If a process simply keeps binaries (either in "loop data" or in the + process + dictionary), the garbage collector can eventually shrink the binaries. + If only one such binary is kept, it will not be shrunk. If the process + later appends to a binary that has been shrunk, the binary object will + be reallocated to make room for the data to be appended.

-
- Matching binaries + Matching Binaries -

We will revisit the example shown earlier

+

Let us revisit the example in the beginning of the previous section:

DO (in R12B)

>) -> [H|my_binary_to_list(T)]; my_binary_to_list(<<>>) -> [].]]> -

too see what is happening under the hood.

- -

The very first time my_binary_to_list/1 is called, +

The first time my_binary_to_list/1 is called, a match context - will be created. The match context will point to the first - byte of the binary. One byte will be matched out and the match context - will be updated to point to the second byte in the binary.

+ is created. The match context points to the first + byte of the binary. One byte is matched out and the match context + is updated to point to the second byte in the binary.

-

In R11B, at this point a sub binary +

In R11B, at this point a + sub binary would be created. In R12B, the compiler sees that there is no point in creating a sub binary, because there will soon be a call to a function (in this case, - to my_binary_to_list/1 itself) that will immediately + to my_binary_to_list/1 itself) that immediately will create a new match context and discard the sub binary.

-

Therefore, in R12B, my_binary_to_list/1 will call itself +

Therefore, in R12B, my_binary_to_list/1 calls itself with the match context instead of with a sub binary. The instruction - that initializes the matching operation will basically do nothing + that initializes the matching operation basically does nothing when it sees that it was passed a match context instead of a binary.

When the end of the binary is reached and the second clause matches, the match context will simply be discarded (removed in the next - garbage collection, since there is no longer any reference to it).

+ garbage collection, as there is no longer any reference to it).

To summarize, my_binary_to_list/1 in R12B only needs to create one match context and no sub binaries. In R11B, if the binary contains N bytes, N+1 match contexts and N - sub binaries will be created.

+ sub binaries are created.

-

In R11B, the fastest way to match binaries is:

+

In R11B, the fastest way to match binaries is as follows:

DO NOT (in R12B)

end.]]>

This function cleverly avoids building sub binaries, but it cannot - avoid building a match context in each recursion step. Therefore, in both R11B and R12B, + avoid building a match context in each recursion step. + Therefore, in both R11B and R12B, my_complicated_binary_to_list/1 builds N+1 match - contexts. (In a future release, the compiler might be able to generate code - that reuses the match context, but don't hold your breath.)

+ contexts. (In a future Erlang/OTP release, the compiler might be able + to generate code that reuses the match context.)

-

Returning to my_binary_to_list/1, note that the match context was - discarded when the entire binary had been traversed. What happens if +

Returning to my_binary_to_list/1, notice that the match context + was discarded when the entire binary had been traversed. What happens if the iteration stops before it has reached the end of the binary? Will the optimization still work?

@@ -336,29 +361,23 @@ after_zero(<<>>) -> <<>>. ]]> -

Yes, it will. The compiler will remove the building of the sub binary in the - second clause

+

Yes, it will. The compiler will remove the building of the sub binary in + the second clause:

>) -> after_zero(T); -. -. -.]]> +...]]> -

but will generate code that builds a sub binary in the first clause

+

But it will generate code that builds a sub binary in the first clause:

>) -> T; -. -. -.]]> +...]]> -

Therefore, after_zero/1 will build one match context and one sub binary +

Therefore, after_zero/1 builds one match context and one sub binary (assuming it is passed a binary that contains a zero byte).

Code like the following will also be optimized:

@@ -371,12 +390,14 @@ all_but_zeroes_to_list(<<0,T/binary>>, Acc, Remaining) -> all_but_zeroes_to_list(<>, Acc, Remaining) -> all_but_zeroes_to_list(T, [Byte|Acc], Remaining-1).]]> -

The compiler will remove building of sub binaries in the second and third clauses, - and it will add an instruction to the first clause that will convert Buffer - from a match context to a sub binary (or do nothing if Buffer already is a binary).

+

The compiler removes building of sub binaries in the second and third + clauses, and it adds an instruction to the first clause that converts + Buffer from a match context to a sub binary (or does nothing if + Buffer is a binary already).

-

Before you begin to think that the compiler can optimize any binary patterns, - here is a function that the compiler (currently, at least) is not able to optimize:

+

Before you begin to think that the compiler can optimize any binary + patterns, the following function cannot be optimized by the compiler + (currently, at least):

>) -> @@ -386,43 +407,43 @@ non_opt_eq([_|_], <<_,_/binary>>) -> non_opt_eq([], <<>>) -> true.]]> -

It was briefly mentioned earlier that the compiler can only delay creation of - sub binaries if it can be sure that the binary will not be shared. In this case, - the compiler cannot be sure.

+

It was mentioned earlier that the compiler can only delay creation of + sub binaries if it knows that the binary will not be shared. In this case, + the compiler cannot know.

-

We will soon show how to rewrite non_opt_eq/2 so that the delayed sub binary - optimization can be applied, and more importantly, we will show how you can find out - whether your code can be optimized.

+

Soon it is shown how to rewrite non_opt_eq/2 so that the delayed + sub binary optimization can be applied, and more importantly, it is shown + how you can find out whether your code can be optimized.
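As a hedged preview (the guide does not show this listing), the argument order change that the warning below suggests would look as follows for non_opt_eq/2; with the binary matched in the first argument, the compiler is expected to be able to keep the match context:

<code type="erl"><![CDATA[
%% Illustrative only: the same comparison with the arguments swapped.
opt_eq(<<H,T1/binary>>, [H|T2]) ->
    opt_eq(T1, T2);
opt_eq(<<_,_/binary>>, [_|_]) ->
    false;
opt_eq(<<>>, []) ->
    true.]]></code>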

- The bin_opt_info option + Option bin_opt_info

Use the bin_opt_info option to have the compiler print a lot of - information about binary optimizations. It can be given either to the compiler or - erlc

+ information about binary optimizations. It can be given either to the + compiler or erlc:

-

or passed via an environment variable

+

or passed through an environment variable:

-
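As a hedged example (the guide's own listings are not reproduced here, and the module name is only a placeholder), the option can be enabled as follows:

<code type="erl"><![CDATA[
%% From the Erlang shell or a build script:
compile:file("my_module.erl", [bin_opt_info, report]).

%% On the erlc command line:
%%   erlc +bin_opt_info my_module.erl
%%
%% Or through the environment, as described above:
%%   export ERL_COMPILER_OPTIONS=bin_opt_info]]></code>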

Note that the bin_opt_info is not meant to be a permanent option added - to your Makefiles, because it is not possible to eliminate all messages that - it generates. Therefore, passing the option through the environment is in most cases - the most practical approach.

+

Notice that the bin_opt_info is not meant to be a permanent + option added to your Makefiles, because it is not possible to + eliminate all messages that it generates. Therefore, passing the option through + the environment is in most cases the most practical approach.

-

The warnings will look like this:

+

The warnings look as follows:

-

To make it clearer exactly what code the warnings refer to, - in the examples that follow, the warnings are inserted as comments - after the clause they refer to:

+

To make it clearer exactly what code the warnings refer to, the + warnings in the following examples are inserted as comments + after the clause they refer to, for example:

>) -> @@ -434,12 +455,12 @@ after_zero(<<_,T/binary>>) -> after_zero(<<>>) -> <<>>.]]> -

The warning for the first clause tells us that it is not possible to - delay the creation of a sub binary, because it will be returned. - The warning for the second clause tells us that a sub binary will not be +

The warning for the first clause says that the creation of a sub + binary cannot be delayed, because it will be returned. + The warning for the second clause says that a sub binary will not be created (yet).

-

It is time to revisit the earlier example of the code that could not +

Let us revisit the earlier example of the code that could not be optimized and find out why:

>) -> non_opt_eq([], <<>>) -> true.]]> -

The compiler emitted two warnings. The INFO warning refers to the function - non_opt_eq/2 as a callee, indicating that any functions that call non_opt_eq/2 - will not be able to make delayed sub binary optimization. - There is also a suggestion to change argument order. - The second warning (that happens to refer to the same line) refers to the construction of - the sub binary itself.

+

The compiler emitted two warnings. The INFO warning refers + to the function non_opt_eq/2 as a callee, indicating that any + function that calls non_opt_eq/2 cannot apply the delayed sub binary + optimization. There is also a suggestion to change argument order. + The second warning (that happens to refer to the same line) refers to + the construction of the sub binary itself.

-

We will soon show another example that should make the distinction between INFO - and NOT OPTIMIZED warnings somewhat clearer, but first we will heed the suggestion - to change argument order:

+

Soon another example will make the distinction between the + INFO and NOT OPTIMIZED warnings somewhat clearer, but + let us first follow the suggestion to change argument order:

>, [H|T2]) -> @@ -485,15 +506,13 @@ match_body([0|_], <>) -> %% sub binary optimization; %% SUGGEST changing argument order done; -. -. -.]]> +...]]>

The warning means that if there is a call to match_body/2 (from another clause in match_body/2 or another function), the - delayed sub binary optimization will not be possible. There will be additional - warnings for any place where a sub binary is matched out at the end of and - passed as the second argument to match_body/2. For instance:

+ delayed sub binary optimization will not be possible. More warnings will + occur for any place where a sub binary is matched out at the end of and + passed as the second argument to match_body/2, for example:

>) -> @@ -504,10 +523,10 @@ match_head(List, <<_:10,Data/binary>>) ->
- Unused variables + Unused Variables -

The compiler itself figures out if a variable is unused. The same - code is generated for each of the following functions

+

The compiler figures out if a variable is unused. The same + code is generated for each of the following functions:

>, Count) -> count1(T, Count+1); @@ -519,11 +538,9 @@ count2(<<>>, Count) -> Count. count3(<<_H,T/binary>>, Count) -> count3(T, Count+1); count3(<<>>, Count) -> Count.]]> -

In each iteration, the first 8 bits in the binary will be skipped, not matched out.

- +

In each iteration, the first 8 bits in the binary will be skipped, + not matched out.

-
-