<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE chapter SYSTEM "chapter.dtd">

<chapter>
  <header>
    <copyright>
      <year>2007</year>
      <year>2016</year>
      <holder>Ericsson AB, All Rights Reserved</holder>
    </copyright>
    <legalnotice>
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
 
      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.

  The Initial Developer of the Original Code is Ericsson AB.
    </legalnotice>

    <title>The Eight Myths of Erlang Performance</title>
    <prepared>Bjorn Gustavsson</prepared>
    <docno></docno>
    <date>2007-11-10</date>
    <rev></rev>
    <file>myths.xml</file>
  </header>

  <marker id="myths"></marker>
  <p>Some truths seem to live on well beyond their best-before date,
  perhaps because "information" spreads faster from person to person
  than a single release note that says, for example, that funs
  have become faster.</p>

  <p>This section tries to kill the old truths (or semi-truths) that have
  become myths.</p>

  <section>
    <title>Myth: Funs are Slow</title>
    <p>Funs used to be very slow, slower than <c>apply/3</c>.
    Originally, funs were implemented using nothing more than
    compiler trickery, ordinary tuples, <c>apply/3</c>, and a great
    deal of ingenuity.</p>

    <p>But that is history. Funs were given their own data type
    in R6B and were further optimized in R7B.
    Now the cost for a fun call falls roughly between the cost for a call
    to a local function and the cost for <c>apply/3</c>.</p>
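
    <p>For illustration, the three kinds of calls compared above can be
    sketched as follows. (This is only an illustrative sketch with made-up
    function names, not a benchmark; measure on your own system if the
    difference matters to you.)</p>

    <code type="erl">
%% Ordinary local call - the cheapest form.
run_local() -> double(21).

%% Fun call - nowadays only slightly more expensive than a local call.
run_fun() ->
    F = fun double/1,
    F(21).

%% apply/3 - the most expensive of the three.
run_apply() -> apply(?MODULE, double, [21]).

double(N) -> N * 2.</code>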
  </section>

  <section>
    <title>Myth: List Comprehensions are Slow</title>

    <p>List comprehensions used to be implemented using funs, and in the
    old days funs were indeed slow.</p>

    <p>Nowadays, the compiler rewrites list comprehensions into an ordinary
    recursive function, roughly as sketched below. Using a tail-recursive
    function with a reverse at the end would still be faster. Or would it?
    That leads us to the next myth.</p>
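
    <p>As a rough sketch (the code actually generated by the compiler is not
    exactly this, but the idea is the same), a comprehension such as
    <c>[Expr(E) || E &lt;- List]</c> is turned into an ordinary recursive
    function:</p>

    <code type="erl">
%% Schematic translation of a list comprehension applying Expr
%% to every element of the list.
'lc^0'([E|Tail], Expr) ->
    [Expr(E)|'lc^0'(Tail, Expr)];
'lc^0'([], _Expr) ->
    [].</code>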
  </section>

  <section>
    <title>Myth: Tail-Recursive Functions are Much Faster
    Than Recursive Functions</title>

    <p><marker id="tail_recursive"></marker>According to the myth,
    recursive functions leave references
    to dead terms on the stack and the garbage collector has to copy
    all those dead terms, while tail-recursive functions immediately
    discard those terms.</p>

    <p>That used to be true before R7B. In R7B, the compiler started
    to generate code that overwrites references to terms that will never
    be used with an empty list, so that the garbage collector would not
    keep dead values any longer than necessary.</p>

    <p>Even after that optimization, a tail-recursive function is usually
    still faster than a body-recursive function. Why?</p>

    <p>It has to do with how many words of stack are used in each
    recursive call. In most cases, a body-recursive function uses more words
    on the stack for each recursion than a tail-recursive function
    would allocate on the heap. As more memory is used, the garbage
    collector is invoked more frequently, and it has more work traversing
    the stack.</p>
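
    <p>As a concrete illustration of the two styles (the function names are
    made up for this example; both versions return the same result, and which
    one is faster must be measured):</p>

    <code type="erl">
%% Body-recursive version: each recursion uses words on the stack.
double_all_body([H|T]) ->
    [H*2|double_all_body(T)];
double_all_body([]) ->
    [].

%% Tail-recursive version: constant stack usage, but the result is
%% built in reverse order on the heap and must be reversed at the end.
double_all_tail(List) ->
    double_all_tail(List, []).

double_all_tail([H|T], Acc) ->
    double_all_tail(T, [H*2|Acc]);
double_all_tail([], Acc) ->
    lists:reverse(Acc).</code>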

    <p>In R12B and later releases, there is an optimization that
    in many cases reduces the number of words used on the stack in
    body-recursive calls. A body-recursive list function and a
    tail-recursive function that calls <seealso
    marker="stdlib:lists#reverse/1">lists:reverse/1</seealso> at
    the end will use the same amount of memory.
    <c>lists:map/2</c>, <c>lists:filter/2</c>, list comprehensions,
    and many other recursive functions now use the same amount of space
    as their tail-recursive equivalents.</p>

    <p>So, which is faster?
    It depends. On Solaris/Sparc, the body-recursive function seems to
    be slightly faster, even for lists with a lot of elements. On the x86
    architecture, tail-recursion was up to about 30% faster.</p>

    <p>So, the choice is now mostly a matter of taste. If you really do need
    the utmost speed, you must <em>measure</em>. You can no longer be
    sure that the tail-recursive list function always is the fastest.</p>

    <note><p>A tail-recursive function that does not need to reverse the
    list at the end is faster than a body-recursive function,
    as are tail-recursive functions that do not construct any terms at all
    (for example, a function that sums all integers in a list).</p></note>
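
    <p>For example, a tail-recursive sum of all integers in a list builds no
    intermediate terms and needs no reverse at the end (illustrative
    sketch):</p>

    <code type="erl">
sum(List) -> sum(List, 0).

sum([H|T], Acc) -> sum(T, Acc + H);
sum([], Acc) -> Acc.</code>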
  </section>

  <section>
    <title>Myth: Operator "++" is Always Bad</title>

    <p>The <c>++</c> operator has, somewhat undeservedly, got a bad reputation.
    It probably has something to do with code like the following,
    which is the most inefficient way there is to reverse a list:</p>
    
    <p><em>DO NOT</em></p>
    <code type="erl">
naive_reverse([H|T]) ->
    naive_reverse(T)++[H];
naive_reverse([]) ->
    [].</code>

    <p>As the <c>++</c> operator copies its left operand, the result
    is copied repeatedly, leading to quadratic complexity.</p>

    <p>But using <c>++</c> as follows is not bad:</p>

    <p><em>OK</em></p>
    <code type="erl">
naive_but_ok_reverse([H|T], Acc) ->
    naive_but_ok_reverse(T, [H]++Acc);
naive_but_ok_reverse([], Acc) ->
    Acc.</code>

    <p>Each list element is copied only once.
    The growing result <c>Acc</c> is the right operand
    for the <c>++</c> operator, and it is <em>not</em> copied.</p>

    <p>Experienced Erlang programmers would write as follows:</p>

    <p><em>DO</em></p>
    <code type="erl">
vanilla_reverse([H|T], Acc) ->
    vanilla_reverse(T, [H|Acc]);
vanilla_reverse([], Acc) ->
    Acc.</code>

    <p>This is slightly more efficient because here you do not build a
    list element only to copy it directly. (Or it would be more efficient
    if the compiler did not automatically rewrite <c>[H]++Acc</c>
    to <c>[H|Acc]</c>.)</p>
  </section>

  <section>
    <title>Myth: Strings are Slow</title>

    <p>String handling can be slow if done improperly.
    In Erlang, you need to think a little more about how the strings
    are used and choose an appropriate representation. If you
    use regular expressions, use the
    <seealso marker="stdlib:re">re</seealso> module in STDLIB
    instead of the obsolete <c>regexp</c> module.</p>
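
    <p>For example, a simple check using <c>re</c> could look as follows
    (an illustrative sketch; the function name is made up):</p>

    <code type="erl">
%% Returns true if String contains at least one digit.
contains_digit(String) ->
    case re:run(String, "[0-9]", [{capture, none}]) of
        match -> true;
        nomatch -> false
    end.</code>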
  </section>

  <section>
    <title>Myth: Repairing a Dets File is Very Slow</title>

    <p>The repair time is still proportional to the number of records
    in the file, but Dets repair used to be much slower. Dets has since
    been massively rewritten and improved.</p>
  </section>

  <section>
    <title>Myth: BEAM is a Stack-Based Byte-Code Virtual Machine
    (and Therefore Slow)</title>

    <p>BEAM is a register-based virtual machine. It has 1024 virtual registers
    that are used for holding temporary values and for passing arguments when
    calling functions. Variables that need to survive a function call are saved
    to the stack.</p>

    <p>BEAM is a threaded-code interpreter. Each instruction is a word pointing
    directly to executable C code, making instruction dispatching very fast.</p>
  </section>

  <section>
    <title>Myth: Use "_" to Speed Up Your Program When a Variable
    is Not Used</title>

    <p>That was once true, but since R6B the BEAM compiler can see
    that a variable is not used, so there is no performance to be gained
    by writing <c>_</c>.</p>
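
    <p>Using <c>_</c>, or a variable name that begins with an underscore, is
    still useful for readability and for silencing the compiler warning about
    unused variables, as in this made-up example:</p>

    <code type="erl">
%% All of these compile to the same code; only the warnings differ.
value_of({_Key, Value}) -> Value.   % no warning, documents what is ignored
%% value_of({Key, Value}) -> Value. % warning: variable 'Key' is unused
%% value_of({_, Value}) -> Value.   % no warning, but less descriptive</code>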
  </section>
</chapter>