From 42a0387e886ddbf60b0e2cb977758e2ca74954ae Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?=
Date: Thu, 12 Mar 2015 15:35:13 +0100
Subject: Update Design Principles
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Language cleaned up by the technical writers xsipewe and tmanevik
from Combitech. Proofreading and corrections by Björn Gustavsson.
---
 system/doc/design_principles/applications.xml      | 270 ++++-----
 system/doc/design_principles/appup_cookbook.xml    | 368 ++++++------
 system/doc/design_principles/des_princ.xml         | 113 ++--
 .../design_principles/distributed_applications.xml | 182 +++---
 system/doc/design_principles/events.xml            |  95 ++--
 system/doc/design_principles/fsm.xml               | 168 +++---
 .../doc/design_principles/gen_server_concepts.xml  | 160 +++---
 .../design_principles/included_applications.xml    |  67 +--
 system/doc/design_principles/release_handling.xml  | 628 +++++++++++----------
 system/doc/design_principles/release_structure.xml | 190 ++++---
 system/doc/design_principles/spec_proc.xml         | 214 ++++---
 system/doc/design_principles/sup_princ.xml         | 141 ++---
 12 files changed, 1337 insertions(+), 1259 deletions(-)

(limited to 'system/doc')

diff --git a/system/doc/design_principles/applications.xml b/system/doc/design_principles/applications.xml
index 7b030115df..9d3a204999 100644
--- a/system/doc/design_principles/applications.xml
+++ b/system/doc/design_principles/applications.xml
@@ -29,55 +29,63 @@
 applications.xml
-

This chapter should be read in conjunction with app(4) and - application(3).

+

This section is to be read with the app(4) and + application(3) manual pages in Kernel.

Application Concept -

When we have written code implementing some specific - functionality, we might want to make the code into an - application, that is a component that can be started and - stopped as a unit, and which can be re-used in other systems as - well.

-

To do this, we create an - application callback module, where we describe how the application should - be started and stopped.

+

When you have written code implementing some specific functionality, + you might want to make the code into an application, + that is, a component that can be started and stopped as a unit, + and which can also be reused in other systems.

+

To do this, create an + application callback module, + and describe how the application is to be started and stopped.

Then, an application specification is needed, which is - put in an application resource file. Among other things, we specify which - modules the application consists of and the name of the callback - module.

-

If we use systools, the Erlang/OTP tools for packaging code + put in an + application resource file. + Among other things, this file specifies which modules the application + consists of and the name of the callback module.

+

If you use systools, the Erlang/OTP tools for packaging code (see Releases), - the code for each application is placed in a separate directory - following a pre-defined directory structure.

+ the code for each application is placed in a + separate directory following a pre-defined + directory structure.

Application Callback Module -

How to start and stop the code for the application, i.e. +

How to start and stop the code for the application, that is, the supervision tree, is described by two callback functions:

start(StartType, StartArgs) -> {ok, Pid} | {ok, Pid, State} -stop(State) -

start is called when starting the application and should - create the supervision tree by starting the top supervisor. - It is expected to return the pid of the top supervisor and an - optional term State, which defaults to []. This term is - passed as-is to stop.

-

StartType is usually the atom normal. It has other +stop(State) + + + start is called when starting the application and is to + create the supervision tree by starting the top supervisor. It is + expected to return the pid of the top supervisor and an optional + term, State, which defaults to []. This term is passed + as is to stop. + StartType is usually the atom normal. It has other values only in the case of a takeover or failover, see - Distributed Applications. StartArgs is defined by the key - mod in the application resource file file.

-

stop/1 is called after the application has been - stopped and should do any necessary cleaning up. Note that - the actual stopping of the application, that is the shutdown of - the supervision tree, is handled automatically as described in - Starting and Stopping Applications.

+ Distributed Applications. + + StartArgs is defined by the key mod in the + application + resource file. + stop/1 is called after the application has been + stopped and is to do any necessary cleaning up. The actual stopping of + the application, that is, the shutdown of the supervision tree, is + handled automatically as described in + Starting and Stopping Applications. + +

Example of an application callback module for packaging the supervision tree from - the Supervisor chapter:

+ Supervisor Behaviour:

-module(ch_app). -behaviour(application). @@ -89,44 +97,48 @@ start(_Type, _Args) -> stop(_State) -> ok. -

A library application, which can not be started or stopped, - does not need any application callback module.

+

A library application that cannot be started or stopped does not + need any application callback module.

Application Resource File -

To define an application, we create an application specification which is put in an application resource file, or in short .app file:

+

To define an application, an application specification is + created, which is put in an application resource file, or in + short an .app file:

{application, Application, [Opt1,...,OptN]}. -

Application, an atom, is the name of the application. - The file must be named Application.app.

-

Each Opt is a tuple {Key, Value} which define a + + Application, an atom, is the name of the application. + The file must be named Application.app. + Each Opt is a tuple {Key,Value}, which define a certain property of the application. All keys are optional. - Default values are used for any omitted keys.

+ Default values are used for any omitted keys. +

The contents of a minimal .app file for a library - application libapp looks like this:

+ application libapp looks as follows:

{application, libapp, []}.

The contents of a minimal .app file ch_app.app for - a supervision tree application like ch_app looks like this:

+ a supervision tree application like ch_app looks as follows:

{application, ch_app, [{mod, {ch_app,[]}}]}. -

The key mod defines the callback module and start - argument of the application, in this case ch_app and - [], respectively. This means that

+

The key mod defines the callback module and start argument of + the application, in this case ch_app and [], respectively. + This means that the following is called when the application is to be + started:

ch_app:start(normal, []) -

will be called when the application should be started and

+

The following is called when the application is stopped:

ch_app:stop([]) -

will be called when the application has been stopped.

When using systools, the Erlang/OTP tools for packaging - code (see Releases), - the keys description, vsn, modules, - registered and applications should also be - specified:

+ code (see Section + Releases), the keys + description, vsn, modules, registered, + and applications are also to be specified:

{application, ch_app, [{description, "Channel allocator"}, @@ -136,67 +148,54 @@ ch_app:stop([]) {applications, [kernel, stdlib, sasl]}, {mod, {ch_app,[]}} ]}. - - description - A short description, a string. Defaults to "". - vsn - Version number, a string. Defaults to "". - modules - All modules introduced by this application. - systools uses this list when generating boot scripts and - tar files. A module must be defined in one and only one - application. Defaults to []. - registered - All names of registered processes in the application. - systools uses this list to detect name clashes - between applications. Defaults to []. - applications - All applications which must be started before this - application is started. systools uses this list to - generate correct boot scripts. Defaults to [], but note that - all applications have dependencies to at least kernel - and stdlib. - -

The syntax and contents of of the application resource file - are described in detail in the - Application resource file reference.

+ + description - A short description, a string. Defaults to + "". + vsn - Version number, a string. Defaults to "". + modules - All modules introduced by this + application. systools uses this list when generating boot scripts + and tar files. A module must be defined in only one application. + Defaults to []. + registered - All names of registered processes in the + application. systools uses this list to detect name clashes + between applications. Defaults to []. + applications - All applications that must be + started before this application is started. systools uses this + list to generate correct boot scripts. Defaults to []. Notice + that all applications have dependencies to at least Kernel + and STDLIB. + +

For details about the syntax and contents of the application + resource file, see the app + manual page in Kernel.

Directory Structure

When packaging code using systools, the code for each - application is placed in a separate directory + application is placed in a separate directory, lib/Application-Vsn, where Vsn is the version number.

-

This may be useful to know, even if systools is not used, - since Erlang/OTP itself is packaged according to the OTP principles +

This can be useful to know, even if systools is not used, + since Erlang/OTP is packaged according to the OTP principles and thus comes with this directory structure. The code server - (see code(3)) will automatically use code from - the directory with the highest version number, if there are - more than one version of an application present.

-

The application directory structure can of course be used in - the development environment as well. The version number may then + (see the code(3) manual page in Kernel) automatically + uses code from + the directory with the highest version number, if more than one + version of an application is present.

+

The application directory structure can also be used in the + development environment. The version number can then be omitted from the name.

-

The application directory have the following sub-directories:

+

The application directory has the following sub-directories:

- src - ebin - priv - include + src - Contains the Erlang source code. + ebin - Contains the Erlang object code, the + beam files. The .app file is also placed here. + priv - Used for application specific files. For + example, C executables are placed here. The function + code:priv_dir/1 is to be used to access this directory. + include - Used for include files. - - src - Contains the Erlang source code. - ebin - Contains the Erlang object code, the beam files. - The .app file is also placed here. - priv - Used for application specific files. For example, C - executables are placed here. The function code:priv_dir/1 - should be used to access this directory. - include - Used for include files. -
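To make the layout concrete, the following is a sketch of how the ch_app application from this chapter could be packaged as version "1" (the .erl and .beam file names are illustrative, not mandated):

```
lib/ch_app-1/
    src/
        ch_app.erl
        ch_sup.erl
    ebin/
        ch_app.app
        ch_app.beam
        ch_sup.beam
    priv/
    include/
```

Only ebin, holding the .app file and the beam files, is needed at runtime; src holds the sources they were compiled from.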
@@ -207,17 +206,17 @@ ch_app:stop([]) processes is the application controller process, registered as application_controller.

All operations on applications are coordinated by the application - controller. It is interfaced through the functions in - the module application, see application(3). - In particular, applications can be loaded, unloaded, started and - stopped.

+ controller. It is interacted through the functions in + the module application, see the application(3) + manual page in Kernel. In particular, applications can be + loaded, unloaded, started, and stopped.

Loading and Unloading Applications

Before an application can be started, it must be loaded. The application controller reads and stores the information from - the .app file.

+ the .app file:

 1> application:load(ch_app).
 ok
@@ -236,7 +235,7 @@ ok
  {stdlib,"ERTS  CXC 138 10","1.11.4.3"}]

Loading/unloading an application does not load/unload the code - used by the application. Code loading is done the usual way.

+ used by the application. Code loading is done the usual way.

@@ -252,13 +251,14 @@ ok {stdlib,"ERTS CXC 138 10","1.11.4.3"}, {ch_app,"Channel allocator","1"}]

If the application is not already loaded, the application - controller will first load it using application:load/1. It - will check the value of the applications key, to ensure - that all applications that should be started before this + controller first loads it using application:load/1. It + checks the value of the applications key, to ensure + that all applications that are to be started before this application are running.

-

The application controller then creates an application master for the application. The application master is - the group leader of all the processes in the application. +

The application controller then creates an + application master for the application. The application master + is the group leader of all the processes in the application. The application master starts the application by calling the application callback function start/2 in the module, and with the start argument, defined by the mod key in @@ -268,8 +268,8 @@ ok 7> application:stop(ch_app). ok

The application master stops the application by telling the top - supervisor to shutdown. The top supervisor tells all its child - processes to shutdown etc. and the entire tree is terminated in + supervisor to shut down. The top supervisor tells all its child + processes to shut down, and so on; the entire tree is terminated in reversed start order. The application master then calls the application callback function stop/1 in the module defined by the mod key.

@@ -277,8 +277,10 @@ ok
Configuring an Application -

An application can be configured using configuration parameters. These are a list of {Par, Val} tuples - specified by a key env in the .app file.

+

An application can be configured using + configuration parameters. These are a list of + {Par,Val} tuples + specified by a key env in the .app file:

{application, ch_app, [{description, "Channel allocator"}, @@ -289,11 +291,12 @@ ok {mod, {ch_app,[]}}, {env, [{file, "/usr/local/log"}]} ]}. -

Par should be an atom, Val is any term. +

Par is to be an atom. Val is any term. The application can retrieve the value of a configuration parameter by calling application:get_env(App, Par) or a - number of similar functions, see application(3).

-

Example:

+ number of similar functions, see the application(3) + manual page in Kernel.

+

Example:

 % erl
 Erlang (BEAM) emulator version 5.2.3.6 [hipe] [threads:0]
@@ -304,20 +307,21 @@ ok
 2> application:get_env(ch_app, file).
 {ok,"/usr/local/log"}

The values in the .app file can be overridden by values - in a system configuration file. This is a file which + in a system configuration file. This is a file that contains configuration parameters for relevant applications:

[{Application1, [{Par11,Val11},...]}, ..., {ApplicationN, [{ParN1,ValN1},...]}]. -

The system configuration should be called Name.config and - Erlang should be started with the command line argument - -config Name. See config(4) for more information.

-

Example: A file test.config is created with the following - contents:

+

The system configuration is to be called Name.config and + Erlang is to be started with the command-line argument + -config Name. For details, see the config(4) + manual page in Kernel.

+

Example:

+

A file test.config is created with the following contents:

[{ch_app, [{file, "testlog"}]}]. -

The value of file will override the value of file +

The value of file overrides the value of file as defined in the .app file:

 % erl -config test
@@ -330,14 +334,14 @@ ok
 {ok,"testlog"}

If release handling - is used, exactly one system configuration file should be used and - that file should be called sys.config

-

The values in the .app file, as well as the values in a - system configuration file, can be overridden directly from + is used, exactly one system configuration file is to be used and + that file is to be called sys.config.
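As a sketch, a sys.config giving ch_app the same configuration parameter as in the examples above could look as follows (the log file path here is hypothetical):

```
[{ch_app, [{file, "/var/log/ch_app"}]}].
```

The format is the same as for any system configuration file; only the name sys.config is significant to release handling.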

+

The values in the .app file and the values in a + system configuration file can be overridden directly from the command line:

 % erl -ApplName Par1 Val1 ... ParN ValN
-

Example:

+

Example:

 % erl -ch_app file '"testlog"'
 Erlang (BEAM) emulator version 5.2.3.6 [hipe] [threads:0]
@@ -368,10 +372,10 @@ application:start(Application, Type)
       If a temporary application terminates, this is reported but
        no other applications are terminated.
     
-    

It is always possible to stop an application explicitly by +

An application can always be stopped explicitly by calling application:stop/1. Regardless of the mode, no - other applications will be affected.

-

Note that transient mode is of little practical use, since when + other applications are affected.

+

The transient mode is of little practical use, since when a supervision tree terminates, the reason is set to shutdown, not normal.
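As a sketch, the start type is given as the second argument to application:start/2; for example, to start the chapter's ch_app as a permanent application from the Erlang shell:

```
1> application:start(ch_app, permanent).
```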

diff --git a/system/doc/design_principles/appup_cookbook.xml b/system/doc/design_principles/appup_cookbook.xml index 22c48db855..63adea8a5c 100644 --- a/system/doc/design_principles/appup_cookbook.xml +++ b/system/doc/design_principles/appup_cookbook.xml @@ -28,15 +28,15 @@ appup_cookbook.xml -

This chapter contains examples of .appup files for - typical cases of upgrades/downgrades done in run-time.

+ +

This section includes examples of .appup files for + typical cases of upgrades/downgrades done at runtime.

Changing a Functional Module -

When a change has been made to a functional module, for example +

When a functional module has been changed, for example, if a new function has been added or a bug has been corrected, - simple code replacement is sufficient.

-

Example:

+ simple code replacement is sufficient, for example:

{"2", [{"1", [{load_module, m}]}], @@ -46,29 +46,31 @@
Changing a Residence Module -

In a system implemented according to the OTP Design Principles, +

In a system implemented according to the OTP design principles, all processes, except system processes and special processes, reside in one of the behaviours supervisor, - gen_server, gen_fsm or gen_event. These + gen_server, gen_fsm, or gen_event. These belong to the STDLIB application and upgrading/downgrading normally requires an emulator restart.

-

OTP thus provides no support for changing residence modules - except in the case of special processes.

+

OTP thus provides no support for changing residence modules except + in the case of special processes.

Changing a Callback Module -

A callback module is a functional module, and for code +

A callback module is a functional module, and for code extensions simple code replacement is sufficient.

-

Example: When adding a function to ch3 as described in - the example in Release Handling, ch_app.appup looks as follows:

+

Example: When adding a function to ch3, + as described in the example in + Release Handling, + ch_app.appup looks as follows:

{"2", [{"1", [{load_module, ch3}]}], [{"1", [{load_module, ch3}]}] }.

OTP also supports changing the internal state of behaviour - processes, see Changing Internal State below.

+ processes, see Changing Internal State.

@@ -77,26 +79,28 @@

In this case, simple code replacement is not sufficient. The process must explicitly transform its state using the callback function code_change before switching to the new version - of the callback module. Thus synchronized code replacement is + of the callback module. Thus, synchronized code replacement is used.

-

Example: Consider the gen_server ch3 from the chapter - about the gen_server behaviour. The internal state is a term Chs - representing the available channels. Assume we want add a counter - N which keeps track of the number of alloc requests - so far. This means we need to change the format to +

Example: Consider gen_server ch3 from + gen_server Behaviour. + The internal state is a term Chs + representing the available channels. Assume you want to add a counter + N, which keeps track of the number of alloc requests + so far. This means that the format must be changed to {Chs,N}.

-

The .appup file could look as follows:

+

The .appup file can look as follows:

{"2", [{"1", [{update, ch3, {advanced, []}}]}], [{"1", [{update, ch3, {advanced, []}}]}] }.

The third element of the update instruction is a tuple - {advanced,Extra} which says that the affected processes - should do a state transformation before loading the new version + {advanced,Extra}, which says that the affected processes + are to do a state transformation before loading the new version of the module. This is done by the processes calling the callback - function code_change (see gen_server(3)). The term - Extra, in this case [], is passed as-is to the function:

+ function code_change (see the gen_server(3) manual + page in STDLIB). The term Extra, in this case + [], is passed as is to the function:

-module(ch3). @@ -107,40 +111,41 @@ code_change({down, _Vsn}, {Chs, N}, _Extra) -> {ok, Chs}; code_change(_Vsn, Chs, _Extra) -> {ok, {Chs, 0}}. -

The first argument is {down,Vsn} in case of a downgrade, - or Vsn in case of an upgrade. The term Vsn is - fetched from the 'original' version of the module, i.e. - the version we are upgrading from, or downgrading to.

+

The first argument is {down,Vsn} if there is a downgrade, + or Vsn if there is an upgrade. The term Vsn is + fetched from the 'original' version of the module, that is, + the version you are upgrading from, or downgrading to.

The version is defined by the module attribute vsn, if any. There is no such attribute in ch3, so in this case - the version is the checksum (a huge integer) of the BEAM file, an - uninteresting value which is ignored.

-

(The other callback functions of ch3 need to be modified - as well and perhaps a new interface function added, this is not - shown here).

+ the version is the checksum (a huge integer) of the beam file, an + uninteresting value, which is ignored.

+

The other callback functions of ch3 must also be modified + and perhaps a new interface function must be added, but this is not + shown here.

Module Dependencies -

Assume we extend a module by adding a new interface function, as - in the example in Release Handling, where a function available/0 is - added to ch3.

-

If we also add a call to this function, say in the module - m1, a run-time error could occur during release upgrade if +

Assume that a module is extended by adding an interface function, + as in the example in + Release Handling, + where a function available/0 is added to ch3.

+

If a call is added to this function, say in module + m1, a runtime error can occur during release upgrade if the new version of m1 is loaded first and calls ch3:available/0 before the new version of ch3 is loaded.

-

Thus, ch3 must be loaded before m1 is, in - the upgrade case, and vice versa in the downgrade case. We say - that m1 is dependent on ch3. In a release - handling instruction, this is expressed by the element - DepMods:

+

Thus, ch3 must be loaded before m1, in + the upgrade case, and conversely in the downgrade case. + m1 is said to be dependent on ch3. In a release + handling instruction, this is expressed by the + DepMods element:

{load_module, Module, DepMods} {update, Module, {advanced, Extra}, DepMods}

DepMods is a list of modules, on which Module is dependent.

-

Example: The module m1 in the application myapp is +

Example: The module m1 in application myapp is dependent on ch3 when upgrading from "1" to "2", or downgrading from "2" to "1":

@@ -157,8 +162,8 @@ ch_app.appup: [{"1", [{load_module, ch3}]}], [{"1", [{load_module, ch3}]}] }. -

If m1 and ch3 had belonged to the same application, - the .appup file could have looked like this:

+

If instead m1 and ch3 belong to the same application, + the .appup file can look as follows:

{"2", [{"1", @@ -168,48 +173,48 @@ ch_app.appup: [{load_module, ch3}, {load_module, m1, [ch3]}]}] }. -

Note that it is m1 that is dependent on ch3 also +

m1 is dependent on ch3 also when downgrading. systools knows the difference between - up- and downgrading and will generate a correct relup, - where ch3 is loaded before m1 when upgrading but + up- and downgrading and generates a correct relup, + where ch3 is loaded before m1 when upgrading, but m1 is loaded before ch3 when downgrading.

- Changing Code For a Special Process + Changing Code for a Special Process

In this case, simple code replacement is not sufficient. When a new version of a residence module for a special process is loaded, the process must make a fully qualified call to - its loop function to switch to the new code. Thus synchronized + its loop function to switch to the new code. Thus, synchronized code replacement must be used.

The name(s) of the user-defined residence module(s) must be listed in the Modules part of the child specification - for the special process, in order for the release handler to + for the special process. Otherwise the release handler cannot find the process.

-

Example. Consider the example ch4 from the chapter about +

Example: Consider the example ch4 in sys and proc_lib. - When started by a supervisor, the child specification could look - like this:

+ When started by a supervisor, the child specification can look + as follows:

{ch4, {ch4, start_link, []}, permanent, brutal_kill, worker, [ch4]}

If ch4 is part of the application sp_app and a new - version of the module should be loaded when upgrading from - version "1" to "2" of this application, sp_app.appup could - look like this:

+ version of the module is to be loaded when upgrading from + version "1" to "2" of this application, sp_app.appup can + look as follows:

{"2", [{"1", [{update, ch4, {advanced, []}}]}], [{"1", [{update, ch4, {advanced, []}}]}] }.

The update instruction must contain the tuple - {advanced,Extra}. The instruction will make the special + {advanced,Extra}. The instruction makes the special process call the callback function system_code_change/4, a function the user must implement. The term Extra, in this - case [], is passed as-is to system_code_change/4:

+ case [], is passed as is to system_code_change/4:

-module(ch4). ... @@ -218,39 +223,43 @@ ch_app.appup: system_code_change(Chs, _Module, _OldVsn, _Extra) -> {ok, Chs}. -

The first argument is the internal state State passed from - the function sys:handle_system_msg(Request, From, Parent, Module, Deb, State), called by the special process when - a system message is received. In ch4, the internal state is - the set of available channels Chs.

-

The second argument is the name of the module (ch4).

-

The third argument is Vsn or {down,Vsn} as - described for - gen_server:code_change/3.

+ + The first argument is the internal state State, + passed from function + sys:handle_system_msg(Request, From, Parent, Module, Deb, State), + and called by the special process when a system message is received. + In ch4, the internal state is the set of available channels + Chs. + The second argument is the name of the module + (ch4). + The third argument is Vsn or {down,Vsn}, as + described for gen_server:code_change/3 in + Changing Internal State. +

In this case, all arguments but the first are ignored and the function simply returns the internal state again. This is - enough if the code only has been extended. If we had wanted to - change the internal state (similar to the example in + enough if the code only has been extended. If instead the + internal state is changed (similar to the example in Changing Internal State), - it would have been done in this function and - {ok,Chs2} returned.

+ this is done in this function and {ok,Chs2} returned.

Changing a Supervisor

The supervisor behaviour supports changing the internal state, - i.e. changing restart strategy and maximum restart intensity - properties, as well as changing existing child specifications.

-

Adding and deleting child processes are also possible, but not + that is, changing the restart strategy and maximum restart frequency + properties, as well as changing the existing child specifications.

+

Child processes can be added or deleted, but this is not + handled automatically. Instructions must be given in the .appup file.

Changing Properties -

Since the supervisor should change its internal state, +

Since the supervisor is to change its internal state, synchronized code replacement is required. However, a special update instruction must be used.

-

The new version of the callback module must be loaded first +

First, the new version of the callback module must be loaded, both in the case of upgrade and downgrade. Then the new return value of init/1 can be checked and the internal state be changed accordingly.

@@ -258,10 +267,11 @@ system_code_change(Chs, _Module, _OldVsn, _Extra) -> supervisors:

{update, Module, supervisor} -

Example: Assume we want to change the restart strategy of - ch_sup from the Supervisor Behaviour chapter from one_for_one to one_for_all. - We change the callback function init/1 in - ch_sup.erl:

+

Example: To change the restart strategy of + ch_sup (from + Supervisor Behaviour) + from one_for_one to one_for_all, change the callback + function init/1 in ch_sup.erl:

-module(ch_sup). ... @@ -280,7 +290,7 @@ init(_Args) -> Changing Child Specifications

The instruction, and thus the .appup file, when changing an existing child specification, is the same as when - changing properties as described above:

+ changing properties as described earlier:

{"2", [{"1", [{update, ch_sup, supervisor}]}], @@ -288,25 +298,25 @@ init(_Args) -> }.

The changes do not affect existing child processes. For example, changing the start function only specifies how - the child process should be restarted, if needed later on.

-

Note that the id of the child specification cannot be changed.

-

Note also that changing the Modules field of the child - specification may affect the release handling process itself, + the child process is to be restarted, if needed later on.

+

The id of the child specification cannot be changed.

+

Changing the Modules field of the child + specification can affect the release handling process itself, as this field is used to identify which processes are affected when doing a synchronized code replacement.

- Adding And Deleting Child Processes -

As stated above, changing child specifications does not affect + Adding and Deleting Child Processes +

As stated earlier, changing child specifications does not affect existing child processes. New child specifications are - automatically added, but not deleted. Also, child processes are - not automatically started or terminated. Instead, this must be - done explicitly using apply instructions.

-

Example: Assume we want to add a new child process m1 to - ch_sup when upgrading ch_app from "1" to "2". - This means m1 should be deleted when downgrading from + automatically added, but not deleted. Child processes are + not automatically started or terminated, this must be + done using apply instructions.

+

Example: Assume a new child process m1 is to be + added to ch_sup when upgrading ch_app from "1" to "2". + This means m1 is to be deleted when downgrading from "2" to "1":

{"2", @@ -320,13 +330,13 @@ init(_Args) -> {update, ch_sup, supervisor} ]}] }. -

Note that the order of the instructions is important.

-

Note also that the supervisor must be registered as +

The order of the instructions is important.

+

The supervisor must be registered as ch_sup for the script to work. If the supervisor is not registered, it cannot be accessed directly from the script. Instead a help function that finds the pid of the supervisor - and calls supervisor:restart_child etc. must be written, - and it is this function that should be called from the script + and calls supervisor:restart_child, and so on, must be + written. This function is then to be called from the script using the apply instruction.

If the module m1 is introduced in version "2" of ch_app, it must also be loaded when upgrading and @@ -345,18 +355,18 @@ init(_Args) -> {delete_module, m1} ]}] }. -

Note again that the order of the instructions is important. - When upgrading, m1 must be loaded and the supervisor's +

As stated earlier, the order of the instructions is important. + When upgrading, m1 must be loaded, and the supervisor child specification changed, before the new child process can be started. When downgrading, the child process must be - terminated before child specification is changed and the module + terminated before the child specification is changed and the module is deleted.

Adding or Deleting a Module -

Example: A new functional module m is added to +

Example: A new functional module m is added to ch_app:

{"2", @@ -367,15 +377,16 @@ init(_Args) ->
Starting or Terminating a Process

In a system structured according to the OTP design principles, - any process would be a child process belonging to a supervisor, - see Adding and Deleting Child Processes above.

+ any process would be a child process belonging to a supervisor, see + Adding and Deleting Child Processes + in Changing a Supervisor.

Adding or Removing an Application

When adding or removing an application, no .appup file is needed. When generating relup, the .rel files - are compared and add_application and + are compared and the add_application and remove_application instructions are added automatically.
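For example, adding ch_app only requires listing it in the new .rel file; a minimal sketch (the release name, ERTS version, and application version strings are illustrative):

```erlang
{release, {"ch_rel", "B"}, {erts, "6.0"},
 [{kernel, "3.0"},
  {stdlib, "2.0"},
  {sasl, "2.4"},
  {ch_app, "1"}]}.  %% not present in the old .rel file
```

Comparing this file with the old one makes systools:make_relup emit add_application for the upgrade and remove_application for the downgrade.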

@@ -383,11 +394,11 @@ init(_Args) -> Restarting an Application

Restarting an application is useful when a change is too complicated to be made without restarting the processes, for - example if the supervisor hierarchy has been restructured.

-

Example: When adding a new child m1 to ch_sup, as - in the example above, an - alternative to updating the supervisor is to restart the entire - application:

+ example, if the supervisor hierarchy has been restructured.

+

Example: When adding a child m1 to ch_sup, as in + Adding and Deleting Child Processes + in Changing a Supervisor, an alternative to updating + the supervisor is to restart the entire application:

{"2", [{"1", [{restart_application, ch_app}]}], @@ -400,7 +411,7 @@ init(_Args) -> Changing an Application Specification

When installing a release, the application specifications are automatically updated before evaluating the relup script. - Hence, no instructions are needed in the .appup file:

+ Thus, no instructions are needed in the .appup file:

 {"2",
  [{"1", []}],
@@ -412,28 +423,29 @@ init(_Args) ->
     Changing Application Configuration
     

Changing an application configuration by updating the env key in the .app file is an instance of changing an - application specification, see above.

+ application specification, see the previous section.

Alternatively, application configuration parameters can be added or updated in sys.config.
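A minimal sys.config sketch overriding one configuration parameter of ch_app (the parameter name max_channels is hypothetical):

```erlang
[{ch_app, [{max_channels, 200}]}].
```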

Changing Included Applications -

The release handling instructions for adding, removing and +

The release handling instructions for adding, removing, and restarting applications apply to primary applications only. There are no corresponding instructions for included applications. However, since an included application is really a supervision tree with a topmost supervisor, started as a child process to a supervisor in the including application, a relup file can be manually created.

-

Example: Assume we have a release containing an application - prim_app which have a supervisor prim_sup in its +

Example: Assume there is a release containing an application + prim_app, which has a supervisor prim_sup in its supervision tree.

-

In a new version of the release, our example application - ch_app should be included in prim_app. That is, - its topmost supervisor ch_sup should be started as a child +

In a new version of the release, the application ch_app + is to be included in prim_app. That is, + its topmost supervisor ch_sup is to be started as a child process to prim_sup.

-

1) Edit the code for prim_sup:

+

The workflow is as follows:

+

Step 1) Edit the code for prim_sup:

init(...) -> {ok, {...supervisor flags..., @@ -441,7 +453,7 @@ init(...) -> {ch_sup, {ch_sup,start_link,[]}, permanent,infinity,supervisor,[ch_sup]}, ...]}}. -

2) Edit the .app file for prim_app:

+

Step 2) Edit the .app file for prim_app:

{application, prim_app, [..., @@ -450,27 +462,29 @@ init(...) -> {included_applications, [ch_app]}, ... ]}. -

3) Create a new .rel file, including ch_app:

+

Step 3) Create a new .rel file, including + ch_app:

{release, ..., [..., {prim_app, "2"}, {ch_app, "1"}]}. +

The included application can be started in two ways. + This is described in the next two sections.

Application Restart -

4a) One way to start the included application is to restart - the entire prim_app application. Normally, we would then - use the restart_application instruction in - the .appup file for prim_app.

-

However, if we did this and then generated a relup file, - not only would it contain instructions for restarting (i.e. +

Step 4a) One way to start the included application is to + restart the entire prim_app application. Normally, the + restart_application instruction in the .appup file + for prim_app would be used.

+

However, if this is done and a relup file is generated, + not only would it contain instructions for restarting (that is, removing and adding) prim_app, it would also contain instructions for starting ch_app (and stopping it, in - the case of downgrade). This is due to the fact that - ch_app is included in the new .rel file, but not - in the old one.

+ the case of downgrade). This is because ch_app is included + in the new .rel file, but not in the old one.

Instead, a correct relup file can be created manually, either from scratch or by editing the generated version. The instructions for starting/stopping ch_app are @@ -512,7 +526,8 @@ init(...) ->

Supervisor Change -

4b) Another way to start the included application (or stop it +

Step 4b) Another way to start the included + application (or stop it in the case of downgrade) is by combining instructions for adding and removing child processes to/from prim_sup with instructions for loading/unloading all ch_app code and @@ -521,7 +536,7 @@ init(...) -> scratch or by editing a generated version. Load all code for ch_app first, and also load the application specification, before prim_sup is updated. When - downgrading, prim_sup should be updated first, before + downgrading, prim_sup is to be updated first, before the code for ch_app and its application specification are unloaded.

@@ -560,10 +575,10 @@ init(...) ->
Changing Non-Erlang Code

Changing code for a program written in another programming - language than Erlang, for example a port program, is very - application dependent and OTP provides no special support for it.

-

Example, changing code for a port program: Assume that - the Erlang process controlling the port is a gen_server + language than Erlang, for example, a port program, is + application-dependent and OTP provides no special support for it.

+

Example: When changing code for a port program, assume that + the Erlang process controlling the port is a gen_server portc and that the port is opened in the callback function init/1:

@@ -573,10 +588,11 @@ init(...) -> Port = open_port({spawn,PortPrg}, [...]), ..., {ok, #state{port=Port, ...}}. -

If the port program should be updated, we can extend the code for - the gen_server with a code_change function which closes - the old port and opens a new port. (If necessary, the gen_server - may first request data that needs to be saved from the port +

If the port program is to be updated, the code for the + gen_server can be extended with a code_change function, + which closes the old port and opens a new port. + (If necessary, the gen_server can + first request data that must be saved from the port program and pass this data to the new port):

code_change(_OldVsn, State, port) -> @@ -595,8 +611,8 @@ code_change(_OldVsn, State, port) -> [{"1", [{update, portc, {advanced,port}}]}], [{"1", [{update, portc, {advanced,port}}]}] ]. -

Make sure the priv directory where the C program is - located is included in the new release package:

+

Ensure that the priv directory, where the C program is + located, is included in the new release package:

 1> systools:make_tar("my_release", [{dirs,[priv]}]).
 ...
@@ -604,28 +620,29 @@ code_change(_OldVsn, State, port) ->
Emulator Restart and Upgrade -

There are two upgrade instructions that will restart the emulator:

- - restart_new_emulator - Intended for when erts, kernel, stdlib or sasl is - upgraded. It is automatically added when the relup file is - generated by systools:make_relup/3,4. It is executed - before all other upgrade instructions. See - Release - Handling for more information about this - instruction. - restart_emulator - Used when a restart of the emulator is required after all - other upgrade instructions are executed. See - Release - Handling for more information about this - instruction. - - +

Two upgrade instructions restart the emulator:

+ +

restart_new_emulator

+

Intended when ERTS, Kernel, STDLIB, or + SASL is upgraded. It is automatically added when the + relup file is generated by systools:make_relup/3,4. + It is executed before all other upgrade instructions. + For more information about this instruction, see + restart_new_emulator (Low-Level) in + Release Handling Instructions. +

+

restart_emulator

+

Used when a restart of the emulator is required after all + other upgrade instructions are executed. + For more information about this instruction, see + restart_emulator (Low-Level) in + Release Handling Instructions. +

+

If an emulator restart is necessary and no upgrade instructions - are needed, i.e. if the restart itself is enough for the - upgraded applications to start running the new versions, a very - simple .relup file can be created manually:

+ are needed, that is, if the restart itself is enough for the + upgraded applications to start running the new versions, a + simple relup file can be created manually:

{"B", [{"A", @@ -637,26 +654,27 @@ code_change(_OldVsn, State, port) -> }.

In this case, the release handler framework with automatic packing and unpacking of release packages, automatic path - updates etc. can be used without having to specify .appup - files.

+ updates, and so on, can be used without having to specify + .appup files.

- Emulator Upgrade from pre OTP R15 + Emulator Upgrade From Pre OTP R15

From OTP R15, an emulator upgrade is performed by restarting the emulator with new versions of the core applications - (kernel, stdlib and sasl) before loading code + (Kernel, STDLIB, and SASL) before loading code and running upgrade instruction for other applications. For this - to work, the release to upgrade from must includes OTP R15 or - later. For the case where the release to upgrade from includes an - earlier emulator version, systools:make_relup will create a + to work, the release to upgrade from must include OTP R15 or + later.

+

For the case where the release to upgrade from includes an + earlier emulator version, systools:make_relup creates a backwards compatible relup file. This means that all upgrade - instructions will be executed before the emulator is - restarted. The new application code will therefore be loaded into + instructions are executed before the emulator is + restarted. The new application code is therefore loaded into the old emulator. If the new code is compiled with the new - emulator, there might be cases where the beam format has changed - and beam files can not be loaded. To overcome this problem, the - new code should be compiled with the old emulator.

+ emulator, there can be cases where the beam format has changed + and beam files cannot be loaded. To overcome this problem, compile + the new code with the old emulator.

diff --git a/system/doc/design_principles/des_princ.xml b/system/doc/design_principles/des_princ.xml index e8f289b905..77c61eafb0 100644 --- a/system/doc/design_principles/des_princ.xml +++ b/system/doc/design_principles/des_princ.xml @@ -28,50 +28,52 @@ des_princ.xml -

The OTP Design Principles is a set of principles for how - to structure Erlang code in terms of processes, modules and - directories.

+ +

The OTP design principles define how to + structure Erlang code in terms of processes, modules, + and directories.

Supervision Trees

A basic concept in Erlang/OTP is the supervision tree. This is a process structuring model based on the idea of - workers and supervisors.

+ workers and supervisors:

- Workers are processes which perform computations, that is, + Workers are processes that perform computations, that is, they do the actual work. - Supervisors are processes which monitor the behaviour of + Supervisors are processes that monitor the behaviour of workers. A supervisor can restart a worker if something goes wrong. The supervision tree is a hierarchical arrangement of - code into supervisors and workers, making it possible to + code into supervisors and workers, which makes it possible to design and program fault-tolerant software. +

In the following figure, square boxes represent supervisors and + circles represent workers:

Supervision Tree -

In the figure above, square boxes represents supervisors and - circles represent workers.

Behaviours

In a supervision tree, many of the processes have similar structures; they follow similar patterns. For example, - the supervisors are very similar in structure. The only difference - between them is which child processes they supervise. Also, many - of the workers are servers in a server-client relation, finite - state machines, or event handlers such as error loggers.

+ the supervisors are similar in structure. The only difference + between them is which child processes they supervise. Many + of the workers are servers in a server-client relation, + finite-state machines, or event handlers such as error loggers.

Behaviours are formalizations of these common patterns. The idea is to divide the code for a process in a generic part - (a behaviour module) and a specific part (a callback module).

+ (a behaviour module) and a specific part (a + callback module).

The behaviour module is part of Erlang/OTP. To implement a process such as a supervisor, the user only has to implement - the callback module which should export a pre-defined set of + the callback module which is to export a pre-defined set of functions, the callback functions.

-

An example to illustrate how code can be divided into a generic - and a specific part: Consider the following code (written in +

The following example illustrates how code can be divided into a + generic and a specific part. Consider the following code (written in plain Erlang) for a simple server, which keeps track of a number of "channels". Other processes can allocate and free the channels by calling the functions alloc/0 and free/1, @@ -149,7 +151,7 @@ loop(Mod, State) -> State2 = Mod:handle_cast(Req, State), loop(Mod, State2) end. -

and a callback module ch2.erl:

+

And a callback module ch2.erl:

-module(ch2). -export([start/0]). @@ -173,27 +175,27 @@ handle_call(alloc, Chs) -> handle_cast({free, Ch}, Chs) -> free(Ch, Chs). % => Chs2 -

Note the following:

+

Notice the following:

- The code in server can be re-used to build many + The code in server can be reused to build many different servers. - The name of the server, in this example the atom - ch2, is hidden from the users of the client functions. - This means the name can be changed without affecting them. + The server name, in this example the atom + ch2, is hidden from the users of the client functions. This + means that the name can be changed without affecting them. The protocol (messages sent to and received from the server) - is hidden as well. This is good programming practice and allows - us to change the protocol without making changes to code using + is also hidden. This is good programming practice and allows + one to change the protocol without changing the code using the interface functions. - We can extend the functionality of server, without + The functionality of server can be extended without having to change ch2 or any other callback module.

(In ch1.erl and ch2.erl above, the implementation - of channels/0, alloc/1 and free/2 has been +

In ch1.erl and ch2.erl above, the implementation + of channels/0, alloc/1, and free/2 has been intentionally left out, as it is not relevant to the example. For completeness, one way to write these functions are given - below. Note that this is an example only, a realistic + below. This is an example only, a realistic implementation must be able to handle situations like running out - of channels to allocate etc.)

+ of channels to allocate, and so on.

channels() -> {_Allocated = [], _Free = lists:seq(1,100)}. @@ -208,30 +210,30 @@ free(Ch, {Alloc, Free} = Channels) -> false -> Channels end. -

Code written without making use of behaviours may be more - efficient, but the increased efficiency will be at the expense of +

Code written without using behaviours can be more + efficient, but the increased efficiency is at the expense of generality. The ability to manage all applications in the system - in a consistent manner is very important.

+ in a consistent manner is important.

Using behaviours also makes it easier to read and understand - code written by other programmers. Ad hoc programming structures, + code written by other programmers. Improvised programming structures, while possibly more efficient, are always more difficult to understand.

-

The module server corresponds, greatly simplified, +

The server module corresponds, greatly simplified, to the Erlang/OTP behaviour gen_server.

The standard Erlang/OTP behaviours are:

- - gen_server - For implementing the server of a client-server relation. - gen_fsm - For implementing finite state machines. - gen_event - For implementing event handling functionality. - supervisor - For implementing a supervisor in a supervision tree. - + +

gen_server

+

For implementing the server of a client-server relation

+

gen_fsm

+

For implementing finite-state machines

+

gen_event

+

For implementing event handling functionality

+

supervisor

+

For implementing a supervisor in a supervision tree

+

The compiler understands the module attribute -behaviour(Behaviour) and issues warnings about - missing callback functions. Example:

+ missing callback functions, for example:

-module(chs3). -behaviour(gen_server). @@ -248,13 +250,17 @@ free(Ch, {Alloc, Free} = Channels) -> some specific functionality. Components are with Erlang/OTP terminology called applications. Examples of Erlang/OTP applications are Mnesia, which has everything needed for - programming database services, and Debugger which is used to - debug Erlang programs. The minimal system based on Erlang/OTP - consists of the applications Kernel and STDLIB.

+ programming database services, and Debugger, which is used + to debug Erlang programs. The minimal system based on Erlang/OTP + consists of the following two applications:

+ + Kernel - Functionality necessary to run Erlang + STDLIB - Erlang standard libraries +

The application concept applies both to program structure (processes) and directory structure (modules).

-

The simplest kind of application does not have any processes, - but consists of a collection of functional modules. Such an +

The simplest applications do not have any processes, + but consist of a collection of functional modules. Such an application is called a library application. An example of a library application is STDLIB.

An application with processes is easiest implemented as a @@ -266,12 +272,11 @@ free(Ch, {Alloc, Free} = Channels) ->

Releases

A release is a complete system made out from a subset of - the Erlang/OTP applications and a set of user-specific - applications.

+ Erlang/OTP applications and a set of user-specific applications.

How to program releases is described in Releases.

How to install a release in a target environment is described - in the chapter about Target Systems in System Principles.

+ in the section about target systems in Section 2 System Principles.

diff --git a/system/doc/design_principles/distributed_applications.xml b/system/doc/design_principles/distributed_applications.xml index 4d4ba3136e..f40a24fdf5 100644 --- a/system/doc/design_principles/distributed_applications.xml +++ b/system/doc/design_principles/distributed_applications.xml @@ -28,71 +28,73 @@ distributed_applications.xml +
- Definition -

In a distributed system with several Erlang nodes, there may be - a need to control applications in a distributed manner. If + Introduction +

In a distributed system with several Erlang nodes, it can be + necessary to control applications in a distributed manner. If the node, where a certain application is running, goes down, - the application should be restarted at another node.

+ the application is to be restarted at another node.

Such an application is called a distributed application. - Note that it is the control of the application which is - distributed, all applications can of course be distributed in - the sense that they, for example, use services on other nodes.

-

Because a distributed application may move between nodes, some + Notice that it is the control of the application that is distributed. + All applications can be distributed in the sense that they, + for example, use services on other nodes.

+

Since a distributed application can move between nodes, some addressing mechanism is required to ensure that it can be addressed by other applications, regardless on which node it currently executes. This issue is not addressed here, but the - Kernel modules global or pg2 can be - used for this purpose.

+ global or pg2 modules in Kernel + can be used for this purpose.

Specifying Distributed Applications

Distributed applications are controlled by both the application - controller and a distributed application controller process, - dist_ac. Both these processes are part of the kernel - application. Therefore, distributed applications are specified by - configuring the kernel application, using the following - configuration parameter (see also kernel(6)):

- - distributed = [{Application, [Timeout,] NodeDesc}] - -

Specifies where the application Application = atom() - may execute. NodeDesc = [Node | {Node,...,Node}] is - a list of node names in priority order. The order between - nodes in a tuple is undefined.

-

Timeout = integer() specifies how many milliseconds to - wait before restarting the application at another node. - Defaults to 0.

-
-
+ controller and a distributed application controller process, + dist_ac. Both these processes are part of the Kernel + application. Distributed applications are thus specified by + configuring the Kernel application, using the following + configuration parameter (see also kernel(6)):

+

distributed = [{Application, [Timeout,] NodeDesc}]

+ + Specifies where the application Application = atom() + can execute. + >NodeDesc = [Node | {Node,...,Node}] is a list of + node names in priority order. The order between nodes in a tuple + is undefined. + Timeout = integer() specifies how many milliseconds + to wait before restarting the application at another node. It + defaults to 0. +

For distribution of application control to work properly, - the nodes where a distributed application may run must contact + the nodes where a distributed application can run must contact each other and negotiate where to start the application. This is - done using the following kernel configuration parameters:

- - sync_nodes_mandatory = [Node] - Specifies which other nodes must be started (within - the timeout specified by sync_nodes_timeout. - sync_nodes_optional = [Node] - Specifies which other nodes can be started (within - the timeout specified by sync_nodes_timeout. - sync_nodes_timeout = integer() | infinity - Specifies how many milliseconds to wait for the other nodes - to start. - -

When started, the node will wait for all nodes specified by + done using the following configuration parameters in + Kernel:

+ + sync_nodes_mandatory = [Node] - Specifies which + other nodes must be started (within the time-out specified by + sync_nodes_timeout). + sync_nodes_optional = [Node] - Specifies which + other nodes can be started (within the time-out specified by + sync_nodes_timeout). + sync_nodes_timeout = integer() | infinity - + Specifies how many milliseconds to wait for the other nodes to + start. + +

When started, the node waits for all nodes specified by sync_nodes_mandatory and sync_nodes_optional to - come up. When all nodes have come up, or when all mandatory nodes - have come up and the time specified by sync_nodes_timeout - has elapsed, all applications will be started. If not all - mandatory nodes have come up, the node will terminate.

-

Example: An application myapp should run at the node - cp1@cave. If this node goes down, myapp should + come up. When all nodes are up, or when all mandatory nodes + are up and the time specified by sync_nodes_timeout + has elapsed, all applications start. If not all + mandatory nodes are up, the node terminates.

+

Example:

+

An application myapp is to run at the node + cp1@cave. If this node goes down, myapp is to be restarted at cp2@cave or cp3@cave. A system - configuration file cp1.config for cp1@cave could - look like:

+ configuration file cp1.config for cp1@cave can + look as follows:

[{kernel, [{distributed, [{myapp, 5000, [cp1@cave, {cp2@cave, cp3@cave}]}]}, @@ -103,13 +105,13 @@ ].

The system configuration files for cp2@cave and cp3@cave are identical, except for the list of mandatory - nodes which should be [cp1@cave, cp3@cave] for + nodes, which is to be [cp1@cave, cp3@cave] for cp2@cave and [cp1@cave, cp2@cave] for cp3@cave.
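For example, cp2.config then differs from cp1.config only in the sync_nodes_mandatory entry (a sketch; the time-out values are illustrative):

```erlang
[{kernel,
  [{distributed, [{myapp, 5000, [cp1@cave, {cp2@cave, cp3@cave}]}]},
   {sync_nodes_mandatory, [cp1@cave, cp3@cave]},
   {sync_nodes_timeout, 5000}]}].
```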

All involved nodes must have the same value for - distributed and sync_nodes_timeout, or - the behaviour of the system is undefined.

+ distributed and sync_nodes_timeout. + Otherwise the system behaviour is undefined.

@@ -117,28 +119,29 @@ Starting and Stopping Distributed Applications

When all involved (mandatory) nodes have been started, the distributed application can be started by calling - application:start(Application) at all of these nodes.

-

It is of course also possible to use a boot script (see - Releases) which - automatically starts the application.

-

The application will be started at the first node, specified - by the distributed configuration parameter, which is up - and running. The application is started as usual. That is, an - application master is created and calls the application callback - function:

+ application:start(Application) at all of these + nodes.

+

A boot script (see + Releases) + can be used that automatically starts the application.

+

The application is started at the first operational node that + is listed in the list of nodes in the distributed + configuration parameter. The application is started as usual. + That is, an application master is created and calls the + application callback function:

Module:start(normal, StartArgs) -

Example: Continuing the example from the previous section, - the three nodes are started, specifying the system configuration - file:

+

Example:

+

Continuing the example from the previous section, the three nodes + are started, specifying the system configuration file:

 > erl -sname cp1 -config cp1
 > erl -sname cp2 -config cp2
 > erl -sname cp3 -config cp3
-

When all nodes are up and running, myapp can be started. +

When all nodes are operational, myapp can be started. This is achieved by calling application:start(myapp) at all three nodes. It is then started at cp1, as shown in - the figure below.

+ the following figure:

Application myapp - Situation 1 @@ -150,31 +153,33 @@ Module:start(normal, StartArgs)
Failover

If the node where the application is running goes down, - the application is restarted (after the specified timeout) at - the first node, specified by the distributed configuration - parameter, which is up and running. This is called a + the application is restarted (after the specified time-out) at + the first operational node that is listed in the list of nodes + in the distributed configuration parameter. This is called a failover.

The application is started the normal way at the new node, that is, by the application master calling:

Module:start(normal, StartArgs) -

Exception: If the application has the start_phases key - defined (see Included Applications), then the application is instead started - by calling:

+

An exception is if the application has the start_phases + key defined + (see Included Applications). + The application is then instead started by calling:

Module:start({failover, Node}, StartArgs) -

where Node is the terminated node.

-

Example: If cp1 goes down, the system checks which one of +

Here Node is the terminated node.

+

Example:

+

If cp1 goes down, the system checks which one of the other nodes, cp2 or cp3, has the least number of running applications, but waits for 5 seconds for cp1 to restart. If cp1 does not restart and cp2 runs fewer - applications than cp3, then myapp is restarted on + applications than cp3, myapp is restarted on cp2.

Application myapp - Situation 2 -

Suppose now that cp2 goes down as well and does not +

Suppose now that cp2 also goes down and does not restart within 5 seconds. myapp is now restarted on cp3.

@@ -186,28 +191,29 @@ Module:start({failover, Node}, StartArgs)
Takeover

If a node is started, which has higher priority according - to distributed, than the node where a distributed - application is currently running, the application will be - restarted at the new node and stopped at the old node. This is + to distributed than the node where a distributed + application is running, the application is restarted at the + new node and stopped at the old node. This is called a takeover.

The application is started by the application master calling:

Module:start({takeover, Node}, StartArgs) -

where Node is the old node.

-

Example: If myapp is running at cp3, and if - cp2 now restarts, it will not restart myapp, - because the order between nodes cp2 and cp3 is +

Here Node is the old node.

+

Example:

+

If myapp is running at cp3, and if + cp2 now restarts, it does not restart myapp, + as the order between the cp2 and cp3 nodes is undefined.

Application myapp - Situation 4 -

However, if cp1 restarts as well, the function +

However, if cp1 also restarts, the function application:takeover/2 moves myapp to cp1, - because cp1 has a higher priority than cp3 for this - application. In this case, - Module:start({takeover, cp3@cave}, StartArgs) is executed - at cp1 to start the application.

+ as cp1 has a higher priority than cp3 for this + application. In this case, + Module:start({takeover, cp3@cave}, StartArgs) is + executed at cp1 to start the application.

Application myapp - Situation 5 diff --git a/system/doc/design_principles/events.xml b/system/doc/design_principles/events.xml index 529e12c216..6e5afb939e 100644 --- a/system/doc/design_principles/events.xml +++ b/system/doc/design_principles/events.xml @@ -21,7 +21,7 @@ - Gen_Event Behaviour + gen_event Behaviour @@ -29,35 +29,36 @@ events.xml -

This chapter should be read in conjunction with - gen_event(3), where all interface functions and callback - functions are described in detail.

+

This section is to be read with the gen_event(3) manual + page in STDLIB, where all interface functions and callback + functions are described in detail.

Event Handling Principles

In OTP, an event manager is a named object to which - events can be sent. An event could be, for example, - an error, an alarm or some information that should be logged.

-

In the event manager, zero, one or several event handlers are installed. When the event manager is notified - about an event, the event will be processed by all the installed + events can be sent. An event can be, for example, + an error, an alarm, or some information that is to be logged.

+

In the event manager, zero, one, or many event handlers + are installed. When the event manager is notified + about an event, the event is processed by all the installed event handlers. For example, an event manager for handling errors - can by default have a handler installed which writes error + can by default have a handler installed, which writes error messages to the terminal. If the error messages during a certain - period should be saved to a file as well, the user adds another - event handler which does this. When logging to file is no longer - necessary, this event handler is deleted.

+ period is to be saved to a file as well, the user adds another + event handler that does this. When logging to the file is no + longer necessary, this event handler is deleted.

An event manager is implemented as a process and each event handler is implemented as a callback module.

The event manager essentially maintains a list of {Module, State} pairs, where each Module is an - event handler, and State the internal state of that event - handler.
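Conceptually, notifying the manager amounts to applying the event to every pair in this list. The following is an illustrative sketch only, not the actual gen_event implementation:

```erlang
%% Conceptual sketch (not the real gen_event code): apply an event
%% to every installed handler and keep each handler's new state.
dispatch(Event, Handlers) ->
    [begin
         {ok, State1} = Module:handle_event(Event, State),
         {Module, State1}
     end
     || {Module, State} <- Handlers].
```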

+ event handler, and State is the internal state of that + event handler.

Example

The callback module for the event handler writing error messages - to the terminal could look like:

+ to the terminal can look as follows:

-module(terminal_logger). -behaviour(gen_event). @@ -74,7 +75,7 @@ handle_event(ErrorMsg, State) -> terminate(_Args, _State) -> ok.

The callback module for the event handler writing error messages - to a file could look like:

+ to a file can look as follows:

-module(file_logger). -behaviour(gen_event). @@ -98,29 +99,28 @@ terminate(_Args, Fd) -> Starting an Event Manager

To start an event manager for handling errors, as described in - the example above, call the following function:

+ the previous example, call the following function:

gen_event:start_link({local, error_man})

This function spawns and links to a new process, an event manager.

-

The argument, {local, error_man} specifies the name. In - this case, the event manager will be locally registered as - error_man.

+

The argument, {local, error_man}, specifies the name. The + event manager is then locally registered as error_man.

If the name is omitted, the event manager is not registered. - Instead its pid must be used. The name could also be given + Instead its pid must be used. The name can also be given as {global, Name}, in which case the event manager is registered using global:register_name/2.

gen_event:start_link must be used if the event manager is - part of a supervision tree, i.e. is started by a supervisor. - There is another function gen_event:start to start a - stand-alone event manager, i.e. an event manager which is not + part of a supervision tree, that is, started by a supervisor. + There is another function, gen_event:start, to start a + standalone event manager, that is, an event manager that is not part of a supervision tree.

Adding an Event Handler -

Here is an example using the shell on how to start an event - manager and add an event handler to it:

+

The following example shows how to start an event manager and + add an event handler to it by using the shell:

 1> gen_event:start({local, error_man}).
 {ok,<0.31.0>}
@@ -128,16 +128,16 @@ gen_event:start_link({local, error_man})
 ok

This function sends a message to the event manager registered as error_man, telling it to add the event handler - terminal_logger. The event manager will call the callback - function terminal_logger:init([]), where the argument [] - is the third argument to add_handler. init is - expected to return {ok, State}, where State is + terminal_logger. The event manager calls the callback + function terminal_logger:init([]), where the argument + [] is the third argument to add_handler. init + is expected to return {ok, State}, where State is the internal state of the event handler.

init(_Args) -> {ok, []}.

Here, init does not need any input data and ignores its - argument. Also, for terminal_logger the internal state is + argument. For terminal_logger, the internal state is not used. For file_logger, the internal state is used to save the open file descriptor.

@@ -147,7 +147,7 @@ init(File) ->
- Notifying About Events + Notifying about Events
 3> gen_event:notify(error_man, no_reply).
 ***Error*** no_reply
@@ -158,7 +158,7 @@ ok
When the event is received, the event manager calls handle_event(Event, State) for each installed event handler, in the same order as they were added. The function is - expected to return a tuple {ok, State1}, where + expected to return a tuple {ok,State1}, where State1 is a new value for the state of the event handler.

In terminal_logger:
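The terminal_logger clause for this can be sketched as follows; the exact format string is an assumption, chosen to match the shell output shown earlier:

```erlang
%% Sketch of the handle_event/2 clause for terminal_logger.
%% The format string is assumed, matching the "***Error***" output
%% shown in the shell example above.
handle_event(ErrorMsg, State) ->
    io:format("***Error*** ~p~n", [ErrorMsg]),
    {ok, State}.
```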

@@ -179,17 +179,17 @@ handle_event(ErrorMsg, Fd) -> ok

This function sends a message to the event manager registered as error_man, telling it to delete the event handler - terminal_logger. The event manager will call the callback + terminal_logger. The event manager calls the callback function terminal_logger:terminate([], State), where - the argument [] is the third argument to delete_handler. - terminate should be the opposite of init and do any + the argument [] is the third argument to delete_handler. + terminate is to be the opposite of init and do any necessary cleaning up. Its return value is ignored.

For terminal_logger, no cleaning up is necessary:

terminate(_Args, _State) -> ok.

For file_logger, the file descriptor opened in init - needs to be closed:

+ must be closed:

terminate(_Args, Fd) -> file:close(Fd). @@ -197,20 +197,22 @@ terminate(_Args, Fd) ->
Stopping -

When an event manager is stopped, it will give each of +

When an event manager is stopped, it gives each of the installed event handlers the chance to clean up by calling terminate/2, the same way as when deleting a handler.

In a Supervision Tree

If the event manager is part of a supervision tree, no stop - function is needed. The event manager will automatically be + function is needed. The event manager is automatically terminated by its supervisor. Exactly how this is done is - defined by a shutdown strategy set in the supervisor.

+ defined by a + shutdown strategy + set in the supervisor.

- Stand-Alone Event Managers + Standalone Event Managers

An event manager can also be stopped by calling:

 > gen_event:stop(error_man).
@@ -219,16 +221,17 @@ ok
Handling Other Messages -

If the gen_event should be able to receive other messages than - events, the callback function handle_info(Info, StateName, StateData) - must be implemented to handle them. Examples of - other messages are exit messages, if the gen_event is linked to +

If the gen_event is to be able to receive other messages + than events, the callback function + handle_info(Info, State) + must be implemented to handle them. Examples of other + messages are exit messages, if the gen_event is linked to other processes (than the supervisor) and trapping exit signals.

handle_info({'EXIT', Pid, Reason}, State) -> ..code to handle exits here.. {ok, NewState}. -

The code_change method also has to be implemented.

+

The code_change method must also be implemented.

code_change(OldVsn, State, Extra) -> ..code to convert state (and more) during code change diff --git a/system/doc/design_principles/fsm.xml b/system/doc/design_principles/fsm.xml index 9dce159dca..ef961f5fad 100644 --- a/system/doc/design_principles/fsm.xml +++ b/system/doc/design_principles/fsm.xml @@ -21,32 +21,33 @@ - Gen_Fsm Behaviour + gen_fsm Behaviour fsm.xml -

This chapter should be read in conjunction with gen_fsm(3), - where all interface functions and callback functions are described - in detail.

+ +

This section is to be read with the gen_fsm(3) manual page + in STDLIB, where all interface functions and callback + functions are described in detail.

- Finite State Machines -

A finite state machine, FSM, can be described as a set of + Finite-State Machines +

A Finite-State Machine (FSM) can be described as a set of relations of the form:

 State(S) x Event(E) -> Actions(A), State(S')

These relations are interpreted as meaning:

-

If we are in state S and the event E occurs, we - should perform the actions A and make a transition to - the state S'.

+

If we are in state S and event E occurs, we + are to perform actions A and make a transition to + state S'.

For an FSM implemented using the gen_fsm behaviour, the state transition rules are written as a number of Erlang - functions which conform to the following convention:

+ functions, which conform to the following convention:

 StateName(Event, StateData) ->
     .. code for actions here ...
@@ -55,16 +56,16 @@ StateName(Event, StateData) ->
 
   
Example -

A door with a code lock could be viewed as an FSM. Initially, +

A door with a code lock can be viewed as an FSM. Initially, the door is locked. Anytime someone presses a button, this generates an event. Depending on what buttons have been pressed - before, the sequence so far may be correct, incomplete or wrong.

-

If it is correct, the door is unlocked for 30 seconds (30000 ms). + before, the sequence so far can be correct, incomplete, or wrong.

+

If it is correct, the door is unlocked for 30 seconds (30,000 ms). If it is incomplete, we wait for another button to be pressed. If it is wrong, we start all over, waiting for a new button sequence.

Implementing the code lock FSM using gen_fsm results in - this callback module:

+ the following callback module:

- Starting a Gen_Fsm -

In the example in the previous section, the gen_fsm is started by - calling code_lock:start_link(Code):

+ Starting gen_fsm +

In the example in the previous section, the gen_fsm is + started by calling code_lock:start_link(Code):

start_link(Code) -> gen_fsm:start_link({local, code_lock}, code_lock, lists:reverse(Code), []). -

start_link calls the function gen_fsm:start_link/4. - This function spawns and links to a new process, a gen_fsm.

+

start_link calls the function gen_fsm:start_link/4, + which spawns and links to a new process, a gen_fsm.

-

The first argument {local, code_lock} specifies - the name. In this case, the gen_fsm will be locally registered - as code_lock.

-

If the name is omitted, the gen_fsm is not registered. - Instead its pid must be used. The name could also be given as - {global, Name}, in which case the gen_fsm is registered - using global:register_name/2.

+

The first argument, {local, code_lock}, specifies + the name. In this case, the gen_fsm is locally + registered as code_lock.

+

If the name is omitted, the gen_fsm is not registered. + Instead its pid must be used. The name can also be given + as {global, Name}, in which case the gen_fsm is + registered using global:register_name/2.

The second argument, code_lock, is the name of - the callback module, that is the module where the callback + the callback module, that is, the module where the callback functions are located.

-

In this case, the interface functions (start_link and - button) are located in the same module as the callback - functions (init, locked and open). This +

The interface functions (start_link and button) + are then located in the same module as the callback + functions (init, locked, and open). This is normally good programming practice, to have the code corresponding to one process contained in one module.

-

The third argument, Code, is a list of digits which is passed - reversed to the callback function init. Here, init +

The third argument, Code, is a list of digits that + is passed in reverse order to the callback function init. + Here, init gets the correct code for the lock as input.

-

The fourth argument, [], is a list of options. See - gen_fsm(3) for available options.

+

The fourth argument, [], is a list of options. See + the gen_fsm(3) manual page for available options.

-

If name registration succeeds, the new gen_fsm process calls +

If name registration succeeds, the new gen_fsm process calls the callback function code_lock:init(Code). This function is expected to return {ok, StateName, StateData}, where - StateName is the name of the initial state of the gen_fsm. - In this case locked, assuming the door is locked to begin - with. StateData is the internal state of the gen_fsm. (For - gen_fsms, the internal state is often referred to 'state data' to + StateName is the name of the initial state of the + gen_fsm. In this case locked, assuming the door is + locked to begin with. StateData is the internal state of + the gen_fsm. (For gen_fsm, the internal state is + often referred to as 'state data' to distinguish it from the state as in states of a state machine.) In this case, the state data is the button sequence so far (empty to begin with) and the correct code of the lock.

init(Code) -> {ok, locked, {[], Code}}. -

Note that gen_fsm:start_link is synchronous. It does not - return until the gen_fsm has been initialized and is ready to +

gen_fsm:start_link is synchronous. It does not return until + the gen_fsm has been initialized and is ready to receive notifications.

-

gen_fsm:start_link must be used if the gen_fsm is part of - a supervision tree, i.e. is started by a supervisor. There is - another function gen_fsm:start to start a stand-alone - gen_fsm, i.e. a gen_fsm which is not part of a supervision tree.

+

gen_fsm:start_link must be used if the gen_fsm is + part of a supervision tree, that is, started by a supervisor. There + is another function, gen_fsm:start, to start a standalone + gen_fsm, that is, a gen_fsm that is not part of a + supervision tree.

- Notifying About Events + Notifying about Events

The function notifying the code lock about a button event is implemented using gen_fsm:send_event/2:

button(Digit) -> gen_fsm:send_event(code_lock, {button, Digit}). -

code_lock is the name of the gen_fsm and must agree with - the name used to start it. {button, Digit} is the actual - event.

-

The event is made into a message and sent to the gen_fsm. When - the event is received, the gen_fsm calls - StateName(Event, StateData) which is expected to return a - tuple {next_state, StateName1, StateData1}. +

code_lock is the name of the gen_fsm and must + agree with the name used to start it. + {button, Digit} is the actual event.

+

The event is made into a message and sent to the gen_fsm. + When the event is received, the gen_fsm calls + StateName(Event, StateData), which is expected to return a + tuple {next_state,StateName1,StateData1}. StateName is the name of the current state and StateName1 is the name of the next state to go to. StateData1 is a new value for the state data of - the gen_fsm.

+ the gen_fsm.

case [Digit|SoFar] of @@ -198,20 +202,21 @@ open(timeout, State) ->

If the door is locked and a button is pressed, the complete button sequence so far is compared with the correct code for the lock and, depending on the result, the door is either unlocked - and the gen_fsm goes to state open, or the door remains in - state locked.

+ and the gen_fsm goes to state open, or the door + remains in state locked.

- Timeouts + Time-Outs

When a correct code has been given, the door is unlocked and the following tuple is returned from locked/2:

{next_state, open, {[], Code}, 30000}; -

30000 is a timeout value in milliseconds. After 30000 ms, i.e. - 30 seconds, a timeout occurs. Then StateName(timeout, StateData) is called. In this case, the timeout occurs when - the door has been in state open for 30 seconds. After that - the door is locked again:

+

30,000 is a time-out value in milliseconds. After this time, + that is, 30 seconds, a time-out occurs. Then, + StateName(timeout, StateData) is called. The time-out + then occurs when the door has been in state open for 30 + seconds. After that the door is locked again:

open(timeout, State) -> do_lock(), @@ -220,7 +225,7 @@ open(timeout, State) ->
All State Events -

Sometimes an event can arrive at any state of the gen_fsm. +

Sometimes an event can arrive at any state of the gen_fsm. Instead of sending the message with gen_fsm:send_event/2 and writing one clause handling the event for each state function, the message can be sent with gen_fsm:send_all_state_event/2 @@ -245,15 +250,16 @@ handle_event(stop, _StateName, StateData) ->

In a Supervision Tree -

If the gen_fsm is part of a supervision tree, no stop function - is needed. The gen_fsm will automatically be terminated by its - supervisor. Exactly how this is done is defined by a - shutdown strategy +

If the gen_fsm is part of a supervision tree, no stop + function is needed. The gen_fsm is automatically + terminated by its supervisor. Exactly how this is done is + defined by a + shutdown strategy set in the supervisor.

If it is necessary to clean up before termination, the shutdown - strategy must be a timeout value and the gen_fsm must be set to - trap exit signals in the init function. When ordered - to shutdown, the gen_fsm will then call the callback function + strategy must be a time-out value and the gen_fsm must be + set to trap exit signals in the init function. When ordered + to shutdown, the gen_fsm then calls the callback function terminate(shutdown, StateName, StateData):

init(Args) -> @@ -270,9 +276,9 @@ terminate(shutdown, StateName, StateData) ->
- Stand-Alone Gen_Fsms -

If the gen_fsm is not part of a supervision tree, a stop - function may be useful, for example:

+ Standalone gen_fsm +

If the gen_fsm is not part of a supervision tree, a stop + function can be useful, for example:

... -export([stop/0]). @@ -290,26 +296,28 @@ handle_event(stop, _StateName, StateData) -> terminate(normal, _StateName, _StateData) -> ok.

The callback function handling the stop event returns a - tuple {stop,normal,StateData1}, where normal + tuple, {stop,normal,StateData1}, where normal specifies that it is a normal termination and StateData1 - is a new value for the state data of the gen_fsm. This will - cause the gen_fsm to call + is a new value for the state data of the gen_fsm. This + causes the gen_fsm to call terminate(normal,StateName,StateData1) and then - terminate gracefully:

+ it terminates gracefully:

Handling Other Messages -

If the gen_fsm should be able to receive other messages than - events, the callback function handle_info(Info, StateName, StateData) must be implemented to handle them. Examples of - other messages are exit messages, if the gen_fsm is linked to +

If the gen_fsm is to be able to receive other messages + than events, the callback function + handle_info(Info, StateName, StateData) must be implemented + to handle them. Examples of + other messages are exit messages, if the gen_fsm is linked to other processes (than the supervisor) and trapping exit signals.

handle_info({'EXIT', Pid, Reason}, StateName, StateData) -> ..code to handle exits here.. {next_state, StateName1, StateData1}. -

The code_change method also has to be implemented.

+

The code_change method must also be implemented.

code_change(OldVsn, StateName, StateData, Extra) -> ..code to convert state (and more) during code change diff --git a/system/doc/design_principles/gen_server_concepts.xml b/system/doc/design_principles/gen_server_concepts.xml index d24d87aa03..d721845c6d 100644 --- a/system/doc/design_principles/gen_server_concepts.xml +++ b/system/doc/design_principles/gen_server_concepts.xml @@ -21,7 +21,7 @@ - Gen_Server Behaviour + gen_server Behaviour @@ -29,16 +29,16 @@ gen_server_concepts.xml -

This chapter should be read in conjunction with - gen_server(3), - where all interface functions and callback - functions are described in detail.

+

This section is to be read with the + gen_server(3) + manual page in STDLIB, where all interface functions and + callback functions are described in detail.

Client-Server Principles

The client-server model is characterized by a central server and an arbitrary number of clients. The client-server model is - generally used for resource management operations, where several + used for resource management operations, where several different clients want to share a common resource. The server is responsible for managing this resource.

@@ -49,9 +49,10 @@
Example -

An example of a simple server written in plain Erlang was - given in Overview. - The server can be re-implemented using gen_server, +

An example of a simple server written in plain Erlang is + provided in + Overview. + The server can be reimplemented using gen_server, resulting in this callback module:

@@ -86,61 +87,60 @@ handle_cast({free, Ch}, Chs) ->
Starting a Gen_Server -

In the example in the previous section, the gen_server is started - by calling ch3:start_link():

+

In the example in the previous section, gen_server is + started by calling ch3:start_link():

start_link() -> gen_server:start_link({local, ch3}, ch3, [], []) => {ok, Pid} -

start_link calls the function - gen_server:start_link/4. This function spawns and links to - a new process, a gen_server.

+

start_link calls function gen_server:start_link/4. + This function spawns and links to a new process, a + gen_server.

-

The first argument {local, ch3} specifies the name. In - this case, the gen_server will be locally registered as - ch3.

-

If the name is omitted, the gen_server is not registered. - Instead its pid must be used. The name could also be given - as {global, Name}, in which case the gen_server is +

The first argument, {local, ch3}, specifies the name. + The gen_server is then locally registered as ch3.

+

If the name is omitted, the gen_server is not registered. + Instead its pid must be used. The name can also be given + as {global, Name}, in which case the gen_server is registered using global:register_name/2.

The second argument, ch3, is the name of the callback - module, that is the module where the callback functions are + module, that is, the module where the callback functions are located.

-

In this case, the interface functions (start_link, - alloc and free) are located in the same module - as the callback functions (init, handle_call and +

The interface functions (start_link, alloc, + and free) are then located in the same module + as the callback functions (init, handle_call, and handle_cast). This is normally good programming practice, to have the code corresponding to one process contained in one module.

-

The third argument, [], is a term which is passed as-is to - the callback function init. Here, init does not +

The third argument, [], is a term that is passed as is + to the callback function init. Here, init does not + need any input data and ignores the argument.

-

The fourth argument, [], is a list of options. See - gen_server(3) for available options.

+

The fourth argument, [], is a list of options. See the + gen_server(3) manual page for available options.

-

If name registration succeeds, the new gen_server process calls - the callback function ch3:init([]). init is expected - to return {ok, State}, where State is the internal - state of the gen_server. In this case, the state is the available - channels.

+

If name registration succeeds, the new gen_server process + calls the callback function ch3:init([]). init is + expected to return {ok, State}, where State is the + internal state of the gen_server. In this case, the state + is the available channels.

init(_Args) -> {ok, channels()}. -

Note that gen_server:start_link is synchronous. It does - not return until the gen_server has been initialized and is ready +

gen_server:start_link is synchronous. It does not return + until the gen_server has been initialized and is ready to receive requests.

-

gen_server:start_link must be used if the gen_server is - part of a supervision tree, i.e. is started by a supervisor. - There is another function gen_server:start to start a - stand-alone gen_server, i.e. a gen_server which is not part of a - supervision tree.

+

gen_server:start_link must be used if the gen_server + is part of a supervision tree, that is, started by a supervisor. + There is another function, gen_server:start, to start a + standalone gen_server, that is, a gen_server that + is not part of a supervision tree.

@@ -150,14 +150,17 @@ init(_Args) -> alloc() -> gen_server:call(ch3, alloc). -

ch3 is the name of the gen_server and must agree with - the name used to start it. alloc is the actual request.

-

The request is made into a message and sent to the gen_server. - When the request is received, the gen_server calls - handle_call(Request, From, State) which is expected to - return a tuple {reply, Reply, State1}. Reply is - the reply which should be sent back to the client, and - State1 is a new value for the state of the gen_server.

+

ch3 is the name of the gen_server and must agree + with the name used to start it. alloc is the actual + request.

+

The request is made into a message and sent to the + gen_server. When the request is received, the + gen_server calls + handle_call(Request, From, State), which is expected to + return a tuple {reply,Reply,State1}. Reply is + the reply that is to be sent back to the client, and + State1 is a new value for the state of the + gen_server.

handle_call(alloc, _From, Chs) -> {Ch, Chs2} = alloc(Chs), @@ -166,8 +169,8 @@ handle_call(alloc, _From, Chs) -> the new state is the set of remaining available channels Chs2.

Thus, the call ch3:alloc() returns the allocated channel - Ch and the gen_server then waits for new requests, now - with an updated list of available channels.

+ Ch and the gen_server then waits for new requests, + now with an updated list of available channels.

@@ -177,20 +180,21 @@ handle_call(alloc, _From, Chs) -> free(Ch) -> gen_server:cast(ch3, {free, Ch}). -

ch3 is the name of the gen_server. {free, Ch} is - the actual request.

-

The request is made into a message and sent to the gen_server. +

ch3 is the name of the gen_server. + {free, Ch} is the actual request.

+

The request is made into a message and sent to the + gen_server. cast, and thus free, then returns ok.

-

When the request is received, the gen_server calls - handle_cast(Request, State) which is expected to - return a tuple {noreply, State1}. State1 is a new - value for the state of the gen_server.

+

When the request is received, the gen_server calls + handle_cast(Request, State), which is expected to + return a tuple {noreply,State1}. State1 is a new + value for the state of the gen_server.

handle_cast({free, Ch}, Chs) -> Chs2 = free(Ch, Chs), {noreply, Chs2}.

In this case, the new state is the updated list of available - channels Chs2. The gen_server is now ready for new + channels Chs2. The gen_server is now ready for new requests.

@@ -199,15 +203,17 @@ handle_cast({free, Ch}, Chs) ->
In a Supervision Tree -

If the gen_server is part of a supervision tree, no stop - function is needed. The gen_server will automatically be +

If the gen_server is part of a supervision tree, no stop + function is needed. The gen_server is automatically terminated by its supervisor. Exactly how this is done is - defined by a shutdown strategy set in the supervisor.

+ defined by a + shutdown strategy + set in the supervisor.

If it is necessary to clean up before termination, the shutdown - strategy must be a timeout value and the gen_server must be set - to trap exit signals in the init function. When ordered - to shutdown, the gen_server will then call the callback function - terminate(shutdown, State):

+ strategy must be a time-out value and the gen_server must + be set to trap exit signals in function init. When ordered + to shutdown, the gen_server then calls the callback + function terminate(shutdown, State):

init(Args) -> ..., @@ -223,9 +229,9 @@ terminate(shutdown, State) ->
- Stand-Alone Gen_Servers -

If the gen_server is not part of a supervision tree, a stop - function may be useful, for example:

Standalone gen_server

If the gen_server is not part of a supervision tree, a + stop function can be useful, for example:

... export([stop/0]). @@ -245,26 +251,26 @@ handle_cast({free, Ch}, State) -> terminate(normal, State) -> ok.

The callback function handling the stop request returns - a tuple {stop, normal, State1}, where normal + a tuple {stop,normal,State1}, where normal specifies that it is a normal termination and State1 is - a new value for the state of the gen_server. This will cause - the gen_server to call terminate(normal,State1) and then - terminate gracefully.

+ a new value for the state of the gen_server. This causes + the gen_server to call terminate(normal, State1) + and then it terminates gracefully.

Handling Other Messages -

If the gen_server should be able to receive other messages than - requests, the callback function handle_info(Info, State) +

If the gen_server is to be able to receive other messages + than requests, the callback function handle_info(Info, State) must be implemented to handle them. Examples of other messages - are exit messages, if the gen_server is linked to other processes - (than the supervisor) and trapping exit signals.

+ are exit messages, if the gen_server is linked to other + processes (than the supervisor) and trapping exit signals.

handle_info({'EXIT', Pid, Reason}, State) -> ..code to handle exits here.. {noreply, State1}. -

The code_change method also has to be implemented.

+

The code_change method must also be implemented.

code_change(OldVsn, State, Extra) -> ..code to convert state (and more) during code change diff --git a/system/doc/design_principles/included_applications.xml b/system/doc/design_principles/included_applications.xml index 3aa43fd595..7f139edf76 100644 --- a/system/doc/design_principles/included_applications.xml +++ b/system/doc/design_principles/included_applications.xml @@ -28,35 +28,36 @@ included_applications.xml +
- Definition + Introduction

An application can include other applications. An included application has its own application directory and .app file, but it is started as part of the supervisor tree of another application.

An application can only be included by one other application.

An included application can include other applications.

-

An application which is not included by any other application is +

An application that is not included by any other application is called a primary application.

- Primary Application and Included Applications. + Primary Application and Included Applications -

The application controller will automatically load any included - applications when loading a primary application, but not start +

The application controller automatically loads any included + applications when loading a primary application, but does not start them. Instead, the top supervisor of the included application must be started by a supervisor in the including application.

This means that when running, an included application is in fact - part of the primary application and a process in an included - application will consider itself belonging to the primary + part of the primary application, and a process in an included + application considers itself belonging to the primary application.

Specifying Included Applications

Which applications to include is defined by - the included_applications key in the .app file.

+ the included_applications key in the .app file:

 {application, prim_app,
  [{description, "Tree application"},
@@ -71,7 +72,7 @@
   
- Synchronizing Processes During Startup + Synchronizing Processes during Startup

The supervisor tree of an included application is started as part of the supervisor tree of the including application. If there is a need for synchronization between processes in @@ -79,12 +80,12 @@ by using start phases.

Start phases are defined by the start_phases key in the .app file as a list of tuples {Phase,PhaseArgs}, - where Phase is an atom and PhaseArgs is a term. - Also, the value of the mod key of the including application + where Phase is an atom and PhaseArgs is a term.

+

The value of the mod key of the including application must be set to {application_starter,[Module,StartArgs]}, - where Module as usual is the application callback module - and StartArgs a term provided as argument to the callback - function Module:start/2.

+ where Module as usual is the application callback module. + StartArgs is a term provided as argument to the callback + function Module:start/2:

{application, prim_app, [{description, "Tree application"}, @@ -108,36 +109,38 @@ {mod, {incl_app_cb,[]}} ]}.

When starting a primary application with included applications, - the primary application is started the normal way: - The application controller creates an application master for - the application, and the application master calls + the primary application is started the normal way, that is:

+ + The application controller creates an application master for + the application + The application master calls Module:start(normal, StartArgs) to start the top - supervisor.

+ supervisor.
+

Then, for the primary application and each included application in top-down, left-to-right order, the application master calls Module:start_phase(Phase, Type, PhaseArgs) for each phase - defined for for the primary application, in that order. - Note that if a phase is not defined for an included application, + defined for the primary application, in that order. If a phase + is not defined for an included application, the function is not called for this phase and application.

The following requirements apply to the .app file for an included application:

- The {mod, {Module,StartArgs}} option must be - included. This option is used to find the callback module - Module of the application. StartArgs is ignored, - as Module:start/2 is called only for the primary - application. + The {mod, {Module,StartArgs}} option must be included. + This option is used to find the callback module Module of the + application. StartArgs is ignored, as Module:start/2 + is called only for the primary application. If the included application itself contains included - applications, instead the option - {mod, {application_starter, [Module,StartArgs]}} must be - included. + applications, instead the + {mod, {application_starter, [Module,StartArgs]}} option + must be included. The {start_phases, [{Phase,PhaseArgs}]} option must - be included, and the set of specified phases must be a subset - of the set of phases specified for the primary application. + be included, and the set of specified phases must be a subset + of the set of phases specified for the primary application.
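The start-phase mechanism described above can be sketched as an application callback module. This is a sketch only: the phase names init and go, and the supervisor module prim_app_sup, are illustrative assumptions, not taken from the example .app files.

```erlang
%% Sketch of an application callback module using start phases.
%% Phase names (init, go) and the supervisor module name are
%% illustrative assumptions.
-module(prim_app_cb).
-behaviour(application).
-export([start/2, start_phase/3, stop/1]).

start(normal, StartArgs) ->
    %% Starts the top supervisor of the supervision tree.
    prim_app_sup:start_link(StartArgs).

%% Called once per phase listed under start_phases in the
%% .app file, after the supervisor tree has been started.
start_phase(init, normal, _PhaseArgs) ->
    ok;
start_phase(go, normal, _PhaseArgs) ->
    ok.

stop(_State) ->
    ok.
```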

When starting prim_app as defined above, the application - controller will call the following callback functions, before - application:start(prim_app) returns a value:

+ controller calls the following callback functions before + application:start(prim_app) returns a value:

application:start(prim_app) => prim_app_cb:start(normal, []) diff --git a/system/doc/design_principles/release_handling.xml b/system/doc/design_principles/release_handling.xml index 9d1e2c8669..eeb71125b6 100644 --- a/system/doc/design_principles/release_handling.xml +++ b/system/doc/design_principles/release_handling.xml @@ -28,128 +28,126 @@ release_handling.xml +
Release Handling Principles

An important feature of the Erlang programming language is - the ability to change module code in run-time, code replacement, as described in Erlang Reference Manual.

+ the ability to change module code in runtime, + code replacement, as described in the Erlang + Reference Manual.

Based on this feature, the OTP application SASL provides a framework for upgrading and downgrading between different - versions of an entire release in run-time. This is what we call + versions of an entire release in runtime. This is called release handling.

-

The framework consists of off-line support (systools) for - generating scripts and building release packages, and on-line - support (release_handler) for unpacking and installing - release packages.

-

Note that the minimal system based on Erlang/OTP, enabling - release handling, thus consists of Kernel, STDLIB and SASL.

- - -

A release is created as described in the previous chapter - Releases. - The release is transferred to and installed at target - environment. Refer to System Principles for - information of how to install the first target system.

-
- -

Modifications, for example error corrections, are made to - the code in the development environment.

-
- -

At some point it is time to make a new version of release. - The relevant .app files are updated and a new - .rel file is written.

-
- -

For each modified application, an - application upgrade file, - .appup, is created. In this file, it is described how - to upgrade and/or downgrade between the old and new version of - the application.

-
- -

Based on the .appup files, a - release upgrade file called - relup, is created. This file describes how to upgrade - and/or downgrade between the old and new version of - the entire release.

-
- -

A new release package is made and transferred to - the target system.

-
- -

The new release package is unpacked using the release - handler.

-
- -

The new version of the release is installed, also using - the release handler. This is done by evaluating - the instructions in relup. Modules may be added, - deleted or re-loaded, applications may be started, stopped or - re-started etc. In some cases, it is even necessary to restart - the entire emulator.

-

If the installation fails, the system may be rebooted. - The old release version is then automatically used.

-
- -

If the installation succeeds, the new version is made - the default version, which should now be used in case of a - system reboot.

-
-
-

The next chapter, Appup Cookbook, contains examples of .appup files - for typical cases of upgrades/downgrades that are normally easy - to handle in run-time. However, there are a many aspects that can - make release handling complicated. To name a few examples:

+

The framework consists of:

- -

Complicated or circular dependencies can make it difficult - or even impossible to decide in which order things must be - done without risking run-time errors during an upgrade or - downgrade. Dependencies may be:

- - between nodes, - between processes, and - between modules. - -
- -

During release handling, non-affected processes continue - normal execution. This may lead to timeouts or other problems. - For example, new processes created in the time window between - suspending processes using a certain module and loading a new - version of this module, may execute old code.

-
+ Offline support - systools for generating scripts + and building release packages + Online support - release_handler for unpacking and + installing release packages
-

It is therefore recommended that code is changed in as small +

The minimal system based on Erlang/OTP, enabling release handling, + thus consists of the Kernel, STDLIB, and SASL + applications.

+ +
+ Release Handling Workflow +

Step 1) A release is created as described in + Releases.

+

Step 2) The release is transferred to and installed at the target environment. For information about how to install the first target system, see System Principles.

+

Step 3) Modifications, for example, error corrections, + are made to the code in the development environment.

+

Step 4) At some point, it is time to make a new version of the release. The relevant .app files are updated and a new .rel file is written.

+

Step 5) For each modified application, an + application upgrade file, + .appup, is created. In this file, it is described how to + upgrade and/or downgrade between the old and new version of the + application.

+

Step 6) Based on the .appup files, a + release upgrade file called + relup, is created. This file describes how to upgrade and/or + downgrade between the old and new version of the entire release.

+

Step 7) A new release package is made and transferred to + the target system.

+

Step 8) The new release package is unpacked using the + release handler.

+

Step 9) The new version of the release is installed, + also using the release handler. This is done by evaluating the + instructions in relup. Modules can be added, deleted, or + reloaded, applications can be started, stopped, or restarted, and so + on. In some cases, it is even necessary to restart the entire + emulator.

If the installation fails, the system can be rebooted. The old release version is then automatically used.
If the installation succeeds, the new version is made the default version, which is then used if there is a system reboot.
+ +
+ Release Handling Aspects +

Appup Cookbook contains examples of .appup files for typical cases of upgrades/downgrades that are normally easy to handle in runtime. However, many aspects can make release handling complicated, for example:

+ + +

Complicated or circular dependencies can make it difficult + or even impossible to decide in which order things must be + done without risking runtime errors during an upgrade or + downgrade. Dependencies can be:

+ + Between nodes + Between processes + Between modules + +
+ +

During release handling, non-affected processes continue + normal execution. This can lead to time-outs or other problems. + For example, new processes created in the time window between + suspending processes using a certain module, and loading a new + version of this module, can execute old code.

+
+
+

It is thus recommended that code is changed in as small steps as possible, and always kept backwards compatible.

+
Requirements -

For release handling to work properly, the runtime system needs - to have knowledge about which release it is currently running. It - must also be able to change (in run-time) which boot script and - system configuration file should be used if the system is - rebooted, for example by heart after a failure. - Therefore, Erlang must be started as an embedded system, see - Embedded System for information on how to do this.

+

For release handling to work properly, the runtime system must + have knowledge about which release it is running. It + must also be able to change (in runtime) which boot script and + system configuration file to use if the system is + rebooted, for example, by heart after a failure. + Thus, Erlang must be started as an embedded system; for + information on how to do this, see Embedded System.

For system reboots to work properly, it is also required that - the system is started with heart beat monitoring, see - erl(1) and heart(3).

the system is started with heartbeat monitoring, see the erl(1) manual page in ERTS and the heart(3) manual page in Kernel.

Other requirements:

The boot script included in a release package must be generated from the same .rel file as the release package itself.

-

Information about applications are fetched from the script +

Information about applications is fetched from the script when an upgrade or downgrade is performed.

-

The system must be configured using one and only one system +

The system must be configured using only one system configuration file, called sys.config.

If found, this file is automatically included when a release package is created.

@@ -165,13 +163,13 @@
Distributed Systems -

If the system consists of several Erlang nodes, each node may use +

If the system consists of several Erlang nodes, each node can use its own version of the release. The release handler is a locally registered process and must be called at each node where an - upgrade or downgrade is required. There is a release handling - instruction that can be used to synchronize the release handler - processes at a number of nodes: sync_nodes. See - appup(4).

+ upgrade or downgrade is required. A release handling + instruction, sync_nodes, can be used to synchronize the + release handler processes at a number of nodes, see the + appup(4) manual page in SASL.
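As a sketch of how sync_nodes can appear in practice, the following hypothetical .appup fragment makes two nodes block at a common synchronization point before the rest of the upgrade continues. The module name, synchronization identifier, and node names are illustrative assumptions.

```erlang
%% Hypothetical .appup entry for upgrading from "1" to "2".
%% Both nodes wait at ch3_sync until the other node reaches it.
{"2",
 [{"1", [{load_module, ch3},
         {sync_nodes, ch3_sync, ['cp1@host', 'cp2@host']}]}],
 [{"1", [{load_module, ch3},
         {sync_nodes, ch3_sync, ['cp1@host', 'cp2@host']}]}]}.
```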

@@ -183,31 +181,26 @@ instructions. To make it easier for the user, there are also a number of high-level instructions, which are translated to low-level instructions by systools:make_relup.

-

Here, some of the most frequently used instructions are - described. The complete list of instructions can be found in - appup(4).

+

Some of the most frequently used instructions are described in + this section. The complete list of instructions is included in the + appup(4) manual page in SASL.

First, some definitions:

- - Residence module - -

The module where a process has its tail-recursive loop - function(s). If the tail-recursive loop functions are - implemented in several modules, all those modules are residence - modules for the process.

-
- Functional module - -

A module which is not a residence module for any process.

-
-
-

Note that for a process implemented using an OTP behaviour, - the behaviour module is the residence module for that process. - The callback module is a functional module.

+ + Residence module - The module where a process has + its tail-recursive loop function(s). If these functions are + implemented in several modules, all those modules are residence + modules for the process. + Functional module - A module that is not a + residence module for any process. + +

For a process implemented using an OTP behaviour, the behaviour + module is the residence module for that process. + The callback module is a functional module.

load_module

If a simple extension has been made to a functional module, it - is sufficient to simply load the new version of the module into + is sufficient to load the new version of the module into the system, and remove the old version. This is called simple code replacement and for this the following instruction is used:
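In an .appup file, this instruction is placed in the upgrade and downgrade instruction lists. A minimal sketch, assuming a module ch3 changed between versions "1" and "2", could look like:

```erlang
%% Hypothetical .appup using only simple code replacement:
%% the new version of ch3 is loaded on upgrade, and the old
%% version is loaded back on downgrade.
{"2",
 [{"1", [{load_module, ch3}]}],
 [{"1", [{load_module, ch3}]}]}.
```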

@@ -217,44 +210,50 @@
update -

If a more complex change has been made, for example a change - to the format of the internal state of a gen_server, simple code - replacement is not sufficient. Instead it is necessary to - suspend the processes using the module (to avoid that they try - to handle any requests before the code replacement is - completed), ask them to transform the internal state format and - switch to the new version of the module, remove the old version - and last, resume the processes. This is called synchronized code replacement and for this the following instructions - are used:

+

If a more complex change has been made, for example, a change + to the format of the internal state of a gen_server, simple + code replacement is not sufficient. Instead, it is necessary to:

+ + Suspend the processes using the module (to avoid that + they try to handle any requests before the code replacement is + completed). + Ask them to transform the internal state format and + switch to the new version of the module. + Remove the old version. + Resume the processes. + +

This is called synchronized code replacement and for + this the following instructions are used:

{update, Module, {advanced, Extra}} {update, Module, supervisor}

update with argument {advanced,Extra} is used when changing the internal state of a behaviour as described - above. It will cause behaviour processes to call the callback + above. It causes behaviour processes to call the callback function code_change, passing the term Extra and - some other information as arguments. See the man pages for + some other information as arguments. See the manual pages for the respective behaviours and - Appup Cookbook.

+ Appup Cookbook.
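As an illustration of what such a state conversion can look like, the following code_change sketch upgrades a state that is a plain list of channels to a tuple that also carries a counter, and strips the counter again on downgrade. The state shapes are assumptions for illustration, not the actual state of the ch3 server.

```erlang
%% Sketch of a code_change callback triggered by
%% {update, Module, {advanced, Extra}}.
code_change(_OldVsn, Chs, _Extra) when is_list(Chs) ->
    {ok, {Chs, 0}};                      %% upgrade: add a counter
code_change({down, _Vsn}, {Chs, _Count}, _Extra) ->
    {ok, Chs}.                           %% downgrade: drop it
```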

update with argument supervisor is used when changing the start specification of a supervisor. See - Appup Cookbook.

+ Appup Cookbook.

When a module is to be updated, the release handler finds which processes that are using the module by traversing the supervision tree of each running application and checking all the child specifications:

{Id, StartFunc, Restart, Shutdown, Type, Modules} -

A process is using a module if the name is listed in +

A process uses a module if the name is listed in Modules in the child specification for the process.

If Modules=dynamic, which is the case for event managers, the event manager process informs the release handler - about the list of currently installed event handlers (gen_fsm) - and it is checked if the module name is in this list instead.

+ about the list of currently installed event handlers + (gen_fsm), and it is checked if the module name is in + this list instead.

The release handler suspends, asks for code change, and resumes processes by calling the functions - sys:suspend/1,2, sys:change_code/4,5 and - sys:resume/1,2 respectively.

+ sys:suspend/1,2, sys:change_code/4,5, and + sys:resume/1,2, respectively.

@@ -263,39 +262,39 @@ used:

{add_module, Module} -

The instruction loads the module and is absolutely necessary +

The instruction loads the module and is necessary when running Erlang in embedded mode. It is not strictly required when running Erlang in interactive (default) mode, since the code server then automatically searches for and loads unloaded modules.

-

The opposite of add_module is delete_module which +

The opposite of add_module is delete_module, which unloads a module:

{delete_module, Module} -

Note that any process, in any application, with Module +

Any process, in any application, with Module as residence module, is killed when the instruction is - evaluated. The user should therefore ensure that all such + evaluated. The user must therefore ensure that all such processes are terminated before deleting the module, to avoid - a possible situation with failing supervisor restarts.

+ a situation with failing supervisor restarts.

Application Instructions -

Instruction for adding an application:

+

The following is the instruction for adding an application:

{add_application, Application}

Adding an application means that the modules defined by the modules key in the .app file are loaded using - a number of add_module instructions, then the application + a number of add_module instructions, and then the application is started.

-

Instruction for removing an application:

+

The following is the instruction for removing an application:

{remove_application, Application}

Removing an application means that the application is stopped, the modules are unloaded using a number of delete_module - instructions and then the application specification is unloaded + instructions, and then the application specification is unloaded from the application controller.

-

Instruction for restarting an application:

+

The following is the instruction for restarting an application:

{restart_application, Application}

Restarting an application means that the application is stopped @@ -305,46 +304,48 @@

- apply (low-level) + apply (Low-Level)

To call an arbitrary function from the release handler, the following instruction is used:

{apply, {M, F, A}} -

The release handler will evaluate apply(M, F, A).

+

The release handler evaluates apply(M, F, A).

- restart_new_emulator (low-level) + restart_new_emulator (Low-Level)

This instruction is used when changing to a new emulator - version, or when any of the core applications kernel, stdlib - or sasl is upgraded. If a system reboot is needed for some - other reason, the restart_emulator instruction should - be used instead.

-

Requires that the system is started with heart beat - monitoring, see erl(1) and heart(3).

-

The restart_new_emulator instruction shall always be - the very first instruction in a relup. If the relup is - generated by systools:make_relup/3,4 this is + version, or when any of the core applications Kernel, + STDLIB, or SASL is upgraded. If a system reboot + is needed for another reason, the restart_emulator + instruction is to be used instead.

+

This instruction requires that the system is started with + heartbeat monitoring, see the erl(1) manual page in + ERTS and the heart(3) manual page in Kernel.

+

The restart_new_emulator instruction must always be + the first instruction in a relup. If the relup is + generated by systools:make_relup/3,4, this is automatically ensured.

When the release handler encounters the instruction, it first generates a temporary boot file, which starts the new versions of the emulator and the core applications, and the old version of all other applications. Then it shuts down - the current emulator by calling init:reboot(), see - init(3). All processes are terminated gracefully and - the system is rebooted by the heart program, using the + the current emulator by calling init:reboot(), see the + init(3) manual page in Kernel. + All processes are terminated gracefully and + the system is rebooted by the heart program, using the temporary boot file. After the reboot, the rest of the relup instructions are executed. This is done as a part of the temporary boot script.

-

Since this mechanism causes the new versions of the - emulator and core applications to run with the old version of - other applications during startup, extra care must be taken to +

This mechanism causes the new versions of the emulator and + core applications to run with the old version of other + applications during startup. Thus, take extra care to avoid incompatibility. Incompatible changes in the core - applications may in some situations be necessary. If possible, + applications can in some situations be necessary. If possible, such changes are preceded by deprecation over two major - releases before the actual change. To make sure your + releases before the actual change. To ensure the application is not crashed by an incompatible change, always remove any call to deprecated functions as soon as possible.

@@ -352,35 +353,36 @@

An info report is written when the upgrade is completed. To programmatically find out if the upgrade is complete, call release_handler:which_releases(current) and check - if it returns the expected (i.e. the new) release.

+ if it returns the expected (that is, the new) release.
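A check along the following lines can be used for this; the function name and the expected version string are illustrative assumptions.

```erlang
%% Sketch: returns true when the expected new version, for
%% example "B", is reported as the current release.
is_upgrade_complete(ExpectedVsn) ->
    case release_handler:which_releases(current) of
        [{_Name, ExpectedVsn, _Apps, current}] -> true;
        _ -> false
    end.
```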

The new release version must be made permanent when the new - emulator is up and running. Otherwise, the old version will be - used in case of a new system reboot.

-

On UNIX, the release handler tells the heart program which - command to use to reboot the system. Note that the environment - variable HEART_COMMAND, normally used by the heart - program, in this case is ignored. The command instead defaults - to $ROOT/bin/start. Another command can be set - by using the SASL configuration parameter start_prg, see - sasl(6).

+ emulator is operational. Otherwise, the old version will be + used if there is a new system reboot.

+

On UNIX, the release handler tells the heart program + which command to use to reboot the system. The environment + variable HEART_COMMAND, normally used by the heart + program, is ignored in this case. The command instead defaults + to $ROOT/bin/start. Another command can be set by using + the SASL configuration parameter start_prg, see + the sasl(6) manual page.

- restart_emulator (low-level) -

This instruction is not related to upgrades of erts or any of - the core applications. It can be used by any application to + restart_emulator (Low-Level) +

This instruction is not related to upgrades of ERTS or any + of the core applications. It can be used by any application to force a restart of the emulator after all upgrade instructions are executed.

-

There can only be one restart_emulator instruction in - a relup script, and it shall always be placed at the end. If - the relup is generated by systools:make_relup/3,4 this +

A relup script can only have one restart_emulator + instruction and it must always be placed at the end. If + the relup is generated by systools:make_relup/3,4, this is automatically ensured.

When the release handler encounters the instruction, it shuts - down the emulator by calling init:reboot(), see - init(3). All processes are terminated gracefully and - the system can then be rebooted by the heart program using the - new release version. No more upgrade instruction will be + down the emulator by calling init:reboot(), see the + init(3) manual page in Kernel. + All processes are terminated gracefully and the system + can then be rebooted by the heart program using the + new release version. No more upgrade instruction is executed after the restart.

@@ -389,10 +391,11 @@ Application Upgrade File

To define how to upgrade/downgrade between the current version - and previous versions of an application, we create an - application upgrade file, or in short .appup file. - The file should be called Application.appup, where - Application is the name of the application:

+ and previous versions of an application, an + application upgrade file, or in short an .appup + file is created. + The file is to be called Application.appup, where + Application is the application name:

{Vsn, [{UpFromVsn1, InstructionsU1}, @@ -401,24 +404,27 @@ [{DownToVsn1, InstructionsD1}, ..., {DownToVsnK, InstructionsDK}]}. -

Vsn, a string, is the current version of the application, - as defined in the .app file. Each UpFromVsn - is a previous version of the application to upgrade from, and each - DownToVsn is a previous version of the application to - downgrade to. Each Instructions is a list of release - handling instructions.

-

The syntax and contents of the appup file are described - in detail in appup(4).

-

In the chapter Appup Cookbook, examples of .appup files for typical - upgrade/downgrade cases are given.

-

Example: Consider the release ch_rel-1 from - the Releases - chapter. Assume we want to add a function available/0 to - the server ch3 which returns the number of available - channels:

-

(Hint: When trying out the example, make the changes in a copy of - the original directory, so that the first versions are still - available.)

+ + Vsn, a string, is the current version of the application, + as defined in the .app file. + Each UpFromVsn is a previous version of the application + to upgrade from. + Each DownToVsn is a previous version of the application + to downgrade to. + Each Instructions is a list of release handling + instructions. + +

For information about the syntax and contents of the .appup + file, see the appup(4) manual page in SASL.

+

Appup Cookbook + includes examples of .appup files for typical upgrade/downgrade + cases.

+

Example: Consider the release ch_rel-1 from Releases. Assume you want to add a function available/0 to server ch3, which returns the number of available channels (when trying out the example, make the changes in a copy of the original directory, so that the first versions are still available):

-module(ch3). -behaviour(gen_server). @@ -465,9 +471,9 @@ handle_cast({free, Ch}, Chs) -> {mod, {ch_app,[]}} ]}.

To upgrade ch_app from "1" to "2" (and - to downgrade from "2" to "1"), we simply need to + to downgrade from "2" to "1"), you only need to load the new (old) version of the ch3 callback module. - We create the application upgrade file ch_app.appup in + Create the application upgrade file ch_app.appup in the ebin directory:

{"2", @@ -480,25 +486,25 @@ handle_cast({free, Ch}, Chs) -> Release Upgrade File

To define how to upgrade/downgrade between the new version and - previous versions of a release, we create a release upgrade file, or in short relup file.

+ previous versions of a release, a release upgrade file, + or in short relup file, is to be created.

This file does not need to be created manually, it can be generated by systools:make_relup/3,4. The relevant versions of the .rel file, .app files, and .appup files are used as input. It is deduced which applications are to be added and deleted, and which applications must be upgraded and/or downgraded. The instructions for this are fetched from the .appup files and transformed into a single list of low-level instructions in the right order.

If the relup file is relatively simple, it can be created - manually. Remember that it should only contain low-level - instructions.

-

The syntax and contents of the release upgrade file are - described in detail in relup(4).

-

Example, continued from the previous section. We have a new - version "2" of ch_app and an .appup file. We also - need a new version of the .rel file. This time the file is - called ch_rel-2.rel and the release version string is - changed changed from "A" to "B":

manually. It is only to contain low-level instructions.

+

For details about the syntax and contents of the release upgrade + file, see the relup(4) manual page in SASL.

+

Example, continued from the previous section: You have a + new version "2" of ch_app and an .appup file. A new + version of the .rel file is also needed. This time the file + is called ch_rel-2.rel and the release version string is + changed from "A" to "B":

{release, {"ch_rel", "B"}, @@ -512,13 +518,13 @@ handle_cast({free, Ch}, Chs) ->
 1> systools:make_relup("ch_rel-2", ["ch_rel-1"], ["ch_rel-1"]).
 ok
-

This will generate a relup file with instructions for +

This generates a relup file with instructions for how to upgrade from version "A" ("ch_rel-1") to version "B" ("ch_rel-2") and how to downgrade from version "B" to version "A".

-

Note that both the old and new versions of the .app and - .rel files must be in the code path, as well as - the .appup and (new) .beam files. It is possible - to extend the code path by using the option path:

+

Both the old and new versions of the .app and + .rel files must be in the code path, as well as the + .appup and (new) .beam files. The code path can be + extended by using the option path:

 1> systools:make_relup("ch_rel-2", ["ch_rel-1"], ["ch_rel-1"],
 [{path,["../ch_rel-1",
@@ -529,31 +535,34 @@ ok
Installing a Release -

When we have made a new version of a release, a release package +

When you have made a new version of a release, a release package can be created with this new version and transferred to the target environment.

-

To install the new version of the release in run-time, +

To install the new version of the release in runtime, the release handler is used. This is a process belonging - to the SASL application, that handles unpacking, installation, - and removal of release packages. It is interfaced through - the module release_handler, which is described in detail in - release_handler(3).

-

to the SASL application, which handles unpacking, installation, and removal of release packages. It is accessed through the release_handler module. For details, see the release_handler(3) manual page in SASL.

+

Assuming there is an operational target system with installation root directory $ROOT, the release package with - the new version of the release should be copied to + the new version of the release is to be copied to $ROOT/releases.

-

The first action is to unpack the release package, - the files are then extracted from the package:

+

First, unpack the release package. + The files are then extracted from the package:

release_handler:unpack_release(ReleaseName) => {ok, Vsn} -

ReleaseName is the name of the release package except - the .tar.gz extension. Vsn is the version of - the unpacked release, as defined in its .rel file.

-

A directory $ROOT/lib/releases/Vsn will be created, where + + ReleaseName is the name of the release package except + the .tar.gz extension. + Vsn is the version of the unpacked release, as + defined in its .rel file. + +

A directory $ROOT/lib/releases/Vsn is created, where the .rel file, the boot script start.boot, - the system configuration file sys.config and relup + the system configuration file sys.config, and relup are placed. For applications with new version numbers, - the application directories will be placed under $ROOT/lib. + the application directories are placed under $ROOT/lib. Unchanged applications are not affected.

An unpacked release can be installed. The release handler then evaluates the instructions in relup, step by @@ -563,11 +572,11 @@ release_handler:install_release(Vsn) => {ok, FromVsn, []}

If an error occurs during the installation, the system is rebooted using the old version of the release. If installation succeeds, the system is afterwards using the new version of - the release, but should anything happen and the system is - rebooted, it would start using the previous version again. To be - made the default version, the newly installed release must be made - permanent, which means the previous version becomes - old:

+ the release, but if anything happens and the system is + rebooted, it starts using the previous version again.

+

To be made the default version, the newly installed release + must be made permanent, which means the previous + version becomes old:

release_handler:make_permanent(Vsn) => ok

The system keeps information about which versions are old and @@ -579,41 +588,44 @@ release_handler:make_permanent(Vsn) => ok release_handler:install_release(FromVsn) => {ok, Vsn, []}

An installed, but not permanent, release can be removed. Information about the release is then deleted from - $ROOT/releases/RELEASES and the release specific code, - that is the new application directories and + $ROOT/releases/RELEASES and the release-specific code, + that is, the new application directories and the $ROOT/releases/Vsn directory, are removed.

release_handler:remove_release(Vsn) => ok -
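Taken together, the functions above form the typical upgrade cycle. A minimal sketch of a helper that performs the whole cycle (a hypothetical function; error handling and the copying of the release package to $ROOT/releases are omitted):

```erlang
%% Hypothetical helper: unpack, install, and make permanent in one go.
%% ReleaseName is the package name without the .tar.gz extension.
upgrade(ReleaseName) ->
    {ok, Vsn} = release_handler:unpack_release(ReleaseName),
    {ok, _FromVsn, []} = release_handler:install_release(Vsn),
    ok = release_handler:make_permanent(Vsn),
    {ok, Vsn}.
```

In a production system, each step is normally performed and verified separately, since install_release/1 can reboot the system if the installation fails.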

Example, continued from the previous sections:

-

1) Create a target system as described in System Principles of the first version "A" of ch_rel - from - the Releases - chapter. This time sys.config must be included in - the release package. If no configuration is needed, the file - should contain the empty list:

- + +
+ Example (continued from the previous sections) +

Step 1) Create a target system as described in + System Principles of the first version "A" + of ch_rel from + Releases. + This time sys.config must be included in the release package. + If no configuration is needed, the file is to contain the empty + list:

+ []. -

2) Start the system as a simple target system. Note that in - reality, it should be started as an embedded system. However, - using erl with the correct boot script and config file is - enough for illustration purposes:

-
+    

Step 2) Start the system as a simple target system. In + reality, it is to be started as an embedded system. However, using + erl with the correct boot script and config file is enough for + illustration purposes:

+
 % cd $ROOT
 % bin/erl -boot $ROOT/releases/A/start -config $ROOT/releases/A/sys
 ...

$ROOT is the installation directory of the target system.

-

3) In another Erlang shell, generate start scripts and create a - release package for the new version "B". Remember to - include (a possible updated) sys.config and - the relup file, see Release Upgrade File above.

-
+    

Step 3) In another Erlang shell, generate start scripts and +      create a release package for the new version "B". Remember to +      include (a possibly updated) sys.config and the relup file, +      see Release Upgrade File.

+
 1> systools:make_script("ch_rel-2").
 ok
 2> systools:make_tar("ch_rel-2").
 ok
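The relup file mentioned above can be generated with systools:make_relup/3 before the release package is built (a sketch; it assumes that ch_rel-2.rel, ch_rel-1.rel, and the .appup files can be found in the current directory or code path):

```erlang
1> systools:make_relup("ch_rel-2", ["ch_rel-1"], ["ch_rel-1"]).
ok
```

The second and third arguments list the versions to upgrade from and downgrade to, respectively.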
-

The new release package now contains version "2" of ch_app - and the relup file as well:

- +

The new release package now also contains version "2" of ch_app + and the relup file:

+
% tar tf ch_rel-2.tar
lib/kernel-2.9/ebin/kernel.app
lib/kernel-2.9/ebin/application.beam
@@ -633,28 +645,30 @@ releases/B/relup
releases/B/sys.config
releases/B/ch_rel-2.rel
releases/ch_rel-2.rel
-

4) Copy the release package ch_rel-2.tar.gz to - the $ROOT/releases directory.

-

5) In the running target system, unpack the release package:

-
+    

Step 4) Copy the release package ch_rel-2.tar.gz + to the $ROOT/releases directory.

+

Step 5) In the running target system, unpack the release + package:

+
 1> release_handler:unpack_release("ch_rel-2").
 {ok,"B"}

The new application version ch_app-2 is installed under $ROOT/lib next to ch_app-1. The kernel, - stdlib and sasl directories are not affected, as + stdlib, and sasl directories are not affected, as they have not changed.

Under $ROOT/releases, a new directory B is created, containing ch_rel-2.rel, start.boot, - sys.config and relup.

-

6) Check if the function ch3:available/0 is available:

-
+      sys.config, and relup.

+

Step 6) Check if the function ch3:available/0 is + available:

+
 2> ch3:available().
 ** exception error: undefined function ch3:available/0
-

7) Install the new release. The instructions in +

Step 7) Install the new release. The instructions in $ROOT/releases/B/relup are executed one by one, resulting in the new version of ch3 being loaded. The function ch3:available/0 is now available:

-
+      
 3> release_handler:install_release("B").
 {ok,"A",[]}
 4> ch3:available().
@@ -663,15 +677,16 @@ releases/ch_rel-2.rel
 ".../lib/ch_app-2/ebin/ch3.beam"
 6> code:which(ch_sup).
 ".../lib/ch_app-1/ebin/ch_sup.beam"
-

Note that processes in ch_app for which code have not - been updated, for example the supervisor, are still evaluating +

Processes in ch_app for which code has not +      been updated, for example, the supervisor, are still evaluating code from ch_app-1.

-

8) If the target system is now rebooted, it will use version "A" - again. The "B" version must be made permanent, in order to be +

Step 8) If the target system is now rebooted, it uses + version "A" again. The "B" version must be made permanent, to be used when the system is rebooted.

-
+      
 7> release_handler:make_permanent("B").
 ok
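The status of all installed releases can be inspected with release_handler:which_releases/0 (a sketch; the exact library lists depend on the installation):

```erlang
8> release_handler:which_releases().
[{"ch_rel","B",[...],permanent},
 {"ch_rel","A",[...],old}]
```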
+
@@ -681,37 +696,40 @@ ok
specifications are automatically updated for all loaded applications.

-

The information about the new application specifications are +

The information about the new application specifications is fetched from the boot script included in the release package. - It is therefore important that the boot script is generated from + Thus, it is important that the boot script is generated from the same .rel file as is used to build the release package itself.

Specifically, the application configuration parameters are automatically updated according to (in increasing priority order):

- + The data in the boot script, fetched from the new application resource file App.app The new sys.config - Command line arguments -App Par Val + Command-line arguments -App Par Val

This means that parameter values set in the other system - configuration files, as well as values set using - application:set_env/3, are disregarded.

+ configuration files and values set using + application:set_env/3 are disregarded.

When an installed release is made permanent, the system process init is set to point to the new sys.config.

-

After the installation, the application controller will compare +

After the installation, the application controller compares the old and new configuration parameters for all running applications and calls the callback function:

Module:config_change(Changed, New, Removed) -

Module is the application callback module as defined by - the mod key in the .app file. Changed and - New are lists of {Par,Val} for all changed and - added configuration parameters, respectively. Removed is - a list of all parameters Par that have been removed.

-

The function is optional and may be omitted when implementing an + + Module is the application callback module as defined + by the mod key in the .app file. + Changed and New are lists of {Par,Val} for + all changed and added configuration parameters, respectively. + Removed is a list of all parameters Par that have + been removed. + +

The function is optional and can be omitted when implementing an application callback module.
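For an application that needs no special handling of changed configuration parameters, a minimal sketch of this callback is:

```erlang
%% In the application callback module (a sketch; changes are ignored):
config_change(_Changed, _New, _Removed) ->
    ok.
```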

diff --git a/system/doc/design_principles/release_structure.xml b/system/doc/design_principles/release_structure.xml index cec33f42e3..aa04f5e6a3 100644 --- a/system/doc/design_principles/release_structure.xml +++ b/system/doc/design_principles/release_structure.xml @@ -28,21 +28,23 @@ release_structure.xml -

This chapter should be read in conjuction with rel(4), - systools(3) and script(4).

+ +

This section is to be read with the rel(4), systools(3), + and script(4) manual pages in SASL.

Release Concept -

When we have written one or more applications, we might want to - create a complete system consisting of these applications and a +

When you have written one or more applications, you might want + to create a complete system with these applications and a subset of the Erlang/OTP applications. This is called a release.

-

To do this, we create a release resource file which defines which applications - are included in the release.

+

To do this, create a + release resource file that + defines which applications are included in the release.

The release resource file is used to generate boot scripts and release packages. A system - which is transfered to and installed at another site is called a + that is transferred to and installed at another site is called a target system. How to use a release package to create a target system is described in System Principles.

@@ -50,29 +52,30 @@
Release Resource File -

To define a release, we create a release resource file, - or in short .rel file, where we specify the name and - version of the release, which ERTS version it is based on, and - which applications it consists of:

+

To define a release, create a release resource file, + or in short a .rel file. In the file, specify the name and + version of the release, which ERTS version it is based on, + and which applications it consists of:

{release, {Name,Vsn}, {erts, EVsn},
 [{Application1, AppVsn1},
  ...
  {ApplicationN, AppVsnN}]}.
+

Name, Vsn, EVsn, and AppVsn are + strings.

The file must be named Rel.rel, where Rel is a unique name.

-

Name, Vsn and EVsn are strings.

-

Each Application (atom) and AppVsn (string) is +

Each Application (atom) and AppVsn is the name and version of an application included in the release. - Note that the minimal release based on Erlang/OTP consists of - the kernel and stdlib applications, so these + The minimal release based on Erlang/OTP consists of + the Kernel and STDLIB applications, so these applications must be included in the list.

If the release is to be upgraded, it must also include - the sasl application.

+ the SASL application.

-

Example: We want to make a release of ch_app from - the Applications - chapter. It has the following .app file:

+

Example: A release of ch_app from + Applications + has the following .app file:

{application, ch_app, [{description, "Channel allocator"}, @@ -83,8 +86,8 @@ {mod, {ch_app,[]}} ]}.

The .rel file must also contain kernel, - stdlib and sasl, since these applications are - required by ch_app. We call the file ch_rel-1.rel:

+ stdlib, and sasl, as these applications are required + by ch_app. The file is called ch_rel-1.rel:

{release, {"ch_rel", "A"}, @@ -99,24 +102,28 @@
Generating Boot Scripts -

There are tools in the SASL module systools available to - build and check releases. The functions read the .rel and +

systools in the SASL application includes tools to +      build and check releases. The functions read the .rel and .app files and perform syntax and dependency checks. -      The function systools:make_script/1,2 is used to generate -      a boot script (see System Principles).

+ The systools:make_script/1,2 function is used to generate + a boot script (see System Principles):

 1> systools:make_script("ch_rel-1", [local]).
 ok
-

This creates a boot script, both the readable version - ch_rel-1.script and the binary version used by the runtime - system, ch_rel-1.boot. "ch_rel-1" is the name of - the .rel file, minus the extension. local is an - option that means that the directories where the applications are - found are used in the boot script, instead of $ROOT/lib. - ($ROOT is the root directory of the installed release.) - This is a useful way to test a generated boot script locally.

+

This creates a boot script, both the readable version, + ch_rel-1.script, and the binary version, ch_rel-1.boot, + used by the runtime system.

+ + "ch_rel-1" is the name of the .rel file, + minus the extension. + local is an option that means that the directories + where the applications are found are used in the boot script, + instead of $ROOT/lib ($ROOT is the root directory + of the installed release). + +

This is a useful way to test a generated boot script locally.

When starting Erlang/OTP using the boot script, all applications - from the .rel file are automatically loaded and started:

+ from the .rel file are automatically loaded and started:

 % erl -boot ch_rel-1
 Erlang (BEAM) emulator version 5.3
@@ -147,18 +154,24 @@ Eshell V5.3  (abort with ^G)
   
Creating a Release Package -

There is a function systools:make_tar/1,2 which takes - a .rel file as input and creates a zipped tar-file with - the code for the specified applications, a release package.

+

The systools:make_tar/1,2 function takes a .rel file + as input and creates a zipped tar file with the code for the specified + applications, a release package:

 1> systools:make_script("ch_rel-1").
 ok
 2> systools:make_tar("ch_rel-1").
 ok
-

The release package by default contains the .app files and - object code for all applications, structured according to - the application directory structure, the binary boot script renamed to - start.boot, and the .rel file.

+

The release package by default contains:

+
The .app files
The .rel file
The object code for all applications, structured according to the application directory structure
The binary boot script renamed to start.boot
 % tar tf ch_rel-1.tar
 lib/kernel-2.9/ebin/kernel.app
@@ -177,40 +190,39 @@ lib/ch_app-1/ebin/ch3.beam
 releases/A/start.boot
 releases/A/ch_rel-1.rel
 releases/ch_rel-1.rel
-

Note that a new boot script was generated, without +

A new boot script was generated, without the local option set, before the release package was made. In the release package, all application directories are placed - under lib. Also, we do not know where the release package - will be installed, so we do not want any hardcoded absolute paths - in the boot script here.

+ under lib. You do not know where the release package + will be installed, so no hard-coded absolute paths are allowed.

The release resource file, here ch_rel-1.rel, is duplicated in the tar file. Originally, this file was only stored in -      the releases directory in order to make it possible for +      the releases directory to make it possible for the release_handler to extract this file separately. After unpacking the tar file, release_handler would automatically copy the file to releases/FIRST. However, sometimes the tar file is -      unpacked without involving the release_handler (e.g. when -      unpacking the first target system) and therefore the file is now -      instead duplicated in the tar file so no manual copying is -      necessary.

+ unpacked without involving the release_handler (for + example, when unpacking the first target system) and the file + is therefore now instead duplicated in the tar file so no manual + copying is necessary.

If a relup file and/or a system configuration file called - sys.config is found, these files are included in - the release package as well. See + sys.config is found, these files are also included in + the release package. See Release Handling.

Options can be set to make the release package include source code and the ERTS binary as well.
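A sketch of such a call (the erts option points to an ERTS installation, here the root of the running system, and dirs lists extra application subdirectories, such as src, to include):

```erlang
1> systools:make_tar("ch_rel-1", [{erts, code:root_dir()}, {dirs, [src]}]).
ok
```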

-

Refer to System Principles for how to install the first target - system, using a release package, and to - Release Handling for - how to install a new release package in an existing system.

+

For information on how to install the first target system, using + a release package, see System Principles. For information + on how to install a new release package in an existing system, see + Release Handling.

Directory Structure -

Directory structure for the code installed by the release handler - from a release package:

+

The directory structure for the code installed by the release handler + from a release package is as follows:

$ROOT/lib/App1-AVsn1/ebin
                    /priv
@@ -222,24 +234,18 @@ $ROOT/lib/App1-AVsn1/ebin
     /erts-EVsn/bin
     /releases/Vsn
     /bin

- - If present in the release package,

-relup and/or sys.config.
- bin - Top level Erlang runtime system executables. -
-

Applications are not required to be located under the - $ROOT/lib directory. Accordingly, several installation - directories may exist which contain different parts of a - system. For example, the previous example could be extended as - follows:

+ + lib - Application directories + erts-EVsn/bin - Erlang runtime system executables + releases/Vsn - .rel file and boot script + start.boot; if present in the release package, relup + and/or sys.config + bin - Top-level Erlang runtime system executables + +

Applications are not required to be located under directory +      $ROOT/lib. Several installation directories, which contain +      different parts of a system, can thus exist. +      For example, the structure above can be extended as follows:

 $SECOND_ROOT/.../SApp1-SAVsn1/ebin
                              /priv
@@ -256,24 +262,24 @@ $THIRD_ROOT/TApp1-TAVsn1/ebin
            ...
            /TAppN-TAVsnN/ebin
                         /priv
-

The $SECOND_ROOT and $THIRD_ROOT are introduced as +

$SECOND_ROOT and $THIRD_ROOT are introduced as variables in the call to the systools:make_script/2 function.
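A sketch of such a call, with hypothetical root directories:

```erlang
1> systools:make_script("ch_rel-1",
       [{variables, [{"SECOND_ROOT", "/second"},
                     {"THIRD_ROOT", "/third"}]}]).
ok
```

Paths to applications found under these roots are then written in the boot script relative to the corresponding variable instead of as absolute paths.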

Disk-Less and/or Read-Only Clients -

If a complete system consists of some disk-less and/or - read-only client nodes, a clients directory should be - added to the $ROOT directory. By a read-only node we - mean a node with a read-only file system.

-

The clients directory should have one sub-directory +

If a complete system consists of disk-less and/or + read-only client nodes, a clients directory is to be + added to the $ROOT directory. A read-only node is + a node with a read-only file system.

+

The clients directory is to have one subdirectory per supported client node. The name of each client directory - should be the name of the corresponding client node. As a - minimum, each client directory should contain the bin and - releases sub-directories. These directories are used to + is to be the name of the corresponding client node. As a + minimum, each client directory is to contain the bin and + releases subdirectories. These directories are used to store information about installed releases and to appoint the - current release to the client. Accordingly, the $ROOT - directory contains the following:

+ current release to the client. The $ROOT + directory thus contains the following:

$ROOT/... /clients/ClientName1/bin @@ -283,14 +289,14 @@ $ROOT/... ... /ClientNameN/bin /releases/Vsn -

This structure should be used if all clients are running +

This structure is to be used if all clients are running the same type of Erlang machine. If there are clients running different types of Erlang machines, or on different operating - systems, the clients directory could be divided into one - sub-directory per type of Erlang machine. Alternatively, you - can set up one $ROOT per type of machine. For each + systems, the clients directory can be divided into one + subdirectory per type of Erlang machine. Alternatively, one + $ROOT can be set up per type of machine. For each type, some of the directories specified for the $ROOT - directory should be included:

+ directory are to be included:

$ROOT/... /clients/Type1/lib diff --git a/system/doc/design_principles/spec_proc.xml b/system/doc/design_principles/spec_proc.xml index e849388a38..aceb5ba99e 100644 --- a/system/doc/design_principles/spec_proc.xml +++ b/system/doc/design_principles/spec_proc.xml @@ -21,30 +21,31 @@ - Sys and Proc_Lib + sys and proc_lib spec_proc.xml -

The module sys contains functions for simple debugging of - processes implemented using behaviours.

-

There are also functions that, together with functions in - the module proc_lib, can be used to implement a - special process, a process which comply to the OTP design - principles without making use of a standard behaviour. They can - also be used to implement user defined (non-standard) behaviours.

-

Both sys and proc_lib belong to the STDLIB - application.

+ +

The sys module has functions for simple debugging of +      processes implemented using behaviours. It also has functions that, +      together with functions in the proc_lib module, can be used +      to implement a special process that complies with the OTP +      design principles without using a standard behaviour. These +      functions can also be used to implement user-defined (non-standard) +      behaviours.

+

Both sys and proc_lib belong to the STDLIB + application.

Simple Debugging -

The module sys contains some functions for simple debugging - of processes implemented using behaviours. We use the - code_lock example from - the gen_fsm chapter to - illustrate this:

+

The sys module has functions for simple debugging of + processes implemented using behaviours. The code_lock + example from + gen_fsm Behaviour + is used to illustrate this:

 % erl
 Erlang (BEAM) emulator version 5.2.3.6 [hipe] [threads:0]
@@ -102,16 +103,18 @@ ok
 
   
Special Processes -

This section describes how to write a process which comply to - the OTP design principles, without making use of a standard - behaviour. Such a process should:

+

This section describes how to write a process that complies with +      the OTP design principles, without using a standard behaviour. +      Such a process is to:

- be started in a way that makes the process fit into a - supervision tree, - support the sys debug facilities, and - take care of system messages. + Be started in a way that makes the process fit into a + supervision tree + Support the sys + debug facilities + Take care of + system messages. -

System messages are messages with special meaning, used in +

System messages are messages with a special meaning, used in the supervision tree. Typical system messages are requests for trace output, and requests to suspend or resume process execution (used during release handling). Processes implemented using @@ -120,9 +123,9 @@ ok

Example

The simple server from - the Overview chapter, - implemented using sys and proc_lib so it fits into - a supervision tree:

+ Overview, + implemented using sys and proc_lib so it fits into a + supervision tree:

 -module(ch4).
@@ -190,8 +193,8 @@ system_replace_state(StateFun, Chs) ->
 
 write_debug(Dev, Event, Name) ->
     io:format(Dev, "~p event = ~p~n", [Name, Event]).
-

Example on how the simple debugging functions in sys can - be used for ch4 as well:

+

Example of how the simple debugging functions in the sys +      module can also be used for ch4:

 % erl
 Erlang (BEAM) emulator version 5.2.3.6 [hipe] [threads:0]
@@ -230,31 +233,32 @@ ok
 
     
Starting the Process -

A function in the proc_lib module should be used to - start the process. There are several possible functions, for - example spawn_link/3,4 for asynchronous start and +

A function in the proc_lib module is to be used to + start the process. Several functions are available, for + example, spawn_link/3,4 for asynchronous start and start_link/3,4,5 for synchronous start.

-

A process started using one of these functions will store - information that is needed for a process in a supervision tree, - for example about the ancestors and initial call.

-

Also, if the process terminates with another reason than - normal or shutdown, a crash report (see SASL - User's Guide) is generated.

-

In the example, synchronous start is used. The process is - started by calling ch4:start_link():

+

A process started using one of these functions stores + information (for example, about the ancestors and initial call) + that is needed for a process in a supervision tree.

+

If the process terminates with a reason other than +      normal or shutdown, a crash report is generated. +      For more information about the crash report, see the SASL +      User's Guide.

+

In the example, synchronous start is used. The process + starts by calling ch4:start_link():

start_link() -> proc_lib:start_link(ch4, init, [self()]).

ch4:start_link calls the function proc_lib:start_link. This function takes a module name, - a function name and an argument list as arguments and spawns + a function name, and an argument list as arguments, spawns, and links to a new process. The new process starts by executing - the given function, in this case ch4:init(Pid), where - Pid is the pid (self()) of the first process, that - is the parent process.

-

In init, all initialization including name registration - is done. The new process must also acknowledge that it has been - started to the parent:

+ the given function, here ch4:init(Pid), where + Pid is the pid (self()) of the first process, + which is the parent process.

+

All initialization, including name registration, is done in + init. The new process must also acknowledge that it has + been started to the parent:

init(Parent) -> ... @@ -267,8 +271,8 @@ init(Parent) ->
Debugging -

To support the debug facilites in sys, we need a - debug structure, a term Deb which is +

To support the debug facilities in sys, a +      debug structure is needed. The Deb term is initialized using sys:debug_options/1:

init(Parent) -> @@ -278,50 +282,41 @@ init(Parent) -> loop(Chs, Parent, Deb).

sys:debug_options/1 takes a list of options as argument. Here the list is empty, which means no debugging is enabled - initially. See sys(3) for information about possible - options.

-

Then for each system event that we want to be logged - or traced, the following function should be called.

+ initially. For information about the possible options, see the + sys(3) manual page in STDLIB.

+

Then, for each system event to be logged + or traced, the following function is to be called.

sys:handle_debug(Deb, Func, Info, Event) => Deb1 +

Here:

- -

Deb is the debug structure.

-
- -

Func is a fun specifying - a (user defined) function used to format + Deb is the debug structure. + Func is a fun specifying + a (user-defined) function used to format trace output. For each system event, the format function is - called as Func(Dev, Event, Info), where:

+ called as Func(Dev, Event, Info), where: - -

Dev is the IO device to which the output should - be printed. See io(3).

-
- -

Event and Info are passed as-is from - handle_debug.

-
+ Dev is the I/O device to which the output is to + be printed. See the io(3) manual page in + STDLIB. + Event and Info are passed as is from + handle_debug.
- -

Info is used to pass additional information to - Func, it can be any term and is passed as-is.

-
- -

Event is the system event. It is up to the user to - define what a system event is and how it should be - represented, but typically at least incoming and outgoing + Info is used to pass more information to + Func. It can be any term and is passed as is. + Event is the system event. It is up to the user to + define what a system event is and how it is to be + represented. Typically at least incoming and outgoing messages are considered system events and represented by the tuples {in,Msg[,From]} and {out,Msg,To}, - respectively.

-
+ respectively.

handle_debug returns an updated debug structure Deb1.

In the example, handle_debug is called for each incoming and outgoing message. The format function Func is - the function ch4:write_debug/3 which prints the message + the function ch4:write_debug/3, which prints the message using io:format/3.

loop(Chs, Parent, Deb) -> @@ -354,22 +349,22 @@ write_debug(Dev, Event, Name) -> {system, From, Request}

The content and meaning of these messages do not need to be interpreted by the process. Instead the following function - should be called:

+ is to be called:

sys:handle_system_msg(Request, From, Parent, Module, Deb, State) -

This function does not return. It will handle the system - message and then call:

+

This function does not return. It handles the system + message and then either calls the following if process execution is + to continue:

Module:system_continue(Parent, Deb, State) -

if process execution should continue, or:

+

Or calls the following if the process is to terminate:

Module:system_terminate(Reason, Parent, Deb, State) -

if the process should terminate. Note that a process in a - supervision tree is expected to terminate with the same reason as - its parent.

+

A process in a supervision tree is expected to terminate with + the same reason as its parent.

- Request and From should be passed as-is from - the system message to the call to handle_system_msg. + Request and From are to be passed as is from + the system message to the call to handle_system_msg. Parent is the pid of the parent. Module is the name of the module. Deb is the debug structure. @@ -377,10 +372,12 @@ Module:system_terminate(Reason, Parent, Deb, State)
is passed to system_continue/system_terminate/ system_get_state/system_replace_state. -

If the process should return its state handle_system_msg will call:

+

If the process is to return its state, handle_system_msg + calls:

Module:system_get_state(State) -

or if the process should replace its state using the fun StateFun:

+

If the process is to replace its state using the fun StateFun, + handle_system_msg calls:

Module:system_replace_state(StateFun, State)

In the example:

@@ -407,9 +404,9 @@ system_replace_state(StateFun, Chs) -> NChs = StateFun(Chs), {ok, NChs, NChs}.
-

If the special process is set to trap exits, note that if - the parent process terminates, the expected behavior is to - terminate with the same reason:

+

If the special process is set to trap exits and if the parent + process terminates, the expected behavior is to terminate with + the same reason:

init(...) -> ..., @@ -431,28 +428,23 @@ loop(...) ->
User-Defined Behaviours -

To implement a user-defined behaviour, - write code similar to code for a special process but calling - functions in a callback module for handling specific tasks.

-

If it is desired that the compiler should warn for missing - callback functions, as it does for the OTP behaviours, add - -callback attributes in the behaviour module to describe - the expected callback functions:

- + write code similar to + code for a special process, but call functions in a callback + module for handling specific tasks.

+

If the compiler is to warn for missing callback functions, as it + does for the OTP behaviours, add -callback attributes in the + behaviour module to describe the expected callbacks:

-callback Name1(Arg1_1, Arg1_2, ..., Arg1_N1) -> Res1. -callback Name2(Arg2_1, Arg2_2, ..., Arg2_N2) -> Res2. ... -callback NameM(ArgM_1, ArgM_2, ..., ArgM_NM) -> ResM. - -

where each Name is the name of a callback function and - Arg and Res are types as described in - Specifications for functions in Types and Function - Specifications. The whole syntax of the - -spec attribute is supported by -callback - attribute.

+

NameX are the names of the expected callbacks. + ArgX_Y and ResX are types as they are described in + Types and + Function Specifications. The whole syntax of the -spec + attribute is supported by the -callback attribute.

Callback functions that are optional for the user of the behaviour to implement are specified by use of the -optional_callbacks attribute:

@@ -487,10 +479,10 @@ behaviour_info(callbacks) -> generated by the compiler using the -callback attributes.

When the compiler encounters the module attribute - -behaviour(Behaviour). in a module Mod, it will - call Behaviour:behaviour_info(callbacks) and compare the + -behaviour(Behaviour). in a module Mod, it + calls Behaviour:behaviour_info(callbacks) and compares the result with the set of functions actually exported from - Mod, and issue a warning if any callback function is + Mod, and issues a warning if any callback function is missing.

Example:

diff --git a/system/doc/design_principles/sup_princ.xml b/system/doc/design_principles/sup_princ.xml index 3d7b53e339..9583ca5c55 100644 --- a/system/doc/design_principles/sup_princ.xml +++ b/system/doc/design_principles/sup_princ.xml @@ -28,15 +28,16 @@ sup_princ.xml -

This section should be read with the supervisor(3) manual page in STDLIB, where all details about the supervisor behaviour are given.

Supervision Principles

A supervisor is responsible for starting, stopping, and monitoring its child processes. The basic idea of a supervisor is that it is to keep its child processes alive by restarting them when necessary.

Which child processes to start and monitor is specified by a list of child specifications. @@ -47,8 +48,8 @@

Example

The callback module for a supervisor starting the server from gen_server Behaviour can look as follows:

-module(ch_sup). @@ -79,6 +80,7 @@ init(_Args) ->
Supervisor Flags +

This is the type definition for the supervisor flags:

strategy(), % optional @@ -136,9 +138,9 @@ SupFlags = #{strategy => Strategy, ...}
rest_for_one -

If a child process terminates, the rest of the child processes (that is, the child processes after the terminated process in start order) are terminated. Then the terminated child process and the rest of the child processes are restarted.

@@ -162,7 +164,7 @@ SupFlags = #{intensity => MaxR, period => MaxT, ...}

If more than MaxR number of restarts occur in the last MaxT seconds, the supervisor terminates all the child processes and then itself.
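As a sketch, supervisor flags allowing at most 4 restarts (MaxR) within 5 seconds (MaxT) can be written as a map; the values are illustrative only:

```erlang
SupFlags = #{strategy => one_for_one,
             intensity => 4,  %% MaxR: maximum number of restarts allowed...
             period => 5}.    %% MaxT: ...within this many seconds
```

If a fifth restart would occur within the 5-second window, the supervisor terminates all its children and then itself.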

-

When the supervisor terminates, the next higher-level supervisor takes some action. It either restarts the terminated supervisor or terminates itself.

The intention of the restart mechanism is to prevent a situation @@ -176,14 +178,14 @@ SupFlags = #{intensity => MaxR, period => MaxT, ...}

Child Specification -

The type definition for a child specification is as follows:

child_spec() = #{id => child_id(),       % mandatory
                 start => mfargs(),      % mandatory
                 restart => restart(),   % optional
                 shutdown => shutdown(), % optional
                 type => worker(),       % optional
                 modules => modules()}   % optional

child_id() = term()
mfargs() = {M :: module(), F :: atom(), A :: [term()]}
modules() = [module()] | dynamic

id is used to identify the child specification internally by the supervisor.

The id key is mandatory.

-

Note that this identifier occasionally has been called "name". As far as possible, the terms "identifier" or "id" are now used, but to keep backwards compatibility, some occurrences of "name" can still be found, for example

start defines the function call used to start the child process. It is a module-function-arguments tuple used as apply(M, F, A).

-

It is to be (or result in) a call to any of the following:

supervisor:start_link
gen_server:start_link
gen_fsm:start_link
gen_event:start_link
A function compliant with these functions. For details, see the supervisor(3) manual page.

The start key is mandatory.

-

restart defines when a terminated child process is to be restarted.

A permanent child process is always restarted. A temporary child process is never restarted (not even when the supervisor restart strategy is rest_for_one or one_for_all and a sibling death causes the temporary process to be terminated). A transient child process is restarted only if it terminates abnormally, that is, with another exit reason than normal, shutdown, or {shutdown,Term}.
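For example, a child that is to be restarted only after abnormal termination is specified with restart => transient; the id and module name here are illustrative:

```erlang
ChildSpec = #{id => ch_worker,
              start => {ch_worker, start_link, []},
              %% transient: restart only on abnormal exit;
              %% the other values are permanent and temporary.
              restart => transient}.
```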

The restart key is optional. If it is not given, the @@ -230,27 +236,27 @@ child_spec() = #{id => child_id(), % mandatory -

shutdown defines how a child process is to be terminated.

brutal_kill means that the child process is unconditionally terminated using exit(Child, kill).

An integer time-out value means that the supervisor tells the child process to terminate by calling exit(Child, shutdown) and then waits for an exit signal back. If no exit signal is received within the specified time, the child process is unconditionally terminated using exit(Child, kill).

If the child process is another supervisor, it is to be set to infinity to give the subtree enough time to shut down. It is also allowed to set it to infinity, if the child process is a worker. See the warning below:

Be careful when setting the shutdown time to infinity when the child process is a worker. Because, in this situation, the termination of the supervision tree depends on the child process, it must be implemented in a safe way and its cleanup procedure must always return.

The shutdown key is optional. If it is not given, @@ -266,7 +272,7 @@ child_spec() = #{id => child_id(), % mandatory default value worker will be used.

-

modules is to be a list with one element [Module], where Module is the name of the callback module, if the child process is a supervisor, gen_server, or gen_fsm. If the child process is a gen_event,

-

Example: The child specification to start the server ch3 in the previous example looks as follows:

#{id => ch3, start => {ch3, start_link, []}, @@ -301,11 +307,11 @@ child_spec() = #{id => child_id(), % mandatory start => {gen_event, start_link, [{local, error_man}]}, modules => dynamic}

Both server and event manager are registered processes that can be expected to be always accessible. Thus, they are specified to be permanent.

ch3 does not need to do any cleaning up before termination. Thus, no shutdown time is needed, but brutal_kill is sufficient. error_man can require some time for the event handlers to clean up, thus the shutdown time is set to 5000 ms (which is the default value).

@@ -320,19 +326,20 @@ child_spec() = #{id => child_id(), % mandatory
Starting a Supervisor -

In the previous example, the supervisor is started by calling ch_sup:start_link():

start_link() ->
    supervisor:start_link(ch_sup, []).

ch_sup:start_link calls function supervisor:start_link/2, which spawns and links to a new process, a supervisor.

The first argument, ch_sup, is the name of the callback module, that is, the module where the init callback function is located.

The second argument, [], is a term that is passed as is to the callback function init. Here, init does not need any indata and ignores the argument.

The supervisor then starts all its child processes according to the child specifications in the start specification. In this case there is one child process, ch3.

supervisor:start_link is synchronous. It does not return until all child processes have been started.

Adding a Child Process -

In addition to the static supervision tree, dynamic child processes can be added to an existing supervisor with the following call:

supervisor:start_child(Sup, ChildSpec)

Sup is the pid, or name, of the supervisor. ChildSpec is a child specification.

Child processes added using start_child/2 behave in the same way as the other child processes, with an important exception: if a supervisor dies and is recreated, then all child processes that were dynamically added to the supervisor are lost.
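For example, assuming the ch_sup supervisor from earlier and a hypothetical ch4 module with a start_link/0 function, a child can be added at runtime as follows:

```erlang
%% Add a dynamic child under the registered supervisor ch_sup.
{ok, Pid} = supervisor:start_child(ch_sup,
                #{id => ch4,
                  start => {ch4, start_link, []},
                  restart => permanent,
                  shutdown => brutal_kill}).
```

If ch_sup itself is later restarted by its own supervisor, this dynamically added child is not restarted with it.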

@@ -393,11 +401,12 @@ supervisor:delete_child(Sup, Id)
- Simple-One-For-One Supervisors + Simplified one_for_one Supervisors

A supervisor with restart strategy simple_one_for_one is a simplified one_for_one supervisor, where all child processes are dynamically added instances of the same process.

The following is an example of a callback module for a simple_one_for_one supervisor:

-module(simple_sup). -behaviour(supervisor). @@ -416,12 +425,12 @@ init(_Args) -> start => {call, start_link, []}, shutdown => brutal_kill}], {ok, {SupFlags, ChildSpecs}}. -

When started, the supervisor does not start any child processes. Instead, all child processes are added dynamically by calling:

supervisor:start_child(Sup, List)

Sup is the pid, or name, of the supervisor. List is an arbitrary list of terms, which are added to the list of arguments specified in the child specification. If the start function is specified as {M, F, A}, the child process is started by calling

For example, adding a child to simple_sup above:

supervisor:start_child(Pid, [id1]) -

The result is that the child process is started by calling apply(call, start_link, []++[id1]), or actually:

call:start_link(id1) -

A child under a simple_one_for_one supervisor can be terminated with the following:

supervisor:terminate_child(Sup, Pid) -

Sup is the pid, or name, of the supervisor, and Pid is the pid of the child.

-

Because a simple_one_for_one supervisor can have many children, it shuts them all down asynchronously. This means that the children do their cleanup in parallel, and therefore the order in which they are stopped is not defined.

@@ -447,11 +456,11 @@ supervisor:terminate_child(Sup, Pid)
Stopping -

Since the supervisor is part of a supervision tree, it is automatically terminated by its supervisor. When asked to shut down, it terminates all child processes in reversed start order according to the respective shutdown specifications, and then terminates itself.

-- cgit v1.2.3 From 6513fc5eb55b306e2b1088123498e6c50b9e7273 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= Date: Thu, 12 Mar 2015 15:35:13 +0100 Subject: Update Efficiency Guide MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Björn Gustavsson. --- system/doc/efficiency_guide/advanced.xml | 296 ++++++++++--------- system/doc/efficiency_guide/binaryhandling.xml | 359 +++++++++++++----------- system/doc/efficiency_guide/commoncaveats.xml | 135 +++++---- system/doc/efficiency_guide/drivers.xml | 115 ++++---- system/doc/efficiency_guide/functions.xml | 120 ++++---- system/doc/efficiency_guide/introduction.xml | 28 +- system/doc/efficiency_guide/listhandling.xml | 120 ++++---- system/doc/efficiency_guide/myths.xml | 128 ++++----- system/doc/efficiency_guide/processes.xml | 159 ++++++----- system/doc/efficiency_guide/profiling.xml | 319 +++++++++++---------- system/doc/efficiency_guide/tablesDatabases.xml | 194 ++++++------- 11 files changed, 1027 insertions(+), 946 deletions(-) (limited to 'system/doc') diff --git a/system/doc/efficiency_guide/advanced.xml b/system/doc/efficiency_guide/advanced.xml index 51f1b2612c..3513a91e34 100644 --- a/system/doc/efficiency_guide/advanced.xml +++ b/system/doc/efficiency_guide/advanced.xml @@ -18,7 +18,6 @@ basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. - Advanced @@ -31,175 +30,216 @@
Memory -

A good start when programming efficiently is to know how much memory different data types and operations require. It is implementation-dependent how much memory the Erlang data types and other items consume, but the following table shows some figures for the erts-5.2 system in R9B. There have been no significant changes in R13.

-

The unit of measurement is memory words. There exists both a 32-bit and a 64-bit implementation. A word is therefore 4 bytes or 8 bytes, respectively.
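The word size of the running emulator can be inspected in the Erlang shell, and erts_debug:flat_size/1 (an internal, unsupported debugging helper) reports the heap size of a term in words. A sketch; the first printed value assumes a 64-bit emulator:

```erlang
%% In the Erlang shell:
1> erlang:system_info(wordsize).   %% bytes per word: 8 on 64-bit, 4 on 32-bit
8
2> erts_debug:flat_size([1,2,3]).  %% three cons cells of 2 words each
6
```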

Data Type        Memory Size

Small integer    1 word.
                 On 32-bit architectures: -134217729 < i < 134217728 (28 bits).
                 On 64-bit architectures: -576460752303423489 < i < 576460752303423488 (60 bits).

Large integer    3..N words.

Atom             1 word. An atom refers into an atom table, which also consumes memory.
                 The atom text is stored once for each unique atom in this table.
                 The atom table is not garbage-collected.

Float            On 32-bit architectures: 4 words.
                 On 64-bit architectures: 3 words.

Binary           3..6 words + data (can be shared).

List             1 word + 1 word per element + the size of each element.

String (is the same as a list of integers)
                 1 word + 2 words per character.

Tuple            2 words + the size of each element.

Pid              1 word for a process identifier from the current local node + 5 words
                 for a process identifier from another node. A process identifier refers
                 into a process table and a node table, which also consumes memory.

Port             1 word for a port identifier from the current local node + 5 words
                 for a port identifier from another node. A port identifier refers
                 into a port table and a node table, which also consumes memory.

Reference        On 32-bit architectures: 5 words for a reference from the current local
                 node + 7 words for a reference from another node.
                 On 64-bit architectures: 4 words for a reference from the current local
                 node + 6 words for a reference from another node. A reference refers
                 into a node table, which also consumes memory.

Fun              9..13 words + the size of environment. A fun refers into a fun table,
                 which also consumes memory.

Ets table        Initially 768 words + the size of each element (6 words + the size of
                 Erlang data). The table grows when necessary.

Erlang process   327 words when spawned, including a heap of 233 words.

Table: Memory Size of Different Data Types
- System limits -

The Erlang language specification puts no limits on number of processes, - length of atoms etc., but for performance and memory saving reasons, - there will always be limits in a practical implementation of the Erlang - language and execution environment.

- - Processes - -

The maximum number of simultaneously alive Erlang processes is - by default 32768. This limit can be configured at startup, - for more information see the - +P - command line flag of - erl(1).

-
- Distributed nodes - - - Known nodes - -

A remote node Y has to be known to node X if there exist - any pids, ports, references, or funs (Erlang data types) from Y - on X, or if X and Y are connected. The maximum number of remote - nodes simultaneously/ever known to a node is limited by the - maximum number of atoms - available for node names. All data concerning remote nodes, - except for the node name atom, are garbage-collected.

-
- Connected nodes - The maximum number of simultaneously connected nodes is limited by - either the maximum number of simultaneously known remote nodes, - the maximum number of (Erlang) ports - available, or - the maximum number of sockets - available. -
-
- Characters in an atom - 255 - Atoms - - By default, the maximum number of atoms is 1048576. - This limit can be raised or lowered using the +t option. - Ets-tables - The default is 1400, can be changed with the environment variable ERL_MAX_ETS_TABLES. - Elements in a tuple - The maximum number of elements in a tuple is 67108863 (26 bit unsigned integer). Other factors - such as the available memory can of course make it hard to create a tuple of that size. - Size of binary - In the 32-bit implementation of Erlang, 536870911 bytes is the - largest binary that can be constructed or matched using the bit syntax. - (In the 64-bit implementation, the maximum size is 2305843009213693951 bytes.) - If the limit is exceeded, bit syntax construction will fail with a - system_limit exception, while any attempt to match a binary that is - too large will fail. - This limit is enforced starting with the R11B-4 release; in earlier releases, - operations on too large binaries would in general either fail or give incorrect - results. - In future releases of Erlang/OTP, other operations that create binaries (such as - list_to_binary/1) will probably also enforce the same limit. - Total amount of data allocated by an Erlang node - The Erlang runtime system can use the complete 32 (or 64) bit address space, - but the operating system often limits a single process to use less than that. - Length of a node name - An Erlang node name has the form host@shortname or host@longname. The node name is - used as an atom within the system so the maximum size of 255 holds for the node name too. - Open ports - - -

The maximum number of simultaneously open Erlang ports is - often by default 16384. This limit can be configured at startup, - for more information see the - +Q - command line flag of - erl(1).

-
- Open files, and sockets - + System Limits +

The Erlang language specification puts no limits on the number of + processes, length of atoms, and so on. However, for performance and + memory saving reasons, there will always be limits in a practical + implementation of the Erlang language and execution environment.

- The maximum number of simultaneously open files and sockets - depend on - the maximum number of Erlang ports - available, and operating system specific settings and limits.
- Number of arguments to a function or fun - 255 -
+ + + Processes + The maximum number of simultaneously alive Erlang processes + is by default 32,768. This limit can be configured at startup. + For more information, see the + +P + command-line flag in the + erl(1) + manual page in erts. + + + Known nodes + A remote node Y must be known to node X if there exists + any pids, ports, references, or funs (Erlang data types) from Y + on X, or if X and Y are connected. The maximum number of remote + nodes simultaneously/ever known to a node is limited by the + maximum number of atoms + available for node names. All data concerning remote nodes, + except for the node name atom, are garbage-collected. + + + Connected nodes + The maximum number of simultaneously connected nodes is + limited by either the maximum number of simultaneously known + remote nodes, + the maximum number of (Erlang) ports + available, or + the maximum number of sockets + available. + + + Characters in an atom + 255. + + + Atoms + By default, the maximum number of atoms is 1,048,576. This + limit can be raised or lowered using the +t option. + + + Ets tables + Default is 1400. It can be changed with the environment + variable ERL_MAX_ETS_TABLES. + + + Elements in a tuple + The maximum number of elements in a tuple is 67,108,863 + (26-bit unsigned integer). Clearly, other factors such as the + available memory can make it difficult to create a tuple of + that size. + + + Size of binary + In the 32-bit implementation of Erlang, 536,870,911 + bytes is the largest binary that can be constructed or matched + using the bit syntax. In the 64-bit implementation, the maximum + size is 2,305,843,009,213,693,951 bytes. If the limit is + exceeded, bit syntax construction fails with a + system_limit exception, while any attempt to match a + binary that is too large fails. This limit is enforced starting + in R11B-4.

+ In earlier Erlang/OTP releases, operations on too large + binaries in general either fail or give incorrect results. In + future releases, other operations that create binaries (such as + list_to_binary/1) will probably also enforce the same + limit.
+
+ + Total amount of data allocated by an Erlang node + The Erlang runtime system can use the complete 32-bit + (or 64-bit) address space, but the operating system often + limits a single process to use less than that. + + + Length of a node name + An Erlang node name has the form host@shortname or + host@longname. The node name is used as an atom within + the system, so the maximum size of 255 holds also for the + node name. + + + Open ports + The maximum number of simultaneously open Erlang ports is + often by default 16,384. This limit can be configured at startup. + For more information, see the + +Q + command-line flag in the + erl(1) manual page + in erts. + + + Open files and + sockets + The maximum number of simultaneously open files and + sockets depends on + the maximum number of Erlang ports + available, as well as on operating system-specific settings + and limits. + + + Number of arguments to a function or fun + 255 + + System Limits +
diff --git a/system/doc/efficiency_guide/binaryhandling.xml b/system/doc/efficiency_guide/binaryhandling.xml index 4ba1378059..0ac1a7ee32 100644 --- a/system/doc/efficiency_guide/binaryhandling.xml +++ b/system/doc/efficiency_guide/binaryhandling.xml @@ -23,7 +23,7 @@ The Initial Developer of the Original Code is Ericsson AB. - Constructing and matching binaries + Constructing and Matching Binaries Bjorn Gustavsson 2007-10-12 @@ -31,10 +31,10 @@ binaryhandling.xml -

In R12B, the most natural way to construct and match binaries is significantly faster than in earlier releases.

-

To construct a binary, you can simply write as follows:

DO (in R12B) / REALLY DO NOT (in earlier releases)

my_list_to_binary([], Acc) -> Acc.]]> -

In releases before R12B, Acc is copied in every iteration. In R12B, Acc is copied only in the first iteration and extra space is allocated at the end of the copied binary. In the next iteration, H is written into the extra space. When the extra space runs out, the binary is reallocated with more extra space. The extra space allocated (or reallocated) is twice the size of the existing binary data, or 256, whichever is larger.

The most natural way to match binaries is now the fastest:

@@ -64,55 +63,79 @@ my_binary_to_list(<>) -> my_binary_to_list(<<>>) -> [].]]>
- How binaries are implemented + How Binaries are Implemented

Internally, binaries and bitstrings are implemented in the same way. In this section, they are called binaries because that is what they are called in the emulator source code.

-

Four types of binary objects are available internally.

Two are containers for binary data and are called:

Refc binaries (short for reference-counted binaries)
Heap binaries

Two are merely references to a part of a binary and are called:

Sub binaries
Match contexts
+
-

Refc Binaries

Refc binaries consist of two parts:

An object stored on the process heap, called a ProcBin
The binary object itself, stored outside all process heaps

The binary object can be referenced by any number of ProcBins from any number of processes. The object contains a reference counter to keep track of the number of references, so that it can be removed when the last reference disappears.

All ProcBin objects in a process are part of a linked list, so that the garbage collector can keep track of them and decrement the reference counters in the binary when a ProcBin disappears.

+
-

Heap binaries are small binaries, - up to 64 bytes, that are stored directly on the process heap. - They will be copied when the process - is garbage collected and when they are sent as a message. They don't +

+ + Heap Binaries +

Heap binaries are small binaries, up to 64 bytes, and are stored + directly on the process heap. They are copied when the process is + garbage-collected and when they are sent as a message. They do not require any special handling by the garbage collector.

+
-

Sub Binaries

The reference objects sub binaries and match contexts can reference part of a refc binary or heap binary.

A sub binary is created by split_binary/2 and when a binary is matched out in a binary pattern. A sub binary is a reference into a part of another binary (refc or heap binary, but never into another sub binary). Therefore, matching out a binary is relatively cheap because the actual binary data is never copied.
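This can be observed with binary:referenced_byte_size/1: a small sub binary matched out of a large binary still references the whole original data. A sketch; the sizes are illustrative:

```erlang
%% In the Erlang shell:
1> Large = binary:copy(<<1>>, 1000).      %% a 1000-byte refc binary
2> <<Head:10/binary, _/binary>> = Large.  %% matching out creates a sub binary
3> byte_size(Head).
10
4> binary:referenced_byte_size(Head).     %% Head still refers to Large's data
1000
```

If only Head is kept alive, the full 1000 bytes cannot be reclaimed; binary:copy/1 can be used to break the reference.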

+
-

Match Context

A match context is similar to a sub binary, but is optimized for binary matching. For example, it contains a direct pointer to the binary data. For each field that is matched out of a binary, the position in the match context is incremented.

In R11B, a match context was only used during a binary matching operation.

@@ -122,27 +145,28 @@ my_binary_to_list(<<>>) -> [].]]> context and discard the sub binary. Instead of creating a sub binary, the match context is kept.

-

The compiler can only do this optimization if it can know for sure +

The compiler can only do this optimization if it knows that the match context will not be shared. If it would be shared, the functional properties (also called referential transparency) of Erlang would break.

+
- Constructing binaries - -

In R12B, appending to a binary or bitstring

+ Constructing Binaries +

In R12B, appending to a binary or bitstring + is specially optimized by the runtime system:

> <>]]> -

As the runtime system handles the optimization (instead of the compiler), there are very few circumstances in which the optimization does not work.

+ does not work.

-

To explain how it works, let us examine the following code line by line:

>, %% 1 @@ -152,81 +176,81 @@ Bin3 = <>, %% 4 Bin4 = <>, %% 5 !!! {Bin4,Bin3} %% 6]]> -

line by line.

- -

The first line (marked with the %% 1 comment), assigns + + Line 1 (marked with the %% 1 comment), assigns a heap binary to - the variable Bin0.

+ the Bin0 variable. -

The second line is an append operation. Since Bin0 + Line 2 is an append operation. As Bin0 has not been involved in an append operation, a new refc binary - will be created and the contents of Bin0 will be copied - into it. The ProcBin part of the refc binary will have + is created and the contents of Bin0 is copied + into it. The ProcBin part of the refc binary has its size set to the size of the data stored in the binary, while - the binary object will have extra space allocated. - The size of the binary object will be either twice the + the binary object has extra space allocated. + The size of the binary object is either twice the size of Bin0 or 256, whichever is larger. In this case - it will be 256.

+ it is 256. -

It gets more interesting in the third line. + Line 3 is more interesting. Bin1 has been used in an append operation, - and it has 255 bytes of unused storage at the end, so the three new bytes - will be stored there.

+ and it has 255 bytes of unused storage at the end, so the 3 new + bytes are stored there. -

Same thing in the fourth line. There are 252 bytes left, - so there is no problem storing another three bytes.

+ Line 4. The same applies here. There are 252 bytes left, + so there is no problem storing another 3 bytes. -

But in the fifth line something interesting happens. - Note that we don't append to the previous result in Bin3, - but to Bin1. We expect that Bin4 will be assigned - the value <<0,1,2,3,17>>. We also expect that + Line 5. Here, something interesting happens. Notice + that the result is not appended to the previous result in Bin3, + but to Bin1. It is expected that Bin4 will be assigned + the value <<0,1,2,3,17>>. It is also expected that Bin3 will retain its value (<<0,1,2,3,4,5,6,7,8,9>>). - Clearly, the run-time system cannot write the byte 17 into the binary, + Clearly, the runtime system cannot write byte 17 into the binary, because that would change the value of Bin3 to - <<0,1,2,3,4,17,6,7,8,9>>.

- -

What will happen?

+ <<0,1,2,3,4,17,6,7,8,9>>. + -

The run-time system will see that Bin1 is the result +

The runtime system sees that Bin1 is the result from a previous append operation (not from the latest append operation), - so it will copy the contents of Bin1 to a new binary - and reserve extra storage and so on. (We will not explain here how the - run-time system can know that it is not allowed to write into Bin1; + so it copies the contents of Bin1 to a new binary, + reserves extra storage, and so on. (It is not explained here how the + runtime system can know that it is not allowed to write into Bin1; it is left as an exercise to the curious reader to figure out how it is done by reading the emulator sources, primarily erl_bits.c.)

- Circumstances that force copying + Circumstances That Force Copying

The optimization of the binary append operation requires that there is a single ProcBin and a single reference to the ProcBin for the binary. The reason is that the binary object can be - moved (reallocated) during an append operation, and when that happens + moved (reallocated) during an append operation, and when that happens, the pointer in the ProcBin must be updated. If there would be more than one ProcBin pointing to the binary object, it would not be possible to find and update all of them.

-

Therefore, certain operations on a binary will mark it so that +

Therefore, certain operations on a binary mark it so that any future append operation will be forced to copy the binary. In most cases, the binary object will be shrunk at the same time to reclaim the extra space allocated for growing.

-

When appending to a binary

+

When appending to a binary as follows, only the binary returned + from the latest append operation will support further cheap append + operations:

>]]> -

only the binary returned from the latest append operation will - support further cheap append operations. In the code fragment above, +

In the code fragment in the beginning of this section, appending to Bin will be cheap, while appending to Bin0 will force the creation of a new binary and copying of the contents of Bin0.

If a binary is sent as a message to a process or port, the binary will be shrunk and any further append operation will copy the binary - data into a new binary. For instance, in the following code fragment

+ data into a new binary. For example, in the following code fragment + Bin1 will be copied in the third line:

>, @@ -234,12 +258,12 @@ PortOrPid ! Bin1, Bin = <> %% Bin1 will be COPIED ]]> -

Bin1 will be copied in the third line.

- -

The same thing happens if you insert a binary into an ets - table or send it to a port using erlang:port_command/2 or pass it to +

The same happens if you insert a binary into an Ets + table, send it to a port using erlang:port_command/2, or + pass it to enif_inspect_binary in a NIF.

+

Matching a binary will also cause it to shrink and the next append operation will copy the binary data:

@@ -249,22 +273,23 @@ Bin1 = <>, Bin = <> %% Bin1 will be COPIED ]]> -

The reason is that a match context +

The reason is that a + match context contains a direct pointer to the binary data.

-

If a process simply keeps binaries (either in "loop data" or in the process - dictionary), the garbage collector may eventually shrink the binaries. - If only one such binary is kept, it will not be shrunk. If the process later - appends to a binary that has been shrunk, the binary object will be reallocated - to make place for the data to be appended.

+

If a process simply keeps binaries (either in "loop data" or in the + process + dictionary), the garbage collector can eventually shrink the binaries. + If only one such binary is kept, it will not be shrunk. If the process + later appends to a binary that has been shrunk, the binary object will + be reallocated to make place for the data to be appended.

-
- Matching binaries + Matching Binaries -

We will revisit the example shown earlier

+

Let us revisit the example in the beginning of the previous section:

DO (in R12B)

>) -> [H|my_binary_to_list(T)]; my_binary_to_list(<<>>) -> [].]]> -

too see what is happening under the hood.

- -

The very first time my_binary_to_list/1 is called, +

The first time my_binary_to_list/1 is called, a match context - will be created. The match context will point to the first - byte of the binary. One byte will be matched out and the match context - will be updated to point to the second byte in the binary.

+ is created. The match context points to the first + byte of the binary. 1 byte is matched out and the match context + is updated to point to the second byte in the binary.

-

In R11B, at this point a sub binary +

In R11B, at this point a + sub binary would be created. In R12B, the compiler sees that there is no point in creating a sub binary, because there will soon be a call to a function (in this case, - to my_binary_to_list/1 itself) that will immediately + to my_binary_to_list/1 itself) that immediately will create a new match context and discard the sub binary.

-

Therefore, in R12B, my_binary_to_list/1 will call itself +

Therefore, in R12B, my_binary_to_list/1 calls itself with the match context instead of with a sub binary. The instruction - that initializes the matching operation will basically do nothing + that initializes the matching operation basically does nothing when it sees that it was passed a match context instead of a binary.

When the end of the binary is reached and the second clause matches, the match context will simply be discarded (removed in the next - garbage collection, since there is no longer any reference to it).

+ garbage collection, as there is no longer any reference to it).

To summarize, my_binary_to_list/1 in R12B only needs to create one match context and no sub binaries. In R11B, if the binary contains N bytes, N+1 match contexts and N - sub binaries will be created.

+ sub binaries are created.

-

In R11B, the fastest way to match binaries is:

+

In R11B, the fastest way to match binaries is as follows:

DO NOT (in R12B)

end.]]>

This function cleverly avoids building sub binaries, but it cannot - avoid building a match context in each recursion step. Therefore, in both R11B and R12B, + avoid building a match context in each recursion step. + Therefore, in both R11B and R12B, my_complicated_binary_to_list/1 builds N+1 match - contexts. (In a future release, the compiler might be able to generate code - that reuses the match context, but don't hold your breath.)

+ contexts. (In a future Erlang/OTP release, the compiler might be able + to generate code that reuses the match context.)

-

Returning to my_binary_to_list/1, note that the match context was - discarded when the entire binary had been traversed. What happens if +

Returning to my_binary_to_list/1, notice that the match context + was discarded when the entire binary had been traversed. What happens if the iteration stops before it has reached the end of the binary? Will the optimization still work?

@@ -336,29 +361,23 @@ after_zero(<<>>) -> <<>>. ]]> -

Yes, it will. The compiler will remove the building of the sub binary in the - second clause

+

Yes, it will. The compiler will remove the building of the sub binary in + the second clause:

>) -> after_zero(T); -. -. -.]]> +...]]> -

but will generate code that builds a sub binary in the first clause

+

But it will generate code that builds a sub binary in the first clause:

>) -> T; -. -. -.]]> +...]]> -

Therefore, after_zero/1 will build one match context and one sub binary +

Therefore, after_zero/1 builds one match context and one sub binary (assuming it is passed a binary that contains a zero byte).

Code like the following will also be optimized:

@@ -371,12 +390,14 @@ all_but_zeroes_to_list(<<0,T/binary>>, Acc, Remaining) -> all_but_zeroes_to_list(<>, Acc, Remaining) -> all_but_zeroes_to_list(T, [Byte|Acc], Remaining-1).]]> -

The compiler will remove building of sub binaries in the second and third clauses, - and it will add an instruction to the first clause that will convert Buffer - from a match context to a sub binary (or do nothing if Buffer already is a binary).

+

The compiler removes building of sub binaries in the second and third + clauses, and it adds an instruction to the first clause that converts + Buffer from a match context to a sub binary (or does nothing if + Buffer is a binary already).

-

Before you begin to think that the compiler can optimize any binary patterns, - here is a function that the compiler (currently, at least) is not able to optimize:

+

Before you begin to think that the compiler can optimize any binary + patterns, the following function cannot be optimized by the compiler + (currently, at least):

>) -> @@ -386,43 +407,43 @@ non_opt_eq([_|_], <<_,_/binary>>) -> non_opt_eq([], <<>>) -> true.]]> -

It was briefly mentioned earlier that the compiler can only delay creation of - sub binaries if it can be sure that the binary will not be shared. In this case, - the compiler cannot be sure.

+

It was mentioned earlier that the compiler can only delay creation of + sub binaries if it knows that the binary will not be shared. In this case, + the compiler cannot know.

-

We will soon show how to rewrite non_opt_eq/2 so that the delayed sub binary - optimization can be applied, and more importantly, we will show how you can find out - whether your code can be optimized.

+

Soon it is shown how to rewrite non_opt_eq/2 so that the delayed + sub binary optimization can be applied, and more importantly, it is shown + how you can find out whether your code can be optimized.

- The bin_opt_info option + Option bin_opt_info

Use the bin_opt_info option to have the compiler print a lot of - information about binary optimizations. It can be given either to the compiler or - erlc

+ information about binary optimizations. It can be given either to the + compiler or erlc:

-

or passed via an environment variable

+

or passed through an environment variable:

-

Note that the bin_opt_info is not meant to be a permanent option added - to your Makefiles, because it is not possible to eliminate all messages that - it generates. Therefore, passing the option through the environment is in most cases - the most practical approach.

+

Notice that the bin_opt_info is not meant to be a permanent + option added to your Makefiles, because not all messages that it + generates can be eliminated. Therefore, passing the option through + the environment is in most cases the most practical approach.
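As a complement, the option can also be passed when compiling from the Erlang shell. The following is a minimal sketch; the file name my_module.erl is a placeholder, and bin_opt_info and report are standard options of the compile module:

```erlang
%% Sketch: requesting binary optimization info when compiling
%% from the Erlang shell. "my_module.erl" is a placeholder file name;
%% the report option makes the generated warnings be printed.
compile:file("my_module.erl", [bin_opt_info, report]).
```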

-

The warnings will look like this:

+

The warnings look as follows:

-

To make it clearer exactly what code the warnings refer to, - in the examples that follow, the warnings are inserted as comments - after the clause they refer to:

+

To make it clearer exactly what code the warnings refer to, the + warnings in the following examples are inserted as comments + after the clause they refer to, for example:

>) -> @@ -434,12 +455,12 @@ after_zero(<<_,T/binary>>) -> after_zero(<<>>) -> <<>>.]]> -

The warning for the first clause tells us that it is not possible to - delay the creation of a sub binary, because it will be returned. - The warning for the second clause tells us that a sub binary will not be +

The warning for the first clause says that the creation of a sub + binary cannot be delayed, because it will be returned. + The warning for the second clause says that a sub binary will not be created (yet).

-

It is time to revisit the earlier example of the code that could not +

Let us revisit the earlier example of the code that could not be optimized and find out why:

>) -> non_opt_eq([], <<>>) -> true.]]> -

The compiler emitted two warnings. The INFO warning refers to the function - non_opt_eq/2 as a callee, indicating that any functions that call non_opt_eq/2 - will not be able to make delayed sub binary optimization. - There is also a suggestion to change argument order. - The second warning (that happens to refer to the same line) refers to the construction of - the sub binary itself.

+

The compiler emitted two warnings. The INFO warning refers + to the function non_opt_eq/2 as a callee, indicating that any + function that calls non_opt_eq/2 cannot make the delayed sub binary + optimization. There is also a suggestion to change argument order. + The second warning (which happens to refer to the same line) refers to + the construction of the sub binary itself.

-

We will soon show another example that should make the distinction between INFO - and NOT OPTIMIZED warnings somewhat clearer, but first we will heed the suggestion - to change argument order:

+

Soon another example will show the difference between the + INFO and NOT OPTIMIZED warnings somewhat clearer, but + let us first follow the suggestion to change argument order:

>, [H|T2]) -> @@ -485,15 +506,13 @@ match_body([0|_], <>) -> %% sub binary optimization; %% SUGGEST changing argument order done; -. -. -.]]> +...]]>

The warning means that if there is a call to match_body/2 (from another clause in match_body/2 or another function), the - delayed sub binary optimization will not be possible. There will be additional - warnings for any place where a sub binary is matched out at the end of and - passed as the second argument to match_body/2. For instance:

+ delayed sub binary optimization will not be possible. More warnings will + occur for any place where a sub binary is matched out at the end of and + passed as the second argument to match_body/2, for example:

>) -> @@ -504,10 +523,10 @@ match_head(List, <<_:10,Data/binary>>) ->
- Unused variables + Unused Variables -

The compiler itself figures out if a variable is unused. The same - code is generated for each of the following functions

+

The compiler figures out if a variable is unused. The same + code is generated for each of the following functions:

>, Count) -> count1(T, Count+1); @@ -519,11 +538,9 @@ count2(<<>>, Count) -> Count. count3(<<_H,T/binary>>, Count) -> count3(T, Count+1); count3(<<>>, Count) -> Count.]]> -

In each iteration, the first 8 bits in the binary will be skipped, not matched out.

- +

In each iteration, the first 8 bits in the binary will be skipped, + not matched out.

-
- diff --git a/system/doc/efficiency_guide/commoncaveats.xml b/system/doc/efficiency_guide/commoncaveats.xml index 551b0a03e6..71991d342f 100644 --- a/system/doc/efficiency_guide/commoncaveats.xml +++ b/system/doc/efficiency_guide/commoncaveats.xml @@ -18,7 +18,6 @@ basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. - Common Caveats @@ -29,49 +28,50 @@ commoncaveats.xml -

Here we list a few modules and BIFs to watch out for, and not only +

This section lists a few modules and BIFs to watch out for, not only from a performance point of view.

- The timer module + Timer Module

Creating timers using erlang:send_after/3 - and erlang:start_timer/3 + and + erlang:start_timer/3 +, is much more efficient than using the timers provided by the - timer module. The - timer module uses a separate process to manage the timers, - and that process can easily become overloaded if many processes + timer module in STDLIB. + The timer module uses a separate process to manage the timers. + That process can easily become overloaded if many processes create and cancel timers frequently (especially when using the SMP emulator).
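As an illustration, a per-process timeout can be armed directly with these BIFs, with no timer-server process involved. This is a minimal sketch; do_work/2 and handle_timeout/1 are hypothetical helpers, not part of any manual:

```erlang
%% Sketch: a receive loop with a timeout armed through
%% erlang:send_after/3 instead of the timer module.
%% do_work/2 and handle_timeout/1 are hypothetical helpers.
loop(State) ->
    Ref = erlang:send_after(5000, self(), timeout),
    receive
        {work, Work} ->
            %% Cancel the pending timer before looping. (Production code
            %% would also flush a timeout message that may already have
            %% been delivered to the mailbox.)
            erlang:cancel_timer(Ref),
            loop(do_work(Work, State));
        timeout ->
            handle_timeout(State)
    end.
```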

-

The functions in the timer module that do not manage timers (such as - timer:tc/3 or timer:sleep/1), do not call the timer-server process - and are therefore harmless.

+

The functions in the timer module that do not manage timers + (such as timer:tc/3 or timer:sleep/1) do not call the + timer-server process and are therefore harmless.

list_to_atom/1 -

Atoms are not garbage-collected. Once an atom is created, it will never - be removed. The emulator will terminate if the limit for the number - of atoms (1048576 by default) is reached.

+

Atoms are not garbage-collected. Once an atom is created, it is never + removed. The emulator terminates if the limit for the number + of atoms (1,048,576 by default) is reached.

-

Therefore, converting arbitrary input strings to atoms could be - dangerous in a system that will run continuously. - If only certain well-defined atoms are allowed as input, you can use +

Therefore, converting arbitrary input strings to atoms can be + dangerous in a system that runs continuously. + If only certain well-defined atoms are allowed as input, list_to_existing_atom/1 + can be used to guard against a denial-of-service attack. (All atoms that are allowed - must have been created earlier, for instance by simply using all of them + must have been created earlier, for example, by simply using all of them in a module and loading that module.)
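The following sketch shows one way to apply this guard. The function name parse_command/1 and the command atoms are hypothetical; the point is that the allowed atoms already exist because they appear in the module itself:

```erlang
%% Sketch: converting external input safely. The atoms start, stop,
%% and status exist because they appear in this module, so
%% list_to_existing_atom/1 can find them; any other input fails
%% with badarg instead of creating a new atom.
parse_command(String) ->
    try list_to_existing_atom(String) of
        Cmd when Cmd =:= start; Cmd =:= stop; Cmd =:= status ->
            {ok, Cmd};
        _Other ->
            {error, unknown_command}
    catch
        error:badarg ->
            {error, unknown_command}
    end.
```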

Using list_to_atom/1 to construct an atom that is passed to - apply/3 like this

- + apply/3 as follows, is quite expensive and not recommended + in time-critical code:

-apply(list_to_atom("some_prefix"++Var), foo, Args) - -

is quite expensive and is not recommended in time-critical code.

+apply(list_to_atom("some_prefix"++Var), foo, Args)
@@ -81,25 +81,25 @@ apply(list_to_atom("some_prefix"++Var), foo, Args) length of the list, as opposed to tuple_size/1, byte_size/1, and bit_size/1, which all execute in constant time.

-

Normally you don't have to worry about the speed of length/1, - because it is efficiently implemented in C. In time critical-code, though, - you might want to avoid it if the input list could potentially be very long.

+

Normally, there is no need to worry about the speed of length/1, + because it is efficiently implemented in C. In time-critical code, + you might want to avoid it if the input list could potentially be very + long.

Some uses of length/1 can be replaced by matching. - For instance, this code

- + For example, the following code:

foo(L) when length(L) >= 3 -> ... -

can be rewritten to

+

can be rewritten to:

foo([_,_,_|_]=L) -> ... -

(One slight difference is that length(L) will fail if the L - is an improper list, while the pattern in the second code fragment will - accept an improper list.)

+

One slight difference is that length(L) fails if L + is an improper list, while the pattern in the second code fragment + accepts an improper list.

@@ -107,50 +107,49 @@ foo([_,_,_|_]=L) ->

setelement/3 copies the tuple it modifies. Therefore, updating a tuple in a loop - using setelement/3 will create a new copy of the tuple every time.

+ using setelement/3 creates a new copy of the tuple every time.

There is one exception to the rule that the tuple is copied. If the compiler clearly can see that destructively updating the tuple would - give exactly the same result as if the tuple was copied, the call to - setelement/3 will be replaced with a special destructive setelement - instruction. In the following code sequence

- + give the same result as if the tuple was copied, the call to + setelement/3 is replaced with a special destructive setelement + instruction. In the following code sequence, the first setelement/3 + call copies the tuple and modifies the ninth element:

multiple_setelement(T0) -> T1 = setelement(9, T0, bar), T2 = setelement(7, T1, foobar), setelement(5, T2, new_value). -

the first setelement/3 call will copy the tuple and modify the - ninth element. The two following setelement/3 calls will modify +

The two following setelement/3 calls modify the tuple in place.

-

For the optimization to be applied, all of the followings conditions +

For the optimization to be applied, all the following conditions must be true:

The indices must be integer literals, not variables or expressions. The indices must be given in descending order. - There must be no calls to other function in between the calls to + There must be no calls to another function in between the calls to setelement/3. The tuple returned from one setelement/3 call must only be used in the subsequent call to setelement/3. -

If it is not possible to structure the code as in the multiple_setelement/1 +

If the code cannot be structured as in the multiple_setelement/1 example, the best way to modify multiple elements in a large tuple is to - convert the tuple to a list, modify the list, and convert the list back to + convert the tuple to a list, modify the list, and convert it back to a tuple.
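The round trip through a list can be sketched as follows. This is an illustrative example, not an OTP library function; set_elements/2 and its argument Updates (a list of {Index,Value} pairs with 1-based indices) are hypothetical:

```erlang
%% Sketch: updating many elements of a large tuple by converting it
%% to a list, patching the list in a single pass, and converting back.
%% Updates is a hypothetical list of {Index,Value} pairs; sorting it
%% by index lets one traversal apply them all.
set_elements(Tuple, Updates) ->
    List = update_list(tuple_to_list(Tuple), lists:keysort(1, Updates), 1),
    list_to_tuple(List).

update_list(List, [], _Index) ->
    List;                                      %% no updates left
update_list([_|T], [{Index,V}|Updates], Index) ->
    [V|update_list(T, Updates, Index+1)];      %% replace this element
update_list([H|T], Updates, Index) ->
    [H|update_list(T, Updates, Index+1)].      %% keep this element
```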

size/1 -

size/1 returns the size for both tuples and binary.

+

size/1 returns the size for both tuples and binaries.

-

Using the new BIFs tuple_size/1 and byte_size/1 introduced - in R12B gives the compiler and run-time system more opportunities for - optimization. A further advantage is that the new BIFs could help Dialyzer +

Using the new BIFs tuple_size/1 and byte_size/1, introduced + in R12B, gives the compiler and the runtime system more opportunities for + optimization. Another advantage is that the new BIFs can help Dialyzer to find more bugs in your program.
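For example, because a failing guard only makes a clause not match, the type-specific BIFs can double as type tests. The function kind/1 below is a hypothetical sketch:

```erlang
%% Sketch: tuple_size/1 and byte_size/1 in guards. byte_size/1 fails
%% on a tuple, which simply makes that clause not match, so these
%% clauses also act as type tests.
kind(T) when tuple_size(T) =:= 2 -> pair;
kind(B) when byte_size(B) > 0   -> nonempty_binary;
kind(_)                         -> other.
```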

@@ -159,22 +158,21 @@ multiple_setelement(T0) ->

It is usually more efficient to split a binary using matching instead of calling the split_binary/2 function. Furthermore, mixing bit syntax matching and split_binary/2 - may prevent some optimizations of bit syntax matching.

+ can prevent some optimizations of bit syntax matching.

DO

> = Bin,]]>

DO NOT

- {Bin1,Bin2} = split_binary(Bin, Num) - + {Bin1,Bin2} = split_binary(Bin, Num)
- The '--' operator -

Note that the '--' operator has a complexity - proportional to the product of the length of its operands, - meaning that it will be very slow if both of its operands + Operator "--" +

The "--" operator has a complexity + proportional to the product of the length of its operands. + This means that the operator is very slow if both of its operands are long lists:

DO NOT

@@ -182,42 +180,39 @@ multiple_setelement(T0) -> HugeList1 -- HugeList2]]>

Instead use the ordsets - module:

+ module in STDLIB:

DO

HugeSet1 = ordsets:from_list(HugeList1), HugeSet2 = ordsets:from_list(HugeList2), - ordsets:subtract(HugeSet1, HugeSet2) - + ordsets:subtract(HugeSet1, HugeSet2) -

Obviously, that code will not work if the original order +

Obviously, that code does not work if the original order of the list is important. If the order of the list must be - preserved, do like this:

+ preserved, do as follows:

DO

-

Subtle note 1: This code behaves differently from '--' - if the lists contain duplicate elements. (One occurrence - of an element in HugeList2 will remove all +

This code behaves differently from "--" + if the lists contain duplicate elements (one occurrence + of an element in HugeList2 removes all occurrences in HugeList1).

+

Also, this code compares list elements using the + "==" operator, while "--" uses the "=:=" operator. + If that difference is important, sets can be used instead of + gb_sets, but sets:from_list/1 is much + slower than gb_sets:from_list/1 for long lists.

-

Subtle note 2: This code compares lists elements using the - '==' operator, while '--' uses the '=:='. If - that difference is important, sets can be used instead of - gb_sets, but note that sets:from_list/1 is much - slower than gb_sets:from_list/1 for long lists.

- -

Using the '--' operator to delete an element +

Using the "--" operator to delete an element from a list is not a performance problem:

OK

- HugeList1 -- [Element] - + HugeList1 -- [Element]
diff --git a/system/doc/efficiency_guide/drivers.xml b/system/doc/efficiency_guide/drivers.xml index dfc49bdf21..33d6333e7d 100644 --- a/system/doc/efficiency_guide/drivers.xml +++ b/system/doc/efficiency_guide/drivers.xml @@ -18,7 +18,6 @@ basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. - Drivers @@ -29,26 +28,26 @@ drivers.xml -

This chapter provides a (very) brief overview on how to write efficient - drivers. It is assumed that you already have a good understanding of +

This section provides a brief overview on how to write efficient drivers.

+

It is assumed that you have a good understanding of drivers.

- Drivers and concurrency + Drivers and Concurrency -

The run-time system will always take a lock before running +

The runtime system always takes a lock before running any code in a driver.

-

By default, that lock will be at the driver level, meaning that +

By default, that lock is at the driver level, that is, if several ports have been opened to the same driver, only code for one port can run at a time.

-

A driver can be configured to instead have one lock for each port.

+

A driver can be configured to have one lock for each port instead.

-

If a driver is used in a functional way (i.e. it holds no state, +

If a driver is used in a functional way (that is, holds no state, but only does some heavy calculation and returns a result), several - ports with registered names can be opened beforehand and the port to - be used can be chosen based on the scheduler ID like this:

+ ports with registered names can be opened beforehand, and the port to + be used can be chosen based on the scheduler ID as follows:

-define(PORT_NAMES(), @@ -67,82 +66,82 @@ client_port() ->
- Avoiding copying of binaries when calling a driver + Avoiding Copying Binaries When Calling a Driver

There are basically two ways to avoid copying a binary that is - sent to a driver.

- -

If the Data argument for - port_control/3 - is a binary, the driver will be passed a pointer to the contents of - the binary and the binary will not be copied. - If the Data argument is an iolist (list of binaries and lists), - all binaries in the iolist will be copied.

- -

Therefore, if you want to send both a pre-existing binary and some - additional data to a driver without copying the binary, you must call - port_control/3 twice; once with the binary and once with the - additional data. However, that will only work if there is only one - process communicating with the port (because otherwise another process - could call the driver in-between the calls).

- -

Another way to avoid copying binaries is to implement an outputv - callback (instead of an output callback) in the driver. - If a driver has an outputv callback, refc binaries passed - in an iolist in the Data argument for - port_command/2 - will be passed as references to the driver.

+ sent to a driver:

+ + +

If the Data argument for + port_control/3 + is a binary, the driver will be passed a pointer to the contents of + the binary and the binary will not be copied. If the Data + argument is an iolist (list of binaries and lists), all binaries in + the iolist will be copied.

+ +

Therefore, if you want to send both a pre-existing binary and some + extra data to a driver without copying the binary, you must call + port_control/3 twice; once with the binary and once with the + extra data. However, that will only work if there is only one + process communicating with the port (because otherwise another process + can call the driver in-between the calls).

+ +

Implement an outputv callback (instead of an + output callback) in the driver. If a driver has an + outputv callback, refc binaries passed in an iolist + in the Data argument for + port_command/2 + will be passed as references to the driver.

+
- Returning small binaries from a driver + Returning Small Binaries from a Driver -

The run-time system can represent binaries up to 64 bytes as - heap binaries. They will always be copied when sent in a messages, - but they will require less memory if they are not sent to another +

The runtime system can represent binaries up to 64 bytes as + heap binaries. They are always copied when sent in messages, + but if they are not sent to another process, they require less memory and garbage collection is cheaper.

-

If you know that the binaries you return are always small, - you should use driver API calls that do not require a pre-allocated - binary, for instance +

If you know that the binaries you return are always small, you + are advised to use driver API calls that do not require a pre-allocated + binary, for example, driver_output() - or - erl_drv_output_term() + or + erl_drv_output_term(), using the ERL_DRV_BUF2BINARY format, - to allow the run-time to construct a heap binary.

+ to allow the runtime to construct a heap binary.

- Returning big binaries without copying from a driver + Returning Large Binaries without Copying from a Driver -

To avoid copying data when a big binary is sent or returned from +

To avoid copying data when a large binary is sent or returned from the driver to an Erlang process, the driver must first allocate the binary and then send it to an Erlang process in some way.

-

Use driver_alloc_binary() to allocate a binary.

+

Use + driver_alloc_binary() + to allocate a binary.

There are several ways to send a binary created with - driver_alloc_binary().

+ driver_alloc_binary():

-

From the control callback, a binary can be returned provided - that - set_port_control_flags() - has been called with the flag value PORT_CONTROL_FLAG_BINARY.

-
+ From the control callback, a binary can be returned if + set_port_control_flags() + has been called with the flag value PORT_CONTROL_FLAG_BINARY. -

A single binary can be sent with - driver_output_binary().

+ A single binary can be sent with + driver_output_binary(). -

Using + Using erl_drv_output_term() or erl_drv_send_term(), - a binary can be included in an Erlang term.

-
+ a binary can be included in an Erlang term.
- diff --git a/system/doc/efficiency_guide/functions.xml b/system/doc/efficiency_guide/functions.xml index ec1a45eaa9..bd23c9d90d 100644 --- a/system/doc/efficiency_guide/functions.xml +++ b/system/doc/efficiency_guide/functions.xml @@ -18,7 +18,6 @@ basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. - Functions @@ -30,17 +29,18 @@
- Pattern matching -

Pattern matching in function head and in case and receive - clauses are optimized by the compiler. With a few exceptions, there is nothing - to gain by rearranging clauses.

+ Pattern Matching +

Pattern matching in function heads as well as in case and
receive clauses is optimized by the compiler. With a few
exceptions, there is nothing to gain by rearranging clauses.

One exception is pattern matching of binaries. The compiler - will not rearrange clauses that match binaries. Placing the - clause that matches against the empty binary last will usually - be slightly faster than placing it first.

+ does not rearrange clauses that match binaries. Placing the + clause that matches against the empty binary last is usually + slightly faster than placing it first.
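As a hedged illustration of this exception (the function name and shape are my own, not taken from the guide), a binary-matching function with the empty-binary clause placed last:

```erlang
%% The compiler does not rearrange binary-matching clauses, so they
%% are tried in the order written. Placing the empty-binary clause
%% last is usually slightly faster.
count_bytes(Bin) -> count_bytes(Bin, 0).

count_bytes(<<_, Rest/binary>>, N) -> count_bytes(Rest, N + 1);
count_bytes(<<>>, N) -> N.
```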

-

Here is a rather contrived example to show another exception:

+

The following is a rather unnatural example to show another + exception:

DO NOT

@@ -53,27 +53,30 @@ atom_map1(five) -> 5; atom_map1(six) -> 6.

The problem is the clause with the variable Int. - Since a variable can match anything, including the atoms - four, five, and six that the following clauses - also will match, the compiler must generate sub-optimal code that will - execute as follows:

+ As a variable can match anything, including the atoms + four, five, and six, which the following clauses + also match, the compiler must generate suboptimal code that + executes as follows:

-

First the input value is compared to one, two, and + + First, the input value is compared to one, two, and three (using a single instruction that does a binary search; thus, quite efficient even if there are many values) to select which - one of the first three clauses to execute (if any).

one of the first three clauses to execute (if any).

If none of the first three clauses match, the fourth clause
matches, as a variable always matches.

If none of the first three clauses matched, the fourth clause - will match since a variable always matches. If the guard test - is_integer(Int) succeeds, the fourth clause will be - executed.

+ If the guard test is_integer(Int) succeeds, the fourth + clause is executed. -

If the guard test failed, the input value is compared to + If the guard test fails, the input value is compared to four, five, and six, and the appropriate clause - is selected. (There will be a function_clause exception if - none of the values matched.)

+ is selected. (There is a function_clause exception if none of + the values matched.) + -

Rewriting to either

+

Rewriting to either:

DO

5; atom_map2(six) -> 6; atom_map2(Int) when is_integer(Int) -> Int.]]> -

or

+

or:

DO

4; atom_map3(five) -> 5; atom_map3(six) -> 6.]]> -

will give slightly more efficient matching code.

+

gives slightly more efficient matching code.

-

Here is a less contrived example:

+

Another example:

DO NOT

match anything, the compiler is not allowed to rearrange the clauses, but must generate code that matches them in the order written.

-

If the function is rewritten like this

+

If the function is rewritten as follows, the compiler is free to + rearrange the clauses:

DO

map_pairs2(Map, [X|Xs], [Y|Ys]) -> [Map(X, Y)|map_pairs2(Map, Xs, Ys)].]]> -

the compiler is free to rearrange the clauses. It will generate code - similar to this

+

The compiler will generate code similar to this:

DO NOT (already done by the compiler)

Ys0 end.]]> -

which should be slightly faster for presumably the most common case +

This is slightly faster for probably the most common case that the input lists are not empty or very short. - (Another advantage is that Dialyzer is able to deduce a better type - for the variable Xs.)

+ (Another advantage is that Dialyzer can deduce a better type + for the Xs variable.)

- Function Calls + Function Calls -

Here is an intentionally rough guide to the relative costs of - different kinds of calls. It is based on benchmark figures run on +

This is an intentionally rough guide to the relative costs of + different calls. It is based on benchmark figures run on Solaris/Sparc:

Calls to local or external functions (foo(), m:foo()) - are the fastest kind of calls. + are the fastest calls. + Calling or applying a fun (Fun(), apply(Fun, [])) - is about three times as expensive as calling a local function. + is about three times as expensive as calling a local + function. + Applying an exported function (Mod:Name(), - apply(Mod, Name, [])) is about twice as expensive as calling a fun, - or about six times as expensive as calling a local function. + apply(Mod, Name, [])) is about twice as expensive as calling + a fun or about six times as expensive as calling a local + function.
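As a rough, hedged sketch of how such figures can be reproduced (the module and function names here are illustrative, not from the guide; only the relative order of the numbers is meaningful), the three kinds of calls in the list above can be compared with timer:tc/1:

```erlang
-module(call_costs).
-export([run/0, id/1]).

id(X) -> X.

%% Time N iterations of Fun and return the elapsed microseconds.
time_n(N, Fun) ->
    {Micros, ok} = timer:tc(fun() -> loop(N, Fun) end),
    Micros.

loop(0, _Fun) -> ok;
loop(N, Fun) -> Fun(), loop(N - 1, Fun).

run() ->
    N = 1000000,
    F = fun id/1,
    [{local, time_n(N, fun() -> id(x) end)},                     % local call
     {'fun',  time_n(N, fun() -> F(x) end)},                     % fun call
     {apply,  time_n(N, fun() -> apply(?MODULE, id, [x]) end)}]. % apply/3
```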
- Notes and implementation details + Notes and Implementation Details

Calling and applying a fun does not involve any hash-table lookup. A fun contains an (indirect) pointer to the function that implements @@ -178,42 +185,44 @@ explicit_map_pairs(Map, Xs0, Ys0) ->

Tuples are not fun(s). A "tuple fun", {Module,Function}, is not a fun. The cost for calling a "tuple fun" is similar to that - of apply/3 or worse. Using "tuple funs" is strongly discouraged, - as they may not be supported in a future release, - and because there exists a superior alternative since the R10B - release, namely the fun Module:Function/Arity syntax.

+ of apply/3 or worse. + Using "tuple funs" is strongly discouraged, + as they might not be supported in a future Erlang/OTP release, + and because there exists a superior alternative from R10B, + namely the fun Module:Function/Arity syntax.

apply/3 must look up the code for the function to execute - in a hash table. Therefore, it will always be slower than a + in a hash table. It is therefore always slower than a direct call or a fun call.
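A small shell illustration of these notes (my example, hedged): an external fun created with the fun Module:Function/Arity syntax can replace both "tuple funs" and many uses of apply/3:

```erlang
%% fun Module:Function/Arity creates a proper external fun;
%% a {Module,Function} tuple should not be used as a fun.
1> F = fun lists:reverse/1, F([a,b,c]).
[c,b,a]
```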

It no longer matters (from a performance point of view) - whether you write

+ whether you write:

Module:Function(Arg1, Arg2) -

or

+

or:

apply(Module, Function, [Arg1,Arg2]) -

(The compiler internally rewrites the latter code into the former.)

+

The compiler internally rewrites the latter code into the + former.

-

The following code

+

The following code is slightly slower because the shape of the + list of arguments is unknown at compile time.

apply(Module, Function, Arguments) -

is slightly slower because the shape of the list of arguments - is not known at compile time.

- Memory usage in recursion -

When writing recursive functions it is preferable to make them - tail-recursive so that they can execute in constant memory space.

+ Memory Usage in Recursion +

When writing recursive functions, it is preferable to make them + tail-recursive so that they can execute in constant memory space:

+

DO

list_length(List) -> @@ -224,13 +233,14 @@ list_length([], AccLen) -> list_length([_|Tail], AccLen) -> list_length(Tail, AccLen + 1). % Tail-recursive +

DO NOT

+ list_length([]) -> 0. % Base case list_length([_ | Tail]) -> list_length(Tail) + 1. % Not tail-recursive
- diff --git a/system/doc/efficiency_guide/introduction.xml b/system/doc/efficiency_guide/introduction.xml index 9726d3ad11..a8360f1cdd 100644 --- a/system/doc/efficiency_guide/introduction.xml +++ b/system/doc/efficiency_guide/introduction.xml @@ -18,7 +18,6 @@ basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. - Introduction @@ -32,38 +31,39 @@
Purpose -

Premature optimization is the root of all evil. -- D.E. Knuth

+

"Premature optimization is the root of all evil" + (D.E. Knuth)

-

Efficient code can be well-structured and clean code, based on +

Efficient code can be well-structured and clean, based on
a sound overall architecture and sound algorithms.
Efficient code can be highly implementation-dependent code that
bypasses documented interfaces and takes advantage of obscure quirks
in the current implementation.

-

Ideally, your code should only contain the first kind of efficient - code. If that turns out to be too slow, you should profile the application +

Ideally, your code only contains the first type of efficient + code. If that turns out to be too slow, profile the application to find out where the performance bottlenecks are and optimize only the - bottlenecks. Other code should stay as clean as possible.

+ bottlenecks. Let other code stay as clean as possible.

-

Fortunately, compiler and run-time optimizations introduced in - R12B makes it easier to write code that is both clean and - efficient. For instance, the ugly workarounds needed in R11B and earlier +

Fortunately, compiler and runtime optimizations introduced in + Erlang/OTP R12B makes it easier to write code that is both clean and + efficient. For example, the ugly workarounds needed in R11B and earlier releases to get the most speed out of binary pattern matching are no longer necessary. In fact, the ugly code is slower than the clean code (because the clean code has become faster, not because the uglier code has become slower).

-

This Efficiency Guide cannot really learn you how to write efficient +

This Efficiency Guide cannot really teach you how to write efficient code. It can give you a few pointers about what to avoid and what to use, and some understanding of how certain language features are implemented. - We have generally not included general tips about optimization that will - work in any language, such as moving common calculations out of loops.

+ This guide does not include general tips about optimization that + works in any language, such as moving common calculations out of loops.

Prerequisites -

It is assumed that the reader is familiar with the Erlang - programming language and concepts of OTP.

+

It is assumed that you are familiar with the Erlang programming + language and the OTP concepts.

diff --git a/system/doc/efficiency_guide/listhandling.xml b/system/doc/efficiency_guide/listhandling.xml index 9112738b18..b950f55ad1 100644 --- a/system/doc/efficiency_guide/listhandling.xml +++ b/system/doc/efficiency_guide/listhandling.xml @@ -18,10 +18,9 @@ basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. - - List handling + List Handling Bjorn Gustavsson 2007-11-16 @@ -30,19 +29,18 @@
- Creating a list + Creating a List -

Lists can only be built starting from the end and attaching - list elements at the beginning. If you use the ++ operator - like this

+

Lists can only be built starting from the end and attaching list + elements at the beginning. If you use the "++" operator as + follows, a new list is created that is a copy of the elements in + List1, followed by List2:

List1 ++ List2 -

you will create a new list which is copy of the elements in List1, - followed by List2. Looking at how lists:append/1 or ++ would be - implemented in plain Erlang, it can be seen clearly that the first list - is copied:

+

Looking at how lists:append/1 or ++ would be + implemented in plain Erlang, clearly the first list is copied:

append([H|T], Tail) -> @@ -50,12 +48,12 @@ append([H|T], Tail) -> append([], Tail) -> Tail. -

So the important thing when recursing and building a list is to - make sure that you attach the new elements to the beginning of the list, - so that you build a list, and not hundreds or thousands of - copies of the growing result list.

+

When recursing and building a list, it is important to ensure + that you attach the new elements to the beginning of the list. In + this way, you will build one list, not hundreds or thousands + of copies of the growing result list.

-

Let us first look at how it should not be done:

+

Let us first see how it is not to be done:

DO NOT

bad_fib(N, Current, Next, Fibs) -> bad_fib(N - 1, Next, Current + Next, Fibs ++ [Current]).]]> -

Here we are not a building a list; in each iteration step we - create a new list that is one element longer than the new previous list.

+

Here more than one list is built. In each iteration step, a new list
is created that is one element longer than the previous one.

-

To avoid copying the result in each iteration, we must build the list in - reverse order and reverse the list when we are done:

+

To avoid copying the result in each iteration, build the list in + reverse order and reverse the list when you are done:

DO

- List comprehensions + List Comprehensions

List comprehensions still have a reputation for being slow.
They used to be implemented using funs, which used to be slow.

-

In recent Erlang/OTP releases (including R12B), a list comprehension

+

In recent Erlang/OTP releases (including R12B), a list comprehension:

-

is basically translated to a local function

+

is basically translated to a local function:

'lc^0'([E|Tail], Expr) -> [Expr(E)|'lc^0'(Tail, Expr)]; 'lc^0'([], _Expr) -> []. -

In R12B, if the result of the list comprehension will obviously not be used, - a list will not be constructed. For instance, in this code

+

In R12B, if the result of the list comprehension will obviously + not be used, a list will not be constructed. For example, in this code:

-

or in this code

+

or in this code:

[io:put_chars(E) || E <- List]; ... -> end, some_function(...), -. -. -.]]> +...]]>
-

the value is neither assigned to a variable, nor passed to another function, - nor returned, so there is no need to construct a list and the compiler will simplify - the code for the list comprehension to

+

the value is not assigned to a variable, not passed to another function, + and not returned. This means that there is no need to construct a list and + the compiler will simplify the code for the list comprehension to:

'lc^0'([E|Tail], Expr) -> @@ -139,14 +133,15 @@ some_function(...),
- Deep and flat lists + Deep and Flat Lists

lists:flatten/1 - builds an entirely new list. Therefore, it is expensive, and even - more expensive than the ++ (which copies its left argument, - but not its right argument).

+ builds an entirely new list. It is therefore expensive, and even + more expensive than the ++ operator (which copies its + left argument, but not its right argument).

-

In the following situations, you can easily avoid calling lists:flatten/1:

+

In the following situations, you can easily avoid calling + lists:flatten/1:

When sending data to a port. Ports understand deep lists @@ -155,16 +150,19 @@ some_function(...), When calling BIFs that accept deep lists, such as list_to_binary/1 or iolist_to_binary/1. - When you know that your list is only one level deep, you can can use + When you know that your list is only one level deep, you can use lists:append/1. -
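For example (a hedged sketch of the second point; the literal terms are mine), a deep list can be passed directly to iolist_to_binary/1 with no prior flattening:

```erlang
1> iolist_to_binary([[$f, [$o]], "o", [<<"bar">>]]).
<<"foobar">>
```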

Port example

+
+ Port Example +

DO

       ...
       port_command(Port, DeepList)
       ...
+

DO NOT

       ...
@@ -180,7 +178,7 @@ some_function(...),
       port_command(Port, TerminatedStr)
       ...
-

Instead do like this:

+

Instead:

DO

@@ -188,47 +186,53 @@ some_function(...),
       TerminatedStr = [String, 0], % String="foo" => [[$f, $o, $o], 0]
       port_command(Port, TerminatedStr) 
       ...
+
+ +
+ Append Example -

Append example

DO

       > lists:append([[1], [2], [3]]).
       [1,2,3]
       >
+

DO NOT

       > lists:flatten([[1], [2], [3]]).
       [1,2,3]
       >
+
- Why you should not worry about recursive lists functions + Recursive List Functions -

In the performance myth chapter, the following myth was exposed: - Tail-recursive functions - are MUCH faster than recursive functions.

+

In Section 7.2, the following myth was exposed: + Tail-Recursive Functions + are Much Faster Than Recursive Functions.

To summarize, in R12B there is usually not much difference between a body-recursive list function and tail-recursive function that reverses the list at the end. Therefore, concentrate on writing beautiful code - and forget about the performance of your list functions. In the time-critical - parts of your code (and only there), measure before rewriting - your code.

- -

Important note: This section talks about lists functions that - construct lists. A tail-recursive function that does not construct - a list runs in constant space, while the corresponding body-recursive - function uses stack space proportional to the length of the list. - For instance, a function that sums a list of integers, should not be - written like this

+ and forget about the performance of your list functions. In the + time-critical parts of your code (and only there), measure + before rewriting your code.

+ +

This section is about list functions that construct + lists. A tail-recursive function that does not construct a list runs + in constant space, while the corresponding body-recursive function + uses stack space proportional to the length of the list.

+ +

For example, a function that sums a list of integers is
not to be written as follows:

DO NOT

recursive_sum([H|T]) -> H+recursive_sum(T); recursive_sum([]) -> 0. -

but like this

+

Instead:

DO
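The body of this DO block is trimmed away with the surrounding diff context; a standard accumulator version (a hedged reconstruction, not necessarily the original code) is:

```erlang
%% Tail-recursive sum using an accumulator; runs in constant stack space.
sum(L) -> sum(L, 0).

sum([H|T], Sum) -> sum(T, Sum + H);
sum([], Sum)    -> Sum.
```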

diff --git a/system/doc/efficiency_guide/myths.xml b/system/doc/efficiency_guide/myths.xml index b1108dbab2..70d2dae88e 100644 --- a/system/doc/efficiency_guide/myths.xml +++ b/system/doc/efficiency_guide/myths.xml @@ -31,47 +31,48 @@ myths.xml +

Some truths seem to live on well beyond their best-before date, - perhaps because "information" spreads more rapidly from person-to-person - faster than a single release note that notes, for instance, that funs + perhaps because "information" spreads faster from person-to-person + than a single release note that says, for example, that funs have become faster.

-

Here we try to kill the old truths (or semi-truths) that have +

This section tries to kill the old truths (or semi-truths) that have become myths.

- Myth: Funs are slow -

Yes, funs used to be slow. Very slow. Slower than apply/3. + Myth: Funs are Slow +

Funs used to be very slow, slower than apply/3. Originally, funs were implemented using nothing more than compiler trickery, ordinary tuples, apply/3, and a great deal of ingenuity.

-

But that is ancient history. Funs was given its own data type - in the R6B release and was further optimized in the R7B release. - Now the cost for a fun call falls roughly between the cost for a call to - local function and apply/3.

+

But that is history. Funs were given their own data type
in R6B and were further optimized in R7B.
Now the cost for a fun call falls roughly between the cost for a call
to a local function and apply/3.

- Myth: List comprehensions are slow + Myth: List Comprehensions are Slow

List comprehensions used to be implemented using funs, and in the - bad old days funs were really slow.

+ old days funs were indeed slow.

-

Nowadays the compiler rewrites list comprehensions into an ordinary - recursive function. Of course, using a tail-recursive function with +

Nowadays, the compiler rewrites list comprehensions into an ordinary + recursive function. Using a tail-recursive function with a reverse at the end would be still faster. Or would it? That leads us to the next myth.

- Myth: Tail-recursive functions are MUCH faster - than recursive functions + Myth: Tail-Recursive Functions are Much Faster + Than Recursive Functions

According to the myth, recursive functions leave references - to dead terms on the stack and the garbage collector will have to - copy all those dead terms, while tail-recursive functions immediately + to dead terms on the stack and the garbage collector has to copy + all those dead terms, while tail-recursive functions immediately discard those terms.

That used to be true before R7B. In R7B, the compiler started @@ -79,48 +80,47 @@ be used with an empty list, so that the garbage collector would not keep dead values any longer than necessary.

-

Even after that optimization, a tail-recursive function would - still most of the time be faster than a body-recursive function. Why?

+

Even after that optimization, a tail-recursive function is
still, most of the time, faster than a body-recursive function. Why?

It has to do with how many words of stack are used in each
recursive call. In most cases, a recursive function uses more words
on the stack for each recursion than the number of words a tail-recursive
function would allocate on the heap. As more memory is used, the garbage
collector is invoked more frequently, and it has more work traversing
the stack.

-

In R12B and later releases, there is an optimization that will +

In R12B and later releases, there is an optimization that in many cases reduces the number of words used on the stack in - body-recursive calls, so that a body-recursive list function and + body-recursive calls. A body-recursive list function and a tail-recursive function that calls lists:reverse/1 at the - end will use exactly the same amount of memory. + marker="stdlib:lists#reverse/1">lists:reverse/1 at + the end will use the same amount of memory. lists:map/2, lists:filter/2, list comprehensions, and many other recursive functions now use the same amount of space as their tail-recursive equivalents.

-

So which is faster?

+

So, which is faster? + It depends. On Solaris/Sparc, the body-recursive function seems to + be slightly faster, even for lists with a lot of elements. On the x86 + architecture, tail-recursion was up to about 30% faster.

-

It depends. On Solaris/Sparc, the body-recursive function seems to - be slightly faster, even for lists with very many elements. On the x86 - architecture, tail-recursion was up to about 30 percent faster.

- -

So the choice is now mostly a matter of taste. If you really do need +

So, the choice is now mostly a matter of taste. If you really do need the utmost speed, you must measure. You can no longer be - absolutely sure that the tail-recursive list function will be the fastest - in all circumstances.

+ sure that the tail-recursive list function always is the fastest.

-

Note: A tail-recursive function that does not need to reverse the - list at the end is, of course, faster than a body-recursive function, +

A tail-recursive function that does not need to reverse the + list at the end is faster than a body-recursive function, as are tail-recursive functions that do not construct any terms at all - (for instance, a function that sums all integers in a list).

+ (for example, a function that sums all integers in a list).

- Myth: '++' is always bad + Myth: Operator "++" is Always Bad -

The ++ operator has, somewhat undeservedly, got a very bad reputation. - It probably has something to do with code like

+

The ++ operator has, somewhat undeservedly, got a bad reputation. + It probably has something to do with code like the following, + which is the most inefficient way there is to reverse a list:

DO NOT

@@ -129,12 +129,10 @@ naive_reverse([H|T]) -> naive_reverse([]) -> []. -

which is the most inefficient way there is to reverse a list. - Since the ++ operator copies its left operand, the result - will be copied again and again and again... leading to quadratic - complexity.

+

As the ++ operator copies its left operand, the result + is copied repeatedly, leading to quadratic complexity.

-

On the other hand, using ++ like this

+

But using ++ as follows is not bad:

OK

@@ -143,11 +141,11 @@ naive_but_ok_reverse([H|T], Acc) -> naive_but_ok_reverse([], Acc) -> Acc. -

is not bad. Each list element will only be copied once. +

Each list element is copied only once. The growing result Acc is the right operand - for the ++ operator, and it will not be copied.

+ for the ++ operator, and it is not copied.

-

Of course, experienced Erlang programmers would actually write

+

Experienced Erlang programmers would write as follows:

DO

@@ -156,32 +154,34 @@ vanilla_reverse([H|T], Acc) -> vanilla_reverse([], Acc) -> Acc. -

which is slightly more efficient because you don't build a - list element only to directly copy it. (Or it would be more efficient - if the the compiler did not automatically rewrite [H]++Acc +

This is slightly more efficient because here you do not build a + list element only to copy it directly. (Or it would be more efficient + if the compiler did not automatically rewrite [H]++Acc to [H|Acc].)

- Myth: Strings are slow - -

Actually, string handling could be slow if done improperly. - In Erlang, you'll have to think a little more about how the strings - are used and choose an appropriate representation and use - the re module instead of the obsolete - regexp module if you are going to use regular expressions.

+ Myth: Strings are Slow + +

String handling can be slow if done improperly. + In Erlang, you need to think a little more about how the strings + are used and choose an appropriate representation. If you + use regular expressions, use the + re module in STDLIB + instead of the obsolete regexp module.

- Myth: Repairing a Dets file is very slow + Myth: Repairing a Dets File is Very Slow

The repair time is still proportional to the number of records - in the file, but Dets repairs used to be much, much slower in the past. + in the file, but Dets repairs used to be much slower in the past. Dets has been massively rewritten and improved.

- Myth: BEAM is a stack-based byte-code virtual machine (and therefore slow) + Myth: BEAM is a Stack-Based Byte-Code Virtual Machine + (and Therefore Slow)

BEAM is a register-based virtual machine. It has 1024 virtual registers that are used for holding temporary values and for passing arguments when @@ -193,11 +193,11 @@ vanilla_reverse([], Acc) ->

- Myth: Use '_' to speed up your program when a variable is not used + Myth: Use "_" to Speed Up Your Program When a Variable + is Not Used -

That was once true, but since R6B the BEAM compiler is quite capable of seeing itself +

That was once true, but from R6B the BEAM compiler can see that a variable is not used.

- diff --git a/system/doc/efficiency_guide/processes.xml b/system/doc/efficiency_guide/processes.xml index 86951e2dcc..3bdc314235 100644 --- a/system/doc/efficiency_guide/processes.xml +++ b/system/doc/efficiency_guide/processes.xml @@ -30,15 +30,15 @@
- Creation of an Erlang process + Creating an Erlang Process -

An Erlang process is lightweight compared to operating - systems threads and processes.

+

An Erlang process is lightweight compared to threads and + processes in operating systems.

A newly spawned Erlang process uses 309 words of memory in the non-SMP emulator without HiPE support. (SMP support - and HiPE support will both add to this size.) The size can - be found out like this:

+ and HiPE support both add to this size.) The size can + be found as follows:

 Erlang (BEAM) emulator version 5.6 [async-threads:0] [kernel-poll:false]
@@ -51,11 +51,11 @@ Eshell V5.6  (abort with ^G)
 3> Bytes div erlang:system_info(wordsize).
 309
-

The size includes 233 words for the heap area (which includes the stack). - The garbage collector will increase the heap as needed.

+

The size includes 233 words for the heap area (which includes the + stack). The garbage collector increases the heap as needed.

The main (outer) loop for a process must be tail-recursive. - If not, the stack will grow until the process terminates.

+ Otherwise, the stack grows until the process terminates.

DO NOT

@@ -74,7 +74,7 @@ loop() ->

The call to io:format/2 will never be executed, but a return address will still be pushed to the stack each time loop/0 is called recursively. The correct tail-recursive - version of the function looks like this:

+ version of the function looks as follows:

DO

@@ -90,92 +90,98 @@ loop() -> end.
- Initial heap size + Initial Heap Size

The default initial heap size of 233 words is quite conservative - in order to support Erlang systems with hundreds of thousands or - even millions of processes. The garbage collector will grow and - shrink the heap as needed.

+ to support Erlang systems with hundreds of thousands or + even millions of processes. The garbage collector grows and + shrinks the heap as needed.

In a system that uses comparatively few processes, performance
might be improved by increasing the minimum heap size
using either the +h option for
erl or on a process-per-process
basis using the min_heap_size option for
spawn_opt/4.

-

The gain is twofold: Firstly, although the garbage collector will - grow the heap, it will grow it step by step, which will be more - costly than directly establishing a larger heap when the process - is spawned. Secondly, the garbage collector may also shrink the - heap if it is much larger than the amount of data stored on it; - setting the minimum heap size will prevent that.

- -

The emulator will probably use more memory, and because garbage - collections occur less frequently, huge binaries could be +

The gain is twofold:

+ + Although the garbage collector grows the heap, it grows it + step-by-step, which is more costly than directly establishing a + larger heap when the process is spawned. + The garbage collector can also shrink the heap if it is + much larger than the amount of data stored on it; + setting the minimum heap size prevents that. + + +

The emulator probably uses more memory, and because garbage + collections occur less frequently, huge binaries can be kept much longer.

In systems with many processes, computation tasks that run - for a short time could be spawned off into a new process with - a higher minimum heap size. When the process is done, it will - send the result of the computation to another process and terminate. - If the minimum heap size is calculated properly, the process may not - have to do any garbage collections at all. - This optimization should not be attempted + for a short time can be spawned off into a new process with + a higher minimum heap size. When the process is done, it sends + the result of the computation to another process and terminates. + If the minimum heap size is calculated properly, the process might + not have to do any garbage collections at all. + This optimization is not to be attempted without proper measurements.
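A hedged sketch of that pattern (heavy_computation/0 and the heap size are placeholders of my own, not from the guide):

```erlang
%% Spawn a short-lived task with a larger minimum heap so that it
%% may avoid garbage collection entirely, then send back the result.
spawn_task(Parent) ->
    spawn_opt(fun() ->
                      Result = heavy_computation(), % placeholder
                      Parent ! {result, Result}
              end,
              [{min_heap_size, 20000}]).
```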

-
- Process messages + Process Messages -

All data in messages between Erlang processes is copied, with - the exception of +

All data in messages between Erlang processes is copied, + except for refc binaries on the same Erlang node.

When a message is sent to a process on another Erlang node, - it will first be encoded to the Erlang External Format before - being sent via an TCP/IP socket. The receiving Erlang node decodes - the message and distributes it to the right process.

+ it is first encoded to the Erlang External Format before + being sent through a TCP/IP socket. The receiving Erlang node decodes + the message and distributes it to the correct process.
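A hedged sketch of the refc-binary exception mentioned above (the names are mine): a binary larger than 64 bytes is reference-counted, so sending it to a process on the same node copies only a small reference, not the payload:

```erlang
send_big(Pid) ->
    Bin = binary:copy(<<0>>, 1024), % 1024-byte refc binary
    Pid ! {data, Bin}.              % payload is not copied locally
```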

- The constant pool + Constant Pool

Constant Erlang terms (also called literals) are now kept in constant pools; each loaded module has its own pool. - The following function

The following function no longer builds the tuple every time
it is called (only to have it discarded the next time the garbage
collector was run); instead, the tuple is located in the module's
constant pool:

DO (in R12B and later)

days_in_month(M) -> - element(M, {31,28,31,30,31,30,31,31,30,31,30,31}). - -

will no longer build the tuple every time it is called (only - to have it discarded the next time the garbage collector was run), but - the tuple will be located in the module's constant pool.

+ element(M, {31,28,31,30,31,30,31,31,30,31,30,31}).

But if a constant is sent to another process (or stored in - an ETS table), it will be copied. - The reason is that the run-time system must be able - to keep track of all references to constants in order to properly - unload code containing constants. (When the code is unloaded, - the constants will be copied to the heap of the processes that refer + an Ets table), it is copied. + The reason is that the runtime system must be able + to keep track of all references to constants to unload code + containing constants properly. (When the code is unloaded, + the constants are copied to the heap of the processes that refer to them.) The copying of constants might be eliminated in a future - release.

+ Erlang/OTP release.

- Loss of sharing + Loss of Sharing -

Shared sub-terms are not preserved when a term is sent - to another process, passed as the initial process arguments in - the spawn call, or stored in an ETS table. - That is an optimization. Most applications do not send messages - with shared sub-terms.

+

Shared subterms are not preserved in the following + cases:

+ + When a term is sent to another process + When a term is passed as the initial process arguments in + the spawn call + When a term is stored in an Ets table + +

That is an optimization. Most applications do not send messages + with shared subterms.

-

Here is an example of how a shared sub-term can be created:

+

The following example shows how a shared subterm can be created:

kilo_byte() -> @@ -186,32 +192,32 @@ kilo_byte(0, Acc) -> kilo_byte(N, Acc) -> kilo_byte(N-1, [Acc|Acc]). -

kilo_byte/0 creates a deep list. If we call - list_to_binary/1, we can convert the deep list to a binary - of 1024 bytes:

+

kilo_byte/0 creates a deep list. + If list_to_binary/1 is called, the deep list can be + converted to a binary of 1024 bytes:

 1> byte_size(list_to_binary(efficiency_guide:kilo_byte())).
 1024
-

Using the erts_debug:size/1 BIF we can see that the +

Using the erts_debug:size/1 BIF, it can be seen that the deep list only requires 22 words of heap space:

 2> erts_debug:size(efficiency_guide:kilo_byte()).
 22
-

Using the erts_debug:flat_size/1 BIF, we can calculate - the size of the deep list if sharing is ignored. It will be +

Using the erts_debug:flat_size/1 BIF, the size of the + deep list can be calculated if sharing is ignored. It becomes the size of the list when it has been sent to another process - or stored in an ETS table:

+ or stored in an Ets table:

 3> erts_debug:flat_size(efficiency_guide:kilo_byte()).
 4094
-

We can verify that sharing will be lost if we insert the - data into an ETS table:

+

It can be verified that sharing will be lost if the data is + inserted into an Ets table:

 4> T = ets:new(tab, []).
@@ -223,21 +229,21 @@ true
 7> erts_debug:flat_size(element(2, hd(ets:lookup(T, key)))).
 4094
-

When the data has passed through an ETS table, +

When the data has passed through an Ets table, erts_debug:size/1 and erts_debug:flat_size/1 return the same value. Sharing has been lost.

-

In a future release of Erlang/OTP, we might implement a - way to (optionally) preserve sharing. We have no plans to make - preserving of sharing the default behaviour, since that would +

In a future Erlang/OTP release, a way to (optionally) preserve + sharing might be implemented. There are no plans to make + preserving of sharing the default behaviour, as that would penalize the vast majority of Erlang applications.

- The SMP emulator + SMP Emulator -

The SMP emulator (introduced in R11B) will take advantage of a +

The SMP emulator (introduced in R11B) takes advantage of a multi-core or multi-CPU computer by running several Erlang scheduler threads (typically, the same as the number of cores). Each scheduler thread schedules Erlang processes in the same way as the Erlang scheduler @@ -247,11 +253,11 @@ true must have more than one runnable Erlang process most of the time. Otherwise, the Erlang emulator can still only run one Erlang process at a time, but you must still pay the overhead for locking. Although - we try to reduce the locking overhead as much as possible, it will never - become exactly zero.

+ Erlang/OTP tries to reduce the locking overhead as much as possible, + it will never become exactly zero.

-

Benchmarks that may seem to be concurrent are often sequential. - The estone benchmark, for instance, is entirely sequential. So is also +

Benchmarks that appear to be concurrent are often sequential. + The estone benchmark, for example, is entirely sequential. So is the most common implementation of the "ring benchmark"; usually one process is active, while the others wait in a receive statement.

@@ -259,6 +265,5 @@ true can be used to profile your application to see how much potential (or lack thereof) it has for concurrency.

- diff --git a/system/doc/efficiency_guide/profiling.xml b/system/doc/efficiency_guide/profiling.xml index b93c884270..5df12eefe0 100644 --- a/system/doc/efficiency_guide/profiling.xml +++ b/system/doc/efficiency_guide/profiling.xml @@ -30,190 +30,197 @@
- Do not guess about performance - profile + Do Not Guess About Performance - Profile

Even experienced software developers often guess wrong about where - the performance bottlenecks are in their programs.

- -

Therefore, profile your program to see where the performance + the performance bottlenecks are in their programs. Therefore, profile + your program to see where the performance bottlenecks are and concentrate on optimizing them.

-

Erlang/OTP contains several tools to help finding bottlenecks.

+

Erlang/OTP contains several tools to help finding bottlenecks:

+ + + fprof provides the most detailed information about + where the program time is spent, but it significantly slows down the + program it profiles. -

fprof provide the most detailed information - about where the time is spent, but it significantly slows down the - program it profiles.

+

eprof provides time information of each function + used in the program. No call graph is produced, but eprof has + considerably less impact on the program it profiles.

+

If the program is too large to be profiled by fprof or + eprof, the cover and cprof tools can be used + to locate code parts that are to be more thoroughly profiled using + fprof or eprof.

-

eprof provides time information of each function used - in the program. No callgraph is produced but eprof has - considerable less impact on the program profiled.

+ cover provides execution counts per line per + process, with less overhead than fprof. Execution counts + can, with some caution, be used to locate potential performance + bottlenecks. -

If the program is too big to be profiled by fprof or eprof, - cover and cprof could be used to locate parts of the - code that should be more thoroughly profiled using fprof or - eprof.

+ cprof is the most lightweight tool, but it only + provides execution counts on a function basis (for all processes, + not per process). +
-

cover provides execution counts per line per process, - with less overhead than fprof. Execution counts can - with some caution be used to locate potential performance bottlenecks. - The most lightweight tool is cprof, but it only provides execution - counts on a function basis (for all processes, not per process).

+

The tools are further described in + Tools.

- Big systems -

If you have a big system it might be interesting to run profiling + Large Systems +

For a large system, it can be interesting to run profiling on a simulated and limited scenario to start with. But bottlenecks - have a tendency to only appear or cause problems when - there are many things going on at the same time, and when there - are many nodes involved. Therefore it is desirable to also run + have a tendency to appear or cause problems only when + many things are going on at the same time, and when + many nodes are involved. Therefore, it is also desirable to run profiling in a system test plant on a real target system.

-

When your system is big you do not want to run the profiling - tools on the whole system. You want to concentrate on processes - and modules that you know are central and stand for a big part of the - execution.

+ +

For a large system, you do not want to run the profiling + tools on the whole system. Instead, you want to concentrate on + central processes and modules, which account for a large part + of the execution.

- What to look for -

When analyzing the result file from the profiling activity - you should look for functions that are called many + What to Look For +

When analyzing the result file from the profiling activity, + look for functions that are called many times and have a long "own" execution time (time excluding calls - to other functions). Functions that just are called very - many times can also be interesting, as even small things can add - up to quite a bit if they are repeated often. Then you need to - ask yourself what can I do to reduce this time. Appropriate - types of questions to ask yourself are:

+ to other functions). Functions that are called a lot of + times can also be interesting, as even small things can add + up to quite a bit if repeated often. Also + ask yourself what you can do to reduce this time. The following + are appropriate types of questions to ask yourself:

+ - Can I reduce the number of times the function is called? - Are there tests that can be run less often if I change - the order of tests? - Are there redundant tests that can be removed? - Is there some expression calculated giving the same result - each time? - Are there other ways of doing this that are equivalent and + Is it possible to reduce the number of times the function + is called? + Can any test be run less often if the order of tests is + changed? + Can any redundant tests be removed? + Does any calculated expression give the same result + each time? + Are there other ways to do this that are equivalent and more efficient? - Can I use another internal data representation to make - things more efficient? + Can another internal data representation be used to make + things more efficient? -

These questions are not always trivial to answer. You might - need to do some benchmarks to back up your theory, to avoid - making things slower if your theory is wrong. See benchmarking.

+ +

These questions are not always trivial to answer. Some + benchmarks might be needed to back up your theory and to avoid + making things slower if your theory is wrong. For details, see + Benchmarking.

Tools - +
fprof -

- fprof measures the execution time for each function, - both own time i.e how much time a function has used for its - own execution, and accumulated time i.e. including called - functions. The values are displayed per process. You also get - to know how many times each function has been - called. fprof is based on trace to file in order to - minimize runtime performance impact. Using fprof is just a - matter of calling a few library functions, see - fprof - manual page under the application tools. fprof was introduced in - version R8 of Erlang/OTP. -

+

fprof measures the execution time for each function, + both own time, that is, how much time a function has used for its + own execution, and accumulated time, that is, including called + functions. The values are displayed per process. You also get + to know how many times each function has been called.

+ +

fprof is based on trace to file to minimize runtime + performance impact. Using fprof is just a matter of + calling a few library functions; see the + fprof manual page in + tools. fprof was introduced in R8.

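As a minimal sketch of that workflow (the traced call and the output file name are arbitrary examples):

```erlang
%% Trace one function call, convert the trace data into
%% profile data, and write a readable analysis to a file.
fprof:apply(lists, seq, [1, 10000]),
fprof:profile(),
fprof:analyse([{dest, "fprof.analysis"}]).
```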
-
- eprof -

- eprof is based on the Erlang trace_info BIFs. Eprof shows how much time has been used by - each process, and in which function calls this time has been - spent. Time is shown as percentage of total time and absolute time. - See eprof for - additional information. -

-
+
+ eprof +

eprof is based on the Erlang trace_info BIFs. + eprof shows how much time has been used by each process, + and in which function calls this time has been spent. Time is + shown as percentage of total time and absolute time. For more + information, see the eprof + manual page in tools.

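A typical eprof session can be sketched as follows (workload/0 stands for an assumed application-specific function):

```erlang
%% Profile the calling process while the workload runs,
%% then print how the time was spent per function.
eprof:start(),
eprof:start_profiling([self()]),
workload(),                      %% assumed workload function
eprof:stop_profiling(),
eprof:analyze().
```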
+
cover -

- cover's primary use is coverage analysis to verify - test cases, making sure all relevant code is covered. - cover counts how many times each executable line of - code is executed when a program is run. This is done on a per - module basis. Of course this information can be used to - determine what code is run very frequently and could therefore - be subject for optimization. Using cover is just a matter of - calling a few library functions, see - cover - manual page under the application tools.

+

The primary use of cover is coverage analysis to verify + test cases, making sure that all relevant code is covered. + cover counts how many times each executable line of code + is executed when a program is run, on a per module basis.

+

Clearly, this information can be used to determine what + code is run very frequently and can therefore be a candidate for + optimization. Using cover is just a matter of calling a + few library functions; see the + cover manual page in + tools.

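A sketch of such a session (my_module and its entry point are assumed names):

```erlang
%% Compile the module for coverage, exercise it, and write a
%% per-line execution-count analysis to a file.
cover:start(),
cover:compile_module(my_module),
my_module:run(),                 %% assumed entry point
cover:analyse_to_file(my_module).
```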
cprof

cprof is something in between fprof and - cover regarding features. It counts how many times each - function is called when the program is run, on a per module - basis. cprof has a low performance degradation effect (versus - fprof) and does not need to recompile - any modules to profile (versus cover). - See cprof manual page for additional - information. -

+ cover regarding features. It counts how many times each + function is called when the program is run, on a per module + basis. cprof has a low performance degradation effect + (compared with fprof) and does not need to recompile + any modules to profile (compared with cover). + For more information, see the + cprof manual page in + tools.

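A corresponding cprof sketch (workload/0 is an assumed function; the counts for a single module are inspected here):

```erlang
%% Start call counting, run the workload, pause counting,
%% inspect the counts for the lists module, and clean up.
cprof:start(),
workload(),                      %% assumed workload function
cprof:pause(),
cprof:analyse(lists),
cprof:stop().
```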
- Tool summarization + Tool Summary - Tool - Results - Size of result - Effects on program execution time - Records number of calls - Records Execution time - Records called by - Records garbage collection + Tool + Results + Size of Result + Effects on Program Execution Time + Records Number of Calls + Records Execution Time + Records Called by + Records Garbage Collection - fprof - per process to screen/file - large - significant slowdown - yes - total and own - yes - yes + fprof + Per process to screen/file + Large + Significant slowdown + Yes + Total and own + Yes + Yes - eprof - per process/function to screen/file - medium - small slowdown - yes - only total - no - no + eprof + Per process/function to screen/file + Medium + Small slowdown + Yes + Only total + No + No - cover - per module to screen/file - small - moderate slowdown - yes, per line - no - no - no + cover + Per module to screen/file + Small + Moderate slowdown + Yes, per line + No + No + No - cprof - per module to caller - small - small slowdown - yes - no - no - no + cprof + Per module to caller + Small + Small slowdown + Yes + No + No + No - + Tool Summary
@@ -226,49 +233,51 @@ implementation of a given algorithm or function is the fastest. Benchmarking is far from an exact science. Today's operating systems generally run background tasks that are difficult to turn off. - Caches and multiple CPU cores doesn't make it any easier. - It would be best to run Unix-computers in single-user mode when + Caches and multiple CPU cores do not facilitate benchmarking. + It would be best to run UNIX computers in single-user mode when benchmarking, but that is inconvenient to say the least for casual testing.

Benchmarks can measure wall-clock time or CPU time.

-

timer:tc/3 measures + + timer:tc/3 measures wall-clock time. The advantage with wall-clock time is that I/O, - swapping, and other activities in the operating-system kernel are + swapping, and other activities in the operating system kernel are included in the measurements. The disadvantage is that the - the measurements will vary wildly. Usually it is best to run the - benchmark several times and note the shortest time - that time should + measurements vary a lot. Usually it is best to run the + benchmark several times and note the shortest time, which is to be the minimum time that is possible to achieve under the best of - circumstances.

+ circumstances. -

statistics/1 - with the argument runtime measures CPU time spent in the Erlang - virtual machine. The advantage is that the results are more + statistics/1 + with argument runtime measures CPU time spent in the Erlang + virtual machine. The advantage with CPU time is that the results are more consistent from run to run. The disadvantage is that the time spent in the operating system kernel (such as swapping and I/O) - are not included. Therefore, measuring CPU time is misleading if - any I/O (file or socket) is involved.

+ is not included. Therefore, measuring CPU time is misleading if + any I/O (file or socket) is involved. +

It is probably a good idea to do both wall-clock measurements and CPU time measurements.

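Both measurements can be sketched as follows (the sorting call is only a stand-in workload):

```erlang
%% Wall-clock time in microseconds for a single call:
{WallUs, _Sorted} = timer:tc(lists, sort, [lists:reverse(lists:seq(1, 100000))]),

%% CPU time in the emulator: statistics(runtime) returns
%% {TotalRunTime, TimeSinceLastCall} in milliseconds.
statistics(runtime),
lists:sort(lists:reverse(lists:seq(1, 100000))),
{_Total, CpuMs} = statistics(runtime).
```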
-

Some additional advice:

+

Some final advice:

- The granularity of both types of measurement could be quite - high so you should make sure that each individual measurement + The granularity of both measurement types can be high. + Therefore, ensure that each individual measurement lasts for at least several seconds. - To make the test fair, each new test run should run in its own, + To make the test fair, each new test run is to run in its own, newly created Erlang process. Otherwise, if all tests run in the - same process, the later tests would start out with larger heap sizes - and therefore probably do less garbage collections. You could - also consider restarting the Erlang emulator between each test. + same process, the later tests start out with larger heap sizes + and therefore probably do fewer garbage collections. + Also consider restarting the Erlang emulator between each test. Do not assume that the fastest implementation of a given algorithm - on computer architecture X also is the fastest on computer architecture Y. - + on computer architecture X is also the fastest on computer architecture + Y.
diff --git a/system/doc/efficiency_guide/tablesDatabases.xml b/system/doc/efficiency_guide/tablesDatabases.xml index 94c921fa1c..215c2afa1f 100644 --- a/system/doc/efficiency_guide/tablesDatabases.xml +++ b/system/doc/efficiency_guide/tablesDatabases.xml @@ -18,10 +18,9 @@ basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. - - Tables and databases + Tables and Databases Ingela Anderton 2001-08-07 @@ -30,46 +29,45 @@
- Ets, Dets and Mnesia + Ets, Dets, and Mnesia

Every example using Ets has a corresponding example in - Mnesia. In general all Ets examples also apply to Dets tables.

+ Mnesia. In general, all Ets examples also apply to Dets tables.

- Select/Match operations -

Select/Match operations on Ets and Mnesia tables can become + Select/Match Operations +

Select/match operations on Ets and Mnesia tables can become very expensive operations. They usually need to scan the complete - table. You should try to structure your - data so that you minimize the need for select/match - operations. However, if you really need a select/match operation, - it will still be more efficient than using tab2list. - Examples of this and also of ways to avoid select/match will be provided in - some of the following sections. The functions - ets:select/2 and mnesia:select/3 should be preferred over - ets:match/2,ets:match_object/2, and mnesia:match_object/3.

- -

There are exceptions when the complete table is not - scanned, for instance if part of the key is bound when searching an - ordered_set table, or if it is a Mnesia - table and there is a secondary index on the field that is - selected/matched. If the key is fully bound there will, of course, be - no point in doing a select/match, unless you have a bag table and - you are only interested in a sub-set of the elements with - the specific key.

-
-

When creating a record to be used in a select/match operation you - want most of the fields to have the value '_'. The easiest and fastest way - to do that is as follows:

+ table. Try to structure the data to minimize the need for select/match + operations. However, if you require a select/match operation, + it is still more efficient than using tab2list. + Examples of this and of how to avoid select/match are provided in + the following sections. The functions + ets:select/2 and mnesia:select/3 are to be preferred + over ets:match/2, ets:match_object/2, and + mnesia:match_object/3.

+

In some circumstances, the select/match operations do not need + to scan the complete table. + For example, if part of the key is bound when searching an + ordered_set table, or if it is a Mnesia + table and there is a secondary index on the field that is + selected/matched. If the key is fully bound, there is + no point in doing a select/match, unless you have a bag table + and are only interested in a subset of the elements with + the specific key.

+

When creating a record to be used in a select/match operation, you + want most of the fields to have the value '_'. The easiest and + fastest way to do that is as follows:

 #person{age = 42, _ = '_'}. 
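Such a pattern can then be passed to ets:select/2 as part of a match specification (a sketch reusing Tab and the person record from these examples):

```erlang
%% Return the name of every person aged 42: bind the name
%% field to '$1', ignore all other fields, and return '$1'.
ets:select(Tab, [{#person{age = 42, name = '$1', _ = '_'}, [], ['$1']}]).
```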
- Deleting an element -

The delete operation is considered + Deleting an Element +

The delete operation is considered successful if the element was not present in the table. Hence all attempts to check that the element is present in the Ets/Mnesia table before deletion are unnecessary. Here follows - an example for Ets tables.

+ an example for Ets tables:

DO

 ...
@@ -88,14 +86,16 @@ end,
     
- Data fetching -

Do not fetch data that you already have! Consider that you - have a module that handles the abstract data type Person. You - export the interface function print_person/1 that uses the internal functions - print_name/1, print_age/1, print_occupation/1.

+ Fetching Data +

Do not fetch data that you already have.

+

Consider that you have a module that handles the abstract data + type Person. You export the interface function + print_person/1, which uses the internal functions + print_name/1, print_age/1, and + print_occupation/1.

-

If the functions print_name/1 and so on, had been interface - functions the matter comes in to a whole new light, as you +

If the function print_name/1, and so on, had been interface + functions, the situation would have been different, as you do not want the user of the interface to know about the internal data representation.

@@ -136,7 +136,7 @@ print_person(PersonId) -> io:format("No person with ID = ~p~n", [PersonID]) end. -%%% Internal functionss +%%% Internal functions print_name(PersonID) -> [Person] = ets:lookup(person, PersonId), io:format("No person ~p~n", [Person#person.name]). @@ -151,31 +151,31 @@ print_occupation(PersonID) ->
- Non-persistent data storage + Non-Persistent Database Storage

For non-persistent database storage, prefer Ets tables over - Mnesia local_content tables. Even the Mnesia dirty_write + Mnesia local_content tables. Even the Mnesia dirty_write operations carry a fixed overhead compared to Ets writes. Mnesia must check if the table is replicated or has indices; this involves at least one Ets lookup for each - dirty_write. Thus, Ets writes will always be faster than + dirty_write. Thus, Ets writes are always faster than Mnesia writes.

tab2list -

Assume we have an Ets-table, which uses idno as key, - and contains:

+

Assuming an Ets table that uses idno as key + and contains the following:

 [#person{idno = 1, name = "Adam",  age = 31, occupation = "mailman"},
  #person{idno = 2, name = "Bryan", age = 31, occupation = "cashier"},
  #person{idno = 3, name = "Bryan", age = 35, occupation = "banker"},
  #person{idno = 4, name = "Carl",  age = 25, occupation = "mailman"}]
-

If we must return all data stored in the Ets-table we - can use ets:tab2list/1. However, usually we are only +

If you must return all data stored in the Ets table, you + can use ets:tab2list/1. However, usually you are only interested in a subset of the information in which case - ets:tab2list/1 is expensive. If we only want to extract - one field from each record, e.g., the age of every person, we - should use:

+ ets:tab2list/1 is expensive. If you only want to extract + one field from each record, for example, the age of every person, + then:

DO

 ...
@@ -192,8 +192,8 @@ ets:select(Tab,[{ #person{idno='_',
 TabList = ets:tab2list(Tab),
 lists:map(fun(X) -> X#person.age end, TabList),
 ...
-

If we are only interested in the age of all persons named - Bryan, we should:

+

If you are only interested in the age of all persons named + "Bryan", then:

DO

 ...
@@ -224,8 +224,8 @@ BryanList = lists:filter(fun(X) -> X#person.name == "Bryan" end,
                          TabList),
 lists:map(fun(X) -> X#person.age end, BryanList),
 ...
-

If we need all information stored in the Ets table about - persons named Bryan we should:

+

If you need all information stored in the Ets table about + persons named "Bryan", then:

DO

 ...
@@ -243,60 +243,60 @@ lists:filter(fun(X) -> X#person.name == "Bryan" end, TabList),
     
- Ordered_set tables -

If the data in the table should be accessed so that the order + Ordered_set Tables +

If the data in the table is to be accessed so that the order of the keys in the table is significant, the table type - ordered_set could be used instead of the more usual + ordered_set can be used instead of the more usual set table type. An ordered_set is always - traversed in Erlang term order with regard to the key field - so that return values from functions such as select, + traversed in Erlang term order regarding the key field + so that the return values from functions such as select, match_object, and foldl are ordered by the key values. Traversing an ordered_set with the first and next operations also returns the keys ordered.

An ordered_set only guarantees that - objects are processed in key order. Results from functions as - ets:select/2 appear in the key order even if + objects are processed in key order. + Results from functions such as + ets:select/2 appear in key order even if the key is not included in the result.

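The ordering guarantee can be illustrated in the shell (a throwaway table; the select pattern returns only the keys):

```erlang
1> T = ets:new(t, [ordered_set]).
2> ets:insert(T, [{3,c}, {1,a}, {2,b}]).
true
3> ets:select(T, [{{'$1', '_'}, [], ['$1']}]).
[1,2,3]
```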
- Ets specific + Ets-Specific
- Utilizing the keys of the Ets table -

An Ets table is a single key table (either a hash table or a - tree ordered by the key) and should be used as one. In other + Using Keys of Ets Table +

An Ets table is a single-key table (either a hash table or a + tree ordered by the key) and is to be used as one. In other words, use the key to look up things whenever possible. A - lookup by a known key in a set Ets table is constant and for a - ordered_set Ets table it is O(logN). A key lookup is always + lookup by a known key in a set Ets table is constant and for + an ordered_set Ets table it is O(logN). A key lookup is always preferable to a call where the whole table has to be - scanned. In the examples above, the field idno is the + scanned. In the previous examples, the field idno is the key of the table and all lookups where only the name is known - will result in a complete scan of the (possibly large) table + result in a complete scan of the (possibly large) table for a matching result.

A simple solution would be to use the name field as the key instead of the idno field, but that would cause - problems if the names were not unique. A more general solution - would be to create a second table with name as key and - idno as data, i.e. to index (invert) the table with regards - to the name field. The second table would of course have to be - kept consistent with the master table. Mnesia could do this - for you, but a home brew index table could be very efficient + problems if the names were not unique. A more general solution would + be to create a second table with name as key and + idno as data, that is, to index (invert) the table regarding + the name field. Clearly, the second table would have to be + kept consistent with the master table. Mnesia can do this + for you, but a home brew index table can be very efficient compared to the overhead involved in using Mnesia.

An index table for the table in the previous examples would - have to be a bag (as keys would appear more than once) and could + have to be a bag (as keys would appear more than once) and can have the following contents:

- 
 [#index_entry{name="Adam", idno=1},
  #index_entry{name="Bryan", idno=2},
  #index_entry{name="Bryan", idno=3},
  #index_entry{name="Carl", idno=4}]
-

Given this index table a lookup of the age fields for - all persons named "Bryan" could be done like this:

+

Given this index table, a lookup of the age fields for + all persons named "Bryan" can be done as follows:

 ...
 MatchingIDs = ets:lookup(IndexTable,"Bryan"),
@@ -306,30 +306,31 @@ lists:map(fun(#index_entry{idno = ID}) ->
           end,
           MatchingIDs),
 ...
-

Note that the code above never uses ets:match/2 but - instead utilizes the ets:lookup/2 call. The +

Notice that this code never uses ets:match/2 but + instead uses the ets:lookup/2 call. The lists:map/2 call is only used to traverse the idnos - matching the name "Bryan" in the table; therefore the number of lookups + matching the name "Bryan" in the table; thus the number of lookups in the master table is minimized.

Keeping an index table introduces some overhead when - inserting records in the table, therefore the number of operations - gained from the table has to be weighted against the number of - operations inserting objects in the table. However, note that the gain when - the key can be used to lookup elements is significant.

+ inserting records in the table. The number of operations gained + from the table must therefore be compared against the number of + operations inserting objects in the table. However, notice that the + gain is significant when the key can be used to look up elements.

- Mnesia specific + Mnesia-Specific
- Secondary index + Secondary Index

If you frequently do a lookup on a field that is not the - key of the table, you will lose performance using - "mnesia:select/match_object" as this function will traverse the - whole table. You may create a secondary index instead and + key of the table, you lose performance using + "mnesia:select/match_object" as this function traverses the + whole table. You can create a secondary index instead and use "mnesia:index_read" to get faster access; however, this - will require more memory. Example:

+ requires more memory.

+

Example

 -record(person, {idno, name, age, occupation}).
         ...
@@ -347,14 +348,15 @@ PersonsAge42 =
 
     
Transactions -

Transactions is a way to guarantee that the distributed +

Using transactions is a way to guarantee that the distributed Mnesia database remains consistent, even when many different - processes update it in parallel. However if you have - real time requirements it is recommended to use dirty - operations instead of transactions. When using the dirty - operations you lose the consistency guarantee, this is usually + processes update it in parallel. However, if you have + real-time requirements it is recommended to use dirty + operations instead of transactions. When using dirty + operations, you lose the consistency guarantee; this is usually solved by only letting one process update the table. Other - processes have to send update requests to that process.

+ processes must send update requests to that process.

+

Example

 ...
 % Using transaction
-- 


From 82dd592d078c473c93ba5cded74f9d71dc504e30 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= 
Date: Thu, 12 Mar 2015 15:35:13 +0100
Subject: Update Embedded Systems User's Guide
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Language cleaned up by the technical writers xsipewe and tmanevik
from Combitech. Proofreading and corrections by Björn Gustavsson.
---
 system/doc/embedded/embedded_nt.xml      |  57 +--
 system/doc/embedded/embedded_solaris.xml | 774 ++++++++++++++-----------------
 system/doc/embedded/part.xml             |  16 +-
 3 files changed, 376 insertions(+), 471 deletions(-)


diff --git a/system/doc/embedded/embedded_nt.xml b/system/doc/embedded/embedded_nt.xml
index 530e3663e4..2e3b32eb84 100644
--- a/system/doc/embedded/embedded_nt.xml
+++ b/system/doc/embedded/embedded_nt.xml
@@ -31,54 +31,47 @@
     PA2
     embedded_nt.xml
   
-  

This chapter describes the OS specific parts of OTP which relate - to Windows NT. -

+ +

This section describes the operating system-specific parts of OTP + that relate to Windows NT.

+

A normal installation of Windows NT 4.0, with Service Pack 4 or + later, is required for an embedded Windows NT running OTP.

- Introduction -

A normal installation of NT 4.0, with service pack 4 or later, - is required for an embedded Windows NT running OTP.

+ Memory Use +

96 MB of RAM is recommended to run OTP on Windows NT.
+ A system with less than 64 MB of RAM is not recommended.

- Memory Usage -

RAM memory of 96 MBytes is recommended to run OTP on NT. - A system with less than 64 Mbytes of RAM is not recommended.

+ Disk Space Use +

A minimum Windows NT installation with networking needs 250 MB, + and an extra 130 MB for the swap file.

- Disk Space Usage -

A minimum NT installation with networking needs 250 MB, and - an additional 130 MB for the swap file.

-
- -
- Installation -

Normal NT installation is performed. No additional application - programs are needed, such as Internet explorer or web server. Networking - with TCP/IP is required.

- - Service pack 4 or later must be installed.

+ Installing an Embedded System +

Normal Windows NT installation is performed. No additional + application programs are needed, such as Internet Explorer or + web server. Networking with TCP/IP is required.

+

Service Pack 4 or later must be installed.

Hardware Watchdog -

For Windows NT running on standard PCs with ISA and/or PCI bus - there is a possibility to install an extension card with a hardware - watchdog. -

-

See also the heart(3) reference manual page in - Kernel. -

+

For Windows NT running on standard PCs with ISA and/or PCI bus, + an extension card with a hardware watchdog can be installed.

+

For more information, see the heart(3) manual page in + kernel.

Starting Erlang -

On an embedded system, the erlsrv module should be used, - to install the erlang process as a Windows system service. - This service can start - after NT has booted. See documentation for erlsrv.

+

On an embedded system, the erlsrv module is to be used + to install the Erlang process as a Windows system service. + This service can start after Windows NT has booted.

+

For more information, see the erlsrv manual page + in erts.

diff --git a/system/doc/embedded/embedded_solaris.xml b/system/doc/embedded/embedded_solaris.xml index cab3437725..1861436a8e 100644 --- a/system/doc/embedded/embedded_solaris.xml +++ b/system/doc/embedded/embedded_solaris.xml @@ -31,125 +31,97 @@ B embedded_solaris.xml -

This chapter describes the OS specific parts of OTP which relate - to Solaris. -

+ + +

This section describes the operating system-specific parts + of OTP that relate to Solaris.

- Memory Usage -

Solaris takes about 17 Mbyte of RAM on a system with 64 Mbyte of - total RAM. This leaves about 47 Mbyte for the applications. If - the system utilizes swapping, these figures cannot be improved + Memory Use +

Solaris takes about 17 MB of RAM on a system with 64 MB of + total RAM. This leaves about 47 MB for the applications. If + the system uses swapping, these figures cannot be improved because unnecessary daemon processes are swapped out. However, if swapping is disabled, or if the swap space is of limited resource in the system, it becomes necessary to kill off - unnecessary daemon processes. -

+ unnecessary daemon processes.

- Disk Space Usage + Disk Space Use

The disk space required by Solaris can be minimized by using the - Core User support installation. It requires about 80 Mbyte of + Core User support installation. It requires about 80 MB of disk space. This installs only the minimum software required to - boot and run Solaris. The disk space can be further reduced by + boot and run Solaris. The disk space can be further reduced by deleting unnecessary individual files. However, unless disk space is a critical resource the effort required and the risks - involved may not be justified.

+ involved might not be justified.

- Installation + Installing an Embedded System

This section is about installing an embedded system. - The following topics are considered, -

+ The following topics are considered: +

- -

Creation of user and installation directory,

-
- -

Installation of embedded system,

-
- -

Configuration for automatic start at reboot,

-
- -

Making a hardware watchdog available,

-
- -

Changing permission for reboot,

-
- -

Patches,

-
- -

Configuration of the OS_Mon application.

-
+ Creating user and installation directory + Installing an embedded system + Configuring automatic start at boot + Making a hardware watchdog available + Changing permission for reboot + Setting TERM environment variable + Adding patches + Installing module os_sup in application os_mon
-

Several of the procedures described below require expert - knowledge of the Solaris 2 operating system. For most of them - super user privilege is needed. -

+

Several of the procedures in this section require expert + knowledge of the Solaris operating system. For most of them + super user privilege is needed.

- Creation of User and Installation Directory -

It is recommended that the Embedded Environment is run by an - ordinary user, i.e. a user who does not have super user - privileges. -

-

Throughout this section we assume that the user name is - otpuser, and that the home directory of that user is, -

+ Creating User and Installation Directory +

It is recommended that the embedded environment is run by an + ordinary user, that is, a user who does not have super user + privileges.

+

In this section, it is assumed that the username is + otpuser and that the home directory of that user is:

         /export/home/otpuser
-

Furthermore, we assume that in the home directory of +

It is also assumed that in the home directory of otpuser, there is a directory named otp, the - full path of which is, -

+ full path of which is:

         /export/home/otpuser/otp

This directory is the installation directory of the - Embedded Environment. -

+ embedded environment.

- Installation of an Embedded System -

The procedure for installation of an embedded system does - not differ from that of an ordinary system (see the - Installation Guide), - except for the following: -

+ Installing an Embedded System +

The procedure for installing an embedded system + is the same as for an ordinary system (see + Installation Guide), except for the following:

- -

the (compressed) tape archive file should be - extracted in the installation directory as defined above, - and,

-
- -

there is no need to link the start script to a - standard directory like /usr/local/bin.

-
+ The (compressed) tape archive file is to be extracted in
+ the installation directory defined above.
+ There is no need to link the start script to a standard
+ directory like /usr/local/bin.
- Configuration for Automatic Start at Boot -

A true embedded system has to start when the system - boots. This section accounts for the necessary configurations - needed to achieve that. -

-

The embedded system and all the applications will start - automatically if the script file shown below is added to the - /etc/rc3.d directory. The file must be owned and - readable by root, and its name cannot be arbitrarily - assigned. The following name is recommended, -

+ Configuring Automatic Start at Boot +

A true embedded system must start when the system boots. + This section accounts for the necessary configurations + needed to achieve that.

+

The embedded system and all the applications start + automatically if the script file shown below is added to + directory /etc/rc3.d. The file must be owned and + readable by root. Its name cannot be arbitrarily + assigned; the following name is recommended:

         S75otp.system
-

For further details on initialization (and termination) - scripts, and naming thereof, see the Solaris documentation. -

+

For more details on initialization (and termination) + scripts, and naming thereof, see the Solaris documentation.

 #!/bin/sh
 #  
@@ -187,386 +159,333 @@ case "$1" in
         echo "Usage: $0 { start | stop }"
         ;;
 esac
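The installation of such an rc script can be sketched as follows. This is an illustration only, not part of OTP: a scratch directory stands in for /etc/rc3.d so the sketch can run without root privileges, and the placeholder script body is an assumption; on a real system the file must reside in /etc/rc3.d and be owned and readable by root, as stated above.

```shell
#!/bin/sh
# Sketch: install an S75otp.system start script into an rc3.d directory.
# RC_DIR would be /etc/rc3.d on a real Solaris system; a temporary
# directory is used here so the sketch needs no root privileges.
RC_DIR=$(mktemp -d)/rc3.d
mkdir -p "$RC_DIR"

# Create a minimal placeholder script (the real script is shown above).
cat > "$RC_DIR/S75otp.system" <<'EOF'
#!/bin/sh
case "$1" in
    start) echo "starting OTP" ;;
    stop)  echo "stopping OTP" ;;
    *)     echo "Usage: $0 { start | stop }" ;;
esac
EOF

# The file must be readable by its owner (root on a real system).
chmod 744 "$RC_DIR/S75otp.system"

ls "$RC_DIR"                        # -> S75otp.system
sh "$RC_DIR/S75otp.system" start    # -> starting OTP
```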
-

The file /export/home/otpuser/otp/bin/start referred to - in the above script, is precisely the script start - described in the section Starting Erlang below. The +

File /export/home/otpuser/otp/bin/start referred to + in the above script is precisely the start script + described in Starting Erlang. The script variable OTP_ROOT in that start script - corresponds to the example path -

+ corresponds to the following example path used in this + section:

         /export/home/otpuser/otp
-

used in this section. The start script should be edited - accordingly. -

-

Use of the killproc procedure in the above script could - be combined with a call to erl_call, e.g. -

+

The start script is to be edited accordingly.

+

Use of the killproc procedure in the above script can + be combined with a call to erl_call, for example:

         $SOME_PATH/erl_call -n Node init stop
-

In order to take Erlang down gracefully see the - erl_call(1) reference manual page for further details - on the use of erl_call. That however requires that - Erlang runs as a distributed node which is not always the - case. -

-

The killproc procedure should not be removed: the +

To take Erlang down gracefully, see the erl_call(1) + manual page in erl_interface for details on the use + of erl_call. However, + that requires that Erlang runs as a distributed node, which is + not always the case.
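A stop routine combining the two mechanisms can be sketched as below. This is an illustration, not OTP code: the node name otpnode and the dry-run wrapper are assumptions, and the pkill fallback stands in for the killproc step; on a real system erl_call must be on PATH and the erl_call result should be checked before falling back.

```shell
#!/bin/sh
# Sketch of a stop routine: ask the node to shut down gracefully via
# erl_call first, then fall back to killing the emulator.
# DRY_RUN=1 only prints the commands, so the sketch runs anywhere.
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

stop_otp() {
    # Graceful shutdown; works only if the node runs distributed.
    run erl_call -n otpnode init stop
    # Fallback if the node is not distributed or does not respond;
    # a real script would only reach this step on failure above.
    run pkill -f beam
}

stop_otp    # prints the two commands it would execute
```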

+

The killproc procedure is not to be removed. The purpose is here to move from run level 3 (multi-user mode with networking resources) to run level 2 (multi-user mode without - such resources), in which Erlang should not run. -

+ such resources), in which Erlang is not to run.

- Hardware Watchdog + Making Hardware Watchdog Available

For Solaris running on VME boards from Force Computers, - there is a possibility to activate the onboard hardware - watchdog, provided a VME bus driver is added to the operating - system (see also Installation Problems below). -

-

See also the heart(3) reference manual page in - Kernel. -

+ the onboard hardware watchdog can be activated, + provided a VME bus driver is added to the operating system + (see also Installation Problems).

+

See also the heart(3) manual page in kernel.

Changing Permissions for Reboot

If the HEART_COMMAND environment variable is to be set - in the start script in the section, Starting Erlang, and if the value shall be set to the - path of the Solaris reboot command, i.e. -

+ in the start script in + Starting Erlang, and if the value is to be set to the + path of the Solaris reboot command, that is:

         HEART_COMMAND=/usr/sbin/reboot
-

the ownership and file permissions for /usr/sbin/reboot - must be changed as follows, -

+

then the ownership and file permissions for + /usr/sbin/reboot must be changed as follows:

         chown 0 /usr/sbin/reboot
         chmod 4755 /usr/sbin/reboot
-

See also the heart(3) reference manual page in - Kernel. -

+

See also the heart(3) manual page in kernel.
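The effect of the chmod 4755 command above can be demonstrated on a scratch file; changing the real /usr/sbin/reboot requires root, so this sketch uses a temporary file purely to show what the setuid bit looks like.

```shell
#!/bin/sh
# Sketch: what "chmod 4755" does, shown on a scratch file instead of
# /usr/sbin/reboot. Mode 4755 = setuid bit + rwxr-xr-x.
f=$(mktemp)
chmod 4755 "$f"

# The setuid bit is rendered as "s" in the user-execute position.
ls -l "$f" | cut -c1-10    # -> -rwsr-xr-x

rm -f "$f"
```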

- The TERM Environment Variable -

When the Erlang runtime system is automatically started from the - S75otp.system script the TERM environment - variable has to be set. The following is a minimal setting, -

+ Setting TERM Environment Variable +

When the Erlang runtime system is automatically started from + the S75otp.system script, the TERM environment + variable must be set. The following is a minimal setting:

         TERM=sun
-

which should be added to the start script described in - the section. -

+

This is to be added to the start script.

- Patches + Adding Patches

For proper functioning of flushing file system data to disk on - Solaris 2.5.1, the version specific patch with number - 103640-02 must be added to the operating system. There may be - other patches needed, see the release README file - /README]]>. -

+ Solaris 2.5.1, the version-specific patch with number + 103640-02 must be added to the operating system. Other + patches might be needed, see the release README file + /README]]>.

- Installation of Module os_sup in Application OS_Mon + Installing Module os_sup in Application os_mon

The following four installation procedures require super user - privilege. -

- -
- Installation - - -

Make a copy the Solaris standard configuration file for syslogd.

- - -

Make a copy the Solaris standard configuration - file for syslogd. This file is usually named - syslog.conf and found in the /etc - directory.

-
- -

The file name of the copy must be - syslog.conf.ORIG but the directory location - is optional. Usually it is /etc. -

-

A simple way to do this is to issue the command

- + privilege:

+ +
+ Installation + + Make a copy of the Solaris standard configuration + file for syslogd: + + Make a copy of the Solaris standard configuration + file for syslogd. This file is usually named + syslog.conf and found in directory /etc. + The filename of the copy must be syslog.conf.ORIG. + The directory location is optional; usually it is /etc. + A simple way to do this is to issue the following command: + cp /etc/syslog.conf /etc/syslog.conf.ORIG - - - -

Make an Erlang specific configuration file for syslogd.

- - -

Make an edited copy of the back-up copy previously - made.

-
- -

The file name must be syslog.conf.OTP and the - path must be the same as the back-up copy.

-
- -

The format of the configuration file is found in the - man page for syslog.conf(5), by issuing the - command man syslog.conf.

-
- -

Usually a line is added which should state:

- - -

which types of information that will be - supervised by Erlang,

-
- -

the name of the file (actually a named pipe) - that should receive the information.

-
-
-
- -

If e.g. only information originating from the - unix-kernel should be supervised, the line should - begin with kern.LEVEL (for the possible - values of LEVEL see syslog.conf(5)).

-
- -

After at least one tab-character, the line added - should contain the full name of the named pipe where - syslogd writes its information. The path must be the - same as for the syslog.conf.ORIG and - syslog.conf.OTP files. The file name must be - syslog.otp.

-
- -

If the directory for the syslog.conf.ORIG and - syslog.conf.OTP files is /etc the line - in syslog.conf.OTP will look like:

- +
+
+ Make an Erlang-specific configuration file for + syslogd: + + Make an edited copy of the backup copy previously + made. + The filename must be syslog.conf.OTP. The + path must be the same as the backup copy. + The format of the configuration file is found in the + syslog.conf(5) manual page, by issuing the command + man syslog.conf. + Usually a line is added that is to state: + + Which types of information that is to be + supervised by Erlang + The name of the file (actually a named pipe) that + is to receive the information + + + If, for example, only information originating from + the UNIX kernel is to be supervised, the line is to + begin with kern.LEVEL. For the possible + values of LEVEL, see syslog.conf(5). + After at least one tab-character, the line added is to + contain the full name of the named pipe where syslogd + writes its information. The path must be the same as for the + files syslog.conf.ORIG and syslog.conf.OTP. + The filename must be syslog.otp. + If the directory for the files syslog.conf.ORIG + and syslog.conf.OTP is /etc, the line in + syslog.conf.OTP is as follows: + kern.LEVEL /etc/syslog.otp - - - - -
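The configuration steps above can be sketched end to end as follows. This is an illustration only: a scratch directory stands in for /etc so the sketch runs without root privileges, and the kern.err selector and the seed line in syslog.conf are assumptions; on a real system the files live in /etc and syslogd must be restarted or signalled to pick up the new configuration.

```shell
#!/bin/sh
# Sketch of the syslogd configuration steps, performed in a scratch
# directory instead of /etc. File names follow the text above; the
# named pipe must be called syslog.otp.
ETC=$(mktemp -d)
printf '%s\n' 'kern.err /var/adm/messages' > "$ETC/syslog.conf"

# Back-up copy (syslog.conf.ORIG) and Erlang-specific copy
# (syslog.conf.OTP) in the same directory.
cp "$ETC/syslog.conf" "$ETC/syslog.conf.ORIG"
cp "$ETC/syslog.conf" "$ETC/syslog.conf.OTP"

# Add the line routing kernel messages to the named pipe; syslog.conf(5)
# requires a tab between the selector and the target.
printf 'kern.err\t%s/syslog.otp\n' "$ETC" >> "$ETC/syslog.conf.OTP"

# The target is a named pipe, created with mkfifo
# (mknod <name> p on older systems).
mkfifo "$ETC/syslog.otp"

ls "$ETC"
```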

Check the file privileges of the configuration files.

- - -

The configuration files should have rw-r--r-- - file privileges and be owned by root.

-
- -

A simple way to do this is to issue the commands

- +
+
+
+ Check the file privileges of the configuration
+ files:
+ 
+ The configuration files are to have rw-r--r--
+ file privileges and be owned by root.
+ A simple way to do this is to issue these commands:
+ 
chmod 644 /etc/syslog.conf
chmod 644 /etc/syslog.conf.ORIG
chmod 644 /etc/syslog.conf.OTP

Note: If the syslog.conf.ORIG and - syslog.conf.OTP files are not in the - /etc directory, the file path in the second - and third command must be modified.

-
-
-
- -

Modify file privileges and ownership of the mod_syslog utility.

- - -

The file privileges and ownership of the - mod_syslog utility must be modified.

-
- -

The full name of the binary executable file is - derived from the position of the os_mon - application if the file system by adding - /priv/bin/mod_syslog. The generic full name - of the binary executable file is thus

- + Notice that if the files syslog.conf.ORIG and + syslog.conf.OTP are not in directory /etc, + the file path in the second and third command must be + modified. +
+
+ Modify file privileges and ownership of the + mod_syslog utility: + + The file privileges and ownership of the + mod_syslog utility must be modified. +

The full name of the binary executable file is + derived from the position of application os_mon + in the file system by adding + /priv/bin/mod_syslog. The generic full name + of the binary executable file is thus:

+ /lib/os_mon-/priv/bin/mod_syslog]]> -

Example: If the path to the otp-root is - /usr/otp, thus the path to the os_mon - application is /usr/otp/lib/os_mon-1.0 - (assuming revision 1.0) and the full name of the - binary executable file is - /usr/otp/lib/os_mon-1.0/priv/bin/mod_syslog.

-
- -

The binary executable file must be owned by root, - have rwsr-xr-x file privileges, in particular - the setuid bit of user must be set. -

-
- -

A simple way to do this is to issue the commands

- Example: If the path to otp-root is + /usr/otp, then the path to the os_mon + application is /usr/otp/lib/os_mon-1.0 + (assuming revision 1.0) and the full name of the + binary executable file is + /usr/otp/lib/os_mon-1.0/priv/bin/mod_syslog.

+
+ The binary executable file must be owned by root, + have rwsr-xr-x file privileges, in particular + the setuid bit of the user must be set. +

A simple way to do this is to issue the following + commands:

+ /lib/os_mon-/priv/bin/mod_syslog chmod 4755 mod_syslog chown root mod_syslog]]> -
-
-
-
-
- -
- Testing the Application Configuration File -

The following procedure does not require root privilege. -

- - -

Ensure that the configuration parameters for the - os_sup module in the os_mon application - are correct.

-
- -

Browse the application configuration file (do - not edit it). The full name of the application - configuration file is derived from the position of the - OS_Mon application if the file system by adding - /ebin/os_mon.app. -

-

The generic full name of the file is thus

- +
+ + +
+ +
+ Testing the Application Configuration File +

The following procedure does not require root privilege:

+ + Ensure that the configuration parameters for the + os_sup module in the os_mon application + are correct. +

Browse the application configuration file (do + not edit it). The full name of the application + configuration file is derived from the position of the + os_mon application in the file system by adding + /ebin/os_mon.app.

+

The generic full name of the file is thus:

+ /lib/os_mon-/ebin/os_mon.app.]]> -

Example: If the path to the otp-root is - /usr/otp, thus the path to the os_mon - application is /usr/otp/lib/os_mon-1.0 (assuming - revision 1.0) and the full name of the binary executable - file is /usr/otp/lib/os_mon-1.0/ebin/os_mon.app.

-
- -

Ensure that the following configuration parameters are - bound to the correct values.

-
+

Example: If the path to otp-root is + /usr/otp, then the path to the os_mon application + is /usr/otp/lib/os_mon-1.0 (assuming revision 1.0) and + the full name of the binary executable file is + /usr/otp/lib/os_mon-1.0/ebin/os_mon.app.

+ + Ensure that the following configuration parameters have + correct values:
- + +
Parameter Function Standard value - start_os_sup - Specifies if os_sup will be started or not. - truefor the first instance on the hardware; falsefor the other instances. + start_os_sup + Specifies if os_sup + is to be started or not. + true for the + first instance on the hardware; false for the + other instances - os_sup_own - The directory for (1)the back-up copy, (2) the Erlang specific configuration file for syslogd. + os_sup_own + The directory for + (1) back-up copy and (2) Erlang-specific configuration + file for syslogd "/etc" - os_sup_syslogconf - The full name for the Solaris standard configuration file for syslogd + os_sup_syslogconf + The full name for the + Solaris standard configuration file for syslogd "/etc/syslog.conf" - error_tag - The tag for the messages that are sent to the error logger in the Erlang runtime system. + error_tag + The tag for the + messages that are sent to the error logger in the Erlang + runtime system std_error Configuration Parameters
-

If the values listed in the os_mon.app do not suit - your needs, you should not edit that file. Instead - you should override values in a system configuration file, the full pathname of which is given - on the command line to erl. -

-

Example: The following is an example of the - contents of an application configuration file.

-

-
+
+        

If the values listed in os_mon.app do not suit + your needs, do not edit that file. Instead + override the values in a system configuration + file, the full pathname of which is given + on the command line to erl.

+

Example: Contents of an application configuration + file:

+
           [{os_mon, [{start_os_sup, true}, {os_sup_own, "/etc"}, 
           {os_sup_syslogconf, "/etc/syslog.conf"}, {os_sup_errortag, std_error}]}].
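Putting such overrides into effect can be sketched as follows. This is an illustration: the directory and the file name sys.config are assumptions, and erl is not actually started here; note that the -config flag takes the path without the .config suffix.

```shell
#!/bin/sh
# Sketch: write the override parameters to a system configuration file
# and show the erl command line that would use it.
CFG_DIR=$(mktemp -d)
cat > "$CFG_DIR/sys.config" <<'EOF'
[{os_mon, [{start_os_sup, true},
           {os_sup_own, "/etc"},
           {os_sup_syslogconf, "/etc/syslog.conf"},
           {os_sup_errortag, std_error}]}].
EOF

# erl expects the file name without the .config extension:
echo "erl -config $CFG_DIR/sys"
```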
-
- -
- Related Documents -

See also the os_mon(3), application(3) and - erl(1) reference manual pages.

-
- Installation Problems -

The hardware watchdog timer which is controlled by the - heart port program requires the FORCEvme - package, which contains the VME bus driver, to be - installed. This driver, however, may clash with the Sun - mcp driver and cause the system to completely refuse to - boot. To cure this problem, the following lines should be - added to /etc/system: -

+ Related Documents +

See the os_mon(3) application, + the application(3) manual page in kernel, + and the erl(1) manual page in erts.

+
+
+ +
+ Installation Problems +

The hardware watchdog timer, which is controlled by the + heart port program, requires package FORCEvme, + which contains the VME bus driver, to be + installed. However, this driver can clash with the Sun + mcp driver and cause the system to refuse to + boot. To cure this problem, the following lines are + to be added to /etc/system:

exclude: drv/mcp exclude: drv/mcpzsa exclude: drv/mcpp -

It is recommended that these lines be added to avoid the - clash described, which may make it completely impossible to - boot the system.

+

It is recommended to add these lines to avoid a clash. + The clash can make it impossible to boot the system.

Starting Erlang -

This section describes how an embedded system is started. There - are four programs involved, and they all normally reside in the - directory /bin]]>. The only exception is - the program start, which may be located anywhere, and - also is the only program that must be modified by the user. -

-

In an embedded system there usually is no interactive shell. - However, it is possible for an operator to attach to the Erlang - system by giving the command to_erl. He is then - connected to the Erlang shell, and may give ordinary Erlang - commands. All interaction with the system through this shell is - logged in a special directory. -

-

Basically, the procedure is as follows. The program - start is called when the machine is started. It calls - run_erl, which sets things up so the operator can attach - to the system. It calls start_erl which calls the - correct version of erlexec (which is located in - /erts-EVsn/bin]]>) with the correct - boot and config files. -

+

This section describes how an embedded system is started. Four + programs are involved and they normally reside in the directory + /bin]]>. The only exception is + the start program, which can be located anywhere, and + is also the only program that must be modified by the user.

+

In an embedded system, there is usually no interactive shell. + However, an operator can attach to the Erlang + system by command to_erl. The operator is then + connected to the Erlang shell and can give ordinary Erlang + commands. All interaction with the system through this shell is + logged in a special directory.

+

Basically, the procedure is as follows:

+ + The start program is called when the machine + is started. + It calls run_erl, which sets up things so the + operator can attach to the system. + It calls start_erl, which calls the correct + version of erlexec (which is located in + /erts-EVsn/bin]]>) with the + correct boot and config files. +
Programs -
start -

This program is called when the machine is started. It may - be modified or re-written to suit a special system. By +

This program is called when the machine is started. It can + be modified or rewritten to suit a special system. By default, it must be called start and reside in - /bin]]>. Another start program can be - used, by using the configuration parameter start_prg in - the application sasl.

+ /bin]]>. Another start + program can be used, by using configuration parameter + start_prg in application sasl.

The start program must call run_erl as shown below. - It must also take an optional parameter which defaults to - /releases/start_erl.data]]>. -

-

This program should set static parameters and environment + It must also take an optional parameter, which defaults to + /releases/start_erl.data]]>.

+

This program is to set static parameters and environment variables such as -sname Name and HEART_COMMAND - to reboot the machine. -

-

The ]]> directory is where new release packets - are installed, and where the release handler keeps information - about releases. See release_handler(3) in the - application sasl for further information. -

+ to reboot the machine.

+

The ]]> directory is where new release + packets are installed, and where the release handler keeps + information about releases. For more information, see the + release_handler(3) manual page in sasl.

The following script illustrates the default behaviour of the - program. -

+ program:

/dev/null 2>&1 &]]>

The following script illustrates a modification where the node - is given the name cp1, and the environment variables + is given the name cp1, and where the environment variables HEART_COMMAND and TERM have been added to the - above script. -

+ previous script:

/dev/null 2>&1 &]]> -

If a diskless and/or read-only client node is about to start the - start_erl.data file is located in the client directory at - the master node. Thus, the START_ERL_DATA line should look - like: -

+

If a diskless and/or read-only client node is about to start, + file start_erl.data is located in the client directory at + the master node. Thus, the START_ERL_DATA line is to look + like:

CLIENTDIR=$ROOTDIR/clients/clientname START_ERL_DATA=${1:-$CLIENTDIR/bin/start_erl.data} @@ -620,22 +537,24 @@ START_ERL_DATA=${1:-$CLIENTDIR/bin/start_erl.data}
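The `${1:-default}` expansion used for START_ERL_DATA above behaves as sketched below; the paths are the examples from this section, and the `set --` calls merely simulate how the start script would be invoked.

```shell
#!/bin/sh
# Illustration of the ${1:-default} parameter expansion: an explicit
# first argument wins, otherwise the client-directory default is used.
ROOTDIR=/export/home/otpuser/otp
CLIENTDIR=$ROOTDIR/clients/clientname

set --   # simulate: script invoked with no arguments
START_ERL_DATA=${1:-$CLIENTDIR/bin/start_erl.data}
echo "$START_ERL_DATA"
# -> /export/home/otpuser/otp/clients/clientname/bin/start_erl.data

set -- /tmp/other/start_erl.data   # simulate: explicit argument given
START_ERL_DATA=${1:-$CLIENTDIR/bin/start_erl.data}
echo "$START_ERL_DATA"
# -> /tmp/other/start_erl.data
```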
run_erl

This program is used to start the emulator, but you will not be connected to the shell. to_erl is used to connect to - the Erlang shell. -

+ the Erlang shell.

Usage: run_erl pipe_dir/ log_dir "exec command [parameters ...]" -

Where pipe_dir/ should be /tmp/ (to_erl - uses this name by default) and log_dir is where the log - files are written. command [parameters] is executed, - and everything written to stdin and stdout is logged in the - log_dir. -

-

In the log_dir, log files are written. Each logfile - has a name of the form: erlang.log.N where N is a - generation number, ranging from 1 to 5. Each logfile holds up - to 100kB text. As time goes by the following logfiles will be - found in the logfile directory

- +

Here:

+ + pipe_dir/ is to be /tmp/ (to_erl + uses this name by default). + log_dir is where the log files are written. + command [parameters] is executed. + Everything written to stdin and stdout + is logged in log_dir. + +

Log files are written in log_dir. Each log file + has a name of the form erlang.log.N, where N is a + generation number, ranging from 1 to 5. Each log file holds up + to 100 kB text. As time goes by, the following log files are + found in the log file directory:

+ erlang.log.1 erlang.log.1, erlang.log.2 erlang.log.1, erlang.log.2, erlang.log.3 @@ -643,48 +562,40 @@ erlang.log.1, erlang.log.2, erlang.log.3, erlang.log.4 erlang.log.2, erlang.log.3, erlang.log.4, erlang.log.5 erlang.log.3, erlang.log.4, erlang.log.5, erlang.log.1 ... -

with the most recent logfile being the right most in each row - of the above list. That is, the most recent file is the one - with the highest number, or if there are already four files, - the one before the skip. -

-

When a logfile is opened (for appending or created) a time - stamp is written to the file. If nothing has been written to +

The most recent log file is the rightmost in each row. That + is, the most recent file is the one with the highest number, or + if there are already four files, the one before the skip.

+

When a log file is opened for appending (or created), a time
+ stamp is written to the file. If nothing has been written to
 the log files for 15 minutes, a record is inserted that says
- that we're still alive.

+ that we are still alive.
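The generation-number scheme described above can be illustrated with a small helper; this is not run_erl's actual code, merely a sketch of the 1-to-5 wrap-around that produces the file sequences listed.

```shell
#!/bin/sh
# Illustration of the erlang.log.N rotation scheme: generation
# numbers cycle from 1 to 5, so the successor of 5 is 1.
next_gen() {
    # $1 = current generation number
    if [ "$1" -eq 5 ]; then
        echo 1
    else
        echo $(($1 + 1))
    fi
}

next_gen 3   # -> 4
next_gen 5   # -> 1
```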

to_erl

This program is used to attach to a running Erlang runtime - system, started with run_erl. -

+ system, started with run_erl.

Usage: to_erl [pipe_name | pipe_dir] -

Where pipe_name defaults to /tmp/erlang.pipe.N. -

+

Here pipe_name defaults to /tmp/erlang.pipe.N.

To disconnect from the shell without exiting the Erlang - system, type Ctrl-D. -

+ system, type Ctrl-D.

start_erl

This program starts the Erlang emulator with parameters - -boot and -config set. It reads data about - where these files are located from a file called - start_erl.data which is located in the ]]>. - Each new release introduces a new data file. This file is - automatically generated by the release handler in Erlang. -

-

The following script illustrates the behaviour of the - program. -

+ -boot and -config set. It reads data about + where these files are located from a file named + start_erl.data, which is located in + ]]>. + Each new release introduces a new data file. This file is + automatically generated by the release handler in Erlang.

+

The following script illustrates the behaviour of the program:

#!/bin/sh # -# This program is called by run_erl. It starts +# This program is called by run_erl. It starts # the Erlang emulator and sets -boot and -config parameters. # It should only be used at an embedded target system. # @@ -710,22 +621,23 @@ export PROGNAME export RELDIR exec $BINDIR/erlexec -boot $RELDIR/$VSN/start -config $RELDIR/$VSN/sys $* +

If a diskless and/or read-only client node with the sasl configuration parameter static_emulator set - to true is about to start the -boot and - -config flags must be changed. As such a client cannot - read a new start_erl.data file (the file is not - possible to change dynamically) the boot and config files are + to true is about to start, the -boot and + -config flags must be changed.

+

Such a client cannot
+ read a new start_erl.data file (the file cannot
+ be changed dynamically). Therefore, the boot and config files are
+ always fetched from the same place (but with new contents if
+ a new release has been installed).

+

The release_handler + copies these files to the bin directory in the client directory at the master nodes whenever a new release is made - permanent. -

-

Assuming the same CLIENTDIR as above the last line - should look like: -

- + permanent.

+

Assuming the same CLIENTDIR as above, the last line + is to look like:

+ exec $BINDIR/erlexec -boot $CLIENTDIR/bin/start \ -config $CLIENTDIR/bin/sys $*
diff --git a/system/doc/embedded/part.xml b/system/doc/embedded/part.xml index e4366bd2c2..f3b44bf494 100644 --- a/system/doc/embedded/part.xml +++ b/system/doc/embedded/part.xml @@ -31,17 +31,17 @@ C + -

This manual describes the issues that are specific + +

This section describes the issues that are specific for running Erlang on an embedded system. It describes the differences in installing and starting - Erlang compared to how it is done for a non-embedded system. -

-

Note that this is a supplementary document. You still need to - read the Installation Guide. -

-

There is also target architecture specific information in - the top level README file of the Erlang distribution.

+ Erlang compared to how it is done for a non-embedded system.

+

This is a supplementary section. You also need to + read Section 1 Installation Guide.

+

There is also target architecture-specific information in + the top-level README file of the Erlang distribution.

-- cgit v1.2.3 From 5446dff69b014565a664d86174539114b94bc99e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= Date: Thu, 12 Mar 2015 15:35:13 +0100 Subject: Update Installation Guide MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Björn Gustavsson. --- system/doc/installation_guide/install-binary.xml | 24 +++++++++++------------- system/doc/installation_guide/part.xml | 5 +++-- 2 files changed, 14 insertions(+), 15 deletions(-) (limited to 'system/doc') diff --git a/system/doc/installation_guide/install-binary.xml b/system/doc/installation_guide/install-binary.xml index af7dab6e44..ead4b19323 100644 --- a/system/doc/installation_guide/install-binary.xml +++ b/system/doc/installation_guide/install-binary.xml @@ -33,25 +33,23 @@
Windows +

The system is delivered as a Windows Installer executable. + Get it from http://www.erlang.org/download.html

- Introduction -

The system is delivered as a Windows Installer executable. - Get it from our download page.

-
- -
- Installation -

The installation procedure is is automated. Double-click the + Installing +

The installation procedure is automated. Double-click the .exe file icon and follow the instructions.

+
- Verification + Verifying

Start Erlang/OTP by double-clicking on the Erlang shortcut icon on the desktop.

-

Expect a command line window to pop up with an output looking something like this:

+

Expect a command-line window to pop up with an output looking + something like this:

   Erlang/OTP 17 [erts-6.0] [64-bit] [smp:2:2]
 
@@ -59,12 +57,12 @@
   1>
-

Exit by entering the command halt(),

+

Exit by entering the command halt().

   2> halt().
-

which will close the Erlang/OTP shell.

+

This closes the Erlang/OTP shell.

-
+
diff --git a/system/doc/installation_guide/part.xml b/system/doc/installation_guide/part.xml index 02bf98db7c..96a43d744b 100644 --- a/system/doc/installation_guide/part.xml +++ b/system/doc/installation_guide/part.xml @@ -29,10 +29,11 @@ part.xml -

How to install Erlang/OTP on UNIX or Windows.

+ +

This section describes how to install Erlang/OTP on UNIX and Windows.

- \ No newline at end of file + -- cgit v1.2.3 From e9dec8213a30bb12b4499bea4b8fdac6d55fa9f0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= Date: Thu, 12 Mar 2015 15:35:13 +0100 Subject: Update OAM Principles MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Björn Gustavsson. --- system/doc/oam/oam_intro.xml | 391 +++++++++++++++++++++---------------------- 1 file changed, 187 insertions(+), 204 deletions(-) (limited to 'system/doc') diff --git a/system/doc/oam/oam_intro.xml b/system/doc/oam/oam_intro.xml index f4f990393e..de4867ca16 100644 --- a/system/doc/oam/oam_intro.xml +++ b/system/doc/oam/oam_intro.xml @@ -28,241 +28,224 @@ A oam_intro.xml -

The operation and maintenance support in OTP consists of a - generic model for management subsystems in OTP, and some - components to be used in these subsystems. This document - describes the model. -

-

The main idea in the model is that it is management protocol - independent. Thus, it is not tied to any specific management - protocol. An API is defined which can be used to write - adaptations for specific management protocols. -

-

Each OAM component in OTP is implemented as one sub application, - which can be included in a management application for the system. - Note that such a complete management application is not in the - scope of this generic functionality. Examples illustrating how such an - application can be built are included however. -

+ +

The Operation and Maintenance (OAM) support in OTP consists of a + generic model for management subsystems in OTP, and some components + to be used in these subsystems. This section describes the model.

+ +

The main idea in the model is that it is not tied to any specific + management protocol. An Application Programming Interface (API) is + defined, which can be used to write adaptations for specific + management protocols.

+ +

Each OAM component in OTP is implemented as one sub-application, which + can be included in a management application for the system. Notice that + such a complete management application is not in the scope of this + generic functionality. However, this section includes examples + illustrating how such an application can be built.

Terminology -

The protocol independent architectural model on the network - level is the well-known Client-Server model for management operations. This model is based on the client-server - principle, where the manager (client) sends A request is sent from a manager to an agent when it accesses management information.to the - agent (server), the agent sends A reply is sent from the agent as a response to a request from a manager.back to the manager. There are two main - differences to the normal client-server model. First, there are - usually a few managers that communicate with many agents; and - second, the agent may spontaneously send A notification is sent spontaneously from an agent to a manager, e.g. an alarm.to the - manager. The picture below illustrates the idea.

+

The protocol-independent architectural model on the network level
+    is the well-known client-server model for management operations. This
+    model is based on the client-server principle, where the manager
+    (client) sends a request to the agent (server) when it
+    accesses management information. The agent sends a reply back to the
+    manager. There are two main differences from the normal
+    client-server model:

+ +

Usually a few managers communicate with many agents.

+

The agent can spontaneously send a notification, for example, + an alarm, to the manager.

+
+

The following picture illustrates the idea:

+ Terminology -

The manager is often referred to as the , to - emphasize that it usually is realized as a program that presents - data to an operator. -

-

The agent is an entity that executes within a . - In OTP, the network element may be a distributed system, meaning - that the distributed system is managed as one entity. Of - course, the agent may be configured to be able to run on one of - several nodes, making it a distributed OTP application. -

-

The management information is defined in an . - It is a formal definition of which information the agent makes - available to the manager. The manager accesses the MIB through - a management protocol, such as SNMP, CMIP, HTTP or CORBA. Each - of these protocols have their own MIB definition language. In - SNMP, it is a subset of ASN.1, in CMIP it is GDMO, in HTTP it is - implicit, and using CORBA, it is IDL. Usually, the entities - defined in the MIB are called , although these - objects do not have to be objects in the OO way,for example, a simple - scalar variable defined in an MIB is called a Managed Object. - The Managed Objects are logical objects, not necessarily with a - one-to-one mapping to the resources. -

+ +

The manager is often referred to as the Network Management + System (NMS), to emphasize that it usually is realized as a + program that presents data to an operator.

+ +

The agent is an entity that executes within a Network + Element (NE). In OTP, the NE can be a distributed system, + meaning that the distributed system is managed as one entity. + Of course, the agent can be configured to be able to run on one + of several nodes, making it a distributed OTP application.

+ +

The management information is defined in a Management + Information Base (MIB). It is a formal definition of which + information the agent makes available to the manager. The + manager accesses the MIB through a management protocol, such + as SNMP, CMIP, HTTP, or CORBA. Each protocol has its own MIB + definition language. In SNMP, it is a subset of ASN.1, in CMIP + it is GDMO, in HTTP it is implicit, and using CORBA, it is IDL.

+ +

Usually, the entities defined in the MIB are + called Managed Objects (MOs), although they do not + have to be objects in the object-oriented way. For example, + a simple scalar variable defined in a MIB is called an MO. The + MOs are logical objects, not necessarily with a one-to-one + mapping to the resources.

Model -

In this section, the generic protocol independent model for use - within an OTP based network element is presented. This model is - used by all operation and maintenance components, and may be - used by the applications. The advantage of the model is that it - clearly separates the resources from the management protocol. - The resources do not need to be aware of which management - protocol is used to manage the system. This makes it possible - to manage the same resources with different protocols. -

-

The different entities involved in this model are the which terminates the management protocol, and the - which is to be managed, i.e. the actual - application entities. The resources should in general have no - knowledge of the management protocol used, and the agent should - have no knowledge of the managed resources. This implies that - some sort of translation mechanism must be used, to translate - the management operations to operations on the resources. This - translation mechanism is usually called - instrumentation, and the function that implements it is - called . The - instrumentation functions are written for each combination of - management protocol and resource to be managed. For example, if - an application is to be managed by SNMP and HTTP, two sets of - instrumentation functions are defined; one that maps SNMP - requests to the resources, and one that e.g. generates an HTML - page for some resources. -

-

When a manager makes a request to the agent, we have the - following picture:

+

This section presents the generic protocol-independent model + for use within an OTP-based NE. This model is used by + all OAM components and can be used by the applications. The + advantage of the model is that it clearly separates the + resources from the management protocol. The resources do not + need to be aware of which management protocol is used to manage + the system. The same resources can therefore be managed with + different protocols.

+ +

The entities involved in this model are the agent, which + terminates the management protocol, and the resources, which + is to be managed, that is, the actual application entities. + The resources should in general have no knowledge of the + management protocol used, and the agent should have no + knowledge of the managed resources. This implies that a + translation mechanism is needed, to translate the management + operations to operations on the resources. This translation + mechanism is usually called instrumentation and the + function that implements it is called instrumentation + function. The instrumentation functions are written for + each combination of management protocol and resource to be + managed. For example, if an application is to be managed by + SNMP and HTTP, two sets of instrumentation functions are + defined; one that maps SNMP requests to the resources, and + one that, for example, generates an HTML page for some + resources.

+ +

When a manager makes a request to the agent, the following + illustrates the situation:

+ - Request to an agent by a manager + Request to An Agent by a Manager -

Note that the mapping between instrumentation function and - resource is not necessarily 1-1. It is also possible to write - one instrumentation function for each resource, and use that - function from different protocols. -

-

The agent receives a request and maps this request to calls to - one or several instrumentation functions. The instrumentation - functions perform operations on the resources to implement the - semantics associated with the managed object. -

-

For example, a system that is managed with SNMP and HTTP may be - structured in the following way:

+ +

The mapping between an instrumentation function and a + resource is not necessarily 1-1. It is also possible to write + one instrumentation function for each resource, and use that + function from different protocols.

+ +

The agent receives a request and maps it to calls to one or + more instrumentation functions. These functions perform + operations on the resources to implement the semantics + associated with the MO.

+ +

For example, a system that is managed with SNMP and HTTP + can be structured as follows:

+ - Structure of a system managed with SNMP and HTTP + Structure of a System Managed with SNMP and HTTP -

The resources may send notifications to the manager as well. - Examples of notifications are events and alarms. There is a - need for the resource to generate protocol independent - notifications. The following picture illustrates how this is - achieved:

+ +

The resources can send notifications to the manager as well. + Examples of notifications are events and alarms. The resource + needs to generate protocol-independent notifications. + The following picture illustrates how this is achieved:

+ - Notification handling + Notification Handling -

The main idea is that the resource sends the notfications as - Erlang terms to a dedicated gen_event process. Into this - process, handlers for the different management protocols are - installed. When an event is received by this process, it is - forwarded to each installed handler. The handlers are - responsible for translating the event into a notification to be - sent over the management protocol. For example, a handler for - SNMP would translate each event into an SNMP trap. -

+ +

The main idea is that the resource sends the notifications as + Erlang terms to a dedicated gen_event process. Into this + process, handlers for the different management protocols are + installed. When an event is received by this process, it is + forwarded to each installed handler. The handlers are + responsible for translating the event into a notification to be + sent over the management protocol. For example, a handler for + SNMP translates each event into an SNMP trap.
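The handler side of this flow can be sketched as a minimal gen_event callback module. This is an illustration only, not OTP code: the module name my_snmp_handler, the {alarm, Name, Severity} event format, and the send_trap/2 stub are assumptions; a real SNMP handler would call the snmp agent API instead of printing.

```erlang
-module(my_snmp_handler).
-behaviour(gen_event).

-export([init/1, handle_event/2, handle_call/2, handle_info/2, terminate/2]).

%% Called when the handler is installed with gen_event:add_handler/3.
init([]) ->
    {ok, []}.

%% Each protocol-independent event forwarded by the gen_event process
%% ends up here; translate it into a protocol-specific notification.
handle_event({alarm, Name, Severity}, State) ->
    send_trap(Name, Severity),      % e.g. map the alarm to an SNMP trap
    {ok, State};
handle_event(_Other, State) ->
    {ok, State}.

handle_call(_Request, State) ->
    {ok, ok, State}.

handle_info(_Info, State) ->
    {ok, State}.

terminate(_Arg, _State) ->
    ok.

%% Stub: a real handler would use the SNMP agent API here.
send_trap(Name, Severity) ->
    io:format("trap: ~p ~p~n", [Name, Severity]).
```

The handler would be installed with gen_event:add_handler(AlarmMgr, my_snmp_handler, []), after which resources emit events with, for example, gen_event:notify(AlarmMgr, {alarm, link_down, major}).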

- SNMP based OAM -

For all OAM components, SNMP adaptations are provided. Other - adaptations may be defined in the future. -

+ SNMP-Based OAM +

For all OAM components, SNMP adaptations are provided. Other + adaptations might be defined in the future.

+

The OAM components, and some other OTP applications, define - SNMP MIBs. All these MIBs are written in SNMPv2 SMI syntax, as - defined in RFC1902. For convenience we also deliver the SNMPv1 - SMI equivalent. All MIBs are designed to be v1/v2 compatible, - i.e. the v2 MIBs do not use any construct not available in v1. -

+ SNMP MIBs. These MIBs are written in SNMPv2 SMI syntax, as + defined in RFC 1902. For convenience we also deliver the SNMPv1 + SMI equivalent. All MIBs are designed to be v1/v2 compatible, + that is, the v2 MIBs do not use any construct not available in + v1.

- MIB structure -

The top-level OTP MIB is called OTP-REG, and it is - included in the sasl application. All other OTP mibs - import some objects from this MIB. -

-

Each MIB is contained in one application. The MIB text files - are stored under .mib]]> in the application - directory. The generated .hrl files with constant - declarations are stored under .hrl]]>, and - the compiled MIBs are stored under - .bin]]>. For example, the OTP-MIB - is included in the sasl application: -

+ MIB Structure +

The top-level OTP MIB is called OTP-REG and it is + included in the sasl application. All other OTP MIBs + import some objects from this MIB.

+ +

Each MIB is contained in one application. The MIB text + files are stored under .mib]]> in + the application directory. The generated .hrl files + with constant declarations are stored under + .hrl]]>, and the compiled MIBs + are stored under .bin]]>. + For example, the OTP-MIB is included in the + sasl application:

+ sasl-1.3/mibs/OTP-MIB.mib - include/OTP-MIB.hrl - priv/mibs/OTP-MIB.bin -

An application that needs to IMPORT this mib into another - MIB, should use the il option to the snmp mib compiler: -

+include/OTP-MIB.hrl +priv/mibs/OTP-MIB.bin
+ +

An application that needs to import this MIB into another + MIB is to use the il option to the SNMP MIB compiler:

+ snmp:c("MY-MIB", [{il, ["sasl/priv/mibs"]}]). +

If the application needs to include the generated - .hrl file, it should use the -include_lib - directive to the Erlang compiler. -

+ .hrl file, it is to use the -include_lib + directive to the Erlang compiler:

+ -module(my_mib). - -include_lib("sasl/include/OTP-MIB.hrl"). -

The following MIBs are defined in the OTP system: -

- - OTP-REG (sasl) - -

This MIB contains the top-level OTP registration - objects, used by all other MIBs. -

-
- OTP-TC (sasl) - -

This MIB contains the general Textual Conventions, - which can be used by any other MIB. -

-
- OTP-MIB (sasl) - -

This MIB contains objects for instrumentation of the - Erlang nodes, the Erlang machines and the applications in - the system. -

-
- OTP-OS-MON-MIB (os_mon) - -

This MIB contains objects for instrumentation of disk, - memory and cpu usage of the nodes in the system. -

-
- OTP-SNMPEA-MIB (snmp) - -

This MIB contains objects for instrumentation and - control of the extensible snmp agent itself. Note that - the agent also implements the standard SNMPv2-MIB (or v1 - part of MIB-II, if SNMPv1 is used). -

-
- OTP-EVA-MIB (eva) - -

This MIB contains objects for instrumentation and - control of the events and alarms in the system. -

-
- OTP-LOG-MIB (eva) - -

This MIB contains objects for instrumentation and - control of the logs and FTP transfer of logs. -

-
- OTP-EVA-LOG-MIB (eva) - -

This MIB contains objects for instrumentation and - control of the events and alarm logs in the system. -

-
- OTP-SNMPEA-LOG-MIB (eva) - -

This MIB contains objects for instrumentation and - control of the snmp audit trail log in the system. -

-
-
+ +

The following MIBs are defined in the OTP system:

+ +

OTP-REG (in sasl) contains the top-level
+      OTP registration objects, used by all other MIBs.

+

OTP-TC (in sasl) contains the general + Textual Conventions, which can be used by any other MIB.

+

OTP-MIB (in sasl) contains objects for + instrumentation of the Erlang nodes, the Erlang machines, + and the applications in the system.

+

OTP-OS-MON-MIB (in os_mon) contains
+      objects for instrumentation of disk, memory, and CPU use
+      of the nodes in the system.

+

OTP-SNMPEA-MIB (in snmp) + contains objects for instrumentation and control of the extensible + SNMP agent itself. The agent also implements the standard SNMPv2-MIB + (or v1 part of MIB-II, if SNMPv1 is used).

+

OTP-EVA-MIB (in eva) contains objects + for instrumentation and control of the events and alarms in + the system.

+

OTP-LOG-MIB (in eva) contains objects + for instrumentation and control of the logs and FTP transfer of + logs.

+

OTP-EVA-LOG-MIB (in eva) contains objects + for instrumentation and control of the events and alarm logs + in the system.

+

OTP-SNMPEA-LOG-MIB (in eva) contains + objects for instrumentation and control of the SNMP audit + trail log in the system.

+
+

The different applications use different strategies for - loading the MIBs into the agent. Some MIB implementations are - code-only, while others need a server. One way, used by the - code-only mib implementations, is for the user to call a - function such as otp_mib:init(Agent) to load the MIB, - and otp_mib:stop(Agent) to unload the MIB. See the - application manual page for each application for a description - of how to load each MIB. -

+ loading the MIBs into the agent. Some MIB implementations are + code-only, while others need a server. One way, used by the + code-only MIB implementations, is for the user to call a + function such as otp_mib:init(Agent) to load the MIB, + and otp_mib:stop(Agent) to unload the MIB. See the + manual page for each application for a description of how + to load each MIB.

-- cgit v1.2.3 From f98300fbe9bb29eb3eb2182b12094974a6dc195b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= Date: Thu, 12 Mar 2015 15:35:13 +0100 Subject: Update Programming Examples MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Björn Gustavsson. --- system/doc/programming_examples/bit_syntax.xml | 210 ++++++++-------- system/doc/programming_examples/funs.xmlsrc | 280 +++++++++++---------- .../programming_examples/list_comprehensions.xml | 86 ++++--- system/doc/programming_examples/part.xml | 5 +- system/doc/programming_examples/records.xml | 88 ++++--- 5 files changed, 349 insertions(+), 320 deletions(-) (limited to 'system/doc') diff --git a/system/doc/programming_examples/bit_syntax.xml b/system/doc/programming_examples/bit_syntax.xml index fb321c1ba9..7ede5b71f9 100644 --- a/system/doc/programming_examples/bit_syntax.xml +++ b/system/doc/programming_examples/bit_syntax.xml @@ -31,62 +31,64 @@
Introduction -

In Erlang a Bin is used for constructing binaries and matching +

In Erlang, a Bin is used for constructing binaries and matching binary patterns. A Bin is written with the following syntax:

>]]> -

A Bin is a low-level sequence of bits or bytes. The purpose of a Bin is - to be able to, from a high level, construct a binary,

+

A Bin is a low-level sequence of bits or bytes. + The purpose of a Bin is to enable construction of binaries:

>]]> -

in which case all elements must be bound, or to match a binary,

+

All elements must be bound. Or match a binary:

> = Bin ]]> -

where Bin is bound, and where the elements are bound or +

Here, Bin is bound and the elements are bound or unbound, as in any match.

-

In R12B, a Bin need not consist of a whole number of bytes.

+

Since Erlang R12B, a Bin does not need to consist of a whole number of bytes.

A bitstring is a sequence of zero or more bits, where - the number of bits doesn't need to be divisible by 8. If the number + the number of bits does not need to be divisible by 8. If the number of bits is divisible by 8, the bitstring is also a binary.

Each element specifies a certain segment of the bitstring. A segment is a set of contiguous bits of the binary (not necessarily on a byte boundary). The first element specifies the initial segment, the second element specifies the following - segment etc.

-

The following examples illustrate how binaries are constructed + segment, and so on.

+

The following examples illustrate how binaries are constructed, or matched, and how elements and tails are specified.

Examples -

Example 1: A binary can be constructed from a set of +

Example 1: A binary can be constructed from a set of constants or a string literal:

>, Bin12 = <<"abc">>]]> -

yields binaries of size 3; binary_to_list(Bin11) - evaluates to [1, 17, 42], and - binary_to_list(Bin12) evaluates to [97, 98, 99].

-

Example 2: Similarly, a binary can be constructed +

This gives two binaries of size 3, with the following evaluations:

+ + binary_to_list(Bin11) evaluates to [1, 17, 42]. + binary_to_list(Bin12) evaluates to [97, 98, 99]. + +

Example 2: Similarly, a binary can be constructed
+      from a set of bound variables:

>]]> -

yields a binary of size 4, and binary_to_list(Bin2) - evaluates to [1, 17, 00, 42] too. Here we used a - size expression for the variable C in order to +

This gives a binary of size 4. + Here, a size expression is used for the variable C to specify a 16-bits segment of Bin2.

-

Example 3: A Bin can also be used for matching: if +

binary_to_list(Bin2) evaluates to [1, 17, 00, 42].

+

Example 3: A Bin can also be used for matching. D, E, and F are unbound variables, and - Bin2 is bound as in the former example,

+ Bin2 is bound, as in Example 2:

> = Bin2]]> -

yields D = 273, E = 00, and F binds to a binary +

This gives D = 273, E = 00, and F binds to a binary of size 1: binary_to_list(F) = [42].
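Examples 1-3 above can be checked directly in the Erlang shell. The following sketch reuses the variable names from the examples; the pattern matches double as assertions:

```erlang
%% Example 1: construction from constants and a string literal.
Bin11 = <<1, 17, 42>>,
Bin12 = <<"abc">>,
[1, 17, 42] = binary_to_list(Bin11),
[97, 98, 99] = binary_to_list(Bin12),

%% Example 2: construction from bound variables; C:16 is a 16-bit segment.
A = 1, B = 17, C = 42,
Bin2 = <<A, B, C:16>>,
[1, 17, 0, 42] = binary_to_list(Bin2),

%% Example 3: matching; D takes the first 16 bits (1*256 + 17 = 273).
<<D:16, E, F/binary>> = Bin2,
273 = D,
0 = E,
[42] = binary_to_list(F).
```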

Example 4: The following is a more elaborate example - of matching, where Dgram is bound to the consecutive - bytes of an IP datagram of IP protocol version 4, and where we - want to extract the header and the data of the datagram:

+ of matching. Here, Dgram is bound to the consecutive + bytes of an IP datagram of IP protocol version 4. The ambition is + to extract the header and the data of the datagram:

> = RestDgram, ... end.]]> -

Here the segment corresponding to the Opts variable - has a type modifier specifying that Opts should +

Here, the segment corresponding to the Opts variable + has a type modifier, specifying that Opts is to bind to a binary. All other variables have the default type equal to unsigned integer.

-

An IP datagram header is of variable length, and its length - - measured in the number of 32-bit words - is given in - the segment corresponding to HLen, the minimum value of - which is 5. It is the segment corresponding to Opts - that is variable: if HLen is equal to 5, Opts - will be an empty binary.

+

An IP datagram header is of variable length. This length is + measured in the number of 32-bit words and is given in + the segment corresponding to HLen. The minimum value of + HLen is 5. It is the segment corresponding to Opts + that is variable, so if HLen is equal to 5, Opts + becomes an empty binary.

The tail variables RestDgram and Data bind to - binaries, as all tail variables do. Both may bind to empty + binaries, as all tail variables do. Both can bind to empty binaries.

-

If the first 4-bits segment of Dgram is not equal to - 4, or if HLen is less than 5, or if the size of - Dgram is less than 4*HLen, the match of - Dgram fails.

+

The match of Dgram fails if one of the following occurs:

+ + The first 4-bits segment of Dgram is not equal to 4. + HLen is less than 5. + The size of Dgram is less than 4*HLen. +
- A Lexical Note -

Note that ">]]>" will be interpreted as + Lexical Note +

Notice that ">]]>" will be interpreted as ">]]>", which is a syntax error. - The correct way to write the expression is - ">]]>".

+ The correct way to write the expression is: + >]]>.

Segments

Each segment has the following general syntax:

Value:Size/TypeSpecifierList

-

Both the Size and the TypeSpecifier or both may be - omitted; thus the following variations are allowed:

-

Value

-

Value:Size

-

Value/TypeSpecifierList

-

Default values will be used for missing specifications. - The default values are described in the section +

The Size or the TypeSpecifier, or both, can be + omitted. Thus, the following variants are allowed:

+ + Value + Value:Size + Value/TypeSpecifierList + +

Default values are used when specifications are missing. + The default values are described in Defaults.

-

Used in binary construction, the Value part is any - expression. Used in binary matching, the Value part must - be a literal or variable. You can read more about - the Value part in the section about constructing - binaries and matching binaries.

+

The Value part is any expression, when used in binary construction. + Used in binary matching, the Value part must + be a literal or a variable. For more information about + the Value part, see + Constructing Binaries and Bitstrings + and + Matching Binaries.

The Size part of the segment multiplied by the unit in - the TypeSpecifierList (described below) gives the number + TypeSpecifierList (described later) gives the number of bits for the segment. In construction, Size is any expression that evaluates to an integer. In matching, Size must be a constant expression or a variable.

@@ -160,22 +168,22 @@ end.]]>
binary. Signedness The signedness specification can be either signed - or unsigned. Note that signedness only matters for + or unsigned. Notice that signedness only matters for matching. Endianness The endianness specification can be either big, little, or native. Native-endian means that - the endian will be resolved at load time to be either + the endian is resolved at load time, to be either big-endian or little-endian, depending on what is "native" for the CPU that the Erlang machine is run on. Unit The unit size is given as unit:IntegerLiteral. - The allowed range is 1-256. It will be multiplied by + The allowed range is 1-256. It is multiplied by the Size specifier to give the effective size of - the segment. In R12B, the unit size specifies the alignment - for binary segments without size (examples will follow). + the segment. Since Erlang R12B, the unit size specifies the alignment + for binary segments without size. -

Example:

+

Example:

X:4/little-signed-integer-unit:8

This element has a total size of 4*8 = 32 bits, and it contains @@ -184,13 +192,14 @@ X:4/little-signed-integer-unit:8
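The stated total size can be verified with bit_size/1. A small sketch (the value -1 is chosen only to show that the segment is signed):

```erlang
X = -1,
%% 4 * unit 8 = 32 bits in total.
Seg = <<X:4/little-signed-integer-unit:8>>,
32 = bit_size(Seg),
%% Matching with the same specifier recovers the signed value.
<<Y:4/little-signed-integer-unit:8>> = Seg,
-1 = Y.
```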

Defaults -

The default type for a segment is integer. The default +

The default type for + a segment is integer. The default type does not depend on the value, even if the value is a - literal. For instance, the default type in '>]]>' is + literal. For example, the default type in >]]> is integer, not float.

The default Size depends on the type. For integer it is 8. For float it is 64. For binary it is all of the binary. In - matching, this default value is only valid for the very last + matching, this default value is only valid for the last element. All other binary elements in matching must have a size specification.

The default unit depends on the type. For integer,
@@ -201,61 +210,60 @@ X:4/little-signed-integer-unit:8
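The defaults described above can be observed in the shell; a short sketch:

```erlang
%% Default type is integer with size 8; a trailing segment of type
%% binary without a size takes the rest of the binary.
<<A, Rest/binary>> = <<1, 2, 3>>,
1 = A,
<<2, 3>> = Rest,
%% A float segment defaults to 64 bits.
64 = bit_size(<<3.14/float>>).
```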

Constructing Binaries and Bitstrings +

This section describes the rules for constructing binaries using the bit syntax. Unlike when constructing lists or tuples, the construction of a binary can fail with a badarg exception.

There can be zero or more segments in a binary to be - constructed. The expression '>]]>' constructs a zero + constructed. The expression >]]> constructs a zero length binary.

Each segment in a binary can consist of zero or more bits. There are no alignment rules for individual segments of type integer and float. For binaries and bitstrings without size, the unit specifies the alignment. Since the default alignment for the binary type is 8, the size of a binary - segment must be a multiple of 8 bits (i.e. only whole bytes). - Example:

+ segment must be a multiple of 8 bits, that is, only whole bytes.

+

Example:

>]]>

The variable Bin must contain a whole number of bytes, because the binary type defaults to unit:8. - A badarg exception will be generated if Bin would - consist of (for instance) 17 bits.

A badarg exception is generated if Bin
+      consists of, for example, 17 bits.

-

On the other hand, the variable Bitstring may consist of - any number of bits, for instance 0, 1, 8, 11, 17, 42, and so on, - because the default unit for bitstrings is 1.

+

The Bitstring variable can consist of + any number of bits, for example, 0, 1, 8, 11, 17, 42, and so on. + This is because the default unit for bitstrings is 1.

-

For clarity, it is recommended not to change the unit - size for binaries, but to use binary when you need byte - alignment, and bitstring when you need bit alignment.

+

For clarity, it is recommended not to change the unit + size for binaries. Instead, use binary when you need byte alignment + and bitstring when you need bit alignment.

-

The following example

+

The following example successfully constructs a bitstring of 7 bits, + provided that all of X and Y are integers:

>]]> -

will successfully construct a bitstring of 7 bits. - (Provided that all of X and Y are integers.)

-

As noted earlier, segments have the following general syntax:

+

As mentioned earlier, segments have the following general syntax:

Value:Size/TypeSpecifierList

When constructing binaries, Value and Size can be any Erlang expression. However, for syntactical reasons, both Value and Size must be enclosed in parentheses if the expression consists of anything more than a single literal - or variable. The following gives a compiler syntax error:

+ or a variable. The following gives a compiler syntax error:

>]]> -

This expression must be rewritten to

+

This expression must be rewritten into the following, + to be accepted by the compiler:

>]]> -

in order to be accepted by the compiler.

Including Literal Strings -

As syntactic sugar, an literal string may be written instead - of a element.

+

A literal string can be written instead of an element:

>]]> -

which is syntactic sugar for

+

This is syntactic sugar for the following:

>]]>
@@ -263,29 +271,30 @@ X:4/little-signed-integer-unit:8
Matching Binaries -

This section describes the rules for matching binaries using + +

This section describes the rules for matching binaries, using the bit syntax.

There can be zero or more segments in a binary pattern. - A binary pattern can occur in every place patterns are allowed, - also inside other patterns. Binary patterns cannot be nested.

-

The pattern '>]]>' matches a zero length binary.

-

Each segment in a binary can consist of zero or more bits.

-

A segment of type binary must have a size evenly - divisible by 8 (or divisible by the unit size, if the unit size has been changed).

-

A segment of type bitstring has no restrictions on the size.

-

As noted earlier, segments have the following general syntax:

+ A binary pattern can occur wherever patterns are allowed, + including inside other patterns. Binary patterns cannot be nested. + The pattern >]]> matches a zero length binary.

+

Each segment in a binary can consist of zero or more bits. + A segment of type binary must have a size evenly divisible by 8 + (or divisible by the unit size, if the unit size has been changed). + A segment of type bitstring has no restrictions on the size.

+

As mentioned earlier, segments have the following general syntax:

Value:Size/TypeSpecifierList

-

When matching Value value must be either a variable or - an integer or floating point literal. Expressions are not +

When matching Value, value must be either a variable or + an integer, or a floating point literal. Expressions are not allowed.

Size must be an integer literal, or a previously bound - variable. Note that the following is not allowed:

+ variable. The following is not allowed:

>) -> {X,T}.]]>

The two occurrences of N are not related. The compiler will complain that the N in the size field is unbound.

-

The correct way to write this example is like this:

+

The correct way to write this example is as follows:

<> = Bin, @@ -303,14 +312,14 @@ foo(<>) ->]]> without size:

>) ->]]> -

There is no restriction on the number of bits in the tail.

+

There are no restrictions on the number of bits in the tail.
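As an illustration (a hypothetical helper, not from the manual), a fixed-size head can be matched together with a tail of unrestricted bit length:

```erlang
%% Matches one byte, then binds the remaining bits -- 0, 1, 17,
%% or any other number of bits -- to Rest.
head_and_tail(<<First:8, Rest/bitstring>>) ->
    {First, Rest}.
```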

Appending to a Binary -

In R12B, the following function for creating a binary out of - a list of triples of integers is now efficient:

+

Since Erlang R12B, the following function for creating a binary out of + a list of triples of integers is efficient:

triples_to_bin(T, <<>>). @@ -321,7 +330,8 @@ triples_to_bin([], Acc) -> Acc.]]>

In previous releases, this function was highly inefficient, because the binary constructed so far (Acc) was copied in each recursion step. - That is no longer the case. See the Efficiency Guide for more information.

+ That is no longer the case. For more information, see + Efficiency Guide.
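For reference, the function discussed above reads in full roughly as follows (a sketch consistent with the surrounding text; the 32-bit segment sizes are illustrative):

```erlang
triples_to_bin(T) ->
    triples_to_bin(T, <<>>).

%% Appending to Acc on the right is the efficient pattern since R12B;
%% Acc is no longer copied in each recursion step.
triples_to_bin([{X,Y,Z} | T], Acc) ->
    triples_to_bin(T, <<Acc/binary, X:32, Y:32, Z:32>>);
triples_to_bin([], Acc) ->
    Acc.
```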

diff --git a/system/doc/programming_examples/funs.xmlsrc b/system/doc/programming_examples/funs.xmlsrc index 7bfac9db8c..e4f5c9c9c9 100644 --- a/system/doc/programming_examples/funs.xmlsrc +++ b/system/doc/programming_examples/funs.xmlsrc @@ -30,128 +30,124 @@
- Example 1 - map -

If we want to double every element in a list, we could write a - function named double:

+ map +

The following function, double, doubles every element in a list:

double([H|T]) -> [2*H|double(T)]; double([]) -> []. -

This function obviously doubles the argument entered as input - as follows:

+

Hence, the argument entered as input is doubled as follows:

 > double([1,2,3,4]).
 [2,4,6,8]
-

We now add the function add_one, which adds one to every +

The following function, add_one, adds one to every element in a list:

add_one([H|T]) -> [H+1|add_one(T)]; add_one([]) -> []. -

These functions, double and add_one, have a very - similar structure. We can exploit this fact and write a function - map which expresses this similarity:

+

The functions double and add_one have a + similar structure. This can be used by writing a function + map that expresses this similarity:
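The map referred to here is the classic recursive definition (a minimal sketch):

```erlang
%% Applies F to each element of the list, building a new list.
map(F, [H|T]) -> [F(H) | map(F, T)];
map(_, [])    -> [].
```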

-

We can now express the functions double and - add_one in terms of map as follows:

+

The functions double and add_one can now be expressed + in terms of map as follows:

double(L) -> map(fun(X) -> 2*X end, L). add_one(L) -> map(fun(X) -> 1 + X end, L). -

map(F, List) is a function which takes a function - F and a list L as arguments and returns the new - list which is obtained by applying F to each of +

map(F, List) is a function that takes a function + F and a list L as arguments and returns a new + list, obtained by applying F to each of the elements in L.

The process of abstracting out the common features of a number - of different programs is called procedural abstraction. - Procedural abstraction can be used in order to write several - different functions which have a similar structure, but differ - only in some minor detail. This is done as follows:

+ of different programs is called procedural abstraction. + Procedural abstraction can be used to write several + different functions that have a similar structure, but differ + in some minor detail. This is done as follows:

- write one function which represents the common features of - these functions - parameterize the difference in terms of functions which + Step 1. Write one function that represents the common features of + these functions. + Step 2. Parameterize the difference in terms of functions that are passed as arguments to the common function.
- Example 2 - foreach -

This example illustrates procedural abstraction. Initially, we - show the following two examples written as conventional - functions:

- - all elements of a list are printed onto a stream - a message is broadcast to a list of processes. - + foreach +

This section illustrates procedural abstraction. Initially, + the following two examples are written as conventional + functions.

+

This function prints all elements of a list onto a stream:

print_list(Stream, [H|T]) -> io:format(Stream, "~p~n", [H]), print_list(Stream, T); print_list(Stream, []) -> true. +

This function broadcasts a message to a list of processes:

broadcast(Msg, [Pid|Pids]) -> Pid ! Msg, broadcast(Msg, Pids); broadcast(_, []) -> true. -

Both these functions have a very similar structure. They both - iterate over a list doing something to each element in the list. - The "something" has to be carried round as an extra argument to - the function which does this.

+

These two functions have a similar structure. They both + iterate over a list and do something to each element in the list. + The "something" is passed on as an extra argument to + the function that does this.

The function foreach expresses this similarity:
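A minimal sketch of such a foreach:

```erlang
%% Applies F to each element for its side-effect only.
foreach(F, [H|T]) ->
    F(H),
    foreach(F, T);
foreach(_, []) ->
    ok.
```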

-

Using foreach, print_list becomes:

+

Using the function foreach, the function print_list becomes:

foreach(fun(H) -> io:format(S, "~p~n",[H]) end, L) -

broadcast becomes:

+

Using the function foreach, the function broadcast becomes:

foreach(fun(Pid) -> Pid ! M end, L)

foreach is evaluated for its side-effect and not its value. foreach(Fun, L) calls Fun(X) for each element X in L and the processing occurs in - the order in which the elements were defined in L. + the order that the elements were defined in L. map does not define the order in which its elements are processed.

- The Syntax of Funs -

Funs are written with the syntax:

+ Syntax of Funs +

Funs are written with the following syntax:

F = fun (Arg1, Arg2, ... ArgN) -> ... end

This creates an anonymous function of N arguments and binds it to the variable F.

-

If we have already written a function in the same module and - wish to pass this function as an argument, we can use - the following syntax:

+

Another function, FunctionName, written in the same module, + can be passed as an argument, using the following syntax:

F = fun FunctionName/Arity -

With this form of function reference, the function which is +

With this form of function reference, the function that is referred to does not need to be exported from the module.

-

We can also refer to a function defined in a different module +

It is also possible to refer to a function defined in a different module, with the following syntax:

F = {Module, FunctionName}

In this case, the function must be exported from the module in question.

-

The follow program illustrates the different ways of creating +

The following program illustrates the different ways of creating funs:
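A small sketch of the forms described above (the module and function names are illustrative, not from the original program):

```erlang
-module(fun_forms).
-export([double/1, demo/0]).

double(X) -> 2 * X.

demo() ->
    F1 = fun(X) -> 2 * X end,   % anonymous fun
    F2 = fun double/1,          % reference to a local function
    {F1(21), F2(21)}.           % both yield 42
```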

-

We can evaluate the fun F with the syntax:

+

The fun F can be evaluated with the following syntax:

F(Arg1, Arg2, ..., Argn)

To check whether a term is a fun, use the test - is_function/1 in a guard. Example:

+ is_function/1 in a guard.

+

Example:

f(F, Args) when is_function(F) -> apply(F, Args); f(N, _) when is_integer(N) -> N. -

Funs are a distinct type. The BIFs erlang:fun_info/1,2 can +

Funs are a distinct type. The BIFs erlang:fun_info/1,2 can be used to retrieve information about a fun, and the BIF - erlang:fun_to_list/1 returns a textual representation of a fun. - The check_process_code/2 BIF returns true if the process + erlang:fun_to_list/1 returns a textual representation of a fun. + The check_process_code/2 BIF returns true if the process contains funs that depend on the old version of a module.

In OTP R5 and earlier releases, funs were represented using @@ -161,15 +157,15 @@ f(N, _) when is_integer(N) ->

Variable Bindings Within a Fun -

The scope rules for variables which occur in funs are as +

The scope rules for variables that occur in funs are as follows:

- All variables which occur in the head of a fun are assumed + All variables that occur in the head of a fun are assumed to be "fresh" variables. - Variables which are defined before the fun, and which + Variables that are defined before the fun, and that occur in function calls or guard tests within the fun, have the values they had outside the fun. - No variables may be exported from a fun. + Variables cannot be exported from a fun.

The following examples illustrate these rules:

@@ -177,12 +173,13 @@ print_list(File, List) -> {ok, Stream} = file:open(File, write), foreach(fun(X) -> io:format(Stream,"~p~n",[X]) end, List), file:close(Stream). -

In the above example, the variable X which is defined in - the head of the fun is a new variable. The value of the variable - Stream which is used within within the fun gets its value +

Here, the variable X, defined in + the head of the fun, is a new variable. The variable + Stream, which is used within the fun, gets its value from the file:open line.

-

Since any variable which occurs in the head of a fun is - considered a new variable it would be equally valid to write:

+

As any variable that occurs in the head of a fun is + considered a new variable, it is equally valid to write + as follows:

print_list(File, List) -> {ok, Stream} = file:open(File, write), @@ -190,21 +187,21 @@ print_list(File, List) -> io:format(Stream,"~p~n",[File]) end, List), file:close(Stream). -

In this example, File is used as the new variable - instead of X. This is rather silly since code in the body - of the fun cannot refer to the variable File which is - defined outside the fun. Compiling this example will yield - the diagnostic:

+

Here, File is used as the new variable + instead of X. This is not so wise because code in the fun + body cannot refer to the variable File, which is + defined outside of the fun. Compiling this example gives + the following diagnostic:

./FileName.erl:Line: Warning: variable 'File' shadowed in 'lambda head' -

This reminds us that the variable File which is defined - inside the fun collides with the variable File which is +

This indicates that the variable File, which is defined + inside the fun, collides with the variable File, which is defined outside the fun.

The rules for importing variables into a fun have the consequence - that certain pattern matching operations have to be moved into + that certain pattern matching operations must be moved into guard expressions and cannot be written in the head of the fun. - For example, we might write the following code if we intend + For example, you might write the following code if you intend the first clause of F to be evaluated when the value of its argument is Y:

@@ -216,7 +213,7 @@ f(...) -> ... end, ...) ... -

instead of

+

instead of writing the following code:

f(...) -> Y = ... @@ -229,35 +226,37 @@ f(...) ->
- Funs and the Module Lists + Funs and Module Lists

The following examples show a dialogue with the Erlang shell. All the higher order functions discussed are exported from the module lists.

map +

map takes a function of one argument and a list of terms:

-

map takes a function of one argument and a list of - terms. It returns the list obtained by applying the function +

It returns the list obtained by applying the function to every argument in the list.

+

When a new fun is defined in the shell, the value of the fun + is printed as ]]>:

 > Double = fun(X) -> 2 * X end.
 #Fun<erl_eval.6.72228031>
 > lists:map(Double, [1,2,3,4,5]).
 [2,4,6,8,10]
-

When a new fun is defined in the shell, the value of the Fun - is printed as ]]>.

+
any -

any takes a predicate P of one argument and a - list of terms. A predicate is a function which returns - true or false. any is true if there is a - term X in the list such that P(X) is true.

-

We define a predicate Big(X) which is true if - its argument is greater that 10.

+ list of terms:

+ +

A predicate is a function that returns true or false. + any is true if there is a term X in the list such that + P(X) is true.

+

A predicate Big(X) is defined, which is true if + its argument is greater than 10:

 > Big =  fun(X) -> if X > 10 -> true; true -> false end end.
 #Fun<erl_eval.6.72228031>
@@ -269,9 +268,10 @@ true
all +

all has the same arguments as any:

-

all has the same arguments as any. It is true - if the predicate applied to all elements in the list is true.

+

It is true + if the predicate applied to all elements in the list is true.

 > lists:all(Big, [1,2,3,4,12,6]).   
 false
@@ -281,11 +281,12 @@ true
foreach -

foreach takes a function of one argument and a list of - terms. The function is applied to each argument in the list. - foreach returns ok. It is used for its - side-effect only.

+ terms:

+ +

The function is applied to each argument in the list. + foreach returns ok. It is only used for its + side-effect:

 > lists:foreach(fun(X) -> io:format("~w~n",[X]) end, [1,2,3,4]). 
 1
@@ -297,15 +298,16 @@ ok
foldl -

foldl takes a function of two arguments, an - accumulator and a list. The function is called with two + accumulator and a list:

+ +

The function is called with two arguments. The first argument is the successive elements in - the list, the second argument is the accumulator. The function - must return a new accumulator which is used the next time + the list. The second argument is the accumulator. The function + must return a new accumulator, which is used the next time the function is called.

-

If we have a list of lists L = ["I","like","Erlang"], - then we can sum the lengths of all the strings in L as +

If you have a list of lists L = ["I","like","Erlang"], + then you can sum the lengths of all the strings in L as follows:

 > L = ["I","like","Erlang"].
@@ -325,11 +327,11 @@ end
 
     
mapfoldl +

mapfoldl simultaneously maps and folds over a list:

-

mapfoldl simultaneously maps and folds over a list. - The following example shows how to change all letters in - L to upper case and count them.

-

First upcase:

+

The following example shows how to change all letters in + L to upper case and then count them.

+

First the change to upper case:

 > Upcase =  fun(X) when $a =< X,  X =< $z -> X + $A - $a;
 (X) -> X 
@@ -344,7 +346,7 @@ end
 "ERLANG"
 > lists:map(Upcase_word, L).
 ["I","LIKE","ERLANG"]
-

Now we can do the fold and the map at the same time:

+

Now, the fold and the map can be done at the same time:

 > lists:mapfoldl(fun(Word, Sum) ->
 {Upcase_word(Word), Sum + length(Word)}
@@ -354,23 +356,24 @@ end
 
     
filter -

filter takes a predicate of one argument and a list - and returns all element in the list which satisfy - the predicate.

+ and returns all elements in the list that satisfy + the predicate:

+
 > lists:filter(Big, [500,12,2,45,6,7]).
 [500,12,45]
-

When we combine maps and filters we can write very succinct - code. For example, suppose we want to define a set difference - function. We want to define diff(L1, L2) to be - the difference between the lists L1 and L2. - This is the list of all elements in L1 which are not contained - in L2. This code can be written as follows:

+

Combining maps and filters makes it possible to write very succinct + code. For example, to define a set difference + function diff(L1, L2) to be + the difference between the lists L1 and L2, + the code can be written as follows:

diff(L1, L2) -> filter(fun(X) -> not member(X, L2) end, L1). -

The AND intersection of the list L1 and L2 is +

This gives the list of all elements in L1 that are not contained + in L2.

+

The AND intersection of the list L1 and L2 is also easily defined:

intersection(L1,L2) -> filter(fun(X) -> member(X,L1) end, L2). @@ -378,9 +381,9 @@ intersection(L1,L2) -> filter(fun(X) -> member(X,L1) end, L2).
takewhile -

takewhile(P, L) takes elements X from a list - L as long as the predicate P(X) is true.

+ L as long as the predicate P(X) is true:

+
 > lists:takewhile(Big, [200,500,45,5,3,45,6]).  
 [200,500,45]
@@ -388,8 +391,8 @@ intersection(L1,L2) -> filter(fun(X) -> member(X,L1) end, L2).
dropwhile +

dropwhile is the complement of takewhile:

-

dropwhile is the complement of takewhile.

 > lists:dropwhile(Big, [200,500,45,5,3,45,6]).
 [5,3,45,6]
@@ -397,10 +400,10 @@ intersection(L1,L2) -> filter(fun(X) -> member(X,L1) end, L2).
splitwith -

splitwith(P, L) splits the list L into the two - sub-lists {L1, L2}, where L = takewhile(P, L) - and L2 = dropwhile(P, L).

+ sublists {L1, L2}, where L = takewhile(P, L) + and L2 = dropwhile(P, L):

+
 > lists:splitwith(Big, [200,500,45,5,3,45,6]).
 {[200,500,45],[5,3,45,6]}
@@ -408,17 +411,17 @@ intersection(L1,L2) -> filter(fun(X) -> member(X,L1) end, L2).
- Funs Which Return Funs -

So far, this section has only described functions which take - funs as arguments. It is also possible to write more powerful - functions which themselves return funs. The following examples - illustrate these type of functions.

+ Funs Returning Funs +

So far, only functions that take + funs as arguments have been described. More powerful + functions that themselves return funs can also be written. The following + examples illustrate these types of functions.

Simple Higher Order Functions -

Adder(X) is a function which, given X, returns +

Adder(X) is a function that given X, returns a new function G such that G(K) returns - K + X.

+ K + X:

 > Adder = fun(X) -> fun(Y) -> X + Y end end.
 #Fun<erl_eval.6.72228031>
@@ -438,7 +441,7 @@ ints_from(N) ->
     fun() ->
             [N|ints_from(N+1)]
     end.
-      

Then we can proceed as follows:

+

Then proceed as follows:

 > XX = lazy:ints_from(1).
 #Fun<lazy.0.29874839>
@@ -450,7 +453,7 @@ ints_from(N) ->
 #Fun<lazy.0.29874839>
 > hd(Y()).
 2
-

etc. - this is an example of "lazy embedding".

+

And so on. This is an example of "lazy embedding".

@@ -459,17 +462,21 @@ ints_from(N) ->
 Parser(Toks) -> {ok, Tree, Toks1} | fail

Toks is the list of tokens to be parsed. A successful - parse returns {ok, Tree, Toks1}, where Tree is a - parse tree and Toks1 is a tail of Tree which - contains symbols encountered after the structure which was - correctly parsed. Otherwise fail is returned.

-

The example which follows illustrates a simple, functional - parser which parses the grammar:

+ parse returns {ok, Tree, Toks1}.

+ + Tree is a parse tree. + Toks1 is a tail of Toks that + contains symbols encountered after the structure that was + correctly parsed. +

An unsuccessful parse returns fail.

+

The following example illustrates a simple, functional + parser that parses the grammar:

 (a | b) & (c | d)

The following code defines a function pconst(X) in - the module funparse, which returns a fun which parses a - list of tokens.

+ the module funparse, which returns a fun that parses a + list of tokens:
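The definition of pconst(X) is along the following lines (a sketch of the described behavior; the actual funparse code may differ in detail):

```erlang
%% Returns a parser fun that accepts the single token X.
pconst(X) ->
    fun([T | Toks1]) when T =:= X -> {ok, {const, X}, Toks1};
       (_)                        -> fail
    end.
```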

This function can be used as follows:

@@ -479,17 +486,18 @@ Parser(Toks) -> {ok, Tree, Toks1} | fail
{ok,{const,a},[b,c]} > P1([x,y,z]). fail
-

Next, we define the two higher order functions pand - and por which combine primitive parsers to produce more - complex parsers. Firstly pand:

+

Next, the two higher order functions pand + and por are defined. They combine primitive parsers to produce more + complex parsers.

+

First pand:

Given a parser P1 for grammar G1, and a parser P2 for grammar G2, pand(P1, P2) returns a - parser for the grammar which consists of sequences of tokens - which satisfy G1 followed by sequences of tokens which + parser for the grammar, which consists of sequences of tokens + that satisfy G1, followed by sequences of tokens that satisfy G2.

por(P1, P2) returns a parser for the language - described by the grammar G1 or G2.

+ described by the grammar G1 or G2:

The original problem was to parse the grammar . The following code addresses this @@ -497,7 +505,7 @@ fail

The following code adds a parser interface to the grammar:

-

We can test this parser as follows:

+

The parser can be tested as follows:

 > funparse:parse([a,c]).
 {ok,{'and',{'or',1,{const,a}},{'or',1,{const,c}}}}
diff --git a/system/doc/programming_examples/list_comprehensions.xml b/system/doc/programming_examples/list_comprehensions.xml
index d6c8a66e13..5b33b14dea 100644
--- a/system/doc/programming_examples/list_comprehensions.xml
+++ b/system/doc/programming_examples/list_comprehensions.xml
@@ -31,18 +31,15 @@
 
   
Simple Examples -

We start with a simple example:

+

This section starts with a simple example, showing a generator and a filter:

 > [X || X <- [1,2,a,3,4,b,5,6], X > 3].
 [a,4,b,5,6]
-

This should be read as follows:

- -

The list of X such that X is taken from the list +

This is read as follows: The list of X such that X is taken from the list [1,2,a,...] and X is greater than 3.

-

The notation is a generator and the expression X > 3 is a filter.

-

An additional filter can be added in order to restrict +

An additional filter, integer(X), can be added to restrict the result to integers:

 > [X || X <- [1,2,a,3,4,b,5,6], integer(X), X > 3].
@@ -56,7 +53,7 @@
 
   
Quick Sort -

The well known quick sort routine can be written as follows:

+

The well-known quick sort routine can be written as follows:

sort([ X || X <- T, X < Pivot]) ++ @@ -64,15 +61,20 @@ sort([Pivot|T]) -> sort([ X || X <- T, X >= Pivot]); sort([]) -> [].]]>

The expression is the list of - all elements in T, which are less than Pivot.

+ all elements in T that are less than Pivot.

= Pivot]]]> is the list of all elements in - T, which are greater or equal to Pivot.

-

To sort a list, we isolate the first element in the list and - split the list into two sub-lists. The first sub-list contains - all elements which are smaller than the first element in - the list, the second contains all elements which are greater - than or equal to the first element in the list. We then sort - the sub-lists and combine the results.

+ T that are greater than or equal to Pivot.

+

A list is sorted as follows:

+ + The first element in the list is isolated + and the list is split into two sublists. + The first sublist contains + all elements that are smaller than the first element in + the list. + The second sublist contains all elements that are greater + than, or equal to, the first element in the list. + Then the sublists are sorted and the results are combined. +
@@ -82,10 +84,10 @@ sort([]) -> [].]]> [[]]; perms(L) -> [[H|T] || H <- L, T <- perms(L--[H])].]]> -

We take take H from L in all possible ways. +

This takes H from L in all possible ways. The result is the set of all lists [H|T], where T - is the set of all possible permutations of L with - H removed.

+ is the set of all possible permutations of L, with + H removed:

 > perms([b,u,g]).
 [[b,u,g],[b,g,u],[u,b,g],[u,g,b],[g,b,u],[g,u,b]]
@@ -97,7 +99,7 @@ perms(L) -> [[H|T] || H <- L, T <- perms(L--[H])].]]> that A**2 + B**2 = C**2.

The function pyth(N) generates a list of all integers {A,B,C} such that A**2 + B**2 = C**2 and where - the sum of the sides is equal to or less than N.

+ the sum of the sides is equal to, or less than, N:

[ {A,B,C} || @@ -140,7 +142,7 @@ pyth1(N) ->
- Simplifications with List Comprehensions + Simplifications With List Comprehensions

As an example, list comprehensions can be used to simplify some of the functions in lists.erl:

[X || X <- L, Pred(X)].]]>
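For example, several such functions condense to one-liners (sketches only; the real lists module is more elaborate):

```erlang
%% Map, filter, and append expressed as list comprehensions.
map(Fun, L)     -> [Fun(X) || X <- L].
filter(Pred, L) -> [X || X <- L, Pred(X)].
append(LL)      -> [X || L <- LL, X <- L].
```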
Variable Bindings in List Comprehensions -

The scope rules for variables which occur in list +

The scope rules for variables that occur in list comprehensions are as follows:

- all variables which occur in a generator pattern are - assumed to be "fresh" variables - any variables which are defined before the list - comprehension and which are used in filters have the values - they had before the list comprehension - no variables may be exported from a list comprehension. + All variables that occur in a generator pattern are + assumed to be "fresh" variables. + Any variables that are defined before the list + comprehension, and that are used in filters, have the values + they had before the list comprehension. + Variables cannot be exported from a list comprehension. -

As an example of these rules, suppose we want to write +

As an example of these rules, suppose you want to write the function select, which selects certain elements from - a list of tuples. We might write + a list of tuples. Suppose you write [Y || {X, Y} <- L].]]> with the intention - of extracting all tuples from L where the first item is + of extracting all tuples from L, where the first item is X.

-

Compiling this yields the following diagnostic:

+

Compiling this gives the following diagnostic:

./FileName.erl:Line: Warning: variable 'X' shadowed in generate -

This diagnostic warns us that the variable X in - the pattern is not the same variable as the variable X - which occurs in the function head.

-

Evaluating select yields the following result:

+

This diagnostic warns that the variable X in + the pattern is not the same as the variable X + that occurs in the function head.

+

Evaluating select gives the following result:

 > select(b,[{a,1},{b,2},{c,3},{b,7}]).
 [1,2,3,7]
-

This result is not what we wanted. To achieve the desired - effect we must write select as follows:

+

This is not the wanted result. To achieve the desired + effect, select must be written as follows:

[Y || {X1, Y} <- L, X == X1].]]>

The generator now contains unbound variables and the test has - been moved into the filter. This now works as expected:

+ been moved into the filter.

+

This now works as expected:

 > select(b,[{a,1},{b,2},{c,3},{b,7}]).
 [2,7]
-

One consequence of the rules for importing variables into a +

A consequence of the rules for importing variables into a list comprehensions is that certain pattern matching operations - have to be moved into the filters and cannot be written directly - in the generators. To illustrate this, do not write as follows:

+ must be moved into the filters and cannot be written directly + in the generators.

+

To illustrate this, do not write as follows:

Y = ... diff --git a/system/doc/programming_examples/part.xml b/system/doc/programming_examples/part.xml index 0bec9b4cf5..9329717ce4 100644 --- a/system/doc/programming_examples/part.xml +++ b/system/doc/programming_examples/part.xml @@ -28,8 +28,9 @@ -

This chapter contains examples on using records, funs, list - comprehensions and the bit syntax.

+ +

This section contains examples on using records, funs, list + comprehensions, and the bit syntax.

diff --git a/system/doc/programming_examples/records.xml b/system/doc/programming_examples/records.xml index 58cf136a0b..ffcc05e758 100644 --- a/system/doc/programming_examples/records.xml +++ b/system/doc/programming_examples/records.xml @@ -30,37 +30,39 @@
- Records vs Tuples -

The main advantage of using records instead of tuples is that + Records and Tuples +

The main advantage of using records rather than tuples is that fields in a record are accessed by name, whereas fields in a tuple are accessed by position. To illustrate these differences, - suppose that we want to represent a person with the tuple + suppose that you want to represent a person with the tuple {Name, Address, Phone}.

-

We must remember that the Name field is the first - element of the tuple, the Address field is the second - element, and so on, in order to write functions which manipulate - this data. For example, to extract data from a variable P - which contains such a tuple we might write the following code - and then use pattern matching to extract the relevant fields.

+

To write functions that manipulate this data, remember the following:

+ + The Name field is the first element of the tuple. + The Address field is the second element. + The Phone field is the third element. + +

For example, to extract data from a variable P + that contains such a tuple, you can write the following code + and then use pattern matching to extract the relevant fields:

Name = element(1, P), Address = element(2, P), ... -

Code like this is difficult to read and understand and errors - occur if we get the numbering of the elements in the tuple wrong. - If we change the data representation by re-ordering the fields, - or by adding or removing a field, then all references to - the person tuple, wherever they occur, must be checked and - possibly modified.

-

Records allow us to refer to the fields by name and not - position. We use a record instead of a tuple to store the data. - If we write a record definition of the type shown below, we can - then refer to the fields of the record by name.

+

Such code is difficult to read and understand, and errors + occur if the numbering of the elements in the tuple is wrong. + If the data representation of the fields is changed, by re-ordering, + adding, or removing fields, all references to + the person tuple must be checked and possibly modified.

+

Records allow references to the fields by name, instead of by + position. In the following example, a record instead of a tuple + is used to store the data:

-record(person, {name, phone, address}). -

For example, if P is now a variable whose value is a - person record, we can code as follows in order to access - the name and address fields of the records.

+

This enables references to the fields of the record by name. + For example, if P is a variable whose value is a + person record, the following code accesses + the name and address fields of the record:

Name = P#person.name, Address = P#person.address, @@ -72,24 +74,25 @@ Address = P#person.address,
Defining a Record -

This definition of a person will be used in many of - the examples which follow. It contains three fields, name, - phone and address. The default values for +

The following definition of a person is used in several + examples in this section. Three fields are included, name, + phone, and address. The default values for name and phone are "" and [], respectively. The default value for address is the atom undefined, since no default value is supplied for this field:

 -record(person, {name = "", phone = [], address}).
-

We have to define the record in the shell in order to be able - use the record syntax in the examples:

+

The record must be defined in the shell to enable + use of the record syntax in the examples:

 > rd(person, {name = "", phone = [], address}).
 person
-

This is due to the fact that record definitions are available - at compile time only, not at runtime. See shell(3) for - details on records in the shell. -

+

This is because record definitions are only available + at compile time, not at runtime. For details on records + in the shell, see the + shell(3) + manual page in stdlib.

@@ -98,12 +101,12 @@ person
 > #person{phone=[0,8,2,3,4,3,1,2], name="Robert"}.
 #person{name = "Robert",phone = [0,8,2,3,4,3,1,2],address = undefined}
-

Since the address field was omitted, its default value +

As the address field was omitted, its default value is used.

-

There is a new feature introduced in Erlang 5.1/OTP R8B, - with which you can set a value to all fields in a record, - overriding the defaults in the record specification. The special - field _, means "all fields not explicitly specified".

+

From Erlang 5.1/OTP R8B, a value can be set for all + fields in a record with the special field _. + _ means "all fields not explicitly specified".

+

Example:

 > #person{name = "Jakob", _ = '_'}.
 #person{name = "Jakob",phone = '_',address = '_'}
@@ -114,6 +117,7 @@ person
Accessing a Record Field +

The following example shows how to access a record field:

 > P = #person{name = "Joe", phone = [0,8,2,3,4,3,1,2]}.
 #person{name = "Joe",phone = [0,8,2,3,4,3,1,2],address = undefined}
@@ -123,6 +127,7 @@ person
Updating a Record +

The following example shows how to update a record:

 > P1 = #person{name="Joe", phone=[1,2,3], address="A street"}.
 #person{name = "Joe",phone = [1,2,3],address = "A street"}
@@ -133,7 +138,7 @@ person
Type Testing

The following example shows that the guard succeeds if - P is record of type person.

+ P is a record of type person:

 foo(P) when is_record(P, person) -> a_person;
 foo(_) -> not_a_person.
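An is_record/2 guard can equally be written as a record pattern directly in the clause head. A small equivalent sketch (added for illustration, assuming the person record defined earlier in this section):

```erlang
%% Equivalent to the is_record/2 version: match an empty
%% person record pattern in the clause head.
foo(#person{}) -> a_person;
foo(_)         -> not_a_person.
```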
@@ -141,7 +146,7 @@ foo(_) -> not_a_person.
Pattern Matching -

Matching can be used in combination with records as shown in +

Matching can be used in combination with records, as shown in the following example:

 > P3 = #person{name="Joe", phone=[0,0,7], address="A street"}.
@@ -163,7 +168,7 @@ find_phone([], Name) ->
 
   
Nested Records -

The value of a field in a record might be an instance of a +

The value of a field in a record can be an instance of a record. Retrieval of nested data can be done stepwise, or in a single step, as shown in the following example:

@@ -173,11 +178,12 @@ find_phone([], Name) ->
 demo() ->
   P = #person{name= #name{first="Robert",last="Virding"}, phone=123},
   First = (P#person.name)#name.first.
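The same nested field can also be retrieved stepwise, as the text mentions. A sketch of the stepwise form (assuming the person and name records from the example):

```erlang
%% Stepwise retrieval: first fetch the inner record, then its field.
demo_stepwise() ->
    P = #person{name = #name{first = "Robert", last = "Virding"}, phone = 123},
    Name = P#person.name,     % Name is now a #name{} record
    Name#name.first.          % evaluates to "Robert"
```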
-

In this example, demo() evaluates to "Robert".

+

Here, demo() evaluates to "Robert".

- Example + A Longer Example +

Comments are embedded in the following example:

 %% File: person.hrl
 
-- 
cgit v1.2.3


From b61ee25ee7e922b36bb4ae6d505a5f6cbe5b23e6 Mon Sep 17 00:00:00 2001
From: Hans Bolinder 
Date: Thu, 12 Mar 2015 15:35:13 +0100
Subject: Update Getting Started

Language cleaned up by the technical writers xsipewe and tmanevik
from Combitech. Proofreading and corrections by Hans Bolinder.
---
 system/doc/getting_started/conc_prog.xml      | 350 +++++------
 system/doc/getting_started/intro.xml          |  63 +-
 system/doc/getting_started/records_macros.xml |  88 +--
 system/doc/getting_started/robustness.xml     | 145 ++---
 system/doc/getting_started/seq_prog.xml       | 823 +++++++++++++-------------
 5 files changed, 759 insertions(+), 710 deletions(-)

(limited to 'system/doc')

diff --git a/system/doc/getting_started/conc_prog.xml b/system/doc/getting_started/conc_prog.xml
index 2b64826a93..0dd9efb363 100644
--- a/system/doc/getting_started/conc_prog.xml
+++ b/system/doc/getting_started/conc_prog.xml
@@ -29,25 +29,26 @@
     conc_prog.xml
   
 
+  
   
Processes

One of the main reasons for using Erlang instead of other functional languages is Erlang's ability to handle concurrency - and distributed programming. By concurrency we mean programs - which can handle several threads of execution at the same time. - For example, modern operating systems would allow you to use a - word processor, a spreadsheet, a mail client and a print job all - running at the same time. Of course each processor (CPU) in + and distributed programming. By concurrency is meant programs + that can handle several threads of execution at the same time. + For example, modern operating systems allow you to use a + word processor, a spreadsheet, a mail client, and a print job all + running at the same time. Each processor (CPU) in the system is probably only handling one thread (or job) at a - time, but it swaps between the jobs a such a rate that it gives + time, but it swaps between the jobs at such a rate that it gives the illusion of running them all at the same time. It is easy to - create parallel threads of execution in an Erlang program and it - is easy to allow these threads to communicate with each other. In - Erlang we call each thread of execution a process.

+ create parallel threads of execution in an Erlang program and + to allow these threads to communicate with each other. In + Erlang, each thread of execution is called a process.

(Aside: the term "process" is usually used when the threads of execution share no data with each other and the term "thread" when they share data in some way. Threads of execution in Erlang - share no data, that's why we call them processes).

+ share no data, that is why they are called processes).

The Erlang BIF spawn is used to create a new process: spawn(Module, Exported_Function, List of Arguments). Consider the following module:

@@ -73,14 +74,14 @@ hello hello hello done
-

We can see that function say_something writes its first - argument the number of times specified by second argument. Now - look at the function start. It starts two Erlang processes, - one which writes "hello" three times and one which writes - "goodbye" three times. Both of these processes use the function - say_something. Note that a function used in this way by - spawn to start a process must be exported from the module - (i.e. in the -export at the start of the module).

+

As shown, the function say_something writes its first + argument the number of times specified by the second argument. + The function start starts two Erlang processes, + one that writes "hello" three times and one that writes + "goodbye" three times. Both processes use the function + say_something. Notice that a function used in this way by + spawn, to start a process, must be exported from the module + (that is, in the -export at the start of the module).
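The module listing itself falls outside this hunk; a sketch with the shape the text describes (reconstructed for illustration, not the original listing) is:

```erlang
-module(tut14).
-export([start/0, say_something/2]).

%% Write What once per remaining count, then return done.
say_something(_What, 0) ->
    done;
say_something(What, Times) ->
    io:format("~p~n", [What]),
    say_something(What, Times - 1).

%% Spawn two processes, each running the exported say_something/2.
start() ->
    spawn(tut14, say_something, [hello, 3]),
    spawn(tut14, say_something, [goodbye, 3]).
```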

 9> tut14:start().
 hello
@@ -90,19 +91,19 @@ hello
 goodbye
 hello
 goodbye
-

Notice that it didn't write "hello" three times and then - "goodbye" three times, but the first process wrote a "hello", +

Notice that it did not write "hello" three times and then + "goodbye" three times. Instead, the first process wrote a "hello", the second a "goodbye", the first another "hello" and so forth. But where did the <0.63.0> come from? The return value of a - function is of course the return value of the last "thing" in - the function. The last thing in the function start is:

+ function is the return value of the last "thing" in + the function. The last thing in the function start is

spawn(tut14, say_something, [goodbye, 3]).

spawn returns a process identifier, or pid, which uniquely identifies the process. So <0.63.0> - is the pid of the spawn function call above. We will see - how to use pids in the next example.

-

Note as well that we have used ~p instead of ~w in + is the pid of the spawn function call above. + The next example shows how to use pids.

+

Notice also that ~p is used instead of ~w in io:format. To quote the manual: "~p Writes the data with standard syntax in the same way as ~w, but breaks terms whose printed representation is longer than one line into many lines @@ -112,8 +113,8 @@ spawn(tut14, say_something, [goodbye, 3]).

Message Passing -

In the following example we create two processes which send - messages to each other a number of times.

+

In the following example, two processes are created and + they send messages to each other a number of times.

-module(tut15). @@ -157,13 +158,13 @@ Pong received ping Ping received pong ping finished Pong finished
-

The function start first creates a process, let's call it - "pong":

+

The function start first creates a process, + let us call it "pong":

Pong_PID = spawn(tut15, pong, [])

This process executes tut15:pong(). Pong_PID is the process identity of the "pong" process. The function - start now creates another process "ping".

+ start now creates another process "ping":

spawn(tut15, ping, [3, Pong_PID]),

This process executes:

@@ -181,7 +182,7 @@ receive pong() end.

The receive construct is used to allow processes to wait - for messages from other processes. It has the format:

+ for messages from other processes. It has the following format:

receive pattern1 -> @@ -192,35 +193,37 @@ receive patternN actionsN end. -

Note: no ";" before the end.

+

Notice there is no ";" before the end.

Messages between Erlang processes are simply valid Erlang terms. - I.e. they can be lists, tuples, integers, atoms, pids etc.

+ That is, they can be lists, tuples, integers, atoms, pids, + and so on.

Each process has its own input queue for messages it receives. New messages received are put at the end of the queue. When a process executes a receive, the first message in the queue - is matched against the first pattern in the receive, if + is matched against the first pattern in the receive. If this matches, the message is removed from the queue and - the actions corresponding to the the pattern are executed.

+ the actions corresponding to the pattern are executed.

However, if the first pattern does not match, the second pattern - is tested, if this matches the message is removed from the queue + is tested. If this matches, the message is removed from the queue and the actions corresponding to the second pattern are executed. - If the second pattern does not match the third is tried and so on - until there are no more pattern to test. If there are no more - patterns to test, the first message is kept in the queue and we - try the second message instead. If this matches any pattern, + If the second pattern does not match, the third is tried and so on + until there are no more patterns to test. If there are no more + patterns to test, the first message is kept in the queue and + the second message is tried instead. If this matches any pattern, the appropriate actions are executed and the second message is removed from the queue (keeping the first message and any other - messages in the queue). If the second message does not match we - try the third message and so on until we reach the end of - the queue. If we reach the end of the queue, the process blocks + messages in the queue). If the second message does not match, + the third message is tried, and so on, until the end of + the queue is reached. If the end of the queue is reached, + the process blocks (stops execution) and waits until a new message is received and this procedure is repeated.
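The queue-scanning rules above are what give Erlang its selective receive: the pattern, not the arrival order, decides which message is taken first. A small sketch (illustration only, not from the original text):

```erlang
%% Both messages are queued before the first receive runs.
selective() ->
    self() ! first,
    self() ! second,
    receive second -> ok end,  % skips 'first', removes 'second'
    receive first  -> ok end,  % 'first' is still in the queue
    both_received.
```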

-

Of course the Erlang implementation is "clever" and minimizes +

The Erlang implementation is "clever" and minimizes the number of times each message is tested against the patterns in each receive.

Now back to the ping pong example.

"Pong" is waiting for messages. If the atom finished is - received, "pong" writes "Pong finished" to the output and as it + received, "pong" writes "Pong finished" to the output and, as it has nothing more to do, terminates. If it receives a message with the format:

@@ -229,20 +232,20 @@ end. pong to the process "ping":

Ping_PID ! pong -

Note how the operator "!" is used to send messages. The syntax +

Notice how the operator "!" is used to send messages. The syntax of "!" is:

Pid ! Message -

I.e. Message (any Erlang term) is sent to the process +

That is, Message (any Erlang term) is sent to the process with identity Pid.

After sending the message pong to the process "ping", "pong" calls the pong function again, which causes it to - get back to the receive again and wait for another message. - Now let's look at the process "ping". Recall that it was started + get back to the receive again and wait for another message.

+

Now let us look at the process "ping". Recall that it was started by executing:

tut15:ping(3, Pong_PID) -

Looking at the function ping/2 we see that the second +

Looking at the function ping/2, the second clause of ping/2 is executed since the value of the first argument is 3 (not 0) (first clause head is ping(0,Pong_PID), second clause head is @@ -250,9 +253,9 @@ tut15:ping(3, Pong_PID)

The second clause sends a message to "pong":

Pong_PID ! {ping, self()}, -

self() returns the pid of the process which executes +

self() returns the pid of the process that executes + self(), in this case the pid of "ping". (Recall the code - for "pong", this will land up in the variable Ping_PID in + for "pong", this ends up in the variable Ping_PID in the receive previously explained.)

"Ping" now waits for a reply from "pong":

@@ -260,37 +263,37 @@ receive pong -> io:format("Ping received pong~n", []) end, -

and writes "Ping received pong" when this reply arrives, after +

It writes "Ping received pong" when this reply arrives, after which "ping" calls the ping function again.

ping(N - 1, Pong_PID)

N-1 causes the first argument to be decremented until it becomes 0. When this occurs, the first clause of ping/2 - will be executed:

+ is executed:

ping(0, Pong_PID) -> Pong_PID ! finished, io:format("ping finished~n", []);

The atom finished is sent to "pong" (causing it to terminate as described above) and "ping finished" is written to - the output. "Ping" then itself terminates as it has nothing left + the output. "Ping" then terminates as it has nothing left to do.

Registered Process Names -

In the above example, we first created "pong" so as to be able - to give the identity of "pong" when we started "ping". I.e. in - some way "ping" must be able to know the identity of "pong" in - order to be able to send a message to it. Sometimes processes - which need to know each others identities are started completely +

In the above example, "pong" was first created to be able + to give the identity of "pong" when "ping" was started. That is, in + some way "ping" must be able to know the identity of "pong" to be + able to send a message to it. Sometimes processes + which need to know each other's identities are started independently of each other. Erlang thus provides a mechanism for processes to be given names so that these names can be used as identities instead of pids. This is done by using the register BIF:

register(some_atom, Pid) -

We will now re-write the ping pong example using this and giving +

Let us now rewrite the ping pong example using this and give the name pong to the "pong" process:

-module(tut16). @@ -335,52 +338,57 @@ Pong received ping Ping received pong ping finished Pong finished
-

In the start/0 function,

+

Here the start/0 function,

register(pong, spawn(tut16, pong, [])),

both spawns the "pong" process and gives it the name pong. - In the "ping" process we can now send messages to pong by:

+ In the "ping" process, messages can be sent to pong by:

pong ! {ping, self()}, -

so that ping/2 now becomes ping/1 as we don't have - to use the argument Pong_PID.

+

ping/2 now becomes ping/1 as + the argument Pong_PID is not needed.
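Put together, registering a process and addressing it by name can be sketched in a few lines (a hypothetical echo_demo/0, not part of the tut16 module):

```erlang
%% Register a process under a name, then send to that name.
echo_demo() ->
    register(echo, spawn(fun() ->
                             receive {From, Msg} -> From ! {echo, Msg} end
                         end)),
    echo ! {self(), hello},
    receive {echo, Reply} -> Reply end.   % evaluates to hello
```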

Distributed Programming -

Now let's re-write the ping pong program with "ping" and "pong" - on different computers. Before we do this, there are a few things - we need to set up to get this to work. The distributed Erlang +

Let us rewrite the ping pong program with "ping" and "pong" + on different computers. First, a few things + must be set up to get this to work. The distributed Erlang implementation provides a basic security mechanism to prevent unauthorized access to an Erlang system on another computer. Erlang systems which talk to each other must have the same magic cookie. The easiest way to achieve this is by having a file called .erlang.cookie in your home - directory on all machines which on which you are going to run - Erlang systems communicating with each other (on Windows systems - the home directory is the directory where pointed to by the $HOME - environment variable - you may need to set this. On Linux or Unix - you can safely ignore this and simply create a file called - .erlang.cookie in the directory you get to after executing - the command cd without any argument). - The .erlang.cookie file should contain one line with - the same atom. For example, on Linux or Unix in the OS shell:

+ directory on all machines on which you are going to run + Erlang systems communicating with each other: +

+ + On Windows systems the home directory is the directory + pointed out by the environment variable $HOME - you may need + to set this. + On Linux or UNIX + you can safely ignore this and simply create a file called + .erlang.cookie in the directory you get to after executing + the command cd without any argument. + +

The .erlang.cookie file is to contain one line with + the same atom. For example, on Linux or UNIX, in the OS shell:

 $ cd
 $ cat > .erlang.cookie
 this_is_very_secret
 $ chmod 400 .erlang.cookie
-

The chmod above make the .erlang.cookie file +

The chmod above makes the .erlang.cookie file accessible only by the owner of the file. This is a requirement.
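If creating the file is inconvenient, the cookie of the running node can also be set with the set_cookie BIF; a sketch (the cookie atom is the example value from the text):

```erlang
%% Set this node's magic cookie at runtime instead of via the file.
erlang:set_cookie(node(), this_is_very_secret).
```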

-

When you start an Erlang system which is going to talk to other - Erlang systems, you must give it a name, e.g.:

+

When you start an Erlang system that is going to talk to other + Erlang systems, you must give it a name, for example:

 $ erl -sname my_name

We will see more details of this later. If you want to experiment with distributed Erlang, but you only have one computer to work on, you can start two separate Erlang systems on the same computer but give them different names. Each Erlang - system running on a computer is called an Erlang node.

+ system running on a computer is called an Erlang node.

(Note: erl -sname assumes that all nodes are in the same IP domain and we can use only the first component of the IP address, if we want to use nodes in different domains we use @@ -420,10 +428,10 @@ start_pong() -> start_ping(Pong_Node) -> spawn(tut17, ping, [3, Pong_Node]). -

Let us assume we have two computers called gollum and kosken. We - will start a node on kosken called ping and then a node on gollum +

Let us assume there are two computers called gollum and kosken. + First a node is started on kosken, called ping, and then a node on gollum, called pong.

-

On kosken (on a Linux/Unix system):

+

On kosken (on a Linux/UNIX system):

 kosken> erl -sname ping
 Erlang (BEAM) emulator version 5.2.3.7 [hipe] [threads:0]
@@ -437,12 +445,12 @@ Erlang (BEAM) emulator version 5.2.3.7 [hipe] [threads:0]
 
 Eshell V5.2.3.7  (abort with ^G)
 (pong@gollum)1>
-

Now we start the "pong" process on gollum:

+

Now the "pong" process on gollum is started:

 (pong@gollum)1> tut17:start_pong().
 true
-

and start the "ping" process on kosken (from the code above you - will see that a parameter of the start_ping function is +

And the "ping" process on kosken is started (from the code above you + can see that a parameter of the start_ping function is the node name of the Erlang system where "pong" is running):

 (ping@kosken)1> tut17:start_ping(pong@gollum).
@@ -451,8 +459,7 @@ Ping received pong
 Ping received pong 
 Ping received pong
 ping finished
-

Here we see that the ping pong program has run, on the "pong" - side we see:

+

As shown, the ping pong program has run. On the "pong" side:

 (pong@gollum)2>
 Pong received ping                 
@@ -460,28 +467,28 @@ Pong received ping
 Pong received ping                 
 Pong finished                      
 (pong@gollum)2>
-

Looking at the tut17 code we see that the pong - function itself is unchanged, the lines:

+

Looking at the tut17 code, you see that the pong + function itself is unchanged; the following lines work in the same way + irrespective of on which node the "ping" process is executing:

{ping, Ping_PID} -> io:format("Pong received ping~n", []), Ping_PID ! pong, -

work in the same way irrespective of on which node the "ping" - process is executing. Thus Erlang pids contain information about - where the process executes so if you know the pid of a process, - the "!" operator can be used to send it a message if the process - is on the same node or on a different node.

-

A difference is how we send messages to a registered process on +

Thus, Erlang pids contain information about + where the process executes. So if you know the pid of a process, + the "!" operator can be used to send it a message regardless of + whether the process is on the same node or on a different node.

+

A difference is how messages are sent to a registered process on another node:

{pong, Pong_Node} ! {ping, self()}, -

We use a tuple {registered_name,node_name} instead of +

A tuple {registered_name,node_name} is used instead of just the registered_name.
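The two addressing forms side by side, as a sketch (pong@gollum is the example node name used earlier in this section):

```erlang
%% Same registered name, local node vs. another node:
pong ! {ping, self()},                  % process registered on this node
{pong, pong@gollum} ! {ping, self()}.   % process registered on node pong@gollum
```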

-

In the previous example, we started "ping" and "pong" from +

In the previous example, "ping" and "pong" were started from the shells of two separate Erlang nodes. spawn can also be - used to start processes in other nodes. The next example is - the ping pong program, yet again, but this time we will start - "ping" in another node:

+ used to start processes in other nodes.

+

The next example is the ping pong program, yet again, + but this time "ping" is started in another node:

-module(tut18). @@ -513,7 +520,7 @@ start(Ping_Node) -> register(pong, spawn(tut18, pong, [])), spawn(Ping_Node, tut18, ping, [3, node()]).

Assuming an Erlang system called ping (but not the "ping" - process) has already been started on kosken, then on gollum we do:

+ process) has already been started on kosken, then on gollum this is done:

 (pong@gollum)1> tut18:start(ping@kosken).
 <3934.39.0>
@@ -525,39 +532,40 @@ Pong received ping
 Ping received pong
 Pong finished
 ping finished
-

Notice we get all the output on gollum. This is because the io +

Notice that all the output is received on gollum. This is because + the I/O system finds out where the process is spawned from and sends all output there.

A Larger Example -

Now for a larger example. We will make an extremely simple - "messenger". The messenger is a program which allows users to log +

Now for a larger example with a simple + "messenger". The messenger is a program that allows users to log in on different nodes and send simple messages to each other.

-

Before we start, let's note the following:

+

Before starting, notice the following:

-

This example will just show the message passing logic - no - attempt at all has been made to provide a nice graphical user - interface. This can, of course, also be done in Erlang - but - that's another tutorial.

+

This example only shows the message passing logic - no + attempt has been made to provide a nice graphical user + interface, although this can also be done in Erlang.

-

This sort of problem can be solved more easily if you use - the facilities in OTP, which will also provide methods for - updating code on the fly etc. But again, that's another - tutorial.

+

This sort of problem can be solved more easily by using + the facilities in OTP, which also provide methods for + updating code on the fly and so on (see + + OTP Design Principles).

-

The first program we write will contain some inadequacies - regarding the handling of nodes which disappear. We will correct - these in a later version of the program.

+

The first program contains some inadequacies + regarding handling of nodes which disappear. + These are corrected in a later version of the program.

-

We will set up the messenger by allowing "clients" to connect to - a central server and say who and where they are. I.e. a user - won't need to know the name of the Erlang node where another user +

The messenger is set up by allowing "clients" to connect to + a central server and say who and where they are. That is, a user + does not need to know the name of the Erlang node where another user is located to send a message.

File messenger.erl:

@@ -728,19 +736,19 @@ await_result() -> {messenger, What} -> % Normal response io:format("~p~n", [What]) end. -

To use this program you need to:

+

To use this program, you need to:

- configure the server_node() function - copy the compiled code (messenger.beam) to - the directory on each computer where you start Erlang. + Configure the server_node() function. + Copy the compiled code (messenger.beam) to + the directory on each computer where you start Erlang. -

In the following example of use of this program I have started - nodes on four different computers, but if you don't have that - many machines available on your network you could start up +

In the following example using this program, + nodes are started on four different computers. If you do not have that + many machines available on your network, you can start several nodes on the same machine.

-

We start up four Erlang nodes: messenger@super, c1@bilbo, +

Four Erlang nodes are started up: messenger@super, c1@bilbo, c2@kosken, c3@gollum.

-

First we start up a the server at messenger@super:

+

First the server at messenger@super is started up:

 (messenger@super)1> messenger:start_server().
 true
@@ -754,7 +762,7 @@ logged_on
(c2@kosken)1> messenger:logon(james). true logged_on
-

and Fred logs on at c3@gollum:

+

And Fred logs on at c3@gollum:

 (c3@gollum)1> messenger:logon(fred).
 true
@@ -764,7 +772,7 @@ logged_on
(c1@bilbo)2> messenger:message(fred, "hello"). ok sent -

And Fred receives the message and sends a message to Peter and +

Fred receives the message and sends a message to Peter and logs off:

 Message from peter: "hello"
@@ -779,27 +787,28 @@ logoff
ok receiver_not_found

But this fails as Fred has already logged off.

-

First let's look at some of the new concepts we have introduced.

+

First let us look at some of the new concepts that have + been introduced.

There are two versions of the server_transfer function: one with four arguments (server_transfer/4) and one with five (server_transfer/5). These are regarded by Erlang as two separate functions.

-

Note how we write the server function so that it calls - itself, via server(User_List), and thus creates a loop. +

Notice how the server function is written so that it calls + itself, through server(User_List), and thus creates a loop. The Erlang compiler is "clever" and optimizes the code so that this really is a sort of loop and not a proper function call. But - this only works if there is no code after the call, otherwise - the compiler will expect the call to return and make a proper + this only works if there is no code after the call. Otherwise, + the compiler expects the call to return and makes a proper function call. This would result in the process getting bigger and bigger for every loop.

-

We use functions from the lists module. This is a very +

Functions in the lists module are used. This is a very useful module and a study of the manual page is recommended (erl -man lists). lists:keymember(Key,Position,Lists) looks through a list of tuples and looks at Position in each tuple to see if it is the same as Key. The first element is position 1. If it finds a tuple where the element at Position is the same as - Key, it returns true, otherwise false.

+ Key, it returns true, otherwise false.

 3> lists:keymember(a, 2, [{x,y,z},{b,b,b},{b,a,c},{q,r,s}]).
 true
@@ -812,82 +821,83 @@ false
[{x,y,z},{b,b,b},{q,r,s}]

lists:keysearch is like lists:keymember, but it returns {value,Tuple_Found} or the atom false.
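For example, in the shell:

```erlang
1> lists:keysearch(b, 1, [{a,1},{b,2},{c,3}]).
{value,{b,2}}
2> lists:keysearch(q, 1, [{a,1},{b,2},{c,3}]).
false
```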

-

There are a lot more very useful functions in the lists +

There are many very useful functions in the lists module.

-

An Erlang process will (conceptually) run until it does a +

An Erlang process (conceptually) runs until it does a receive and there is no message which it wants to receive - in the message queue. I say "conceptually" because the Erlang + in the message queue. The word "conceptually" is used here because the Erlang system shares the CPU time between the active processes in the system.

A process terminates when there is nothing more for it to do, - i.e. the last function it calls simply returns and doesn't call + that is, the last function it calls simply returns and does not call another function. Another way for a process to terminate is for it to call exit/1. The argument to exit/1 has a - special meaning which we will look at later. In this example we - will do exit(normal) which has the same effect as a + special meaning, which is discussed later. In this example, + exit(normal) is called, which has the same effect as a process running out of functions to call.

The BIF whereis(RegisteredName) checks if a registered - process of name RegisteredName exists and return the pid - of the process if it does exist or the atom undefined if - it does not.

-

You should by now be able to understand most of the code above - so I'll just go through one case: a message is sent from one user - to another.

+ process of name RegisteredName exists. If it exists, the pid of + that process is returned. If it does not exist, the atom + undefined is returned.
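The behavior of whereis/1 can be sketched in the shell (demo_proc is a hypothetical name used only for this illustration):

```erlang
1> Pid = spawn(fun() -> receive stop -> ok end end).
2> register(demo_proc, Pid).
true
3> is_pid(whereis(demo_proc)).
true
4> whereis(no_such_name).
undefined
5> demo_proc ! stop.
stop
```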

+

You should by now be able to understand most of the code in the + messenger module. Let us study one case in detail: a message is + sent from one user to another.

The first user "sends" the message in the example above by:

messenger:message(fred, "hello")

After testing that the client process exists:

whereis(mess_client) -

and a message is sent to mess_client:

+

And a message is sent to mess_client:

mess_client ! {message_to, fred, "hello"}

The client sends the message to the server by:

{messenger, messenger@super} ! {self(), message_to, fred, "hello"}, -

and waits for a reply from the server.

+

And waits for a reply from the server.

The server receives this message and calls:

server_transfer(From, fred, "hello", User_List), -

which checks that the pid From is in the User_List:

+

This checks that the pid From is in the User_List:

lists:keysearch(From, 1, User_List) -

If keysearch returns the atom false, some sort of +

If keysearch returns the atom false, some error has occurred and the server sends back the message:

From ! {messenger, stop, you_are_not_logged_on} -

which is received by the client which in turn does +

This is received by the client, which in turn does exit(normal) and terminates. If keysearch returns - {value,{From,Name}} we know that the user is logged on and - is his name (peter) is in variable Name. We now call:

+ {value,{From,Name}} it is certain that the user is logged on and + that his name (peter) is in variable Name.

+

Let us now call:

server_transfer(From, peter, fred, "hello", User_List) -

Note that as this is server_transfer/5 it is not the same - as the previous function server_transfer/4. We do another - keysearch on User_List to find the pid of the client - corresponding to fred:

+

Notice that as this is server_transfer/5, it is not the same + as the previous function server_transfer/4. Another + keysearch is done on User_List to find the pid of + the client corresponding to fred:

lists:keysearch(fred, 2, User_List) -

This time we use argument 2 which is the second element in - the tuple. If this returns the atom false we know that - fred is not logged on and we send the message:

+

This time argument 2 is used, which is the second element in + the tuple. If this returns the atom false, + fred is not logged on and the following message is sent:

From ! {messenger, receiver_not_found}; -

which is received by the client, if keysearch returns:

+

This is received by the client.

+

If keysearch returns:

{value, {ToPid, fred}} -

we send the message:

+

The following message is sent to fred's client:

ToPid ! {message_from, peter, "hello"}, -

to fred's client and the message:

+

The following message is sent to peter's client:

From ! {messenger, sent} -

to peter's client.

Fred's client receives the message and prints it:

{message_from, peter, "hello"} -> io:format("Message from ~p: ~p~n", [peter, "hello"]) -

and peter's client receives the message in +

Peter's client receives the message in the await_result function.

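The two `keysearch` lookups in this walkthrough can be tried directly in the Erlang shell. A runnable sketch follows; the atoms `pid_of_peter` and `pid_of_fred` stand in for the real pids, purely for illustration:

```erlang
%% A sketch of the two lookups the server performs on User_List.
%% Atoms stand in for the real client pids.
User_List = [{pid_of_peter, peter}, {pid_of_fred, fred}],

%% server_transfer/4: is the sender logged on? (search on element 1)
{value, {pid_of_peter, peter}} = lists:keysearch(pid_of_peter, 1, User_List),

%% server_transfer/5: find the receiver by name (search on element 2)
{value, {pid_of_fred, fred}} = lists:keysearch(fred, 2, User_List),

%% An unknown name yields the atom false:
false = lists:keysearch(mary, 2, User_List).
```

Entered as one shell command (ending with the full stop), every match succeeds, confirming the return values described above.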
diff --git a/system/doc/getting_started/intro.xml b/system/doc/getting_started/intro.xml index e8d568bcaf..f9a56e4322 100644 --- a/system/doc/getting_started/intro.xml +++ b/system/doc/getting_started/intro.xml @@ -18,7 +18,7 @@ basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. - + Introduction @@ -28,38 +28,47 @@ intro.xml + + +

This section is a quick start tutorial to get you started with Erlang. + Everything in this section is true, but only part of the truth. For example, + only the simplest form of the syntax is shown, not all esoteric forms. + Also, parts that are greatly simplified are indicated with *manual*. + This means that a lot more information on the subject is to be found in + the Erlang book or in + + Erlang Reference Manual.

- Introduction -

This is a "kick start" tutorial to get you started with Erlang. - Everything here is true, but only part of the truth. For example, - I'll only tell you the simplest form of the syntax, not all - esoteric forms. Where I've greatly oversimplified things I'll - write *manual* which means there is lots more information to be - found in the Erlang book or in the Erlang Reference Manual.

-

I also assume that this isn't the first time you have touched a - computer and you have a basic idea about how they are programmed. - Don't worry, I won't assume you're a wizard programmer.

+ Prerequisites + +

The reader of this section is assumed to be familiar with the following:

+ + Computers in general + Basics on how computers are programmed + +
- Things Left Out -

In particular the following has been omitted:

+ Omitted Topics + +

The following topics are not treated in this section:

- References - Local error handling (catch/throw) - Single direction links (monitor) - Handling of binary data (binaries / bit syntax) - List comprehensions - How to communicate with the outside world and/or software - written in other languages (ports). There is however a separate - tutorial for this, Interoperability Tutorial - Very few of the Erlang libraries have been touched on (for - example file handling) - OTP has been totally skipped and in consequence the Mnesia - database has been skipped. - Hash tables for Erlang terms (ETS) - Changing code in running systems + References. + Local error handling (catch/throw). + Single direction links (monitor). + Handling of binary data (binaries / bit syntax). + List comprehensions. + How to communicate with the outside world and software + written in other languages (ports); + this is described in + + Interoperability Tutorial. + Erlang libraries (for example, file handling). + OTP and (in consequence) the Mnesia database. + Hash tables for Erlang terms (ETS). + Changing code in running systems.
diff --git a/system/doc/getting_started/records_macros.xml b/system/doc/getting_started/records_macros.xml index 73c8ce5c8d..bec751fea2 100644 --- a/system/doc/getting_started/records_macros.xml +++ b/system/doc/getting_started/records_macros.xml @@ -29,27 +29,32 @@ record_macros.xml

Larger programs are usually written as a collection of files with - a well defined interface between the various parts.

+ a well-defined interface between the various parts.

The Larger Example Divided into Several Files -

To illustrate this, we will divide the messenger example from - the previous chapter into five files.

- - mess_config.hrl - header file for configuration data - mess_interface.hrl - interface definitions between the client and the messenger - user_interface.erl - functions for the user interface - mess_client.erl - functions for the client side of the messenger - mess_server.erl - functions for the server side of the messenger - -

While doing this we will also clean up the message passing - interface between the shell, the client and the server and define - it using records, we will also introduce macros.

+

To illustrate this, the messenger example from + the previous section is divided into the following five files:

+ + +

mess_config.hrl

+

Header file for configuration data

+ +

mess_interface.hrl

+

Interface definitions between the client and the messenger

+ +

user_interface.erl

+

Functions for the user interface

+ +

mess_client.erl

+

Functions for the client side of the messenger

+ +

mess_server.erl

+

Functions for the server side of the messenger

+
+

While doing this, the message passing interface between the shell, + the client, and the server is cleaned up and is defined + using records. Also, macros are introduced:

%%%----FILE mess_config.hrl---- @@ -244,14 +249,14 @@ server_transfer(From, Name, To, Message, User_List) ->
Header Files -

You will see some files above with extension .hrl. These - are header files which are included in the .erl files by:

+

As shown above, some files have extension .hrl. These + are header files that are included in the .erl files by:

-include("File_Name").

for example:

-include("mess_interface.hrl"). -

In our case above the file is fetched from the same directory as +

In the case above the file is fetched from the same directory as all the other files in the messenger example. (*manual*).

.hrl files can contain any valid Erlang code but are most often used for record and macro definitions.

@@ -265,64 +270,63 @@ server_transfer(From, Name, To, Message, User_List) ->

For example:

-record(message_to,{to_name, message}). -

This is exactly equivalent to:

+

This is equivalent to:

{message_to, To_Name, Message} -

Creating record, is best illustrated by an example:

+

Creating a record is best illustrated by an example:

#message_to{message="hello", to_name=fred} -

This will create:

+

This creates:

{message_to, fred, "hello"} -

Note that you don't have to worry about the order you assign +

Notice that you do not have to worry about the order you assign values to the various parts of the records when you create it. The advantage of using records is that by placing their definitions in header files you can conveniently define - interfaces which are easy to change. For example, if you want to - add a new field to the record, you will only have to change + interfaces that are easy to change. For example, if you want to + add a new field to the record, you only have to change the code where the new field is used and not at every place the record is referred to. If you leave out a field when creating - a record, it will get the value of the atom undefined. (*manual*)

+ a record, it gets the value of the atom undefined. (*manual*)

Pattern matching with records is very similar to creating records. For example, inside a case or receive:

#message_to{to_name=ToName, message=Message} -> -

is the same as:

+

This is the same as:

{message_to, ToName, Message}
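Record creation and matching can be experimented with in the Erlang shell. A minimal sketch, using the shell built-in `rd/2` so that no header file is needed:

```erlang
%% Shell sketch: rd/2 defines a record locally in the shell.
rd(message_to, {to_name, message}).
M = #message_to{to_name = fred, message = "hello"}.  % create the record
M#message_to.message.                                % field access -> "hello"
#message_to{to_name = To} = M.                       % pattern match; To = fred
```

In a module, the same record would instead come from a `-record` definition in an included `.hrl` file, as described above.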
Macros -

The other thing we have added to the messenger is a macro. +

Another thing that has been added to the messenger is a macro. The file mess_config.hrl contains the definition:

%%% Configure the location of the server node, -define(server_node, messenger@super). -

We include this file in mess_server.erl:

+

This file is included in mess_server.erl:

-include("mess_config.hrl").

Every occurrence of ?server_node in mess_server.erl - will now be replaced by messenger@super.

-

The other place a macro is used is when we spawn the server - process:

+ is now replaced by messenger@super.

+

A macro is also used when spawning the server process:

spawn(?MODULE, server, []) -

This is a standard macro (i.e. defined by the system, not - the user). ?MODULE is always replaced by the name of - current module (i.e. the -module definition near the start +

This is a standard macro (that is, defined by the system, not by + the user). ?MODULE is always replaced by the name of the + current module (that is, the -module definition near the start of the file). There are more advanced ways of using macros with, - for example parameters (*manual*).

+ for example, parameters (*manual*).
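Parameterized macros (the advanced use marked *manual* above) can be sketched as follows; the module name `macro_demo` and the macro `DOUBLE` are made up for illustration:

```erlang
-module(macro_demo).                    % module name made up for this sketch
-export([test/0]).

-define(server_node, messenger@super).  % plain macro, as in mess_config.hrl
-define(DOUBLE(X), ((X) * 2)).          % parameterized macro (*manual*)

test() ->
    %% Expands at compile time to {messenger@super, 42, macro_demo}
    {?server_node, ?DOUBLE(21), ?MODULE}.
```

The extra parentheses around `(X)` and the whole expansion are customary, so that the macro behaves predictably inside larger expressions.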

The three Erlang (.erl) files in the messenger example are individually compiled into object code file (.beam). The Erlang system loads and links these files into the system - when they are referred to during execution of the code. In our - case we simply have put them in the same directory which is our - current working directory (i.e. the place we have done "cd" to). + when they are referred to during execution of the code. In this + case, they are simply put in our current working directory + (that is, the place you have done "cd" to). There are ways of putting the .beam files in other directories.

In the messenger example, no assumptions have been made about - what the message being sent is. It could be any valid Erlang term.

+ what the message being sent is. It can be any valid Erlang term.

diff --git a/system/doc/getting_started/robustness.xml b/system/doc/getting_started/robustness.xml index e8fb81d5e8..82fe0cbc4f 100644 --- a/system/doc/getting_started/robustness.xml +++ b/system/doc/getting_started/robustness.xml @@ -28,27 +28,27 @@ robustness.xml -

There are several things which are wrong with - the messenger example from - the previous chapter. For example, if a node where a user is logged - on goes down without doing a log off, the user will remain in - the server's User_List but the client will disappear thus - making it impossible for the user to log on again as the server - thinks the user already logged on.

+

Several things are wrong with the messenger example in + A Larger Example. + For example, if a node where a user is logged + on goes down without doing a logoff, the user remains in + the server's User_List, but the client disappears. This + makes it impossible for the user to log on again as the server + thinks the user already is logged on.

Or what happens if the server goes down in the middle of sending a - message leaving the sending client hanging for ever in + message, leaving the sending client hanging forever in the await_result function?

- Timeouts -

Before improving the messenger program we will look into some + Time-outs +

Before improving the messenger program, let us look at some general principles, using the ping pong program as an example. Recall that when "ping" finishes, it tells "pong" that it has done so by sending the atom finished as a message to "pong" - so that "pong" could also finish. Another way to let "pong" - finish, is to make "pong" exit if it does not receive a message - from ping within a certain time, this can be done by adding a - timeout to pong as shown in the following example:

+ so that "pong" can also finish. Another way to let "pong" + finish is to make "pong" exit if it does not receive a message + from ping within a certain time. This can be done by adding a + time-out to pong as shown in the following example:

-module(tut19). @@ -80,9 +80,9 @@ start_pong() -> start_ping(Pong_Node) -> spawn(tut19, ping, [3, Pong_Node]). -

After we have compiled this and copied the tut19.beam - file to the necessary directories:

-

On (pong@kosken):

+

After this is compiled and the file tut19.beam + is copied to the necessary directories, the following is seen + on (pong@kosken):

 (pong@kosken)1> tut19:start_pong().
 true
@@ -90,7 +90,7 @@ Pong received ping
 Pong received ping
 Pong received ping
 Pong timed out
-

On (ping@gollum):

+

And the following is seen on (ping@gollum):

 (ping@gollum)1> tut19:start_ping(pong@kosken).
 <0.36.0>
@@ -98,7 +98,7 @@ Ping received pong
 Ping received pong
 Ping received pong
 ping finished   
-

(The timeout is set in:

+

The time-out is set in:

pong() -> receive @@ -109,35 +109,36 @@ pong() -> after 5000 -> io:format("Pong timed out~n", []) end. -

We start the timeout (after 5000) when we enter - receive. The timeout is canceled if {ping,Ping_PID} +

The time-out (after 5000) is started when + receive is entered. + The time-out is canceled if {ping,Ping_PID} is received. If {ping,Ping_PID} is not received, - the actions following the timeout will be done after 5000 + the actions following the time-out are done after 5000 milliseconds. after must be last in the receive, - i.e. preceded by all other message reception specifications in - the receive. Of course we could also call a function which - returned an integer for the timeout:

+ that is, preceded by all other message reception specifications in + the receive. It is also possible to call a function that + returns an integer for the time-out:

after pong_timeout() -> -
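In isolation, the `receive ... after` pattern looks like this; the module name and the 1000 ms value are made up for this sketch:

```erlang
-module(timeout_demo).   % module name made up for this sketch
-export([wait/0]).

%% Waits at most 1000 ms for a ping message, then gives up.
wait() ->
    receive
        {ping, From} ->
            From ! pong,
            ok
    after 1000 ->        % 'after' must come last in the receive
            timed_out
    end.
```

If no `{ping, From}` message arrives within a second, `wait/0` returns `timed_out` instead of blocking forever.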

In general, there are better ways than using timeouts to - supervise parts of a distributed Erlang system. Timeouts are - usually appropriate to supervise external events, for example if +

In general, there are better ways than using time-outs to + supervise parts of a distributed Erlang system. Time-outs are + usually appropriate to supervise external events, for example, if you have expected a message from some external system within a - specified time. For example, we could use a timeout to log a user - out of the messenger system if they have not accessed it, for - example, in ten minutes.

+ specified time. For example, a time-out can be used to log a user + out of the messenger system if they have not accessed it for, + say, ten minutes.

Error Handling -

Before we go into details of the supervision and error handling - in an Erlang system, we need see how Erlang processes terminate, +

Before going into details of the supervision and error handling + in an Erlang system, let us see how Erlang processes terminate, or in Erlang terminology, exit.

A process which executes exit(normal) or simply runs out of things to do has a normal exit.

-

A process which encounters a runtime error (e.g. divide by zero, - bad match, trying to call a function which doesn't exist etc) - exits with an error, i.e. has an abnormal exit. A +

A process which encounters a runtime error (for example, divide by zero, + bad match, trying to call a function that does not exist and so on) + exits with an error, that is, has an abnormal exit. A process which executes exit(Reason) where Reason is any Erlang term except the atom @@ -151,18 +152,23 @@ after pong_timeout() -> links to.

The signal carries information about the pid it was sent from and the exit reason.

-

The default behaviour of a process which receives a normal exit +

The default behaviour of a process that receives a normal exit is to ignore the signal.

-

The default behaviour in the two other cases (i.e. abnormal exit) - above is to bypass all messages to the receiving process and to - kill it and to propagate the same error signal to the killed - process' links. In this way you can connect all processes in a - transaction together using links and if one of the processes - exits abnormally, all the processes in the transaction will be - killed. As we often want to create a process and link to it at +

The default behaviour in the two other cases (that is, abnormal exit) + above is to:

+ + Bypass all messages to the receiving process. + Kill the receiving process. + Propagate the same error signal to the links of the + killed process. + +

In this way you can connect all processes in a + transaction together using links. If one of the processes + exits abnormally, all the processes in the transaction are + killed. As it is often desirable to create a process and link to it at the same time, there is a special BIF, spawn_link - which does the same as spawn, but also creates a link to + that does the same as spawn, but also creates a link to the spawned process.

Now an example of the ping pong example using links to terminate "pong":

@@ -208,13 +214,13 @@ Pong received ping Ping received pong

This is a slight modification of the ping pong program where both processes are spawned from the same start/1 function, - where the "ping" process can be spawned on a separate node. Note + and the "ping" process can be spawned on a separate node. Notice the use of the link BIF. "Ping" calls - exit(ping) when it finishes and this will cause an exit - signal to be sent to "pong" which will also terminate.

+ exit(ping) when it finishes and this causes an exit + signal to be sent to "pong", which also terminates.

It is possible to modify the default behaviour of a process so that it does not get killed when it receives abnormal exit - signals, but all signals will be turned into normal messages with + signals. Instead, all signals are turned into normal messages on the format {'EXIT',FromPID,Reason} and added to the end of the receiving process' message queue. This behaviour is set by:

@@ -223,8 +229,8 @@ process_flag(trap_exit, true) erlang(3). Changing the default behaviour of a process in this way is usually not done in standard user programs, but is left to - the supervisory programs in OTP (but that's another tutorial). - However we will modify the ping pong program to illustrate exit + the supervisory programs in OTP. + However, the ping pong program is modified to illustrate exit trapping.

-module(tut21). @@ -277,7 +283,7 @@ pong exiting, got {'EXIT',<3820.39.0>,ping}
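The essence of exit trapping, reduced to a minimal sketch (the module name `trap_demo` and the reason `boom` are made up for illustration):

```erlang
-module(trap_demo).   % module name made up for this sketch
-export([start/0]).

start() ->
    process_flag(trap_exit, true),               % exit signals become messages
    Child = spawn_link(fun() -> exit(boom) end), % dies abnormally at once
    receive
        {'EXIT', Child, Reason} ->
            {child_exited, Reason}               % -> {child_exited, boom}
    end.
```

Without the `process_flag` call, the abnormal exit of the linked child would instead kill the calling process, as described above.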
The Larger Example with Robustness Added -

Now we return to the messenger program and add changes which +

Let us return to the messenger program and add changes to make it more robust:

%%% Message passing utility. @@ -449,35 +455,34 @@ await_result() -> io:format("No response from server~n", []), exit(timeout) end. -

We have added the following changes:

+

The following changes are added:

The messenger server traps exits. If it receives an exit signal, - {'EXIT',From,Reason} this means that a client process has - terminated or is unreachable because:

+ {'EXIT',From,Reason}, this means that a client process has + terminated or is unreachable for one of the following reasons:

- the user has logged off (we have removed the "logoff" - message), - the network connection to the client is broken, - the node on which the client process resides has gone down, - or - the client processes has done some illegal operation. + The user has logged off (the "logoff" + message is removed). + The network connection to the client is broken. + The node on which the client process resides has gone down. + The client process has done some illegal operation. -

If we receive an exit signal as above, we delete the tuple, - {From,Name} from the servers User_List using +

If an exit signal is received as above, the tuple + {From,Name} is deleted from the server's User_List using the server_logoff function. If the node on which the server runs goes down, an exit signal (automatically generated by - the system), will be sent to all of the client processes: + the system) is sent to all of the client processes: {'EXIT',MessengerPID,noconnection} causing all the client processes to terminate.

-

We have also introduced a timeout of five seconds in - the await_result function. I.e. if the server does not +

Also, a time-out of five seconds has been introduced in + the await_result function. That is, if the server does not reply within five seconds (5000 ms), the client terminates. This - is really only needed in the logon sequence before the client and + is only needed in the logon sequence before the client and the server are linked.

-

An interesting case is if the client was to terminate before +

An interesting case is if the client terminates before the server links to it. This is taken care of because linking to a non-existent process causes an exit signal, - {'EXIT',From,noproc}, to be automatically generated as if - the process terminated immediately after the link operation.

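The `noproc` behaviour described above can be observed directly in the shell. A sketch, with `trap_exit` set so the signal arrives as a message (the sleep interval is arbitrary):

```erlang
%% Shell sketch: link to an already-terminated process.
process_flag(trap_exit, true).       % so the signal arrives as a message
Dead = spawn(fun() -> ok end).       % terminates immediately
timer:sleep(100).                    % give it time to be gone
link(Dead).                          % generates {'EXIT', Dead, noproc}
receive {'EXIT', Dead, Why} -> Why end.
```

The final `receive` yields the reason `noproc`, exactly as if the process had died just after the link was set up.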
+ {'EXIT',From,noproc}, to be automatically generated. This is + as if the process terminated immediately after the link operation.

diff --git a/system/doc/getting_started/seq_prog.xml b/system/doc/getting_started/seq_prog.xml index 699b9487ed..5d96aed8d4 100644 --- a/system/doc/getting_started/seq_prog.xml +++ b/system/doc/getting_started/seq_prog.xml @@ -31,11 +31,15 @@
The Erlang Shell -

Most operating systems have a command interpreter or shell- Unix - and Linux have many, while Windows has the Command Prompt. Erlang has - its own shell where you can directly write bits of Erlang code - and evaluate (run) them to see what happens (see - shell(3)). Start +

+ Most operating systems have a command interpreter or shell. UNIX + and Linux have many; Windows has the command prompt. Erlang has + its own shell where bits of Erlang code can be written directly, + and be evaluated to see what happens + (see the shell(3) + manual page in STDLIB).

+

Start the Erlang shell (in Linux or UNIX) by starting a shell or command interpreter in your operating system and typing erl. You will see something like this.

@@ -45,41 +49,39 @@ Erlang R15B (erts-5.9.1) [source] [smp:8:8] [rq:8] [async-threads:0] [hipe] [ker Eshell V5.9.1 (abort with ^G) 1> -

Now type in "2 + 5." as shown below.

+

Type "2 + 5." in the shell and then press Enter (carriage return). + Notice that you tell the shell you are done entering code by finishing + with a full stop "." and a carriage return.

 1> 2 + 5.
 7
 2>
-

In Windows, the shell is started by double-clicking on the Erlang - shell icon.

-

You'll notice that the Erlang shell has numbered the lines that - can be entered, (as 1> 2>) and that it has correctly told you - that 2 + 5 is 7! Also notice that you have to tell it you are - done entering code by finishing with a full stop "." and a - carriage return. If you make mistakes writing things in the shell, - you can delete things by using the backspace key as in most - shells. There are many more editing commands in the shell - (See the chapter "tty - A command line interface" in ERTS User's Guide).

-

(Note: you will find a lot of line numbers given by the shell - out of sequence in this tutorial as it was written and the code - tested in several sessions.)

-

Now let's try a more complex calculation.

+

As shown, the Erlang shell numbers the lines that + can be entered, (as 1> 2>) and that it correctly says + that 2 + 5 is 7. If you make writing mistakes in the shell, + you can delete with the backspace key, as in most shells. + There are many more editing commands in the shell + (see tty - A command line interface in ERTS User's Guide).

+

(Notice that many line numbers given by the shell in the + following examples are out of sequence. This is because this + tutorial was written and code-tested in separate sessions).

+

Here is a bit more complex calculation:

 2> (42 + 77) * 66 / 3.
 2618.0
-

Here you can see the use of brackets and the multiplication - operator "*" and division operator "/", just as in normal - arithmetic (see the chapter - "Arithmetic Expressions" in the Erlang Reference Manual).

-

To shutdown the Erlang system and the Erlang shell type - Control-C. You will see the following output:

+

Notice the use of brackets, the multiplication operator "*", + and the division operator "/", as in normal arithmetic (see + Expressions).

+

Press Control-C to shut down the Erlang system and the Erlang + shell.

+

The following output is shown:

 BREAK: (a)bort (c)ontinue (p)roc info (i)nfo (l)oaded
        (v)ersion (k)ill (D)b-tables (d)istribution
 a
 %

Type "a" to leave the Erlang system.

-

Another way to shutdown the Erlang system is by entering +

Another way to shut down the Erlang system is by entering halt():

 3> halt().
@@ -88,67 +90,70 @@ BREAK: (a)bort (c)ontinue (p)roc info (i)nfo (l)oaded
 
   
Modules and Functions -

A programming language isn't much use if you can just run code +

A programming language is not much use if you only can run code from the shell. So here is a small Erlang program. Enter it into - a file called tut.erl (the file name tut.erl is - important, also make sure that it is in the same directory as - the one where you started erl) using a suitable - text editor. If you are lucky your editor will have an Erlang - mode which will make it easier for you to enter and format your - code nicely (see the chapter - "The Erlang mode for Emacs" in Tools User's Guide), but you can manage - perfectly well without. Here's the code to enter:

+ a file named tut.erl using a suitable + text editor. The file name tut.erl is important, and also + that it is in the same directory as the one where you started + erl. If you are lucky your editor has an Erlang mode + that makes it easier for you to enter and format your code + nicely (see The Erlang mode for + Emacs in Tools User's Guide), but you can manage + perfectly well without. Here is the code to enter:

-module(tut). -export([double/1]). double(X) -> 2 * X. -

It's not hard to guess that this "program" doubles the value of - numbers. I'll get back to the first two lines later. Let's compile - the program. This can be done in your Erlang shell as shown below:

+

It is not hard to guess that this program doubles the value of + numbers. The first two lines of the code are described later. + Let us compile the program. This can be done in an Erlang shell + as follows, where c means compile:

 3> c(tut).
 {ok,tut}
-

The {ok,tut} tells you that the compilation was OK. If it - said "error" instead, you have made some mistake in the text you - entered and there will also be error messages to give you some - idea as to what has gone wrong so you can change what you have - written and try again.

-

Now let's run the program.

+

The {ok,tut} means that the compilation is OK. If it + says "error" it means that there is some mistake in the text + that you entered. Additional error messages give an idea of + what is wrong so you can modify the text and then try to compile + the program again.

+

Now run the program:

 4> tut:double(10).
 20
-

As expected double of 10 is 20.

-

Now let's get back to the first two lines. Erlang programs are - written in files. Each file contains what we call an Erlang - module. The first line of code in the module tells us - the name of the module (see the chapter - "Modules" - in the Erlang Reference Manual).

+

As expected, double of 10 is 20.

+

Now let us get back to the first two lines of the code. Erlang + programs are + written in files. Each file contains an Erlang + module. The first line of code in the module is + the module name (see + Modules):

-module(tut). -

This tells us that the module is called tut. Note - the "." at the end of the line. The files which are used to store +

Thus, the module is called tut. Notice + the full stop "." at the end of the line. The files which are + used to store the module must have the same name as the module but with - the extension ".erl". In our case the file name is tut.erl. - When we use a function in another module, we use the syntax, - module_name:function_name(arguments). So

+ the extension ".erl". In this case the file name is tut.erl. + When using a function in another module, the syntax + module_name:function_name(arguments) is used. So the + following means call function double in module tut + with argument "10".

 4> tut:double(10).
-

means call function double in module tut with - argument "10".

-

The second line:

+

The second line says that the module tut contains a + function called double, which takes one argument + (X in our example):

-export([double/1]). -

says that the module tut contains a function called - double which takes one argument (X in our example) - and that this function can be called from outside the module - tut. More about this later. Again note the "." at the end - of the line.

-

Now for a more complicated example, the factorial of a number - (e.g. factorial of 4 is 4 * 3 * 2 * 1). Enter the following code - in a file called tut1.erl.

+

The second line also says that this function can be called from + outside the module tut. More about this later. Again, + notice the "." at the end of the line.

+

Now for a more complicated example, the factorial of a number. + For example, the factorial of 4 is 4 * 3 * 2 * 1, which equals 24.

+

Enter the following code in a file named tut1.erl:

-module(tut1). -export([fac/1]). @@ -157,30 +162,34 @@ fac(1) -> 1; fac(N) -> N * fac(N - 1). -

Compile the file

-
-5> c(tut1).
-{ok,tut1}
-

And now calculate the factorial of 4.

-
-6> tut1:fac(4).
-24
-

The first part:

+

So this is a module, called tut1 that contains a + function called fac, which takes one argument, + N.

+

The first part says that the factorial of 1 is 1:

fac(1) -> 1; -

says that the factorial of 1 is 1. Note that we end this part - with a ";" which indicates that there is more of this function to - come. The second part:

+

Notice that this part ends with a semicolon ";" that indicates + that there is more of the function fac to come.

+

The second part says that the factorial of N is N multiplied + by the factorial of N - 1:

fac(N) -> N * fac(N - 1). -

says that the factorial of N is N multiplied by the factorial of - N - 1. Note that this part ends with a "." saying that there are +

Notice that this part ends with a "." saying that there are no more parts of this function.

-

A function can have many arguments. Let's expand the module - tut1 with the rather stupid function to multiply two - numbers:

+

Compile the file:

+
+5> c(tut1).
+{ok,tut1}
+

And now calculate the factorial of 4.

+
+6> tut1:fac(4).
+24
+

Here the function fac in module tut1 is called + with argument 4.

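The recursion can be followed step by step; each line below rewrites the previous one using the two clauses of fac:

```erlang
%% How tut1:fac(4) unfolds, clause by clause:
%%   fac(4) = 4 * fac(3)
%%          = 4 * (3 * fac(2))
%%          = 4 * (3 * (2 * fac(1)))
%%          = 4 * (3 * (2 * 1))
%%          = 24
```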
+

A function can have many arguments. Let us expand the module + tut1 with the function to multiply two numbers:

-module(tut1). -export([fac/1, mult/2]). @@ -192,36 +201,36 @@ fac(N) -> mult(X, Y) -> X * Y. -

Note that we have also had to expand the -export line +

Notice that it is also required to expand the -export line with the information that there is another function mult with two arguments.

Compile:

 7> c(tut1).
 {ok,tut1}
-

and try it out:

+

Try out the new function mult:

 8> tut1:mult(3,4).
 12
-

In the example above the numbers are integers and the arguments - in the functions in the code, N, X, Y are +

In this example the numbers are integers and the arguments + in the functions in the code N, X, and Y are called variables. Variables must start with a capital letter - (see the chapter - "Variables" - in the Erlang Reference Manual). Examples of variables could be - Number, ShoeSize, Age etc.

+ (see + Variables). + Examples of variables are + Number, ShoeSize, and Age.

Atoms -

Atoms are another data type in Erlang. Atoms start with a small - letter ((see the chapter - "Atom" - in the Erlang Reference Manual)), for example: charles, - centimeter, inch. Atoms are simply names, nothing - else. They are not like variables which can have a value.

-

Enter the next program (file: tut2.erl) which could be - useful for converting from inches to centimeters and vice versa:

+

Atom is another data type in Erlang. Atoms start with a small + letter (see + Atom), + for example, charles, + centimeter, and inch. Atoms are simply names, nothing + else. They are not like variables, which can have a value.

+

Enter the next program in a file named tut2.erl. It can be + useful for converting from inches to centimeters and conversely:

-module(tut2). -export([convert/2]). @@ -231,27 +240,30 @@ convert(M, inch) -> convert(N, centimeter) -> N * 2.54. -

Compile and test:

+

Compile:

 9> c(tut2).
 {ok,tut2}
+
+

Test:

+
 10> tut2:convert(3, inch).
 1.1811023622047243
 11> tut2:convert(7, centimeter).
 17.78
-

Notice that I have introduced decimals (floating point numbers) - without any explanation, but I guess you can cope with that.

-

See what happens if I enter something other than centimeter or - inch in the convert function:

+

Notice the introduction of decimals (floating point numbers) + without any explanation. Hopefully you can cope with that.

+

Let us see what happens if something other than centimeter or + inch is entered in the convert function:

 12> tut2:convert(3, miles).
 ** exception error: no function clause matching tut2:convert(3,miles) (tut2.erl, line 4)

The two parts of the convert function are called its - clauses. Here we see that "miles" is not part of either of - the clauses. The Erlang system can't match either of - the clauses so we get an error message function_clause. - The shell formats the error message nicely, but the error tuple - is saved in the shell's history list and can be output by the shell + clauses. As shown, miles is not part of either of + the clauses. The Erlang system cannot match either of + the clauses so an error message function_clause is returned. + The shell formats the error message nicely, but the error tuple + is saved in the shell's history list and can be output by the shell command v/1:

 13> v(12).
@@ -271,14 +283,15 @@ convert(N, centimeter) ->
       Consider:

tut2:convert(3, inch). -

Does this mean that 3 is in inches? Or that 3 is in centimeters - and we want to convert it to inches? So Erlang has a way to group - things together to make things more understandable. We call these - tuples. Tuples are surrounded by "{" and "}".

-

So we can write {inch,3} to denote 3 inches and - {centimeter,5} to denote 5 centimeters. Now let's write a - new program which converts centimeters to inches and vice versa. - (file tut3.erl).

+

Does this mean that 3 is in inches? Or does it mean that 3 is + in centimeters + and is to be converted to inches? Erlang has a way to group + things together to make things more understandable. These are called + tuples and are surrounded by curly brackets, "{" and "}".

+

So, {inch,3} denotes 3 inches and + {centimeter,5} denotes 5 centimeters. Now let us write a + new program that converts centimeters to inches and conversely. + Enter the following code in a file called tut3.erl:

-module(tut3). -export([convert_length/1]). @@ -295,47 +308,48 @@ convert_length({inch, Y}) -> {centimeter,12.7} 16> tut3:convert_length(tut3:convert_length({inch, 5})). {inch,5.0}
-

Note on line 16 we convert 5 inches to centimeters and back - again and reassuringly get back to the original value. I.e +

Notice on line 16 that 5 inches is converted to centimeters and back + again and reassuringly get back to the original value. That is, the argument to a function can be the result of another function. - Pause for a moment and consider how line 16 (above) works. - The argument we have given the function {inch,5} is first - matched against the first head clause of convert_length - i.e. convert_length({centimeter,X}) where it can be seen + Consider how line 16 (above) works. + The argument given to the function {inch,5} is first + matched against the first head clause of convert_length, + that is, convert_length({centimeter,X}). It can be seen that {centimeter,X} does not match {inch,5} - (the head is the bit before the "->"). This having failed, we try - the head of the next clause i.e. convert_length({inch,Y}), - this matches and Y get the value 5.

-

We have shown tuples with two parts above, but tuples can have - as many parts as we want and contain any valid Erlang + (the head is the bit before the "->"). This having failed, + let us try + the head of the next clause that is, convert_length({inch,Y}). + This matches, and Y gets the value 5.

+

Tuples can have more than two parts, in fact + as many parts as you want, and contain any valid Erlang term. For example, to represent the temperature of - various cities of the world we could write:

+ various cities of the world:

{moscow, {c, -10}} {cape_town, {f, 70}} {paris, {f, 28}} -

Tuples have a fixed number of things in them. We call each thing - in a tuple an element. So in the tuple {moscow,{c,-10}}, - element 1 is moscow and element 2 is {c,-10}. I - have chosen c meaning Centigrade (or Celsius) and f - meaning Fahrenheit.

+

Tuples have a fixed number of items in them. Each item in a + tuple is called an element. In the tuple + {moscow,{c,-10}}, element 1 is moscow and element + 2 is {c,-10}. Here c represents Celsius and + f Fahrenheit.
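The elements of a tuple are usually picked out by pattern matching. As a small sketch (the module name city_temp and its functions are made up for this illustration, they are not part of the tutorial files):

```erlang
-module(city_temp).
-export([name_of/1, temp_of/1]).

%% Match out element 1, the city name, of a {City, {Scale, Degrees}} tuple.
name_of({City, {_Scale, _Degrees}}) -> City.

%% Match out element 2, the inner {Scale, Degrees} tuple.
temp_of({_City, Temp}) -> Temp.
```

Calling city_temp:name_of({moscow,{c,-10}}) returns moscow, and city_temp:temp_of({moscow,{c,-10}}) returns {c,-10}.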

Lists -

Whereas tuples group things together, we also want to be able to - represent lists of things. Lists in Erlang are surrounded by "[" - and "]". For example, a list of the temperatures of various cities - in the world could be:

+

Whereas tuples group things together, there is also a need to + represent lists of things. Lists in Erlang are surrounded by + square brackets, "[" and "]". For example, a list of the + temperatures of various cities in the world can be:

[{moscow, {c, -10}}, {cape_town, {f, 70}}, {stockholm, {c, -4}}, {paris, {f, 28}}, {london, {f, 36}}] -

Note that this list was so long that it didn't fit on one line. - This doesn't matter, Erlang allows line breaks at all "sensible - places" but not, for example, in the middle of atoms, integers - etc.

-

A very useful way of looking at parts of lists, is by using "|". - This is best explained by an example using the shell.

+

Notice that this list was so long that it did not fit on one line. + This does not matter; Erlang allows line breaks at all "sensible + places" but not, for example, in the middle of atoms, integers, + and others.

+

A useful way of looking at parts of lists, is by using "|". + This is best explained by an example using the shell:

 17> [First |TheRest] = [1,2,3,4,5].
 [1,2,3,4,5]
@@ -343,9 +357,9 @@ convert_length({inch, Y}) ->
 1
 19> TheRest.
 [2,3,4,5]
-

We use | to separate the first elements of the list from - the rest of the list. (First has got value 1 and - TheRest value [2,3,4,5].)

+

To separate the first elements of the list from the rest of the + list, | is used. First has got the value 1 and + TheRest has got the value [2,3,4,5].

Another example:

 20> [E1, E2 | R] = [1,2,3,4,5,6,7].
@@ -356,10 +370,10 @@ convert_length({inch, Y}) ->
 2
 23> R.
 [3,4,5,6,7]
-

Here we see the use of | to get the first two elements from - the list. Of course if we try to get more elements from the list - than there are elements in the list we will get an error. Note - also the special case of the list with no elements [].

+

Here you see the use of | to get the first two elements from + the list. If you try to get more elements from the list + than there are elements in the list, an error is returned. Notice + also the special case of the list with no elements, []:

 24> [A, B | C] = [1, 2].
 [1,2]
@@ -369,13 +383,13 @@ convert_length({inch, Y}) ->
 2
 27> C.
 []
-

In all the examples above, I have been using new variable names, - not reusing the old ones: First, TheRest, E1, - E2, R, A, B, C. The reason +

In the previous examples, new variable names are used, instead of + reusing the old ones: First, TheRest, E1, + E2, R, A, B, and C. The reason for this is that a variable can only be given a value once in its - context (scope). I'll get back to this later, it isn't so - peculiar as it sounds!

-

The following example shows how we find the length of a list:

+ context (scope). More about this later.

+

The following example shows how to find the length of a list. + Enter the following code in a file named tut4.erl:

-module(tut4). @@ -385,7 +399,7 @@ list_length([]) -> 0; list_length([First | Rest]) -> 1 + list_length(Rest). -

Compile (file tut4.erl) and test:

+

Compile and test:

 28> c(tut4).
 {ok,tut4}
@@ -404,14 +418,14 @@ list_length([First | Rest]) ->
       Rest.

(Advanced readers only: This is not tail recursive, there is a better way to write this function.)
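For the advanced readers in the note above, one common way to make the function tail recursive is to carry the count so far in an extra argument, an accumulator. This is a sketch under that assumption; the module name tut4a is made up and is not one of the tutorial files:

```erlang
-module(tut4a).
-export([list_length/1]).

%% The exported function starts the count at 0.
list_length(List) -> list_length(List, 0).

%% Tail-recursive helper: the recursive call is the last thing
%% evaluated, so no work is left pending on the stack.
list_length([], Count) -> Count;
list_length([_First | Rest], Count) -> list_length(Rest, Count + 1).
```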

-

In general we can say we use tuples where we would use "records" - or "structs" in other languages and we use lists when we want to - represent things which have varying sizes, (i.e. where we would - use linked lists in other languages).

-

Erlang does not have a string data type, instead strings can be - represented by lists of ASCII characters. So the list - [97,98,99] is equivalent to "abc". The Erlang shell is - "clever" and guesses the what sort of list we mean and outputs it +

In general, tuples are used where "records" + or "structs" are used in other languages. Also, lists are used when + representing things with varying sizes, that is, where + linked lists are used in other languages.
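As a small sketch of this difference (the module name shapes is made up for this illustration): a tuple such as {point, X, Y} always has exactly three elements, like a record, while a list of points can have any length:

```erlang
-module(shapes).
-export([describe/1]).

%% A tuple with a fixed shape, used like a record/struct.
describe({point, X, Y}) -> {coordinates, X, Y};
%% A list, which can hold any number of items, like a linked list.
describe(Points) when is_list(Points) -> {number_of_points, length(Points)}.
```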

+

Erlang does not have a string data type. Instead, strings can be + represented by lists of Unicode characters. This implies, for example, that + the list [97,98,99] is equivalent to "abc". The Erlang shell is + "clever" and guesses what list you mean and outputs it in what it thinks is the most appropriate form, for example:

 30> [97,98,99].
@@ -420,16 +434,17 @@ list_length([First | Rest]) ->
 
   
Maps -

Maps are a set of key to value associations. These associations - are encapsulated with "#{" and "}". To create an association from - "key" to value 42, we write:

+

Maps are a set of key to value associations. These associations + are encapsulated with "#{" and "}". To create an association + from "key" to value 42:

> #{ "key" => 42 }. #{"key" => 42} -

We will jump straight into the deep end with an example using some - interesting features.

-

The following example shows how we calculate alpha blending using - maps to reference color and alpha channels:

+

Let us jump straight into the deep end with an example using some + interesting features.

+

The following example shows how to calculate alpha blending + using maps to reference color and alpha channels. Enter the code + in a file named color.erl:

-module(color). @@ -468,7 +483,7 @@ green(#{green := SV, alpha := SA}, #{green := DV, alpha := DA}) -> SV*SA + DV*DA*(1.0 - SA). blue(#{blue := SV, alpha := SA}, #{blue := DV, alpha := DA}) -> SV*SA + DV*DA*(1.0 - SA). -

Compile (file color.erl) and test:

+

Compile and test:

 > c(color).
 {ok,color}
@@ -484,50 +499,48 @@ blue(#{blue := SV, alpha := SA}, #{blue := DV, alpha := DA}) ->
     

This example warrants some explanation:

-define(is_channel(V), (is_float(V) andalso V >= 0.0 andalso V =< 1.0)). -

- First we define a macro is_channel to help with our guard tests. - This is only here for convenience and to reduce syntax cluttering. - - You can read more about Macros - in the Erlang Reference Manual. -

+

First a macro is_channel is defined to help with the + guard tests. This is only here for convenience and to reduce + syntax cluttering. For more information about macros, see + + The Preprocessor. +

new(R,G,B,A) when ?is_channel(R), ?is_channel(G), ?is_channel(B), ?is_channel(A) -> #{red => R, green => G, blue => B, alpha => A}. -

- The function new/4 creates a new map term with and lets the keys - red, green, blue and alpha be associated - with an initial value. In this case we only allow for float - values between and including 0.0 and 1.0 as ensured by the ?is_channel/1 macro - for each argument. Only the => operator is allowed when creating a new map. +

The function new/4 creates a new map term and lets the keys + red, green, blue, and alpha be + associated with an initial value. In this case, only float + values between and including 0.0 and 1.0 are allowed, as ensured + by the ?is_channel/1 macro for each argument. Only the + => operator is allowed when creating a new map. +

+

By calling blend/2 on any color term created by + new/4, the resulting color can be calculated as + determined by the two map terms. +

+

The first thing blend/2 does is to calculate the + resulting alpha channel:

-

- By calling blend/2 on any color term created by new/4 we can calculate - the resulting color as determined by the two maps terms. -

-

- The first thing blend/2 does is to calculate the resulting alpha channel. -

alpha(#{alpha := SA}, #{alpha := DA}) -> SA + DA*(1.0 - SA). -

- We fetch the value associated with key alpha for both arguments using - the := operator. Any other keys - in the map are ignored, only the key alpha is required and checked for. -

-

This is also the case for functions red/2, blue/2 and green/2.

+

The value associated with key alpha is fetched for both + arguments using the := operator. The other keys in the + map are ignored, only the key alpha is required and + checked for. +

+

This is also the case for functions red/2, + blue/2, and green/2.

red(#{red := SV, alpha := SA}, #{red := DV, alpha := DA}) -> SV*SA + DV*DA*(1.0 - SA). -

- The difference here is that we check for two keys in each map argument. The other keys - are ignored. -

-

- Finally we return the resulting color in blend/3. -

+

The difference here is that a check is made for two keys in + each map argument. The other keys are ignored. +

+

Finally, let us return the resulting color in blend/3: +

blend(Src,Dst,Alpha) when Alpha > 0.0 -> Dst#{ @@ -536,20 +549,20 @@ blend(Src,Dst,Alpha) when Alpha > 0.0 -> blue := blue(Src,Dst) / Alpha, alpha := Alpha }; -

- We update the Dst map with new channel values. The syntax for updating an existing key with a new value is done with := operator. -

+

The Dst map is updated with new channel values. The + syntax for updating an existing key with a new value is with the + := operator. +
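To summarize the two map operators in isolation from the color example (the module name map_ops is made up for this sketch): => can create a new key, while := only updates a key that already exists and exits with a badkey error otherwise:

```erlang
-module(map_ops).
-export([demo/0]).

demo() ->
    M0 = #{red => 0.0},       %% => creates the key red
    M1 = M0#{red := 1.0},     %% := updates the existing key red
    M2 = M1#{green => 0.5},   %% => adds the new key green
    %% M1#{blue := 1.0} would fail here, as blue does not exist.
    {M1, M2}.
```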

Standard Modules and Manual Pages -

Erlang has a lot of standard modules to help you do things. For - example, the module io contains a lot of functions to help - you do formatted input/output. To look up information about - standard modules, the command erl -man can be used at - the operating shell or command prompt (i.e. at the same place as - that where you started erl). Try the operating system - shell command:

+

Erlang has many standard modules to help you do things. For + example, the module io contains many functions that help + in doing formatted input/output. To look up information about + standard modules, the command erl -man can be used at the + operating shell or command prompt (the same place as you started + erl). Try the operating system shell command:

 % erl -man io
 ERLANG MODULE DEFINITION                                    io(3)
@@ -561,21 +574,21 @@ DESCRIPTION
      This module provides an  interface  to  standard  Erlang  IO
      servers. The output functions all return ok if they are suc-
      ...
-

If this doesn't work on your system, the documentation is - included as HTML in the Erlang/OTP release, or you can read +

If this does not work on your system, the documentation is + included as HTML in the Erlang/OTP release. You can also read the documentation as HTML or download it as PDF from either of the sites www.erlang.se (commercial Erlang) or www.erlang.org - (open source), for example for release R9B:

+ (open source). For example, for Erlang/OTP release R9B:

http://www.erlang.org/doc/r9b/doc/index.html
Writing Output to a Terminal -

It's nice to be able to do formatted output in these example, so +

It is nice to be able to do formatted output in examples, so the next example shows a simple way to use the io:format - function. Of course, just like all other exported functions, you - can test the io:format function in the shell:

+ function. Like all other exported functions, you can test the + io:format function in the shell:

 31> io:format("hello world~n", []).
 hello world
@@ -589,28 +602,28 @@ ok
 34> io:format("this outputs two Erlang terms: ~w ~w~n", [hello, world]).
 this outputs two Erlang terms: hello world
 ok
-

The function format/2 (i.e. format with two +

The function format/2 (that is, format with two arguments) takes two lists. The first one is nearly always a list - written between " ". This list is printed out as it stands, + written between " ". This list is printed out as it is, except that each ~w is replaced by a term taken in order from the second list. Each ~n is replaced by a new line. The io:format/2 function itself returns the atom ok if everything goes as planned. Like other functions in Erlang, it crashes if an error occurs. This is not a fault in Erlang, it is a deliberate policy. Erlang has sophisticated mechanisms to - handle errors which we will show later. As an exercise, try to - make io:format crash, it shouldn't be difficult. But + handle errors which are shown later. As an exercise, try to + make io:format crash, it should not be difficult. But notice that although io:format crashes, the Erlang shell itself does not crash.
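As a sketch of the suggested exercise (the module name fmt_crash is made up; the assumption here is that a control sequence without a matching argument raises a badarg error): giving io:format/2 too few arguments makes it crash, but catch, like the shell, survives the crash:

```erlang
-module(fmt_crash).
-export([try_it/0]).

%% "~w" expects one term in the argument list, but the list is empty,
%% so io:format/2 crashes; catch turns the crash into an 'EXIT' tuple.
try_it() ->
    catch io:format("one term expected: ~w~n", []).
```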

A Larger Example -

Now for a larger example to consolidate what we have learnt so - far. Assume we have a list of temperature readings from a number - of cities in the world. Some of them are in Celsius (Centigrade) - and some in Fahrenheit (as in the previous list). First let's - convert them all to Celsius, then let's print out the data neatly.

+

Now for a larger example to consolidate what you have learnt so + far. Assume that you have a list of temperature readings from a number + of cities in the world. Some of them are in Celsius + and some in Fahrenheit (as in the previous list). First let us + convert them all to Celsius, then let us print the data neatly.

%% This module is in file tut5.erl @@ -642,50 +655,50 @@ stockholm -4 c paris -2.2222222222222223 c london 2.2222222222222223 c ok
-

Before we look at how this program works, notice that we have - added a few comments to the code. A comment starts with a % - character and goes on to the end of the line. Note as well that +

Before looking at how this program works, notice that + a few comments are added to the code. A comment starts with a + %-character and goes on to the end of the line. Notice also that the -export([format_temps/1]). line only includes - the function format_temps/1, the other functions are - local functions, i.e. they are not visible from outside + the function format_temps/1. The other functions are + local functions, that is, they are not visible from outside the module tut5.

-

Note as well that when testing the program from the shell, I had - to spread the input over two lines as the line was too long.

-

When we call format_temps the first time, City +

Notice also that when testing the program from the shell, + the input is spread over two lines as the line was too long.

+

When format_temps is called the first time, City gets the value {moscow,{c,-10}} and Rest is - the rest of the list. So we call the function - print_temp(convert_to_celsius({moscow,{c,-10}})).

-

Here we see a function call as + the rest of the list. So the function + print_temp(convert_to_celsius({moscow,{c,-10}})) is called.

+

Here is a function call as convert_to_celsius({moscow,{c,-10}}) as the argument to - the function print_temp. When we nest function - calls like this we execute (evaluate) them from the inside out. - I.e. we first evaluate convert_to_celsius({moscow,{c,-10}}) + the function print_temp. When function calls are nested + like this, they execute (evaluate) from the inside out. + That is, first convert_to_celsius({moscow,{c,-10}}) is evaluated, which gives the value {moscow,{c,-10}} as the temperature - is already in Celsius and then we evaluate - print_temp({moscow,{c,-10}}). The function - convert_to_celsius works in a similar way to + is already in Celsius. Then print_temp({moscow,{c,-10}}) + is evaluated. + The function convert_to_celsius works in a similar way to the convert_length function in the previous example.

print_temp simply calls io:format in a similar way - to what has been described above. Note that ~-15w says to print + to what has been described above. Notice that ~-15w says to print the "term" with a field length (width) of 15 and left justify it. - (io(3)).

-

Now we call format_temps(Rest) with the rest of the list + (see the io(3)) manual page in STDLIB.

+

Now format_temps(Rest) is called with the rest of the list as an argument. This way of doing things is similar to the loop - constructs in other languages. (Yes, this is recursion, but don't + constructs in other languages. (Yes, this is recursion, but do not let that worry you.) So the same format_temps function is called again, this time City gets the value - {cape_town,{f,70}} and we repeat the same procedure as - before. We go on doing this until the list becomes empty, i.e. [], + {cape_town,{f,70}} and the same procedure is repeated as + before. This is done until the list becomes empty, that is [], which causes the first clause format_temps([]) to match. This simply returns (results in) the atom ok, so the program ends.

- Matching, Guards and Scope of Variables -

It could be useful to find the maximum and minimum temperature + Matching, Guards, and Scope of Variables +

It can be useful to find the maximum and minimum temperature in lists like this. Before extending the program to do this, - let's look at functions for finding the maximum value of + let us look at functions for finding the maximum value of the elements in a list:

-module(tut6). @@ -705,53 +718,57 @@ list_max([Head|Rest], Result_so_far) -> {ok,tut6} 38> tut6:list_max([1,2,3,4,5,7,4,3,2,1]). 7
-

First note that we have two functions here with the same name - list_max. However each of these takes a different number +

First notice that two functions have the same name, + list_max. However, each of these takes a different number of arguments (parameters). In Erlang these are regarded as - completely different functions. Where we need to distinguish - between these functions we write name/arity, where - name is the name of the function and arity is + completely different functions. Where you need to distinguish + between these functions, you write Name/Arity, where + Name is the function name and Arity is the number of arguments, in this case list_max/1 and list_max/2.

-

This is an example where we walk through a list "carrying" a - value with us, in this case Result_so_far. +

In this example you walk through a list "carrying" a + value, in this case Result_so_far. list_max/1 simply assumes that the max value of the list is the head of the list and calls list_max/2 with the rest - of the list and the value of the head of the list, in the above - this would be list_max([2,3,4,5,7,4,3,2,1],1). If we tried + of the list and the value of the head of the list. In the above + this would be list_max([2,3,4,5,7,4,3,2,1],1). If you tried to use list_max/1 with an empty list or tried to use it - with something which isn't a list at all, we would cause an error. - Note that the Erlang philosophy is not to handle errors of this + with something that is not a list at all, you would cause an error. + Notice that the Erlang philosophy is not to handle errors of this type in the function they occur, but to do so elsewhere. More about this later.

-

In list_max/2 we walk down the list and use Head +

In list_max/2, you walk down the list and use Head instead of Result_so_far when Head > - Result_so_far. when is a special word we use before - the -> in the function to say that we should only use this part - of the function if the test which follows is true. We call tests - of this type a guard. If the guard isn't true (we say - the guard fails), we try the next part of the function. In this - case if Head isn't greater than Result_so_far then - it must be smaller or equal to is, so we don't need a guard on - the next part of the function.

-

Some useful operators in guards are, < less than, > - greater than, == equal, >= greater or equal, =< less or - equal, /= not equal. (See the chapter - "Guard Sequences" in the Erlang Reference Manual.)

-

To change the above program to one which works out the minimum - value of the element in a list, all we would need to do is to + Result_so_far. when is a special word used before + the -> in the function to say that you only use this part + of the function if the test that follows is true. A test + of this type is called guard. If the guard is false (that is, + the guard fails), the next part of the function is tried. In this + case, if Head is not greater than Result_so_far, then + it must be smaller or equal to it. This means that a guard on + the next part of the function is not needed.

+

Some useful operators in guards are: +

< less than
> greater than
== equal
>= greater or equal
=< less or equal
/= not equal

(see Guard Sequences).
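As a small sketch of these operators in guards (the module name sign is made up for this illustration):

```erlang
-module(sign).
-export([of_number/1]).

%% The clauses are tried from the top; the guard after 'when'
%% must succeed for its clause to be chosen.
of_number(N) when N > 0 -> positive;
of_number(N) when N == 0 -> zero;
of_number(N) when N < 0 -> negative.
```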

+

To change the above program to one that works out the minimum + value of the element in a list, you only need to write < instead of >. (But it would be wise to change - the name of the function to list_min :-).)

-

Remember that I mentioned earlier that a variable could only be - given a value once in its scope? In the above we see, for example, - that Result_so_far has been given several values. This is - OK since every time we call list_max/2 we create a new - scope and one can regard the Result_so_far as a completely + the name of the function to list_min.)

+

Earlier it was mentioned that a variable can only be + given a value once in its scope. In the above you see + that Result_so_far is given several values. This is + OK since every time you call list_max/2 you create a new + scope and one can regard Result_so_far as a different variable in each scope.

Another way of creating and giving a variable a value is by using - the match operator = . So if I write M = 5, a variable - called M will be created and given the value 5. If, in - the same scope I then write M = 6, I'll get an error. Try + the match operator = . So if you write M = 5, a variable + called M is created with the value 5. If, in + the same scope, you then write M = 6, an error is returned. Try this out in the shell:

 39> M = 5.
@@ -771,21 +788,21 @@ list_max([Head|Rest], Result_so_far)  ->
 paris
 45> Y.
 {f,28}
-

Here we see that X gets the value paris and +

Here X gets the value paris and Y{f,28}.

-

Of course if we try to do the same again with another city, we - get an error:

+

If you try to do the same again with another city, + an error is returned:

 46> {X, Y} = {london, {f, 36}}.
 ** exception error: no match of right hand side value {london,{f,36}}

Variables can also be used to improve the readability of - programs, for example, in the list_max/2 function above, - we could write:

+ programs. For example, in function list_max/2 above, + you can write:

list_max([Head|Rest], Result_so_far) when Head > Result_so_far -> New_result_far = Head, list_max(Rest, New_result_far); -

which is possibly a little clearer.

+

This is possibly a little clearer.

@@ -824,9 +841,9 @@ reverse([], Reversed_List) -> {ok,tut8} 53> tut8:reverse([1,2,3]). [3,2,1] -

Consider how Reversed_List is built. It starts as [], we - then successively take off the heads of the list to be reversed - and add them to the the Reversed_List, as shown in +

Consider how Reversed_List is built. It starts as [], + then successively the heads are taken off the list to be reversed + and added to the Reversed_List, as shown in the following:

reverse([1|2,3], []) => @@ -840,14 +857,15 @@ reverse([3|[]], [2,1]) => reverse([], [3,2,1]) => [3,2,1] -

The module lists contains a lot of functions for - manipulating lists, for example for reversing them, so before you - write a list manipulating function it is a good idea to check - that one isn't already written for you. (see - lists(3)).

-

Now let's get back to the cities and temperatures, but take a more - structured approach this time. First let's convert the whole list - to Celsius as follows and test the function:

+

The module lists contains many functions for + manipulating lists, for example, for reversing them. So before + writing a list-manipulating function it is a good idea to check + if one is not already written for you + (see the lists(3) + manual page in STDLIB).
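For example, lists:reverse/1, lists:max/1, and lists:min/1 already do what some of the functions in this chapter do by hand. A small sketch (the module name list_demo is made up):

```erlang
-module(list_demo).
-export([demo/0]).

%% Use ready-made functions from the lists module instead of
%% writing reverse, max, and min yourself.
demo() ->
    Reversed = lists:reverse([1,2,3]),
    Max = lists:max([1,7,3]),
    Min = lists:min([1,7,3]),
    {Reversed, Max, Min}.
```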

+

Now let us get back to the cities and temperatures, but take a more + structured approach this time. First let us convert the whole list + to Celsius as follows:

-module(tut7). -export([format_temps/1]). @@ -864,6 +882,7 @@ convert_list_to_c([City | Rest]) -> convert_list_to_c([]) -> []. +

Test the function:

 54> c(tut7).
 {ok, tut7}.
@@ -874,26 +893,26 @@ convert_list_to_c([]) ->
  {stockholm,{c,-4}},
  {paris,{c,-2.2222222222222223}},
  {london,{c,2.2222222222222223}}]
-

Looking at this bit by bit:

+

Explanation:

format_temps(List_of_cities) -> convert_list_to_c(List_of_cities). -

Here we see that format_temps/1 calls +

Here format_temps/1 calls + convert_list_to_c/1. convert_list_to_c/1 takes off the head of the List_of_cities, converts it to Celsius if needed. The | operator is used to add the (maybe) converted city to the converted rest of the list:

[Converted_City | convert_list_to_c(Rest)]; -

or

+

or:

[City | convert_list_to_c(Rest)]; -

We go on doing this until we get to the end of the list (i.e. - the list is empty):

+

This is done until the end of the list is reached, that is, + the list is empty:

convert_list_to_c([]) -> []. -

Now we have converted the list, we add a function to print it:

+

Now when the list is converted, a function to print it is added:

-module(tut7). -export([format_temps/1]). @@ -928,12 +947,12 @@ stockholm -4 c paris -2.2222222222222223 c london 2.2222222222222223 c ok -

We now have to add a function to find the cities with - the maximum and minimum temperatures. The program below isn't - the most efficient way of doing this as we walk through the list +

Now a function has to be added to find the cities with + the maximum and minimum temperatures. The following program is not + the most efficient way of doing this as you walk through the list of cities four times. But it is better to first strive for clarity and correctness and to make programs efficient only if - really needed.

+ needed.

If and Case

The function find_max_and_min works out the maximum and - minimum temperature. We have introduced a new construct here - if. If works as follows:

+ minimum temperature. A new construct, if, is introduced here. + If works as follows:

if Condition 1 -> @@ -1016,14 +1035,15 @@ if Condition 4 -> Action 4 end -

Note there is no ";" before end! Conditions are the same - as guards, tests which succeed or fail. Erlang starts at the top - until it finds a condition which succeeds and then it evaluates +

Notice that there is no ";" before end. Conditions do + the same as guards, that is, tests that succeed or fail. Erlang + starts at the top + and tests until it finds a condition that succeeds. Then it evaluates (performs) the action following the condition and ignores all - other conditions and action before the end. If no - condition matches, there will be a run-time failure. A condition - which always is succeeds is the atom, true and this is - often used last in an if meaning do the action following + other conditions and actions before the end. If no + condition matches, a run-time failure occurs. A condition + that always succeeds is the atom true. This is + often used last in an if, meaning, do the action following the true if all other conditions have failed.

The following is a short program to show the workings of if.

@@ -1039,10 +1059,10 @@ test_if(A, B) -> B == 6 -> io:format("B == 6~n", []), b_equals_6; - A == 2, B == 3 -> %i.e. A equals 2 and B equals 3 + A == 2, B == 3 -> %That is A equals 2 and B equals 3 io:format("A == 2, B == 3~n", []), a_equals_2_b_equals_3; - A == 1 ; B == 7 -> %i.e. A equals 1 or B equals 7 + A == 1 ; B == 7 -> %That is A equals 1 or B equals 7 io:format("A == 1 ; B == 7~n", []), a_equals_1_or_b_equals_7 end.
@@ -1068,19 +1088,19 @@ a_equals_1_or_b_equals_7 66> tut9:test_if(33, 33). ** exception error: no true branch found when evaluating an if expression in function tut9:test_if/2 (tut9.erl, line 5) -

Notice that tut9:test_if(33,33) did not cause any - condition to succeed so we got the run time error - if_clause, here nicely formatted by the shell. See the chapter - "Guard Sequences" in the Erlang Reference Manual for details - of the many guard tests available. case is another - construct in Erlang. Recall that we wrote the - convert_length function as:

+

Notice that tut9:test_if(33,33) does not cause any + condition to succeed. This leads to the run-time error + if_clause, here nicely formatted by the shell. See + Guard Sequences + for details of the many guard tests available.

+

case is another construct in Erlang. Recall that the + convert_length function was written as:

convert_length({centimeter, X}) -> {inch, X / 2.54}; convert_length({inch, Y}) -> {centimeter, Y * 2.54}. -

We could also write the same program as:

+

The same program can also be written as:

-module(tut10). -export([convert_length/1]). @@ -1099,12 +1119,13 @@ convert_length(Length) -> {centimeter,15.24} 69> tut10:convert_length({centimeter, 2.5}). {inch,0.984251968503937} -

Notice that both case and if have return values, i.e. in the above example case returned +

Both case and if have return values, that is, + in the above example case returned either {inch,X/2.54} or {centimeter,Y*2.54}. The behaviour of case can also be modified by using guards. - An example should hopefully clarify this. The following example - tells us the length of a month, given the year. We need to know - the year of course, since February has 29 days in a leap year.

+ The following example clarifies this. It + tells us the length of a month, given the year. + The year must be known, since February has 29 days in a leap year.

-module(tut11). -export([month_length/2]). @@ -1150,57 +1171,58 @@ month_length(Year, Month) ->
- Built In Functions (BIFs) -

Built in functions (BIFs) are functions which for some reason are - built in to the Erlang virtual machine. BIFs often implement - functionality that is impossible to implement in Erlang or is too - inefficient to implement in Erlang. Some BIFs can be called - by use of the function name only, but they by default belong - to the erlang module. So for example, the call to the BIF trunc + Built-In Functions (BIFs) +

BIFs are functions that for some reason are + built into the Erlang virtual machine. BIFs often implement + functionality that is impossible or too + inefficient to implement in Erlang. Some BIFs can be called + using the function name only, but they by default belong + to the erlang module. For example, the call to the + BIF trunc below is equivalent to a call to erlang:trunc.

-

As you can see, we first find out if a year is leap or not. If a - year is divisible by 400, it is a leap year. To find this out we - first divide the year by 400 and use the built in function - trunc (more later) to cut off any decimals. We then - multiply by 400 again and see if we get back the same value. For - example, year 2004:

+

As shown, the first step is to check whether a year is a leap year. If a + year is divisible by 400, it is a leap year. To determine this, + first divide the year by 400 and use the BIF + trunc (more about this later) to cut off any decimals. Then + multiply by 400 again and see if the same value is returned. + For example, year 2004:

2004 / 400 = 5.01 trunc(5.01) = 5 5 * 400 = 2000 -

and we can see that we got back 2000 which is not the same as - 2004, so 2004 isn't divisible by 400. Year 2000:

+

2000 is not the same as 2004, so 2004 is not divisible by 400. + Year 2000:

2000 / 400 = 5.0 trunc(5.0) = 5 5 * 400 = 2000 -

so we have a leap year. The next two tests, which check if the year is - divisible by 100 or 4, are done in the same way. The first - if returns leap or not_leap which ends up - in the variable Leap. We use this variable in the guard - for feb in the following case which tells us how +

That is, a leap year. The next two trunc-tests evaluate + if the year is divisible by 100 or 4 in the same way. The first + if returns leap or not_leap, which ends up + in the variable Leap. This variable is used in the guard + for feb in the following case that tells us how long the month is.

-

This example showed the use of trunc. An easier way would - be to use the Erlang operator rem, which gives the remainder - after division. For example:

+

This example showed the use of trunc. It is easier + to use the Erlang operator rem that gives the remainder + after division, for example:

 74> 2004 rem 400.
 4
-

so instead of writing:

+

So instead of writing:

trunc(Year / 400) * 400 == Year -> leap; -

we could write:

+

it can be written:

Year rem 400 == 0 -> leap; -
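The rem-based test generalizes to a complete leap-year check. The following module is our own sketch (the module name leap and function is_leap/1 are not part of the course material), combining the three divisibility tests from tut11 with rem:

```erlang
-module(leap).
-export([is_leap/1]).

%% A year is a leap year if it is divisible by 400, or
%% divisible by 4 but not by 100.
is_leap(Year) ->
    if
        Year rem 400 == 0 -> leap;
        Year rem 100 == 0 -> not_leap;
        Year rem 4 == 0   -> leap;
        true              -> not_leap
    end.
```

With this, is_leap(2000) and is_leap(2004) both evaluate to leap, while is_leap(1900) evaluates to not_leap.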

There are many other built in functions (BIF) such as - trunc. Only a few built in functions can be used in guards, +

There are many other BIFs such as + trunc. Only a few BIFs can be used in guards, and you cannot use functions you have defined yourself in guards. - (see the chapter - "Guard Sequences" in the Erlang Reference Manual) (Aside for - advanced readers: This is to ensure that guards don't have side - effects.) Let's play with a few of these functions in the shell:

+ (see + Guard Sequences) + (For advanced readers: This is to ensure that guards do not have side + effects.) Let us play with a few of these functions in the shell:

 75> trunc(5.6).
 5
@@ -1218,7 +1240,7 @@ false
 true
 82> is_tuple([paris, {c, 30}]).
 false
-

All the above can be used in guards. Now for some which can't be +

All of these can be used in guards. Now for some BIFs that cannot be used in guards:

 83> atom_to_list(hello).
@@ -1227,22 +1249,22 @@ false
goodbye 85> integer_to_list(22). "22" -

The 3 BIFs above do conversions which would be difficult (or +

These three BIFs do conversions that would be difficult (or impossible) to do in Erlang.
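As a sketch of how the guard-friendly type-test BIFs shown earlier are used in practice (the module and function names here are our own, not from the course material):

```erlang
-module(kind).
-export([kind_of/1]).

%% Type-test BIFs such as is_atom/1, is_integer/1, and is_tuple/1
%% can appear directly in guards; conversion BIFs such as
%% atom_to_list/1 cannot.
kind_of(X) when is_atom(X)    -> atom;
kind_of(X) when is_integer(X) -> integer;
kind_of(X) when is_tuple(X)   -> tuple;
kind_of(_)                    -> something_else.
```

For example, kind_of(hello) returns atom, and kind_of({c, 30}) returns tuple.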

- Higher Order Functions (Funs) + Higher-Order Functions (Funs)

Erlang, like most modern functional programming languages, has - higher order functions. We start with an example using the shell:

+ higher-order functions. Here is an example using the shell:

 86> Xf = fun(X) -> X * 2 end.
 #Fun<erl_eval.5.123085357>
 87> Xf(5).
 10
-

What we have done here is to define a function which doubles - the value of number and assign this function to a variable. Thus - Xf(5) returned the value 10. Two useful functions when +

Here a function is defined that doubles + the value of a number, and the function is assigned to a variable. Thus + Xf(5) returns the value 10. Two useful functions when working with lists are foreach and map, which are defined as follows:

@@ -1258,17 +1280,16 @@ map(Fun, []) -> [].

These two functions are provided in the standard module lists. foreach takes a list and applies a fun to - every element in the list, map creates a new list by + every element in the list. map creates a new list by applying a fun to every element in a list. Going back to - the shell, we start by using map and a fun to add 3 to + the shell, map is used with a fun that adds 3 to every element of a list:

 88> Add_3 = fun(X) -> X + 3 end.
 #Fun<erl_eval.5.123085357>
 89> lists:map(Add_3, [1,2,3]).
 [4,5,6]
-

Now let's print out the temperatures in a list of cities (yet - again):

+

Let us (again) print the temperatures in a list of cities:

 90> Print_City = fun({City, {X, Temp}}) -> io:format("~-15w ~w ~w~n",
 [City, X, Temp]) end.
@@ -1281,7 +1302,7 @@ stockholm       c -4
 paris           f 28
 london          f 36
 ok
-

We will now define a fun which can be used to go through a list +

Let us now define a fun that can be used to go through a list of cities and temperatures and transform them all to Celsius.

-module(tut13). @@ -1303,21 +1324,21 @@ convert_list_to_c(List) -> {stockholm,{c,-4}}, {paris,{c,-2}}, {london,{c,2}}] -

The convert_to_c function is the same as before, but we - use it as a fun:

+

The convert_to_c function is the same as before, but here + it is used as a fun:

lists:map(fun convert_to_c/1, List) -

When we use a function defined elsewhere as a fun we can refer - to it as Function/Arity (remember that Arity = - number of arguments). So in the map call we write - lists:map(fun convert_to_c/1, List). As you can see +

When a function defined elsewhere is used as a fun, it can be referred + to as Function/Arity (remember that Arity = + number of arguments). Thus, the map call is written as + lists:map(fun convert_to_c/1, List). As shown, convert_list_to_c becomes much shorter and easier to understand.

The standard module lists also contains a function sort(Fun, List) where Fun is a fun with two - arguments. This fun should return true if the the first + arguments. This fun returns true if the first argument is less than the second argument, or else false. - We add sorting to the convert_list_to_c:

+ Sorting is added to the convert_list_to_c:

{paris,{c,-2}}, {london,{c,2}}, {cape_town,{c,21}}] -

In sort we use the fun:

+

In sort the fun is used:

Temp1 < Temp2 end,]]> -

Here we introduce the concept of an anonymous variable - "_". This is simply shorthand for a variable which is going to - get a value, but we will ignore the value. This can be used - anywhere suitable, not just in fun's. +

Here the concept of an anonymous variable + "_" is introduced. This is simply shorthand for a variable that + gets a value, but the value is ignored. This can be used + anywhere suitable, not just in funs. Temp1 < Temp2 returns true if Temp1 is less than Temp2.
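Putting the pieces together, the sorting fun can be tried directly in the shell. The city list below is our own example data, not from the course material:

```erlang
1> Cities = [{paris,{c,-2}},{moscow,{c,-10}},{london,{c,2}}].
[{paris,{c,-2}},{moscow,{c,-10}},{london,{c,2}}]
2> lists:sort(fun({_, {c, Temp1}}, {_, {c, Temp2}}) ->
2>                Temp1 < Temp2 end, Cities).
[{moscow,{c,-10}},{paris,{c,-2}},{london,{c,2}}]
```

The anonymous variable _ ignores the city name in each tuple; only the temperatures are compared.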

-- cgit v1.2.3 From 9fe8adf35c16ab5d4566b03f3b36863c90b5b6dd Mon Sep 17 00:00:00 2001 From: Hans Bolinder Date: Thu, 12 Mar 2015 15:35:13 +0100 Subject: Update Erlang Reference Manual Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Hans Bolinder. --- system/doc/reference_manual/character_set.xml | 18 +- system/doc/reference_manual/code_loading.xml | 117 ++-- system/doc/reference_manual/data_types.xml | 164 +++--- system/doc/reference_manual/distributed.xml | 120 +++-- system/doc/reference_manual/errors.xml | 67 +-- system/doc/reference_manual/expressions.xml | 732 ++++++++++++++------------ system/doc/reference_manual/functions.xml | 79 +-- system/doc/reference_manual/introduction.xml | 53 +- system/doc/reference_manual/macros.xml | 75 +-- system/doc/reference_manual/modules.xml | 153 +++--- system/doc/reference_manual/patterns.xml | 2 +- system/doc/reference_manual/ports.xml | 114 ++-- system/doc/reference_manual/processes.xml | 89 ++-- system/doc/reference_manual/records.xml | 66 +-- system/doc/reference_manual/typespec.xml | 230 ++++---- 15 files changed, 1162 insertions(+), 917 deletions(-) (limited to 'system/doc') diff --git a/system/doc/reference_manual/character_set.xml b/system/doc/reference_manual/character_set.xml index b09b484582..d6989373bf 100644 --- a/system/doc/reference_manual/character_set.xml +++ b/system/doc/reference_manual/character_set.xml @@ -4,7 +4,7 @@
- 20142014 + 20142015 Ericsson AB. All Rights Reserved. @@ -31,7 +31,7 @@
Character Set -

In Erlang 4.8/OTP R5A the syntax of Erlang tokens was extended to +

Since Erlang 4.8/OTP R5A, the syntax of Erlang tokens has been extended to allow the use of the full ISO-8859-1 (Latin-1) character set. This is noticeable in the following ways:

@@ -98,7 +98,7 @@ ø - ÿ Lowercase letters - Character Classes. + Character Classes

In Erlang/OTP R16B the syntax of Erlang tokens was extended to handle Unicode. The support is limited to @@ -111,13 +111,13 @@

Source File Encoding -

The Erlang source file encoding is selected by a + +

The Erlang source file encoding is selected by a comment in one of the first two lines of the source file. The first string that matches the regular expression coding\s*[:=]\s*([-a-zA-Z0-9])+ selects the encoding. If - the matching string is not a valid encoding it is ignored. The - valid encodings are Latin-1 and UTF-8 where the + the matching string is an invalid encoding, it is ignored. The + valid encodings are Latin-1 and UTF-8, where the case of the characters can be chosen freely.

The following example selects UTF-8 as default encoding:

@@ -127,7 +127,7 @@
 %% For this file we have chosen encoding = Latin-1
 %% -*- coding: latin-1 -*-
-

The default encoding for Erlang source files was changed from - Latin-1 to UTF-8 in Erlang OTP 17.0.

+

The default encoding for Erlang source files was changed from + Latin-1 to UTF-8 in Erlang/OTP 17.0.

diff --git a/system/doc/reference_manual/code_loading.xml b/system/doc/reference_manual/code_loading.xml index b5b5704df5..48ec32d6df 100644 --- a/system/doc/reference_manual/code_loading.xml +++ b/system/doc/reference_manual/code_loading.xml @@ -4,7 +4,7 @@
- 20032014 + 20032015 Ericsson AB. All Rights Reserved. @@ -29,35 +29,39 @@ code_loading.xml

How code is compiled and loaded is not a language issue, but - is system dependent. This chapter describes compilation and - code loading in Erlang/OTP with pointers to relevant parts of + is system-dependent. This section describes compilation and + code loading in Erlang/OTP with references to relevant parts of the documentation.

Compilation

Erlang programs must be compiled to object code. - The compiler can generate a new file which contains the object - code. The current abstract machine which runs the object code is + The compiler can generate a new file that contains the object + code. The current abstract machine, which runs the object code, is called BEAM, therefore the object files get the suffix .beam. The compiler can also generate a binary which can be loaded directly.

-

The compiler is located in the Kernel module compile, see - compile(3).

+

The compiler is located in the module compile (see the + compile(3) manual page in + Compiler).

 compile:file(Module)
 compile:file(Module, Options)

The Erlang shell understands the command c(Module) which both compiles and loads Module.

-

There is also a module make which provides a set of - functions similar to the UNIX type Make functions, see - make(3).

-

The compiler can also be accessed from the OS prompt, see - erl(1).

+

There is also a module make, which provides a set of + functions similar to the UNIX type Make functions, see the + make(3) + manual page in Tools.

+

The compiler can also be accessed from the OS prompt, see the + erl(1) manual page in ERTS.

 % erl -compile Module1...ModuleN
 % erl -make

The erlc program provides an even better way to compile - modules from the shell, see erlc(1). It understands a + modules from the shell, see the + erlc(1) manual page in ERTS. + It understands a number of flags that can be used to define macros, add search paths for include files, and more.

@@ -68,13 +72,17 @@ compile:file(Module, Options)
Code Loading

The object code must be loaded into the Erlang runtime - system. This is handled by the code server, see - code(3).

-

The code server loads code according to a code loading strategy + system. This is handled by the code server, see the + code(3) + manual page in Kernel.

+

The code server loads code according to a code loading strategy, which is either interactive (default) or - embedded. In interactive mode, code are searched for in + embedded. In interactive mode, code is searched for in a code path and loaded when first referenced. In - embedded mode, code is loaded at start-up according to a boot script. This is described in System Principles.

+ embedded mode, code is loaded at start-up according to a + boot script. This is described in + + System Principles .

@@ -86,16 +94,17 @@ compile:file(Module, Options) the system for the first time, the code becomes 'current'. If then a new instance of the module is loaded, the code of the previous instance becomes 'old' and the new instance becomes 'current'.

-

Both old and current code is valid, and may be evaluated +

Both old and current code is valid, and can be evaluated concurrently. Fully qualified function calls always refer to - current code. Old code may still be evaluated because of processes + current code. Old code can still be evaluated because of processes lingering in the old code.

-

If a third instance of the module is loaded, the code server will - remove (purge) the old code and any processes lingering in it will - be terminated. Then the third instance becomes 'current' and +

If a third instance of the module is loaded, the code server + removes (purges) the old code, and any processes lingering in it are + terminated. Then the third instance becomes 'current' and the previously current code becomes 'old'.

To change from old code to current code, a process must make a - fully qualified function call. Example:

+ fully qualified function call.

+

Example:

 -module(m).
 -export([loop/0]).
@@ -109,60 +118,62 @@ loop() ->
             loop()
     end.

To make the process change code, send the message - code_switch to it. The process then will make a fully - qualified call to m:loop() and change to current code. - Note that m:loop/0 must be exported.

-

For code replacement of funs to work, the syntax - fun Module:FunctionName/Arity should be used.

+ code_switch to it. The process then makes a fully + qualified call to m:loop() and changes to current code. + Notice that m:loop/0 must be exported.

+

For code replacement of funs to work, use the syntax + fun Module:FunctionName/Arity.
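Assuming the module m above is compiled and available, a code switch could look as follows in the shell. This is a sketch; the variable names and printed pid are illustrative:

```erlang
1> Pid = spawn(fun m:loop/0).   % fun Module:Function/Arity follows upgrades
<0.42.0>
2> c(m).                        % compile and load a new version of m
{ok,m}
3> Pid ! code_switch.           % the process re-enters loop/0 via m:loop()
code_switch
```

Because the fully qualified call m:loop() is made when the message arrives, the process continues in the newly loaded (current) code.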

- Running a function when a module is loaded + Running a Function When a Module is Loaded -

The on_load feature should be considered experimental - as there are a number of known weak points in current semantics - which therefore might also change in future releases:

+

The on_load feature is to be considered experimental + as there are a number of known weak points in current semantics, + which therefore might change in future Erlang/OTP releases:

-

Doing external call in on_load to the module itself +

Doing an external call in on_load to the module itself leads to deadlock.

At module upgrade, other processes calling the module - get suspended waiting for on_load to finish. This can be very bad + get suspended waiting for on_load to finish. This can be very bad for applications with demands on realtime characteristics.

-

At module upgrade, no rollback is done if the on_load function fails. - The system will be left in a bad limbo state without any working +

At module upgrade, no rollback is done if the + on_load function fails. + The system is left in a bad limbo state without any working and reachable instance of the module.

-

The problems with module upgrade described above could be fixed in future - releases by changing the behaviour to not make the module reachable until - after the on_load function has successfully returned.

+

The problems with module upgrade described above can be fixed in future + Erlang/OTP releases by changing the behaviour to not make the module reachable until + after the on_load function has successfully returned.

-

The -on_load() directive names a function that should - be run automatically when a module a loaded. Its syntax is:

+

The -on_load() directive names a function that is to + be run automatically when a module is loaded.

+

Its syntax is as follows:

 -on_load(Name/0).
-

It is not necessary to export the function. It will be called in a - freshly spawned process (which will be terminated as soon as the function +

It is not necessary to export the function. It is called in a + freshly spawned process (which terminates as soon as the function returns). The function must return ok if the module is to - be remained loaded and become callable, or any other value if the module - is to be unloaded. Generating an exception will also cause the + remain loaded and become callable, or any other value if the module + is to be unloaded. Generating an exception also causes the module to be unloaded. If the return value is not an atom, - a warning error report will be sent to the error logger.

+ a warning error report is sent to the error logger.

A process that calls any function in a module whose on_load - function has not yet returned will be suspended until the on_load + function has not yet returned, is suspended until the on_load function has returned.

-

In embedded mode, all modules will be loaded first and then - will all on_load functions be called. The system will be - terminated unless all of the on_load functions return +

In embedded mode, first all modules are loaded. + Then all on_load functions are called. The system is + terminated unless all of the on_load functions return ok.

-

Example:

+

Example:

 -module(m).
@@ -174,7 +185,7 @@ load_my_nifs() ->
     erlang:load_nif(NifPath, Info).

If the call to erlang:load_nif/2 fails, the module - will be unloaded and there will be warning report sent to + is unloaded and a warning report is sent to the error loader.

diff --git a/system/doc/reference_manual/data_types.xml b/system/doc/reference_manual/data_types.xml index 37c0db5ff7..6226fa2f31 100644 --- a/system/doc/reference_manual/data_types.xml +++ b/system/doc/reference_manual/data_types.xml @@ -4,7 +4,7 @@
- 20032013 + 20032015 Ericsson AB. All Rights Reserved. @@ -28,12 +28,12 @@ data_types.xml
+

Erlang provides a number of data types, which are listed in + this section.

Terms -

Erlang provides a number of data types which are listed in this - chapter. A piece of data of any data type is called a - term.

+

A piece of data of any data type is called a term.

@@ -44,16 +44,17 @@ $char

- ASCII value of the character char.
+ ASCII value or unicode code-point of the character + char. base#value

- Integer with the base base, which must be an + Integer with the base base, that must be an integer in the range 2..36.

In Erlang 5.2/OTP R9B and earlier versions, the allowed range is 2..16.
-

Examples:

+

Examples:

 1> 42.
 42
@@ -75,11 +76,11 @@
 
   
Atom -

An atom is a literal, a constant with name. An atom should be +

An atom is a literal, a constant with name. An atom is to be enclosed in single quotes (') if it does not begin with a lower-case letter or if it contains other characters than alphanumeric characters, underscore (_), or @.

-

Examples:

+

Examples:

 hello
 phone_number
@@ -90,11 +91,11 @@ phone_number
   
Bit Strings and Binaries

A bit string is used to store an area of untyped memory.

-

Bit Strings are expressed using the +

Bit strings are expressed using the bit syntax.

-

Bit Strings which consists of a number of bits which is evenly - divisible by eight are called Binaries

-

Examples:

+

Bit strings that consist of a number of bits that are evenly + divisible by eight are called binaries.

+

Examples:

 1> <<10,20>>.
 <<10,20>>
@@ -102,12 +103,14 @@ phone_number
 <<"ABC">>
 1> <<1:1,0:1>>.
 <<2:2>>
-

More examples can be found in Programming Examples.

+

For more examples, + see + Programming Examples.

Reference -

A reference is a term which is unique in an Erlang runtime +

A reference is a term that is unique in an Erlang runtime system, created by calling make_ref/0.

@@ -116,34 +119,42 @@ phone_number

A fun is a functional object. Funs make it possible to create an anonymous function and pass the function itself -- not its name -- as argument to other functions.

-

Example:

+

Example:

 1> Fun1 = fun (X) -> X+1 end.
 #Fun<erl_eval.6.39074546>
 2> Fun1(2).
 3
-

Read more about funs in Fun Expressions. More examples can be found in Programming - Examples.

+

Read more about funs in + Fun Expressions. For more examples, see + + Programming Examples.

Port Identifier -

A port identifier identifies an Erlang port. open_port/2, - which is used to create ports, will return a value of this type.

+

A port identifier identifies an Erlang port.

+

open_port/2, which is used to create ports, returns + a value of this data type.

Read more about ports in Ports and Port Drivers.

Pid -

A process identifier, pid, identifies a process. - spawn/1,2,3,4, spawn_link/1,2,3,4 and - spawn_opt/4, which are used to create processes, return - values of this type. Example:

+

A process identifier, pid, identifies a process.

+

The following BIFs, which are used to create processes, return + values of this data type:

+ + spawn/1,2,3,4 + spawn_link/1,2,3,4 + spawn_opt/4 + +

Example:

 1> spawn(m, f, []).
 <0.51.0>
-

The BIF self() returns the pid of the calling process. - Example:

+

In the following example, the BIF self() returns + the pid of the calling process:

 -module(m).
 -export([loop/0]).
@@ -166,14 +177,14 @@ who_are_you
Tuple -

Compound data type with a fixed number of terms:

+

A tuple is a compound data type with a fixed number of terms:

 {Term1,...,TermN}

Each term Term in the tuple is called an element. The number of elements is said to be the size of the tuple.

There exists a number of BIFs to manipulate tuples.

-

Examples:

+

Examples:

 1> P = {adam,24,{july,29}}.
 {adam,24,{july,29}}
@@ -191,7 +202,8 @@ adam
 
   
Map -

Compound data type with a variable number of key-value associations:

+

A map is a compound data type with a variable number of + key-value associations:

 #{Key1=>Value1,...,KeyN=>ValueN}

Each key-value association in the map is called an @@ -199,7 +211,7 @@ adam called elements. The number of association pairs is said to be the size of the map.

There exists a number of BIFs to manipulate maps.

-

Examples:

+

Examples:

 1> M1 = #{name=>adam,age=>24,date=>{july,29}}.
 #{age => 24,date => {july,29},name => adam}
@@ -214,16 +226,18 @@ adam
 6> map_size(#{}).
 0

A collection of maps processing functions can be found in - the STDLIB module maps.

-

Read more about Maps.

+ maps manual page + in STDLIB.

+

Read more about maps in + Map Expressions.

-

Maps are considered experimental during OTP 17.

+

Maps are considered to be experimental during Erlang/OTP 17.

List -

Compound data type with a variable number of terms.

+

A list is a compound data type with a variable number of terms.

 [Term1,...,TermN]

Each term Term in the list is called an @@ -231,20 +245,21 @@ adam the length of the list.

Formally, a list is either the empty list [] or consists of a head (first element) and a tail - (remainder of the list) which is also a list. The latter can + (remainder of the list). + The tail is also a list. The latter can be expressed as [H|T]. The notation - [Term1,...,TermN] above is actually shorthand for + [Term1,...,TermN] above is equivalent to the list [Term1|[...|[TermN|[]]]].

-

Example:

-[] is a list, thus

+

Example:

+

[] is a list, thus

[c|[]] is a list, thus

[b|[c|[]]] is a list, thus

-[a|[b|[c|[]]]] is a list, or in short [a,b,c].

-

+[a|[b|[c|[]]]] is a list, or in short [a,b,c]

+

A list where the tail is a list is sometimes called a proper list. It is allowed to have a list where the tail is not a - list, for example [a|b]. However, this type of list is of + list, for example, [a|b]. However, this type of list is of little practical use.

-

Examples:

+

Examples:

 1> L1 = [a,2,{c,4}].
 [a,2,{c,4}]
@@ -261,18 +276,19 @@ a
 7> length([]).
 0

A collection of list processing functions can be found in - the STDLIB module lists.

+ the lists manual + page in STDLIB.

String

Strings are enclosed in double quotes ("), but are not a - data type in Erlang. Instead a string "hello" is - shorthand for the list [$h,$e,$l,$l,$o], that is + data type in Erlang. Instead, a string "hello" is + shorthand for the list [$h,$e,$l,$l,$o], that is, [104,101,108,108,111].

Two adjacent string literals are concatenated into one. This is - done at compile-time and does not incur any runtime overhead. - Example:

+ done at compile time and thus does not incur any runtime overhead.

+

Example:

 "string" "42"

is equivalent to

@@ -284,12 +300,13 @@ a Record

A record is a data structure for storing a fixed number of elements. It has named fields and is similar to a struct in C. - However, record is not a true data type. Instead record + However, a record is not a true data type. Instead, record expressions are translated to tuple expressions during compilation. Therefore, record expressions are not understood by - the shell unless special actions are taken. See shell(3) - for details.

-

Examples:

+ the shell unless special actions are taken. For details, see the + shell(3) manual + page in STDLIB).

+

Examples:

 -module(person).
 -export([new/2]).
@@ -303,14 +320,15 @@ new(Name, Age) ->
 {person,ernie,44}

Read more about records in Records. More examples can be - found in Programming Examples.

+ found in + Programming Examples.

Boolean

There is no Boolean data type in Erlang. Instead the atoms true and false are used to denote Boolean values.

-

Examples:

+

Examples:

 1> 2 =< 3.
 true
@@ -329,76 +347,80 @@ true
\b - backspace + Backspace \d - delete + Delete \e - escape + Escape \f - form feed + Form feed \n - newline + Newline \r - carriage return + Carriage return \s - space + Space \t - tab + Tab \v - vertical tab + Vertical tab \XYZ, \YZ, \Z - character with octal representation XYZ, YZ or Z + Character with octal + representation XYZ, YZ or Z \xXY - character with hexadecimal representation XY + Character with hexadecimal + representation XY \x{X...} - character with hexadecimal representation; X... is one or more hexadecimal characters + Character with hexadecimal + representation; X... is one or more hexadecimal characters \^a...\^z

\^A...\^Z
- control A to control Z + Control A to control Z
\' - single quote + Single quote \" - double quote + Double quote \\ - backslash + Backslash - Recognized Escape Sequences. + Recognized Escape Sequences
Type Conversions -

There are a number of BIFs for type conversions. Examples:

+

There are a number of BIFs for type conversions.

+

Examples:

 1> atom_to_list(hello).
 "hello"
diff --git a/system/doc/reference_manual/distributed.xml b/system/doc/reference_manual/distributed.xml
index 88f98bc106..fb83e356f9 100644
--- a/system/doc/reference_manual/distributed.xml
+++ b/system/doc/reference_manual/distributed.xml
@@ -4,7 +4,7 @@
 
   
- 20032013 + 20032015 Ericsson AB. All Rights Reserved. @@ -36,22 +36,24 @@ runtime system is called a node. Message passing between processes at different nodes, as well as links and monitors, are transparent when pids are used. Registered names, however, are - local to each node. This means the node must be specified as well - when sending messages etc. using registered names.

+ local to each node. This means that the node must be specified as well + when sending messages, and so on, using registered names.

The distribution mechanism is implemented using TCP/IP sockets. - How to implement an alternative carrier is described in ERTS User's Guide.

+ How to implement an alternative carrier is described in the + ERTS User's Guide.

Nodes -

A node is an executing Erlang runtime system which has - been given a name, using the command line flag -name +

A node is an executing Erlang runtime system that has + been given a name, using the command-line flag -name (long names) or -sname (short names).

-

The format of the node name is an atom name@host where - name is the name given by the user and host is +

The format of the node name is an atom name@host. + name is the name given by the user. host is the full host name if long names are used, or the first part of the host name if short names are used. node() returns - the name of the node. Example:

+ the name of the node.

+

Example:

 % erl -name dilbert
 (dilbert@uab.ericsson.se)1> node().
@@ -69,16 +71,16 @@ dilbert@uab
Node Connections

The nodes in a distributed Erlang system are loosely connected. - The first time the name of another node is used, for example if + The first time the name of another node is used, for example, if spawn(Node,M,F,A) or net_adm:ping(Node) is called, - a connection attempt to that node will be made.

+ a connection attempt to that node is made.

Connections are by default transitive. If a node A connects to - node B, and node B has a connection to node C, then node A will - also try to connect to node C. This feature can be turned off by - using the command line flag -connect_all false, see - erl(1).

+ node B, and node B has a connection to node C, then node A + also tries to connect to node C. This feature can be turned off by + using the command-line flag -connect_all false, see the + erl(1) manual page in ERTS.

If a node goes down, all connections to that node are removed. - Calling erlang:disconnect_node(Node) will force disconnection + Calling erlang:disconnect_node(Node) forces disconnection of a node.

The list of (visible) nodes currently connected to is returned by nodes().
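As a sketch of an explicit connection set-up (the node names here are hypothetical, and a second reachable node is assumed):

```erlang
(alice@uab)1> nodes().
[]
(alice@uab)2> net_adm:ping(dilbert@uab).
pong
(alice@uab)3> nodes().
[dilbert@uab]
```

net_adm:ping/1 returns pong if the connection attempt succeeds, and pang otherwise.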

@@ -89,23 +91,24 @@ dilbert@uab

The Erlang Port Mapper Daemon epmd is automatically started at every host where an Erlang node is started. It is responsible for mapping the symbolic node names to machine - addresses. See epmd(1).

+ addresses. See the + epmd(1) manual page in ERTS.

Hidden Nodes

In a distributed Erlang system, it is sometimes useful to connect to a node without also connecting to all other nodes. - An example could be some kind of O&M functionality used to - inspect the status of a system without disturbing it. For this - purpose, a hidden node may be used.

-

A hidden node is a node started with the command line flag + An example is some kind of O&M functionality used to + inspect the status of a system, without disturbing it. For this + purpose, a hidden node can be used.

+

A hidden node is a node started with the command-line flag -hidden. Connections between hidden nodes and other nodes are not transitive, they must be set up explicitly. Also, hidden nodes do not show up in the list of nodes returned by nodes(). Instead, nodes(hidden) or nodes(connected) must be used. This means, for example, - that the hidden node will not be added to the set of nodes that + that the hidden node is not added to the set of nodes that global is keeping track of.

This feature was added in Erlang 5.0/OTP R7.

@@ -114,9 +117,11 @@ dilbert@uab
C Nodes

A C node is a C program written to act as a hidden node in a distributed Erlang system. The library Erl_Interface - contains functions for this purpose. Refer to the documentation - for Erl_Interface and Interoperability Tutorial for more - information about C nodes.

+ contains functions for this purpose. For more information about + C nodes, see the + Erl_Interface application and + + Interoperability Tutorial.

@@ -125,7 +130,7 @@ dilbert@uab with each other. In a network of different Erlang nodes, it is built into the system at the lowest possible level. Each node has its own magic cookie, which is an Erlang atom.

-

When a nodes tries to connect to another node, the magic cookies +

When a node tries to connect to another node, the magic cookies are compared. If they do not match, the connected node rejects the connection.

At start-up, a node has a random atom assigned as its magic @@ -141,8 +146,8 @@ dilbert@uab the local node assume that all other nodes have the same cookie Cookie.

Thus, groups of users with identical cookie files get Erlang - nodes which can communicate freely and without interference from - the magic cookie system. Users who want run nodes on separate + nodes that can communicate freely and without interference from + the magic cookie system. Users who want to run nodes on separate file systems must make certain that their cookie files are identical on the different file systems.

For a node Node1 with magic cookie Cookie to be @@ -154,18 +159,24 @@ dilbert@uab

The default when a connection is established between two nodes, is to immediately connect all other visible nodes as well. This way, there is always a fully connected network. If there are - nodes with different cookies, this method might be inappropriate - and the command line flag -connect_all false must be set, - see erl(1).

+ nodes with different cookies, this method can be inappropriate + and the command-line flag -connect_all false must be set, + see the erl(1) + manual page in ERTS.

The magic cookie of the local node is retrieved by calling erlang:get_cookie().
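As a sketch (the cookie values here are made up), the cookie BIFs can be exercised in the shell of a distributed node:

```erlang
(a@host)1> erlang:get_cookie().
'XNZBLQSFENGLKGYYGBYH'
(a@host)2> erlang:set_cookie(node(), secret).
true
(a@host)3> erlang:get_cookie().
secret
```

Calling erlang:set_cookie(node(), Cookie) on the local node makes it assume that all other nodes have the same cookie Cookie, as described above.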

Distribution BIFs -

Some useful BIFs for distributed programming, see - erlang(3) for more information:

+

Some useful BIFs for distributed programming + (for more information, see the + erlang(3) manual page in ERTS):

+ + BIF + Description + erlang:disconnect_node(Node) Forces the disconnection of a node. @@ -180,7 +191,9 @@ dilbert@uab monitor_node(Node, true|false) - Monitor the status of Node. A message{nodedown, Node} is received if the connection to it is lost. + Monitors the status of + Node. A message{nodedown, Node} is received + if the connection to it is lost. node() @@ -196,11 +209,16 @@ dilbert@uab nodes(Arg) - Depending on Arg, this function can return a list not only of visible nodes, but also hidden nodes and previously known nodes, etc. + Depending on Arg, + this function can return a list not only of visible nodes, + but also hidden nodes and previously known nodes, and so on. erlang:set_cookie(Node, Cookie) - Sets the magic cookie used when connecting to Node. If Node is the current node, Cookie will be used when connecting to all new nodes. + Sets the magic cookie used + when connecting to Node. If Node is the + current node, Cookie is used when connecting to + all new nodes. spawn[_link|_opt](Node, Fun) @@ -210,18 +228,24 @@ dilbert@uab spawn[_link|opt](Node, Module, FunctionName, Args) Creates a process at a remote node. - Distribution BIFs. + Distribution BIFs
- Distribution Command Line Flags -

Examples of command line flags used for distributed programming, - see erl(1) for more information:

+ Distribution Command-Line Flags +

Examples of command-line flags used for distributed programming + (for more information, see the erl(1) + manual page in ERTS):

+ + Command-Line Flag + Description + -connect_all false - Only explicit connection set-ups will be used. + Only explicit connection + set-ups are used. -hidden @@ -239,15 +263,19 @@ dilbert@uab -sname Name Makes a runtime system into a node, using short node names. - Distribution Command Line Flags. + Distribution Command-Line Flags
Distribution Modules

Examples of modules useful for distributed programming:

-

In Kernel:

+

In the Kernel application:

+ + Module + Description + global A global name registration facility. @@ -266,8 +294,12 @@ dilbert@uab Kernel Modules Useful For Distribution.
-

In STDLIB:

+

In the STDLIB application:

+ + Module + Description + slave Start and control of slave nodes. diff --git a/system/doc/reference_manual/errors.xml b/system/doc/reference_manual/errors.xml index dde6e68f4a..66ecf6aa94 100644 --- a/system/doc/reference_manual/errors.xml +++ b/system/doc/reference_manual/errors.xml @@ -4,7 +4,7 @@
- 20032013 + 20032015 Ericsson AB. All Rights Reserved. @@ -38,12 +38,12 @@ Run-time errors Generated errors -

A compile-time error, for example a syntax error, should not +

A compile-time error, for example a syntax error, does not cause much trouble as it is caught by the compiler.

A logical error is when a program does not behave as intended, - but does not crash. An example could be that nothing happens when + but does not crash. An example is that nothing happens when a button in a graphical user interface is clicked.

-

A run-time error is when a crash occurs. An example could be +

A run-time error is when a crash occurs. An example is when an operator is applied to arguments of the wrong type. The Erlang programming language has built-in features for handling of run-time errors.

@@ -54,23 +54,23 @@ of class error.

A generated error is when the code itself calls - exit/1 or throw/1. Note that emulated run-time + exit/1 or throw/1. Notice that emulated run-time errors are not denoted as generated errors here.

Generated errors are exceptions of classes exit and throw.

When a run-time error or generated error occurs in Erlang, - execution for the process which evaluated + execution for the process that evaluated the erroneous expression is stopped. This is referred to as a failure, that execution or evaluation fails, or that the process fails, - terminates or exits. Note that a process may + terminates, or exits. Notice that a process can terminate/exit for other reasons than a failure.

-

A process that terminates will emit an exit signal with +

A process that terminates emits an exit signal with an exit reason that says something about which error - has occurred. Normally, some information about the error will - be printed to the terminal.

+ has occurred. Normally, some information about the error is + printed to the terminal.

@@ -78,10 +78,12 @@

Exceptions are run-time errors or generated errors and are of three different classes, with different origins. The try expression - (appeared in Erlang 5.4/OTP-R10B) + (new in Erlang 5.4/OTP R10B) can distinguish between the different classes, whereas the catch - expression can not. They are described in the Expressions chapter.

+ expression cannot. They are described in + Expressions + .

Class @@ -89,7 +91,9 @@ error - Run-time error for example 1+a, or the process called erlang:error/1,2 (appeared in Erlang 5.4/OTP-R10B) + Run-time error, + for example, 1+a, or the process called + erlang:error/1,2 (new in Erlang 5.4/OTP R10B) exit @@ -102,11 +106,11 @@ Exception Classes.

An exception consists of its class, an exit reason - (the Exit Reason), - and a stack trace (that aids in finding the code location of + (see Exit Reason), + and a stack trace (which aids in finding the code location of the exception).

The stack trace can be retrieved using - erlang:get_stacktrace/0 (new in Erlang 5.4/OTP-R10B) + erlang:get_stacktrace/0 (new in Erlang 5.4/OTP R10B) from within a try expression, and is returned for exceptions of class error from a catch expression.

An exception of class error is also known as a run-time @@ -114,38 +118,38 @@

- Handling of Run-Time Errors in Erlang + Handling of Run-time Errors in Erlang
Error Handling Within Processes

It is possible to prevent run-time errors and other exceptions from causing the process to terminate by using catch or - try, see the Expressions chapter about - Catch - and Try.

+ try, see + Expressions about + catch + and try.

Error Handling Between Processes

Processes can monitor other processes and detect process terminations, see - the Processes - chapter.

+ Processes.

Exit Reasons -

When a run-time error occurs, - that is an exception of class error, - the exit reason is a tuple {Reason,Stack}. +

When a run-time error occurs, + that is, an exception of class error, + the exit reason is a tuple {Reason,Stack}, where Reason is a term indicating the type of error:

Reason - Type of error + Type of Error badarg @@ -181,7 +185,7 @@ {badfun,F} - There is something wrong with a fun F. + Something is wrong with a fun F. {badarity,F} @@ -201,14 +205,17 @@ system_limit - A system limit has been reached. See Efficiency Guide for information about system limits. + A system limit has been reached. + See + Efficiency Guide for information about system limits. + - Exit Reasons. + Exit Reasons

Stack is the stack of function calls being evaluated when the error occurred, given as a list of tuples {Module,Name,Arity} with the most recent function call - first. The most recent function call tuple may in some + first. The most recent function call tuple can in some cases be {Module,Name,[Arg]}.
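A sketch of how such an error can be caught and inspected from code, using erlang:get_stacktrace/0 within a try expression as described earlier (the exact contents of the stack trace depend on where the code runs):

```erlang
try abs(foo)
catch
    error:Reason ->
        {caught, Reason, erlang:get_stacktrace()}
end.
%% Reason is badarg here, and the stack trace is a list of
%% {Module,Name,Arity} or {Module,Name,[Arg]} tuples.
```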

diff --git a/system/doc/reference_manual/expressions.xml b/system/doc/reference_manual/expressions.xml index 62a344ad58..fd3cfabd3d 100644 --- a/system/doc/reference_manual/expressions.xml +++ b/system/doc/reference_manual/expressions.xml @@ -4,7 +4,7 @@
- 20032013 + 20032015 Ericsson AB. All Rights Reserved. @@ -28,13 +28,17 @@ expressions.xml
-

In this chapter, all valid Erlang expressions are listed. +

In this section, all valid Erlang expressions are listed. When writing Erlang programs, it is also allowed to use macro- and record expressions. However, these expressions are expanded during compilation and are in that sense not true Erlang expressions. Macro- and record expressions are covered in - separate chapters: Macros and - Records.

+ separate sections: +

+ +

Preprocessor

+

Records

+
Expression Evaluation @@ -48,15 +52,15 @@ Expr1 + Expr2 performed.

Many of the operators can only be applied to arguments of a certain type. For example, arithmetic operators can only be - applied to numbers. An argument of the wrong type will cause - a badarg run-time error.

+ applied to numbers. An argument of the wrong type causes + a badarg run-time error.

Terms

The simplest form of expression is a term, that is an integer, - float, atom, string, list, map or tuple. + float, atom, string, list, map, or tuple. The return value is the term itself.

@@ -65,9 +69,10 @@ Expr1 + Expr2

A variable is an expression. If a variable is bound to a value, the return value is this value. Unbound variables are only allowed in patterns.

-

Variables start with an uppercase letter or underscore (_) - and may contain alphanumeric characters, underscore and @. - Examples:

+

Variables start with an uppercase letter or underscore (_). + Variables can contain alphanumeric characters, underscore and @. +

+

Examples:

 X
 Name1
@@ -77,18 +82,20 @@ _
 _Height

Variables are bound to values using pattern matching. Erlang - uses single assignment, a variable can only be bound + uses single assignment, that is, a variable can only be bound once.

The anonymous variable is denoted by underscore (_) and can be used when a variable is required but its value can be - ignored. Example:

+ ignored.

+

Example:

 [H|_] = [1,2,3]
-

Variables starting with underscore (_), for example +

Variables starting with underscore (_), for example, _Height, are normal variables, not anonymous. They are - however ignored by the compiler in the sense that they will not - generate any warnings for unused variables. Example: The following - code

+ however ignored by the compiler in the sense that they do not + generate any warnings for unused variables.

+

Example:

+

The following code:

 member(_, []) ->
     [].
@@ -96,36 +103,37 @@ member(_, []) ->
 member(Elem, []) ->
     [].
-

This will however cause a warning for an unused variable +

This causes a warning for an unused variable, Elem, if the code is compiled with the flag warn_unused_vars set. Instead, the code can be rewritten to:

 member(_Elem, []) ->
     [].
-

Note that since variables starting with an underscore are - not anonymous, this will match:

+

Notice that since variables starting with an underscore are + not anonymous, this matches:

 {_,_} = {1,2}
-

But this will fail:

+

But this fails:

 {_N,_N} = {1,2}

The scope for a variable is its function clause. Variables bound in a branch of an if, case, or receive expression must be bound in all branches - to have a value outside the expression, otherwise they - will be regarded as 'unsafe' outside the expression.

+ to have a value outside the expression. Otherwise they + are regarded as 'unsafe' outside the expression.

For the try expression introduced in - Erlang 5.4/OTP-R10B, variable scoping is limited so that + Erlang 5.4/OTP R10B, variable scoping is limited so that variables bound in the expression are always 'unsafe' outside - the expression. This will be improved.

+ the expression. This is to be improved.

Patterns -

A pattern has the same structure as a term but may contain - unbound variables. Example:

+

A pattern has the same structure as a term but can contain + unbound variables.

+

Example:

 Name1
 [H|T]
@@ -136,13 +144,13 @@ Name1
     
Match Operator = in Patterns

If Pattern1 and Pattern2 are valid patterns, - then the following is also a valid pattern:

+ the following is also a valid pattern:

 Pattern1 = Pattern2

When matched against a term, both Pattern1 and - Pattern2 will be matched against the term. The idea - behind this feature is to avoid reconstruction of terms. - Example:

+ Pattern2 are matched against the term. The idea + behind this feature is to avoid reconstruction of terms.

+

Example:

 f({connect,From,To,Number,Options}, To) ->
     Signal = {connect,From,To,Number,Options},
@@ -163,16 +171,20 @@ f(Signal, To) ->
       
 f("prefix" ++ Str) -> ...

This is syntactic sugar for the equivalent, but harder to - read

+ read:

 f([$p,$r,$e,$f,$i,$x | Str]) -> ...
Expressions in Patterns -

An arithmetic expression can be used within a pattern, if - it uses only numeric or bitwise operators, and if its value - can be evaluated to a constant at compile-time. Example:

+

An arithmetic expression can be used within a pattern if + it meets both of the following conditions:

+ + It uses only numeric or bitwise operators. + Its value can be evaluated to a constant when compiled. + +

Example:

 case {Value, Result} of
     {?THRESHOLD+1, ok} -> ...
@@ -182,21 +194,21 @@ case {Value, Result} of
Match +

The following matches Expr1, a pattern, against + Expr2:

 Expr1 = Expr2
-

Matches Expr1, a pattern, against Expr2. - If the matching succeeds, any unbound variable in the pattern +

If the matching succeeds, any unbound variable in the pattern becomes bound and the value of Expr2 is returned.

-

If the matching fails, a badmatch run-time error will - occur.

-

Examples:

+

If the matching fails, a badmatch run-time error occurs.

+

Examples:

 1> {A, B} = {answer, 42}.
 {answer,42}
 2> A.
 answer
 3> {C, D} = [1, 2].
-** exception error: no match of right hand side value [1,2]
+** exception error: no match of right-hand side value [1,2]
@@ -210,27 +222,28 @@ ExprM:ExprF(Expr1,...,ExprN) ExprF must be an atom or an expression that evaluates to an atom. The function is said to be called by using the fully qualified function name. This is often referred - to as a remote or external function call. - Example:

+ to as a remote or external function call.

+

Example:

lists:keysearch(Name, 1, List)

In the second form of function calls, ExprF(Expr1,...,ExprN), ExprF must be an atom or evaluate to a fun.

-

If ExprF is an atom the function is said to be called by +

If ExprF is an atom, the function is said to be called by using the implicitly qualified function name. If the function ExprF is locally defined, it is called. - Alternatively if ExprF is explicitly imported from module - M, M:ExprF(Expr1,...,ExprN) is called. If + Alternatively, if ExprF is explicitly imported from the + M module, M:ExprF(Expr1,...,ExprN) is called. If ExprF is neither declared locally nor explicitly imported, ExprF must be the name of an automatically - imported BIF. Examples:

+ imported BIF.

+

Examples:

handle(Msg, State) spawn(m, init, []) -

Examples where ExprF is a fun:

+

Examples where ExprF is a fun:

Fun1 = fun(X) -> X+1 end Fun1(3) @@ -239,16 +252,15 @@ Fun1(3) fun lists:append/2([1,2], [3,4]) => [1,2,3,4] -

Note that when calling a local function, there is a difference - between using the implicitly or fully qualified function name, as - the latter always refers to the latest version of the module. See - Compilation and Code Loading.

- -

See also the chapter about - Function Evaluation.

+

Notice that when calling a local function, there is a difference + between using the implicitly or fully qualified function name. + The latter always refers to the latest version of the module. + See Compilation and Code Loading + and + Function Evaluation.

- Local Function Names Clashing With Auto-imported BIFs + Local Function Names Clashing With Auto-Imported BIFs

If a local function has the same name as an auto-imported BIF, the semantics is that implicitly qualified function calls are directed to the locally defined function, not to the BIF. To avoid @@ -260,9 +272,9 @@ fun lists:append/2([1,2], [3,4])

Before OTP R14A (ERTS version 5.8), an implicitly qualified function call to a function having the same name as an auto-imported BIF always resulted in the BIF being called. In - newer versions of the compiler the local function is instead - called. The change is there to avoid that future additions to the - set of auto-imported BIFs does not silently change the behavior + newer versions of the compiler, the local function is called instead. + This is to ensure that future additions to the + set of auto-imported BIFs do not silently change the behavior of old code.

However, to avoid that old (pre R14) code changed its @@ -272,8 +284,8 @@ fun lists:append/2([1,2], [3,4]) 5.8) and have an implicitly qualified call to that function in your code, you either need to explicitly remove the auto-import using a compiler directive, or replace the call with a fully - qualified function call, otherwise you will get a compilation - error. See example below:

+ qualified function call. Otherwise you get a compilation + error. See the following example:

-export([length/1,f/1]). @@ -290,9 +302,10 @@ f(X) when erlang:length(X) > 3 -> %% Calls erlang:length/1, long.

The same logic applies to explicitly imported functions from - other modules as to locally defined functions. To both import a + other modules, as to locally defined functions. + It is not allowed to both import a function from another module and have the function declared in the - module at the same time is not allowed.

+ module at the same time:

-export([f/1]). @@ -310,10 +323,10 @@ f(X) -> length(X). %% mod:length/1 is called -

For auto-imported BIFs added to Erlang in release R14A and thereafter, +

For auto-imported BIFs added in Erlang/OTP R14A and thereafter, overriding the name with a local function or explicit import is always allowed. However, if the -compile({no_auto_import,[F/A]) - directive is not used, the compiler will issue a warning whenever + directive is not used, the compiler issues a warning whenever the function is called in the module using the implicitly qualified function name.

@@ -330,15 +343,16 @@ if BodyN end

The branches of an if-expression are scanned sequentially - until a guard sequence GuardSeq which evaluates to true is + until a guard sequence GuardSeq that evaluates to true is found. Then the corresponding Body (sequence of expressions separated by ',') is evaluated.

The return value of Body is the return value of the if expression.

-

If no guard sequence is true, an if_clause run-time error - will occur. If necessary, the guard expression true can be +

If no guard sequence evaluates to true, + an if_clause run-time error + occurs. If necessary, the guard expression true can be used in the last branch, as that guard sequence is always true.

-

Example:

+

Example:

 is_greater_than(X, Y) ->
     if
@@ -367,8 +381,8 @@ end

The return value of Body is the return value of the case expression.

If there is no matching pattern with a true guard sequence, - a case_clause run-time error will occur.

-

Example:

+ a case_clause run-time error occurs.

+

Example:

 is_valid_signal(Signal) ->
     case Signal of
@@ -389,15 +403,15 @@ Expr1 ! Expr2

Sends the value of Expr2 as a message to the process specified by Expr1. The value of Expr2 is also the return value of the expression.

-

Expr1 must evaluate to a pid, a registered name (atom) or - a tuple {Name,Node}, where Name is an atom and - Node a node name, also an atom.

+

Expr1 must evaluate to a pid, a registered name (atom), or + a tuple {Name,Node}. Name is an atom and + Node is a node name, also an atom.

If Expr1 evaluates to a name, but this name is not - registered, a badarg run-time error will occur. + registered, a badarg run-time error occurs. Sending a message to a pid never fails, even if the pid identifies a non-existing process. - Distributed message sending, that is if Expr1 + Distributed message sending, that is, if Expr1 evaluates to a tuple {Name,Node} (or a pid located at another node), also never fails. @@ -420,14 +434,14 @@ end the second, and so on. If a match succeeds and the optional guard sequence GuardSeq is true, the corresponding Body is evaluated. The matching message is consumed, that - is removed from the mailbox, while any other messages in + is, removed from the mailbox, while any other messages in the mailbox remain unchanged.

The return value of Body is the return value of the receive expression.

-

receive never fails. Execution is suspended, possibly - indefinitely, until a message arrives that does match one of +

receive never fails. The execution is suspended, possibly + indefinitely, until a message arrives that matches one of the patterns and with a true guard sequence.

-

Example:

+

Example:

 wait_for_onhook() ->
     receive
@@ -438,7 +452,7 @@ wait_for_onhook() ->
             B ! {busy, self()},
             wait_for_onhook()
     end.
-

It is possible to augment the receive expression with a +

The receive expression can be augmented with a timeout:

 receive
@@ -451,14 +465,14 @@ after
     ExprT ->
         BodyT
 end
-

ExprT should evaluate to an integer. The highest allowed - value is 16#ffffffff, that is, the value must fit in 32 bits. +

ExprT is to evaluate to an integer. The highest allowed + value is 16#FFFFFFFF, that is, the value must fit in 32 bits. receive..after works exactly as receive, except that if no matching message has arrived within ExprT - milliseconds, then BodyT is evaluated instead and its - return value becomes the return value of the receive..after - expression.

-

Example:

+ milliseconds, then BodyT is evaluated instead. The + return value of BodyT then becomes the return value + of the receive..after expression.

+

Example:

 wait_for_onhook() ->
     receive
@@ -481,10 +495,10 @@ after
     ExprT ->
         BodyT
 end
-

This construction will not consume any messages, only suspend - execution in the process for ExprT milliseconds and can be +

This construction does not consume any messages, only suspends + execution in the process for ExprT milliseconds. This can be used to implement simple timers.

-

Example:

+

Example:

 timer() ->
     spawn(m, timer, [self()]).
@@ -498,12 +512,12 @@ timer(Pid) ->
     

There are two special cases for the timeout value ExprT:

infinity - The process should wait indefinitely for a matching message - -- this is the same as not using a timeout. Can be - useful for timeout values that are calculated at run-time. + The process is to wait indefinitely for a matching message; + this is the same as not using a timeout. This can be + useful for timeout values that are calculated at runtime. 0 If there is no matching message in the mailbox, the timeout - will occur immediately. + occurs immediately.
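The zero timeout makes a non-blocking receive possible. A common sketch is a function that drains the mailbox without ever suspending:

```erlang
%% Receives and discards all messages currently in the mailbox,
%% then returns ok. Never blocks, because of 'after 0'.
flush() ->
    receive
        _Any ->
            flush()
    after 0 ->
        ok
    end.
```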
@@ -518,39 +532,39 @@ Expr1 op Expr2 == - equal to + Equal to /= - not equal to + Not equal to =< - less than or equal to + Less than or equal to < - less than + Less than >= - greater than or equal to + Greater than or equal to > - greater than + Greater than =:= - exactly equal to + Exactly equal to =/= - exactly not equal to + Exactly not equal to Term Comparison Operators. -

The arguments may be of different data types. The following +

The arguments can be of different data types. The following order is defined:

 number < atom < reference < fun < port < pid < tuple < list < bit string
@@ -558,17 +572,18 @@ number < atom < reference < fun < port < pid < tuple < list size, two tuples with the same size are compared element by element.

When comparing an integer to a float, the term with the lesser - precision will be converted into the other term's type, unless the - operator is one of =:= or =/=. A float is more precise than + precision is converted into the type of the other term, unless the + operator is one of =:= or =/=. A float is more precise than an integer until all significant figures of the float are to the left of the decimal point. This happens when the float is larger/smaller than +/-9007199254740992.0. The conversion strategy is changed depending on the size of the float because otherwise comparison of large floats and integers would lose their transitivity.

-

Returns the Boolean value of the expression, true or - false.

-

Examples:

+

Term comparison operators return the Boolean value of the + expression, true or false.

+ +

Examples:

 1> 1==1.0.
 true
@@ -585,19 +600,19 @@ false
Expr1 op Expr2 - op + Operator Description - Argument type + Argument Type + - unary + - number + Unary + + Number - - unary - - number + Unary - + Number + @@ -607,62 +622,62 @@ Expr1 op Expr2 -   - number + Number *   - number + Number / - floating point division - number + Floating point division + Number bnot - unary bitwise not - integer + Unary bitwise NOT + Integer div - integer division - integer + Integer division + Integer rem - integer remainder of X/Y - integer + Integer remainder of X/Y + Integer band - bitwise and - integer + Bitwise AND + Integer bor - bitwise or - integer + Bitwise OR + Integer bxor - arithmetic bitwise xor - integer + Arithmetic bitwise XOR + Integer bsl - arithmetic bitshift left - integer + Arithmetic bitshift left + Integer bsr - bitshift right - integer + Bitshift right + Integer Arithmetic Operators.
-

Examples:

+

Examples:

 1> +1.
 1
@@ -697,28 +712,28 @@ Expr1 op Expr2
Expr1 op Expr2 - op + Operator Description not - unary logical not + Unary logical NOT and - logical and + Logical AND or - logical or + Logical OR xor - logical xor + Logical XOR Logical Operators.
-

Examples:

+

Examples:

 1> not true.
 false
@@ -737,28 +752,37 @@ true
     
 Expr1 orelse Expr2
 Expr1 andalso Expr2
-

Expressions where Expr2 is evaluated only if - necessary. That is, Expr2 is evaluated only if Expr1 - evaluates to false in an orelse expression, or only - if Expr1 evaluates to true in an andalso - expression. Returns either the value of Expr1 (that is, +

Expr2 is evaluated only if + necessary. That is, Expr2 is evaluated only if:

+ +

Expr1 evaluates to false in an + orelse expression.

+
+
+

or

+ +

Expr1 evaluates to true in an + andalso expression.

+
+
+

Returns either the value of Expr1 (that is, true or false) or the value of Expr2 - (if Expr2 was evaluated).

+ (if Expr2 is evaluated).

-

Example 1:

+

Example 1:

 case A >= -1.0 andalso math:sqrt(A+1) > B of
-

This will work even if A is less than -1.0, +

This works even if A is less than -1.0, since in that case, math:sqrt/1 is never evaluated.

-

Example 2:

+

Example 2:

 OnlyOne = is_atom(L) orelse
          (is_list(L) andalso length(L) == 1),
-

From R13A, Expr2 is no longer required to evaluate to a - boolean value. As a consequence, andalso and orelse +

From Erlang/OTP R13A, Expr2 is no longer required to evaluate to a + Boolean value. As a consequence, andalso and orelse are now tail-recursive. For instance, the following function is - tail-recursive in R13A and later:

+ tail-recursive in Erlang/OTP R13A and later:

 all(Pred, [Hd|Tail]) ->
@@ -774,11 +798,11 @@ Expr1 ++ Expr2
 Expr1 -- Expr2

The list concatenation operator ++ appends its second argument to its first and returns the resulting list.

-

The list subtraction operator -- produces a list which - is a copy of the first argument, subjected to the following - procedure: for each element in the second argument, the first +

The list subtraction operator -- produces a list that + is a copy of the first argument. The procedure is as follows: + for each element in the second argument, the first occurrence of this element (if any) is removed.

-

Example:

+

Example:

 1> [1,2,3]++[4,5].
 [1,2,3,4,5]
@@ -786,8 +810,8 @@ Expr1 -- Expr2
[3,1,2]

The complexity of A -- B is - proportional to length(A)*length(B), meaning that it - will be very slow if both A and B are + proportional to length(A)*length(B). That is, it + becomes very slow if both A and B are long lists.
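A short shell transcript may help illustrate the first-occurrence rule stated above; each element of the right operand removes at most one matching element from the left:

```erlang
1> [a,b,b,c] -- [b].
[a,b,c]
2> [a,b,b,c] -- [b,b].
[a,c]
```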

@@ -802,7 +826,7 @@ Expr1 -- Expr2

#{ K => V }

- New maps may include multiple associations at construction by listing every + New maps can include multiple associations at construction by listing every association:

#{ K1 => V1, .., Kn => Vn } @@ -816,11 +840,11 @@ Expr1 -- Expr2

Keys and values are separated by the => arrow and associations are - separated by ,. + separated by a comma ,.

- Examples: + Examples:

M0 = #{}, % empty map @@ -829,14 +853,14 @@ M2 = #{1 => 2, b => b}, % multiple associations with literals M3 = #{k => {A,B}}, % single association with variables M4 = #{{"w", 1} => f()}. % compound key associated with an evaluated expression

- where, A and B are any expressions and M0 through M4 + Here, A and B are any expressions and M0 through M4 are the resulting map terms.

- If two matching keys are declared, the latter key will take precedence. + If two matching keys are declared, the latter key takes precedence.

- Example: + Example:

@@ -846,54 +870,57 @@ M4 = #{{"w", 1} => f()}.  % compound key associated with an evaluated expression
 #{1 => b, 1.0 => a}
 

- The order in which the expressions constructing the keys and their - associated values are evaluated is not defined. The syntactic order of + The order in which the expressions constructing the keys (and their + associated values) are evaluated is not defined. The syntactic order of the key-value pairs in the construction is of no relevance, except in - the above mentioned case of two matching keys. + the previously mentioned case of two matching keys.

Updating Maps

- Updating a map has similar syntax as constructing it. + Updating a map has a similar syntax as constructing it.

- An expression defining the map to be updated is put in front of the expression - defining the keys to be updated and their respective values. + An expression defining the map to be updated is put in front of the expression + defining the keys to be updated and their respective values:

M#{ K => V }

- where M is a term of type map and K and V are any expression. + Here M is a term of type map and K and V are any expression.

If key K does not match any existing key in the map, a new association - will be created from key K to value V. If key K matches - an existing key in map M its associated value will be replaced by the - new value V. In both cases the evaluated map expression will return a new map. + is created from key K to value V. +

+

If key K matches an existing key in map M, + its associated value + is replaced by the new value V. In both cases, the evaluated map expression + returns a new map.

- If M is not of type map an exception of type badmap is thrown. + If M is not of type map, an exception of type badmap is thrown.

- To only update an existing value, the following syntax is used, + To only update an existing value, the following syntax is used:

M#{ K := V }

- where M is an term of type map, V is an expression and K - is an expression which evaluates to an existing key in M. + Here M is a term of type map, V is an expression and K + is an expression that evaluates to an existing key in M.

- If key K does not match any existing keys in map M an exception - of type badarg will be triggered at runtime. If a matching key K - is present in map M its associated value will be replaced by the new - value V and the evaluated map expression returns a new map. + If key K does not match any existing keys in map M, an exception + of type badarg is triggered at runtime. If a matching key K + is present in map M, its associated value is replaced by the new + value V, and the evaluated map expression returns a new map.

- If M is not of type map an exception of type badmap is thrown. + If M is not of type map, an exception of type badmap is thrown.

- Examples: + Examples:

M0 = #{}, @@ -902,10 +929,10 @@ M2 = M1#{a => 1, b => 2}, M3 = M2#{"function" => fun() -> f() end}, M4 = M3#{a := 2, b := 3}. % 'a' and 'b' was added in `M1` and `M2`.

- where M0 is any map. It follows that M1 .. M4 are maps as well. + Here M0 is any map. It follows that M1 .. M4 are maps as well.

- More Examples: + More Examples:

 1> M = #{1 => a}.
@@ -921,83 +948,84 @@ M4 = M3#{a := 2, b := 3}.  % 'a' and 'b' was added in `M1` and `M2`.
 		  As in construction, the order in which the key and value expressions
 		  are evaluated is not defined. The
 		  syntactic order of the key-value pairs in the update is of no
-		  relevance, except in the case where two keys match, in which
-		  case the latter value is used.
+		  relevance, except in the case where two keys match.
+		  In that case, the latter value is used.
 	  

Maps in Patterns

- Matching of key-value associations from maps is done in the following way: + Matching of key-value associations from maps is done as follows:

#{ K := V } = M

- where M is any map. The key K has to be an expression with bound - variables or a literals, and V can be any pattern with either bound or + Here M is any map. The key K must be an expression with bound + variables or literals. V can be any pattern with either bound or unbound variables.

- If the variable V is unbound, it will be bound to the value associated - with the key K, which has to exist in the map M. If the variable - V is bound, it has to match the value associated with K in M. + If the variable V is unbound, it becomes bound to the value associated + with the key K, which must exist in the map M. If the variable + V is bound, it must match the value associated with K in M.

-

Example:

- +

Example:

+
 1> M = #{"tuple" => {1,2}}.
 #{"tuple" => {1,2}}
 2> #{"tuple" := {1,B}} = M.
 #{"tuple" => {1,2}}
 3> B.
-2.
+2.

- This will bind variable B to integer 2. + This binds variable B to integer 2.

- Similarly, multiple values from the map may be matched: + Similarly, multiple values from the map can be matched:

#{ K1 := V1, .., Kn := Vn } = M

- where keys K1 .. Kn are any expressions with literals or bound variables. If all - keys exist in map M all variables in V1 .. Vn will be matched to the + Here keys K1 .. Kn are any expressions with literals or bound variables. If all + keys exist in map M, all variables in V1 .. Vn are matched to the associated values of their respective keys.

- If the matching conditions are not met, the match will fail, either with + If the matching conditions are not met, the match fails, either with:

- - a badmatch exception, if used in the context of the matching operator - as in the example, +

A badmatch exception.

+

This is if it is used in the context of the matching operator + as in the example.

- - or resulting in the next clause being tested in function heads and - case expressions. +

Or resulting in the next clause being tested in function heads and + case expressions.

Matching in maps only allows for := as delimiters of associations. +

+

The order in which keys are declared in matching has no relevance.

- Duplicate keys are allowed in matching and will match each pattern associated - to the keys. + Duplicate keys are allowed in matching and match each pattern associated + to the keys:

#{ K := V1, K := V2 } = M

- Matching an expression against an empty map literal will match its type but - no variables will be bound: + Matching an expression against an empty map literal matches its type but + no variables are bound:

#{} = Expr

- This expression will match if the expression Expr is of type map, otherwise - it will fail with an exception badmatch. + This expression matches if the expression Expr is of type map, otherwise + it fails with an exception badmatch.
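The following shell transcript, a sketch consistent with the rules above, shows both the empty-map match and a duplicate-key match:

```erlang
1> #{} = #{a => 1}.        % matches the type; binds no variables
#{a => 1}
2> #{x := A, x := B} = #{x => 7, y => 8}.
#{x => 7,y => 8}
3> {A,B}.
{7,7}
```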

- Matching syntax: Example with literals in function heads + Matching Syntax

- Matching of literals as keys are allowed in function heads. + Matching of literals as keys is allowed in function heads:

%% only start if not_started @@ -1014,17 +1042,19 @@ handle_call(change, From, #{ state := start } = S) ->
Maps in Guards

- Maps are allowed in guards as long as all sub-expressions are valid guard expressions. + Maps are allowed in guards as long as all subexpressions are valid guard expressions.

- Two guard BIFs handles maps: + Two guard BIFs handle maps:

is_map/1 + in the erlang module map_size/1 + in the erlang module
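As an illustration (the function name is made up for this example), both guard BIFs can appear together in a clause head:

```erlang
%% Accepts only non-empty maps; any other argument
%% falls through to the second clause.
describe(M) when is_map(M), map_size(M) > 0 -> {map, map_size(M)};
describe(_) -> not_a_nonempty_map.
```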
@@ -1044,29 +1074,34 @@ Ei = Value | Value/TypeSpecifierList | Value:Size/TypeSpecifierList

Used in a bit string construction, Value is an expression - which should evaluate to an integer, float or bit string. If the - expression is something else than a single literal or variable, it - should be enclosed in parenthesis.

+ that is to evaluate to an integer, float, or bit string. If the + expression is not a single literal or variable, it + is to be enclosed in parentheses.

Used in a bit string matching, Value must be a variable, - or an integer, float or string.

+ or an integer, float, or string.

-

Note that, for example, using a string literal as in +

Notice that, for example, using a string literal as in >]]> is syntactic sugar for >]]>.

Used in a bit string construction, Size is an expression - which should evaluate to an integer.

+ that is to evaluate to an integer.

-

Used in a bit string matching, Size must be an integer or a +

Used in a bit string matching, Size must be an integer, or a variable bound to an integer.

The value of Size specifies the size of the segment in units (see below). The default value depends on the type (see - below). For integer it is 8, for - float it is 64, for binary and bitstring it is - the whole binary or bit string. In matching, this default value is only - valid for the very last element. All other bit string or binary + below):

+ + For integer it is 8. + For float it is 64. + For binary and bitstring it is + the whole binary or bit string. + +

In matching, this default value is only + valid for the last element. All other bit string or binary elements in the matching must have a size specification.
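The defaults can be verified in the shell; notice that the unsized binary segment is allowed only as the last element of the pattern:

```erlang
1> byte_size(<<3.14/float>>).      % float defaults to 64 bits
8
2> <<A:8,Rest/binary>> = <<1,2,3>>.
<<1,2,3>>
3> {A,Rest}.
{1,<<2,3>>}
```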

For the utf8, utf16, and utf32 types, @@ -1090,7 +1125,7 @@ Ei = Value | The default is unsigned. Endianness= big | little | native - Native-endian means that the endianness will be resolved at load + Native-endian means that the endianness is resolved at load time to be either big-endian or little-endian, depending on what is native for the CPU that the Erlang machine is run on. Endianness only matters when the Type is either integer, @@ -1099,7 +1134,7 @@ Ei = Value | Unit= unit:IntegerLiteral The allowed range is 1..256. Defaults to 1 for integer, - float and bitstring, and to 8 for binary. + float, and bitstring, and to 8 for binary. No unit specifier must be given for the types utf8, utf16, and utf32. @@ -1110,8 +1145,8 @@ Ei = Value |

When constructing binaries, if the size N of an integer segment is too small to contain the given integer, the most significant - bits of the integer will be silently discarded and only the N least - significant bits will be put into the binary.

+ bits of the integer are silently discarded and only the N least + significant bits are put into the binary.

The types utf8, utf16, and utf32 specifies encoding/decoding of the Unicode Transformation Formats UTF-8, UTF-16, @@ -1120,39 +1155,39 @@ Ei = Value |

When constructing a segment of a utf type, Value must be an integer in the range 0..16#D7FF or 16#E000....16#10FFFF. Construction - will fail with a badarg exception if Value is + fails with a badarg exception if Value is outside the allowed ranges. The size of the resulting binary - segment depends on the type and/or Value. For utf8, - Value will be encoded in 1 through 4 bytes. For - utf16, Value will be encoded in 2 or 4 - bytes. Finally, for utf32, Value will always be - encoded in 4 bytes.

+ segment depends on the type or Value, or both:

+ + For utf8, Value is encoded in 1-4 bytes. + For utf16, Value is encoded in 2 or 4 bytes. + For utf32, Value is always encoded in 4 bytes. + -

When constructing, a literal string may be given followed +

When constructing, a literal string can be given followed by one of the UTF types, for example: >]]> - which is syntatic sugar for + which is syntactic sugar for >]]>.

-

A successful match of a segment of a utf type results +

A successful match of a segment of a utf type, results in an integer in the range 0..16#D7FF or 16#E000..16#10FFFF. - The match will fail if returned value - would fall outside those ranges.

+ The match fails if the returned value falls outside those ranges.

-

A segment of type utf8 will match 1 to 4 bytes in the binary, +

A segment of type utf8 matches 1-4 bytes in the binary, if the binary at the match position contains a valid UTF-8 sequence. (See RFC-3629 or the Unicode standard.)

-

A segment of type utf16 may match 2 or 4 bytes in the binary. - The match will fail if the binary at the match position does not contain +

A segment of type utf16 can match 2 or 4 bytes in the binary. + The match fails if the binary at the match position does not contain a legal UTF-16 encoding of a Unicode code point. (See RFC-2781 or the Unicode standard.)

-

A segment of type utf32 may match 4 bytes in the binary in the - same way as an integer segment matching 32 bits. - The match will fail if the resulting integer is outside the legal ranges +

A segment of type utf32 can match 4 bytes in the binary in the + same way as an integer segment matches 32 bits. + The match fails if the resulting integer is outside the legal ranges mentioned above.

-

Examples:

+

Examples:

 1> Bin1 = <<1,17,42>>.
 <<1,17,42>>
@@ -1181,11 +1216,13 @@ Ei = Value |
 13> <<1024/utf8>>.
 <<208,128>>
 
-

Note that bit string patterns cannot be nested.

-

Note also that ">]]>" is interpreted as +

Notice that bit string patterns cannot be nested.

+

Notice also that ">]]>" is interpreted as ">]]>" which is a syntax error. The correct way is to write a space after '=': ">]]>.

-

More examples can be found in Programming Examples.

+

More examples are provided in + + Programming Examples.

@@ -1200,16 +1237,16 @@ fun BodyK end

A fun expression begins with the keyword fun and ends - with the keyword end. Between them should be a function + with the keyword end. Between them is to be a function declaration, similar to a regular function declaration, - except that the function name is optional and should be a variable if + except that the function name is optional and is to be a variable, if any.

Variables in a fun head shadow the function name and both shadow - variables in the function clause surrounding the fun expression, and - variables bound in a fun body are local to the fun body.

+ variables in the function clause surrounding the fun expression. + Variables bound in a fun body are local to the fun body.

The return value of the expression is the resulting fun.

-

Examples:

+

Examples:

 1> Fun1 = fun (X) -> X+1 end.
 #Fun<erl_eval.6.39074546>
@@ -1232,15 +1269,17 @@ fun Module:Name/Arity
syntactic sugar for:

 fun (Arg1,...,ArgN) -> Name(Arg1,...,ArgN) end
-

In Module:Name/Arity, Module and Name are atoms - and Arity is an integer. Starting from the R15 release, - Module, Name, and Arity may also be variables. - A fun defined in this way will refer to the function Name +

In Module:Name/Arity, Module and Name are atoms + and Arity is an integer. Starting from Erlang/OTP R15, + Module, Name, and Arity can also be variables. + A fun defined in this way refers to the function Name with arity Arity in the latest version of module - Module. A fun defined in this way will not be dependent on - the code for module in which it is defined. + Module. A fun defined in this way is not dependent on + the code for the module in which it is defined.
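For example, an external fun referring to a standard BIF can be created and applied like this:

```erlang
1> F = fun erlang:atom_to_list/1, F(hello).
"hello"
```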

-

More examples can be found in Programming Examples.

+

More examples are provided in + + Programming Examples.

@@ -1250,23 +1289,26 @@ fun (Arg1,...,ArgN) -> Name(Arg1,...,ArgN) end catch Expr

Returns the value of Expr unless an exception occurs during the evaluation. In that case, the exception is - caught. For exceptions of class error, - that is run-time errors: {'EXIT',{Reason,Stack}} - is returned. For exceptions of class exit, that is - the code called exit(Term): {'EXIT',Term} is returned. - For exceptions of class throw, that is - the code called throw(Term): Term is returned.

+ caught.

+

For exceptions of class error, that is, + run-time errors, + {'EXIT',{Reason,Stack}} is returned.

+

For exceptions of class exit, that is, + the code called exit(Term), + {'EXIT',Term} is returned.

+

For exceptions of class throw, that is, + the code called throw(Term), + Term is returned.

Reason depends on the type of error that occurred, and Stack is the stack of recent function calls, see - Errors and Error Handling.

-

Examples:

-

+ Exit Reasons.

+

Examples:

 1> catch 1+2.
 3
 2> catch 1+a.
 {'EXIT',{badarith,[...]}}
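The exit class gives the other result shape described above (throw is demonstrated later in this section):

```erlang
3> catch exit(goodbye).
{'EXIT',goodbye}
```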
-

Note that catch has low precedence and catch +

Notice that catch has low precedence and catch subexpressions often need to be enclosed in a block expression or in parentheses:

@@ -1275,13 +1317,14 @@ catch Expr
 4> A = (catch 1+2).
 3

The BIF throw(Any) can be used for non-local return from - a function. It must be evaluated within a catch, which will - return the value Any. Example:

+ a function. It must be evaluated within a catch, which + returns the value Any.

+

Example:

 5> catch throw(hello).
 hello

If throw/1 is not evaluated within a catch, a - nocatch run-time error will occur.

+ nocatch run-time error occurs.

@@ -1297,14 +1340,17 @@ catch end

This is an enhancement of catch that appeared in - Erlang 5.4/OTP-R10B. It gives the possibility do distinguish - between different exception classes, and to choose to handle only - the desired ones, passing the others on to an enclosing - try or catch or to default error handling.

-

Note that although the keyword catch is used in + Erlang 5.4/OTP R10B. It gives the possibility to:

+ + Distinguish between different exception classes. + Choose to handle only the desired ones. + Pass the others on to an enclosing + try or catch, or to default error handling. + +

Notice that although the keyword catch is used in the try expression, there is not a catch expression within the try expression.

-

Returns the value of Exprs (a sequence of expressions +

It returns the value of Exprs (a sequence of expressions Expr1, ..., ExprN) unless an exception occurs during the evaluation. In that case the exception is caught and the patterns ExceptionPattern with the right exception @@ -1318,7 +1364,7 @@ end Class with a true guard sequence, the exception is passed on as if Exprs had not been enclosed in a try expression.

-

If an exception occurs during evaluation of ExceptionBody +

If an exception occurs during evaluation of ExceptionBody, it is not caught.

The try expression can have an of section: @@ -1341,7 +1387,7 @@ end the patterns Pattern are sequentially matched against the result in the same way as for a case expression, except that if - the matching fails, a try_clause run-time error will occur.

+ the matching fails, a try_clause run-time error occurs.

An exception occurring during the evaluation of Body is not caught.

The try expression can also be augmented with an @@ -1364,7 +1410,7 @@ after AfterBody end

AfterBody is evaluated after either Body or - ExceptionBody no matter which one. The evaluated value of + ExceptionBody, no matter which one. The evaluated value of AfterBody is lost; the return value of the try expression is the same with an after section as without.

Even if an exception occurs during evaluation of Body or @@ -1373,13 +1419,13 @@ end evaluated, so the exception from the try expression is the same with an after section as without.

If an exception occurs during evaluation of AfterBody - itself it is not caught, so if AfterBody is evaluated after - an exception in Exprs, Body or ExceptionBody, + itself, it is not caught. So if AfterBody is evaluated after + an exception in Exprs, Body, or ExceptionBody, that exception is lost and masked by the exception in AfterBody.

-

The of, catch and after sections are all +

The of, catch, and after sections are all optional, as long as there is at least a catch or an - after section, so the following are valid try + after section. So the following are valid try expressions:

try Exprs of @@ -1398,9 +1444,9 @@ after end try Exprs after AfterBody end -

Example of using after, this code will close the file +

Next is an example of using after. This closes the file, even in the event of exceptions in file:read/2 or in - binary_to_term/1, and exceptions will be the same as + binary_to_term/1. The exceptions are the same as without the try...after...end expression:

termize_file(Name) -> @@ -1411,7 +1457,7 @@ termize_file(Name) -> after file:close(F) end. -

Example: Using try to emulate catch Expr.

+

Next is an example of using try to emulate catch Expr:

try Expr catch @@ -1427,7 +1473,7 @@ end (Expr)

Parenthesized expressions are useful to override operator precedences, - for example in arithmetic expressions:

+ for example, in arithmetic expressions:

 1> 1 + 2 * 3.
 7
@@ -1451,7 +1497,7 @@ end
List Comprehensions -

List comprehensions are a feature of many modern functional +

List comprehensions are a feature of many modern functional programming languages. Subject to certain rules, they provide a succinct notation for generating elements in a list.

List comprehensions are analogous to set comprehensions in @@ -1461,32 +1507,34 @@ end

List comprehensions are written with the following syntax:

 [Expr || Qualifier1,...,QualifierN]
-

Expr is an arbitrary expression, and each +

Here, Expr is an arbitrary expression, and each Qualifier is either a generator or a filter.

A generator is written as:

  .

-ListExpr must be an expression which evaluates to a +ListExpr must be an expression that evaluates to a list of terms.
A bit string generator is written as:

  .

-BitStringExpr must be an expression which evaluates to a +BitStringExpr must be an expression that evaluates to a bitstring.
- A filter is an expression which evaluates to + A filter is an expression, which evaluates to true or false.
-

The variables in the generator patterns shadow variables in the function - clause surrounding the list comprehensions.

A list comprehension +

The variables in the generator patterns shadow variables in the function + clause surrounding the list comprehensions.

A list comprehension returns a list, where the elements are the result of evaluating Expr for each combination of generator list elements and bit string generator - elements for which all filters are true.

Example:

+ elements, for which all filters are true.

+

Example:

 1> [X*2 || X <- [1,2,3]].
 [2,4,6]
-

More examples can be found in Programming Examples.

- +

More examples are provided in + + Programming Examples.

@@ -1500,34 +1548,35 @@ end the following syntax:

 << BitString || Qualifier1,...,QualifierN >>
-

BitString is a bit string expression, and each +

Here, BitString is a bit string expression and each Qualifier is either a generator, a bit string generator or a filter.

A generator is written as:

  .

- ListExpr must be an expression which evaluates to a + ListExpr must be an expression that evaluates to a list of terms.
A bit string generator is written as:

  .

-BitStringExpr must be an expression which evaluates to a +BitStringExpr must be an expression that evaluates to a bitstring.
- A filter is an expression which evaluates to + A filter is an expression that evaluates to true or false.
-

The variables in the generator patterns shadow variables in - the function clause surrounding the bit string comprehensions.

+

The variables in the generator patterns shadow variables in + the function clause surrounding the bit string comprehensions.

A bit string comprehension returns a bit string, which is created by concatenating the results of evaluating BitString - for each combination of bit string generator elements for which all + for each combination of bit string generator elements, for which all filters are true.

-

-

Example:

+

Example:

-1> << << (X*2) >> || 
+1> << << (X*2) >> ||
 <<X>> <= << 1,2,3 >> >>.
 <<2,4,6>>
-

More examples can be found in Programming Examples.

+

More examples are provided in + + Programming Examples.

@@ -1536,27 +1585,27 @@ end

A guard sequence is a sequence of guards, separated by semicolon (;). The guard sequence is true if at least one of - the guards is true. (The remaining guards, if any, will not be - evaluated.)

-Guard1;...;GuardK

+ the guards is true. (The remaining guards, if any, are not + evaluated.)

+

Guard1;...;GuardK

A guard is a sequence of guard expressions, separated by comma (,). The guard is true if all guard expressions - evaluate to true.

-GuardExpr1,...,GuardExprN

+ evaluate to true.

+

GuardExpr1,...,GuardExprN

The set of valid guard expressions (sometimes called guard tests) is a subset of the set of valid Erlang expressions. The reason for restricting the set of valid expressions is that evaluation of a guard expression must be guaranteed to be free - of side effects. Valid guard expressions are:

+ of side effects. Valid guard expressions are the following:

- the atom true, - other constants (terms and bound variables), all regarded - as false, - calls to the BIFs specified below, - term comparisons, - arithmetic expressions, - boolean expressions, and - short-circuit expressions (andalso/orelse). + The atom true + Other constants (terms and bound variables), all regarded + as false + Calls to the BIFs specified in table Type Test BIFs + Term comparisons + Arithmetic expressions + Boolean expressions + Short-circuit expressions (andalso/orelse) @@ -1610,13 +1659,13 @@ end is_tuple/1 - Type Test BIFs. + Type Test BIFs
-

Note that most type test BIFs have older equivalents, without +

Notice that most type test BIFs have older equivalents, without the is_ prefix. These old BIFs are retained for backwards - compatibility only and should not be used in new code. They are + compatibility only and are not to be used in new code. They are also only allowed at top level. For example, they are not allowed - in boolean expressions in guards.

+ in Boolean expressions in guards.

abs(Number) @@ -1666,14 +1715,14 @@ end tuple_size(Tuple) - Other BIFs Allowed in Guard Expressions. + Other BIFs Allowed in Guard Expressions
-

If an arithmetic expression, a boolean expression, a +

If an arithmetic expression, a Boolean expression, a short-circuit expression, or a call to a guard BIF fails (because of invalid arguments), the entire guard fails. If the guard was part of a guard sequence, the next guard in the sequence (that is, - the guard following the next semicolon) will be evaluated.

+ the guard following the next semicolon) is evaluated.
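As a sketch (the function name is illustrative only), a guard that fails because of invalid arguments simply makes evaluation continue at the next guard in the sequence:

```erlang
%% hd(X) fails in the first guard when X is not a list,
%% so the guard after the semicolon is tried instead.
kind(X) when hd(X) =:= 1; is_atom(X) -> matched;
kind(_) -> no_match.
```

Here kind(foo) returns matched through the second guard, while kind(42) returns no_match.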

@@ -1726,12 +1775,13 @@ end catch   - Operator Precedence. + Operator Precedence

When evaluating an expression, the operator with the highest priority is evaluated first. Operators with the same priority - are evaluated according to their associativity. Example: - The left associative arithmetic operators are evaluated left to + are evaluated according to their associativity.

+

Example:

+

The left associative arithmetic operators are evaluated left to right:

 6 + 5 * 4 - 3 / 2 evaluates to
diff --git a/system/doc/reference_manual/functions.xml b/system/doc/reference_manual/functions.xml
index 9498ef1402..8cf4da1b8b 100644
--- a/system/doc/reference_manual/functions.xml
+++ b/system/doc/reference_manual/functions.xml
@@ -4,7 +4,7 @@
 
   
- 20032013 + 20032015 Ericsson AB. All Rights Reserved. @@ -38,7 +38,7 @@ clause body, separated by ->.

A clause head consists of the function name, an argument list, and an optional guard sequence - beginning with the keyword when.

+ beginning with the keyword when:

 Name(Pattern11,...,Pattern1N) [when GuardSeq1] ->
     Body1;
@@ -48,9 +48,9 @@ Name(PatternK1,...,PatternKN) [when GuardSeqK] ->
     

The function name is an atom. Each argument is a pattern.

The number of arguments N is the arity of the function. A function is uniquely defined by the module name, - function name and arity. That is, two functions with the same + function name, and arity. That is, two functions with the same name and in the same module, but with different arities are two - completely different functions.

+ different functions.

A function named f in the module m and with arity N is often denoted as m:f/N.

A clause body consists of a sequence of expressions @@ -60,8 +60,8 @@ Expr1, ..., ExprN

Valid Erlang expressions and guard sequences are described in - Erlang Expressions.

-

Example:

+ Expressions.

+

Example:

 fact(N) when N>0 ->  % first clause head
     N * fact(N-1);   % first clause body
@@ -75,23 +75,23 @@ fact(0) ->           % second clause head
     Function Evaluation
     

When a function m:f/N is called, first the code for the function is located. If the function cannot be found, an - undef run-time error will occur. Note that the function + undef runtime error occurs. Notice that the function must be exported to be visible outside the module it is defined in.

If the function is found, the function clauses are scanned - sequentially until a clause is found that fulfills the following - two conditions:

+ sequentially until a clause is found that fulfills both of + the following two conditions:

- the patterns in the clause head can be successfully - matched against the given arguments, and - the guard sequence, if any, is true. + The patterns in the clause head can be successfully + matched against the given arguments. + The guard sequence, if any, is true.

If such a clause cannot be found, a function_clause - run-time error will occur.

+ runtime error occurs.

If such a clause is found, the corresponding clause body is evaluated. That is, the expressions in the body are evaluated sequentially and the value of the last expression is returned.

-

Example: Consider the function fact:

+

Consider the function fact:

 -module(m).
 -export([fact/1]).
@@ -100,17 +100,17 @@ fact(N) when N>0 ->
     N * fact(N-1);
 fact(0) ->
     1.
-

Assume we want to calculate factorial for 1:

+

Assume that you want to calculate the factorial for 1:

 1> m:fact(1).

Evaluation starts at the first clause. The pattern N is - matched against the argument 1. The matching succeeds and - the guard (N>0) is true, thus N is bound to 1 and + matched against argument 1. The matching succeeds and + the guard (N>0) is true, thus N is bound to 1, and the corresponding body is evaluated:

 N * fact(N-1) => (N is bound to 1)
 1 * fact(0)
-

Now fact(0) is called and the function clauses are +

Now, fact(0) is called, and the function clauses are scanned sequentially again. First, the pattern N is matched against 0. The matching succeeds, but the guard (N>0) is false. Second, the pattern 0 is matched against @@ -121,48 +121,51 @@ fact(0) -> 1

Evaluation has succeed and m:fact(1) returns 1.

If m:fact/1 is called with a negative number as - argument, no clause head will match. A function_clause - run-time error will occur.

+ argument, no clause head matches. A function_clause + runtime error occurs.

Tail recursion

If the last expression of a function body is a function call, - a tail recursive call is done so that no system - resources for example call stack are consumed. This means - that an infinite loop can be done if it uses tail recursive + a tail recursive call is done. + This is to ensure that no system + resources, for example, call stack, are consumed. This means + that an infinite loop can be done if it uses tail-recursive calls.

Example:

 loop(N) ->
     io:format("~w~n", [N]),
     loop(N+1).
The earlier factorial example can act as a counter-example. It is not tail-recursive, since a multiplication is done on the result of the recursive call to fact(N-1).
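For contrast, the same computation can be written tail-recursively by carrying an accumulator. This is a sketch; the helper fact/2 and the module name fact_tail are illustrative and not part of the original example:

```erlang
%% Tail-recursive factorial: the recursive call is the last
%% expression evaluated, so the call stack does not grow.
-module(fact_tail).
-export([fact/1]).

fact(N) when N >= 0 -> fact(N, 1).

fact(0, Acc) -> Acc;
fact(N, Acc) when N > 0 -> fact(N - 1, N * Acc).
```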

Built-In Functions (BIFs)

BIFs are implemented in C code in the runtime system. BIFs do things that are difficult or impossible to implement in Erlang. Most of the BIFs belong to the module erlang, but there are also BIFs belonging to a few other modules, for example lists and ets.

The most commonly used BIFs belonging to erlang are auto-imported. They do not need to be prefixed with the module name. Which BIFs are auto-imported is specified in the erlang(3) module in ERTS. For example, standard type conversion BIFs like atom_to_list and BIFs allowed in guards can be called without specifying the module name.

+

Examples:

 1> tuple_size({a,b,c}).
 3
 2> atom_to_list('Erlang').
 "Erlang"
Notice that it is normally the set of auto-imported BIFs that is referred to when talking about 'BIFs'.

diff --git a/system/doc/reference_manual/introduction.xml b/system/doc/reference_manual/introduction.xml index 36bec17825..ee8b82e60f 100644 --- a/system/doc/reference_manual/introduction.xml +++ b/system/doc/reference_manual/introduction.xml @@ -4,7 +4,7 @@
2003-2015 Ericsson AB. All Rights Reserved.

introduction.xml
This section is the Erlang reference manual. It describes the Erlang programming language.

Purpose

The focus of the Erlang reference manual is on the language itself, not the implementation of it. The language constructs are described in text and with examples rather than formally specified, to make the manual more readable. The Erlang reference manual is not intended as a tutorial.

Information about the implementation of Erlang can, for example, be found in the following:

System Principles: starting and stopping, boot scripts, code loading, error logging, creating target systems

Efficiency Guide: memory consumption, system limits

ERTS User's Guide: crash dumps, drivers
Document Conventions

In this section, the following terminology is used:

A sequence is one or more items. For example, a clause body consists of a sequence of expressions. This means that there must be at least one expression. A list is any number of items. For example, an argument list can consist of zero, one, or more arguments.

If a feature has been added recently, in Erlang 5.0/OTP R7 or later, this is mentioned in the text.

@@ -68,15 +86,16 @@
Complete List of BIFs

For a complete list of BIFs, their arguments and return values, see the erlang(3) manual page in ERTS.

Reserved Words

The following are reserved words in Erlang:

-

after and andalso band begin bnot bor bsl bsr bxor case catch +

after and andalso band begin bnot bor bsl bsr bxor case catch cond div end fun if let not of or orelse receive rem try - when xor

+ when xor

diff --git a/system/doc/reference_manual/macros.xml b/system/doc/reference_manual/macros.xml index 9fd0b0f287..01994aae5e 100644 --- a/system/doc/reference_manual/macros.xml +++ b/system/doc/reference_manual/macros.xml @@ -4,7 +4,7 @@
2003-2015 Ericsson AB. All Rights Reserved.

Preprocessor
File Inclusion

A file can be included as follows:

 -include(File).
 -include_lib(File).
File, a string, is to point out a file. The contents of this file are included as is, at the position of the directive.

Include files are typically used for record and macro definitions that are shared by several modules. It is recommended to use the file name extension .hrl for include files.

File can start with a path component $VAR, for some string VAR. If that is the case, the value of the environment variable VAR as returned by os:getenv(VAR) is substituted for $VAR. If os:getenv(VAR) returns false, $VAR is left as is.

If the filename File is absolute (possibly after variable substitution), the include file with that name is included. Otherwise, the specified file is searched for in the following directories, and in this order:

1. The current working directory
2. The directory where the module is being compiled
3. The directories given by the include option

For details, see the erlc(1) manual page in ERTS and the compile(3) manual page in Compiler.

+

Examples:

 -include("my_records.hrl").
 -include("incdir/my_records.hrl").
 -include("/home/user/proj/my_records.hrl").
 -include("$PROJ_ROOT/my_records.hrl").
include_lib is similar to include, but is not to point out an absolute file. Instead, the first path component (possibly after variable substitution) is assumed to be the name of an application.

+

Example:

 -include_lib("kernel/include/file.hrl").

The code server uses code:lib_dir(kernel) to find the directory of the current (latest) version of Kernel, and then the subdirectory include is searched for the file file.hrl.
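As a sketch of how such an include is used in practice (the module name finfo is illustrative; file:read_file_info/1 and the #file_info{} record come from Kernel's file module):

```erlang
%% Uses the #file_info{} record defined in kernel/include/file.hrl.
-module(finfo).
-export([size/1]).

-include_lib("kernel/include/file.hrl").

%% Returns the size in bytes of the given file.
size(Filename) ->
    {ok, #file_info{size = Size}} = file:read_file_info(Filename),
    Size.
```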

Defining and Using Macros

A macro is defined as follows:

-define(Const, Replacement).
-define(Func(Var1,...,VarN), Replacement).

A macro definition must come before any usage of the macro.

If a macro is used in several modules, it is recommended that the macro definition is placed in an include file.

A macro is used as follows:

?Const
?Func(Arg1,...,ArgN)

Macros are expanded during compilation. A simple macro ?Const is replaced with Replacement.

+

Example:

-define(TIMEOUT, 200).
...
call(Request) ->
    server:call(refserver, Request, ?TIMEOUT).

This is expanded to:

call(Request) ->
    server:call(refserver, Request, 200).

A macro ?Func(Arg1,...,ArgN) is replaced with Replacement, where all occurrences of a variable Var from the macro definition are replaced with the corresponding argument Arg.

+

Example:

-define(MACRO1(X, Y), {a, X, b, Y}).
...
bar(X) ->
    ?MACRO1(a, b),
    ?MACRO1(X, 123)

This is expanded to:

bar(X) ->
    {a,a,b,b},
    {a,X,b,123}

-define(F0(), c).
-define(F1(A), A).
-define(C, m:f).

the following does not work:

f0() ->
    ?F0. % No, an empty list of arguments expected.

f1(A) ->
    ?F1(A, A). % No, exactly one argument expected.

f() ->
    ?C().

is expanded to

f() ->
    m:f().

-else.
Only allowed after an ifdef or ifndef directive. If that condition is false, the lines following else are evaluated instead.

-endif.
Specifies the end of an ifdef or ifndef directive.

The macro directives cannot be used inside functions.

Example:

-module(m). ... @@ -206,7 +215,7 @@ f() -> -endif. ... -

When trace output is desired, debug is to be defined when the module m is compiled:

 % erlc -Ddebug m.erl
or
 
 1> c(m, {d, debug}).
 {ok,m}
?LOG(Arg) is then expanded to a call to io:format/2 and provides the user with some simple trace output.
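Putting the pieces above together, a minimal sketch of the conditionally compiled module m could look as follows (the function f/1 is illustrative and not part of the original example):

```erlang
-module(m).
-export([f/1]).

%% When compiled with erlc -Ddebug, ?LOG(X) prints a trace line;
%% otherwise it expands to the harmless expression 'true'.
-ifdef(debug).
-define(LOG(X), io:format("{~p,~p}: ~p~n", [?MODULE, ?LINE, X])).
-else.
-define(LOG(X), true).
-endif.

f(X) ->
    ?LOG({calling, X}),
    X * 2.
```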

Stringifying Macro Arguments

The construction ??Arg, where Arg is a macro argument, is expanded to a string containing the tokens of the argument. This is similar to the #arg stringifying construction in C.

The feature was added in Erlang 5.0/OTP R7.

Example:

-define(TESTCALL(Call), io:format("Call ~s: ~w~n", [??Call, Call])).

io:format("Call ~s: ~w~n",["myfunction ( 1 , 2 )",myfunction(1,2)]),
io:format("Call ~s: ~w~n",["you : function ( 2 , 1 )",you:function(2,1)]).

That is, a trace output, with both the function called and the resulting value.

diff --git a/system/doc/reference_manual/modules.xml b/system/doc/reference_manual/modules.xml index 5cb0c11371..39c739a146 100644 --- a/system/doc/reference_manual/modules.xml +++ b/system/doc/reference_manual/modules.xml @@ -4,7 +4,7 @@
2003-2015 Ericsson AB. All Rights Reserved.

Module Syntax

Erlang code is divided into modules. A module consists of a sequence of attributes and function declarations, each terminated by period (.).

Example:

 -module(m).          % module attribute
 -export([fact/1]).   % module attribute
fact(N) when N>0 ->  % beginning of function declaration
     N * fact(N-1);   %  |
 fact(0) ->           %  |
     1.               % end of function declaration
For a description of function declarations, see Function Declaration Syntax.

Module Attributes

A module attribute defines a certain property of a module.

A module attribute consists of a tag and a value:

 -Tag(Value).

Tag must be an atom, while Value must be a literal term. As a convenience in user-defined attributes, if the literal term Value has the syntax Name/Arity (where Name is an atom and Arity a positive integer), the term Name/Arity is translated to {Name,Arity}.

Any module attribute can be specified. The attributes are stored in the compiled code and can be retrieved by calling Module:module_info(attributes), or by using the module beam_lib(3) in STDLIB.

Several module attributes have predefined meanings. Some of them have arity two, but user-defined module attributes must have arity one.

Pre-Defined Module Attributes

Pre-defined module attributes are to be placed before any function declaration.

-module(Module).

Module declaration, defining the name of the module. The name Module, an atom, is to be the same as the file name minus the extension .erl. Otherwise code loading does not work as intended.

This attribute is to be specified first and is the only mandatory attribute.

-export(Functions).

Exported functions. Specifies which of the functions defined within the module are visible from outside the module.

Functions is a list [Name1/Arity1, ..., NameN/ArityN], where each NameI is an atom and ArityI an integer.

-import(Module,Functions).

Imported functions. Can be called the same way as local functions, that is, without any module prefix.

Module, an atom, specifies which module to import functions from. Functions is a list similar as for export.

-compile(Options).

Compiler options. Options is a single option or a list of options. This attribute is added to the option list when compiling the module. See the compile(3) manual page in Compiler.

-vsn(Vsn).

Module version. Vsn is any literal term and can be retrieved using beam_lib:version/1, see the beam_lib(3) manual page in STDLIB.

If this attribute is not specified, the version defaults to the MD5 checksum of the module.

-on_load(Function).

This attribute names a function that is to be run automatically when a module is loaded. For more information, see Running a Function When a Module is Loaded.

@@ -130,9 +138,14 @@ fact(0) -> % |
 -behaviour(Behaviour).

The atom Behaviour gives the name of the behaviour, which can be a user-defined behaviour or one of the following OTP standard behaviours:

gen_server
gen_fsm
gen_event
supervisor

The spelling behavior is also accepted.

The callback functions of the module can be specified either directly by the exported function behaviour_info/1:

or by a -callback attribute for each callback function:

 -callback Name(Arguments) -> Result.
Here, Arguments is a list of zero or more arguments. The -callback attribute is to be preferred since the extra type information can be used by tools to produce documentation or find discrepancies.

@@ -153,7 +166,7 @@ behaviour_info(callbacks) -> Callbacks.
Record Definitions -

The same syntax as for module attributes is used by +

The same syntax as for module attributes is used for record definitions:

 -record(Record,Fields).
@@ -163,7 +176,7 @@ behaviour_info(callbacks) -> Callbacks.
- The Preprocessor + Preprocessor

The same syntax as for module attributes is used by the preprocessor, which supports file inclusion, macros, and conditional compilation:

@@ -171,7 +184,7 @@ behaviour_info(callbacks) -> Callbacks. -include("SomeFile.hrl"). -define(Macro,Replacement). -

Read more in The Preprocessor.

+

Read more in Preprocessor.

@@ -180,17 +193,17 @@ behaviour_info(callbacks) -> Callbacks. changing the pre-defined macros ?FILE and ?LINE:

 -file(File, Line).
-

This attribute is used by tools, such as Yecc, to inform the compiler that the source program is generated by another tool. It also indicates the correspondence of source files to lines of the original user-written file, from which the source program is produced.

Types and function specifications

A similar syntax as for module attributes is used for - specifying types and function specifications. + specifying types and function specifications:

 -type my_type() :: atom() | integer().
@@ -200,32 +213,36 @@ behaviour_info(callbacks) -> Callbacks.

The description is based on EEP8 - - Types and function specifications - which will not be further updated. + Types and function specifications, + which is not to be further updated.

Comments -

Comments may be placed anywhere in a module except within strings - and quoted atoms. The comment begins with the character "%", +

Comments can be placed anywhere in a module except within strings + and quoted atoms. A comment begins with the character "%", continues up to, but does not include the next end-of-line, and - has no effect. Note that the terminating end-of-line has + has no effect. Notice that the terminating end-of-line has the effect of white space.

- The module_info/0 and module_info/1 functions + module_info/0 and module_info/1 functions

The compiler automatically inserts the two special, exported - functions into each module: Module:module_info/0 and - Module:module_info/1. These functions can be called to - retrieve information about the module.

+ functions into each module:

+ + Module:module_info/0 + Module:module_info/1 + +

These functions can be called to retrieve information + about the module.

module_info/0 -

The module_info/0 function in each module returns +

The module_info/0 function in each module, returns a list of {Key,Value} tuples with information about the module. Currently, the list contain tuples with the following Keys: module, attributes, compile, @@ -235,7 +252,7 @@ behaviour_info(callbacks) -> Callbacks.

module_info/1 -

The call module_info(Key), where key is an atom, +

The call module_info(Key), where Key is an atom, returns a single piece of information about the module.

The following values are allowed for Key:

@@ -243,44 +260,46 @@ behaviour_info(callbacks) -> Callbacks. module -

Return an atom representing the module name.

+

Returns an atom representing the module name.

attributes -

Return a list of {AttributeName,ValueList} tuples, +

Returns a list of {AttributeName,ValueList} tuples, where AttributeName is the name of an attribute, - and ValueList is a list of values. Note: a given - attribute may occur more than once in the list with different + and ValueList is a list of values. Notice that a given + attribute can occur more than once in the list with different values if the attribute occurs more than once in the module.

-

The list of attributes will be empty if - the module has been stripped with - beam_lib(3).

+

The list of attributes becomes empty if + the module is stripped with the + beam_lib(3) + module (in STDLIB).

compile -

Return a list of tuples containing information about - how the module was compiled. This list will be empty if - the module has been stripped with - beam_lib(3).

+

Returns a list of tuples with information about + how the module was compiled. This list is empty if + the module has been stripped with the + beam_lib(3) + module (in STDLIB).

md5 -

Return a binary representing the MD5 checksum of the module.

+

Returns a binary representing the MD5 checksum of the module.

exports -

Return a list of {Name,Arity} tuples with +

Returns a list of {Name,Arity} tuples with all exported functions in the module.

functions -

Return a list of {Name,Arity} tuples with +

Returns a list of {Name,Arity} tuples with all functions in the module.

diff --git a/system/doc/reference_manual/patterns.xml b/system/doc/reference_manual/patterns.xml index 1611002fa1..2163583636 100644 --- a/system/doc/reference_manual/patterns.xml +++ b/system/doc/reference_manual/patterns.xml @@ -40,7 +40,7 @@ term. If the matching succeeds, any unbound variables in the pattern become bound. If the matching fails, a run-time error occurs.

-

Examples:

+

Examples:

 1> X.
 ** 1: variable 'X' is unbound **
diff --git a/system/doc/reference_manual/ports.xml b/system/doc/reference_manual/ports.xml
index 621af10624..e5dc99641b 100644
--- a/system/doc/reference_manual/ports.xml
+++ b/system/doc/reference_manual/ports.xml
@@ -4,7 +4,7 @@
 
   
2004-2015 Ericsson AB. All Rights Reserved.

ports.xml
-

Examples of how to use ports and port drivers are provided in Interoperability Tutorial. For information about the BIFs mentioned, see the erlang(3) manual page in ERTS.

Ports

Ports provide a byte-oriented interface to an external program. When a port has been created, Erlang can communicate with it by sending and receiving lists of bytes, including binaries.

-

The Erlang process creating a port is said to be the port owner, or the connected process of the port. All communication to and from the port must go through the port owner. If the port owner terminates, so does the port (and the external program, if it is written correctly).

The external program resides in another OS process. By default, it reads from standard input (file descriptor 0) and writes to standard output (file descriptor 1). The external program is to terminate when the port is closed.

Port Drivers -

It is also possible to write a driver in C according to certain +

It is possible to write a driver in C according to certain principles and dynamically link it to the Erlang runtime system. The linked-in driver looks like a port from the Erlang programmer's point of view and is called a port driver.

-

An erroneous port driver will cause the entire Erlang runtime +

An erroneous port driver causes the entire Erlang runtime system to leak memory, hang or crash.

-

Port drivers are documented in erl_driver(4), - driver_entry(1) and erl_ddll(3).

+

For information about port drivers, see the + erl_driver(4) + manual page in ERTS, + driver_entry(1) + manual page in ERTS, and + erl_ddll(3) + manual page in Kernel.

open_port(PortName, PortSettings): Returns a port identifier Port as the result of opening a new Erlang port. Messages can be sent to, and received from, a port identifier, just like a pid. Port identifiers can also be linked to using link/1, or registered under a name using register/2.

Table: Port Creation BIF

PortName is usually a tuple {spawn,Command}, where the string Command is the name of the external program. The external program runs outside the Erlang workspace, unless a port driver with the name Command is found. If Command is found, that driver is started.

PortSettings is a list of settings (options) for the port. The list typically contains at least a tuple {packet,N}, which specifies that data sent between the port and the external program are preceded by an N-byte length indicator. Valid values for N are 1, 2, or 4. If binaries are to be used instead of lists of bytes, the option binary must be included.

The port owner Pid can communicate with the port Port by sending and receiving messages. (In fact, any process can send the messages to the port, but the port owner must be identified in the message).

-

As of Erlang/OTP R16, messages sent to ports are delivered truly asynchronously. The underlying implementation previously delivered messages to ports synchronously. Message passing has, however, always been documented as an asynchronous operation. Hence, this is not to be an issue for an Erlang program communicating with ports, unless false assumptions about ports have been made.

In the following tables of examples, Data must be an I/O list. An I/O list is a binary or a (possibly deep) list of binaries or integers in the range 0..255:

{Pid,{command,Data}}: Sends Data to the port.
{Pid,close}: Closes the port. Unless the port is already closed, the port replies with {Port,closed} when all buffers have been flushed and the port really closes.
{Pid,{connect,NewPid}}: Sets the port owner of Port to NewPid. Unless the port is already closed, the port replies with {Port,connected} to the old port owner. Notice that the old port owner is still linked to the port, but the new port owner is not.

Table: Messages Sent to a Port

{Port,{data,Data}}: Data is received from the external program.
{Port,closed}: Reply to Port ! {Pid,close}.
{Port,connected}: Reply to Port ! {Pid,{connect,NewPid}}.
{'EXIT',Port,Reason}: If the port has terminated for some reason.

Table: Messages Received From a Port

Instead of sending and receiving messages, a number of BIFs can also be used:

port_command(Port,Data): Sends Data to the port.
port_close(Port): Closes the port.
port_connect(Port,NewPid): Sets the port owner of Port to NewPid. The old port owner Pid stays linked to the port and must call unlink(Port) if this is not desired.
erlang:port_info(Port,Item): Returns information as specified by Item.
erlang:ports(): Returns a list of all ports on the current node.

Table: Port BIFs
-

Some additional BIFs apply only to port drivers: port_control/3 and erlang:port_call/3.

diff --git a/system/doc/reference_manual/processes.xml b/system/doc/reference_manual/processes.xml index 95ae0672ec..32af6d4480 100644 --- a/system/doc/reference_manual/processes.xml +++ b/system/doc/reference_manual/processes.xml @@ -4,7 +4,7 @@
2003-2015 Ericsson AB. All Rights Reserved.
Processes

Erlang is designed for massive concurrency. Erlang processes are lightweight (grow and shrink dynamically) with small memory footprint, fast to create and terminate, and the scheduling overhead is low.

@@ -46,10 +46,10 @@ spawn(Module, Name, Args) -> pid() Args = [Arg1,...,ArgN] ArgI = term()

spawn creates a new process and returns the pid.

-

The new process will start executing in - Module:Name(Arg1,...,ArgN) where the arguments is +

The new process starts executing in + Module:Name(Arg1,...,ArgN) where the arguments are the elements of the (possible empty) Args argument list.

-


There exist a number of other spawn BIFs, for example, spawn/4 for spawning a process at another node.

@@ -59,19 +59,26 @@ spawn(Module, Name, Args) -> pid() BIFs for registering a process under a name. The name must be an atom and is automatically unregistered if the process terminates:

+ + BIF + Description + register(Name, Pid) Associates the name Name, an atom, with the process Pid. registered() - Returns a list of names which have been registered usingregister/2. + Returns a list of names that + have been registered using register/2. whereis(Name) - Returns the pid registered under Name, orundefinedif the name is not registered. + Returns the pid registered + under Name, or undefined if the name is not + registered. - Name Registration BIFs. + Name Registration BIFs
@@ -79,22 +86,27 @@ spawn(Module, Name, Args) -> pid() Process Termination

When a process terminates, it always terminates with an exit reason. The reason can be any term.

A process is said to terminate normally, if the exit reason is the atom normal. A process with no more code to execute terminates normally.

-

A process terminates with an exit reason {Reason,Stack} when a run-time error occurs. See Exit Reasons.

+

A process can terminate itself by calling one of the following BIFs:

exit(Reason)
erlang:error(Reason)
erlang:error(Reason, Args)
erlang:fault(Reason)
erlang:fault(Reason, Args)

The process then terminates with reason Reason for exit/1 or {Reason,Stack} for the others.

-

A process can also be terminated if it receives an exit signal with another exit reason than normal, see Error Handling.

Links

Two processes can be linked to each other. A link between two processes Pid1 and Pid2 is created by Pid1 calling the BIF link(Pid2) (or conversely). There also exist a number of spawn_link BIFs, which spawn and link to a process in one operation.

Links are bidirectional and there can only be one link between two processes. Repeated calls to link(Pid) have no effect.

A link can be removed by calling the BIF unlink(Pid).

Links are used to monitor the behaviour of other processes, see - Error Handling below.

+ Error Handling.
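A minimal sketch of creating a link while spawning (the child body is a placeholder of our own):

```erlang
start_worker() ->
    %% spawn_link/1 spawns and links in one atomic operation.
    Pid = spawn_link(fun() -> receive stop -> ok end end),
    %% A spawn/1 followed by link/1 is similar but not atomic:
    %% the child could terminate before link/1 is called.
    Pid.
```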

Error Handling

Erlang has a built-in feature for error handling between - processes. Terminating processes will emit exit signals to all - linked processes, which may terminate as well or handle the exit + processes. Terminating processes emit exit signals to all + linked processes, which can terminate as well or handle the exit in some way. This feature can be used to build hierarchical program structures where some processes are supervising other - processes, for example restarting them if they terminate + processes, for example, restarting them if they terminate abnormally.

-

Refer to OTP Design Principles for more information about - OTP supervision trees, which uses this feature.

+

See + OTP Design Principles for more information about + OTP supervision trees, which use this feature.

Emitting Exit Signals -

When a process terminates, it will terminate with an exit reason as explained in Process Termination above. This exit reason is emitted in +

When a process terminates, it terminates with an + exit reason as explained in + Process Termination. This exit reason is emitted in an exit signal to all linked processes.

A process can also call the function exit(Pid,Reason). - This will result in an exit signal with exit reason + This results in an exit signal with exit reason Reason being emitted to Pid, but does not affect the calling process.

@@ -156,14 +171,14 @@ spawn(Module, Name, Args) -> pid()

A process can be set to trap exit signals by calling:

 process_flag(trap_exit, true)
-

When a process is trapping exits, it will not terminate when +

When a process is trapping exits, it does not terminate when an exit signal is received. Instead, the signal is transformed - into a message {'EXIT',FromPid,Reason} which is put into - the mailbox of the process just like a regular message.

+ into a message {'EXIT',FromPid,Reason}, which is put into + the mailbox of the process, just like a regular message.

An exception to the above is if the exit reason is kill, - that is if exit(Pid,kill) has been called. This will - unconditionally terminate the process, regardless of if it is - trapping exit signals or not.

+ that is, if exit(Pid,kill) has been called. This
+ unconditionally terminates the process, regardless of whether it is
+ trapping exit signals.
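The trapping behaviour can be sketched as follows; the watching process survives the linked child's termination and receives an 'EXIT' message instead (names are ours):

```erlang
watch_child() ->
    process_flag(trap_exit, true),
    Pid = spawn_link(fun() -> exit(whatever) end),
    receive
        {'EXIT', Pid, Reason} ->
            %% Reason is the atom whatever; the watcher keeps running.
            {child_exited, Reason}
    end.
```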

@@ -180,12 +195,12 @@ process_flag(trap_exit, true)

If Pid2 does not exist, the 'DOWN' message is sent immediately with Reason set to noproc.

Monitors are unidirectional. Repeated calls to - erlang:monitor(process, Pid) will create several, - independent monitors and each one will send a 'DOWN' message when + erlang:monitor(process, Pid) creates several + independent monitors, and each one sends a 'DOWN' message when Pid terminates.

A monitor can be removed by calling erlang:demonitor(Ref).
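A sketch of using a monitor to wait for a process to terminate (the function name is ours):

```erlang
await_down(Pid) ->
    Ref = erlang:monitor(process, Pid),
    receive
        {'DOWN', Ref, process, Pid, Reason} ->
            Reason   % noproc if Pid did not exist when monitored
    end.
```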

-

It is possible to create monitors for processes with registered +

Monitors can be created for processes with registered names, also at other nodes.

diff --git a/system/doc/reference_manual/records.xml b/system/doc/reference_manual/records.xml index 04766531df..3294255af9 100644 --- a/system/doc/reference_manual/records.xml +++ b/system/doc/reference_manual/records.xml @@ -32,16 +32,19 @@ elements. It has named fields and is similar to a struct in C. Record expressions are translated to tuple expressions during compilation. Therefore, record expressions are not understood by - the shell unless special actions are taken. See shell(3) - for details.

-

More record examples can be found in Programming Examples.

+ the shell unless special actions are taken. For details, see the + shell(3) + manual page in STDLIB.

+

More examples are provided in + + Programming Examples.

Defining Records

A record definition consists of the name of the record, followed by the field names of the record. Record and field names must be atoms. Each field can be given an optional default value. - If no default value is supplied, undefined will be used.

+ If no default value is supplied, undefined is used.

 -record(Name, {Field1 [= Value1],
                ...
@@ -60,17 +63,18 @@
       the corresponding expression ExprI:

 #Name{Field1=Expr1,...,FieldK=ExprK}
-

The fields may be in any order, not necessarily the same order as +

The fields can be in any order, not necessarily the same order as in the record definition, and fields can be omitted. Omitted - fields will get their respective default value instead.

-

If several fields should be assigned the same value, + fields get their respective default value instead.

+

If several fields are to be assigned the same value, the following construction can be used:

 #Name{Field1=Expr1,...,FieldK=ExprK, _=ExprL}
-

Omitted fields will then get the value of evaluating ExprL +

Omitted fields then get the value of evaluating ExprL instead of their default values. This feature was added in Erlang 5.1/OTP R8 and is primarily intended to be used to create - patterns for ETS and Mnesia match functions. Example:

+ patterns for ETS and Mnesia match functions.

+

Example:

 -record(person, {name, phone, address}).
 
@@ -84,13 +88,13 @@ lookup(Name, Tab) ->
     Accessing Record Fields
     
 Expr#Name.Field
-

Returns the value of the specified field. Expr should +

Returns the value of the specified field. Expr is to evaluate to a Name record.

The following expression returns the position of the specified field in the tuple representation of the record:

 #Name.Field
-

Example:

+

Example:

 -record(person, {name, phone, address}).
 
@@ -104,8 +108,8 @@ lookup(Name, List) ->
     Updating Records
     
 Expr#Name{Field1=Expr1,...,FieldK=ExprK}
-

Expr should evaluate to a Name record. Returns a - copy of this record, with the value of each specified field +

Expr is to evaluate to a Name record. A + copy of this record is returned, with the value of each specified field FieldI changed to the value of evaluating the corresponding expression ExprI. All other fields retain their old values.
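Using the person record from the earlier examples, an update can be sketched as:

```erlang
-record(person, {name, phone, address}).

%% Returns a copy of P with phone changed; name and address keep
%% their old values.
set_phone(P, NewPhone) when is_record(P, person) ->
    P#person{phone = NewPhone}.
```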

@@ -116,17 +120,17 @@ Expr#Name{Field1=Expr1,...,FieldK=ExprK}
Records in Guards

Since record expressions are expanded to tuple expressions, creating records and accessing record fields are allowed in - guards. However all subexpressions, for example for field - initiations, must of course be valid guard expressions as well. - Examples:

+ guards. However, all subexpressions, for example, for field
+ initiations, must be valid guard expressions as well.

+

Examples:

handle(Msg, State) when Msg==#msg{to=void, no=3} -> ... handle(Msg, State) when State#state.running==true -> ... -

There is also a type test BIF is_record(Term, RecordTag). - Example:

+

There is also a type test BIF is_record(Term, RecordTag).

+

Example:

 is_person(P) when is_record(P, person) ->
     true;
@@ -136,18 +140,18 @@ is_person(_P) ->
 
   
Records in Patterns -

A pattern that will match a certain record is created the same +

A pattern that matches a certain record is created in the same way as a record is created:

 #Name{Field1=Expr1,...,FieldK=ExprK}
-

In this case, one or more of Expr1...ExprK may be +

In this case, one or more of Expr1...ExprK can be unbound variables.
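A sketch of a record pattern in a function head, again with the person record; Name and Phone are unbound variables in the pattern:

```erlang
-record(person, {name, phone, address}).

contact(#person{name = Name, phone = Phone}) ->
    {Name, Phone}.
```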

- Nested records -

Beginning with R14 parentheses when accessing or updating nested - records can be omitted. Assuming we have the following record + Nested Records +

Beginning with Erlang/OTP R14, parentheses when accessing or updating nested + records can be omitted. Assume the following record definitions:

 -record(nrec0, {name = "nested0"}).
@@ -156,12 +160,12 @@ is_person(_P) ->
 
 N2 = #nrec2{},
     
-

Before R14 you would have needed to use parentheses as following:

+

Before R14, parentheses were needed as follows:

 "nested0" = ((N2#nrec2.nrec1)#nrec1.nrec0)#nrec0.name,
 N0n = ((N2#nrec2.nrec1)#nrec1.nrec0)#nrec0{name = "nested0a"},
     
-

Since R14 you can also write:

+

Since R14, the following can also be written:

 "nested0" = N2#nrec2.nrec1#nrec1.nrec0#nrec0.name,
 N0n = N2#nrec2.nrec1#nrec1.nrec0#nrec0{name = "nested0a"},
@@ -170,23 +174,23 @@ N0n = N2#nrec2.nrec1#nrec1.nrec0#nrec0{name = "nested0a"},
Internal Representation of Records

Record expressions are translated to tuple expressions during - compilation. A record defined as

+ compilation. A record defined as:

 -record(Name, {Field1,...,FieldN}).
-

is internally represented by the tuple

+

is internally represented by the tuple:

 {Name,Value1,...,ValueN}
-

where each ValueI is the default value for FieldI.

+

Here each ValueI is the default value for FieldI.

To each module using records, a pseudo function is added during compilation to obtain information about records:

 record_info(fields, Record) -> [Field]
 record_info(size, Record) -> Size
-

Size is the size of the tuple representation, that is +

Size is the size of the tuple representation, that is, one more than the number of fields.

In addition, #Record.Name returns the index in the tuple - representation of Name of the record Record. - Name must be an atom.

+ representation of Name of the record Record.

+

Name must be an atom.
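A sketch tying the internal representation together; the values follow from the definitions above:

```erlang
-record(person, {name, phone, address}).

info() ->
    %% Tuple representation: {person, Name, Phone, Address}
    4 = record_info(size, person),             % 1 (tag) + 3 fields
    [name, phone, address] = record_info(fields, person),
    2 = #person.name,                          % index of name in the tuple
    ok.
```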

diff --git a/system/doc/reference_manual/typespec.xml b/system/doc/reference_manual/typespec.xml index d1584d2b98..0891dbaa9b 100644 --- a/system/doc/reference_manual/typespec.xml +++ b/system/doc/reference_manual/typespec.xml @@ -34,43 +34,46 @@

Erlang is a dynamically typed language. Still, it comes with a notation for declaring sets of Erlang terms to form a particular - type, effectively forming a specific sub-type of the set of all + type. This effectively forms specific subtypes of the set of all Erlang terms.

Subsequently, these types can be used to specify types of record fields - and the argument and return types of functions. -

-

- Type information can be used to document function interfaces, - provide more information for bug detection tools such as Dialyzer, - and can be exploited by documentation tools such as Edoc for - generating program documentation of various forms. - It is expected that the type language described in this document will - supersede and replace the purely comment-based @type and - @spec declarations used by Edoc. -

+ and also the argument and return types of functions. +

+

+ Type information can be used for the following:

+ + To document function interfaces + To provide more information for bug detection tools, + such as Dialyzer + To be exploited by documentation tools, such as EDoc, for + generating program documentation of various forms + +

It is expected that the type language described in this section + supersedes and replaces the purely comment-based @type and + @spec declarations used by EDoc.

Types and their Syntax

Types describe sets of Erlang terms. - Types consist and are built from a set of predefined types - (e.g. integer(), atom(), pid(), ...) - described below. - Predefined types represent a typically infinite set of Erlang terms which + Types consist of, and are built from, a set of predefined types, + for example, integer(), atom(), and pid(). + Predefined types represent a typically infinite set of Erlang terms that belong to this type. For example, the type atom() stands for the set of all Erlang atoms.

- For integers and atoms, we allow for singleton types (e.g. the integers
- -1 and 42 or the atoms 'foo' and 'bar').
+ For integers and atoms, singleton types are allowed; for example,
+ the integers -1 and 42, or the atoms 'foo' and 'bar'.
 All other types are built using unions of either predefined types or
 singleton types. In a type union between a type and one
- of its sub-types the sub-type is absorbed by the super-type and
- the union is subsequently treated as if the sub-type was not a
+ of its subtypes, the subtype is absorbed by the supertype. Thus,
+ the union is then treated as if the subtype was not a
 constituent of the union. For example, the type union:

  atom() | 'bar' | integer() | 42
@@ -79,13 +82,13 @@

  atom() | integer()

- Because of sub-type relations that exist between types, types - form a lattice where the topmost element, any(), denotes + Because of subtype relations that exist between types, types + form a lattice where the top-most element, any(), denotes the set of all Erlang terms and the bottom-most element, none(), denotes the empty set of terms.

- The set of predefined types and the syntax for types is given below: + The set of predefined types and the syntax for types follows:


     The general form of bitstrings is <<_:M, _:_*N>>,
     where M and N are positive integers. It denotes a
-    bitstring that is M + (k*N) bits long (i.e., a bitstring that
+    bitstring that is M + (k*N) bits long (that is, a bitstring that
     starts with M bits and continues with k segments of
     N bits each, where k is also a positive integer).
     The notations <<_:_*N>>, <<_:M>>,
     and <<>> are convenient shorthands for the cases
-    that M, N, or both, respectively, are zero.
+    that M or N, or both, are zero.
   

Because lists are commonly used, they have shorthand type notations. The types list(T) and nonempty_list(T) have the shorthands [T] and [T,...], respectively. - The only difference between the two shorthands is that [T] may be an - empty list but [T,...] may not. + The only difference between the two shorthands is that [T] can be an + empty list but [T,...] cannot.

- Notice that the shorthand for list(), i.e. the list of + Notice that the shorthand for list(), that is, the list of elements of unknown type, is [_] (or [any()]), not []. The notation [] specifies the singleton type for the empty list.

@@ -172,7 +175,7 @@

- Built-in typeDefined as + Built-in typeDefined as term()any() @@ -237,6 +240,7 @@ no_return()none() + Built-in types, predefined aliases

In addition, the following three built-in types exist and can be @@ -245,7 +249,8 @@

- Built-in typeCould be thought defined by the syntax
+ Built-in type
+ Can be thought of as defined by the syntax

 non_neg_integer()0..
@@ -256,6 +261,7 @@
 neg_integer()..-1

+ Additional built-in types

@@ -278,109 +284,118 @@ define the set of Erlang terms one would expect.

- Also for convenience, we allow for record notation to be used. - Records are just shorthands for the corresponding tuples. + Also for convenience, record notation is allowed to be used. + Records are shorthands for the corresponding tuples:

   Record :: #Erlang_Atom{}
           | #Erlang_Atom{Fields}

- Records have been extended to possibly contain type information. - This is described in the sub-section "Type information in record declarations" below. + Records are extended to possibly contain type information. + This is described in + Type Information in Record Declarations.

-

Map types, both map() and #{ ... }, are considered experimental during OTP 17.

-

No type information of maps pairs, only the containing map types, are used by Dialyzer in OTP 17.

+

Map types, both map() and #{...}, + are considered experimental during OTP 17.

+

No type information of maps pairs, only the containing map types, + are used by Dialyzer in OTP 17.

- +
- Type declarations of user-defined types + Type Declarations of User-Defined Types

As seen, the basic syntax of a type is an atom followed by closed - parentheses. New types are declared using '-type' and '-opaque' + parentheses. New types are declared using -type and -opaque compiler attributes as in the following:

   -type my_struct_type() :: Type.
   -opaque my_opaq_type() :: Type.

- where the type name is an atom ('my_struct_type' in the above) - followed by parentheses. Type is a type as defined in the + The type name is the atom my_struct_type, + followed by parentheses. Type is a type as defined in the previous section. - A current restriction is that Type can contain only predefined types, - or user-defined types which are either module-local (i.e., with a - definition that is present in the code of the module) or are remote - types (i.e., types defined in and exported by other modules; see below). - For module-local types, the restriction that their definition + A current restriction is that Type can contain + only predefined types, + or user-defined types which are either of the following: +

+
+ Module-local type, that is, with a
+ definition that is present in the code of the module
+ Remote type, that is, a type defined in, and exported by,
+ other modules; described below.
+
+

For module-local types, the restriction that their definition exists in the module is enforced by the compiler and results in a - compilation error. (A similar restriction currently exists for records.) -

+ compilation error. (A similar restriction currently exists for records.)

 Type declarations can also be parameterized by including type variables
 between the parentheses. The syntax of type variables is the same as
- Erlang variables (starts with an upper case letter).
- Naturally, these variables can - and should - appear on the RHS of the
- definition. A concrete example appears below:
+ Erlang variables, that is, starts with an upper-case letter.
+ Naturally, these variables can - and are to - appear on the RHS of the
+ definition. A concrete example follows:

   -type orddict(Key, Val) :: [{Key, Val}].
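A sketch of declaring such a parameterized type and using it in a spec (the module and function names are ours):

```erlang
-module(dict_demo).
-export([find/2]).

-type orddict(Key, Val) :: [{Key, Val}].

-spec find(Key, orddict(Key, Val)) -> {ok, Val} | error.
find(Key, [{Key, Val} | _]) -> {ok, Val};
find(Key, [_ | Rest])       -> find(Key, Rest);
find(_, [])                 -> error.
```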

- A module can export some types in order to declare that other modules + A module can export some types to declare that other modules are allowed to refer to them as remote types. - This declaration has the following form: + This declaration has the following form:

   -export_type([T1/A1, ..., Tk/Ak]).
- where the Ti's are atoms (the name of the type) and the Ai's are their - arguments. An example is given below: +

Here the Ti's are atoms (the name of the type) and the Ai's are their
+ arguments.

+

Example:

   -export_type([my_struct_type/0, orddict/2]).
- Assuming that these types are exported from module 'mod' then - one can refer to them from other modules using remote type expressions - like those below: +

Assuming that these types are exported from module 'mod', + you can refer to them from other modules using remote type expressions + like the following:

   mod:my_struct_type()
   mod:orddict(atom(), term())
- One is not allowed to refer to types which are not declared as exported. +

It is not allowed to refer to types that are not declared as exported.

 Types declared as opaque represent sets of terms whose
- structure is not supposed to be visible in any way outside of
- their defining module (i.e., only the module defining them is
- allowed to depend on their term structure). Consequently, such
+ structure is not supposed to be visible from outside of
+ their defining module. That is, only the module defining them
+ is allowed to depend on their term structure. Consequently, such
 types do not make much sense as module local - module local
- types are not accessible by other modules anyway - and should
- always be exported.
+ types are not accessible by other modules anyway - and are
+ always to be exported.

- - +
- Type information in record declarations + + Type Information in Record Declarations

- The types of record fields can be specified in the declaration of the - record. The syntax for this is: + The types of record fields can be specified in the declaration of the + record. The syntax for this is as follows:

   -record(rec, {field1 :: Type1, field2, field3 :: Type3}).

For fields without type annotations, their type defaults to any(). - I.e., the above is a shorthand for: + That is, the previous example is a shorthand for the following:

   -record(rec, {field1 :: Type1, field2 :: any(), field3 :: Type3}).

In the presence of initial values for fields, - the type must be declared after the initialization as in the following: + the type must be declared after the initialization, as follows:

   -record(rec, {field1 = [] :: Type1, field2, field3 = 42 :: Type3}).

- Naturally, the initial values for fields should be compatible - with (i.e. a member of) the corresponding types. - This is checked by the compiler and results in a compilation error - if a violation is detected. For fields without initial values, - the singleton type 'undefined' is added to all declared types. + The initial values for fields are to be compatible + with (that is, a member of) the corresponding types. + This is checked by the compiler and results in a compilation error + if a violation is detected. For fields without initial values, + the singleton type 'undefined' is added to all declared types. In other words, the following two record declarations have identical effects:

@@ -398,13 +413,13 @@

Any record, containing type information or not, once defined, - can be used as a type using the syntax: + can be used as a type using the following syntax:

  #rec{}

In addition, the record fields can be further specified when using - a record type by adding type information about the field in - the following manner: + a record type by adding type information about the field + as follows:

  #rec{some_field :: Type}

@@ -414,16 +429,16 @@

- Specifications for functions + Specifications for Functions

A specification (or contract) for a function is given using the new - compiler attribute '-spec'. The general format is as follows: + compiler attribute -spec. The general format is as follows:

   -spec Module:Function(ArgType1, ..., ArgTypeN) -> ReturnType.

- The arity of the function has to match the number of arguments, - or else a compilation error occurs. + The arity of the function must match the number of arguments, + else a compilation error occurs.

This form can also be used in header files (.hrl) to declare type @@ -432,7 +447,7 @@ explicitly) import these functions.

- For most uses within a given module, the following shorthand suffices:
+ Within a given module, the following shorthand suffices in most cases:

   -spec Function(ArgType1, ..., ArgTypeN) -> ReturnType.
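A concrete instance of this shorthand form (the function is ours, not from the manual):

```erlang
-spec rectangle_area(number(), number()) -> number().
rectangle_area(Width, Height) ->
    Width * Height.
```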
@@ -450,8 +465,8 @@ ; (T4, T5) -> T6.

A current restriction, which currently results in a warning - (OBS: not an error) by the compiler, is that the domains of - the argument types cannot be overlapping. + (not an error) by the compiler, is that the domains of + the argument types cannot overlap. For example, the following specification results in a warning:

@@ -466,41 +481,43 @@
     
   -spec id(X) -> X.

- However, note that the above specification does not restrict the input - and output type in any way. - We can constrain these types by guard-like subtype constraints + Notice that the above specification does not restrict the input + and output type in any way. + These types can be constrained by guard-like subtype constraints and provide bounded quantification:

  -spec id(X) -> X when X :: tuple().

Currently, the :: constraint (read as is_subtype) is - the only guard constraint which can be used in the 'when' + the only guard constraint that can be used in the 'when' part of a '-spec' attribute.

- The above function specification, using multiple occurrences of - the same type variable, provides more type information than the - function specification below where the type variables are missing: + The above function specification uses multiple occurrences of + the same type variable. That provides more type information than the + following function specification, where the type variables are missing:

  -spec id(tuple()) -> tuple().

The latter specification says that the function takes some tuple - and returns some tuple, while the one with the X type + and returns some tuple. The specification with the X type variable specifies that the function takes a tuple and returns the same tuple.

- However, it's up to the tools that process the specs to choose
- whether to take this extra information into account or ignore it.
+ However, it is up to the tools that process the specifications
+ to choose whether to take this extra information into account
+ or not.

- The scope of an :: constraint is the - (...) -> RetType - specification after which it appears. To avoid confusion, - we suggest that different variables are used in different - constituents of an overloaded contract as in the example below: + The scope of a :: constraint is the + (...) -> RetType + specification after which it appears. To avoid confusion, + it is suggested that different variables are used in different + constituents of an overloaded contract, as shown in the + following example:

   -spec foo({X, integer()}) -> X when X :: atom()
@@ -511,19 +528,20 @@
       

  -spec id(X) -> X when is_subtype(X, tuple()).

- but its use is discouraged. It will be taken out in a future + but its use is discouraged. It will be removed in a future Erlang/OTP release.

Some functions in Erlang are not meant to return; either because they define servers or because they are used to - throw exceptions as the function below: + throw exceptions, as in the following function:

  my_error(Err) -> erlang:throw({error, Err}).

- For such functions we recommend the use of the special no_return() - type for their "return", via a contract of the form: + For such functions, it is recommended to use the special + no_return() type for their "return", through a contract + of the following form:

  -spec my_error(term()) -> no_return().
-- cgit v1.2.3 From 0c20078ff0fbad9066c8dd4ebcd6faa0b4f31b42 Mon Sep 17 00:00:00 2001 From: Hans Bolinder Date: Thu, 12 Mar 2015 15:35:13 +0100 Subject: Update System Principles Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Hans Bolinder. --- system/doc/system_principles/create_target.xmlsrc | 367 ++++++++++----------- system/doc/system_principles/error_logging.xml | 38 ++- system/doc/system_principles/system_principles.xml | 186 ++++++----- system/doc/system_principles/upgrade.xml | 98 +++--- system/doc/system_principles/versions.xml | 288 ++++++++-------- 5 files changed, 482 insertions(+), 495 deletions(-) (limited to 'system/doc') diff --git a/system/doc/system_principles/create_target.xmlsrc b/system/doc/system_principles/create_target.xmlsrc index a8ee2d1245..7c566229ac 100644 --- a/system/doc/system_principles/create_target.xmlsrc +++ b/system/doc/system_principles/create_target.xmlsrc @@ -31,55 +31,54 @@ A create_target.xml + -
- Introduction -

When creating a system using Erlang/OTP, the most simple way is - to install Erlang/OTP somewhere, install the application specific +

When creating a system using Erlang/OTP, the simplest way is + to install Erlang/OTP somewhere, install the application-specific code somewhere else, and then start the Erlang runtime system, - making sure the code path includes the application specific code.

-

Often it is not desirable to use an Erlang/OTP system as is. A - developer may create new Erlang/OTP compliant applications for a + making sure the code path includes the application-specific code.

+

It is often not desirable to use an Erlang/OTP system as is. A + developer can create new Erlang/OTP-compliant applications for a particular purpose, and several original Erlang/OTP applications - may be irrelevant for the purpose in question. Thus, there is a + can be irrelevant for the purpose in question. Thus, there is a need to be able to create a new system based on a given - Erlang/OTP system, where dispensable applications are removed, - and a set of new applications are included. Documentation and + Erlang/OTP system, where dispensable applications are removed + and new applications are included. Documentation and source code is irrelevant and is therefore not included in the new system.

-

This chapter is about creating such a system, which we call a +

This chapter is about creating such a system, which is called a target system.

-

In the following sections we consider creating target systems with - different requirements of functionality:

+

The following sections deal with target systems + with different requirements of functionality:

- a basic target system that can be started by - calling the ordinary erl script, - a simple target system where also code - replacement in run-time can be performed, and - an embedded target system where there is also + A basic target system that can be started by + calling the ordinary erl script. + A simple target system where also code + replacement in runtime can be performed. + An embedded target system where there is also support for logging output from the system to file for later inspection, and where the system can be started automatically - at boot time. + at boot time. -

We only consider the case when Erlang/OTP is running on a UNIX - system.

-

In the sasl application there is an example Erlang - module target_system.erl that contains functions for - creating and installing a target system. This module is used in - the examples below, and the source code of the module is listed - at the end of this chapter.

-
+

Only the case when Erlang/OTP is running on a
+ UNIX system is considered here.

+

The sasl application includes the example Erlang
+ module target_system.erl, which contains functions for
+ creating and installing a target system. This module is used in
+ the following examples. The source code of the module is listed
+ in
+ Listing of target_system.erl.

Creating a Target System

It is assumed that you have a working Erlang/OTP system structured - according to the OTP Design Principles.

-

Step 1. First create a .rel file (see rel(4)) that specifies the erts - version and lists all applications that should be included in the - new basic target system. An example is the following - mysystem.rel file:

+ according to the OTP design principles.

+

Step 1. Create a .rel file (see the + rel(4) manual page in + SASL), which specifies the ERTS version and lists + all applications that are to be included in the new basic target + system. An example is the following mysystem.rel file:

%% mysystem.rel {release, @@ -91,23 +90,23 @@ {pea, "1.0"}]}.

The listed applications are not only original Erlang/OTP applications but possibly also new applications that you have - written yourself (here exemplified by the application - pea).

-

Step 2. From the directory where the mysystem.rel - file reside, start the Erlang/OTP system:

+ written (here exemplified by the application Pea (pea)).

+

Step 2. Start Erlang/OTP from the directory where + the mysystem.rel file resides:

 os> erl -pa /home/user/target_system/myapps/pea-1.0/ebin
-

where also the path to the pea-1.0 ebin directory is - provided.

-

Step 3. Now create the target system:

+

Here also the path to the pea-1.0 ebin directory is + provided.

+

Step 3. Create the target system:

 1> target_system:create("mysystem").
-

The target_system:create/1 function does the following:

+

The function target_system:create/1 performs the + following:

- Reads the mysystem.rel file, and creates a new file - plain.rel which is identical to former, except that it - only lists the kernel and stdlib applications. - From the mysystem.rel and plain.rel files + Reads the file mysystem.rel and creates a new file + plain.rel that is identical to the former, except that it + only lists the Kernel and STDLIB applications. + From the files mysystem.rel and plain.rel creates the files mysystem.script, mysystem.boot, plain.script, and plain.boot through a call to @@ -124,26 +123,26 @@ releases/mysystem.rel lib/kernel-2.16.4/ lib/stdlib-1.19.4/ lib/sasl-2.3.4/ -lib/pea-1.0/
+lib/pea-1.0/

The file releases/FIRST/start.boot is a copy of mysystem.boot.

The release resource file mysystem.rel is duplicated in
the tar file. Originally, this file was only stored in
the releases directory to make it possible
for the release_handler to extract this file
separately. After unpacking the tar file, release_handler
would automatically copy the file to releases/FIRST.
However, sometimes the tar file is unpacked without involving
the release_handler (for example, when unpacking the
first target system). The file is therefore now instead
duplicated in the tar file so no manual copying is
needed.

- Creates the temporary directory tmp and extracts the tar file - mysystem.tar.gz into that directory. - Deletes the erl and start files from - tmp/erts-5.10.4/bin. These files will be created again from + Creates the temporary directory tmp and extracts + the tar file mysystem.tar.gz into that directory. + Deletes the files erl and start from + tmp/erts-5.10.4/bin. These files are created again from source when installing the release. Creates the directory tmp/bin. Copies the previously created file plain.boot to @@ -151,31 +150,31 @@ lib/pea-1.0/ Copies the files epmd, run_erl, and to_erl from the directory tmp/erts-5.10.4/bin to the directory tmp/bin. - Creates the directory tmp/log, which will be used + Creates the directory tmp/log, which is used if the system is started as embedded with the bin/start script. Creates the file tmp/releases/start_erl.data with the contents "5.10.4 FIRST". This file is to be passed as data - file to the start_erl script. - + file to the start_erl script. Recreates the file mysystem.tar.gz from the directories - in the directory tmp, and removes tmp. + in the directory tmp and removes tmp.
Installing a Target System

Step 4. Install the created target system in a - suitable directory.

+ suitable directory.

 2> target_system:install("mysystem", "/usr/local/erl-target").
-

The function target_system:install/2 does the following: +

The function target_system:install/2 performs the following:

Extracts the tar file mysystem.tar.gz into the target directory /usr/local/erl-target. - In the target directory reads the file releases/start_erl.data - in order to find the Erlang runtime system version ("5.10.4"). + In the target directory reads the file + releases/start_erl.data to find the Erlang runtime system + version ("5.10.4"). Substitutes %FINAL_ROOTDIR% and %EMU% for /usr/local/erl-target and beam, respectively, in the files erl.src, start.src, and @@ -184,97 +183,102 @@ lib/pea-1.0/ start, and run_erl in the target bin directory. Finally the target releases/RELEASES file is created - from data in the releases/mysystem.rel file. + from data in the file releases/mysystem.rel.
Starting a Target System -

Now we have a target system that can be started in various ways.

-

We start it as a basic target system by invoking

+

Now we have a target system that can be started in various ways.
We start it as a basic target system by invoking:

 os> /usr/local/erl-target/bin/erl
-

Here only the Kernel and STDLIB applications are
started, that is, the system is started as an ordinary development
system. Only two files are needed for all this to work:

bin/erl (obtained from erts-5.10.4/bin/erl.src)
bin/start.boot (a copy of plain.boot)

We can also start a distributed system (requires bin/epmd).

To start all applications specified in the original - mysystem.rel file, use the -boot flag as follows:

+ mysystem.rel file, use flag -boot as follows:

 os> /usr/local/erl-target/bin/erl -boot /usr/local/erl-target/releases/FIRST/start
-

We start a simple target system as above. The only difference - is that also the file releases/RELEASES is present for - code replacement in run-time to work.

-

To start an embedded target system the shell script - bin/start is used. That shell script calls +

We start a simple target system as above. The only + difference is that also the file releases/RELEASES is + present for code replacement in runtime to work.
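To convince yourself that the node is running what mysystem.rel specifies, the running applications can be listed from the Erlang shell of the started system; application:which_applications/0 is a standard Kernel function:

```erlang
%% In the shell of the started target system: lists the running
%% applications with their descriptions and versions.
1> application:which_applications().
```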

+

To start an embedded target system, the shell script + bin/start is used. The script calls bin/run_erl, which in turn calls bin/start_erl (roughly, start_erl is an embedded variant of - erl).

+ erl).

The shell script start, which is generated from
erts-5.10.4/bin/start.src during installation, is only an
example. Edit it to suit your needs. Typically it is
executed when the UNIX system boots.

run_erl is a wrapper that provides logging of output from
the runtime system to file. It also provides a simple mechanism
for attaching to the Erlang shell (to_erl).

-

start_erl requires the root directory - ("/usr/local/erl-target"), the releases directory - ("/usr/local/erl-target/releases"), and the location of - the start_erl.data file. It reads the run-time system - version ("5.10.4") and release version ("FIRST") from - the start_erl.data file, starts the run-time system of the - version found, and provides -boot flag specifying the boot - file of the release version found - ("releases/FIRST/start.boot").

-

start_erl also assumes that there is sys.config in - release version directory ("releases/FIRST/sys.config"). That - is the topic of the next section (see below).

-

The start_erl shell script should normally not be +

start_erl requires:

The root directory ("/usr/local/erl-target")
The releases directory ("/usr/local/erl-target/releases")
The location of the file start_erl.data

It performs the following:

Reads the runtime system version ("5.10.4") and
release version ("FIRST") from the file start_erl.data.
Starts the runtime system of the version found.
Provides the flag -boot specifying the boot file of the
release version found ("releases/FIRST/start.boot").

start_erl also assumes that there is sys.config + in the release version directory ("releases/FIRST/sys.config"). + That is the topic of the next section.

+

The start_erl shell script is normally not to be altered by the user.

System Configuration Parameters -

As was pointed out above start_erl requires a - sys.config in the release version directory - ("releases/FIRST/sys.config"). If there is no such a - file, the system start will fail. Hence such a file has to - be added as well.

-

-

If you have system configuration data that are neither file - location dependent nor site dependent, it may be convenient to - create the sys.config early, so that it becomes a part of +

As was mentioned in the previous section, start_erl + requires a sys.config in the release version directory + ("releases/FIRST/sys.config"). If there is no such + file, the system start fails. Such a file must therefore + also be added.

+

If you have system configuration data that is neither + file-location-dependent nor site-dependent, it can be convenient + to create sys.config early, so it becomes part of the target system tar file created by - target_system:create/1. In fact, if you create, in the - current directory, not only the mysystem.rel file, but - also a sys.config file, that latter file will be tacitly + target_system:create/1. In fact, if you in the + current directory create not only the file mysystem.rel, + but also file sys.config, the latter file is tacitly put in the appropriate directory.

- Differences from the Install Script -

The above install/2 procedure differs somewhat from that + Differences From the Install Script +

The previous install/2 procedure differs somewhat from that
of the ordinary Install shell script. In fact, create/1
makes the release package as complete as possible, and leaves it to
the install/2 procedure to finish by only considering
location-dependent files.

Creating the Next Version - -

- In this example the pea application has been changed, and - so are erts, kernel, stdlib and - sasl. -

- -

- Step 1. Create the .rel file: -

+

In this example the Pea application has been changed, and
so have the applications ERTS, Kernel, STDLIB,
and SASL.

+

Step 1. Create the file .rel:

%% mysystem2.rel {release, @@ -284,65 +288,49 @@ os> /usr/local/erl-target/bin/erl -boot /usr/local/erl-target/releases/FI {stdlib, "2.0"}, {sasl, "2.4"}, {pea, "2.0"}]}. -

- Step 2. Create the application upgrade file (see - appup(4)) for pea, - for example: -

+

Step 2. Create the application upgrade file (see the + appup(4) manual page in + SASL) for Pea, for example:

%% pea.appup {"2.0", [{"1.0",[{load_module,pea_lib}]}], [{"1.0",[{load_module,pea_lib}]}]}. -
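The three parts of this appup term are the new version, the instructions for upgrading from older versions, and the instructions for downgrading to them. The same content, annotated:

```erlang
%% pea.appup, annotated (same content as above)
{"2.0",                                 % new version of pea
 [{"1.0",[{load_module,pea_lib}]}],     % upgrade from 1.0: load new pea_lib
 [{"1.0",[{load_module,pea_lib}]}]}.    % downgrade to 1.0: load old pea_lib
```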

- Step 3. From the directory where the - mysystem2.rel file reside, start the Erlang/OTP system: -

+

Step 3. From the directory where the file + mysystem2.rel resides, start the Erlang/OTP system, + giving the path to the new version of Pea:

 os> erl -pa /home/user/target_system/myapps/pea-2.0/ebin
-

giving the path to the new version of pea.

- -

- Step 4. Create the release upgrade file (see relup(4)): -

+

Step 4. Create the release upgrade file (see the + relup(4) manual page in + SASL):

-1> systools:make_relup("mysystem2",["mysystem"],["mysystem"],[{path,["/home/user/target_system/myapps/pea-1.0/ebin","/my/old/erlang/lib/*/ebin"]}]).
-

- where "mysystem" is the base release and - "mysystem2" is the release to upgrade to. -

-

- Note that the path option is used for pointing out the +1> systools:make_relup("mysystem2",["mysystem"],["mysystem"], + [{path,["/home/user/target_system/myapps/pea-1.0/ebin", + "/my/old/erlang/lib/*/ebin"]}]). +

Here "mysystem" is the base release and
"mysystem2" is the release to upgrade to.

+

The path option is used for pointing out the old version of all applications. (The new versions are already - in the code path - assuming of course that the erlang node on + in the code path - assuming of course that the Erlang node on which this is executed is running the correct version of - Erlang/OTP.) -

-

- Step 5. Create the new release: -

+ Erlang/OTP.)

+

Step 5. Create the new release:

 2> target_system:create("mysystem2").
-

- Given that the relup file generated in step 4 above is - now located in the current directory, it will automatically be - included in the release package. -

+

Given that the file relup generated in Step 4 is + now located in the current directory, it is automatically + included in the release package.

Upgrading the Target System -

- This part is done on the target node, and for this example we +

This part is done on the target node, and for this example we want the node to be running as an embedded system with the - -heart option, allowing automatic restart of the - node. See Starting a Target - System above for more information. -

-

- We add -heart to bin/start: -

+ -heart option, allowing automatic restart of the node. + For more information, see + Starting a Target System.

+

We add -heart to bin/start:

#!/bin/sh ROOTDIR=/usr/local/erl-target/ @@ -354,36 +342,27 @@ fi START_ERL_DATA=${1:-$RELDIR/start_erl.data} -$ROOTDIR/bin/run_erl -daemon /tmp/ $ROOTDIR/log "exec $ROOTDIR/bin/start_erl $ROOTDIR $RELDIR $START_ERL_DATA -heart -

- And we use the simplest possible sys.config, which we - store in releases/FIRST: -

+$ROOTDIR/bin/run_erl -daemon /tmp/ $ROOTDIR/log "exec $ROOTDIR/bin/start_erl $ROOTDIR\ +$RELDIR $START_ERL_DATA -heart +

We use the simplest possible sys.config, which we + store in releases/FIRST:

%% sys.config []. -

- Finally, in order to prepare the upgrade, we need to put the new +

Finally, to prepare the upgrade, we must put the new
release package in the releases directory of the first
target system:

+ target system:

 os> cp mysystem2.tar.gz /usr/local/erl-target/releases
-

- And assuming that the node has been started like this: -

+

Assuming that the node has been started as follows:

 os> /usr/local/erl-target/bin/start
-

- it can be accessed like this: -

+

It can be accessed as follows:

 os> /usr/local/erl-target/bin/to_erl /tmp/erlang.pipe.1
-

- Also note that logs can be found in +

Logs can be found in
/usr/local/erl-target/log. This directory is specified as
an argument to run_erl in the start script listed above.

Step 1. Unpack the release:

@@ -402,18 +381,19 @@ heart: Tue Apr 1 12:15:11 2014: Executed "/usr/local/erl-target/bin/start /usr/ The above return value and output after the call to release_handler:install_release/1 means that the release_handler has restarted the node by using - heart. This will always be done when the upgrade involves - a change of erts, kernel, stdlib or - sasl. See Upgrade when - Erlang/OTP has Changed for more infomation about this. + heart. This is always done when the upgrade involves + a change of the applications ERTS, Kernel, + STDLIB, or SASL. For more information, see + + Upgrade when Erlang/OTP has Changed.

- The node will be accessible via a new pipe: + The node is accessible through a new pipe:

 os> /usr/local/erl-target/bin/to_erl /tmp/erlang.pipe.2

- Let's see which releases we have in our system: + Check which releases there are in the system:

 1> release_handler:which_releases().
@@ -426,7 +406,7 @@ os> /usr/local/erl-target/bin/to_erl /tmp/erlang.pipe.2

Our new release, "SECOND", is now the current release, but we can also see that our "FIRST" release is still permanent. This - means that if the node would be restarted at this point, it + means that if the node would be restarted now, it would come up running the "FIRST" release again.

@@ -434,11 +414,9 @@ os> /usr/local/erl-target/bin/to_erl /tmp/erlang.pipe.2

 2> release_handler:make_permanent("SECOND").
-

- Now look at the releases again: + Check the releases again:

-
 3> release_handler:which_releases().
 [{"MYSYSTEM","SECOND",
@@ -447,19 +425,16 @@ os> /usr/local/erl-target/bin/to_erl /tmp/erlang.pipe.2
{"MYSYSTEM","FIRST", ["kernel-2.16.4","stdlib-1.19.4","sasl-2.3.4","pea-1.0"], old}] -

- Here we see that the new release version is permanent, so - it would be safe to restart the node. -

- + We see that the new release version is permanent, so + it would be safe to restart the node.

+ Listing of target_system.erl

This module can also be found in the examples directory - of the sasl application.

+ of the SASL application.

-
diff --git a/system/doc/system_principles/error_logging.xml b/system/doc/system_principles/error_logging.xml index 80d5211323..3a82f4e0e0 100644 --- a/system/doc/system_principles/error_logging.xml +++ b/system/doc/system_principles/error_logging.xml @@ -28,41 +28,43 @@ error_logging.xml +
Error Information From the Runtime System

Error information from the runtime system, that is, information - about a process terminating due to an uncaught error exception, + about a process terminating because of an uncaught error exception, is by default written to terminal (tty):

with exit value: {{badmatch,[1,2,3]},[{m,f,1},{shell,eval_loop,2}]}]]>

The error information is handled by the error logger, a system process registered as error_logger. This process - receives all error messages from the Erlang runtime system and - also from the standard behaviours and different Erlang/OTP + receives all error messages from the Erlang runtime system as + well as from the standard behaviours and different Erlang/OTP applications.

-

The exit reasons (such as badarg above) used by +

The exit reasons (such as badarg) used by the runtime system are described in - Errors and Error Handling - in the Erlang Reference Manual.

-

The process error_logger and its user interface (with - the same name) are described in - error_logger(3). - It is possible to configure the system so that error information - is written to file instead/as well as tty. Also, it is possible - for user defined applications to send and format error - information using error_logger.

+ + Errors and Error Handling.

+

For information about the process error_logger and its user
interface (with the same name), see the
error_logger(3)
manual page in Kernel. The system can be configured so that
error information is written to file or to tty, or both.
In addition, user-defined applications can send and format
error information using error_logger.
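As a sketch of the last point, an application can emit its own reports through the same interface. Both functions are part of the error_logger API in Kernel; the message contents and names here (myapp, the node name) are made up for illustration:

```erlang
%% Report a formatted error message and a tagged info report
%% through the error_logger process.
error_logger:error_msg("no reply from node ~p~n", [somenode@somehost]),
error_logger:info_report([{application, myapp}, {started_at, node()}]).
```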

SASL Error Logging -

The standard behaviors (supervisor, gen_server, - etc.) sends progress and error information to error_logger. - If the SASL application is started, this information is written - to tty as well. See +

The standard behaviours (supervisor, gen_server,
and so on) send progress and error information to error_logger.
If the SASL application is started, this information is
written to tty as well. For more information, see
SASL Error Logging
in the SASL User's Guide.

+ in the SASL User's Guide.

 % erl -boot start_sasl
 Erlang (BEAM) emulator version 5.4.13 [hipe] [threads:0] [kernel-poll]
diff --git a/system/doc/system_principles/system_principles.xml b/system/doc/system_principles/system_principles.xml
index 79ed86cd9f..5718e8a3f6 100644
--- a/system/doc/system_principles/system_principles.xml
+++ b/system/doc/system_principles/system_principles.xml
@@ -28,35 +28,41 @@
     
     system_principles.xml
   
+  
 
   
Starting the System -

An Erlang runtime system is started with the command erl:

+

An Erlang runtime system is started with command erl:

 % erl
 Erlang/OTP 17 [erts-6.0] [hipe] [smp:8:8]
 
 Eshell V6.0  (abort with ^G)
 1> 
-

erl understands a number of command line arguments, see - erl(1). A number of them are also described in this chapter.

-

Application programs can access the values of the command line - arguments by calling one of the functions - init:get_argument(Key), or init:get_arguments(). - See init(3).

+

erl understands a number of command-line arguments, see
the erl(1) manual page in
ERTS. Some of them are also described in this chapter.

+

Application programs can access the values of the command-line + arguments by calling the function init:get_argument(Key) + or init:get_arguments(). See the + init(3) manual page in + ERTS.
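For example, a node started as erl -mykey value1 value2 (the flag name mykey is made up for illustration) can read the flag back. init:get_argument/1 returns {ok, ListOfValueLists} when the flag was given, and the atom error otherwise:

```erlang
%% Started as: erl -mykey value1 value2
case init:get_argument(mykey) of
    {ok, Values} -> Values;    % here [["value1","value2"]]
    error -> undefined         % the flag was not given
end.
```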

Restarting and Stopping the System -

The runtime system can be halted by calling halt/0,1. - See erlang(3).

-

The module init contains function for restarting, - rebooting and stopping the runtime system. See init(3).

+

The runtime system is halted by calling halt/0,1. For + details, see the erlang(3) + manual page in ERTS.

+

The module init contains functions for restarting, + rebooting, and stopping the runtime system:

 init:restart()
 init:reboot()
 init:stop()
-

Also, the runtime system will terminate if the Erlang shell is +

For details, see the init(3) + manual page in ERTS.

+

The runtime system terminates if the Erlang shell is terminated.

@@ -69,14 +75,15 @@ init:stop()

A boot script file has the extension .script. The runtime system uses a binary version of the script. This binary boot script file has the extension .boot.

-

Which boot script to use is specified by the command line flag - -boot. The extension .boot should be omitted. - Example, using the boot script start_all.boot:

+

Which boot script to use is specified by the command-line flag + -boot. The extension .boot is to be omitted. + For example, using the boot script start_all.boot:

 % erl -boot start_all

If no boot script is specified, it defaults to - ROOT/bin/start, see Default Boot Scripts below.

-

The command line flag -init_debug makes the init + ROOT/bin/start, see + Default Boot Scripts.

+

The command-line flag -init_debug makes the init process write some debug information while interpreting the boot script:

@@ -87,59 +94,55 @@ init:stop()
{start,heart} {start,error_logger} ... -

See script(4) for a detailed description of the syntax - and contents of the boot script.

+

For a detailed description of the syntax and contents of the + boot script, see the script(4) manual page in SASL.

+ Default Boot Scripts -

Erlang/OTP comes with two boot scripts:

- - start_clean.boot - -

Loads the code for and starts the applications Kernel and - STDLIB.

-
- start_sasl.boot - -

Loads the code for and starts the applications Kernel, - STDLIB and SASL.

-
- no_dot_erlang.boot - -

Loads the code for and starts the applications Kernel and - STDLIB, skips loading the .erlang file. - Useful for scripts and other tools that should be behave the - same regardless of user preferences. -

-
-
+

Erlang/OTP comes with these boot scripts:

+ + start_clean.boot - Loads the code for and starts + the applications Kernel and STDLIB. + start_sasl.boot - Loads the code for and starts + the applications Kernel, STDLIB, and + SASL). + no_dot_erlang.boot - Loads the code for and + starts the applications Kernel and STDLIB. + Skips loading the file .erlang. Useful for scripts and + other tools that are to behave the same irrespective of user + preferences. +

Which of start_clean and start_sasl to use as default is decided by the user when installing Erlang/OTP using Install. The user is asked "Do you want to use a minimal system startup instead of the SASL startup". If the answer is yes, then start_clean is used, otherwise - start_sasl is used. A copy of the selected boot script - is made, named start.boot and placed in - the ROOT/bin directory.

+ start_sasl is used. A copy of the selected boot script is + made, named start.boot and placed in directory + ROOT/bin.

User-Defined Boot Scripts

It is sometimes useful or necessary to create a user-defined boot script. This is true especially when running Erlang in - embedded mode, see Code Loading Strategy.

-

It is possible to write a boot script manually. - The recommended way to create a boot script, however, is to - generate the boot script from a release resource file - Name.rel, using the function + embedded mode, see + Code Loading Strategy.

+

A boot script can be written manually. However, it is + recommended to create a boot script by generating it from a + release resource file Name.rel, using the function systools:make_script/1,2. This requires that the source code is structured as applications according to the OTP design principles. (The program does not have to be started in terms of - OTP applications but can be plain Erlang).

-

Read more about .rel files in OTP Design Principles and - rel(4).

+ OTP applications, but can be plain Erlang).

+

For more information about .rel files, see + + OTP Design Principles and the + rel(4) manual page in + SASL.

The binary boot script file Name.boot is generated from
the boot script file Name.script, using the function
systools:script2boot(File).
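Both generation steps can thus be sketched from the Erlang shell, assuming a mysystem.rel in the current directory as in the examples of this document (the local option makes the generated script use absolute paths):

```erlang
%% Creates mysystem.script and mysystem.boot from mysystem.rel.
1> systools:make_script("mysystem", [local]).
%% Regenerates mysystem.boot, for example after hand-editing
%% mysystem.script.
2> systools:script2boot("mysystem").
```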

@@ -148,16 +151,17 @@ init:stop() Code Loading Strategy

The runtime system can be started in either embedded or - interactive mode. Which one is decided by the command - line flag -mode.

+ interactive mode. Which one is decided by the + command-line flag -mode.

 % erl -mode embedded

Default mode is interactive.

+

The mode properties are as follows:

- In embedded mode, all code is loaded during system start-up + In embedded mode, all code is loaded during system startup according to the boot script. (Code can also be loaded later - by explicitly ordering the code server to do so). - In interactive mode, code is dynamically loaded when first + by explicitly ordering the code server to do so.) + In interactive mode, the code is dynamically loaded when first referenced. When a call to a function in a module is made, and the module is not loaded, the code server searches the code path and loads the module into the system. @@ -165,21 +169,21 @@ init:stop()

Initially, the code path consists of the current working directory and all object code directories under ROOT/lib, where ROOT is the installation directory - of Erlang/OTP. Directories can be named Name[-Vsn] and - the code server, by default, chooses the directory with + of Erlang/OTP. Directories can be named Name[-Vsn]. The + code server, by default, chooses the directory with the highest version number among those which have the same Name. The -Vsn suffix is optional. If an ebin directory exists under the Name[-Vsn] - directory, it is this directory which is added to the code path.

-

The code path can be extended by using the command line flags - -pa Directories and -pz Directories. These will add - Directories to the head or end of the code path, - respectively. Example

+ directory, this directory is added to the code path.

+

The code path can be extended by using the command-line flags + -pa Directories and -pz Directories. These add + Directories to the head or the end of the code path, + respectively. Example:

 % erl -pa /home/arne/mycode

The code server module code contains a number of
functions for modifying and checking the search path, see the
code(3) manual page in Kernel.
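The same manipulation is also available at runtime; these calls are part of the documented code API (the directory name is the one used in the example above and is assumed to exist):

```erlang
%% Add a directory first (add_patha) or last (add_pathz) in the
%% code path, then inspect the result.
code:add_patha("/home/arne/mycode"),
io:format("~p~n", [code:get_path()]),
%% code:which/1 reports which file a module is, or would be,
%% loaded from.
io:format("~p~n", [code:which(lists)]).
```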

+ functions for modifying and checking the search path, see the + code(3) manual page in Kernel.

@@ -192,49 +196,65 @@ init:stop() Documented in - module + Module .erl - Erlang Reference Manual + + + Erlang Reference Manual - include file + Include file .hrl - Erlang Reference Manual + + + Erlang Reference Manual - release resource file + Release resource file .rel - rel(4) + + rel(4) + manual page in SASL - application resource file + Application resource file .app - app(4) + + app(4) + manual page in Kernel - boot script + Boot script .script - script(4) + + script(4) + manual page in SASL - binary boot script + Binary boot script .boot - - configuration file + Configuration file .config - config(4) + + config(4) + manual page in Kernel - application upgrade file + Application upgrade file .appup - appup(4) + + appup(4) + manual page in SASL - release upgrade file + Release upgrade file relup - relup(4) + + relup(4) + manual page in SASL File Types diff --git a/system/doc/system_principles/upgrade.xml b/system/doc/system_principles/upgrade.xml index 68e48da0b8..83e8128f94 100644 --- a/system/doc/system_principles/upgrade.xml +++ b/system/doc/system_principles/upgrade.xml @@ -31,88 +31,72 @@ upgrade.xml +
Introduction -

- As of Erlang/OTP 17, most applications deliver a valid - application upgrade (appup) file. In earlier releases, a + +

As of Erlang/OTP 17, most applications deliver a valid + application upgrade file (appup). In earlier releases, a majority of the applications in Erlang/OTP did not support - upgrade at all. Many of the applications use the + upgrade. Many of the applications use the restart_application instruction. These are applications for which it is not crucial to support real soft upgrade, for - instance tools and library applications. The + example, tools and library applications. The restart_application instruction ensures that all modules in the application are reloaded and - thereby running the new code. -

+ thereby running the new code.

- Upgrade of core applications -

- The core applications ERTS, Kernel, STDLIB + Upgrade of Core Applications +

The core applications ERTS, Kernel, STDLIB, and SASL never allow real soft upgrade, but require the Erlang emulator to be restarted. This is indicated to the release_handler by the upgrade instruction - restart_new_emulator. This instruction will always be the - very first instruction executed, and it will restart the + restart_new_emulator. This instruction is always the + very first instruction executed, and it restarts the emulator with the new versions of the above mentioned core - applications and the old versions of all other - applications. When the node is back up all other upgrade instructions are + applications and the old versions of all other applications. + When the node is back up, all other upgrade instructions are executed, making sure each application is finally running its - new version. -

- -

- It might seem strange to do a two-step upgrade instead of + new version.

+

It might seem strange to do a two-step upgrade instead of just restarting the emulator with the new version of all applications. The reason for this design decision is to allow - code_change functions to have side effects, for example changing - data on disk. It also makes sure that the upgrade mechanism for - non-core applications does not differ depending on whether or not - core applications are changed at the same time. -

- -

- If, however, the more brutal variant is preferred, it is - possible to handwrite the release upgrade file using only the + code_change functions to have side effects, for example, + changing data on disk. It also guarantees that the upgrade + mechanism for non-core applications does not differ depending + on whether or not core applications are changed at the same time.

+

If, however, the more brutal variant is preferred, the
release upgrade file can be handwritten using only the
single upgrade instruction restart_emulator. This
instruction, in contrast to restart_new_emulator,
causes the emulator to restart with the new versions of
all applications.

- -

- Note that if other instructions are included before + instruction, in contrast to restart_new_emulator, + causes the emulator to restart with the new versions of + all applications.

+

Note: If other instructions are included before
restart_emulator in the handwritten relup file,
they are executed in the old emulator. This is a big risk
since there is no guarantee that new beam code can be loaded
into the old emulator. Adding instructions after
restart_emulator has no effect as the
release_handler will not execute them.
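A minimal handwritten relup of this brutal kind could, as a sketch with made-up release versions, look as follows. The three parts are the target version, the upgrade entries from older versions, and the downgrade entries back to them:

```erlang
%% relup -- handwritten sketch; "FIRST"/"SECOND" are placeholder
%% release versions.
{"SECOND",
 [{"FIRST", [], [restart_emulator]}],    % upgrade from "FIRST"
 [{"FIRST", [], [restart_emulator]}]}.   % downgrade to "FIRST"
```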

- -

- See relup(4) for - information about the release upgrade file, and appup(4) for further information - about upgrade instructions. -

+ release_handler will not execute them.

+

For information about the release upgrade file, see the + relup(4) manual page + in SASL. + For more information about upgrade instructions, see the + appup(4) manual page + in SASL.

- Applications that still do not allow code upgrade -

- A few applications, for instance HiPE do not support - upgrade at all. This is indicated by an application upgrade file - containing only {Vsn,[],[]}. Any attempt at creating a release - upgrade file with such input will fail. - The only way to force an upgrade involving applications like this is to - handwrite the relup file, preferably as described above - with only the restart_emulator instruction. -

- + Applications that Still do Not Allow Code Upgrade +

A few applications, such as HiPE, do not support upgrade.
This is indicated by an application upgrade file containing only
{Vsn,[],[]}. Any attempt at creating a release upgrade file
with such input fails. The only way to force an upgrade involving
applications like this is to
handwrite the file relup, preferably as described above
with only the restart_emulator instruction.

diff --git a/system/doc/system_principles/versions.xml b/system/doc/system_principles/versions.xml index ff042f4a3b..25eb90f626 100644 --- a/system/doc/system_principles/versions.xml +++ b/system/doc/system_principles/versions.xml @@ -31,93 +31,99 @@ versions.xml -
OTP Version + + +
+ OTP Version

As of OTP release 17, the OTP release number corresponds to the major part of the OTP version. The OTP version as a concept was - introduced in OTP 17. The version - scheme used is described in more detail below.

- + introduced in OTP 17. The version scheme used is described in detail in + Version Scheme.

OTP of a specific version is a set of applications of specific versions. The application versions identified by an OTP version corresponds to application versions that have been tested together - by the Erlang/OTP team at Ericsson AB. An OTP system can however be + by the Erlang/OTP team at Ericsson AB. An OTP system can, however, be put together with applications from different OTP versions. Such a combination of application versions has not been tested by the Erlang/OTP team. It is therefore always preferred to use OTP applications from one single OTP version.

-

Release candidates have an -rc<N> - suffix. The suffix -rc0 will be used during development up to + suffix. The suffix -rc0 is used during development up to the first release candidate.

-
Retrieving Current OTP Version -

In an OTP source code tree, the OTP version can be read from - the text file <OTP source root>/OTP_VERSION. The - absolute path to the file can be constructed by calling - filename:join([code:root_dir(), "OTP_VERSION"]).

-

In an installed OTP development system, the OTP version can be read - from the text file <OTP installation root>/releases/<OTP release number>/OTP_VERSION. - The absolute path to the file can by constructed by calling - filename:join([code:root_dir(), "releases", erlang:system_info(otp_release), "OTP_VERSION"]).

-

If the version read from the OTP_VERSION file in a - development system has a ** suffix, the system has been - patched using the otp_patch_apply tool available to - licensed customers. In this case, the system consists of application - versions from multiple OTP versions. The version preceding the ** - suffix corresponds to the OTP version of the base system that - has been patched. Note that if a development system is updated by - other means than otp_patch_apply, the OTP_VERSION file - may identify an incorrect OTP version.

- -

No OTP_VERSION file will be placed in a - target system created - by OTP tools. This since one easily can create a target system - where it is hard to even determine the base OTP version. You may, - however, place such a file there yourself if you know the OTP - version.

+
+ Retrieving Current OTP Version +

In an OTP source code tree, the OTP version can be read from + the text file <OTP source root>/OTP_VERSION. The + absolute path to the file can be constructed by calling + filename:join([code:root_dir(), "OTP_VERSION"]).

+

In an installed OTP development system, the OTP version can be read
+ from the text file <OTP installation root>/releases/<OTP release number>/OTP_VERSION.
+ The absolute path to the file can be constructed by calling
+ filename:join([code:root_dir(), "releases", erlang:system_info(otp_release), "OTP_VERSION"]).

+

If the version read from the OTP_VERSION file in a + development system has a ** suffix, the system has been + patched using the otp_patch_apply tool available to + licensed customers. In this case, the system consists of application + versions from multiple OTP versions. The version preceding the ** + suffix corresponds to the OTP version of the base system that + has been patched. Notice that if a development system is updated by + other means than otp_patch_apply, the file OTP_VERSION + can identify an incorrect OTP version.

+

No OTP_VERSION file is placed in a
+ target system created
+ by OTP tools. This is because it is easy to create a target system
+ where it is hard to even determine the base OTP version. You can,
+ however, place such a file there if you know the OTP version.

-
OTP Versions Table -

The text file <OTP source root>/otp_versions.table that - is part of the source code contains information about all OTP versions - from OTP 17.0 up to current OTP version. Each line contains information - about application versions that are part of a specific OTP version, and - is on the format:

-
-<OtpVersion> : <ChangedAppVersions> # <UnchangedAppVersions> :
-
-

<OtpVersion> is on the format OTP-<VSN>, i.e., - the same as the git tag used to identify the source. - <ChangedAppVersions> and <UnchangedAppVersions> - are space separated lists of application versions on the - format <application>-<vsn>. - <ChangedAppVersions> corresponds to changed applications - with new version numbers in this OTP version, and - <UnchangedAppVersions> corresponds to unchanged application - versions in this OTP version. Both of them might be empty, although - not at the same time. If <ChangedAppVersions> is empty, no changes - has been made that change the build result of any application. This could - for example be a pure bug fix of the build system. The order of lines - is undefined. All white space characters in this file are either space - (character 32) or line-break (character 10).

-

Using ordinary UNIX tools like sed and grep one - can easily find answers to various questions like:

- - Which OTP versions are kernel-3.0 part of? -

$ grep ' kernel-3\.0 ' otp_versions.table

- In which OTP version was kernel-3.0 introduced? -

$ sed 's/#.*//;/ kernel-3\.0 /!d' otp_versions.table

-
-

The above commands give a bit more information than the exact answers, - but adequate information when manually searching for answers to these - questions.

-

The format of the otp_versions.table might be subject - to changes during the OTP 17 release.

-
+
+ OTP Versions Table +

The text file <OTP source root>/otp_versions.table, + which is part of the source code, contains information about all + OTP versions from OTP 17.0 up to the current OTP version. Each line + contains information about application versions that are part of a + specific OTP version, and has the following format:

+
+<OtpVersion> : <ChangedAppVersions> # <UnchangedAppVersions> :
+

<OtpVersion> has the format OTP-<VSN>, + that is, the same as the git tag used to identify the source.

+

<ChangedAppVersions> and
+ <UnchangedAppVersions> are space-separated lists of
+ application versions and have the format
+ <application>-<vsn>.

+ + <ChangedAppVersions> corresponds to changed + applications with new version numbers in this OTP version. + <UnchangedAppVersions> corresponds to unchanged + application versions in this OTP version. + +

Both of them can be empty, but not at the same time. + If <ChangedAppVersions> is empty, no changes have + been made that change the build result of any application. This could, + for example, be a pure bug fix of the build system. The order of lines + is undefined. All white-space characters in this file are either space + (character 32) or line-break (character 10).

+

By using ordinary UNIX tools like sed and grep, one
+ can easily find answers to various questions like:

+ +

Which OTP versions are kernel-3.0 part of?

+

$ grep ' kernel-3\.0 ' otp_versions.table

+

In which OTP version was kernel-3.0 introduced?

+

$ sed 's/#.*//;/ kernel-3\.0 /!d' otp_versions.table +

+
+

The above commands give a bit more information than the exact
+ answers, but provide adequate information when manually searching
+ for answers to these questions.

+

The format of the otp_versions.table might be + subject to changes during the OTP 17 release.

+
-
Application Version -

As of OTP 17.0 application versions will use the same +

+ Application Version +

As of OTP 17.0 application versions use the same version scheme as the OTP version. Application versions part of a release candidate will however not have an -rc<N> suffix as the OTP version. @@ -125,88 +131,88 @@ necessarily imply a major increment of the OTP version. This depends on whether the major change in the application is considered as a major change for OTP as a whole or not.

-
+
+
+ Version Scheme -
Version Scheme - Note that the version scheme was changed as of OTP 17.0. This implies +

The version scheme was changed as of OTP 17.0. This implies that application versions used prior to OTP 17.0 do not adhere to this version scheme. A list of - application versions used in OTP 17.0 can be found - at the end of this document. - -

In the normal case, a version will be constructed as
- <Major>.<Minor>.<Patch> where <Major>
- is the most significant part. However, more dot separated parts than
- this may exist. The dot separated parts consists of non-negative integers.
- If all parts less significant than <Minor> equals 0,
- they are omitted. The three normal parts
- <Major>.<Minor>.<Patch> will be changed as
+ application versions used in OTP 17.0 is included at the
+ end of this section.

+

In the normal case, a version is constructed as + <Major>.<Minor>.<Patch>, + where <Major> is the most significant part.

+

However, more dot-separated parts than this can exist.
+ The dot-separated parts consist of non-negative integers. If
+ all parts less significant than <Minor> equal
+ 0, they are omitted. The three normal parts
+ <Major>.<Minor>.<Patch> are changed as
+ follows:

- - <Major>Increased when major changes, - including incompatibilities, have been made. - <Minor>Increased when new functionality - has been added. - <Patch>Increased when pure bug fixes - have been made. - -

When a part in the version number is increased, all less significant + + <Major> - Increases when major changes, + including incompatibilities, are made. + <Minor> - Increases when new + functionality is added. + <Patch> - Increases when pure bug fixes + are made. + +

When a part in the version number increases, all less significant parts are set to 0.

-

An application version or an OTP version identifies source code - versions. That is, it does not imply anything about how the application + versions. That is, it implies nothing about how the application or OTP has been built.

-
Order of Versions -

Version numbers in general are only partially ordered. However, - normal version numbers (with three parts) as of OTP 17.0 have a total - or linear order. This applies both to normal OTP versions and - normal application versions.

- -

When comparing two version numbers that have an order, one - compare each part as ordinary integers from the most - significant part towards less significant parts. The order is - defined by the first parts of the same significance that - differ. An OTP version with a larger version include all - changes that that are part of a smaller OTP version. The same - goes for application versions.

- -

In the general case, versions may have more than three parts. In - this case the versions are only partially ordered. Note that such - versions are only used in exceptional cases. When an extra - part (out of the normal three parts) is added to a version number, - a new branch of versions is made. The new branch has a linear - order against the base version. However, versions on different - branches have no order. Since they have no order, we - only know that they all include what is included in their - closest common ancestor. When branching multiple times from the - same base version, 0 parts are added between the base - version and the least significant 1 part until a unique - version is found. Versions that have an order can be compared - as described in the paragraph above.

- -

An example of branched versions: The version 6.0.2.1 - is a branched version from the base version 6.0.2. - Versions on the form 6.0.2.<X> can be compared - with normal versions smaller than or equal to 6.0.2, - and other versions on the form 6.0.2.<X>. The - version 6.0.2.1 will include all changes in - 6.0.2. However, 6.0.3 will most likely - not include all changes in 6.0.2.1 (note that - these versions have no order). A second branched version from the base - version 6.0.2 will be version 6.0.2.0.1, and a - third branched version will be 6.0.2.0.0.1.

-
+
+ Order of Versions +

Version numbers in general are only partially ordered. However, + normal version numbers (with three parts) as of OTP 17.0 have a total + or linear order. This applies both to normal OTP versions and + normal application versions.

+

When comparing two version numbers that have an order, one
+ compares each part as ordinary integers from the most
+ significant part to less significant parts. The order is
+ defined by the first parts of the same significance that
+ differ. An OTP version with a larger version includes all
+ changes that are part of a smaller OTP version. The same
+ goes for application versions.

+

In general, versions can have more than three parts. + The versions are then only partially ordered. Such + versions are only used in exceptional cases. When an extra + part (out of the normal three parts) is added to a version number, + a new branch of versions is made. The new branch has a linear + order against the base version. However, versions on different + branches have no order, and therefore one can only conclude + that they all include what is included in their + closest common ancestor. When branching multiple times from the + same base version, 0 parts are added between the base + version and the least significant 1 part until a unique + version is found. Versions that have an order can be compared + as described in the previous paragraph.

+

An example of branched versions: The version 6.0.2.1
+ is a branched version from the base version 6.0.2.
+ Versions of the form 6.0.2.<X> can be compared
+ with normal versions smaller than or equal to 6.0.2,
+ and other versions of the form 6.0.2.<X>. The
+ version 6.0.2.1 includes all changes in
+ 6.0.2. However, 6.0.3 most likely
+ does not include all changes in 6.0.2.1 (notice that
+ these versions have no order). A second branched version from the base
+ version 6.0.2 is version 6.0.2.0.1, and a
+ third branched version is 6.0.2.0.0.1.

+
+
+ OTP 17.0 Application Versions -
OTP 17.0 Application Versions -

The following application versions were part of OTP 17.0. If - the normal part of an applications version number compares - as smaller than the corresponding application version in this list, +

The following list details the application versions that + were part of OTP 17.0. If + the normal part of an application version number compares + as smaller than the corresponding application version in the list, the version number does not adhere to the version scheme introduced - in OTP 17.0 and should be considered as not having an order against + in OTP 17.0 and is to be considered as not having an order against versions used as of OTP 17.0.

asn1-3.0 @@ -262,6 +268,6 @@ wx-1.2 xmerl-1.3.7 -
+
-- cgit v1.2.3 From 2d3ab68c60e8bacf9e0efe403895e7065ef683be Mon Sep 17 00:00:00 2001 From: Hans Bolinder Date: Thu, 12 Mar 2015 15:35:13 +0100 Subject: Update Interoperability Tutorial Language cleaned up by the technical writers xsipewe and tmanevik from Combitech. Proofreading and corrections by Hans Bolinder. --- system/doc/tutorial/c_port.xmlsrc | 86 ++++++++--- system/doc/tutorial/c_portdriver.xmlsrc | 112 +++++++-------- system/doc/tutorial/cnode.xmlsrc | 162 ++++++++++++++++----- system/doc/tutorial/erl_interface.xmlsrc | 108 ++++++++++---- system/doc/tutorial/example.xmlsrc | 21 ++- system/doc/tutorial/introduction.xml | 26 +++- system/doc/tutorial/nif.xmlsrc | 127 +++++++++-------- system/doc/tutorial/overview.xml | 235 +++++++++++++++++++++++++------ 8 files changed, 623 insertions(+), 254 deletions(-) (limited to 'system/doc') diff --git a/system/doc/tutorial/c_port.xmlsrc b/system/doc/tutorial/c_port.xmlsrc index 8579da8520..0631293237 100644 --- a/system/doc/tutorial/c_port.xmlsrc +++ b/system/doc/tutorial/c_port.xmlsrc @@ -4,7 +4,7 @@
- 20002013 + 20002015 Ericsson AB. All Rights Reserved. @@ -28,16 +28,34 @@ c_port.xml
-

This is an example of how to solve the example problem by using a port.

+

This section outlines an example of how to solve the example + problem in the previous section + by using a port.

+

The scenario is illustrated in the following figure:

- Port Communication. + Port Communication
Erlang Program -

First of all communication between Erlang and C must be established by creating the port. The Erlang process which creates a port is said to be the connected process of the port. All communication to and from the port should go via the connected process. If the connected process terminates, so will the port (and the external program, if it is written correctly).

-

The port is created using the BIF open_port/2 with {spawn,ExtPrg} as the first argument. The string ExtPrg is the name of the external program, including any command line arguments. The second argument is a list of options, in this case only {packet,2}. This option says that a two byte length indicator will be used to simplify the communication between C and Erlang. Adding the length indicator will be done automatically by the Erlang port, but must be done explicitly in the external C program.

-

The process is also set to trap exits which makes it possible to detect if the external program fails.

+

First, communication between Erlang and C must be established
+ by creating the port. The Erlang process that creates a port is
+ said to be the connected process of the port. All
+ communication to and from the port must go through the connected
+ process. If the connected process terminates, the port also
+ terminates (and the external program, if it is written
+ properly).

+

The port is created using the BIF open_port/2 with
+ {spawn,ExtPrg} as the first argument. The string
+ ExtPrg is the name of the external program, including any
+ command line arguments. The second argument is a list of
+ options, in this case only {packet,2}. This option says
+ that a 2-byte length indicator is to be used to simplify the
+ communication between C and Erlang. The Erlang port
+ automatically adds the length indicator, but this must be done
+ explicitly in the external C program.

+

The process is also set to trap exits, which enables detection + of failure of the external program:

 -module(complex1).
 -export([start/1, init/1]).
@@ -50,7 +68,9 @@ init(ExtPrg) ->
   process_flag(trap_exit, true),
   Port = open_port({spawn, ExtPrg}, [{packet, 2}]),
   loop(Port).
-

Now it is possible to implement complex1:foo/1 and complex1:bar/1. They both send a message to the complex process and receive the reply.

+

Now complex1:foo/1 and complex1:bar/1 can be
+ implemented. Both send a message to the complex process
+ and receive the reply:

 foo(X) ->
   call_port({foo, X}).
@@ -63,7 +83,14 @@ call_port(Msg) ->
     {complex, Result} ->
       Result
   end.
-

The complex process encodes the message into a sequence of bytes, sends it to the port, waits for a reply, decodes the reply and sends it back to the caller.

+

The complex process does the following:

+ + Encodes the message into a sequence of bytes. + Sends it to the port. + Waits for a reply. + Decodes the reply. + Sends it back to the caller: +
 loop(Port) ->
   receive
@@ -75,37 +102,52 @@ loop(Port) ->
       end,
       loop(Port)
  end.
-

Assuming that both the arguments and the results from the C functions will be less than 256, a very simple encoding/decoding scheme is employed where foo is represented by the byte 1, bar is represented by 2, and the argument/result is represented by a single byte as well.

+

Assuming that both the arguments and the results from the C + functions are less than 256, a simple encoding/decoding scheme + is employed. In this scheme, foo is represented by byte + 1, bar is represented by 2, and the argument/result is + represented by a single byte as well:

 encode({foo, X}) -> [1, X];
 encode({bar, Y}) -> [2, Y].
-      
+
 decode([Int]) -> Int.
-

The resulting Erlang program, including functionality for stopping the port and detecting port failures is shown below. +

The resulting Erlang program, including functionality for + stopping the port and detecting port failures, is as follows:

C Program -

On the C side, it is necessary to write functions for receiving and sending - data with two byte length indicators from/to Erlang. By default, the C program - should read from standard input (file descriptor 0) and write to standard output - (file descriptor 1). Examples of such functions, read_cmd/1 and - write_cmd/2, are shown below.

+

On the C side, it is necessary to write functions for receiving
+ and sending data with 2-byte length indicators from/to Erlang.
+ By default, the C program is to read from standard input (file
+ descriptor 0) and write to standard output (file descriptor 1).
+ Examples of such functions, read_cmd/1 and
+ write_cmd/2, follow:

-

Note that stdin and stdout are for buffered input/output and should not be used for the communication with Erlang!

-

In the main function, the C program should listen for a message from Erlang and, according to the selected encoding/decoding scheme, use the first byte to determine which function to call and the second byte as argument to the function. The result of calling the function should then be sent back to Erlang.

+

Notice that stdin and stdout are for buffered + input/output and must not be used for the communication + with Erlang.

+

In the main function, the C program is to listen for a + message from Erlang and, according to the selected + encoding/decoding scheme, use the first byte to determine which + function to call and the second byte as argument to the + function. The result of calling the function is then to be sent + back to Erlang:

-

Note that the C program is in a while-loop checking for the return value of read_cmd/1. The reason for this is that the C program must detect when the port gets closed and terminate.

+

Notice that the C program is in a while-loop, checking
+ for the return value of read_cmd/1. This is because the C
+ program must detect when the port closes, and then terminate.

Running the Example -

1. Compile the C code.

+

Step 1. Compile the C code:

 unix> gcc -o extprg complex.c erl_comm.c port.c
-

2. Start Erlang and compile the Erlang code.

+

Step 2. Start Erlang and compile the Erlang code:

 unix> erl
 Erlang (BEAM) emulator version 4.9.1.2
@@ -113,7 +155,7 @@ Erlang (BEAM) emulator version 4.9.1.2
 Eshell V4.9.1.2 (abort with ^G)
 1> c(complex1).
 {ok,complex1}
-

3. Run the example.

+

Step 3. Run the example:

 2> complex1:start("extprg").
 <0.34.0>
diff --git a/system/doc/tutorial/c_portdriver.xmlsrc b/system/doc/tutorial/c_portdriver.xmlsrc
index 2fd6fb0aac..61187703a4 100644
--- a/system/doc/tutorial/c_portdriver.xmlsrc
+++ b/system/doc/tutorial/c_portdriver.xmlsrc
@@ -4,7 +4,7 @@
 
   
- 20002013 + 20002015 Ericsson AB. All Rights Reserved. @@ -21,46 +21,45 @@ - Port drivers + Port Drivers c_portdriver.xml
-

This is an example of how to solve the example problem by using a linked in port driver.

+

This section outlines an example of how to solve the example problem + in Problem Example + by using a linked-in port driver.

+

A port driver is a linked-in driver that is accessible as a port + from an Erlang program. It is a shared library (SO in UNIX, DLL in + Windows), with special entry points. The Erlang runtime system + calls these entry points when the driver is started and when data + is sent to the port. The port driver can also send data to + Erlang.

+

As a port driver is dynamically linked into the emulator process,
+ this is the fastest way of calling C code from Erlang. Calling
+ functions in the port driver requires no context switches. But it
+ is also the least safe way, because a crash in the port driver
+ brings the emulator down too.

+

The scenario is illustrated in the following figure:

- Port Driver Communication. + Port Driver Communication -
- Port Drivers -

A port driver is a linked in driver that is accessible as a - port from an Erlang program. It is a shared library (SO in Unix, - DLL in Windows), with special entry points. The Erlang runtime - calls these entry points, when the driver is started and when - data is sent to the port. The port driver can also send data to - Erlang.

-

Since a port driver is dynamically linked into the emulator - process, this is the fastest way of calling C-code from Erlang. - Calling functions in the port driver requires no context - switches. But it is also the least safe, because a crash in the - port driver brings the emulator down too.

-
-
Erlang Program -

Just as with a port program, the port communicates with a Erlang +

Like a port program, the port communicates with an Erlang process. All communication goes through one Erlang process that is the connected process of the port driver. Terminating this process closes the port driver.

Before the port is created, the driver must be loaded. This is done with the function erl_dll:load_driver/1, with the name of the shared library as argument.

-

The port is then created using the BIF open_port/2 with +

The port is then created using the BIF open_port/2, with the tuple {spawn, DriverName} as the first argument. The string SharedLib is the name of the port driver. The second - argument is a list of options, none in this case.

+ argument is a list of options, none in this case:

 -module(complex5).
 -export([start/1, init/1]).
@@ -77,9 +76,9 @@ init(SharedLib) ->
   register(complex, self()),
   Port = open_port({spawn, SharedLib}, []),
   loop(Port).
-

Now it is possible to implement complex5:foo/1 and - complex5:bar/1. They both send a message to the - complex process and receive the reply.

+

Now complex5:foo/1 and complex5:bar/1
+ can be implemented. Both send a message to the
+ complex process and receive the reply:

 foo(X) ->
     call_port({foo, X}).
@@ -92,10 +91,14 @@ call_port(Msg) ->
         {complex, Result} ->
             Result
     end.
-

The complex process encodes the message into a sequence - of bytes, sends it to the port, waits for a reply, decodes the - reply and sends it back to the caller. -

+

The complex process performs the following:

+ + Encodes the message into a sequence of bytes. + Sends it to the port. + Waits for a reply. + Decodes the reply. + Sends it back to the caller: +
 loop(Port) ->
     receive
@@ -108,59 +111,58 @@ loop(Port) ->
             loop(Port)
     end.

Assuming that both the arguments and the results from the C - functions will be less than 256, a very simple encoding/decoding - scheme is employed where foo is represented by the byte + functions are less than 256, a simple encoding/decoding scheme + is employed. In this scheme, foo is represented by byte 1, bar is represented by 2, and the argument/result is - represented by a single byte as well. -

+ represented by a single byte as well:

 encode({foo, X}) -> [1, X];
 encode({bar, Y}) -> [2, Y].
-      
+
 decode([Int]) -> Int.
-

The resulting Erlang program, including functionality for - stopping the port and detecting port failures is shown below.

+

The resulting Erlang program, including functions for stopping + the port and detecting port failures, is as follows:

C Driver

The C driver is a module that is compiled and linked into a - shared library. It uses a driver structure, and includes the + shared library. It uses a driver structure and includes the header file erl_driver.h.

The driver structure is filled with the driver name and function pointers. It is returned from the special entry point, declared with the macro )]]>.

-

The functions for receiving and sending data, are combined into +

The functions for receiving and sending data are combined into a function, pointed out by the driver structure. The data sent - into the port is given as arguments, and the data the port - sends back is sent with the C-function driver_output.

-

Since the driver is a shared module, not a program, no main - function should be present. All function pointers are not used - in our example, and the corresponding fields in the + into the port is given as arguments, and the replied data is sent + with the C-function driver_output.

+

As the driver is a shared module, not a program, no main
+ function is present. Not all function pointers are used
+ in this example, and the corresponding fields in the
driver_entry structure are set to NULL.

-

All functions in the driver, takes a handle (returned from - start), that is just passed along by the erlang +

All functions in the driver take a handle (returned from
+ start) that is just passed along by the Erlang
+ process. This must in some way refer to the port driver
instance.

-

The example_drv_start, is the only function that is called with - a handle to the port instance, so we must save this. It is - customary to use a allocated driver-defined structure for this - one, and pass a pointer back as a reference.

-

It is not a good idea to use a global variable; since the port - driver can be spawned by multiple Erlang processes, this - driver-structure should be instantiated multiple times. +

The example_drv_start is the only function that is called with
+ a handle to the port instance, so this must be saved. It is
+ customary to use an allocated driver-defined structure for this
+ one, and to pass a pointer back as a reference.

+

It is not a good idea to use a global variable as the port + driver can be spawned by multiple Erlang processes. This + driver-structure is to be instantiated multiple times:

Running the Example -

1. Compile the C code.

+

Step 1. Compile the C code:

 unix> gcc -o exampledrv -fpic -shared complex.c port_driver.c
 windows> cl -LD -MD -Fe exampledrv.dll complex.c port_driver.c
-

2. Start Erlang and compile the Erlang code.

+

Step 2. Start Erlang and compile the Erlang code:

 > erl
 Erlang (BEAM) emulator version 5.1
@@ -168,7 +170,7 @@ Erlang (BEAM) emulator version 5.1
 Eshell V5.1 (abort with ^G)
 1> c(complex5).
 {ok,complex5}
-

3. Run the example.

+

Step 3. Run the example:

 2> complex5:start("example_drv").
 <0.34.0>
diff --git a/system/doc/tutorial/cnode.xmlsrc b/system/doc/tutorial/cnode.xmlsrc
index 293406160f..bcdd1298de 100644
--- a/system/doc/tutorial/cnode.xmlsrc
+++ b/system/doc/tutorial/cnode.xmlsrc
@@ -4,7 +4,7 @@
 
   
- 20002013 + 20002015 Ericsson AB. All Rights Reserved. @@ -28,19 +28,39 @@ cnode.xml
-

This is an example of how to solve the example problem by using a C node. Note that a C node would not typically be used for solving a simple problem like this, a port would suffice.

+

This section outlines an example of how to solve the example
+ problem in Problem Example
+ by using a C node. Notice that a C node is not typically
+ used for solving simple problems like this; a port is
+ sufficient.

Erlang Program -

From Erlang's point of view, the C node is treated like a normal Erlang node. Therefore, calling the functions foo and bar only involves sending a message to the C node asking for the function to be called, and receiving the result. Sending a message requires a recipient; a process which can be defined using either a pid or a tuple consisting of a registered name and a node name. In this case a tuple is the only alternative as no pid is known.

+

From Erlang's point of view, the C node is treated like a + normal Erlang node. Thus, calling the functions foo and + bar only involves sending a message to the C node asking + for the function to be called, and receiving the result. Sending + a message requires a recipient, that is, a process that can be + defined using either a pid or a tuple, consisting of a + registered name and a node name. In this case, a tuple is the + only alternative as no pid is known:

 {RegName, Node} ! Msg
-

The node name Node should be the name of the C node. If short node names are used, the plain name of the node will be cN where N is an integer. If long node names are used, there is no such restriction. An example of a C node name using short node names is thus c1@idril, an example using long node names is cnode@idril.ericsson.se.

-

The registered name RegName could be any atom. The name can be ignored by the C code, or it could be used for example to distinguish between different types of messages. Below is an example of what the Erlang code could look like when using short node names. +

The node name Node is to be the name of the C node. If + short node names are used, the plain name of the node is + cN, where N is an integer. If long node names are + used, there is no such restriction. An example of a C node name + using short node names is thus c1@idril, an example using + long node names is cnode@idril.ericsson.se.

+

The registered name, RegName, can be any atom. The name + can be ignored by the C code, or, for example, be used to + distinguish between different types of messages. An example of + Erlang code using short node names follows:

- When using long node names the code is slightly different as shown in the following example: + When using long node names, the code is slightly different as + shown in the following example:

@@ -50,39 +70,77 @@ C Program
- Setting Up the Communication -

Before calling any other Erl_Interface function, the memory handling must be initiated.

+ Setting Up Communication +

Before calling any other function in Erl_Interface, the + memory handling must be initiated:

 erl_init(NULL, 0);
-

Now the C node can be initiated. If short node names are used, this is done by calling erl_connect_init().

+

Now the C node can be initiated. If short node names are + used, this is done by calling erl_connect_init():

 erl_connect_init(1, "secretcookie", 0);
-

The first argument is the integer which is used to construct the node name. In the example the plain node name will be c1.

- - The second argument is a string defining the magic cookie.

- - The third argument is an integer which is used to identify a particular instance of a C node.

-

If long node node names are used, initiation is done by calling erl_connect_xinit().

+

Here:

+ + The first argument is the integer used to construct the node name. +

In the example, the plain node name is c1.

+ The second argument is a string defining the magic cookie. + The third argument is an integer that is used to identify + a particular instance of a C node. +
+

If long node names are used, initiation is done by + calling erl_connect_xinit():

 erl_connect_xinit("idril", "cnode", "cnode@idril.ericsson.se",
                   &addr, "secretcookie", 0);
-

The first three arguments are the host name, the plain node name, and the full node name. The fourth argument is a pointer to an in_addr struct with the IP address of the host, and the fifth and sixth arguments are the magic cookie and instance number.

-

The C node can act as a server or a client when setting up the communication Erlang-C. If it acts as a client, it connects to an Erlang node by calling erl_connect(), which will return an open file descriptor at success.

+

Here:

+ + The first argument is the host name. + The second argument is the plain node name. + The third argument is the full node name. + The fourth argument is a pointer to an in_addr + struct with the IP address of the host. + The fifth argument is the magic cookie. + The sixth argument is the instance number. + +
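The in_addr argument can be filled in with standard socket calls. A minimal sketch, assuming a dotted-decimal address is at hand (the address below is hypothetical; a real program would use the address of the local host):

```c
#include <arpa/inet.h>
#include <string.h>
#include <assert.h>

/* Sketch of preparing the in_addr argument later passed to
 * erl_connect_xinit(). inet_addr() parses dotted-decimal text
 * into network byte order. */
static struct in_addr make_addr(const char *dotted)
{
    struct in_addr addr;
    addr.s_addr = inet_addr(dotted);   /* INADDR_NONE on parse error */
    return addr;
}
```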

The C node can act as a server or a client when setting up + the Erlang-C communication. If it acts as a client, it + connects to an Erlang node by calling erl_connect(), + which returns an open file descriptor at success:

 fd = erl_connect("e1@idril");
-

If the C node acts as a server, it must first create a socket (call bind() and listen()) listening to a certain port number port. It then publishes its name and port number with epmd (the Erlang port mapper daemon, see the man page for epmd).

+

If the C node acts as a server, it must first create a socket + (call bind() and listen()) listening to a + certain port number port. It then publishes its name + and port number with epmd, the Erlang port mapper + daemon. For details, see the epmd manual page in ERTS:

 erl_publish(port);
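The plain-socket setup that must happen before erl_publish() can be sketched with standard POSIX calls. This is an illustrative sketch, not part of Erl_Interface; port 0 is used here so that the kernel picks a free port, whereas the example itself uses a fixed port such as 3456:

```c
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>
#include <assert.h>

/* Create, bind, and listen on a TCP socket; the bound port
 * number is what would then be handed to erl_publish(). */
static int make_listen_socket(unsigned short *port_out)
{
    struct sockaddr_in addr;
    socklen_t len = sizeof addr;
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) return -1;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(0);            /* 0 = any free port */

    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(listen_fd, 5) < 0 ||
        getsockname(listen_fd, (struct sockaddr *)&addr, &len) < 0) {
        close(listen_fd);
        return -1;
    }
    *port_out = ntohs(addr.sin_port);    /* pass this to erl_publish() */
    return listen_fd;
}
```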
-

Now the C node server can accept connections from Erlang nodes.

+

Now the C node server can accept connections from Erlang nodes:

 fd = erl_accept(listen, &conn);
-

The second argument to erl_accept is a struct ErlConnect that will contain useful information when a connection has been established; for example, the name of the Erlang node.

+

The second argument to erl_accept is a struct + ErlConnect which contains useful information when a + connection has been established, for example, the name of the + Erlang node.

Sending and Receiving Messages -

The C node can receive a message from Erlang by calling erl_receive msg(). This function reads data from the open file descriptor fd into a buffer and puts the result in an ErlMessage struct emsg. ErlMessage has a field type defining which kind of data was received. In this case the type of interest is ERL_REG_SEND which indicates that Erlang sent a message to a registered process at the C node. The actual message, an ETERM, will be in the msg field.

-

It is also necessary to take care of the types ERL_ERROR (an error occurred) and ERL_TICK (alive check from other node, should be ignored). Other possible types indicate process events such as link/unlink and exit.

+

The C node can receive a message from Erlang by calling + erl_receive msg(). This function reads data from the + open file descriptor fd into a buffer and puts the + result in an ErlMessage struct emsg. + ErlMessage has a field type defining what kind + of data is received. In this case, the type of interest is + ERL_REG_SEND which indicates that Erlang sent a message + to a registered process at the C node. The actual message, an + ETERM, is in the msg field.

+

It is also necessary to take care of the types + ERL_ERROR (an error occurred) and ERL_TICK + (alive check from other node, is to be ignored). Other + possible types indicate process events such as link, unlink, + and exit:

   while (loop) {
 
@@ -93,7 +151,16 @@ fd = erl_accept(listen, &conn);
loop = 0; /* exit while loop */ } else { if (emsg.type == ERL_REG_SEND) {
-

Since the message is an ETERM struct, Erl_Interface functions can be used to manipulate it. In this case, the message will be a 3-tuple (because that was how the Erlang code was written, see above). The second element will be the pid of the caller and the third element will be the tuple {Function,Arg} determining which function to call with which argument. The result of calling the function is made into an ETERM struct as well and sent back to Erlang using erl_send(), which takes the open file descriptor, a pid and a term as arguments.

+

As the message is an ETERM struct, Erl_Interface + functions can be used to manipulate it. In this case, the + message becomes a 3-tuple, because that is how the Erlang code + is written. The second element will be the pid of the caller + and the third element will be the tuple {Function,Arg} + determining which function to call, and with which argument. + The result of calling the function is made into an + ETERM struct as well and sent back to Erlang using + erl_send(), which takes the open file descriptor, a + pid, and a term as arguments:

         fromp = erl_element(2, emsg.msg);
         tuplep = erl_element(3, emsg.msg);
@@ -108,29 +175,30 @@ fd = erl_accept(listen, &conn);
resp = erl_format("{cnode, ~i}", res); erl_send(fd, fromp, resp);
-

Finally, the memory allocated by the ETERM creating functions (including erl_receive_msg() must be freed.

+

Finally, the memory allocated by the ETERM creating + functions (including erl_receive_msg()) must be + freed:

         erl_free_term(emsg.from); erl_free_term(emsg.msg);
         erl_free_term(fromp); erl_free_term(tuplep);
         erl_free_term(fnp); erl_free_term(argp);
         erl_free_term(resp);
-

The resulting C programs can be found in looks like the following examples. First a C node server using short node names.

+

The following examples show the resulting C programs. + First a C node server using short node names:

-

Below follows a C node server using long node names.

+

A C node server using long node names:

-

And finally we have the code for the C node client.

+

Finally, the code for the C node client:

Running the Example -

1. Compile the C code, providing the paths to the Erl_Interface include files and libraries, and to the socket and nsl libraries.

-

In R5B and later versions of OTP, the include and lib directories are situated under OTPROOT/lib/erl_interface-VSN, where OTPROOT is the root directory of the OTP installation (/usr/local/otp in the example above) and VSN is the version of the erl_interface application (3.2.1 in the example above).

- - In R4B and earlier versions of OTP, include and lib are situated under OTPROOT/usr.

+

Step 1. Compile the C code, providing the paths to + the Erl_Interface include files and libraries, and to the + socket and nsl libraries:

-      
 >  gcc -o cserver \\ 
 -I/usr/local/otp/lib/erl_interface-3.2.1/include \\ 
 -L/usr/local/otp/lib/erl_interface-3.2.1/lib \\ 
@@ -148,11 +216,29 @@ unix> gcc -o cclient \\ 
 -L/usr/local/otp/lib/erl_interface-3.2.1/lib \\ 
 complex.c cnode_c.c \\ 
 -lerl_interface -lei -lsocket -lnsl
-

2. Compile the Erlang code.

+

In Erlang/OTP R5B and later versions of OTP, the + include and lib directories are situated under + OTPROOT/lib/erl_interface-VSN, where OTPROOT is + the root directory of the OTP installation + (/usr/local/otp in the recent example) and VSN is + the version of the Erl_Interface application (3.2.1 in the + recent example).

+

In R4B and earlier versions of OTP, include and + lib are situated under OTPROOT/usr.

+

Step 2. Compile the Erlang code:

 unix> erl -compile complex3 complex4
-

3. Run the C node server example with short node names.

-

Start the C program cserver and Erlang in different windows. cserver takes a port number as argument and must be started before trying to call the Erlang functions. The Erlang node should be given the short name e1 and must be set to use the same magic cookie as the C node, secretcookie.

+

Step 3. Run the C node server example with short node names.

+

Do as follows:

+ + Start the C program cserver and Erlang in + different windows. + cserver takes a port number as argument and must + be started before trying to call the Erlang functions. + The Erlang node is to be given the short name e1 + and must be set to use the same magic cookie as the C node, + secretcookie: +
 unix> cserver 3456
 
@@ -164,7 +250,9 @@ Eshell V4.9.1.2  (abort with ^G)
 4
 (e1@idril)2> complex3:bar(5).
 10
-

4. Run the C node client example. Terminate cserver but not Erlang and start cclient. The Erlang node must be started before the C node client is.

+

Step 4. Run the C node client example. Terminate + cserver, but not Erlang, and start cclient. The + Erlang node must be started before the C node client:

 unix> cclient
 
@@ -172,7 +260,7 @@ unix> cclient
 4
 (e1@idril)4> complex3:bar(5).
 10
-

5. Run the C node server, long node names, example.

+

Step 5. Run the C node server example with long node names:

 unix> cserver2 3456
 
diff --git a/system/doc/tutorial/erl_interface.xmlsrc b/system/doc/tutorial/erl_interface.xmlsrc
index 0c4c5a99c2..5751a945d6 100644
--- a/system/doc/tutorial/erl_interface.xmlsrc
+++ b/system/doc/tutorial/erl_interface.xmlsrc
@@ -4,7 +4,7 @@
 
   
- 20002013 + 20002015 Ericsson AB. All Rights Reserved. @@ -28,14 +28,29 @@ erl_interface.xml
-

This is an example of how to solve the example problem by using a port and erl_interface. It is necessary to read the port example before reading this chapter.

+ +

This section outlines an example of how to solve the example + problem in Problem Example by + using a port and Erl_Interface. It is necessary to read the port + example in Ports before reading + this section.

Erlang Program -

The example below shows an Erlang program communicating with a C program over a plain port with home made encoding.

- -

Compared to the Erlang module - above used for the plain port, there are two differences when using Erl_Interface on the C side: Since Erl_Interface operates on the Erlang external term format the port must be set to use binaries and, instead of inventing an encoding/decoding scheme, the BIFs term_to_binary/1 and binary_to_term/1 should be used. That is:

+

The following example shows an Erlang program communicating + with a C program over a plain port with home-made encoding:

+ +

There are two differences when using Erl_Interface on the C + side compared to the example in + Ports, using only the plain port:

+ + As Erl_Interface operates on the Erlang external term format, + the port must be set to use binaries. + Instead of inventing an encoding/decoding scheme, the + term_to_binary/1 and binary_to_term/1 BIFs are to + be used. + +

That is:

 open_port({spawn, ExtPrg}, [{packet, 2}])

is replaced with:

@@ -55,69 +70,110 @@ receive {Port, {data, Data}} -> Caller ! {complex, binary_to_term(Data)} end
-

The resulting Erlang program is shown below.

+

The resulting Erlang program is as follows:

-

Note that calling complex2:foo/1 and complex2:bar/1 will result in the tuple {foo,X} or {bar,Y} being sent to the complex process, which will code them as binaries and send them to the port. This means that the C program must be able to handle these two tuples.

+

Notice that calling complex2:foo/1 and + complex2:bar/1 results in the tuple {foo,X} or + {bar,Y} being sent to the complex process, which + codes them as binaries and sends them to the port. This means + that the C program must be able to handle these two tuples.
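The {packet, 2} option means that each binary passed over the port is preceded by a 2-byte length header in big-endian byte order. As an illustration (the tutorial's real erl_comm.c does this over stdin/stdout rather than a memory buffer), the framing can be sketched in plain C:

```c
#include <string.h>
#include <assert.h>

/* Sketch of {packet, 2} framing: a 2-byte big-endian length
 * header followed by the payload. Assumes len fits in 16 bits. */
static int pack_msg(unsigned char *out, const unsigned char *msg, int len)
{
    out[0] = (len >> 8) & 0xff;          /* high byte first */
    out[1] = len & 0xff;
    memcpy(out + 2, msg, len);
    return len + 2;                      /* total frame size */
}

static int unpack_len(const unsigned char *in)
{
    return (in[0] << 8) | in[1];         /* big-endian length */
}
```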

C Program -

The example below shows a C program communicating with an Erlang program over a plain port with home made encoding.

+

The following example shows a C program communicating with an + Erlang program over a plain port with home-made encoding:

-

Compared to the C program above - used for the plain port the while-loop must be rewritten. Messages coming from the port will be on the Erlang external term format. They should be converted into an ETERM struct, a C struct similar to an Erlang term. The result of calling foo() or bar() must be converted to the Erlang external term format before being sent back to the port. But before calling any other erl_interface function, the memory handling must be initiated.

+

Compared to the C program in + Ports, using only the plain port, the + while-loop must be rewritten. Messages coming from the + port are in the Erlang external term format. They must be + converted into an ETERM struct, which is a C struct + similar to an Erlang term. The result of calling foo() or + bar() must be converted to the Erlang external term + format before being sent back to the port. But before calling + any other Erl_Interface function, the memory handling must be + initiated:

 erl_init(NULL, 0);
-

For reading from and writing to the port the functions read_cmd() and write_cmd() from the erl_comm.c example below - can still be used. +

The following functions, read_cmd() and + write_cmd(), from the erl_comm.c example in + Ports can still be + used for reading from and writing to the port:

-

The function erl_decode() from erl_marshal will convert the binary into an ETERM struct.

+

The function erl_decode() from erl_marshal + converts the binary into an ETERM struct:

 int main() {
   ETERM *tuplep;
 
   while (read_cmd(buf) > 0) {
     tuplep = erl_decode(buf);
-

In this case tuplep now points to an ETERM struct representing a tuple with two elements; the function name (atom) and the argument (integer). By using the function erl_element() from erl_eterm it is possible to extract these elements, which also must be declared as pointers to an ETERM struct.

+

Here, tuplep points to an ETERM struct + representing a tuple with two elements; the function name (atom) + and the argument (integer). Using the function + erl_element() from erl_eterm, these elements can + be extracted, but they must also be declared as pointers to an + ETERM struct:

     fnp = erl_element(1, tuplep);
     argp = erl_element(2, tuplep);
-

The macros ERL_ATOM_PTR and ERL_INT_VALUE from erl_eterm can be used to obtain the actual values of the atom and the integer. The atom value is represented as a string. By comparing this value with the strings "foo" and "bar" it can be decided which function to call.

+

The macros ERL_ATOM_PTR and ERL_INT_VALUE from + erl_eterm can be used to obtain the actual values of the + atom and the integer. The atom value is represented as a string. + By comparing this value with the strings "foo" and "bar", it can + be decided which function to call:

     if (strncmp(ERL_ATOM_PTR(fnp), "foo", 3) == 0) {
       res = foo(ERL_INT_VALUE(argp));
     } else if (strncmp(ERL_ATOM_PTR(fnp), "bar", 3) == 0) {
       res = bar(ERL_INT_VALUE(argp));
     }
-

Now an ETERM struct representing the integer result can be constructed using the function erl_mk_int() from erl_eterm. It is also possible to use the function erl_format() from the module erl_format.

+

Now an ETERM struct that represents the integer result + can be constructed using the function erl_mk_int() from + erl_eterm. The function + erl_format() from the module erl_format can also + be used:

     intp = erl_mk_int(res);
-

The resulting ETERM struct is converted into the Erlang external term format using the function erl_encode() from erl_marshal and sent to Erlang using write_cmd().

+

The resulting ETERM struct is converted into the Erlang + external term format using the function erl_encode() from + erl_marshal and sent to Erlang using + write_cmd():

     erl_encode(intp, buf);
     write_cmd(buf, erl_eterm_len(intp));
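What erl_encode() writes into buf is the Erlang external term format. For the simplest case, an integer in the range 0-255, the encoding is only three bytes: the version byte 131, the SMALL_INTEGER_EXT tag 97, and the value itself, so term_to_binary(8) yields <<131,97,8>>. A minimal sketch of that byte layout, without the library:

```c
#include <assert.h>

/* Sketch of the external term format for a small integer:
 * version byte 131, SMALL_INTEGER_EXT tag 97, then the value.
 * erl_encode() handles all term types; this covers only the
 * simplest one, for illustration. */
static int encode_small_int(unsigned char *buf, unsigned char value)
{
    buf[0] = 131;    /* external term format version */
    buf[1] = 97;     /* SMALL_INTEGER_EXT */
    buf[2] = value;
    return 3;        /* number of bytes written */
}
```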
-

Last, the memory allocated by the ETERM creating functions must be freed.

+

Finally, the memory allocated by the ETERM creating + functions must be freed:

     erl_free_compound(tuplep);
     erl_free_term(fnp);
     erl_free_term(argp);
     erl_free_term(intp);
-

The resulting C program is shown below:

+

The resulting C program is as follows:

Running the Example -

1. Compile the C code, providing the paths to the include files erl_interface.h and ei.h, and to the libraries erl_interface and ei.

+

Step 1. Compile the C code, providing the paths to + the include files erl_interface.h and ei.h, and + also to the libraries erl_interface and ei:

 unix> gcc -o extprg -I/usr/local/otp/lib/erl_interface-3.2.1/include \\ 
       -L/usr/local/otp/lib/erl_interface-3.2.1/lib \\ 
       complex.c erl_comm.c ei.c -lerl_interface -lei
-

In R5B and later versions of OTP, the include and lib directories are situated under OTPROOT/lib/erl_interface-VSN, where OTPROOT is the root directory of the OTP installation (/usr/local/otp in the example above) and VSN is the version of the erl_interface application (3.2.1 in the example above).

- - In R4B and earlier versions of OTP, include and lib are situated under OTPROOT/usr.

-

2. Start Erlang and compile the Erlang code.

+

In Erlang/OTP R5B and later versions of OTP, the include + and lib directories are situated under + OTPROOT/lib/erl_interface-VSN, where OTPROOT is + the root directory of the OTP installation + (/usr/local/otp in the recent example) and VSN is + the version of the Erl_Interface application (3.2.1 in the + recent example).

+

In R4B and earlier versions of OTP, include and lib + are situated under OTPROOT/usr.

+

Step 2. Start Erlang and compile the Erlang code:

 unix> erl
 Erlang (BEAM) emulator version 4.9.1.2
@@ -125,7 +181,7 @@ Erlang (BEAM) emulator version 4.9.1.2
 Eshell V4.9.1.2 (abort with ^G)
 1> c(complex2).
 {ok,complex2}
-

3. Run the example.

+

Step 3. Run the example:

 2> complex2:start("extprg").
 <0.34.0>
diff --git a/system/doc/tutorial/example.xmlsrc b/system/doc/tutorial/example.xmlsrc
index f87eb217e9..e205ca189e 100644
--- a/system/doc/tutorial/example.xmlsrc
+++ b/system/doc/tutorial/example.xmlsrc
@@ -4,7 +4,7 @@
 
   
- 20002013 + 20002015 Ericsson AB. All Rights Reserved. @@ -31,16 +31,25 @@
Description -

A common interoperability situation is when there exists a piece of code solving some complex problem, and we would like to incorporate this piece of code in our Erlang program. Suppose for example we have the following C functions that we would like to be able to call from Erlang.

- -

(For the sake of keeping the example as simple as possible, the functions are not very complicated in this case).

-

Preferably we would like to able to call foo and bar without having to bother about them actually being C functions.

+

A common interoperability situation is when you want to incorporate + a piece of code, solving a complex problem, in your Erlang + program. Suppose, for example, that you have the following C + functions that you would like to call from Erlang:

+ +

The functions are deliberately kept as simple as possible, for + readability reasons.
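The C code itself is included from the source tree and not shown here. A minimal pair of functions consistent with the shell transcripts elsewhere in this tutorial (foo(3) returning 4 and bar(5) returning 10) would be:

```c
/* Hypothetical reconstruction of the tutorial's simple C
 * functions, matching the observed results foo(3) -> 4 and
 * bar(5) -> 10. */
int foo(int x)
{
    return x + 1;
}

int bar(int y)
{
    return y * 2;
}
```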

+

From an Erlang perspective, it is preferable to be able to call + foo and bar without having to bother about the fact that + they are C functions:

 % Erlang code
 ...
 Res = complex:foo(X),
 ...
-

The communication with C is hidden in the implementation of complex.erl. In the following chapters it is shown how this module can be implemented using the different interoperability mechanisms.

+

Here, the communication with C is hidden in the implementation + of complex.erl. + In the following sections, it is shown how this module can be + implemented using the different interoperability mechanisms.

diff --git a/system/doc/tutorial/introduction.xml b/system/doc/tutorial/introduction.xml index ed86a00f76..dcf462e311 100644 --- a/system/doc/tutorial/introduction.xml +++ b/system/doc/tutorial/introduction.xml @@ -4,7 +4,7 @@
- 20002013 + 20002015 Ericsson AB. All Rights Reserved. @@ -28,18 +28,34 @@ introduction.xml
+ +

This section describes interoperability, that is, information + exchange, between Erlang and other programming languages. The + included examples mainly treat interoperability between Erlang and + C.

Purpose -

The purpose of this tutorial is to give the reader an orientation of the different interoperability mechanisms that can be used when integrating a program written in Erlang with a program written in another programming language, from the Erlang programmer's point of view.

+

The purpose of this tutorial is to describe different + interoperability mechanisms that can be used when integrating a + program written in Erlang with a program written in another + programming language, from the Erlang programmer's + perspective.

Prerequisites -

It is assumed that the reader is a skilled Erlang programmer, familiar with concepts such as Erlang data types, processes, messages and error handling.

-

To illustrate the interoperability principles C programs running in a UNIX environment have been used. It is assumed that the reader has enough knowledge to be able to apply these principles to the relevant programming languages and platforms.

+

It is assumed that you are a skilled Erlang programmer, + familiar with concepts such as Erlang data types, processes, + messages, and error handling.

+

To illustrate the interoperability principles, C programs + running in a UNIX environment have been used. It is assumed that + you have enough knowledge to apply these principles to the + relevant programming languages and platforms.

-

For the sake of readability, the example code has been kept as simple as possible. It does not include functionality such as error handling, which might be vital in a real-life system.

+

For readability, the example code is kept as simple as + possible. For example, it does not include error handling, + which might be vital in a real-life system.

diff --git a/system/doc/tutorial/nif.xmlsrc b/system/doc/tutorial/nif.xmlsrc index 8ddad60f74..c79370e8c8 100644 --- a/system/doc/tutorial/nif.xmlsrc +++ b/system/doc/tutorial/nif.xmlsrc @@ -4,7 +4,7 @@
- 20002013 + 20002015 Ericsson AB. All Rights Reserved. @@ -28,92 +28,105 @@ nif.xml
-

This is an example of how to solve the example problem - by using NIFs. NIFs were introduced in R13B03 as an experimental - feature. It is a simpler and more efficient way of calling C-code - than using port drivers. NIFs are most suitable for synchronous functions like - foo and bar in the example, that does some - relatively short calculations without side effects and return the result.

-
- NIFs -

A NIF (Native Implemented Function) is a function that is - implemented in C instead of Erlang. NIFs appear as any other functions to - the callers. They belong to a module and are called like any other Erlang - functions. The NIFs of a module are compiled and linked into a dynamic - loadable shared library (SO in Unix, DLL in Windows). The NIF library must - be loaded in runtime by the Erlang code of the module.

-

Since a NIF library is dynamically linked into the emulator - process, this is the fastest way of calling C-code from Erlang (alongside - port drivers). Calling NIFs requires no context switches. But it is also - the least safe, because a crash in a NIF will bring the emulator down - too.

-
+

This section outlines an example of how to solve the example + problem in Problem Example + by using Native Implemented Functions (NIFs).

+

NIFs were introduced in Erlang/OTP R13B03 as an experimental + feature. They are a simpler and more efficient way of calling C-code + than using port drivers. NIFs are most suitable for synchronous + functions, such as foo and bar in the example, that + do some relatively short calculations without side effects and + return the result.

+

A NIF is a function that is implemented in C instead of Erlang. + NIFs appear as any other functions to the callers. They belong to + a module and are called like any other Erlang functions. The NIFs + of a module are compiled and linked into a dynamic loadable, + shared library (SO in UNIX, DLL in Windows). The NIF library must + be loaded in runtime by the Erlang code of the module.

+

As a NIF library is dynamically linked into the emulator process, + this is the fastest way of calling C-code from Erlang (alongside + port drivers). Calling NIFs requires no context switches. But it + is also the least safe, because a crash in a NIF brings the + emulator down too.

Erlang Program -

Even if all functions of a module will be NIFs, you still need an Erlang - module for two reasons. First, the NIF library must be explicitly loaded - by Erlang code in the same module. Second, all NIFs of a module must have - an Erlang implementation as well. Normally these are minimal stub - implementations that throw an exception. But it can also be used as - fallback implementations for functions that do not have native - implemenations on some architectures.

-

NIF libraries are loaded by calling erlang:load_nif/2, with the - name of the shared library as argument. The second argument can be any - term that will be passed on to the library and used for - initialization.

+

Even if all functions of a module are NIFs, an Erlang + module is still needed for two reasons:

+ + The NIF library must be explicitly loaded by + Erlang code in the same module. + All NIFs of a module must have an Erlang implementation + as well. + +

Normally these are minimal stub implementations that throw an + exception. But they can also be used as fallback implementations + for functions that do not have native implementations on some + architectures.

+

NIF libraries are loaded by calling erlang:load_nif/2, + with the name of the shared library as argument. The second + argument can be any term that will be passed on to the library + and used for initialization:

-

We use the directive on_load to get function init to be - automatically called when the module is loaded. If init - returns anything other than ok, such when the loading of - the NIF library fails in this example, the module will be - unloaded and calls to functions within it will fail.

-

Loading the NIF library will override the stub implementations +

Here, the directive on_load is used to get function + init to be automatically called when the module is + loaded. If init returns anything other than ok, + such as when the loading of the NIF library fails in this example, + the module is unloaded and calls to functions within it + fail.

+

Loading the NIF library overrides the stub implementations and causes calls to foo and bar to be dispatched to the NIF implementations instead.

- NIF library code + NIF Library Code

The NIFs of the module are compiled and linked into a shared library. Each NIF is implemented as a normal C function. The macro ERL_NIF_INIT together with an array of structures defines the names, - arity and function pointers of all the NIFs in the module. The header - file erl_nif.h must be included. Since the library is a shared - module, not a program, no main function should be present.

+ arity, and function pointers of all the NIFs in the module. The header + file erl_nif.h must be included. As the library is a shared + module, not a program, no main function is to be present.

The function arguments passed to a NIF appears in an array argv, - with argc as the length of the array and thus the arity of the + with argc as the length of the array, and thus the arity of the function. The Nth argument of the function can be accessed as argv[N-1]. NIFs also take an environment argument that serves as an opaque handle that is needed to be passed on to most API functions. The environment contains information about - the calling Erlang process.

+ the calling Erlang process:

-

The first argument to ERL_NIF_INIT must be the name of the +

Here, ERL_NIF_INIT has the following arguments:

+ +

The first argument must be the name of the Erlang module as a C-identifier. It will be stringified by the - macro. The second argument is the array of ErlNifFunc - structures containing name, arity and function pointer of - each NIF. The other arguments are pointers to callback functions - that can be used to initialize the library. We do not use them - in this simple example so we set them all to NULL.

+ macro.

+
+ The second argument is the array of ErlNifFunc + structures containing name, arity, and function pointer of + each NIF. + The remaining arguments are pointers to callback functions + that can be used to initialize the library. They are not used + in this simple example, hence they are all set to NULL. +

Function arguments and return values are represented as values - of type ERL_NIF_TERM. We use functions like enif_get_int - and enif_make_int to convert between Erlang term and C-type. - If the function argument argv[0] is not an integer then - enif_get_int will return false, in which case we return + of type ERL_NIF_TERM. Here, functions like enif_get_int + and enif_make_int are used to convert between Erlang term + and C-type. + If the function argument argv[0] is not an integer, + enif_get_int returns false, in which case it returns by throwing a badarg-exception with enif_make_badarg.

Running the Example -

1. Compile the C code.

+

Step 1. Compile the C code:

 unix> gcc -o complex6_nif.so -fpic -shared complex.c complex6_nif.c
 windows> cl -LD -MD -Fe complex6_nif.dll complex.c complex6_nif.c
-

2. Start Erlang and compile the Erlang code.

+

Step 2: Start Erlang and compile the Erlang code:

 > erl
 Erlang R13B04 (erts-5.7.5) [64-bit] [smp:4:4] [rq:4] [async-threads:0] [kernel-poll:false]
@@ -121,7 +134,7 @@ Erlang R13B04 (erts-5.7.5) [64-bit] [smp:4:4] [rq:4] [async-threads:0] [kernel-p
 Eshell V5.7.5  (abort with ^G)
 1> c(complex6).
 {ok,complex6}
-

3. Run the example.

+

Step 3: Run the example:

 3> complex6:foo(3).
 4
diff --git a/system/doc/tutorial/overview.xml b/system/doc/tutorial/overview.xml
index 1fe1aad22b..3814a135b4 100644
--- a/system/doc/tutorial/overview.xml
+++ b/system/doc/tutorial/overview.xml
@@ -4,7 +4,7 @@
 
   
- 20002013 + 20002015 Ericsson AB. All Rights Reserved. @@ -31,35 +31,90 @@
Built-In Mechanisms -

There are two interoperability mechanisms built into the Erlang runtime system. One is distributed Erlang and the other one is ports. A variation of ports is linked-in drivers.

+

Two interoperability mechanisms are built into the Erlang + runtime system: distributed Erlang and ports. + A variation of ports is linked-in drivers.

Distributed Erlang -

An Erlang runtime system is made into a distributed Erlang node by giving it a name. A distributed Erlang node can connect to and monitor other nodes, it is also possible to spawn processes at other nodes. Message passing and error handling between processes at different nodes are transparent. There exists a number of useful stdlib modules intended for use in a distributed Erlang system; for example, global which provides global name registration. The distribution mechanism is implemented using TCP/IP sockets.

-

When to use: Distributed Erlang is primarily used for communication Erlang-Erlang. It can also be used for communication between Erlang and C, if the C program is implemented as a C node, see below.

-

Where to read more: Distributed Erlang and some distributed programming techniques are described in the Erlang book.

- - In the Erlang/OTP documentation there is a chapter about distributed Erlang in "Getting Started" (User's Guide).

- - Relevant man pages are erlang (describes the BIFs) and global, net_adm, pg2, rpc, pool and slave.

+

An Erlang runtime system is made into a distributed Erlang node by + giving it a name. A distributed Erlang node can connect to, + and monitor, other nodes. It can also spawn processes at other + nodes. Message passing and error handling between processes at + different nodes are transparent. A number of useful STDLIB + modules are available in a distributed Erlang system. For + example, global provides global name + registration. The distribution mechanism is implemented using + TCP/IP sockets.
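As a minimal sketch of the mechanisms described above (the module, function, and registered names are hypothetical), a process can be spawned at another node and registered globally:

```erlang
%% Hypothetical sketch; assumes another reachable node, for
%% example one started as 'erl -sname bar'.
-module(dist_sketch).
-export([start/1, loop/0]).

start(Node) ->
    %% Spawn an echo process at the remote node. Message passing
    %% to and from it is transparent.
    Pid = spawn(Node, fun loop/0),
    %% Register it under a name known to all connected nodes.
    yes = global:register_name(echo, Pid),
    Pid.

loop() ->
    receive
        {From, Msg} ->
            From ! {self(), Msg},
            loop()
    end.
```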

+

When to use: Distributed Erlang is primarily used + for Erlang-Erlang communication. It can also be used for + communication between Erlang and C, if the C program is + implemented as a C node; see + C and Java Libraries.

+

Where to read more: Distributed Erlang and some distributed + programming techniques are described in the Erlang book.

+

For more information, see + Distributed Programming.

+

Relevant manual pages are the following:

+ + erlang manual page in ERTS + (describes the BIFs) + global manual page in Kernel + net_adm manual page in Kernel + pg2 manual page in Kernel + rpc manual page in Kernel + pool manual page in STDLIB + slave manual page in STDLIB +
Ports and Linked-In Drivers -

Ports provide the basic mechanism for communication with the external world, from Erlang's point of view. They provide a byte-oriented interface to an external program. When a port has been created, Erlang can communicate with it by sending and receiving lists of bytes (not Erlang terms). This means that the programmer may have to invent a suitable encoding and decoding scheme.

-

The actual implementation of the port mechanism depends on the platform. In the Unix case, pipes are used and the external program should as default read from standard input and write to standard output. Theoretically, the external program could be written in any programming language as long as it can handle the interprocess communication mechanism with which the port is implemented.

-

The external program resides in another OS process than the Erlang runtime system. In some cases this is not acceptable, consider for example drivers with very hard time requirements. It is therefore possible to write a program in C according to certain principles and dynamically link it to the Erlang runtime system, this is called a linked-in driver.

-

When to use: Being the basic mechanism, ports can be used for all kinds of interoperability situations where the Erlang program and the other program runs on the same machine. Programming is fairly straight-forward.

- - Linked-in drivers involves writing certain call-back functions in C. Very good skills are required as the code is linked to the Erlang runtime system.

+

Ports provide the basic mechanism for communication with the + external world, from Erlang's point of view. They provide + a byte-oriented interface to an external program. When a port + is created, Erlang can communicate with it by sending and + receiving lists of bytes (not Erlang terms). This means that + the programmer might have to invent a suitable encoding and + decoding scheme.
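For instance, communication over a port can be sketched as follows (the external program name "extprg" is hypothetical):

```erlang
%% Hypothetical sketch: start an external program as a port and
%% exchange lists of bytes with it.
Port = open_port({spawn, "extprg"}, [{packet, 2}]),
Port ! {self(), {command, [1,2,3]}},
receive
    {Port, {data, Data}} ->
        io:format("Received: ~p~n", [Data])
end.
```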

+

The implementation of the port mechanism depends on the + platform. For UNIX, pipes are used and the external program is + assumed to read from standard input and write to standard + output. The external program can be written in any programming + language as long as it can handle the interprocess + communication mechanism with which the port is + implemented.

+

The external program resides in another OS process than the + Erlang runtime system. In some cases this is not acceptable. + Consider, for example, drivers with very hard time + requirements. It is therefore possible to write a program in C + according to certain principles, and dynamically link it to + the Erlang runtime system. This is called a linked-in + driver.

+

When to use: Ports can be used for all kinds of + interoperability situations where the Erlang program and the + other program run on the same machine. Programming is fairly + straightforward.

+

Linked-in drivers involve writing certain callback + functions in C. This requires very good skills, as the code is + linked to the Erlang runtime system.

-

An erroneous linked-in driver will cause the entire Erlang runtime system to leak memory, hang or crash.

+

A faulty linked-in driver causes the entire Erlang runtime + system to leak memory, hang, or crash.

-

Where to read more: Ports are described in the "Miscellaneous Items" chapter of the Erlang book. Linked-in drivers are described in Appendix E.

- - The BIF open_port/2 is documented in the man page for erlang. For linked-in drivers, the programmer needs to read the information in the man page for erl_ddll.

-

Examples:Port example.

+

Where to read more: Ports are described in section + "Miscellaneous Items" of the Erlang book. Linked-in drivers + are described in Appendix E.

+

The BIF open_port/2 is documented in the + erlang manual page in + ERTS.

+

For linked-in drivers, the programmer needs to read the + erl_ddll manual + page in Kernel.

+

Examples: Port example in + Ports.

@@ -68,64 +123,152 @@
Erl_Interface -

Very often the program at the other side of a port is a C program. To help the C programmer a library called Erl_Interface has been developed. It consists of five parts:

+

The program at the other side of a port is often a C program. + To help the C programmer, the Erl_Interface library + has been developed, including the following five parts:

- erl_marshal, erl_eterm, erl_format, erl_malloc Handling of the Erlang external term format. - erl_connect Communication with distributed Erlang, see C nodes below. - erl_error Error print routines. - erl_global Access globally registered names. - Registry Store and backup of key-value pairs. + + erl_marshal, erl_eterm, erl_format, and + erl_malloc: Handling of the Erlang external term format + + erl_connect: + Communication with distributed Erlang, see C nodes below + + erl_error: + Error print routines + + erl_global: + Access globally registered names + + Registry: + Store and backup of key-value pairs -

The Erlang external term format is a representation of an Erlang term as a sequence of bytes, a binary. Conversion between the two representations is done using BIFs.

+

The Erlang external term format is a representation of an + Erlang term as a sequence of bytes, that is, a binary. + Conversion between the two representations is done using the + following BIFs:

 Binary = term_to_binary(Term)
 Term = binary_to_term(Binary)
-

A port can be set to use binaries instead of lists of bytes. It is then not necessary to invent any encoding/decoding scheme. Erl_Interface functions are used for unpacking the binary and convert it into a struct similar to an Erlang term. Such a struct can be manipulated in different ways and be converted to the Erlang external format and sent to Erlang.

+

A port can be set to use binaries instead of lists of bytes. + It is then not necessary to invent any encoding/decoding + scheme. Erl_Interface functions are used for unpacking the + binary and converting it into a struct similar to an Erlang term. + Such a struct can be manipulated in different ways, be + converted to the Erlang external format, and sent to + Erlang.
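For example, a round trip through the external term format can be sketched as follows (the exact byte sequence produced depends on the term and the OTP release):

```erlang
%% Encode an Erlang term to the external term format.
Binary = term_to_binary({attribute, 42}),
%% Binary is now a sequence of bytes starting with the version
%% tag 131, followed by the encoded tuple.
{attribute, 42} = binary_to_term(Binary).
```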

When to use: In C code, in conjunction with Erlang binaries.

-

Where to read more: Read about the Erl_Interface User's Guide; Command Reference and Library Reference. In R5B and earlier versions the information can be found under the Kernel application.

-
-

Examples:erl_interface example.

+

Where to read more: See the Erl_Interface User's + Guide, Command Reference, and Library Reference. In Erlang/OTP + R5B and earlier versions, the information is part of the + Kernel application.

+

Examples: Erl_Interface example in + Erl_Interface.

C Nodes -

A C program which uses the Erl_Interface functions for setting up a connection to and communicating with a distributed Erlang node is called a C node, or a hidden node. The main advantage with a C node is that the communication from the Erlang programmer's point of view is extremely easy, since the C program behaves as a distributed Erlang node.

-

When to use: C nodes can typically be used on device processors (as opposed to control processors) where C is a better choice than Erlang due to memory limitations and/or application characteristics.

-

Where to read more: In the erl_connect part of the Erl_Interface documentation, see above. The programmer also needs to be familiar with TCP/IP sockets, see below, and distributed Erlang, see above.

-

Examples:C node example.

+

A C program that uses the Erl_Interface functions for setting + up a connection to, and communicating with, a distributed + Erlang node is called a C node, or a hidden + node. The main advantage with a C node is that the + communication from the Erlang programmer's perspective is + extremely easy, as the C program behaves as a distributed + Erlang node.

+

When to use: C nodes can typically be used on device + processors (as opposed to control processors) where C is a + better choice than Erlang due to memory limitations or + application characteristics, or both.

+

Where to read more: See the erl_connect part + of the Erl_Interface documentation. The programmer also needs + to be familiar with TCP/IP sockets; see Sockets in Standard + Protocols and Distributed Erlang in Built-In Mechanisms.

+

Example: C node example in + C Nodes.

Jinterface -

In Erlang/OTP R6B, a library similar to Erl_Interface for Java was added called jinterface.

+

In Erlang/OTP R6B, jinterface, a library for + Java similar to Erl_Interface, was added. It enables + Java programs to communicate with Erlang nodes.

Standard Protocols -

Sometimes communication between an Erlang program and another program using a standard protocol is desirable. Erlang/OTP currently supports TCP/IP and UDP sockets, SNMP, HTTP and IIOP (CORBA). Using one of the latter three requires good knowledge about the protocol and is not covered by this tutorial. Please refer to the documentation for the SNMP, Inets and Orber applications, respectively.

+

Sometimes communication between an Erlang program and another + program using a standard protocol is desirable. Erlang/OTP + currently supports TCP/IP and UDP sockets, as well as + the following:

+ + SNMP + HTTP + IIOP (CORBA) + +

Using one of the latter three requires good knowledge about the + protocol and is not covered by this tutorial. See the SNMP, + Inets, and Orber applications, respectively.

Sockets -

Simply put, connection-oriented socket communication (TCP/IP) consists of an initiator socket ("server") started at a certain host with a certain port number. A connector socket ("client") aware of the initiator's host name and port number can connect to it and data can be sent between them. Connection-less socket communication (UDP) consists of an initiator socket at a certain host with a certain port number and a connector socket sending data to it. For a detailed description of the socket concept, please refer to a suitable book about network programming. A suggestion is UNIX Network Programming, Volume 1: Networking APIs - Sockets and XTI by W. Richard Stevens, ISBN: 013490012X.

-

In Erlang/OTP, access to TCP/IP and UDP sockets is provided by the - Kernel modules gen_tcp and gen_udp. Both are easy to - use and do not require any deeper knowledge about the socket concept.

-

When to use: For programs running on the same or on another machine than the Erlang program.

-

Where to read more: The man pages for gen_tcp and gen_udp.

+

Simply put, connection-oriented socket communication (TCP/IP) + consists of an initiator socket ("server") started at a + certain host with a certain port number. A connector socket + ("client"), which is aware of the initiator host name and port + number, can connect to it and data can be sent between + them.

+

Connection-less socket communication (UDP) consists of an + initiator socket at a certain host with a certain port number + and a connector socket sending data to it.

+

For a detailed description of the socket concept, refer to a + suitable book about network programming. A suggestion is + UNIX Network Programming, Volume 1: Networking APIs - + Sockets and XTI by W. Richard Stevens, ISBN: + 013490012X.

+

In Erlang/OTP, access to TCP/IP and UDP sockets is provided + by the modules gen_tcp and gen_udp in + Kernel. Both are easy to use and do not require + detailed knowledge about the socket concept.
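A minimal TCP sketch of the initiator/connector roles described above (the port number 5678 is an arbitrary choice):

```erlang
%% Initiator ("server") side:
{ok, LSock} = gen_tcp:listen(5678, [binary, {active, false}]),
{ok, ASock} = gen_tcp:accept(LSock),   % waits for a connector
{ok, Data}  = gen_tcp:recv(ASock, 0).

%% Connector ("client") side, in another node or OS process:
{ok, Sock} = gen_tcp:connect("localhost", 5678, [binary]),
ok = gen_tcp:send(Sock, <<"hello">>).
```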

+

When to use: For programs running on the same machine + as the Erlang program, or on another machine.

+

Where to read more: See the gen_tcp and the gen_udp manual pages in + Kernel.

IC -

IC (IDL Compiler) is an interface generator which given an IDL interface specification automatically generates stub code in Erlang, C or Java. Please refer to the IC User's Guide and IC Reference Manual.

+

IC (Erlang IDL Compiler) is an interface generator that, given + an IDL interface specification, automatically generates stub + code in Erlang, C, or Java. See the IC User's Guide and IC + Reference Manual.

+

For details, see the ic + manual page in IC.

Old Applications -

There are two old applications of interest when talking about interoperability: IG which was removed in Erlang/OTP R6B and Jive which was removed in Erlang/OTP R7B. Both applications have been replaced by IC and are mentioned here for reference only.

-

IG (Interface Generator) automatically generated code for port or socket communication between an Erlang program and a C program, given a C header file with certain keywords. Jive provided a simple interface between an Erlang program and a Java program.

+

Two old applications are of interest regarding + interoperability. Both have been replaced by IC and are + mentioned here for reference only:

+ +

IG - Removed from Erlang/OTP R6B.

+

IG (Interface Generator) automatically generated code for + port or socket communication between an Erlang program and a + C program, given a C header file with certain keywords.

+
+

Jive - Removed from Erlang/OTP R7B.

+

Jive provided a simple interface between an Erlang program + and a Java program.

+
+
-- cgit v1.2.3 From e16e17d8ec2085b418cfa807a1bcbf0268f1d836 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= Date: Wed, 11 Mar 2015 11:47:13 +0100 Subject: Replace mention of a tuple fun with an external fun --- system/doc/programming_examples/funs.xmlsrc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'system/doc') diff --git a/system/doc/programming_examples/funs.xmlsrc b/system/doc/programming_examples/funs.xmlsrc index e4f5c9c9c9..7bcf2e6171 100644 --- a/system/doc/programming_examples/funs.xmlsrc +++ b/system/doc/programming_examples/funs.xmlsrc @@ -127,7 +127,7 @@ F = fun FunctionName/Arity

It is also possible to refer to a function defined in a different module, with the following syntax:

-F = {Module, FunctionName} +F = fun Module:FunctionName/Arity

In this case, the function must be exported from the module in question.
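For example, using a function that is exported from the lists module in STDLIB:

```erlang
%% lists:reverse/1 is exported, so it can be referred to as an
%% external fun and applied like any other fun:
F = fun lists:reverse/1,
[3,2,1] = F([1,2,3]).
```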

The following program illustrates the different ways of creating -- cgit v1.2.3 From 2daff33c1cee100e7e8851cef463bda7c0237310 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= Date: Wed, 11 Mar 2015 11:50:34 +0100 Subject: Remove an historical note about fun representation before R6B It is hardly useful to mention that funs used to be represented as tuples in ancient releases. --- system/doc/programming_examples/funs.xmlsrc | 4 ---- 1 file changed, 4 deletions(-) (limited to 'system/doc') diff --git a/system/doc/programming_examples/funs.xmlsrc b/system/doc/programming_examples/funs.xmlsrc index 7bcf2e6171..57b90ccf7c 100644 --- a/system/doc/programming_examples/funs.xmlsrc +++ b/system/doc/programming_examples/funs.xmlsrc @@ -149,10 +149,6 @@ f(N, _) when is_integer(N) -> erlang:fun_to_list/1 returns a textual representation of a fun. The check_process_code/2 BIF returns true if the process contains funs that depend on the old version of a module.

- -

In OTP R5 and earlier releases, funs were represented using - tuples.

-
-- cgit v1.2.3 From 0a1d39481440eb033f48fbbc8889bc99eda85d41 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Gustavsson?= Date: Wed, 11 Mar 2015 12:02:26 +0100 Subject: Replace "lambda head" with "fun" in compiler warning We no longer use the term "lambda". --- system/doc/programming_examples/funs.xmlsrc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'system/doc') diff --git a/system/doc/programming_examples/funs.xmlsrc b/system/doc/programming_examples/funs.xmlsrc index 57b90ccf7c..d4c32bc854 100644 --- a/system/doc/programming_examples/funs.xmlsrc +++ b/system/doc/programming_examples/funs.xmlsrc @@ -190,7 +190,7 @@ print_list(File, List) -> the following diagnostic:

./FileName.erl:Line: Warning: variable 'File' - shadowed in 'lambda head' + shadowed in 'fun'

This indicates that the variable File, which is defined inside the fun, collides with the variable File, which is defined outside the fun.

-- cgit v1.2.3