From 572323a87f3ed28ae2af42f32cbc745e35b95101 Mon Sep 17 00:00:00 2001
From: xsipewe
Date: Mon, 18 May 2015 14:54:00 +0200
Subject: Update asn1 documentation
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Language cleaned up by the technical writer xsipewe from Combitech.
Proofreading and additional corrections by Björn Gustavsson and
Dan Gudmundsson.
---
 lib/mnesia/doc/src/Mnesia_chap5.xmlsrc | 1653 ++++++++++++++++----------------
 1 file changed, 802 insertions(+), 851 deletions(-)

(limited to 'lib/mnesia/doc/src/Mnesia_chap5.xmlsrc')

diff --git a/lib/mnesia/doc/src/Mnesia_chap5.xmlsrc b/lib/mnesia/doc/src/Mnesia_chap5.xmlsrc
index 127c23e0f7..813731e0b8 100644
--- a/lib/mnesia/doc/src/Mnesia_chap5.xmlsrc
+++ b/lib/mnesia/doc/src/Mnesia_chap5.xmlsrc
@@ -31,222 +31,205 @@ Mnesia_chap5.xml
-

The earlier chapters of this User Guide described how to get - started with Mnesia, and how to build a Mnesia database. In this - chapter, we will describe the more advanced features available - when building a distributed, fault tolerant Mnesia database. This - chapter contains the following sections: -

+ +

The previous sections describe how to get started + with Mnesia and how to build a Mnesia database. This + section describes the more advanced features available + when building a distributed, fault-tolerant Mnesia database. + The following topics are included:

- Indexing
- Distribution and Fault Tolerance
- Table fragmentation.
- Local content tables.
- Disc-less nodes.
- More about schema management
- Debugging a Mnesia application
- Concurrent Processes in Mnesia
- Prototyping
- Object Based Programming with Mnesia.
+ Indexing
+ Distribution and fault tolerance
+ Table fragmentation
+ Local content tables
+ Disc-less nodes
+ More about schema management
+ Mnesia event handling
+ Debugging Mnesia applications
+ Concurrent processes in Mnesia
+ Prototyping
+ Object-based programming with Mnesia
Indexing -

Data retrieval and matching can be performed very efficiently - if we know the key for the record. Conversely, if the key is not - known, all records in a table must be searched. The larger the - table the more time consuming it will become. To remedy this - problem Mnesia's indexing capabilities are used to improve data - retrieval and matching of records. -

-

The following two functions manipulate indexes on existing tables: -

+

Data retrieval and matching can be performed efficiently + if the key for the record is known. Conversely, if the key is + unknown, all records in a table must be searched. The larger the + table, the more time-consuming it becomes. To remedy this + problem, Mnesia indexing capabilities are used to improve + data retrieval and matching of records.

+

The following two functions manipulate indexes on existing + tables:

- mnesia:add_table_index(Tab, AttributeName) -> {aborted, R} |{atomic, ok} - mnesia:del_table_index(Tab, AttributeName) -> {aborted, R} |{atomic, ok} - -

These functions create or delete a table index on field - defined by AttributeName. To illustrate this, add an - index to the table definition (employee, {emp_no, name, salary, sex, phone, room_no}, which is the example table - from the Company database. The function - which adds an index on the element salary can be expressed in - the following way: -

- - mnesia:add_table_index(employee, salary) + mnesia:add_table_index(Tab, AttributeName) + -> {aborted, R} |{atomic, ok} + mnesia:del_table_index(Tab, AttributeName) + -> {aborted, R} |{atomic, ok} -

The indexing capabilities of Mnesia are utilized with the - following three functions, which retrieve and match records on the - basis of index entries in the database. -

+

These functions create or delete a table index on a field + defined by AttributeName. To illustrate this, add an + index to the table definition (employee, {emp_no, name, + salary, sex, phone, room_no}), which is the example table + from the Company database. The function that + adds an index on element salary can be expressed + as mnesia:add_table_index(employee, salary).

+

The indexing capabilities of Mnesia are used with the + following three functions, which retrieve and match records + based on index entries in the database:

- mnesia:index_read(Tab, SecondaryKey, AttributeName) -> transaction abort | RecordList. - Avoids an exhaustive search of the entire table, by looking up - the SecondaryKey in the index to find the primary keys. + + mnesia:index_read(Tab, SecondaryKey, AttributeName) + -> transaction abort | RecordList + avoids an exhaustive search of the entire table, by looking up + SecondaryKey in the index to find the primary keys. - mnesia:index_match_object(Pattern, AttributeName) -> transaction abort | RecordList - Avoids an exhaustive search of the entire table, by looking up + + mnesia:index_match_object(Pattern, AttributeName) + -> transaction abort | RecordList + avoids an exhaustive search of the entire table, by looking up the secondary key in the index to find the primary keys. - The secondary key is found in the AttributeName field of - the Pattern. The secondary key must be bound. + The secondary key is found in field AttributeName of + Pattern. The secondary key must be bound. - mnesia:match_object(Pattern) -> transaction abort | RecordList - Uses indices to avoid exhaustive search of the entire table. - Unlike the other functions above, this function may utilize + + mnesia:match_object(Pattern) + -> transaction abort | RecordList + uses indexes to avoid exhaustive search of the entire table. + Unlike the previous functions, this function can use any index as long as the secondary key is bound.
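Taken together, these calls look as follows in practice. This is a minimal sketch, assuming the employee table from the Company database; the salary value is purely illustrative and not part of the original example code:

      %% Create the index once, then use it to fetch matching
      %% records without scanning the whole employee table.
      mnesia:add_table_index(employee, salary),
      Fun = fun() ->
                mnesia:index_read(employee, 4711, salary)
            end,
      {atomic, Emps} = mnesia:transaction(Fun).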

These functions are further described and exemplified in - Chapter 4: Pattern matching. -

+ Pattern Matching. +

Distribution and Fault Tolerance -

Mnesia is a distributed, fault tolerant DBMS. It is possible - to replicate tables on different Erlang nodes in a variety of - ways. The Mnesia programmer does not have to state +

Mnesia is a distributed, fault-tolerant DBMS. Tables + can be replicated on different Erlang nodes in various + ways. The Mnesia programmer does not need to state where the different tables reside, only the names of the - different tables are specified in the program code. This is - known as "location transparency" and it is an important - concept. In particular: -

+ different tables need to be specified in the program code. This + is known as "location transparency" and is an important + concept. In particular:

- A program will work regardless of the - location of the data. It makes no difference whether the data - resides on the local node, or on a remote node. Note: The program - will run slower if the data is located on a remote node. +

A program works regardless of the data + location. It makes no difference whether the data + resides on the local node or on a remote node.

+

Notice that the program runs slower if the data + is located on a remote node.

The database can be reconfigured, and tables can be - moved between nodes. These operations do not effect the user + moved between nodes. These operations do not affect the user programs.
-

We have previously seen that each table has a number of - system attributes, such as index and - type. -

+

It has previously been shown that each table has a number of + system attributes, such as index and type.

Table attributes are specified when the table is created. For - example, the following function will create a new table with two - RAM replicas: -

+ example, the following function creates a table with two + RAM replicas:

       mnesia:create_table(foo,
                           [{ram_copies, [N1, N2]},
-                           {attributes, record_info(fields, foo)}]).
-    
+ {attributes, record_info(fields, foo)}]).

Tables can also have the following properties, - where each attribute has a list of Erlang nodes as its value. -

+ where each attribute has a list of Erlang nodes as its value:

-

ram_copies. The value of the node list is a list of - Erlang nodes, and a RAM replica of the table will reside on - each node in the list. This is a RAM replica, and it is - important to realize that no disc operations are performed when - a program executes write operations to these replicas. However, - should permanent RAM replicas be a requirement, then the +

ram_copies. The value of the node list is a list + of Erlang nodes, and a RAM replica of the table resides on + each node in the list.

+

Notice that no disc operations are performed when + a program executes write operations to these replicas. + However, if permanent RAM replicas are required, the following alternatives are available:

- The mnesia:dump_tables/1 function can be used - to dump RAM table replicas to disc. + The function + mnesia:dump_tables/1 + can be used to dump RAM table replicas to disc. - The table replicas can be backed up; either from - RAM, or from disc if dumped there with the above - function. + The table replicas can be backed up, either from + RAM, or from disc if dumped there with this function.
disc_copies. The value of the attribute is a list - of Erlang nodes, and a replica of the table will reside both + of Erlang nodes, and a replica of the table resides both in RAM and on disc on each node in the list. Write operations - addressed to the table will address both the RAM and the disc + addressed to the table address both the RAM and the disc copy of the table. disc_only_copies. The value of the attribute is a - list of Erlang nodes, and a replica of the table will reside + list of Erlang nodes, and a replica of the table resides only as a disc copy on each node in the list. The major disadvantage of this type of table replica is the access speed. The major advantage is that the table does not occupy space in memory.
-
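As an illustration of the storage types above, a hedged sketch of creating a table with disc_copies replicas follows; the foo record and the second node 'b@host' are assumptions, not part of the original text:

      %% A replica is kept both in RAM and on disc on each listed
      %% node; write operations update both copies.
      mnesia:create_table(foo,
                          [{disc_copies, [node(), 'b@host']},
                           {attributes, record_info(fields, foo)}]).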

It is also possible to set and change table properties on - existing tables. Refer to Chapter 3: Defining the Schema for full - details. -

+

In addition, table properties can be set and changed. + For details, see + Define a Schema. +

There are basically two reasons for using more than one table - replica: fault tolerance, or speed. It is worthwhile to note + replica: fault tolerance and speed. Notice that table replication provides a solution to both of these - system requirements. -

-

If we have two active table replicas, all information is - still available if one of the replicas fail. This can be a very + system requirements.

+

If there are two active table replicas, all information is + still available if one replica fails. This can be an important property in many applications. Furthermore, if a table - replica exists at two specific nodes, applications which execute + replica exists at two specific nodes, applications that execute at either of these nodes can read data from the table without - accessing the network. Network operations are considerably - slower and consume more resources than local operations. -

+ accessing the network. Network operations are considerably + slower and consume more resources than local operations.

It can be advantageous to create table replicas for a - distributed application which reads data often, but writes data - seldom, in order to achieve fast read operations on the local + distributed application that reads data often, but writes data + seldom, to achieve fast read operations on the local node. The major disadvantage with replication is the increased time to write data. If a table has two replicas, every write operation must access both table replicas. Since one of these write operations must be a network operation, it is considerably more expensive to perform a write operation to a replicated - table than to a non-replicated table. -

+ table than to a non-replicated table.

Table Fragmentation
- The Concept -

A concept of table fragmentation has been introduced in - order to cope with very large tables. The idea is to split a - table into several more manageable fragments. Each fragment - is implemented as a first class Mnesia table and may be - replicated, have indices etc. as any other table. But the - tables may neither have local_content nor have the - snmp connection activated. -

-

In order to be able to access a record in a fragmented - table, Mnesia must determine to which fragment the - actual record belongs. This is done by the - mnesia_frag module, which implements the - mnesia_access callback behaviour. Please, read the - documentation about mnesia:activity/4 to see how - mnesia_frag can be used as a mnesia_access - callback module. -

-

At each record access mnesia_frag first computes - a hash value from the record key. Secondly the name of the - table fragment is determined from the hash value. And - finally the actual table access is performed by the same + Concept +

A concept of table fragmentation has been introduced + to cope with large tables. The idea is to split a + table into several manageable fragments. Each fragment is + implemented as a first-class Mnesia table and can be + replicated, have indexes, and so on, as any other table. But + the tables cannot have local_content or have the + snmp connection activated.

+

To be able to access a record in a fragmented + table, Mnesia must determine to which fragment the + actual record belongs. This is done by module + mnesia_frag, which implements the mnesia_access + callback behavior. It is recommended to read the + documentation about the function + mnesia:activity/4 + to see how mnesia_frag + can be used as a mnesia_access callback module.

+

At each record access, mnesia_frag first computes + a hash value from the record key. Second, the name of the + table fragment is determined from the hash value. + Finally the actual table access is performed by the same functions as for non-fragmented tables. When the key is not known beforehand, all fragments are searched for - matching records. Note: In ordered_set tables - the records will be ordered per fragment, and the - the order is undefined in results returned by select and - match_object. -

-

The following piece of code illustrates - how an existing Mnesia table is converted to be a - fragmented table and how more fragments are added later on. -

+ matching records.

+

Notice that in ordered_set tables, the records + are ordered per fragment, and the order is undefined in + results returned by select and match_object.

+

The following code illustrates how a Mnesia table is + converted to be a fragmented table and how more fragments + are added later:

mnesia:start(). @@ -299,102 +282,96 @@ ok
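The full conversion example is elided in this hunk. As a hedged sketch of the resulting access pattern, every operation on a fragmented table is wrapped in mnesia:activity/4 with mnesia_frag as the access module; the table and key names are hypothetical:

      %% mnesia_frag hashes the key, resolves the fragment name,
      %% and performs the ordinary operation on that fragment.
      Fun = fun() -> mnesia:read({frag_tab, some_key}) end,
      Records = mnesia:activity(transaction, Fun, [], mnesia_frag).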
Fragmentation Properties -

There is a table property called - frag_properties and may be read with - mnesia:table_info(Tab, frag_properties). The - fragmentation properties is a list of tagged tuples with - the arity 2. By default the list is empty, but when it is - non-empty it triggers Mnesia to regard the table as - fragmented. The fragmentation properties are: -

+

The table property frag_properties can be read with + the function + mnesia:table_info(Tab, frag_properties). + The fragmentation properties are a list of tagged tuples with + arity 2. By default the list is empty, but when it is + non-empty it triggers Mnesia to regard the table as + fragmented. The fragmentation properties are as follows:

{n_fragments, Int}

n_fragments regulates how many fragments - that the table currently has. This property may explicitly + that the table currently has. This property can explicitly be set at table creation and later be changed with {add_frag, NodesOrDist} or - del_frag. n_fragments defaults to 1. -

+ del_frag. n_fragments defaults to 1.

{node_pool, List} -

The node pool contains a list of nodes and may +

The node pool contains a list of nodes and can explicitly be set at table creation and later be changed - with {add_node, Node} or {del_node, Node}. At table creation Mnesia tries to distribute + with {add_node, Node} or {del_node, Node}. + At table creation Mnesia tries to distribute the replicas of each fragment evenly over all the nodes in - the node pool. Hopefully all nodes will end up with the + the node pool. Hopefully all nodes end up with the same number of replicas. node_pool defaults to the - return value from mnesia:system_info(db_nodes). -

+ return value from the function + mnesia:system_info(db_nodes).

{n_ram_copies, Int}

Regulates how many ram_copies replicas - that each fragment should have. This property may - explicitly be set at table creation. The default is + that each fragment is to have. This property can + explicitly be set at table creation. Default is 0, but if n_disc_copies and n_disc_only_copies also are 0, - n_ram_copies will default be set to 1. + n_ram_copies defaults to 1.

+ n_ram_copies defaults to 1.

{n_disc_copies, Int} -

Regulates how many disc_copies replicas - that each fragment should have. This property may - explicitly be set at table creation. The default is 0. -

+

Regulates how many disc_copies replicas that + each fragment is to have. This property can explicitly + be set at table creation. Default is 0.

{n_disc_only_copies, Int}

Regulates how many disc_only_copies replicas - that each fragment should have. This property may - explicitly be set at table creation. The default is 0. -

+ that each fragment is to have. This property can + explicitly be set at table creation. Defaults is + 0.

{foreign_key, ForeignKey} -

ForeignKey may either be the atom +

ForeignKey can either be the atom undefined or the tuple {ForeignTab, Attr}, - where Attr denotes an attribute which should be + where Attr denotes an attribute that is to be interpreted as a key in another fragmented table named - ForeignTab. Mnesia will ensure that the number of + ForeignTab. Mnesia ensures that the number of fragments in this table and in the foreign table are - always the same. When fragments are added or deleted - Mnesia will automatically propagate the operation to all - fragmented tables that has a foreign key referring to this + always the same.

+

When fragments are added or deleted, Mnesia + automatically propagates the operation to all + fragmented tables that have a foreign key referring to this table. Instead of using the record key to determine which - fragment to access, the value of the Attr field is - used. This feature makes it possible to automatically - co-locate records in different tables to the same - node. foreign_key defaults to - undefined. However if the foreign key is set to - something else it will cause the default values of the + fragment to access, the value of field Attr is + used. This feature makes it possible to colocate records + automatically in different tables to the same node. + foreign_key defaults to + undefined. However, if the foreign key is set to + something else, it causes the default values of the other fragmentation properties to be the same values as - the actual fragmentation properties of the foreign table. -

+ the actual fragmentation properties of the foreign table.

{hash_module, Atom} -

Enables definition of an alternate hashing scheme. - The module must implement the mnesia_frag_hash - callback behaviour (see the reference manual). This - property may explicitly be set at table creation. - The default is mnesia_frag_hash.

-

Older tables that was created before the concept of - user defined hash modules was introduced, uses - the mnesia_frag_old_hash module in order to - be backwards compatible. The mnesia_frag_old_hash - is still using the poor deprecated erlang:hash/1 - function. -

+

Enables definition of an alternative hashing scheme. + The module must implement the + mnesia_frag_hash + callback behavior. This property can explicitly be set at + table creation. Default is mnesia_frag_hash.

+

Older tables, created before the concept of + user-defined hash modules was introduced, use module + mnesia_frag_old_hash to be backwards compatible. + mnesia_frag_old_hash still uses the + deprecated function erlang:hash/1.

{hash_state, Term} -

Enables a table specific parameterization - of a generic hash module. This property may explicitly - be set at table creation. - The default is undefined.

+

Enables a table-specific parameterization of a + generic hash module. This property can explicitly be set + at table creation. Default is undefined.

mnesia:start(). @@ -463,177 +440,159 @@ ok Management of Fragmented Tables

The function mnesia:change_table_frag(Tab, Change) is intended to be used for reconfiguration of fragmented - tables. The Change argument should have one of the - following values: -

+ tables. Argument Change is to have one of the + following values:

{activate, FragProps}

Activates the fragmentation properties of an - existing table. FragProps should either contain - {node_pool, Nodes} or be empty. -

+ existing table. FragProps is either to contain + {node_pool, Nodes} or be empty.

deactivate

Deactivates the fragmentation properties of a - table. The number of fragments must be 1. No other - tables may refer to this table in its foreign key. -

+ table. The number of fragments must be 1. No other + table can refer to this table in its foreign key.

{add_frag, NodesOrDist} -

Adds one new fragment to a fragmented table. All - records in one of the old fragments will be rehashed and - about half of them will be moved to the new (last) - fragment. All other fragmented tables, which refers to this - table in their foreign key, will automatically get a new - fragment, and their records will also be dynamically - rehashed in the same manner as for the main table. -

-

The NodesOrDist argument may either be a list - of nodes or the result from mnesia:table_info(Tab, frag_dist). The NodesOrDist argument is +

Adds a fragment to a fragmented table. All + records in one of the old fragments are rehashed and + about half of them are moved to the new (last) + fragment. All other fragmented tables, which refer to this + table in their foreign key, automatically get a new + fragment. Also, their records are dynamically + rehashed in the same manner as for the main table.

+

Argument NodesOrDist can either be a list of + nodes or the result from the function + mnesia:table_info(Tab, frag_dist). + Argument NodesOrDist is assumed to be a sorted list with the best nodes to host new replicas first in the list. The new fragment - will get the same number of replicas as the first - fragment (see n_ram_copies, n_disc_copies + gets the same number of replicas as the first + fragment (see n_ram_copies, n_disc_copies, and n_disc_only_copies). The NodesOrDist list must at least contain one element for each - replica that needs to be allocated. -

+ replica that needs to be allocated.

del_frag -

Deletes one fragment from a fragmented table. All - records in the last fragment will be moved to one of the other - fragments. All other fragmented tables which refers to - this table in their foreign key, will automatically lose - their last fragment and their records will also be +

Deletes a fragment from a fragmented table. All + records in the last fragment are moved to one of the other + fragments. All other fragmented tables, which refer to + this table in their foreign key, automatically lose + their last fragment. Also, their records are dynamically rehashed in the same manner as for the main - table. -

+ table.

{add_node, Node} -

Adds a new node to the node_pool. The new - node pool will affect the list returned from - mnesia:table_info(Tab, frag_dist). -

+

Adds a node to node_pool. The new + node pool affects the list returned from the function + mnesia:table_info(Tab, frag_dist). +

{del_node, Node} -

Deletes a new node from the node_pool. The - new node pool will affect the list returned from - mnesia:table_info(Tab, frag_dist).

+

Deletes a node from node_pool. The new + node pool affects the list returned from the function + mnesia:table_info(Tab, frag_dist). +
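As a hedged sketch of the Change values listed above, the following grows a fragmented table; the table name frag_tab and the node 'c@host' are hypothetical:

      %% Widen the node pool, then add a fragment placed according
      %% to the current distribution of replicas.
      {atomic, ok} = mnesia:change_table_frag(frag_tab, {add_node, 'c@host'}),
      Dist = mnesia:activity(sync_dirty,
                             fun() -> mnesia:table_info(frag_tab, frag_dist) end,
                             [], mnesia_frag),
      {atomic, ok} = mnesia:change_table_frag(frag_tab, {add_frag, Dist}).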

Extensions of Existing Functions -

The function mnesia:create_table/2 is used to - create a brand new fragmented table, by setting the table - property frag_properties to some proper values. -

-

The function mnesia:delete_table/1 is used to - delete a fragmented table including all its - fragments. There must however not exist any other - fragmented tables which refers to this table in their foreign key. -

-

The function mnesia:table_info/2 now understands - the frag_properties item. -

-

If the function mnesia:table_info/2 is invoked in - the activity context of the mnesia_frag module, - information of several new items may be obtained: -

+

The function + mnesia:create_table/2 + creates a brand new fragmented table, by setting table + property frag_properties to some proper values.

+

The function + mnesia:delete_table/1 + deletes a fragmented table including all its + fragments. There must however not exist any other fragmented + tables that refer to this table in their foreign key.

+

The function + mnesia:table_info/2 + now understands item frag_properties.

+

If the function mnesia:table_info/2 is started in + the activity context of module mnesia_frag, + information of several new items can be obtained:

base_table - -

the name of the fragmented table -

-
+ The name of the fragmented table n_fragments - -

the actual number of fragments -

-
+ The actual number of fragments node_pool - -

the pool of nodes -

-
+ The pool of nodes n_ram_copies n_disc_copies n_disc_only_copies -

the number of replicas with storage type - ram_copies, disc_copies and disc_only_copies +

The number of replicas with storage type ram_copies, + disc_copies, and disc_only_copies, respectively. The actual values are dynamically derived from the first fragment. The first fragment serves as a - pro-type and when the actual values needs to be computed - (e.g. when adding new fragments) they are simply - determined by counting the number of each replicas for - each storage type. This means, when the functions - mnesia:add_table_copy/3, - mnesia:del_table_copy/2 and mnesia:change_table_copy_type/2 are applied on the - first fragment, it will affect the settings on + prototype. When the actual values need to be computed + (for example, when adding new fragments) they are + determined by counting the number of each replica for + each storage type. This means that when the functions + mnesia:add_table_copy/3, + mnesia:del_table_copy/2, and + mnesia:change_table_copy_type/2 are applied on the + first fragment, it affects the settings on n_ram_copies, n_disc_copies, and - n_disc_only_copies.

+ n_disc_only_copies.

foreign_key -

the foreign key. -

+

The foreign key

foreigners -

all other tables that refers to this table in - their foreign key. -

+

All other tables that refer to this table in + their foreign key

frag_names -

the names of all fragments. -

+

The names of all fragments

frag_dist -

a sorted list of {Node, Count} tuples - which is sorted in increasing Count order. The +

A sorted list of {Node, Count} tuples + that are sorted in increasing Count order. Count is the total number of replicas that this fragmented table hosts on each Node. The list - always contains at least all nodes in the - node_pool. The nodes which not belongs to the - node_pool will be put last in the list even if - their Count is lower. -

+ always contains at least all nodes in + node_pool. Nodes that do not belong to + node_pool are put last in the list even if + their Count is lower.

frag_size -

a list of {Name, Size} tuples where - Name is a fragment Name and Size is - how many records it contains. -

+

A list of {Name, Size} tuples, where + Name is a fragment Name, and Size is + how many records it contains

frag_memory -

a list of {Name, Memory} tuples where - Name is a fragment Name and Memory is - how much memory it occupies. -

+

A list of {Name, Memory} tuples, where + Name is a fragment Name, and Memory is + how much memory it occupies

size -

total size of all fragments -

+

Total size of all fragments

memory -

the total memory of all fragments

+

Total memory of all fragments
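A hedged sketch of reading these items; the fragmented table name is hypothetical, and the calls must run in the mnesia_frag activity context as described above:

      Info = fun(Item) ->
                 mnesia:activity(sync_dirty,
                                 fun() -> mnesia:table_info(frag_tab, Item) end,
                                 [], mnesia_frag)
             end,
      Names = Info(frag_names),   %% names of all fragments
      Sizes = Info(frag_size).    %% {Name, Size} per fragment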

@@ -642,42 +601,45 @@ ok Load Balancing

There are several algorithms for distributing records in a fragmented table evenly over a - pool of nodes. No one is best, it simply depends of the - application needs. Here follows some examples of - situations which may need some attention: -

-

permanent change of nodes when a new permanent - db_node is introduced or dropped, it may be time to - change the pool of nodes and re-distribute the replicas - evenly over the new pool of nodes. It may also be time to - add or delete a fragment before the replicas are re-distributed. -

-

size/memory threshold when the total size or + pool of nodes. No one is best, it depends on the + application needs. The following examples of + situations need some attention:

+ + permanent change of nodes. When a new permanent + db_node is introduced or dropped, it can be time to + change the pool of nodes and redistribute the replicas + evenly over the new pool of nodes. It can also be time to + add or delete a fragment before the replicas are redistributed. + + size/memory threshold. When the total size or total memory of a fragmented table (or a single - fragment) exceeds some application specific threshold, it - may be time to dynamically add a new fragment in order - obtain a better distribution of records. -

-

temporary node down when a node temporarily goes - down it may be time to compensate some fragments with new - replicas in order to keep the desired level of - redundancy. When the node comes up again it may be time to - remove the superfluous replica. -

-

overload threshold when the load on some node is - exceeds some application specific threshold, it may be time to - either add or move some fragment replicas to nodes with lesser - load. Extra care should be taken if the table has a foreign - key relation to some other table. In order to avoid severe - performance penalties, the same re-distribution must be - performed for all of the related tables. -

-

Use mnesia:change_table_frag/2 to add new fragments + fragment) exceeds some application-specific threshold, it + can be time to add a new fragment dynamically to + obtain a better distribution of records. + + temporary node down. When a node temporarily goes + down, it can be time to compensate some fragments with new + replicas to keep the desired level of + redundancy. When the node comes up again, it can be time to + remove the superfluous replica. + + overload threshold. When the load on some node + exceeds some application-specific threshold, it can be time to + either add or move some fragment replicas to nodes with lower + load. Take extra care if the table has a foreign + key relation to some other table. To avoid severe + performance penalties, the same redistribution must be + performed for all the related tables. + + +

Use the function + mnesia:change_table_frag/2 to add new fragments and apply the usual schema manipulation functions (such as - mnesia:add_table_copy/3, mnesia:del_table_copy/2 - and mnesia:change_table_copy_type/2) on each fragment - to perform the actual re-distribution. -
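For example, moving one replica of a single fragment to a less loaded node can be sketched as follows. The fragment name follows the Tab_fragN pattern reported by frag_names; the table and node names are hypothetical:

      %% Add the replica on the idle node first, then drop the
      %% replica on the overloaded node.
      {atomic, ok} = mnesia:add_table_copy(frag_tab_frag2, 'idle@host', ram_copies),
      {atomic, ok} = mnesia:del_table_copy(frag_tab_frag2, 'busy@host').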

+ mnesia:add_table_copy/3, + mnesia:del_table_copy/2, + and + mnesia:change_table_copy_type/2) + on each fragment to perform the actual redistribution.

@@ -685,356 +647,369 @@ ok Local Content Tables

Replicated tables have the same content on all nodes where they are replicated. However, it is sometimes advantageous to - have tables but different content on different nodes. -

-

If we specify the attribute {local_content, true} when - we create the table, the table will reside on the nodes where - we specify that the table shall exist, but the write operations on the - table will only be performed on the local copy. -

-

Furthermore, when the table is initialized at start-up, the - table will only be initialized locally, and the table - content will not be copied from another node. -

have tables with different content on different nodes.

+

If attribute {local_content, true} is specified when + you create the table, the table resides on the nodes where you + specify the table to exist, but the write operations on the + table are only performed on the local copy.

+

Furthermore, when the table is initialized at startup, the + table is only initialized locally, and the table + content is not copied from another node.
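A hedged sketch of such a table; the node_stats record and the node list are assumptions:

      %% Each node gets a private copy; writes stay local and the
      %% content is never copied from other nodes at startup.
      mnesia:create_table(node_stats,
                          [{local_content, true},
                           {ram_copies, [node() | nodes()]},
                           {attributes, record_info(fields, node_stats)}]).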

- Disc-less Nodes -

It is possible to run Mnesia on nodes that do not have a - disc. It is of course not possible to have replicas - of neither disc_copies, nor disc_only_copies - on such nodes. This especially troublesome for the - schema table since Mnesia need the schema in order - to initialize itself. -

-

The schema table may, as other tables, reside on one or - more nodes. The storage type of the schema table may either - be disc_copies or ram_copies - (not disc_only_copies). At - start-up Mnesia uses its schema to determine with which - nodes it should try to establish contact. If any - of the other nodes are already started, the starting node + Disc-Less Nodes +

Mnesia can be run on nodes that do not have a disc. + Replicas of disc_copies or disc_only_copies are + not possible on such nodes. This is especially troublesome for + the schema table, as Mnesia needs the schema + to initialize itself.

+

The schema table can, as other tables, reside on one or + more nodes. The storage type of the schema table can either + be disc_copies or ram_copies + (but not disc_only_copies). At + startup, Mnesia uses its schema to determine with which + nodes it is to try to establish contact. If any + other node is started already, the starting node merges its table definitions with the table definitions brought from the other nodes. This also applies to the - definition of the schema table itself. The application - parameter extra_db_nodes contains a list of nodes which - Mnesia also should establish contact with besides the ones - found in the schema. The default value is the empty list - []. -

+ definition of the schema table itself. Application + parameter extra_db_nodes contains a list of nodes that + Mnesia also is to establish contact with besides those + found in the schema. Default is [] (empty list).

Hence, when a disc-less node needs to find the schema - definitions from a remote node on the network, we need to supply - this information through the application parameter -mnesia extra_db_nodes NodeList. Without this - configuration parameter set, Mnesia will start as a single node - system. It is also possible to use mnesia:change_config/2 - to assign a value to 'extra_db_nodes' and force a connection - after mnesia have been started, i.e. - mnesia:change_config(extra_db_nodes, NodeList). -

-

The application parameter schema_location controls where - Mnesia will search for its schema. The parameter may be one of - the following atoms: -

+ definitions from a remote node on the network, this + information must be supplied through application parameter + -mnesia extra_db_nodes NodeList. Without this + configuration parameter set, Mnesia starts as a single + node system. Also, the function + mnesia:change_config/2 + can be used to assign a value to extra_db_nodes and force + a connection after Mnesia has been started, that is, + mnesia:change_config(extra_db_nodes, NodeList).

+

Application parameter schema_location controls where + Mnesia searches for its schema. The parameter can be one + of the following atoms:

disc

Mandatory disc. The schema is assumed to be located - on the Mnesia directory. And if the schema cannot be found, - Mnesia refuses to start. -

+ in the Mnesia directory. If the schema cannot be found, + Mnesia refuses to start.

ram -

Mandatory ram. The schema resides in ram - only. At start-up a tiny new schema is generated. This - default schema contains just the definition of the schema - table and only resides on the local node. Since no other - nodes are found in the default schema, the configuration - parameter extra_db_nodes must be used in order to let the - node share its table definitions with other nodes. (The - extra_db_nodes parameter may also be used on disc-full nodes.) -

+

Mandatory RAM. The schema resides in RAM + only. At startup, a tiny new schema is generated. This + default schema contains only the definition of the schema + table and resides on the local node only. Since no other + nodes are found in the default schema, configuration + parameter extra_db_nodes must be used to let the + node share its table definitions with other nodes. (Parameter + extra_db_nodes can also be used on disc-full nodes.)

opt_disc -

Optional disc. The schema may reside on either disc - or ram. If the schema is found on disc, Mnesia starts as a - disc-full node (the storage type of the schema table is - disc_copies). If no schema is found on disc, Mnesia starts - as a disc-less node (the storage type of the schema table is - ram_copies). The default value for the application parameter - is - opt_disc.

+

Optional disc. The schema can reside on either disc or + RAM. If the schema is found on disc, Mnesia starts as + a disc-full node (the storage type of the schema table is + disc_copies). If no schema is found on disc, Mnesia + starts as a disc-less node (the storage type of the schema + table is ram_copies). The default for the + application parameter is opt_disc.

-

When the schema_location is set to opt_disc the - function mnesia:change_table_copy_type/3 may be used to - change the storage type of the schema. - This is illustrated below: -

+

When schema_location is set to opt_disc, the + function + mnesia:change_table_copy_type/3 + can be used to change the storage type of the schema. + This is illustrated as follows:

         1> mnesia:start().
         ok
         2> mnesia:change_table_copy_type(schema, node(), disc_copies).
-        {atomic, ok}
-    
-

Assuming that the call to mnesia:start did not - find any schema to read on the disc, then Mnesia has started - as a disc-less node, and then changed it to a node that - utilizes the disc to locally store the schema. -

+ {atomic, ok} +

Assuming that the call to + mnesia:start/0 does not + find any schema to read on the disc, Mnesia starts + as a disc-less node, and is then changed into a node that + uses the disc to store the schema locally.
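Putting the pieces together, a hedged sketch of bringing up a disc-less node from scratch; the node names are hypothetical:

      %% Started as: erl -sname a -mnesia schema_location ram
      mnesia:start(),
      %% Fetch table definitions from a disc-full node.
      mnesia:change_config(extra_db_nodes, ['b@host']),
      %% Optionally make the schema disc resident, as shown above.
      mnesia:change_table_copy_type(schema, node(), disc_copies).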

- More Schema Management -

It is possible to add and remove nodes from a Mnesia system. - This can be done by adding a copy of the schema to those nodes. -

-

The functions mnesia:add_table_copy/3 and - mnesia:del_table_copy/2 may be used to add and delete - replicas of the schema table. Adding a node to the list - of nodes where the schema is replicated will affect two - things. First it allows other tables to be replicated to - this node. Secondly it will cause Mnesia to try to contact - the node at start-up of disc-full nodes. -

-

The function call mnesia:del_table_copy(schema, mynode@host) deletes the node 'mynode@host' from the - Mnesia system. The call fails if mnesia is running on - 'mynode@host'. The other mnesia nodes will never try to connect - to that node again. Note, if there is a disc - resident schema on the node 'mynode@host', the entire mnesia - directory should be deleted. This can be done with - mnesia:delete_schema/1. If - mnesia is started again on the the node 'mynode@host' and the - directory has not been cleared, mnesia's behaviour is undefined. -

-

If the storage type of the schema is ram_copies, i.e, we - have disc-less node, Mnesia - will not use the disc on that particular node. The disc - usage is enabled by changing the storage type of the table - schema to disc_copies. -

-

New schemas are - created explicitly with mnesia:create_schema/1 or implicitly - by starting Mnesia without a disc resident schema. Whenever - a table (including the schema table) is created it is - assigned its own unique cookie. The schema table is not created with - mnesia:create_table/2 as normal tables. -

-

At start-up Mnesia connects different nodes to each other, - then they exchange table definitions with each other and the - table definitions are merged. During the merge procedure Mnesia + More about Schema Management +

Nodes can be added to and removed from a Mnesia system. + This can be done by adding a copy of the schema to those nodes.

+

The functions + mnesia:add_table_copy/3 + and + mnesia:del_table_copy/2 + can be used to add and delete + replicas of the schema table. Adding a node to the list of + nodes where the schema is replicated affects the following:

+ + It allows other tables to be replicated to this node. + + It causes Mnesia to try to contact the node at + startup of disc-full nodes. + + +

The function call mnesia:del_table_copy(schema, + mynode@host) deletes node mynode@host from the + Mnesia system. The call fails if Mnesia is running + on mynode@host. The other Mnesia nodes never try to + connect to that node again. Notice that if there is a disc resident + schema on node mynode@host, the entire Mnesia + directory is to be deleted. This is done with the function + mnesia:delete_schema/1. + If Mnesia is started again + on node mynode@host and the directory has not been + cleared, the behavior of Mnesia is undefined.
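A hedged sketch of this removal procedure, run from one of the remaining nodes while Mnesia is stopped on mynode@host:

      %% Remove the node from the schema ...
      {atomic, ok} = mnesia:del_table_copy(schema, 'mynode@host'),
      %% ... and wipe its now-stale on-disc schema, so that a
      %% later restart cannot produce undefined behavior.
      mnesia:delete_schema(['mynode@host']).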

+

If the storage type of the schema is ram_copies, + that is, a disc-less node, Mnesia + does not use the disc on that particular node. The disc + use is enabled by changing the storage type of table + schema to disc_copies.

+

New schemas are created explicitly with the function + mnesia:create_schema/1 + or implicitly by starting + Mnesia without a disc resident schema. Whenever + a table (including the schema table) is created, it is + assigned its own unique cookie. The schema table is not created + with the function + mnesia:create_table/2 + as normal tables.

+

At startup, Mnesia connects different nodes to each other, + then they exchange table definitions with each other, and the table + definitions are merged. During the merge procedure, Mnesia performs a sanity test to ensure that the table definitions are - compatible with each other. If a table exists on several nodes - the cookie must be the same, otherwise Mnesia will shutdown one - of the nodes. This unfortunate situation will occur if a table + compatible with each other. If a table exists on several nodes, + the cookie must be the same, otherwise Mnesia shuts down one + of the nodes. This unfortunate situation occurs if a table has been created on two nodes independently of each other while - they were disconnected. To solve the problem, one of the tables - must be deleted (as the cookies differ we regard it to be two - different tables even if they happen to have the same name). + they were disconnected. To solve this, one of the tables + must be deleted (as the cookies differ, it is regarded to be two + different tables even if they have the same name).

-

Merging different versions of the schema table, does not + they were disconnected. To solve this, one of the tables + must be deleted (as the cookies differ, it is regarded to be two + different tables even if they have the same name).

+

Merging different versions of the schema table does not always require the cookies to be the same. If the storage - type of the schema table is disc_copies, the cookie is - immutable, and all other db_nodes must have the same - cookie. When the schema is stored as type ram_copies, + type of the schema table is disc_copies, the cookie is + immutable, and all other db_nodes must have the same + cookie. When the schema is stored as type ram_copies, its cookie can be replaced with a cookie from another node - (ram_copies or disc_copies). The cookie replacement (during - merge of the schema table definition) is performed each time - a RAM node connects to another node. -

-

mnesia:system_info(schema_location) and - mnesia:system_info(extra_db_nodes) may be used to determine - the actual values of schema_location and extra_db_nodes - respectively. mnesia:system_info(use_dir) may be used to - determine whether Mnesia is actually using the Mnesia - directory. use_dir may be determined even before - Mnesia is started. The function mnesia:info/0 may now be - used to printout some system information even before Mnesia - is started. When Mnesia is started the function prints out - more information. -

-

Transactions which update the definition of a table, - requires that Mnesia is started on all nodes where the - storage type of the schema is disc_copies. All replicas of + (ram_copies or disc_copies). The cookie replacement + (during merge of the schema table definition) is performed each + time a RAM node connects to another node.

+

Further, the following applies:

+ + mnesia:system_info(schema_location) + and + mnesia:system_info(extra_db_nodes) + can be used to determine the actual values of schema_location + and extra_db_nodes, respectively. + + mnesia:system_info(use_dir) + can be used to determine whether Mnesia is actually + using the Mnesia directory. + + use_dir can be determined even before + Mnesia is started. + + +

The function mnesia:info/0 + can now be used to print + some system information even before Mnesia is started. + When Mnesia is started, the function prints more + information.

+

Transactions that update the definition of a table + require that Mnesia is started on all nodes where the + storage type of the schema is disc_copies. All replicas of the table on these nodes must also be loaded. There are a - few exceptions to these availability rules. Tables may be - created and new replicas may be added without starting all - of the disc-full nodes. New replicas may be added before all - other replicas of the table have been loaded, it will suffice - when one other replica is active. + few exceptions to these availability rules:

+ few exceptions to these availability rules:

+ + Tables can be created and new replicas can be added + without starting all the disc-full nodes. + + New replicas can be added before all other replicas of + the table have been loaded, provided that at least one other + replica is active. + +
Mnesia Event Handling -

System events and table events are the two categories of events - that Mnesia will generate in various situations. -

-

It is possible for user process to subscribe on the - events generated by Mnesia. - We have the following two functions:

+

System events and table events are the two event categories + that Mnesia generates in various situations.

+

A user process can subscribe on the events generated by + Mnesia. The following two functions are provided:

- mnesia:subscribe(Event-Category) - -

Ensures that a copy of all events of type - Event-Category are sent to the calling process. -

-
- mnesia:unsubscribe(Event-Category) + mnesia:subscribe(Event-Category) + + Ensures that a copy of all events of type + Event-Category are sent to the calling process + mnesia:unsubscribe(Event-Category) + Removes the subscription on events of type - Event-Category + Event-Category +
-

Event-Category may either be the atom system, the atom activity, or - one of the tuples {table, Tab, simple}, {table, Tab, detailed}. The old event-category {table, Tab} is the same - event-category as {table, Tab, simple}. - The subscribe functions activate a subscription +

Event-Category can be either of the following:

+ + The atom system + + The atom activity + + The tuple {table, Tab, simple} + + The tuple {table, Tab, detailed} + + +

The old event category {table, Tab} is the same + event category as {table, Tab, simple}.

+

The subscribe functions activate a subscription of events. The events are delivered as messages to the process - evaluating the mnesia:subscribe/1 function. The syntax of - system events is {mnesia_system_event, Event}, - {mnesia_activity_event, Event} for activity events, and - {mnesia_table_event, Event} for table events. What the various - event types mean is described below.

-

All system events are subscribed by Mnesia's - gen_event handler. The default gen_event handler is - mnesia_event. But it may be changed by using the application - parameter event_module. The value of this parameter must be - the name of a module implementing a complete handler - as specified by the gen_event module in - STDLIB. mnesia:system_info(subscribers) and - mnesia:table_info(Tab, subscribers) may be used to determine - which processes are subscribed to various - events. -

+ evaluating the function + mnesia:subscribe/1 + The syntax is as follows:

+ + {mnesia_system_event, Event} for system events + + {mnesia_activity_event, Event} for activity events + + {mnesia_table_event, Event} for table events + + +

The event types are described in the next sections.

+

All system events are subscribed by the Mnesia + gen_event handler. The default gen_event handler + is mnesia_event, but it can be changed by using + application parameter event_module. The value of this + parameter must be the name of a module implementing a complete + handler, as specified by the + gen_event module + in STDLIB.

+

mnesia:system_info(subscribers) + and + mnesia:table_info(Tab, subscribers) + can be used to determine which processes are subscribed to + various events.
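A hedged sketch of a system-event subscriber:

      %% The calling process now receives Mnesia system events
      %% as ordinary messages.
      {ok, _Node} = mnesia:subscribe(system),
      receive
          {mnesia_system_event, {mnesia_up, Node}} ->
              io:format("Mnesia came up on ~p~n", [Node])
      end.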

System Events -

The system events are detailed below:

+

The system events are as follows:

{mnesia_up, Node} - -

Mnesia has been started on a node. - Node is the name of the node. By default this event is ignored. -

+ Mnesia is started on a node. Node is the node + name. By default this event is ignored. {mnesia_down, Node} - -

Mnesia has been stopped on a node. - Node is the name of the node. By default this event is - ignored. -

+ Mnesia is stopped on a node. Node is the node + name. By default this event is ignored. {mnesia_checkpoint_activated, Checkpoint} - -

a checkpoint with the name - Checkpoint has been activated and that the current node is - involved in the checkpoint. Checkpoints may be activated - explicitly with mnesia:activate_checkpoint/1 or implicitly - at backup, adding table replicas, internal transfer of data - between nodes etc. By default this event is ignored. -

+ A checkpoint with the name Checkpoint is + activated and the current node is involved in the + checkpoint. Checkpoints can be activated explicitly with + the function + mnesia:activate_checkpoint/1 + or implicitly at + backup, when adding table replicas, at internal transfer of + data between nodes, and so on. By default this event is + ignored. {mnesia_checkpoint_deactivated, Checkpoint} - -

A checkpoint with the name - Checkpoint has been deactivated and that the current node was - involved in the checkpoint. Checkpoints may explicitly be - deactivated with mnesia:deactivate/1 or implicitly when the - last replica of a table (involved in the checkpoint) - becomes unavailable, e.g. at node down. By default this - event is ignored. -

+ A checkpoint with the name Checkpoint is + deactivated and the current node is involved in the + checkpoint. Checkpoints can be deactivated explicitly with + the function + mnesia:deactivate/1 + or implicitly when the last + replica of a table (involved in the checkpoint) becomes + unavailable, for example, at node-down. By default this + event is ignored. {mnesia_overload, Details} - -

Mnesia on the current node is - overloaded and the subscriber should take action. -

+

Mnesia on the current node is + overloaded and the subscriber is to take action.

A typical overload situation occurs when the - applications are performing more updates on disc - resident tables than Mnesia is able to handle. Ignoring - this kind of overload may lead into a situation where + applications perform more updates on disc resident + tables than Mnesia can handle. Ignoring + this kind of overload can lead to a situation where the disc space is exhausted (regardless of the size of - the tables stored on disc). -

- Each update is appended to - the transaction log and occasionally(depending of how it + the tables stored on disc).

+

Each update is appended to the transaction log and + occasionally (depending on how it + is configured) dumped to the table files. The table file storage is more compact than the transaction log storage, especially if the same record is updated - over and over again. If the thresholds for dumping the - transaction log have been reached before the previous - dump was finished an overload event is triggered. + repeatedly. If the thresholds for dumping the + transaction log are reached before the previous + dump is finished, an overload event is triggered.

+ repeatedly. If the thresholds for dumping the + transaction log are reached before the previous + dump is finished, an overload event is triggered.

Another typical overload situation is when the transaction manager cannot commit transactions at the - same pace as the applications are performing updates of - disc resident tables. When this happens the message - queue of the transaction manager will continue to grow + same pace as the applications perform updates of + disc resident tables. When this occurs, the message + queue of the transaction manager continues to grow until the memory is exhausted or the load - decreases. -

-

The same problem may occur for dirty updates. The overload - is detected locally on the current node, but its cause may - be on another node. Application processes may cause heavy - loads if any table are residing on other nodes (replicated or not). By default this event - is reported to the error_logger. -

+ decreases.

+

The same problem can occur for dirty updates. The overload + is detected locally on the current node, but its cause can + be on another node. Application processes can cause high + load if any table resides on another node (replicated + or not). By default this event + is reported to error_logger.

{inconsistent_database, Context, Node} - -

Mnesia regards the database as - potential inconsistent and gives its applications a chance - to recover from the inconsistency, e.g. by installing a - consistent backup as fallback and then restart the system - or pick a MasterNode from mnesia:system_info(db_nodes)) - and invoke mnesia:set_master_node([MasterNode]). By default an - error is reported to the error logger. -

Mnesia regards the database as potentially + inconsistent and gives its applications a chance to + recover from the inconsistency. For example, by installing a + consistent backup as fallback and then restarting the system. + An alternative is to pick a MasterNode from + mnesia:system_info(db_nodes) + and invoke + mnesia:set_master_node([MasterNode]). + By default an error is reported to error_logger. {mnesia_fatal, Format, Args, BinaryCore} -

Mnesia has encountered a fatal error - and will (in a short period of time) be terminated. The reason for - the fatal error is explained in Format and Args which may - be given as input to io:format/2 or sent to the - error_logger. By default it will be sent to the - error_logger. BinaryCore is a binary containing a summary of - Mnesia's internal state at the time the when the fatal error was - encountered. By default the binary is written to a - unique file name on current directory. On RAM nodes the - core is ignored. -

+

Mnesia detected a fatal error and + terminates soon. The fault reason is explained in + Format and Args, which can be given as input + to io:format/2 or sent to error_logger. By + default it is sent to error_logger.

+

BinaryCore is a binary containing a summary of the + Mnesia internal state at the time when the fatal + error was detected. By default the binary is written to a + unique filename on the current directory. On RAM nodes, the + core is ignored.

{mnesia_info, Format, Args} - -

Mnesia has detected something that - may be of interest when debugging the system. This is explained - in Format and Args which may appear - as input to io:format/2 or sent to the error_logger. By - default this event is printed with io:format/2. -

+ Mnesia detected something that can be of + interest when debugging the system. This is explained in + Format and Args, which can appear as input + to io:format/2 or sent to error_logger. By + default this event is printed with io:format/2. {mnesia_error, Format, Args} - -

Mnesia has encountered an error. The - reason for the error is explained i Format and Args - which may be given as input to io:format/2 or sent to the - error_logger. By default this event is reported to the error_logger. -

+ Mnesia has detected an error. The fault reason is + explained in Format and Args, which can be + given as input to io:format/2 or sent to + error_logger. By default this event is reported to + error_logger. {mnesia_user, Event} - -

An application has invoked the - function mnesia:report_event(Event). Event may be any Erlang - data structure. When tracing a system of Mnesia applications - it is useful to be able to interleave Mnesia's own events with - application related events that give information about the - application context. Whenever the application starts with - a new and demanding Mnesia activity or enters a - new and interesting phase in its execution it may be a good idea - to use mnesia:report_event/1.

+ An application has called the function + mnesia:report_event(Event). + Event can be + any Erlang data structure. When tracing a system of + Mnesia applications, it is useful to be able to + interleave the events of Mnesia itself with application-related + events that give information about the application context. + Whenever the application starts a new and demanding + Mnesia activity, or enters a new and interesting + phase in its execution, it can be a good idea to use + mnesia:report_event/1.
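A minimal sketch of a process that subscribes to these system events and reacts to an inconsistent_database event by appointing a master node. Picking the first node returned by mnesia:system_info(db_nodes) is an illustrative assumption only; a real application needs a more deliberate recovery policy:

      handle_mnesia_events() ->
          %% System events are delivered as {mnesia_system_event, Event}
          %% messages to the subscribing process.
          {ok, _Node} = mnesia:subscribe(system),
          event_loop().

      event_loop() ->
          receive
              {mnesia_system_event,
               {inconsistent_database, _Context, _Node}} ->
                  %% Simplistic recovery policy (an assumption for
                  %% illustration): declare the first db_node as master.
                  [Master | _] = mnesia:system_info(db_nodes),
                  ok = mnesia:set_master_nodes([Master]),
                  event_loop();
              {mnesia_system_event, Event} ->
                  error_logger:info_msg("Mnesia event: ~p~n", [Event]),
                  event_loop()
          end.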
@@ -1045,80 +1020,86 @@ ok {complete, ActivityID} -

This event occurs when a transaction that caused a modification to the database - has completed. It is useful for determining when a set of table events - (see below) caused by a given activity have all been sent. Once the this event - has been received, it is guaranteed that no further table events with the same - ActivityID will be received. Note that this event may still be received even - if no table events with a corresponding ActivityID were received, depending on +

This event occurs when a transaction that caused a modification + to the database is completed. It is useful for determining when + a set of table events (see the next section), caused by a given + activity, have been sent. Once this event is received, it is + guaranteed that no further table events with the same + ActivityID will be received. Notice that this event can + still be received even if no table events with a corresponding + ActivityID were received, depending on the tables to which the receiving process is subscribed.

-

Dirty operations always only contain one update and thus no activity event is sent.

+

Dirty operations always contain only one update and thus no + activity event is sent.

Table Events -

The final category of events are table events, which are - events related to table updates. There are two types of table - events simple and detailed. -

-

The simple table events are tuples looking like this: - {Oper, Record, ActivityId}. Where Oper is the - operation performed. Record is the record involved in the - operation and ActivityId is the identity of the - transaction performing the operation. Note that the name of the - record is the table name even when the record_name has - another setting. The various table related events that may - occur are: -

+

Table events are events related to table updates. There are + two types of table events, simple and detailed.

+

The simple table events are tuples like + {Oper, Record, ActivityId}, where:

+ + Oper is the operation performed. + + Record is the record involved in the operation. + + ActivityId is the identity of the transaction + performing the operation. + + +

Notice that the record name is the table name even when + record_name has another setting.

+

The table-related events that can occur are as follows:

{write, NewRecord, ActivityId} - -

a new record has been written. - NewRecord contains the new value of the record. -

+ A new record has been written. NewRecord contains + the new record value. {delete_object, OldRecord, ActivityId} - -

a record has possibly been deleted - with mnesia:delete_object/1. OldRecord - contains the value of the old record as stated as argument - by the application. Note that, other records with the same - key may be remaining in the table if it is a bag. -

+ A record has possibly been deleted with + mnesia:delete_object/1. + OldRecord + contains the value of the old record, as given as an argument + by the application. Notice that other records with the same + key can remain in the table if it is of type bag. {delete, {Tab, Key}, ActivityId} - -

one or more records possibly has - been deleted. All records with the key Key in the table - Tab have been deleted.

+ One or more records have possibly been deleted. + All records with the key Key in the table + Tab have been deleted.
-

The detailed table events are tuples looking like - this: {Oper, Table, Data, [OldRecs], ActivityId}. - Where Oper is the operation - performed. Table is the table involved in the operation, - Data is the record/oid written/deleted. - OldRecs is the contents before the operation. - and ActivityId is the identity of the transaction - performing the operation. - The various table related events that may occur are: -

+

The detailed table events are tuples like + {Oper, Table, Data, [OldRecs], ActivityId}, where:

+ + Oper is the operation performed. + + Table is the table involved in the operation. + + Data is the record or OID that was written or deleted. + + OldRecs is the contents before the operation. + + ActivityId is the identity of the transaction + performing the operation. + +

The table-related events that can occur are as follows:

{write, Table, NewRecord, [OldRecords], ActivityId} - -

a new record has been written. - NewRecord contains the new value of the record and OldRecords - contains the records before the operation is performed. - Note that the new content is dependent on the type of the table.

+ A new record has been written. NewRecord contains + the new record value and OldRecords contains the + records before the operation is performed. Notice that the + new content depends on the table type. {delete, Table, What, [OldRecords], ActivityId} - -

records has possibly been deleted - What is either {Table, Key} or a record {RecordName, Key, ...} - that was deleted. - Note that the new content is dependent on the type of the table.

+ Records have possibly been deleted. What is + either {Table, Key} or a record + {RecordName, Key, ...} that was deleted. Notice + that the new content depends on the table type.
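As an illustration of the event formats above, the following sketch subscribes to simple table events for a table Tab and prints each event it receives. To receive detailed events instead, subscribe with {table, Tab, detailed}:

      watch_table(Tab) ->
          %% Table events are delivered as {mnesia_table_event, Event}
          %% messages to the subscribing process.
          {ok, _Node} = mnesia:subscribe({table, Tab, simple}),
          watch_loop().

      watch_loop() ->
          receive
              {mnesia_table_event, {write, NewRecord, ActivityId}} ->
                  io:format("write: ~p in ~p~n", [NewRecord, ActivityId]),
                  watch_loop();
              {mnesia_table_event, Event} ->
                  io:format("table event: ~p~n", [Event]),
                  watch_loop()
          end.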
@@ -1126,69 +1107,55 @@ ok
Debugging Mnesia Applications -

Debugging a Mnesia application can be difficult due to a number of reasons, primarily related +

Debugging a Mnesia application can be difficult + for various reasons, primarily related to difficulties in understanding how the transaction - and table load mechanisms work. An other source of - confusion may be the semantics of nested transactions. -

-

We may set the debug level of Mnesia by calling: -

- - mnesia:set_debug_level(Level) - -

Where the parameter Level is: -

+ and table load mechanisms work. Another source of + confusion can be the semantics of nested transactions.

+

The debug level of Mnesia is set by calling the function + mnesia:set_debug_level(Level), + where Level is one of the following:

none - -

no trace outputs at all. This is the default. -

+ No trace outputs. This is the default. verbose - -

activates tracing of important debug events. These - debug events will generate {mnesia_info, Format, Args} - system events. Processes may subscribe to these events with - mnesia:subscribe/1. The events are always sent to Mnesia's - event handler. -

+ Activates tracing of important debug events. These + events generate {mnesia_info, Format, Args} + system events. Processes can subscribe to these events with + the function + mnesia:subscribe/1. + The events are always sent to the Mnesia event handler. debug - -

activates all events at the verbose level plus - traces of all debug events. These debug events will generate - {mnesia_info, Format, Args} system events. Processes may - subscribe to these events with mnesia:subscribe/1. The - events are always sent to Mnesia's event handler. On this - debug level Mnesia's event handler starts subscribing - updates in the schema table. -

+ Activates all events at the verbose level plus + traces of all debug events. These debug events generate + {mnesia_info, Format, Args} system events. Processes + can subscribe to these events with mnesia:subscribe/1. + The events are always sent to the Mnesia event handler. + On this debug level, the Mnesia event handler starts + subscribing to updates in the schema table. trace - -

activates all events at the debug level. On this - debug level Mnesia's event handler starts subscribing - updates on all Mnesia tables. This level is only intended - for debugging small toy systems, since many large - events may be generated. + Activates all events at the debug level. On this + level, the Mnesia event handler starts subscribing to + updates on all Mnesia tables. This level is intended + only for debugging small toy systems, as a large number of + events can be generated. false - -

+ Activates all events at the debug level. On this + level, the Mnesia event handler starts subscribing to + updates on all Mnesia tables. This level is intended + only for debugging small toy systems, as many large + events can be generated. false - -

is an alias for none.

+ An alias for none. true - -

is an alias for debug.
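The debug level can also be changed at runtime. A small sketch: raise the level and subscribe to system events, so that the generated {mnesia_info, Format, Args} events reach the calling process; mnesia:set_debug_level/1 returns the previous level:

      %% For example, in the Erlang shell:
      OldLevel = mnesia:set_debug_level(verbose),
      {ok, _Node} = mnesia:subscribe(system).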

+ An alias for debug.
-

The debug level of Mnesia itself, is also an application - parameter, thereby making it possible to start an Erlang system - in order to turn on Mnesia debug in the initial - start-up phase by using the following code: -

+

The debug level of Mnesia itself is also an application + parameter, making it possible to start an Erlang system + with Mnesia debugging enabled from the initial + startup phase, by using the following code:

-      % erl -mnesia debug verbose
-    
+ % erl -mnesia debug verbose
@@ -1196,85 +1163,81 @@ ok

Programming concurrent Erlang systems is the subject of a separate book. However, it is worthwhile to draw attention to the following features, which permit concurrent processes to - exist in a Mnesia system. -

-

A group of functions or processes can be called within a - transaction. A transaction may include statements that read, - write or delete data from the DBMS. A large number of such + exist in a Mnesia system:

+ +

A group of functions or processes can be called within a + transaction. A transaction can include statements that read, + write, or delete data from the DBMS. Many such transactions can run concurrently, and the programmer does not - have to explicitly synchronize the processes which manipulate - the data. All programs accessing the database through the - transaction system may be written as if they had sole access to - the data. This is a very desirable property since all + need to explicitly synchronize the processes that manipulate + the data.

+

All programs accessing the database through the + transaction system can be written as if they had sole access to + the data. This is a desirable property, as all synchronization is taken care of by the transaction handler. If a program reads or writes data, the system ensures that no other - program tries to manipulate the same data at the same time. -

-

It is possible to move tables, delete tables or reconfigure - the layout of a table in various ways. An important aspect of - the actual implementation of these functions is that it is - possible for user programs to continue to use a table while it - is being reconfigured. For example, it is possible to - simultaneously move a table and perform write operations to the - table . This is important for many applications that - require continuously available services. Refer to Chapter 4: - Transactions and other access contexts for more information. -

+ program tries to manipulate the same data at the same time.

+
+ Tables can be moved or deleted, and the layout of a table + can be reconfigured in various ways. An important aspect of + the implementation of these functions is that user programs + can continue to use a table while it + is being reconfigured. For example, it is possible to move a + table and perform write operations to the table at the same + time. This is important for many applications that require + continuously available services. For more information, see + Transactions and Other Access Contexts. + +
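The first point can be illustrated with a minimal sketch (the counter table and its layout are assumptions made for this example): several processes increment the same counter concurrently, and the transaction handler, not the application, serializes the conflicting updates:

      %% Assumes a table created as:
      %% mnesia:create_table(counter, [{attributes, [key, value]}]).
      increment(Key) ->
          F = fun() ->
                  case mnesia:read(counter, Key, write) of
                      [] ->
                          mnesia:write({counter, Key, 1});
                      [{counter, Key, N}] ->
                          mnesia:write({counter, Key, N + 1})
                  end
              end,
          mnesia:transaction(F).

      %% Any number of processes can call increment/1 concurrently;
      %% no explicit locking is needed in the application code.
      stress(Key, NProcs) ->
          [spawn(fun() -> increment(Key) end) || _ <- lists:seq(1, NProcs)],
          ok.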
Prototyping -

If and when we decide that we would like to start and manipulate - Mnesia, it is often easier to write the definitions and +

If and when you would like to start and manipulate + Mnesia, it is often easier to write the definitions and data into an ordinary text file. Initially, no tables and no data exist, and it may not even be known which tables are required. At the initial stages of prototyping, it + is prudent to write all data into one file, process that + file, and have the data in the file inserted into the database. + Mnesia can be initialized with data read from a text file. + The following two functions can be used to work with text + files.

+ tables are required. At the initial stages of prototyping, it + is prudent to write all data into one file, process that + file, and have the data in the file inserted into the database. + Mnesia can be initialized with data read from a text file. + The following two functions can be used to work with text + files.

-

mnesia:load_textfile(Filename) Which loads a - series of local table definitions and data found in the file - into Mnesia. This function also starts Mnesia and possibly - creates a new schema. The function only operates on the - local node. -

+ mnesia:load_textfile(Filename) + loads a series of local table definitions and data found in the + file into Mnesia. This function also starts Mnesia + and possibly creates a new schema. The function operates + on the local node only.
-

mnesia:dump_to_textfile(Filename) Dumps - all local tables of a mnesia system into a text file which can - then be edited (by means of a normal text editor) and then - later reloaded.

+ mnesia:dump_to_textfile(Filename) + dumps all local + tables of a Mnesia system into a text file, which + can be edited (with a normal text editor) and later reloaded.
-

These functions are of course much slower than the ordinary - store and load functions of Mnesia. However, this is mainly intended for minor experiments - and initial prototyping. The major advantages of these functions is that they are very easy - to use. -

-

The format of the text file is: -

+

These functions are much slower than the ordinary store and + load functions of Mnesia. However, these functions are mainly intended + for minor experiments and initial prototyping. The major + advantage of these functions is that they are easy to use.

+

The format of the text file is as follows:

       {tables, [{Typename, [Options]},
       {Typename2 ......}]}.
       
-      {Typename, Attribute1, Atrribute2 ....}.
-      {Typename, Attribute1, Atrribute2 ....}.
-    
+ {Typename, Attribute1, Attribute2 ....}. + {Typename, Attribute1, Attribute2 ....}.

Options is a list of {Key,Value} tuples conforming - to the options we could give to mnesia:create_table/2. -
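As a sketch of this format (the table name person and its attributes are hypothetical, chosen for this illustration only), a file loaded with mnesia:load_textfile/1 could look as follows:

      {tables, [{person, [{attributes, [name, age, town]}]}]}.

      {person, alice, 30, london}.
      {person, bob, 25, paris}.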

-

For example, if we want to start playing with a small - database for healthy foods, we enter then following data into - the file FRUITS. -

+ to the options that you can give to + mnesia:create_table/2. +

+

For example, to start playing with a small database for healthy + foods, enter the following data into file FRUITS:

-

The following session with the Erlang shell then shows how - to load the fruits database. -

+

The following session with the Erlang shell shows how + to load the FRUITS database:

 
     ]]>
-

Where we can see that the DBMS was initiated from a - regular text file. -

+

It can be seen that the DBMS was initiated from a + regular text file.

- Object Based Programming with Mnesia -

The Company database introduced in Chapter 2 has three tables - which store records (employee, dept, project), and three tables - which store relationships (manager, at_dep, in_proj). This is a - normalized data model, which has some advantages over a - non-normalized data model. -

-

It is more efficient to do a + Object-Based Programming with Mnesia +

The Company database, introduced in + Getting Started, + has three tables that store records (employee, + dept, project), and three tables that store + relationships (manager, at_dep, in_proj). + This is a normalized data model, which has some advantages over + a non-normalized data model.

+

It is more efficient to do a generalized search in a normalized database. Some operations are also easier to perform on a normalized data model. For example, - we can easily remove one project, as the following example - illustrates: -

+ one project can easily be removed, as the following example + illustrates:
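A sketch of what this removal could look like, assuming an in_proj relationship record #in_proj{emp, proj_name} and a project table keyed on the project name:

      -record(in_proj, {emp, proj_name}).

      remove_proj(ProjName) ->
          F = fun() ->
                  %% Find and delete all employee-project relationships
                  %% for this project, then the project record itself.
                  Ip = mnesia:match_object(
                         #in_proj{proj_name = ProjName, _ = '_'}),
                  lists:foreach(fun mnesia:delete_object/1, Ip),
                  mnesia:delete({project, ProjName})
              end,
          mnesia:transaction(F).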

In reality, data models are seldom fully normalized. A realistic alternative to a normalized database model would be - a data model which is not even in first normal form. Mnesia - is very suitable for applications such as telecommunications, - because it is easy to organize data in a very flexible manner. A - Mnesia database is always organized as a set of tables. Each - table is filled with rows/objects/records. What sets Mnesia - apart is that individual fields in a record can contain any type - of compound data structures. An individual field in a record can - contain lists, tuples, functions, and even record code. -

+ a data model that is not even in first normal form. Mnesia + is suitable for applications such as telecommunications, + because it is easy to organize data in a flexible manner. A + Mnesia database is always organized as a set of tables. + Each table is filled with rows (also called objects or records). + What sets Mnesia apart is that individual fields in + a record can contain any type of + compound data structure. An individual field in a record can + contain lists, tuples, functions, and even record code.

Many telecommunications applications have unique requirements - on lookup times for certain types of records. If our Company - database had been a part of a telecommunications system, then it - could be that the lookup time of an employee together - with a list of the projects the employee is working on, should - be minimized. If this was the case, we might choose a - drastically different data model which has no direct - relationships. We would only have the records themselves, and - different records could contain either direct references to - other records, or they could contain other records which are not - part of the Mnesia schema. - + on lookup times for certain types of records. If the Company + database had been a part of a telecommunications system, it + could be necessary to minimize the lookup time of an employee + together with a list of the projects the employee is + working on. If this is the case, a drastically different data model + without direct relationships can be chosen. You would then have + only the records themselves, and different records could contain + either direct references to other records, or contain other + records that are not part of the Mnesia schema.

-

We could create the following record definitions: -

+ on lookup times for certain types of records. If the Company + database had been a part of a telecommunications system, it + could be to minimize the lookup time of an employee + together with a list of the projects the employee is + working on. If this is the case, a drastically different data model + without direct relationships can be chosen. You would then have + only the records themselves, and different records could contain + either direct references to other records, or contain other + records that are not part of the Mnesia schema.

+

The following record definitions can be created:

-

An record which describes an employee might look like this: -

+

A record that describes an employee can look as follows:

Me = #employee{emp_no = 104732,
         name = klacke,
@@ -1368,50 +1326,43 @@ ok
         room_no = {221, 015},
         dept = 'B/SFR',
         projects = [erlang, mnesia, otp],
-        manager = 114872},
-    
-

This model only has three different tables, and the employee - records contain references to other records. We have the following - references in the record. -

+ manager = 114872}, +

This model has only three different tables, and the employee + records contain references to other records. The record has the + following references:

- 'B/SFR' refers to a dept record. + 'B/SFR' refers to a dept record. - [erlang, mnesia, otp]. This is a list of three - direct references to three different projects records. + [erlang, mnesia, otp] is a list of three + direct references to three different projects records. - 114872. This refers to another employee record. + 114872 refers to another employee record. -

We could also use the Mnesia record identifiers ({Tab, Key}) - as references. In this case, the dept attribute would be - set to the value {dept, 'B/SFR'} instead of - 'B/SFR'. -

+

The Mnesia record identifiers ({Tab, Key}) can + also be used as references. In this case, attribute dept + would be set to value {dept, 'B/SFR'} instead of + 'B/SFR'.

With this data model, some operations execute considerably - faster than they do with the normalized data model in our - Company database. On the other hand, some other operations + faster than they do with the normalized data model in the + Company database. However, some other operations become much more complicated. In particular, it becomes more difficult to ensure that records do not contain dangling - pointers to other non-existent, or deleted, records. -

+ pointers to other non-existent, or deleted, records.

The following code exemplifies a search with a non-normalized - data model. To find all employees at department - Dep with a salary higher than Salary, use the following code: -

+ data model. To find all employees at department Dep with + a salary higher than Salary, use the following code:
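A minimal sketch of such a search, assuming the non-normalized #employee{} record shown above, using a match specification over the single employee table:

      -record(employee, {emp_no, name, salary, sex, phone,
                         room_no, dept, projects, manager}).

      get_emps(Salary, Dep) ->
          %% Select employees in department Dep with salary > Salary
          %% in one pass over the employee table (no join needed).
          Head = #employee{salary = '$1', dept = Dep, _ = '_'},
          Spec = [{Head, [{'>', '$1', Salary}], ['$_']}],
          F = fun() -> mnesia:select(employee, Spec) end,
          mnesia:transaction(F).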

-

This code is not only easier to write and to understand, but it - also executes much faster. -

-

It is easy to show examples of code which executes faster if - we use a non-normalized data model, instead of a normalized - model. The main reason for this is that fewer tables are - required. For this reason, we can more easily combine data from - different tables in join operations. In the above example, the - get_emps/2 function was transformed from a join operation - into a simple query which consists of a selection and a projection - on one single table. -

+

This code is easier to write and to understand, and it + also executes much faster.

+

It is easy to show examples of code that executes faster if + a non-normalized data model is used, instead of a normalized + model. The main reason is that fewer tables are required. + Therefore, data from different tables can more easily be + combined in join operations. In the previous example, the + function get_emps/2 is transformed from a join operation + into a simple query, which consists of a selection and a + projection on one single table.
