The earlier chapters of this User Guide described how to get started with Mnesia and how to build a Mnesia database. This chapter describes the more advanced features available when building a distributed, fault-tolerant Mnesia database.

Data retrieval and matching can be performed efficiently if the key for the record is known. Conversely, if the key is unknown, all records in a table must be searched. The larger the table, the more time consuming it becomes. To remedy this problem, the indexing capabilities of Mnesia can be used to improve data retrieval and matching of records.

The following two functions manipulate indexes on existing tables:

mnesia:add_table_index(Tab, AttributeName)
mnesia:del_table_index(Tab, AttributeName)

These functions create or delete a table index on a field defined by AttributeName. The indexing capabilities of Mnesia are utilized with the following three functions, which retrieve and match records on the basis of index entries in the database:

mnesia:index_read(Tab, SecondaryKey, Pos)
mnesia:index_match_object(Pattern, Pos)
mnesia:match_object(Pattern)

These functions are further described and exemplified in Chapter 4.
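As an illustration (a sketch, not from the original text: the person record and its fields are hypothetical), an index can be created and then used for retrieval as follows:

```erlang
%% Sketch: assumes a running Mnesia node and an existing person table
%% created with {attributes, record_info(fields, person)}.
-record(person, {name, age, address}).

add_age_index() ->
    %% Create a secondary index on the age attribute.
    mnesia:add_table_index(person, age).

find_by_age(Age) ->
    %% index_read/3 uses the index instead of scanning the whole table.
    F = fun() -> mnesia:index_read(person, Age, #person.age) end,
    mnesia:transaction(F).
```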
Mnesia is a distributed, fault-tolerant DBMS. It is possible to replicate tables on different Erlang nodes in a variety of ways. The Mnesia programmer does not have to state where the different tables reside; only the names of the tables are specified in the program code.

A program works regardless of the data location. It makes no difference whether the data resides on the local node or on a remote node.

Notice that the program runs slower if the data is located on a remote node.
It has previously been shown that each table has a number of system attributes. Table attributes are specified when the table is created. For example, the following function creates a table with two RAM replicas:

mnesia:create_table(foo,
                    [{ram_copies, [N1, N2]},
                     {attributes, record_info(fields, foo)}]).

Tables can also have the properties ram_copies, disc_copies, and disc_only_copies, where each property has a list of Erlang nodes as its value; a replica of the corresponding storage type resides on each node in the list. Notice that no disc operations are performed when a program executes write operations to ram_copies replicas. However, if permanent RAM replicas are required, the tables can be dumped to disc with the function mnesia:dump_tables/1 or included in a backup.
Table properties can also be set and changed on existing tables. For details, see Chapter 3.
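For example (a hedged sketch; the table name foo and the node variable are placeholders), a replica can be added to an existing table and its storage type changed later:

```erlang
%% Sketch: assumes Mnesia is running on both the local node and Node,
%% and that table foo already exists.
add_replica(Node) ->
    %% Add a disc-based replica of foo on Node.
    {atomic, ok} = mnesia:add_table_copy(foo, Node, disc_copies),
    %% Later, the replica on Node can be turned into a RAM-only copy.
    mnesia:change_table_copy_type(foo, Node, ram_copies).
```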
There are basically two reasons for using more than one table replica: fault tolerance and speed. Notice that table replication provides a solution to both of these system requirements.

If there are two active table replicas, all information is still available if one replica fails. This can be an important property in many applications. Furthermore, if a table replica exists at two specific nodes, applications that execute at either of these nodes can read data from the table without accessing the network. Network operations are considerably slower and consume more resources than local operations.

It can be advantageous to create table replicas for a distributed application that reads data often, but writes data seldom, to achieve fast read operations on the local node. The major disadvantage with replication is the increased time to write data. If a table has two replicas, every write operation must access both table replicas. Since one of these write operations must be a network operation, it is considerably more expensive to perform a write operation to a replicated table than to a non-replicated table.

A concept of table fragmentation has been introduced to cope with large tables. The idea is to split a table into several manageable fragments. Each fragment is implemented as a first class Mnesia table and can be replicated, have indexes, and so on, as any other table. But the tables cannot have local_content or have the snmp connection activated.

To be able to access a record in a fragmented table, Mnesia must determine to which fragment the actual record belongs. This is done by module mnesia_frag, which implements the mnesia_access callback behavior.

At each record access, mnesia_frag first computes a hash of the record key. Second, the name of the table fragment is determined from the hash value. Finally, the actual table access is performed by the same functions as for non-fragmented tables. When the key is not known beforehand, all fragments are searched for matching records.

Notice that in ordered_set tables, the records are ordered per fragment, and the order is undefined in results returned by select and match_object.

The following code illustrates how a Mnesia table is converted to a fragmented table and how more fragments are added later:

mnesia:start().
ok
Fragmentation Properties

The table property frag_properties can be read with the function mnesia:table_info(Tab, frag_properties). The fragmentation properties are a list of tagged tuples with arity 2. By default the list is empty, but when it is non-empty it triggers Mnesia to regard the table as fragmented. The fragmentation properties are as follows:
{n_fragments, Int}

n_fragments regulates how many fragments the table currently has. This property can explicitly be set at table creation and later be changed with {add_frag, NodesOrDist} or del_frag. n_fragments defaults to 1.
{node_pool, List}

The node pool contains a list of nodes and can explicitly be set at table creation and later be changed with {add_node, Node} or {del_node, Node}. At table creation Mnesia tries to distribute the replicas of each fragment evenly over all the nodes in the node pool. Hopefully all nodes end up with the same number of replicas. node_pool defaults to the return value from the function mnesia:system_info(db_nodes).
{n_ram_copies, Int}

Regulates how many ram_copies replicas each fragment is to have. This property can explicitly be set at table creation. Default is 0, but if n_disc_copies and n_disc_only_copies also are 0, n_ram_copies defaults to 1.
{n_disc_copies, Int}

Regulates how many disc_copies replicas each fragment is to have. This property can explicitly be set at table creation. Default is 0.
{n_disc_only_copies, Int}

Regulates how many disc_only_copies replicas each fragment is to have. This property can explicitly be set at table creation. Default is 0.
{foreign_key, ForeignKey}

ForeignKey can either be the atom undefined or the tuple {ForeignTab, Attr}, where Attr denotes an attribute that is to be interpreted as a key in another fragmented table named ForeignTab. Mnesia ensures that the number of fragments in this table and in the foreign table are always the same.

When fragments are added or deleted, Mnesia automatically propagates the operation to all fragmented tables that have a foreign key referring to this table. Instead of using the record key to determine which fragment to access, the value of field Attr is used. This feature makes it possible to colocate records in different tables automatically on the same node. foreign_key defaults to undefined. However, if the foreign key is set to something else, it causes the default values of the other fragmentation properties to be the same values as the actual fragmentation properties of the foreign table.
{hash_module, Atom}

Enables definition of an alternative hashing scheme. The module must implement the mnesia_frag_hash callback behavior. This property can explicitly be set at table creation. Default is mnesia_frag_hash.

Older tables, which were created before the concept of user-defined hash modules was introduced, use module mnesia_frag_old_hash to be backwards compatible. mnesia_frag_old_hash still uses the deprecated function erlang:hash/1.
{hash_state, Term}

Enables a table-specific parameterization of a generic hash module. This property can explicitly be set at table creation. Default is undefined.
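Putting the properties together, a fragmented table can be created as in the following sketch (the table name dictionary and the chosen numbers are only examples):

```erlang
%% Sketch: a fragmented table with 8 fragments and 2 disc_copies
%% replicas per fragment, distributed over the default node pool.
create_frag_table() ->
    mnesia:create_table(dictionary,
        [{frag_properties, [{n_fragments, 8},
                            {n_disc_copies, 2}]},
         {attributes, [key, value]}]).
```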
Management of Fragmented Tables

The function mnesia:change_table_frag(Tab, Change) is intended to be used for reconfiguration of fragmented tables. Argument Change is to have one of the following values:
{activate, FragProps}

Activates the fragmentation properties of an existing table. FragProps is either to contain {node_pool, Nodes} or be empty.
deactivate

Deactivates the fragmentation properties of a table. The number of fragments must be 1. No other table can refer to this table in its foreign key.
{add_frag, NodesOrDist}

Adds a fragment to a fragmented table. All records in one of the old fragments are rehashed and about half of them are moved to the new (last) fragment. All other fragmented tables, which refer to this table in their foreign key, automatically get a new fragment. Also, their records are dynamically rehashed in the same manner as for the main table.

Argument NodesOrDist can either be a list of nodes or the result from the function mnesia:table_info(Tab, frag_dist). Argument NodesOrDist is assumed to be a sorted list with the best nodes to host new replicas first in the list. The new fragment gets the same number of replicas as the first fragment (see n_ram_copies, n_disc_copies, and n_disc_only_copies). The NodesOrDist list must at least contain one element for each replica that needs to be allocated.
del_frag

Deletes a fragment from a fragmented table. All records in the last fragment are moved to one of the other fragments. All other fragmented tables, which refer to this table in their foreign key, automatically lose their last fragment. Also, their records are dynamically rehashed in the same manner as for the main table.
{add_node, Node}

Adds a node to node_pool. The new node pool affects the list returned from the function mnesia:table_info(Tab, frag_dist).
{del_node, Node}

Deletes a node from node_pool. The new node pool affects the list returned from the function mnesia:table_info(Tab, frag_dist).
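As a sketch of how these change values are used (the table and node variables are placeholders):

```erlang
%% Sketch: reconfigure a fragmented table at runtime.
activate(Tab, Nodes) ->
    %% Turn an existing table into a fragmented one.
    mnesia:change_table_frag(Tab, {activate, [{node_pool, Nodes}]}).

add_fragment(Tab, Nodes) ->
    %% Nodes lists candidate nodes for the new fragment's replicas.
    mnesia:change_table_frag(Tab, {add_frag, Nodes}).

delete_fragment(Tab) ->
    mnesia:change_table_frag(Tab, del_frag).
```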
Extensions of Existing Functions

The function mnesia:create_table/2 creates a brand new fragmented table, by setting table property frag_properties to some proper values.

The function mnesia:delete_table/1 deletes a fragmented table including all its fragments. There must however not exist any other fragmented tables that refer to this table in their foreign key.

The function mnesia:table_info/2 now understands item frag_properties.

If the function mnesia:table_info/2 is invoked in the activity context of the module mnesia_frag, information about several new items can be obtained:
base_table
The name of the fragmented table.

n_fragments
The actual number of fragments.

node_pool
The pool of nodes.

n_ram_copies
n_disc_copies
n_disc_only_copies
The number of replicas with storage type ram_copies, disc_copies, and disc_only_copies, respectively. The actual values are dynamically derived from the first fragment. The first fragment serves as a prototype. When the actual values need to be computed (for example, when adding new fragments), they are determined by counting the number of each replica for each storage type. This means that when the functions mnesia:add_table_copy/3, mnesia:del_table_copy/2, and mnesia:change_table_copy_type/2 are applied on the first fragment, it affects the settings of n_ram_copies, n_disc_copies, and n_disc_only_copies.
foreign_key
The foreign key.

foreigners
All other tables that refer to this table in their foreign key.

frag_names
The names of all fragments.

frag_dist
A sorted list of {Node, Count} tuples that is sorted in increasing Count order. Count is the total number of replicas that this fragmented table hosts on each Node. The list always contains at least all nodes in node_pool. Nodes that do not belong to node_pool are put last in the list even if their Count is lower.

frag_size
A list of {Name, Size} tuples, where Name is a fragment name and Size is how many records it contains.

frag_memory
A list of {Name, Memory} tuples, where Name is a fragment name and Memory is how much memory it occupies.

size
Total size of all fragments.

memory
Total memory of all fragments.
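The following sketch shows how such items can be read in the activity context of mnesia_frag:

```erlang
%% Sketch: fragment information is only available when table_info/2
%% runs with mnesia_frag as the access module.
frag_info(Tab) ->
    F = fun() ->
            {mnesia:table_info(Tab, n_fragments),
             mnesia:table_info(Tab, frag_size)}
        end,
    mnesia:activity(sync_dirty, F, [], mnesia_frag).
```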
Load Balancing

There are several algorithms for distributing records in a fragmented table evenly over a pool of nodes. No one is best, it depends on the application needs. The following examples of situations need some attention:

permanent change of nodes. When a new permanent db_node is introduced or dropped, it can be time to change the pool of nodes and redistribute the replicas evenly over the new pool of nodes. It can also be time to add or delete a fragment before the replicas are redistributed.

size/memory threshold. When the total size or total memory of a fragmented table (or a single fragment) exceeds some application-specific threshold, it can be time to add a new fragment dynamically to obtain a better distribution of records.

temporary node down. When a node temporarily goes down, it can be time to compensate some fragments with new replicas to keep the desired level of redundancy. When the node comes up again, it can be time to remove the superfluous replica.

overload threshold. When the load on some node exceeds some application-specific threshold, it can be time to either add or move some fragment replicas to nodes with lower load. Take extra care if the table has a foreign key relation to some other table. To avoid severe performance penalties, the same redistribution must be performed for all the related tables.

Use the function mnesia:change_table_frag/2 to add new fragments and apply the usual schema manipulation functions (such as mnesia:add_table_copy/3, mnesia:del_table_copy/2, and mnesia:change_table_copy_type/2) on each fragment to perform the actual redistribution.
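A size/memory threshold check could be sketched as follows (the threshold and the growth policy are application-specific assumptions, not from the original text):

```erlang
%% Sketch: add a fragment when the total size exceeds MaxSize,
%% placing the new replicas on the least loaded nodes.
maybe_grow(Tab, MaxSize) ->
    Size = mnesia:activity(sync_dirty,
                           fun() -> mnesia:table_info(Tab, size) end,
                           [], mnesia_frag),
    if
        Size > MaxSize ->
            %% frag_dist lists the best candidate nodes first.
            Dist = mnesia:activity(sync_dirty,
                                   fun() -> mnesia:table_info(Tab, frag_dist) end,
                                   [], mnesia_frag),
            mnesia:change_table_frag(Tab, {add_frag, Dist});
        true ->
            ok
    end.
```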
Replicated tables have the same content on all nodes where they are replicated. However, it is sometimes advantageous to have tables, but different content on different nodes.

If attribute {local_content, true} is specified when you create the table, the table can reside on all nodes, but write operations on the table are only performed on the local copy.

Furthermore, when the table is initialized at startup, the table is only initialized locally, and the table content is not copied from another node.
It is possible to run Mnesia on nodes that do not have a disc. Replicas of type disc_copies or disc_only_copies are of course not possible to have on such nodes. This is especially troublesome for the schema table, as Mnesia needs the schema to initialize itself.

The schema table can, as other tables, reside on one or more nodes. The storage type of the schema table can either be disc_copies or ram_copies (but not disc_only_copies).

Hence, when a disc-less node needs to find the schema definitions from a remote node on the network, this information must be supplied through application parameter extra_db_nodes.

Application parameter schema_location controls where Mnesia searches for its schema. The parameter can be one of the following atoms:
disc
Mandatory disc. The schema is assumed to be located in the Mnesia directory. If the schema cannot be found, Mnesia refuses to start.

ram
Mandatory RAM. The schema resides in RAM only. At startup, a tiny new schema is generated. This default schema contains only the definition of the schema table and resides on the local node only. Since no other nodes are found in the default schema, configuration parameter extra_db_nodes must be used to let the node share its table definitions with other nodes.

opt_disc
Optional disc. The schema can reside on either disc or RAM. If the schema is found on disc, Mnesia starts as a disc-full node (the storage type of the schema table is disc_copies). If no schema is found on disc, Mnesia starts as a disc-less node (the storage type of the schema table is ram_copies). The default value for the application parameter is opt_disc.
When schema_location is set to opt_disc, the function mnesia:change_table_copy_type/3 can be used to change the storage type of the schema. This is illustrated as follows:

1> mnesia:start().
ok
2> mnesia:change_table_copy_type(schema, node(), disc_copies).
{atomic, ok}

Assuming that the call to mnesia:start/0 does not find any schema to read on disc, Mnesia starts as a disc-less node, and the second call then changes it to a node that uses the disc to store the schema locally.
Nodes can be added to and removed from a Mnesia system. This can be done by adding a copy of the schema to those nodes.

The functions mnesia:add_table_copy/3 and mnesia:del_table_copy/2 can be used to add and delete replicas of the schema table. Adding a node to the list of nodes where the schema is replicated affects two things. First, it allows other tables to be replicated to this node. Second, it causes Mnesia to try to contact the node at startup of disc-full nodes.

The function call mnesia:del_table_copy(schema, mynode@host) deletes node mynode@host from the Mnesia system. The call fails if Mnesia is running on mynode@host. The other Mnesia nodes never try to connect to that node again.

If the storage type of the schema is ram_copies, that is, a disc-less node, Mnesia does not use the disc on that particular node. The disc use is enabled by changing the storage type of the table schema to disc_copies.

New schemas are created explicitly with the function mnesia:create_schema/1 or implicitly by starting Mnesia without a disc resident schema. At startup, Mnesia connects different nodes to each other; they then exchange table definitions with each other, and the table definitions are merged. During the merge procedure, Mnesia performs a sanity test to ensure that the table definitions are compatible with each other. If a table exists on several nodes, the cookie must be the same; this situation can otherwise arise if a table has been created on two nodes independently of each other while they were disconnected. To solve the problem, one of the tables must be deleted (as the cookies differ, it is regarded to be two different tables even if they have the same name).

Merging different versions of the schema table does not always require the cookies to be the same. If the storage type of the schema table is disc_copies, the cookie is immutable, and all other db_nodes must have the same cookie. When the schema is stored as type ram_copies, cookies can be replaced, and the other db_nodes can have different cookies.

Transactions that update the definition of a table require that Mnesia is started on all nodes where the storage type of the schema is disc_copies. All replicas of the table on these nodes must also be loaded.
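A common recipe for attaching a new node at runtime can be sketched as follows (run on the new node; ExistingNode is a placeholder for a node already in the system):

```erlang
%% Sketch: connect a freshly started (disc-less) Mnesia node to an
%% existing system and then make it disc-full.
join(ExistingNode) ->
    ok = mnesia:start(),
    %% Ask Mnesia to contact a node that already holds the schema.
    {ok, _Nodes} = mnesia:change_config(extra_db_nodes, [ExistingNode]),
    %% Store the schema on disc so the node keeps it across restarts.
    mnesia:change_table_copy_type(schema, node(), disc_copies).
```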
System events and table events are the two event categories that Mnesia generates in various situations.

A user process can subscribe on the events generated by Mnesia. The following two functions are provided:

mnesia:subscribe(EventCategory)
Ensures that a copy of all events of type EventCategory is sent to the calling process.

mnesia:unsubscribe(EventCategory)
Removes the subscription of events of type EventCategory.

The old event category {table, Tab} is the same event category as {table, Tab, simple}.

The subscribe functions activate a subscription of events. The events are delivered as messages to the process evaluating the function mnesia:subscribe/1.

All system events are subscribed by Mnesia's gen_event handler. The default gen_event handler is mnesia_event, but it can be changed by using application parameter event_module.

The system events are as follows:
{mnesia_up, Node}
Mnesia has been started on a node. Node is the name of the node. By default this event is ignored.

{mnesia_down, Node}
Mnesia has been stopped on a node. Node is the name of the node. By default this event is ignored.

{mnesia_checkpoint_activated, Checkpoint}
A checkpoint with the name Checkpoint has been activated and the current node is involved in the checkpoint. Checkpoints can be activated explicitly with the function mnesia:activate_checkpoint/1 or implicitly at backup, when adding table replicas, at internal transfer of data between nodes, and so on. By default this event is ignored.

{mnesia_checkpoint_deactivated, Checkpoint}
A checkpoint with the name Checkpoint has been deactivated and the current node was involved in the checkpoint. Checkpoints can be deactivated explicitly with the function mnesia:deactivate_checkpoint/1 or implicitly when the last replica of a table involved in the checkpoint becomes unavailable, for example, at node-down. By default this event is ignored.
{mnesia_overload, Details}
Mnesia on the current node is overloaded and the subscriber is to take action.

A typical overload situation occurs when the applications perform more updates on disc resident tables than Mnesia can handle. Ignoring this kind of overload can lead to a situation where the disc space is exhausted (regardless of the size of the tables stored on disc).

Each update is appended to the transaction log and occasionally (depending on how it is configured) dumped to the table files. The table file storage is more compact than the transaction log storage, especially if the same record is updated repeatedly. If the thresholds for dumping the transaction log are reached before the previous dump is finished, an overload event is triggered.

Another typical overload situation is when the transaction manager cannot commit transactions at the same pace as the applications perform updates of disc resident tables. When this occurs, the message queue of the transaction manager continues to grow until the memory is exhausted or the load decreases.

The same problem can occur for dirty updates. The overload is detected locally on the current node, but its cause can be on another node. Application processes can cause high load if any table resides on another node (replicated or not). By default this event is reported to the error logger.
{inconsistent_database, Context, Node}
Mnesia regards the database as potentially inconsistent and gives its applications a chance to recover from the inconsistency, for example, by installing a consistent backup as fallback and then restarting the system, or by picking a master node. By default an error is reported to the error logger.

{mnesia_fatal, Format, Args, BinaryCore}
Mnesia has encountered a fatal error and terminates soon. The reason for the fatal error is explained in Format and Args, which can be given as input to io:format/2 or sent to the error logger. By default this event is sent to the error logger.

{mnesia_info, Format, Args}
Mnesia has detected something that can be of interest when debugging the system. This is explained in Format and Args, which can be given as input to io:format/2 or sent to the error logger. By default this event is printed with io:format/2.

{mnesia_error, Format, Args}
Mnesia has encountered an error. The reason for the error is explained in Format and Args, which can be given as input to io:format/2 or sent to the error logger. By default this event is reported to the error logger.

{mnesia_user, Event}
An application has invoked the function mnesia:report_event(Event). Event can be any Erlang data structure. When tracing a system of Mnesia applications, it is useful to be able to interleave Mnesia's own events with application-related events that give information about the application context. By default this event is printed with io:format/2.
{complete, ActivityID}
This event occurs when a transaction that caused a modification to the database is completed. It is useful for determining when a set of table events (see the next section), caused by a given activity, have all been sent. Once this event is received, it is guaranteed that no further table events with the same ActivityID will be received. Notice that this event can still be received even if no table events with a corresponding ActivityID were received, depending on the tables to which the subscriber is subscribed.

Dirty operations always contain only one update and thus no activity event is sent.
Table events are events related to table updates. There are two types of table events, simple and detailed.

The simple table events are tuples like {Oper, Record, ActivityId}, where Oper is the operation performed, Record is the record involved in the operation, and ActivityId is the identity of the transaction performing the operation. Notice that the record name is the table name even when record_name has another setting.

The table events that can occur are as follows:
a new record has been written. NewRecord contains the new value
of the record.
a record has possibly been deleted with mnesia:delete_object/1.
OldRecord contains the value of the old record, as stated as
argument by the caller.
one or more records have possibly been deleted. All records
with the key Key in the table are deleted.
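As an illustration, a process that consumes these simple events can be sketched as follows. This is a sketch only: the table name person is an assumption, and the code requires a running Mnesia node on which that table exists.

```erlang
%% Sketch: subscribe to simple table events for an assumed table
%% `person` and print each event. Requires a running Mnesia node.
-module(watcher).
-export([start/0]).

start() ->
    {ok, _Node} = mnesia:subscribe({table, person, simple}),
    loop().

loop() ->
    receive
        {mnesia_table_event, {write, NewRecord, _ActivityId}} ->
            io:format("write: ~p~n", [NewRecord]),
            loop();
        {mnesia_table_event, {delete_object, OldRecord, _ActivityId}} ->
            io:format("delete_object: ~p~n", [OldRecord]),
            loop();
        {mnesia_table_event, {delete, {Tab, Key}, _ActivityId}} ->
            io:format("delete key ~p from ~p~n", [Key, Tab]),
            loop()
    end.
```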
The detailed table events are tuples like
{Oper, Table, Data, [OldRecs], ActivityId}, where Oper is the
operation performed, Table is the table involved in the
operation, Data is the record written or the OID deleted, and
OldRecs is the contents before the operation.
The table-related events that can occur are as follows:
a new record has been written. NewRecord contains the new value
of the record, and OldRecords contains the records before the
operation is performed. Notice that the new content depends on
the table type.
records have possibly been deleted
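A corresponding sketch for detailed events, which also carry the table name and the old records (again, the table name person is an assumption, and a running Mnesia node is required):

```erlang
%% Sketch: subscribe to detailed table events for an assumed
%% table `person` and inspect one write event.
{ok, _Node} = mnesia:subscribe({table, person, detailed}),
receive
    {mnesia_table_event, {write, Tab, NewRecord, OldRecords, _ActivityId}} ->
        io:format("~p: ~p replaced ~p~n", [Tab, NewRecord, OldRecords])
after 5000 ->
    timeout
end.
```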
Debugging a Mnesia application can be difficult for a number of
reasons, primarily related to difficulties in understanding how
the transaction and table load mechanisms work.

The debug level of Mnesia is set by calling
mnesia:set_debug_level(Level), where the parameter Level is one
of the following:
no trace outputs at all. This is the default.
activates tracing of important debug events. These debug events
generate {mnesia_info, Format, Args} system events.
activates all events at the verbose level plus full traces of
all debug events. These debug events generate
{mnesia_info, Format, Args} system events.
activates all events at the debug level. On this level, the
Mnesia event handler starts subscribing to updates on all
Mnesia tables. This level is intended only for debugging small
toy systems, as many large events can be generated.
is an alias for none.

is an alias for debug.
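The level can also be changed at runtime with mnesia:set_debug_level/1, which returns the previous level, so it is easy to restore afterwards:

```erlang
%% Raise the debug level temporarily while reproducing a problem,
%% then restore the old level. Requires a running Mnesia node.
OldLevel = mnesia:set_debug_level(debug),
%% ... reproduce the problem while debug events are generated ...
mnesia:set_debug_level(OldLevel).
```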
The debug level of Mnesia is also an application parameter.
Thus, it is possible to turn on debugging of Mnesia in the
initial startup phase by starting the Erlang system as follows:

% erl -mnesia debug verbose
Programming concurrent Erlang systems is the subject of a
separate book. However, it is worthwhile to draw attention to
the following features, which permit concurrent processes to
exist in a Mnesia system.

A group of functions or processes can be called within a
transaction. A transaction can include statements that read,
write, or delete data from the DBMS. Many such transactions can
run concurrently, and the programmer does not need to explicitly
synchronize the processes that manipulate the data.

All programs accessing the database through the transaction
system can be written as if they had sole access to the data.
This is a desirable property, as all synchronization is taken
care of by the transaction handler. If a program reads or writes
data, the system ensures that no other program tries to
manipulate the same data at the same time.
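As a sketch of this property, the following transaction reads and updates a record with no explicit locking in user code; the payroll module, the raise function, and the employee record layout are assumptions for illustration:

```erlang
-module(payroll).
-export([raise/2]).

%% Assumed record layout for illustration only.
-record(employee, {emp_no, name, salary}).

%% Runs as a transaction: Mnesia serializes conflicting access,
%% so many such calls can run concurrently without explicit locks.
raise(EmpNo, Amount) ->
    F = fun() ->
            [E] = mnesia:read({employee, EmpNo}),
            mnesia:write(E#employee{salary = E#employee.salary + Amount})
        end,
    mnesia:transaction(F).
```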
Tables can be moved or deleted, and the table layout can be
reconfigured in various ways. An important aspect of the
implementation of these functions is that user programs can
continue to use a table while it is being reconfigured. For
example, it is possible to move a table and perform write
operations to the table at the same time. This is important for
many applications that require continuously available services.
Refer to Chapter 4:
If and when you decide that you want to start and manipulate
Mnesia, it is often easier to write the definitions and data
into an ordinary text file.
These functions are much slower than the ordinary store and
load functions of Mnesia. However, they are mainly intended for
minor experiments and initial prototyping. Their major advantage
is that they are easy to use.

The format of the text file is as follows:
{tables, [{Typename, [Options]}, {Typename2, ...}]}.

{Typename, Attribute1, Attribute2, ...}.
{Typename, Attribute1, Attribute2, ...}.
For example, to start playing with a small database for healthy
foods, enter the following data into a text file:
The following session with the Erlang shell shows how to load
the fruits database.
]]>
It can be seen that the DBMS was initiated from a regular
text file.
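For completeness, a sketch of the two calls involved; the file name "FRUITS" is an assumption chosen to match the fruits example, and a running Mnesia node is required:

```erlang
%% Load table definitions and data from a text file, and later
%% dump the current contents back out to another text file.
mnesia:load_textfile("FRUITS"),
%% ... experiment with the database ...
mnesia:dump_to_textfile("FRUITS.dumped").
```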
The Company database, introduced in Chapter 2, has three tables
that store records (employee, dept, project) and three tables
that store relationships (manager, at_dep, in_proj). This is a
normalized data model, which has some advantages over a
non-normalized data model.
It is more efficient to do a generalized search in a normalized
database. Some operations are also easier to perform on a
normalized data model. For example, one project can easily be
removed, as the following example illustrates:

In reality, data models are seldom fully normalized. A realistic
alternative to a normalized database model is a data model that
is not even in first normal form. Mnesia is well suited for
applications such as telecommunications, because it is easy to
organize data in a flexible manner. A Mnesia database is always
organized as a set of tables. Each table is filled with rows,
objects, and records. What sets Mnesia apart is that individual
fields in a record can contain any type of compound data
structure: an individual field can contain lists, tuples,
functions, and even record code.
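As a sketch, the project removal mentioned above can be implemented as one transaction that deletes the project record and every relationship record referring to it. The module and function names are hypothetical; the in_proj layout follows the Company database from Chapter 2:

```erlang
-module(company_admin).
-export([remove_proj/1]).

-include_lib("stdlib/include/qlc.hrl").

%% Relationship table layout from the Company database (Chapter 2).
-record(in_proj, {emp, proj_name}).

remove_proj(ProjName) ->
    F = fun() ->
            %% Find all in_proj relationships that refer to the project.
            InProjs = qlc:e(qlc:q([X || X <- mnesia:table(in_proj),
                                        X#in_proj.proj_name == ProjName])),
            %% Delete the project itself and each relationship record.
            mnesia:delete({project, ProjName}),
            [mnesia:delete_object(R) || R <- InProjs],
            ok
        end,
    mnesia:transaction(F).
```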
Many telecommunications applications have unique requirements on
lookup times for certain types of records. If the Company
database were part of a telecommunications system, it could be
that the lookup time of an employee, together with a list of the
projects the employee is working on, must be minimized. If so, a
drastically different data model with no direct relationships
might be chosen: only the records themselves are kept, and
different records can contain either direct references to other
records, or other records that are not part of the Mnesia
schema.
The following record definitions can be created:
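A sketch of such definitions; the employee fields match the example that follows, while the dept and project field sets are assumptions for illustration:

```erlang
%% Non-normalized model: the employee record itself carries the
%% department, the project list, and the manager reference.
-record(employee, {emp_no, name, salary, room_no,
                   dept, projects, manager}).
-record(dept, {id, name}).
-record(project, {name, number}).
```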
A record that describes an employee can look as follows:
Me = #employee{emp_no = 104732,
               name = klacke,
               room_no = {221, 015},
               dept = 'B/SFR',
               projects = [erlang, mnesia, otp],
               manager = 114872},

This model has only three different tables, and the employee
records contain references to other records. The record has the
following references:
The Mnesia record identifiers ({Tab, Key}) can also be used as
references.
With this data model, some operations execute considerably
faster than they do with the normalized data model in the
Company database. On the other hand, some other operations
become considerably slower.
The following code exemplifies a search with the non-normalized
data model. To find all employees at department Dep:
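A sketch of such a search, using a wild pattern match on the single employee table; the module name, function name, and record layout are assumptions:

```erlang
-module(company_search).
-export([employees_at/1]).

%% Assumed non-normalized employee record for illustration.
-record(employee, {emp_no, name, room_no, dept, projects, manager}).

%% Match on one table only: no join is needed, because the
%% department is stored directly in each employee record.
employees_at(Dep) ->
    Pat0 = mnesia:table_info(employee, wild_pattern),
    Pat = Pat0#employee{dept = Dep},
    F = fun() -> mnesia:match_object(Pat) end,
    mnesia:transaction(F).
```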
This code is easier to write and to understand, and it also
executes much faster.

It is easy to show examples of code that executes faster if a
non-normalized data model is used, instead of a normalized
model. The main reason is that fewer tables are required.
Therefore, data from different tables can more easily be
combined in join operations. In the previous example, the search
function is transformed from a join operation into a simple
query, which consists of a selection and a projection on one
single table.