net_kernel Erlang Networking Kernel

The net kernel is a system process, registered as net_kernel, which must be running for distributed Erlang to work. The purpose of this process is to implement parts of the BIFs spawn/4 and spawn_link/4, and to provide monitoring of the network.

An Erlang node is started using the command line flag -name or -sname:

$ erl -sname foobar

It is also possible to call net_kernel:start/1 directly from the normal Erlang shell prompt:

1> net_kernel:start([foobar, shortnames]).
{ok,<0.64.0>}
(foobar@gringotts)2>

If the node is started with the command line flag -sname, the node name will be foobar@Host, where Host is the short name of the host (not the fully qualified domain name). If started with the -name flag, Host is the fully qualified domain name. See erl(1).
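For comparison, starting the node with -name on a host whose fully qualified domain name is, hypothetically, gringotts.example.com gives a long node name:

$ erl -name foobar
(foobar@gringotts.example.com)1> node().
'foobar@gringotts.example.com'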

Normally, connections are established automatically when another node is referenced. This functionality can be disabled by setting the Kernel configuration parameter dist_auto_connect to never, see kernel(6). In this case, connections must be established explicitly by calling net_kernel:connect_node/1.
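A minimal sketch of the explicit variant, assuming automatic connections are disabled and that a node named bar@gringotts (hypothetical) is reachable:

$ erl -sname foo -kernel dist_auto_connect never
(foo@gringotts)1> net_kernel:connect_node(bar@gringotts).
true
(foo@gringotts)2> nodes().
[bar@gringotts]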

Which nodes are allowed to communicate with each other is handled by the magic cookie system, see Distributed Erlang in the Erlang Reference Manual.

Permit access to a specified set of nodes

Permits access to the specified set of nodes.

Before the first call to allow/1, any node with the correct cookie can be connected. When allow/1 is called, a list of allowed nodes is established. Any access attempts made from (or to) nodes not in that list will be rejected.

Subsequent calls to allow/1 will add the specified nodes to the list of allowed nodes. It is not possible to remove nodes from the list.

Returns error if any element in Nodes is not an atom.
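A brief sketch, using hypothetical node names; connection attempts to or from nodes outside the allowed set are rejected:

(foo@gringotts)1> net_kernel:allow([bar@gringotts, baz@gringotts]).
ok
(foo@gringotts)2> net_kernel:connect_node(quux@gringotts).
false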

Establish a connection to a node

Establishes a connection to Node. Returns true if successful, false if not, and ignored if the local node is not alive.
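For example, on a node that is not alive the call is ignored (node name hypothetical):

1> net_kernel:connect_node(bar@gringotts).
ignored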

Subscribe to node status change messages

The calling process subscribes or unsubscribes to node status change messages. A nodeup message is delivered to all subscribing processes when a new node is connected, and a nodedown message is delivered when a node is disconnected.

If Flag is true, a new subscription is started. If Flag is false, all previous subscriptions started with the same Options are stopped. Two option lists are considered the same if they contain the same set of options.

As of kernel version 2.11.4, and erts version 5.5.4, the following is guaranteed:

nodeup messages will be delivered before delivery of any message from the remote node passed through the newly established connection. nodedown messages will not be delivered until all messages from the remote node that have been passed through the connection have been delivered.

Note that this is not guaranteed for kernel versions before 2.11.4.

As of kernel version 2.11.4, subscriptions can also be made before the net_kernel server has been started, i.e., net_kernel:monitor_nodes/[1,2] does not return ignored.

As of kernel version 2.13, and erts version 5.7, the following is guaranteed:

nodeup messages will be delivered after the corresponding node appears in results from erlang:nodes/X. nodedown messages will be delivered after the corresponding node has disappeared in results from erlang:nodes/X.

Note that this is not guaranteed for kernel versions before 2.13.

The format of the node status change messages depends on Options. If Options is [], which is the default, the format is:

{nodeup, Node} | {nodedown, Node}

Node = node()

If Options /= [], the format is:

{nodeup, Node, InfoList} | {nodedown, Node, InfoList}

Node = node()
InfoList = [{Tag, Val}]

InfoList is a list of tuples. Its contents depend on Options; see below.

Also, when Options == [], only visible nodes, that is, nodes that appear in the result of nodes/0, are monitored.

Option can be any of the following:

{node_type, NodeType}

Currently valid values for NodeType are:

visible

Subscribe to node status change messages for visible nodes only. The tuple {node_type, visible} is included in InfoList.

hidden

Subscribe to node status change messages for hidden nodes only. The tuple {node_type, hidden} is included in InfoList.

all

Subscribe to node status change messages for both visible and hidden nodes. The tuple {node_type, visible | hidden} is included in InfoList.

nodedown_reason

The tuple {nodedown_reason, Reason} is included in InfoList in nodedown messages. Reason can be:

connection_setup_failed

The connection setup failed (after nodeup messages had been sent).

no_network

No network available.

net_kernel_terminated

The net_kernel process terminated.

shutdown

Unspecified connection shutdown.

connection_closed

The connection was closed.

disconnect

The connection was disconnected (forced from the current node).

net_tick_timeout

Net tick timeout.

send_net_tick_failed

Failed to send net tick over the connection.

get_status_failed

Status information retrieval from the Port holding the connection failed.
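A usage sketch (node names are hypothetical); the exact InfoList contents depend on the events observed:

(foo@gringotts)1> net_kernel:monitor_nodes(true, [{node_type, all}, nodedown_reason]).
ok
(foo@gringotts)2> net_kernel:connect_node(bar@gringotts).
true
(foo@gringotts)3> flush().
Shell got {nodeup,bar@gringotts,[{node_type,visible}]}
ok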
Get net_ticktime

Gets net_ticktime (see kernel(6)).

Currently defined return values (Res):

NetTicktime

net_ticktime is NetTicktime seconds.

{ongoing_change_to, NetTicktime}

net_kernel is currently changing net_ticktime to NetTicktime seconds.

ignored

The local node is not alive.
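For example, on a node running with the default configuration:

(foo@gringotts)1> net_kernel:get_net_ticktime().
60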

Set net_ticktime

Sets net_ticktime (see kernel(6)) to NetTicktime seconds. TransitionPeriod defaults to 60.

Some definitions:

The minimum transition traffic interval (MTTI)

minimum(NetTicktime, PreviousNetTicktime)*1000 div 4 milliseconds.

The transition period

The time of the least number of consecutive MTTIs to cover TransitionPeriod seconds following the call to set_net_ticktime/2 (i.e. ((TransitionPeriod*1000 - 1) div MTTI + 1)*MTTI milliseconds).

If NetTicktime < PreviousNetTicktime, the actual net_ticktime change is done at the end of the transition period; otherwise, at the beginning. During the transition period, net_kernel ensures that there is outgoing traffic on all connections at least every MTTI milliseconds.

The net_ticktime changes have to be initiated on all nodes in the network (with the same NetTicktime) before the end of any transition period on any node; otherwise, connections may erroneously be disconnected.
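As a worked example (values only illustrative): if net_ticktime is changed from 60 to 20 seconds with the default TransitionPeriod of 60, the MTTI is minimum(20, 60)*1000 div 4 = 5000 milliseconds, and the transition period is ((60*1000 - 1) div 5000 + 1)*5000 = 60000 milliseconds. Since 20 < 60, the actual change takes effect 60 seconds after the call.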

Returns one of the following:

unchanged

net_ticktime already had the value of NetTicktime and was left unchanged.

change_initiated

net_kernel has initiated the change of net_ticktime to NetTicktime seconds.

{ongoing_change_to, NewNetTicktime}

The request was ignored because net_kernel was busy changing net_ticktime to NewNetTicktime seconds.
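A usage sketch (values only illustrative):

(foo@gringotts)1> net_kernel:set_net_ticktime(20, 60).
change_initiated
(foo@gringotts)2> net_kernel:get_net_ticktime().
{ongoing_change_to,20}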

start([Name]) -> {ok, pid()} | {error, Reason}
start([Name, NameType]) -> {ok, pid()} | {error, Reason}
start([Name, NameType, Ticktime]) -> {ok, pid()} | {error, Reason}

Turn an Erlang runtime system into a distributed node

Name = atom()
NameType = shortnames | longnames
Reason = {already_started, pid()} | term()

Note that the argument is a list containing exactly one, two, or three elements. NameType defaults to longnames and Ticktime to 15000.

Turns a non-distributed node into a distributed node by starting net_kernel and other necessary processes.

Turn a node into a non-distributed Erlang runtime system

Turns a distributed node into a non-distributed node. For other nodes in the network, this is the same as the node going down. Only possible when the net kernel was started using start/1, otherwise returns {error, not_allowed}. Returns {error, not_found} if the local node is not alive.
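For example, making a runtime system distributed and then reverting it (host name hypothetical):

1> net_kernel:start([foobar, shortnames]).
{ok,<0.64.0>}
(foobar@gringotts)2> net_kernel:stop().
ok
3> node().
nonode@nohost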