Diffstat (limited to 'erts/doc')
-rw-r--r--  erts/doc/src/absform.xml    |  10
-rw-r--r--  erts/doc/src/erl.xml        |  52
-rw-r--r--  erts/doc/src/erl_nif.xml    | 259
-rw-r--r--  erts/doc/src/erl_tracer.xml |  82
-rw-r--r--  erts/doc/src/erlang.xml     | 359
5 files changed, 532 insertions, 230 deletions
diff --git a/erts/doc/src/absform.xml b/erts/doc/src/absform.xml index 6d6ba224a0..bfabb7f042 100644 --- a/erts/doc/src/absform.xml +++ b/erts/doc/src/absform.xml @@ -68,22 +68,12 @@ <item>If D is a module declaration consisting of the forms <c>F_1</c>, ..., <c>F_k</c>, then Rep(D) = <c>[Rep(F_1), ..., Rep(F_k)]</c>.</item> - <item>If F is an attribute <c>-behavior(Behavior)</c>, then - Rep(F) = <c>{attribute,LINE,behavior,Behavior}</c>.</item> - <item>If F is an attribute <c>-behaviour(Behaviour)</c>, then - Rep(F) = <c>{attribute,LINE,behaviour,Behaviour}</c>.</item> - <item>If F is an attribute <c>-compile(Options)</c>, then - Rep(F) = <c>{attribute,LINE,compile,Options}</c>.</item> <item>If F is an attribute <c>-export([Fun_1/A_1, ..., Fun_k/A_k])</c>, then Rep(F) = <c>{attribute,LINE,export,[{Fun_1,A_1}, ..., {Fun_k,A_k}]}</c>.</item> - <item>If F is an attribute <c>-export_type([Type_1/A_1, ..., Type_k/A_k])</c>, then - Rep(F) = <c>{attribute,LINE,export_type,[{Type_1,A_1}, ..., {Type_k,A_k}]}</c>.</item> <item>If F is an attribute <c>-import(Mod,[Fun_1/A_1, ..., Fun_k/A_k])</c>, then Rep(F) = <c>{attribute,LINE,import,{Mod,[{Fun_1,A_1}, ..., {Fun_k,A_k}]}}</c>.</item> <item>If F is an attribute <c>-module(Mod)</c>, then Rep(F) = <c>{attribute,LINE,module,Mod}</c>.</item> - <item>If F is an attribute <c>-optional_callbacks([Fun_1/A_1, ..., Fun_k/A_k])</c>, then - Rep(F) = <c>{attribute,LINE,optional_callbacks,[{Fun_1,A_1}, ..., {Fun_k,A_k}]}</c>.</item> <item>If F is an attribute <c>-file(File,Line)</c>, then Rep(F) = <c>{attribute,LINE,file,{File,Line}}</c>.</item> <item>If F is a function declaration diff --git a/erts/doc/src/erl.xml b/erts/doc/src/erl.xml index e13470c83c..1bbde7f1e0 100644 --- a/erts/doc/src/erl.xml +++ b/erts/doc/src/erl.xml @@ -627,11 +627,48 @@ <p>Sets the default binary virtual heap size of processes to the size <c><![CDATA[Size]]></c>.</p> </item> + <marker id="+hmax"/> + <tag><c><![CDATA[+hmax Size]]></c></tag> + <item> + <p>Sets the default maximum heap size of processes to the size + <c><![CDATA[Size]]></c>. If <c>+hmax</c> is not given, the default is <c>0</c> + which means that no maximum heap size is used. + For more information, see the documentation of + <seealso marker="erlang#process_flag_max_heap_size"> + <c>process_flag(max_heap_size, MaxHeapSize)</c></seealso>.</p> + </item> + <marker id="+hmaxel"/> + <tag><c><![CDATA[+hmaxel true|false]]></c></tag> + <item> + <p>Sets whether to send an error logger message for processes that reach + the maximum heap size or not. If <c>+hmaxel</c> is not given, the default is <c>true</c>. + For more information, see the documentation of + <seealso marker="erlang#process_flag_max_heap_size"> + <c>process_flag(max_heap_size, MaxHeapSize)</c></seealso>.</p> + </item> + <marker id="+hmaxk"/> + <tag><c><![CDATA[+hmaxk true|false]]></c></tag> + <item> + <p>Sets whether to kill processes that reach the maximum heap size or not. If + <c>+hmaxk</c> is not given, the default is <c>true</c>. For more information, + see the documentation of + <seealso marker="erlang#process_flag_max_heap_size"> + <c>process_flag(max_heap_size, MaxHeapSize)</c></seealso>.</p> + </item> <tag><c><![CDATA[+hpds Size]]></c></tag> <item> <p>Sets the initial process dictionary size of processes to the size <c><![CDATA[Size]]></c>.</p> </item> + <tag><marker id="+hmqd"><c>+hmqd off_heap|on_heap|mixed</c></marker></tag> + <item><p> + Sets the default value for the process flag + <c>message_queue_data</c>. 
If <c>+hmqd</c> is not + passed, <c>mixed</c> will be the default. For more information, + see the documentation of + <seealso marker="erlang#process_flag_message_queue_data"><c>process_flag(message_queue_data, + MQD)</c></seealso>. + </p></item> <tag><c><![CDATA[+K true | false]]></c></tag> <item> <p>Enables or disables the kernel poll functionality if @@ -1361,21 +1398,6 @@ <seealso marker="kernel:error_logger#warning_map/0">error_logger(3)</seealso> for further information.</p> </item> - <tag><c><![CDATA[+xFlag Value]]></c></tag> - <item> - <p>Default process flag settings.</p> - <taglist> - <tag><marker id="+xmqd"><c>+xmqd off_heap|on_heap|mixed</c></marker></tag> - <item><p> - Sets the default value for the process flag - <c>message_queue_data</c>. If <c>+xmqd</c> is not - passed, <c>mixed</c> will be the default. For more information, - see the documentation of - <seealso marker="erlang#process_flag_message_queue_data"><c>process_flag(message_queue_data, - MQD)</c></seealso>. - </p></item> - </taglist> - </item> <tag><c><![CDATA[+zFlag Value]]></c></tag> <item> <p>Miscellaneous flags.</p> diff --git a/erts/doc/src/erl_nif.xml b/erts/doc/src/erl_nif.xml index 7546f7ef81..33a4fee182 100644 --- a/erts/doc/src/erl_nif.xml +++ b/erts/doc/src/erl_nif.xml @@ -138,29 +138,6 @@ ok automatically unloaded when the module code that it belongs to is purged by the code server.</p> - <p><marker id="lengthy_work"/> - As mentioned in the <seealso marker="#WARNING">warning</seealso> text at - the beginning of this document it is of vital importance that a native function - return relatively quickly. It is hard to give an exact maximum amount - of time that a native function is allowed to work, but as a rule of thumb - a well-behaving native function should return to its caller before a - millisecond has passed. This can be achieved using different approaches. - If you have full control over the code to execute in the native - function, the best approach is to divide the work into multiple chunks of - work and call the native function multiple times, either directly from Erlang code - or by having a native function schedule a future NIF call via the - <seealso marker="#enif_schedule_nif"> enif_schedule_nif</seealso> function. Function - <seealso marker="#enif_consume_timeslice">enif_consume_timeslice</seealso> can be - used to help with such work division. In some cases, however, this might not - be possible, e.g. when calling third-party libraries. Then you typically want - to dispatch the work to another thread, return - from the native function, and wait for the result. The thread can send - the result back to the calling thread using message passing. Information - about thread primitives can be found below. If you have built your system - with <em>the currently experimental</em> support for dirty schedulers, - you may want to try out this functionality by dispatching the work to a - <seealso marker="#dirty_nifs">dirty NIF</seealso>, - which does not have the same duration restriction as a normal NIF.</p> </description> <section> <title>FUNCTIONALITY</title> @@ -328,38 +305,161 @@ ok </list></p> </item> - <tag>Long-running NIFs</tag> - <item><p><marker id="dirty_nifs"/>Native functions - <seealso marker="#lengthy_work"> - must normally run quickly</seealso>, as explained earlier in this document. They - generally should execute for no more than a millisecond. 
But not all native functions - can execute so quickly; for example, functions that encrypt large blocks of data or - perform lengthy file system operations can often run for tens of seconds or more.</p> - <p>If the functionality of a long-running NIF can be split so that its work can be - achieved through a series of shorter NIF calls, the application can either make that series - of NIF calls from the Erlang level, or it can call a NIF that first performs a chunk of the - work, then invokes the <seealso marker="#enif_schedule_nif">enif_schedule_nif</seealso> - function to schedule another NIF call to perform the next chunk. The final call scheduled - in this manner can then return the overall result. Breaking up a long-running function in - this manner enables the VM to regain control between calls to the NIFs, thereby avoiding - degraded responsiveness, scheduler load balancing problems, and other strange behaviours.</p> - <p>A NIF that cannot be split and cannot execute in a millisecond or less is called a "dirty NIF" - because it performs work that the Erlang runtime cannot handle cleanly. - <em>Note that the dirty NIF functionality described here is experimental</em> and that you have to - enable support for dirty schedulers when building OTP in order to try the functionality out. - Applications that make use of such functions must indicate to the runtime that the functions are - dirty so they can be handled specially. To schedule a dirty NIF for execution, the - appropriate flags value can be set for the NIF in its <seealso marker="#ErlNifFunc">ErlNifFunc</seealso> - entry, or the application can call <seealso marker="#enif_schedule_nif">enif_schedule_nif</seealso>, - passing to it a pointer to the dirty NIF to be executed and indicating with the <c>flags</c> - argument whether it expects the operation to be CPU-bound or I/O-bound.</p> - <note><p>Dirty NIF support is available only when the emulator is configured with dirty - schedulers enabled. This feature is currently disabled by default. To determine whether - the dirty NIF API is available, native code can check to see if the C preprocessor macro - <c>ERL_NIF_DIRTY_SCHEDULER_SUPPORT</c> is defined. Also, if the Erlang runtime was built - without threading support, dirty schedulers are disabled. To check at runtime for the presence - of dirty scheduler threads, code can use the <seealso marker="#enif_system_info"><c> - enif_system_info()</c></seealso> API function.</p></note> + <tag><marker id="lengthy_work"/>Long-running NIFs</tag> + + <item><p> + As mentioned in the <seealso marker="#WARNING">warning</seealso> text at + the beginning of this document, it is of <em>vital importance</em> that a + native function return relatively quickly. It is hard to give an exact + maximum amount of time that a native function is allowed to work, but as a + rule of thumb a well-behaving native function should return to its caller + before a millisecond has passed. This can be achieved using different + approaches. If you have full control over the code to execute in the + native function, the best approach is to divide the work into multiple + chunks of work and call the native function multiple times. In some + cases, however, this might not be possible, e.g. when calling + third-party libraries.</p> + + <p>The + <seealso marker="#enif_consume_timeslice">enif_consume_timeslice()</seealso> + function can be used to inform the runtime system about the length of the + NIF call.
It should typically always be used unless the NIF executes very + quickly.</p> + + <p>If the NIF call is too lengthy, one needs to handle this in one of the + following ways in order to avoid degraded responsiveness, scheduler load + balancing problems, and other strange behaviours:</p> + + <taglist> + <tag>Yielding NIF</tag> + <item> + <p> + If the functionality of a long-running NIF can be split so that + its work can be achieved through a series of shorter NIF calls, + the application can either make that series of NIF calls from the + Erlang level, or it can call a NIF that first performs a chunk of + the work, then invokes the + <seealso marker="#enif_schedule_nif">enif_schedule_nif</seealso> + function to schedule another NIF call to perform the next chunk. + The final call scheduled in this manner can then return the + overall result. Breaking up a long-running function in + this manner enables the VM to regain control between calls to the + NIFs. + </p> + <p> + This approach is always preferred over the other alternatives + described below, both from a performance perspective and from a + system characteristics perspective. + </p> + </item> + + <tag>Threaded NIF</tag> + <item> + <p> + This is accomplished by dispatching the work to another thread + managed by the NIF library, returning from the NIF, and waiting for the + result. The thread can send the result back to the Erlang + process using <seealso marker="#enif_send">enif_send</seealso>. + Information about thread primitives can be found below. + </p> + </item> + + <tag><marker id="dirty_nifs"/>Dirty NIF</tag> + <item> + + <note> + <p> + <em>The dirty NIF functionality described here + is experimental</em>. Dirty NIF support is available only when + the emulator is configured with dirty schedulers enabled. This + feature is currently disabled by default. The Erlang runtime + without SMP support does not support dirty schedulers even when + the dirty scheduler support has been enabled. To check at + runtime for the presence of dirty scheduler threads, code can + use the + <seealso marker="#enif_system_info"><c>enif_system_info()</c></seealso> + API function. + </p> + </note> + + <p> + A NIF that cannot be split and cannot execute in a millisecond or + less is called a "dirty NIF" because it performs work that the + Erlang runtime cannot handle cleanly. Applications that make use + of such functions must indicate to the runtime that the functions + are dirty so they can be handled specially. To schedule a dirty + NIF for execution, the appropriate flags value can be set for the + NIF in its <seealso marker="#ErlNifFunc"><c>ErlNifFunc</c></seealso> + entry, or the application can call + <seealso marker="#enif_schedule_nif"><c>enif_schedule_nif</c></seealso>, + passing to it a pointer to the dirty NIF to be executed and + indicating with the <c>flags</c> argument whether it expects the + operation to be CPU-bound or I/O-bound. A dirty NIF executing + on a dirty scheduler does not have the same duration restriction + as a normal NIF. + </p> + + <p> + While a process is executing a dirty NIF, some operations that + communicate with it may take a very long time to complete. + Suspending or garbage collecting a process executing a dirty + NIF cannot be done until the dirty NIF has returned, so other + processes waiting for such operations to complete might have to + wait for a very long time.
Blocking multi scheduling, i.e., + calling + <seealso marker="erlang#system_flag_multi_scheduling"><c>erlang:system_flag(multi_scheduling, + block)</c></seealso>, might also take a very long time to + complete. This is because all ongoing dirty operations on all + dirty schedulers need to complete before the block + operation can complete. + </p> + + <p> + A lot of operations communicating with a process executing a + dirty NIF can, however, complete while it is executing the + dirty NIF. For example, retrieving information about it via + <c>process_info()</c>, setting its group leader, + registering/unregistering its name, etc. + </p> + + <p> + Termination of a process executing a dirty NIF can only be + completed up to a certain point while it is executing the + dirty NIF. All Erlang resources such as registered names, + ETS tables, etc. will be released. All links and monitors + will be triggered. The actual execution of the NIF will + however <em>not</em> be stopped. The NIF can safely continue + execution, allocate heap memory, etc., but it is of course better + to stop executing as soon as possible. The NIF can check + whether the current process is alive or not using + <seealso marker="#enif_is_current_process_alive"><c>enif_is_current_process_alive</c></seealso>. + Communication using + <seealso marker="#enif_send"><c>enif_send</c></seealso> + and <seealso marker="#enif_port_command"><c>enif_port_command</c></seealso> + will also be dropped when the sending process is not alive. + Deallocation of certain internal resources such as the process + heap and process control block will be delayed until the + dirty NIF has completed. + </p> + + <p>Currently known issues that are planned to be fixed:</p> + <list> + <item> + <p> + Since purging of a module currently might need to garbage + collect a process in order to determine if it has + references to the module, a process executing a dirty + NIF might delay purging for a very long time. Delaying + a purge operation implies delaying <em>all</em> code + loading operations, which might cause severe problems for + the system as a whole.
+ </p> + </item> + </list> + + </item> + </taglist> + </item> </taglist> </section> @@ -508,6 +608,10 @@ typedef struct { CPU-bound, its <c>flags</c> field should be set to <c>ERL_NIF_DIRTY_JOB_CPU_BOUND</c>, or for I/O-bound jobs, <c>ERL_NIF_DIRTY_JOB_IO_BOUND</c>.</p> + <note><p>If one of the + <c>ERL_NIF_DIRTY_JOB_*_BOUND</c> flags is set, and the runtime + system has no support for dirty schedulers, the runtime system + will refuse to load the NIF library.</p></note> </item> <tag><marker id="ErlNifBinary"/>ErlNifBinary</tag> <item> @@ -963,6 +1067,13 @@ typedef enum { <fsummary>Determine if a term is a binary</fsummary> <desc><p>Return true if <c>term</c> is a binary</p></desc> </func> + <func><name><ret>int</ret><nametext>enif_is_current_process_alive(ErlNifEnv* env)</nametext></name> + <fsummary>Determine if the currently executing process is alive or not.</fsummary> + <desc><p>Return true if the currently executing process is alive; otherwise + false.</p> + <p>This function can only be used from a NIF-calling thread, and with an + environment corresponding to the currently executing process.</p></desc> + </func> <func><name><ret>int</ret><nametext>enif_is_empty_list(ErlNifEnv* env, ERL_NIF_TERM term)</nametext></name> <fsummary>Determine if a term is an empty list</fsummary> <desc><p>Return true if <c>term</c> is an empty list.</p></desc> </func> @@ -993,15 +1104,10 @@ typedef enum { <func><name><ret>int</ret><nametext>enif_is_on_dirty_scheduler(ErlNifEnv* env)</nametext></name> <fsummary>Check to see if executing on a dirty scheduler thread</fsummary> <desc> - <p>Check to see if the current NIF is executing on a dirty scheduler thread. If the - emulator is built with threading support, calling <c>enif_is_on_dirty_scheduler</c> - from within a dirty NIF returns true. It returns false when the calling NIF is a regular - NIF running on a normal scheduler thread, or when the emulator is built without threading - support.</p> - <note><p>This function is available only when the emulator is configured with dirty - schedulers enabled. This feature is currently disabled by default. To determine whether - the dirty NIF API is available, native code can check to see if the C preprocessor macro - <c>ERL_NIF_DIRTY_SCHEDULER_SUPPORT</c> is defined.</p></note> + <p>Check to see if the current NIF is executing on a dirty scheduler thread. If + executing on a dirty scheduler thread, true is returned; otherwise false.</p> + <p>This function can only be used from a NIF-calling thread, and with an + environment corresponding to the currently executing process.</p> </desc> </func> <func><name><ret>int</ret><nametext>enif_is_pid(ErlNifEnv* env, ERL_NIF_TERM term)</nametext></name> <fsummary>Determine if a term is a pid</fsummary> <desc><p>Return true if <c>term</c> is a pid.</p></desc> </func> @@ -1015,7 +1121,8 @@ typedef enum { <func><name><ret>int</ret><nametext>enif_is_port_alive(ErlNifEnv* env, ErlNifPort *port_id)</nametext></name> <fsummary>Determine if a local port is alive or not.</fsummary> <desc><p>Return true if <c>port_id</c> is currently alive.</p> - <p>This function can only be used in a from a NIF-calling thread.</p></desc> + <p>This function is only thread-safe when the emulator with SMP support is used.
+ It can only be used in a non-SMP emulator from a NIF-calling thread.</p></desc> </func> <func><name><ret>int</ret><nametext>enif_is_process_alive(ErlNifEnv* env, ErlNifPid *pid)</nametext></name> <fsummary>Determine if a local process is alive or not.</fsummary> @@ -1483,9 +1590,7 @@ enif_map_iterator_destroy(env, &iter); <fsummary>Send a port_command to to_port</fsummary> <desc> <p>This function works the same as <seealso marker="erlang#port_command-2">erlang:port_command/2</seealso> - except that it is always completely asynchronous. This call may return false - if it detects that the port is already dead, otherwise it will return true. - </p> + except that it is always completely asynchronous.</p> <taglist> <tag><c>env</c></tag> <item>The environment of the calling process. May not be NULL.</item> @@ -1504,7 +1609,10 @@ enif_map_iterator_destroy(env, &iter); calls to <c>enif_alloc_env</c>, <c>enif_make_copy</c>, <c>enif_port_command</c> and <c>enif_free_env</c> into one call. This optimization is only useful when a majority of the terms are to be copied from <c>env</c> to the <c>msg_env</c>.</p> - <p>The call may return false if it detects that the command failed for some reason. Otherwise true is returned.</p> + <p>This function returns true if the command was successfully sent; otherwise, + false. The call may return false if it detects that the command failed for some + reason, for example, if <c>*to_port</c> does not refer to a local port, if the currently + executing process, i.e. the sender, is not alive, or if <c>msg</c> is invalid.</p> <p>See also: <seealso marker="#enif_get_local_port"><c>enif_get_local_port</c></seealso>.</p> </desc> </func> @@ -1635,7 +1743,9 @@ enif_map_iterator_destroy(env, &iter); <tag><c>msg</c></tag> <item>The message term to send.</item> </taglist> - <p>Return true on success, or false if <c>*to_pid</c> does not refer to an alive local process.</p> + <p>Return true if the message was successfully sent; otherwise, false. The send + operation will fail if <c>*to_pid</c> does not refer to an alive local process, + or if the currently executing process, i.e. the sender, is not alive.</p> <p>The message environment <c>msg_env</c> with all its terms (including <c>msg</c>) will be invalidated by a successful call to <c>enif_send</c>. The environment should either be freed with <seealso marker="#enif_free_env">enif_free_env</seealso> @@ -1653,6 +1763,15 @@ enif_map_iterator_destroy(env, &iter); <desc><p>Get the byte size of a resource object <c>obj</c> obtained by <seealso marker="#enif_alloc_resource">enif_alloc_resource</seealso>.</p></desc> </func> + + <func><name><ret>int</ret><nametext>enif_snprintf(char *str, size_t size, const char *format, ...)</nametext></name> + <fsummary>Format strings and Erlang terms</fsummary> + <desc> + <p>Similar to <c>snprintf</c>, but the format string also accepts <c>"%T"</c>, which formats Erlang terms. + </p> + </desc> + </func> + <func> <name><ret>void</ret><nametext>enif_system_info(ErlNifSysInfo *sys_info_ptr, size_t size)</nametext></name> <fsummary>Get information about the Erlang runtime system</fsummary> diff --git a/erts/doc/src/erl_tracer.xml b/erts/doc/src/erl_tracer.xml index 2075b962d8..d4c8bbad31 100644 --- a/erts/doc/src/erl_tracer.xml +++ b/erts/doc/src/erl_tracer.xml @@ -67,7 +67,7 @@ <desc> <p>The different trace tags that the tracer will be called with.
Each trace tag is described in greater detail in - <seealso marker="#trace">Module:trace/6</seealso> + <seealso marker="#Module:trace/6">Module:trace/6</seealso> </p> </desc> </datatype> @@ -81,7 +81,7 @@ <datatype> <name name="trace_opts" /> <desc> - <p>The options for the tracee. + <p>The options for the tracee.</p> <taglist> <tag><c>timestamp</c></tag> <item>If not set to <c>undefined</c>, the tracer has been requested to @@ -90,10 +90,9 @@ <item>If not set to <c>true</c>, the tracer has been requested to include the output of a match specification that was run.</item> <tag><c>scheduler_id</c></tag> - <item>Set to a number of the scheduler id is to be included by the tracer. + <item>Set to a number if the scheduler id is to be included by the tracer. Otherwise it is set to <c>undefined</c>.</item> </taglist> - </p> </desc> </datatype> <datatype> @@ -114,30 +113,30 @@ <p>The following functions should be exported from a <c>erl_tracer</c> callback module.</p> <taglist> - <tag><seealso marker="#enabled"><c>Module:enabled/3</c></seealso></tag> + <tag><seealso marker="#Module:enabled/3"><c>Module:enabled/3</c></seealso></tag> <item>Mandatory</item> - <tag><seealso marker="#trace"><c>Module:trace/6</c></seealso></tag> + <tag><seealso marker="#Module:trace/6"><c>Module:trace/6</c></seealso></tag> <item>Mandatory</item> - <tag><seealso marker="#enabled_procs"><c>Module:enabled_procs/3</c></seealso></tag> + <tag><seealso marker="#Module:enabled_procs/3"><c>Module:enabled_procs/3</c></seealso></tag> <item>Optional</item> - <tag><seealso marker="#trace_procs"><c>Module:trace_procs/6</c></seealso></tag> + <tag><seealso marker="#Module:trace_procs/6"><c>Module:trace_procs/6</c></seealso></tag> <item>Optional</item> - <tag><seealso marker="#enabled_ports"><c>Module:enabled_ports/3</c></seealso></tag> + <tag><seealso marker="#Module:enabled_ports/3"><c>Module:enabled_ports/3</c></seealso></tag> <item>Optional</item> - <tag><seealso marker="#trace_ports"><c>Module:trace_ports/6</c></seealso></tag> + <tag><seealso marker="#Module:trace_ports/6"><c>Module:trace_ports/6</c></seealso></tag> <item>Optional</item> - <tag><seealso marker="#enabled_running_ports"><c>Module:enabled_running_ports/3</c></seealso></tag> + <tag><seealso marker="#Module:enabled_running_ports/3"><c>Module:enabled_running_ports/3</c></seealso></tag> <item>Optional</item> - <tag><seealso marker="#trace_running_ports"><c>Module:trace_running_ports/6</c></seealso></tag> + <tag><seealso marker="#Module:trace_running_ports/6"><c>Module:trace_running_ports/6</c></seealso></tag> <item>Optional</item> - <tag><seealso marker="#enabled_running_procs"><c>Module:enabled_running_procs/3</c></seealso></tag> + <tag><seealso marker="#Module:enabled_running_procs/3"><c>Module:enabled_running_procs/3</c></seealso></tag> <item>Optional</item> - <tag><seealso marker="#trace_running_procs"><c>Module:trace_running_procs/6</c></seealso></tag> + <tag><seealso marker="#Module:trace_running_procs/6"><c>Module:trace_running_procs/6</c></seealso></tag> <item>Optional</item> </taglist> </section> - <marker id="enabled"></marker> + <funcs> <func> <name>Module:enabled(TraceTag, TracerState, Tracee) -> Result</name> @@ -155,19 +154,17 @@ overhead associated with tracing. If <c>trace</c> is returned the necessary trace data will be created and the trace call-back of the tracer will be called. If <c>discard</c> is returned, this trace call - will be discarded and no call to trace will be done. 
If - <c>remove</c> is returned, the VM will attempt to remove this tracer - from the tracee, together with any trace flags set on the tracee. + will be discarded and no call to trace will be done. </p> <p><c>trace_status</c> is a special type of <c>TraceTag</c> which is used to check if the tracer should still be active. It is called in multiple scenarios, but most significantly it is used when tracing is started - using this tracer.</p> + using this tracer. If <c>remove</c> is returned when the <c>trace_status</c> + is checked, the tracer will be removed from the tracee.</p> <p>This function may be called multiple times per tracepoint, so it is important that it is both fast and side effect free.</p> </desc> </func> - <marker id="trace"></marker> <func> <name>Module:trace(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -182,7 +179,7 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - the <seealso marker="#enabled">Module:enabled/3</seealso> + the <seealso marker="#Module:enabled/3">Module:enabled/3</seealso> callback returned <c>trace</c>. In it any side effects needed by the tracer should be done. The tracepoint payload is located in the <c>FirstTraceTerm</c> and <c>SecondTraceTerm</c>. The content @@ -213,7 +210,6 @@ </desc> </func> - <marker id="enabled_procs"></marker> <func> <name>Module:enabled_procs(TraceTag, TracerState, Tracee) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -231,7 +227,6 @@ </desc> </func> - <marker id="trace_procs"></marker> <func> <name>Module:trace_procs(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -246,13 +241,12 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - the <seealso marker="#enabled_procs">Module:enabled_procs/3</seealso> + the <seealso marker="#Module:enabled_procs/3">Module:enabled_procs/3</seealso> callback returned <c>trace</c>.</p> <p>If <c>trace_procs/6</c> is not defined <c>trace/6</c> will be called instead.</p> </desc> </func> - <marker id="enabled_ports"></marker> <func> <name>Module:enabled_ports(TraceTag, TracerState, Tracee) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -270,7 +264,6 @@ </desc> </func> - <marker id="trace_ports"></marker> <func> <name>Module:trace_ports(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -285,13 +278,12 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - the <seealso marker="#enabled_ports">Module:enabled_ports/3</seealso> + the <seealso marker="#Module:enabled_ports/3">Module:enabled_ports/3</seealso> callback returned <c>trace</c>.</p> <p>If <c>trace_ports/6</c> is not defined <c>trace/6</c> will be called instead.</p> </desc> </func> - <marker id="enabled_running_procs"></marker> <func> <name>Module:enabled_running_procs(TraceTag, TracerState, Tracee) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -309,7 +301,6 @@ </desc> </func> - <marker id="trace_running_procs"></marker> <func> <name>Module:trace_running_procs(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ 
-324,13 +315,12 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - the <seealso marker="#enabled_running_procs">Module:enabled_running_procs/3</seealso> + the <seealso marker="#Module:enabled_running_procs/3">Module:enabled_running_procs/3</seealso> callback returned <c>trace</c>.</p> <p>If <c>trace_running_procs/6</c> is not defined <c>trace/6</c> will be called instead.</p> </desc> </func> - <marker id="enabled_running_ports"></marker> <func> <name>Module:enabled_running_ports(TraceTag, TracerState, Tracee) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -348,7 +338,6 @@ </desc> </func> - <marker id="trace_running_ports"></marker> <func> <name>Module:trace_running_ports(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -363,13 +352,12 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - the <seealso marker="#enabled_running_ports">Module:enabled_running_ports/3</seealso> + the <seealso marker="#Module:enabled_running_ports/3">Module:enabled_running_ports/3</seealso> callback returned <c>trace</c>.</p> <p>If <c>trace_running_ports/6</c> is not defined <c>trace/6</c> will be called instead.</p> </desc> </func> - <marker id="enabled_call"></marker> <func> <name>Module:enabled_call(TraceTag, TracerState, Tracee) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -387,7 +375,6 @@ </desc> </func> - <marker id="trace_call"></marker> <func> <name>Module:trace_call(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -402,13 +389,12 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - the <seealso marker="#enabled_call">Module:enabled_call/3</seealso> + the <seealso marker="#Module:enabled_call/3">Module:enabled_call/3</seealso> callback returned <c>trace</c>.</p> <p>If <c>trace_call/6</c> is not defined <c>trace/6</c> will be called instead.</p> </desc> </func> - <marker id="enabled_send"></marker> <func> <name>Module:enabled_send(TraceTag, TracerState, Tracee) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -426,7 +412,6 @@ </desc> </func> - <marker id="trace_send"></marker> <func> <name>Module:trace_send(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -441,13 +426,12 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - the <seealso marker="#enabled_send">Module:enabled_send/3</seealso> + the <seealso marker="#Module:enabled_send/3">Module:enabled_send/3</seealso> callback returned <c>trace</c>.</p> <p>If <c>trace_send/6</c> is not defined <c>trace/6</c> will be called instead.</p> </desc> </func> - <marker id="enabled_receive"></marker> <func> <name>Module:enabled_receive(TraceTag, TracerState, Tracee) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -465,7 +449,6 @@ </desc> </func> - <marker id="trace_receive"></marker> <func> <name>Module:trace_receive(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -480,13 +463,12 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - 
the <seealso marker="#enabled_receive">Module:enabled_receive/3</seealso> + the <seealso marker="#Module:enabled_receive/3">Module:enabled_receive/3</seealso> callback returned <c>trace</c>.</p> <p>If <c>trace_receive/6</c> is not defined <c>trace/6</c> will be called instead.</p> </desc> </func> - <marker id="enabled_garbage_collection"></marker> <func> <name>Module:enabled_garbage_collection(TraceTag, TracerState, Tracee) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -504,7 +486,6 @@ </desc> </func> - <marker id="trace_garbage_collection"></marker> <func> <name>Module:trace_garbage_collection(TraceTag, TracerState, Tracee, FirstTraceTerm, SecondTraceTerm, Opts) -> Result</name> <fsummary>Check if a trace event should be generated.</fsummary> @@ -519,7 +500,7 @@ </type> <desc> <p>This callback will be called when a tracepoint is triggered and - the <seealso marker="#enabled_garbage_collection">Module:enabled_garbage_collection/3</seealso> + the <seealso marker="#Module:enabled_garbage_collection/3">Module:enabled_garbage_collection/3</seealso> callback returned <c>trace</c>.</p> <p>If <c>trace_garbage_collection/6</c> is not defined <c>trace/6</c> will be called instead.</p> </desc> @@ -617,7 +598,7 @@ static int upgrade(ErlNifEnv* env, void** priv_data, void** old_priv_data, } /* - * argv[0]: Trace Tag + * argv[0]: TraceTag * argv[1]: TracerState * argv[2]: Tracee */ @@ -626,8 +607,11 @@ static ERL_NIF_TERM enabled(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) ErlNifPid to_pid; if (enif_get_local_pid(env, argv[1], &to_pid)) if (!enif_is_process_alive(env, &to_pid)) - /* tracer is dead so we should remove this tracepoint */ - return enif_make_atom(env, "remove"); + if (enif_is_identical(enif_make_atom(env, "trace_status"), argv[0])) + /* tracer is dead so we should remove this tracepoint */ + return enif_make_atom(env, "remove"); + else + return enif_make_atom(env, "discard"); /* Only generate trace for when tracer != tracee */ if (enif_is_identical(argv[1], argv[2])) @@ -645,7 +629,7 @@ static ERL_NIF_TERM enabled(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) } /* - * argv[0]: Trace Tag, should only be 'send' + * argv[0]: TraceTag, should only be 'send' * argv[1]: TracerState, process to send {argv[2], argv[4]} to * argv[2]: Tracee * argv[3]: Message, ignored diff --git a/erts/doc/src/erlang.xml b/erts/doc/src/erlang.xml index 5af4f0bd66..e0723127f2 100644 --- a/erts/doc/src/erlang.xml +++ b/erts/doc/src/erlang.xml @@ -4322,6 +4322,7 @@ os_prompt% </pre> </desc> </func> + <marker id="process_flag_min_heap_size"/> <func> <name name="process_flag" arity="2" clause_i="3"/> <fsummary>Sets process flag <c>min_heap_size</c> for the calling process.</fsummary> @@ -4340,9 +4341,95 @@ os_prompt% </pre> <p>Returns the old value of the flag.</p> </desc> </func> - <marker id="process_flag_message_queue_data"/> + <marker id="process_flag_max_heap_size"/> <func> <name name="process_flag" arity="2" clause_i="5"/> + <type name="max_heap_size"/> + <fsummary>Sets process flag <c>max_heap_size</c> for the calling process.</fsummary> + <desc> + <p> + This flag sets the maximum heap size for the calling process. + If <c><anno>MaxHeapSize</anno></c> is an integer, the system default + values for <c>kill</c> and <c>error_logger</c> are used. + <taglist> + <tag><c>size</c></tag> + <item> + <p> + The maximum size in words of the process. If set to zero, the + heap size limit is disabled. 
A <c>badarg</c> exception will be thrown if the value is + smaller than + <seealso marker="#process_flag_min_heap_size"><c>min_heap_size</c></seealso>. + The size check is only done when a garbage collection is triggered. + </p> + <p> + <c>size</c> is the entire heap of the process when garbage collection + is triggered; this includes all generational heaps, the process stack, + any <seealso marker="#process_flag_message_queue_data"> + messages that are considered to be part of the heap</seealso> and any + extra memory that the garbage collector needs during collection. + </p> + <p> + <c>size</c> is the same as can be retrieved using + <seealso marker="#process_info_total_heap_size"> + <c>erlang:process_info(Pid, total_heap_size)</c></seealso>, + or by adding <c>heap_block_size</c>, <c>old_heap_block_size</c> + and <c>mbuf_size</c> from <seealso marker="#process_info_garbage_collection_info"> + <c>erlang:process_info(Pid, garbage_collection_info)</c></seealso>. + </p> + </item> + <tag><c>kill</c></tag> + <item> + <p> + When set to <c>true</c>, the runtime system will send an + untrappable exit signal with reason <c>kill</c> to the process + if the maximum heap size is reached. The garbage collection + that triggered the <c>kill</c> will not be completed; instead, the + process will exit as soon as possible. When set to <c>false</c>, + no exit signal will be sent to the process; instead, it will + continue executing. + </p> + <p> + If <c>kill</c> is not defined in the map, + the system default will be used. The system default + is <c>true</c>. It can be changed by either the erl + <seealso marker="erl#+hmaxk">+hmaxk</seealso> option, + or <seealso marker="#system_flag_max_heap_size"><c> + erlang:system_flag(max_heap_size, MaxHeapSize)</c></seealso>. + </p> + </item> + <tag><c>error_logger</c></tag> + <item> + <p> + When set to <c>true</c>, the runtime system will send a + message to the current <seealso marker="kernel:error_logger"><c>error_logger</c></seealso> + containing details about the process when the maximum + heap size is reached. One <c>error_logger</c> report will + be sent each time the limit is reached. + </p> + <p> + If <c>error_logger</c> is not defined in the map, the system + default will be used. The system default is <c>true</c>. + It can be changed by either the erl <seealso marker="erl#+hmaxel">+hmaxel</seealso> + option, or <seealso marker="#system_flag_max_heap_size"><c> + erlang:system_flag(max_heap_size, MaxHeapSize)</c></seealso>. + </p> + </item> + <p> + The heap size of a process is quite hard to predict, especially the + amount of memory that is used during the garbage collection. When + contemplating using this option, it is recommended to first run + it in production with <c>kill</c> set to <c>false</c> and inspect + the <c>error_logger</c> reports to see what the normal peak sizes + of the processes in the system are and then tune the value + accordingly. + </p> + </taglist> + </p> + </desc> + </func> + <marker id="process_flag_message_queue_data"/> + <func> + <name name="process_flag" arity="2" clause_i="6"/> <fsummary>Set process flag <c>message_queue_data</c> for the calling process</fsummary> <type name="message_queue_data"/> <desc> @@ -4371,7 +4458,7 @@ os_prompt% </pre> </taglist> <p> The default <c>message_queue_data</c> process flag is determined - by the <seealso marker="erl#+xmqd"><c>+xmqd</c></seealso> + by the <seealso marker="erl#+hmqd"><c>+hmqd</c></seealso> <c>erl</c> command line argument.
</p> <p> @@ -4392,7 +4479,7 @@ os_prompt% </pre> </desc> </func> <func> - <name name="process_flag" arity="2" clause_i="6"/> + <name name="process_flag" arity="2" clause_i="7"/> <fsummary>Sets process flag <c>priority</c> for the calling process.</fsummary> <type name="priority_level"/> <desc> @@ -4466,7 +4553,7 @@ os_prompt% </pre> </func> <func> - <name name="process_flag" arity="2" clause_i="7"/> + <name name="process_flag" arity="2" clause_i="8"/> <fsummary>Sets process flag <c>save_calls</c> for the calling process.</fsummary> <desc> <p><c><anno>N</anno></c> must be an integer in the interval 0..10000. @@ -4497,7 +4584,7 @@ os_prompt% </pre> </func> <func> - <name name="process_flag" arity="2" clause_i="8"/> + <name name="process_flag" arity="2" clause_i="9"/> <fsummary>Sets process flag <c>sensitive</c> for the calling process.</fsummary> <desc> <p>Sets or clears flag <c>sensitive</c> for the current process. @@ -4551,6 +4638,7 @@ os_prompt% </pre> <type name="process_info_result_item"/> <type name="priority_level"/> <type name="stack_item"/> + <type name="max_heap_size" /> <type name="message_queue_data" /> <desc> <p>Returns a list containing <c><anno>InfoTuple</anno></c>s with @@ -4604,6 +4692,7 @@ os_prompt% </pre> <type name="process_info_result_item"/> <type name="stack_item"/> <type name="priority_level"/> + <type name="max_heap_size" /> <type name="message_queue_data" /> <desc> <p>Returns information about the process identified by @@ -4696,13 +4785,14 @@ os_prompt% </pre> The content of <c><anno>GCInfo</anno></c> can be changed without prior notice.</p> </item> + <marker id="process_info_garbage_collection_info"/> <tag><c>{garbage_collection_info, <anno>GCInfo</anno>}</c></tag> <item> <p><c><anno>GCInfo</anno></c> is a list containing miscellaneous detailed information about garbage collection for this process. The content of <c><anno>GCInfo</anno></c> can be changed without prior notice. - See <seealso marker="#gc_start">gc_start</seealso> in + See <seealso marker="#gc_minor_start">gc_minor_start</seealso> in <seealso marker="#trace/3">erlang:trace/3</seealso> for details about what each item means. </p> @@ -4875,10 +4965,13 @@ os_prompt% </pre> total suspend count on <c><anno>Suspendee</anno></c>, only the parts contributed by <c><anno>Pid</anno></c>.</p> </item> + <marker id="process_info_total_heap_size"/> <tag><c>{total_heap_size, <anno>Size</anno>}</c></tag> <item> <p><c><anno>Size</anno></c> is the total size, in words, of all heap - fragments of the process. This includes the process stack.</p> + fragments of the process. This includes the process stack and + any unreceived messages that are considered to be part of the + heap. 
</p> + </item> <tag><c>{trace, <anno>InternalTraceFlags</anno>}</c></tag> <item> @@ -5567,6 +5660,7 @@ true</pre> <name name="spawn_opt" arity="2"/> <fsummary>Creates a new process with a fun as entry point.</fsummary> <type name="priority_level"/> + <type name="max_heap_size" /> <type name="message_queue_data" /> <type name="spawn_opt_option" /> <desc> @@ -5584,6 +5678,7 @@ true</pre> <name name="spawn_opt" arity="3"/> <fsummary>Creates a new process with a fun as entry point on a given node.</fsummary> <type name="priority_level"/> + <type name="max_heap_size" /> <type name="message_queue_data" /> <type name="spawn_opt_option" /> <desc> @@ -5600,6 +5695,7 @@ true</pre> <name name="spawn_opt" arity="4"/> <fsummary>Creates a new process with a function as entry point.</fsummary> <type name="priority_level"/> + <type name="max_heap_size" /> <type name="message_queue_data" /> <type name="spawn_opt_option" /> <desc> @@ -5703,13 +5799,23 @@ true</pre> fine-tuning an application and to measure the execution time with various <c><anno>VSize</anno></c> values.</p> </item> + <tag><c>{max_heap_size, <anno>Size</anno>}</c></tag> + <item> + <p>Sets the <c>max_heap_size</c> process flag. The default + <c>max_heap_size</c> is determined by the + <seealso marker="erl#+hmax"><c>+hmax</c></seealso> <c>erl</c> + command line argument. For more information, see the + documentation of + <seealso marker="#process_flag_max_heap_size"><c>process_flag(max_heap_size, + <anno>Size</anno>)</c></seealso>.</p> + </item> <tag><c>{message_queue_data, <anno>MQD</anno>}</c></tag> <item> <p>Sets the state of the <c>message_queue_data</c> process flag. <c><anno>MQD</anno></c> should be either <c>off_heap</c>, <c>on_heap</c>, or <c>mixed</c>. The default <c>message_queue_data</c> process flag is determined by the - <seealso marker="erl#+xmqd"><c>+xmqd</c></seealso> <c>erl</c> + <seealso marker="erl#+hmqd"><c>+hmqd</c></seealso> <c>erl</c> command line argument. For more information, see the documentation of <seealso marker="#process_flag_message_queue_data"><c>process_flag(message_queue_data, @@ -5723,6 +5829,7 @@ true</pre> <name name="spawn_opt" arity="5"/> <fsummary>Creates a new process with a function as entry point on a given node.</fsummary> <type name="priority_level"/> + <type name="max_heap_size" /> <type name="message_queue_data" /> <type name="spawn_opt_option" /> <desc> @@ -6504,8 +6611,25 @@ ok </desc> </func> + <marker id="system_flag_max_heap_size"></marker> <func> <name name="system_flag" arity="2" clause_i="8"/> + <type name="max_heap_size"/> + <fsummary>Sets system flag <c>max_heap_size</c></fsummary> + <desc> + <p> + Sets the default maximum heap size settings for processes. + The size is given in words. The new <c>max_heap_size</c> + affects only processes spawned after the change has been made.
+ <c>max_heap_size</c> can be set for individual processes using + <seealso marker="#spawn_opt/4">spawn_opt/N</seealso> or + <seealso marker="#process_flag_message_queue_data">process_flag/2</seealso>.</p> + <p>Returns the old value of the flag.</p> + </desc> + </func> + + <func> + <name name="system_flag" arity="2" clause_i="9"/> <fsummary>Sets system flag <c>multi_scheduling</c>.</fsummary> <desc> <p><marker id="system_flag_multi_scheduling"></marker> @@ -6555,7 +6679,7 @@ ok </func> <func> - <name name="system_flag" arity="2" clause_i="9"/> + <name name="system_flag" arity="2" clause_i="10"/> <fsummary>Sets system flag <c>scheduler_bind_type</c>.</fsummary> <type name="scheduler_bind_type"/> <desc> @@ -6673,7 +6797,7 @@ ok </func> <func> - <name name="system_flag" arity="2" clause_i="10"/> + <name name="system_flag" arity="2" clause_i="11"/> <fsummary>Sets system flag <c>scheduler_wall_time</c>.</fsummary> <desc><p><marker id="system_flag_scheduler_wall_time"></marker> Turns on or off scheduler wall time measurements.</p> @@ -6683,7 +6807,7 @@ ok </func> <func> - <name name="system_flag" arity="2" clause_i="11"/> + <name name="system_flag" arity="2" clause_i="12"/> <fsummary>Sets system flag <c>schedulers_online</c>.</fsummary> <desc> <p><marker id="system_flag_schedulers_online"></marker> @@ -6708,7 +6832,7 @@ ok </func> <func> - <name name="system_flag" arity="2" clause_i="12"/> + <name name="system_flag" arity="2" clause_i="13"/> <fsummary>Sets system flag <c>trace_control_word</c>.</fsummary> <desc> <p>Sets the value of the node trace control word to @@ -6722,7 +6846,7 @@ ok </func> <func> - <name name="system_flag" arity="2" clause_i="12"/> + <name name="system_flag" arity="2" clause_i="14"/> <fsummary>Finalize the Time Offset</fsummary> <desc> <p><marker id="system_flag_time_offset"></marker> @@ -6839,11 +6963,7 @@ ok As from <c>ERTS</c> 5.6.1, the return value is a list of <c>{instance, InstanceNo, InstanceInfo}</c> tuples, where <c>InstanceInfo</c> contains information about - a specific instance of the allocator. As from - <c>ERTS</c> 5.10.4, the returned list when calling - <c>erlang:system_info({allocator, mseg_alloc})</c> also - includes an <c>{erts_mmap, _}</c> tuple as one element - in the list. If <c><anno>Alloc</anno></c> is not a + a specific instance of the allocator. If <c><anno>Alloc</anno></c> is not a recognized allocator, <c>undefined</c> is returned. If <c><anno>Alloc</anno></c> is disabled, <c>false</c> is returned.</p> @@ -6855,7 +6975,13 @@ ok briefly documented.</p> <p>The recognized allocators are listed in <seealso marker="erts:erts_alloc">erts_alloc(3)</seealso>. - After reading the <c>erts_alloc(3)</c> documentation, + Information about super carriers can be obtained from + <c>ERTS</c> 8.0 with <c>{allocator, erts_mmap}</c> or from + <c>ERTS</c> 5.10.4, the returned list when calling with + <c>{allocator, mseg_alloc}</c> also includes an + <c>{erts_mmap, _}</c> tuple as one element in the list.</p> + + <p>After reading the <c>erts_alloc(3)</c> documentation, the returned information more or less speaks for itself, but it can be worth explaining some things. 
Call counts are presented by two @@ -6987,6 +7113,81 @@ ok </func> <func> + <name name="system_info" arity="1" clause_i="27"/> + <name name="system_info" arity="1" clause_i="28"/> + <name name="system_info" arity="1" clause_i="36"/> + <name name="system_info" arity="1" clause_i="37"/> + <name name="system_info" arity="1" clause_i="38"/> + <name name="system_info" arity="1" clause_i="39"/> + <type name="message_queue_data"/> + <type name="max_heap_size"/> + <fsummary>Information about the default process heap settings.</fsummary> + <desc> + <taglist> + <tag><c>fullsweep_after</c></tag> + <item> + <p>Returns <c>{fullsweep_after, integer() >= 0}</c>, which is + the <c>fullsweep_after</c> garbage collection setting used + by default. For more information, see + <c>garbage_collection</c> described in the following.</p> + </item> + <tag><c>garbage_collection</c></tag> + <item> + <p>Returns a list describing the default garbage collection + settings. A process spawned on the local node by a + <c>spawn</c> or <c>spawn_link</c> uses these + garbage collection settings. The default settings can be + changed by using + <seealso marker="#system_flag/2">system_flag/2</seealso>. + <seealso marker="#spawn_opt/4">spawn_opt/4</seealso> + can spawn a process that does not use the default + settings.</p> + </item> + <tag><c>max_heap_size</c></tag> + <item> + <p>Returns <c>{max_heap_size, <anno>MaxHeapSize</anno>}</c>, + where <c><anno>MaxHeapSize</anno></c> is the current + system-wide max heap size settings for spawned processes. + This setting can be set using the <c>erl</c> command line + flags <seealso marker="erl#+hmax"><c>+hmax</c></seealso>, + <seealso marker="erl#+hmaxk"><c>+hmaxk</c></seealso> and + <seealso marker="erl#+hmaxel"><c>+hmaxel</c></seealso>. It can + also be changed at run-time using + <seealso marker="#system_flag_max_heap_size"> + <c>erlang:system_flag(max_heap_size, MaxHeapSize)</c></seealso>. + For more details about the <c>max_heap_size</c> process flag + see <seealso marker="#process_flag_max_heap_size"> + <c>process_flag(max_heap_size, MaxHeapSize)</c></seealso>. + </p> + </item> + <tag><c>min_heap_size</c></tag> + <item> + <p>Returns <c>{min_heap_size, <anno>MinHeapSize</anno>}</c>, + where <c><anno>MinHeapSize</anno></c> is the current + system-wide minimum heap size for spawned processes.</p> + </item> + <tag><marker id="system_info_message_queue_data"><c>message_queue_data</c></marker></tag> + <item> + <p>Returns the default value of the <c>message_queue_data</c> + process flag which is either <c>off_heap</c>, <c>on_heap</c>, or <c>mixed</c>. + This default is set by the <c>erl</c> command line argument + <seealso marker="erl#+hmqd"><c>+hmqd</c></seealso>. 
For more information on the + <c>message_queue_data</c> process flag, see documentation of + <seealso marker="#process_flag_message_queue_data"><c>process_flag(message_queue_data, + MQD)</c></seealso>.</p> + </item> + <tag><c>min_bin_vheap_size</c></tag> + <item> + <p>Returns <c>{min_bin_vheap_size, + <anno>MinBinVHeapSize</anno>}</c>, where + <c><anno>MinBinVHeapSize</anno></c> is the current system-wide + minimum binary virtual heap size for spawned processes.</p> + </item> + </taglist> + </desc> + </func> + + <func> <name name="system_info" arity="1" clause_i="6"/> <name name="system_info" arity="1" clause_i="7"/> <name name="system_info" arity="1" clause_i="8"/> @@ -7006,8 +7207,6 @@ ok <name name="system_info" arity="1" clause_i="24"/> <name name="system_info" arity="1" clause_i="25"/> <name name="system_info" arity="1" clause_i="26"/> - <name name="system_info" arity="1" clause_i="27"/> - <name name="system_info" arity="1" clause_i="28"/> <name name="system_info" arity="1" clause_i="29"/> <name name="system_info" arity="1" clause_i="30"/> <name name="system_info" arity="1" clause_i="31"/> @@ -7015,10 +7214,6 @@ ok <name name="system_info" arity="1" clause_i="33"/> <name name="system_info" arity="1" clause_i="34"/> <name name="system_info" arity="1" clause_i="35"/> - <name name="system_info" arity="1" clause_i="36"/> - <name name="system_info" arity="1" clause_i="37"/> - <name name="system_info" arity="1" clause_i="38"/> - <name name="system_info" arity="1" clause_i="39"/> <name name="system_info" arity="1" clause_i="40"/> <name name="system_info" arity="1" clause_i="41"/> <name name="system_info" arity="1" clause_i="42"/> @@ -7048,6 +7243,7 @@ ok <name name="system_info" arity="1" clause_i="66"/> <name name="system_info" arity="1" clause_i="67"/> <name name="system_info" arity="1" clause_i="68"/> + <name name="system_info" arity="1" clause_i="69"/> <fsummary>Information about the system.</fsummary> <desc> <p>Returns various information about the current system @@ -7284,25 +7480,6 @@ ok <c>ERL_MAX_ETS_TABLES</c> before starting the Erlang runtime system.</p> </item> - <tag><c>fullsweep_after</c></tag> - <item> - <p>Returns <c>{fullsweep_after, integer() >= 0}</c>, which is - the <c>fullsweep_after</c> garbage collection setting used - by default. For more information, see - <c>garbage_collection</c> described in the following.</p> - </item> - <tag><c>garbage_collection</c></tag> - <item> - <p>Returns a list describing the default garbage collection - settings. A process spawned on the local node by a - <c>spawn</c> or <c>spawn_link</c> uses these - garbage collection settings. The default settings can be - changed by using - <seealso marker="#system_flag/2">system_flag/2</seealso>. - <seealso marker="#spawn_opt/4">spawn_opt/4</seealso> - can spawn a process that does not use the default - settings.</p> - </item> <tag><c>heap_sizes</c></tag> <item> <p>Returns a list of integers representing valid heap sizes @@ -7377,29 +7554,6 @@ ok <item> <p>Returns a string containing the Erlang machine name.</p> </item> - <tag><c>min_heap_size</c></tag> - <item> - <p>Returns <c>{min_heap_size, <anno>MinHeapSize</anno>}</c>, - where <c><anno>MinHeapSize</anno></c> is the current - system-wide minimum heap size for spawned processes.</p> - </item> - <tag><marker id="system_info_message_queue_data"><c>message_queue_data</c></marker></tag> - <item> - <p>Returns the default value of the <c>message_queue_data</c> - process flag which is either <c>off_heap</c>, <c>on_heap</c>, or <c>mixed</c>. 
- This default is set by the <c>erl</c> command line argument - <seealso marker="erl#+xmqd"><c>+xmqd</c></seealso>. For more information on the - <c>message_queue_data</c> process flag, see documentation of - <seealso marker="#process_flag_message_queue_data"><c>process_flag(message_queue_data, - MQD)</c></seealso>.</p> - </item> - <tag><c>min_bin_vheap_size</c></tag> - <item> - <p>Returns <c>{min_bin_vheap_size, - <anno>MinBinVHeapSize</anno>}</c>, where - <c><anno>MinBinVHeapSize</anno></c> is the current system-wide - minimum binary virtual heap size for spawned processes.</p> - </item> <tag><c>modified_timing_level</c></tag> <item> <p>Returns the modified timing-level (an integer) if @@ -7966,7 +8120,7 @@ ok <c>stack_size</c>, <c>mbuf_size</c>, <c>old_heap_size</c>, and <c>old_heap_block_size</c>. These tuples are explained in the description of trace message - <seealso marker="#gc_start">gc_start</seealso> (see + <seealso marker="#gc_minor_start">gc_minor_start</seealso> (see <seealso marker="#trace/3">erlang:trace/3</seealso>). New tuples can be added, and the order of the tuples in the <c>Info</c> list can be changed at any time without @@ -8024,12 +8178,13 @@ ok <c>GcPid</c> and <c>Info</c> are the same as for <c>long_gc</c> earlier, except that the tuple tagged with <c>timeout</c> is not present.</p> - <p>As of <c>ERTS</c> 5.6, the monitor message is sent - if the sum of the sizes of all memory blocks allocated - for all heap generations is equal to or higher than <c>Size</c>. - Previously the monitor message was sent if the memory block - allocated for the youngest generation was equal to or higher - than <c>Size</c>.</p> + <p>The monitor message is sent if the sum of the sizes of + all memory blocks allocated for all heap generations after + a garbage collection is equal to or higher than <c>Size</c>.</p> + <p>When a process is killed by <seealso marker="#process_flag_max_heap_size"> + <c>max_heap_size</c></seealso>, it is killed before the + garbage collection is complete and thus no large heap message + will be sent.</p> </item> <tag><c>busy_port</c></tag> <item> @@ -8556,7 +8711,9 @@ timestamp() -> <tag><c>garbage_collection</c></tag> <item> <p>Traces garbage collections of processes.</p> - <p>Message tags: <c><seealso marker="#trace_3_trace_messages_gc_start">gc_start</seealso></c> and <c><seealso marker="#trace_3_trace_messages_gc_end">gc_end</seealso></c>.</p> + <p>Message tags: <c><seealso marker="#trace_3_trace_messages_gc_minor_start">gc_minor_start</seealso></c>, + <c><seealso marker="#trace_3_trace_messages_gc_max_heap_size">gc_max_heap_size</seealso></c> and + <c><seealso marker="#trace_3_trace_messages_gc_minor_end">gc_minor_end</seealso></c>.</p> </item> <tag><c>timestamp</c></tag> <item> @@ -8637,8 +8794,8 @@ timestamp() -> <p>Specifies that a tracer module should be called instead of sending a trace message. The tracer module can then ignore or change the trace message. For more details - on how to write a tracer module see <seealso marker="erl_tracer"> - erl_tracer</seealso> + on how to write a tracer module see + <seealso marker="erts:erl_tracer"><c>erl_tracer</c></seealso> </p> </item> </taglist> @@ -8880,12 +9037,12 @@ timestamp() -> </p> </item> <tag> - <marker id="trace_3_trace_messages_gc_start"></marker> - <c>{trace, Pid, gc_start, Info}</c> + <marker id="trace_3_trace_messages_gc_minor_start"></marker> + <c>{trace, Pid, gc_minor_start, Info}</c> </tag> <item> - <marker id="gc_start"></marker> - <p>Sent when garbage collection is about to be started. 
+ <marker id="gc_minor_start"></marker> + <p>Sent when a young garbage collection is about to be started. <c>Info</c> is a list of two-element tuples, where the first element is a key, and the second is the value. Do not depend on any order of the tuples. @@ -8925,15 +9082,45 @@ timestamp() -> <p>All sizes are in words.</p> </item> <tag> - <marker id="trace_3_trace_messages_gc_end"></marker> - <c>{trace, Pid, gc_end, Info}</c> + <marker id="trace_3_trace_messages_gc_max_heap_size"></marker> + <c>{trace, Pid, gc_max_heap_size, Info}</c> </tag> <item> - <p>Sent when garbage collection is finished. <c>Info</c> - contains the same kind of list as in message <c>gc_start</c>, + <p> + Sent when the <seealso marker="#process_flag_max_heap_size"><c>max_heap_size</c></seealso> + is reached during garbage collection. <c>Info</c> contains the + same kind of list as in message <c>gc_minor_start</c>, + but the sizes reflect the sizes that triggered max_heap_size to + be reached. + </p> + </item> + <tag> + <marker id="trace_3_trace_messages_gc_minor_end"></marker> + <c>{trace, Pid, gc_minor_end, Info}</c> + </tag> + <item> + <p>Sent when young garbage collection is finished. <c>Info</c> + contains the same kind of list as in message <c>gc_minor_start</c>, + but the sizes reflect the new sizes after garbage collection.</p> + </item> + <tag> + <marker id="trace_3_trace_messages_gc_major_start"></marker> + <c>{trace, Pid, gc_major_start, Info}</c> + </tag> + <item> + <p>Sent when fullsweep garbage collection is about to be started. <c>Info</c> + contains the same kind of list as in message <c>gc_minor_start</c>.</p> + </item> + <tag> + <marker id="trace_3_trace_messages_gc_major_end"></marker> + <c>{trace, Pid, gc_major_end, Info}</c> + </tag> + <item> + <p>Sent when fullsweep garbage collection is finished. <c>Info</c> + contains the same kind of list as in message <c>gc_minor_start</c> + but the sizes reflect the new sizes after a fullsweep garbage collection.</p> + </item> </taglist> <p>If the tracing process/port dies or the tracer module returns <c>remove</c>, the flags are silently removed.</p> @@ -8989,7 +9176,7 @@ <c>erlang:trace_delivered(<anno>Tracee</anno>)</c> resides on. The special <c><anno>Tracee</anno></c> atom <c>all</c> denotes all processes that currently are traced in the node.</p> - <p>When used together with an <seealso marker="#erl_tracer"> + <p>When used together with an <seealso marker="erts:erl_tracer"> Tracer Module</seealso> any message sent in the trace callback is guaranteed to have reached its recipient before the <c>trace_delivered</c> message is sent.</p>
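The erl_nif.xml changes above describe the yielding-NIF approach only in prose. The following is a minimal C sketch of what such a NIF could look like, under stated assumptions: the module name "example", the function name "count", and the chunk size are invented for illustration and are not part of the diff; the Erlang side is assumed to call example:count(N, 0).

#include <erl_nif.h>

/* Hypothetical yielding NIF: counts from a saved position up to N in
 * bounded chunks. When enif_consume_timeslice() reports that the
 * timeslice is used up, the NIF reschedules itself with
 * enif_schedule_nif() instead of looping on, so the VM regains control
 * between the chunks. */
#define CHUNK 4096

static ERL_NIF_TERM count_nif(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    unsigned long n, i;

    if (argc != 2 ||
        !enif_get_ulong(env, argv[0], &n) ||   /* target N */
        !enif_get_ulong(env, argv[1], &i))     /* progress so far */
        return enif_make_badarg(env);

    while (i < n) {
        unsigned long end = (i + CHUNK < n) ? i + CHUNK : n;
        for (; i < end; i++)
            ;  /* one chunk of "work" */

        /* Report a rough estimate of the fraction of the timeslice the
         * chunk used; a nonzero return means the slice is exhausted and
         * we should yield by scheduling a continuation call. */
        if (enif_consume_timeslice(env, 5) && i < n) {
            ERL_NIF_TERM newargv[2];
            newargv[0] = argv[0];
            newargv[1] = enif_make_ulong(env, i);
            return enif_schedule_nif(env, "count", 0, count_nif, 2, newargv);
        }
    }
    return enif_make_ulong(env, i);
}

static ErlNifFunc nif_funcs[] = {
    {"count", 2, count_nif}
};

ERL_NIF_INIT(example, nif_funcs, NULL, NULL, NULL, NULL)

From the Erlang side this still looks like a single example:count(N, 0) call; the rescheduling is invisible to the caller apart from the improved scheduler fairness the documentation describes.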
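The dirty-NIF text above mentions that the flags field of an ErlNifFunc entry selects a dirty scheduler. A hedged sketch of how a CPU-bound dirty NIF could be declared is shown below; the module and function names are invented, and the ERL_NIF_DIRTY_SCHEDULER_SUPPORT guard reflects the experimental status noted in the diff (a runtime without dirty-scheduler support refuses to load a library that sets these flags).

#include <erl_nif.h>

static ERL_NIF_TERM encrypt_block(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    /* ... long-running, non-splittable work would go here ... */
    return enif_make_atom(env, "ok");
}

/* name, arity, C implementation, flags */
static ErlNifFunc nif_funcs[] = {
#ifdef ERL_NIF_DIRTY_SCHEDULER_SUPPORT
    {"encrypt_block", 1, encrypt_block, ERL_NIF_DIRTY_JOB_CPU_BOUND}
#else
    /* Fall back to a normal NIF so the library still loads; for genuinely
     * long work the yielding or threaded approach should be used instead. */
    {"encrypt_block", 1, encrypt_block, 0}
#endif
};

ERL_NIF_INIT(example_crypto, nif_funcs, NULL, NULL, NULL, NULL)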