Logging

Erlang/OTP 21.0 provides a new standard API for logging through Logger, which is part of the Kernel application. Logger consists of the API for issuing log events, and a customizable backend where log handlers, filters and formatters can be plugged in.

By default, the Kernel application installs one log handler at system start. This handler is named default. It receives and processes standard log events produced by the Erlang runtime system, standard behaviours and different Erlang/OTP applications. The log events are by default written to the terminal.

You can also configure the system so that the default handler prints log events to a single file, or to a set of wrap logs via disk_log.

By configuration, you can also modify or disable the default handler, replace it with a custom handler, and install additional handlers.

Overview

A log event consists of a log level, the message to be logged, and metadata.

The Logger backend forwards log events from the API, first through a set of global filters, then through a set of handler filters for each log handler.

Each filter set consists of a log level check, followed by zero or more filter functions.

The following figure shows a conceptual overview of Logger. The figure shows two log handlers, but any number of handlers can be installed.

Conceptual Overview

Log levels are expressed as atoms. Internally in Logger, the atoms are mapped to integer values, and a log event passes the log level check if the integer value of its log level is less than or equal to the currently configured log level. That is, the check passes if the event is equally or more severe than the configured level. For example, if the configured level is notice (5), an error event (3) passes the check, while a debug event (7) does not. See section Log Level for a listing and description of all log levels.

The global log level can be overridden by a log level configured per module. This is to, for instance, allow more verbose logging from a specific part of the system.
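For example, verbose logging can be enabled for a single module at runtime with logger:set_module_level/2 (the module name here is hypothetical):

logger:set_module_level(my_noisy_module, debug).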

Filter functions can be used for more sophisticated filtering than the log level check provides. A filter function can stop or pass a log event, based on any of the event's contents. It can also modify all parts of the log event. See section Filters for more details.

If a log event passes through all global filters and all handler filters for a specific handler, Logger forwards the event to the handler callback. The handler formats and prints the event to its destination. See section Handlers for more details.

Everything up to and including the call to the handler callbacks is executed on the client process, that is, the process where the log event was issued. It is up to the handler implementation if other processes are involved or not.

The handlers are called in sequence, and the order is not defined.

Logger API

The API for logging consists of a set of macros, and a set of functions of the form logger:Level/1,2,3, which are all shortcuts for logger:log(Level,Arg1[,Arg2[,Arg3]]).

The difference between using the macros and the exported functions is that the macros add location (originator) information to the metadata, and perform lazy evaluation by wrapping the Logger call in a case statement, so it is only evaluated if the log level of the event passes the global log level check.
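For illustration, here is a minimal sketch of the two calling styles (module name and message are made up for this example); the ?LOG_* macros require including Kernel's logger header:

-module(api_example).
-include_lib("kernel/include/logger.hrl").
-export([run/1]).

run(Filename) ->
    %% Function form: no location metadata, and the arguments are
    %% always evaluated.
    logger:error("The file does not exist: ~ts",[Filename]),
    %% Macro form: adds location metadata, and the call is skipped
    %% entirely if the error level does not pass the log level check.
    ?LOG_ERROR("The file does not exist: ~ts",[Filename]).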

Log Level

The log level indicates the severity of an event. In accordance with the Syslog protocol, RFC-5424, eight log levels can be specified. The following table lists all possible log levels by name (atom), integer value, and description:

Level      Integer  Description
emergency  0        system is unusable
alert      1        action must be taken immediately
critical   2        critical conditions
error      3        error conditions
warning    4        warning conditions
notice     5        normal but significant conditions
info       6        informational messages
debug      7        debug-level messages

Table: Log Levels

Notice that the integer value is only used internally in Logger. In the API, you must always use the atom. To compare the severity of two log levels, use logger:compare_levels/2.
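For example, since error is more severe than warning, comparing them should yield gt:

1> logger:compare_levels(error, warning).
gt
2> logger:compare_levels(debug, notice).
lt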

Log Message

The log message contains the information to be logged. The message can consist of a format string and arguments (given as two separate parameters in the Logger API), a string or a report. The latter, which is either a map or a key-value list, can be accompanied by a report callback specified in the log event's metadata. The report callback is a convenience function that the formatter can use to convert the report to a format string and arguments. The formatter can also use its own conversion function, if no callback is provided, or if a customized formatting is desired.

Example, format string and arguments:

logger:error("The file does not exist: ~ts",[Filename])

Example, string:

logger:notice("Something strange happened!")

Example, report, and metadata with report callback:

logger:debug(#{got => connection_request, id => Id, state => State}, #{report_cb => fun(R) -> {"~p",[R]} end})

The log message can also be provided through a fun for lazy evaluation. The fun is only evaluated if the global log level check passes, and is therefore recommended if it is expensive to generate the message. The lazy fun must return a string, a report, or a tuple with format string and arguments.
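For example, a sketch where collect_info/1 is a hypothetical, expensive helper; it is only called if the event passes the global log level check:

logger:debug(fun(Arg) -> {"expensive info: ~p",[collect_info(Arg)]} end, Arg)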

Metadata

Metadata contains additional data associated with a log message. Logger inserts some metadata fields by default, and the client can add custom metadata in two different ways:

Set process metadata

Process metadata is set and updated with logger:set_process_metadata/1 and logger:update_process_metadata/1, respectively. This metadata applies to the process on which these calls are made, and Logger adds the metadata to all log events issued on that process.
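For example (the metadata keys here are hypothetical):

logger:set_process_metadata(#{user_id => UserId}),
logger:update_process_metadata(#{request_id => ReqId})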

Add metadata to a specific log event

Metadata associated with one specific log event is given as the last parameter to the log macro or Logger API function when the event is issued. For example:

?LOG_ERROR("Connection closed",#{context => server})

See the description of the logger:metadata() type for information about which default keys Logger inserts, and how the different metadata maps are merged.

Filters

Filters can be global, or attached to a specific handler. Logger calls the global filters first, and if they all pass, it calls the handler filters for each handler. Logger calls the handler callback only if all filters attached to the handler in question also pass.

A filter is defined as:

{FilterFun, Extra}

where FilterFun is a function of arity 2, and Extra is any term. When applying the filter, Logger calls the function with the log event as the first argument, and the value of Extra as the second argument. See logger:filter() for type definitions.

The filter function can return stop, ignore or the (possibly modified) log event.

If stop is returned, the log event is immediately discarded. If the filter is global, no handler filters or callbacks are called. If it is a handler filter, the corresponding handler callback is not called, but the log event is forwarded to filters attached to the next handler, if any.

If the log event is returned, the next filter function is called with the returned value as the first argument. That is, if a filter function modifies the log event, the next filter function receives the modified event. The value returned from the last filter function is the value that the handler callback receives.

If the filter function returns ignore, it means that it did not recognize the log event, and thus leaves it to other filters to decide the event's destiny.

The configuration option filter_default specifies the behaviour if all filter functions return ignore, or if no filters exist. filter_default is by default set to log, meaning that if all existing filters ignore a log event, Logger forwards the event to the handler callback. If filter_default is set to stop, Logger discards such events.

Global filters are added with logger:add_logger_filter/2 and removed with logger:remove_logger_filter/1. They can also be added at system start via the Kernel configuration parameter logger.
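For example, the following sketch adds a global filter that stops all events carrying a (hypothetical) noisy metadata key, and ignores all other events:

logger:add_logger_filter(stop_noisy,
    {fun(#{meta := #{noisy := true}}, _Extra) -> stop;
        (_LogEvent, _Extra) -> ignore
     end,
     undefined}).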

Handler filters are added with logger:add_handler_filter/3 and removed with logger:remove_handler_filter/2. They can also be specified directly in the configuration when adding a handler with logger:add_handler/3 or via the Kernel configuration parameter logger.

To see which filters are currently installed in the system, use logger:i/0, or logger:get_logger_config/0 and logger:get_handler_config/1. Filters are listed in the order they are applied, that is, the first filter in the list is applied first, and so on.

For convenience, the following built-in filters exist:

logger_filters:domain/2

Provides a way of filtering log events based on a domain field in Metadata.

logger_filters:level/2

Provides a way of filtering log events based on the log level.

logger_filters:progress/2

Stops or allows progress reports from supervisor and application_controller.

logger_filters:remote_gl/2

Stops or allows log events originating from a process that has its group leader on a remote node.
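For example, a sketch that stops progress reports on the default handler by attaching the built-in progress filter:

logger:add_handler_filter(default, stop_progress,
    {fun logger_filters:progress/2, stop}).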

Handlers

A handler is defined as a module exporting at least the following function:

log(LogEvent, Config) -> void()

This function is called when a log event has passed through all global filters, and all handler filters attached to the handler in question. The function call is executed on the client process, and it is up to the handler implementation if other processes are involved or not.

Logger allows adding multiple instances of a handler callback. That is, if a callback module implementation allows it, you can add multiple handler instances using the same callback module. The different instances are identified by unique handler identities.

In addition to the mandatory callback function log/2, a handler module can export the optional callback functions adding_handler/1, changing_config/2 and removing_handler/1. See section Handler Callback Functions in the logger(3) manual for more information about these functions.

The following built-in handlers exist:

logger_std_h

This is the default handler used by OTP. Multiple instances can be started, and each instance will write log events to a given destination, terminal or file.

logger_disk_log_h

This handler behaves much like logger_std_h, except it uses disk_log as its destination.

error_logger

This handler is provided for backwards compatibility only. It is not started by default, but will be automatically started the first time an error_logger event handler is added with error_logger:add_report_handler/1,2.

The old error_logger event handlers in STDLIB and SASL still exist, but they are not added by Erlang/OTP 21.0 or later.

Formatters

A formatter can be used by the handler implementation to do the final formatting of a log event, before printing to the handler's destination. The handler callback receives the formatter information as part of the handler configuration, which is passed as the second argument to HModule:log/2.

The formatter information consists of a formatter module, FModule, and its configuration, FConfig. FModule must export the following function, which can be called by the handler:

format(LogEvent,FConfig)
	-> FormattedLogEntry

The formatter information for a handler is set as a part of its configuration when the handler is added. It can also be changed during runtime with logger:set_handler_config(HandlerId,formatter,{FModule,FConfig}), which overwrites the current formatter information, or with logger:update_formatter_config/2,3, which only modifies the formatter configuration.
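For example, the default handler can be switched to single line output (single_line is a logger_formatter option):

logger:set_handler_config(default, formatter,
    {logger_formatter, #{single_line => true}}).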

If the formatter module exports the optional callback function check_config(FConfig), Logger calls this function when the formatter information is set or modified, to verify the validity of the formatter configuration.

If no formatter information is specified for a handler, Logger uses logger_formatter(3) as default.

Configuration

Logger can be configured either when the system starts through configuration parameters, or at run-time by using the logger(3) API. The recommended approach is to do the initial configuration in the sys.config file and then use the API when some configuration has to be changed at runtime, such as the log level.

Kernel Configuration Parameters

Logger is best configured by using the configuration parameters of Kernel. There are four possible configuration parameters: logger, logger_level, logger_sasl_compatible and logger_progress_reports. logger_level, logger_sasl_compatible and logger_progress_reports are described in the Kernel Configuration, while logger is described below.

logger

The application configuration parameter logger is used to configure three different Logger aspects: handlers, logger filters, and module levels. The configuration is a list containing tagged tuples that look like this:

DisableHandler = {handler,default,undefined}

Disable the default handler. This allows another application to add its own default handler. See logger:add_handlers/1 for more details.

Only one entry of this option is allowed.

AddHandler = {handler,HandlerId,Module,HandlerConfig}

Add a handler as if logger:add_handler(HandlerId,Module,HandlerConfig) is called.

It is allowed to have multiple entries of this option.

Filters = {filters, FilterDefault, [Filter]}
FilterDefault = log | stop
Filter = {FilterId, {FilterFun, FilterConfig}}

Add the specified logger filters.

Only one entry of this option is allowed.

ModuleLevel = {module_level, Level, [Module]}

This option configures the log level for the listed modules.

It is allowed to have multiple entries of this option.

Examples:

Output logs into the file "log/erlang.log"

[{kernel,
  [{logger,
    [{handler, default, logger_std_h,
      #{logger_std_h => #{type => {file,"log/erlang.log"}}}}]}]}].

Output logs in single line format

[{kernel,
  [{logger,
    [{handler, default, logger_std_h,
      #{formatter => {logger_formatter, #{single_line => true}}}}]}]}].

Add the pid to each log event

[{kernel,
  [{logger,
    [{handler, default, logger_std_h,
      #{formatter => {logger_formatter,
                      #{template => [time," ",pid," ",msg,"\n"]}}}}]}]}].

Use a different file for debug logging

[{kernel,
  [{logger,
    [{handler, default, logger_std_h,
      #{level => error,
        logger_std_h => #{type => {file, "log/erlang.log"}}}},
     {handler, info, logger_std_h,
      #{level => debug,
        logger_std_h => #{type => {file, "log/debug.log"}}}}
    ]}]}].

Global Logger Configuration

level = logger:level()

Specifies the global log level to log.

See section Log Level for a listing and description of possible log levels.

The initial value of this option is set by the Kernel configuration parameter logger_level. It can be changed during runtime with logger:set_logger_config(level,NewLevel).

filters = [{logger:filter_id(), logger:filter()}]

Global filters are added and removed with logger:add_logger_filter/2 and logger:remove_logger_filter/1, respectively.

See section Filters for more information.

Default is [], that is, no filters exist.

filter_default = log | stop

Specifies what to do with an event if all filters return ignore, or if no filters exist.

See section Filters for more information about how this option is used.

Default is log.

Handler Configuration

level = logger:level()

Specifies the log level which the handler logs.

See section Log Level for a listing and description of possible log levels.

The log level can be specified when adding the handler, or changed during runtime with, for instance, logger:set_handler_config/3.

Default is info.

filters = [{logger:filter_id(), logger:filter()}]

Handler filters can be specified when adding the handler, or added or removed during runtime with logger:add_handler_filter/3 and logger:remove_handler_filter/2, respectively.

See Filters for more information.

Default is [], that is, no filters exist.

filter_default = log | stop

Specifies what to do with an event if all filters return ignore, or if no filters exist.

See section Filters for more information about how this option is used.

Default is log.

formatter = {module(), logger:formatter_config()}

The formatter which the handler can use for converting the log event term to a printable string.

See Formatters for more information.

Default is {logger_formatter,DefaultFormatterConfig}, see the logger_formatter(3) manual for information about this formatter and its default configuration.

HandlerConfig, atom() = term()

Any keys not listed above are considered to be handler specific configuration. The configuration of the Kernel handlers can be found in the logger_std_h(3) and logger_disk_log_h(3) manual pages.

Notice that level and filters are obeyed by Logger itself before forwarding the log events to each handler, while formatter and all handler-specific options are left to the handler implementation.

All of Logger's built-in handlers call the given formatter before printing.

Backwards Compatibility with error_logger

Logger provides backwards compatibility with error_logger in the following ways:

API for Logging

The error_logger API still exists, but should only be used by legacy code. It will be removed in a later release.

Calls to error_logger:error_report/1,2, error_logger:error_msg/1,2, and corresponding functions for warning and info messages, are all forwarded to Logger as calls to logger:log(Level,Report,Metadata).

Level = error | warning | info and is taken from the function name. Report contains the actual log message, and Metadata contains additional information which can be used for creating backwards compatible events for legacy error_logger event handlers, see section Legacy Event Handlers.

Output Format

To get log events in the same format as produced by error_logger_tty_h and error_logger_file_h, use the default formatter, logger_formatter, with configuration parameter legacy_header => true. This is also the default.
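For example, a sketch that enables the legacy header on the default handler's formatter:

logger:update_formatter_config(default, #{legacy_header => true}).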

Default Format of Log Events from OTP

By default, all log events originating from within OTP, except the former so-called "SASL reports", look the same as before.

SASL Reports

By SASL reports we mean supervisor reports, crash reports and progress reports.

In earlier releases, these reports were only logged when the SASL application was running, and they were printed through specific event handlers named sasl_report_tty_h and sasl_report_file_h.

The destination of these log events was configured by SASL configuration parameters.

Due to the specific event handlers, the output format slightly differed from other log events.

As of Erlang/OTP 21.0, the concept of SASL reports is removed, meaning that the default behaviour is as follows:

Supervisor reports, crash reports, and progress reports are no longer connected to the SASL application. Supervisor reports and crash reports are logged by default. Progress reports are not logged by default, but can be enabled with the Kernel configuration parameter logger_progress_reports. The output format is the same for all log events.

If the old behaviour is preferred, the Kernel configuration parameter logger_sasl_compatible can be set to true. The SASL configuration parameters can then be used as before, and the SASL reports will only be printed if the SASL application is running, through a second log handler named sasl.

All SASL reports have a metadata field domain => [beam,erlang,otp,sasl], which can be used, for example, by filters to stop or allow the log events.

See the SASL User's Guide for more information about the old SASL error logging functionality.

Legacy Event Handlers

To use event handlers written for error_logger, just add your event handler with

error_logger:add_report_handler/1,2.

This automatically starts the error_logger event manager, and adds error_logger as a handler to Logger, with configuration

#{level => info, filter_default => log, filters => []}.

Notice that this handler will ignore events that do not originate from the error_logger API, or from within OTP. This means that if your code uses the Logger API for logging, then your log events will be discarded by this handler.

Also notice that error_logger is not overload protected.

Error Handling

Log data is expected to be either a format string and arguments, a string (unicode:chardata()), or a report (map or key-value list) which can be converted to a format string and arguments by the handler. If a report is given, a default report callback can be included in the log event's metadata. The handler can use this callback for converting the report to a format string and arguments. If the format obtained by the provided callback is not desired, or if there is no provided callback, the handler must do a custom conversion.
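For example, a sketch of a report callback similar to the one in section Log Message (the report keys are made up):

ReportCb = fun(#{got := What, id := Id}) -> {"got ~p on connection ~p",[What,Id]};
              (Report) -> {"~p",[Report]}
           end,
logger:info(#{got => ping, id => 42}, #{report_cb => ReportCb}).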

Logger does, to a certain extent, check its input data before forwarding a log event to the handlers, but it does not evaluate conversion funs or check the validity of format strings and arguments. This means that any filter or handler must be careful when formatting the data of a log event, making sure that it does not crash due to bad input data or faulty callbacks.

If a filter or handler still crashes, Logger will remove the filter or handler in question from the configuration, and print a short error message to the terminal. A debug event containing the crash reason and other details is also issued, and can be seen if a handler logging debug events is installed.

Example: add a handler to log debug events to file

When starting an Erlang node, the default behaviour is that all log events with level info and above are logged to the terminal. In order to also log debug events, you can either change the global log level to debug or add a separate handler to take care of this. In this example we will add a new handler which prints the debug events to a separate file.

First, we add an instance of logger_std_h with type {file,File}, and we set the handler's level to debug:

1> Config = #{level => debug, logger_std_h => #{type => {file,"./debug.log"}}}.
#{logger_std_h => #{type => {file,"./debug.log"}},
  level => debug}
2> logger:add_handler(debug_handler,logger_std_h,Config).
ok

By default, the handler receives all events (filter_default=log, see section Filters for more details), so we need to add a filter to stop all non-debug events. The built-in filter logger_filters:level/2 is used for this:

3> logger:add_handler_filter(debug_handler,stop_non_debug,
                             {fun logger_filters:level/2,{stop,neq,debug}}).
ok

And finally, we need to make sure that Logger itself allows debug events. This can either be done by setting the global log level:

4> logger:set_logger_config(level,debug).
ok

Or by allowing debug events from one or a few modules only:

5> logger:set_module_level(mymodule,debug).
ok

Example: implement a handler

The only requirement that a handler MUST fulfill is to export the following function:

log(logger:log_event(),logger:config()) -> ok

It can optionally also implement the following callbacks:

adding_handler(logger:config()) -> {ok,logger:config()} | {error,term()}
removing_handler(logger:config()) -> ok
changing_config(logger:config(),logger:config()) -> {ok,logger:config()} | {error,term()}

When logger:add_handler(Id,Module,Config) is called, Logger first calls HModule:adding_handler(Config). If this function returns {ok,NewConfig}, Logger writes NewConfig to the configuration database, and the logger:add_handler/3 call returns. After this, the handler is installed and must be ready to receive log events as calls to HModule:log/2.

A handler can be removed by calling logger:remove_handler(Id). Logger calls HModule:removing_handler(Config), and removes the handler's configuration from the configuration database.

When logger:set_handler_config/2,3 or logger:update_handler_config/2 is called, Logger calls HModule:changing_config(OldConfig,NewConfig). If this function returns {ok,NewConfig}, Logger writes NewConfig to the configuration database.

A simple handler that prints to the terminal can be implemented as follows:

-module(myhandler).
-export([log/2]).

log(LogEvent,#{formatter:={FModule,FConfig}}) ->
    io:put_chars(FModule:format(LogEvent,FConfig)).

A simple handler which prints to file could be implemented like this:

-module(myhandler).
-export([adding_handler/1, removing_handler/1, log/2]).

adding_handler(#{myhandler_file:=File}=Config) ->
    {ok,Fd} = file:open(File,[append,{encoding,utf8}]),
    {ok,Config#{myhandler_fd => Fd}}.

removing_handler(#{myhandler_fd:=Fd}) ->
    _ = file:close(Fd),
    ok.

log(LogEvent,#{myhandler_fd:=Fd,formatter:={FModule,FConfig}}) ->
    io:put_chars(Fd,FModule:format(LogEvent,FConfig)).

The above handlers do not have any overload protection, and all log events are printed directly from the client process.

For information and examples of overload protection, please refer to section Protecting the Handler from Overload, and the implementation of logger_std_h(3) and logger_disk_log_h(3).

Below is an example of a handler which logs through one single process; it is simpler than the overload-protected implementations referred to above.

-module(myhandler).
-export([adding_handler/1, removing_handler/1, log/2]).
-export([init/1, handle_call/3, handle_cast/2, terminate/2]).

adding_handler(Config) ->
    {ok,Pid} = gen_server:start(?MODULE,Config,[]),
    {ok,Config#{myhandler_pid => Pid}}.

removing_handler(#{myhandler_pid:=Pid}) ->
    gen_server:stop(Pid).

log(LogEvent,#{myhandler_pid:=Pid} = Config) ->
    gen_server:cast(Pid,{log,LogEvent,Config}).

init(#{myhandler_file:=File}) ->
    {ok,Fd} = file:open(File,[append,{encoding,utf8}]),
    {ok,#{file => File, fd => Fd}}.

handle_call(_,_,State) ->
    {reply,{error,bad_request},State}.

handle_cast({log,LogEvent,Config},#{fd:=Fd} = State) ->
    do_log(Fd,LogEvent,Config),
    {noreply,State}.

terminate(_Reason,#{fd:=Fd}) ->
    _ = file:close(Fd),
    ok.

do_log(Fd,LogEvent,#{formatter:={FModule,FConfig}}) ->
    String = FModule:format(LogEvent,FConfig),
    io:put_chars(Fd,String).

Protecting the Handler from Overload

In order for the built-in handlers to survive, and stay responsive, during periods of high load (that is, when huge numbers of incoming log requests must be handled), a mechanism for overload protection has been implemented in the logger_std_h and logger_disk_log_h handlers. The mechanism, used by both handlers, works as follows:

Message Queue Length

The handler process keeps track of the length of its message queue and reacts in different ways depending on the current status. The purpose is to keep the handler in, or as quickly as possible get it into, a state where it can keep up with the pace of incoming log requests. The memory usage of the handler must never be allowed to grow without bound, since that would eventually cause the handler to crash. Three thresholds with associated actions have been defined:

toggle_sync_qlen

The default value of this level is 10 messages, and as long as the length of the message queue is lower, all log requests are handled asynchronously. This simply means that the process sending the log request (by calling a log function in the Logger API) does not wait for a response from the handler but continues executing immediately after the request (i.e. it is not affected by the time it takes the handler to print to the log device). If the message queue grows larger than this value, however, the handler starts handling the log requests synchronously instead, meaning the process sending the request must wait for a response. When the handler manages to reduce the message queue to a level below the toggle_sync_qlen threshold, asynchronous operation is resumed. The switch from asynchronous to synchronous mode forces the logging tempo of a few busy senders to slow down, but cannot protect the handler sufficiently in situations of many concurrent senders.

drop_new_reqs_qlen

When the message queue has grown larger than this threshold, which defaults to 200 messages, the handler switches to a mode in which it drops any new requests being made. Dropping a message in this state means that the log function never actually sends a message to the handler. The log call simply returns without an action. When the length of the message queue has been reduced to a level below this threshold, synchronous or asynchronous request handling mode is resumed.

flush_reqs_qlen

Above this threshold, which defaults to 1000 messages, a flush operation takes place, in which all messages buffered in the process mailbox get deleted without any logging actually taking place. (Processes waiting for a response from a synchronous log request will receive a reply indicating that the request has been dropped).

For the overload protection algorithm to work properly, it is required that:

toggle_sync_qlen =< drop_new_reqs_qlen =< flush_reqs_qlen

and that:

drop_new_reqs_qlen > 1

If toggle_sync_qlen is set to 0, the handler handles all requests synchronously. Setting toggle_sync_qlen to the same value as drop_new_reqs_qlen disables the synchronous mode. Likewise, setting drop_new_reqs_qlen to the same value as flush_reqs_qlen disables the drop mode.

During high load scenarios, the length of the handler message queue rarely grows in a linear and predictable way. Instead, whenever the handler process gets scheduled in, it can have an almost arbitrary number of messages waiting in the mailbox. It's for this reason that the overload protection mechanism is focused on acting quickly and quite drastically (such as immediately dropping or flushing messages) as soon as a large queue length is detected.

The thresholds listed above can be modified by the user if, for example, a handler should not drop or flush messages unless the message queue length grows extremely large. (The handler must be allowed to use large amounts of memory under such circumstances, however.) Another example of when the user might want to change the settings is if, for performance reasons, the logging processes must never be blocked by synchronous log requests, while dropping or flushing requests is perfectly acceptable (since it does not affect the performance of the loggers).

A configuration example:

logger:add_handler(my_standard_h, logger_std_h,
                   #{logger_std_h => #{type => {file,"./system_info.log"},
                                       toggle_sync_qlen => 100,
                                       drop_new_reqs_qlen => 1000,
                                       flush_reqs_qlen => 2000}}).

Controlling Bursts of Log Requests

A potential problem with large bursts of log requests is that log files can get full or wrap too quickly (in the latter case overwriting previously logged data that could be of great importance). For this reason, both built-in handlers offer the possibility to set a maximum number of requests to process within a certain time frame. With this burst control feature enabled, the handler takes care of bursts of log requests without choking log files, or the terminal, with massive amounts of printouts. These are the configuration parameters:

enable_burst_limit

This is set to true by default. The value false disables the burst control feature.

burst_limit_size

This is how many requests should be processed within the burst_window_time time frame. After this maximum has been reached, successive requests will be dropped until the end of the time frame. The default value is 500 messages.

burst_window_time

The default window is 1000 milliseconds long.

A configuration example:

logger:add_handler(my_disk_log_h, logger_disk_log_h,
                   #{disk_log_opts => #{file => "./my_disk_log"},
                     logger_disk_log_h => #{burst_limit_size => 10,
                                            burst_window_time => 500}}).

Terminating a Large Handler

A handler process may grow large even if it can manage peaks of high load without crashing. The overload protection mechanism includes user configurable levels for a maximum allowed message queue length and maximum allowed memory usage. This feature is disabled by default, but can be switched on by means of the following configuration parameters:

enable_kill_overloaded

This is set to false by default. The value true enables the feature.

handler_overloaded_qlen

This is the maximum allowed queue length. If the mailbox grows larger than this, the handler process gets terminated.

handler_overloaded_mem

This is the maximum allowed memory usage of the handler process. If the handler grows any larger, the process gets terminated.

handler_restart_after

If the handler gets terminated because of its queue length or memory usage, it can get automatically restarted again after a configurable delay time. The time is specified in milliseconds and 5000 is the default value. The value never can also be set, which prevents a restart.
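
A configuration example (the threshold values below are illustrative only):

logger:add_handler(my_standard_h, logger_std_h,
                   #{logger_std_h => #{type => {file,"./system_info.log"},
                                       enable_kill_overloaded => true,
                                       handler_overloaded_qlen => 100000,
                                       handler_overloaded_mem => 10000000,
                                       handler_restart_after => 10000}}).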

See Also

disk_log(3), error_logger(3), logger(3), logger_disk_log_h(3), logger_filters(3), logger_formatter(3), logger_std_h(3), sasl(6)