Erlang/OTP 21.0 provides a standard API for logging through Logger, which is part of the Kernel application. Logger consists of an API for issuing log events, and a customizable backend where log handlers, filters, and formatters can be plugged in.
By default, the Kernel application installs one log handler at system start. This handler is named default. It receives and processes standard log events produced by the Erlang runtime system, standard behaviours, and different Erlang/OTP applications. The log events are by default printed to the terminal.

You can also configure the system so that the default handler prints log events to a single file, or to a set of wrap logs via disk_log.
By configuration, you can also modify or disable the default handler, replace it by a custom handler, and install additional handlers.
Since Logger is new in Erlang/OTP 21.0, we reserve the right to introduce changes to the Logger API and functionality in patches following this release. These changes might or might not be backwards compatible with the initial version.
A log event consists of a log level, the message to be logged, and metadata.
The Logger backend forwards log events from the API, first through a set of primary filters, then through a set of secondary filters attached to each log handler. The secondary filters are, in the following, referred to as handler filters.
Each filter set consists of a log level check, followed by zero or more filter functions.
The following figure shows a conceptual overview of Logger. The figure shows two log handlers, but any number of handlers can be installed.
Log levels are expressed as atoms. Internally in Logger, the
atoms are mapped to integer values, and a log event passes the
log level check if the integer value of its log level is less
than or equal to the currently configured log level. That is,
the check passes if the event is equally or more severe than the
configured level. See section Log Level for a listing and description of all log levels.
The primary log level can be overridden by a log level configured per module. This allows, for instance, more verbose logging from a specific part of the system.
Filter functions can be used for more sophisticated filtering
than the log level check provides. A filter function can stop or
pass a log event, based on any of the event's contents. It can
also modify all parts of the log event. See section Filters for more details.
If a log event passes through all primary filters and all
handler filters for a specific handler, Logger forwards the
event to the handler callback. The handler formats and
prints the event to its destination. See section Handlers for more details.
Everything up to and including the call to the handler callbacks is executed on the client process, that is, the process where the log event was issued. It is up to the handler implementation if other processes are involved or not.
The handlers are called in sequence, and the order is not defined.
The API for logging consists of a set of macros, and a set of functions on the form logger:Level/1,2,3, which are all shortcuts for logger:log(Level,Arg1[,Arg2[,Arg3]]).
The difference between using the macros and the exported functions is that macros add location (originator) information to the metadata, and perform lazy evaluation by wrapping the Logger call in a case statement, so it is only evaluated if the log level of the event passes the primary log level check.
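As a sketch, the same event issued both ways (the module name and message are invented for this example):

-module(api_example).
-export([run/1]).

-include_lib("kernel/include/logger.hrl").

run(DurationMs) ->
    %% Macro: adds location metadata (mfa, file, line) and skips
    %% evaluation if 'info' fails the primary log level check.
    ?LOG_INFO("request handled in ~p ms", [DurationMs]),
    %% Function: same event, but without location metadata, and the
    %% arguments are always evaluated.
    logger:info("request handled in ~p ms", [DurationMs]).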
The log level indicates the severity of an event. In accordance with the Syslog protocol, RFC 5424, eight log levels can be specified: emergency, alert, critical, error, warning, notice, info, and debug, where emergency is the most severe and debug the least severe level. Internally, these atoms map to the integer values 0 (emergency) through 7 (debug).
Notice that the integer value is only used internally in
Logger. In the API, you must always use the atom. To compare
the severity of two log levels,
use logger:compare_levels/2.
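For example, in the Erlang shell (gt means the first argument is the more severe level):

1> logger:compare_levels(error, info).
gt
2> logger:compare_levels(debug, notice).
lt
3> logger:compare_levels(alert, alert).
eq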
The log message contains the information to be logged. The
message can consist of a format string and arguments (given as
two separate parameters in the Logger API), a string or a
report. The latter, which is either a map or a key-value list,
can be accompanied by a report callback specified in the log event's metadata. The report callback is a convenience function that a formatter can use to convert the report to a format string and arguments. The formatter can also use its own conversion function, if no callback is provided, or if a customized formatting is desired.
Example, format string and arguments:
logger:error("The file does not exist: ~ts",[Filename])
Example, string:
logger:notice("Something strange happened!")
Example, report, and metadata with report callback:
logger:debug(#{got => connection_request, id => Id, state => State},
             #{report_cb => fun(R) -> {"~p",[R]} end})
The log message can also be provided through a fun for lazy evaluation. The fun is only evaluated if the primary log level check passes, and is therefore recommended if it is expensive to generate the message. The lazy fun must return a string, a report, or a tuple with format string and arguments.
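A sketch of a lazy message, where expand_state/1 is a hypothetical, expensive helper:

%% The fun is called with the extra argument (State) only if the
%% event passes the primary log level check.
logger:debug(fun(S) ->
                     {"state dump: ~p", [expand_state(S)]}
             end,
             State).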
Metadata contains additional data associated with a log message. Logger inserts some metadata fields by default, and the client can add custom metadata in two different ways:
Process metadata is set and updated
with logger:set_process_metadata/1 and logger:update_process_metadata/1, respectively. This metadata applies to the process on which these calls are made, and Logger adds it to all log events issued on that process.
Metadata associated with one specific log event is given as the last parameter to the log macro or Logger API function when the event is issued. For example:
?LOG_ERROR("Connection closed",#{context => server})
See the description of the logger:metadata() type for information about which default keys Logger inserts, and how the different metadata maps are merged.
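A sketch combining both kinds of metadata (the field names are invented for this example):

handle_session(Peer) ->
    %% Process metadata: added to every event issued on this process.
    logger:set_process_metadata(#{shard => 4}),
    %% Event metadata: applies to this event only. On key clashes it
    %% takes precedence over process metadata and Logger's defaults.
    logger:notice("session started", #{peer => Peer}).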
Filters can be primary, or attached to a specific handler. Logger calls the primary filters first, and if they all pass, it calls the handler filters for each handler. Logger calls the handler callback only if all filters attached to the handler in question also pass.
A filter is defined as:
{FilterFun, Extra}
where FilterFun is a function of arity 2, and Extra is any term. When applying the filter, Logger calls the function with the log event as the first argument, and the value of Extra as the second argument.
The filter function can return stop, ignore, or the (possibly modified) log event.

If stop is returned, the log event is immediately discarded. If the filter is a primary filter, no handlers are called. If it is a handler filter, the corresponding handler callback is not called, but the log event is forwarded to the filters attached to the next handler, if any.
If the log event is returned, the next filter function is called with the returned value as the first argument. That is, if a filter function modifies the log event, the next filter function receives the modified event. The value returned from the last filter function is the value that the handler callback receives.
If the filter function returns ignore, it means that this filter has no knowledge of the log event. In this case, the next filter function is called with the original log event as the first argument.
The configuration option filter_default specifies what happens to a log event if all filter functions return ignore, or if no filters exist.
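As a sketch, a primary filter that keeps only events tagged with a given subsystem in their metadata (the subsystem field is invented for this example):

%% The filter function receives the log event and the Extra term, and
%% returns stop, ignore, or a (possibly modified) log event.
logger:add_primary_filter(only_net,
    {fun(#{meta := #{subsystem := S}} = LogEvent, Wanted)
           when S =:= Wanted ->
             LogEvent;                      % keep matching events
        (_LogEvent, _Wanted) ->
             stop                           % discard everything else
     end,
     net}).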
Primary filters are added
with logger:add_primary_filter/2 and removed with logger:remove_primary_filter/1. They can also be added at system start via the Kernel configuration parameter logger.
Handler filters are added
with logger:add_handler_filter/3 and removed with logger:remove_handler_filter/2. They can also be specified directly in the handler configuration when adding a handler with logger:add_handler/3, or via the Kernel configuration parameter logger.
To see which filters are currently installed in the system,
use logger:get_config/0, or logger:get_primary_config/0 and logger:get_handler_config/1.
For convenience, the following built-in filters exist:
logger_filters:domain/2 - Provides a way of filtering log events based on a domain field in the metadata.
logger_filters:level/2 - Provides a way of filtering log events based on the log level.
logger_filters:progress/2 - Stops or allows progress reports from supervisor and application_controller.
logger_filters:remote_gl/2 - Stops or allows log events originating from a process that has its group leader on a remote node.
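For instance, to stop progress reports from reaching the default handler:

logger:add_handler_filter(default, stop_progress,
                          {fun logger_filters:progress/2, stop}).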
A handler is defined as a module exporting at least the following callback function:
log(LogEvent, Config) -> void()
This function is called when a log event has passed through all primary filters, and all handler filters attached to the handler in question. The function call is executed on the client process, and it is up to the handler implementation if other processes are involved or not.
Logger allows adding multiple instances of a handler callback. That is, if a callback module implementation allows it, you can add multiple handler instances using the same callback module. The different instances are identified by unique handler identities.
In addition to the mandatory callback function log/2, a handler module can export the optional callback functions adding_handler/1, changing_config/2, and removing_handler/1. See section Handler Callback Functions in the logger(3) manual page for more information about these functions.
The following built-in handlers exist:
logger_std_h - This is the default handler used by OTP. Multiple instances can be started, and each instance will write log events to a given destination, terminal or file.
logger_disk_log_h - This handler behaves much like logger_std_h, except it uses disk_log as its destination.
error_logger - This handler is provided for backwards compatibility only. It is not started by default, but will be automatically started the first time an error_logger event handler is added with error_logger:add_report_handler/1,2.

The old error_logger event handlers in STDLIB and SASL still exist, but they are not added by default.
A formatter can be used by the handler implementation to do the
final formatting of a log event, before printing to the
handler's destination. The handler callback receives the
formatter information as part of the handler configuration,
which is passed as the second argument
to HModule:log/2.
The formatter information consists of a formatter module, FModule, and its configuration, FConfig. FModule must export the following function, which can be called by the handler:
format(LogEvent,FConfig) -> FormattedLogEntry
The formatter information for a handler is set as a part of its
configuration when the handler is added. It can also be changed
during runtime
with logger:set_handler_config(HandlerId, formatter, {FModule, FConfig}), which overwrites the complete formatter information, or with logger:update_formatter_config/2,3, which only modifies the formatter configuration.
If the formatter module exports the optional callback
function check_config(FConfig), Logger calls this function when the formatter information is set or modified, to verify the validity of the formatter configuration.
If no formatter information is specified for a handler, Logger
uses logger_formatter with its default configuration.
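A minimal sketch of a custom formatter module (it ignores its configuration and any report_cb in the metadata, which a real formatter should honor):

-module(my_formatter).
-export([format/2]).

%% Called by the handler; must return unicode chardata.
format(#{level := Level, msg := Msg, meta := _Meta}, _FConfig) ->
    Text = case Msg of
               {string, String} -> String;
               {report, Report} -> io_lib:format("~p", [Report]);
               {Format, Args}   -> io_lib:format(Format, Args)
           end,
    io_lib:format("[~p] ~ts~n", [Level, Text]).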
At system start, Logger is configured through Kernel
configuration parameters. The parameters that apply to Logger
are described in
section Kernel Configuration Parameters.
During runtime, Logger configuration is changed via API
functions. See
section Configuration API Functions in the logger(3) manual page.
Logger API functions that apply to the primary Logger configuration are:

- logger:get_primary_config/0
- logger:set_primary_config/1,2
- logger:update_primary_config/1
- logger:add_primary_filter/2
- logger:remove_primary_filter/1
The primary Logger configuration is a map with the following keys:
level - Specifies the primary log level, that is, log events that are equally or more severe than this level are forwarded to the primary filters. Less severe log events are immediately discarded.

See section Log Level for a listing and description of possible log levels.

The initial value of this option is set by the Kernel configuration parameter logger_level. It can be changed during runtime with, for instance, logger:set_primary_config(level, Level).

Defaults to notice.
filters - Specifies the primary filters.

The initial value of this option is set by the Kernel configuration parameter logger. During runtime, primary filters are added and removed with logger:add_primary_filter/2 and logger:remove_primary_filter/1, respectively.

See section Filters for more detailed information.

Defaults to [].
filter_default = log | stop - Specifies what happens to a log event if all filters return ignore, or if no filters exist.

See section Filters for more information about how this option is used.

Defaults to log.
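For instance, a sketch that overwrites the primary configuration at runtime:

%% Forward info and more severe events; discard events that all
%% primary filters ignore.
logger:set_primary_config(#{level => info,
                            filters => [],
                            filter_default => stop}).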
Logger API functions that apply to handler configuration are:

- logger:add_handler/3
- logger:remove_handler/1
- logger:get_handler_config/0,1
- logger:set_handler_config/2,3
- logger:update_handler_config/2
- logger:add_handler_filter/3
- logger:remove_handler_filter/2
- logger:update_formatter_config/2,3
The configuration for a handler is a map with the following keys:
id - Automatically inserted by Logger. The value is the same as the HandlerId specified when adding the handler, and it cannot be changed.
module - Automatically inserted by Logger. The value is the same as the Module specified when adding the handler, and it cannot be changed.
level - Specifies the log level for the handler, that is, log events that are equally or more severe than this level are forwarded to the handler filters for this handler.

See section Log Level for a listing and description of possible log levels.

The log level is specified when adding the handler, or changed during runtime with, for instance, logger:set_handler_config(HandlerId, level, Level).

Defaults to all.
filters - Specifies the handler filters.

Handler filters are specified when adding the handler, or added or removed during runtime with logger:add_handler_filter/3 and logger:remove_handler_filter/2, respectively.

See section Filters for more detailed information.

Defaults to [].
filter_default = log | stop - Specifies what happens to a log event if all filters return ignore, or if no filters exist.

See section Filters for more information about how this option is used.

Defaults to log.
formatter = {FormatterModule, FormatterConfig} - Specifies a formatter that the handler can use for converting the log event term to a printable string.

The formatter information is specified when adding the handler. The formatter configuration can be changed during runtime with logger:update_formatter_config/2,3, or the complete formatter information can be overwritten with, for instance, logger:set_handler_config/3.

See section Formatters for more detailed information.

Defaults to {logger_formatter, DefaultFormatterConfig}. See the logger_formatter(3) manual page for information about this formatter and its default configuration.
config - Handler-specific configuration, that is, configuration data related to a specific handler implementation.

The configuration for the built-in handlers is described in the logger_std_h(3) and logger_disk_log_h(3) manual pages.
Notice that level and filters are obeyed by Logger itself before forwarding the log events to each handler, while formatter and all handler-specific options are left to the handler implementation.
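As an illustration, inspecting and adjusting the default handler in the shell (output abbreviated):

1> logger:get_handler_config(default).
{ok,#{id => default,module => logger_std_h, ...}}
2> logger:set_handler_config(default, level, error).
ok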
The following Kernel configuration parameters apply to Logger:
logger = [Config] - Specifies the configuration for Logger, except the primary log level, which is specified with logger_level, and the compatibility with SASL Error Logging, which is specified with logger_sasl_compatible. With this parameter, you can modify or disable the default handler, add custom handlers and primary logger filters, and set log levels per module. Config is any (zero or more) of the following:
{handler, default, undefined} - Disables the default handler. This allows another application to add its own default handler.
Only one entry of this type is allowed.
{handler, HandlerId, Module, HandlerConfig} - If HandlerId is default, then this entry modifies the default handler, equivalent to calling
logger:set_handler_config(default, Module, HandlerConfig)
For all other values of HandlerId, this entry adds a new handler, equivalent to calling
logger:add_handler(HandlerId, Module, HandlerConfig)
Multiple entries of this type are allowed.
{filters, FilterDefault, [Filter]} - Adds the specified primary filters.
Equivalent to calling
logger:add_primary_filter(FilterId, {FilterFun, FilterConfig})
for each Filter = {FilterId, {FilterFun, FilterConfig}}. FilterDefault specifies the behaviour if all primary filters return ignore; see section Filters.
Only one entry of this type is allowed.
{module_level, Level, [Module]} - Sets module log level for the given modules. Equivalent to calling
logger:set_module_level(Module, Level)
for each Module.
Multiple entries of this type are allowed.
See section Configuration Examples for examples using the logger parameter for system configuration.
logger_level = Level - Specifies the primary log level. See the kernel(6) manual page for more information about this parameter.
logger_sasl_compatible = true | false - Specifies Logger's compatibility with SASL Error Logging. See the kernel(6) manual page for more information about this parameter.
The value of the Kernel configuration parameter logger is a list of configuration entries.
Each of the following examples shows a simple system configuration file that configures Logger according to the description.
Modify the default handler to print to a file instead of to the terminal:
[{kernel,
[{logger,
[{handler, default, logger_std_h, % {handler, HandlerId, Module,
#{config => #{type => {file,"log/erlang.log"}}}} % Config}
]}]}].
Modify the default handler to print each log event as a single line:
[{kernel,
[{logger,
[{handler, default, logger_std_h,
#{formatter => {logger_formatter, #{single_line => true}}}}
]}]}].
Modify the default handler to print the pid of the logging process for each log event:
[{kernel,
[{logger,
[{handler, default, logger_std_h,
#{formatter => {logger_formatter,
#{template => [time," ",pid," ",msg,"\n"]}}}}
]}]}].
Modify the default handler to only print errors and more severe log events to "log/erlang.log", and add another handler to print all log events to "log/debug.log".
[{kernel,
[{logger,
[{handler, default, logger_std_h,
#{level => error,
config => #{type => {file, "log/erlang.log"}}}},
{handler, info, logger_std_h,
#{level => debug,
config => #{type => {file, "log/debug.log"}}}}
]}]}].
Logger provides backwards compatibility with error_logger functionality in the following ways:

API for Logging - The error_logger API still exists, but should only be used by legacy code. It will be removed in a later release.
Calls to error_logger:error_report/1,2, error_logger:error_msg/1,2, and corresponding functions for warning and info messages, are all forwarded to Logger as calls to logger:log(Level, Report, Metadata).
To get log events on the same format as produced by error_logger_tty_h and error_logger_file_h, use the default formatter, logger_formatter, with configuration parameter legacy_header set to true. This is the default configuration of the default handler started by Kernel.
By default, all log events originating from within OTP, except the former so called "SASL reports", look the same as before.
By SASL reports we mean supervisor reports, crash reports and progress reports.
Prior to Erlang/OTP 21.0, these reports were only logged when the SASL application was running, and they were printed through SASL's own event handlers sasl_report_tty_h and sasl_report_file_h.
The destination of these log events was configured by SASL configuration parameters.
Due to the specific event handlers, the output format slightly differed from other log events.
As of Erlang/OTP 21.0, the concept of SASL reports is removed, meaning that the default behaviour is as follows:

- Supervisor reports, crash reports, and progress reports are no longer connected to the SASL application.
- Supervisor reports and crash reports are issued as error level log events, and are logged through the default handler started by Kernel.
- Progress reports are issued as info level log events, and since the default primary log level is notice, these are not logged by default. To enable printing of progress reports, set the primary log level to info.
If the old behaviour is preferred, the Kernel configuration
parameter logger_sasl_compatible can be set to true.
All SASL reports have a metadata field domain => [otp, sasl]. This field can, for example, be used by filters to stop or allow the log events.
See section SASL Error Logging in the SASL User's Guide for more information about the old SASL error logging functionality.
Event Handlers - To use event handlers written for error_logger, just add your event handler with
error_logger:add_report_handler/1,2.
This automatically starts the error logger event manager, and adds error_logger as a handler to Logger, with the following configuration:
#{level => info,
filter_default => log,
filters => []}.
This handler ignores events that do not originate from the error_logger API, or from within OTP. This means that if your code uses the Logger API for logging, then your log events will be discarded by this handler.
The handler is not overload protected.
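For example (my_legacy_handler is a hypothetical gen_event callback module written for the old error_logger):

%% Starts the error logger event manager if needed, installs the
%% error_logger Logger handler, and adds the legacy event handler.
error_logger:add_report_handler(my_legacy_handler).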
Logger does, to a certain extent, check its input data before forwarding a log event to filters and handlers. It does, however, not evaluate report callbacks, or check the validity of format strings and arguments. This means that all filters and handlers must be careful when formatting the data of a log event, making sure that it does not crash due to bad input data or faulty callbacks.
If a filter or handler still crashes, Logger will remove the filter or handler in question from the configuration, and print a short error message to the terminal. A debug event containing the crash reason and other details is also issued.
See section Log Message for more information about report callbacks and valid forms of log messages.
When starting an Erlang node, the default behaviour is that all
log events on level notice or more severe are logged to the terminal via the default handler. To also log info events, you can either change the primary log level to info:
1> logger:set_primary_config(level, info).
ok
or set the level for one or a few modules only:
2> logger:set_module_level(mymodule, info).
ok
This allows info events to pass through to the default handler, and be printed to the terminal as well. If there are many info events, it can be useful to print these to a file instead.
First, set the log level of the default handler
to notice:
3> logger:set_handler_config(default, level, notice).
ok
Then, add a new handler which prints to file. You can use the
handler module logger_std_h, and configure it to log to file:
4> Config = #{config => #{type => {file,"./info.log"}}, level => info}.
#{config => #{type => {file,"./info.log"}},level => info}
5> logger:add_handler(myhandler, logger_std_h, Config).
ok
Since filter_default defaults to log, this handler now receives all log events. If you want only info events in the file, you must add a filter to stop all non-info events. The built-in filter logger_filters:level/2 can do this:
6> logger:add_handler_filter(myhandler, stop_non_info,
       {fun logger_filters:level/2, {stop, neq, info}}).
ok
See section Filters for more information about the filters and the filter_default configuration parameter.
Section Handler Callback Functions in the logger(3) manual page describes the callback functions that can be implemented for a Logger handler.
A handler callback module must export:

- log(LogEvent, Config)
It can optionally also export some, or all, of the following:

- adding_handler(Config)
- removing_handler(Config)
- changing_config(OldConfig, NewConfig)
When a handler is added, by for example a call to logger:add_handler(Id, HModule, Config), Logger first calls HModule:adding_handler(Config). If this function returns {ok, Config1}, Logger writes Config1 to the configuration database, and the logger:add_handler/3 call returns. After this, the handler is installed and must be ready to receive log events as calls to HModule:log/2.
A handler can be removed by calling logger:remove_handler(Id). Logger calls HModule:removing_handler(Config), and removes the handler's configuration from the configuration database.

When logger:set_handler_config/2,3 or logger:update_handler_config/2 is called, Logger calls HModule:changing_config(OldConfig, NewConfig). If this function returns {ok, NewConfig1}, Logger writes NewConfig1 to the configuration database.
A simple handler that prints to the terminal can be implemented as follows:
-module(myhandler1).
-export([log/2]).
log(LogEvent, #{formatter := {FModule, FConfig}}) ->
io:put_chars(FModule:format(LogEvent, FConfig)).
Notice that the above handler does not have any overload protection, and all log events are printed directly from the client process.
For information and examples of overload protection, please
refer to
section Protecting the Handler from Overload, and the implementation of the built-in handlers logger_std_h and logger_disk_log_h.
The following is a simpler example of a handler which logs to a file through one single process:
-module(myhandler2).
-export([adding_handler/1, removing_handler/1, log/2]).
-export([init/1, handle_call/3, handle_cast/2, terminate/2]).
adding_handler(Config) ->
MyConfig = maps:get(config,Config,#{file => "myhandler2.log"}),
{ok, Pid} = gen_server:start(?MODULE, MyConfig, []),
{ok, Config#{config => MyConfig#{pid => Pid}}}.
removing_handler(#{config := #{pid := Pid}}) ->
gen_server:stop(Pid).
log(LogEvent,#{config := #{pid := Pid}} = Config) ->
gen_server:cast(Pid, {log, LogEvent, Config}).
init(#{file := File}) ->
{ok, Fd} = file:open(File, [append, {encoding, utf8}]),
{ok, #{file => File, fd => Fd}}.
handle_call(_, _, State) ->
{reply, {error, bad_request}, State}.
handle_cast({log, LogEvent, Config}, #{fd := Fd} = State) ->
do_log(Fd, LogEvent, Config),
{noreply, State}.
terminate(_Reason, #{fd := Fd}) ->
_ = file:close(Fd),
ok.
do_log(Fd, LogEvent, #{formatter := {FModule, FConfig}}) ->
String = FModule:format(LogEvent, FConfig),
io:put_chars(Fd, String).
The default handlers, logger_std_h and logger_disk_log_h, feature an overload protection mechanism, which makes it possible for the handlers to survive, and stay responsive, during periods of high load (when huge numbers of incoming log requests must be handled).
The handler process keeps track of the length of its message queue and takes some form of action when the current length exceeds a configurable threshold. The purpose is to keep the handler in, or to as quickly as possible get the handler into, a state where it can keep up with the pace of incoming log events. The memory use of the handler must never grow larger and larger, since that will eventually cause the handler to crash. These three thresholds, with associated actions, exist:
sync_mode_qlen - As long as the length of the message queue is lower than this value, all log events are handled asynchronously. This means that the client process sending the log event, by calling a log function in the Logger API, does not wait for a response from the handler but continues executing immediately after the event is sent. It is not affected by the time it takes the handler to print the event to the log device. If the message queue grows larger than this value, the handler starts handling log events synchronously instead, meaning that the client process sending the event must wait for a response. When the handler reduces the message queue to a level below the sync_mode_qlen threshold, asynchronous operation is resumed.

Defaults to 10 messages.
drop_mode_qlen - When the message queue grows larger than this threshold, the handler switches to a mode in which it drops all new events that senders want to log. Dropping an event in this mode means that the call to the log function never results in a message being sent to the handler, but the function returns without taking any action. The handler keeps logging the events that are already in its message queue, and when the length of the message queue is reduced to a level below the threshold, synchronous or asynchronous mode is resumed. Notice that when the handler activates or deactivates drop mode, information about it is printed in the log.

Defaults to 200 messages.
flush_qlen - If the length of the message queue grows larger than this threshold, a flush (delete) operation takes place. To flush events, the handler discards the messages in the message queue by receiving them in a loop without logging. Client processes waiting for a response from a synchronous log request receive a reply from the handler indicating that the request is dropped. The handler process increases its priority during the flush loop to make sure that no new events are received during the operation. Notice that after the flush operation is performed, the handler prints information in the log about how many events have been deleted.

Defaults to 1000 messages.
For the overload protection algorithm to work properly, it is required that:

sync_mode_qlen =< drop_mode_qlen =< flush_qlen

and that:

drop_mode_qlen > 1
To disable certain modes, do the following:

- If sync_mode_qlen is set to 0, all log events are handled synchronously. That is, asynchronous logging is disabled.
- If sync_mode_qlen is set to the same value as drop_mode_qlen, synchronous mode is disabled. That is, the handler always runs in asynchronous mode, unless dropping or flushing is invoked.
- If drop_mode_qlen is set to the same value as flush_qlen, drop mode is disabled and can never occur.
During high load scenarios, the length of the handler message queue rarely grows in a linear and predictable way. Instead, whenever the handler process is scheduled in, it can have an almost arbitrary number of messages waiting in the message queue. It is for this reason that the overload protection mechanism is focused on acting quickly, and quite drastically, such as immediately dropping or flushing messages, when a large queue length is detected.
The values of the previously listed thresholds can be specified by the user. This way, a handler can be configured to, for example, not drop or flush messages unless the message queue length of the handler process grows extremely large. Notice that large amounts of memory can be required for the node under such circumstances. Another example of user configuration is when, for performance reasons, the client processes must never be blocked by synchronous log requests. It is possible, perhaps, that dropping or flushing events is still acceptable, since it does not affect the performance of the client processes sending the log events.
A configuration example:
logger:add_handler(my_standard_h, logger_std_h,
#{config => #{type => {file,"./system_info.log"},
sync_mode_qlen => 100,
drop_mode_qlen => 1000,
flush_qlen => 2000}}).
Large bursts of log events - many events received by the handler under a short period of time - can potentially cause problems, such as:

- log files grow very large, very quickly,
- circular logs wrap too quickly so that important data is overwritten,
- write buffers grow large, which slows down file sync operations.
For this reason, both built-in handlers offer the possibility to specify the maximum number of events to be handled within a certain time frame. With this burst control feature enabled, the handler can avoid choking the log with massive amounts of printouts. The configuration parameters are:
burst_limit_enable - Value true enables burst control and false disables it.

Defaults to true.
burst_limit_max_count - This is the maximum number of events to handle within a burst_limit_window_time time frame. After the limit is reached, successive events are dropped until the end of the time frame.

Defaults to 500 events.
burst_limit_window_time - See the previous description of burst_limit_max_count.

Defaults to 1000 milliseconds.
A configuration example:
logger:add_handler(my_disk_log_h, logger_disk_log_h,
#{config => #{file => "./my_disk_log",
burst_limit_enable => true,
burst_limit_max_count => 20,
burst_limit_window_time => 500}}).
It is possible that a handler, even if it can successfully manage peaks of high load without crashing, can build up a large message queue, or use a large amount of memory. The overload protection mechanism includes an automatic termination and restart feature for the purpose of guaranteeing that a handler does not grow out of bounds. The feature is configured with the following parameters:
overload_kill_enable - Value true enables the feature and false disables it.

Defaults to false.
overload_kill_qlen - This is the maximum allowed queue length. If the message queue grows larger than this, the handler process is terminated.

Defaults to 20000 messages.
overload_kill_mem_size - This is the maximum memory size that the handler process is allowed to use. If the handler grows larger than this, the process is terminated.

Defaults to 3000000 bytes.
overload_kill_restart_after - If the handler is terminated, it restarts automatically after a delay specified in milliseconds. The value infinity prevents restarts.

Defaults to 5000 milliseconds.
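A configuration sketch enabling the feature (the thresholds are illustrative only):

logger:add_handler(my_kill_protected_h, logger_std_h,
                   #{config => #{overload_kill_enable => true,
                                 overload_kill_qlen => 10000,
                                 overload_kill_mem_size => 3000000,
                                 overload_kill_restart_after => 10000}}).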
If the handler process is terminated because of overload, it prints information about it in the log. It also prints information about when a restart has taken place, and the handler is back in action.
The sizes of the log events affect the memory needs of the handler.
For information about how to limit the size of log events, see the logger_formatter(3) manual page.