Erlang/OTP 21.0 provides a new standard API for logging through Logger, which is part of the Kernel application.
By default, the Kernel application installs one log handler at system start. This handler is named default. It is responsible for printing log events to the terminal.
You can also configure the system so that the default handler prints log events to a single file, or to a set of wrap logs via disk_log.
By configuration, you can also modify or disable the default handler, replace it with a custom handler, and install additional handlers.
A log event consists of a log level, the message to be logged, and metadata.
The Logger backend forwards log events from the API, first through a set of global filters, then through a set of handler filters for each log handler.
Each filter set consists of a log level check, followed by zero or more filter functions.
The following figure shows a conceptual overview of Logger. The figure shows two log handlers, but any number of handlers can be installed.
Log levels are expressed as atoms. Internally in Logger, the atoms are mapped to integer values, and a log event passes the log level check if the integer value of its log level is less than or equal to the currently configured log level. That is, the check passes if the event is equally or more severe than the configured level. See section Log Level for a list and description of all log levels.
The global log level can be overridden by a log level configured per module. This allows, for instance, more verbose logging from a specific part of the system.
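For example, while the global level remains at info, debug events can still be let through for a single module (mymodule is a placeholder name):

1> logger:set_module_level(mymodule,debug).
ok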
Filter functions can be used for more sophisticated filtering than the log level check provides. A filter function can stop or pass a log event, based on any of the event's contents. It can also modify all parts of the log event. See section Filters for more details.
If a log event passes through all global filters and all handler filters for a specific handler, Logger forwards the event to the handler callback. The handler formats and prints the event to its destination. See section Handlers for more details.
Everything up to and including the call to the handler callbacks is executed on the client process, that is, the process where the log event was issued. It is up to the handler implementation if other processes are involved or not.
The handlers are called in sequence, and the order is not defined.
The API for logging consists of a set of macros, such as ?LOG_ERROR, and a set of functions on the form logger:Level/1,2,3, for example logger:error/1,2,3.
The difference between using the macros and the exported functions is that the macros add location (originator) information to the metadata, and perform lazy evaluation by wrapping the Logger call in a case statement, so it is only evaluated if the log level of the event passes the global log level check.
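For illustration, both of the calls below log the same error event, but only the macro variant adds location metadata and skips evaluation early when the level check fails. The macros become available by including kernel's logger.hrl:

-include_lib("kernel/include/logger.hrl").

%% Macro: adds location metadata (module, function, line) and is
%% not evaluated if the global log level check fails.
?LOG_ERROR("The file does not exist: ~ts",[Filename])

%% Function: always evaluated at the call site.
logger:error("The file does not exist: ~ts",[Filename])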
The log level indicates the severity of an event. In accordance with the Syslog protocol, RFC 5424, eight log levels can be specified. The following table lists all possible log levels by name (atom), integer value, and description:

Level      Integer  Description
emergency  0        system is unusable
alert      1        action must be taken immediately
critical   2        critical conditions
error      3        error conditions
warning    4        warning conditions
notice     5        normal but significant conditions
info       6        informational messages
debug      7        debug-level messages
Notice that the integer value is only used internally in Logger. In the API, you must always use the atom. To compare the severity of two log levels, use logger:compare_levels/2.
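For example, assuming the documented semantics where gt means that the first level is more severe than the second:

1> logger:compare_levels(error, info).
gt
2> logger:compare_levels(debug, notice).
lt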
The log message contains the information to be logged. The message can consist of a format string and arguments (given as two separate parameters in the Logger API), a string, or a report. The latter, which is either a map or a key-value list, can be accompanied by a report callback specified in the log event's metadata.
Example, format string and arguments:
logger:error("The file does not exist: ~ts",[Filename])
Example, string:
logger:notice("Something strange happened!")
Example, report, and metadata with report callback:
logger:debug(#{got => connection_request, id => Id, state => State},
#{report_cb => fun(R) -> {"~p",[R]} end})
The log message can also be provided through a fun for lazy evaluation. The fun is only evaluated if the global log level check passes, and is therefore recommended if it is expensive to generate the message. The lazy fun must return a string, a report, or a tuple with format string and arguments.
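A minimal sketch of a lazy message, assuming the API variant that takes a fun and one extra argument which is passed to the fun when it is evaluated:

%% The fun is only evaluated if the event passes the global
%% log level check, so the process_info/1 call is avoided
%% whenever debug events are filtered out.
logger:debug(fun(Pid) -> {"process info: ~p",[erlang:process_info(Pid)]} end,
             self())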
Metadata contains additional data associated with a log message. Logger inserts some metadata fields by default, and the client can add custom metadata in two different ways:
Process metadata is set and updated with logger:set_process_metadata/1 and logger:update_process_metadata/1, respectively. This metadata applies to the process on which these calls are made, and Logger adds it to all log events issued on that process.
Metadata associated with one specific log event is given as the last parameter to the log macro or Logger API function when the event is issued. For example:
?LOG_ERROR("Connection closed",#{context => server})
See the description of the logger:metadata() type for information about which default keys Logger inserts, and how the different metadata maps are merged.
Filters can be global, or attached to a specific handler. Logger calls the global filters first, and if they all pass, it calls the handler filters for each handler. Logger calls the handler callback only if all filters attached to the handler in question also pass.
A filter is defined as:
{FilterFun, Extra}
where FilterFun is a function of arity 2, and Extra is any term. When applying the filter, Logger calls the function with the log event as the first argument, and the value of Extra as the second argument.
The filter function can return stop, ignore, or the (possibly modified) log event.

If stop is returned, the log event is immediately discarded.
If the log event is returned, the next filter function is called with the returned value as the first argument. That is, if a filter function modifies the log event, the next filter function receives the modified event. The value returned from the last filter function is the value that the handler callback receives.
If the filter function returns ignore, it means that the filter does not recognize the log event, and the event's destiny is left to the remaining filters, or to the filter_default configuration option.
The configuration option filter_default specifies how the log event is handled if all filter functions return ignore, or if no filters exist. filter_default is set to either log or stop; log (the default) means that the event passes, and stop means that it is discarded.
Global filters are added with logger:add_logger_filter/2, and removed with logger:remove_logger_filter/1.
Handler filters are added with logger:add_handler_filter/3, and removed with logger:remove_handler_filter/2.
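For illustration, here is a minimal sketch of a global filter that stops all events originating from one module and ignores everything else. The filter id stop_mymodule and the module name mymodule are placeholders; events without mfa metadata (for example, logged without the macros) do not match the first clause and fall through to the second:

1> logger:add_logger_filter(stop_mymodule,
       {fun(#{meta := #{mfa := {mymodule,_F,_A}}}, _Extra) ->
                stop;
           (_LogEvent, _Extra) ->
                ignore
        end,
        undefined}).
ok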
To see which filters are currently installed in the system, use logger:i/0.
For convenience, the following built-in filters exist:

logger_filters:domain/2

    Provides a way of filtering log events based on a domain field in metadata.

logger_filters:level/2

    Provides a way of filtering log events based on the log level.

logger_filters:progress/2

    Stops or allows progress reports from supervisor and application_controller.

logger_filters:remote_gl/2

    Stops or allows log events originating from a process that has its group leader on a remote node.
A handler is defined as a module exporting at least the following function:
log(LogEvent, Config) -> void()
This function is called when a log event has passed through all global filters, and all handler filters attached to the handler in question. The function call is executed on the client process, and it is up to the handler implementation if other processes are involved or not.
Logger allows adding multiple instances of a handler callback. That is, if a callback module implementation allows it, you can add multiple handler instances using the same callback module. The different instances are identified by unique handler identities.
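For example, two instances of logger_std_h can write to two different destinations (the handler ids and file names below are arbitrary):

1> logger:add_handler(my_error_log, logger_std_h,
                      #{level => error,
                        logger_std_h => #{type => {file,"log/error.log"}}}).
ok
2> logger:add_handler(my_debug_log, logger_std_h,
                      #{level => debug,
                        logger_std_h => #{type => {file,"log/debug.log"}}}).
ok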
In addition to the mandatory callback function log/2, a handler module can export the optional callback functions adding_handler/1, changing_config/2, and removing_handler/1. See section Handler Callback Functions in the logger(3) manual page for more information about these functions.
The following built-in handlers exist:

logger_std_h

    This is the default handler used by OTP. Multiple instances can be started, and each instance will write log events to a given destination, terminal or file.

logger_disk_log_h

    This handler behaves much like logger_std_h, except it uses disk_log as its destination.

error_logger

    This handler is provided for backwards compatibility only. It is not started by default, but will be automatically started the first time an error_logger event handler is added with error_logger:add_report_handler/1,2.

    The old error_logger event handlers in STDLIB and SASL still exist, but they are not added by Erlang/OTP 21.0.
A formatter can be used by the handler implementation to do the
final formatting of a log event, before printing to the
handler's destination. The handler callback receives the
formatter information as part of the handler configuration,
which is passed as the second argument to HModule:log/2.
The formatter information consists of a formatter module, FModule, and its configuration, FConfig. FModule must export the following function, which can be called by the handler:
format(LogEvent,FConfig) -> FormattedLogEntry
The formatter information for a handler is set as a part of its configuration when the handler is added. It can also be changed during runtime with, for instance, logger:set_handler_config/3.
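For example, the default handler can be switched to single-line output at runtime (a sketch reusing the single_line option of logger_formatter shown in the configuration examples below):

1> logger:set_handler_config(default, formatter,
                             {logger_formatter, #{single_line => true}}).
ok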
If the formatter module exports the optional callback function check_config(FConfig), Logger calls this function when the formatter information is set or modified, to verify the validity of the formatter configuration.
If no formatter information is specified for a handler, Logger uses logger_formatter with its default configuration.
Logger can be configured either when the system starts through configuration parameters, or during runtime by using the logger(3) API. Logger is best configured by using the configuration parameters of Kernel. There are four possible configuration parameters: logger, logger_level, logger_sasl_compatible, and logger_progress_reports.
logger

The application configuration parameter logger specifies the Logger configuration as a list of the following entries:

{handler, default, undefined}

    Disable the default handler. This allows another application to add its own default handler.

    Only one entry of this option is allowed.

{handler, HandlerId, Module, HandlerConfig}

    Add a handler as if logger:add_handler(HandlerId, Module, HandlerConfig) is called.

    It is allowed to have multiple entries of this option.

{filters, FilterDefault, [Filter]}

    Add the specified global filters, and set the value of the filter_default configuration option.

    Only one entry of this option is allowed.

{module_level, Level, [Module]}

    This option configures the log level per module, as if logger:set_module_level(Module, Level) is called for each of the listed modules.

    It is allowed to have multiple entries of this option.
Examples:
Output logs into the file "log/erlang.log"
[{kernel,
[{logger,
[{handler, default, logger_std_h,
#{ logger_std_h => #{ type => {file,"log/erlang.log"}}}}]}]}].
Output logs in single line format
[{kernel,
[{logger,
[{handler, default, logger_std_h,
#{ formatter => { logger_formatter,#{ single_line => true}}}}]}]}].
Add the pid to each log event
[{kernel,
[{logger,
[{handler, default, logger_std_h,
#{ formatter => { logger_formatter,
#{ template => [time," ",pid," ",msg,"\n"]}}
}}]}]}].
Use a different file for debug logging
[{kernel,
[{logger,
[{handler, default, logger_std_h,
#{ level => error,
logger_std_h => #{ type => {file, "log/erlang.log"}}}},
{handler, info, logger_std_h,
#{ level => debug,
logger_std_h => #{ type => {file, "log/debug.log"}}}}
]}]}].
level

    Specifies the global log level to log.

    See section Log Level for a list of possible log levels.

    The initial value of this option is set by the Kernel configuration parameter logger_level. It can be changed during runtime with, for instance, logger:set_logger_config(level, NewLevel).

filters

    Global filters are added and removed with logger:add_logger_filter/2 and logger:remove_logger_filter/1, respectively.

    See section Filters for more information.

    Default is [], that is, no filters exist.

filter_default

    Specifies what to do with an event if all filters return ignore.

    See section Filters for more information about how filter_default is used.

    Default is log.
level

    Specifies the log level which the handler logs.

    See section Log Level for a list of possible log levels.

    The log level can be specified when adding the handler, or changed during runtime with, for instance, logger:set_handler_config/3.

    Default is info.

filters

    Handler filters can be specified when adding the handler, or added or removed during runtime with logger:add_handler_filter/3 and logger:remove_handler_filter/2, respectively.

    See section Filters for more information.

    Default is [], that is, no filters exist.

filter_default

    Specifies what to do with an event if all filters return ignore.

    See section Filters for more information about how filter_default is used.

    Default is log.

formatter

    The formatter which the handler can use for converting the log event term to a printable string.

    See section Formatter for more information.

    Default is {logger_formatter, DefaultFormatterConfig}. See the logger_formatter(3) manual page for information about this formatter and its default configuration.
Any keys not listed above are considered to be handler specific configuration. The configuration of the Kernel handlers can be found in the logger_std_h(3) and logger_disk_log_h(3) manual pages.
Notice that level and filters are obeyed by Logger itself before forwarding the log events to each handler, while the formatter is left to the handler implementation.
All Logger's built-in handlers will call the given formatter before printing.
Logger provides backwards compatibility with error_logger in the following ways:

The error_logger API still exists, but should only be used by legacy code. It will be removed in a later release. Calls to error_logger:error_report/1,2, error_logger:error_msg/1,2, and corresponding functions for warning and info messages, are all forwarded to Logger as calls to logger:log(Level, Report, Metadata).

To get log events on the same format as produced by error_logger_tty_h and error_logger_file_h, use the default formatter, logger_formatter, with configuration parameter legacy_header => true. This is also the default.
By default, all log events originating from within OTP, except the former so-called "SASL reports", look the same as before. By SASL reports we mean supervisor reports, crash reports, and progress reports.
In earlier releases, these reports were only logged when the SASL application was running, and they were printed through specific event handlers named sasl_report_tty_h and sasl_report_file_h.
The destination of these log events was configured by the SASL configuration parameter sasl_error_logger.
Due to the specific event handlers, the output format of these reports differed slightly from that of other log events.
As of Erlang/OTP 21.0, the concept of SASL reports is removed, meaning that the default behaviour is as follows:

- Supervisor reports, crash reports, and progress reports are no longer connected to the SASL application.
- Supervisor reports and crash reports are logged by default.
- Progress reports are not logged by default, but can be enabled with the Kernel configuration parameter logger_progress_reports.
- The output format is the same for all log events.
If the old behaviour is preferred, the Kernel configuration parameter logger_sasl_compatible can be set to true.
All SASL reports have a metadata field domain which is set to [beam,erlang,otp,sasl]. This field can, for example, be used by filters to stop or allow the log events.
See the SASL User's Guide for more information about the old SASL error logging functionality.
To use event handlers written for error_logger, just add your event handler with error_logger:add_report_handler/1,2. This automatically starts the error_logger event manager, and adds error_logger as a handler to Logger, with the following configuration:
#{level => info,
filter_default => log,
filters => []}.
Notice that this handler will ignore events that do not originate from the old error_logger API, or from within OTP. This means that if your code uses the Logger API for logging, then your log events will be discarded by this handler.
Also notice that error_logger is not overload protected.
Log data is expected to be either a format string and arguments, a string (unicode:chardata()), or a report (map or key-value list) which can be converted to a format string and arguments by the handler.
Logger does, to a certain extent, check its input data before forwarding a log event to the handlers, but it does not evaluate conversion funs or check the validity of format strings and arguments. This means that any filter or handler must be careful when formatting the data of a log event, making sure that it does not crash due to bad input data or faulty callbacks.
If a filter or handler still crashes, Logger will remove the filter or handler in question from the configuration, and print a short error message to the terminal. A debug event containing the crash reason and other details is also issued, and can be seen if a handler logging debug events is installed.
When starting an Erlang node, the default behaviour is that all log events with level info and above are logged to the terminal. In order to also log debug events, you can either change the global log level to debug, or add a separate handler to take care of this. In this example, we add a new handler that prints the debug events to a separate file.

First, we add an instance of logger_std_h with type {file,File}, and we set the handler's level to debug:
1> Config = #{level => debug, logger_std_h => #{type => {file,"./debug.log"}}}.
#{logger_std_h => #{type => {file,"./debug.log"}}, level => debug}
2> logger:add_handler(debug_handler,logger_std_h,Config).
ok
By default, the handler receives all events (filter_default=log), so we add a filter to stop all non-debug events, using the built-in filter logger_filters:level/2:
3> logger:add_handler_filter(debug_handler,stop_non_debug,
                             {fun logger_filters:level/2,{stop,neq,debug}}).
ok
And finally, we need to make sure that Logger itself allows debug events. This can either be done by setting the global log level:
4> logger:set_logger_config(level,debug).
ok
Or by allowing debug events from one or a few modules only:
5> logger:set_module_level(mymodule,debug).
ok
The only requirement that a handler MUST fulfill is to export the following function:
log(logger:log_event(),logger:config()) -> ok
It can optionally also implement the following callbacks:
adding_handler(logger:config()) -> {ok,logger:config()} | {error,term()}
removing_handler(logger:config()) -> ok
changing_config(logger:config(),logger:config()) -> {ok,logger:config()} | {error,term()}
When logger:add_handler(Id, Module, Config) is called, Logger first calls HModule:adding_handler(Config). If the function returns {ok, NewConfig}, the handler is added with configuration NewConfig.
A handler can be removed by calling logger:remove_handler(Id). Logger calls HModule:removing_handler(Config), and removes the handler's configuration from the configuration database.
When logger:set_handler_config/2,3 is called, Logger calls HModule:changing_config(OldConfig, NewConfig). If the function returns {ok, NewConfig}, NewConfig is written to the configuration database.
A simple handler that prints to the terminal can be implemented as follows:
-module(myhandler).
-export([log/2]).
log(LogEvent,#{formatter:={FModule,FConfig}}) ->
io:put_chars(FModule:format(LogEvent,FConfig)).
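Such a handler could be installed as follows (a sketch; the handler id my_terminal_h is arbitrary, and since no formatter is specified, Logger uses logger_formatter with default configuration):

1> logger:add_handler(my_terminal_h, myhandler, #{}).
ok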
A simple handler that prints to a file could be implemented like this:
-module(myhandler).
-export([adding_handler/1, removing_handler/1, log/2]).
adding_handler(#{myhandler_file:=File}=Config) ->
{ok,Fd} = file:open(File,[append,{encoding,utf8}]),
{ok,Config#{myhandler_fd => Fd}}.
removing_handler(#{myhandler_fd:=Fd}) ->
_ = file:close(Fd),
ok.
log(LogEvent,#{myhandler_fd:=Fd,formatter:={FModule,FConfig}}) ->
io:put_chars(Fd,FModule:format(LogEvent,FConfig)).
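Assuming the myhandler_file configuration key used above, this handler could be added like this (a sketch; the handler id and the file path are arbitrary):

1> logger:add_handler(my_file_h, myhandler, #{myhandler_file => "./my.log"}).
ok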
The above handlers do not have any overload protection, and all log events are printed directly from the client process.
For information and examples of overload protection, please refer to section Protecting the Handler from Overload, and the implementation of logger_std_h and logger_disk_log_h.
Below is a simpler example of a handler which logs through one single process.
-module(myhandler).
-export([adding_handler/1, removing_handler/1, log/2]).
-export([init/1, handle_call/3, handle_cast/2, terminate/2]).
adding_handler(Config) ->
{ok,Pid} = gen_server:start(?MODULE,Config,[]),
{ok,Config#{myhandler_pid => Pid}}.
removing_handler(#{myhandler_pid:=Pid}) ->
gen_server:stop(Pid).
log(LogEvent,#{myhandler_pid:=Pid} = Config) ->
gen_server:cast(Pid,{log,LogEvent,Config}).
init(#{myhandler_file:=File}) ->
{ok,Fd} = file:open(File,[append,{encoding,utf8}]),
{ok,#{file => File, fd => Fd}}.
handle_call(_,_,State) ->
{reply,{error,bad_request},State}.
handle_cast({log,LogEvent,Config},#{fd:=Fd} = State) ->
do_log(Fd,LogEvent,Config),
{noreply,State}.
terminate(_Reason,#{fd:=Fd}) ->
_ = file:close(Fd),
ok.
do_log(Fd,LogEvent,#{formatter:={FModule,FConfig}}) ->
String = FModule:format(LogEvent,FConfig),
io:put_chars(Fd,String).
In order for the built-in handlers to survive, and stay responsive, during periods of high load (that is, when huge numbers of incoming log requests must be handled), a mechanism for overload protection has been implemented in the logger_std_h and logger_disk_log_h handlers.
The handler process keeps track of the length of its message queue and reacts in different ways depending on the current status. The purpose is to keep the handler in, or quickly get it back into, a state where it can keep up with the pace of incoming log requests. The memory usage of the handler must never grow without bound, since that would eventually cause the handler to crash. Three thresholds with associated actions have been defined:
toggle_sync_qlen

    The default value of this level is 10 messages, and as long as the length of the message queue is lower than this value, all log requests are handled asynchronously. If the message queue grows larger than this value, the handler starts handling log requests synchronously instead, meaning that the client process sending the request must wait for a response. When the handler reduces the message queue to a level below the toggle_sync_qlen threshold, asynchronous operation is resumed.

drop_new_reqs_qlen

    When the message queue has grown larger than this threshold, which defaults to 200 messages, the handler switches to a mode in which it drops any new requests being made. If the length of the message queue falls below this threshold, normal operation is resumed.

flush_reqs_qlen

    Above this threshold, which defaults to 1000 messages, a flush operation takes place: all messages buffered in the process mailbox are deleted.
For the overload protection algorithm to work properly, it is required that:

toggle_sync_qlen =< drop_new_reqs_qlen =< flush_reqs_qlen

and that:

drop_new_reqs_qlen > 1

If toggle_sync_qlen is set to 0, the handler will handle all requests synchronously.
During high load scenarios, the length of the handler message queue rarely grows in a linear and predictable way. Instead, whenever the handler process gets scheduled in, it can have an almost arbitrary number of messages waiting in the mailbox. It's for this reason that the overload protection mechanism is focused on acting quickly and quite drastically (such as immediately dropping or flushing messages) as soon as a large queue length is detected.
The thresholds listed above can be modified by the user if, for example, a handler should not drop or flush messages unless the message queue length grows extremely large. (The handler must then be allowed to use large amounts of memory under such circumstances.) Another example of when to change the settings is if, for performance reasons, the logging processes must never be blocked by synchronous log requests, while dropping or flushing requests is perfectly acceptable (since it does not affect the performance of the processes that log).
A configuration example:
logger:add_handler(my_standard_h, logger_std_h,
#{logger_std_h =>
#{type => {file,"./system_info.log"},
toggle_sync_qlen => 100,
drop_new_reqs_qlen => 1000,
flush_reqs_qlen => 2000}}).
A potential problem with large bursts of log requests is that log files can get full or wrapped too quickly (in the latter case overwriting previously logged data that could be of great importance). For this reason, both built-in handlers offer the possibility to set a maximum level of how many requests to process within a certain time frame. With this burst control feature enabled, the handler takes care of bursts of log requests without choking log files, or the terminal, with massive amounts of printouts. These are the configuration parameters:
enable_burst_limit

    This is set to true by default. The value false disables the burst control feature.

burst_limit_size

    This is how many requests should be processed within the burst_window_time time frame. After this maximum is reached, successive requests are dropped until the end of the time frame. The default value is 500 messages.

burst_window_time

    The default window is 1000 milliseconds.
A configuration example:
logger:add_handler(my_disk_log_h, logger_disk_log_h,
#{disk_log_opts =>
#{file => "./my_disk_log"},
logger_disk_log_h =>
#{burst_limit_size => 10,
burst_window_time => 500}}).
A handler process may grow large even if it can manage peaks of high load without crashing. The overload protection mechanism includes user configurable levels for a maximum allowed message queue length and maximum allowed memory usage. This feature is disabled by default, but can be switched on by means of the following configuration parameters:
enable_kill_overloaded

    This is set to false by default. The value true enables the feature.

handler_overloaded_qlen

    This is the maximum allowed queue length. If the mailbox grows larger than this, the handler process is terminated.

handler_overloaded_mem

    This is the maximum allowed memory usage of the handler process. If the handler grows any larger, the process is terminated.
handler_restart_after

    If the handler gets terminated because of its queue length or memory usage, it can be automatically restarted again after a configurable delay time. The time is specified in milliseconds, and the default value is 5000. The value never means that no restart takes place.