Erlang/OTP provides a standard API for logging. The backend of this API can be used as is, or it can be customized to suit specific needs.
It consists of two parts: the logger part and the handler part. The logger forwards log events to one or more handlers.
Filters can be added to the logger and to each handler. The filters decide if an event is to be forwarded or not, and they can also modify all parts of the log event.
A formatter can be set for each handler. The formatter does the final formatting of the log event, including the log message itself, and possibly a timestamp, header and other metadata.
In accordance with the Syslog protocol, RFC-5424, eight severity levels can be specified: emergency, alert, critical, error, warning, notice, info, and debug, where emergency is the most severe and debug the least severe level.
A log event is allowed by Logger if the integer value of its severity level is less than or equal to the integer value of the currently configured level, that is, if the event is equally or more severe than the configured level. For example, if the configured level is warning, then events with level warning, error, critical, alert, and emergency pass, while notice, info, and debug events are discarded.
A handler is defined as a module exporting the following function:
log(Log, Config) -> ok
A handler is called by the logger backend after filtering on logger level and on handler level for the handler in question. The function call is made on the client process, and it is up to the handler implementation whether other processes are to be involved or not.
Multiple instances of the same handler can be added. Configuration is per instance.
Filters can be set on the logger or on a handler. Logger filters are applied first, and if passed, the handler filters for each handler are applied. The handler plugin is only called if all handler filters for the handler in question also pass.
A filter is specified as:
{fun((Log,Extra) -> Log | stop | ignore), Extra}
The configuration parameter filter_default specifies what happens if all filter functions return ignore: if filter_default is set to log, the event is forwarded, and if it is set to stop, the event is discarded. The Extra parameter may contain any data that the filter function needs.
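As an illustration, here is a minimal filter sketch that stops all events reported from a module my_noisy_module, and lets all other events pass on to the next filter. Both the module name and the filter id are made up for the example, and note that the mfa metadata field is only present when the log macros are used:

Filter = {fun(#{meta:=#{mfa:={my_noisy_module,_,_}}},_Extra) ->
                  %% Stop events reported from my_noisy_module
                  stop;
             (_Log,_Extra) ->
                  %% No opinion on other events; filter_default decides
                  ignore
          end,
          []}.
logger:add_logger_filter(stop_noisy_module, Filter).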
A formatter is defined as a module exporting the following function:
format(Log,Extra) -> string()
The formatter plugin is called by each handler, and the returned string can be printed to the handler's destination (stdout, file, ...).
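As an illustration, a minimal formatter module could look as follows. This sketch only prints the severity level and the raw message term, ignoring metadata and its own configuration; a real formatter would normally also handle timestamps and report callbacks:

-module(my_formatter).
-export([format/2]).

%% Print the level and the raw msg field on one line. The msg
%% field is {report,Report}, {string,String} or {Format,Args},
%% all of which are simply printed with ~p in this sketch.
format(#{level:=Level,msg:=Msg},_FConfig) ->
    lists:flatten(io_lib:format("[~p] ~p~n",[Level,Msg])).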
logger_std_h is the default handler used by OTP. Multiple instances can be started, and each instance will write log events to a given destination, console or file. Filters can be used for selecting which events to send to which handler instance.
This handler, logger_disk_log_h, behaves much like logger_std_h, except it uses disk_log as its destination.
This handler, error_logger, is to be used for backwards compatibility only. It is not started by default, but will be automatically started the first time an error_logger event handler is added with error_logger:add_report_handler/1,2.
No built-in event handlers exist.
This filter provides a way of filtering log events based on a domain field in the event's metadata. See logger_filters:domain/2.
This filter provides a way of filtering log events based on the log level. See logger_filters:level/2.
This filter matches all progress reports from supervisor and application_controller. See logger_filters:progress/2.
This filter matches all events originating from a process that has its group leader on a remote node. See logger_filters:remote_gl/2.
The default formatter is logger_formatter. See its documentation for details.
The logger part of Logger has the following configuration parameters:
level - Specifies the severity level to log.
filters - Logger filters are added or removed with logger:add_logger_filter/2 and logger:remove_logger_filter/1, respectively. See the Filters section above for more details. By default, no filters exist.
filter_default - Specifies what to do with an event if all filters return ignore. Default is log (see the example after this list).
handlers - Handlers are added or removed with logger:add_handler/3 and logger:remove_handler/1, respectively. See the Handlers section above for more details.
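For example, to discard, rather than log, events for which all logger filters return ignore, the filter_default parameter can be changed like this:

logger:set_logger_config(filter_default, stop).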
Each handler has the following configuration parameters:
level - Specifies the severity level to log.
filters - Handler filters can be specified when adding the handler, or added or removed later with logger:add_handler_filter/3 and logger:remove_handler_filter/2, respectively. See the Filters section above for more details. By default, no filters exist.
filter_default - Specifies what to do with an event if all filters return ignore. Default is log.
depth - Specifies if the depth of terms in the log events shall be limited, by using the control characters ~P and ~W instead of ~p and ~w, respectively, when formatting.
max_size - Specifies if the size of a log event shall be limited by truncating the formatted string. See the logger_formatter documentation for more information.
formatter - The default module is logger_formatter. Note that the formatter configuration is specific to the formatter module in use; see the documentation of the formatter in question for its parameters.
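Combining some of these parameters, a handler instance that logs error and more severe events to a file could, for example, be added like this (the handler id and file name are made up):

logger:add_handler(my_error_h, logger_std_h,
                   #{level => error,
                     filter_default => log,
                     logger_std_h => #{type => {file,"./error.log"}}}).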
Logger provides backwards compatibility with the old error_logger in the following ways. To use event handlers written for error_logger, just add your event handler with error_logger:add_report_handler/1,2. This will automatically start the error_logger event manager, and add error_logger as a handler to the logger, with the following configuration:
#{level=>info,
  filter_default=>log,
  filters=>[]}.
Note that this handler will ignore events that do not originate from the old error_logger API, or from within OTP. This means that if your code uses the Logger API for logging, then your log events will be discarded by this handler. Also note that error_logger is not overload protected.
The old error_logger event manager, and event handlers running on it, still work, but they are not used by default. To get log events on the same format as produced by error_logger_tty_h and error_logger_file_h, use the default formatter, logger_formatter, with configuration parameter legacy_header=>true. This is also the default.
By default, all log events originating from within OTP, except the former so-called "SASL reports", look the same as before. By SASL reports we mean supervisor reports, crash reports, and progress reports.
In earlier releases, these reports were only logged when the SASL application was running, and they were printed through specific event handlers named sasl_report_tty_h and sasl_report_file_h.
The destination of these log events was configured by environment variables for the SASL application.
Due to the specific event handlers, the output format slightly differed from other log events.
As of OTP-21, the concept of SASL reports is removed, meaning that the default behavior is as follows: supervisor reports and crash reports are logged by default, progress reports are not logged by default, and the output format is the same for all log events.
If the old behavior is preferred, the kernel environment variable logger_sasl_compatible can be set to true. The old SASL environment variables can then be used as before, and the SASL reports will only be printed if the SASL application is running, through its own specific event handlers.
All SASL reports have a metadata field domain=>[beam,erlang,otp,sasl], which can be used, for example, by filters to stop or allow the events.
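This can be used, for example, to stop all SASL reports for a given handler. The following sketch does this with a plain filter fun; the handler id my_handler is made up, and the built-in logger_filters:domain/2 filter can be used for the same purpose:

NoSasl = {fun(#{meta:=#{domain:=[beam,erlang,otp,sasl|_]}},_) ->
                  %% Stop events carrying the SASL domain
                  stop;
             (_,_) ->
                  ignore
          end, []}.
logger:add_handler_filter(my_handler, no_sasl, NoSasl).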
Log data is expected to be either a format string and arguments, a string (unicode:chardata()), or a report (map or key-value list) that can be converted to a format string and arguments by the handler. A default report callback should be included in the log event's metadata, which can be used for converting the report to a format string and arguments. The handler might also do a custom conversion if the default format is not desired.
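For illustration, the three kinds of log data could be produced as follows; the report keys and the report_cb fun are made up for the example:

%% A format string and arguments:
logger:error("open failed for ~ts: ~p", ["/tmp/noexist", enoent]).

%% A plain string:
logger:notice("service started").

%% A report (here a map), with a report callback in the metadata
%% which converts the report to a format string and arguments:
logger:info(#{event => open_failed, file => "/tmp/noexist", reason => enoent},
            #{report_cb => fun(#{file:=F, reason:=R}) ->
                                   {"open failed for ~ts: ~p", [F,R]}
                           end}).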
If a filter or handler still crashes, Logger will remove the filter or handler in question from the configuration, and then print a short error message on the console. A debug event containing the crash reason and other details is also issued, and can be seen if a handler logging on debug level is installed.
When starting an Erlang node, the default behavior is that all log events with level info and above are logged to the console. In order to also log debug events, you can either change the global log level to debug, or add a separate handler to take care of this. In this example, we will add a new handler which prints the debug events to a separate file.
First, we add an instance of logger_std_h with type {file,File}, and we set the handler's level to debug:
1> Config = #{level=>debug,logger_std_h=>#{type=>{file,"./debug.log"}}}.
#{logger_std_h => #{type => {file,"./debug.log"}},
  level => debug}
2> logger:add_handler(debug_handler,logger_std_h,Config).
ok
By default, the handler receives all events, so we need to add a filter to stop all non-debug events:
3> Fun = fun(#{level:=debug}=Log,_) -> Log; (_,_) -> stop end.
#Fun<erl_eval.12.98642416>
4> logger:add_handler_filter(debug_handler,allow_debug,{Fun,[]}).
ok
And finally, we need to make sure that the logger itself allows debug events. This can either be done by setting the global logger level:
5> logger:set_logger_config(level,debug).
ok
Or by allowing debug events from one or a few modules only:
6> logger:set_module_level(mymodule,debug).
ok
The only requirement that a handler MUST fulfill is to export the following function:
log(logger:log(),logger:config()) -> ok
It may also implement the following callbacks:
adding_handler(logger:handler_id(),logger:config()) -> {ok,logger:config()} | {error,term()}
removing_handler(logger:handler_id(),logger:config()) -> ok
changing_config(logger:handler_id(),logger:config(),logger:config()) -> {ok,logger:config()} | {error,term()}
When logger:add_handler(Id,Module,Config) is called, Logger will first call Module:adding_handler(Id,Config), and if it returns {ok,NewConfig}, then NewConfig is written to the configuration database. After this, the handler may receive log events as calls to Module:log/2.
A handler can be removed by calling logger:remove_handler(Id). Logger will call Module:removing_handler(Id,Config), and then remove the handler's configuration from the configuration database.
When logger:set_handler_config is called, Logger calls Module:changing_config(Id,OldConfig,NewConfig). If this function returns {ok,NewConfig}, then NewConfig is written to the configuration database.
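For example, a handler could use changing_config to reject changes to a parameter it cannot change on the fly. The following sketch, using the made-up myhandler_file key from the examples below, accepts a new configuration only if the file name is unchanged:

%% Accept the new config only if myhandler_file is the same in
%% the old and the new configuration:
changing_config(_Id,#{myhandler_file:=File},#{myhandler_file:=File}=NewConfig) ->
    {ok,NewConfig};
changing_config(_Id,_OldConfig,_NewConfig) ->
    {error,file_change_not_allowed}.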
A simple handler which prints to the console could be implemented as follows:
-module(myhandler).
-export([log/2]).
log(#{msg:={report,R}},_) ->
    io:format("~p~n",[R]);
log(#{msg:={string,S}},_) ->
    io:put_chars(S);
log(#{msg:={F,A}},_) ->
    io:format(F,A).
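Assuming the module is compiled and loaded, this handler could be installed under a handler id of our choosing (my_console_h is made up):

logger:add_handler(my_console_h, myhandler, #{}).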
A simple handler which prints to file could be implemented like this:
-module(myhandler).
-export([adding_handler/2, removing_handler/2, log/2]).

adding_handler(Id,#{myhandler_file:=File}=Config) ->
    {ok,Fd} = file:open(File,[append,{encoding,utf8}]),
    {ok,Config#{myhandler_fd=>Fd}}.
removing_handler(Id,#{myhandler_fd:=Fd}) ->
    _ = file:close(Fd),
    ok.

log(#{msg:={report,R}},#{myhandler_fd:=Fd}) ->
    io:format(Fd,"~p~n",[R]);
log(#{msg:={string,S}},#{myhandler_fd:=Fd}) ->
    io:put_chars(Fd,S);
log(#{msg:={F,A}},#{myhandler_fd:=Fd}) ->
    io:format(Fd,F,A).
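This handler could then be added with the (made-up) myhandler_file key pointing at the file to log to:

logger:add_handler(my_file_h, myhandler, #{myhandler_file => "./my.log"}).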
Note that none of the above handlers have any overload protection, and all log events are printed directly from the client process. Nor do the handlers use a formatter, or in any way add a timestamp or other metadata to the printed events.
For examples of overload protection, please refer to the implementation of logger_std_h and logger_disk_log_h.
Below is a simpler example of a handler which logs through one single process, and uses the default formatter to gain a common look of the log events. It also uses the metadata field report_cb, if it exists, for converting reports to a format string and arguments.
-module(myhandler).
-export([adding_handler/2, removing_handler/2, log/2]).
-export([init/1, handle_call/3, handle_cast/2, terminate/2]).
adding_handler(Id,Config) ->
    {ok,Pid} = gen_server:start(?MODULE,Config,[]),
    {ok,Config#{myhandler_pid=>Pid}}.
removing_handler(Id,#{myhandler_pid:=Pid}) ->
    gen_server:stop(Pid).

log(Log,#{myhandler_pid:=Pid} = Config) ->
    gen_server:cast(Pid,{log,Log,Config}).
init(#{myhandler_file:=File}) ->
    {ok,Fd} = file:open(File,[append,{encoding,utf8}]),
    {ok,#{file=>File,fd=>Fd}}.

handle_call(_,_,State) ->
    {reply,{error,bad_request},State}.

handle_cast({log,Log,Config},#{fd:=Fd} = State) ->
    do_log(Fd,Log,Config),
    {noreply,State}.
terminate(_Reason,#{fd:=Fd}) ->
    _ = file:close(Fd),
    ok.
do_log(Fd,#{msg:={report,R},meta:=Meta} = Log, Config) ->
    Fun = maps:get(report_cb,Meta,fun my_report_cb/1),
    {F,A} = Fun(R),
    do_log(Fd,Log#{msg=>{F,A}},Config);
do_log(Fd,Log,#{formatter:={FModule,FConfig}}) ->
    String = FModule:format(Log,FConfig),
    io:put_chars(Fd,String).

my_report_cb(R) ->
    {"~p",[R]}.
In order for the built-in handlers to survive, and stay responsive, during periods of high load (that is, when huge numbers of incoming log requests must be handled), a mechanism for overload protection has been implemented in the logger_std_h and logger_disk_log_h handlers.
The handler process keeps track of the length of its message queue and reacts in different ways depending on the current status. The purpose is to keep the handler in, or quickly get it back into, a state where it can keep up with the pace of incoming log requests. The memory usage of the handler must never be allowed to grow indefinitely, since that would eventually cause the handler to crash. Three thresholds with associated actions have been defined:
toggle_sync_qlen - The default value of this level is 10 messages. As long as the message queue is shorter than this, all log requests are handled asynchronously; when the queue grows above the threshold, the handler switches to synchronous mode, in which the client process is blocked until its log request has been handled.
drop_new_reqs_qlen - When the message queue has grown larger than this threshold, which defaults to 200 messages, the handler switches to a mode in which it drops all new requests being made. Dropping a message in this state means that the log function returns without a message ever being sent to the handler.
flush_reqs_qlen - Above this threshold, which defaults to 1000 messages, a flush operation takes place: all messages buffered in the mailbox are deleted without any logging actually taking place.
For the overload protection algorithm to work properly, it is a requirement that toggle_sync_qlen =< drop_new_reqs_qlen =< flush_reqs_qlen.
During high load scenarios, the length of the handler message queue rarely grows in a linear and predictable way. Instead, whenever the handler process gets scheduled in, it can have an almost arbitrary number of messages waiting in the mailbox. It's for this reason that the overload protection mechanism is focused on acting quickly and quite drastically (such as immediately dropping or flushing messages) as soon as a large queue length is detected.
The thresholds listed above may be modified by the user if, for example, a handler shouldn't drop or flush messages unless the message queue length grows extremely large (the handler must then be allowed to use large amounts of memory, however). Another example of when the user might want to change the settings is if, for performance reasons, the logging processes must never get blocked by synchronous log requests, while dropping or flushing requests is perfectly acceptable (since it doesn't affect the performance of the loggers).
A configuration example:
logger:add_handler(my_standard_h, logger_std_h,
                   #{logger_std_h =>
                         #{type => {file,"./system_info.log"},
                           toggle_sync_qlen => 100,
                           drop_new_reqs_qlen => 1000,
                           flush_reqs_qlen => 2000}}).
A potential problem with large bursts of log requests is that log files may get full or wrapped too quickly (in the latter case overwriting previously logged data that could be of great importance). For this reason, both built-in handlers offer the possibility to set a maximum number of requests to process within a certain time frame. With this burst control feature enabled, the handler will take care of bursts of log requests without choking log files, or the console, with massive amounts of printouts. These are the configuration parameters:
enable_burst_limit - This is set to true by default. The value false disables the burst control feature.
burst_limit_size - This is how many requests should be processed within the burst_window_time time frame. After this maximum has been reached, successive requests are dropped until the end of the time frame. The default value is 500 requests.
burst_window_time - The default window is 1000 milliseconds.
A configuration example:
logger:add_handler(my_disk_log_h, logger_disk_log_h,
                   #{disk_log_opts =>
                         #{file => "./my_disk_log"},
                     logger_disk_log_h =>
                         #{burst_limit_size => 10,
                           burst_window_time => 500}}).
A handler process may grow large even if it can manage peaks of high load without crashing. The overload protection mechanism includes user configurable levels for a maximum allowed message queue length and maximum allowed memory usage. This feature is disabled by default, but can be switched on by means of the following configuration parameters:
enable_kill_overloaded - This is set to false by default. The value true enables the feature.
handler_overloaded_qlen - This is the maximum allowed queue length. If the mailbox grows larger than this, the handler process gets terminated.
handler_overloaded_mem - This is the maximum allowed memory usage of the handler process. If the handler grows any larger, the process gets terminated.
If the handler gets terminated because of its queue length or memory usage, it can get automatically restarted again after a configurable delay time, specified with the handler_restart_after parameter. The time is specified in milliseconds, and the default value is 5000 ms. The value never prevents restarts.
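Following the same pattern as the earlier configuration examples, these parameters could, for example, be set like this (the threshold values are made up):

logger:add_handler(my_standard_h, logger_std_h,
                   #{logger_std_h =>
                         #{enable_kill_overloaded => true,
                           handler_overloaded_qlen => 100000,
                           handler_overloaded_mem => 10000000,
                           handler_restart_after => 5000}}).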