Logging
Overview

Erlang/OTP provides a standard API for logging. The backend of this API can be used as is, or it can be customized to suit specific needs.

It consists of two parts - the logger part and the handler part. The logger forwards log events to one or more handlers.

Conceptual overview

Filters can be added to the logger and to each handler. The filters decide if an event is to be forwarded or not, and they can also modify all parts of the log event.

A formatter can be set for each handler. The formatter does the final formatting of the log event, including the log message itself, and possibly a timestamp, header and other metadata.

In accordance with the Syslog protocol, RFC-5424, eight severity levels can be specified:

Level      Integer  Description
emergency  0        system is unusable
alert      1        action must be taken immediately
critical   2        critical conditions
error      3        error conditions
warning    4        warning conditions
notice     5        normal but significant conditions
info       6        informational messages
debug      7        debug-level messages

Severity levels

A log event is allowed by Logger if the integer value of its Level is less than or equal to the currently configured log level. For example, if the configured level is notice (5), an info event (6) is discarded, while a warning event (4) is allowed. The log level can be configured globally, or, to allow more verbose logging from a specific part of the system, per module.

Customizable parts

Handler

A handler is defined as a module exporting the following function:

log(Log, Config) -> ok

A handler is called by the logger backend after filtering on logger level, and on handler level for the handler that is about to be called. The function call is made on the client process, and it is up to the handler implementation whether other processes are to be involved or not.

Multiple instances of the same handler can be added. Configuration is per instance.
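As a sketch, two instances of the built-in standard handler (described below) could write to different files; the handler ids and file names here are illustrative:

logger:add_handler(info_file, logger_std_h,
                   #{logger_std_h => #{type => {file,"./info.log"}}}),
logger:add_handler(debug_file, logger_std_h,
                   #{logger_std_h => #{type => {file,"./debug.log"}}}).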

Filter

Filters can be set on the logger or on a handler. Logger filters are applied first, and if passed, the handler filters for each handler are applied. The handler plugin is only called if all handler filters for the handler in question also pass.

A filter is specified as:

{fun((Log,Extra) -> Log | stop | ignore), Extra}

The configuration parameter filter_default specifies the behavior if all filters return ignore. filter_default is by default set to log.

The Extra parameter may contain any data that the filter needs.
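As a sketch, the following filter uses Extra to hold a list of levels to discard. It stops events whose level is in the list, and returns ignore for all others so that filter_default decides; the filter id stop_noise is illustrative:

Fun = fun(#{level:=Level},LevelsToStop) ->
              case lists:member(Level,LevelsToStop) of
                  true -> stop;
                  false -> ignore
              end
      end,
logger:add_logger_filter(stop_noise,{Fun,[debug,info]}).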

Formatter

A formatter is defined as a module exporting the following function:

format(Log,Extra) -> string()

The formatter plugin is called by each handler, and the returned string can be printed to the handler's destination (stdout, file, ...).
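A minimal formatter sketch is shown below. It assumes that the log event map contains level and msg fields, with msg on one of the forms used in the handler examples later in this chapter; the module name is illustrative, and a real formatter would normally also add a timestamp and other metadata:

-module(myformatter).
-export([format/2]).

%% Format the log message depending on which form msg has,
%% prefixing each event with the severity level.
format(#{level:=Level,msg:={string,S}},_Extra) ->
    io_lib:format("[~p] ~ts~n",[Level,S]);
format(#{level:=Level,msg:={report,R}},_Extra) ->
    io_lib:format("[~p] ~p~n",[Level,R]);
format(#{level:=Level,msg:={F,A}},_Extra) ->
    io_lib:format("[~p] ",[Level]) ++ io_lib:format(F,A) ++ "\n".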

Built-in handlers

logger_std_h

This is the default handler used by OTP. Multiple instances can be started, and each instance writes log events to a given destination, console or file. Filters can be used for selecting which events to send to which handler instance.

logger_disk_log_h

This handler behaves much like logger_std_h, except it uses disk_log as its destination.

error_logger

This handler is to be used for backwards compatibility only. It is not started by default, but will be automatically started the first time an event handler is added with error_logger:add_report_handler/1,2.

No built-in event handlers exist.

Built-in filters

logger_filters:domain/2

This filter provides a way of filtering log events based on the domain field in Metadata. See logger_filters:domain/2.

logger_filters:level/2

This filter provides a way of filtering log events based on the log level. See logger_filters:level/2.

logger_filters:progress/2

This filter matches all progress reports from supervisor and application_controller. See logger_filters:progress/2

logger_filters:remote_gl/2

This filter matches all events originating from a process that has its group leader on a remote node. See logger_filters:remote_gl/2

Default formatter

The default formatter is logger_formatter. See logger_formatter:format/2.

Configuration
Application environment variables

See Kernel(6) for information about the application environment variables that can be used for configuring logger.

Logger configuration

level

Specifies the severity level to log.

filters

Logger filters are added or removed with logger:add_logger_filter/2 and logger:remove_logger_filter/1, respectively.

See Filter for more information.

By default, no filters exist.

filter_default = log | stop

Specifies what to do with an event if all filters return ignore.

Default is log.

handlers

Handlers are added or removed with logger:add_handler/3 and logger:remove_handler/1, respectively.

See Handler for more information.

Handler configuration

level

Specifies the severity level to log.

filters

Handler filters can be specified when adding the handler, or added or removed later with logger:add_handler_filter/3 and logger:remove_handler_filter/2, respectively.

See Filter for more information.

By default, no filters exist.

filter_default = log | stop

Specifies what to do with an event if all filters return ignore.

Default is log.

depth = pos_integer() | unlimited

Specifies if the depth of terms in the log events shall be limited by using control characters ~P and ~W instead of ~p and ~w, respectively. See io:format.
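As an illustration (plain io, not specific to Logger), ~P takes the depth as an extra argument and truncates deeper structure:

1> io:format("~p~n",[[1,2,3,4,5,6,7,8,9,10]]).
[1,2,3,4,5,6,7,8,9,10]
ok
2> io:format("~P~n",[[1,2,3,4,5,6,7,8,9,10],5]).
[1,2,3,4|...]
ok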

max_size = pos_integer() | unlimited

Specifies if the size of a log event shall be limited by truncating the formatted string.

formatter = {Module::module(),Extra::term()}

See Formatter for more information.

The default module is logger_formatter, and Extra is its configuration map.
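For example, a handler could be added with an explicit formatter configuration (a sketch; the legacy_header parameter is described in the compatibility section below):

logger:add_handler(my_h, logger_std_h,
                   #{level => info,
                     formatter => {logger_formatter, #{legacy_header => true}}}).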

Note that level and filters are obeyed by Logger itself before forwarding the log events to each handler, while depth, max_size and formatter are left to the handler implementation. All of Logger's built-in handlers apply these configuration parameters before printing.

Backwards compatibility with error_logger

Logger provides backwards compatibility with the old error_logger in the following ways:

Legacy event handlers

To use event handlers written for error_logger, just add your event handler with

error_logger:add_report_handler/1,2.

This will automatically start the error_logger event manager, and add error_logger as a handler to logger, with configuration

#{level=>info, filter_default=>log, filters=>[]}.

Note that this handler will ignore events that do not originate from the old error_logger API, or from within OTP. This means that if your code uses the logger API for logging, then your log events will be discarded by this handler.

Also note that error_logger is not overload protected.

Logger API

The old error_logger API still exists, but should only be used by legacy code. It will be removed in a later release.

Output format

To get log events on the same format as produced by error_logger_tty_h and error_logger_file_h, use the default formatter, logger_formatter, with configuration parameter legacy_header=>true. This is also the default.

Default format of log events from OTP

By default, all log events originating from within OTP, except the former so-called "SASL reports", look the same as before.

SASL reports

By SASL reports we mean supervisor reports, crash reports and progress reports.

In earlier releases, these reports were only logged when the SASL application was running, and they were printed through specific event handlers named sasl_report_tty_h and sasl_report_file_h.

The destination of these log events was configured by environment variables for the SASL application.

Due to the specific event handlers, the output format slightly differed from other log events.

As of OTP-21, the concept of SASL reports is removed, meaning that the default behavior is as follows:

- Supervisor reports, crash reports and progress reports are no longer connected to the SASL application.
- Supervisor reports and crash reports are logged by default.
- Progress reports are not logged by default, but can be enabled with the kernel environment variable logger_log_progress.
- The output format is the same for all log events.

If the old behavior is preferred, the kernel environment variable logger_sasl_compatible can be set to true. The old SASL environment variables can then be used as before, and the SASL reports will only be printed if the SASL application is running - through a second log handler named sasl_h.

All SASL reports have a metadata field domain=>[beam,erlang,otp,sasl], which can be used, for example, by filters to stop or allow the events.
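As a sketch, a logger filter could use this field to stop all SASL reports. The exact layout of the log event map (here assumed to carry the domain inside a meta map) and the filter id stop_sasl are illustrative:

Fun = fun(#{meta:=#{domain:=[beam,erlang,otp,sasl]}},_) -> stop;
         (_,_) -> ignore
      end,
logger:add_logger_filter(stop_sasl,{Fun,[]}).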

Error handling

Log data is expected to be either a format string and arguments, a string (unicode:chardata), or a report (map or key-value list) which can be converted to a format string and arguments by the handler. A default report callback should be included in the log event's metadata, which can be used for converting the report to a format string and arguments. The handler might also do a custom conversion if the default format is not desired.

logger does, to a certain extent, check its input data before forwarding a log event to the handlers, but it does not evaluate conversion funs or check the validity of format strings and arguments. This means that any filter or handler must be careful when formatting the data of a log event, making sure that it does not crash due to bad input data or faulty callbacks.

If a filter or handler still crashes, logger will remove the filter or handler in question from the configuration, and then print a short error message on the console. A debug event containing the crash reason and other details is also issued, and can be seen if a handler is installed which logs on debug level.

Example: add a handler to log debug events to file

When starting an Erlang node, the default behavior is that all log events with level info and above are logged to the console. In order to also log debug events, you can either change the global log level to debug, or add a separate handler to take care of this. In this example, we will add a new handler which prints the debug events to a separate file.

First, we add an instance of logger_std_h with type {file,File}, and we set the handler's level to debug:

1> Config = #{level=>debug,logger_std_h=>#{type=>{file,"./debug.log"}}}.
#{logger_std_h => #{type => {file,"./debug.log"}},
  level => debug}
2> logger:add_handler(debug_handler,logger_std_h,Config).
ok

By default, the handler receives all events, so we need to add a filter to stop all non-debug events:

3> Fun = fun(#{level:=debug}=Log,_) -> Log; (_,_) -> stop end.
#Fun<erl_eval.12.98642416>
4> logger:add_handler_filter(debug_handler,allow_debug,{Fun,[]}).
ok

And finally, we need to make sure that the logger itself allows debug events. This can either be done by setting the global logger level:

5> logger:set_logger_config(level,debug).
ok

Or by allowing debug events from one or a few modules only:

6> logger:set_module_level(mymodule,debug).
ok
Example: implement a handler

The only requirement that a handler MUST fulfill is to export the following function:

log(logger:log(),logger:config()) -> ok

It may also implement the following callbacks:

adding_handler(logger:handler_id(),logger:config()) -> {ok,logger:config()} | {error,term()}
removing_handler(logger:handler_id(),logger:config()) -> ok
changing_config(logger:handler_id(),logger:config(),logger:config()) -> {ok,logger:config()} | {error,term()}

When logger:add_handler(Id,Module,Config) is called, logger will first call Module:adding_handler(Id,Config), and if it returns {ok,NewConfig} the NewConfig is written to the configuration database. After this, the handler may receive log events as calls to Module:log/2.

A handler can be removed by calling logger:remove_handler(Id). logger will call Module:removing_handler(Id,Config), and then remove the handler's configuration from the configuration database.

When logger:set_handler_config is called, logger calls Module:changing_config(Id,OldConfig,NewConfig). If this function returns {ok,NewConfig}, the NewConfig is written to the configuration database.
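As a sketch, a changing_config callback could validate the new configuration before accepting it; the rule below (only the level may be changed) is illustrative:

changing_config(_Id,OldConfig,NewConfig) ->
    %% Accept the change only if everything except level is unchanged.
    case maps:remove(level,OldConfig) =:= maps:remove(level,NewConfig) of
        true ->
            {ok,NewConfig};
        false ->
            {error,{unsupported_config_change,NewConfig}}
    end.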

A simple handler which prints to the console could be implemented as follows:

-module(myhandler).
-export([log/2]).

log(#{msg:={report,R}},_) ->
    io:format("~p~n",[R]);
log(#{msg:={string,S}},_) ->
    io:put_chars(S);
log(#{msg:={F,A}},_) ->
    io:format(F,A).

A simple handler which prints to file could be implemented like this:

-module(myhandler).
-export([adding_handler/2, removing_handler/2, log/2]).

%% The file name is taken from the handler configuration
%% (here assumed to be stored under the key myhandler_file).
adding_handler(_Id,#{myhandler_file:=File}=Config) ->
    {ok,Fd} = file:open(File,[append,{encoding,utf8}]),
    {ok,Config#{myhandler_fd=>Fd}}.

removing_handler(_Id,#{myhandler_fd:=Fd}) ->
    _ = file:close(Fd),
    ok.

log(#{msg:={report,R}},#{myhandler_fd:=Fd}) ->
    io:format(Fd,"~p~n",[R]);
log(#{msg:={string,S}},#{myhandler_fd:=Fd}) ->
    io:put_chars(Fd,S);
log(#{msg:={F,A}},#{myhandler_fd:=Fd}) ->
    io:format(Fd,F,A).

Note that none of the above handlers have any overload protection, and all log events are printed directly from the client process. Neither do the handlers use the formatter or in any way add time or other metadata to the printed events.

For examples of overload protection, please refer to the implementation of logger_std_h and logger_disk_log_h.

Below is a simpler example of a handler which logs through one single process, and uses the default formatter to gain a common look of the log events.

It also uses the metadata field report_cb, if it exists, to print reports in the way the event issuer suggests. The formatter normally does this, but if the handler either has its own default (as in this example), or if the given report_cb should not be used at all, then the handler must take care of this itself.

-module(myhandler).
-export([adding_handler/2, removing_handler/2, log/2]).
-export([init/1, handle_call/3, handle_cast/2, terminate/2]).

adding_handler(_Id,Config) ->
    {ok,Pid} = gen_server:start(?MODULE,Config,[]),
    {ok,Config#{myhandler_pid=>Pid}}.

removing_handler(_Id,#{myhandler_pid:=Pid}) ->
    gen_server:stop(Pid).

log(Log,#{myhandler_pid:=Pid} = Config) ->
    gen_server:cast(Pid,{log,Log,Config}).

init(#{myhandler_file:=File}) ->
    {ok,Fd} = file:open(File,[append,{encoding,utf8}]),
    {ok,#{file=>File,fd=>Fd}}.

handle_call(_,_,State) ->
    {reply,{error,bad_request},State}.

handle_cast({log,Log,Config},#{fd:=Fd} = State) ->
    do_log(Fd,Log,Config),
    {noreply,State}.

terminate(_Reason,#{fd:=Fd}) ->
    _ = file:close(Fd),
    ok.

%% Use the event issuer's report_cb from the metadata if it exists
%% (the metadata is assumed to be found under the meta key of the
%% log event map), otherwise fall back to this handler's own default.
do_log(Fd,#{msg:={report,R},meta:=Meta} = Log,Config) ->
    Fun = maps:get(report_cb,Meta,fun my_report_cb/1),
    {F,A} = Fun(R),
    do_log(Fd,Log#{msg=>{F,A}},Config);
do_log(Fd,Log,#{formatter:={FModule,FConfig}}) ->
    String = FModule:format(Log,FConfig),
    io:put_chars(Fd,String).

my_report_cb(R) ->
    {"~p",[R]}.
Protecting the handler from overload

In order for the built-in handlers to survive, and stay responsive, during periods of high load (i.e. when huge numbers of incoming log requests must be handled), a mechanism for overload protection has been implemented in the logger_std_h and logger_disk_log_h handlers. The mechanism, used by both handlers, works as follows:

Message queue length

The handler process keeps track of the length of its message queue and reacts in different ways depending on the current status. The purpose is to keep the handler in, or quickly get it back into, a state where it can keep up with the pace of incoming log requests. The memory usage of the handler must never be allowed to keep growing, since that would eventually cause the handler to crash. Three thresholds with associated actions have been defined:

toggle_sync_qlen

The default value of this level is 10 messages, and as long as the length of the message queue is lower, all log requests are handled asynchronously. This simply means that the process sending the log request (by calling a log function in the logger API) does not wait for a response from the handler but continues executing immediately after the request (i.e. it is not affected by the time it takes the handler to print to the log device). If the message queue grows larger than this value, however, the handler starts handling the log requests synchronously instead, meaning that the process sending the request must wait for a response. When the handler manages to reduce the message queue to a level below the toggle_sync_qlen threshold, asynchronous operation is resumed. The switch from asynchronous to synchronous mode forces the logging tempo of a few busy senders to slow down, but cannot protect the handler sufficiently in situations of many concurrent senders.

drop_new_reqs_qlen

When the message queue has grown larger than this threshold, which defaults to 200 messages, the handler switches to a mode in which it drops any new requests being made. Dropping a message in this state means that the log function never actually sends a message to the handler. The log call simply returns without an action. When the length of the message queue has been reduced to a level below this threshold, synchronous or asynchronous request handling mode is resumed.

flush_reqs_qlen

Above this threshold, which defaults to 1000 messages, a flush operation takes place, in which all messages buffered in the process mailbox get deleted without any logging actually taking place. (Processes waiting for a response from a synchronous log request will receive a reply indicating that the request has been dropped).

For the overload protection algorithm to work properly, it is a requirement that:

toggle_sync_qlen < drop_new_reqs_qlen < flush_reqs_qlen

During high load scenarios, the length of the handler message queue rarely grows in a linear and predictable way. Instead, whenever the handler process gets scheduled in, it can have an almost arbitrary number of messages waiting in the mailbox. It's for this reason that the overload protection mechanism is focused on acting quickly and quite drastically (such as immediately dropping or flushing messages) as soon as a large queue length is detected.

The thresholds listed above may be modified by the user if, e.g., a handler should not drop or flush messages unless the message queue length grows extremely large. (The handler must be allowed to use large amounts of memory under such circumstances, however.) Another example of when the user might want to change the settings is if, for performance reasons, the logging processes must never be blocked by synchronous log requests, while dropping or flushing requests is perfectly acceptable (since it does not affect the performance of the loggers).

A configuration example:

logger:add_handler(my_standard_h, logger_std_h,
                   #{logger_std_h => #{type => {file,"./system_info.log"},
                                       toggle_sync_qlen => 100,
                                       drop_new_reqs_qlen => 1000,
                                       flush_reqs_qlen => 2000}}).
Controlling bursts of log requests

A potential problem with large bursts of log requests is that log files may get full or wrapped too quickly (in the latter case overwriting previously logged data that could be of great importance). For this reason, both built-in handlers offer the possibility to set a maximum of how many requests to process within a certain time frame. With this burst control feature enabled, the handler will take care of bursts of log requests without choking log files, or the console, with massive amounts of printouts. These are the configuration parameters:

enable_burst_limit

This is set to true by default. The value false disables the burst control feature.

burst_limit_size

This is how many requests should be processed within the burst_window_time time frame. After this maximum has been reached, successive requests will be dropped until the end of the time frame. The default value is 500 messages.

burst_window_time

The default window is 1000 milliseconds long.

A configuration example:

logger:add_handler(my_disk_log_h, logger_disk_log_h,
                   #{disk_log_opts => #{file => "./my_disk_log"},
                     logger_disk_log_h => #{burst_limit_size => 10,
                                            burst_window_time => 500}}).
Terminating a large handler

A handler process may grow large even if it can manage peaks of high load without crashing. The overload protection mechanism includes user configurable levels for a maximum allowed message queue length and maximum allowed memory usage. This feature is disabled by default, but can be switched on by means of the following configuration parameters:

enable_kill_overloaded

This is set to false by default. The value true enables the feature.

handler_overloaded_qlen

This is the maximum allowed queue length. If the mailbox grows larger than this, the handler process gets terminated.

handler_overloaded_mem

This is the maximum allowed memory usage of the handler process. If the handler grows any larger, the process gets terminated.

handler_restart_after

If the handler gets terminated because of its queue length or memory usage, it can get automatically restarted again after a configurable delay time. The time is specified in milliseconds and 5000 is the default value. The value never can also be set, which prevents a restart.
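A configuration sketch along the lines of the earlier examples (the threshold values are illustrative):

logger:add_handler(my_standard_h, logger_std_h,
                   #{logger_std_h =>
                         #{type => {file,"./system_info.log"},
                           enable_kill_overloaded => true,
                           handler_overloaded_qlen => 100000,
                           handler_overloaded_mem => 10000000,
                           handler_restart_after => 10000}}).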

See Also

error_logger(3), SASL(6)