<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE chapter SYSTEM "chapter.dtd">

<chapter>
  <header>
    <copyright>
      <year>2017</year>
      <holder>Ericsson AB. All Rights Reserved.</holder>
    </copyright>
    <legalnotice>
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
 
          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License.
    
    </legalnotice>

    <title>Logging</title>
    <prepared></prepared>
    <docno></docno>
    <date></date>
    <rev></rev>
    <file>logger_chapter.xml</file>
  </header>

  <section>
    <title>Overview</title>
    <p>Erlang/OTP provides a standard API for logging. The backend of
      this API can be used as is, or it can be customized to suit
      specific needs.</p>
    <p>It consists of two parts - the <em>logger</em> part and the
      <em>handler</em> part. The logger forwards log events to one
      or more handlers.</p>

    <image file="logger_arch.png">
      <icaption>Conceptual overview</icaption>
    </image>

    <p><em>Filters</em> can be added to the logger and to each
      handler. The filters decide if an event is to be forwarded or
      not, and they can also modify all parts of the log event.</p>

    <p>A <em>formatter</em> can be set for each handler. The formatter
      does the final formatting of the log event, including the log
      message itself, and possibly a timestamp, header and other
      metadata.</p>

    <p>In accordance with the Syslog protocol, RFC-5424, eight
      severity levels can be specified:</p>

    <table align="left">
      <row>
	<cell><strong>Level</strong></cell>
	<cell align="center"><strong>Integer</strong></cell>
	<cell><strong>Description</strong></cell>
      </row>
      <row>
	<cell>emergency</cell>
	<cell align="center">0</cell>
	<cell>system is unusable</cell>
      </row>
      <row>
	<cell>alert</cell>
	<cell align="center">1</cell>
	<cell>action must be taken immediately</cell>
      </row>
      <row>
	<cell>critical</cell>
	<cell align="center">2</cell>
	<cell>critical conditions</cell>
      </row>
      <row>
	<cell>error</cell>
	<cell align="center">3</cell>
	<cell>error conditions</cell>
      </row>
      <row>
	<cell>warning</cell>
	<cell align="center">4</cell>
	<cell>warning conditions</cell>
      </row>
      <row>
	<cell>notice</cell>
	<cell align="center">5</cell>
	<cell>normal but significant conditions</cell>
      </row>
      <row>
	<cell>info</cell>
	<cell align="center">6</cell>
	<cell>informational messages</cell>
      </row>
      <row>
	<cell>debug</cell>
	<cell align="center">7</cell>
	<cell>debug-level messages</cell>
      </row>
      <tcaption>Severity levels</tcaption>
    </table>

    <p>A log event is allowed by Logger if the integer value of
      its <c>Level</c> is less than or equal to the currently
      configured log level. The log level can be configured globally,
      or to allow more verbose logging from a specific part of the
      system, per module.</p>

    <section>
      <title>Customizable parts</title>

      <taglist>
	<tag><marker id="Handler"/>Handler</tag>
	<item>
	  <p>A handler is defined as a module exporting the following
	    function:</p>

	  <code>log(Log, Config) -> ok</code>

	  <p>A handler is called by the logger backend after filtering on
	    logger level and on handler level for the handler which is
	    about to be called. The function call is made on the client
	    process, and it is up to the handler implementation whether
	    other processes are involved or not.</p>

	  <p>Multiple instances of the same handler can be
	    added. Configuration is per instance.</p>

	</item>

	<tag><marker id="Filter"/>Filter</tag>
	<item>
	  <p>Filters can be set on the logger or on a handler. Logger
	    filters are applied first, and if passed, the handler filters
	    for each handler are applied. The handler plugin is only
	    called if all handler filters for the handler in question also
	    pass.</p>

	  <p>A filter is specified as:</p>

	  <code>{fun((Log,Extra) -> Log | stop | ignore), Extra}</code>

	  <p>The configuration parameter <c>filter_default</c>
	    specifies the behavior if all filters return <c>ignore</c>.
	    <c>filter_default</c> is by default set to <c>log</c>.</p>

	  <p>The <c>Extra</c> parameter may contain any data that the
	    filter needs.</p>
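
	  <p>As an illustration, the following filter passes on events
	    at the level given as <c>Extra</c> unchanged, and
	    returns <c>ignore</c> for all other events, leaving the
	    decision to <c>filter_default</c>:</p>

	  <code>
{fun(#{level:=Level}=Log,Level) -> Log;
    (_Log,_Extra) -> ignore
 end,
 debug}
	  </code>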
	</item>

	<tag><marker id="Formatter"/>Formatter</tag>
	<item>
	  <p>A formatter is defined as a module exporting the following
	    function:</p>

	  <code>format(Log,Extra) -> unicode:chardata()</code>

	  <p>The formatter plugin is called by each handler, and the
	    returned string can be printed to the handler's destination
	    (stdout, file, ...).</p>
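
	  <p>As a minimal sketch of a formatter module (the exact
	    structure of the log event map, with a <c>msg</c> key
	    holding a format string and arguments, is an assumption
	    here):</p>

	  <code>
-module(myformatter).
-export([format/2]).

%% Prefix the formatted message with its severity level.
%% Extra is the formatter configuration, unused in this sketch.
format(#{level:=Level,msg:={Format,Args}},_Extra) ->
    io_lib:format("[~p] "++Format++"~n",[Level|Args]).
	  </code>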
	</item>

      </taglist>
    </section>

    <section>
      <title>Built-in handlers</title>

      <taglist>
	<tag><c>logger_std_h</c></tag>
	<item>
	  <p>This is the default handler used by OTP. Multiple instances
	    can be started, and each instance will write log events to a
	    given destination, console or file. Filters can be used for
	    selecting which event to send to which handler instance.</p>
	</item>

	<tag><c>logger_disk_log_h</c></tag>
	<item>
	  <p>This handler behaves much like logger_std_h, except it uses
	    <seealso marker="disk_log"><c>disk_log</c></seealso> as its
	    destination.</p>
	</item>

	<tag><marker id="ErrorLoggerManager"/><c>error_logger</c></tag>
	<item>
	  <p>This handler is to be used for backwards compatibility
	    only. It is not started by default, but will be automatically
	    started the first time an event handler is added
	    with <seealso marker="error_logger#add_report_handler-1">
	      <c>error_logger:add_report_handler/1,2</c></seealso>.</p>

	  <p>No built-in event handlers exist.</p>
	</item>
      </taglist>
    </section>

    <section>
      <title>Built-in filters</title>

      <taglist>
	<tag><c>logger_filters:domain/2</c></tag>
	<item>
	  <p>This filter provides a way of filtering log events based on a
	    <c>domain</c> field in <c>Metadata</c>. See
	    <seealso marker="logger_filters#domain-2">
	    <c>logger_filters:domain/2</c></seealso>.</p>
	</item>

	<tag><c>logger_filters:level/2</c></tag>
	<item>
	  <p>This filter provides a way of filtering log events based
	    on the log level. See <seealso marker="logger_filters#level-2">
	    <c>logger_filters:level/2</c></seealso>.</p>
	</item>

	<tag><c>logger_filters:progress/2</c></tag>
	<item>
	  <p>This filter matches all progress reports
	    from <c>supervisor</c> and <c>application_controller</c>.
	    See <seealso marker="logger_filters#progress/2">
	    <c>logger_filters:progress/2</c></seealso></p>
	</item>

	<tag><c>logger_filters:remote_gl/2</c></tag>
	<item>
	  <p>This filter matches all events originating from a process
	    that has its group leader on a remote node.
	    See <seealso marker="logger_filters#remote_gl/2">
	    <c>logger_filters:remote_gl/2</c></seealso></p>
	</item>
      </taglist>
    </section>

    <section>
      <title>Default formatter</title>

      <p>The default formatter is <c>logger_formatter</c>.
	See <seealso marker="logger_formatter#format-2">
	  <c>logger_formatter:format/2</c></seealso>.</p>
    </section>
  </section>

  <section>
    <title>Configuration</title>

    <section>
      <title>Application environment variables</title>
      <p>See <seealso marker="kernel_app#configuration">Kernel(6)</seealso> for
	information about the application environment variables that can
	be used for configuring logger.</p>
    </section>

    <section>
      <title>Logger configuration</title>

      <taglist>
	<tag><c>level</c></tag>
	<item>
	  <p>Specifies the severity level to log.</p>
	</item>
	<tag><c>filters</c></tag>
	<item>
	  <p>Logger filters are added or removed with
	    <seealso marker="logger#add_logger_filter-2">
	      <c>logger:add_logger_filter/2</c></seealso> and
	    <seealso marker="logger#remove_logger_filter-1">
	      <c>logger:remove_logger_filter/1</c></seealso>,
	    respectively.</p>
	  <p>See <seealso marker="#Filter">Filter</seealso> for more
	    information.</p>
	  <p>By default, no filters exist.</p>
	</item>
	<tag><c>filter_default = log | stop</c></tag>
	<item>
	  <p>Specifies what to do with an event if all filters
	    return <c>ignore</c>.</p>
	  <p>Default is <c>log</c>.</p>
	</item>
	<tag><c>handlers</c></tag>
	<item>
	  <p>Handlers are added or removed with
	    <seealso marker="logger#add_handler-3">
	      <c>logger:add_handler/3</c></seealso> and
	    <seealso marker="logger#remove_handler-1">
	      <c>logger:remove_handler/1</c></seealso>,
	    respectively.</p>
	  <p>See <seealso marker="#Handler">Handler</seealso> for more
	    information.</p>
	</item>
      </taglist>
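
      <p>For example, a logger filter which stops all debug events,
	and ignores everything else, can be added at runtime (the
	filter id <c>stop_debug</c> is only an illustration):</p>
      <code type="none">
logger:add_logger_filter(stop_debug,
                         {fun(#{level:=debug},_) -> stop;
                             (_,_) -> ignore
                          end, []}).
      </code>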
    </section>

    <section>
      <marker id="handler_configuration"/>
      <title>Handler configuration</title>
      <taglist>
	<tag><c>level</c></tag>
	<item>
	  <p>Specifies the severity level to log.</p>
	</item>
	<tag><c>filters</c></tag>
	<item>
	  <p>Handler filters can be specified when adding the handler,
	    or added or removed later with
	    <seealso marker="logger#add_handler_filter-3">
	      <c>logger:add_handler_filter/3</c></seealso> and
	    <seealso marker="logger#remove_handler_filter-2">
	      <c>logger:remove_handler_filter/2</c></seealso>,
	    respectively.</p>
	  <p>See <seealso marker="#Filter">Filter</seealso> for more
	    information.</p>
	  <p>By default, no filters exist.</p>
	</item>
	<tag><c>filter_default = log | stop</c></tag>
	<item>
	  <p>Specifies what to do with an event if all filters
	    return <c>ignore</c>.</p>
	  <p>Default is <c>log</c>.</p>
	</item>
	<tag><c>formatter = {Module::module(),Extra::term()}</c></tag>
	<item>
	  <p>See <seealso marker="#Formatter">Formatter</seealso> for more
	    information.</p>
	  <p>The default module is <seealso marker="logger_formatter">
	      <c>logger_formatter</c></seealso>, and <c>Extra</c> is
	      its configuration map.</p>
	</item>
      </taglist>

      <p>Note that <c>level</c> and <c>filters</c> are obeyed by
	Logger itself before forwarding the log events to each
	handler, while <c>formatter</c> is left to the handler
	implementation. All of Logger's built-in handlers call the
	given formatter before printing.</p>
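
      <p>For example, a handler can be added with a non-default
	formatter configuration. The <c>legacy_header</c> option
	of <c>logger_formatter</c> is described in the
	section <seealso marker="#compatibility">Backwards compatibility
	with error_logger</seealso>; the handler id below is only an
	illustration:</p>
      <code type="none">
logger:add_handler(my_plain_h, logger_std_h,
                   #{formatter => {logger_formatter,
                                   #{legacy_header => false}}}).
      </code>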
    </section>

  </section>

  <section>
    <marker id="compatibility"/>
    <title>Backwards compatibility with error_logger</title>
    <p>Logger provides backwards compatibility with the old
      <c>error_logger</c> in the following ways:</p>

    <taglist>
      <tag>Legacy event handlers</tag>
      <item>
	<p>To use event handlers written for <c>error_logger</c>, just
	  add your event handler with</p>
	<code>
error_logger:add_report_handler/1,2.
	</code>
	<p>This will automatically start the <c>error_logger</c>
	  event manager, and add <c>error_logger</c> as a
	  handler to <c>logger</c>, with configuration</p>
<code>
#{level=>info,
  filter_default=>log,
  filters=>[]}.
</code>
	<p>Note that this handler will ignore events that do not
	  originate from the old <c>error_logger</c> API, or from
	  within OTP. This means that if your code uses the logger API
	  for logging, then your log events will be discarded by this
	  handler.</p>
	<p>Also note that <c>error_logger</c> is not overload
	  protected.</p>
      </item>
      <tag>Logger API</tag>
      <item>
	<p>The old <c>error_logger</c> API still exists, but should
	  only be used by legacy code. It will be removed in a later
	  release.</p>
      </item>
      <tag>Output format</tag>
      <item>
	<p>To get log events in the same format as produced
	  by <c>error_logger_tty_h</c> and <c>error_logger_file_h</c>,
	  use the default formatter, <c>logger_formatter</c>, with
	  configuration parameter <c>legacy_header=>true</c>. This is
	  also the default.</p>
      </item>
      <tag>Default format of log events from OTP</tag>
      <item>
	<p>By default, all log events originating from within OTP,
	  except the former so-called "SASL reports", look the same as
	  before.</p>
      </item>
      <tag>SASL reports</tag>
      <item>
	<p>By SASL reports we mean supervisor reports, crash reports
	  and progress reports.</p>
	<p>In earlier releases, these reports were only logged when
	  the SASL application was running, and they were printed
	  through specific event handlers
	  named <c>sasl_report_tty_h</c>
	  and <c>sasl_report_file_h</c>.</p>
	<p>The destination of these log events was configured by
	  environment variables for the SASL application.</p>
	<p>Due to the specific event handlers, the output format
	  slightly differed from other log events.</p>
	<p>As of OTP-21, the concept of SASL reports is removed,
	  meaning that the default behavior is as follows:</p>
	<list>
	  <item>Supervisor reports, crash reports and progress reports
	    are no longer connected to the SASL application.</item>
	  <item>Supervisor reports and crash reports are logged by
	    default.</item>
	  <item>Progress reports are not logged by default, but can be
	    enabled with the kernel environment
	    variable <c>logger_log_progress</c>.</item>
	  <item>The output format is the same for all log
	    events.</item>
	</list>
	<p>If the old behavior is preferred, the kernel environment
	  variable <c>logger_sasl_compatible</c> can be set
	  to <c>true</c>. The old SASL environment variables can then
	  be used as before, and the SASL reports will only be printed
	  if the SASL application is running - through a second log
	  handler named <c>sasl_h</c>.</p>
	<p>All SASL reports have a metadata
	  field <c>domain=>[beam,erlang,otp,sasl]</c>, which can be
	  used, for example, by filters to stop or allow the
	  events.</p>
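	<p>As an illustration, the compatibility mode can be requested
	  from a system configuration file (<c>sys.config</c>);
	  the <c>logger_log_progress</c> variable is set in the same
	  way:</p>
<code>
[{kernel,
  [{logger_sasl_compatible, true}]}].
</code>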
      </item>
    </taglist>
  </section>


  <section>
    <title>Error handling</title>
    <p>Log data is expected to be either a format string and
      arguments, a string (unicode:chardata), or a report (map or
      key-value list) which can be converted to a format string and
      arguments by the handler. A default report callback should be
      included in the log event's metadata, which can be used for
      converting the report to a format string and arguments. The
      handler might also do a custom conversion if the default format
      is not desired.</p>
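    <p>As an illustration, a report and a callback converting it to a
      format string and arguments could look as follows. The metadata
      key <c>report_cb</c> used for the callback is an assumption in
      this sketch; see the <c>logger</c> reference manual for the
      exact key and function names.</p>
    <code>
logger:error(#{module => my_mod, reason => timeout},
             #{report_cb =>
                   fun(#{module := M, reason := R}) ->
                           {"~p failed with reason: ~p", [M, R]}
                   end}).
    </code>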
    <p><c>logger</c> does, to a certain extent, check its input data
      before forwarding a log event to the handlers, but it does not
      evaluate conversion funs or check the validity of format strings
      and arguments. This means that any filter or handler must be
      careful when formatting the data of a log event, making sure
      that it does not crash due to bad input data or faulty
      callbacks.</p>
    <p>If a filter or handler still crashes, logger will remove the
      filter or handler in question from the configuration, and then
      print a short error message on the console. A debug event
      containing the crash reason and other details is also issued,
      and can be seen if a handler is installed which logs on debug
      level.</p>
  </section>

  <section>
    <title>Example: add a handler to log debug events to file</title>
    <p>When starting an Erlang node, the default behavior is that all
      log events with level info and above are logged to the
      console. In order to also log debug events, you can either
      change the global log level to <c>debug</c> or add a separate
      handler to take care of this. In this example we will add a new
      handler which prints the debug events to a separate file.</p>
    <p>First, we add an instance of logger_std_h with
      type <c>{file,File}</c>, and we set the handler's level
      to <c>debug</c>:</p>
    <pre>
1> <input>Config = #{level=>debug,logger_std_h=>#{type=>{file,"./debug.log"}}}.</input>
#{logger_std_h => #{type => {file,"./debug.log"}},
  level => debug}
2> <input>logger:add_handler(debug_handler,logger_std_h,Config).</input>
ok</pre>
    <p>By default, the handler receives all events
      (<c>filter_default=log</c>), so we need to add a filter to stop
      all non-debug events:</p>
    <pre>
3> <input>Fun = fun(#{level:=debug}=Log,_) -> Log; (_,_) -> stop end.</input>
#Fun&lt;erl_eval.12.98642416>
4> <input>logger:add_handler_filter(debug_handler,allow_debug,{Fun,[]}).</input>
ok</pre>
    <p>And finally, we need to make sure that the logger itself allows
      debug events. This can either be done by setting the global
      logger level:</p>
    <pre>
5> <input>logger:set_logger_config(level,debug).</input>
ok</pre>
    <p>Or by allowing debug events from one or a few modules only:</p>
    <pre>
6> <input>logger:set_module_level(mymodule,debug).</input>
ok</pre>

  </section>

  <section>
    <title>Example: implement a handler</title>
    <p>The only requirement that a handler MUST fulfill is to export
      the following function:</p>
    <code>log(logger:log(),logger:config()) -> ok</code>
    <p>It may also implement the following callbacks:</p>
    <code>
adding_handler(logger:handler_id(),logger:config()) -> {ok,logger:config()} | {error,term()}
removing_handler(logger:handler_id(),logger:config()) -> ok
changing_config(logger:handler_id(),logger:config(),logger:config()) -> {ok,logger:config()} | {error,term()}
    </code>
    <p>When <c>logger:add_handler(Id,Module,Config)</c> is called, Logger
      first calls <c>Module:adding_handler(Id,Config)</c>, and if it
      returns <c>{ok,NewConfig}</c>, then <c>NewConfig</c> is written to the
      configuration database. After this, the handler may receive log
      events as calls to <c>Module:log/2</c>.</p>
    <p>A handler can be removed by calling
      <c>logger:remove_handler(Id)</c>. Logger calls
      <c>Module:removing_handler(Id,Config)</c>, and then removes the
      handler's configuration from the configuration database.</p>
    <p>When <c>logger:set_handler_config</c> is called, Logger calls
      <c>Module:changing_config(Id,OldConfig,NewConfig)</c>. If this function
      returns <c>{ok,NewConfig}</c>, then <c>NewConfig</c> is written to the
      configuration database.</p>

    <p>A simple handler which prints to the console could be
      implemented as follows:</p>
    <code>
-module(myhandler).
-export([log/2]).

log(Log,#{formatter:={FModule,FConfig}}) ->
    io:put_chars(FModule:format(Log,FConfig)).
    </code>

    <p>A simple handler which prints to file could be implemented like
      this:</p>
    <code>
-module(myhandler).
-export([adding_handler/2, removing_handler/2, log/2]).

%% The name of the log file is expected in the handler configuration,
%% here under the key myhandler_file.
adding_handler(_Id,#{myhandler_file:=File}=Config) ->
    {ok,Fd} = file:open(File,[append,{encoding,utf8}]),
    {ok,Config#{myhandler_fd=>Fd}}.

removing_handler(_Id,#{myhandler_fd:=Fd}) ->
    _ = file:close(Fd),
    ok.

log(Log,#{myhandler_fd:=Fd,formatter:={FModule,FConfig}}) ->
    io:put_chars(Fd,FModule:format(Log,FConfig)).
    </code>

    <note><p>The above handlers do not have any overload
      protection, and all log events are printed directly from the
      client process.</p></note>

    <p>For examples of overload protection, please refer to the
      implementation
      of <seealso marker="logger_std_h"><c>logger_std_h</c></seealso>
      and <seealso marker="logger_disk_log_h"><c>logger_disk_log_h</c>
      </seealso>.</p>

    <p>Below is a simpler example of a handler which logs through one
      single process.</p>
    <code>
-module(myhandler).
-export([adding_handler/2, removing_handler/2, log/2]).
-export([init/1, handle_call/3, handle_cast/2, terminate/2]).

%% The handler configuration (including the myhandler_file key used
%% by init/1 below) is passed as the gen_server start argument.
adding_handler(_Id,Config) ->
    {ok,Pid} = gen_server:start(?MODULE,Config,[]),
    {ok,Config#{myhandler_pid=>Pid}}.

removing_handler(_Id,#{myhandler_pid:=Pid}) ->
    gen_server:stop(Pid).

log(Log,#{myhandler_pid:=Pid} = Config) ->
    gen_server:cast(Pid,{log,Log,Config}).

init(#{myhandler_file:=File}) ->
    {ok,Fd} = file:open(File,[append,{encoding,utf8}]),
    {ok,#{file=>File,fd=>Fd}}.

handle_call(_,_,State) ->
    {reply,{error,bad_request},State}.

handle_cast({log,Log,Config},#{fd:=Fd} = State) ->
    do_log(Fd,Log,Config),
    {noreply,State}.

terminate(_Reason,#{fd:=Fd}) ->
    _ = file:close(Fd),
    ok.

do_log(Fd,Log,#{formatter:={FModule,FConfig}}) ->
    String = FModule:format(Log,FConfig),
    io:put_chars(Fd,String).
    </code>
  </section>

  <section>
    <marker id="overload_protection"/>
    <title>Protecting the handler from overload</title>
    <p>In order for the built-in handlers to survive, and stay responsive,
    during periods of high load (i.e. when huge numbers of incoming
    log requests must be handled), a mechanism for overload protection
    has been implemented in the
    <seealso marker="logger_std_h"><c>logger_std_h</c></seealso>
    and <seealso marker="logger_disk_log_h"><c>logger_disk_log_h</c>
    </seealso> handlers. The mechanism, used by both handlers, works
    as follows:</p>
    
    <section>
      <title>Message queue length</title>
      <p>The handler process keeps track of the length of its message
      queue and reacts in different ways depending on the current status.
      The purpose is to keep the handler in, or quickly return it to, a
      state where it can keep up with the pace of incoming log requests.
      The memory usage of the handler must not be allowed to grow without
      bound, since that would eventually cause the handler to crash. Three
      thresholds with associated actions have been defined:</p>
      
      <taglist>
	<tag><c>toggle_sync_qlen</c></tag>
	<item>
	  <p>The default value of this level is <c>10</c> messages,
	  and as long as the length of the message queue is lower, all log
	  requests are handled asynchronously. This simply means that the
	  process sending the log request (by calling a log function in the
	  logger API) does not wait for a response from the handler but
	  continues executing immediately after the request (i.e. it will not
	  be affected by the time it takes the handler to print to the log
	  device). If the message queue grows larger than this value, however,
	  the handler starts handling the log requests synchronously instead,
	  meaning the process sending the request will have to wait for a
	  response. When the handler manages to reduce the message queue to a
	  level below the <c>toggle_sync_qlen</c> threshold, asynchronous
	  operation is resumed. The switch from asynchronous to synchronous
	  mode will force the logging tempo of a few busy senders to slow down,
	  but cannot protect the handler sufficiently in situations of many
	  concurrent senders.</p>
	</item>
	<tag><c>drop_new_reqs_qlen</c></tag>
	<item>
	  <p>When the message queue has grown larger than this threshold, which
	  defaults to <c>200</c> messages, the handler switches to a mode in
	  which it drops any new requests being made. Dropping a message in
	  this state means that the log function never actually sends a message
	  to the handler. The log call simply returns without an action. When
	  the length of the message queue has been reduced to a level below this
	  threshold, synchronous or asynchronous request handling mode is
	  resumed.</p>
	</item>
	<tag><c>flush_reqs_qlen</c></tag>
	<item>
	  <p>Above this threshold, which defaults to <c>1000</c> messages, a
	  flush operation takes place, in which all messages buffered in the
	  process mailbox get deleted without any logging actually taking
	  place. (Processes waiting for a response from a synchronous log request
	  will receive a reply indicating that the request has been dropped).</p>
	</item>
      </taglist>

      <p>For the overload protection algorithm to work properly, it is a
      requirement that:</p>

      <p><c>toggle_sync_qlen &lt; drop_new_reqs_qlen &lt; flush_reqs_qlen</c></p>

      <p>During high load scenarios, the length of the handler message queue
      rarely grows in a linear and predictable way. Instead, whenever the
      handler process gets scheduled in, it can have an almost arbitrary number
      of messages waiting in the mailbox. It's for this reason that the overload
      protection mechanism is focused on acting quickly and quite drastically
      (such as immediately dropping or flushing messages) as soon as a large
      queue length is detected. </p>

      <p>The thresholds listed above may be modified by the user if, for example,
      a handler should not drop or flush messages unless the message queue length
      grows extremely large. (The handler must then be allowed to use large
      amounts of memory under such circumstances.) Another reason to change the
      settings is if, for performance reasons, the logging processes must never
      be blocked by synchronous log requests, while dropping or flushing requests
      is perfectly acceptable (since it does not affect the performance of the
      processes doing the logging).</p>

      <p>A configuration example:</p>
      <code type="none">
logger:add_handler(my_standard_h, logger_std_h,
                   #{logger_std_h =>
                              #{type => {file,"./system_info.log"},
                                toggle_sync_qlen => 100,
                                drop_new_reqs_qlen => 1000,
                                flush_reqs_qlen => 2000}}).
    </code>
    </section>

    <section>
      <title>Controlling bursts of log requests</title>
      <p>A potential problem with large bursts of log requests is that log files
      may get full or wrapped too quickly (in the latter case overwriting
      previously logged data that could be of great importance). For this reason,
      both built-in handlers offer the possibility to set a maximum of how
      many requests to process within a certain time frame. With this burst control
      feature enabled, the handler will take care of bursts of log requests
      without choking log files, or the console, with massive amounts of
      printouts. These are the configuration parameters:</p>
      
      <taglist>
	<tag><c>enable_burst_limit</c></tag>
	<item>
	  <p>This is set to <c>true</c> by default. The value <c>false</c>
	  disables the burst control feature.</p>
	</item>
	<tag><c>burst_limit_size</c></tag>
	<item>
	  <p>This is how many requests should be processed within the
	  <c>burst_window_time</c> time frame. After this maximum has been
	  reached, successive requests will be dropped until the end of the
	  time frame. The default value is <c>500</c> messages.</p>
	</item>
	<tag><c>burst_window_time</c></tag>
	<item>
	  <p>The default window is <c>1000</c> milliseconds long.</p>
	</item>
      </taglist>

      <p>A configuration example:</p>
      <code type="none">
logger:add_handler(my_disk_log_h, logger_disk_log_h,
                   #{disk_log_opts =>
                              #{file => "./my_disk_log"},
                     logger_disk_log_h =>
                              #{burst_limit_size => 10,
                                burst_window_time => 500}}).
    </code>
    </section>

    <section>
      <title>Terminating a large handler</title>
      <p>A handler process may grow large even if it can manage peaks of high load
      without crashing. The overload protection mechanism includes user configurable
      levels for a maximum allowed message queue length and maximum allowed memory
      usage. This feature is disabled by default, but can be switched on by means
      of the following configuration parameters:</p>
      
      <taglist>
	<tag><c>enable_kill_overloaded</c></tag>
	<item>
	  <p>This is set to <c>false</c> by default. The value <c>true</c>
	  enables the feature.</p>
	</item>
	<tag><c>handler_overloaded_qlen</c></tag>
	<item>
	  <p>This is the maximum allowed queue length. If the mailbox grows larger
	  than this, the handler process gets terminated.</p>
	</item>
	<tag><c>handler_overloaded_mem</c></tag>
	<item>
	  <p>This is the maximum allowed memory usage of the handler process. If
	  the handler grows any larger, the process gets terminated.</p>
	</item>
	<tag><c>handler_restart_after</c></tag>
	<item>
	  <p>If the handler gets terminated because of its queue length or
	  memory usage, it can get automatically restarted again after a
	  configurable delay time. The time is specified in milliseconds
	  and <c>5000</c> is the default value. The value <c>never</c> can
	  also be set, which prevents a restart.</p>
	</item>
      </taglist>
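
      <p>A configuration example (the threshold values below are only
      illustrations, not recommendations):</p>
      <code type="none">
logger:add_handler(my_standard_h, logger_std_h,
                   #{logger_std_h =>
                              #{type => {file,"./system_info.log"},
                                enable_kill_overloaded => true,
                                handler_overloaded_qlen => 100000,
                                handler_overloaded_mem => 3000000,
                                handler_restart_after => 10000}}).
    </code>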
    </section>
  </section>

  <section>
    <title>See Also</title>
    <p><seealso marker="error_logger"><c>error_logger(3)</c></seealso>,
    <seealso marker="sasl:sasl_app"><c>SASL(6)</c></seealso></p>
  </section>
</chapter>