From 615b79a01706033e1ef0d78f020ebbb47fc80b86 Mon Sep 17 00:00:00 2001
From: Rickard Green
Date: Thu, 16 Jun 2016 17:33:28 +0200
Subject: Add documentation about dirty job type

---
 erts/doc/src/erl.xml     | 19 ++++++++++++++++++-
 erts/doc/src/erl_nif.xml | 20 +++++++++++++++++++-
 2 files changed, 37 insertions(+), 2 deletions(-)

(limited to 'erts/doc/src')

diff --git a/erts/doc/src/erl.xml b/erts/doc/src/erl.xml
index 7b90a1ccca..436c2c57e1 100644
--- a/erts/doc/src/erl.xml
+++ b/erts/doc/src/erl.xml
@@ -915,6 +915,13 @@
       number of dirty CPU schedulers online can be changed at run time via
       erlang:system_flag(dirty_cpu_schedulers_online, DirtyCPUSchedulersOnline).

+

+      The number of dirty CPU schedulers is limited by the number of
+      normal schedulers in order to limit the effect on processes
+      executing on ordinary schedulers. If the number of dirty CPU
+      schedulers were allowed to be unlimited, dirty CPU-bound jobs could
+      potentially starve normal jobs.
+

       This option is ignored if the emulator doesn't have threading support
       enabled. Currently, this option is experimental and is supported only if
       the emulator was configured and built with support for dirty schedulers
@@ -944,7 +951,7 @@ enabled (it's disabled by default).

- +

       Sets the number of dirty I/O scheduler threads to create when threading
       support has been enabled. The valid range is 0-1024. By default, the number
@@ -952,6 +959,16 @@ threads in the async thread pool.

+

+      The number of dirty I/O schedulers is not limited by the number of
+      normal schedulers the way the number of
+      dirty CPU schedulers is. This is because only I/O-bound work is
+      expected to execute on dirty I/O schedulers. I/O-bound jobs are
+      expected to either block waiting for I/O, or spend only a limited
+      amount of time moving data. However, if the user schedules CPU-bound
+      jobs on dirty I/O schedulers, these jobs might starve ordinary
+      jobs executing on ordinary schedulers.
+
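To make the classification concrete from the NIF side, here is a minimal sketch of how a job ends up on a dirty I/O scheduler in the first place. The module name dirty_example and the function read_blob are invented for illustration, and the actual I/O work is elided; the point is the flags field of the ErlNifFunc entry.

    #include "erl_nif.h"

    /* Hypothetical NIF that mostly blocks waiting for I/O. */
    static ERL_NIF_TERM read_blob(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        /* ... blocking read elided ... */
        return enif_make_atom(env, "ok");
    }

    static ErlNifFunc nif_funcs[] = {
        /* The flags field classifies the job; this entry runs on a dirty I/O scheduler. */
        {"read_blob", 1, read_blob, ERL_NIF_DIRTY_JOB_IO_BOUND}
    };

    ERL_NIF_INIT(dirty_example, nif_funcs, NULL, NULL, NULL, NULL)

If read_blob did CPU-heavy work instead of blocking on I/O, this classification would be exactly the misuse the paragraph above warns about.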

       This option is ignored if the emulator doesn't have threading support
       enabled. Currently, this option is experimental and is supported only if
       the emulator was configured and built with support for dirty schedulers

diff --git a/erts/doc/src/erl_nif.xml b/erts/doc/src/erl_nif.xml
index 8b02b3bae1..c9b2092c17 100644
--- a/erts/doc/src/erl_nif.xml
+++ b/erts/doc/src/erl_nif.xml
@@ -442,6 +442,24 @@ ok
       dirty NIF has completed.

+

+      It is important to classify the dirty job correctly. An I/O-bound
+      job should be classified as such and execute on a dirty I/O
+      scheduler, and a CPU-bound job should be classified as such and
+      execute on a dirty CPU scheduler. If you classify CPU-bound jobs
+      as I/O-bound jobs, dirty I/O schedulers might starve ordinary
+      schedulers. For more information, see the documentation
+      of the erl command line arguments
+      +SDcpu and
+      +SDio, as well
+      as enif_schedule_nif
+      and ErlNifFunc.
+      A job that alternates between being I/O-bound and CPU-bound can
+      be reclassified and rescheduled using enif_schedule_nif
+      so that it executes on the correct type of dirty scheduler
+      at all times.
+
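As a rough sketch of the reclassification pattern described above (the names fetch_blob, decode_blob, fetch_and_decode and dirty_example are invented, and the real work is elided), a job can start its I/O-bound phase on a dirty I/O scheduler and then hand itself over to a dirty CPU scheduler with enif_schedule_nif:

    #include "erl_nif.h"

    /* CPU-bound phase, intended to run on a dirty CPU scheduler. */
    static ERL_NIF_TERM decode_blob(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        /* ... heavy decoding of argv[0] elided ... */
        return enif_make_atom(env, "ok");
    }

    /* I/O-bound phase: runs on a dirty I/O scheduler and reschedules the
     * job as CPU-bound once the data has been fetched. */
    static ERL_NIF_TERM fetch_blob(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        /* ... blocking fetch elided ... */
        return enif_schedule_nif(env, "decode_blob", ERL_NIF_DIRTY_JOB_CPU_BOUND,
                                 decode_blob, argc, argv);
    }

    static ErlNifFunc nif_funcs[] = {
        /* The exported NIF starts out classified as I/O-bound. */
        {"fetch_and_decode", 1, fetch_blob, ERL_NIF_DIRTY_JOB_IO_BOUND}
    };

    ERL_NIF_INIT(dirty_example, nif_funcs, NULL, NULL, NULL, NULL)

The rescheduled call receives the same argument list; in a real NIF the intermediate result would typically be passed along as a binary or resource term instead.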

+

Currently known issues that are planned to be fixed:

@@ -1702,7 +1720,7 @@ enif_map_iterator_destroy(env, &iter);
       be converted to an atom, enif_schedule_nif returns a badarg exception.

       The flags argument must be set to 0 for a regular NIF, or, if the emulator
       was built with the experimental dirty scheduler support enabled,
       flags can be set to either ERL_NIF_DIRTY_JOB_CPU_BOUND
-      if the job is expected to be primarily CPU-bound, or ERL_NIF_DIRTY_JOB_IO_BOUND for jobs that will
+      if the job is expected to be CPU-bound, or ERL_NIF_DIRTY_JOB_IO_BOUND for jobs that will
       be I/O-bound. If dirty scheduler threads are not available in the emulator,
       an attempt to schedule such a job will result in a badarg exception.
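To illustrate the flags argument (the names compress, compress_dirty and compress_example, and the size threshold, are made up for this sketch), a regular NIF can decide at run time whether to do the work inline or to schedule it as a dirty CPU-bound job:

    #include "erl_nif.h"

    /* Hypothetical long-running, CPU-bound worker. */
    static ERL_NIF_TERM compress_dirty(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        /* ... expensive compression of argv[0] elided ... */
        return enif_make_atom(env, "ok");
    }

    /* Regular NIF: small inputs are handled inline on a normal scheduler,
     * large ones are handed over to a dirty CPU scheduler. Passing 0 as the
     * flags argument would instead schedule compress_dirty as a regular NIF. */
    static ERL_NIF_TERM compress(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        ErlNifBinary bin;
        if (!enif_inspect_binary(env, argv[0], &bin))
            return enif_make_badarg(env);
        if (bin.size < 4096) {
            /* ... small job, compress right here ... */
            return enif_make_atom(env, "ok");
        }
        return enif_schedule_nif(env, "compress_dirty", ERL_NIF_DIRTY_JOB_CPU_BOUND,
                                 compress_dirty, argc, argv);
    }

    static ErlNifFunc nif_funcs[] = {
        /* Exported as a regular NIF (flags 0); it only schedules dirty work when needed. */
        {"compress", 1, compress, 0}
    };

    ERL_NIF_INIT(compress_example, nif_funcs, NULL, NULL, NULL, NULL)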

-- cgit v1.2.3