From f9459092940943876dff040ee997515b96fd5d50 Mon Sep 17 00:00:00 2001
From: Rickard Green
Date: Wed, 4 Jan 2017 18:10:26 +0100
Subject: Scheduler wall time support for dirty schedulers

---
 erts/doc/src/erlang.xml | 97 +++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 85 insertions(+), 12 deletions(-)

diff --git a/erts/doc/src/erlang.xml b/erts/doc/src/erlang.xml
index b3fab3874b..1f64d7be86 100644
--- a/erts/doc/src/erlang.xml
+++ b/erts/doc/src/erlang.xml
@@ -6415,12 +6415,17 @@ lists:map(
           TotalTime is the total time duration since
           scheduler_wall_time
-          activation. The time unit is undefined and can be subject
-          to change between releases, OSs, and system restarts.
-          scheduler_wall_time is only to be used to
-          calculate relative values for scheduler-utilization.
-          ActiveTime can never exceed
-          TotalTime.

+          activation for the specific scheduler. Note that
+          activation time can differ significantly between
+          schedulers. Currently dirty schedulers are activated
+          at system start while normal schedulers are activated
+          some time after the scheduler_wall_time
+          functionality is enabled. The time unit is undefined
+          and can be subject to change between releases, OSs,
+          and system restarts. scheduler_wall_time is only
+          to be used to calculate relative values for scheduler
+          utilization. ActiveTime can never
+          exceed TotalTime.

          A scheduler is regarded as busy when it is not idle and is
          not scheduling (selecting) a process or port, that is:

@@ -6438,15 +6443,37 @@ lists:map( scheduler_wall_time is turned off.

          The list of scheduler information is unsorted and can appear
          in a different order between calls.

+

+          As of ERTS version 9.0, dirty CPU schedulers will also
+          be included in the result. That is, all scheduler threads
+          that are expected to handle CPU bound work. If you also
+          want information about dirty I/O schedulers, use
+          statistics(scheduler_wall_time_all)
+          instead.
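A quick way to see the relationship between the two calls (an editorial sketch, not part of the patch; it assumes scheduler_wall_time has first been enabled):

> erlang:system_flag(scheduler_wall_time, true).
> length(erlang:statistics(scheduler_wall_time_all))
      - length(erlang:statistics(scheduler_wall_time)).
% Expected to equal erlang:system_info(dirty_io_schedulers).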

+ +

+          Normal schedulers will have scheduler identifiers in
+          the range
+          1 =< SchedulerId =< erlang:system_info(schedulers).
+          Dirty CPU schedulers will have scheduler identifiers in
+          the range
+          erlang:system_info(schedulers) < SchedulerId =<
+          erlang:system_info(schedulers) +
+          erlang:system_info(dirty_cpu_schedulers).
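As a sketch of how these identifier ranges can be used (the helper name scheduler_type/1 is made up for this illustration and is not an ERTS API), an identifier returned by statistics(scheduler_wall_time) can be classified as follows:

%% Editorial sketch: classify a scheduler identifier according to
%% the ranges described above.
scheduler_type(SchedulerId) ->
    case SchedulerId =< erlang:system_info(schedulers) of
        true  -> normal;
        false -> dirty_cpu
    end.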

+

+          The different types of schedulers handle specific types of
+          jobs. Every job is assigned to a specific scheduler type.
+          Jobs can migrate between different schedulers of the same
+          type, but never between schedulers of different types. This
+          fact has to be taken into consideration when evaluating the
+          result returned.
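Because jobs never cross scheduler types, it can be useful to aggregate utilization per type rather than over all schedulers at once. A minimal sketch, reusing the hypothetical scheduler_type/1 helper above and two snapshots Ts0 and Ts1 taken as in the examples below:

%% Editorial sketch: scheduler utilization aggregated per scheduler
%% type, given two scheduler_wall_time snapshots Ts0 and Ts1.
type_utilization(Ts0, Ts1) ->
    Pairs = lists:zip(lists:sort(Ts0), lists:sort(Ts1)),
    Sums = lists:foldl(
             fun({{Id, A0, T0}, {Id, A1, T1}}, Acc) ->
                     Type = scheduler_type(Id),
                     {A, T} = maps:get(Type, Acc, {0, 0}),
                     Acc#{Type => {A + (A1 - A0), T + (T1 - T0)}}
             end, #{}, Pairs),
    maps:map(fun(_Type, {A, T}) -> A / T end, Sums).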

          Using scheduler_wall_time to calculate
-           scheduler-utilization:
+           scheduler utilization:

 > erlang:system_flag(scheduler_wall_time, true).
 false
 > Ts0 = lists:sort(erlang:statistics(scheduler_wall_time)), ok.
 ok

          Some time later the user takes another snapshot and calculates
-           scheduler-utilization per scheduler, for example:
+           scheduler utilization per scheduler, for example:

 > Ts1 = lists:sort(erlang:statistics(scheduler_wall_time)), ok.
 ok
@@ -6461,11 +6488,32 @@ ok
  {7,0.973237033077876},
  {8,0.9741297293248656}]

          Using the same snapshots to calculate a total
-           scheduler-utilization:
+           scheduler utilization:

 > {A, T} = lists:foldl(fun({{_, A0, T0}, {_, A1, T1}}, {Ai,Ti}) ->
-	{Ai + (A1 - A0), Ti + (T1 - T0)} end, {0, 0}, lists:zip(Ts0,Ts1)), A/T.
+	{Ai + (A1 - A0), Ti + (T1 - T0)} end, {0, 0}, lists:zip(Ts0,Ts1)),
+	TotalSchedulerUtilization = A/T.
+0.9769136803764825
+

+          Total scheduler utilization will equal 1.0 when all
+          schedulers have been active all the time between the
+          two measurements.

+

+          Another, probably more useful, value is the total
+          scheduler utilization weighted against the maximum
+          amount of available CPU time:

+
+> WeightedSchedulerUtilization = (TotalSchedulerUtilization
+                                  * (erlang:system_info(schedulers)
+                                     + erlang:system_info(dirty_cpu_schedulers)))
+                                 / erlang:system_info(logical_processors_available).
 0.9769136803764825
+

+          This weighted scheduler utilization will reach 1.0 when
+          schedulers are active for the same amount of time as the
+          maximum available CPU time. If more schedulers exist than
+          available logical processors, this value may be greater
+          than 1.0.
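The value that the weighted utilization approaches when every scheduler is active all the time follows directly from the formula above: TotalSchedulerUtilization becomes 1.0, leaving only the ratio between CPU bound scheduler threads and available logical processors. An editorial sketch of that upper bound:

> MaxWeighted = (erlang:system_info(schedulers)
                 + erlang:system_info(dirty_cpu_schedulers))
                / erlang:system_info(logical_processors_available).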

+

+          As of ERTS version 9.0, the Erlang runtime system with
+          SMP support will by default have more schedulers than
+          logical processors. This is due to the dirty schedulers.
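The relevant counts can be inspected directly on a given system (an editorial sketch; the exact numbers depend on the hardware and on any scheduler-related emulator flags):

> {erlang:system_info(schedulers),
   erlang:system_info(dirty_cpu_schedulers),
   erlang:system_info(dirty_io_schedulers),
   erlang:system_info(logical_processors_available)}.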

          scheduler_wall_time is by default disabled. To enable it, use
@@ -6476,6 +6524,31 @@ ok
+          Information about each scheduler's work time.

+          The same as
+          statistics(scheduler_wall_time),
+          except that it also includes information about all
+          dirty I/O schedulers.

+

+          Dirty I/O schedulers will have scheduler identifiers in
+          the range
+          erlang:system_info(schedulers) +
+          erlang:system_info(dirty_cpu_schedulers) < SchedulerId =<
+          erlang:system_info(schedulers) +
+          erlang:system_info(dirty_cpu_schedulers) +
+          erlang:system_info(dirty_io_schedulers).

+

+          Note that work executing on dirty I/O schedulers is
+          expected to mainly wait for I/O. That is, when you get
+          high scheduler utilization on dirty I/O schedulers,
+          CPU utilization is not expected to be high due to
+          this work.
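For that reason it can be reasonable to leave the dirty I/O schedulers out when estimating CPU bound load from a statistics(scheduler_wall_time_all) snapshot, using the identifier range described above. A minimal sketch (the helper name without_dirty_io/1 is made up for this illustration):

%% Editorial sketch: drop dirty I/O scheduler entries from a
%% scheduler_wall_time_all snapshot.
without_dirty_io(TsAll) ->
    CpuIds = erlang:system_info(schedulers)
        + erlang:system_info(dirty_cpu_schedulers),
    [Entry || {Id, _, _} = Entry <- TsAll, Id =< CpuIds].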

          Information about active processes and ports.

          Returns the total amount of active processes and ports in
@@ -6495,7 +6568,7 @@
-
+
          Information about the run-queue lengths.

          Returns the total length of the run queues. That is, the number
@@ -6515,7 +6588,7 @@
-
+
          Information about wall clock.

          Returns information about wall clock. wall_clock can
--
cgit v1.2.3