This module is used to profile a program to find out how the execution time is used. Tracing to file is used to minimize runtime performance impact.
The fprof module uses tracing to collect profiling data, hence there is no need for special compilation of any module to be profiled. When it starts tracing, fprof will erase all previous tracing in the node and set the necessary trace flags on the profiling target processes, as well as local call trace on all functions in all loaded modules and all modules to be loaded. fprof erases all tracing in the node when it stops tracing.

fprof presents both own time, i.e. how much time a function has used for its own execution, and accumulated time, i.e. including called functions. All presented times are collected using trace timestamps. fprof tries to collect CPU time timestamps, if the host machine OS supports it. Otherwise the times may be wallclock times, and OS scheduling will randomly strike all called functions in a presumably fair way.
If, however, the profiling time is short and the host machine OS does not support high resolution cpu time measurements, a few OS schedulings may show up as ridiculously long execution times for functions doing practically nothing. For example, a function that does little more than compose a tuple has been seen to take about 100 times its normal execution time; when the tracing was repeated, the execution time became normal.
Profiling is essentially done in 3 steps:

1. Tracing: to file, as mentioned in the previous paragraph. The trace contains entries for function calls, returns to function, process scheduling, other process related events (e.g. spawn and exit) and garbage collection. All trace entries are timestamped.

2. Profiling: the trace file is read, the execution call stack is simulated, and raw profile data is calculated from the simulated call stack and the trace timestamps. The profile data is stored in the fprof server state. During this step the trace data may be dumped in text format to file or console.

3. Analysing: the raw profile data is accumulated, filtered and sorted into the analysis file or console.

Since the three steps are done separately, analysis can be done many times on previously collected data.
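For illustration, a minimal sketch of the three steps driven by hand, using the example function profiled later in this manual page and the default trace file "fprof.trace":

    %% Step 1: Tracing. Trace the calling process to "fprof.trace"
    %% while the code under test runs.
    ok = fprof:trace([start, {procs, self()}]),
    ok = foo:create_file_slow(junk, 1024),
    ok = fprof:trace(stop),

    %% Step 2: Profiling. Read the trace file and compute raw
    %% profile data held in the fprof server.
    ok = fprof:profile(),

    %% Step 3: Analysing. Accumulate, filter and sort the raw
    %% profile data into an analysis file (the name is arbitrary).
    ok = fprof:analyse([{dest, "fprof.analysis"}]).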
start()

Starts the fprof server. Note that it seldom needs to be started explicitly, since it is automatically started by the functions that need a running server.
stop()

Same as stop(normal).
stop(Reason)

Stops the fprof server.

The supplied Reason becomes the exit reason for the server process. Any Reason other than kill sends a request to the server and waits for it to clean up, flush trace buffers etc, before exiting. If Reason is kill, the server is brutally killed.

If the fprof server is not running, this function returns immediately.

Note that when the fprof server is stopped, the collected raw profile data is lost.
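For illustration, a minimal sketch of explicit server management (rarely needed in practice):

    {ok, _Pid} = fprof:start(),
    %% ... trace, profile, analyse ...
    ok = fprof:stop().   % same as stop(normal); raw profile data is lost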
apply(Func, Args)

Same as apply(Func, Args, []).
apply(Module, Function, Args)

Same as apply({Module, Function}, Args, []).
apply(Func, Args, OptionList)

Calls erlang:apply(Func, Args) surrounded by trace([start, ...]) and trace(stop), and returns the value of the applied function.

Some effort is made to keep the trace clean from unnecessary trace messages; tracing is started and stopped from a spawned process while the erlang:apply/2 call is made in the current process, only surrounded by receive and send statements towards the trace starting process. The trace starting process exits when it is no longer needed.

The TraceStartOption is any option allowed for trace/1. The options [start, {procs, [self() | PidList]} | OptList] are given to trace/1, where OptList is OptionList with the continue, start and {procs, _} options removed.

The continue option inhibits the call to trace(stop) and leaves it up to the caller to stop tracing at a suitable time.
apply(Module, Function, Args, OptionList)

Same as apply({Module, Function}, Args, OptionList).
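For illustration, a sketch using the continue option; tracing is then left running after the call, and stopping it is up to the caller:

    ok = fprof:apply(foo, create_file_slow, [junk, 1024], [continue]),
    %% ... further activity to be captured in the same trace ...
    ok = fprof:trace(stop),
    ok = fprof:profile(),
    ok = fprof:analyse().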
trace(start, Filename)

Same as trace([start, {file, Filename}]).

trace(verbose, Filename)

Same as trace([start, verbose, {file, Filename}]).

trace(OptionName, OptionValue)

Same as trace([{OptionName, OptionValue}]).

trace(verbose)

Same as trace([start, verbose]).

trace(OptionName)

Same as trace([OptionName]).

trace({OptionName, OptionValue})

Same as trace([{OptionName, OptionValue}]).
trace([Option])

Starts or stops tracing.
Option description:

start: Clears all tracing from the node and starts a new trace. Either option start or stop must be specified.

stop: Stops a running trace and clears all tracing from the node. Either option start or stop must be specified.

verbose | {verbose, bool()}: The verbose or {verbose, true} options add some trace flags that fprof does not need, but that may be interesting for general debugging purposes. Only allowed with the start option.

cpu_time | {cpu_time, bool()}: The cpu_time or {cpu_time, true} options make the timestamps in the trace be in CPU time instead of wallclock time, which is the default. Only allowed with the start option. See the warning below.

{procs, PidSpec} | {procs, [PidSpec]}: Specifies which processes shall be traced. If this option is not given, the calling process is traced. Only allowed with the start option.

file | {file, Filename}: Specifies the filename of the trace. If the option file is given, or none of these options are given, the file "fprof.trace" is used. Only allowed with the start option.

{tracer, Tracer}: Specifies that trace to port shall not be used; instead the specified Tracer (a pid or port) receives the trace messages. Only allowed with the start option.
Getting correct values out of cpu_time can be difficult. The best way to get correct values is to run using a single scheduler and bind that scheduler to a specific CPU, i.e. start the emulator with erl +S 1 +sbt db.
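For example, a sketch starting a trace with a custom file name and CPU time timestamps, and stopping it again (the file name is arbitrary; cpu_time is subject to the warning above):

    ok = fprof:trace([start, {file, "my.trace"}, {procs, self()}, cpu_time]),
    %% ... run the code to profile ...
    ok = fprof:trace(stop).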
profile()

Same as profile([]).

profile(OptionName, OptionValue)

Same as profile([{OptionName, OptionValue}]).

profile(OptionName)

Same as profile([OptionName]).

profile({OptionName, OptionValue})

Same as profile([{OptionName, OptionValue}]).
profile([Option])

Compiles a trace into raw profile data held by the fprof server.
Option description:

file | {file, Filename}: Reads the file Filename and creates raw profile data that is stored in RAM by the fprof server. If the option file is given, or none of these options are given, the file "fprof.trace" is read. Cannot be combined with the start or stop options.

dump | {dump, Dump}: Specifies the destination for the trace text dump. If this option is not given, no dump is generated. If it is dump, the destination is the caller's group leader; otherwise the destination Dump is either the pid of an I/O device or a filename.

append: Causes the trace text dump to be appended to the destination file. Only allowed together with the {dump, Dumpfile} option.

start: Starts a tracer process that profiles trace data at runtime. Cannot be combined with the stop or file options.

stop: Stops the tracer process that profiles trace data at runtime. The profile data is then retained by the fprof server. Cannot be combined with the start or file options.
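For illustration, a sketch that compiles raw profile data from a previously recorded trace and also dumps the trace in text form (both file names are arbitrary):

    ok = fprof:profile([{file, "my.trace"}, {dump, "my.trace.txt"}]).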
analyse()

Same as analyse([]).

analyse(OptionName, OptionValue)

Same as analyse([{OptionName, OptionValue}]).

analyse(OptionName)

Same as analyse([OptionName]).

analyse({OptionName, OptionValue})

Same as analyse([{OptionName, OptionValue}]).
analyse([Option])

Analyses raw profile data in the fprof server. If called while there is no raw profile data available, {error, no_profile} is returned.
Option description:

dest | {dest, Dest}: Specifies the destination for the analysis. If this option is not given or it is dest, the destination is the caller's group leader; otherwise the destination Dest is either the pid of an I/O device or a filename.

append: Causes the analysis to be appended to the destination file. Only allowed together with the {dest, Destfile} option.

{cols, Cols}: Specifies the number of columns in the analysis text. Default is 80.

callers | {callers, bool()}: The callers or {callers, true} options print callers and called information into the analysis. This is the default.

{callers, false} | no_callers: Suppresses the printing of callers and called information.

{sort, SortSpec}: Specifies whether the analysis should be sorted according to the ACC column, which is the default, or the OWN column. SortSpec is acc or own.

totals | {totals, bool()}: The totals or {totals, true} options include a section containing call statistics for all calls, regardless of process, in the analysis. Default is {totals, false}.

details | {details, bool()}: The details or {details, true} options print call statistics for each process in the analysis. This is the default.

{details, false} | no_details: Suppresses the per-process call statistics from the analysis.
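For example, a sketch writing the analysis to a file, sorted on the OWN column and including the totals section (the file name is arbitrary):

    ok = fprof:analyse([{dest, "fprof.analysis"}, {sort, own}, totals]).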
This section describes the output format of the analyse command. See analyse/1 above.
The format is parsable with the standard Erlang parsing tools erl_scan and erl_parse, file:consult/1 or io:read/2. The parse format is not explained here; it should be easy enough for the interested to try it out. Note that some flags to analyse/1 will affect the format.
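As an illustration, a sketch that reads an analysis file back into Erlang terms and picks out the totals line (the module name read_analysis is hypothetical; the analysis is assumed to have been written with the dest option):

    -module(read_analysis).
    -export([totals/1]).

    %% Extract {CNT, ACC, OWN} from the totals term of an fprof
    %% analysis file. The '%' comments in the file are skipped by
    %% the tokenizer that file:consult/1 uses.
    totals(File) ->
        {ok, Terms} = file:consult(File),
        hd([{Cnt, Acc, Own} || [{totals, Cnt, Acc, Own}] <- Terms]).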
The following example was run on OTP/R8 on Solaris 8; all OTP internals in this example are very version dependent.

As an example, we will use the following function, which you may recognise as a slightly modified benchmark function from the manpage file(3):
-module(foo).
-export([create_file_slow/2]).

create_file_slow(Name, N) when is_integer(N), N >= 0 ->
    {ok, FD} =
        file:open(Name, [raw, write, delayed_write, binary]),
    if N > 256 ->
            ok = file:write(FD,
                            lists:map(fun (X) -> <<X:32/unsigned>> end,
                                      lists:seq(0, 255))),
            ok = create_file_slow(FD, 256, N);
       true ->
            ok = create_file_slow(FD, 0, N)
    end,
    ok = file:close(FD).

create_file_slow(_FD, M, M) ->
    ok;
create_file_slow(FD, M, N) ->
    ok = file:write(FD, <<M:32/unsigned>>),
    create_file_slow(FD, M+1, N).
Let us have a look at the printout after running:
1> fprof:apply(foo, create_file_slow, [junk, 1024]).
2> fprof:profile().
3> fprof:analyse().
The printout starts with:
%% Analysis results:
{  analysis_options,
 [{callers, true},
  {sort, acc},
  {totals, false},
  {details, true}]}.

%                                         CNT       ACC       OWN
[{ totals,                               9627, 1691.119, 1659.074}].  %%%
The CNT column shows the total number of function calls found in the trace. The ACC column shows the total time of the trace, from first timestamp to last. The OWN column shows the sum of the execution time in functions found in the trace, not including called functions. In this case it is very close to the ACC time, since the emulator had practically nothing else to do than to execute our test program.
All time values in the printout are in milliseconds.
The printout continues:
%                                         CNT       ACC       OWN
[{ "<0.28.0>",                           9627, undefined, 1659.074}].   %%
This is the printout header of one process. The printout contains only this one process, since we did fprof:apply/3, which traces only the current process.

All paragraphs up to the next process header concern only function calls within this process.
Now we come to something more interesting:
{[{undefined,                               0, 1691.076,    0.030}],
 { {fprof,apply_start_stop,4},              0, 1691.076,    0.030},     %
 [{{foo,create_file_slow,2},                1, 1691.046,    0.103},
  {suspend,                                 1,    0.000,    0.000}]}.

{[{{fprof,apply_start_stop,4},              1, 1691.046,    0.103}],
 { {foo,create_file_slow,2},                1, 1691.046,    0.103},     %
 [{{file,close,1},                          1, 1398.873,    0.019},
  {{foo,create_file_slow,3},                1,  249.678,    0.029},
  {{file,open,2},                           1,   20.778,    0.055},
  {{lists,map,2},                           1,   16.590,    0.043},
  {{lists,seq,2},                           1,    4.708,    0.017},
  {{file,write,2},                          1,    0.316,    0.021}]}.
The printout consists of one paragraph per called function. The function marked with '%' is the one the paragraph concerns: foo:create_file_slow/2. Above the marked function are the calling functions, those that have called the marked function; below it are those called by the marked function.
The paragraphs are by default sorted in decreasing order of the ACC column for the marked function. The calling list and called list within one paragraph are also by default sorted in decreasing order of their ACC column.
The columns are: CNT - the number of times the function has been called, ACC - the time spent in the function including called functions, and OWN - the time spent in the function not including called functions.
The rows for the calling functions contain statistics for the marked function with the constraint that only the occasions when a call was made from the row's function to the marked function are accounted for.
The row for the marked function simply contains the sum of all calling rows.
The rows for the called functions contain statistics for the row's function, with the constraint that only the occasions when a call was made from the marked function to the row's function are accounted for.
So, we see that foo:create_file_slow/2 used very little time for its own execution, and that it spent most of its time in file:close/1. The function foo:create_file_slow/3, which writes 3/4 of the file contents, is the second biggest time consumer.
We also see that the call to file:write/2 that writes 1/4 of the file contents takes very little time in itself; what takes time is to build the data (lists:seq/2 and lists:map/2).
The function 'undefined' that has called fprof:apply_start_stop/4 is simply unknown: tracing was started from within fprof:apply_start_stop/4, so its caller never appears in the trace.
Let us continue down the printout to find:
{[{{foo,create_file_slow,2},                1,  249.678,    0.029},
  {{foo,create_file_slow,3},              768,    0.000,   23.294}],
 { {foo,create_file_slow,3},              769,  249.678,   23.323},     %
 [{{file,write,2},                        768,  220.314,   14.539},
  {suspend,                                57,    6.041,    0.000},
  {{foo,create_file_slow,3},              768,    0.000,   23.294}]}.
If you compare with the code, you will see there also that foo:create_file_slow/3 was called only from foo:create_file_slow/2 and itself, and called only file:write/2; note the number of calls to file:write/2. But here we see that suspend was called a few times. This is a pseudo function that indicates that the process was suspended while executing in foo:create_file_slow/3, and since there is no receive or erlang:yield/0 in the code, it must be Erlang process scheduling suspensions, or the trace file driver compensating for large file write operations (these are regarded as a schedule out followed by a schedule in to the same process).
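As a check of the summing rules above: the marked row (769, 249.678, 23.323) is exactly the sum of its two calling rows, 1 + 768 = 769 calls, 249.678 + 0.000 = 249.678 ms ACC and 0.029 + 23.294 = 23.323 ms OWN.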
Let us find the suspend entry:
{[{{file,write,2},                         53,    6.281,    0.000},
  {{foo,create_file_slow,3},               57,    6.041,    0.000},
  {{prim_file,drv_command,4},              50,    4.582,    0.000},
  {{prim_file,drv_get_response,1},         34,    2.986,    0.000},
  {{lists,map,2},                          10,    2.104,    0.000},
  {{prim_file,write,2},                    17,    1.852,    0.000},
  {{erlang,port_command,2},                15,    1.713,    0.000},
  {{prim_file,drv_command,2},              22,    1.482,    0.000},
  {{prim_file,translate_response,2},       11,    1.441,    0.000},
  {{prim_file,'-drv_command/2-fun-0-',1},  15,    1.340,    0.000},
  {{lists,seq,4},                           3,    0.880,    0.000},
  {{foo,'-create_file_slow/2-fun-0-',1},    5,    0.523,    0.000},
  {{erlang,bump_reductions,1},              4,    0.503,    0.000},
  {{prim_file,open_int_setopts,3},          1,    0.165,    0.000},
  {{prim_file,i32,4},                       1,    0.109,    0.000},
  {{fprof,apply_start_stop,4},              1,    0.000,    0.000}],
 { suspend,                               299,   32.002,    0.000},     %
 [ ]}.
We find no particularly long suspend times, so no function seems to have waited in a receive statement. Actually, prim_file:drv_get_response/1, which waits for the reply from the file driver, contains a receive statement, but in this test program the message already lies in the process receive buffer when the receive statement is entered. We also see that the total suspend time for the test run is small.
The suspend pseudo function has an OWN time of zero. This is to prevent the process total OWN time from including time in suspension. Whether suspend time is really ACC or OWN time is more of a philosophical question.
Now we look at another interesting pseudo function, garbage_collect:
{[{{prim_file,drv_command,4},              25,    0.873,    0.873},
  {{prim_file,write,2},                    16,    0.692,    0.692},
  {{lists,map,2},                           2,    0.195,    0.195}],
 { garbage_collect,                        43,    1.760,    1.760},     %
 [ ]}.
Here we see that no function distinguishes itself considerably, which is very normal.
The garbage_collect pseudo function does not have an OWN time of zero like suspend; instead it is equal to the ACC time.
Garbage collection often occurs while a process is suspended, but fprof hides this fact by pretending that the suspended function was first scheduled in, and then the garbage collection occurred. Otherwise the printout would show garbage_collect being called from suspend, but not which function might have caused the garbage collection.
Let us now get back to the test code:
{[{{foo,create_file_slow,3},              768,  220.314,   14.539},
  {{foo,create_file_slow,2},                1,    0.316,    0.021}],
 { {file,write,2},                        769,  220.630,   14.560},     %
 [{{prim_file,write,2},                   769,  199.789,   22.573},
  {suspend,                                53,    6.281,    0.000}]}.
Not unexpectedly, we see that file:write/2 was called from foo:create_file_slow/3 and foo:create_file_slow/2. The number of calls in each case, as well as the times used, just confirm the previous results.
We see that file:write/2 only calls prim_file:write/2, but let us refrain from digging into the internals of the kernel application.
But if we nevertheless do dig down, we find the call to the linked-in driver that does the file operations towards the host operating system:
{[{{prim_file,drv_command,4},             772, 1458.356, 1456.643}],
 { {erlang,port_command,2},               772, 1458.356, 1456.643},     %
 [{suspend,                                15,    1.713,    0.000}]}.
This is 86 % of the total run time, and as we saw before, the close operation is by far the biggest contributor. We find a comparison ratio a little bit up in the call stack:
{[{{prim_file,close,1},                     1, 1398.748,    0.024},
  {{prim_file,write,2},                   769,  174.672,   12.810},
  {{prim_file,open_int,4},                  1,   19.755,    0.017},
  {{prim_file,open_int_setopts,3},          1,    0.147,    0.016}],
 { {prim_file,drv_command,2},             772, 1593.322,   12.867},     %
 [{{prim_file,drv_command,4},             772, 1578.973,   27.265},
  {suspend,                                22,    1.482,    0.000}]}.
The time for file operations in the linked-in driver distributes itself as 1 % for open, 11 % for write and 87 % for close. All data is probably buffered in the operating system until the close.
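These ratios can be read from the calling rows of the paragraph above: open accounts for 19.755 + 0.147 = 19.902 ms (1.2 %), write for 174.672 ms (11.0 %) and close for 1398.748 ms (87.8 %) of the 1593.322 ms ACC total of prim_file:drv_command/2.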
The unsleeping reader may notice that the ACC times for prim_file:drv_command/2 and prim_file:drv_command/4 do not match between the paragraphs above, even though it is easy to believe that prim_file:drv_command/2 is just a passthrough function.
The missing time can be found in the paragraph for prim_file:drv_command/4:
{[{{prim_file,drv_command,2},             772, 1578.973,   27.265}],
 { {prim_file,drv_command,4},             772, 1578.973,   27.265},     %
 [{{erlang,port_command,2},               772, 1458.356, 1456.643},
  {{prim_file,'-drv_command/2-fun-0-',1}, 772,   87.897,   12.736},
  {suspend,                                50,    4.582,    0.000},
  {garbage_collect,                        25,    0.873,    0.873}]}.
And some more missing time can be explained by the fact that prim_file:open_int/4 both calls prim_file:drv_command/2 directly as well as through prim_file:open_int_setopts/3, which complicates the picture:
{[{{prim_file,open,2},                      1,   20.309,    0.029},
  {{prim_file,open_int,4},                  1,    0.000,    0.057}],
 { {prim_file,open_int,4},                  2,   20.309,    0.086},     %
 [{{prim_file,drv_command,2},               1,   19.755,    0.017},
  {{prim_file,open_int_setopts,3},          1,    0.360,    0.032},
  {{prim_file,drv_open,2},                  1,    0.071,    0.030},
  {{erlang,list_to_binary,1},               1,    0.020,    0.020},
  {{prim_file,i32,1},                       1,    0.017,    0.017},
  {{prim_file,open_int,4},                  1,    0.000,    0.057}]}.
.
.
.
{[{{prim_file,open_int,4},                  1,    0.360,    0.032},
  {{prim_file,open_int_setopts,3},          1,    0.000,    0.016}],
 { {prim_file,open_int_setopts,3},          2,    0.360,    0.048},     %
 [{suspend,                                 1,    0.165,    0.000},
  {{prim_file,drv_command,2},               1,    0.147,    0.016},
  {{prim_file,open_int_setopts,3},          1,    0.000,    0.016}]}.
The actual supervision of execution times is in itself a CPU intensive activity. A message is written on the trace file for every function call that is made by the profiled code.
The ACC time calculation is sometimes difficult to make correct, since it is difficult to define. This happens especially when a function occurs in several instances in the call stack, for example by calling itself perhaps through other functions and perhaps even non-tail recursively.
To produce sensible results, fprof tries not to charge any function more than once for ACC time. The instance highest up (with the longest duration) in the call stack is chosen.
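This is also why, in the paragraph for foo:create_file_slow/3 above, the recursive calls to the function itself are listed with an ACC time of 0.000: their time is already charged to the topmost instance on the stack.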
Sometimes a function may unexpectedly waste a lot (some 10 ms or more, depending on the host machine OS) of OWN (and ACC) time, even functions that do practically nothing at all. The problem may be that the OS has chosen to schedule out the Erlang runtime system process for a while, and if the OS does not support high resolution cpu time measurements, fprof uses wallclock timestamps; the scheduling pause then shows up as execution time for whatever function happened to be running.
See also: dbg(3), eprof(3), erlang(3), io(3), Tools User's Guide.