<?xml version="1.0" encoding="latin1" ?>
<!DOCTYPE erlref SYSTEM "erlref.dtd">
<erlref>
<header>
<copyright>
<year>2006</year>
<year>2007</year>
<holder>Ericsson AB, All Rights Reserved</holder>
</copyright>
<legalnotice>
The contents of this file are subject to the Erlang Public License,
Version 1.1, (the "License"); you may not use this file except in
compliance with the License. You should have received a copy of the
Erlang Public License along with this software. If not, it can be
retrieved online at http://www.erlang.org/.
Software distributed under the License is distributed on an "AS IS"
basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
the License for the specific language governing rights and limitations
under the License.
The Initial Developer of the Original Code is Ericsson AB.
</legalnotice>
<title>inviso_lfm</title>
<prepared></prepared>
<docno></docno>
<date></date>
<rev></rev>
</header>
<module>inviso_lfm</module>
<modulesummary>An Inviso Off-Line Logfile Merger</modulesummary>
<description>
<p>Implements an off-line logfile merger, merging binary trace-log files from several nodes together in chronological order. The logfile merger can also do pid-to-alias translations.</p>
<p>The logfile merger is supposed to be called from the Erlang shell or a higher layer trace tool. For it to work, all logfiles and trace information files (containing the pid-alias associations) must be located in the file system accessible from this node and organized according to the API description.</p>
<p>The logfile merger starts a process, the output process, which in turn starts one reader process for every node it shall merge logfiles from. Note that the reason for one process per node is not remote communication (the logfile merger is an off-line utility); it is to sort the logfile entries in chronological order.</p>
<p>The logfile merger can be customized, both in the implementation of the reader processes and in the output the output process shall generate for every logfile entry.</p>
</description>
<funcs>
<func>
<name>merge(Files, OutFile) -></name>
<name>merge(Files, WorkHFun, InitHandlerData) -></name>
<name>merge(Files, BeginHFun, WorkHFun, EndHFun, InitHandlerData) -> {ok, Count} | {error, Reason}</name>
<fsummary>Merge logfiles into one file in chronological order</fsummary>
<type>
<v>Files = [FileDescription]</v>
<v> FileDescription = FileSet | {reader,RMod,RFunc,FileSet}</v>
<v> FileSet = {Node,LogFiles} | {Node,[LogFiles]}</v>
<v> Node = atom()</v>
<v> LogFiles = [{trace_log,[FileName]}] | [{trace_log,[FileName]},{ti_log,TiFileSpec}]</v>
<v> TiFileSpec = [string()] - a list containing exactly one string.</v>
<v> FileName = string()</v>
<v> RMod = RFunc = atom()</v>
<v>OutFile = string()</v>
<v>BeginHFun = fun(InitHandlerData) -> {ok, NewHandlerData} | {error, Reason}</v>
<v>WorkHFun = fun(Node, LogEntry, PidMappings, HandlerData) -> {ok, NewHandlerData}</v>
<v> LogEntry = tuple()</v>
<v> PidMappings = term()</v>
<v>EndHFun = fun(HandlerData) -> ok | {error, Reason}</v>
<v>Count = int()</v>
<v>Reason = term()</v>
</type>
<desc>
<p>Merges the logfiles in <c>Files</c> together into one file in chronological order. The logfile merger consists of an output process and one or several reader processes.</p>
<p>Returns <c>{ok, Count}</c> where <c>Count</c> is the total number of log entries processed, if successful.</p>
<p>When specifying <c>LogFiles</c>, the standard reader process currently supports only:</p>
<list type="bulleted">
<item>one single file</item>
<item>a list of wraplog files, following the naming convention <c><![CDATA[<Prefix><Nr><Suffix>]]></c>.</item>
</list>
<p>Note that (when using the standard reader process) it is possible to give a list of <c>LogFiles</c>. The list must be sorted with the oldest first. This causes several trace-logs (from the same node) to be merged together into the same <c>OutFile</c>; the reader process simply starts reading the next file (or wrapset) when the previous one is done.</p>
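<p>The following is a hypothetical example of a <c>Files</c> argument and a <c>merge/2</c> call using the default output process. It merges a plain trace-log and a ti-log from one node with a wrapset from another node; all node names and file names are examples only:</p>
<code type="none"><![CDATA[
%% One node with a single trace-log and a ti-log, one node with a wrapset.
Files = [{node1@host, [{trace_log, ["node1.log"]},
                       {ti_log,    ["node1.ti"]}]},
         {node2@host, [{trace_log, ["node2_wrap0.log",
                                    "node2_wrap1.log",
                                    "node2_wrap2.log"]}]}],
{ok, Count} = inviso_lfm:merge(Files, "merged.txt").
]]></code>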
<p><c>FileDescription == {reader,RMod,RFunc,FileSet}</c> indicates that <c>spawn(RMod, RFunc, [OutputPid,LogFiles])</c> shall create a reader process.</p>
<p>The output process is customized with <c>BeginHFun</c>, <c>WorkHFun</c> and <c>EndHFun</c>. If <c>merge/2</c> is used, a default output process configuration is used, which basically creates a text file and writes the output line by line. <c>BeginHFun</c> is called once, before log entries are requested from the reader processes. <c>WorkHFun</c> is called for every log entry (trace message) <c>LogEntry</c>; this is typically where the log entry is written to the output. <c>PidMappings</c> contains the pid-to-alias translations produced by the reader process. <c>EndHFun</c> is called when all reader processes have terminated.</p>
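<p>The following sketch shows one possible set of handler funs for <c>merge/5</c>, writing each log entry as a text line. Here the handler data is the io-device of the output file; the file name and the exact output format are assumptions for the example:</p>
<code type="none"><![CDATA[
%% InitHandlerData is here the output file name; BeginHFun opens it.
BeginHFun = fun(FileName) ->
                    file:open(FileName, [write])    % {ok, FD} | {error, Reason}
            end,
%% Called for every log entry; writes one line and keeps FD as handler data.
WorkHFun = fun(Node, LogEntry, PidMappings, FD) ->
                   io:format(FD, "~w ~w: ~w~n", [Node, PidMappings, LogEntry]),
                   {ok, FD}
           end,
%% Called when all reader processes have terminated.
EndHFun = fun(FD) -> file:close(FD) end,
{ok, Count} = inviso_lfm:merge(Files, BeginHFun, WorkHFun, EndHFun, "merged.txt").
]]></code>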
<p>Currently the standard reader can only handle one ti-file (per <c>LogFiles</c>). Further, the current inviso meta tracer is not capable of wrapping ti-files. (This is also because a wrapped ti-log would most likely be worthless, since alias associations made in the beginning would be erased but still used in the trace-log.)</p>
<p>The standard reader process is implemented in the module <c>inviso_lfm_tpreader</c> (trace port reader). It understands Erlang linked in trace-port driver generated trace-logs and <c>inviso_rt_meta</c> generated trace information files.</p>
</desc>
</func>
</funcs>
<section>
<title>Writing Your Own Reader Process</title>
<p>Writing a reader process is not that difficult. It must:</p>
<list type="bulleted">
<item>Export an init-like function accepting two arguments: the pid of the output process and the <c>LogFiles</c> component. <c>LogFiles</c> is actually only used by the reader processes, making it possible to redefine <c>LogFiles</c> when implementing your own reader process.</item>
<item>Respond to <c>{get_next_entry, OutputPid}</c> messages with <c>{next_entry, self(), PidMappings, NowTimeStamp, Term}</c> or <c>{next_entry, self(), {error,Reason}}</c>.</item>
<item>Terminate normally when no more log entries are available.</item>
<item>Terminate on an incoming EXIT-signal from <c>OutputPid</c>.</item>
</list>
<p>The reader process must of course understand the format of a logfile written by the runtime component.</p>
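<p>The following is a minimal sketch of such a reader process, selected with <c>{reader, my_lfm_reader, init, FileSet}</c> in <c>Files</c>. It reads one <c>{NowTimeStamp, Term}</c> tuple per entry with <c>io:read/2</c>; the module name, function name and file format are assumptions for the example, and a real reader must instead parse the logfile format written by the runtime component:</p>
<code type="none"><![CDATA[
-module(my_lfm_reader).
-export([init/2]).

%% Hypothetical reader process for inviso_lfm. Started with
%% spawn(my_lfm_reader, init, [OutputPid, LogFiles]).
init(OutputPid, LogFiles) ->
    process_flag(trap_exit, true),           % catch EXIT from the output process
    link(OutputPid),                         % make sure we are linked to it
    [{trace_log, [FileName]}] = LogFiles,    % this sketch: one file, no ti-log
    {ok, FD} = file:open(FileName, [read]),
    loop(OutputPid, FD).

loop(OutputPid, FD) ->
    receive
        {get_next_entry, OutputPid} ->
            case io:read(FD, '') of
                {ok, {NowTS, Term}} ->       % no pid-alias translations here
                    OutputPid ! {next_entry, self(), [], NowTS, Term},
                    loop(OutputPid, FD);
                eof ->                       % no more entries, terminate normally
                    file:close(FD);
                {error, Reason} ->
                    OutputPid ! {next_entry, self(), {error, Reason}},
                    loop(OutputPid, FD)
            end;
        {'EXIT', OutputPid, _Reason} ->      % output process terminated
            file:close(FD)
    end.
]]></code>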
</section>
</erlref>