Difference between revisions of "AUG Predefined Probes"

From OC Systems Wiki!
Jump to: navigation, search
m
m
Line 17: Line 17:
 
This appendix provides more in-depth information about how the major predefined probes work and how to use them.
 
This appendix provides more in-depth information about how the major predefined probes work and how to use them.
  
There are six ''dynamic analysis'' predefined probes:
+
==== coverage.ual ====
  
==== '''coverage.ual''' ====
+
records the execution of each known source line in the specified functions of the executable program and produces a report indicating which lines in each function were not executed. See [[Coverage_Predefined_Probe|coverage.ual]].
  
records the execution of each known source line in the specified functions of the executable program and produces a report indicating which lines in each function were not executed. See [#MARKER-9-1978 "Test Coverage Probe: coverage.ual"].
+
==== events.ual ====
  
==== '''events.ual''' ====
+
records the start and end times of program functions and user-defined "events", organizes them by call tree, and provides several reports which help to analyze the performance of an application. This probe supports Java as well as native programs. See [[Events_Predefined_Probe|events.ual]].
 
 
records the start and end times of program functions and user-defined "events", organizes them by call tree, and provides several reports which help to analyze the performance of an application. This probe supports Java as well as native programs. See [#MARKER-9-2031 "Performance Probe: events.ual"].
 
  
 
==== profile.ual ====
 
==== profile.ual ====
  
monitors the selected function calls in the executable program and displays execution time statistics about those functions to help indicate where and when the program is spending its wallclock time. See [#MARKER-9-2058 "Profile Probe: profile.ual"].
+
monitors the selected function calls in the executable program and displays execution time statistics about those functions to help indicate where and when the program is spending its wallclock time. See [[Profile Predefined Probe|profile.ual]].
  
 
==== statprof.ual ====
 
==== statprof.ual ====
  
uses the Unix kernel sampling mechanism for performance analysis (as used by prof) to provide statistical information about where the application is spending its CPU time. See [#MARKER-9-2108 "Statistical Profiling: statprof.ual"]
+
uses the Unix kernel sampling mechanism for performance analysis (as used by prof) to provide statistical information about where the application is spending its CPU time. See [[Statistical Profiling Predefined Probe|statprof.ual]]
  
 
==== trace.ual ====
 
==== trace.ual ====
  
traces the execution of the selected functions within an application program to record when, and in which order, and by which thread each was invoked. This probe supports Java as well as native programs. See [#MARKER-9-2117 "Trace Probe: trace.ual"]. The <code>trace.ual</code> predefined probe is used in the on-line example <code>[../examples/evaluate/4.predefined/README $APROBE/examples/evaluate/4.predefined]</code>. Note that direct use of this probe, especially using the GUI to define what to trace, has been superceded by the RootCause interface.
+
traces the execution of the selected functions within an application program to record when, and in which order, and by which thread each was invoked. This probe supports Java as well as native programs. See [[Trace Predefined Probe|trace.ual]]. The '''trace.ual''' predefined probe is used in the on-line example <code>$APROBE/examples/evaluate/4.predefined</code>. Note that direct use of this probe, especially using the GUI to define what to trace, has been superceded by the RootCause interface.
 +
 
 +
==== memstat.ual ====
 +
 
 +
reports heap memory allocation patterns and leaks using statistical sampling. See [[Memstat Predefined Probe|memstat.ual]].
  
 
==== memwatch.ual ====
 
==== memwatch.ual ====
  
reports heap memory allocation patterns and leaks. See [#MARKER-9-2173 "Memory Watch Probe: memwatch.ual"].
+
reports heap memory allocation patterns and leaks. See [[Memwatch Predefined Probe|memwatch.ual]].
 
 
In addition, there is a ''static analysis'' predefined probe:
 
  
 
==== info.ual ====
 
==== info.ual ====
Line 51: Line 51:
 
Lastly there is
 
Lastly there is
  
==== '''quick_gui.ual''' ====
+
==== quick_gui.ual ====
  
a library of functions to provide simple Java GUI support to a user-written probe. The interface is defined in [../include/quick_gui.h <code>$APROBE/include/quick_gui.h</code>], and includes histogram and plot graph support, as well pop-up Yes/No and message dialogs. Use of this library is illustrated by the [../examples/learn/visualize_data/README <code>$APROBE/examples/learn/visualize_data</code>] example. See [#MARKER-9-2220 "quick_gui Library"].
+
a library of functions to provide simple Java GUI support to a user-written probe. The interface is defined in <code>$APROBE/include/quick_gui.h</code>, and includes histogram and plot graph support, as well pop-up Yes/No and message dialogs. Use of this library is illustrated by the <code>$APROBE/examples/learn/visualize_data</code> example. See [#MARKER-9-2220 "quick_gui Library"].
  
 
In most of this appendix, we'll refer to these by their simple filename: coverage, events, profile, trace, memwatch, etc.
 
In most of this appendix, we'll refer to these by their simple filename: coverage, events, profile, trace, memwatch, etc.
Line 101: Line 101:
 
These mechanisms are much the same for the dynamic analysis probes above.The <code>info.ual</code> predefined probe is a utility for which all behavior is specified on the command-line; and <code>quick_gui.ual</code> is a library, so all interaction with it is by function calls from within a user-written probe. The statprof.ual predefined is another simple probe that does not require a configuration file or GUI to customize its behavior.
 
These mechanisms are much the same for the dynamic analysis probes above.The <code>info.ual</code> predefined probe is a utility for which all behavior is specified on the command-line; and <code>quick_gui.ual</code> is a library, so all interaction with it is by function calls from within a user-written probe. The statprof.ual predefined is another simple probe that does not require a configuration file or GUI to customize its behavior.
  
The remainder of this section describe the common aspects of the coverage, profile, trace, and memwatch predefined probes, using the trace probe as an example. Subsequent sections provide detailed descriptions of each probe.
 
  
 
=== Command Line ===
 
=== Command Line ===
Line 220: Line 219:
 
==== Example D-1. Calling ap_Trace_DoSnapshot() ====
 
==== Example D-1. Calling ap_Trace_DoSnapshot() ====
  
== Performance Probe: events.ual ==
 
 
''''''The '''events.ual''' predefined probe records the start and end times of program functions and user-defined "events", organizes them by call tree, and provides several reports which help to analyze the performance of an application. This probe supports Java as well as native programs.
 
 
An event can be any particular operation that the application performs that it makes sense for the application to report. An instance of an event should have a specific start and end point somewhere within the application. For instance, we might recognize that we have an "incoming missile" message and start the IncomingMissile event. At the point at which we display this to the user we can record the end point for the IncomingMissile.
 
 
All events are logged as they occur. At format time, the events can be analyzed in a summary or detailed format.
 
 
You can find an example in the directory $APROBE/examples/predefined_probes/events/.
 
 
=== Usage ===
 
 
This probe is applied at run time using [aprobe-9.html#MARKER-9-1075 aprobe] as described under [#MARKER-9-2038 "Events UAL Parameters"]. The data to be collected is defined by a configuration file specific to the application being probed. See [#MARKER-9-1955 "Configuration File"]. This configuration file is constructed by the user with a text editor starting with a template defined in $APROBE/probes/event.cfg.
 
 
=== Events UAL Parameters ===
 
 
'''events.ual''' is specified on the [aprobe-9.html#MARKER-9-1075 aprobe] command line or in an [aprobe-10.html#MARKER-9-1332 APO file], and with [aprobe-9.html#MARKER-9-1035 apformat], as described in [#MARKER-9-1941 "Command Line"]. The specific options are:
 
 
 
'''aprobe  -u events.ual '''    ['''-p''' "['''-h'''] ['''-v'''] ['''-c''' ''config_filename'']"
 
    ''your_program ''
 
 
where:
 
 
; '''-c ''config_filename'''''
 
: <br />specifies that the name of the probe configuration options file will follow immediately after -c. The default file name is your_program.events.cfg. For example, if your executable program is called wilbur.exe, then the default file name would be wilbur.exe.events.cfg.
 
; '''-h '''
 
: produces brief help text.
 
; '''-v '''
 
: verbose mode, which produces additional progress messages.
 
 
Note that the -c argument is stored by this probe, so you don't need any other arguments, you don't need specify the UAL and configuration file with apformat.
 
 
=== Events UAL Configuration File ===
 
 
The Events configuration file specifies everything about the behavior of the Events performance probe. Without explicitly defining the events in the configuration file, the probe will do nothing. The configuration file is responsible for defining the events (runtime options) as well as several format time options.
 
 
By default the events probe will be looking for ''base_exe_name''.events.cfg file in the current directory. You can override this location by passing it as a parameter to events probe. For example:
 
 
 
aprobe -u events -p "-c ./my_events.cfg" prog.exe
 
  
When formatting the data with apformat command, the configuration file will be expected to remain in the same location where it was found when the data was collected. Again, you can override this location by passing it as a parameter to the events probe:
 
 
 
apformat -u events -p "-c ./my_events.cfg" a.apd
 
 
The example below shows one possible Events configuration file.
 
 
 
PROBE CONFIGURATION FILE FOR EVENTS VERSION 1.1.0
 
 
CorrectCpuTime    FALSE
 
DisplayReport    TotalReport
 
PrintHeaders      TRUE
 
EventNameLength  30
 
SaveReportsToFile FALSE
 
SummarizeWallTime FALSE
 
 
EVENT FUNCTION foo*
 
EVENT FUNCTION * in "libc.so"
 
EVENT FUNCTION com.sun.java.* in $java$
 
EVENT START ProcessMessage MessageGet() ON EXIT
 
EVENT STOP  ProcessMessage MessageGet() ON ENTRY
 
 
==== Example D-4. events.cfg File ====
 
 
=== Run Time Configuration Options ===
 
 
The <code>events.cfg</code> file defines the behavior of the Events probe both when the program is running and when the collected data is formatted. This section defines the run-time EVENT directive and related options.
 
 
==== Events ====
 
 
Events can be either LEAF or NON-LEAF events. LEAF events are those that signify one point in time and can not have any other events nested underneath them. NON-LEAF events, on the other hand, are comprised of a START and STOP event pair. They signify a time interval or a PROCESS. There is a special kind of NON-LEAF events that we call MESSAGES. Just like PROCESSES these events also have duration, but they can not be properly nested inside of other events or have other events nested underneath them. They also don't have to originate and stop in the same thread (though this is not precluded). The MESSAGE events are signified by SEND and RECV points. The syntax of an event specification formally defined as follows:
 
 
''event'' ::= EVENT ''event_details<br />event_details'' ::= <br />''event_name function_name<br />'' | LEAF ''event_name function_name'' [ON ENTRY | ON EXIT]<br /> | START ''event_name function_name'' [ON ENTRY | ON EXIT]<br /> | STOP ''event_name function_name'' [ON ENTRY | ON EXIT]<br /> | SEND ''event_name function_name'' [ON ENTRY | ON EXIT]<br /> | RECV ''event_name function_name'' [ON ENTRY | ON EXIT]<br /> | FUNCTION ''function_name<br /><br />''START signifies the starting point of a NON-LEAF event pair;<br />STOP signifies the ending point of a NON-LEAF event pair
 
 
When none of LEAF, START, STOP or FUNCTION are specified the event is assumed to signify both the starting and ending points.
 
 
==== FUNCTION Events ====
 
 
FUNCTION events will have their function name be used also as their event name. FUNCTION events are the simplest and quickest events one may define. They allow the use of wildcards, which helps one to setup the initial configuration in minutes. For example:
 
 
EVENT FUNCTION foo*<br />EVENT FUNCTION * in "libc.so"<br />EVENT FUNCTION com.sun.java.* in $java$<br /><br />Function events may be easy to define, but a bit harder to refer to by name, e.g.<br />from a FOCUS or IGNORE directive. This is because you have to refer to function events using the full function/method name known to Aprobe. For example,
 
 
extern:foo(), or "my_file.c":"local_function()".
 
 
==== Examples of Events ====
 
 
 
 
EVENT START ProcessMessage MessageGet() ON EXIT
 
EVENT STOP  ProcessMessage MessageGet() ON ENTRY
 
 
In the example above we defined one event: <code>ProcessMessage</code>. This non-leaf event starts when we get a message (on exit from <code>MessageGet()</code>) and stops when we come back for more messages (on entry to <code>MessageGet()</code>). Notice that the above definition will result in the first <code>STOP ProcessMessage</code> event occur before the first START ProcessMessage. While this probe will produce a warning and recover in such a situation it may be better to avoid it. We can do that by adding another START and STOP points pair for ProcessMessage event on on entry/exit to/from main(). These two event points could be added in one statement: <code>EVENT ProcessMessage main()</code>.
 
 
Some events may be best described by their function name. The keyword FUNCTION would tell this probe that it should use the function name as the name of the event:
 
 
 
EVENT FUNCTION extern:"MyFunctionEvent()"
 
 
Leaf events are a good way to record an occurrence of a "point in time" event:
 
 
 
EVENT LEAF  Error#10 extern:"ReportFileNotFoundError()"
 
EVENT SEND SendMessage "MyClass::SendMsg*" in "$java$" ON ENTRY
 
EVENT RECV  ReceiveMessage "MyClass::GetMsg*"  in "$java$" ON EXIT
 
 
==== Hook Events ====
 
 
Some event points may be hard to define with configuration file alone. To help you identify such events you may build the event points into your application, by using hook functions. For example, your application may make the following call before processing an event:
 
 
 
hook_start2(99, Event-&gt;EventId);
 
Event-&gt;Process();
 
hook_stop2(99, Event-&gt;EventId);
 
 
Events identified by the calls to the hooks library are automatically recorded by the events probe. The parameters passed to the hook function will be recorded together with the event and will be used for the event identification. For example, if the EventId == 10in the example above, the event will name "Hook.99.10" will be recorded. You can use "Define" directive to assign a more meaningful name to this event. The complete list of the hooks routines may found in <code>hooks.h</code>.
 
 
==== Time measurement ====
 
 
We take a sample time at each user defined event. Two types of time stamps are recorded: WALL time and CPU time. Wall time will only appear in the detailed report, all statistics such as total/min/max/avg times are based on CPU times. CPU time measurement may be corrected for the overhead of time sampling itself. This is controlled by CorrectCpuTime flag, which is set to FALSE by default. Change this to <code>CorrectCpuTime=TRUE</code> if do wish to have CPU time corrected. ''Warning''<nowiki>: this may result in negative deltas reported for very short events. </nowiki>
 
 
=== Formatting Options ===
 
 
Formatting options in the configuration file don't affect what data is collected. They only affect they way the data is presented when [aprobe-9.html#MARKER-9-1035 apformat] is run. Therefore you can change these options and get a different report of your data without rerunning your application.
 
 
==== DisplayReport Option ====
 
 
The DisplayReport option will tell the formatter whether you would like to see data in detailed or summary formats.
 
 
==== '''DetailedReport''' ====
 
 
will contain information from each individual event.
 
 
==== '''SubtotaReport''' ====
 
 
summary report will contain information summarized from the detailed events since the last summary. One "Subtotal" report will be generated for each top level event.
 
 
==== '''TotalReport''' ====
 
 
will contain the summary of the data collected for the whole run. This is the initial value in the configuration file.
 
 
==== DEFINE ====
 
 
One may change the way the event name is presented in the report, without having to rerun the application, using the DEFINE directive followed by the event name and the desired representation.
 
 
For example:
 
 
 
DEFINE Event#256 "Process Radar Data"
 
DEFINE Hook.999 "Move Troops"
 
DEFINE "ThisIsAVeryVeryVeryLooooooooooooooooooongFunctionName()" BetterName
 
 
Note that the new name is for report purposes only and the original event name must still be used to identify the event, e.g. to [#MARKER-9-2052 FOCUS] or [#MARKER-9-2053 IGNORE]. For function events, the name you have to use is the full function name. If you would like to use a more readable name for your event you can define it together with your event.
 
 
Hook events have to be referred to by the name generated for them using one of the following formats:
 
 
 
Hook.''HookParam1 ''Hook.''HookParam1''.''HookParam2''
 
 
==== FOCUS ====
 
 
"The FOCUS directive followed by an event name will allow you to limit the reporting to the events that have a focus set on them and those events that are nested underneath them. When no FOCUS directive is used all events are assumed to be "in focus". For example, if your events.cfg file contained only one FOCUS event:
 
 
 
FOCUS Hook.999
 
 
You would see only information about events which occurred between start and stop of Hook.999 event.
 
 
==== IGNORE ====
 
 
Once you collected the data you may find that you would like to filter out some of the events either because they are not properly defined (e.g. START doesn't match STOP) or, perhaps, because they add too much clutter to the report. You can filter such events out by using "Ignore" directive followed by the event name. For example, specifying
 
 
IGNORE Hook.999
 
 
would eliminate the appearance of this event in the output. Note that this does not eliminate events "under" (occurring during) the ignored event.
 
 
==== Runtime modifier ====
 
 
One may precede [#MARKER-9-2052 FOCUS] and [#MARKER-9-2053 IGNORE] directives with a modifier "Runtime" to make sure that no data is collected at run time for those events that would be filtered out by these directives at format time. For example.
 
 
Runtime IGNORE Hook.999
 
 
Would record the minimum amount of data for the Hook.999 event. Since other events may depend upon this one, it is not necessarily the same as commenting out the original event definition.
 
 
==== PrintHeaders ====
 
 
The PrintHeaders flag controls whether report headers will be printed out. The default is TRUE.
 
 
==== EventNameLength ====
 
 
EventNameLength specifies the length to which FUNCTION event's names will be abbreviated in the report. The default is 30.
 
 
==== SummarizeWallTime ====
 
 
Setting SummarizeWallTime=TRUE results in total and subtotal reports summarizing the Wall Clock Time, instead of Cpu Time. The default is FALSE.
 
 
==== Report File Options ====
 
 
==== SaveReportsToFile ====
 
 
When set to TRUE, this causes the results to be written to files named according to their contents. The default is FALSE, wherein the reports are be written out to standard output.
 
 
==== ReportsDirectory ====
 
 
The ReportsDirectory option specifies the location of where the report files should be created. By default, they are created in the directory containing the APD files.
 
 
==== ReportsBaseNamePrefix ====
 
 
This option specifies the prefix used to generate report files' base names. The default name is:
 
 
''TimeStampOnProgramEntry''.pid=''PID''.''BaseApdName''
 
 
For example:
 
 
2002-10-10@15:33.pid=11286.hello
 
 
You can override the value of ReportsBaseNamePrefix specified in the configuration file with '''-b''' format-time option. For example:
 
 
 
apformat -u events -p "-b node10" hello.apd
 
 
 
=== Reports ===
 
 
There are three output formats specified using the [#MARKER-9-2047 DisplayReport Option]<nowiki>: </nowiki>[#MARKER-9-2048 DetailedReport], [#MARKER-9-2049 SubtotaReport], and [#MARKER-9-2050 TotalReport]. The Detailed report shows every event. The Subtotal and Summary reports merge all events with the same traceback. As generated, all reports are about 160 characters wide, and include all columns to facilitate automatic postprocessing. These columns are, in order:
 
 
# Thread Id
 
# Nesting Level - This is the event's nesting level within a given thread relative to other events.
 
# Event type
 
#* " " - StartEvent
 
#* "-" - StopEvent
 
#* " " - LeafEvent
 
#* "X" - Placeholder StopEvent created for unmatched StartEvent (error condition)
 
# Event Name - This is a short form of the name. The width of this column is determined by the [#MARKER-9-2054 EventNameLength] variable, which defaults to 30.
 
 
Fields 5-9 are filled in only for the DetailedReport.
 
 
# Event Sequence Id - Unique Id assigned to each instance of an event (DetailedReport only).
 
# Parent Event Sequence Id - Sequence Id of the parent event or 00000 if the event is not nested.
 
# Event Wall Time - the time at which the event occurred.
 
# Delta Wall Time - Wall time in microseconds since the matching start event This field is only filled for Stop events in the DetailedReport.
 
# Delta CPU Time - CPU time in microseconds since the matching start event. This field is only filled for Stop events in the DetailedReport.
 
 
Fields 10-14 are only filled in for summary reports (SubtotalReport and TotalReport).
 
 
# Call Count - the number of times the given event took place.
 
# Total CPU Time - the sum of the CPU time taking by all instances of this event.
 
# Minimum CPU Time - the longest time any instance of this event took.
 
# Maximum CPU Time - the short time any instance of this event took.
 
# Average CPU Time - the average time of all instances of this event.
 
# Last Label - This field will contain the last value of the label recorded by a call to <code>hook_label1()</code>. See [#MARKER-9-2046 "Hook Events"].
 
 
Additional fields (16 and onward) may contain user data.
 
 
In the examples that follow, blank columns have been deleted so they fit on a page. You can do this yourself using the cut command, for example:
 
 
 
apformat events_example | cut -d'|' -f1-4,10-14 &gt; summary.out
 
 
==== Detailed Report ====
 
 
The detailed report simply contains all the recorded events in their chronological order. A small subset of the table is shown below, with columns 10-15 not shown.
 
 
 
--|--|-|---------------|-----|-----|---------------|-----------|-----------|-...
 
Th|Ns|N|          Event|Event| Prnt|          Event|      Delta|      Delta|
 
Id|Lv|N|          Name |SeqId|SeqId|      Wall Time|  Wall(us)|  CpuT(us)|
 
--|--|-|---------------|-----|-----|hh:mm:ss.mmmuuu|sssssmmmuuu|sssssmmmuuu|-...
 
  0| 0| |        main()|00002|00000|16:58:03.190531|          |          |
 
  0| 1| |    BubbleSort|0001e|00002|16:58:03.206741|          |          |
 
  0| 1|-|    BubbleSort|0001e|00002|16:58:04.413698|    1206957|    730000| ...
 
  0| 1| |            QS|002bx|00002|16:58:04.424019|          |          |
 
  0| 1|-|            QS|002bx|00002|16:58:04.530950|    106931|      60000|
 
 
 
==== Example D-5. Detailed Events Report ====
 
 
==== Summary Reports ====
 
 
The TotalReport and SubTotalReport options contain similar output -- identical in-fact if there is only one thread and hence one call tree. Below is a subset of the TotalReport corresponding to the Detailed report above. Note that again the blank fields (5-9) are not shown here but will exist in your output.
 
 
 
--|--|-|---------------|-----|-----------|-----------|-----------|-------|
 
Th|Ns|N|          Event|    |      Min |      Max |      Avg |  Last|
 
Id|Lv|N|          Name | ... |  CpuT(us)|  CpuT(us)|  CpuT(us)|  Label|
 
--|--|-|---------------|-----|sssssmmmuuu|sssssmmmuuu|sssssmmmuuu|-------|
 
  S| 0| |THREAD # 26672 |    |    7820000|    7820000|    7820000|      |
 
  S| 1| |        main()| ... |    7820000|    7820000|    7820000|      |
 
  S| 2| |    BubbleSort|    |    600000|    890000|    717000|      |
 
  S| 2| |            QS|    |      10000|      90000|      51000|      |
 
 
 
==== Example D-6. Total Events Report ====
 
 
== Profile Probe: profile.ual ==
 
 
The '''profile.ual''' predefined probe profiles the execution of functions within an application program. The probe records the number of calls made to functions, the number of calls made ''by'' functions, the amount of time spent in each function, and the amount of time spent in each function plus all the functions it calls. The profiling information is reported for each thread and for the program as a whole.
 
 
Profiling allows you to determine how much time is spent executing the various functions that comprise your application. The profile information can be used to locate "hot spots" within your application on which to focus your attention when trying to improve your program's performance.
 
 
While this is a very powerful tool, it is not going to tell you in the first run what needs to be re-coded, unless it's a small application with a very obvious problem. Performance analysis of an application is an iterative process that requires an understanding of what the application does, and a feeling for how long operations ''should'' take.
 
 
This section does not address a methodology for performance analysis using aprobe. It simply describes the use of the <code>profile.ual</code> predefined probe. A larger example which also uses statprof.ual can be found in the [../examples/predefined_probes/statprof/README $APROBE/examples/learn/statprof] directory and you are encouraged to contact OC Systems to discuss using Aprobe on your specific application.
 
 
=== Usage ===
 
 
This probe is applied at run time using [aprobe-9.html#MARKER-9-1075 aprobe] as described under [#MARKER-9-2066 "Profile UAL Parameters"] below. The only functions for which profiling data will be collected are those selected with the ''Profile'' keyword in the configuration file (see [#MARKER-9-1955 "Configuration File"] and [#MARKER-9-1961 "Configuration GUI"]). The ''Remove'' keyword allows you to remove a function from the set to be profiled, and the ''Trigger'' keyword limits profiling to specific paths rooted at a specified function.
 
 
The profile information is logged to a file for later processing by [aprobe-9.html#MARKER-9-1035 apformat]. A table of profile information is generated for each thread in the application program, and one summary table is generated to display profile information for the program as a whole. See [#MARKER-9-2105 "Profile Probe Output"] for an example.
 
 
=== Profile UAL Parameters ===
 
 
'''profile.ual''' is specified on the [aprobe-9.html#MARKER-9-1075 aprobe] command line or in an [aprobe-10.html#MARKER-9-1332 APO file] as described in [#MARKER-9-1941 "Command Line"]. The specific options are:
 
 
 
aprobe  -u '''profile.ual '''    [-p "[-h] [-v] [-g] [-c config_filename]"]
 
    your_program
 
 
where:
 
 
; '''-c ''config_filename'''''
 
: <br />specifies that the name of the probe configuration options file will follow immediately after -c. The default file name is your_program.profile.cfg. For example, if your executable program is called wilbur.exe, then the default file name would be wilbur.exe.profile.cfg.
 
; '''-g '''
 
: starts they Java configuration GUI at probe startup, before running your program.
 
; '''-h '''
 
: produces brief help text.
 
; '''-v '''
 
: verbose mode, which produces additional progress messages.
 
 
=== Profile Configuration File ===
 
 
The Profile configuration file is used to specify what subprograms are to be analyzed, when snapshots are to be taken, and other options, as described in [#MARKER-9-1955 "Configuration File"]. The example below shows one possible Profile configuration file.
 
 
 
PROBE CONFIGURATION FILE FOR PROFILE VERSION 2.0.0
 
 
Verbose                  FALSE
 
ProfilingEnabledInitially TRUE
 
DefaultLevels            5
 
StartWithGUI              FALSE
 
ReportCallers            FALSE
 
IndexSymbols              TRUE
 
SortByCumulativeTime      FALSE
 
SeparateTables            TRUE
 
CommaOutputFile          "app_output.csv"
 
 
// see how much of that time is spent in system functions
 
PROFILE "open()" in "libc.so"
 
PROFILE "read()" in "libc.so"
 
 
// profile ''almost'' everything in the Motif Library
 
PROFILE "*" in "libXm.so"
 
REMOVE "_XmRecordEvent" in "libXm.so"
 
 
// ...but only when in RefreshCallback
 
PROFILE "RefreshCallback()"
 
TRIGGER "RefreshCallback()" Levels 4
 
 
==== Example D-7. profile.cfg File ====
 
 
==== Configuration Variables ====
 
 
The following are the only valid keywords that identify lines to set configuration variables. Each such line must begin with one of these keywords, and the keyword must be followed by its value. Nothing else is allowed on the same line.
 
 
==== Verbose ====
 
 
This must be followed by the value TRUE or FALSE. The default is FALSE. The value TRUE indicates the progress messages should be produced by the profile probe.
 
 
==== ProfilingEnabledInitially ====
 
 
This must be followed by the value TRUE or FALSE. The default is TRUE. The value TRUE indicates that data logging will begin as soon as the application program starts running. The value FALSE indicates that data logging will begin only after a call is made to the probe's function <code>ap_Profile_Enable()</code> rather than as soon as the application program starts running or a TRIGGER function is hit.
 
 
==== DefaultLevels ====
 
 
This must be followed by a valid number from 0 upwards. In the absence of any specific triggers (or levels on a TRIGGER) this controls how deep down the call stack profile will go before disabling profiling.
 
 
==== StartWithGUI ====
 
 
This must be followed by the value TRUE or FALSE. The default is FALSE. The value TRUE indicates that the configuration GUI should be started before the target program runs, even if -g wasn't specified on the command-line. A FALSE value is overridden by the '''-g''' command-line option.
 
 
==== ReportCallers. ====
 
 
must be followed by the value TRUE or FALSE. The default is FALSE. The value TRUE means that information about the caller of each profiled routine is saved at runtime and displayed at format time. This gives output akin to the Unix gprof command. This switch can be modified at format time to turn this output off (note that turning it on at format time when it was off at runtime will not give useful information).
 
 
==== IndexSymbols. ====
 
 
must be followed by the value TRUE or FALSE. The default is TRUE. When set, each routine is displayed in an abbreviated form when it occurs in the formatted tables. An index showing the mapping between abbreviated and "real" names is displayed at the end. When this is set to FALSE, full symbol names are displayed in the formatted output.
 
 
==== SortByCumulativeTime. ====
 
 
This must be followed by the value TRUE or FALSE. The default is FALSE. This option has no effect unless SeparateTables is set to TRUE. In that case one combined table showing both individual and cumulative times for each routine is displayed and this option controls the ordering of that table. The default is to sort the table by individual time (the routine which has the most individual time allocated to it is shown first); if this option is TRUE, the table is sorted by cumulative time.
 
 
==== SeparateTables. ====
 
 
This must be followed by the value TRUE or FALSE. The default is TRUE. When set to TRUE, two tables are produced for each thread (and / or for the entire program): One shows the breakdown sorted by individual time, the other by cumulative time. If this is set to FALSE one table is produced and the sort order depends on the SortByCumulativeTime option described above.
 
 
==== CommaOutputFile. ====
 
 
This must be followed by the name of an output file (or nothing for a null string). If this is not a null string, a file of the given name is created and data in a form suitable for input to a variety of spreadsheet programs (namely comma-separated-values) is produced. It should be noted that this file has PC line separators as it is found that more spreadsheet programs will accept data correctly in this format than if Unix line feeds are used.
 
 
==== Configuration of Profiled Functions ====
 
 
Each function to be profiled must be specified explicitly using the '''PROFILE''' keyword followed by the name of the function, as described in [#MARKER-9-1958 "Configuration of Selected Functions"]
 
 
The REMOVE keyword allows you to specify functions that should ''not'' be instrumented for profiling. This is useful when used in conjunction with a wildcard ("*"), to gather data about everything except certain routines.
 
 
A line beginning with the keyword TRIGGER specifies the name of a ''trigger function'' in the usual manner. Entry to the trigger function will enable profile information collection, and the corresponding exit will disable profiling. This is similar to the behavior of nested probes (see [aprobe-6.html#MARKER-9-574 "Nesting of Probes"]). Only while a trigger function is active (executing in the call chain), will profile data will be collected for functions selected with PROFILE statements.
 
 
The TRIGGER statement may optionally be followed by a "Levels n" option (where n is a number). This will cause the trigger to only be active for the given number of calls. For instance, Levels 2 will only turn the trigger on for the trigger function and the function it calls. If triggers are nested (e.g. we have a trigger on Routine1() which calls Routine2() which we also have a trigger on), the levels setting for the innermost trigger overrides those at the outer levels.
 
 
Note that TRIGGER may imply PROFILE. Specifically, if you have a TRIGGER statement for a particular module (including the application module) and have not specified any PROFILE lines for that module, all routines in that module will automatically be profiled.
 
 
Note that if you add a TRIGGER statement to your config file (either by editing the file directly or by using the GUI) you should check the state of the ProfilingEnabledInitially flag - by default that is TRUE and you usually want to make it FALSE when you add a trigger.
 
 
=== Profile Configuration GUI ===
 
 
The configuration GUI (Graphical User Interface) for '''profile.ual''' is the "Profile Probe Configuration" dialog. This dialog allows you to set the options that control the coverage probe and is generally as described in [#MARKER-9-1961 "Configuration GUI"]. The ''Help'' button provides the following description:
 
 
'Trigger Functions' lists the functions whose invocation controls profiling. Profiling will occur when profiled functions are called from a trigger function.
 
 
This list is changed by clicking on 'Pick' below the list, and then choosing functions to be triggers.
 
 
Clicking the 'Advanced' button will bring up another dialog containing the list of 'Profiled Functions', which lists the functions whose invocation you want to profile.
 
 
'Produce verbose output' indicates that progress messages should be output by this probe
 
 
'StartWithGUI' indicates that the configuration GUI should be started before the target program runs.
 
 
'Profiling Enabled Initially' indicates that data logging will begin as soon as the application program starts running.
 
 
'Default depth of trigger profiling' controls how far down the call-chain below a trigger function profiling will occur (unless a given trigger specifies some other value).
 
 
'Index Symbols' indicates whether a list of symbol names should appear at the end of the output.
 
 
'Sort output by cumulative time' sorts subprograms by cumulative time rather than individual time.
 
 
'Separate Tables' indicates whether individual &amp; cumulative time should be combined in one table, or in two separate tables.
 
 
'Report Callers', when selected breaks down each call to a subprogram by the subprogram which called it.
 
 
'Comma-delimited output file' indicates the name of a file (if any) to which spreadsheet or database readable tables should be written.
 
 
'Save As' button saves the current options to a configuration file. 'Run (no save)' button runs the target program using the selected options. 'Save &amp; Run' button save the options to the configuration file, then runs the target program using the selected options.
 
 
'Abort' button will abort the target program instead.
 
 
=== Profile API ===
 
 
Users can control the behavior of the profile probe by calls from within their own probes. The API is defined by [../include/profile.h <code>$APROBE/include/profile.h</code>]. Some of the functions exported by <code>profile.ual</code> are:
 
 
; '''ap_Profile_GetOptions'''
 
: <br />Get current set of options controlling the probe.
 
; '''ap_Profile_SetOptions'''
 
: <br />Set current set of options to control the probe.
 
; '''ap_Profile_SaveOptions'''
 
: <br />Save a set of options to a configuration file.
 
; '''ap_Profile_Enable'''
 
: <br />Enables collection of profile data.
 
; '''ap_Profile_Disable'''
 
: <br />Disables collection of profile data.
 
; '''ap_Profile_DoSnapshot'''
 
: <br />Takes a snapshot of profile data for a specific thread.
 
; '''ap_Profile_DoSnapshotForAll'''
 
: <br />Takes a snapshot of profile data for all threads.
 
 
See [#MARKER-9-1970 "Snapshots"] for more information and an example.
 
 
=== Profile Performance Issues ===
 
 
See [#MARKER-9-1931 "Performance Issues"] for a general discussion of factors that affect performance.
 
 
==== Reducing Overhead with TRIGGER and LEVELS ====
 
 
The amount of data collected by the profile probe (and the trace probe, see [#MARKER-9-2117 "Trace Probe: trace.ual"]) may be greatly reduced by using the TRIGGER keyword to name a ''trigger function''. Probing is turned on when entering the trigger function, and turned off when exiting it, so the probe is only active if the trigger function is a direct or indirect caller. For example, if you only want to profile the symbol-table-reading part of your application, you might put <code>TRIGGER "ReadSymbolTable()"</code> in your configuration file and then you'd only record data for calls made directly or indirectly from <code>ReadSymbolTable()</code>.
 
 
Further control of data collection is provided with the LEVELS attribute of a trigger. This is essentially an "inverse trigger", which turns profile or trace ''off'' at a given call nesting level relative to the trigger function.
 
 
Used efficiently, the trigger and levels mechanisms this should enable you to instrument everything and only take a performance hit when your triggers are active.
 
 
By default there is a TRIGGER is at the start of each thread and continues for a depth of DefaultLevels (initially 5). This thread-level trigger is controlled by the configuration variable ProfilingEnabledInitially (or TracingEnabledInitially). So for a TRIGGER on some other function to be effective, you must set this to FALSE in the configuration file or GUI as well as selecting one or more trigger functions.
 
 
The distinction between functions that are ''probed'' and those that are designated as triggers, is crucial to understanding how to control the performance of the profile (and trace) probes.
 
 
The PROBE or PROFILE keyword identifies those functions that should be profiled under any circumstances; that is, those functions that should be patched at all. Generally you don't have to worry about this because the predefined probe logic makes a "guess" at what should be probed by what triggers are specified:
 
 
The current default is to probe each routine in a module when there is a ''trigger'' for a function in that module, or in the absence of any triggers, to probe everything in the application module.
 
 
This means don't need to specify any triggers in order for the probe to gather data, and in general you will only pick a few trigger points to limit what data is collected to the portion of the program you're focusing on. Within that portion of the program, data is collected for those functions which are probed. Outside of the triggered functions, the probes are still invoked, but are disabled.
 
 
In the event that your application spends most of the time with profile disabled (based on the triggers) this should have relatively low overhead. If, however, your application is such that your functions are always turned on, this may be a problem.
 
 
The profile probe is applied to the functions specified in the configuration file, so the run-time overhead is directly proportional to the number of calls to the functions selected. Note that a probe is executed for every function included by a PROFILE line (and not subsequently REMOVEd), even if it is not "active" due to a TRIGGER (subject to the extended instrumentation rules detailed above).
 
 
It's often hard to select the initial set of functions to be probed. One effective approach to this problem is to try to identify small functions that are called very often, and REMOVE those from profiling early, since the overhead of each profile probe is constant. You can often identify such "nuisance functions" by letting the application start, run for a few minutes, then send a Control-C to terminate it. This will invoke the thread and program exit processing, and profiling data up to that point will be valid. You can then update the <code>profile.cfg</code> file for the application, and re-run it.
 
 
Of course, if the application is complicated to run, you may want to subset it first in a test environment. This is an advantage of including Aprobe in your process early in development, rather than bringing it in during integration and delivery testing -- you can profile smaller parts of your application separately.
 
 
==== Implementation Note: Quick-Check Patching ====
 
 
An extended form of patching is used by the trace and profile predefined probes which provides a very low overhead to a function if there are no active triggers. When there are no active triggers the profiled (or traced) routines can run very close to their full (non-patched) speed.
 
 
Essentially a global flag is checked before performing any of the processing associated with the probe. This saves the time normally needed to determine whether there are any active probes for a specific function. Without this mechanism, if you profile (or trace) "the world" using regular patches there may be a significant overhead just determining that there are no probes active for the patched function.
 
 
The only user-visible effect of this "quick-check" patch mechanism, other than lower overhead, is that other predefined probes may be unintentionally disabled if a quick-check patch associated with trace or profile is applied to a function before a patch for another predefined probe such as coverage. This problem is avoided simply by naming trace and/or profile ''after'' other predefined probes when they are used in the same aprobe invocation.
 
 
=== Profile Report ===
 
 
The reports generated by running apformat vary consirably depending upon the options chosen as specified above. The default looks like the example below:
 
 
<nowiki>
 
  Profile for thread: _start() in threads[0]
 
  Comment: End-of-thread profile information.
 
-----------------------------------------------------------------------------
 
    Calls To  |              Individual Time                |
 
  Self  Child | Pct    Total  Average  Minimum  Maximum  |  Function Name
 
------ ------  --- --------- --------- --------- ---------  ---------------
 
      1      1    99  2.02011  2.02011  2.02011  2.02011  [4] main()
 
      1      0    0  0.00035  0.00035  0.00035  0.00035  [0] MyRoutine()
 
      1      0    0  0.00001  0.00001  0.00001  0.00001  [3] _init()
 
      1      0    0  0.00001  0.00001  0.00001  0.00001  [5] _fini()
 
-----------------------------------------------------------------------------
 
    Calls To  |              Cumulative Time              |
 
  Self  Child | Pct    Total    Average  Minimum  Maximum |  Function Name
 
------ ------  --- --------- --------- --------- ---------  ---------------
 
      1      1    99  2.02046  2.02046  2.02046  2.02046  [4] main()
 
      1      0    0  0.00035  0.00035  0.00035  0.00035  [0] MyRoutine()
 
      1      0    0  0.00001  0.00001  0.00001  0.00001  [3] _init()
 
      1      0    0  0.00001  0.00001  0.00001  0.00001  [5] _fini()
 
-----------------------------------------------------------------------------
 
Total time spent in the timed functions: 2.020473
 
Total probe overhead for the timed functions: 0.000000
 
Total thread time: 2.029944
 
 
  Overall program profile
 
-----------------------------------------------------------------------------
 
    Calls To  |              Individual Time                |
 
  Self  Child | Pct    Total  Average  Minimum  Maximum  |  Function Name
 
------ ------  --- --------- --------- --------- ---------  ---------------
 
      1      1    50  2.02011  2.02011  2.02011  2.02011  [4] main()
 
      1      2    49  2.01504  2.01504  2.01504  2.01504  [2] MyThre..outine
 
      3      0    0  0.00054  0.00018  0.00009  0.00035  [0] MyRoutine()
 
      1      0    0  0.00001  0.00001  0.00001  0.00001  [3] _init()
 
      1      0    0  0.00001  0.00001  0.00001  0.00001  [5] _fini()
 
-----------------------------------------------------------------------------
 
    Calls To  |              Cumulative Time              |
 
  Self  Child | Pct    Total    Average  Minimum  Maximum |  Function Name
 
------ ------  --- --------- --------- --------- ---------  ---------------
 
      1      1    99  2.02046  2.02046  2.02046  2.02046  [4] main()
 
      1      2    99  2.01524  2.01524  2.01524  2.01524  [2] MyThre..outine
 
      3      0    0  0.00054  0.00018  0.00009  0.00035  [0] MyRoutine()
 
      1      0    0  0.00001  0.00001  0.00001  0.00001  [3] _init()
 
      1      0    0  0.00001  0.00001  0.00001  0.00001  [5] _fini()
 
-----------------------------------------------------------------------------
 
Total time spent in the timed functions: 4.035710
 
Total probe overhead for the timed functions: 0.000000
 
Total program time: 2.031339
 
Adjustment factors: call overhead = 0.000000; measurement overhead = 0.000000
 
 
Symbol Index Table: (maps the abbreviated names to full names)
 
--------------------------------------------------------------
 
    [0] MyRoutine()            -&gt;  "threads.c":"MyRoutine()"
 
    [2] MyThre..outine        -&gt;  extern:"MyThreadRoutine()"
 
    [5] _fini()                -&gt;  extern:"_fini()"
 
    [3] _init()                -&gt;  extern:"_init()"
 
    [4] main()                -&gt;  extern:"main()"
 
--------------------------------------------------------------
 
 
</nowiki>
 
 
==== Example D-8.  Profile Probe Output ====
 
 
== Statistical Profiling: statprof.ual ==
 
 
The statprof.ual predefined probe uses the Unix statistical profiling functions to provide a very low overhead mechanism to sample CPU timing. This is the underlying mechanism used when linking an application with the -p flag (or equivalent), the output of which is viewed using the prof tool. Since this is based on Unix services it is not available for the Windows 2000 version of Aprobe.
 
 
The probe works by using the Unix profil function. Normally this would be called from the startup and exit code of an application linked with -p; with Aprobe we call it on program entry and exit. A buffer is allocated and passed to the kernel with a "slot" (or cell) for every couple of instructions. When a kernel clock "ticks", if one of the threads for the application is running, the value in that slot is incremented.
 
 
When the application exits (or we take a snapshot), the memory buffer is saved to the APD file. At format time we convert each cell index to an address and use it to compute the time spent in each routine. Obviously, since the kernel is sampling the application at intervals, this is a statistical profiling method: Theoretically an application could spend 99% of it's time in one routine and jump to another routine at exactly the time we are sampling and then immediately return back again. Although this is theoretically possible, in practice it will never happen (given enough samples).
 
 
Statprof.ual has a number of advantages over using -p when building your application:
 
 
* You do not need to rebuild your application to use it!
 
* With -p you can only get information about the application module. While <br />-p does link in some special static libraries to replace the standard libs this is not a perfect solution. Statprof allows you to access any of your modules.
 
* You can take snapshots with staprof, control the cell size, display lines and generally control things to a much finer level.
 
 
One thing that statprof doesn't replicate from prof is the call counts facility since this is adequately covered by profile.ual.
 
 
Note that unlike profile, which measures wallclock time, statprof only measures CPU time. If the application is paused waiting on input, this time will not be seen. Sometimes you will be more interested in this than wallclock time, other times it will be the reverse. The [../examples/predefined_probes/statprof/README $APROBE/examples/learn/statprof] directory has a worked example where both statprof and profile are used as appropriate.
 
 
=== Usage ===
 
 
Using statprof is straightforward - there is no configuration file or GUI to worry about. All options can be set by command line options (either at runtime or format-time).
 
 
=== Statprof UAL parameters ===
 
 
'''statprof.ual''' is specified on the [aprobe-9.html#MARKER-9-1075 aprobe] command line or in an [aprobe-10.html#MARKER-9-1332 APO file] as described in [#MARKER-9-1941 "Command Line"]. The specific options are:
 
 
 
aprobe  -u statprof.ual [-p "[-c] [-h] [-l] [-s size] [-z]
 
    [module name]]" your_program
 
 
where:
 
 
; '''-c'''
 
: This performs a coarse profile to see overall usage in shared libraries. This uses a very large "cell" size which allows you to determine which shared library was active when the kernel tick occurred but not which routine.
 
; '''-l'''
 
: This displays a breakdown of time within a function by line number as well as by function. The bigger the cell size the less reliable it will be but this can give useful information.
 
; '''-o filename'''
 
: This outputs comma-delimited data to the given filename. This is suitable for input into a spreadsheet or similar program. Note that the line breaks are PC-based rather than Unix-based as this seems to be more generally accepted by programs we've tested with.
 
; '''-s'''
 
: This sets the number of bytes used for each sampling cell. It's unlikely that you would modify this but if you have a huge application and limited memory you may wish to do so. Basically the larger the cell size, the smaller the amount of storage required to hold the sampling table but it's more likely that the output data will be incorrect. By default two instructions are mapped to a cell which has nearly a 100% chance of both instructions belonging to the same routine due to the padding usually used by compilers. But if there are four instructions for the same cell, there's a chance that some of the instructions are for one function and some for another. However, when formatted they are treated as one address (obviously) so wrong information can result.
 
; '''-z'''
 
: This displays symbols or modules which have no associated CPU: Normally just the names of the routines (or modules) that have matches in the sampling table are displayed. This switch will also display the list of all other routines (or modules).
 
 
If you do not provide a module name, the application will be profiled.
 
 
At format time, the -l, -o and -z options may be provided if they were not specified at runtime.
 
 
=== Statprof API ===
 
 
Users can control the behavior of the statprof probe by calls from within their own probes. The API is defined by $APROBE/include/statprof.h. The functions exported by statprof.ual are:
 
 
; '''ap_Statprof_Enable'''
 
: <br />Enables collection of statprof data.
 
; '''ap_Statprof_Disable'''
 
: <br />Disables collection of statprof data.
 
; '''ap_Statprof_Snapshot'''
 
: <br />Takes a snapshot of current statprof data.
 
 
=== Statprof report ===
 
 
The reports generated by statprof differ depending on the options you choose. The following is a sample output using the default options:
 
 
<nowiki>
 
Snapshot #1 @ 11:26:39.952, Final Snapshot
 
------------------------------------------
 
Statistical Profiling (cell size 8) for module thisone, sorted by time:
 
  %Time  Seconds  Name
 
  ------  -------  -----------------------------------------------
 
  75.0    0.48    extern:"DoWork()"
 
  23.4    0.15    extern:"DoLessWork()"
 
    1.6    0.01    Other functions (not in profiled module)
 
 
</nowiki>
 
 
==== Example D-9. Statprof Report ====
 
 
== Trace Probe: trace.ual ==
 
 
The '''trace.ual''' predefined probe traces the execution of selected subprograms within any application program. This probe creates a report that lists which subprograms were invoked, in which order, and optionally the time when each was invoked. In addition, line tracing may be applied to selected subprograms to log the order in which any source lines within the subprograms were executed. The probe also traces the switching from one program thread to another.
 
 
'''Note'''<nowiki>: if you're using Aprobe as part of OC Systems' </nowiki>RootCause product, you should use RootCause to define your traces. It's ''much'' easier.''.''
 
 
As your application program is running, the trace data is collected and logged. It can be logged directly into a file (an APD file), or it can reside in memory in a circular buffer. Either way, you can log many millions of calls, and you can analyze the data after the application has completed using [aprobe-9.html#MARKER-9-1035 apformat]. This "trace report" is simply ASCII text and can be viewed with a text editor or processed with other text-manipulation tools.
 
 
The APD file size is limited only by your hard disk space. If the trace data resides in memory, then the oldest contents of that circular buffer are simply overwritten by the newest trace data each time the buffer fills up and wraps around; and of course, the entire contents can be saved into a file at any time by taking a snapshot.
 
 
Snapshots of the circular buffer's contents may be taken at any time while the program is running. A snapshot takes the current state of any logged trace data in the circular buffer in memory and writes it out to an APD file. Taking periodic snapshots at known events, like the entry or exit to a particular subprogram, allows you to compare and contrast the state data that was recorded at those various times, and allows you to save a historical log of what happened just prior to those moments. A final snapshot also automatically occurs at program termination, if the circular buffer is being used and still contains data. For more information, see [#MARKER-9-1970 "Snapshots"].
 
 
=== Usage ===
 
 
This probe is applied at run time using [aprobe-9.html#MARKER-9-1075 aprobe] as described under [#MARKER-9-2124 "Trace UAL Parameters"] below. The only functions that will be traced are those selected with the ''Trace'' or Snapshot keywords in the configuration file (see [#MARKER-9-1955 "Configuration File"] and [#MARKER-9-1961 "Configuration GUI"]). The ''Remove'' keyword allows you to remove a function from the set to be traced, and the ''Trigger'' keyword limits tracing to specific paths rooted at a specified function.
 
 
By default, trace information is written to an [aprobe-10.html#MARKER-9-1325 APD file]. The aprobe '''-if '''option may be used to send trace output directly to the screen as the program executes. The configuration keywords ''SaveTraceDataTo'' and ''Snapshot'' may be used to save trace data to a circular memory buffer instead, and write out its contents only at specified events to an APD file.
 
 
=== Trace UAL Parameters ===
 
 
'''trace.ual''' is specified on the [aprobe-9.html#MARKER-9-1075 aprobe] command line or in an [aprobe-10.html#MARKER-9-1332 APO file] as described in [#MARKER-9-1941 "Command Line"]. The specific options are:
 
 
 
aprobe -u '''trace.ual '''    [-p "[-h] [-v] [-g] [-t] [-l] [-c config_filename]"]
 
    your_program
 
 
where:
 
 
; '''-c ''config_filename'''''
 
: <br />specifies that the name of the probe configuration options file will follow immediately after -c. The default file name is your_program.trace.cfg. For example, if your executable program is called wilbur.exe, then the default file name would be wilbur.exe.trace.cfg.
 
; '''-g '''
 
: starts the Java configuration GUI at probe startup, before running your program.
 
; '''-h '''
 
: produces brief help text.
 
; '''-l '''
 
: records the source line numbers of lines that were executed in a probed subprogram invocation. This ''supersedes'' the configuration option ''LogLines'' ''FALSE'' that may appear in the configuration file. Note that only routines that have the Trace_Lines modified specified (see [#MARKER-9-2149 "TRACE_LINES"]) will have their lines logged.
 
; '''-t '''
 
: records a time-stamp on entry to and exit from each probed subprogram. This supersedes the configuration option ''LogTimes FALSE ''that may appear in the configuration file
 
; '''-v '''
 
: verbose mode, which produces additional progress messages.
 
 
=== Trace Configuration File ===
 
 
The Trace configuration file is used to specify what subprograms are to be analyzed, when snapshots are to be taken, and other options, as described in [#MARKER-9-1955 "Configuration File"]. The example below shows one possible Trace configuration file.
 
 
 
PROBE CONFIGURATION FILE FOR TRACE VERSION 2.1.0
 
 
// Boolean Options:
 
LogTimes TRUE
 
LogLines TRUE
 
TracingEnabledInitially TRUE
 
StartWithGUI FALSE
 
VisualHistogram FALSE
 
LogJavaClassLoads TRUE
 
 
// Numeric Options:
 
NumberOfTracedItems 10000
 
DefaultLevels 5
 
LoadSheddingThreshold 0
 
 
// Other Options:
 
SaveTraceDataTo APD_FILE
 
// SaveTraceDataTo CIRCULAR_BUFFER
 
 
// Here we select which subprograms we want to trace:
 
//
 
// Use 'Trigger' to identify the functions we're
 
// directly interested in, and Trace to identify what
 
// ''called'' functions should also be traced.
 
//
 
// We want to see OpenCallback and a few things it calls:
 
TRIGGER "OpenCallback()" LEVELS 20
 
 
// see who's calling open, read:
 
TRACE "open()" in "libc.so"
 
TRACE "read()" in "libc.so"
 
 
// see who's calling Motif library functions
 
TRACE "*" in "libXm.so" LEVELS 5
 
 
// For Java, specify "$java$" module.
 
// TRACE "MyClass::*" in "$java$"
 
 
 
==== Example D-10. trace.cfg File ====
 
 
==== Configuration Variables ====
 
 
The following are the only valid keywords that identify lines to set configuration variables. Each such line must begin with one of these keywords, and the keyword must be followed by its value. Nothing else is allowed on the same line.
 
 
==== LogLines ====
 
 
This must be followed by the value TRUE or FALSE. The default is TRUE. If this is TRUE then any TRIGGER or TRACE function which specifies TRACE_LINES will have lines traced. If this is false, no function's lines will be traced even if TRACE_LINES is specified for them..
 
 
==== LogTimes ====
 
 
This option is deprecated; time information is always recorded.
 
 
==== DefaultLevels ====
 
 
This must be followed by an unsigned integer value that specifies the maximum nesting level of any traced calls that will be logged as a result of tracing within calls to a TRACE function. The default is 5. Note that this value can be overidden for each TRIGGER function. This value used to be called MaxDepthOfTracedCalls which is still allowed but deprecated (for commonality with profile).
 
 
==== LoadSheddingThreshold ====
 
 
The trace probe provides automatic, load-based disabling of probes. This is called "load shedding". The LoadSheddingThreshold variable must be followed by an integer. The default is zero, indicating that load shedding is disabled. A nonzero number specifies an acceptable level of tracing overhead, which, when exceeded, will cause tracing of the most frequently-called functions to be disabled. Thus a lower number means lower overhead and more load-shedding.
 
 
==== LogJavaClassLoads ====
 
 
This must be followed by TRUE or FALSE. The default is TRUE. When TRUE, and the Java virtual machine is traced as part of the application, the initial load of each Java class is reported For example: <code>Java class loaded: "Pi"</code>.
 
 
==== NumberOfTracedItems ====
 
 
This must be followed by a non-zero unsigned integer value that specifies how many traced items (either calls or lines) will be logged at once in the circular buffer. The default is 10,000 and there is an upper limit of 1 million. This value indirectly specifies the amount of memory that will be reserved for data logging purposes, and therefore, it can adversely affect the total amount of memory left for the application program itself.
 
 
==== TracingEnabledInitially ====
 
 
This must be followed by the value TRUE or FALSE. The default is TRUE. The value TRUE indicates that data logging will begin as soon as the application program starts running. The value FALSE indicates that data logging will not begin until a call is made to the probe's function ap_EnableTracing or a TRIGGER function is encountered (rather than as soon as the application program starts running).
 
 
==== SaveTraceDataTo ====
 
 
This must be followed by a value that is one of APD_FILE or CIRCULAR_BUFFER. The default is APD_FILE. A circular buffer is in memory, not in a file, and it will only hold a finite number of traced items (calls), so after the buffer is full, any further data logged into it will wrap around and overwrite the oldest previously logged data. Thus, a circular buffer will always contain only the most recently logged data. Logging the data to an APD file instead would typically allow you to log a much larger amount of data, limited only by the space available on your file system, but with a slowdown in the execution of the traced program.
 
 
==== StartWithGUI ====
 
 
This must be followed by the value TRUE or FALSE. The default is FALSE. The value TRUE indicates that the configuration GUI should be started before the target program runs, even if -g wasn't specified on the command-line. A FALSE value is overridden by the '''-g''' command-line option.
 
 
==== VisualHistogram. ====
 
 
This must be followed by the value TRUE or FALSE. The default is FALSE. When set to TRUE this displays a simple Ascii histogram representation after each event (routine entry / exit, etc) of the time that has elapsed since the last event. For instance, **000oooo..... is 2.345 seconds - the scale is displayed as part of the output. This option is only meaningful if LogTimes is also TRUE.
 
 
==== Configuration of Traced Functions ====
 
 
By default, if no TRACE or TRIGGER lines exist, only the "main()" function of the application module will be traced.
 
 
==== TRACE ====
 
 
Each function to be traced must be specified explicitly using the TRACE keyword followed by the name of the function, as described in [#MARKER-9-1958 "Configuration of Selected Functions"]
 
 
==== TRIGGER ====
 
 
This keyword limits tracing to those functions called directly or indirectly by the specified functions. Entry to the trigger function will enable tracing and the corresponding exit will disable profiling. This is similar to the behavior of nested probes (see [aprobe-6.html#MARKER-9-574 "Nesting of Probes"]).
 
 
Note that TRIGGER ''does'' imply TRACE, so you needn't specify both.
 
 
==== REMOVE ====
 
 
The REMOVE keyword allows you to specify functions that should ''not'' be traced. This is useful when used in conjunction with a wildcard ("*"), to gather data about everything except certain routines. For example
 
 
 
TRACE "*" in "myModule"
 
REMOVE "Bad File.c" : "Bad_Function1" in "myModule"
 
REMOVE  "Bad File.c" : "Bad_Function2" in "myModule"
 
 
==== DoNotLoadShed ====
 
 
This keyword, followed by a function or method name specifies a function that should not be subject to load shedding regardless of how often it is called. See [#MARKER-9-2134 "LoadSheddingThreshold"].
 
 
==== '''LEVELS''' ====
 
 
If this special identifier is specified, it must be followed by an unsigned integer value. The default is DefaultLevels. It specifies the nesting level for which additional calls from the traced function will also be traced. A value of 0 means only the invocation of the function itself will be traced, but not any calls it makes. A value of 1 or more means that the invocation of the function itself will be traced, plus all other instrumented functions that it calls, either directly or indirectly, as long as the specified function is no deeper in the call stack than the specified value. (This used to be called MAX_DEPTH, which is still accepted but is deprecated.)
 
 
A non-zero LEVELS helps you to trace what the function was doing in addition to tracing the fact that it was called. For example, a value of 1 means that tracing will occur for the invocation of the function itself, plus, while still inside it, any other functions that it directly calls. A value of 2 means the same as a value of 1 except that we also trace 1 level deeper, thus including any further functions called directly from the other functions if they were called directly from the specified function.
 
 
The LEVELS has an effect only in a TRIGGER line, and is valid but simply ignored in a TRACE line. This is purely for convenience, to permit easy switching of the keyword at the beginning of the line without forcing changes to the rest of the line.
 
 
==== TRACE_LINES ====
 
 
This must be followed by TRUE or FALSE. If TRUE this TRIGGER or TRACE function will be instrumented for lines (if debug information is available for this routine) and the line trace will be displayed in the formatted data. Note that if the global LogLines option is set to FALSE (and -l is not specified on the command line), this value will be ignored.
 
 
==== PARAMETERS ====
 
 
This must be followed by TRUE or FALSE, and applies only to the special Java module name "$java$". If TRUE, all parameters of the matching Java method or methods will be logged.
 
 
==== Configuration of Trace Snapshots ====
 
 
==== SNAPSHOT ====
 
 
This keyword in the trace probe configuration file allows you to specify the name of some functions for which snapshots of the circular buffer's contents are to be automatically taken. Each SNAPSHOT line must specify a particular function in the usual manner, just as is done for a TRACE line. The remainder of the SNAPSHOT line contains pairs, where each pair has a special identifier keyword followed by its own associated value. These pairs give supplementary information about the snapshot.
 
 
==== ON ====
 
 
This optional special identifier must be followed by the value ENTRY or EXIT. These signify that the snapshot is to be taken on entry to the function, or upon exit from the function, respectively. The default is ON ENTRY.
 
 
==== IS ====
 
 
This optional special identifier must followed by an arbitrarily long string enclosed within "" quotation marks. It specifies a textual description or title that is to be logged along with the snapshot.
 
 
==== Enabling and Disabling Probes ====
 
 
==== ACTION ====
 
 
This keyword is used to introduce special actions on functions that were introduced to support RootCause. Note that these should not be used in conjunction with the TRIGGER and MAX_DEPTH keywords described above. The ACTION keyword is followed by a function name as for the TRACE keyword, or by one of the special names '''PROGRAM''' or '''THREAD'''. This is followed by a specification of where the action is to occur: ON ENTRY or ON EXIT. Last comes the action itself, which should be one of LOGGING ENABLE or LOGGING DISABLE. For example:
 
 
 
TracingEnabledInitially TRUE
 
TRACE "Pi::*" in "$java$" PARAMETERS TRUE
 
ACTION PROGRAM ON ENTRY LOGGING DISABLE
 
ACTION "Pi::calc_pi()" IN "$java$" ON ENTRY LOGGING ENABLE
 
ACTION "Pi::calc_pi()" IN "$java$" ON EXIT LOGGING DISABLE
 
 
 
Is roughly equivalent to
 
 
 
TracingEnabledInitially FALSE
 
TRACE "Pi::*" in "$java$" PARAMETERS TRUE
 
TRIGGER "Pi::calc_pi()" IN "$java$"
 
 
 
== Trace Configuration GUI ==
 
 
The configuration GUI (Graphical User Interface) for '''trace.ual''' is the "Trace Probe Configuration" dialog. This dialog allows you to set the options that control the trace probe and is generally as described in [#MARKER-9-1961 "Configuration GUI"].
 
 
'''Note'''<nowiki>: If you have RootCause, you should use the RootCause Console GUI to develop your trace. This interface is deprecated, is not applicable to Java applications, and does not provide access to all configuration options.</nowiki>
 
 
The ''Help'' button provides the following description:
 
 
This dialog allows you to set the options that control the trace probe,and to override any options from either the configuration file or thecommand-line. The default settings for the options will come from theconfiguration file, if one exists.
 
 
'Trigger Functions' lists the functions whose invocation can be used to control tracing. If trigger functions are listed then tracing will occur when in functions are called directly or indirectly from the trigger function. If no trigger functions are present then every invocation of traced functions will be traced.
 
 
If one clicks on the 'Advanced selections' button, then thelists for 'Traced Functions' and 'Snapshot Functions appear.
 
 
'Traced Functions' lists the functions whose invocation you want to trace.
 
 
'Circular Buffer Snapshot Functions' lists the functions whose invocation will cause the contents of the circular buffer to be logged to an APD file. This list is ignored when not using the circular buffer.
 
 
'Log mechanism' indicates whether you want the trace data to be logged directly to an APD file on disk or to a circular buffer that resides in memory instead. A circular buffer is useful when you expect potentially large volumes of data, but you are interested in capturing only the most recently traced function invocations that preceded some event. A snapshot is usually taken soon after that event to transfer trace data from the circular buffer to an APD file for later analysis. Typically, this means that you are willing to set aside a finite storage buffer in which trace data will be temporarily logged, and you are willing to discard any older trace data that may get overwritten when the circular buffer fills up and wraps around.
 
 
'Number of circular buffer entries' governs the size of the circular buffer and therefore the amount of memory that will be set aside for it.
 
 
'Default depth of trigger tracing' is the nesting level of calls made from a trigger function that will be traced for triggers that don't specify such a value.
 
 
'Log time stamps' indicates whether the date and time of each function invocation will be logged along with the trace of its entry and exit.
 
 
'Log lines executed' indicates whether the line number of each source line that is executed within a traced subprogram will be logged.
 
 
'Trace enabled initially' indicates whether the logging of trace data will begin immediately upon target program startup or deferred until after function ap_EnableTracing is called from some other probe.
 
 
'Start configuration GUI' causes this graphical user interface to appear every time this UAL is about to begin probing the program, whether or not the -g option was specified on the UAL command line.
 
 
'Save As' button saves the current options to a configuration file.
 
 
'Run (no save)' button runs the target program using the selected options.
 
 
'Save &amp; Run' button save the options to the configuration file, then runs the target program using the selected options.
 
 
'Abort' button will abort the target program instead.
 
 
=== Trace API ===
 
 
Users can control the behavior of the trace probe by calls from within their own probes. The API for the trace probe is defined by [../include/trace.h <code>$APROBE/include/trace.h</code>]. Some of the functions exported by <code>trace</code>.ual are:
 
 
==== ap_Trace_GetOptions ====
 
 
Get current set of options controlling probe.
 
 
==== ap_Trace_GetDefaultOptions ====
 
 
Get the default options applicable if no configuration file is found.
 
 
==== ap_Trace_SetOptions ====
 
 
Set current set of options to control probe.
 
 
==== ap_Trace_SaveOptions ====
 
 
Save the options to the specified file.
 
 
==== ap_Trace_ClearBuffer ====
 
 
; '''Deletes any circular buffer of logged trace data.'''
 
:
 
 
==== ap_Trace_DoSnapshot ====
 
 
Dumps circular buffer of trace data and resets. See [#MARKER-9-1970 "Snapshots"] for more information and an example.
 
 
=== Trace Performance Issues ===
 
 
See [#MARKER-9-1931 "Performance Issues"] for a general discussion of factors that affect performance.
 
 
The most important mechanism for controlling tracing overhead is '''load shedding'''. Unless you are tracing a small set of specific functions or methods, you should set [#MARKER-9-2134 LoadSheddingThreshold] to 10. If you discover that functions you are interested in are being load-shed, then you can prevent that by specifying <br />[#MARKER-9-2144 DoNotLoadShed] for such functions.
 
 
The trace probe is applied to the functions specified in the configuration file, so the run-time overhead is directly proportional to the number of calls to the functions selected. Note that a probe is executed for every function included by a TRACE or TRIGGER line (and not subsequently REMOVEd).
 
 
Tracing of each line, by setting LogLines to TRUE in the configuration file or GUI, greatly increases the overhead of the trace probe. Furthermore, the overhead of line probes are not counted when computing load-shedding overhead. It is highly recommended that you use the circular buffer if logging lines, to limit the total amount of data collected.
 
 
As with profiling, functions are instrumented in an extended manner that allows traced functions to be executed with very little overhead if there are no currently active triggers. See [#MARKER-9-2099 "Profile Performance Issues"] for more information on this.
 
 
=== Trace Report ===
 
 
The formatted output trace data will look similar to this sample:
 
 
 
Switching to thread main 1
 
 
[Enter : main
 
  [Enter : ofstream::ofstream(const char*,int,int)
 
  Leave] : ofstream::ofstream(const char*,int,int)
 
  [Enter : fstreambase::close()
 
  Leave] : fstreambase::close()
 
  [Enter : ofstream::open(const char*,int,int)
 
  Leave] : ofstream::open(const char*,int,int)
 
  [Enter : fstreambase::close()
 
  Leave] : fstreambase::close()
 
  [Enter : ofstream::~ofstream()
 
  Leave] : ofstream::~ofstream()
 
  [Enter : mystream::openlevel1()
 
    [Enter : mystream::openlevel2()
 
      [Enter : mystream::openlevel3()
 
4:[Enter : mystream::openlevel4()
 
5:  [Enter : mystream::openlevel5()
 
6:    [Enter : mystream::openlevel6()
 
7:[Enter : mystream::openlevel7()
 
8:  [Enter : mystream::openlevel8()
 
8:  Leave] : mystream::openlevel8()
 
7:Leave] : mystream::openlevel7()
 
6:    Leave] : mystream::openlevel6()
 
5:  Leave] : mystream::openlevel5()
 
4:Leave] : mystream::openlevel4()
 
      Leave] : mystream::openlevel3()
 
    Leave] : mystream::openlevel2()
 
  Leave] : mystream::openlevel1()
 
Leave] : main
 
[Enter : exit
 
  [Enter : __C_runtime_termination
 
  Leave] : __C_runtime_termination
 
 
==== Example D-11. Trace Report ====
 
 
== Memory Watch Probe: memwatch.ual ==
 
 
The '''memwatch.ual''' predefined probe monitors and gathers data about memory usage to help you detect heap memory leaks.
 
 
Heap memory leaks may be identified by tracking the invocations of all known heap manager functions that allocate and deallocate portions of the heap memory, and by matching each such heap memory allocation with its corresponding deallocation.
 
 
Snapshots may be taken at any time while the program is running to save the present state of the tracked heap data. This allows the allocation state data that was recorded at various times to be compared to each other and to give an indication of how much heap memory was consumed at those moments. Also, when associated with particular actions of the application program, snapshots can help locate heap usage problems.
 
 
The collected data, including any snapshots, are saved in an APD file so that they can be viewed using '''[aprobe-9.html#MARKER-9-1035 apformat]''' after the program has terminated. Reports are produced showing the time, size, and point of allocation for all items that were still allocated at the time of the snapshot.
 
 
=== Assumptions ===
 
 
This probe assumes that all requests for allocations and deallocations of dynamic storage are made through calls to discrete run-time heap manager functions (e.g., malloc, free, etc.).
 
 
There can be any number of these allocation or deallocation functions, and some functions (e.g., realloc) may do both allocations and deallocations during the same call. Additional heap manager functions may be specified in addition to the default ones by adding your own probes to those provided by memwatch.ual.
 
 
=== Background ===
 
 
Typically, the heap is a large repository of unused memory available for dynamic storage that is controlled by a heap manager. All requests for more memory (especially for dynamically sized objects) needed by a running application program are made to the heap manager. The heap manager carves the heap into smaller portions and allocates those portions upon request. In a well behaved application program, when those allocated portions of heap memory are no longer needed, they are deallocated by returning them to the heap manager for later reuse.
 
 
A heap memory leak is an unused and possibly inaccessible portion of heap memory that was allocated but was not subsequently deallocated. If an allocated portion of heap is no longer being used, such as when the only pointer to it goes out of scope or is overwritten, then that portion has probably "leaked." Such heap leaks gradually erode the available heap memory, which may lead to disastrous results when memory runs out.
 
 
A true "heap leak" is hard to detect without language and compiler support, because often a program allocates large amounts of data and intentionally keeps it around until program completion. Without tracking every variable to which a heap address is assigned, it is impossible to know when all pointers to a given location are lost. And even if this were possible, there may be other cases where memory should be freed even though it is still potentially accessible.
 
 
The memwatch probe helps the user identify heap leaks by recording all allocations and deallocations, and keeping a running total of total allocated memory, and allowing the user to analyze this data to determine if memory usage is appropriate. The reports produced by memwatch are designed to help identify potential leaks by grouping allocations by size, age, and point of allocation.
 
 
=== Usage ===
 
 
This probe is applied at run time using [aprobe-9.html#MARKER-9-1075 aprobe] as described in [#MARKER-9-2178 "Memwatch UAL Parameters"] below. The configuration file (see [#MARKER-9-2182 "Memwatch Configuration File"] controls the amount and kinds of data collected by this probe.
 
 
While most of the probes described in this Appendix provide a point-and-click interface for setting configuration options, the memwatch probe currently does not. The default configuration is generally acceptable, and the heap manager functions to probe are hard-coded into the probe itself.
 
 
However, there is a GUI: When the program starts, graphs are displayed which show statistics on memory usage, and allow interactive recording of snapshots showing what data is allocated at that point. This graph is described in detail in [#MARKER-9-2192 "Memory Usage Monitor"]
 
 
=== Memwatch UAL Parameters ===
 
 
'''memwatch.ual''' is specified on the [aprobe-9.html#MARKER-9-1075 aprobe] command line or in an [aprobe-10.html#MARKER-9-1332 APO file] as described in [#MARKER-9-1941 "Command Line"]. The specific options are:
 
 
 
aprobe -u '''memwatch.ual '''    [-p "[-h] [-v] [-g]  [-c config_filename]"]
 
    your_program
 
 
where:
 
 
; '''-c ''config_filename'''''
 
: <br />specifies that the name of the probe ''configuration file'' will follow immediately after -c. The default file name is your_program.memwatch.cfg, where your_program is replaced with the name of your executable program. For example, if your executable program is called wilbur.exe, then the default file name would be wilbur.exe.memwatch.cfg.
 
; '''-g '''
 
: starts the Java graphical user interface (GUI) dialog upon probe startup, before running your program. This GUI shows runtime memory usage.
 
; '''-h '''
 
: produces brief help text.
 
; '''-v '''
 
: means verbose mode, which produces additional progress messages.
 
 
=== Memwatch Configuration File ===
 
 
The example below shows the default memwatch configuration file.
 
 
 
PROBE CONFIGURATION FILE FOR MEMWATCH VERSION 1.0.0
 
 
// StartGui can be TRUE (to display the graphical display of
 
// memory usage during runtime) or FALSE (to turn the display
 
// off)
 
'''StartGui FALSE '''
 
// DepthOfCallChain is a number between 0 and 99 that
 
// specifies how far back up the call chain we will look to
 
// distinguish between different allocation points
 
'''DepthOfCallChain 9 '''
 
// IndexCallChains can be TRUE (to display an ID instead of
 
// a traceback with the mapping between IDs and tracebacks
 
// being displayed in a separate table)or FALSE (to display
 
// the full tracebacks each time one is encountered)
 
'''IndexCallChains TRUE '''
 
// DisplayReports can take one or both of the following
 
// options:
 
//  AgeReport:      Displays a report of each outstanding
 
//        allocation in a snapshot, starting with the
 
//        oldest
 
//  SizeReport:    Displays a list of each allocation point, with
 
//        the total number of allocations (and size)
 
//        outstanding for that given allocation point.
 
//        The list is sorted by size (largest first)
 
// e.g. DisplayReports AgeReport
 
//      DisplayReports SizeReport AgeReport
 
// (Note that you can use DisplayReports NoReport to just get
 
// summary information)
 
'''DisplayReports AgeReport SizeReport '''
 
// Snapshots can be specified using the snapshot keyword with
 
// the following syntax (multiple entries are
 
// allowed):
 
// SNAPSHOT extern:"MyFunction()" On Exit Is "Snapshot name"
 
// or
 
// SNAPSHOT extern:"Another()" On Entry Is "Another name"
 
 
// Normally all allocations are recorded by the memwatch
 
// predefined probe.By specifying a filter, only allocations
 
// whose call chain matches the filter will be recorded.
 
// Multiple filters will allow additional points to
 
// be included. The filters are easily obtained by cutting and
 
// pasting the output from a traceback into this file; note
 
// that the "==&gt;" parts of the traceback are necessary for
 
// parsing of the filter. The following example shows the
 
// correct syntax:
 
//    Filter extern:"malloc()" in "libc.so"
 
//      ==&gt; extern:"calloc()"  0x0044 in "libc.so"
 
//      ==&gt; extern:"getcwd()"  0x0130 in "libc.so"
 
//      ==&gt; extern:"::getCurDir(void)" at line 86 (t.cc)
 
//      ==&gt; extern:"main()" at line 348 (xpdf.cc)
 
//      ==&gt; extern:"_start()"  0x00dc
 
 
==== Example D-12. memwatch.cfg file ====
 
 
==== Configuration Variables ====
 
 
==== '''DepthOfCallChain''' ====
 
 
This must be followed by an unsigned integer value that specifies how far back up the call chain we will look to distinguish between different allocation points. The value must be within the range of 0 to 99. The default is 9.
 
 
==== IndexCallChains. ====
 
 
This must be followed by the value TRUE or FALSE. The default is TRUE. When this is set to TRUE, each unique traceback is denoted by a unique identification number, and the formatted report tables refer to this ID number rather than the entire traceback. This makes the tables easier to read. A separate list will be reported to show each traceback and its ID number. When this is set to FALSE, the full tracebacks are displayed in the formatted output.
 
 
==== DisplayReports. ====
 
 
This may be followed by any one or more of the following values: NoREPORT, AgeREPORT, or SizeREPORT. The default is both AgeREPORT and SizeREPORT. AgeREPORT causes the formatted output to contain a report showing the outstanding allocations sorted by age, starting with the oldest. SizeREPORT causes the formatted output to contain a report showing the outstanding allocations sorted by size, starting with the biggest. If neither value is specified, then no report is omitted. NoReport is just a placeholder, it does nothing except to specify that you want at least the summary report.
 
 
==== StartGUI ====
 
 
This must be followed by the value TRUE or FALSE. The default is FALSE. The value TRUE indicates that the heap allocation graphs should be shown when the target program runs, even if -g wasn't specified on the command-line. A FALSE value is overridden by the '''-g''' command-line option.
 
 
==== Configuration of Filters ====
 
 
==== FILTER ====
 
 
This must be followed by all the lines of a traceback. Normally, a traceback consists of multiple lines: the first line has the name of a called function, and each subsequent line begins with an arrow and names the immediate caller of the previous line's function. By default, all allocations are recorded by the memwatch predefined probe. But a filter allows you to limit the memwatch probe's recording to only those allocations whose call chain matches all the lines in the filter.
 
 
You may specify multiple filters at once. The traceback you need to specify for each filter can be easily obtained by cut-and-paste from a previously formatted output.
 
 
Notice that the traceback almost always occupies several lines, and that the "==&gt;" parts of the traceback are necessary at the beginning of each additional line. The following example shows the correct syntax:
 
 
FILTER extern:"malloc()" in "libc.so"<br /> ==&gt; extern:"calloc()" 0x0044 in "libc.so"<br /> ==&gt; extern:"getcwd()" 0x0130 in "libc.so"<br /> ==&gt; extern:"::getCurDir(void)" at line 86 (t.cc)<br /> ==&gt; extern:"main()" at line 348 (xpdf.cc)<br /> ==&gt; extern:"_start()" 0x00dc<br />
 
 
==== Configuration of Snapshots ====
 
 
The memwatch probe configuration file allows you to specify the name of some functions for which snapshots are to be automatically taken. This is done with lines beginning with the keyword SNAPSHOT.
 
 
Each SNAPSHOT line must specify a particular function as described above. The remainder of the SNAPSHOT line contains pairs, where each pair has a special identifier keyword followed by its own associated value. These pairs give supplementary information about the snapshot.
 
 
ON - This optional special identifier must be followed by the value ENTRY or EXIT. These signify that the snapshot is to be taken, respectively, on entry to the function, or upon exit from the function. The default is ON ENTRY.
 
 
IS - This optional special identifier must be followed by an arbitrarily long string enclosed within "" quotation marks. It specifies a textual description that is to be logged with the snapshot.
 
 
=== Memory Usage Monitor ===
 
 
When '''memwatch.ual''' is invoked with the UAL '''-g''' parameter, the ''Aprobe Memwatch Probe'' GUI window comes up. By default, this GUI shows two graphs providing information about the amount and rate of heap memory allocations.
 
 
In addition to the graphs, there are buttons to control the graphs, and there is a button that allows a snapshot of the current allocation data to be taken interactively.
 
 
The graphs display a record of the actual heap activity. The top graph displays the size (number of bytes) of outstanding heap memory allocated over time. Outstanding means that the heap memory is still allocated and has not yet been deallocated. The bottom graph displays the number of allocations that took place during each time interval. The time interval length (in seconds) is shown below.
 
 
You can manually zoom in to a selected portion of each graph by dragging the mouse while holding down the CONTROL key on the keyboard. Similarly, you can shift each graph horizontally by dragging the mouse sideways while holding down the SHIFT key on the keyboard. The ''ResumeUpdates'' button will restore the graphs back to their normal scale and positions, and resume updating them.
 
 
Use the ''Snapshot'' button while the target program is running to take snapshots of the current state of recorded heap allocations.
 
 
Use the ''Close'' button to close the probe and its GUI.
 
 
The graphs in this runtime GUI show actual heap memory usage. These graphs are updated periodically, at user-specified intervals. Many allocations and deallocations usually occur within each interval, between updates, so the range of heap sizes is shown for each interval. The HighWaterMark shows the highest value of heap size that was every recorded.
 
 
The heap allocation and deallocation events can be examined in more detail, later, by formatting the recorded data to produce reports. The formatted reports will list the data, sorted by age, size, or both. Each snapshot will be reported separately, and each will list all outstanding allocations (allocations which have not be deallocated yet) and all new deallocations since the previous (if any) snapshot was taken.
 
 
=== Memwatch API ===
 
 
You can control the behavior of the memwatch probe by calls from within their own probes. The API for the memwatch probe is defined by [../include/memwatch.h <code>$APROBE/include/memwatch.h</code>]. Some of the functions exported by <code>memwatch.ual</code> are:
 
 
; '''ap_Memwatch_Allocation'''
 
: <br />Record that a heap allocation event has occurred.
 
; '''ap_Memwatch_Deallocation<br />Record that a heap deallocation event has occurred.'''
 
:
 
;
 
:
 
; '''ap_Memwatch_Reallocation'''
 
: <br />Record that a heap reallocation event has occurred (which both a deallocation and an allocation event at once).
 
; '''ap_Memwatch_DoSnapshot'''
 
: <br />Takes a snapshot of current heap allocations.
 
 
=== Memwatch Performance Issues ===
 
 
The additional execution time caused by the memwatch probe is small (except for snapshot overhead, discussed below). This is because the memwatch probe only instruments a few specific functions, and the probe is tiny compared to the functions themselves (which are usually very computationally intensive).
 
 
The memwatch data requires quite a bit of memory. The amount of memory required is proportional to the number of unique tracebacks found among the allocation and deallocation events. This will reduce the memory available to your application program. So, if your application program is close to the process or system memory limit, this could cause its allocations to fail, which could even kill the application.
 
 
All the memwatch data that was collected is logged to an APD file when a snapshot is taken. A snapshot is taken either by interactively clicking the GUI button, by calling <code>ap_Memwatch_DoSnapShot()</code>, or by default at program exit. You can see that each snapshot may consume many megabytes, and it can take some time for the probe to write all this data to APD files on disk. The probe shares the same process as your application program, so we suggest that you take a snapshot only if the delay caused by writing out this data will not change your program's behavior.
 
  
 
== Info Probe: info.ual ==
 
== Info Probe: info.ual ==
Line 1,624: Line 451:
  
 
Copyright 2006-2017 OC Systems, Inc.
 
Copyright 2006-2017 OC Systems, Inc.
 +
 +
[[Category:AUG]]
 +
[[Category:Predfined Probe]]
  
 
</div>
 
</div>

Revision as of 20:34, 5 May 2017

Next Previous Top Contents [Index

Aprobe User Guide

Predefined Probes and Libraries Reference


Overview

The Aprobe product includes ready-to-use UALs built from probes written by OC Systems. We call these "predefined probes". These may be referenced by name on the aprobe command line and applied to any application, without writing any APC or using the apc command.

This appendix provides more in-depth information about how the major predefined probes work and how to use them.

coverage.ual

records the execution of each known source line in the specified functions of the executable program and produces a report indicating which lines in each function were not executed. See coverage.ual.

events.ual

records the start and end times of program functions and user-defined "events", organizes them by call tree, and provides several reports which help to analyze the performance of an application. This probe supports Java as well as native programs. See events.ual.

profile.ual

monitors the selected function calls in the executable program and displays execution time statistics about those functions to help indicate where and when the program is spending its wallclock time. See profile.ual.

statprof.ual

uses the Unix kernel sampling mechanism for performance analysis (as used by prof) to provide statistical information about where the application is spending its CPU time. See statprof.ual

trace.ual

traces the execution of the selected functions within an application program to record when, and in which order, and by which thread each was invoked. This probe supports Java as well as native programs. See trace.ual. The trace.ual predefined probe is used in the on-line example $APROBE/examples/evaluate/4.predefined. Note that direct use of this probe, especially using the GUI to define what to trace, has been superceded by the RootCause interface.

memstat.ual

reports heap memory allocation patterns and leaks using statistical sampling. See memstat.ual.

memwatch.ual

reports heap memory allocation patterns and leaks. See memwatch.ual.

info.ual

prints information about the executable program and/or an APD file. It can be used with either [aprobe-9.html#MARKER-9-1075 aprobe] or [aprobe-9.html#MARKER-9-1035 apformat] to display the symbol names within the executable and its shared libraries, or the UALs and threads recorded in an APD file. This probe is used to implement the [aprobe-9.html#MARKER-9-1052 apinfo] and [aprobe-9.html#MARKER-9-1110 apsymbols] commands. See [#MARKER-9-2205 "Info Probe: info.ual"].

Lastly there is

quick_gui.ual

a library of functions to provide simple Java GUI support to a user-written probe. The interface is defined in $APROBE/include/quick_gui.h, and includes histogram and plot graph support, as well pop-up Yes/No and message dialogs. Use of this library is illustrated by the $APROBE/examples/learn/visualize_data example. See [#MARKER-9-2220 "quick_gui Library"].

In most of this appendix, we'll refer to these by their simple filename: coverage, events, profile, trace, memwatch, etc.

How They Work

The predefined probes are written using the features and interfaces described in this user's guide, and could have been written by a user. As such, they demonstrate the power and flexibility of Aprobe. But make no mistake--they are complex, and are not recommended as models or examples for learning Aprobe. Nonetheless, it is useful to understand what happens when you apply one of these probes, particularly as it relates to performance of your application and your choice of options.

When aprobe is invoked on an application, it loads the executable, the libraries it references, and the specified UALs and reads in all the symbol table information. The aprobe runtime applies all the statically defined probes described in the UALs, then invokes any probe program on_entry actions. It is in such a probe that all the start-up actions for a predefined probe occur: UAL command-line arguments are read; the configuration file is read and recorded; and, for those probes that support it, the Java GUI process is started if requested. Using the GUI, the user may request a list of all the symbols in a module from which to select those to be probed, and specify other options.

When the final configuration is set and execution is continued, the probes on the functions specified in the configuration file are applied. When all probes have been applied successfully, and any other probe program and probe thread on_entry actions have been executed, the application program continues its normal execution.

As the application executes, the probes are invoked and collect or update the data according to their function. If snapshots are used, the data is written to the APD file at that point -- otherwise the data is written out when execution finishes.

When execution is completed, the APD files created by the probe contain the data collected. Using apformat on these produces the data collected which of course varies with each probe.

Performance Issues

As the application executes, there may be thousands of different probes, executed millions of times, all collecting data. The memwatch probe will only probe a few system functions that manipulate heap memory, but these may be invoked thousands of times. The coverage, profile, and trace probes are invoked for every function (and perhaps every line of every function) that was specified in the probe's configuration file. If data is logged directly to an APD file on disk (as the trace probe can do), this may be millions of bytes of I/O being written. If data is collected in memory, as is done by the memwatch probe, a long-running or large program may require a lot of virtual memory.

The areas where performance of a probe is noticed are:

  1. Aprobe (and apformat) start-up, when the symbol tables are read from the target application and its libraries;
  2. The Configuration GUI, when lists of symbols must be built for display and manipulated by the interface;
  3. Instrumentation, when probes are patched into the entry, exit, and perhaps each line of the user-selected list of functions.
  4. Execution, when the probes are invoked and executed as an extension to the user's program;
  5. Data collection, when the probes record their data in memory, and perhaps write it to disk; and
  6. Data formatting, when the data that has been logged is analyzed.

The perceived speed of items (1) and (2) is directly related to the size of the application, or more specifically, the number of symbols in the application. The instrumentation time (3) is directly proportional to the number of probes being applied, which for the coverage, profile, and trace probes means the number of functions selected in the configuration file or GUI. The instrumentation time for the memwatch and statprof probes is constant and fast. The execution overhead (4) of the probes once they're started depends on the number of probes actually invoked, which is a product of the number of functions selected by the user and the number of times each such function is called. The data collection and formatting times (5, 6) vary greatly for each probe, and are described in the Performance Issues section for each probe. In brief, however:

  • The memwatch probe records data for each memory allocation, proportional to the amount of memory allocated by the application under test.
  • The coverage probe simply increments an integer for each line executed, so the amount of data is a constant multiple of the number of lines being covered.
  • The profile probe simply adds to a total execution time kept for each function, so the data is proportional to the number of functions being profiled.
  • The trace probe may be very data intensive at run-time unless load-shedding is enabled, or the fixed-sized circular buffer log method is specified, since every entry and exit (and optionally every line) records many bytes of data.
  • The events probe is similar to the trace probe in overhead. In fact trace may be used in conjunction with events in order to provide load-shedding, even if nothing was traced.

Common Interfaces

The behavior of a predefined probe may be controlled in a number of ways:

  • by passing parameters to the UAL on the aprobe command line;
  • by editing a text "configuration file"
  • by interacting with a Java Graphical User Interface (GUI) controlled by the probe;
  • by providing your own probe which explicitly calls functions exported from the predefined probe.

These mechanisms are much the same for the dynamic analysis probes above.The info.ual predefined probe is a utility for which all behavior is specified on the command-line; and quick_gui.ual is a library, so all interaction with it is by function calls from within a user-written probe. The statprof.ual predefined is another simple probe that does not require a configuration file or GUI to customize its behavior.


Command Line

In the on-line "4.predefined" example, a command line like the following is used:


aprobe -u trace.ual -p "-g -v" main

The trace.ual file is actually located in the directory $APROBE/ual_lib, but this directory searched by default for any UAL not found in the current working directory, so you only need to provide the base name of the UAL file.

The -p option which follows trace.ual indicates that the next argument is a string of options to be passed to the UAL. In this case, those options are -g, which indicates that the Java Graphical User Interface (GUI) is to be started; and -v, which causes "verbose" informative output to be generated by the probe. Every predefined probe accepts the option -h, which describes the full set of command-line options available to that probe. So, to display the full set of options for trace.ual, execute:


aprobe -u trace.ual -p -h main

Predefined probes are generally initiated by invoking [aprobe-9.html#MARKER-9-1075 aprobe] explicitly. However, they may also be invoked implicitly by linking your application with libdal.so or using the run_with_aprobe_apo script and specifying the command-line options in a .apo file as described in [aprobe-7.html#MARKER-9-868 Chapter 4, "Loading Probes Without aprobe"].

Configuration File

The configuration file is used to designate which functions in the target application are to be analyzed, and to specify other options. The easiest way to get an initial configuration file is simply to invoke the predefined probe with no options. You'll get a warning about not finding a configuration file, but one will be created.

You can use the [#MARKER-9-1961 Configuration GUI] to create or change the configuration file by specifying the -g option. This provides a point-and-click interface in which to choose your functions and other options and save them in a configuration file. (Note that this GUI can be quite slow for very large applications, so editing the file directly may be more practical in such cases.)

The default name for the probe configuration file is your_program.probe_name.cfg, where your_program stands for the name of your executable application program, and probe_name is the basename of the predefined probe, e.g., trace. For example, if your program is called wilbur.exe and you're using trace.ual, then the default configuration file name would be wilbur.exe.trace.cfg.

The format of the a probe configuration file is:


PROBE CONFIGURATION FILE FOR probe_name VERSION 2.0.0
keyword data
...
keyword data
// This is a comment line

A warning will be given if the very first line of this file isn't valid, but an attempt will be made to use the information provided. The only allowed variations on this header line are the number of blank spaces between the words, and the case of each character.

All keywords in the configuration file are case-insensitive. However, all function or module names are always case-sensitive.

Each line begins with a keyword, or is a continuation line. A line is a continuation line if the previous line ends in a backslash ("\"). The continuation character is treated as a word separator, like a blank.

Comment lines can be interspersed with any other lines. Comment lines are those lines whose first 2 non-blank characters are "//". Comment lines are ignored by probe and are intended only for the benefit of the human reader.

Any invalid line is ignored. An invalid line is one which contains an invalid keyword or value, or has a missing or extraneous word.

Configuration of Selected Functions

The coverage, profile, and trace predefined probes operate by probing functions in the user's application, so the names of those functions must be specified in the configuration file. While each predefined probe uses different keywords to designate which functions to probe, the syntax of the function name itself is the same in all cases. It is exactly the same as the syntax used for a probe itself, as described in [aprobe-6.html#MARKER-9-497 "Specifying Function Names"] and repeated here:


[ [file_name | extern] : ] function_name [ in module_name ]

file_name, function_name, and module_name are all strings that are bounded by quotation marks (that is, ""), which allows each name to contain embedded blank space characters. The file_name and the module_name are optional. If module_name is omitted, the application module is assumed, and the special character "*" can be used to designated every module. The file_name is required only for static (file-local) functions. A colon must exist between the file_name and the function_name. The colon may be surrounded by optional whitespace, and the keyword IN must always be surrounded by whitespace. Examples are:


"my_file":"my_function"
extern:"malloc()" in "libc.so"

In the configuration file for trace.ual, the keyword indicating that a function is to be traced is "Trace", where the cases of the letters is not significant. In the main.trace.cfg file, then should appear one or more lines that look like


Trace "my_file":"my_function"

Predefined probes may have other keywords, such as Trigger, Remove and Snapshot, which are followed by function names. These are described in their corresponding section below. And, note that memwatch.ual does not require specifying application functions this way -- see [#MARKER-9-2173 "Memory Watch Probe: memwatch.ual"].

Configuration GUI

Some, but not all, predefined probes offer a Configuration GUI (Graphical User Interface). The coverage, profile, and trace predefined probes do offer it, but not events, memwatch, nor info

If the -g option is specified to the UAL, the probe starts a Java process to run the Configuration GUI. This provides a point-and-click interface for selecting functions and setting other options which are recorded in the [#MARKER-9-1955 Configuration File]. This interface differs for each probe, but generally consists of several panes in which functions and option settings are selected. At the bottom are Save & Run , Run, Save As, Abort, and Help buttons. Run proceeds with the execution of the program under aprobe with the current configuration; Save updates the configuration file on disk with the current settings; Abort quits the GUI and aborts the aprobe and target application processes; and Help brings up a dialog describing each pane and the other buttons.

Function Selection Interface

The Configuration GUI can be useful for selecting the functions that are to be operated upon by the probe. The "4.predefined" example illustrates this for the trace probe, and this process generally applies to the other probes as well:

Click on the "Pick..." button under the top pane (for Trace, the top pane is labeled as "Traced Functions"). You'll be presented with a list of modules (the executable plus any shared libraries it loads). When you double-click on one of those, it will be replaced with a dialog that lists all the functions in that module. Those on the left have been selected, those on the right have not. Double-clicking on a function name moves it to the other column. When the functions you want to probe from the selected module are listed on the left, click Ok, and list in the configuration gui itself will be updated.

Note: this process can be very slow for large applications with many thousands of symbols (see [#MARKER-9-1931 "Performance Issues"]). In such cases it may be easier to use [aprobe-9.html#MARKER-9-1016 apcgen] or [aprobe-9.html#MARKER-9-1052 apinfo] to generate symbol lists, and edit those lists by preceding them with the keyword. For example:


apsymbols main | sed "s/^/Trace / " >> main.trace.cfg

Note that for memwatch.ual, selecting individual functions isn't necessary. Instead the MemWatch probe displays graphs at run time showing memory usage patterns, and allowing interactive snapshots.

Callable API

You can invoke functions defined by a predefined probe from within your own probes. The interface is defined by the header file in the [../include/ $APROBE/include] directory whose name corresponds to the predefined probe; for example, trace.h is the interface for trace.ual.

The most useful functions are generally at the end of the file: those which enable and disable the probes, and those which log a "snapshot" of the data currently selected. See the comments in each file for details.

Note that there is no info.h file -- its operation is entirely determined by its command-line parameters; and the quick_gui.h file is the only interface to quick_gui.ual, since it defines no probes.

Snapshots

Trace, profile, coverage, statprof and memwatch all provide the ability to take a "snapshot" of the data at a point during a program's execution. A snapshot is simply a dump of the data collected in memory since the start of execution or the previous snapshot. The coverage, memwatch, and trace probes support the keyword SNAPSHOT in the configuration file, with which a function name is specified. For example, in your main.trace.cfg file you would do:


Snapshot extern:"LOCAL_PROC()" On Exit

This is translated into a probe which calls a function to take the snapshot. You can include this in your own probe--call it mytrace.apc--as follows:

 #include "trace.h"
 probe thread {
   probe extern:"LOCAL_PROC()" {
     on_exit
       ap_Trace_DoSnapshot("after LOCAL_PROC");
   }
 }
 Then, compile this probe, using trace.ual as a library:

apc mytrace.apc trace.ual -o mytrace.ual

and then use  mytrace.ual instead of trace.ual to gather data:

aprobe -u mytrace.ual main

Example D-1. Calling ap_Trace_DoSnapshot()

Info Probe: info.ual

The info.ual predefined probe provides the user with the general information available about the program. It can be used on an executable before the program is run, or with apformat to see additional information about the program after it was run.

The most common use of this probe is to get the names of the target program's entry points for which one can define a probe. Without using this probe, it is sometimes difficult to come up with the exact name of the function, that would be understood by Aprobe. Using this probe ensures that the one refers to target functions by the same name as does Aprobe.

Using info.ual directly with aprobe is made unnecessary by the apinfo and apsymbols commands (see [aprobe-9.html#MARKER-9-1052 "apinfo"] and [aprobe-9.html#MARKER-9-1110 "apsymbols"]). These are scripts which pass invoke aprobe using info.ual and simplify the handling of the parameters and other files.

Another important use of this probe is at format time to get the names of all the [aprobe-10.html#MARKER-9-1336 UAL file]s that were used to collect the data at runtime. This is especially useful in lab environments where the multiple users may be collecting different data using different UALs on the same executable at the same time.

Usage

info.ual is specified on the [aprobe-9.html#MARKER-9-1075 aprobe] or [aprobe-9.html#MARKER-9-1035 apformat] command line. The specific options are:


aprobe -u info.ual [-p " param_list "] executable apformat -u info.ual [-p  " param_list " ] apd_file

where param_list may include:

-a
Prints all information.
-d
Print data symbols from application module.
-da
Print data symbols from all modules.
-h
Prints options.
-l
Prints instrumentable lines for each function symbol shown.
-m
Prints the list of modules and their checksums.
-s
Prints function names from the application module.
-sa
Prints function names from all modules.
-t
Prints the list of threads (format-time only)
-u
Prints the list of UALs used at runtime (format-time only)
'-x'
Indicate functions that are not instrumentable.
'-xb'
Exclude listing functions that are not instrumentable.
'-xg'
Exclude listing functions that are instrumentable.

For example, the command


  aprobe -u info.ual -p "-s" a.out

Provides the same output as the command "apsymbols a.out".

If one doesn't know what UALs were used in data collection at runtime, he can get this information with the following command:


  apformat -u info.ual -p "-u" a.apd

quick_gui Library

quick_gui.ual is a library of functions to provide simple Java GUI support to a user-written probe. The interface is defined in [../include/quick_gui.h $APROBE/include/quick_gui.h], and includes histogram and plot graph support, as well pop-up Yes/No and message dialogs. Use of this library is illustrated by the [../examples/learn/visualize_data/README $APROBE/examples/learn/visualize_data] example.

The contents of quick_gui.h is reproduced here with some additional comments and an example.

Histogram

For each monitored variable in your program, call ap_CreateHistogram() to allocate an object that will keep track of its values' frequency distribution.


typedef struct _ap_HistogramObject_ *ap_HistogramObjectPtrT;
 
 ap_HistogramObjectPtrT ap_CreateHistogram(
      ap_NameT Title,
      int      Granularity);

void ap_UpdateHistogram(
      ap_HistogramObjectPtrT Histogram,
      int                    NewValue);

Call ap_UpdateHistogram() with the current value of the monitored variable.This can be done periodically, or every time the variable changes.

ap_UpdateHistogramValues() will add a new value to the histogram, but will not redraw it. Use it when you want to display a large number of updates in one batch operation.


void ap_UpdateHistogramValues(
      ap_HistogramObjectPtrT Histogram,
      int                    NewValue);

Call ap_DisplayHistogram() to redraw the histogram with the current values. This can be done periodically (using ap_DoPeriodically()), while ap_UpdateHistogramValues() could be called for each new value of the monitored variable.


void ap_DisplayHistogram(
      ap_HistogramObjectPtrT Histogram);

XY Graph

This is a traditional XY graph, such as used by the Heap Probe. To use it, you create a graph object with a few basic attributes, then send points to be plotted on the graph.

Call ap_CreateXYGraph() to creates a new empty XY graph object. Provide the title and the labels for the axes. \

NumberOfPoints indicates the maximum number of pairs kept in the graph, before the data scroll off the left side of the graph. Give zero to keep all the data points and compress the X axis instead of scrolling.


typedef struct _ap_XYGraphObject_ *ap_XYGraphObjectPtrT;
 
 ap_XYGraphObjectPtrT ap_CreateXYGraph(
      ap_NameT Title,
      ap_NameT X_AxisName,
      ap_NameT Y_AxisName,
      int      MaxNumberOfPoints);

Call ap_PlotGraphUpdate() with the current X and Y coordinates.


void ap_UpdateXYGraph(
   ap_XYGraphObjectPtrT Graph,
   double               X,
   double               Y);

Yes-No Dialog

ap_AskYesOrNo() is a simple call you can make to prompt the user before taking some action. Returns TRUE if Yes, FALSE otherwise.


ap_BooleanT ap_AskYesOrNo(
   ap_NameT QuestionToAsk,
   ap_NameT Title,
   ap_NameT HelpText);

Message Dialog

ap_DisplayMessageAndWait() simply pops up a dialog containing the message provided, and waits for the user to click OK.


void ap_DisplayMessageAndWait(
   ap_NameT Title,
   ap_NameT Message,
   ap_NameT HelpText);

Text Input Dialog

Call ap_InputText() to prompt the user for a text string, such as a name or value, and wait for the user to click OK or Cancel. Returns TRUE and the InputTextBuffer if the user chose OK; returns FALSE if Cancel was pressed.


ap_BooleanT ap_InputText(
   ap_NameT Title,
   ap_NameT Prompt,
   ap_NameT HelpText,
   char     *InputTextBuffer);

Example

The following is "visualize_data.apc" found on-line with the corresponding test program in [../examples/learn/visualize_data/README $APROBE/examples/learn/visualize_data]. This probe will graphically display the value of the target variable MyTargetVariable by plotting its value every second and by drawing the histogram of the frequency distribution for each value of it.

The first part shows the declaration of the graph objects and the function that is called to update them. The second is the "probe program on entry" action that initializes the graph objects and registers the update function to be called periodically.

 #include "quick_gui.h"
 
 static ap_HistogramObjectPtrT HistogramObjectPtr = NULL;
 static ap_XYGraphObjectPtrT   XYGraphObjectPtr   = NULL;
 
 // Define the action that updates the graph, to be called by
 //ap_DoPeriodically()

void MyPeriodicAction(void *EP)
{
  // Note the use of the target expression $MyTargetVariable.
  // MyTargetVariable is not declared in this file, but rather is the name
  // of a variable declared in the target program.

 ap_UpdateHistogram(
    HistogramObjectPtr,           // Pointer to the histogram object
    $MyTargetVariable);           // Integer value to update the histogram with

  ap_UpdateXYGraph(
    XYGraphObjectPtr, // Pointer to the XYGraph object
    ap_TimeToSeconds(ap_ElapsedTime()),
                // Seconds since the program start
    $MyTargetVariable);           // vs. the value of MyTargetVariable
}

Example D-13a. Define Graph Objects and Update Function

probe program
{
  on_entry
  {
    // Create a new histogram object that could be used in updates
    HistogramObjectPtr = ap_CreateHistogram(
      "My Histogram",         // Title of the histogram
      5);   // Granularity of the histogram

    // Create a new XYGraph object that could be used for plotting
    XYGraphObjectPtr = ap_CreateXYGraph(
      "My Plot",        // Title of the Graph
      "Time",       // X coordinate name
      "Value of 'MyTargetVariable'",
              // Y coordinate name
      0);       // Number of points, 0 => keep them all

    // Register to update the graphs every second.
    ap_DoPeriodically(
      MyPeriodicAction,           // Periodic action to perform
      1,            // Do it every 1 second
      NULL);            // No data to pass to the periodic routine
   }
}

Example D-13b. Create Graphs and Register Update Function



Copyright 2006-2017 OC Systems, Inc.