Hung-Hsun's report summary


Profiling/Tracing Method and Tool Evaluation Strategy
Summary Slides
Hung-Hsun Su
UPC Group, HCS lab
1/25/2005
Profiling/Tracing Method
Experimental performance measurement proceeds in five phases:
- Instrumentation – insertion of instrumentation code (in general)
- Measurement – the actual measuring stage
- Analysis – filtering, aggregation, and analysis of the gathered data
- Presentation – display of the analyzed data to the user; the only phase that deals directly with the user
- Optimization – the process of resolving bottlenecks
[Diagram: Original code → (Instrumentation) → Instrumented code → (Measurement) → Raw data → (Analysis) → Meaningful set of data → (Presentation) → Bottleneck detection → (Optimization) → Improved code]
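To make these phases concrete, below is a minimal sketch of manual, source-level instrumentation in C. The prof_* helper names and the single timed region are assumptions for illustration, not any particular tool's API.

```c
/* Minimal sketch of the instrumentation + measurement phases using manual,
 * source-level instrumentation.  The prof_* helpers are hypothetical. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static double elapsed[16];                    /* raw data produced by measurement */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Instrumentation phase: these calls are inserted around regions of interest. */
static double prof_region_begin(void)              { return now_sec(); }
static void   prof_region_end(int id, double t0)   { elapsed[id] += now_sec() - t0; }

static void compute(void)                           /* stand-in for real work */
{
    volatile double x = 0.0;
    for (long i = 0; i < 10000000L; i++) x += i * 0.5;
}

int main(void)
{
    double t0 = prof_region_begin();                /* instrumented code */
    compute();
    prof_region_end(0, t0);                         /* measurement: record raw data */

    /* Analysis + presentation, trivially reduced to one number per region. */
    printf("region 0: %.3f s\n", elapsed[0]);
    return 0;
}
```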
Instrumentation (1)
Overhead
- Manual – amount of work needed from the user
- Performance – overhead added by the tool to the program
Profiling / Tracing (a short sketch contrasting the two follows after this slide)
- Profiling – collection of statistical event data; generally refers to filtering and aggregating a subset of the event data after the program terminates
- Tracing – used to record the majority of possible events in logical order (generally with timestamps); can be used to reconstruct accurate program behavior, but requires a large amount of storage
- Two ways to lower tracing cost: (1) a compact trace file format, (2) a smart tracing system that can be turned on and off
Manual vs. Automatic – whether the user or the tool is responsible for instrumenting the original code; a categorization of which events are better suited to which method is desirable
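The profiling/tracing distinction can be seen in a short sketch: profiling keeps only aggregate counters, while tracing appends timestamped records, and a simple switch stands in for the "smart tracing" idea. The event names, file name, and record format are assumptions for illustration only.

```c
/* Sketch contrasting profiling (aggregate statistics only) with tracing
 * (timestamped event log), plus a simple on/off switch as one way to limit
 * tracing cost.  Event names, file name, and record format are made up. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

enum { EV_BARRIER, EV_GET, EV_PUT, EV_COUNT };

static unsigned long prof_count[EV_COUNT];    /* profiling: only counters survive  */
static FILE *trace_file;                      /* tracing: storage grows with events */
static int   tracing_on = 1;                  /* a "smart" system could flip this   */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static void record_event(int ev)
{
    prof_count[ev]++;                                   /* profiling path */
    if (tracing_on && trace_file)                       /* tracing path   */
        fprintf(trace_file, "%.9f %d\n", now_sec(), ev);
}

int main(void)
{
    trace_file = fopen("trace.log", "w");
    for (int i = 0; i < 1000; i++) {
        record_event(EV_GET);
        if (i == 500) tracing_on = 0;                   /* turn tracing off mid-run */
        record_event(EV_BARRIER);
    }
    if (trace_file) fclose(trace_file);
    for (int ev = 0; ev < EV_COUNT; ev++)
        printf("event %d: %lu occurrences\n", ev, prof_count[ev]);
    return 0;
}
```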
Instrumentation (2)
Number of passes – the number of times a program needs to be executed to obtain performance data. One pass is desirable for long-running programs, but multi-pass can provide more accurate data (e.g., first pass = profiling, later passes = tracing, with the profiling data used to turn tracing on and off; see the sketch after this slide). Hybrid methods are available but may not be as accurate as multi-pass.
Levels – need at least the source and binary levels to be useful (some events are better suited to the source level, others to the binary level)
- Source level – manual, pre-compiler, or instrumentation language
- System level – library or compiler
- Operating system level
- Binary level – static or dynamic
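A minimal sketch of the multi-pass idea: the first run profiles each region and saves the results, and a second run reads that profile and enables tracing only for the hot regions. The profile.dat file name, the hard-coded region times, and the 0.5 s threshold are assumptions for illustration.

```c
/* Sketch of the multi-pass idea: pass 1 profiles each region and saves the
 * results; pass 2 reads that profile and enables tracing only for regions
 * above a (made-up) 0.5 s threshold.  File name and data are illustrative. */
#include <stdio.h>

#define NREGIONS 4

int main(int argc, char **argv)
{
    (void)argv;
    /* Stand-in for per-region times the first pass would actually measure. */
    double region_time[NREGIONS] = { 0.9, 0.01, 2.5, 0.02 };
    int trace_enabled[NREGIONS] = { 0 };

    if (argc == 1) {                          /* pass 1: profiling only */
        FILE *f = fopen("profile.dat", "w");
        if (!f) return 1;
        for (int r = 0; r < NREGIONS; r++)
            fprintf(f, "%f\n", region_time[r]);
        fclose(f);
        printf("profile written; rerun with any argument for the tracing pass\n");
    } else {                                  /* pass 2: selective tracing */
        FILE *f = fopen("profile.dat", "r");
        if (!f) return 1;
        for (int r = 0; r < NREGIONS; r++) {
            double t;
            if (fscanf(f, "%lf", &t) == 1 && t > 0.5)
                trace_enabled[r] = 1;
        }
        fclose(f);
        for (int r = 0; r < NREGIONS; r++)
            printf("region %d: tracing %s\n", r, trace_enabled[r] ? "ON" : "off");
    }
    return 0;
}
```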
Tool Evaluation Strategy
Each feature below is listed with its section number, a description, the information to gather, the categories it falls under, and its importance rating.
Available metrics (9.2.1.3)
  Description: kinds of metrics/events the tool can track (e.g., function, hardware, synchronization)
  Information to gather: metrics it can provide (function, hardware, ...)
  Categories: Productivity
  Importance: Critical

Cost (9.1.1)
  Description: cost of obtaining the software, license, etc.
  Information to gather: how much
  Categories: Miscellaneous
  Importance: Average

Documentation quality (9.3.2)
  Description: helpfulness of the documentation in terms of understanding the tool's design and its usage (usage more important)
  Information to gather: clear documentation? Helpful documentation?
  Categories: Miscellaneous
  Importance: Minor
Extendibility (9.3.1)
  Description: ease of (1) adding new metrics and (2) extending the tool to new languages, particularly UPC/SHMEM
  Information to gather: 1. estimate of how easy it is to extend to UPC/SHMEM; 2. how easy it is to add new metrics
  Categories: Miscellaneous
  Importance: Critical

Filtering and aggregation (9.2.3.1)
  Description: filtering is the elimination of "noise" data; aggregation is the combining of data into a single meaningful event (see the sketch after this list)
  Information to gather: does it provide filtering? Aggregation? To what degree?
  Categories: Productivity, Scalability
  Importance: Critical

Hardware support (9.1.4)
  Description: hardware support of the tool
  Information to gather: which platforms?
  Categories: Usability, Portability
  Importance: Critical

Heterogeneity support (9.1.5)
  Description: ability to run the tool in a system where nodes have different HW/SW configurations
  Information to gather: supports running in a heterogeneous environment?
  Categories: Miscellaneous
  Importance: Minor

Installation (9.1.2)
  Description: ease of installing the tool
  Information to gather: 1. how to get the software; 2. how hard it is to install; 3. components needed; 4. estimated number of hours needed for installation
  Categories: Usability
  Importance: Minor
Interoperability (9.2.2.2)
  Description: ease of viewing the tool's results using other tools, using other tools in conjunction with this tool, etc.
  Information to gather: list of other tools that can be used with this one
  Categories: Portability
  Importance: Average

Learning curve (9.1.6)
  Description: learning time required to use the tool
  Information to gather: estimated learning time for the basic set of features and for the complete set of features
  Categories: Usability, Productivity
  Importance: Critical

Manual overhead (9.2.1.1)
  Description: amount of work needed by the user to instrument their program
  Information to gather: 1. method for manual instrumentation (source code, instrumentation language, etc.); 2. automatic instrumentation support
  Categories: Usability, Productivity
  Importance: Average

Measurement accuracy (9.2.2.1)
  Description: accuracy level of the measurement
  Information to gather: evaluation of the measuring method
  Categories: Productivity, Portability
  Importance: Critical

Multiple analyses (9.2.3.2)
  Description: the amount of post-measurement analysis the tool provides; generally good to have different analyses for the same set of data
  Information to gather: provides multiple analyses? Useful analyses?
  Categories: Usability
  Importance: Average

Multiple executions (9.3.5)
  Description: tool support for executing multiple programs at once
  Information to gather: supports multiple executions?
  Categories: Productivity
  Importance: Minor–Average
Multiple views (9.2.4.1)
  Description: tool's ability to provide different views/presentations of the same set of data
  Information to gather: 1. provides multiple views? 2. intuitive views?
  Categories: Usability, Productivity
  Importance: Critical

Performance bottleneck identification (9.2.5.1)
  Description: tool's ability to identify performance bottlenecks and to help resolve them
  Information to gather: supports automatic bottleneck identification? How?
  Categories: Productivity
  Importance: Minor–Average

Profiling / tracing support (9.2.1.2)
  Description: method of profiling/tracing the tool uses
  Information to gather: 1. profiling? tracing? 2. trace format; 3. tracing strategy; 4. mechanism for turning tracing on and off
  Categories: Productivity, Portability, Scalability
  Importance: Critical
Response time (9.2.6)
  Description: amount of time needed before any useful information is fed back to the user after program execution
  Information to gather: how long does it take to get useful information back?
  Categories: Productivity
  Importance: Average

Searching (9.3.6)
  Description: tool support for searching for a particular event or set of events
  Information to gather: supports data searching?
  Categories: Productivity
  Importance: Minor

Software support (9.1.3)
  Description: software support of the tool
  Information to gather: 1. libraries it supports; 2. languages it supports
  Categories: Usability, Productivity
  Importance: Critical

Source code correlation (9.2.4.2)
  Description: tool's ability to correlate event data back to the source code
  Information to gather: able to correlate performance data to source code?
  Categories: Usability, Productivity
  Importance: Critical

System stability (9.3.3)
  Description: stability of the tool
  Information to gather: crash rate
  Categories: Usability, Productivity
  Importance: Average

Technical support (9.3.4)
  Description: responsiveness of the tool developer
  Information to gather: 1. time to get a response from the developer; 2. quality/usefulness of system messages
  Categories: Usability
  Importance: Minor–Average
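As a follow-up to the Filtering and aggregation criterion (9.2.3.1), here is a minimal sketch of what that analysis step might look like; the record layout and the 1 microsecond noise cutoff are assumptions, not any tool's actual format.

```c
/* Sketch of filtering + aggregation (9.2.3.1): raw trace records are reduced
 * to per-event totals (aggregation) after dropping very short records treated
 * as noise (filtering).  Record layout and the cutoff value are assumed. */
#include <stdio.h>

struct trace_rec { int event; double duration; };

int main(void)
{
    struct trace_rec raw[] = {                 /* stand-in for a real trace file */
        {0, 3.2e-3}, {1, 4.0e-7}, {0, 1.1e-3}, {1, 2.5e-4}, {0, 9.0e-8},
    };
    const double noise_cutoff = 1e-6;          /* filtering threshold */
    double total[2] = { 0 };
    unsigned long count[2] = { 0 };

    for (size_t i = 0; i < sizeof raw / sizeof raw[0]; i++) {
        if (raw[i].duration < noise_cutoff)     /* filtering: drop noise records  */
            continue;
        total[raw[i].event] += raw[i].duration; /* aggregation: combine per event */
        count[raw[i].event]++;
    }
    for (int ev = 0; ev < 2; ev++)
        printf("event %d: %lu records, %.6f s total\n", ev, count[ev], total[ev]);
    return 0;
}
```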