
Performance Research in the Boderc project
Jozef Hooman
Research Fellow
Embedded Systems Institute
Eindhoven
The Netherlands
World's fastest duplex digital printing system: 250 duplex A4 or 132 duplex A3 pages per minute
Artist Meeting, Bologna, 22 May 2006
A Boderc Research Topic
General business model:
• Develop the first high-end machine / system with over-dimensioned HW
• In subsequent versions this is optimized to increase performance and to reduce costs
• Questions:
– what is the most suitable hardware platform (currently typically a PC + local nodes connected by CAN)?
– how to distribute the software (high-level management, UI, low-level motor control, etc.) over the various processing nodes?
– what is the optimal configuration for the processing nodes (scheduling strategies, memory management, cache sizes, ...)?
→ research on performance modeling and analysis
Comparison of methods
Continuation of the work on the comparison of performance analysis methods by Marcel Verhoef
Uppaal, MPA, SymTA/S, and POOSL applied to an in-car radio-navigation system
Results have been improved and extended after the Artist workshop in Leiden (November 2005)
http://www.ee.ethz.ch/~leiden05
Uppaal work and the comparison were presented at the WPDRTS workshop of the International Parallel and Distributed Processing Symposium 2006
In-Car Radio Navigation System
• Car radio with built-in navigation system
• User interface needs to be responsive
• Traffic messages must be processed in a timely way
• Several applications may execute concurrently
System Overview – Change Volume
[Diagram: block view with User Interface, Communication, Navigation, Radio and Database; the Change Volume scenario is annotated with 200 msec and 50 msec timing requirements]
System Overview – Handle TMC
[Diagram: same block view (User Interface, Communication, Navigation, Radio, Database); the Handle TMC scenario is annotated with a 1000 msec timing requirement]
Proposed Architecture Alternatives
[Diagram: five architecture alternatives (A)–(E) that map the MMI, NAV and RAD tasks onto one, two or three processing nodes; the annotated node capacities are 11, 22, 113, 130 and 260 MIPS, and the connecting bus bandwidths are 57 and 72 kbps]
• kbps = kilobit per second
• MIPS = 10^6 instructions per second
• assume no (protocol or scheduling) overhead (zero cost)
• inter-task communication on the same resource is instantaneous (zero cost)
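Under these zero-overhead assumptions, a contention-free latency follows directly from the demand placed on a resource and that resource's capacity. The sketch below is only a minimal illustration; the task demand and message size are made-up placeholders, not the figures of the case study.

```python
# Minimal latency estimate under the slide's zero-overhead assumptions:
# a task of D mega-instructions on a C MIPS processor takes D / C seconds,
# and a message of B bits on a K kbps bus takes B / (K * 1000) seconds.
# The demands below are illustrative placeholders, not the case-study values.

def task_time_ms(demand_mega_instructions: float, cpu_mips: float) -> float:
    """Pure execution time of one task activation, in milliseconds."""
    return demand_mega_instructions / cpu_mips * 1000.0

def msg_time_ms(message_bits: float, bus_kbps: float) -> float:
    """Pure transmission time of one message, in milliseconds."""
    return message_bits / (bus_kbps * 1000.0) * 1000.0

# Example: a hypothetical 1.0 mega-instruction task on a 22 MIPS node,
# followed by a 4000-bit message over a 72 kbps bus.
end_to_end = task_time_ms(1.0, 22) + msg_time_ms(4000, 72)
print(f"end-to-end (no contention): {end_to_end:.1f} ms")
```

Contention between tasks sharing a node or messages sharing the bus is exactly what the four analysis methods below are used to bound.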
Method: Uppaal (1)
• Model checker for timed automata
• Co-developed at Uppsala (S) and Aalborg (DK) by Wang Yi, Kim Larsen et al.
• Integrated tool, graphical modeling interface
• Validation (simulation) and verification (model checking)
• Networks of timed automata
• Expressive and powerful language
• TA models prone to state space explosion problem
• http://www.uppaal.com
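To give a flavour of what model checking such a model involves, the sketch below does an explicit-state reachability check of a deadline property on a heavily simplified, discretized system. Uppaal itself works on networks of timed automata over dense time with symbolic zone-based exploration; the task parameters here are invented for illustration.

```python
from collections import deque

# Illustration of explicit-state reachability checking on a tiny timed model
# with a discretized integer clock. (Uppaal uses networks of timed automata
# over dense time and symbolic zones; this is only an analogy.)
# Hypothetical system: one task is released every PERIOD time units, needs
# WCET units of CPU time, and must finish within DEADLINE units of its release.

PERIOD, WCET, DEADLINE = 10, 3, 5

def step(state):
    """Advance time by one unit. State = (time since release, remaining work)."""
    t, rem = state
    rem = max(rem - 1, 0)      # the CPU executes the task if work remains
    t += 1
    if t == PERIOD:            # the next job is released
        t, rem = 0, WCET
    return (t, rem)

def violates(state):
    t, rem = state
    return rem > 0 and t >= DEADLINE   # still running past the deadline

init = (0, WCET)
seen, frontier = {init}, deque([init])
while frontier:                         # breadth-first state-space exploration
    s = frontier.popleft()
    if violates(s):
        print("deadline violation reachable in state", s)
        break
    n = step(s)
    if n not in seen:
        seen.add(n)
        frontier.append(n)
else:
    print("deadline property holds;", len(seen), "reachable states explored")
```

Even in this toy setting the number of reachable states grows with the clock ranges and the number of parallel components, which is the state space explosion mentioned above.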
Uppaal model (Martijn Hendriks)
Method: MPA (1)
• Modular Performance Analysis
• Developed at ETH Zurich (Lothar Thiele et al)
• Performance networks analyzed with real-time calculus
• Analytic method, deterministic queuing theory
• Adaptation of Network Calculus (Le Boudec, Thiran)
• Describes event streams by interval bound functions
• Information is lost: t → Δt
• Evaluation is very fast (no simulation)
• http://www.mpa.ethz.ch
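As a rough illustration of the kind of computation behind real-time calculus, the sketch below evaluates the classic delay bound as the maximal horizontal distance between an upper arrival (demand) curve and a lower service curve, sampled at integer window lengths. The periodic-with-jitter arrival curve, the constant-rate service curve and all parameter values are assumptions for illustration, not the curves of the case study.

```python
import math

# Horizontal-distance delay bound from deterministic network/real-time calculus:
#   delay <= sup_{d >= 0} inf{ tau >= 0 : alpha_u(d) <= beta_l(d + tau) }
# evaluated here on curves sampled at integer window lengths (ms).

def alpha_u(delta_ms, period_ms=30.0, jitter_ms=10.0, wcet_ms=4.0):
    """Upper demand curve of a periodic-with-jitter stream, in ms of work."""
    if delta_ms <= 0:
        return 0.0
    return math.ceil((delta_ms + jitter_ms) / period_ms) * wcet_ms

def beta_l(delta_ms, rate=0.5):
    """Lower service curve of a resource guaranteeing `rate` ms of work per ms."""
    return max(0.0, rate * delta_ms)

def delay_bound(horizon_ms=1000):
    worst = 0
    for d in range(horizon_ms + 1):
        demand = alpha_u(d)
        tau = 0
        # smallest tau such that the service in a window d + tau covers the demand
        while beta_l(d + tau) < demand:
            tau += 1
        worst = max(worst, tau)
    return worst

print("delay bound:", delay_bound(), "ms")
```

The actual MPA toolbox works with piecewise-linear curve representations and composes several streams and resources; only the shape of the calculation is shown here.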
MPA model (Ernesto Wandeler)
Method: SymTA/S (1)
• Symbolic Timing Analysis for Systems
• Developed at TU Braunschweig (Rolf Ernst et al)
• Classical (formal) scheduling analysis techniques
• Symbolic simulation
• Calculate resource-local optima
• Optimize at system level by iteration over local optima
• Heterogeneous architectures
• Complex task dependencies, context-aware analysis
• Rapid design space exploration by sensitivity analysis
• http://www.symtavision.com
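The classical scheduling analysis that SymTA/S builds on includes, for a fixed-priority preemptive resource, the standard worst-case response-time iteration. A minimal sketch (the task set and its parameters are invented, not the case-study values):

```python
import math

# Classical worst-case response-time analysis for fixed-priority preemptive
# scheduling of independent periodic tasks on one processor:
#   R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j
# solved by fixed-point iteration. Task parameters below are illustrative only.

# (name, period T in ms, worst-case execution time C in ms), highest priority first
tasks = [("RAD", 10.0, 1.0), ("NAV", 40.0, 10.0), ("MMI", 100.0, 20.0)]

def response_time(index):
    _, period, wcet = tasks[index]
    r = wcet
    while True:
        interference = sum(math.ceil(r / t_j) * c_j
                           for _, t_j, c_j in tasks[:index])
        r_next = wcet + interference
        if r_next == r:
            return r
        if r_next > period:          # deadline (= period) missed: not schedulable
            return None
        r = r_next

for i, (name, _, _) in enumerate(tasks):
    r = response_time(i)
    status = f"{r:.1f} ms" if r is not None else "> period (unschedulable)"
    print(f"{name}: worst-case response time {status}")
```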
SymTA/S model (Kai Richter)
Method: POOSL (1)
• Parallel Object-Oriented Specification Language
• The language combines primitives for specifying data manipulation, concurrency and timing
• SHE method: Software / Hardware Engineering
• SHESim tool for model construction and simulation
• Rotalumis for high-speed batch-oriented simulation
• Formal semantics based on probabilistic timed labeled transition systems
• Symbolic execution
• http://www.es.ele.tue.nl/poosl/
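POOSL models are evaluated by timed simulation (SHESim interactively, Rotalumis in batch mode). As a loose analogy only, the sketch below runs a minimal generator-based discrete-event simulation of two concurrent processes sharing a simulated clock; it does not use POOSL's syntax or semantics, and both processes are invented.

```python
import heapq

# A minimal generator-based discrete-event simulation: each process yields the
# delay until its next action, and the kernel advances a shared simulated clock.
# This only mimics the simulation style; it is not POOSL, and the two processes
# below are invented for illustration.

def periodic_sender(period_ms, count):
    for i in range(count):
        yield period_ms                     # wait one period
        print(f"[t={clock:7.1f} ms] sender emits message {i}")

def background_logger(interval_ms):
    while True:
        yield interval_ms
        print(f"[t={clock:7.1f} ms] logger tick")

clock = 0.0
event_queue = []                            # (wake-up time, id, process generator)
for pid, proc in enumerate([periodic_sender(30.0, 5), background_logger(25.0)]):
    heapq.heappush(event_queue, (0.0, pid, proc))

while event_queue and clock < 200.0:        # run for about 200 ms of simulated time
    clock, pid, proc = heapq.heappop(event_queue)
    try:
        delay = next(proc)                  # process runs until its next wait
        heapq.heappush(event_queue, (clock + delay, pid, proc))
    except StopIteration:
        pass                                # process has terminated
```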
POOSL model (Menno de Hoon)
Analysis
• Experimented with many environment models (see the sketch below):
– Purely periodic with zero offset (synchronous)
– Purely periodic with fixed offset (synchronous)
– Purely periodic with unknown offset (asynchronous)
– Periodic with jitter (j ≤ p)
– Periodic with bursts (j = 2p, d = 0)
– Sporadic (periodic with only an upper bound on the period)
• Some results are easy to verify by hand
– AddressLookup is fully independent and has the highest priority
– ChangeVolume depends only on itself
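As referenced above, a minimal sketch of what some of these environment models mean in terms of generated release times, using invented parameter values (only the periodic, jitter and burst cases are shown):

```python
import random

# Release times (in ms) for some of the environment models listed above,
# with invented parameters: period p, jitter j, minimum distance d.

def periodic(p, n, offset=0.0):
    """Purely periodic stream: the k-th event is released at offset + k*p."""
    return [offset + k * p for k in range(n)]

def periodic_with_jitter(p, j, n, seed=0):
    """Each release may deviate from its nominal time k*p by at most j (j <= p)."""
    rng = random.Random(seed)
    return [k * p + rng.uniform(0.0, j) for k in range(n)]

def periodic_with_bursts(p, j, d, n, seed=0):
    """Jitter larger than the period (e.g. j = 2p) lets releases cluster into
    bursts; successive events are still kept at least d apart."""
    rng = random.Random(seed)
    times = sorted(k * p + rng.uniform(0.0, j) for k in range(n))
    for i in range(1, len(times)):          # enforce the minimum distance d
        times[i] = max(times[i], times[i - 1] + d)
    return times

print("periodic   :", [round(t) for t in periodic(100, 5)])
print("with jitter:", [round(t) for t in periodic_with_jitter(100, 40, 5)])
print("with bursts:", [round(t) for t in periodic_with_bursts(100, 200, 0, 5)])
```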
Results
Observations & lessons learnt
• Comparing results is as hard as getting the results
– Did we really model the same thing?
– Simulation / computation effects or true “problem”?
– Interaction with method experts is needed to make comparison!
• Methods are typically
– Either biased towards application domain; can cause mismatch
– Or very generic; can cause huge modeling effort
• Methods can be used in a complementary way
– Provide answers to different types of questions
– Model validation by moving to another paradigm
SymTA/S Evaluation
SymTA/S was used at Océ (Hennie Freriks) to model the data path
• Nice modeling tool; after understanding the theoretical background, a model was made in less than a day by an industrial engineer
• The modeling itself (and collecting the data for it) was very useful and provided valuable insight
• Not so useful for analysis of the data path because, e.g.,
– time dependencies between use-case scenarios could not be modeled (and hence results were far too pessimistic)
– finite event streams cannot be modeled
• Usability of the tool improved a lot during the evaluation period (Jan/Feb 2006)
Performance analysis
Industrial practice:
start with a distributed solution, then try to reduce costs by combining more functionality on a single node
Question: does it fit, and which hardware is suitable?
Work on performance measurements and models (Peter van den Bosch, Océ)
Aim: a method to help the embedded control engineer make a well-founded choice of hardware platform
Performance models
SW development based on UML in Rose RealTime
[Diagram: layered SW stack – application software and motor control software on top of the Rose RT run-time system, timer services, VxWorks, VxFusion and SEMPROCE]
How to relate the performance of the underlying HW platform to the high-level SW model and its requirements?
Performance Measurements
Measurements have been done on an existing platform
(ARM9, 200 MHz, 5-stage instruction pipeline, 100 MHz 32-bit SDRAM memory bus, 8 kB data and 8 kB instruction caches)
and on SW with soft real-time (action planning) and hard real-time tasks
• insight into caching behavior, memory latencies and modeling overhead (Rose RT)
• large variation at the lowest level, but less at the high-level application
• formulas and graphs have been derived, also based on characteristics of the SW (# cache misses, # interrupts, …); see the sketch below
Unclear: a general method/approach and/or suitable models & tools to decide quickly and with reasonable accuracy on a suitable HW platform and the best configuration choices, given certain characteristics of the SW
→ for the moment, focus on examples, later try to generalize
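The derived formulas themselves are not reproduced in the slides. Purely as an illustration of the kind of relation involved (all symbols and the linear form are assumptions introduced here, not the project's actual formulas), a first-order execution-time model for such a platform could look like:

```latex
% Illustrative first-order model only; not the formulas derived in the project.
% Execution time = ideal CPU time + cache-miss penalty + interrupt overhead + run-time overhead.
T_{exec} \approx \frac{N_{instr} \cdot CPI_{ideal}}{f_{cpu}}
          + N_{miss} \cdot t_{miss}
          + N_{int} \cdot t_{int}
          + t_{RoseRT}
```

Here N_instr is the executed instruction count, CPI_ideal the pipeline's ideal cycles per instruction, f_cpu the clock frequency (200 MHz on the ARM9 above), N_miss and t_miss the number and cost of cache misses, N_int and t_int the number and cost of interrupts, and t_RoseRT the Rose RT modeling overhead; all of these are hypothetical names used only for this sketch.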
Concluding Remarks
Several results of the Boderc project are used at Océ, e.g.
• Matlab-based visualization of paper path
• event-based scheduling
Important spin-off: increased awareness at Océ of
• the usefulness of modeling and
• the benefits of multi-disciplinary collaboration