Efficient Dynamic Verification of Concurrent Programs



Cross-Cutting Seminar
Verification in the 10 thread regime
Ganesh Gopalakrishnan
http://www.cs.utah.edu/formal_verification
1
Correctness Concerns Will Loom Everywhere…
Debug Concurrent Systems, providing rigorous guarantees
How is system correctness established?
• Certify a system to be correct without attempting to
create all real-world conditions
– Airplanes
• Can’t simulate all stress, turbulence
• Hence mathematically model, and analyze
• Over-engineer
– Software / Hardware : The old way
• Attempt (in vain) to exercise all real-world conditions
• Time-out and “ship it” based on ad-hoc / monetary criteria
– The new way
• Do real engineering – i.e. really attempt to mathematically
analyze and predict (coverage metrics, formal methods)
• Over-engineering also an option
– Redundant cores, Fault Tolerance Algorithms, …
3
Verification of Large-scale Concurrent Systems
• Must conquer the exponentials in the state
space of a concurrent program / system
– Data space is exponential
– Symmetry space is exponential
– Interleaving space is exponential
4
Conquering the exponentials (practice)
• Data space
– Find data that does not affect control
• Often possible to guess data that does not influence control
• Static Analysis can help
• Symmetry space
– Find symmetry reduction arguments
• Often possible to guess the instance to model
– Example: Three Dining Philosophers
• Symbolic Analysis can help
– e.g. symbolic MPI ranks
• Interleaving space
– Employ action independence
• Perhaps THE most non-intuitive of spaces
– Designers find it difficult to guess which interleavings to ignore
• Hence need automation in this space
– Partial Order Reduction methods
5
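
To make the data-space point above concrete, here is a small illustrative C fragment (an assumption of this write-up, not an example from the talk): the payload bytes never feed a branch, so a verifier can abstract their values away, whereas the tag steers control flow and must be modeled precisely.

```c
/* Illustrative sketch (not from the talk): payload values never reach a
 * branch, so they can be abstracted away during verification; only the
 * tag influences control flow and must be tracked exactly. */
#include <string.h>

#define STOP_TAG 0

void handle(const char *payload, size_t len, int tag, char *out)
{
    memcpy(out, payload, len);   /* pure data movement: no control effect */
    if (tag == STOP_TAG) {       /* control depends only on the tag       */
        out[0] = '\0';
    }
}
```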
Challenge: exponential interleavings
[Figure: processes P0–P4, each with many local steps; A and B are the only dependent actions. TOTAL > 10 Billion Interleavings !! — yet only the 2 orderings of A and B are RELEVANT!!! (See the thread sketch below.)]
6
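
A minimal POSIX-threads sketch of the situation in the figure (the shared counter and thread structure are assumptions for illustration, not the talk's benchmark): only the two accesses to the shared variable are dependent, so out of the astronomically many interleavings of the local steps, a partial-order-reduction tool need only explore the two relative orders of those accesses.

```c
/* Sketch (assumed example): five threads do mostly local work; only
 * threads 0 and 1 touch the shared counter. Those two accesses are the
 * only dependent actions, so only their two relative orders matter to
 * the final value of 'shared'. */
#include <pthread.h>
#include <stdio.h>

static int shared = 0;                 /* touched by threads 0 and 1 only */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg;
    volatile long local = 0;
    for (int i = 0; i < 1000; i++)     /* independent, thread-local steps */
        local += i;
    if (id == 0 || id == 1) {          /* the dependent actions "A" and "B" */
        pthread_mutex_lock(&m);
        shared = (id == 0) ? shared + 1 : shared * 2;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);   /* 1 or 2, depending on the A/B order */
    return 0;
}
```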
Focal Areas for Correctness Research
7
Focal Areas
• Hardware
– Pre-Silicon
– Post-Silicon
• Firmware
– Microcode on chip
– Microcode in subsystems (bus bridges, I/O, …)
• Software
– User apps
– APIs and libraries
– Compilation
– Runtime support (OS, work-stealing algorithms, …)
8
Where Post-Si Verification fits
in the Hardware Verification Flow
[Flow diagram: Spec → Specification Validation → Design Verification (pre-manufacture), followed by Testing for Fabrication Faults and Post-Silicon Verification (post-manufacture: does functionality match the designed behavior?), leading to the product.]
9
Focal Areas : Hardware
• Hardware
– Pre-Silicon
• Logical bugs must be caught through
– Systematic testing
– Formal analysis (i7 core was FV-ed in lieu of testing)
– Post-Silicon
• Fresh logical bugs are triggered by
– high-frequency, and
– integrated operation
• Must detect them through
– Systematic testing
– Limited Observability Testing
– Built-in support for post-silicon debugging
» e.g. “backspace queues”
» staggered clock-phase cores
10
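
As a rough software analogy for the built-in debug support mentioned above (an assumption of this write-up; real backspace queues are hardware structures), a bounded ring buffer that retains only the most recent events lets a failing post-silicon run dump its recent history despite limited observability.

```c
/* Software analogy (assumption, not the hardware design): a small ring
 * buffer keeps the most recent N recorded events, so when an error fires
 * we can dump the recent history despite limited observability. */
#include <stdio.h>

#define TRACE_DEPTH 8

static unsigned trace[TRACE_DEPTH];
static unsigned head = 0, count = 0;

static void record(unsigned event)
{
    trace[head] = event;
    head = (head + 1) % TRACE_DEPTH;
    if (count < TRACE_DEPTH) count++;
}

static void dump_recent(void)
{
    unsigned start = (head + TRACE_DEPTH - count) % TRACE_DEPTH;
    for (unsigned i = 0; i < count; i++)
        printf("event[%u] = %u\n", i, trace[(start + i) % TRACE_DEPTH]);
}

int main(void)
{
    for (unsigned e = 0; e < 20; e++)
        record(e);
    dump_recent();                     /* shows only the last 8 events */
    return 0;
}
```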
Focal Areas : Hardware
• Hardware
– Pre-Silicon
– When it works, Formal Verification rocks!
– Gold-star result:
» i7 core execution engine was FV-ed in lieu of
testing !!
Forthcoming CAV 2009 paper:
Roope Kaivola, Rajnish Ghughal, Naren Narasimhan, Amber Telfer, Jesse Whittemore, Sudhindra Pandav, Anna Slobodova, Christopher Taylor, Vladimir Frolov, Erik Reeber, and Armaghan Naik, “Replacing testing with formal verification in Intel Core i7 processor execution engine validation”
(Several of the authors are Utah alums.)
11
Post-Silicon Challenge : Limited Observability!
[Figure: internal events a, x, c, b, y, d occur inside the chip; with limited observability, only a single observed trace “a x c d y b …” is visible externally.]
12
Focal Areas : Firmware
• Firmware
– Microcode on chip
• Huge amounts of late-binding microcode on chip
– Path analysis of microcode is a HUGE problem
– Industry is investing heavily in formal methods
• Must encrypt
• Must have sufficient “planned wiggle-room”
– Microcode in subsystems (bus bridges, I/O, …)
• Crucial for overall system integrity
13
Focal Areas : Software
• Software
– User apps
– APIs and libraries
– Compilation
– Runtime support (OS, work-stealing algorithms, …)
– Dynamic Verification offers considerable hope
• MODIST (dynamic verif. of distributed systems)
• Backtrackable VMs
• Testing using FPGA hardware (Simics)
• CHESS project of Microsoft Research (for threads)
• Local projects
– ISP (for MPI)
– Inspect (for threading)
14
Workflow of a dynamic verifier
[Workflow diagram: the program (or an FPGA emulation, or a VM) is combined with an interposition layer to produce an executable; a run spawns Proc1, Proc2, ……, Procn, which interact with the runtime scheduler. The verifier hijacks the scheduler and plays out the relevant interleavings. (See the interposition sketch below.)]
15
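
To make the interposition idea concrete, here is a minimal C sketch built on MPI's standard profiling (PMPI) interface. The scheduler_permit hook and its behavior are assumptions of this sketch, not the actual ISP implementation: a real verifier would block in the hook until its central scheduler decides when the operation may fire.

```c
/* Minimal interposition sketch (an assumption of this write-up, not the
 * actual ISP code), using MPI's standard PMPI profiling interface: the
 * application's MPI_Send resolves to this wrapper, which consults a
 * scheduler hook before forwarding to the real library via PMPI_Send. */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical hook: a real dynamic verifier would block here until its
 * central scheduler permits the operation, so that relevant interleavings
 * can be played out one by one. */
static void scheduler_permit(const char *op, int dest)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("[interpose] rank %d asks to run %s(dest=%d)\n", rank, op, dest);
}

int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    scheduler_permit("MPI_Send", dest);
    return PMPI_Send(buf, count, type, dest, tag, comm);
}
```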
Focal Areas : Software
• Software
– Compilation
• So many focal areas just here
– Correct compilation respecting API / library semantics
– Correctness with respect to weak memory orderings
– Interaction of compiled code with intelligent runtime
– Failure handling
» What must be done when a core says “bye bye” ?
16
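
The weak-memory-ordering bullet above is illustrated by the standard store-buffering litmus test, sketched here in C11 (thread and variable names are ours): under sequential consistency both loads cannot return 0 in the same run, but with relaxed atomics on weakly ordered hardware, or after compiler reordering of plain stores, that outcome can appear.

```c
/* Store-buffering litmus test (a standard textbook example, sketched as
 * an illustration): r1 == 0 && r2 == 0 is impossible under sequential
 * consistency, yet possible with relaxed atomics on weak memory models. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int x, y;
static int r1, r2;

static void *t1(void *arg)
{
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

static void *t2(void *arg)
{
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r2 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    atomic_store(&x, 0);
    atomic_store(&y, 0);
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("r1=%d r2=%d\n", r1, r2);   /* r1=0, r2=0 is the weak outcome */
    return 0;
}
```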
Correctness Myths in the Multi-core Era
• Myth: Simpler cores -> easier to verify
– Reality: Circuits will be highly energy optimized
• Circuit-level verification
• Verification of power-down / power-up protocols
• Myth: Streaming models have no “rat’s nest” control
– Reality: Proper use of streaming models will re-introduce
reactive / control complexity
• Myth: Single programming paradigms simplify things
– Reality: Performance will force multi-paradigm programming
• Roadrunner uses MPI and the IBM Cell processor
• Mixed programming paradigms make verification tricky
• Myth: More cores will allow parallel verification
– Reality: Superior abstraction methods often outperform any brute-force approach
17
How about the performance space?
18
Performance / reliability bugs
• Not meeting energy budgets
• “Feature interactions” that occur only at scale
• Inability to gauge efficacy of parallelization
– For each context, decide whether parallelization
pays off
19
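
A toy cost model (our assumption, not a result from the talk) shows how the "does parallelization pay off in this context?" decision can be made mechanical: Amdahl-style speedup plus a per-thread coordination overhead, with parallelization chosen only when the predicted time beats the serial time.

```c
/* Toy cost model (assumed parameters): Amdahl-style parallel time with a
 * per-thread coordination overhead; parallelize only when it wins. */
#include <stdio.h>

static double parallel_time(double t_serial, double par_fraction,
                            int threads, double per_thread_overhead)
{
    double serial_part   = (1.0 - par_fraction) * t_serial;
    double parallel_part = par_fraction * t_serial / threads;
    return serial_part + parallel_part + per_thread_overhead * threads;
}

int main(void)
{
    double t_serial = 100.0;        /* ms, measured for this context      */
    double par_fraction = 0.9;      /* fraction that can run in parallel  */
    double overhead = 2.0;          /* ms of coordination per thread      */

    for (int p = 1; p <= 16; p *= 2) {
        double t_par = parallel_time(t_serial, par_fraction, p, overhead);
        printf("threads=%2d  predicted=%6.1f ms  %s\n", p, t_par,
               t_par < t_serial ? "parallelize" : "stay serial");
    }
    return 0;
}
```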
Concluding Remarks (1)
• Correctness is an enabler of performance!
– Safety through timidity never worked
• People will seek performance eventually
– Provide them strong safety-nets
– Incrementally re-verify performance optimizations
• Plan for correctness
– Correctness cannot be left to chance
– Not an after-thought
– Formal Correctness WILL sell and pay for itself
20
Concluding Remarks (2)
• Standardization is very important!
• How to standardize? Two options
– One minimal API with a few high-level functions
• Pros:
– Easier to understand and verify
• Cons:
– Ensuring portable performance could be difficult
» So people will learn to bypass and hack around them
– One very broad API that exposes a LOT of low-level detail
• Cons:
– Steeper learning curve
• Pros:
– Has a much better chance to succeed – example : MPI
– Formal Methods can help mitigate burden of learning / use
» Example: Our ISP push-button verifier of MPI programs
» Recently handled many large applications
» Recently worked out most examples from Pacheco’s book
21
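
To show the kind of nondeterminism a push-button tool like ISP must cover, here is a small illustrative MPI fragment (our example, not one of the talk's benchmarks): the wildcard receive can match either sender, so a dynamic verifier replays the run to cover both matches, whereas a plain test run typically observes only one.

```c
/* Illustrative MPI fragment (assumed example): MPI_ANY_SOURCE introduces
 * a nondeterministic match that a dynamic verifier must explore both ways. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, data;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* run with at least 3 ranks */

    if (rank == 1 || rank == 2) {
        data = rank;
        MPI_Send(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        MPI_Status st;
        /* Nondeterministic match point: rank 1 or rank 2 may arrive first. */
        MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st);
        MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st);
        printf("last message came from rank %d\n", st.MPI_SOURCE);
    }
    MPI_Finalize();
    return 0;
}
```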