iWarp-Based Remote Interactive Scientific Visualization



iWarp-Based Remote Interactive Scientific Visualization
CENIC 2008
Oakland, CA
Scott A. Friedman
UCLA Academic Technology Services
Research Computing Technologies Group
Our Challenge
• Applications that make use of
– 10 gigabit network infrastructure
– iWarp, leveraging our existing IB code
• Extend access to our visualization cluster
– Remote sites around UCLA campus
– Remote sites around UC system
– Remote sites beyond…
Our Visualization Cluster
• Hydra - InfiniBand based
– 24 rendering nodes (2x NVIDIA G70)
• 1 high definition display node
• 3 visualization center projection nodes
• 1 remote visualization bridge node
– Research system, used primarily for
• High performance interactive rendering (60Hz)
• Load balanced rendering
• Spatio-temporal data exploration and discovery
– System requires both low latency and high bandwidth
Remote Visualization Bridge Node
• Bridges from IB to iWarp/10G Ethernet
• Appears as a display to the cluster
– No change to existing rendering system
• Pixel data arrives over IB and
– is sent on over iWarp to a remote display node
– uses the same RDMA protocol on both legs
• Same buffer used for receives and sends
• Very simple in principle - sort of…
– is pipelined along the entire path
• pipeline chunk size is optimized offline
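For illustration, here is a minimal sketch of what that forwarding loop could look like with libibverbs, assuming the iWarp queue pair is already connected and the pixel buffer is registered once and shared between the IB receives and the iWarp sends. Connection setup and the IB-side receive handling are omitted, and forward_frame, CHUNK_BYTES, and the parameters are illustrative names, not the actual Hydra code.

    /* Sketch of the bridge node's per-frame forwarding loop (illustrative,
     * not the production code).  'frame' points into the single registered
     * buffer that is used for both the IB receives and the iWarp sends. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    #define CHUNK_BYTES (256 * 1024)   /* pipeline chunk size, tuned offline */

    static int forward_frame(struct ibv_qp *qp_iwarp, struct ibv_cq *send_cq,
                             struct ibv_mr *mr, uint8_t *frame, size_t frame_len,
                             uint64_t remote_addr, uint32_t rkey)
    {
        for (size_t off = 0; off < frame_len; ) {
            size_t len = frame_len - off;
            if (len > CHUNK_BYTES)
                len = CHUNK_BYTES;

            /* ...block here until the cluster has delivered this chunk over IB
             * (e.g. a completion on the IB-side receive queue)... */

            struct ibv_sge sge;
            memset(&sge, 0, sizeof sge);
            sge.addr   = (uintptr_t)(frame + off);
            sge.length = (uint32_t)len;
            sge.lkey   = mr->lkey;

            struct ibv_send_wr wr, *bad = NULL;
            memset(&wr, 0, sizeof wr);
            wr.opcode              = IBV_WR_RDMA_WRITE;  /* push pixels to remote display */
            wr.sg_list             = &sge;
            wr.num_sge             = 1;
            wr.send_flags          = IBV_SEND_SIGNALED;
            wr.wr.rdma.remote_addr = remote_addr + off;
            wr.wr.rdma.rkey        = rkey;

            if (ibv_post_send(qp_iwarp, &wr, &bad))
                return -1;

            /* For clarity this waits for each write before moving on; the real
             * pipeline overlaps this wait with the arrival of the next IB chunk. */
            struct ibv_wc wc;
            while (ibv_poll_cq(send_cq, 1, &wc) == 0)
                ;
            if (wc.status != IBV_WC_SUCCESS)
                return -1;

            off += len;
        }
        return 0;
    }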
Simple Diagram
[Diagram: pixel data flows from the visualization cluster over IB to the bridge node, which acts like a display to the cluster, then over iWarp across the UCLA campus, CENIC, NLR, and SCinet networks to a remote HD node; a local HD node and the input path remain on the IB side.]
What are we transmitting
• High definition uncompressed ‘video’ stream
– 60 Hz at 1920x1080 ~ 396 MB/s (~3 Gbps)
– One frame every 16.6 ms
• Achieved three simultaneous streams at UCLA
– Using a single bridge node
– Just over 9.2 Gbps over campus backbone
• Actual visualization
– Demo is a particle DLA simulation
• Diffusion limited aggregation - physical chemistry
– N-body simulations
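A quick back-of-the-envelope check of that stream rate (a standalone sketch, not part of the system; it assumes 24-bit RGB pixels, so the ~396 MB/s figure above, which is slightly higher than the raw 24-bit payload, presumably reflects the actual pixel format or per-frame framing):

    #include <stdio.h>

    int main(void)
    {
        const double width = 1920, height = 1080, hz = 60, bytes_per_pixel = 3;
        double frame_bytes  = width * height * bytes_per_pixel;  /* ~6.2 MB per frame */
        double stream_bytes = frame_bytes * hz;                  /* ~373 MB/s         */
        printf("frame: %.1f MB  stream: %.0f MB/s (%.2f Gbps)\n",
               frame_bytes / 1e6, stream_bytes / 1e6, stream_bytes * 8.0 / 1e9);
        printf("frame period: %.1f ms\n", 1000.0 / hz);
        return 0;
    }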
Diffusion limited aggregation
Continuing Work
• Local latency is manageable
– ~60 µs around campus
• Longer latencies / distances
– UCLA to UC Davis ~20 ms rtt (CENIC)
– UCLA to SC07 (Reno, NV) ~14 ms rtt (CENIC/NLR)
– How much can we tolerate - a factor of interactivity
– Jitter can be an issue, difficult to buffer, typically just toss data
• Challenges
– Proper application data pipelining helps hide latency
– Buffering is not really an option due to interaction
– Packet loss is a killer - some kind of provisioning desired/needed
– Remote DMA is not amenable to unreliable transmission
– Reliable hybrid application protocol likely best solution
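One way to quantify why pipelining matters at these distances (a hypothetical calculation, not from the talk): the bandwidth-delay product gives the amount of data that must already be in flight to keep a ~3 Gbps stream busy across one round trip; at 20 ms that is more than a full HD frame.

    #include <stdio.h>

    int main(void)
    {
        const double gbps = 3.0;                       /* approximate stream rate         */
        const double rtt_ms[] = { 0.06, 14.0, 20.0 };  /* campus, SC07 (Reno), UC Davis   */
        const char  *label[]  = { "UCLA campus", "UCLA to SC07", "UCLA to UC Davis" };

        for (int i = 0; i < 3; i++) {
            /* bandwidth-delay product: bytes that must be in flight per round trip */
            double in_flight = (gbps * 1e9 / 8.0) * (rtt_ms[i] / 1000.0);
            printf("%-18s rtt %6.2f ms -> ~%7.3f MB in flight\n",
                   label[i], rtt_ms[i], in_flight / 1e6);
        }
        return 0;
    }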
Thank you
• UCLA
– Mike Van Norman, Chris Thomas
• UC Davis
– Rodger Hess, Kevin Kawaguchi
• SDSC
– Tom Hutton, Matt Kullberg, Susan Rathburn
• CENIC
– Chris Costa
• Open Grid Computing, Chelsio
– Steve Wise, Felix Marti