
— 11th International Symposium on Flow Visualization, Notre Dame, Indiana, Aug 9-12, 2004 —
Visualizing Time-Varying Three-Dimensional Flow Fields
Using Accelerated UFLIC
Zhanping Liu, PhD
Robert J. Moorhead II, PhD
Visualization Analysis & Imaging Lab
ERC / GeoResources Institute
Mississippi State University
Outline
 Introduction
 Flow visualization
 LIC (Line Integral Convolution)
 UFLIC (Unsteady Flow LIC)
 AUFLIC (Accelerated UFLIC)
 Overview
 Flow-driven seeding strategy
 Dynamic seeding controller
 VAUFLIC (Volume AUFLIC)
 Volume AUFLIC
 VAUFLIC rendering
 Results
 Conclusions
Introduction — Flow Visualization
 Major Challenges
 In-depth perception of directions
 Increasingly large-scale data sizes
 Flows defined on complex grids
 Computational performance
 Time-varying flows
 Feature extraction & tracking
 Available Methods

Graphics-based methods achieve local, discrete, coarse, or
cluttered representations using various graphical primitives:
arrow plots, streamlines, pathlines, timelines, streaklines, particle tracing,
surface particles, stream ribbons, stream polygons, stream surfaces, stream
arrows, stream tubes, stream balls, flow volumes, and topological analysis

Texture / image-based methods employ texture synthesis and image
processing to provide global, continuous, dense, visually pleasing representations:
Spot Noise, LIC (Line Integral Convolution), UFLIC (Unsteady Flow LIC), HATA
(Hardware-Accelerated Texture Advection), IBFV (Image-Based Flow
Visualization), LEA (Lagrangian-Eulerian Advection), UFAC (Unsteady Flow
Advection-Convolution), and their variations
Introduction — LIC
A LIC image color-mapped with the velocity magnitude (blue: lowest; red: highest)
Introduction — LIC
 Basic Idea

LIC (Line Integral Convolution) was presented by Brian Cabral and
Casey Leedom at the ACM SIGGRAPH '93 conference

LIC convolves an input noise texture using a low-pass filter along pixel-centered,
symmetrically bi-directional streamlines to exploit spatial correlation in the flow direction

LIC synthesizes an image that provides a global dense representation of
the flow, analogous to the resulting pattern of wind-blown sand

flow field (wind)
noise texture (fine sand)
LIC image (pattern)
Introduction — LIC
( () )
 Pipeline
( + d)
a point in the flow field,
the counterpart of a pixel
in the LIC image
()
d
the correlated pixels
along the streamline
index the input noise
for the texture values
d( () ) / d  = ( () )
( + d) = () +  ( () ) d 
compute the target
pixel value in the LIC
image by convolution
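As a concrete illustration of this pipeline, below is a minimal LIC sketch in Python. The regular grid, the Euler streamline stepping, and the box (averaging) kernel are simplifying assumptions for illustration, not the exact scheme of the original LIC paper.

# A minimal LIC sketch: convolve a noise texture along symmetric,
# bi-directional streamlines of a steady 2D vector field (vx, vy).
import numpy as np

def lic(vx, vy, noise, kernel_half_len=10, step=0.5):
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for py in range(h):
        for px in range(w):
            total, count = 0.0, 0
            for direction in (+1.0, -1.0):          # symmetric, bi-directional
                x, y = px + 0.5, py + 0.5
                for _ in range(kernel_half_len):
                    i = int(np.clip(y, 0, h - 1))
                    j = int(np.clip(x, 0, w - 1))
                    total += noise[i, j]             # index the input noise
                    count += 1
                    mag = np.hypot(vx[i, j], vy[i, j])
                    if mag < 1e-6:
                        break
                    # Euler step along the (normalized) flow direction
                    x += direction * step * vx[i, j] / mag
                    y += direction * step * vy[i, j] / mag
            out[py, px] = total / count              # box-filter convolution
    return out

# Example: a circular flow field convolved with white noise
n = 128
ys, xs = np.mgrid[0:n, 0:n]
vx, vy = -(ys - n / 2.0), (xs - n / 2.0)
image = lic(vx, vy, np.random.rand(n, n))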
Introduction — LIC
 Animation — successively shifting the phase of a periodic convolution kernel
VolumeLIC = 3D LIC + Volume Rendering: visualizing steady volume flows
Introduction — UFLIC
 Motivation
 LIC is an image-space oriented, Eulerian-based texture synthesis technique
— given a pixel in the output image, locate the correlated pixels and accept
the contributions / properties
 LIC can only be used to visualize steady flow fields since animating a sequence of
LIC frames of a time-varying flow fails to maintain temporal coherence
 Particle-space oriented, Lagrangian-based texture synthesis techniques are needed instead
— given a particle at a time step, locate the downstream points (i.e., pixels in subsequent
frames) where it leaves its footprint (scatters its contribution / property) over time
 UFLIC
 Unsteady Flow LIC
by Han-Wei Shen & David L. Kao
in IEEE Visualization'97
& IEEE TVCG Vol.4, No.2, 1998
 High spatial coherence
 Strong temporal coherence
www.erc.msstate.edu/~zhanping/Research/FlowVis/AUFLIC/Comparison.html
Introduction — UFLIC
 Components

Time-Accurate Value Scattering Scheme
 At each time step, a Scattering Process (SCAP) occurs in which a seed is released
from each pixel as a contributor to scatter its texture value to the downstream
pixels along the advected pathline within its life span (usually several time steps)
 Value scattering to “downstream pixels” correlates both intra-frame pixels and
inter-frame pixels to establish high temporal-spatial coherence

Per-Pixel Value Accumulation-Convolution
As a receiver, each pixel keeps several stamped buckets to accumulate deposited
values; a frame is obtained by convolving the values that each pixel has received
and stored in the bucket stamped with the frame index

Successive Texture Feed-Forward Strategy
Besides being output, each synthesized frame is high-pass filtered with noise jittering
and taken as the input texture for the next SCAP to enhance inter-frame coherence
Introduction — UFLIC
 Pipeline
 input texture (white noise initially, the fed-forward frame afterwards) and the vector data buffer (loaded from disk files)
 time-accurate value scattering process (SCAP): release a seed from each pixel center; advect the pathline to the next pixel; if within the life span, scatter the seed value and accumulate it to the receiver's buckets
 convolve, for each pixel, the values in the bucket with stamp t
 noise-jittered high-pass filtering yields frame t
 feed the texture forward; t = t + 1
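Below is a minimal Python sketch of the scattering and accumulation-convolution stages above, assuming a user-supplied velocity_at(x, y, t) sampler and simple fixed-step Euler advection; the bucket layout, the names, and the omission of high-pass filtering and texture feed-forward are illustrative simplifications, not UFLIC's actual implementation.

# A sketch of UFLIC-style time-accurate value scattering plus per-pixel
# accumulation-convolution for a time-varying 2D field.
import numpy as np

LIFE_SPAN = 4  # time steps a seed keeps scattering

def scap(input_tex, velocity_at, t0, substeps=8):
    """One scattering process (SCAP) starting at integer time step t0."""
    h, w = input_tex.shape
    # buckets[i][j] maps frame stamp -> (accumulated value, accumulated weight)
    buckets = [[{} for _ in range(w)] for _ in range(h)]
    for py in range(h):
        for px in range(w):
            value = input_tex[py, px]                # the seed's contribution
            x, y, t = px + 0.5, py + 0.5, float(t0)  # released at the pixel center
            dt = 1.0 / substeps
            while t < t0 + LIFE_SPAN:
                vx, vy = velocity_at(x, y, t)        # sample the unsteady field
                x, y, t = x + vx * dt, y + vy * dt, t + dt
                i, j = int(y), int(x)
                if not (0 <= i < h and 0 <= j < w):
                    break
                stamp = int(np.ceil(t))              # frame this deposit belongs to
                acc, wgt = buckets[i][j].get(stamp, (0.0, 0.0))
                buckets[i][j][stamp] = (acc + value * dt, wgt + dt)
    return buckets

def convolve_frame(buckets, stamp, h, w):
    """Per-pixel accumulation-convolution: average the deposits stamped 'stamp'."""
    frame = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            acc, wgt = buckets[i][j].get(stamp, (0.0, 0.0))
            frame[i, j] = acc / wgt if wgt > 0 else 0.0
    return frame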
[Diagram: the input textures for SCAP k and SCAP k + 1; the pathline advected by seed A (released from a pixel center) in SCAP k; the pathline advected by seed B (released from a pixel center) in SCAP k + 1; P, the point through which seed A passes at exactly time step k + 1; Q, the point through which seed A passes at a small fractional time past time step k + 1. Seed B can be released either from point P at exactly time step k + 1, or from a loosely specified point Q at a small fractional time past time step k + 1, so that seed A's pathline is reused in SCAP k + 1. To reuse pathlines, a temporally-spatially flexible seeding strategy is required. The receiver pixels of seed A in SCAP k, the receiver pixels of seed B in SCAP k + 1, and their common receiver pixels are stamped over a life span of 4 time steps (k + 1 through k + 4).]
AUFLIC — Overview
 Motivation
 Problem: low computational performance
 Reason: redundant pathline advection
 Solution: as-many-as-possible pathline reuse
 AUFLIC — Accelerated UFLIC
 Targets the bottleneck: the value scattering process (SCAP)
 Exploits the correlations: intra-SCAP and inter-SCAP
 Employs a flexible seeding strategy instead of the conservative one
 Spatial flexibility: a seed may not necessarily be released exactly from a pixel
center as long as the seed is within the pixel
 Temporal flexibility: a seed may not necessarily be released exactly at an integer
time step; instead at a fractional time shortly after the SCAP begins

 Places seeds along available pathlines: flow structures are taken into account
 Reuses as many pathlines as possible: as few pathlines are advected as possible
 Maintains a dense scattering coverage for high temporal-spatial coherence
AUFLIC — Overview
 AUFLIC vs. UFLIC

 Comparison Item                UFLIC                                       AUFLIC
 Pathline Integration
   pathline integrator          Euler (first-order)                         fourth-order Runge-Kutta (RK4)
   step size                    line-segment clamp against pixels           adaptive step size
   error control                none                                        embedded Runge-Kutta formulae
   numerical accuracy           first-order                                 second-order
   overall efficiency           slow and inaccurate                         fast and accurate
 Flow-Driven Seeding Strategy
   temporal flexibility         none (always at an integer time step)       within a fractional time past a time step
   spatial flexibility          none (always from a pixel center)           an arbitrary position within a pixel
   flow-aligned                 no (a lattice pattern)                      yes (seeds released along pathlines)
 SCAP Correlation
   intra-SCAP                   ignored                                     copy and truncate pathlines
   inter-SCAP                   ignored                                     reuse and extend pathlines
 Dynamic Seeding Controller
   seeding order                left-to-right, top-to-bottom                roughly left-to-right, top-to-bottom
   flow structures              ignored                                     seeding adapted to flow structures
   dense scattering             yes                                         yes
 Line Convolution
   even line sampling           none                                        by cubic Hermite polynomial
   accumulation                 line-segment lengths as accum-weights       collected values are averaged
 Result
   pathline integration         a large amount                              substantially reduced
   performance                  low                                         high (one order of magnitude faster)
   quality                      high temporal-spatial coherence             high temporal-spatial coherence
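For the "even line sampling by cubic Hermite polynomial" entry above, here is a minimal Python sketch of cubic Hermite interpolation between two pathline samples; the sample layout and the tangent scaling are assumptions for illustration. AUFLIC uses this kind of interpolation to resample a pathline at spatially even intervals so that convolution reduces to simple averaging.

# Cubic Hermite interpolation between two pathline samples, each storing a
# position and a tangent (velocity scaled by the sample spacing).
def hermite(p0, m0, p1, m1, u):
    """Interpolate between samples (p0, m0) and (p1, m1) at parameter u in [0, 1]."""
    h00 = 2 * u**3 - 3 * u**2 + 1          # Hermite basis functions
    h10 = u**3 - 2 * u**2 + u
    h01 = -2 * u**3 + 3 * u**2
    h11 = u**3 - u**2
    return tuple(h00 * a + h10 * b + h01 * c + h11 * d
                 for a, b, c, d in zip(p0, m0, p1, m1))

# Example: the midpoint between two 2D samples one integration step apart
print(hermite((0.0, 0.0), (1.0, 0.0), (1.0, 0.5), (1.0, 1.0), 0.5))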
AUFLIC — Flow-Driven Seeding Strategy
 Key Points
 Spatial flexibility: an arbitrary position within a pixel (not necessarily the center)
 Temporal flexibility: within a small fractional time shortly after the SCAP begins
 Spatially even pathline sampling: convolution is simplified to averaging
 Fourth-order Runge-Kutta integrator with adaptive step size and error control
 Cubic Hermite polynomial interpolation
 If possible, seeds (S) are released at sample points along an available pathline
(advected by seed S0) at the same time as S0 passes through these sample points
 only a small number of seeds need to actually advect pathlines
 a significant number of seeds obtain pathlines by pathline copying & pathline reuse
 Pathline copying: an intra-SCAP operation; S and S0 are released in the same SCAP
 Pathline reuse: an inter-SCAP operation; S and S0 are released in different SCAPs
 A relatively continuous, flow-structure-based seeding strategy replaces the
intermittent (only at integer time steps), pixel-center-based scheme of UFLIC
 High temporal-spatial coherence is maintained thanks to a still dense value
scattering coverage (see the code sketch after the diagram below)
[Diagram: in SCAP k, seed A advects a pathline; seeds B and C copy & truncate it, and a truncated part (for B) is saved in a pathline list. In SCAP k + 1, seed D reuses the saved pathline from the list and extends it, and seeds E and F copy & truncate it in turn. Receiver pixels are stamped over each seed's life span of 4 time steps (time steps k through k + 5).]
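As referenced above, here is a minimal Python sketch of the seeding idea: new seeds are released at sample points of an already available pathline and simply copy its downstream (truncated) part instead of advecting their own. The pathline representation (a list of (x, y, t) samples), the closed_pixels set, and the sampling stride are illustrative assumptions, not AUFLIC's actual data structures.

# Release new seeds along an available pathline; each copies & truncates the
# pathline from its release point rather than re-advecting it.
def copy_and_truncate(pathline, sample_index):
    """pathline: list of (x, y, t) samples; return the downstream part for a new seed."""
    return pathline[sample_index:]

def release_seeds_along(pathline, closed_pixels, every=3):
    """Release a new seed at every 'every'-th sample whose pixel is still open;
    closed_pixels is a set of (row, col) pixels that already released a seed."""
    new_seeds = []
    for i in range(0, len(pathline), every):
        x, y, _t = pathline[i]
        pixel = (int(y), int(x))
        if pixel not in closed_pixels:            # at most one seed per pixel per SCAP
            closed_pixels.add(pixel)
            new_seeds.append(copy_and_truncate(pathline, i))
    return new_seeds

# Example: seed A's pathline (advected elsewhere) spawns seeds by copying
path_A = [(10.0 + 0.4 * k, 20.0 + 0.2 * k, k / 8.0) for k in range(32)]
print(len(release_seeds_along(path_A, set())))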
AUFLIC — Dynamic Seeding Controller
 Problem
As pathlines are advected, copied & truncated, saved, and reused & extended
over SCAPs, there may be an undesirable seeding pattern in a SCAP
 Seeding redundancy: many seeds released from the same pixel
 unnecessary, blurs the image
 increases storage overhead & degrades acceleration efficiency
 Seeding vacancy: no seeds released from some pixels, e.g., in diverging regions
 value scattering disabled & features missed
 artifacts introduced
 Controller
governs the seed distribution in a SCAP by determining, for each pixel, whether a pathline
is advected, reused & extended, copied & truncated, saved for the next SCAP, or deleted
 An adaptive, global, organized control over seed placement:
prevents redundant pathline copying or reuse while maintaining dense scattering
 A balance between pathline reuse and advection in each SCAP:
computational fluctuation is suppressed to obtain a nearly constant frame rate
AUFLIC — Dynamic Seeding Controller
 Pixel State
To ensure that no more than one seed is released from a pixel in a SCAP
 open: no seed has yet been released from the pixel, allowing a seed release
 close: a seed has already been released from the pixel, blocking further seed releases
 Two Arrays
 Current[1 … M, 1 … N]: for pixels in the current SCAP;
checks whether a pixel still allows a seed release in the current SCAP
 Next[1 … M, 1 … N]: for pixels in the next SCAP;
checks whether a pixel allows a given pathline to be saved in the current SCAP so that it
can be reused from the pixel in the next SCAP
 Dynamic Update
 Initialization: Next[1 … M, 1 … N] = open
 When each SCAP begins: Current[1 … M, 1 … N] = Next[1 … M, 1 … N]
 During each SCAP: the two arrays are dynamically updated
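A minimal Python sketch of this per-pixel bookkeeping, assuming an M × N image; the open/close states and the update points follow the slide, while the class layout and the reopening of Next at the start of each SCAP are assumptions for illustration.

# Per-pixel seeding state for the dynamic seeding controller.
import numpy as np

OPEN, CLOSE = 0, 1

class SeedingController:
    def __init__(self, m, n):
        # 'current' guards seed releases in the current SCAP,
        # 'next_state' guards pathlines saved for reuse in the next SCAP.
        self.next_state = np.full((m, n), OPEN, dtype=np.uint8)   # initialization
        self.current = np.full((m, n), OPEN, dtype=np.uint8)

    def begin_scap(self):
        # when each SCAP begins: Current <- Next, then Next is reopened
        # (the reopening is an assumption; the slide only shows the copy)
        self.current = self.next_state.copy()
        self.next_state.fill(OPEN)

    def try_release(self, i, j):
        """Allow at most one seed release per pixel in the current SCAP."""
        if self.current[i, j] == OPEN:
            self.current[i, j] = CLOSE
            return True
        return False

    def try_save_for_reuse(self, i, j):
        """Allow at most one saved pathline per pixel for the next SCAP."""
        if self.next_state[i, j] == OPEN:
            self.next_state[i, j] = CLOSE
            return True
        return False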
AUFLIC achieves near-interactive flow visualization (frame
generation) with up to 160k particles in time-varying flow
fields (left: 397 × 397 data points & 101 time steps at 1.2
FPS; right: 576 × 291 data points & 41 time steps at 1.0 FPS)
on an SGI Onyx2 (four 400 MHz MIPS R12000 / 4 GB RAM)
VAUFLIC — Volume AUFLIC
 Texture-Based Volume Flow Visualization
 Limited to steady flows
 “Visualizing Vector Fields using Line Integral Convolution and Dye Advection”
— Han-Wei Shen et al, IEEE Symposium on Volume Visualization 96
 “Strategies for Effectively Visualizing 3D Flow with Volume LIC”
— Victoria Interrante & Chester Grosch, IEEE Visualization 97
 “Interactive Exploration of Volume LIC Based on 3D-Texture Mapping”
— C. Rezk-Salama et al, IEEE Visualization 99
 “3D IBFV: Hardware-Accelerated 3D Flow Visualization”
— Alexandru Telea & Jarke J. van Wijk, IEEE Visualization 03
 Dependent on special-purpose hardware
 “Hardware-Accelerated Visualization of Time-Varying 2D and 3D Vector Fields by
Texture Advection via Programmable Per-pixel Operations”
— D. Weiskopf et al, International Workshop on Vision, Modeling&Visualization 01
— the only publication on texture-based time-varying volume visualization
— dependent on per-pixel operations that are not supported by ordinary graphics cards
 An open problem
intensive computation, temporal-spatial coherence, rendering of 3D flow textures
VAUFLIC — Volume AUFLIC
 Volume AUFLIC
 The first hardware-independent texture-based time-varying volume flow
visualization method
 Extension of AUFLIC to time-varying volume flows
 2D vectors → 3D vectors
 2D input (noise) textures → 3D input (noise) textures
 output pixels → output voxels
 The flow-driven seeding strategy and the dynamic seeding controller
work in the same way as in AUFLIC
 Volume AUFLIC is 5 times faster than brute-force volume UFLIC
 Small memory footprint for large-scale time-varying volume flow vis
 Application of volume rendering to view the output volumetric textures
VAUFLIC — VAUFLIC Rendering
 Volume Rendering
 Without constructing intermediate geometric representations (e.g., triangles)
to fit iso-surfaces through the volume
 Operating directly on voxels by using a light absorption-transmission model
and a transfer function to assign colors & opacities to voxels that are then
composited along view directions
 Well suited to investigating the distribution of a physical property (e.g.,
density, velocity magnitude, vorticity, temperature, pressure, precipitation)
within a dense volume, representing amorphous transparent gel-like objects
such as clouds and smoke that are too complicated to be either geometrically
modeled or effectively rendered using extracted iso-surfaces
 Available techniques
 ray casting
 ray tracing
 splatting
 shear-warp
 hardware-based texture mapping
 Ray Casting
 Backward mapping: rays are cast from the viewer through the pixels backward into
the volume (object)
 Sampling & interpolation: the original signal is reconstructed by (e.g., tri-linear)
interpolation and then (in most cases evenly) sampled along the ray to find the
contributions affecting the pixel
 Compositing samples: samples are assigned color & opacity values (RGBAs) by a
transfer function and then composited from front to back to sum the opacity-weighted
colors until the opacity accumulates to 1
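A minimal Python sketch of the front-to-back compositing step, assuming the ray has already been sampled and each sample mapped to an RGBA tuple by a transfer function; the early-termination threshold is an illustrative choice.

# Front-to-back compositing along one cast ray.
def composite_front_to_back(samples, opacity_cutoff=0.99):
    """samples: iterable of (r, g, b, a) ordered from the nearest to the farthest."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for r, g, b, a in samples:
        weight = (1.0 - alpha) * a          # remaining transparency times sample opacity
        color[0] += weight * r              # accumulate opacity-weighted colors
        color[1] += weight * g
        color[2] += weight * b
        alpha += weight
        if alpha >= opacity_cutoff:         # early ray termination
            break
    return color, alpha

# Example: two semi-transparent samples along a ray
print(composite_front_to_back([(1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 0.5)]))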
VAUFLIC — VAUFLIC Rendering
 Problems
Although volume rendering is well established in medical data visualization
 Ad hoc in rendering dense volume flow textures
 degraded visual perception of 3D flow directions in dense volume
 poor depth cueing
 occluded interior flow structures
 Lack of any physical meaning of a texture value
 the histogram of a VAUFLIC texture, nearly a horizontal line, offers no guidance
for transfer function design, since it does not convey the kind of information provided by
the histogram of a medical dataset, which can be used to distinguish between bones and
soft tissues
VAUFLIC — VAUFLIC Rendering
 Magnitude-Based Color-Opacity Mapping
 VAUFLIC texture values are used to compute gradients used as normals in
Phong shading
 The velocity magnitude is used to guide color & opacity mapping in the
transfer function design to enhance or suppress certain parts of the flow
texture volume
[Rendering pipeline diagram: gradients (normals) computed from the VAUFLIC texture feed Phong shading to give the voxel intensity (grey); the magnitude volume feeds the transfer function to give the color hue (r, g, b) and voxel opacity (A); together they form the voxel color (R, G, B), which ray casting composites into the 2D color image.]
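A minimal Python sketch of this mapping, assuming a VAUFLIC flow-texture volume tex and a velocity-magnitude volume mag of the same shape; the gradient-based shading and the hue/opacity ramps are illustrative placeholders, not the transfer functions used for the presented results.

# Magnitude-based color-opacity mapping for a flow-texture volume.
import numpy as np

def shade_voxels(tex, mag):
    # gradients of the flow texture serve as normals for simple Phong-style shading
    gz, gy, gx = np.gradient(tex.astype(float))
    norm = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-8
    light = np.array([0.577, 0.577, 0.577])                 # one directional light
    intensity = np.abs(gx * light[0] + gy * light[1] + gz * light[2]) / norm

    # velocity magnitude drives the transfer function (hue and opacity)
    m = (mag - mag.min()) / (mag.max() - mag.min() + 1e-8)
    hue = np.stack([m, 1.0 - np.abs(2 * m - 1.0), 1.0 - m], axis=-1)  # blue->green->red ramp
    opacity = m**2                                           # emphasize high-magnitude regions

    rgb = intensity[..., None] * hue                         # voxel color (R, G, B)
    return rgb, opacity                                      # fed to the ray caster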
VAUFLIC — Results
 Time-Varying Volume Flow Dataset
 41 time steps
 144  73  81 data points
 Transfer Function
[Transfer function editor: magnitude histogram, opacity curve, VAUFLIC texture histogram, color mapping bar, opacity mapping bar]
 High Temporal-Spatial Coherence
Flow directions and interior flow structures can be clearly revealed in images
by tuning the magnitude-based transfer function while the flow evolution is
shown by means of a smooth animation
Conclusions
 UFLIC employs a time-accurate value scattering scheme & a successive texture
feed-forward strategy to achieve very high temporal-spatial coherence in
visualizing 2D unsteady flows
 AUFLIC adopts a flow-driven seeding strategy & a dynamic seeding controller
to reuse pathlines in the computationally expensive value scattering process of
UFLIC to achieve a one-order-of-magnitude acceleration, i.e., near-interactive (1.0
FPS) visualization with up to 160k particles in time-varying 2D flows without
temporal-spatial coherence degradation
 VAUFLIC is the extension of AUFLIC to texture-based time-varying volume
flow visualization, so far the first hardware-independent solution of its kind
 Magnitude-based color-opacity mapping is used in transfer function design for
effective volume rendering of VAUFLIC flow textures to reveal interior flow
structures and the flow evolution
Conclusions
 Future Work
 To enhance VAUFLIC by using shorter pathlines while maintaining high
temporal-spatial coherence
 To improve volume rendering of 3D flow textures by, e.g., using ROI
masking, clipping planes, and 3D halos
 Acknowledgments
— DoD HPCMP program & NSF EPS-0132618
 Key References
 Han-Wei Shen and David L. Kao, “A New Line Integral Convolution
Algorithm for Visualizing Time-Varying Flow Fields,” IEEE Transactions
on Visualization and Computer Graphics, Vol. 4, No. 2, pp. 98-108, 1998
 Zhanping Liu and Robert J. Moorhead, “Accelerated Unsteady Flow Line
Integral Convolution,” IEEE Transactions on Visualization and Computer
Graphics, 2004 (accepted to appear)
 URL
http://www.erc.msstate.edu/~zhanping/Research/FlowVis/FlowVis.htm