Point-Based Multiresolution Splatting for Interactive Volume Visualization
Matthew Fawcett, Undergraduate in Information & Computer Science
Dr. Renato Pajarola, Professor in Information & Computer Science
Abstract
Interactive visualization of large volume data sets is an increasingly important research problem with applications in a variety of domains such as scientific visualization, medical imaging, physical simulations, and sciences such as oceanography, meteorology, or chemistry. Of particular interest are scientific visualization applications in the areas of material simulations, computational fluid dynamics, blast simulations, or atmospheric and oceanographic visualizations.
This research project proposes to develop a hardware-accelerated volume rendering method using a multiresolution point splatting approach. The volume data will be organized in a hierarchical multiresolution data structure that provides adaptive access and rendering at multiple levels-of-detail (LODs). A user-defined transfer function assigns color and opacity values to the scalar field. Volume rendering is performed by projecting the sample kernels onto the image plane and blending the color and opacity values, including depth and visibility attenuation. Hardware acceleration will be achieved by planar integration of the volumetric blending kernel and polygonal rendering of samples using alpha texturing.
Background Information
Volume rendering techniques have developed somewhat differently from traditional rendering techniques. Traditionally, 3-dimensional objects are represented by an approximation of their surfaces, such as a polygon mesh. This means that objects are shown as hollow shells, which for many purposes is perfectly acceptable. However, if you wished to step inside such an object and examine its internal structure, you would see nothing. Volume rendering takes a different approach: instead of storing just the surface of an object, everything inside the object is stored as well. This is accomplished by recording an intensity value (usually the density) for each point in space. Since the composition of the object is now known, it is possible to peel back layers and step inside it.
There are two commonly used techniques for rendering volume information: ray-casting and splatting. Ray-casting involves drawing a ray from the viewpoint through each pixel into the dataset. For each ray, at evenly spaced intervals, color and opacity values are obtained by interpolation and then merged to form the final color of the pixel. Splatting involves projecting volume data units (voxels) onto the 2D viewing plane. These projections are typically approximated by Gaussian splats, which depend on the color and opacity values of the voxels. A projection is made for every voxel, and the final composition of these projections produces the final image. Splatting is generally much faster than ray-casting, but it gains that speed at the sacrifice of some accuracy.
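Both approaches ultimately merge color and opacity samples in depth order with the same compositing rule. As a rough illustration only (not code from this project), a minimal sketch of front-to-back "over" compositing might look like this:

#include <vector>

// One sample along a ray (or one projected splat contribution):
// color plus opacity, as produced by a transfer function.
struct RGBA { float r, g, b, a; };

// Front-to-back "over" compositing: samples are merged in depth order,
// and accumulation can stop early once the pixel is nearly opaque.
RGBA compositeFrontToBack(const std::vector<RGBA>& samples)
{
    RGBA out{0.0f, 0.0f, 0.0f, 0.0f};
    for (const RGBA& s : samples) {
        float w = (1.0f - out.a) * s.a;   // remaining transparency times sample opacity
        out.r += w * s.r;
        out.g += w * s.g;
        out.b += w * s.b;
        out.a += w;
        if (out.a > 0.99f) break;         // early termination
    }
    return out;
}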
Methods
The datasets we obtained were produced by a medical CT scanner, and as such consisted of over a hundred intensity maps, each one a 2-dimensional "slice". These images were read into an octree data structure to enable viewing at different levels of detail, from 2^3 (2x2x2) voxels at the lowest level to 2^24 (256x256x256) voxels at the highest. Each voxel needs to store only its intensity and gradient, which keeps the size of the data structure small, since all other information can be calculated at run time.
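As a rough sketch of the data layout just described (names and types here are illustrative assumptions, not our exact implementation):

#include <cstdint>
#include <memory>

// Per-voxel payload: only the intensity and a quantized gradient are stored;
// color, opacity, and lighting are derived at run time.
struct VoxelSample {
    uint8_t  intensity;   // scalar value from the CT slices
    uint32_t normalIndex; // index into a precomputed table of unit vectors
};

// One octree node: an averaged sample for this level of detail plus
// eight optional children. The root covers the whole volume, and each
// level down doubles the resolution along every axis.
struct OctreeNode {
    VoxelSample sample;
    std::unique_ptr<OctreeNode> children[8];   // null pointers for leaves
};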
An LOD (level-of-detail) scheme is used to ensure that voxels whose projection would be smaller than some threshold (a percentage of the display area) are not drawn; instead, the parent voxel is drawn.
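A minimal sketch of such a screen-size test, under the assumption that the projected size can be estimated from the voxel's world-space size and its distance to the viewer (parameter names are illustrative):

// voxelSize: world-space edge length of the node
// distance:  distance from the eye to the node
// fovScale:  factor converting world size at that distance to a screen fraction
// threshold: minimum screen fraction below which the parent is drawn instead
bool refineNode(float voxelSize, float distance, float fovScale, float threshold)
{
    float projectedFraction = fovScale * voxelSize / distance;
    return projectedFraction > threshold;   // true: descend into the children
}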
Each voxel displayed was represented as a triangle textured with a circular "splat". This triangle was then assigned a color and opacity value based on its intensity. The color and opacity values were calculated through a transfer function which mapped each intensity to an RGBA (red, green, blue, alpha) value. The transfer function can either take RGBA values given by the user for particular intensity values and linearly interpolate between them, or linearly cycle through the RGB color wheel. This is the basic display technique, which is capable of rendering 2^15 triangles at >30 frames per second, or 2^18 triangles at ~3-5 frames per second.
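The interpolating mode of such a transfer function could be sketched as follows; the control-point layout and function names are illustrative assumptions, not taken from our code:

#include <algorithm>
#include <vector>

struct RGBA { float r, g, b, a; };

// A user-supplied control point: an intensity value and the RGBA assigned to it.
struct ControlPoint { float intensity; RGBA color; };

// Linearly interpolate between control points. Assumes at least two points,
// sorted by increasing intensity.
RGBA evaluateTransferFunction(const std::vector<ControlPoint>& points, float intensity)
{
    if (intensity <= points.front().intensity) return points.front().color;
    if (intensity >= points.back().intensity)  return points.back().color;

    // First control point with intensity >= the query value.
    auto hi = std::lower_bound(points.begin(), points.end(), intensity,
        [](const ControlPoint& p, float v) { return p.intensity < v; });
    auto lo = hi - 1;

    float t = (intensity - lo->intensity) / (hi->intensity - lo->intensity);
    return { lo->color.r + t * (hi->color.r - lo->color.r),
             lo->color.g + t * (hi->color.g - lo->color.g),
             lo->color.b + t * (hi->color.b - lo->color.b),
             lo->color.a + t * (hi->color.a - lo->color.a) };
}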
Optimizations
We then used various optimizations to gain speed.
A normal table index (4 bytes) is used to store the gradient vector instead of 3 floating-point numbers (12 bytes). This is accomplished by discretizing the directions around the origin into a finite set of unit vectors that can be indexed into. This allows us to use hardware-accelerated lighting.
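A small sketch of how such a normal table might be built and indexed; the angular resolution and layout here are assumptions for illustration:

#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Discretize the sphere of directions by stepping through two spherical
// angles, so a gradient can be stored as a single index instead of 3 floats.
std::vector<Vec3> buildNormalTable(int steps = 64)
{
    std::vector<Vec3> table;
    table.reserve(static_cast<size_t>(steps) * steps);
    const float pi = 3.14159265f;
    for (int i = 0; i < steps; ++i) {          // polar angle in [0, pi]
        float theta = pi * (i + 0.5f) / steps;
        for (int j = 0; j < steps; ++j) {      // azimuth in [0, 2*pi)
            float phi = 2.0f * pi * j / steps;
            table.push_back({ std::sin(theta) * std::cos(phi),
                              std::sin(theta) * std::sin(phi),
                              std::cos(theta) });
        }
    }
    return table;
}

// Map a normalized gradient to the nearest table entry (largest dot product).
uint32_t quantizeNormal(const std::vector<Vec3>& table, Vec3 n)
{
    uint32_t best = 0;
    float bestDot = -2.0f;
    for (uint32_t i = 0; i < table.size(); ++i) {
        float d = n.x * table[i].x + n.y * table[i].y + n.z * table[i].z;
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return best;
}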
Instead of the color being calculated each time a voxel is drawn, a lookup table is created and updated whenever the parameters of the transfer function change. This means that finding the color for a given intensity is simply an array indexing operation.
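A minimal sketch of this lookup table, assuming 8-bit intensities and any callable transfer function (for example the interpolating transfer function sketched earlier):

#include <array>
#include <cstdint>

struct RGBA { float r, g, b, a; };

// The table is rebuilt only when the transfer function parameters change,
// so finding a voxel's color at draw time is a single array index.
struct ColorLUT {
    std::array<RGBA, 256> table;

    // `transfer` is any callable mapping an intensity in [0, 255] to RGBA.
    template <typename TransferFn>
    void rebuild(TransferFn transfer)
    {
        for (int i = 0; i < 256; ++i)
            table[i] = transfer(static_cast<float>(i));
    }

    const RGBA& lookup(uint8_t intensity) const { return table[intensity]; }
};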
We are currently working on further optimizations in order to reach real-time rendering speeds at higher levels of detail. So far this approach to volume rendering shows promise, and with further research it should become more efficient.
A view of a human skull with the transfer function centered on different intensity values.
Acknowledgements
We would like to thank Dr. Joerg Meyer for his assistance in obtaining
volume data sets, and the Undergraduate Research Opportunities
Program for their continued support.
Our volume rendering technique aims to improve upon traditional splatting by using precomputed splats that are displayed as textured triangles, utilizing the graphics hardware found in modern computers to achieve near real-time, high-resolution volume visualization.
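As an illustration of what precomputing such a splat texture could look like (the kernel shape, texture size, and names are assumptions, and the upload of the resulting alpha texture to the graphics hardware is omitted):

#include <cmath>
#include <vector>

// Build a circular splat footprint as a small alpha texture: a Gaussian
// falloff sampled on a size x size grid, zero outside the unit circle.
std::vector<float> buildSplatFootprint(int size = 32, float sigma = 0.3f)
{
    std::vector<float> alpha(static_cast<size_t>(size) * size);
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            // Texel center mapped to [-1, 1] x [-1, 1].
            float u = 2.0f * (x + 0.5f) / size - 1.0f;
            float v = 2.0f * (y + 0.5f) / size - 1.0f;
            float r2 = u * u + v * v;
            alpha[static_cast<size_t>(y) * size + x] =
                (r2 <= 1.0f) ? std::exp(-r2 / (2.0f * sigma * sigma)) : 0.0f;
        }
    }
    return alpha;
}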
Renderings of the volume at 2^15, 2^18, 2^21, and 2^24 triangles.