Fast Depth-of-Field Rendering with Surface Splatting


Jaroslav Křivánek (CTU Prague; IRISA – INRIA Rennes)
Jiří Žára (CTU Prague)
Kadi Bouatouch (IRISA – INRIA Rennes)
Computer Graphics Group
Goal
• Depth-of-field rendering with point-based objects
• Why point-based?
  – Efficient for complex objects
• Why depth-of-field?
  – Natural-looking images
Overview
• Introduction
  – Point-based rendering
  – Depth-of-field
• Depth-of-field techniques
• Our contribution: point-based depth-of-field rendering
  – Basic approach
  – Extended method: depth-of-field with level of detail
• Results
• Discussion
• Conclusions
Point-based rendering
• Object represented by points without connectivity
• Point (surfel): position, normal, radius, material
• Rendering = screen-space surface reconstruction
• Efficient for very complex objects
Depth-of-Field
• More natural-looking images
• Important depth cue for perceiving the scene configuration
• Draws attention to the focused objects
Thin Lens Camera Model
[Diagram: a point P at depth D projects through the lens onto the image plane; a point off the focal plane maps to a circle of confusion (CoC) of diameter C]

C = f ( F, F/n, D, P )

F ...... focal length
F/n ... lens (aperture) diameter
P ...... focal plane distance
D ...... point depth
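The slide leaves the CoC function f abstract; the standard closed form for the thin-lens model (derived from 1/F = 1/D + 1/V_D, not stated on the slide) can be sketched as follows. The function name and parameter names are mine:

```python
def coc_diameter(F, A, P, D):
    """Thin-lens circle-of-confusion diameter on the image plane.

    F: focal length, A: lens (aperture) diameter = F/n,
    P: focal plane distance, D: point depth (all in the same units).
    """
    # Standard thin-lens result: C = A * F * |D - P| / (D * (P - F))
    return A * F * abs(D - P) / (D * (P - F))

# A point on the focal plane has zero blur:
print(coc_diameter(F=0.05, A=0.025, P=2.0, D=2.0))   # 0.0
# Blur grows as the point moves away from the focal plane:
print(coc_diameter(F=0.05, A=0.025, P=2.0, D=4.0))
```

The CoC diameter vanishes exactly at D = P and grows with aperture diameter and with distance from the focal plane, which is the behavior the diagram illustrates.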
Depth-of-Field Techniques in CG
• Supersampling
  – Distributed ray tracing [Cook et al. 1984]
  – Sample the light paths through the lens
• Multisampling [Haeberli & Akeley 1990]
  – Render several images from different viewpoints on the lens
  – Average the resulting images using the accumulation buffer
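The multisampling scheme above reduces to averaging renders taken from sample points on the lens aperture. A minimal software sketch (the `render` callback and the toy renderer are hypothetical stand-ins; the original uses the hardware accumulation buffer):

```python
import numpy as np

def multisample_dof(render, lens_samples):
    """Average images rendered from several viewpoints on the lens.

    `render(lens_point)` is a hypothetical renderer returning an
    H x W x 3 float image for a given sample point on the aperture.
    """
    acc = None  # software stand-in for the accumulation buffer
    for p in lens_samples:
        img = render(p)
        acc = img if acc is None else acc + img
    return acc / len(lens_samples)

# Toy "renderer" whose output depends on the lens sample point:
fake = lambda p: np.full((2, 2, 3), p[0])
out = multisample_dof(fake, [(0.0, 0.0), (1.0, 0.0)])
print(out[0, 0, 0])   # 0.5
```

With enough lens samples this converges to the correct thin-lens image, but each sample costs a full render pass.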
Depth-of-Field Techniques in CG
• Post-filtering [Potmesil & Chakravarty 1981]
  – Out-of-focus pixels are spread into their CoC
  – Artifacts: intensity leakage, hypo-intensities
  – Slow for larger filter kernels

[Pipeline: image synthesizer → image + depth → focus processor (filtering) → image with DOF]
Point-based rendering – splatting
• Draw each point as a fuzzy splat (an ellipse in screen space)
• The image is the sum of all splats:

  Image = Σᵢ SPLATᵢ
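The splat sum above can be sketched in a few lines. This is an illustrative toy, not the paper's renderer: it ignores depth/visibility and uses isotropic Gaussian splats with per-pixel weight normalization, as in EWA-style splatting:

```python
import numpy as np

def splat_image(points, H, W):
    """Accumulate fuzzy (Gaussian) splats: Image = sum_i SPLAT_i.

    Each point is (x, y, radius, color); weights are normalized so
    overlapping splats blend rather than over-brighten.
    """
    acc = np.zeros((H, W, 3))
    wsum = np.zeros((H, W))
    ys, xs = np.mgrid[0:H, 0:W]
    for (x, y, r, color) in points:
        w = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * r * r))
        acc += w[..., None] * np.asarray(color)
        wsum += w
    return acc / np.maximum(wsum, 1e-8)[..., None]

img = splat_image([(4.0, 4.0, 2.0, (1.0, 0.0, 0.0))], 8, 8)
print(img[4, 4])   # ~[1, 0, 0] at the splat center
```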
Our Basic Approach
• Naive post-filtering:

  Image = Σᵢ SPLATᵢ → (image + depth) → focus processor (filtering) → image with DOF

• Our approach: swap the sum Σ and the focus filtering – blur each splat individually, then sum:

  (SPLATᵢ → focus filtering) + (SPLATⱼ → focus filtering) + (SPLATₖ → focus filtering) + … → image with DOF
Our Basic Approach
[Figure: object space → screen space]
• In screen space, a splat is a reconstruction kernel r
• Convolve each splat with its DOF filter G_QDOF
• The result is a blurred reconstruction kernel:

  r_DOF = r ⊗ G_QDOF
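For Gaussian kernels the convolution r ⊗ G_QDOF has a closed form: the covariances simply add, so blurring a Gaussian splat only widens its ellipse. A 1-D numerical check of this variance-addition property (illustrative, not from the slides):

```python
import numpy as np

x = np.arange(-50, 51, dtype=float)
g = lambda s: np.exp(-x**2 / (2 * s * s)) / (s * np.sqrt(2 * np.pi))

s_r, s_dof = 2.0, 3.0                 # reconstruction kernel / DOF filter widths
blurred = np.convolve(g(s_r), g(s_dof), mode="same")
blurred /= blurred.sum()              # renormalize the discrete kernel

var = np.sum(x**2 * blurred)          # variance of the convolved kernel
print(round(var, 2))                  # ≈ s_r**2 + s_dof**2 = 13.0
```

This is what makes the per-splat blurring cheap: no actual screen-space convolution is needed, only a change of the splat's Gaussian parameters.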
Properties of our basic approach
PROS
+ Avoids intensity leakage
  – Reconstruction takes the splat depth into account
+ No hypo-intensities
  – Visibility is resolved after blurring
+ Handles transparency
  – In the same way as EWA splatting (A-buffer)
CONS
- Very slow, especially for large apertures
  – Many large overlapping splats
  – High number of fragments
    • E.g. Lion: no blur 2.3 M fragments; with blur 90.2 M (40× more)
Our Extended Method
• Use Level of Detail (LOD) to attack the complexity
• Blur = loss of detail
• Select a lower LOD for the blurred parts
  [Figures: blurred image vs. selected LOD]
• The number of fragments increases much more slowly
  – E.g. Lion: no blur 2.3 M fragments; with blur 5.3 M (2.3× more)
Observation
• Selecting a lower LOD for rendering is equivalent to
  1) selecting the fine LOD, then
  2) low-pass filtering in screen space
  [Figures: fine LOD vs. lower LOD]
• Idea: use LOD as a means for blurring, not only to reduce complexity
Effect of LOD Selection
• How do we quantify the effect of LOD selection in terms of blur in the resulting image?
• We use a bounding sphere hierarchy
  – QSplat [Rusinkiewicz & Levoy 2000]
Bounding Sphere Hierarchy
• Building a hierarchy level = low-pass filtering + subsampling
  – The finest level: L = 0; lower levels: L = 1, 2, …
  – Subsample, centering the filter G_QL on the kept points
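The low-pass + subsample step can be sketched as pairwise sphere merging. This is a toy illustration under my own assumptions, not the actual QSplat construction (which builds a full tree and also averages normals and colors):

```python
import numpy as np

def coarsen(points):
    """One level of low-pass filtering + subsampling (toy sketch).

    `points` is a list of (position, radius) splats at level L; each
    pair is merged into a parent splat at level L+1 whose bounding
    sphere covers both children.
    """
    out = []
    for i in range(0, len(points) - 1, 2):
        (p0, r0), (p1, r1) = points[i], points[i + 1]
        c = (p0 + p1) / 2.0                       # low-pass: average position
        r = max(np.linalg.norm(p0 - c) + r0,      # bound both children
                np.linalg.norm(p1 - c) + r1)
        out.append((c, r))
    return out

level0 = [(np.array([0.0, 0.0]), 1.0), (np.array([2.0, 0.0]), 1.0)]
level1 = coarsen(level0)
print(level1[0][1])   # parent radius 2.0 covers both unit spheres
```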
LOD Filter in Screen Space
• G_QL is defined in local coordinates in object space
• G_QL is related to screen space by the local affine approximation J of the object-to-screen transform
• Selecting level L = filtering in screen space by G_{J QL Jᵀ}
  [Figure: G_QL in object space maps to G_{J QL Jᵀ} in screen space]
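The covariance mapping on this slide is a one-liner: pushing a Gaussian with covariance Q_L through an affine map with Jacobian J yields a Gaussian with covariance J Q_L Jᵀ. A minimal sketch with made-up numbers:

```python
import numpy as np

# An isotropic object-space LOD filter...
Q_L = np.diag([1.0, 1.0])
# ...seen through an anisotropic local affine approximation J
# (2x magnification in x, 0.5x in y; values are illustrative):
J = np.array([[2.0, 0.0],
              [0.0, 0.5]])

# Screen-space covariance of the selected LOD's implicit filter:
Q_screen = J @ Q_L @ J.T
print(np.diag(Q_screen))   # [4.   0.25]
```

The filter's screen-space support thus depends on both the chosen level L and the local viewing geometry, which is exactly what the LOD selection in the next slide has to account for.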
DOF with LOD – Algorithm
Given the required screen space filter G_QDOF:
1. Select the LOD level L such that
   support( G_{J QL Jᵀ} ) ≤ support( r ⊗ G_QDOF )
2. Apply an additional screen space filter G_QDIFF to make up the difference:

   r_DOF = r ⊗ G_QDOF = [ r ⊗ G_{J QL Jᵀ} ] ⊗ G_QDIFF
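The two steps above can be sketched in 1-D. This is my own isotropic simplification (variances in place of full covariance matrices): pick the coarsest level whose implicit screen-space blur still fits inside the target, then, since Gaussian convolution adds variances, the residual filter G_QDIFF needs exactly the leftover variance:

```python
def select_lod(levels, target_var):
    """Pick the coarsest level whose screen-space filter fits the target.

    `levels[L]` is the variance of the screen-space LOD filter
    J Q_L J^T at level L (growing with L); `target_var` is the
    variance of r ⊗ G_QDOF.
    """
    L = 0
    for i, v in enumerate(levels):
        if v <= target_var:
            L = i                      # coarsest level still within budget
    diff_var = target_var - levels[L]  # variance of the extra filter G_QDIFF
    return L, diff_var

L, diff = select_lod([0.5, 2.0, 8.0], target_var=5.0)
print(L, diff)   # 1 3.0
```

Step 1 keeps the fragment count low; step 2 keeps the total blur exact regardless of which level was available.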
Results
No Depth-of-Field – everything in focus
Results
Transparent mask in focus, male figure out of focus
Results
Male figure in focus, transparent mask out of focus
Results
Our algorithm vs. reference solution (multisampling)
• Our blur looks too smooth because of the Gaussian filter
Results
Our algorithm vs. reference solution (multisampling)
• Artifacts due to incorrect surface reconstruction
Discussion
• Simplifying assumptions & limitations
  – Gaussian distribution of light within the CoC
    • Mostly OK
  – We blur the texture before lighting
    • We should blur after lighting
  – Possibly incorrect image reconstruction from blurred splats
Conclusion
• A novel algorithm for depth-of-field rendering
• LOD as a means for depth-blurring
+ Handles transparency
+ Avoids intensity leakage
+ Running time independent of the amount of DOF
- Only for point-based rendering
- A number of artifacts can appear
• Ideal tool for interactive DOF previewing
  – Trial-and-error camera parameter setting

Acknowledgement: Grant 2159/2002 MSMT Czech Republic