IBMR: Image Based Modeling and Rendering CS395/495: Spring 2003 Jack Tumblin


CS395/495: Spring 2003
IBMR: Image Based
Modeling and Rendering
Jack Tumblin
[email protected]
Admin:
How this course works
• Refer to class website: (soon)
http://www.cs.northwestern.edu/~jet/Teach/2003_3spr_IBMR/IBMRsyllabus2003.htm
• Tasks:
– Reading, lectures, class participation, projects
• Evaluation:
– Progressive Programming Project
– Take-Home Midterm, Take-Home Final
GOAL: First-Class Primitive
• Want images as ‘first-class’ primitives
– Useful as BOTH input and output
– Convert to/from traditional scene descriptions
• Want to mix real & synthetic scenes freely
• Want to extend photography
– Easily capture scene:
shape, movement, surface/BRDF, lighting …
– Modify & Render the captured scene data
• BUT: images hold only PARTIAL scene information
“You can’t always get what you want” –(Mick Jagger 1968)
Back To Basics: Scene & Image
Light + 3D Scene: Illumination, shape, movement, surface BRDF, …
→ 2D Image: collection of rays through a point
Image Plane I(x,y): Position(x,y), Angle(θ,φ)
Trad. Computer Graphics
Light + 3D Scene: Illumination, shape, movement, surface BRDF, …
→ 2D Image: collection of rays through a point
Image Plane I(x,y): Position(x,y), Angle(θ,φ)
Reduced, Incomplete Information
Trad. Computer Vision
2D Image: collection of rays through a point
Image Plane I(x,y): Position(x,y), Angle(θ,φ)
→ Light + 3D Scene: Illumination, shape, movement, surface BRDF, …
!TOUGH! ‘ILL-POSED’: many simplifications, external knowledge needed…
IBMR Goal: Bidirectional Rendering
• Both forward and ‘inverse’ rendering!
‘3D Scene’ Description:
Camera pose, camera view geometry,
scene illumination, object shape & position,
surface reflectance, transparency, …
Traditional Computer Graphics:
‘3D Scene’ Description → 2D Image(s) → Display
IBMR (?! New Research ?!):
2D Image(s) → ‘Optical’ Description → ‘3D Scene’ Description
OLDEST IBR: Shadow Maps (1984)
Fast Shadows from Z-buffer hardware:
1) Make the “Shadow Map”:
– Render image seen from light source, BUT
– Keep ONLY the Z-buffer values (depth)
2) Render Scene from Eyepoint:
– Pixel + Z depth gives 3D position of surface;
– Project 3D position into Shadow map image
– If Shadow Map depth < 3D depth, SHADOW!
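The two-pass test above can be sketched in a few lines. This is my sketch, not code from the course; `project_to_light` is a hypothetical stand-in for the light's view transform, and the depth bias is a standard trick to avoid self-shadowing:

```python
class ShadowMap:
    """Depth buffer rendered from the light's viewpoint (step 1)."""
    def __init__(self, depths):
        self.depths = depths            # 2D list: depths[v][u]
        self.height = len(depths)
        self.width = len(depths[0])

def in_shadow(shadow_map, project_to_light, world_point, bias=1e-3):
    """Step 2: project a visible surface point into the shadow map
    and compare depths; the small bias avoids shadow 'acne'."""
    u, v, depth_from_light = project_to_light(world_point)
    if not (0 <= u < shadow_map.width and 0 <= v < shadow_map.height):
        return False                    # outside the light's frustum: lit
    # Something nearer the light covers this pixel => point is shadowed.
    return shadow_map.depths[v][u] < depth_from_light - bias
```

On Z-buffer hardware both passes are ordinary renders, which is exactly why this counts as an early image-based technique: the shadow computation reuses a rendered image rather than the scene geometry.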
Early IBR: QuickTime VR (Chen, Williams ’93)
1) Four Planar Images → 1 Cylindrical Panorama:
Re-sampling Required!
Planar pixels: equal distance on the x,y image plane (tan⁻¹ in angle)
Cylinder pixels: horiz.: equal angle on the cylinder (θ);
vert.: equal distance on y (tan⁻¹ in angle)
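The planar/cylindrical relationship can be made concrete. The mapping below is my reading of the slide (cylinder radius equal to the focal length f, camera facing θ = 0), not code from the course:

```python
import math

def cylindrical_to_planar(theta, h, f):
    """Map a cylindrical-panorama sample (angle theta, height h on a
    cylinder of radius f) to planar-image coordinates (x, y) for a
    pinhole image of focal length f.  Cylinder columns are equally
    spaced in angle; planar columns are equally spaced in
    x = f*tan(theta), hence the re-sampling step."""
    x = f * math.tan(theta)
    y = h / math.cos(theta)      # rays stretch toward the image edges
    return x, y
```

Resampling the panorama means evaluating this mapping per output pixel and bilinearly interpolating the source image at (x, y).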
Early IBR: QuickTime VR (Chen, Williams ’93)
2) Windowing, Horizontal-only Reprojection: (input/output image examples)
View Interpolation: How?
• But what if no depth is available?
• Traditional stereo disparity map:
pixel-by-pixel search for correspondences
View Interpolation: How?
• Store depth at each pixel: reproject
• Coarse or simple 3D model
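The depth-based route can be sketched directly: unproject each pixel using its stored depth, move to the new camera, project again. A minimal sketch assuming a canonical reference camera and a purely translated second camera (rotation omitted for brevity):

```python
def reproject(u, v, z, f, t):
    """Reproject pixel (u, v) with stored depth z from a reference
    camera (focal length f, at the origin) into a second camera
    translated by t = (tx, ty, tz)."""
    # Unproject to a 3D point in the reference camera's frame.
    X, Y, Z = u * z / f, v * z / f, z
    # Express the point in the translated camera's frame.
    Xc, Yc, Zc = X - t[0], Y - t[1], Z - t[2]
    # Project into the second camera's image plane.
    return f * Xc / Zc, f * Yc / Zc
```

Nearby pixels with very different depths reproject to very different places, which is exactly what exposes holes and disocclusions in depth-based view interpolation.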
Plenoptic Array: ‘The Matrix Effect’
• Brute force! Simple arc, line, or ring array of cameras
• Synchronized shutters
http://www.ruffy.com/firingline.html
• Warp/blend between images to change viewpoint on a ‘time-frozen’ scene
Plenoptic Function (Adelson, Bergen ’91)
• For a given scene, describe ALL rays through:
– ALL pixels, of
– ALL cameras, at
– ALL wavelengths,
– ALL time
F(x, y, z, θ, φ, λ, t)
“Eyeballs Everywhere” function (5-D, plus λ and t!)
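To get a feel for why this function is only a conceptual tool, a toy sample-count estimate helps; all the resolutions below are invented for illustration, not from the lecture:

```python
def plenoptic_samples(spatial=10, angular=36, wavelengths=3, timesteps=1):
    """Back-of-envelope sample count for a discretized plenoptic
    function F(x, y, z, theta, phi, lambda, t): three spatial axes,
    two angular axes, plus wavelength and time."""
    return spatial ** 3 * angular ** 2 * wavelengths * timesteps
```

Even these coarse placeholder resolutions already give millions of samples; anything near image resolution per axis is astronomically beyond storage, which is why practical IBR works with low-dimensional slices of F.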
Seitz: ‘View Morphing’ SIGG ’96
• http://www.cs.washington.edu/homes/seitz/vmorph/vmorph.htm
1) Manually set some correspondence points (eye corners, etc.)
2) Pre-warp and post-warp to match points in 3D,
3) Reproject for virtual cameras
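The interpolation at the heart of step 2 can be sketched as plain linear morphing of corresponding points. This sketch is only the naive linear step; Seitz & Dyer's contribution is precisely the pre-warp to parallel views (and post-warp back) that makes this linear interpolation equivalent to a physically valid camera move:

```python
def morph_points(pts0, pts1, s):
    """Linearly interpolate corresponding 2D points between two views.
    s = 0 gives the first view's points, s = 1 the second's.
    Valid as a view change only after the images have been
    pre-warped to parallel (rectified) views."""
    return [((1 - s) * x0 + s * x1, (1 - s) * y0 + s * y1)
            for (x0, y0), (x1, y1) in zip(pts0, pts1)]
```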
‘Scene’ causes Light Field
Light field: holds all outgoing light rays
Shape, position, movement; emitted light;
BRDF, texture, scattering; reflected, scattered light …
Cameras capture a subset of these rays.
? Can we recover Shape ?
Can you find ray intersections? Or ray depth?
Ray colors might not match for non-diffuse materials (BRDF).
? Can we recover Surface Material ?
Can you find ray intersections? Or ray depth?
Ray colors might not match for non-diffuse materials (BRDF).
Hey, wait…
• The light field describes light LEAVING the enclosing surface…
• Isn’t there a complementary ‘light field’ for the light ENTERING the surface?
• *YES*: ‘Image-Based Lighting’, too!
‘Full 8-D Light Field’ (10-D, actually, with λ and time)
• Cleaner Formulation:
– Orthographic camera, positioned on a sphere around the object/scene
– Orthographic projector (‘laser brick’), positioned on a sphere around the object/scene
– (and wavelength and time)
F(xc, yc, θc, φc, xl, yl, θl, φl, λ, t)
‘Full 8-D Light Field’ (10-D, actually, with λ and time)
• ! Complete !
– Geometry, Lighting, BRDF, … ! NOT REQUIRED !
• Preposterously Huge:
(?!?! an 8-D function sampled at image resolution !?!?), but
• Hugely Redundant:
– Wavelength → RGB triplets, ignore time, and restrict
eyepoint movement: maybe ~3 or 4-D?
– Very similar images: use warping rules? (SIGG 2002)
– Exploit movie storage/compression methods?
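One standard way to restrict eyepoint movement down to 4-D is the two-plane (‘light slab’) parameterization from Levoy & Hanrahan's light-field work: a ray is named by its intersections with two parallel planes. A sketch (the plane depths here are arbitrary choices of mine):

```python
def slab_ray(u, v, s, t, z_uv=0.0, z_st=1.0):
    """Two-plane ('light slab') parameterization of a ray: (u, v) is
    its intersection with the first plane at depth z_uv, (s, t) with
    the second plane at depth z_st.  Returns (origin, direction).
    Four numbers (u, v, s, t) per ray: a 4-D light field."""
    origin = (u, v, z_uv)
    direction = (s - u, t - v, z_st - z_uv)
    return origin, direction
```

Sampling (u, v, s, t) on regular grids turns the slab into a 4-D array of colors, which is what makes compression and fast lookup tractable compared to the full 8-D function.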
IBR-Motivating Opinions
“Computer Graphics: Hard”
– Complex! Geometry, texture, lighting, shadows, compositing, BRDF, interreflections, etc., etc., …
– Irregular! Visibility, topology, render equation, …
– Isolated! Tough to use real objects in CGI
– Slow! Compute-bound, off-line only, …
“Digital Imaging: Easy”
– Simple! More quality? Just pump more pixels!
– Regular! Vectorized, compressible, pipelined…
– Accessible! Use real OR synthetic (CGI) images!
– Fast! Scalable, image reuse, a path to interactivity…
Practical IBMR
What useful partial solutions are possible?
• Texture Maps++: Panoramas, Environment Maps
• Image(s) + Depth (3D shell)
• Estimating Depth & Recovering Silhouettes
• ‘Light Probe’ measures real-world light
• Structured Light to Recover Surfaces…
• Hybrids: BTF, stitching, …
Conclusion
• Heavy overlap with computer vision: be careful not to re-invent & re-name!
• Elegant geometry is at the heart of it all, even surface reflectance, illumination, etc.
• THUS: we’ll dive into geometry; all the rest is built on it!