Capture of Hair Geometry from Multiple Images – Sylvain Paris – Hector M. Briceño – François X. Sillion



Capture of Hair Geometry
from Multiple Images
Sylvain Paris – Hector M. Briceño – François X. Sillion
ARTIS is a team of the GRAVIR - IMAG research lab
(UMR 5527 between CNRS, INPG, INRIA and UJF), and a project of INRIA.
Motivation
Digital humans are increasingly common
• Movies, games…
Hairstyle is important
• Characteristic feature
• Duplicating a real hairstyle
Dusk demo - NVIDIA
© 2004 NVIDIA Corporation. Dusk image is © 2004
by NVIDIA Corporation. All rights reserved.
Motivation
User-based duplication of hair
• Creation from scratch
• Editing at a fine level
Image-based capture
• Automatic creation
• Copies the original features
• Editing still possible
[Kim02]
Our approach
Dense set of 3D strands from multiple images
Digital copy of a real hairstyle
Only static geometry
(animation and appearance left as future work)
Outline
• Previous work
• Definitions and overview
• Details of the hair capture
• Results
• Conclusions
Previous work
Shape reconstruction
Computer Vision techniques
– Shape from motion, shading, specularities
3D scanners
• Difficulties with hair complexity
• Only surfaces
Previous work
Light-field approach
Matusik et al. 2002
New images from different viewpoints and lights
Alpha mattes
• Duplication of the hairstyle
• No 3D strands
• Not editable
[Matusik02]
Previous work
Editing packages
Hadap and Magnenat-Thalmann 2001
Kim et al. 2002
Dedicated tools to help the user
[Kim02]
• 3D strands
• Total control
• Time-consuming
• Duplication very hard
[Hadap01]
MIRALab, University of Geneva
Previous work
Procedural & Image-based
Kong et al. 1997
Hair volume from images
Procedural filling
• 3D strands
• Duplication of the hair volume
• No duplication of the hairstyle
• New procedure for each hair type
[Kong97]
Previous work
Image-based
Grabli et al. 2002
Fixed camera, moving light
3D from shading
Sample input image
• 3D strands
• Duplication of the hairstyle
• Partial reconstruction (holes)
Captured geometry
We build upon their approach.
[Grabli02]
Our approach
1. Dense and reliable 2D data
• Robust image analysis
2. From 2D to 3D
• Reflection variation analysis: the light moves, the camera is fixed
• Several light sweeps to cover all hair orientations
3. Complete hairstyle
• The above process is repeated from several viewpoints
Outline
Previous work
• Definitions and overview
Details of the hair capture
Results
Conclusions
Definitions
• Fiber: an individual real hair
• Strand: the visible entity
• Segment: projects on one pixel
• Orientation: direction of a segment
Setup & input
Input summary
We use:
• 4 viewpoints
• 2 sweeps per viewpoint
• 50 to 100 images per sweep
• Camera and light positions known
• Hair region known (binary mask)
Main steps
1. Image analysis → 2D orientation
2. Highlight analysis → 3D orientation
3. Segment chaining → 3D strands
Main steps
1. Image analysis → 2D orientation
2. Highlight analysis → 3D orientation
3. Segment chaining → 3D strands
Measure of 2D orientation
Difficult points
• Fiber smaller than a pixel → aliasing
• Complex light interaction (scattering, self-shadowing…)
• Varying image properties
→ Select the measurement method per pixel
Measure of 2D orientation
Useful information
• Many images available
→ Select the light position per pixel
Our approach
Try several options → use the "best"
Based on oriented filters; the response is measured over orientations θ from 0° to 180°:
θ* = argmax_θ |K_θ ⊗ I|
Most reliable → most discriminant → lowest variance
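The per-pixel estimate θ* = argmax_θ |K_θ ⊗ I| can be sketched with a small oriented-filter bank. This is a minimal illustration, not the talk's implementation: it assumes one particular profile (second derivative of a Gaussian, one of several profiles the talk tests) and hypothetical kernel parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def oriented_kernel(theta, size=9, sigma_u=2.0, sigma_v=0.5):
    """Zero-mean filter elongated along theta: a Gaussian along the axis u
    and its second derivative across it (axis v)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)   # along the filter orientation
    v = -x * np.sin(theta) + y * np.cos(theta)  # across it
    g = np.exp(-(u ** 2 / (2 * sigma_u ** 2) + v ** 2 / (2 * sigma_v ** 2)))
    k = (v ** 2 / sigma_v ** 4 - 1.0 / sigma_v ** 2) * g
    return k - k.mean()  # zero mean: no response on flat regions

def orientation_map(image, n_angles=18):
    """Per-pixel 2D orientation: the theta maximizing |K_theta (x) I|."""
    thetas = np.pi * np.arange(n_angles) / n_angles
    responses = np.stack(
        [np.abs(convolve(image, oriented_kernel(t))) for t in thetas])
    return thetas[np.argmax(responses, axis=0)]
```

A sinusoidal stripe image of known direction is an easy sanity check: away from the borders, the recovered map should be constant at the stripe orientation.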
Filter selection
Implementation
1. Pre-processing: filter the images
2. For each pixel, test:
• Filter profiles (e.g. Canny, Gabor, 2nd derivative of Gaussian)
• Filter parameters
• Light positions
→ Pick the option with the lowest variance
3. Post-processing: smooth the orientations (bilateral filter)
[Figure: per-pixel selection map among the filter profiles]
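The "pick the option with the lowest variance" step can be sketched as below. The exact statistic is an assumption for illustration: here each candidate's response curve over θ is normalized into a distribution and scored by its circular variance on doubled angles (orientations are π-periodic), the most peaked curve winning.

```python
import numpy as np

def select_orientation(responses, thetas):
    """Among candidate measurements (filter profile / parameters / light
    position), keep the most discriminant one: the response curve over
    theta that is most peaked, i.e. has the lowest circular variance."""
    best_var, best_theta = np.inf, None
    for r in responses:
        w = np.abs(r)
        w = w / w.sum()                       # response curve as a distribution
        c = (w * np.cos(2 * thetas)).sum()    # doubled angles: pi-periodic data
        s = (w * np.sin(2 * thetas)).sum()
        var = 1.0 - np.hypot(c, s)            # circular variance in [0, 1]
        if var < best_var:
            best_var, best_theta = var, thetas[np.argmax(w)]
    return best_theta, best_var
```

A flat response curve yields a variance of 1 and is never preferred over a peaked one, which is exactly the "most discriminant" behavior the slide describes.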
2D results
8 filter profiles · 3 filter parameters · 9 light positions
[Comparison figure: Sobel [Grabli02] vs. our result]
(More results in the paper)
Main steps
1. Image analysis → 2D orientation
2. Highlight analysis → 3D orientation
3. Segment chaining → 3D strands
Mirror reflection
Computing the segment normal
[Figure: mirror reflection geometry with normal n and equal angles a]
~3° accuracy [Marschner03]
For each pixel: at which light position does the highlight occur?
Practical measure
Orientation from two planes
[Figure: the intersection of the two planes gives the 3D orientation]
(3D position determined later)
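The two-plane intersection can be sketched as follows, assuming directional lights and an ideal mirror cone off the fiber (no Marschner shift): a highlight occurs when the fiber tangent is perpendicular to the half-vector h = normalize(l + v), so each sweep constrains the tangent to a plane of normal h. The function name and inputs are hypothetical.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def tangent_from_two_highlights(light1, light2, view):
    """Each sweep's highlight pins the fiber tangent t to a plane of normal
    h = normalize(l + v) (mirror condition: t . (l + v) = 0).  Two sweeps
    give two planes; their intersection is the 3D orientation."""
    h1 = normalize(normalize(light1) + normalize(view))
    h2 = normalize(normalize(light2) + normalize(view))
    return normalize(np.cross(h1, h2))   # direction of the plane intersection
```

Note the result is sign-free (t and −t describe the same orientation), which matters later when chaining segments.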
Main steps
1. Image analysis → 2D orientation
2. Highlight analysis → 3D orientation
3. Segment chaining → 3D strands
Starting point of a strand
• The head is approximated by a 3D ellipsoid
Chaining the segments
• Blending weights
Ending criterion
Strand grows until:
• Limit length (user setting)
• Out of volume (visual hull)
Outline
Previous work
Definitions and overview
Details of the hair capture
• Results
Conclusions
Results
Result summary
Similar reflection patterns
Duplication of hairstyle
• Curly, wavy and tangled
• Blond, auburn and black
• Medium-length, long and short
Conclusions
General contributions
– Dense 2D orientation (filter selection)
– 3D from highlights on hair
System
– Proof of concept
– Sufficient to validate the approach
• Capture of a whole head of hair
• Different hair types
Limitations
• Image-based approach: only the visible part
– Occlusions not handled (curls)
• Head: poor approximation
• Setup makes the subject move
– during the light sweep
– between viewpoints
Future work
Short term
• Better setup and better head approximation
• Data structures for editing and animation
• Reflectance
Long term
• Hair motion capture
• Extended study of filter selection
Thanks…
Questions ?
The authors thank Stéphane Grabli,
Steven Marschner, Laure Heïgéas,
Stéphane Guy, Marc Lapierre,
John Hughes, and Gilles Debunne.
Rendering using
deep shadow maps
kindly provided by
Florence Bertails.
The images in the previous-work section appear courtesy of NVIDIA, Tae-Yong Kim, Wojciech Matusik, Nadia Magnenat-Thalmann, Hiroki Takahashi, and Stéphane Grabli.
Backup slides: visual hull, grazing angle, 2D validation, comparisons, pre-processing, post-processing