PRT Summary

Motivation for Precomputed Transfer
• better light integration and light transport
  – dynamic, area lights
  – shadowing
  – interreflections
• in real-time
(images: point light vs. area light; area lighting without shadows vs. area lighting with shadows)

Precomputed Radiance Transfer (PRT)
• represent lighting using spherical harmonics
• low frequency = few coefficients
(images: reconstructions at n = 4, 9, 25, 676, and the original)

Tabulating PRT
• illuminate the object with each lighting basis function (basis i, i+1, i+2, …)
• store the resulting coefficients on the surface

PRT is a linear operator (matrix) per surface point
• maps incident lighting to exit radiance

Self-Transfer Results (Diffuse)
(images: No Shadows/Inter vs. Shadows vs. Shadows+Inter)

Self-Transfer Results (Glossy)
(images: No Shadows/Inter vs. Shadows vs. Shadows+Inter)

PRT Terminology: PRT as a Linear Operator
  e_p(v_p) = y(v_p)^T M_p l
• l : light vector (in the source basis)
• M_p : source-to-exit transfer matrix
• e_p = M_p l : exit radiance vector (in the exit basis)
• y(v_p) : exit basis evaluated in direction v_p
• e_p(v_p) : exit radiance in direction v_p

PRT Special Case: Diffuse Objects
• transfer vector rather than matrix: e_p = y^T M_p l
• independent of view (constant exit basis)
• the matrix is a row vector
• previous work uses different light bases: SH [PRT02], Directional [Xi03], Haar [Ng03], Steerable [Ashikhmin02]
• image relighting

PRT Special Case: Surface Light Fields
• transfer vector rather than matrix: e_p(v_p) = y(v_p)^T M_p l
• frozen lighting environment
• the matrix is a column vector
[Miller98] [Nishino99] [Wood00] [Chen02] [Matusik02]

Factoring PRT (BRDFs)
  e_p = B R_p T_p l,   e_p(v_p) = y(v_p)^T e_p
• T_p : source → transferred incident radiance
• R_p : rotate to the local frame
• B : integrate against the BRDF [Westin92]
• y(v_p), e_p : evaluate exit radiance at v_p

Hemispherical Projection
• exit radiance is defined over the hemisphere, not the sphere
• spherical harmonics are not orthogonal over the hemisphere
• how to project hemispherical functions using SH?
  – naïve projection assumes the “underside” is zero
  – least-squares projection minimizes the approximation error
• see appendix

Factoring PRT (BRDFs)
  e_p(v_p) = y(v_p)^T B R_p T_p l

Technique     LightB  ExitB  Note
[Sloan02]     SH      SH     Phong
[Kautz02]     SH      Dir    Arb
[Lehtinen03]  SH      Dir    Lsq
[Matusik02]   Dir     Dir    IBR

Extending PRT to BSSRDFs
• already handled by the original equation: e_p(v_p) = y(v_p)^T M_p l
• use [Jensen02], multiple scattering only (a matrix S_p with only 1 row)
• mix with a “conventional” BRDF: M_p = S_p + B R_p T_p

Problems With PRT
• big matrices at each surface point
  – 25-vectors for diffuse, ×3 for spectral
  – 25×25 matrices for glossy
  – at ~50,000 vertices
• slows glossy rendering (4 Hz)
  – frozen view/light can increase performance
  – not as GPU friendly
• limits the diffuse lighting order
  – only very soft shadows

Compression Goals
• decode efficiently
  – as much on the GPU as possible
  – render the compressed representation directly
• increase rendering performance
  – make the non-diffuse case practical
• reduce memory consumption
  – not just on disk

Compression Example
• surface is a curve, signal is the normal (shown in signal space)

VQ
• cluster the normals; replace samples with the cluster mean:
  M_p ≈ M_Cp

PCA
• replace samples with the mean plus a linear combination:
  M_p ≈ M^0 + Σ_{i=1..N} w_p^i M^i

CPCA
• compute a linear subspace in each cluster:
  M_p ≈ M_Cp^0 + Σ_{i=1..N} w_p^i M_Cp^i
• clusters with low-dimensional affine models
• how should clustering be done?
• static PCA
  – VQ, followed by one-time per-cluster PCA
  – optimizes for piecewise-constant reconstruction
• iterative PCA
  – PCA in the inner loop, slower to compute
  – optimizes for piecewise-affine reconstruction

Static vs. Iterative
(image comparison)

Related Work
• VQ+PCA [Kambhatla94] (static)
• VQPCA [Kambhatla97] (iterative)
• Mixture PC [Dony95] (iterative)
• more sophisticated models exist
  – [Brand03], [Roweis02]
  – mapping to current GPUs is challenging: variable storage per vertex; partitioning is more difficult (or requires more passes)

Equal Rendering Cost
(image comparison: VQ vs. PCA vs. CPCA)

Rendering with CPCA
  e_p(v_p) = y(v_p)^T M_p l,   M_p ≈ M_Cp^0 + Σ_{i=1..N} w_p^i M_Cp^i
  e_p(v_p) ≈ y(v_p)^T ( M_Cp^0 l + Σ_{i=1..N} w_p^i (M_Cp^i l) )
  with e_Cp^i = M_Cp^i l :
  e_p(v_p) ≈ y(v_p)^T ( e_Cp^0 + Σ_{i=1..N} w_p^i e_Cp^i )
• the e_Cp^i are constant per cluster – precompute them on the CPU
• rendering is a dot product
• compute a linear combination of vectors
• cost depends only on the number of rows of M

Non-Local Viewer
  e_p(v_p) ≈ y(v_p)^T ( M_Cp^0 l + Σ_{i=1..N} w_p^i M_Cp^i l )
• assume v_p is constant across the object (distant viewer, direction v_g):
  e_p ≈ y(v_g)^T M_Cp^0 l + Σ_{i=1..N} w_p^i y(v_g)^T M_Cp^i l
• rendering independent of view & light orders – a linear combination of colors

Overdraw
• faces belong to 1–3 clusters
  – OD = 1 : face drawn once
  – OD = 2 : face drawn 2×
  – OD = 3 : face drawn 3×
• coherence optimization:
  – reclassification
  – superclustering

GPU Dataflow
• constants: per-cluster vectors e_C^0 … e_C^N, eye point, transform matrix
• vertices: position, normal, tangent, cluster index C_p, weights w^1 … w^N
• vertex shader: computes e = Σ_i w_p^i e_Cp^i and the local view vector v
• pixel shader: evaluates y^T(v) (from a texture) and outputs the exit radiance
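The CPCA evaluation above (per frame, project the light through each cluster basis matrix once on the CPU; per vertex, take a small dot product) can be sketched in plain Python. This is a toy illustration with hypothetical function names, not the talk's GPU implementation:

```python
# Minimal CPCA rendering sketch. Names (precompute_cluster_radiance,
# exit_radiance) are illustrative, not from the talk.

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def precompute_cluster_radiance(cluster_basis, light):
    """Once per frame, per cluster, on the CPU:
    e_Cp^i = M_Cp^i l for the mean matrix (i = 0) and each PCA matrix."""
    return [mat_vec(M_i, light) for M_i in cluster_basis]

def exit_radiance(y_v, weights, cluster_e):
    """Per vertex: e_p(v_p) = y(v_p)^T (e_Cp^0 + sum_i w_p^i e_Cp^i).
    Cost depends on the number of rows of M, not the light basis size."""
    e = list(cluster_e[0])                      # mean term e_Cp^0
    for w, e_i in zip(weights, cluster_e[1:]):  # N PCA terms
        for k in range(len(e)):
            e[k] += w * e_i[k]
    return dot(y_v, e)
```

A tiny 2×2 example: with light l = [1, 0], mean matrix [[1, 0], [0, 1]], one PCA matrix [[0, 1], [1, 0]], weight 0.5, and exit basis y(v) = [2, 2], the combined vector is [1, 0.5] and the exit radiance is 3.0 — the per-vertex work is just the weighted sum and one dot product, which is what makes the compressed form directly renderable.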
Demo

Results
All examples use 25×25 matrices, 256 clusters, and 8 PCA vectors.

Model      #Pts    SPCA    IPCA    FPS
Buddha     49.9k   3m30s   1h51m   27
BuddhaSS   49.9k   6m12s   4h32m   27
Bird Anis  48.7k   6m34s   3h43m   45
Bird Diff  48.7k   43s     4m20s   227
Head       50k     3m26s   2h12m   58.5

Conclusions
CPCA
• works in “signal space”, not “surface space”
• uses an affine subspace per cluster
• compresses PRT well
• is used directly, without “blowing out” the signal
• requires small, uniform state storage
• provides
  – faster rendering
  – higher-frequency lighting

Future Work
• time-dependent and parameterized geometry
• higher-frequency lighting
• combination with bi-scale rendering
• better signal continuity

Questions?
• DirectX SDK for PRT available soon.
• Thanks to Jason Mitchell, Hugues Hoppe, Jason Sandlin, and David Kirk
• Stanford and MPI for models
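As a closing back-of-envelope, the memory reduction CPCA buys can be estimated from the reported configuration (~50,000 points, 25×25 transfer matrices, 256 clusters, 8 PCA vectors). This is a rough sketch with illustrative function names; it counts floats only, ignoring the per-vertex cluster index and spectral channels:

```python
# Storage comparison: raw per-point matrices vs. CPCA representation.

def raw_floats(points, rows=25, cols=25):
    """Uncompressed: one full transfer matrix per surface point."""
    return points * rows * cols

def cpca_floats(points, clusters=256, n_pca=8, rows=25, cols=25):
    """CPCA: per cluster, a mean matrix plus n_pca basis matrices;
    per point, only n_pca scalar weights (cluster index ignored here)."""
    cluster_data = clusters * (1 + n_pca) * rows * cols
    point_data = points * n_pca
    return cluster_data + point_data

P = 50_000
print(raw_floats(P))   # 31250000 floats
print(cpca_floats(P))  # 1840000 floats, roughly a 17x reduction
```

Most of the compressed budget sits in the shared per-cluster matrices, which is why the per-point cost collapses to a handful of weights and the representation can be rendered directly.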