Gaming Overview
Week 4
3D Game Engine Technology
Adapted from Ben Lok @UFL Slides
The job of the rendering engine:
to make a 2D image appear as 3D!
• Input is 3D model data
• Output is 2D images (the screen)
• Yet we want to show a 3D world!
• How can we do this?
– We can include "cues" in the image that give our brain 3D information about the scene
– These cues are visual depth cues
Visual Depth Cues
• Monoscopic Depth Cues (single 2D image)
• Stereoscopic Depth Cues (two 2D images)
• Motion Depth Cues (series of 2D images)
• Physiological Depth Cues (body cues)
Monoscopic Depth Cues
• Interposition
– An object that occludes another is closer
• Shading
– Gives shape information; shadows are included here
• Size
– Usually, the larger object is closer
• Linear Perspective
– parallel lines converge at a single point
• Surface Texture Gradient
– more detail for closer objects
• Height in the visual field
– The higher the object is (vertically), the farther away it is
• Atmospheric effects
– further away objects are blurrier
• Brightness
– further away objects are dimmer
Viewing a 3D world
[figure: world coordinate axes +X, +Y, +Z]
We have a model in this world and would like to view it from a new position.
We'll call this new position the camera or eyepoint. Our job is to figure out what the model looks like on the display plane.
Parallel Projection
[figure: parallel projection of the model onto the display plane, world axes +X, +Y, +Z]
Perspective Projection
[figure: perspective projection of the model onto the display plane, world axes +X, +Y, +Z]
Coordinate Systems
• Object Coordinates
• World Coordinates
• Eye Coordinates
[figure: the same model shown in object coordinates, world coordinates, and screen coordinates]
Transformations
• Rigid Body Transformations - transformations that do not change the object, e.g. when you place your model in your scene
• Translate
– If you translate a rectangle, it is still a rectangle
• Scale
– If you scale a rectangle, it is still a rectangle
• Rotate
– If you rotate a rectangle, it is still a rectangle
Translation
• Translation - repositioning an object along a straight-line path (the translation distances) from one coordinate location to another.
[figure: the point (x,y) translated by (tx,ty) to (x',y')]
Applying to Triangles
[figure: a triangle translated by (tx,ty)]
Scale
• Scale - Alters the size of an object.
• Scales about a fixed point
[figure: the point (x,y) scaled to (x',y') about a fixed point]
Rotation
• P = (4,4)
• θ = 45 degrees
[figure: the point P rotated by θ about the origin]
Rotations
[figure: triangle with vertices V(-0.6,0), V(0,-0.6), V(0.6,0.6); rotate -30 degrees; result V(0,0.6), V(0.3,0.9), V(0,1.2)]
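A minimal sketch of the three transforms applied to a 2D vertex; the struct and function names are illustrative, not from the slides:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Translate a point by (tx, ty): x' = x + tx, y' = y + ty
Vec2 translate(Vec2 p, float tx, float ty) {
    return { p.x + tx, p.y + ty };
}

// Scale about the origin: x' = sx * x, y' = sy * y
Vec2 scale(Vec2 p, float sx, float sy) {
    return { p.x * sx, p.y * sy };
}

// Rotate counterclockwise about the origin by angle a (radians):
// x' = x cos(a) - y sin(a),  y' = x sin(a) + y cos(a)
Vec2 rotate(Vec2 p, float a) {
    return { p.x * std::cos(a) - p.y * std::sin(a),
             p.x * std::sin(a) + p.y * std::cos(a) };
}
```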
Combining Transformations
• Note there are two ways to combine rotation and translation. Why?
[figures: "How would we get:" — two target configurations reached by different orders of rotation and translation]
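Reusing the Vec2 helpers sketched above (an assumption, not the slides' code), the two orderings give different results:

```cpp
// Order matters: rotate-then-translate is not the same as translate-then-rotate.
Vec2 p = {1.0f, 0.0f};
Vec2 a = translate(rotate(p, 1.5707963f), 2.0f, 0.0f);  // rotate 90 degrees, then translate: (2, 1)
Vec2 b = rotate(translate(p, 2.0f, 0.0f), 1.5707963f);  // translate, then rotate 90 degrees: (0, 3)
```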
Coordinate Hierarchy
[diagram:
Screen Coordinates
  <- Transformation: World -> Screen
World Coordinates
  <- Transformation: Object #1 -> World  <- Object #1 (Object Coordinates)
  <- Transformation: Object #2 -> World  <- Object #2 (Object Coordinates)
  <- Transformation: Object #3 -> World  <- Object #3 (Object Coordinates)]
Transformation Hierarchies
• For example:
Transformation Hierarchies
• We can have transformations be in relation to each other:
[diagram:
World Coordinates
  <- Transformation: Green -> World  <- Green (Object Coordinates)
  <- Transformation: Red -> Green    <- Red (Object Coordinates)
  <- Transformation: Blue -> Red     <- Blue (Object Coordinates)]
More Complex Models
Vertices, Lines, and Polygons
• So, we know how to move, scale, and rotate the
points (vertices) of our model.
– object coordinates -> world coordinates
• We know how these points relate to locations on
the screen
– world coordinates -> screen coordinates
• How do we connect the dots (draw the edges) and color in between the lines (fill the polygons)?
Draw a line from (0,0) to (4,2)
How do we choose between (1,0) and (1,1)? What would be a good heuristic?
[figure: pixel grid, x from 0 to 4 and y from 0 to 2, with the line from (0,0) to (4,2)]
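One answer to the heuristic question is to step one pixel at a time in x and pick the row whose pixel center is closest to the true line. A small sketch under that assumption (not the slides' exact algorithm; it assumes x0 < x1 and |slope| <= 1):

```cpp
#include <cmath>
#include <cstdio>

// Step in x and round the true y value to the nearest row.
void drawLine(int x0, int y0, int x1, int y1) {
    float slope = float(y1 - y0) / float(x1 - x0);
    for (int x = x0; x <= x1; ++x) {
        int y = int(std::lround(y0 + slope * (x - x0)));
        std::printf("set pixel (%d, %d)\n", x, y);  // (0,0) (1,1) (2,1) (3,2) (4,2) for (0,0)->(4,2)
    }
}
```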
Let’s draw a triangle
[figure: pixel grid with a triangle whose vertices are (0,0), (4,0), and (4,2)]
A Slight Translation
[figure: the triangle shifted slightly on the pixel grid]
Filling in the polygons
• What is area filling?
• Scan Conversion algorithms
Wireframe Vs. Filled Area
[figure: the triangle drawn as a wireframe vs. with its area filled on the pixel grid]
Scan Conversion
• Scan converting is converting a picture
definition, or a model, into pixel intensity
values.
• We want to scan convert polygons.
Area Filling
• We want to find which pixels on a scan line are "inside" the polygon
• Note: the scan line intersects the polygon an EVEN number of times.
• We simply fill an interval between the odd- and even-numbered intersections.
• What happens if the scan line exactly intersects a vertex?
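A minimal sketch of that odd/even fill rule for one scan line, assuming the edge-crossing x values have already been computed (the helper name and printing are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Fill pixels between successive pairs of edge crossings on one scan line.
// xs holds the x coordinates where the scan line crosses polygon edges.
void fillScanline(std::vector<float> xs, int y) {
    std::sort(xs.begin(), xs.end());
    for (size_t i = 0; i + 1 < xs.size(); i += 2) {         // pair up crossings: (0,1), (2,3), ...
        int xStart = int(std::ceil(xs[i]));
        int xEnd   = int(std::floor(xs[i + 1]));
        for (int x = xStart; x <= xEnd; ++x)
            std::printf("set pixel (%d, %d)\n", x, y);      // interior span
    }
}
```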
Area filling Triangles
• How is this easier?
• What would you do for:
What do we have here?
• You know how to: transform model vertices into world and screen coordinates, and scan convert lines and triangles
• What don't we know?
– Overlapping polygons: which pixels do we see?
Goal of Visible Surface Determination
To draw only the surfaces (triangles) that are visible, given a view point and a view direction.
Three reasons to not draw something
• 1. It isn’t in the view frustum
• 2. It is “back facing”
• 3. Something is in front of it (occlusion)
• We need to do this computation quickly.
How quickly?
Surface Normal
• Surface Normal - the vector perpendicular to the surface
• Three non-collinear points (that make up a triangle) also describe a plane. The normal is the vector perpendicular to this plane (see the sketch below).
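A sketch of computing that plane normal from three triangle vertices via a cross product (types and names are illustrative, not from the slides):

```cpp
struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                      a.z * b.x - a.x * b.z,
                                      a.x * b.y - a.y * b.x }; }

// Normal of the triangle (v0, v1, v2). With counterclockwise vertex order
// in a right-handed coordinate system, this points out of the "front" side.
Vec3 triangleNormal(Vec3 v0, Vec3 v1, Vec3 v2) {
    return cross(sub(v1, v0), sub(v2, v0));   // not normalized; divide by its length if needed
}
```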
Normals
Vertex Order
• Vertex order matters. We usually agree that counterclockwise order determines which "side" of a triangle is labelled the "front". Think: right-handed coordinate system.
What do the normals tell us?
Q: How can we use normals to tell us which "face" of a triangle we see?
Examine the angle between the normal and the view direction.
[figure: view vector V and surface normal N]
Front-facing if V · N < 0 (angle > 90 degrees)
Backface Culling
• Before scan converting a triangle, determine if it is facing you
• Compute the dot product between the view vector (V) and the triangle normal (N)
• Simplify this to examining only the z component of the normal
• If Nz < 0 then it is a front-facing triangle, and you should scan convert it (sketched below)
• What surface visibility problems does this solve? Not solve?
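A minimal sketch of both forms of the test, reusing the Vec3 type from the normal sketch above (the eye-space simplification assumes the view direction is along +z, as in the slide):

```cpp
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Front-facing if the normal points back toward the viewer (angle > 90 degrees).
bool isFrontFacing(Vec3 viewDir, Vec3 normal) {
    return dot(viewDir, normal) < 0.0f;
}

// Simplification from the slide: with V = (0, 0, 1) in eye space,
// the dot product reduces to the z component of the normal.
bool isFrontFacingEyeSpace(Vec3 normal) {
    return normal.z < 0.0f;
}
```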
Multiple Objects
• If we want to draw multiple objects, we can sort them in z and draw from back to front.
• What are the advantages? Disadvantages?
• This is called the Painter's Algorithm, or splatting.
Side View
Side View - What is a solution?
Even Worse… Why?
Painter’s Algorithm
• Pros:
– No extra memory
– Relatively fast
– Easy to understand and implement
• Cons:
– Precision issues (and additional work to handle them)
– Sort stage
– Intersecting objects
Depth Buffers
Goal: We want to only draw something if it appears in front of what is already drawn.
What does this require? Can we do this on a per-object basis?
Depth Buffers
We can't do it object based; it must be image based.
What do we know about the x,y,z points where the objects overlap?
Remember our "eye" or "camera" is at the origin of our view coordinates.
What does that mean we need to store?
Side View
Depth or Z-Buffer requirements
• We need to have an additional value for
each pixel that stores the depth value.
• What is the data type for the depth value?
• How much memory does this require?
• Playstation 1 had 2 MB.
• The first 512 x 512 framebuffer cost
$50,000
• Called Depth Buffering or Z buffering
Depth Buffer Algorithm
• Begin frame
– Clear color
– Clear depth to z = zmax
• Draw Triangles
– When scan converting: if z_new pixel < z_value at the pixel, set the color and the z_value at the pixel to z_new pixel (sketched below)
– What does it mean if z_new pixel > z_value at the pixel?
– Why do we clear the depth buffer?
– Now we see why it is sometimes called the Z buffer
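A per-pixel sketch of the algorithm above; the buffer layout and names are illustrative assumptions, not a specific library's API:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct FrameBuffers {
    int width = 0, height = 0;
    std::vector<uint32_t> color;   // one color value per pixel
    std::vector<float>    depth;   // one depth value per pixel

    // Begin frame: clear color and clear depth to zmax.
    void clear(uint32_t clearColor, float zMax) {
        std::fill(color.begin(), color.end(), clearColor);
        std::fill(depth.begin(), depth.end(), zMax);
    }

    // Called for every pixel produced while scan converting a triangle.
    void writePixel(int x, int y, float zNew, uint32_t c) {
        size_t i = size_t(y) * width + x;
        if (zNew < depth[i]) {     // closer than what is already drawn there
            depth[i] = zNew;
            color[i] = c;
        }                          // otherwise the new fragment is hidden: discard it
    }
};
```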
Computing the znew pixel
• Q: We can compute z_nsc (z in normalized screen coordinates) at the vertices, but what is z_nsc as we scan convert?
• A: We interpolate z_nsc while we scan convert too!
Z Buffer Precision
• What does the # of bits for a depth buffer element mean?
• The mapping of z from eye space to normalized screen space is not linear; we do not have the same precision across z (we divided by z).
• In fact, half of our precision lies between z=0 and z=0.5. What does this mean? What happens if we do NOT have enough precision?
Z Fighting
• If we do not have enough precision in the depth buffer, we cannot determine which fragment should be "in front".
• What does this mean for the near and far plane? We want them to approximate our view volume as closely as possible.
Don’t forget
• Even in 1994, memory wasn't cheap. A 1024 x 768 x 16-bit depth buffer needs about 1.6 MB of additional memory.
• Depth buffers weren't common until recently because of this.
• Since we have to draw every triangle, the fill rate goes UP. Current graphics cards approach a 1 Gigapixel fill rate.
• An image space algorithm
Depth Buffer Algorithm
• Pros:
– Easy to understand and implement
– Per pixel "correct" answer
– No preprocess
– Draw objects in any order
– No need to subdivide objects
• Cons:
– Z precision
– Additional memory
– Z fighting
What we know
• We already know how to render the world from a viewpoint.
• Why doesn't this look 3D?
• Lighting and shading!
How do we know what color each pixel gets?
• Lighting
• Lighting models
– Ambient
– Diffuse
– Specular
• Surface Rendering Methods
“Lighting”
• Two components:
– Lighting Model or Shading Model - how we calculate the intensity at a point on the surface
– Surface Rendering Method - how we calculate the intensity at each pixel
Jargon
• Illumination - the transport of light from a source to a point via direct and indirect paths
• Lighting - computing the luminous intensity for a specified 3D point, given a viewpoint
• Shading - assigning colors to pixels
• Illumination Models:
– Empirical - approximations to observed light properties
– Physically based - applying the physics of light and its interactions with matter
Lighting in general
• What factors play a part in how an object is
“lit”?
• Let’s examine different items here…
Two components
• Light Source Properties
– Color (Wavelength(s) of light)
– Shape
– Direction
• Object Properties
– Material
– Geometry
– Absorption
Light Source Properties
• Color
– We usually assume the light has one wavelength
• Shape
– point light source - approximate the light source as a 3D point in space. Light rays emanate in all directions.
• good for small light sources (compared to the scene)
• far away light sources
Distributed Lights
• Light Source Shape, continued
– distributed light source - approximate the light source as a 3D object. Light rays usually emanate in specific directions.
• good for larger light sources
• area light sources
Light Source Direction
• In computer graphics, we usually treat lights as rays emanating from a source. The direction of these rays can either be:
– Omni-directional (point light source)
– Directional (spotlights)
Light Position
• We can specify the position of a light in one of two ways. The first is with an x, y, and z coordinate.
– What are some examples?
– These lights are called positional lights
• Q: Should the sun be represented as a positional light?
• A: Nope! If a light is significantly far away, we can represent it with only a direction vector. These are called directional lights. How does this help?
Contributions from lights
• We will break down what a light does to an object into three different components. This APPROXIMATES what a light does; actually computing the rays is too expensive to do in real time.
– Light at a pixel from a light = Ambient + Diffuse + Specular contributions
– I_light = I_ambient + I_diffuse + I_specular
Ambient Term - Background Light
• The ambient term is a HACK!
• It represents the approximate contribution of the light to the general scene, regardless of the location of the light and the object
• Indirect reflections that are too complex to completely and accurately compute
• I_ambient = color
Diffuse Term
• Contribution that a light has on the surface, regardless of viewing direction.
• Diffuse surfaces, on a microscopic level, are very rough. This means that a ray of light coming in has an equal chance of being reflected in any direction.
• What are some ideal diffuse surfaces?
Lambert’s Cosine Law
• Diffuse surfaces follow Lambert's Cosine Law
• Lambert's Cosine Law - reflected energy from a small surface area in a particular direction is proportional to the cosine of the angle between that direction and the surface normal.
• Think about surface area and the # of rays
Specular Reflection
• Specular contribution can be thought of as the "shiny highlight" of a plastic object.
• On a microscopic level, the surface is very smooth. Almost all light is reflected.
• What is an ideal purely specular reflector?
• What does this term depend on?
– Viewing direction
– Normal of the surface
Snell’s Law
• Specular reflection applies Snell's Law:
– The incoming ray, the surface normal, and the reflected ray all lie in a common plane.
– The angle that the reflected ray forms with the surface normal is determined by the angle that the incoming ray forms with the surface normal, and the relative speeds of light in the mediums in which the incident and reflected rays propagate.
– We assume θ_l = θ_r
[figure: light vector L, surface normal N, reflected ray R, and view vector V, with angle θ]
Snell's Law is for IDEAL surfaces
• Think about the amount of light reflected at different angles.
• Different for shiny vs. dull objects
Phong Model
Phong Reflection Model
• An approximation sets the intensity of specular reflection proportional to (cos φ)^shininess
• What does the value of shininess mean?
• How do we represent shiny or dull surfaces using the Phong model?
Effect of the shininess value
Combining the terms
• Ambient - the combination of light reflections from various surfaces to produce a uniform illumination. Background light.
• Diffuse - uniform scattering of light rays on a surface. Proportional to the "amount of light" that hits the surface. Depends on the surface normal and light vector.
• Specular - light that gets reflected. Depends on the light ray, the viewing angle, and the surface normal.
Ambient + Diffuse + Specular
Lighting Equation
I final  I ambientkambient  I diffusekdiffuseN  L   I speculark specularN  H 
shininess
I final 
lights1

l 0
I lambientkambient  I ldiffusekdiffuseN  L   I lspeculark specularN  H 
shininess
Ilambient = light source l’s ambient component
Ildiffuse = light source l’s diffuse component
Ilspecular = light source l’s specular component
kambient = surface material ambient reflectivity
N
R
L

kdiffuse = surface material diffuse reflectivity
kspecular = surface material specular reflectivity
shininess = specular reflection parameter (1 -> dull, 100+ -> very shiny)
V
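A direct transcription of the single-light equation into code, per color channel; a sketch assuming N, L, and the halfway vector H are already normalized (function and parameter names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// One light at one surface point:
//   I = Ia*ka + Id*kd*(N.L) + Is*ks*(N.H)^shininess
// NdotL and NdotH are clamped to zero so surfaces facing away from the light
// receive no diffuse or specular contribution.
float lightIntensity(float Ia, float ka,
                     float Id, float kd, float NdotL,
                     float Is, float ks, float NdotH,
                     float shininess) {
    float diffuse  = std::max(NdotL, 0.0f);
    float specular = std::pow(std::max(NdotH, 0.0f), shininess);
    return Ia * ka + Id * kd * diffuse + Is * ks * specular;
}
```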
Attenuation & Spotlights
• One factor we have yet to take into account is that a light source contributes a higher incident intensity to closer surfaces.
• The energy from a point light source falls off proportional to 1/d².
– Actually, using only 1/d² makes it difficult to light things correctly. Think about d=1 vs. d=2. Why?
• What happens if we don't do this?
• How do we do spotlights?
Full Illumination Model
I final  I lambientkambient 
lights1

l 0

f dl  I ldiffusekdiffuseN  L   I lspeculark specular N  H 


1

f d   min 1,
2 
 a0  a1d  a2 d 
shininess

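The attenuation factor f(d) from the full model as a small sketch (coefficient names follow the equation above):

```cpp
#include <algorithm>

// f(d) = min(1, 1 / (a0 + a1*d + a2*d^2))
// Clamping at 1 avoids over-brightening surfaces very close to the light (e.g. d < 1).
float attenuation(float d, float a0, float a1, float a2) {
    return std::min(1.0f, 1.0f / (a0 + a1 * d + a2 * d * d));
}
```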
Lighting and Shading
I final  I lambientkambient 
lights1

l 0

f d l  I ldiffusekdiffuseN  L   I lspeculark specularN  H 
• When do we do the lighting equation?
• Does lighting calculate the pixel colors?

shininess
Shading
• Shading is how we "color" a triangle.
• Constant Shading
• Gouraud Shading
• Phong Shading
Constant Shading
• Constant Intensity or Flat Shading
• One color for the entire triangle
• Fast
• Good for some objects
• What happens if triangles are small?
• Sudden intensity changes at borders
Gouraud Shading
• Intensity Interpolation Shading
• Calculate lighting at the vertices. Then interpolate the colors as you scan convert.
Gouraud Shading
• Relatively fast, only do three calculations
• No sudden intensity changes
• What can it not do?
• What are some approaches to fix this?
• Question: what is the normal at a vertex?
Phong Shading
• Interpolate the normal, since that is the information that represents the "curvature"
• Linearly interpolate the vertex normals. For each pixel, as you scan convert, calculate the lighting per pixel.
• True "per pixel" lighting
• Not done by most hardware/libraries/etc.
Shading Techniques
• Constant Shading
– Do one lighting calculation (pick a vertex) per triangle
– Color the entire triangle the same color
• Gouraud Shading
– Do three lighting calculations (at the vertices) per triangle
– Linearly interpolate the colors as you scan convert
• Phong Shading
– While you scan convert, linearly interpolate the normals
– With the interpolated normal at each pixel, calculate the lighting at each pixel (compare the sketch below)
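A sketch contrasting what gets interpolated in the two interpolating schemes, for one pixel between two edge samples; it reuses the Vec3 type from the normal sketch earlier, and the helper names are assumptions:

```cpp
#include <cmath>

float lerp(float a, float b, float t) { return a + t * (b - a); }

// Gouraud: light the vertices, then interpolate the resulting colors (per channel).
float gouraudPixel(float colorA, float colorB, float t) {
    return lerp(colorA, colorB, t);
}

// Phong shading: interpolate the vertex normals, renormalize, and run the
// lighting equation per pixel with the interpolated normal.
Vec3 phongPixelNormal(Vec3 nA, Vec3 nB, float t) {
    Vec3 n = { lerp(nA.x, nB.x, t), lerp(nA.y, nB.y, t), lerp(nA.z, nB.z, t) };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```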
How do we do this?
…or this?
…using only Triangles?
• Using only triangles to model everything is hard
• Think about a label on a soup can
• Instead of interpolating colors, map a texture pattern
Texture Patterns
• Image Textures
• Procedural Textures
Let’s look at a game
• What effects do we see?
Transparencies
• The Alpha channel can stencil out textures.
• Thus, per pixel, set alpha = 1 or alpha = 0. Where alpha = 0, the texture is not drawn.
Combining Lighting + Texturing
• If you notice, there is no lighting involved with texture mapping!
• They are independent operations, which MAY (you decide) be combined
• It all depends on how you "apply" the texture to the underlying triangle
Combining Lighting + Texturing
• C_T = Texture Color
• C_C = Base Triangle Color
• Replace: C_F = C_T
• Blend: C_F = C_T * C_C
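The two combine modes from the slide as a per-channel sketch (the Color struct and function names are illustrative):

```cpp
struct Color { float r, g, b; };

// Replace: the final color is just the texture color, ignoring lighting.
Color replaceMode(Color texture, Color /*base*/) {
    return texture;
}

// Blend (modulate): multiply the texture color with the lit base triangle color.
Color blendMode(Color texture, Color base) {
    return { texture.r * base.r, texture.g * base.g, texture.b * base.b };
}
```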
What else does the engine need to do?
• NOT an exhaustive list!
• Load Models
– Model Acquisition
– Surfaces/Curves/NURBS
• Fast Performance
– Simplification/Level of Detail
– Culling/Cells and Portals/BSP
• Advanced Rendering
– Lighting/Shaders
– Non-Photorealistic Rendering
– Effects
• Interactive Techniques/User Interfaces
• Game Logic/Scripting/Artificial Intelligence
• Physical properties: collisions, gravity, etc.
• Load/Save States
Global Illumination
• Radiosity
• Radiosity as textures
– Light maps
• Bidirectional Reflectance Distribution Function (BRDF)
• Light as rays doesn't do everything
– raytracing
Advanced Effects
• Cloth
• Liquids
• Fire
• Hair/Fur
• Skin
• Grass
• What are the common denominators here?
Performance
• Massive Models
– models of 100,000,000
triangles
– Replace geometry with
images
– Warp images
– Occlusion culling/BSP
trees
– Cell and Portal culling
– Level of detail
Simplification/Level of Detail
• Objects farther away can be represented with less detail
• How do we "remove" triangles?
• What are the advantages and disadvantages?
• Can we do this automatically?
BSP Trees (Fuchs et al., 1980)
• Binary Space Partitioning
– Doom and most games before framebuffers (circa 1994-95)
– Given a world, we want to build a data structure that, given any point, can return a sorted list of objects
– What assumptions are we making?
– Note: what happens in those "old" games like Doom?
BSP Trees
• Two stages:
– preprocess - done "offline"
– runtime - what we do per frame
• Draw parallels to Doom
– Since this is easier in 2D, note that all "old" FPSs are really 2D.
BSP Algorithm
• For a viewpoint, determine where it sits in the tree.
• Now draw objects on the "other half of the tree" first (sketched below):
– farside.draw(viewpoint)
– nearside.draw(viewpoint)
• Intuition - we draw things farther away first
• Is this an image space or object space algorithm?
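A recursive sketch of that back-to-front traversal; the node layout, Plane type, and drawObject placeholder are assumptions for illustration, not the slides' or any engine's actual data structures:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Plane {
    float a, b, c, d;                                   // plane equation: ax + by + cz + d = 0
    float distanceTo(Vec3 p) const { return a * p.x + b * p.y + c * p.z + d; }
};

struct BSPNode {
    Plane            splitter;        // partitioning plane chosen in the preprocess step
    std::vector<int> objectsOnPlane;  // placeholder ids for geometry lying on the splitter
    BSPNode*         front = nullptr;
    BSPNode*         back  = nullptr;
};

void drawObject(int /*id*/) { /* scan convert that object's triangles */ }

// Painter's-algorithm traversal: draw the half-space the viewpoint is NOT in first,
// then the geometry on the splitter, then the half-space containing the viewpoint.
void drawBackToFront(const BSPNode* node, Vec3 eye) {
    if (!node) return;
    bool eyeInFront = node->splitter.distanceTo(eye) >= 0.0f;
    drawBackToFront(eyeInFront ? node->back : node->front, eye);   // far side
    for (int id : node->objectsOnPlane) drawObject(id);
    drawBackToFront(eyeInFront ? node->front : node->back, eye);   // near side
}
```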
BSP Trees
• Pros:
– Preprocess step means fast determination of what we can and can't see
– Works in 3D -> Quake 1
– Painter's Algorithm pros
• Cons:
– Still has intersecting-object problems
– Static scene
Determining if something is viewable
• View Frustum Culling
• Cells and Portals
– definitions
• cell
• portal
– preprocess step
– runtime computation
– where do we see it?
• Quake3
Collision Detection
• Determining intersections between models
• Resolution of collisions
• Where is the intersection?
• Normal of surfaces
• Depth of intersection
• Multiple collisions
• Collisions over time
– Vector collisions
Shaders and Non-Photorealistic Rendering
• Cartoons
• Pen/Pencil
• Paints
• Art
• Drawing Styles