The Z-buffer algorithm and
geometric primitives
Drawing with lines and curves only – no
surfaces
Used today in PostScript, text, and some
output devices
Creating images for a rectangular array of
pixels -- virtually all modern displays
Usually RGB colorspace used: say 8 bits for
each of red, green, blue
Hardware-accelerated graphics emphasized
populating the raster with sensible values as
quickly as possible
Rendering: converting a description of a scene
into an image of the scene
Typically, scene descriptions are geometry
Explicit geometry: list of points (vertices) about
which some information is known
Different ways of assembling vertices
isolated vertices (points)
sequences of points (lines, piecewise linear curves)
triangles
collections of triangles
Triangle fan
Triangle Strip
on average, fewer than 3 vertices per triangle
saves memory, bus usage
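The vertex savings from a strip can be seen in a small language-neutral sketch (Python here, not XNA): a strip of N vertices produces N − 2 triangles, so long strips approach one vertex per triangle instead of three. The vertex names are made up.

```python
# Sketch: expanding a triangle strip into individual triangles.
# A strip of N vertices yields N - 2 triangles.

def strip_to_triangles(vertices):
    """Expand a triangle strip into a list of vertex triples."""
    triangles = []
    for i in range(len(vertices) - 2):
        a, b, c = vertices[i], vertices[i + 1], vertices[i + 2]
        # Every other triangle is flipped to keep a consistent winding.
        triangles.append((a, c, b) if i % 2 else (a, b, c))
    return triangles

strip = ["v0", "v1", "v2", "v3", "v4", "v5"]
tris = strip_to_triangles(strip)
print(len(tris))      # 4 triangles from only 6 stored vertices
print(len(tris) * 3)  # 12 vertex references
```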
Hardware support for real-time rendering
Rasterization
which pixels are needed to show the object?
Visibility
which object can be seen?
Project objects onto screen
Project every vertex onto screen
Pixels receive appropriate colors
The vertex is the fundamental
primitive in modern real-time rendering
All information stored in vertex
position
color
texture coordinates
surface normal (direction perpendicular to surface)
possibly other attributes, in custom vertex format
Fundamental rendering problem:
Have a collection of geometry
Need to know what is visible (closest to the eye) at a
given point on the screen
Don’t draw things that are behind other things
Historically, devolved to sorting
"Painter's Algorithm"
Sort your objects in order of decreasing distance to
the eye
Paint the most distant ones first, the closest ones last
Paint over the images of the distant objects with the
closer objects that are in front of them
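The steps above can be sketched in a few lines of Python (an illustrative toy, not how a real renderer stores objects): sort by decreasing distance, then paint back to front so nearer objects overwrite farther ones. The scene data is made up.

```python
# Sketch of the painter's algorithm: paint most distant objects
# first; closer objects painted later cover them.

def painters_algorithm(objects):
    framebuffer = {}  # pixel -> color
    # Sort in order of decreasing distance to the eye.
    for obj in sorted(objects, key=lambda o: o["depth"], reverse=True):
        for pixel in obj["pixels"]:
            framebuffer[pixel] = obj["color"]
    return framebuffer

scene = [
    {"color": "red",  "depth": 1.0, "pixels": [(0, 0)]},          # nearest
    {"color": "blue", "depth": 5.0, "pixels": [(0, 0), (1, 0)]},  # farthest
]
fb = painters_algorithm(scene)
print(fb[(0, 0)])  # red: the nearer object is painted last
print(fb[(1, 0)])  # blue: only the far object covers this pixel
```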
Sorting is an enormous burden
The Z-buffer uses dedicated memory to free us
from that problem (mostly)
Depth buffer: stores z value at every pixel
Depth test: only draw a fragment if it is closer
than the last drawn fragment
Now, objects can be drawn in any order
With each pixel, store a depth (Z) value
Initialize z-buffer values to infinity
For each fragment: Draw it iff it has a lower
value than the previous value (hence is closer)
Specialized buffer available
Update the depth buffer
Brute force solution to visibility
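The loop above can be written out as a short Python sketch (a toy software version; real depth testing runs in hardware). The buffer sizes and fragments are made up; the point is that submission order no longer matters.

```python
# Sketch of the Z-buffer: one depth value per pixel, initialized to
# infinity; a fragment is written only if it is closer than the
# stored value, so objects may be drawn in any order.
import math

WIDTH, HEIGHT = 4, 4
color_buffer = [["black"] * WIDTH for _ in range(HEIGHT)]
depth_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, z, color):
    if z < depth_buffer[y][x]:     # depth test
        depth_buffer[y][x] = z     # update the depth buffer
        color_buffer[y][x] = color

# Far fragment first, near fragment second ...
draw_fragment(1, 1, z=5.0, color="blue")
draw_fragment(1, 1, z=1.0, color="red")
# ... or the other way around: the result is the same.
draw_fragment(2, 2, z=1.0, color="red")
draw_fragment(2, 2, z=5.0, color="blue")

print(color_buffer[1][1], color_buffer[2][2])  # red red
```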
Lack of resolution in depth buffer results in “z-fighting” between close values
Transparent objects need to be drawn last, and
multiple transparent objects still demand
sorting
Usually few transparent objects in practice
Maybe we don't care about accurately showing transparent
objects behind other transparent objects
Lighting calculations done on each vertex to
determine color
Custom vertex shader executed, or standard one: BasicEffect in XNA
Historically, final colors computed by vertex
shader, later interpolated across pixels
"Three term lighting model"
Gouraud shading
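The per-vertex lighting idea can be illustrated with a tiny Python sketch of a single diffuse (Lambert) term, one piece of a three-term model; the light direction, normals, and material color are made-up values, not XNA's lighting code. Evaluating this once per vertex and interpolating the resulting colors is Gouraud shading.

```python
# Sketch of per-vertex diffuse lighting: the material color is
# scaled by N . L (surface normal dotted with the direction to the
# light), clamped at zero for surfaces facing away.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(normal, to_light, material=(1.0, 0.0, 0.0)):
    """One lighting term, evaluated once per vertex."""
    intensity = max(0.0, dot(normal, to_light))
    return tuple(c * intensity for c in material)

light_dir = (0.0, 1.0, 0.0)                  # unit vector toward the light
print(diffuse((0.0, 1.0, 0.0), light_dir))   # facing the light: full red
print(diffuse((0.0, -1.0, 0.0), light_dir))  # facing away: black
```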
Critical task of vertex shader: compute final
position of every vertex
Each vertex in the geometry receives
appropriate transformation
Same transformation on each vertex
Modeling transform: Moving, orienting, and scaling
the objects to create the scene
Viewing transform: Change of coordinate systems
from whatever world coordinates into canonical
coordinates
Values from vertices interpolated to find values
of fragment
Color interpolated (RGBA)
Texture coordinates interpolated
Texture lookup produces per-pixel color
Custom pixel shader executes at this step,
potentially taking additional values from
vertices and computing final color
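The interpolation step can be sketched with barycentric weights (a standard way to blend values across a triangle; this Python toy is not the GPU's implementation). The vertex colors are made-up values.

```python
# Sketch of per-fragment interpolation: values stored at the three
# vertices (here RGB colors) are blended with barycentric weights,
# which are nonnegative and sum to 1, to produce a fragment's value.

def interpolate(values, weights):
    """Barycentric blend of per-vertex values."""
    return tuple(sum(w * v[i] for w, v in zip(weights, values))
                 for i in range(len(values[0])))

vertex_colors = [(1.0, 0.0, 0.0),   # red
                 (0.0, 1.0, 0.0),   # green
                 (0.0, 0.0, 1.0)]   # blue

print(interpolate(vertex_colors, (1.0, 0.0, 0.0)))  # at vertex 0: pure red
centroid = (1/3, 1/3, 1/3)
print(interpolate(vertex_colors, centroid))         # equal mix of all three
```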
Primitives converted into pixels in the raster
(grid)
Fragment values combined into pixel values
(might have multiple fragments per pixel)
Depth test applied here
Virtually all modern real-time computer
graphics done with z-buffering
Hardware executes operations in parallel to
accelerate image synthesis
Strict limitations on what can be done
“why do all video games look the same?”
Changing because of access to shaders
Now we will look at how to make use of some
of this information in practice
Future lectures:
Writing custom vertex and pixel shaders
Applying and combining transformations
Using transformations to control the camera
moving the camera around in a scene, as in an FPS
For now:
putting geometry in the world
rendering with static camera and BasicEffect shader
XNA runs the Z-buffer algorithm for you
Fixed bit depth of Z-buffer is an issue
enable depth testing, and nearer objects will be
drawn in front of further objects
floating point Z means less resolution at larger
distances
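A quick numeric sketch shows why resolution drops with distance: after the perspective divide, stored depth is nonlinear in eye-space distance. The mapping below is one common convention (distance in [NEAR, FAR] to depth in [0, 1]); the plane distances are made-up values, not XNA defaults.

```python
# Sketch: post-projection depth as a function of eye-space distance z.
# Most of the [0, 1] depth range is spent close to the near plane.

NEAR, FAR = 1.0, 10_000.0

def buffer_depth(z):
    """Map eye-space distance z to a [0, 1] depth (one common convention)."""
    return (FAR / (FAR - NEAR)) * (1.0 - NEAR / z)

# Half of the depth range is already used up by z = 2 * NEAR; every
# surface from there out to z = 10,000 must share the other half, so
# small depth differences far away collapse together ("z-fighting").
print(buffer_depth(2.0))     # ~0.5 already at z = 2
print(buffer_depth(5000.0))  # ~0.9999 at the scene's midpoint
```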
Reminder: pixel shader can modify depth
values
can obtain interesting special effects by adjusting
depth
Partial silhouettes by adjusting depth values (Fein and McGuire, NPAR 2006)
To render geometry, execute the following
steps:
create your vertices and set their properties
create a vertex declaration for the graphics device
create and configure an Effect
establish your camera parameters
in Draw, use the Effect to draw your vertices
Various builtin vertex types provided
different combinations of what information stored
position
color
texture coordinates
surface normal
VertexPositionColor
VertexPositionColorTexture
VertexPositionNormalTexture (**)
VertexPositionTexture
Probably you will want to make an array
containing your vertex data
VertexPositionColor[] mydata = new VertexPositionColor[6];
...
mydata[0] = new VertexPositionColor(
    new Vector3(1, 3, -1), Color.Aquamarine);
The Graphics Device has to be informed what
kind of data it will receive
Done through a VertexDeclaration object
vd = new VertexDeclaration(graphics.GraphicsDevice,
    VertexPositionColor.VertexElements);
...
graphics.GraphicsDevice.VertexDeclaration = vd;
Built-in vertex shader
The BasicEffect can do lighting
cleverly designed with a 3-light rig
key light: main light (often overhead)
fill light (somewhat dimmer, reduces shadows)
back light (behind object, illuminates silhouettes)
Or, you can disable lighting and just use the
raw color
"the Basic Effect is not so basic"
Critical job of any vertex shader: computing vertex positions
Calculation of screen position from vertex
position done with matrix multiplication
we'll look at this in some detail later
Done with three matrices:
world matrix: computes true world coordinates
just set to identity for now
view matrix: transforms world to "canonical"
coordinates relative to camera
projection matrix: transforms 3D canonical
coordinates to 2D screen coordinates
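The pipeline of matrices can be sketched language-neutrally: each vertex position, written as a homogeneous 4-vector, is multiplied by the world matrix and then the view matrix (the projection matrix is applied the same way). The matrices below are toy examples in Python, not XNA's.

```python
# Sketch of the matrix pipeline applied to one vertex position.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major) by a homogeneous 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

world = translation(0, 0, 0)     # identity: "just set to identity for now"
view = translation(0, 0, -10)    # toy view: camera 10 units along +z
vertex = [1, 3, -1, 1]           # homogeneous position (w = 1)

p = mat_vec(world, vertex)       # model -> world coordinates
p = mat_vec(view, p)             # world -> camera coordinates
print(p)                         # [1, 3, -11, 1]
```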
Viewing transformation matrix
Matrix view;
...
Matrix.CreateLookAt(
    eyepos,    // position of camera
    lookat,    // position looked at
    up,        // "up" direction
    out view); // output – view matrix
Projection transformation matrix
Matrix projection;
...
Matrix.CreatePerspectiveFieldOfView(
    fov,             // field of view (radians)
    aspect,          // aspect ratio
    near,            // near clipping plane
    far,             // far clipping plane
    out projection); // output – projection matrix
Need to define a frustum (truncated pyramid)
Different ways of describing
always need near & far distances
In any API: function to get projection matrix
given frustum description
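The matrix such a function produces can be sketched from the same frustum parameters. This Python toy follows one common convention (camera looking down −z, depth mapped to [0, 1] after the divide by w); it is not XNA's implementation, but it takes the same fov/aspect/near/far description.

```python
# Sketch: building a perspective projection matrix from a frustum.
import math

def perspective(fov, aspect, near, far):
    y = 1.0 / math.tan(fov / 2.0)  # vertical scale from the field of view
    x = y / aspect                 # horizontal scale
    return [[x, 0, 0,                  0],
            [0, y, 0,                  0],
            [0, 0, far / (near - far), near * far / (near - far)],
            [0, 0, -1,                 0]]

def project(m, v):
    """Apply the matrix, then divide by w (the perspective divide)."""
    out = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    return [c / out[3] for c in out[:3]]

m = perspective(math.pi / 2, 1.0, near=1.0, far=100.0)
print(project(m, [0, 0, -1, 1]))    # on the near plane: depth 0
print(project(m, [0, 0, -100, 1]))  # on the far plane: depth ~1
```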
effect = new BasicEffect(graphics.GraphicsDevice, null);
...
effect.View = view;
effect.World = world;  // identity for now
effect.Projection = projection;
An Effect contains one or more Techniques
A Technique contains one or more Passes
effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes) {
    pass.Begin();
    ... // drawing geometry here
    pass.End();
}
effect.End();
Various ways to specify
Arguably simplest:
graphics.GraphicsDevice.DrawUserPrimitives(
    PrimitiveType.TriangleStrip,
    mydata,          // vertex array
    start,           // first element
    numprimitives);  // number of primitives
Z-buffer: algorithm for real-time rendering
vertices projected onto screen
vertices contain data: position, color, ...
intermediate fragments interpolated
depth test used to render fragments in front
Lot of setup needed in XNA to render
Vertex data and VertexDeclaration
"Effects" to transform and light vertices
transforms achieved through matrices
Drawing syntax
Custom shaders
Texture for added visual complexity
Closer look at transforms
mathematics of transforms
homogeneous coordinates
composite transforms
modeling transforms, camera control