CS 445 / 645
Introduction to Computer Graphics
Lecture 20
Antialiasing
Environment Mapping
Used to model an object that reflects surrounding textures to the eye
• Polished sphere reflects walls and ceiling textures
• Cyborg in Terminator 2 reflects flaming destruction
The texture is a distorted fisheye view of the environment
Spherical texture mapping
creates texture coordinates
that correctly index into
this texture map
Materials from NVidia
http://developer.nvidia.com/object/Cube_Mapping_Paper.html
Sphere Mapping
Blinn/Newell Latitude Mapping
Cube Mapping
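Not from the slides: a minimal fixed-function OpenGL sketch of sphere mapping. It assumes an active GL context, OpenGL 1.1-era headers, and a hypothetical texture object envTex already loaded with the fisheye-style environment texture; the driver then derives the texture coordinates from the eye-space reflection vector.

#include <GL/gl.h>

/* Assumes an active OpenGL context; envTex is a hypothetical texture
 * object holding the fisheye-style environment texture. */
void enableSphereMapping(GLuint envTex)
{
    glBindTexture(GL_TEXTURE_2D, envTex);
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  /* derive (s, t) from the      */
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  /* eye-space reflection vector */
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_2D);
    /* ...then draw the reflective object... */
}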
Multitexturing
Pipelining of multiple texture applications to one
polygon
The result of each texture unit's application is passed to the next texture unit, which adds its effect
More bookkeeping is required to pull this off
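A minimal sketch (not from the slides) of two-unit multitexturing in fixed-function OpenGL 1.3: unit 0 lays down a base texture and unit 1 modulates that result with a light map, so each unit's output feeds the next. baseTex and lightMapTex are hypothetical texture objects, an active GL context is assumed, and the per-unit texture coordinates are part of the extra bookkeeping.

#include <GL/gl.h>

void drawMultitexturedQuad(GLuint baseTex, GLuint lightMapTex)
{
    glActiveTexture(GL_TEXTURE0);                 /* unit 0: base texture        */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, baseTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glActiveTexture(GL_TEXTURE1);                 /* unit 1: modulate unit 0's result */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightMapTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glBegin(GL_QUADS);                            /* each vertex needs coordinates per unit */
        glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
        glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f);
        glVertex2f(-1.0f, -1.0f);
        glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f);
        glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 0.0f);
        glVertex2f( 1.0f, -1.0f);
        glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f);
        glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 1.0f);
        glVertex2f( 1.0f,  1.0f);
        glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f);
        glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 1.0f);
        glVertex2f(-1.0f,  1.0f);
    glEnd();
}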
Antialiasing
What is a pixel?
A pixel is not…
• A box
• A disk
• A teeny tiny little light
A pixel is a point
• It has no dimension
• It occupies no area
• It cannot be seen
• It can have a coordinate
A pixel is more than a point: it is a sample
Samples
Most things in the real world are continuous
Everything in a computer is discrete
The process of mapping a continuous function to a
discrete one is called sampling
The process of mapping a continuous variable to a
discrete one is called quantization
Rendering an image requires sampling and
quantization
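As a tiny illustration (not from the slides), the sketch below samples a made-up continuous intensity function on a 10x10 grid and quantizes each sample to 8 bits; intensity() is just a stand-in for "the continuous image".

#include <math.h>
#include <stdio.h>

static double intensity(double x, double y)       /* continuous, values in [0, 1] */
{
    return 0.5 + 0.5 * sin(10.0 * x) * cos(10.0 * y);
}

int main(void)
{
    unsigned char raster[10][10];                 /* discrete, quantized samples */
    for (int j = 0; j < 10; ++j)
        for (int i = 0; i < 10; ++i) {
            double s = intensity(i / 10.0, j / 10.0);        /* sampling     */
            raster[j][i] = (unsigned char)(s * 255.0 + 0.5); /* quantization */
        }
    printf("%d\n", raster[0][0]);
    return 0;
}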
Samples
Samples
Line Segments
We tried to sample a line segment so it would map
to a 2D raster display
We quantized the pixel values to 0 or 1
We saw stair steps, or jaggies
Line Segments
Instead, quantize to many shades
But what sampling algorithm is used?
Area Sampling
Shade pixels according to the area covered by the thickened line
This is unweighted area sampling
A rough approximation can be computed by dividing each pixel into a finer grid of subpixels
Unweighted Area Sampling
A primitive cannot affect the intensity of a pixel if it does not intersect the pixel
Equal areas cause equal intensity, regardless of the distance from the pixel center to the area
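A minimal sketch of the finer-grid approximation mentioned above: subdivide pixel (px, py) into an n-by-n grid of sample points and count how many fall inside the primitive. The disk test here is only a stand-in for any inside/outside test, not anything from the lecture.

#include <stdbool.h>

static bool insidePrimitive(double x, double y)   /* example: a disk of radius 20 at (50, 50) */
{
    double dx = x - 50.0, dy = y - 50.0;
    return dx * dx + dy * dy <= 20.0 * 20.0;
}

double pixelCoverage(int px, int py, int n)
{
    int hits = 0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            double x = px + (i + 0.5) / n;        /* subpixel sample position */
            double y = py + (j + 0.5) / n;
            if (insidePrimitive(x, y))
                ++hits;
        }
    return (double)hits / (n * n);                /* fraction of the pixel covered */
}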
Weighted Area Sampling
Unweighted sampling colors two pixels identically when the primitive cuts the same area through the two pixels
Intuitively, a pixel cut through its center should be weighted more heavily than one cut along a corner
Weighted Area Sampling
Weighting function, W(x,y)
• specifies the contribution of a primitive passing through the point (x, y), as a function of distance from the pixel center
[Figure: weighting function W(x,y), intensity plotted against distance x from the pixel center]
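As a formula (not written out on the slide), the weighted-area intensity of a pixel can be expressed as the integral of W over the portion A of the filter support covered by the primitive, normalized by the total weight:

I_{pixel} = \frac{\iint_{A} W(x, y)\, dA}{\iint W(x, y)\, dA}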
Images
An image is a 2D function I(x, y) that specifies
intensity for each point (x, y)
Sampling and Image
Our goal is to convert the continuous image to a
discrete set of samples
The graphics system’s display hardware will
attempt to reconvert the samples into a
continuous image: reconstruction
Point Sampling an Image
Simplest sampling is on a grid
Sample depends solely on value at grid points
Point Sampling
Multiply sample grid by image intensity to obtain a
discrete set of points, or samples.
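In signal-processing notation (added here, not on the slide), point sampling on a grid with spacing \Delta multiplies the image by a lattice of impulses:

I_s(x, y) = I(x, y) \sum_{i} \sum_{j} \delta(x - i\Delta,\; y - j\Delta)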
Sampling Geometry
Sampling Errors
Some objects missed entirely, others poorly
sampled
Fixing Sampling Errors
Supersampling
• Take more than one sample for each pixel and combine them
– How many samples is enough?
– How do we know no features are lost?
[Example images: supersampling at 150x15, 200x20, 300x30, and 400x40, each resolved down to 100x10]
Unweighted Area Sampling
Average supersampled points
All points are weighted equally
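A minimal sketch of this unweighted resolve, assuming a supersampled image laid out with n-by-n subsamples per final pixel (the names here are illustrative).

/* Final value of pixel (px, py) is the plain average of its n-by-n block
 * of supersamples; superWidth is the supersampled image width (= n * final width). */
float resolveUnweighted(const float *samples, int superWidth,
                        int px, int py, int n)
{
    float sum = 0.0f;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            sum += samples[(py * n + j) * superWidth + (px * n + i)];
    return sum / (float)(n * n);                  /* every sample weighted equally */
}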
Weighted Area Sampling
Points in the pixel are weighted differently
• Flickering occurs as an object moves across the display
Overlapping pixel regions eliminate flicker
Signal Theory
Convert spatial signal to frequency domain
[Figure: intensity plotted against pixel position across a scanline; example from Foley, van Dam, Feiner, and Hughes]
Signal Theory
Represent spatial signal as a sum of sine waves (varying frequency and phase shift)
Very commonly used to represent a sound “spectrum”
Fourier Analysis
Convert spatial domain to frequency domain
• Let f(x) indicate the intensity at a location in space, x (the pixel value)
• u indexes frequency; F(u) is a complex number encoding the amplitude and phase shift of that frequency
– i = sqrt (-1) … the imaginary (phase) part is frequently not plotted
• |F(u)| is the amplitude of a particular frequency in the signal
– In this case the signal is f(x)
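For reference (the transform itself is not written out on the slide), the standard continuous 1-D Fourier transform pair is:

F(u) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi u x}\, dx
\qquad
f(x) = \int_{-\infty}^{\infty} F(u)\, e^{i 2\pi u x}\, du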
Fourier Transform
Examples of spatial and frequency domains
Nyquist Sampling Theorem
The ideal samples of a continuous function
contain all the information in the original function
if and only if the continuous function is sampled
at a frequency greater than twice the highest
frequency in the function
Nyquist Rate
The lower bound on the sampling rate equals
twice the highest frequency component in the
image’s spectrum
This lower bound is the Nyquist Rate
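Stated compactly (added for reference): a signal whose highest frequency component is f_max must be sampled at a rate

f_s > 2 f_{\max}

For example, a signal containing frequencies up to 20 kHz needs more than 40,000 samples per second.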
Band-limited Signals
If you know a function contains no components with frequencies higher than x
• Band-limited implies the original function can be represented without any sinusoids of frequency greater than x
• This facilitates reconstruction
Flaws with Nyquist Rate
Samples may not align with peaks
Flaws with Nyquist Rate
When sampling below the Nyquist Rate, the resulting signal looks like a lower-frequency one (this is aliasing)
• With no knowledge of band-limits, the samples could have been derived from a signal of higher frequency
Low-pass Filtering
• We know we are limited in the resolution of our screen
• We want the screen (sampling grid) to have twice the resolution
of the signal (image) we want to display
• How can we reduce the high frequencies of the image?
– Low-pass filter
– Band-limits the image
Low-pass Filtering
In frequency domain
• If signal is F(u)
• Just chop off parts of F(u) in high frequencies using a second
function, G(u)
– G(u) = 1 when -k <= u <= k, and 0 elsewhere
– This is called the pulse function
Low-pass Filtering
In spatial domain
• Multiplying two Fourier transforms in the frequency domain corresponds exactly to performing an operation called convolution in the spatial domain
• f(x) * g(x) = h(x)  the convolution of f with g…
– The value of h(x) at x is the integral of the product of f(x)
with the filter g(x) such that g(x) is centered at x
• The pulse (frequency) == sinc (spatial)
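Written out (added for reference), the convolution and the theorem the slide relies on are:

h(x) = (f * g)(x) = \int_{-\infty}^{\infty} f(t)\, g(x - t)\, dt
\qquad
\mathcal{F}\{f * g\}(u) = F(u)\, G(u)

so multiplying F(u) by the pulse G(u) in the frequency domain is the same as convolving f(x) with a sinc in the spatial domain.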
Low-pass Filtering
In spatial domain
Sinc: sinc(x) = sin(πx) / (πx)
• Note this isn't a perfect way to eliminate high frequencies
– “ringing” occurs
Sinc Filter
Slide the filter along the spatial domain and compute the new pixel value that results from the convolution
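A minimal 1-D sketch of that sliding-filter step. The sinc kernel is truncated to a finite radius here (an assumption; the slides do not specify a window), and its negative lobes are what produce the "ringing" mentioned above.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double sinc(double x)                      /* sin(pi x) / (pi x), with sinc(0) = 1 */
{
    if (fabs(x) < 1e-8)
        return 1.0;
    return sin(M_PI * x) / (M_PI * x);
}

double sincFilterAt(const double *samples, int count, double x, int radius)
{
    double sum = 0.0, weightSum = 0.0;
    int center = (int)floor(x);
    for (int i = center - radius; i <= center + radius; ++i) {
        if (i < 0 || i >= count)
            continue;
        double w = sinc(x - i);                   /* kernel centered at position x */
        sum += w * samples[i];
        weightSum += w;
    }
    return weightSum != 0.0 ? sum / weightSum : 0.0;
}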
Bilinear Filter
Sometimes called a tent filter
Easy to compute
• just linearly interpolate between samples
Finite extent and no negative values
Still has artifacts
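For comparison, a 1-D tent-filter sketch (illustrative names): the value at a non-integer position is the linear interpolation of the two nearest samples, so the weights are non-negative and the filter has finite extent.

#include <math.h>

double tentFilterAt(const double *samples, int count, double x)
{
    int i0 = (int)floor(x);
    int i1 = i0 + 1;
    double t = x - i0;                            /* fractional offset in [0, 1) */
    if (i0 < 0)
        return samples[0];
    if (i1 >= count)
        return samples[count - 1];
    return (1.0 - t) * samples[i0] + t * samples[i1];
}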
Sampling Pipeline
How is this done today?
Full Screen Antialiasing
Nvidia GeForce2
• OpenGL: render image 400% larger and supersample
• Direct3D: render image 400% - 1600% larger
Nvidia GeForce3
• Multisampling but with fancy overlaps
– Don’t render at higher resolution
– Use one image, but combine values of neighboring pixels
– Beware of recognizable combination artifacts: human perception of patterns is too good
GeForce3
Multisampling
• After each pixel is rendered, write pixel value to two
different places in frame buffer
GeForce3 - Multisampling
After rendering two copies of entire frame
• Shift pixels of Sample #2 left and up by ½ pixel
• Imagine laying Sample #2 (red) over Sample #1 (black)
GeForce3 - Multisampling
Resolve the two samples into one image by computing the average of each pixel from Sample 1 (black) and the four pixels from Sample 2 (red) that are 1/sqrt(2) pixels away
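A minimal sketch of that resolve step, assuming an equal-weight average of the five samples (the slides do not give the exact weights the hardware uses) and clamping at the image border. s1 and s2 are the two rendered copies, with s2 shifted left and up by half a pixel.

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

float resolveMultisample(const float *s1, const float *s2,
                         int w, int h, int x, int y)
{
    int x1 = clampi(x + 1, 0, w - 1);
    int y1 = clampi(y + 1, 0, h - 1);
    float sum = s1[y * w + x]                     /* Sample 1 pixel (black)      */
              + s2[y * w + x]                     /* four Sample 2 pixels (red), */
              + s2[y * w + x1]                    /* each 1/sqrt(2) pixels away  */
              + s2[y1 * w + x]                    /* after the half-pixel shift  */
              + s2[y1 * w + x1];
    return sum / 5.0f;
}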
GeForce3 - Multisampling
[Comparison images: no antialiasing vs. multisampling]
GeForce3 - Multisampling
[Comparison images: 4x supersampling vs. multisampling]
ATI SmoothVision
• Programmer selects sampling pattern
• Some supersampling thrown in
ATI / NVidia comparison
[Comparison images: ATI Radeon 8500 with AA off and with 2x AA vs. NVidia GF3 Ti 500 with Quincunx AA]
http://www.anandtech.com/video/showdoc.html?i=1562&p=4
3dfx
3dfx Multisampling
• 2- or 4-frame shift and average
Tradeoffs?