Transcript File

Advanced Topics:
UNIT - 6
Topics to cover :
 visible surface detection concepts,
 back-face detection,
 depth-buffer method,
 illumination,
 light sources,
 illumination methods (ambient, diffuse reflection, specular reflection),
 color models: properties of light, XYZ, RGB, YIQ and CMY color models
Introduction
 Purpose
 Identify the parts of a scene that are visible from a chosen
viewing position.
 Classification
 Object-Space Methods
 Compare objects and parts of objects to each other within the scene
definition.
 Image-Space Methods
 Visibility is decided point by point at each pixel position on the projection
plane.
 Sorting: facilitates depth comparisons
 Coherence: takes advantage of regularities in a scene
Visible Surface Detection
 A major consideration in the generation of realistic graphics
is determining what is visible within a scene from a chosen
viewing position
 Algorithms to detect visible objects are referred to as visible-surface detection methods
Visible Surface Detection Concepts
 Object space algorithms: determine which objects are
in front of others
 Resize doesn’t require recalculation
 Works for static scenes
 May be difficult to determine
 Image space algorithms: determine
which object is visible at each pixel
 Resize requires recalculation
 Works for dynamic scenes
Back-Face Detection
 A fast and simple object-space method for locating back faces
 A point (x,y,z) is “inside” a polygon surface with plane
parameters A, B, C, D if :
Ax + By + Cz + D < 0
 When an inside point is along the line of sight to the surface,
the polygon must be a back face and so cannot be seen
Back-Face Culling
 Let N = (A, B, C) be the normal to the polygon plane Ax + By + Cz + D = 0, and let V be the viewing direction from the view point
 The polygon is a back face if N · V > 0
 Back-face removal is expected to eliminate about half of the polygon surfaces in a scene
 By itself it handles all hidden surfaces only when the scene contains nothing but nonoverlapping convex polyhedra
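A minimal C++ sketch of this test (not from the original slides; Vec3 and the function names are illustrative):

struct Vec3 { double x, y, z; };

// Dot product of two vectors.
double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// N = (A, B, C) is the polygon's outward plane normal, V is the viewing
// direction; the polygon is a back face (and can be culled) if N . V > 0.
bool isBackFace(const Vec3& N, const Vec3& V) {
    return dot(N, V) > 0.0;
}

// In a right-handed viewing system with the viewing direction along the
// negative zv axis, the test reduces to the sign of C: a polygon is a back
// face if C <= 0 (C = 0 means the face is seen edge-on).
bool isBackFaceAlongZ(const Vec3& N) {
    return N.z <= 0.0;
}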
Depth-Buffer Method
 Depth-buffer(z-buffer) Method
 Object depth: Measured from the view plane along the z axis of
a viewing system.
 Each surface is processed separately.
 Can be applied to planar and nonplanar surfaces
 Requires two buffers: a depth buffer and a frame buffer
 Basic concepts
 Project each surface onto the view plane
 For each pixel position covered, compare the depth values.
 Keep the intensity value of the nearest surface
Depth-Buffer Method
 Initial states
 Depth buffer : Infinity
 Frame buffer : Background color
(Figure: three surfaces S1, S2, S3 overlap at pixel position (x, y) on the view plane; depth is measured along the zv axis of the viewing system.)
Depth Buffer Method
 A commonly used image-space approach
 Each surface is processed separately, one point at a time
 Also called the z-buffer method
 It is generally hardware implemented
Depth Buffer Method
 3 surfaces overlap at (x,y). S1 has the smallest depth value
Depth Buffer Method
 Two buffers are needed
 Depth buffer (distance information)
 Frame buffer (intensity/color information)
Depth Buffer Method
 Depth-Buffer Algorithm
for all (x, y)
    depthBuff(x, y) = 1.0
    frameBuff(x, y) = backgndColor
for each polygon P
    for each position (x, y) on polygon P
        calculate depth z
        if z < depthBuff(x, y) then
            depthBuff(x, y) = z
            frameBuff(x, y) = surfColor(x, y)
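The same loop as a small, self-contained C++ sketch (not the slides' code; each "surface" here is just an axis-aligned rectangle at constant depth, which is enough to show the per-pixel comparison):

#include <vector>

struct Color { float r, g, b; };

struct RectSurface {
    int x0, y0, x1, y1;   // pixel extent of the projected surface
    float depth;          // normalized depth in [0, 1], smaller = nearer
    Color color;
};

void depthBufferRender(const std::vector<RectSurface>& surfaces,
                       int width, int height, Color background,
                       std::vector<float>& depthBuff, std::vector<Color>& frameBuff) {
    depthBuff.assign(width * height, 1.0f);        // initialize depths to "infinity"
    frameBuff.assign(width * height, background);  // initialize frame to background color

    for (const RectSurface& s : surfaces)
        for (int y = s.y0; y <= s.y1; ++y)
            for (int x = s.x0; x <= s.x1; ++x)
                if (s.depth < depthBuff[y * width + x]) {   // nearer than stored depth?
                    depthBuff[y * width + x] = s.depth;
                    frameBuff[y * width + x] = s.color;
                }
}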
Scan-Line Method
 Image space method
 Extension of scan-line algorithm for polygon filling
 As each scan line is processed, all polygon surface projections
intersecting that line are examined to determine which are
visible
A-Buffer Method
 Extension of Depth-buffer ideas
 Drawback of the Depth-buffer method
 Deals only with opaque surfaces
 A-buffer
 Reference a linked list of surfaces
 Antialiased, Area-averaged, Accumulation-buffer
 Each position has two fields
 Depth field
 Store a positive or negative real number
 Intensity field
 Store surface-intensity information or a pointer value
A-Buffer Method
 Organization of an A-buffer
 Single-surface overlap: the depth field holds a positive value (d > 0) and the intensity field holds the surface intensity I
 Multiple-surface overlap: the depth field holds a negative value (d < 0) and the intensity field holds a pointer to a linked list of surface data (Surf 1, Surf 2, ...)
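One way to picture this two-field layout in code (a hypothetical sketch, not from the slides; the field names are assumptions):

struct Color { float r, g, b; };

// One entry in the per-pixel linked list of overlapping surfaces.
struct SurfaceFragment {
    Color color;
    float opacity;              // coverage / transparency information
    float depth;
    SurfaceFragment* next;      // next surface overlapping this pixel
};

// A-buffer pixel: the sign of 'depth' tells which interpretation of the
// second field applies.
struct ABufferPixel {
    float depth;                // >= 0: single surface, < 0: multiple surfaces
    union {
        Color intensity;            // valid when depth >= 0
        SurfaceFragment* fragments; // valid when depth < 0
    };
};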
Scan-Line Method
 An extension of scan-line polygon filling
 Fill multiple surfaces at the same time
 Across each scan line, depth calculations are made for each
overlapping surface
(Figure: two overlapping polygon surfaces S1 and S2 in the view plane (xv, yv), crossed by Scan Line 1, Scan Line 2, and Scan Line 3.)
Scan-Line Method
 Surface tables
 Edge table
 Coordinate endpoints, inverse slope, pointers into the polygon table
 Polygon table
 Plane coefficients, intensity, pointers into the edge table
 A flag for each surface, set on or off to indicate whether the scan position is inside or outside that surface
 Taking advantage of coherence
 In passing from one scan line to the next, if the active edge list is unchanged, no new depth calculations are needed
 Be careful: surfaces must not cut through or cyclically overlap each other
Scan-Line Method
 Overlapping polygon surfaces
(Figure: surfaces that cut through or cyclically overlap one another are split along subdividing lines.)
Depth Sorting Method
 Painter’s Algorithm
 Surfaces are sorted in order of decreasing depth.
 Surfaces are scan converted in order, starting with the surface of
greatest depth.
 Two surfaces with no depth overlap
(Figure: surfaces S and S' whose depth ranges [zmin, zmax] and [z'min, z'max] do not overlap along the zv axis.)
Painter’s Algorithm
Draw surfaces from back (farthest away) to front (closest):
 Sort surfaces/polygons by their depth (z value)
 Draw objects in order (farthest to closest)
 Closer objects paint over the top of farther away objects
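A minimal C++ sketch of this idea (not the slides' code; the Surface field and drawSurface routine are illustrative assumptions):

#include <algorithm>
#include <vector>

// Painter's algorithm sketch: sort surfaces by decreasing depth, then draw
// them back to front so nearer surfaces overwrite farther ones.
struct Surface {
    float zMax;               // greatest depth of the surface from the view plane
    // ... geometry and color data would go here ...
};

void drawSurface(const Surface& s) { /* scan-convert s into the frame buffer (omitted) */ }

void painterRender(std::vector<Surface>& surfaces) {
    std::sort(surfaces.begin(), surfaces.end(),
              [](const Surface& a, const Surface& b) { return a.zMax > b.zMax; });
    for (const Surface& s : surfaces)
        drawSurface(s);       // farthest first, closest last
}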
Depth Sorting Method
 Tests for each surface that overlaps with S in depth:
 The bounding rectangles in the xy plane for the two surfaces do
not overlap.
 Surface S is completely behind the overlapping surface relative
to the viewing position.
 The overlapping surface is completely in front of S relative to
the viewing position.
 The projections of the two surfaces onto the view plane do not
overlap.
If one of these tests is true, no reordering is necessary
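The first test (and the preliminary depth-overlap check) are simple extent comparisons; a hedged C++ sketch, with the SurfExtent fields assumed for illustration:

struct SurfExtent {
    float xMin, xMax, yMin, yMax;   // bounding rectangle in the xy (view) plane
    float zMin, zMax;               // depth extent
};

// Test 1: the xy bounding rectangles of the two surfaces do not overlap.
bool boundingRectanglesDisjoint(const SurfExtent& s, const SurfExtent& t) {
    return s.xMax < t.xMin || t.xMax < s.xMin ||
           s.yMax < t.yMin || t.yMax < s.yMin;
}

// Preliminary check: do the two surfaces overlap in depth at all?
bool depthExtentsOverlap(const SurfExtent& s, const SurfExtent& t) {
    return !(s.zMax < t.zMin || t.zMax < s.zMin);
}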
Depth Sorting Method
(Figure: two surfaces S and S' with depth overlap but no overlap in the x direction; their x extents [xmin, xmax] and [x'min, x'max] do not intersect.)
Depth Sorting Method
(Figure: surface S is completely behind the overlapping surface S'.)
(Figure: overlapping surface S' is completely in front of surface S, but S is not completely behind S'.)
Depth Sorting Method
(Figure: two surfaces with overlapping bounding rectangles in the xy plane.)
Depth Sorting Method
(Figure: surface S has greater depth but obscures surface S'.)
(Figure: three surfaces entered into the sorted surface list in the order S, S', S" should be reordered S', S", S.)
BSP-Tree Method
 Binary Space-Partitioning (BSP) tree
 An efficient method for determining object visibility by painting surfaces onto the screen
 Particularly useful when the view reference point changes, but
the scene is fixed.
 In VR applications, BSP is commonly used for visibility
preprocessing.
 Basic concepts
 Identify surfaces that are “inside” and “outside” the partitioning
plane, relative to the viewing direction
BSP-Tree Method
(Figure: a region of space containing objects A, B, C and D is partitioned by planes P1 and P2; the corresponding BSP tree branches on "front" and "back" at each partitioning plane, with the objects at the leaves.)
BSP-Tree Method
 Partitioning planes
 For polygonal objects, we may choose the planes of object facets.
 Any intersected polygon is split into two parts
 When the BSP tree is complete
 Selecting the surfaces for display from back to front
 Hardware implementations exist for constructing and processing BSP trees
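A rough C++ sketch of the back-to-front traversal once the tree has been built (not the slides' code; the node layout, plane test and drawPolygon routine are illustrative assumptions):

struct Vec3 { double x, y, z; };
struct Polygon { /* vertex, plane and color data */ };

struct BspNode {
    Polygon polygon;          // polygon lying in this node's partitioning plane
    Vec3 planeNormal;         // N of the plane equation N . P + D = 0
    double planeD;
    BspNode* front = nullptr; // subtree on the front side of the plane
    BspNode* back = nullptr;  // subtree on the back side of the plane
};

void drawPolygon(const Polygon& p) { /* scan-convert into the frame buffer (omitted) */ }

// Paint surfaces farthest first: recurse into the subtree on the far side of
// the plane from the eye, draw this node's polygon, then do the near side.
void renderBackToFront(const BspNode* node, const Vec3& eye) {
    if (!node) return;
    double side = node->planeNormal.x * eye.x + node->planeNormal.y * eye.y
                + node->planeNormal.z * eye.z + node->planeD;
    if (side > 0.0) {                        // eye is on the front side
        renderBackToFront(node->back, eye);
        drawPolygon(node->polygon);
        renderBackToFront(node->front, eye);
    } else {                                 // eye is on (or in) the back side
        renderBackToFront(node->front, eye);
        drawPolygon(node->polygon);
        renderBackToFront(node->back, eye);
    }
}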
Area Subdivision Method
 Take advantage of area coherence in a scene
 Locating those view areas that represent part of a single surface.
 Recursively dividing the total viewing area into smaller and
smaller rectangles.
 Fundamental tests
 Identify an area as part of a single surface, by comparing surfaces against the boundary of the area
 If the area is too complex to analyze easily, subdivide it into smaller rectangles
Area Subdivision Method
 Four possible relationships that a surface can have with a
specified area boundary:
 Surrounding surface
 One that completely encloses the area.
 Overlapping surface
 One that is partly inside and partly outside the area.
 Inside surface
 One that is completely inside the area.
 Outside surface
 One that is completely outside the area.
Area Subdivision Method
(Figure: examples of a surrounding surface, an overlapping surface, an inside surface and an outside surface relative to an area boundary.)
Area Subdivision Method
 No further subdivisions of a specified area are needed if one
of the following conditions is true:
 All surfaces are outside surfaces with respect to the area.
 Check bounding rectangles of all surfaces
 Only one inside, overlapping, or surrounding surface is in the
area.
 Check bounding rectangles first and other checks
 A surrounding surface obscures all other surfaces within the
area boundaries.
Area Subdivision Method
 Identifying an obscuring surrounding surface (method 1)
 Using depth sorting
 Order surfaces according to their minimum depth from the view plane.
 Compute the maximum depth of each surrounding surface.
(Figure: within the area, a surrounding surface with maximum depth zmax obscures every surface that lies entirely behind zmax.)
Area Subdivision Method
 Identifying an obscuring surrounding surface (method 2)
 Using plane equations
 Calculate depth values at the four vertices of the area for all surrounding,
overlapping, and inside surfaces.
 If the calculated depths for one of the surrounding surfaces are less than those for all other surfaces, that surrounding surface obscures the area
 If both methods fail to identify an obscuring surrounding surface
 It is faster to subdivide the area than to continue with more
complex testing.
Area Subdivision Method
 Subdivide areas along surface boundaries
 Sort the surfaces according to the minimum depth
 Use the surface with the smallest depth value to subdivide a
given area.
(Figure: area A is subdivided into A1 and A2 along the boundary of surface S, the surface with the smallest depth.)
Octree Methods
 Octree representation
 Projecting octree nodes onto the viewing surface in a front-to-
back order
(Figure: the eight octants of a region, numbered 0-7, processed in front-to-back order from the viewing direction.)
Ray-casting Method
 Algorithm
 Cast a ray from the viewpoint through each pixel and keep the front-most surface it intersects
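A self-contained C++ sketch of that idea (not from the slides; spheres are used as the only surface type so the example stays short):

#include <cmath>
#include <limits>
#include <optional>
#include <vector>

struct Vec3 { double x, y, z; };
struct Color { float r, g, b; };

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

struct Ray { Vec3 origin, dir; };                 // dir assumed normalized
struct Sphere { Vec3 center; double radius; Color color; };

// Smallest positive ray parameter t at which the ray hits the sphere, if any.
std::optional<double> intersect(const Ray& ray, const Sphere& s) {
    Vec3 oc = sub(ray.origin, s.center);
    double b = 2.0 * dot(oc, ray.dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * c;
    if (disc < 0.0) return std::nullopt;
    double t = (-b - std::sqrt(disc)) / 2.0;
    if (t <= 0.0) return std::nullopt;
    return t;
}

// For one pixel ray: test every surface and keep the nearest intersection.
Color castRay(const std::vector<Sphere>& scene, const Ray& ray, Color background) {
    double nearest = std::numeric_limits<double>::infinity();
    Color result = background;
    for (const Sphere& s : scene) {
        std::optional<double> t = intersect(ray, s);
        if (t && *t < nearest) {              // front-most surface so far
            nearest = *t;
            result = s.color;
        }
    }
    return result;
}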
Ray-casting Method
 Comments
 O(p log n) for p pixels
 May (or may not) utilize pixel-to-pixel coherence
 Conceptually simple, but not generally used
Curved Surfaces
 Visibility detection for objects with curved surfaces
 Ray casting
 Calculating ray-surface intersection
 Locating the smallest intersection distance
 Octree
 Approximation of a curved surface as a set of planar polygon surfaces
 Replace each curved surface with a polygon mesh
 Use one of the other hidden-surface methods
Curved Surfaces
 Curved-Surface Representations
 Explicit surface equation
z = f ( x, y )
 Surface contour plots
 Plot the visible-surface contour lines y = f(x, z)
 Eliminate those contour sections that are hidden by the visible parts of the surface
 One way: maintain a list of ymin and ymax values; a contour point with ymin <= y <= ymax is not visible
Wireframe Methods
 Wireframe visibility methods
 Visible-line detection, hidden-line detection method
 Visibility tests are applied to surface edges
 Direct approach
 Clip each line against the surfaces; for each line, depth values are compared to the surfaces it crosses
 Hidden-line removal
 Use scan-line methods, or depth sorting with polygon interiors drawn in the background color
Wireframe Methods
(Figure: a line that (a) passes behind a surface and (b) penetrates a surface.)
Properties of Light
 What is light?
 “light” = narrow frequency band of electromagnetic spectrum
 The Electromagnetic Spectrum
 Red: 3.8 × 10^14 hertz
 Violet: 7.9 × 10^14 hertz
Spectrum of Light
 Monochrome light can be described by frequency f and
wavelength λ
 c = λ f (c = speed of light)
 Normally, a ray of light
contains many different
waves with individual
frequencies
 The associated distribution of intensity per wavelength is referred to as the spectrum of a given ray or light source
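As a quick check of c = λf (approximate values, not given on the slide), for the red end of the spectrum:

\lambda = \frac{c}{f} \approx \frac{3 \times 10^{8}\ \mathrm{m/s}}{3.8 \times 10^{14}\ \mathrm{Hz}} \approx 7.9 \times 10^{-7}\ \mathrm{m} \approx 790\ \mathrm{nm}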
The Human Eye
Psychological Characteristics of Color
 Dominant frequency (hue, color)
 Brightness (area under the curve), total light energy
 Purity (saturation): how close a light appears to be to a pure spectral color, such as red
 Purity = ED − EW
 ED = dominant energy density
 EW = white light energy density
 Chromaticity, used to refer collectively to the two properties describing color
characteristics: purity and dominant frequency
Intuitive Color Concepts
 Color mixing created by an artist
 Shades, tints and tones in scene can be produced by mixing color
pigments (hues) with white and black pigments
 Shades
 Add black pigment to pure color
 The more black pigment, the darker the shade
 Tints
 Add white pigment to the original color
 Making it lighter as more white is added
 Tones
 Produced by adding both black and white pigments
Color Matching Experiments
 Observers had to match
a test light by
combining three fixed
primaries
 Goal: find the unique
RGB coordinates for
each stimulus
Tristimulus Values
 The values RQ, GQ and BQ for a stimulus Q that fulfill
Q = RQ*R + GQ*G + BQ*B
are called the tristimulus values of Q
 R = 700.0 nm
 G = 546.1 nm
 B = 435.8 nm
Negative Light in a CME
 if a match using only positive RGB values proved
impossible, observers could simulate a subtraction of red
from the match side by adding it to the test side
Color Models
 Method for explaining the properties or behavior of color within some particular context
 Combine the light from two or more sources with different dominant frequencies and vary the intensity of the light to generate a range of additional colors
Primary Colors
 3 primaries are sufficient for most purposes
 The hues that we choose for the sources
 Color gamut: the set of all colors that we can produce from the primary colors
 Complementary colors: two colors that combine to produce white
 Red and Cyan, Green and Magenta, Blue and Yellow
Color-Matching
 Colors in the vicinity of 500 nm can be matched by subtracting an
amount of red light from a combination of blue and green lights
 Thus, an RGB color monitor cannot display colors in the
neighborhood of 500 nm
CIE XYZ
 Problem solution: XYZ color system
 Tristimulus system derived from RGB
 Based on 3 imaginary primaries
 All 3 primaries are outside the human visual gamut
 Only positive XYZ values can occur
 1931 by CIE (Commission Internationale de l’Eclairage)
Transformation CIE RGB->XYZ
 Projective transformation specifically designed so that Y
= V (luminous efficiency function)
 XYZ → CIE RGB uses the inverse matrix
 XYZ → any particular RGB space is a device-dependent matrix
X = 0.723R + 0.273G + 0.166B
Y = 0.265R + 0.717G + 0.008B
Z = 0.000R + 0.008G + 0.824B
The XYZ Model
 The set of CIE primaries is referred to as the XYZ model
 In XYZ color space, color C (λ) represented as
C (λ) = (X,Y, Z)
where X Y Z are calculated from the color-matching functions
X = k ∫ fx(λ) I(λ) dλ
Y = k ∫ fy(λ) I(λ) dλ       (integrals taken over the visible spectrum)
Z = k ∫ fz(λ) I(λ) dλ
where
k = 683 lumens/watt
I(λ) = spectral radiance
fx, fy, fz = color-matching functions
C(λ) = X X + Y Y + Z Z   (X, Y, Z denote the XYZ primary vectors)
The XYZ Model
 Normalized XYZ values
 Normalize the amounts of each primary against the sum X + Y + Z, which represents the total light energy
x = X / (X + Y + Z),   y = Y / (X + Y + Z),   z = Z / (X + Y + Z)
 Since z = 1 − x − y, a color can be represented with just x and y
 x and y are called chromaticity values; they depend only on hue and purity
 Y is the luminance; given (x, y) and Y, the remaining values are recovered as
X = (x / y) Y,   Z = (z / y) Y
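A small C++ sketch of these normalizations (illustrative only; the struct names are assumptions):

struct XYZ { double X, Y, Z; };
struct xyY { double x, y, Y; };

// XYZ -> chromaticity (x, y) plus luminance Y.
xyY toChromaticity(const XYZ& c) {
    double sum = c.X + c.Y + c.Z;           // total light energy
    return { c.X / sum, c.Y / sum, c.Y };
}

// (x, y, Y) -> XYZ, using z = 1 - x - y; assumes y is nonzero.
XYZ toXYZ(const xyY& c) {
    double z = 1.0 - c.x - c.y;
    return { (c.x / c.y) * c.Y, c.Y, (z / c.y) * c.Y };
}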
RGB vs. XYZ
The CIE Chromaticity Diagram
 A tongue-shape curve formed by
plotting the normalized amounts x
and y for colors in the visible
spectrum
 Points along the curve are spectral
color (pure color)
 Purple line, the line joining the red
and violet spectral points
 Illuminant C, plotted for a white
light source and used as a standard
approximation for average daylight
(Figure: the CIE chromaticity diagram, with the spectral colors along the tongue-shaped curve, the purple line joining its ends, and illuminant C.)
The CIE Chromaticity Diagram
 Luminance values are not available because of normalization
 Colors with different luminance but same chromaticity map to the
same point
 Usage of CIE chromaticity diagram
 Comparing color gamuts for different set of primaries
 Identifying complementary colors
 Determining purity and dominant wavelength for a given color
 Color gamuts
 Identify color gamuts on diagram as straight-line segments or polygon
regions
The CIE Chromaticity Diagram
 Color gamuts
 All colors along the straight line joining C1 and C2 can be obtained by mixing colors C1 and C2
 The greater the proportion of C1 used, the closer the resultant color is to C1 than to C2
 The color gamut for three primaries C3, C4 and C5 consists of the colors inside or on the edges of the triangle they form
 No set of three primaries can be combined to generate all colors
The CIE Chromaticity Diagram
 Complementary colors
 Represented on the diagram as
two points on opposite sides of
C and collinear with C
 The distances of the two colors C1 and C2 from C determine the amount of each needed to produce white light
The CIE Chromaticity Diagram
 Dominant wavelength
 Draw a straight line from C through the color point to a spectral color on the curve; that spectral color is the dominant wavelength
 Special case: for a point that lies between C and a point Cp on the purple line, take the complementary spectral color Csp (on the opposite side of C) as the dominant wavelength
 Purity
 For a point C1, the purity is determined as the relative distance of C1 from C along the straight line joining C to the spectral point Cs
 Purity ratio = dC1 / dCs
Complementary Colors
 Additive: when blue and yellow light are added together, they produce white light; blue contributes one-third of the white and yellow (red + green) contributes two-thirds
 Pairs of additive complementary colors: blue and yellow, green and magenta, red and cyan
 Subtractive: orange (between red and yellow) is complementary to cyan-blue, and green-cyan is complementary to magenta-red
The RGB Color Model
 Basic theory of RGB color model
 The tristimulus theory of vision
 It states that human eyes perceive color through the stimulation of three visual pigments in the cones of the retina
 Red, Green and Blue
 Model can be represented by the unit cube defined on R,G and B axes
The RGB Color Model
 An additive model, as with the XYZ
color system
 Each color point within the unit
cube can be represented as a
weighted vector sum of the primary
colors, using vectors R, G and B
 C(λ) = (R, G, B) = R·R + G·G + B·B
 Chromaticity coordinates for the National Television System Committee (NTSC) standard RGB primaries
Subtractive RGB Colors
(Figure: overlapping cyan, magenta and yellow pigments produce red, green, blue and black.)
Yellow absorbs Blue
Magenta absorbs Green
Cyan absorbs Red
White minus Blue minus Green = Red
The CMY and CMYK Color Models
 Color models for hard-copy devices, such as printers
 Produce a color picture by coating a paper with color pigments
 Obtain color patterns on the paper by reflected light, which is a subtractive process
 The CMY parameters
 A subtractive color model can be formed with the primary colors cyan, magenta and yellow
 Unit cube representation for the CMY model with white at origin
The CMY and CMYK Color Models
 Transformation between RGB and
CMY color spaces
 Transformation matrix of conversion
from RGB to CMY
C = 1 − R,   M = 1 − G,   Y = 1 − B   (in matrix form, [C M Y] = [1 1 1] − [R G B])
 Transformation matrix of conversion
from CMY to RGB
R = 1 − C,   G = 1 − M,   B = 1 − Y   (in matrix form, [R G B] = [1 1 1] − [C M Y])
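These conversions are short enough to write directly; a hedged C++ sketch (the CMYK step uses the common K = min(C, M, Y) convention, which the slides do not spell out):

#include <algorithm>

// All channel values are assumed normalized to [0, 1].
struct RGB  { float r, g, b; };
struct CMY  { float c, m, y; };
struct CMYK { float c, m, y, k; };

CMY rgbToCmy(const RGB& in) { return { 1.0f - in.r, 1.0f - in.g, 1.0f - in.b }; }
RGB cmyToRgb(const CMY& in) { return { 1.0f - in.c, 1.0f - in.m, 1.0f - in.y }; }

// One common CMY -> CMYK convention: pull out the shared black component
// K = min(C, M, Y) and rescale the remaining inks.
CMYK cmyToCmyk(const CMY& in) {
    float k = std::min(in.c, std::min(in.m, in.y));
    if (k >= 1.0f) return { 0.0f, 0.0f, 0.0f, 1.0f };   // pure black
    float scale = 1.0f / (1.0f - k);
    return { (in.c - k) * scale, (in.m - k) * scale, (in.y - k) * scale, k };
}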
The YIQ and Related Color Models
 YIQ, NTSC color encoding for forming the composite
video signal
 YIQ parameters
 Y, the same as the Y component in the CIE XYZ color space: luminance
 Calculated Y from the RGB equations
 Y = 0.299 R + 0.587 G + 0.114 B
 Chromaticity information (hue and purity) is incorporated with I and Q
parameters, respectively
 Calculated by subtracting the luminance from the red and blue
components of color
 I = R –Y
 Q = B –Y
 Separate luminance or brightness from color, because we perceive
brightness ranges better than color
The YIQ and Related Color Models (2)
 Transformation between RGB and YIQ color spaces
 Transformation matrix of conversion from RGB to YIQ
 Transformation matrix of conversion from YIQ to RGB
 Obtain from the inverse matrix of the RGB to YIQ conversion
RGB → YIQ:
Y =  0.299 R + 0.587 G + 0.114 B
I =  0.701 R − 0.587 G − 0.114 B
Q = −0.299 R − 0.587 G + 0.886 B

YIQ → RGB (inverse matrix):
R = Y + I
G = Y − 0.509 I − 0.194 Q
B = Y + Q
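Using the definitions above (Y from the luminance equation, I = R − Y, Q = B − Y), the conversion is only a few lines; an illustrative C++ sketch:

// Channel values assumed normalized to [0, 1].
struct RGB { float r, g, b; };
struct YIQ { float y, i, q; };

YIQ rgbToYiq(const RGB& in) {
    float y = 0.299f * in.r + 0.587f * in.g + 0.114f * in.b;   // luminance
    return { y, in.r - y, in.b - y };                          // I = R - Y, Q = B - Y
}

RGB yiqToRgb(const YIQ& in) {
    return { in.y + in.i,                                      // R = Y + I
             in.y - 0.509f * in.i - 0.194f * in.q,             // G = Y - 0.509 I - 0.194 Q
             in.y + in.q };                                     // B = Y + Q
}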
The HSV Color Model
 Interface for selecting colors often use a color model based on intuitive
concepts rather than a set of primary colors
 The HSV parameters
 Color parameters are hue (H), saturation (S) and value (V)
 Derived by relating the HSV parameters to the direction in the RGB cube
 Obtain a color hexagon by viewing the RGB cube along the diagonal from the white
vertex to the origin
The HSV Color Model
 The HSV hexcone
 Hue is represented as an angle about the vertical axis, ranging from 0° at red around the hexcone and back to 360°
 Saturation parameter is used to designate the purity of a color
 Value is measured along a vertical axis through center of hexcone
HSV Color Model Hexcone
 Color components:
 Hue (H) ∈ [0°, 360°]
 Saturation (S) ∈ [0, 1]
 Value (V) ∈ [0, 1]
HSV Color Definition
 Color definition
 Select hue, S=1, V=1
 Add black pigments, i.e., decrease
V
 Add white pigments, i.e., decrease
S
 Cross section of the HSV
hexcone showing regions for
shades, tints, and tones
HSV
 Hue is the most obvious characteristic of a color
 Chroma is the purity of a color
 High chroma colors look rich and full
 Low chroma colors look dull and grayish
 Sometimes chroma is called saturation
 Value is the lightness or darkness of a color
 Sometimes light colors are called tints, and
 Dark colors are called shades
Transformation
 To move from RGB space to HSV space:
 Can we use a matrix? No, it’s non-linear.
min = the minimum of the r, g, b values
max = the maximum of the r, g, b values

h = 0                                       if max = min
h = 60 · (g − b) / (max − min) + 0          if max = r and g ≥ b
h = 60 · (g − b) / (max − min) + 360        if max = r and g < b
h = 60 · (b − r) / (max − min) + 120        if max = g
h = 60 · (r − g) / (max − min) + 240        if max = b

s = 0                     if max = 0
s = (max − min) / max     otherwise

v = max
The HSV Color Model
 Transformation between HSV and RGB color spaces
 Procedure for mapping HSV into RGB
#include <cmath>

class rgbSpace { public: float r, g, b; };
class hsvSpace { public: float h, s, v; };

/* HSV and RGB values are in the range from 0 to 1.0 */
void hsvTOrgb (hsvSpace& hsv, rgbSpace& rgb)
{
    float h = hsv.h, s = hsv.s, v = hsv.v;
    float r, g, b;
    int k;
    float aa, bb, cc, f;

    if (s <= 0.0)
        r = g = b = v;               /* Gray scale if s = 0 */
    else {
        if (h == 1.0) h = 0.0;
        h *= 6.0;
        k = (int) floor (h);
        f = h - k;
        aa = v * (1.0 - s);
        bb = v * (1.0 - s * f);
        cc = v * (1.0 - s * (1.0 - f));
        switch (k) {
            case 0: r = v;  g = cc; b = aa; break;
            case 1: r = bb; g = v;  b = aa; break;
            case 2: r = aa; g = v;  b = cc; break;
            case 3: r = aa; g = bb; b = v;  break;
            case 4: r = cc; g = aa; b = v;  break;
            case 5: r = v;  g = aa; b = bb; break;
        }
    }
    rgb.r = r; rgb.g = g; rgb.b = b;
}
The HSV Color Model
 Transformation between HSV and RGB color spaces
 Procedure for mapping RGB into HSV
/* Uses the rgbSpace and hsvSpace classes declared above */
const float noHue = -1.0;

inline float min (float a, float b) { return (a < b) ? a : b; }
inline float max (float a, float b) { return (a > b) ? a : b; }

void rgbTOhsv (rgbSpace& rgb, hsvSpace& hsv)
{
    float r = rgb.r, g = rgb.g, b = rgb.b;
    float minRGB = min (r, min (g, b)), maxRGB = max (r, max (g, b));
    float deltaRGB = maxRGB - minRGB;
    float h, s, v;

    v = maxRGB;
    if (maxRGB != 0.0) s = deltaRGB / maxRGB;
    else               s = 0.0;

    if (s <= 0.0)
        h = noHue;
    else {
        if (r == maxRGB)      h = (g - b) / deltaRGB;
        else if (g == maxRGB) h = 2.0 + (b - r) / deltaRGB;
        else                  h = 4.0 + (r - g) / deltaRGB;   /* b == maxRGB */
        h *= 60.0;
        if (h < 0.0) h += 360.0;
        h /= 360.0;
    }
    hsv.h = h; hsv.s = s; hsv.v = v;
}
The HLS Color Model
 HLS color model
 Another model based on intuitive
color parameter
 Used by the Tektronix
Corporation
 The color space has a double-cone representation
 Used hue (H), lightness (L) and
saturation (S) as parameters
Color Model Summary
 Colorimetry:
 CIE XYZ: contains all visible colors
 Device Color Systems:
 RGB: additive device color space (monitors)
 CMYK: subtractive device color space (printers)
 YIQ: NTSC television (Y=luminance, I=R-Y, Q=B-Y)
 Color Ordering Systems:
 HSV, HLS: for user interfaces
Comparison
RGB
CMY
YIQ
CMYK
HSV
HSL
Color Selection and Applications
 Graphics packages provide color capabilities in a way that aids users in making color selections
 For example, sliders and color wheels for RGB components instead of numerical values
 Color application guidelines
 Displaying a blue pattern next to a red pattern can cause eye fatigue
 Prevent this by separating these colors, or by using colors from one-half or less of the color hexagon in the HSV model
 A smaller number of colors produces a better-looking display
 Tints and shades tend to blend better than pure hues
 Gray or the complement of one of the foreground colors is usually best for the background
How different are the
colors of square A
and square B?
They are
the same!
Don’t
believe me?
What color is this
blue cube?
How about this
yellow cube?
Want to see it slower?
What color is this
blue cube?
How about this
yellow cube?
Even slower?
What color is this
blue cube?
How about this
yellow cube?
So what color is it?
What color is this
blue cube?
It’s gray!
How about this
yellow cube?
Humans Only Perceive Relative
Brightness
Cornsweet Illusion
Cornsweet illusion. The left part of the picture seems darker than the right one; in fact, they have the same brightness.
The same image, but with the edge in the middle hidden: the left and right parts of the image now appear to be the same color.
Self-Animated Images