How to Make Printed and Displayed Images
Have High Visual Quality
Brian L. Evans
Embedded Signal Processing Laboratory
The University of Texas at Austin
Austin TX 78712-1084
http://www.ece.utexas.edu/~bevans
Ph.D. Graduates: Niranjan Damera-Venkata (HP Labs), Thomas D. Kite (Audio Precision)
Graduate Student: Vishal Monga
Support: HP Labs, National Science Foundation
Outline
• Introduction
• Grayscale halftoning for printing
– Screening
– Error diffusion
– Direct binary search
– Linear human visual system model
• Color halftoning for display
– Optimal design
– Linear human visual system model
• Conclusion
Need for Digital Image Halftoning
• Many devices are incapable of reproducing grayscale
(e.g., eight bits per pixel, or gray levels from 0 to 255)
– Laser and inkjet printers
– Facsimile machines
– Low-cost liquid crystal displays
• Grayscale imagery binarized for these devices
• Halftoning tries to reproduce full range of gray
while preserving quality and spatial resolution
– Screening methods are fast and simple
– Error diffusion gives better results on some media
Digital Halftoning Methods
– Clustered dot screening: AM halftoning
– Dispersed dot screening: FM halftoning
– Error diffusion: FM halftoning, 1975
– Blue-noise mask: FM halftoning, 1993
– Green-noise halftoning: AM-FM halftoning, 1992
– Direct binary search: FM halftoning, 1992
Screening (Masking) Methods
• Periodic array of thresholds smaller than image
– Spatial resampling leads to aliasing (gridding effect)
– Clustered dot screening produces a coarse image that is
more resistant to printer defects such as ink spread
– Dispersed dot screening has higher spatial resolution
– Blue noise masking uses a large array of thresholds
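A minimal sketch of threshold-array screening, assuming a standard 4x4 Bayer dispersed dot array (the array values and function names here are illustrative, not from the talk):

import numpy as np

# 4x4 Bayer (dispersed dot) threshold array, scaled to the 0-255 range.
BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) * (256 / 16)

def screen(image, thresholds=BAYER4):
    """Halftone a grayscale image (uint8) by tiling a periodic threshold array."""
    h, w = image.shape
    th, tw = thresholds.shape
    tiled = np.tile(thresholds, (h // th + 1, w // tw + 1))[:h, :w]
    return (image >= tiled).astype(np.uint8)  # 1 = white dot, 0 = black dot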
Grayscale Error Diffusion
• Shape quantization noise into high frequencies
• Design of error filter key to quality
• Not a screening technique
[Block diagram: the input x(m) minus the diffused past errors gives u(m);
a threshold quantizer produces the halftone b(m); the quantization error
e(m) = b(m) − u(m) is filtered by h(m) (Floyd-Steinberg weights 7/16,
3/16, 5/16, 1/16) and fed back. The structure is a 2-D form of
sigma-delta modulation. Inset: spectrum of the shaped noise.]
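A minimal sketch of grayscale error diffusion with the Floyd-Steinberg weights above, assuming a plain left-to-right raster scan (many implementations use a serpentine scan instead):

import numpy as np

def floyd_steinberg(image):
    """Grayscale error diffusion; weights 7/16, 3/16, 5/16, 1/16."""
    x = image.astype(np.float64) / 255.0      # normalize to [0, 1]
    h, w = x.shape
    out = np.zeros((h, w))
    for m in range(h):
        for n in range(w):
            u = x[m, n]                        # modified input u(m)
            b = 1.0 if u >= 0.5 else 0.0       # threshold quantizer
            out[m, n] = b
            e = b - u                          # quantization error e(m)
            # Diffuse the error to unprocessed neighbors.
            if n + 1 < w:
                x[m, n + 1] -= e * 7 / 16      # right
            if m + 1 < h:
                if n > 0:
                    x[m + 1, n - 1] -= e * 3 / 16  # lower left
                x[m + 1, n] -= e * 5 / 16          # lower
                if n + 1 < w:
                    x[m + 1, n + 1] -= e * 1 / 16  # lower right
    return out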
Simple Noise Shaping Example
• Two-bit output device and four-bit input words
– Going from 4 bits down to 2 increases noise by ~12 dB
– Shaping eliminates noise at DC at the expense of increased
noise at high frequency
• Assume a constant input of 1001: the two truncated bits are fed
back through a one-sample delay and added to the input

Time  Input  Feedback  Sum   Output
1     1001   00        1001  10
2     1001   01        1010  10
3     1001   10        1011  10
4     1001   11        1100  11

The output is periodic with average 1/4 (10+10+10+11) = 10.01:
4-bit resolution at DC!
[Spectrum sketch: the shaped noise rises ~12 dB (2 bits) toward high
frequency; a signal confined to the low band sees less noise.]
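A minimal simulation of the feedback loop in this example (the function name and step count are illustrative):

def noise_shape(input_word=0b1001, n_steps=8):
    """First-order noise shaping: keep the top 2 of 4 bits and feed the
    truncated 2 bits back through a one-sample delay."""
    feedback, outputs = 0, []
    for _ in range(n_steps):
        s = input_word + feedback   # 4-bit sum of input and delayed remainder
        outputs.append(s >> 2)      # top 2 bits go to the output device
        feedback = s & 0b11         # bottom 2 bits are delayed one sample
    return outputs

# Outputs cycle 10, 10, 10, 11; their average is 10.01 (4-bit resolution at DC).
print([format(o, '02b') for o in noise_shape()])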
Modeling Grayscale Error Diffusion
• Sharpening caused by a correlated error image
[Knox, 1992]
[Halftones and error images for the Floyd-Steinberg and Jarvis filters.]
Modeling Grayscale Error Diffusion
• Apply sigma-delta modulation analysis to 2-D
– Linear gain model for quantizer in 1-D [Ardalan and Paulos, 1988]
– Linear gain model for grayscale image [Kite, Evans, Bovik, 2000]
– Signal transfer function (STF), noise transfer function (NTF)
Bs z 
Ks
STF 

X (z) 1  K s  1H z 
u(m)
b(m)
Q(.)
NTF 
Bn z 
 1  H (z )
N (z )
UT Center for
Perceptual Systems
Ks us(m)
us(m)
{
Ks
n(m)
un(m)
+
Signal Path
Kn un(m) + n(m)
Noise Path
1 – H(z) is highpass so H(z) is lowpass
9
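A quick numerical check of the noise transfer function, assuming the Floyd-Steinberg error filter: sampling 1 − H(z) on an FFT grid shows it is zero at DC and large at high frequencies.

import numpy as np

N = 256
h = np.zeros((N, N))          # error filter taps, current pixel at (0, 0)
h[0, 1]  = 7 / 16             # right neighbor
h[1, -1] = 3 / 16             # next row, one left (wraps on the FFT grid)
h[1, 0]  = 5 / 16             # next row, same column
h[1, 1]  = 1 / 16             # next row, one right
H = np.fft.fft2(h)
NTF = np.abs(1 - H)
print(NTF[0, 0])              # ~0: noise is eliminated at DC
print(NTF[N // 2, N // 2])    # 1.5: noise is amplified at the highest frequency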
Problems with Error Diffusion
• Objectionable artifacts
– Scan order affects results
– “Worminess” visible in constant graylevel areas
• Image sharpening
– Error filters due to [Jarvis, Judice & Ninke, 1976] and [Stucki, 1980]
reduce worminess and sharpen edges
– Sharpening is not always desirable; it can be adjusted by
prefiltering based on the linear gain model [Kite, Evans, Bovik, 2000]
• Computational complexity
– Larger error filters require more operations per pixel
– Push towards simple schemes for fast printing
Direct Binary Search
• Minimize mean-squared error between lowpass
filtered versions of grayscale and halftone images
– The lowpass filter is based on a linear model of the human visual
system (contrast sensitivity function)
– Iterative method that gives a practical upper bound on the
achievable quality for a halftone of an original image
• Each iteration visits every pixel
– At each pixel, consider changing the state of the pixel (toggle)
or swapping it with each of its 8 nearest neighbors that
differ in state from it
– Terminate when no pixels are changed in an iteration
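A heavily simplified sketch of one DBS iteration, assuming toggles only (no swaps) and a Gaussian as a stand-in for the HVS lowpass filter; a practical implementation updates the filtered error locally rather than refiltering the whole image per trial:

import numpy as np
from scipy.ndimage import gaussian_filter

def dbs_pass(gray, halftone, sigma=1.5):
    """One DBS pass: accept a pixel toggle when it lowers the MSE
    between the lowpass-filtered halftone and grayscale images."""
    changed = 0
    best = np.sum(gaussian_filter(halftone - gray, sigma) ** 2)
    for m in range(gray.shape[0]):
        for n in range(gray.shape[1]):
            halftone[m, n] = 1 - halftone[m, n]   # trial toggle
            trial = np.sum(gaussian_filter(halftone - gray, sigma) ** 2)
            if trial < best:
                best, changed = trial, changed + 1
            else:
                halftone[m, n] = 1 - halftone[m, n]  # revert
    return changed   # iterate until a pass returns 0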
Direct Binary Search
• Advantages
– Significantly improved image quality over screening and
error diffusion methods
– Quality of the final solution is relatively insensitive to the
choice of starting point (initial halftone)
– Has application in off-line design of threshold arrays for
screening methods
• Disadvantages
– Computational cost and memory usage are very high in
comparison to error diffusion and screening methods
– Increased complexity makes it unsuitable for real-time
applications such as printing
Contrast Sensitivity Function
• Contrast needed at a
particular spatial
frequency for visibility
– Angular dependence modeled
with a cosine function
– Modify it at low frequency to
be lowpass, which gives
better correlation with
psychovisual results
– Useful in image quality
metrics for optimization and
performance evaluation
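One widely used closed form is the Mannos-Sakrison CSF; a sketch with the low-frequency modification described above (the peak-frequency value is approximate):

import numpy as np

def csf(f):
    """Mannos-Sakrison contrast sensitivity at spatial frequency f
    (cycles/degree), normalized to a unit peak."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_lowpass(f, f_peak=8.0):
    """Hold the CSF at its peak value below the peak frequency so the
    model is lowpass rather than bandpass."""
    return np.where(f < f_peak, csf(f_peak), csf(f))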
Color Halftoning by Error Diffusion for Display
• Input image has a vector of values (e.g., red-green-blue) at each pixel
– Error filter has matrix-valued coefficients
– Algorithm for adapting matrix coefficients
[Akarun, Yardimci, Cetin 1997]
[Block diagram: same structure as grayscale error diffusion, but x(m),
u(m), b(m), and e(m) are vectors and the error filter h(m) has
matrix-valued coefficients.]

t(m) = Σ_k h(k) e(m − k)    (h(k): matrix; e(m − k): vector)
Matrix Gain Model for the Quantizer
• Replace the scalar gain with a matrix
K_s = argmin_A E[ ||b(m) − A u(m)||^2 ] = C_bu C_uu^(-1)
K_n = I
where u(m) is the quantizer input and b(m) is the quantizer output
– Noise uncorrelated with signal component of quantizer input
– Convolution becomes matrix-vector multiplication in the
frequency domain
Signal component of output: B_s(z) = K_s [I + H(z)(K_s − I)]^(-1) X(z)
Noise component of output: B_n(z) = [I − H(z)] N(z)
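A sketch of estimating the matrix gain from quantizer input/output samples, with sample covariances standing in for the expectations in K_s = C_bu C_uu^(-1):

import numpy as np

def matrix_gain(U, B):
    """Ks = Cbu * inv(Cuu) from samples: U and B are (3, N) arrays of
    quantizer inputs u(m) and outputs b(m)."""
    U = U - U.mean(axis=1, keepdims=True)   # remove means
    B = B - B.mean(axis=1, keepdims=True)
    Cbu = B @ U.T / U.shape[1]              # output/input cross-covariance
    Cuu = U @ U.T / U.shape[1]              # input autocovariance
    return Cbu @ np.linalg.inv(Cuu)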
Optimum Color Noise Shaping
• Vector color error diffusion halftone model
– We use the matrix gain model [Damera-Venkata and Evans, 2001]
– Predicts signal frequency distortion
– Predicts shaped color halftone noise
• Visibility of halftone noise depends on
– Model predicting noise shaping
– Human visual system model (assume linear shift-invariant)
• Formulation of design problem
– Given the HVS model and the matrix gain model, find the color
error filter that minimizes the average visible noise power
subject to diffusion constraints
Linear Color Vision Model
• Pattern-Color separable model [Poirson and Wandell, 1993]
– Forms the basis for S-CIELab [Zhang and Wandell, 1996]
– Pixel-based color transformation
[Diagram: a pixelwise color transformation into an opponent
representation (B-W, R-G, B-Y), followed by spatial filtering of
each opponent channel.]
Linear Color Vision Model
• Undo gamma correction on RGB image
• Color separation
– Measure power spectral distribution of RGB phosphor excitations
– Measure absorption rates of long, medium, short (LMS) cones
– Device-dependent transformation C from RGB to LMS space
– Transform LMS to opponent representation using O
– Color separation may be expressed as T = OC
• Spatial filtering is incorporated using a matrix filter
• Linear color vision model:
v(m) = d(m) * T x(m), where d(m) is a diagonal matrix-valued filter
(one spatial filter per opponent channel) and * denotes convolution
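A sketch of the model, assuming placeholder C and O matrices and Gaussian filters as stand-ins for the calibrated opponent-channel filters:

import numpy as np
from scipy.ndimage import gaussian_filter

def perceived(rgb_linear, C, O, sigmas=(1.0, 2.0, 3.0)):
    """Apply T = OC pixelwise, then filter each opponent channel.
    rgb_linear: (h, w, 3) image with gamma correction already undone.
    C: device-dependent RGB-to-LMS matrix; O: LMS-to-opponent matrix.
    sigmas: filter widths per channel (B-W passes finer detail than
    R-G and B-Y, hence the smallest width)."""
    T = O @ C
    opp = rgb_linear @ T.T                  # pixel-based color transformation
    for ch, s in enumerate(sigmas):         # diagonal matrix-valued filter d(m)
        opp[..., ch] = gaussian_filter(opp[..., ch], s)
    return opp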
Sample images and optimum coefficients for an sRGB monitor are
available at:
http://signal.ece.utexas.edu/~damera/col-vec.html

[Figures: original image, Floyd-Steinberg halftone, and
optimum-filter halftone.]
Conclusions
• Design of “optimal” color noise shaping filters
– We use the matrix gain model [Damera-Venkata and Evans, 2001]
• Predicts shaped color halftone noise
– HVS could be modeled as a general LSI system
– Solve for the error filter that minimizes the visually weighted
average color halftone noise energy
• Future work
– The optimal solution above does not guarantee “optimal” dot
distributions
• Tone dependent error filters for optimal dot distributions
– Improve numerical stability of descent procedure
Backup Slides
Designing the Error Filter
• Eliminate linear distortion by prefiltering the input before
error diffusion
• Optimize error filter h(m) for noise shaping

min E b n m 
2



 
2


 E vm   I  hm   nm  


Subject to diffusion constraints

where vm
*
UT Center for
Perceptual Systems
 

  hm  1  1
m

linear model of human visual system
matrix-valued convolution
22
Backup Slides
Generalized Optimum Solution
• Differentiate the scalar objective function for visual noise
shaping with respect to the matrix-valued coefficients:

d E[ ||b_n(m)||^2 ] / d h(i) = 0, for all i

• Write the norm as a trace, ||x||^2 = Tr(x x^T), and differentiate
the trace using identities from linear algebra:

d Tr(AX) / dX = A^T
d Tr(AXB) / dX = A^T B^T
d Tr(X^T A X B) / dX = A X B + A^T X B^T
Tr(AB) = Tr(BA)
Backup Slides
Generalized Optimum Solution (cont.)
• Differentiating and using the linearity of the expectation
operator gives a generalization of the Yule-Walker equations:

Σ_k v(k)^T r_an(i + k) = Σ_s Σ_q Σ_p v(s)^T v(q) h(p) r_nn(i + s − p − q)

where a(m) = (v * n)(m)

• Assuming white noise injection:

r_nn(k) = E[ n(m) n(m + k)^T ] = δ(k) I
r_an(k) = E[ a(m) n(m + k)^T ] = v(−k)

• Solve using gradient descent with projection onto the
constraint set
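A sketch of one projected descent step under the diffusion constraint above (the gradient, which comes from the normal equations, is assumed given):

import numpy as np

def project(h):
    """Project a matrix-valued filter h (taps, 3, 3) onto the constraint
    that the coefficients for each output channel sum to one."""
    taps = h.shape[0]
    row_sums = h.sum(axis=(0, 2))           # per-output-channel totals
    return h - ((row_sums - 1.0) / (taps * 3))[None, :, None]

def descent_step(h, grad, mu=1e-3):
    """One gradient step followed by projection onto the constraint set."""
    return project(h - mu * grad)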
Backup Slides
Implementation of Vector Color Error Diffusion

H(z) = [ H_rr(z)  H_rg(z)  H_rb(z)
         H_gr(z)  H_gg(z)  H_gb(z)
         H_br(z)  H_bg(z)  H_bb(z) ]

[Diagram: the green channel of the diffused error is formed by
filtering the r, g, and b error channels with H_gr, H_gg, and H_gb
and summing; the red and blue channels are formed analogously.]
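A sketch tying the pieces together, with per-channel thresholding standing in for the vector quantizer:

import numpy as np

def vector_error_diffusion(x, taps):
    """Vector color error diffusion. x: (h, w, 3) image in [0, 1].
    taps: list of ((dm, dn), M) pairs; M is the 3x3 matrix coefficient
    for the error diffused to pixel offset (dm, dn)."""
    x = x.astype(np.float64).copy()
    h, w, _ = x.shape
    out = np.zeros_like(x)
    for m in range(h):
        for n in range(w):
            u = x[m, n]                         # vector input u(m)
            b = (u >= 0.5).astype(np.float64)   # per-channel threshold
            out[m, n] = b
            e = b - u                           # vector error e(m)
            for (dm, dn), M in taps:            # t(m) = sum_k h(k) e(m - k)
                mm, nn = m + dm, n + dn
                if 0 <= mm < h and 0 <= nn < w:
                    x[mm, nn] -= M @ e          # matrix-vector product
    return out

# Scalar Floyd-Steinberg is the special case where every tap is w * I:
fs_taps = [((0, 1), 7/16 * np.eye(3)), ((1, -1), 3/16 * np.eye(3)),
           ((1, 0), 5/16 * np.eye(3)), ((1, 1), 1/16 * np.eye(3))]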