Image Compression

อ.รัชดาพร คณาวงษ์
Computer Science, Faculty of Science
Silpakorn University, Sanam Chandra Palace Campus
Image Compression
• Reducing the size of image data files
• While retaining necessary information
Original image → compress → compressed image file → decompress → extracted (decompressed) image file
Terminology
These terms describe the relationship between the original image and the compressed file:
1. Compression Ratio: a larger number implies better compression
2. Bits per Pixel: a smaller number implies better compression
Compression Ratio
(1) Compression Ratio (CR) = Uncompressed File Size / Compressed File Size

Ex: A 256×256 pixel, 256-level grayscale image is compressed to a file of 6554 bytes.
Original image size = 256 × 256 (pixels) × 1 (byte/pixel) = 65536 bytes
Compression Ratio = 65536 / 6554 ≈ 10
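A minimal sketch of this calculation in Python, using the sizes from the example above:

```python
def compression_ratio(uncompressed_bytes, compressed_bytes):
    """Compression ratio CR = uncompressed file size / compressed file size."""
    return uncompressed_bytes / compressed_bytes

original_size = 256 * 256 * 1                    # 65536 bytes (1 byte/pixel grayscale)
print(compression_ratio(original_size, 6554))    # ~10.0
```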
Bits per Pixel
(2) Bits per Pixel = Number of Bits / Number of Pixels

Ex: A 256×256 pixel, 256-level grayscale image is compressed to a file of 6554 bytes.
Number of pixels = 256 × 256 = 65536 pixels
Compressed file = 6554 (bytes) × 8 (bits/byte) = 52432 bits
Bits per Pixel = 52432 / 65536 ≈ 0.8
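The same example as a small Python sketch:

```python
def bits_per_pixel(compressed_bytes, num_pixels):
    """Bits per pixel = bits in the compressed file / number of pixels."""
    return compressed_bytes * 8 / num_pixels

print(bits_per_pixel(6554, 256 * 256))   # ~0.8 bits/pixel
```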
Why we want to compress?
Transmitting an RGB 512×512, 24-bit image via a 28.8 kbaud (kilobits/second) modem takes
(512 × 512 pixels)(24 bits/pixel) / (28.8 × 1024 bits/second) ≈ 213 seconds
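A quick check of that arithmetic:

```python
bits = 512 * 512 * 24          # uncompressed image size in bits
rate = 28.8 * 1024             # modem speed in bits/second
print(bits / rate)             # ~213 seconds
```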
Key of compression
• Reducing Data but Retaining Information
Data are used to convey information. Various amounts of data can be used to represent the same amount of information; the excess is called "data redundancy".
Relative data redundancy:
RD = 1 - 1/CR
For example, with CR = 10 the relative redundancy is RD = 1 - 1/10 = 0.9, i.e. 90% of the data in the original representation is redundant.
Entropy
• Average information in an image.
L 1
Entropy  pk log2 ( pk )
k 0
• Average number of bits per pixel
L 1
La   lk pk
k 0
nk
pk 
n
, where k  0,1, , L  1
8
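A minimal sketch of the entropy and average-length formulas in Python (numpy assumed; the probabilities and code lengths are the ones used in the Huffman example later in these slides):

```python
import numpy as np

def entropy(probabilities):
    """Entropy = -sum(p_k * log2(p_k)) over the nonzero probabilities."""
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]                      # gray levels that never occur contribute nothing
    return float(-np.sum(p * np.log2(p)))

def average_length(code_lengths, probabilities):
    """Average bits per pixel L_a = sum(l_k * p_k)."""
    return float(np.dot(code_lengths, probabilities))

print(entropy([0.2, 0.3, 0.1, 0.4]))                          # ~1.846 bits/pixel
print(average_length([3, 2, 3, 1], [0.2, 0.3, 0.1, 0.4]))     # 1.9 bits/pixel
```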
Redundancy
• Coding Redundancy
• Interpixel Redundancy
• Psychovisual Redundancy
Coding Redundancy
• Occurs when the data used to represent the image are not utilized in an optimal manner
Coding Redundancy(cont)
• The gray-level distribution of an 8-level image and two codings are shown in the table below
rk        p(rk)   code1   l1(rk)   code2    l2(rk)
r0 = 0    0.19    000     3        11       2
r1 = 1/7  0.25    001     3        01       2
r2 = 2/7  0.21    010     3        10       2
r3 = 3/7  0.16    011     3        001      3
r4 = 4/7  0.08    100     3        0001     4
r5 = 5/7  0.06    101     3        00001    5
r6 = 6/7  0.03    110     3        000001   6
r7 = 1    0.02    111     3        000000   6
Coding Redundancy(cont)
• Original image: 8 possible gray levels = 2^3, so code 1 uses 3 bits/pixel

L_a = Σ(k=0 to 7) l_2(r_k) p(r_k)
    = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02)
    = 2.7 bits

CR = 3 / 2.7 ≈ 1.11

RD = 1 - 1/1.11 ≈ 0.099
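A minimal sketch of this calculation, with the probabilities and code-2 lengths copied from the table above:

```python
p  = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]   # p(rk) from the table
l2 = [2, 2, 2, 3, 4, 5, 6, 6]                            # code-2 lengths l2(rk)

avg_bits = sum(l * pk for l, pk in zip(l2, p))   # average bits/pixel with code 2
cr = 3 / avg_bits                                # code 1 uses 3 bits/pixel
rd = 1 - 1 / cr                                  # relative data redundancy

print(round(avg_bits, 2), round(cr, 2), round(rd, 2))    # 2.7 1.11 0.1
```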
Interpixel Redundancy
• Adjacent pixel values tend to be highly
correlated
Psychovisual Redundancy
• Some information is more important to the
human visual system than other types of
information
Compression System Model
• Compression: Input → Preprocessing → Encoding → Compressed File
• Decompression: Compressed File → Decoding → Postprocessing → Output
Types of Compression
There are 2 types of compression:
• Lossless Compression
• Lossy Compression
Lossless Compression
• No data are lost
• The original image can be recreated exactly
• The achievable compression is often much less
Huffman Coding
Based on the histogram probabilities of the image.
5 steps:
1. Find the histogram probabilities
2. Order the input probabilities (smallest → largest)
3. Add the two smallest probabilities
4. Repeat steps 2 and 3 until only two probabilities are left
5. Work backward along the tree, assigning 0 and 1 to each branch
Huffman Coding(cont)
Step 1: Histogram probability
[Histogram of gray levels 0–3 with pixel counts 20, 30, 10, 40 out of 100]
p0 = 20/100 = 0.2
p1 = 30/100 = 0.3
p2 = 10/100 = 0.1
p3 = 40/100 = 0.4
Step 2: Order the probabilities
p3 = 0.4
p1 = 0.3
p0 = 0.2
p2 = 0.1
Huffman Coding(cont)
Step 3: Add the two smallest probabilities and reorder

p3 = 0.4    0.4    0.4
p1 = 0.3    0.3    0.6
p0 = 0.2    0.3
p2 = 0.1

(the last two remaining probabilities are 0.6 and 0.4)

Resulting codes:

Natural Code   Probability   Huffman Code
00             0.2           010
01             0.3           00
10             0.1           011
11             0.4           1
Huffman Coding(cont)
• The original image: average 2 bits/pixel (natural code for 4 gray levels)
• The Huffman code: average

L_a = Σ(i=0 to 3) l_i p_i = 3(0.2) + 2(0.3) + 3(0.1) + 1(0.4) = 1.9 bits/pixel

Entropy = -Σ(i=0 to 3) p_i log2(p_i)
        = -[(0.2)log2(0.2) + (0.3)log2(0.3) + (0.1)log2(0.1) + (0.4)log2(0.4)]
        = 1.846 bits/pixel
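A minimal sketch of Huffman code construction for this example, using Python's heapq. The exact codes can differ from the slide depending on how ties are broken, but the average length is the same:

```python
import heapq
import itertools

def huffman_codes(probabilities):
    """Build Huffman codes for symbols 0..n-1 from their probabilities."""
    tie = itertools.count()     # tie-breaker so equal probabilities never compare the dicts
    heap = [(p, next(tie), {sym: ""}) for sym, p in enumerate(probabilities)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p_lo, _, lo = heapq.heappop(heap)         # two smallest probabilities
        p_hi, _, hi = heapq.heappop(heap)
        lo = {s: "1" + c for s, c in lo.items()}  # prepend one bit on each branch
        hi = {s: "0" + c for s, c in hi.items()}
        heapq.heappush(heap, (p_lo + p_hi, next(tie), {**lo, **hi}))
    return heap[0][2]

p = [0.2, 0.3, 0.1, 0.4]                  # gray levels 0..3 from the example
codes = huffman_codes(p)
print(codes)                              # e.g. {3: '1', 1: '01', 2: '001', 0: '000'}
print(round(sum(len(codes[s]) * p[s] for s in codes), 2))   # 1.9 bits/pixel
```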
Run-Length Coding
• Counts the number of adjacent pixels with the same gray-level value
• Used primarily for binary images
• Horizontal RLC is the most common
Run-Length Coding(cont)
Binary image 8×8, horizontal RLC (the first count in each code is the run of 0s):

Row    Pixels              RLC
1st    0 0 0 0 0 0 0 0     8
2nd    1 1 1 1 0 0 0 0     0,4,4
3rd    0 1 1 0 0 0 0 0     1,2,5
4th    0 1 1 1 1 1 0 0     1,5,2
5th    0 1 1 1 0 0 1 0     1,3,2,1,1
6th    0 0 1 0 0 1 1 0     2,1,2,2,1
7th    1 1 1 1 0 1 0 0     0,4,1,1,2
8th    0 0 0 0 0 0 0 0     8
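A minimal sketch of horizontal run-length coding for one binary row, following the convention above (the first count is the run of 0s and may be zero):

```python
def rlc_row(row):
    """Horizontal run-length code of one binary row; the first count is the run of 0s."""
    runs, value, count = [], 0, 0       # start by counting 0s
    for pixel in row:
        if pixel == value:
            count += 1
        else:
            runs.append(count)          # run ended: record it and switch value
            value, count = pixel, 1
    runs.append(count)
    return runs

print(rlc_row([1, 1, 1, 1, 0, 0, 0, 0]))   # [0, 4, 4]       (2nd row above)
print(rlc_row([0, 1, 1, 1, 0, 0, 1, 0]))   # [1, 3, 2, 1, 1]  (5th row above)
```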
Run-Length Coding(cont)
• Basic RLC can be extended to gray-level images by using bit-plane coding
• It works better if the natural binary code is changed to a Gray code, because adjacent gray levels then differ in only one bit plane

Natural   Gray Code
00        00
01        01
10        11
11        10
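A minimal sketch of the natural-to-Gray conversion, using the standard formula g = b XOR (b >> 1):

```python
def natural_to_gray(value):
    """Convert a natural binary value to its reflected Gray code."""
    return value ^ (value >> 1)

for b in range(4):                          # reproduces the 2-bit table above
    print(f"{b:02b} -> {natural_to_gray(b):02b}")
# 00 -> 00, 01 -> 01, 10 -> 11, 11 -> 10
```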
Lempel-Ziv-Welch Coding (LZW)

4×4 input image (processed row by row):
39 39 120 120
39 39 120 120
39 39 120 120
39 39 120 120

CRS (currently recognized sequence), PBP (pixel being processed):

CRS      PBP   Encoded O/P   Dictionary Location   Dictionary Entry
39       39    39            256                   39-39
39       120   39            257                   39-120
120      120   120           258                   120-120
120      39    120           259                   120-39
39-39    120   256           260                   39-39-120
…        …     …             …                     …

• Assigns fixed-length code words to variable-length sequences of source symbols
• Used in GIF, TIFF, PDF
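A minimal sketch of LZW encoding over a sequence of gray levels, following the dictionary-building scheme in the table above (single pixels occupy codes 0–255, new entries start at 256):

```python
def lzw_encode(pixels):
    """LZW-encode a sequence of 8-bit gray levels; new dictionary entries start at 256."""
    dictionary = {(v,): v for v in range(256)}   # single-pixel entries
    next_code = 256
    crs = ()                                     # currently recognized sequence
    output = []
    for pixel in pixels:
        candidate = crs + (pixel,)
        if candidate in dictionary:
            crs = candidate                      # keep growing the recognized sequence
        else:
            output.append(dictionary[crs])       # emit code for the recognized sequence
            dictionary[candidate] = next_code    # add the new sequence to the dictionary
            next_code += 1
            crs = (pixel,)
    if crs:
        output.append(dictionary[crs])
    return output

image = [39, 39, 120, 120] * 4                   # the 4x4 example image, row by row
print(lzw_encode(image))    # [39, 39, 120, 120, 256, 258, 260, 259, 257, 120]
```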
Lossy Compression
• Allows a loss in the actual image data
• The original image cannot be recreated exactly
• The achievable compression is commonly much greater
• Example: JPEG
Fidelity Criteria
• Objective fidelity criteria
– RMS Error
RMS = sqrt( (1/N²) Σ(r=0 to N-1) Σ(c=0 to N-1) [ Î(r,c) - I(r,c) ]² )
– RMS Signal-To-Noise Ratio
SNR_RMS = sqrt( Σ(r=0 to N-1) Σ(c=0 to N-1) [ Î(r,c) ]² / Σ(r=0 to N-1) Σ(c=0 to N-1) [ Î(r,c) - I(r,c) ]² )
where I(r,c) is the original image and Î(r,c) is the decompressed image
• Subjective fidelity criteria
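A minimal sketch of these objective measures with numpy; `original` and `decompressed` are assumed to be equally sized arrays of gray levels:

```python
import numpy as np

def rms_error(original, decompressed):
    """Root-mean-square error between original and decompressed images."""
    diff = decompressed.astype(float) - original.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def rms_snr(original, decompressed):
    """RMS signal-to-noise ratio: decompressed signal power over error power."""
    diff = decompressed.astype(float) - original.astype(float)
    return float(np.sqrt(np.sum(decompressed.astype(float) ** 2) / np.sum(diff ** 2)))

I = np.random.randint(0, 256, (8, 8))                            # a small test image
I_hat = np.clip(I + np.random.randint(-2, 3, I.shape), 0, 255)   # a slightly distorted copy
print(rms_error(I, I_hat), rms_snr(I, I_hat))
```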
JPEG