Transcript slides1
A robust detection algorithm
for copy-move forgery in
digital images
Presented by Issam Laradji
Authors: Yanjun Cao, Tiegang Gao, Li Fan, Qunting Yang
Course: COE589-Digital Forensics
Date:
18 September, 2012
Outline
– Introduction
– Challenges
– Background Concepts
– Related Work
– Proposed Approach
– Experimental Results
– Summary
2
Most definitions were obtained from Wikipedia, others are from online tutorials
Introduction
Some statistics state that around 20% of accepted manuscripts are tampered with
– 1% are fraudulent manipulations
– Innocent people can get framed for crimes they didn't commit
– Negative social impact
– Premature propaganda
3
Challenges
Sophisticated tools
– 3ds Max, Photoshop
– Automated lighting and processing that conceal forgery
Increasing use of large images
– High-definition images
– Much more costly to process
4
Background Concepts
Normal Distribution
– Used to describe real-valued random variables that cluster around a
single mean value
– The most prominent distribution
– The area under the distribution sums to 1; the curve is bell-shaped
– Allows for tractable analysis
– Observational error in an experiment is usually assumed to follow
a normal distribution
– Has a symmetric distribution about its mean
Normal distribution formula
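For reference, the density of the normal distribution is f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)), where μ is the mean and σ² the variance.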
5
Background Concepts (2)
Energy of the image
– Amount of information present:
• High energy: city, lots of details
• Low energy: plain, minimal details
Feature vector
– N-dimensional vector of numerical features to represent
some object
• Facilitates statistical analysis
• Explanatory “independent” variables used in procedures such
as linear regression
6
Background Concepts (3)
Feature vector cont.
– Linear regression can be used to model the relationship
between independent variable X (Feature vector) and
dependent variable Y
– Least squares is commonly used for fitting
Time Complexity
– The time complexity of an algorithm quantifies the amount of time the algorithm takes to run as a function of the length of its input
7
Background Concepts (4)
Global and local features
– Global features represent details about the
whole image
• Color distribution, brightness, and sharpness
• Faster to process
– Local features represent finer details such as the relationship between pixels
• Similarities and differences between pixels
• Much more costly in processing
8
Background Concepts (5)
Eigenvector & Eigenvalue
– An eigenvector of a square matrix is a non-zero vector that, when multiplied by the matrix, yields a vector that is parallel to the original
– The corresponding eigenvalue λ is the scalar by which the eigenvector is scaled
In this case, [3;-3] is an eigenvector of A, with eigenvalue 1 (MATLAB syntax)
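The matrix A itself is not reproduced in this transcript; one matrix consistent with the example is A = [2 1; 1 2], for which [3; -3] is scaled by 1. A minimal NumPy check of this assumed A (illustrative only, not necessarily the slide's A):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])      # hypothetical A consistent with the slide's example
    v = np.array([3.0, -3.0])
    print(A @ v)                    # [ 3. -3.] -> parallel to v, i.e. eigenvalue 1
    print(np.linalg.eig(A)[0])      # eigenvalues of this A (3 and 1)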
9
Background Concepts (6)
Principal component analysis
– Mathematical procedure that uses orthogonal
transformation
– Converts correlated variables to linearly uncorrelated
variables called principal components
– Principal components are guaranteed to be independent
only if the data set is normally distributed
– Identifies patterns in data
• Highlighting their similarities and differences
10
Background Concepts (7)
Principal component analysis cont.
– Eigenvalue decomposition of the data correlation matrix
– The eigenvalues obtained can measure the weights of different image features (see the sketch below)
– Main advantage
• Data compression without much loss of information
– Applications
• Data mining, image processing, marketing, and chemical
research
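A minimal sketch of PCA via eigenvalue decomposition of the correlation matrix (NumPy, with made-up data; illustrative, not the paper's code):

    import numpy as np

    X = np.random.randn(100, 5)                  # 100 samples, 5 features (stand-in data)
    X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize each feature
    corr = np.corrcoef(X, rowvar=False)          # 5 x 5 correlation matrix

    eigvals, eigvecs = np.linalg.eigh(corr)      # eigen-decomposition (symmetric matrix)
    order = np.argsort(eigvals)[::-1]            # strongest components first
    components = eigvecs[:, order[:2]]           # keep the 2 strongest principal components

    X_reduced = X @ components                   # compressed data, little information lost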
11
Background Concepts (8)
Scale-invariant feature transform (or SIFT)
– An algorithm in computer vision to detect and describe
local features in images
– Local image features help in object recognition
– Invariant to image scale and rotation
– Robust to changes in illumination, noise, and minor
changes in viewpoint
– Applications
• Object/face recognition, navigation, gesture recognition, and
tracking
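A minimal sketch using OpenCV's SIFT implementation (assumes opencv-python 4.4 or later and a local file image.png; illustrative only, not the paper's code):

    import cv2

    img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)       # hypothetical input image
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    print(len(keypoints), descriptors.shape)                   # N keypoints, 128-dim descriptors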
12
Background Concepts (9)
Discrete Cosine transform
– Transforms image from spatial to “frequency domain”
in which it can be efficiently encoded
– Discards the high-frequency ("sharp variation") components, which refine the details of the image
– Focuses on the low-frequency ("smooth variation") components, which hold the base of the image
[Figure: the 64 DCT basis patterns and the zigzag scanning order]
13
Background Concepts (10)
Discrete Cosine transform cont.
– Removes redundancy between neighboring pixels
– Prepares image for compression / quantization
– Quantization:
• Maps large set of values to smaller set
• Reduces the number of bits needed to store the coefficients by removing less important high-frequency components
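A minimal sketch of a 2-D DCT on an 8 x 8 block followed by crude quantization that zeroes small, mostly high-frequency coefficients (SciPy; illustrative, not the paper's code):

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.randint(0, 256, (8, 8)).astype(float)  # stand-in 8 x 8 pixel block
    coeffs = dctn(block, norm='ortho')                        # spatial -> frequency domain

    quantized = coeffs.copy()
    quantized[np.abs(quantized) < 10] = 0                     # drop small coefficients
    restored = idctn(quantized, norm='ortho')                 # most visual content survives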
14
Background Concepts (11)
Why DCT?
– Approximates an image better with fewer coefficients than other contemporary transforms
• However, wavelet transforms are the newer trend
– Less space required to represent the image
features, hence easier to store in memory
– Applications:
• Lossy compression for .mp3 and .jpg
15
Related Work
Straightforward approach
– Compare each group of pixels with the rest, and check
for similarities!
– Very impractical, exponential time complexity
– False positives could be high
Related work
1. Exhaustive search
2. Fridrich used DCT-based features for duplication
detection
• Sensitive to variations (additive noise)
16
Related Work (2)
3. Huang et al. increased performance by reducing feature vector dimensions
However, none of the above considered multiple copy-move forgery
4. Popescu: PCA-based features
– Can endure additive noise
– Low in accuracy
17
Related Work (3)
5. Luo proposed color features as well as block intensity ratios
6. Bayram et al. applied the Fourier-Mellin transform to each block, then projected it to one dimension to form the feature vector
7. B. Mahdian and S. Saic used a method based on blur moment invariants to locate the forged regions
8. X. Pan and S. Lyu took advantage of SIFT features to detect the duplicated regions
However, all of these have higher time complexity than the proposed approach!
18
Proposed Approach
Basically, the algorithm divides the original image into overlapping blocks, computes the similarities between these blocks, and, based on some thresholds, highlights the duplicated regions in the output image
19
Proposed approach advantages
(contributions)
Improved version of copy-move forgery detection
algorithm
– Lower feature vector dimension
– Robust to various attacks: multiple copy-move forgery,
Gaussian blurring, and noise contamination
– Lower time complexity
20
Step 1- dividing the image into
blocks
Say we have an input image of size m x n
If it is not grayscale, the image is converted to grayscale using the formula: I = 0.299R + 0.587G + 0.114B
The human eye is most sensitive to green and red
That's why most of the weight is on the green and red channels
21
Green channel gives the clearest image
Step 1- dividing the image into
blocks (2)
The input image is split into overlapping blocks
The standard block size is 8 x 8
[Figure: an m x n image covered by overlapping b x b blocks B]
This generates (m-b+1)(n-b+1) = N blocks
Each block differs by one row or one column from its preceding block
Let 'N' and 'b' be the number of blocks obtained and the height of the block, respectively
22
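A minimal sketch of the grayscale conversion and the overlapping-block split (NumPy; illustrative, not the paper's code):

    import numpy as np

    def to_gray(rgb):
        # luminance formula from the previous slide
        return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    def overlapping_blocks(img, b=8):
        m, n = img.shape
        # (m-b+1)(n-b+1) blocks, each shifted by one row or one column
        return [img[i:i + b, j:j + b]
                for i in range(m - b + 1)
                for j in range(n - b + 1)]

    gray = to_gray(np.random.rand(64, 64, 3))    # stand-in image
    print(len(overlapping_blocks(gray)))         # (64-8+1)**2 = 3249 blocks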
Step 1- dividing the image into blocks (3)
Block size: 8 x 8
Original image (sample region of pixel values):
155 155 155 158 158 156 158 159
155 155 155 158 158 156 158 159
155 155 155 158 158 156 158 159
155 155 155 158 158 156 158 159
155 155 155 158 158 156 158 159
151 151 151 154 157 156 156 156
155 155 155 156 157 158 156 153
149 149 149 153 155 154 153 154
Dividing into blocks: each overlapping 8 x 8 block is extracted from the image in turn
Complexity: O(N) where N is the number of blocks
23
Step 2 – Applying DCT transform
For each block, DCT is applied
We get the DCT coefficient matrix
Original sample block (4 x 4 pixel values):
155 155 155 158
155 155 155 158
155 155 155 158
155 155 155 158
DCT Transform ↓
DCT coefficient block (4 x 4):
420.75     37.70297   -3.25      4.136577
-2.98619    0.926777   2.1744   -0.32322
-0.25      -5.44081    0.75     -0.72292
2.589912    0.676777  -0.63007   0.573223
24
Step 2 – Applying DCT transform
(1)
The block is compared with its 64 DCT
basis to get the correlation coefficients
[Figure: an 8 x 8 block of pixel values is correlated with each of the 64 DCT basis patterns, yielding the matrix of discrete (DCT) coefficients]
25
Step 2 – Applying DCT transform
(2)
The transformation allows us to focus on the low-frequency coefficients, which hold the base of the image
Zigzag extraction is done so that the coefficients are in order of increasing frequency
– Allows for zeroing the high-frequency coefficients
Time complexity: O(N x b x b)
(a) the Lena image, (b) zigzag order scanning, (c) the reconstructed Lena image using 1/4 of the DCT coefficients
26
Step 3 – feature extraction
The coefficient matrix is divided into and represented by four parts: C1, C2, C3, and C4
p_ratio = c_area / m_area is approximately 0.79 (the inscribed circle covers about π/4 of the block)
The circle region covers the low-frequency coefficients, hence decreasing the computation cost without affecting effectiveness
DCT coefficient block, divided into the four quarters of the inscribed circle:
C1: 420.75, 37.70297, -2.98619, 0.926777     C2: -3.25, 4.136577, 2.1744, -0.32322
C4: -0.25, -5.44081, 2.589912, 0.676777      C3: 0.75, -0.72292, -0.63007, 0.573223
Generate matching features: v_i = sum of f(x, y) over f(x, y) in c_area_i, for i = 1, 2, 3, 4
Feature vector: V = [v1, v2, v3, v4]
27
Step 3 – feature extraction (2)
Each v is quantized by its corresponding c_area
Four features that represent the matrix are obtained
vi is the mean of the coefficient values corresponding to each Ci
Matching features generated:
DCT coefficient block (grouped by quadrant):
C1: 420.75, 37.70297, -2.98619, 0.926777     C2: -3.25, 4.136577, 2.1744, -0.32322
C4: -0.25, -5.44081, 2.589912, 0.676777      C3: 0.75, -0.72292, -0.63007, 0.573223
With r = 2 (half the block width), each quarter of the inscribed circle has area c_area_i = π r² / 4 = π, for i = 1, 2, 3, 4
v1 = (420.75 + 37.70297 - 2.98619 + 0.926777) / π ≒ 145.2746
v2 = (-3.25 + 4.136577 + 2.1744 - 0.32322) / π ≒ 0.8715
v3 = (0.75 - 0.72292 - 0.63007 + 0.573223) / π ≒ -0.0095
28
Step 3 – feature extraction (3)
The extracted features are invariant to many processing operations, according to the results below
Time complexity of feature extraction: O(N x 4 x b x b)
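A minimal sketch of the per-block feature: DCT, then the sum of the coefficients falling in each quarter of the inscribed circle divided by the quarter's area (NumPy/SciPy; one interpretation of the slides, not the paper's code):

    import numpy as np
    from scipy.fft import dctn

    def block_feature(block):
        b = block.shape[0]
        r = b / 2.0
        coeffs = dctn(block.astype(float), norm='ortho')
        yy, xx = np.mgrid[0:b, 0:b]
        cy = cx = (b - 1) / 2.0
        inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2    # inscribed circle
        quarter_area = np.pi * r ** 2 / 4.0                   # c_area_i
        v = []
        for top in (True, False):
            for left in (True, False):
                quarter = inside & ((yy <= cy) == top) & ((xx <= cx) == left)
                v.append(coeffs[quarter].sum() / quarter_area)
        return np.array(v)                                    # feature vector [v1..v4]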
29
Step 4 – Matching
The extracted feature vectors are arranged in a matrix A:
A = [ V1; V2; ... ; V_(m-b+1)(n-b+1) ]
A is then lexicographically sorted, with a time complexity of O(N log N)
Each row (vector) of A is compared to its subsequent rows to check whether the thresholds Dsimilar and Nd are satisfied, i.e. the equations:
m_match(A_i, A_j) = sqrt( sum over k = 1..4 of (v_k^i - v_k^j)^2 ) <= Dsimilar
d(V_i, V_j) = sqrt( (x_i - x_j)^2 + (y_i - y_j)^2 ) >= Nd
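A minimal sketch of this matching step (NumPy; illustrative, not the paper's code):

    import numpy as np

    def match_blocks(features, positions, d_similar, n_d):
        # features: N x 4 matrix A; positions: N x 2 top-left coordinates of each block
        A = np.asarray(features)
        pos = np.asarray(positions, dtype=float)
        order = np.lexsort(A.T[::-1])             # lexicographic sort of the rows of A
        pairs = []
        for i, j in zip(order[:-1], order[1:]):   # compare neighbouring rows after sorting
            feat_dist = np.sqrt(((A[i] - A[j]) ** 2).sum())
            space_dist = np.sqrt(((pos[i] - pos[j]) ** 2).sum())
            if feat_dist <= d_similar and space_dist >= n_d:
                pairs.append((i, j))              # likely copy-moved block pair
        return pairs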
30
Step 4 – Matching (2)
A =
145.2746, 0.8715, -0.0095, 0.7716
196.6815, 0.7681, 0.2534, 1.6032
145.1673, 0.9771, 0.0032, 1.0438
179.2334, 4.8015, 0.4968, 4.1904
Preset values: Dsimilar = 0.4, Nd = 25
m_match(A_1, A_116) = sqrt( (145.2746 - 196.6815)^2 + (0.8715 - 0.7681)^2 + (-0.0095 - 0.2534)^2 + (0.7716 - 1.6032)^2 ) ≒ 51.4625 > Dsimilar → Not similar
m_match(A_1, A_117) = sqrt( (145.2746 - 145.1673)^2 + (0.8715 - 0.9771)^2 + (-0.0095 - 0.0032)^2 + (0.7716 - 1.0438)^2 ) ≒ 0.3113 <= Dsimilar → check the distance Nd
31
Step 4 – Matching (3)
A as on the previous slide, with V_1 located at pixel (1, 1) and V_117 located at pixel (100, 81)
Preset values: Dsimilar = 0.4, Nd = 25
m_match(A_1, A_117) ≒ 0.3113 <= Dsimilar
d(V_1, V_117) = sqrt( (1 - 100)^2 + (1 - 81)^2 ) ≒ 127.28 >= Nd = 25 → Similar
Detected image
32
Step 5 – Displaying duplicated
regions
Finally, the duplicated regions are expected to be highlighted in the output image
The computational complexities of the extraction methods are compared
The green rectangles
indicate a duplicated
region
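A minimal sketch of marking matched block pairs on the output image (OpenCV, assuming a BGR image; illustrative, not the paper's code):

    import cv2

    def highlight(image, pairs, positions, b=8):
        out = image.copy()
        for i, j in pairs:
            for (y, x) in (positions[i], positions[j]):
                # draw a green rectangle around each block of the matched pair
                cv2.rectangle(out, (x, y), (x + b, y + b), (0, 255, 0), 1)
        return out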
33
Time complexity analysis
As claimed, the total computation
complexity:
– O(N)+O(Nxbxb)+O(Nx4xbxb)+O(4NxlogN)
• Where N, b are the number of blocks and the height
of the block respectively
– Questionable?
• The computational complexity of the matching step was not calculated, which could be O(N x N)
• However, they stated that their computational
complexity is dominated by the matching blocks
34
Experimental results environment
Photoshop 8.0
2.59 GHz, AMD processor
MATLAB 2009a software
First dataset
– Gray images of size 128 x 128
– DVMM lab at Columbia University
Second dataset
– uncompressed colour PNG images of size 768 x 521
– the Kodak Corporation
Third dataset
– Internet collection of images of size 1600 x 1000
35
Experimental results - Setting
Thresholds
Detection accuracy rate (DAR) and False positive rate
(FPR)
– psi_s and psi_s-tilde denote the copied region and the detected copy region, respectively
– psi_t and psi_t-tilde denote the tampered region and the detected tampered region, respectively
– Questionable?
• Vague formulas
• Nothing in the paper shows what the symbols really mean
• Accuracy is normally reported as ratios
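For reference, a formulation commonly used in copy-move papers (an assumption here, since the slides do not spell it out): DAR = ( |psi_s ∩ psi_s-tilde| + |psi_t ∩ psi_t-tilde| ) / ( |psi_s| + |psi_t| ), and FPR = ( |psi_s-tilde − psi_s| + |psi_t-tilde − psi_t| ) / ( |psi_s-tilde| + |psi_t-tilde| ), where |·| counts pixels.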
36
Experimental results - Setting
Thresholds (2)
Selecting the circle representation for matching features
extraction can be challenging
– Therefore, 200 images are randomly chosen from the three datasets
– Series of forgeries are applied to them
– Different circle radii ranging from 2 to 6 are used, in increments of 1
– Optimum at r = 4, as shown in the diagram below
37
Experimental results - Setting
Thresholds (3)
Choosing the threshold parameters, b, Dsimilar, Nd, and Nnumber, is also
challenging
Colour images:
– The optimal values are 8, 0.0015, 120 and 5 for b, Dsimilar, Nd, and Nnumber, respectively
Gray images:
– The optimal values: 8, 0.0025, 25 and 5 for b, Dsimilar, Nd, and Nnumber,
respectively
38
Experimental results – Effective
testing
To test the proposed method, gray images of different sizes are chosen:
– Tampered regions of sizes 32x32, 64x64, 96x96, and 128x128 are tested
The detection results (from left to right: the original image, the tampered image, the detection result)
39
Experimental results – Robustness
and accuracy test
Signal-to-noise ratio (SNR): level of a desired signal to the level of
background noise
(a)–(b) DAR/FPR performance with SNR, and (c)–(d) DAR/FPR
performance with Gaussian blurring
40
Experimental results – Robustness
and accuracy test (2)
DAR/FPR curves for the DCT, DCT-improved, PCA, FMT, and proposed methods when the duplicated region is 64 pixels x 64 pixels. (a)–(b) with different SNR levels, and (c)–(d) with Gaussian blurring
41
Experimental results – Demonstration
The detection results for non-regular copy-move forgery
42
Experimental results – Demonstration (2)
The test results for multiple copy-move forgery under a mixed
operation
43
Experimental results – Demonstration (3)
The top row are tampered images with duplicated region size of 32 pixels × 32
pixels. Shown below are the detection results using our algorithm
44
Experimental results – Demonstration (4)
a) the original image
b) the manipulated image
c) The analyzed image (Python script)
• Duplicated regions were detected
45
Experimental results – Demonstration (5)
a) Original image
b) Manipulated image
c) The analyzed image (Python script)
• Used --blcoldev=0.05
• False positive
• Duplicate regions were not detected
46
Experimental results – Demonstration (6)
a) the original image
b) the manipulated image
c) The analyzed image (Python script)
• Partial part of the duplicated region was detected
47
Summary
The chart illustrates a summary of how the proposed
algorithm works
Flowchart of the proposed scheme
48
Summary(2)
An automatic and efficient detection algorithm for copy-move forgery has been presented
Contributions
– Outperforms contemporary algorithms in speed and storage
– Robust to various attacks: multiple copy-move forgery,
Gaussian blurring, and noise contamination
– Different way of representing blocks (circles), reducing
memory requirements
49
References
A robust detection algorithm for copy-move forgery in digital images; By: Yanjun Cao, Tiegang Gao, Li Fan, Qunting Yang
Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc.
cao2012-image-forgery-slides.ppt; By: Li-Ting Liao
The Discrete Cosine Transform (DCT): Theory and Application; By: Syed Ali Khayam
A tutorial on Principal Components Analysis; By:
Lindsay I Smith
50