
Data Mining:
Concepts and Techniques
(2nd ed.)
— Chapter 6 —
Classification: Advanced Methods
1
Pattern Classification
 Classification is a multivariate technique concerned with assigning data cases (i.e., observations) to one of a fixed number of possible classes (represented by nominal output variables).
 For the character recognition example we could evaluate the ratio of the height of the character to its width, or count the number of black grid cells, convex hulls, etc. (feature selection).
 One approach would be to build a classifier that uses a threshold on the value of x1: classify as C2 when x1 exceeds the threshold, and as C1 otherwise. The number of misclassifications is minimized if we choose the threshold at the point where the two class histograms cross (a small sketch follows below).
 This yields a decision boundary: new patterns that lie above the decision boundary are classified as belonging to C1, while patterns falling below it are classified as C2.
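As a hedged illustration of the single-threshold idea above (not from the original slides; the data and the feature x1 are invented), a minimal Python sketch that picks the threshold with the fewest training misclassifications, which is approximately where the two class histograms cross:

import numpy as np

# Hypothetical feature values (e.g., height/width ratio x1) for two classes.
x1_c1 = np.array([0.40, 0.50, 0.55, 0.60, 0.65])   # class C1
x1_c2 = np.array([0.70, 0.75, 0.80, 0.85, 0.90])   # class C2

def best_threshold(c1, c2):
    """Scan candidate thresholds; classify x1 > t as C2, else C1,
    and keep the threshold with the fewest misclassifications."""
    candidates = np.sort(np.concatenate([c1, c2]))
    best_t, best_err = None, np.inf
    for t in candidates:
        errors = np.sum(c1 > t) + np.sum(c2 <= t)
        if errors < best_err:
            best_t, best_err = t, errors
    return best_t

t = best_threshold(x1_c1, x1_c2)
predict = lambda x1: "C2" if x1 > t else "C1"
print(t, predict(0.72))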
2
Classification: A Mathematical Mapping
 Classification: predicts categorical class labels
 E.g., x_i = (x1, x2, x3, …), y_i = +1 or −1
 Mathematically, x ∈ X = R^n, y ∈ Y = {+1, −1}
 We want to derive a function f: X → Y

Linear Classification
 Binary classification problem
 Formulate a linear discriminant hyperplane
 Data above the separating line (red in the original figure) belongs to class ‘x’; data below it belongs to class ‘o’
 Examples: SVM, Perceptron, Probabilistic Classifiers
[Figure: scatter of ‘x’ and ‘o’ points in the plane, separated by a linear decision boundary]
3
Discriminative Classifiers


Advantages
 Prediction accuracy is generally high
 As compared to Bayesian methods – in general
 Robust, works when training examples contain errors
 Fast evaluation of the learned target function
 Bayesian networks are normally slow
Criticism
 Long training time
 Difficult to understand the learned function (weights)
 Bayesian networks can be used easily for pattern
discovery
 Not easy to incorporate domain knowledge
 Easy in the form of priors on the data or distributions
4
Classification: Advanced Methods

MLP & Backpropagation

Support Vector Machines

Summary
5
What is neural Computing?



ANN (artificial neural network) is a model inspired by biological neural networks.
The network functions collectively and with massive parallelism.
Key features
- Learning Ability
- Adaptive
- Faster Computation
- Accuracy
6
A single perceptron

Output is a weighted (scaled) sum of the inputs. The perceptron consists of three units: Sensory Unit, Association Unit, and Response Unit.
[Figure: analogy between a biological neuron (dendrites, axon, terminal branches of the axon) and a perceptron with inputs x1 … xN, weights w1 … wN, and a summing unit]
The net input is
    net = Σ_{n=1..N} w_n x_n
7
Case-I: 2-class linearly separable
Class 1 (+1) : { [-1,0], [-1.5,-1],[-1,-2]}
Class 2 (-1) : { [2,0],[2.5,-1],[1,-2]}
Bias input
Without the bias, the decision boundary passes through the origin.
8
Case-II: 2-class nonlinearly separable
Perceptron weight update (k is the iteration index, η the learning rate, e the output error):
    W_i^(k+1) = W_i^(k) + η · sgn(e) · x_i
 Each unit realizes a hyperplane (discriminant function).
[Figure: a multi-layer network with bias inputs (+1); units 1, 2, 3 each realize one hyperplane]
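A hedged training-loop sketch of the update rule above, run on the linearly separable Case-I data from slide 8; the learning rate η = 1, the stopping rule, and the bias handling (a constant +1 input) are assumptions, not taken from the slides:

import numpy as np

# Case-I data (slide 8), with a constant +1 appended as the bias input.
X = np.array([[-1, 0], [-1.5, -1], [-1, -2],    # class +1
              [2, 0], [2.5, -1], [1, -2]], dtype=float)
y = np.array([1, 1, 1, -1, -1, -1])
Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias input

w = np.zeros(3)
eta = 1.0                                        # assumed learning rate
for epoch in range(100):
    updated = False
    for xi, yi in zip(Xb, y):
        e = yi - np.sign(w @ xi)                 # error of the current output
        if e != 0:
            w += eta * np.sign(e) * xi           # W <- W + eta * sgn(e) * x
            updated = True
    if not updated:                              # converged: all tuples correct
        break
print(w)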
9
What do the multiple layers do?
The importance of neural networks in this context is that they offer a very powerful and very general framework for representing nonlinear mappings from several input variables to several output variables, where the form of the mapping is governed by a number of adjustable weight and bias parameters.
The 1st layer draws linear boundaries; the 2nd layer combines the boundaries; more layers allow arbitrarily complex boundaries.
# output neurons = # classes.
10
Multi Layer Perceptron (MLP)
 Together, the hidden units map the input onto the vertices of a p-dimensional hypercube.
 These p hyperplanes partition the l-dimensional input space into polyhedral regions.
 Thus the two-layer perceptron can classify vectors into classes that consist of unions of polyhedral regions, but not any arbitrary union of such regions.
 The three-layer perceptron, however, can separate classes resulting from any union of polyhedral regions in the input space.
11
Network Topology

Feed-forward neural network architecture.
Number of nodes and number of hidden layers
12
How A Multi-Layer Neural Network Works


The inputs to the network correspond to the attributes measured
for each training tuple
Inputs are fed simultaneously into the units (neurons) making up the
input layer

They are then weighted and fed simultaneously to a hidden layer

The number of hidden layers is arbitrary, although usually only one



The weighted outputs of the last hidden layer are input to units
making up the output layer, which emits the network's prediction
The network is feed-forward: None of the weights cycles back to
an input unit or to an output unit of a previous layer
From a statistical point of view, networks perform nonlinear
regression: Given enough hidden units and enough training
samples, they can closely approximate any function
13
Defining a Network Topology





Decide the network topology: Specify # of units in the
input layer, # of hidden layers (if > 1), # of units in each
hidden layer, and # of units in the output layer
Normalize the input values for each attribute measured in the training tuples to [0.0, 1.0] (see the sketch after this list)
Discrete-valued attributes may be encoded such that there is one input unit per domain value
For classification with more than two classes, one output unit per class is used
Once a network has been trained and if its accuracy is
unacceptable, repeat the training process with a different
network topology or a different set of initial weights
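A hedged sketch of the input/output preparation described above (min-max normalization to [0.0, 1.0], one input unit per discrete domain value, one output unit per class); the toy attributes and labels are invented for illustration:

import numpy as np

# Toy training tuples: one numeric attribute, one discrete attribute (assumed data).
income = np.array([30_000.0, 55_000.0, 120_000.0])
credit = np.array(["fair", "excellent", "fair"])
labels = np.array(["no", "yes", "yes"])

# Normalize each numeric attribute to [0.0, 1.0].
income_norm = (income - income.min()) / (income.max() - income.min())

# One input unit per domain value of a discrete attribute (one-hot encoding).
credit_domain = sorted(set(credit))
credit_onehot = np.array([[1.0 if v == d else 0.0 for d in credit_domain] for v in credit])

# One output unit per class.
classes = sorted(set(labels))
targets = np.array([[1.0 if c == cls else 0.0 for cls in classes] for c in labels])

X = np.hstack([income_norm.reshape(-1, 1), credit_onehot])  # network inputs
print(X, targets)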
14
Backpropagation

Iteratively process a set of training tuples & compare the network's
prediction with the actual known target value

For each training tuple, the weights are modified to minimize the
mean squared error between the network's prediction and the actual
target value

Modifications are made in the “backwards” direction: from the output
layer, through each hidden layer down to the first hidden layer, hence
“backpropagation”

Steps

Initialize weights to small random numbers, associated with biases

Propagate the inputs forward (by applying activation function)

Backpropagate the error (by updating weights and biases)

Terminating condition (when error is very small, etc.)
15
The output of unit j, given its net input I_j, is
    O_j = 1 / (1 + e^(−I_j))
For a unit j in the output layer (T_j is the known target value):
    Err_j = O_j (1 − O_j) (T_j − O_j)
For a unit j in a hidden layer (k ranges over the units in the next layer):
    Err_j = O_j (1 − O_j) Σ_k Err_k w_jk
Weight update, with learning rate (l):
    Δw_ij = (l) · Err_j · O_i
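A minimal numpy sketch of one forward pass and one backward pass of these formulas for a single hidden layer; the network sizes, data, initial weights, and learning rate l are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.2, 0.7])                  # one training tuple (assumed)
t = np.array([1.0])                       # target value T_j (assumed)
l = 0.5                                   # learning rate

W1 = rng.normal(0, 0.1, (2, 3)); b1 = np.zeros(3)   # input -> hidden weights, biases
W2 = rng.normal(0, 0.1, (3, 1)); b2 = np.zeros(1)   # hidden -> output weights, biases

sigmoid = lambda I: 1.0 / (1.0 + np.exp(-I))         # O_j = 1 / (1 + e^(-I_j))

# Forward: propagate the inputs.
O1 = sigmoid(x @ W1 + b1)
O2 = sigmoid(O1 @ W2 + b2)

# Backward: Err_j = O_j(1-O_j)(T_j - O_j) at the output layer,
# Err_j = O_j(1-O_j) * sum_k Err_k w_jk at the hidden layer.
Err2 = O2 * (1 - O2) * (t - O2)
Err1 = O1 * (1 - O1) * (W2 @ Err2)

# Weight and bias updates: delta w_ij = (l) * Err_j * O_i.
W2 += l * np.outer(O1, Err2); b2 += l * Err2
W1 += l * np.outer(x, Err1);  b1 += l * Err1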
16
Efficiency and Interpretability


Efficiency of backpropagation: Each epoch (one iteration through the training set) takes O(|D| × w) time, with |D| tuples and w weights, but the number of epochs can be exponential in n, the number of inputs, in the worst case
For easier comprehension: Rule extraction by network pruning




Simplify the network structure by removing weighted links that
have the least effect on the trained network
Then perform link, unit, or activation value clustering
The set of input and activation values are studied to derive rules
describing the relationship between the input and hidden unit
layers
Sensitivity analysis: assess the impact that a given input variable
has on a network output. The knowledge gained from this analysis
can be represented in rules
17
Neural Network as a Classifier

Weakness




Long training time
Require a number of parameters typically best determined
empirically, e.g., the network topology or “structure”, initial values
of the weights
Poor interpretability: Difficult to interpret the symbolic meaning
behind the learned weights and of “hidden units” in the network
Strength

High tolerance to noisy data

Ability to classify untrained patterns

Well-suited for continuous-valued inputs and outputs

Successful on an array of real-world data, e.g., hand-written letters

Algorithms are inherently parallel

Techniques have recently been developed for the extraction of
rules from trained neural networks
18
Classification: Advanced Methods

Classification by Backpropagation

Support Vector Machines

Summary
19
SVM—History and Applications

Vapnik and colleagues (1992)—groundwork from Vapnik
& Chervonenkis’ statistical learning theory in 1960s. A
relatively new classification method for both linear and
nonlinear data

Features: training can be slow but accuracy is high owing
to their ability to model complex nonlinear decision
boundaries (margin maximization)

Used for: classification and numeric prediction

Applications: handwritten digit recognition, object
recognition, speaker identification, benchmarking time-
series prediction tests
20
SVM—Support Vector Machines




It uses a nonlinear mapping to transform the original
training data into a higher dimension if required.
With the new dimension, it searches for the linear optimal
separating hyperplane (i.e., “decision boundary”)
With an appropriate nonlinear mapping to a sufficiently
high dimension, data from two classes can always be
separated by a hyperplane
SVM finds this hyperplane using support vectors
(“essential” training tuples) and margins (defined by the
support vectors)
21
SVM—General Philosophy
There are an infinite number of separating hyperplanes (answers). Which one is the best?
[Figure: two candidate separators, one with a small margin and one with a large margin; the support vectors lie on the margin]
22
Large Margin Linear Classifier
 The linear discriminant function (classifier) with the maximum margin is the best
 Margin is defined as the width that the boundary could be shifted by before hitting a data point
 Why is it the best? It is robust to noise and outliers and thus has strong generalization ability
[Figure: points labeled +1 and −1 in the (x1, x2) plane; the margin forms a “safe zone” around the decision boundary]
23
Large Margin Linear Classifier
 We know that the support vectors x+ and x- lie on the two margin boundaries:
    w^T x+ + b = +1
    w^T x- + b = −1
 The margin width is:
    M = (x+ − x-) · n = (x+ − x-) · w/||w|| = 2/||w||
 Formulation:
    maximize 2/||w||, which is equivalent to minimize (1/2)||w||²
[Figure: points labeled +1 and −1 in the (x1, x2) plane; the support vectors x+ and x- lie on the margin boundaries, and n = w/||w|| is the unit normal to the boundary]
24
 SVM searches for the hyperplane with the largest margin, i.e., the maximum marginal hyperplane (MMH)
 This is a constrained (convex) quadratic optimization problem: a quadratic objective function with linear constraints
    minimize (1/2)||w||²
    s.t. y_i (w^T x_i + b) ≥ 1 for all i
    (equivalently: for y_i = +1, w^T x_i + b ≥ 1; for y_i = −1, w^T x_i + b ≤ −1)
 Lagrangian function:
    minimize L_p(w, b, α_i) = (1/2)||w||² − Σ_{i=1..n} α_i [ y_i (w^T x_i + b) − 1 ]
    s.t. α_i ≥ 0
 Only the support vectors end up with nonzero Lagrangian multipliers α_i
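As a hedged, library-level illustration of solving this maximum-margin problem (scikit-learn's SVC is assumed to be available; the toy data are invented, and a large C is used to approximate the hard-margin formulation):

import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data (assumption, for illustration only).
X = np.array([[1.0, 1.0], [2.0, 1.5], [2.0, 3.0],
              [-1.0, -1.0], [-2.0, -1.5], [-2.0, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])

# Large C approximates: minimize (1/2)||w||^2  s.t.  y_i (w^T x_i + b) >= 1.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b, "margin width =", 2 / np.linalg.norm(w))
print("support vectors:", clf.support_vectors_)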
25
Solution of SVM
 The solution has the form:
    w = Σ_{i=1..n} α_i y_i x_i = Σ_{i∈SV} α_i y_i x_i
 The linear discriminant function is:
    g(x) = w^T x + b = Σ_{i∈SV} α_i y_i x_i^T x + b
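A hedged sketch of computing g(x) from the support vectors alone, using scikit-learn attributes (dual_coef_ stores the products α_i·y_i); the data are invented for illustration:

import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 3.0], [-1.0, -1.0], [-2.0, -3.0]])
y = np.array([1, 1, -1, -1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# g(x) = sum_{i in SV} alpha_i y_i x_i^T x + b, using only the support vectors.
def g(x):
    return float(clf.dual_coef_[0] @ (clf.support_vectors_ @ x) + clf.intercept_[0])

x_new = np.array([0.5, 1.0])
print(g(x_new), clf.decision_function([x_new])[0])   # the two values should agree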
26
Why Is SVM Effective on High Dimensional Data?

The complexity of the trained classifier is characterized by the # of support vectors rather than by the dimensionality of the data

The support vectors are the essential or critical training examples —
they lie closest to the decision boundary (MMH)

If all other training examples are removed and the training is
repeated, the same separating hyperplane would be found

The number of support vectors found can be used to compute an
(upper) bound on the expected error rate of the SVM classifier, which
is independent of the data dimensionality

Thus, an SVM with a small number of support vectors can
have good generalization, even when the dimensionality of
the data is high.
27
SVM—Linearly Inseparable
 Transform the original input data into a higher dimensional space
 Search for a linear separating hyperplane in the new space
[Figure: linearly inseparable data in the original (A1, A2) attribute space]
28
Kernel functions for Nonlinear Classification

Instead of computing the dot product on the transformed data, it is mathematically equivalent to apply a kernel function K(X_i, X_j) to the original data, i.e., K(X_i, X_j) = Φ(X_i) · Φ(X_j)

SVM Website: http://www.kernel-machines.org/

 Typical Kernel Functions (listed below)

SVM can also be used for classifying multiple (> 2) classes
and for regression analysis (with additional steps)
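The typical kernel functions referred to above, in their commonly cited forms (standard formulas, restated here because the slide's figure did not survive extraction):

    Polynomial kernel of degree h:      K(X_i, X_j) = (X_i · X_j + 1)^h
    Gaussian radial basis function:     K(X_i, X_j) = exp(−||X_i − X_j||² / (2σ²))
    Sigmoid kernel:                     K(X_i, X_j) = tanh(κ X_i · X_j − δ)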
29
Scaling SVM by Hierarchical Micro-Clustering

SVM is not scalable to the number of data objects in terms of training
time and memory usage

H. Yu, J. Yang, and J. Han, “Classifying Large Data Sets Using SVM with Hierarchical Clusters”, KDD'03

CB-SVM (Clustering-Based SVM)

Given limited amount of system resources (e.g., memory),
maximize the SVM performance in terms of accuracy and the
training speed


Use micro-clustering to effectively reduce the number of points to
be considered
When deriving support vectors, de-cluster the micro-clusters near the “candidate vectors” to ensure high classification accuracy
30
CF-Tree: Hierarchical Micro-cluster


Read the data set once, construct a statistical summary of the data
(i.e., hierarchical clusters) given a limited amount of memory
Micro-clustering: Hierarchical indexing structure

provide finer samples closer to the boundary and coarser
samples farther from the boundary
31
Selective Declustering: Ensure High Accuracy

CF tree is a suitable base structure for selective declustering

De-cluster only the cluster Ei such that


Di – Ri < Ds, where Di is the distance from the boundary to the
center point of Ei and Ri is the radius of Ei
Decluster only the cluster whose subclusters have possibilities to be
the support cluster of the boundary

“Support cluster”: The cluster whose centroid is a support vector
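A tiny hedged helper expressing the declustering test above; the function name and the meaning attached to D_s are assumptions rather than the paper's actual API:

def should_decluster(D_i, R_i, D_s):
    """De-cluster entry E_i when D_i - R_i < D_s: some point inside the cluster
    (within radius R_i of a center at distance D_i from the boundary) could lie
    within distance D_s of the boundary and thus become a support candidate."""
    return D_i - R_i < D_s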
32
CB-SVM Algorithm: Outline





Construct two CF-trees from positive and negative data
sets independently
 Need one scan of the data set
Train an SVM from the centroids of the root entries
De-cluster the entries near the boundary into the next
level
 The children entries de-clustered from the parent
entries are accumulated into the training set with the
non-declustered parent entries
Train an SVM again from the centroids of the entries in
the training set
Repeat until nothing is accumulated
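A high-level hedged sketch of the outline above; every helper name here (build_cf_tree, root_entries, children_of, near_boundary, train_svm) is hypothetical and only mirrors the listed steps, not the paper's actual interface:

def cb_svm(positive_data, negative_data, train_svm, build_cf_tree,
           root_entries, children_of, near_boundary):
    """CB-SVM outline; all callables are hypothetical placeholders."""
    # 1. Construct two CF-trees, one per class (one scan of each data set).
    pos_tree, neg_tree = build_cf_tree(positive_data), build_cf_tree(negative_data)

    # 2. Train an initial SVM from the centroids of the root entries.
    training_set = root_entries(pos_tree) + root_entries(neg_tree)
    svm = train_svm(training_set)

    while True:
        # 3. De-cluster entries near the boundary into the next level; their children
        #    join the training set together with the non-declustered parent entries.
        #    (Assume near_boundary returns False for leaf entries so the loop terminates.)
        to_expand = [e for e in training_set if near_boundary(e, svm)]
        if not to_expand:            # 5. Repeat until nothing is accumulated.
            return svm
        kept = [e for e in training_set if e not in to_expand]
        children = [c for e in to_expand for c in children_of(e)]
        training_set = kept + children

        # 4. Train an SVM again from the centroids of the entries in the training set.
        svm = train_svm(training_set)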
33
SVM vs. Neural Network
 SVM
   Deterministic algorithm
   Nice generalization properties
   Hard to learn: learned in batch mode using quadratic programming techniques
   Using kernels, can learn very complex functions
 Neural Network
   Nondeterministic algorithm
   Generalizes well but doesn't have a strong mathematical foundation
   Can easily be learned in incremental fashion
   To learn complex functions, use a multilayer perceptron (nontrivial)
34
Summary





NN and SVM are robust, generalised classifiers.
Backpropagation: employs the method of gradient descent, searching for a set of weights that minimizes the mean squared error between the predicted and the actual class label.
SVM: maps the data to a higher dimension and solves a constrained quadratic optimization so that the resulting hyperplane fits the available data well without over-fitting; the essential training tuples are the support vectors.
Both methods allow extensive degrees of freedom in the model-building process.
Learning Outcome: Basics of BPNN & SVM as classifiers for data analysis.
35