PowerPoint Presentation (Transcript)


EECP0720 Expert Systems – Artificial Neural Networks
Artificial Neural Networks
Sanun Srisuk 42973003
[email protected]
Introduction
Artificial neural networks (ANNs) provide a general,
practical method for learning real-valued, discrete-valued, and vector-valued functions from examples.
Algorithms such as BACKPROPAGATION use gradient
descent to tune network parameters to best fit a training
set of input-output pairs. ANN learning is robust to
errors in the training data and has been successfully
applied to problems such as face recognition/detection,
speech recognition, and learning robot control strategies.
Autonomous Vehicle Steering
Characteristics of ANNs
- Instances are represented by many attribute-value pairs.
- The target function output may be discrete-valued, real-valued, or a vector of several real- or discrete-valued attributes.
- The training examples may contain errors.
- Long training times are acceptable.
- Fast evaluation of the learned target function may be required.
- The ability of humans to understand the learned target function is not important.
Perceptrons
One type of ANN system is based on a unit called a perceptron. A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, and outputs 1 if the result is greater than some threshold and -1 otherwise:

o(x_1, ..., x_n) = 1 if w_0 + w_1 x_1 + ... + w_n x_n > 0, and -1 otherwise

The perceptron function can sometimes be written more compactly as o(x) = sgn(w · x), where sgn(y) = 1 if y > 0 and -1 otherwise, and where the additional constant input x_0 = 1 lets w_0 act as the threshold. The space H of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors.
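As a minimal sketch (not part of the original slides), such a unit can be implemented directly from the definition above, assuming NumPy:

import numpy as np

def perceptron_output(w, x):
    """Perceptron output: sgn(w . x), with x_0 = 1 prepended so w_0 acts as the threshold."""
    x = np.concatenate(([1.0], x))
    return 1 if np.dot(w, x) > 0 else -1

# Example: hand-chosen weights representing the boolean AND function
# with inputs encoded as +1 / -1 (w_0 = -0.8, w_1 = w_2 = 0.5).
w_and = np.array([-0.8, 0.5, 0.5])
print(perceptron_output(w_and, np.array([1.0, 1.0])))    # -> 1
print(perceptron_output(w_and, np.array([1.0, -1.0])))   # -> -1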
Representational Power of Perceptrons

Decision surface
(Figures: a linear decision surface, as produced by a single perceptron, and a nonlinear decision surface.)

Programming Example of Decision Surface
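No code for this example survives in the transcript. As a minimal sketch (assuming NumPy, with weights chosen by hand for illustration), the following evaluates a fixed perceptron over a grid of points; the +1/-1 pattern changes sign exactly along the straight line w_0 + w_1 x_1 + w_2 x_2 = 0, showing that a single perceptron produces a linear decision surface:

import numpy as np

w = np.array([-1.0, 1.0, 1.0])      # hypothetical weights: boundary is the line x_1 + x_2 = 1

def classify(x1, x2):
    return 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else -1

# Print the perceptron's decision over a small grid (rows from high x_2 to low x_2).
for x2 in np.linspace(1.5, -0.5, 5):
    row = [classify(x1, x2) for x1 in np.linspace(-0.5, 1.5, 5)]
    print(" ".join(f"{v:+d}" for v in row))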
The Perceptron Training Rule
One way to learn an acceptable weight vector is to begin with random weights, then iteratively apply the perceptron to each training example, modifying the perceptron weights whenever it misclassifies an example. This process is repeated, iterating through the training examples as many times as needed until the perceptron classifies all training examples correctly. Weights are modified at each step according to the perceptron training rule, which revises the weight w_i associated with input x_i according to the rule

w_i ← w_i + Δw_i, where Δw_i = η (t - o) x_i

Here t is the target output for the current training example, o is the output generated by the perceptron, and η is a small positive constant called the learning rate.
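A minimal sketch of this rule (assuming NumPy and the +1/-1 output convention; the toy data set at the bottom is mine, not the slides'):

import numpy as np

def sgn(y):
    return 1 if y > 0 else -1

def train_perceptron(X, t, eta=0.1, max_epochs=100):
    """Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i on misclassified examples."""
    X = np.hstack([np.ones((len(X), 1)), X])          # prepend x_0 = 1 for the threshold weight
    w = np.random.uniform(-0.05, 0.05, X.shape[1])    # begin with small random weights
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, t):
            o = sgn(np.dot(w, x))
            if o != target:
                w += eta * (target - o) * x           # update only on misclassified examples
                errors += 1
        if errors == 0:                               # every training example classified correctly
            break
    return w

# Toy linearly separable data: the boolean OR function with +1/-1 encoding.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
t = np.array([-1, 1, 1, 1])
print(train_perceptron(X, t))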
Gradient Descent and Delta Rule
The delta training rule is best understood by considering the task of training an unthresholded perceptron; that is, a linear unit for which the output o is given by

o(x) = w · x

In order to derive a weight learning rule for linear units, let us begin by specifying a measure for the training error of a hypothesis (weight vector), relative to the training examples:

E(w) = (1/2) Σ_{d ∈ D} (t_d - o_d)²

where D is the set of training examples, t_d is the target output for training example d, and o_d is the output of the linear unit for training example d.
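As a small illustrative sketch (assuming NumPy; the function names and toy data are mine):

import numpy as np

def linear_output(w, X):
    """Unthresholded perceptron: o_d = w . x_d for each training example (x_0 = 1 prepended)."""
    X = np.hstack([np.ones((len(X), 1)), X])
    return X @ w

def training_error(w, X, t):
    """E(w) = 1/2 * sum_d (t_d - o_d)^2 over the training set."""
    o = linear_output(w, X)
    return 0.5 * np.sum((t - o) ** 2)

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0.0, 1.0, 1.0, 2.0])                       # toy target: t = x_1 + x_2
print(training_error(np.array([0.0, 1.0, 1.0]), X, t))   # perfect weights give error 0.0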
Visualizing the Hypothesis Space
(Figure: the error surface over the hypothesis space of weight vectors; gradient descent starts from a randomly chosen initial weight vector and moves downhill to the weights with minimum error.)
Derivation of the Gradient Descent Rule
The vector of partial derivatives

∇E(w) = [ ∂E/∂w_0, ∂E/∂w_1, ..., ∂E/∂w_n ]

is called the gradient of E with respect to the weight vector w, written ∇E(w). The gradient specifies the direction that produces the steepest increase in E. The negative of this vector therefore gives the direction of steepest decrease. The training rule for gradient descent is

w ← w + Δw, where Δw = -η ∇E(w)
Derivation of the Gradient Descent Rule (cont.)
The negative sign is present because we want to move the weight vector in the direction that decreases E. This training rule can also be written in its component form

Δw_i = -η ∂E/∂w_i

which makes it clear that steepest descent is achieved by altering each component w_i of w in proportion to ∂E/∂w_i.
Derivation of the Gradient Descent Rule (cont.)
The vector of ∂E/∂w_i derivatives that form the gradient can be obtained by differentiating E as defined above:

∂E/∂w_i = ∂/∂w_i (1/2) Σ_d (t_d - o_d)² = Σ_d (t_d - o_d)(-x_id)

where x_id denotes the single input component x_i for training example d. The weight update rule for standard (batch) gradient descent can therefore be summarized as

Δw_i = η Σ_d (t_d - o_d) x_id
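A minimal sketch of this batch update (assuming NumPy; the learning rate, epoch count, and toy data are arbitrary choices of mine):

import numpy as np

def gradient_descent(X, t, eta=0.05, epochs=200):
    """Standard (batch) gradient descent for a linear unit:
    delta_w_i = eta * sum_d (t_d - o_d) * x_id, applied once per pass over the whole training set."""
    X = np.hstack([np.ones((len(X), 1)), X])    # x_0 = 1 for the bias weight w_0
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        o = X @ w                               # outputs o_d for every training example d
        w += eta * X.T @ (t - o)                # one full-batch gradient step
    return w

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0.0, 1.0, 1.0, 2.0])              # toy target: t = x_1 + x_2
print(gradient_descent(X, t))                   # approaches [0, 1, 1]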
Stochastic Approximation to Gradient Descent
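The algorithm figure from this slide is not reproduced in the transcript. As a minimal sketch, the stochastic (incremental) version updates the weights after each individual training example using the delta rule Δw_i = η (t - o) x_i, rather than summing the error over all examples before updating (assuming NumPy; names and data are mine):

import numpy as np

def stochastic_gradient_descent(X, t, eta=0.05, epochs=200):
    """Incremental (stochastic) gradient descent: after each example d, apply
    delta_w_i = eta * (t_d - o_d) * x_id, approximating the batch gradient step."""
    X = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            o = np.dot(w, x)                    # linear unit output for this one example
            w += eta * (target - o) * x         # per-example weight update
    return w

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0.0, 1.0, 1.0, 2.0])
print(stochastic_gradient_descent(X, t))        # also approaches [0, 1, 1]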
Summary of Perceptron
- The perceptron training rule is guaranteed to succeed if:
  - the training examples are linearly separable, and
  - the learning rate is sufficiently small.
- The linear unit training rule uses gradient descent and is:
  - guaranteed to converge to the hypothesis with minimum squared error,
  - given a sufficiently small learning rate,
  - even when the training data contain noise.
BACKPROPAGATION Algorithm
Error Function
The BACKPROPAGATION algorithm learns the weights for a multilayer network, given a network with a fixed set of units and interconnections. It employs gradient descent to attempt to minimize the squared error between the network output values and the target values for those outputs. We begin by redefining E to sum the errors over all of the network output units:

E(w) = (1/2) Σ_{d ∈ D} Σ_{k ∈ outputs} (t_kd - o_kd)²

where outputs is the set of output units in the network, and t_kd and o_kd are the target and output values associated with the kth output unit and training example d.
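A small sketch of this redefined error (assuming NumPy, with targets and outputs arranged as arrays of shape (number of examples, number of output units); the numbers are made up for illustration):

import numpy as np

def network_error(T, O):
    """E(w) = 1/2 * sum over examples d and output units k of (t_kd - o_kd)^2."""
    return 0.5 * np.sum((T - O) ** 2)

# Two training examples, three output units each.
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])      # target values t_kd
O = np.array([[0.9, 0.1, 0.2],
              [0.2, 0.7, 0.1]])      # network output values o_kd
print(network_error(T, O))           # -> 0.1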
Architecture of Backpropagation
Backpropagation Learning Algorithm
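The algorithm listing appears only as images in the transcript. As a minimal sketch (not the slides' own listing) of stochastic-gradient BACKPROPAGATION for a network with one hidden layer of sigmoid units, assuming NumPy and the standard error terms δ_k = o_k (1 - o_k)(t_k - o_k) for output units and δ_h = o_h (1 - o_h) Σ_k w_kh δ_k for hidden units:

import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def backpropagation(X, T, n_hidden=4, eta=0.5, epochs=10000, seed=0):
    """Stochastic gradient descent version of BACKPROPAGATION, one hidden sigmoid layer."""
    rng = np.random.default_rng(seed)
    X = np.hstack([np.ones((len(X), 1)), X])              # x_0 = 1 input for the bias weights
    n_in, n_out = X.shape[1], T.shape[1]
    W_h = rng.uniform(-0.05, 0.05, (n_hidden, n_in))      # input -> hidden weights
    W_o = rng.uniform(-0.05, 0.05, (n_out, n_hidden))     # hidden -> output weights
    for _ in range(epochs):
        for x, t in zip(X, T):
            # Propagate the input forward through the network.
            h = sigmoid(W_h @ x)                           # hidden unit outputs o_h
            o = sigmoid(W_o @ h)                           # output unit outputs o_k
            # Propagate the errors backward through the network.
            delta_o = o * (1 - o) * (t - o)                # error terms for output units
            delta_h = h * (1 - h) * (W_o.T @ delta_o)      # error terms for hidden units
            # Update each weight: w_ji <- w_ji + eta * delta_j * x_ji.
            W_o += eta * np.outer(delta_o, h)
            W_h += eta * np.outer(delta_h, x)
    return W_h, W_o

# Toy example: learn XOR, which a single perceptron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W_h, W_o = backpropagation(X, T)
Xb = np.hstack([np.ones((4, 1)), X])
print(sigmoid(W_o @ sigmoid(W_h @ Xb.T)).T.round(2))       # outputs should be near 0, 1, 1, 0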
Face Detection using Neural Networks
Training Process
(Diagram: examples from a Face Database are presented with target output = 1, and examples from a Non-Face Database with target output = 0; the neural network is trained on both so that, given a new image, it answers "face or non-face?".)
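As a minimal sketch of this training setup (assuming scikit-learn's MLPClassifier; the random arrays below merely stand in for the face and non-face image databases, and all sizes and parameters are placeholders of mine):

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder data: each "image" is a flattened vector of 20x20 = 400 pixel values.
face_images = rng.random((200, 400))          # would be loaded from the Face Database
nonface_images = rng.random((200, 400))       # would be loaded from the Non-Face Database

X = np.vstack([face_images, nonface_images])
y = np.concatenate([np.ones(200),             # output = 1 for face examples
                    np.zeros(200)])           # output = 0 for non-face examples

# A small feed-forward network trained to answer "face or non-face?" for a new image.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
net.fit(X, y)
print(net.predict(X[:3]))                     # 1 = face, 0 = non-face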
End of Presentation
Derivation of Backpropagation
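The derivation itself appears only as images in the original slides. For reference, a brief sketch of the standard result for sigmoid units (not a reproduction of the slides): differentiating E_d = (1/2) Σ_k (t_k - o_k)² with respect to a weight w_ji and applying the chain rule through the sigmoid gives the error terms

δ_k = o_k (1 - o_k)(t_k - o_k) for each output unit k,
δ_h = o_h (1 - o_h) Σ_{k ∈ outputs} w_kh δ_k for each hidden unit h,

and the weight update Δw_ji = η δ_j x_ji, where x_ji is the input from unit i into unit j.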