Transcript Slide 1

Natural Language Processing

Machine Learning Tools

Zhao Hai (赵海)

Department of Computer Science and Engineering, Shanghai Jiao Tong University
[email protected]

Outline

Machine Learning Approaches for Natural Language Processing
– k-Nearest Neighbor
– Support Vector Machine
– Maximum Entropy (log-linear) Model

What’s Machine Learning

• Learning from known data and giving predictions on unknown data.

• Typically, classification.

• Types
– Supervised learning: labeled data are necessary.
– Unsupervised learning: only unlabeled data are used, but some heuristic rules are necessary.
– Semi-supervised learning: both labeled and unlabeled data are used.

What’s Machine Learning

• What we are talking about is supervised machine learning.

• Natural language processing often calls for structured learning.

Data

• Real data: you know nothing about it.

• Training data: for learning
• Test data: for evaluation
• Development data: for parameter optimization

Classification

• Basic operation in machine learning
– Binary classification
– Multi-class classification can be determined by a group of binary classification results.

• Learning often results in a model.

• Prediction is given based on such a model.


Outline

Machine Learning Approaches for Natural Language Processing
– k-Nearest Neighbor
– Support Vector Machine
– Maximum Entropy (log-linear) Model

k-Nearest Neighbor (k-NN)

This part is based on slides by Xia Fei

Instance-based (IB) learning

• No training: store all training instances. ("Lazy learning")

• Examples:
– k-NN
– Locally weighted regression
– Radial basis functions
– Case-based reasoning
– …

• The most well-known IB method: k-NN

k-NN

[Figure: a 2-D scatter of positive (+) and negative (o) training instances, with an unlabeled query point (?) to be classified by its nearest neighbors.]

k-NN

• For a new instance d,
– find the k training instances that are closest to d;
– perform majority voting or weighted voting.

• Properties:
– A "lazy" classifier: no training.
– Feature selection and the distance measure are crucial.

The algorithm

1. Determine the parameter k.
2. Determine the distance or similarity measure between instances.
3. Calculate the distance between the query instance and all the training instances.
4. Sort the distances and determine the k nearest neighbors.
5. Gather the labels of the k nearest neighbors.
6. Use simple majority voting or weighted voting.
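A minimal sketch of this algorithm in Python (assuming NumPy is available; the data, labels, and query below are illustrative placeholders, not anything from the slides):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    """Classify `query` by majority vote among its k nearest training instances."""
    # Step 3: Euclidean distance from the query to every training instance
    dists = np.sqrt(((X_train - query) ** 2).sum(axis=1))
    # Step 4: indices of the k smallest distances
    nearest = np.argsort(dists)[:k]
    # Steps 5-6: gather labels and take a simple majority vote
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy usage with made-up 2-D data
X = np.array([[1.0, 1.0], [1.2, 0.8], [4.0, 4.2], [3.8, 4.0]])
y = np.array(["+", "+", "o", "o"])
print(knn_predict(X, y, np.array([1.1, 0.9]), k=3))   # -> "+"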

Picking k

• Use N-fold cross-validation: pick the k that minimizes the cross-validation error.

Normalizing attribute values

• Distance could be dominated by attributes with large numeric ranges:
– Example features: age, income
– Original data: x1 = (35, 76K), x2 = (36, 80K), x3 = (70, 79K)
– Assume: age ∈ [0, 100], income ∈ [0, 200K]
– After normalization: x1 = (0.35, 0.38), x2 = (0.36, 0.40), x3 = (0.70, 0.395)
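A small sketch of the min-max scaling used above, in Python with NumPy (the values and ranges are the ones from this slide):

import numpy as np

# Original instances: (age, income)
X = np.array([[35, 76_000], [36, 80_000], [70, 79_000]], dtype=float)

# Assumed attribute ranges: age in [0, 100], income in [0, 200K]
lo = np.array([0.0, 0.0])
hi = np.array([100.0, 200_000.0])

# Min-max normalization to [0, 1] so no single attribute dominates the distance
X_norm = (X - lo) / (hi - lo)
print(X_norm)   # [[0.35 0.38], [0.36 0.4], [0.7 0.395]]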

The Choice of Features

• Imagine there are 100 features, and only 2 of them are relevant to the target label.

• k-NN is easily misled in high-dimensional spaces.

⇒ Feature weighting or feature selection

Feature weighting

• Stretch the j-th axis by weight w_j.
• Use cross-validation to automatically choose the weights w_1, …, w_n.
• Setting w_j to zero eliminates that dimension altogether.

Similarity measure

• Euclidean distance: $d(x, x') = \sqrt{\sum_j (x_j - x'_j)^2}$
• Weighted Euclidean distance: $d(x, x') = \sqrt{\sum_j w_j (x_j - x'_j)^2}$
• Similarity measure: cosine, $\cos(x, x') = \frac{x \cdot x'}{\|x\|\,\|x'\|}$

Voting to determine the Label

• Majority voting: $c^* = \arg\max_c \sum_i \delta(c, f_i(x))$
• Weighted voting: each neighbor gets a weight, $c^* = \arg\max_c \sum_i w_i \, \delta(c, f_i(x))$ with $w_i = 1/\mathrm{dist}(x, x_i)$
⇒ With weighted voting we can use all the training examples.

Summary of kNN

• Strengths:
– Simplicity (conceptual)
– Efficiency at training time: no training
– Handles multi-class problems
– Stability and robustness: averaging over k neighbors
– Prediction accuracy: good when the training data is large

• Weaknesses:
– Efficiency at test time: need to calculate distances to all training instances
– It is not clear which types of distance measure and features to use.

Outline

Machine Learning Approaches for Natural Language Processing
– k-Nearest Neighbor
– Support Vector Machine
– Maximum Entropy (log-linear) Model

Support Vector Machines

This part is partially revised from the slides by Constantin F. Aliferis & Ioannis Tsamardinos

Support Vector Machines

• The decision surface is a hyperplane (a line in 2-D) in feature space (similar to the perceptron).
• Arguably, the most important recent discovery in machine learning.
• In a nutshell:
– Find the hyperplane that maximizes the margin between the two classes.
– If the data are not separable, find the hyperplane that maximizes the margin and minimizes a (weighted average of the) misclassifications.
– Map the data to a predetermined, very high-dimensional space via a kernel function.

Support Vector Machines

• Three main ideas:
1. Define what an optimal hyperplane is (in a way that can be identified computationally efficiently): maximize the margin.
2. Extend the above definition to non-linearly separable problems: have a penalty term for misclassifications.
3. Map the data to a high-dimensional space where it is easier to classify with linear decision surfaces: reformulate the problem so that the data are mapped implicitly to this space.

Which Separating Hyperplane to Use?

[Figure: several candidate separating hyperplanes between the two classes in the (Var 1, Var 2) plane.]

Maximizing the Margin

IDEA 1: Select the separating hyperplane that maximizes the margin!

[Figure: the two classes in the (Var 1, Var 2) plane, with the margin width marked on either side of the separating hyperplane.]

Support Vectors

[Figure: the separating hyperplane and its margin in the (Var 1, Var 2) plane; the training points lying on the margin boundaries are the support vectors.]

Setting Up the Optimization Problem

[Figure: the hyperplanes $w \cdot x + b = k$, $w \cdot x + b = -k$, and $w \cdot x + b = 0$ in the (Var 1, Var 2) plane.]

The width of the margin is $\frac{2k}{\|w\|}$.

So the problem is:
$$\max \frac{2k}{\|w\|} \quad \text{s.t.} \quad w \cdot x + b \ge k \ \ \forall x \text{ of class 1}, \qquad w \cdot x + b \le -k \ \ \forall x \text{ of class 2}$$

Setting Up the Optimization Problem

There is a choice of scale and units for the data such that $k = 1$. Then the problem becomes:
$$\max \frac{2}{\|w\|} \quad \text{s.t.} \quad w \cdot x + b \ge 1 \ \ \forall x \text{ of class 1}, \qquad w \cdot x + b \le -1 \ \ \forall x \text{ of class 2}$$

[Figure: the hyperplanes $w \cdot x + b = 1$, $w \cdot x + b = -1$, and $w \cdot x + b = 0$ in the (Var 1, Var 2) plane.]

Setting Up the Optimization Problem

• If class 1 corresponds to +1 and class 2 corresponds to −1, we can rewrite
$$w \cdot x_i + b \ge 1 \ \ \forall x_i \text{ with } y_i = 1, \qquad w \cdot x_i + b \le -1 \ \ \forall x_i \text{ with } y_i = -1$$
as
$$y_i (w \cdot x_i + b) \ge 1, \ \ \forall x_i$$
• So the problem becomes:
$$\max \frac{2}{\|w\|} \ \text{ s.t. } \ y_i (w \cdot x_i + b) \ge 1, \ \forall x_i \qquad \text{or} \qquad \min \frac{1}{2}\|w\|^2 \ \text{ s.t. } \ y_i (w \cdot x_i + b) \ge 1, \ \forall x_i$$

Linear, Hard-Margin SVM Formulation

• Find $w, b$ that solve
$$\min \frac{1}{2}\|w\|^2 \ \text{ s.t. } \ y_i (w \cdot x_i + b) \ge 1, \ \forall x_i$$
• The problem is convex, so there is a unique global minimum value (when feasible).
• There is also a unique minimizer, i.e., the weight vector $w$ and offset $b$ that attain the minimum.
• Not solvable if the data are not linearly separable.
• Quadratic programming
– Very efficient computationally with modern constraint-optimization engines (handles thousands of constraints and training instances).

Support Vector Machines

• Three main ideas:
1. Define what an optimal hyperplane is (in a way that can be identified computationally efficiently): maximize the margin.
2. Extend the above definition to non-linearly separable problems: have a penalty term for misclassifications.
3. Map the data to a high-dimensional space where it is easier to classify with linear decision surfaces: reformulate the problem so that the data are mapped implicitly to this space.

Non-Linearly Separable Data

[Figure: non-separable data in the (Var 1, Var 2) plane; instances on the wrong side of the margin hyperplanes $w \cdot x + b = \pm 1$ are marked with slack values $\xi_i$.]

• Introduce slack variables $\xi_i$.
• Allow some instances to fall within the margin, but penalize them.

Formulating the Optimization Problem

[Figure: the margin hyperplanes $w \cdot x + b = \pm 1$ with slack variables $\xi_i$ for instances inside the margin or misclassified.]

• The constraint becomes:
$$y_i (w \cdot x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \ \ \forall x_i$$
• The objective function penalizes misclassified instances and those within the margin:
$$\min \ \frac{1}{2}\|w\|^2 + C \sum_i \xi_i$$
• $C$ trades off margin width and misclassifications.

Linear, Soft-Margin SVMs

$$\min \ \frac{1}{2}\|w\|^2 + C \sum_i \xi_i \quad \text{s.t.} \quad y_i (w \cdot x_i + b) \ge 1 - \xi_i, \ \ \xi_i \ge 0, \ \ \forall x_i$$
• The algorithm tries to keep the $\xi_i$ at zero while maximizing the margin.
• Notice: the algorithm does not minimize the number of misclassifications (an NP-complete problem) but the sum of distances from the margin hyperplanes.
• Other formulations use $\xi_i^2$ instead.
• As $C \to \infty$, we get closer to the hard-margin solution.
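A hedged sketch of training a linear soft-margin SVM in Python. The slides use SVM-light and LIBLINEAR; scikit-learn's SVC is used here purely for illustration (an assumption about tooling, not something the slides prescribe), with C playing exactly the trade-off role described above:

import numpy as np
from sklearn.svm import SVC

# Toy, not-quite-separable data (one "outlier" per class)
X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3], [0.5, 3.5], [3.5, 0.5]])
y = np.array([-1, -1, -1, 1, 1, 1, 1, -1])

# Linear soft-margin SVM: larger C -> fewer margin violations, closer to the hard margin
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print("w =", clf.coef_[0], "b =", clf.intercept_[0])
print("number of support vectors:", len(clf.support_vectors_))
print("prediction for (1, 1):", clf.predict([[1.0, 1.0]])[0])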

Robustness of Soft vs Hard Margin SVMs

[Figure: the same data in the (Var 1, Var 2) plane separated by a soft-margin SVM (tolerating a few slack points $\xi_i$) and by a hard-margin SVM, each with decision boundary $w \cdot x + b = 0$.]

Soft vs Hard Margin SVM

• Soft-margin SVMs always have a solution.
• Soft-margin is more robust to outliers
– Smoother decision surfaces (in the non-linear case)
• Hard-margin does not require guessing the cost parameter (it requires no parameters at all).

Support Vector Machines

• Three main ideas:
1. Define what an optimal hyperplane is (in a way that can be identified computationally efficiently): maximize the margin.
2. Extend the above definition to non-linearly separable problems: have a penalty term for misclassifications.
3. Map the data to a high-dimensional space where it is easier to classify with linear decision surfaces: reformulate the problem so that the data are mapped implicitly to this space.

Disadvantages of Linear Decision Surfaces

[Figure: data in the (Var 1, Var 2) plane that no straight line can separate well.]

Advantages of Non-Linear Surfaces

[Figure: the same data separated cleanly by a curved (non-linear) decision surface in the (Var 1, Var 2) plane.]

Linear Classifiers in High-Dimensional Spaces

[Figure: the original (Var 1, Var 2) space is mapped by a function $\Phi(x)$ to a space of constructed features (Constructed Feature 1, Constructed Feature 2), where a linear separator suffices.]

Mapping Data to a High-Dimensional Space

• Find a function $\Phi(x)$ that maps to a different space; the SVM formulation then becomes:
$$\min \ \frac{1}{2}\|w\|^2 + C \sum_i \xi_i \quad \text{s.t.} \quad y_i (w \cdot \Phi(x_i) + b) \ge 1 - \xi_i, \ \ \xi_i \ge 0, \ \ \forall x_i$$
• The data appear as $\Phi(x)$; the weights $w$ are now weights in the new space.
• Explicit mapping is expensive if $\Phi(x)$ is very high dimensional.
• Solving the problem without explicitly mapping the data is desirable.

Constrained Optimization

• Convert to unconstrained optimization by incorporating the constraints as an additional term:
$$\min_{w,b} \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i (w \cdot \Phi(x_i) + b) - 1 \ge 0, \ \ \forall x_i$$
• We find the optimal setting of $\{w, b\}$ by introducing Lagrange multipliers $\alpha_i \ge 0$ for the inequality constraints:
$$\min_{w,b} \frac{1}{2}\|w\|^2 - \sum_i \alpha_i \left( y_i (w \cdot \Phi(x_i) + b) - 1 \right), \qquad \alpha_i \ge 0$$

Constrained Optimization

• We thus minimize
$$J(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_i \alpha_i \left( y_i (w \cdot \Phi(x_i) + b) - 1 \right)$$
with respect to $\{w, b\}$.
• For fixed $\{\alpha_i\}$, setting the derivatives to zero gives
$$\frac{\partial J}{\partial w} = 0 \ \Rightarrow \ w = \sum_i \alpha_i y_i \Phi(x_i), \qquad \frac{\partial J}{\partial b} = 0 \ \Rightarrow \ \sum_i \alpha_i y_i = 0$$

The Dual of the SVM Formulation

• Original (primal) SVM formulation:
– $n$ inequality constraints
– $n$ positivity constraints
– $n$ slack variables $\xi_i$
$$\min_{w,b} \frac{1}{2}\|w\|^2 + C \sum_i \xi_i \quad \text{s.t.} \quad y_i (w \cdot \Phi(x_i) + b) \ge 1 - \xi_i, \ \ \xi_i \ge 0, \ \ \forall x_i$$
• The (Wolfe) dual of this problem:
– one equality constraint
– $n$ positivity constraints
– $n$ variables $\alpha_i$ (the Lagrange multipliers)
– a more complicated objective function
$$\min_{\alpha} \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \left( \Phi(x_i) \cdot \Phi(x_j) \right) - \sum_i \alpha_i \quad \text{s.t.} \quad C \ge \alpha_i \ge 0 \ \ \forall i, \ \ \sum_i \alpha_i y_i = 0$$
• NOTICE: the data only appear as inner products $\Phi(x_i) \cdot \Phi(x_j)$.

The Kernel Trick

• $\Phi(x_i) \cdot \Phi(x_j)$ means: map the data into the new space, then take the inner product of the new vectors.
• We can find a function such that $K(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j)$, i.e., the image of the inner product of the data is the inner product of the images of the data.
• Then we do not need to explicitly map the data into the high-dimensional space to solve the optimization problem (for training).
• How do we classify without explicitly mapping the new instances? It turns out that
$$\operatorname{sgn}(w \cdot \Phi(x) + b) = \operatorname{sgn}\!\left( \sum_i \alpha_i y_i K(x_i, x) + b \right),$$
where $b$ solves $\alpha_j \left( y_j \left( \sum_i \alpha_i y_i K(x_i, x_j) + b \right) - 1 \right) = 0$ for any $j$ with $\alpha_j \neq 0$.

Examples of Kernels

• Assume we use the mapping:
$$\Phi: \ (x_{TrkC}, x_{SH}) \ \mapsto \ \{\, x_{TrkC}^2,\ x_{SH}^2,\ \sqrt{2}\, x_{TrkC} x_{SH},\ \sqrt{2}\, x_{TrkC},\ \sqrt{2}\, x_{SH},\ 1 \,\}$$
• Consider the function $K(x, z) = (x \cdot z + 1)^2$.
• We can verify that:
$$\Phi(x) \cdot \Phi(z) = x_{TrkC}^2 z_{TrkC}^2 + x_{SH}^2 z_{SH}^2 + 2\, x_{TrkC} x_{SH} z_{TrkC} z_{SH} + 2\, x_{TrkC} z_{TrkC} + 2\, x_{SH} z_{SH} + 1$$
$$= \left( x_{TrkC} z_{TrkC} + x_{SH} z_{SH} + 1 \right)^2 = (x \cdot z + 1)^2 = K(x, z)$$

Polynomial and Gaussian Kernels

$$K(x, z) = (x \cdot z + 1)^p$$
• is called the polynomial kernel of degree $p$.
• For $p = 2$, if we measure 7,000 genes, using the kernel once means calculating a sum of products with 7,000 terms and then taking the square of that number.
• Mapping explicitly to the high-dimensional space means calculating approximately 50,000,000 new features for both training instances, then taking the inner product of those (another 50,000,000 terms to sum).
• In general, using the kernel trick provides huge computational savings over explicit mapping!
• Another commonly used kernel is the Gaussian (it maps to a space with as many dimensions as there are training cases):
$$K(x, z) = \exp\!\left( -\|x - z\|^2 / 2\sigma^2 \right)$$

The Mercer Condition

• Is there a mapping $\Phi(x)$ for any symmetric function $K(x, z)$? No.
• The SVM dual formulation requires calculating $K(x_i, x_j)$ for each pair of training instances. The array $G_{ij} = K(x_i, x_j)$ is called the Gram matrix.
• There is a feature space $\Phi(x)$ when the kernel is such that $G$ is always positive semi-definite (the Mercer condition).
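As a quick sanity check of the two preceding slides, the degree-2 polynomial kernel identity can be verified numerically and the resulting Gram matrix checked for positive semi-definiteness; an illustrative Python/NumPy sketch (the points are made up, and generic 2-D inputs stand in for the TrkC/SH example):

import numpy as np

def phi(x):
    # Explicit degree-2 polynomial feature map for a 2-D input (x1, x2)
    x1, x2 = x
    return np.array([x1**2, x2**2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     1.0])

def K(x, z):
    # Polynomial kernel of degree 2
    return (np.dot(x, z) + 1.0) ** 2

x = np.array([0.3, 1.7])
z = np.array([2.0, -0.5])
print(np.dot(phi(x), phi(z)), K(x, z))   # both 0.5625

# Gram matrix on a few points: symmetric and positive semi-definite (Mercer condition)
pts = np.array([[0.3, 1.7], [2.0, -0.5], [1.0, 1.0], [-1.2, 0.4]])
G = np.array([[K(a, b) for b in pts] for a in pts])
print(np.linalg.eigvalsh(G) >= -1e-9)    # all True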

Other Types of Kernel Methods

• SVMs that perform regression
• SVMs that perform clustering
• ν-Support Vector Machines: maximize the margin while bounding the number of margin errors
• Leave-One-Out Machines: minimize the bound on the leave-one-out error
• SVM formulations that take into account the difference in misclassification cost for the different classes
• Kernels suitable for sequences of strings, or other specialized kernels

Variable Selection with SVMs

• Recursive Feature Elimination
– Train a linear SVM.
– Remove the variables with the lowest weights (those variables affect classification the least), e.g., remove the lowest 50% of variables.
– Retrain the SVM with the remaining variables and repeat until classification performance is reduced.
• Very successful
• Other formulations exist where minimizing the number of variables is folded into the optimization problem.
• Similar algorithms exist for non-linear SVMs.
• Among the best and most efficient variable selection methods
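A hedged sketch of recursive feature elimination with a linear SVM, using scikit-learn's RFE helper (an assumed tool, not one named on the slides); it repeatedly drops the lowest-weight half of the features, as described above:

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
# 100 noisy features, of which only the first two actually determine the label
X = rng.normal(size=(200, 100))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Train a linear SVM, drop the lowest-weight 50% of features each round,
# and stop when 2 features remain.
selector = RFE(LinearSVC(C=1.0, dual=False), n_features_to_select=2, step=0.5)
selector.fit(X, y)
print("selected feature indices:", np.where(selector.support_)[0])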

Comparison with Neural Networks

Neural Networks
• Hidden layers map to lower-dimensional spaces
• Search space has multiple local minima
• Training is expensive
• Classification extremely efficient
• Requires choosing the number of hidden units and layers
• Very good accuracy in typical domains

SVMs
• Kernel maps to a very high-dimensional space
• Search space has a unique minimum
• Training is extremely efficient
• Classification extremely efficient
• Kernel and cost are the two parameters to select
• Very good accuracy in typical domains
• Extremely robust

Why do SVMs Generalize?

• Even though they map to a very high-dimensional space
– They have a very strong bias in that space
– The solution has to be a linear combination of the training instances
• There is a large theory on Structural Risk Minimization providing bounds on the error of an SVM
– Typically the error bounds are too loose to be of practical use

MultiClass SVMs

• One-versus-all
– Train n binary classifiers, one for each class against all other classes.
– The predicted class is the class of the most confident classifier.
• One-versus-one
– Train n(n−1)/2 classifiers, each discriminating between a pair of classes.
– Several strategies exist for selecting the final classification based on the outputs of the binary SVMs.
• Truly multiclass SVMs
– Generalize the SVM formulation to multiple categories.

Summary for SVMs

• SVMs express learning as a mathematical program, taking advantage of the rich theory in optimization.
• SVMs use the kernel trick to map indirectly to extremely high-dimensional spaces.
• SVMs are extremely successful, robust, efficient, and versatile, and there are good theoretical indications as to why they generalize well.

SVM Tools: SVM-light

• SVM-light: a command-line C program that implements the SVM learning algorithm
• Classification, regression, ranking
• Download at http://svmlight.joachims.org/
• Documentation on the same page
• Two programs
– svm_learn for training
– svm_classify for classification

SVM-light Examples

• Input format:
1 1:0.5 3:1 5:0.4
-1 2:0.9 3:0.1 4:2
• To train a classifier from train.data:
– svm_learn train.data train.model
• To classify new documents in test.data:
– svm_classify test.data train.model test.result
• Output format:
– Positive score → positive class
– Negative score → negative class
– The absolute value of the score indicates confidence
• Command-line options:
– -c: a trade-off parameter (use cross-validation to tune)

More on SVM-light

• Kernels
– Use the "-t" option
– Polynomial kernel
– User-defined kernel
• Semi-supervised learning (transductive SVM)
– Use "0" as the label for unlabeled examples
– Very slow

LibLinear

• LIBLINEAR – A Library for Large Linear Classification
– http://www.csie.ntu.edu.tw/~cjlin/liblinear/
• LIBLINEAR is a linear classifier for data with millions of instances and features. It supports
– L2-regularized classifiers: L2-loss linear SVM, L1-loss linear SVM, and logistic regression (LR)
– L1-regularized classifiers (after version 1.4): L2-loss linear SVM and logistic regression (LR)
– L2-regularized support vector regression (after version 1.9): L2-loss linear SVR and L1-loss linear SVR
• Main features of LIBLINEAR include
– Same data format as LIBSVM, our general-purpose SVM solver, and also similar usage
– Multi-class classification: 1) one-vs-the-rest, 2) Crammer & Singer
– Cross-validation for model selection
– Probability estimates (logistic regression only)
– Weights for unbalanced data
– MATLAB/Octave, Java, Python, Ruby interfaces
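For reference, scikit-learn's LinearSVC is built on LIBLINEAR, so the library can also be exercised from Python without the command-line tools; a minimal, illustrative sketch (the documents and labels are made up):

from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import CountVectorizer

docs = ["profits soared this quarter", "the team lost the match",
        "record revenue and earnings", "a painful defeat at home"]
labels = ["business", "sports", "business", "sports"]

# Bag-of-words features + L2-regularized, L2-loss linear SVM (LIBLINEAR backend)
vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = LinearSVC(C=1.0)
clf.fit(X, labels)

print(clf.predict(vec.transform(["quarterly profits beat forecasts"])))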

Outline

Machine Learning Approaches for Natural Language Processing
– k-Nearest Neighbor
– Support Vector Machine
– Maximum Entropy (log-linear) Model (this part is revised from slides by Michael Collins)

Overview

• Log-linear models
• The maximum-entropy property
• Smoothing, feature selection, etc. in log-linear models

Task: Part-of-Speech Tagging

• INPUT: – Profits soared at Boeing Co., easily topping forecasts on Wall Street, as their CEO Alan Mulally announced first quarter results.

• OUTPUT: – Profits/N soared/V at/P Boeing/N Co./N ,/, easily/ADV topping/V forecasts/N on/P Wall/N Street/N ,/, as/P their/POSS CEO/N Alan/N Mulally/N announced/V first/ADJ quarter/N results/N ./.

• N = Noun, V = Verb, P = Preposition, ADV = Adverb, ADJ = Adjective, …

Task: Information Extraction

• Named Entity Recognition
• INPUT:
– Profits soared at Boeing Co., easily topping forecasts on Wall Street, as their CEO Alan Mulally announced first quarter results.

• OUTPUT:
– Profits soared at [Company Boeing Co.], easily topping forecasts on [Location Wall Street], as their CEO [Person Alan Mulally] announced first quarter results.

Task: Named Entity Extraction as Tagging

• INPUT: – Profits soared at Boeing Co., easily topping forecasts on Wall Street, as their CEO Alan Mulally announced first quarter results.

• OUTPUT:
– Profits/NA soared/NA at/NA Boeing/SC Co./CC ,/NA easily/NA topping/NA forecasts/NA on/NA Wall/SL Street/CL ,/NA as/NA their/NA CEO/NA Alan/SP Mulally/CP announced/NA first/NA quarter/NA results/NA ./NA
• NA = No entity, SC = Start Company, CC = Continue Company, SL = Start Location, CL = Continue Location, …

The General Problem

• We have some input domain $\mathcal{X}$.
• We have a finite label set $\mathcal{Y}$.
• The aim is to provide a conditional probability $P(y \mid x)$ for any $x \in \mathcal{X}$ and $y \in \mathcal{Y}$.

An Example

• Hispaniola/NNP quickly/RB became/VB an/DT important/JJ base/?? from which Spain expanded its empire into the rest of the Western Hemisphere .

• There are many possible tags in the position ??: $\mathcal{Y} = \{\text{NN, NNS, Vt, Vi, IN, DT}, \ldots\}$
• The input domain $\mathcal{X}$ is the set of possible histories (contexts).
• We need to learn a function from (history, tag) pairs to a probability $P(\text{tag} \mid \text{history})$.

Representation: Histories

• A history is a 4-tuple $\langle t_{-1}, t_{-2}, w_{[1:n]}, i \rangle$:
– $t_{-1}, t_{-2}$ are the previous two tags.
– $w_{[1:n]}$ are the $n$ words in the input sentence.
– $i$ is the index of the word being tagged.
• $\mathcal{X}$ is the set of all possible histories.
• Hispaniola/NNP quickly/RB became/VB an/DT important/JJ base/?? from which Spain expanded its empire into the rest of the Western Hemisphere .
– $t_{-2}, t_{-1}$ = DT, JJ
– $w_{[1:n]}$ = ⟨Hispaniola, quickly, became, an, important, base, from, …⟩
– $i$ = 6

Feature Vector Representations

• The aim is to provide a conditional probability $P(y \mid x)$ for any $x \in \mathcal{X}$ and $y \in \mathcal{Y}$.
• A feature is a function $f: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ (often binary features or indicator functions $f: \mathcal{X} \times \mathcal{Y} \to \{0, 1\}$).
• Say we have $m$ features $f_k$ for $k = 1 \ldots m$.
⇒ A feature vector $f(x, y) \in \mathbb{R}^m$ for any $x \in \mathcal{X}$ and $y \in \mathcal{Y}$.

An Example (continued)

• $\mathcal{X}$ is the set of all possible histories of the form $\langle t_{-1}, t_{-2}, w_{[1:n]}, i \rangle$.
• $\mathcal{Y} = \{\text{NN, NNS, Vt, Vi, IN, DT}, \ldots\}$
• We have $m$ features $f_k: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ for $k = 1 \ldots m$.
• For example:
$$f_1(x, y) = \begin{cases} 1 & \text{if the current word } w_i \text{ is base and } y = \text{Vt} \\ 0 & \text{otherwise} \end{cases}$$
$$f_2(x, y) = \begin{cases} 1 & \text{if the current word } w_i \text{ ends in -ing and } y = \text{VBG} \\ 0 & \text{otherwise} \end{cases}$$
• So, for instance:
$$f_1(\langle \text{JJ, DT}, \langle \text{Hispaniola}, \ldots \rangle, 6 \rangle, \text{Vt}) = 1, \qquad f_2(\langle \text{JJ, DT}, \langle \text{Hispaniola}, \ldots \rangle, 6 \rangle, \text{Vt}) = 0$$

The Full Set of Features in [Ratnaparkhi 96]

• Word/tag features for all word/tag pairs, e.g.,
$$f_{100}(x, y) = \begin{cases} 1 & \text{if the current word } w_i \text{ is base and } y = \text{Vt} \\ 0 & \text{otherwise} \end{cases}$$
• Spelling features, e.g.,
$$f_{101}(x, y) = \begin{cases} 1 & \text{if the current word } w_i \text{ ends in -ing and } y = \text{VBG} \\ 0 & \text{otherwise} \end{cases}$$
$$f_{102}(x, y) = \begin{cases} 1 & \text{if the current word } w_i \text{ starts with pre- and } y = \text{NN} \\ 0 & \text{otherwise} \end{cases}$$
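A small Python sketch of two of these indicator features over (history, tag) pairs (the dict-based history representation is an illustrative assumption, not something defined on the slides):

def f_100(history, tag):
    # 1 if the current word is "base" and the proposed tag is Vt
    words, i = history["words"], history["i"]
    return 1 if words[i] == "base" and tag == "Vt" else 0

def f_101(history, tag):
    # 1 if the current word ends in -ing and the proposed tag is VBG
    words, i = history["words"], history["i"]
    return 1 if words[i].endswith("ing") and tag == "VBG" else 0

h = {"t_minus_2": "DT", "t_minus_1": "JJ",
     "words": ["Hispaniola", "quickly", "became", "an", "important", "base"],
     "i": 5}   # 0-based index of the word being tagged ("base")
print(f_100(h, "Vt"), f_101(h, "Vt"))   # 1 0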

The Full Set of Features in [Ratnaparkhi 96]

• Contextual features, e.g.,
$$f_{103}(x, y) = \begin{cases} 1 & \text{if } \langle t_{-2}, t_{-1}, y \rangle = \langle \text{DT, JJ, Vt} \rangle \\ 0 & \text{otherwise} \end{cases}$$
$$f_{104}(x, y) = \begin{cases} 1 & \text{if } \langle t_{-1}, y \rangle = \langle \text{JJ, Vt} \rangle \\ 0 & \text{otherwise} \end{cases}$$
$$f_{105}(x, y) = \begin{cases} 1 & \text{if } y = \text{Vt} \\ 0 & \text{otherwise} \end{cases}$$
$$f_{106}(x, y) = \begin{cases} 1 & \text{if the previous word } w_{i-1} = \text{the and } y = \text{Vt} \\ 0 & \text{otherwise} \end{cases}$$
$$f_{107}(x, y) = \begin{cases} 1 & \text{if the next word } w_{i+1} = \text{the and } y = \text{Vt} \\ 0 & \text{otherwise} \end{cases}$$

The Final Result

• We can come up with practically any questions (features) regarding history/tag pairs.
• For a given history $x \in \mathcal{X}$, each label in $\mathcal{Y}$ is mapped to a different feature vector, e.g.,
$$f(\langle \text{JJ, DT}, \langle \text{Hispaniola}, \ldots \rangle, 6 \rangle, \text{Vt}) = \ldots$$
$$f(\langle \text{JJ, DT}, \langle \text{Hispaniola}, \ldots \rangle, 6 \rangle, \text{JJ}) = 0110010101011110010$$
$$f(\langle \text{JJ, DT}, \langle \text{Hispaniola}, \ldots \rangle, 6 \rangle, \text{NN}) = 0001111101001100100$$
$$f(\langle \text{JJ, DT}, \langle \text{Hispaniola}, \ldots \rangle, 6 \rangle, \text{IN}) = 0001011011000000010$$

Log-Linear Models

• The aim is to provide a conditional probability $P(y \mid x)$ for any $x \in \mathcal{X}$ and $y \in \mathcal{Y}$.
• We have a feature vector $f(x, y) \in \mathbb{R}^m$ for any $x \in \mathcal{X}$ and $y \in \mathcal{Y}$.
• We also have a parameter vector $W \in \mathbb{R}^m$.
• We define
$$P(y \mid x; W) = \frac{e^{W \cdot f(x, y)}}{\sum_{y' \in \mathcal{Y}} e^{W \cdot f(x, y')}}$$

More About Log-Linear Models

• Why the name?
$$\log P(y \mid x; W) = \underbrace{W \cdot f(x, y)}_{\text{linear term}} - \underbrace{\log \sum_{y' \in \mathcal{Y}} e^{W \cdot f(x, y')}}_{\text{normalization term}}$$
• Maximum-likelihood estimates given a training sample $(x_i, y_i)$ for $i = 1 \ldots n$, each $(x_i, y_i) \in \mathcal{X} \times \mathcal{Y}$:
$$W_{ML} = \arg\max_W L(W), \quad \text{where} \quad L(W) = \sum_{i=1}^n \log P(y_i \mid x_i; W) = \sum_{i=1}^n W \cdot f(x_i, y_i) - \sum_{i=1}^n \log \sum_{y' \in \mathcal{Y}} e^{W \cdot f(x_i, y')}$$

Parameter Estimation:

Calculating the Maximum-Likelihood

• We need to maximize:
$$L(W) = \sum_{i=1}^n W \cdot f(x_i, y_i) - \sum_{i=1}^n \log \sum_{y' \in \mathcal{Y}} e^{W \cdot f(x_i, y')}$$
• Calculating gradients:
$$\frac{dL}{dW} = \sum_{i=1}^n f(x_i, y_i) - \sum_{i=1}^n \sum_{y' \in \mathcal{Y}} f(x_i, y') \, \frac{e^{W \cdot f(x_i, y')}}{\sum_{y'' \in \mathcal{Y}} e^{W \cdot f(x_i, y'')}}$$
$$= \underbrace{\sum_{i=1}^n f(x_i, y_i)}_{\text{Empirical counts}} - \underbrace{\sum_{i=1}^n \sum_{y' \in \mathcal{Y}} f(x_i, y') \, P(y' \mid x_i; W)}_{\text{Expected counts}}$$
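A NumPy sketch of these two quantities; `feat(x, y)` is a hypothetical function returning the feature vector f(x, y), and `data` is a list of (x, y) training pairs (both assumptions for illustration):

import numpy as np

def cond_probs(W, x, labels, feat):
    # P(y | x; W) = exp(W . f(x, y)) / sum_y' exp(W . f(x, y'))
    scores = np.array([np.dot(W, feat(x, y)) for y in labels])
    scores -= scores.max()                  # for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

def log_likelihood_grad(W, data, labels, feat):
    # dL/dW = empirical feature counts - expected feature counts
    grad = np.zeros_like(W)
    for x, y in data:
        grad += feat(x, y)                              # empirical counts
        probs = cond_probs(W, x, labels, feat)
        for y_prime, p in zip(labels, probs):
            grad -= p * feat(x, y_prime)                # expected counts
    return grad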

Parameter Estimation Approaches

• Iterative Scaling
– GIS
– IIS
• Gradient Ascent Methods
– First order: conjugate gradient methods
– Second order: LMVM / L-BFGS

Generalized Iterative Scaling

(Darroch and Ratcliff, 1972)

• Initialization: $W = 0$. Calculate
$$C = \max_{i, y} \sum_{k=1}^m f_k(x_i, y)$$
• Iterate until convergence:
– Calculate the empirical counts $H_k = \sum_{i=1}^n f_k(x_i, y_i)$ and the expected counts $E_k(W) = \sum_{i=1}^n \sum_{y' \in \mathcal{Y}} P(y' \mid x_i; W) \, f_k(x_i, y')$.
– For $k = 1 \ldots m$:
$$W_k \leftarrow W_k + \frac{1}{C} \log \frac{H_k}{E_k(W)}$$
• Converges to the maximum-likelihood solution provided that $\sum_i f_k(x_i, y_i) \neq 0$ for all $k$.
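An illustrative GIS loop under the assumptions above (non-negative features, non-zero empirical counts); `feat`, `data`, `labels`, and `cond_probs` are the same hypothetical helpers as in the earlier sketch, and W starts from the zero vector as on the slide:

import numpy as np

def gis(W, data, labels, feat, cond_probs, iterations=100):
    # C = max over training instances and candidate labels of the summed feature values
    C = max(feat(x, y).sum() for x, _ in data for y in labels)
    # Empirical counts H_k (fixed throughout)
    H = sum(feat(x, y) for x, y in data)
    for _ in range(iterations):
        E = np.zeros_like(W)                      # expected counts E_k(W)
        for x, _ in data:
            probs = cond_probs(W, x, labels, feat)
            for y, p in zip(labels, probs):
                E += p * feat(x, y)
        W = W + np.log(H / E) / C                 # GIS update
    return W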

Derivation of Iterative Scaling

• Consider a vector of updates $\delta \in \mathbb{R}^m$, so that $W^{k+1} = W^k + \delta$.
• The gain in log-likelihood is then
$$L(W + \delta) - L(W) = \sum_{i=1}^n \delta \cdot f(x_i, y_i) - \sum_{i=1}^n \log \frac{\sum_{y} e^{(W+\delta) \cdot f(x_i, y)}}{\sum_{y} e^{W \cdot f(x_i, y)}} = \sum_{i=1}^n \delta \cdot f(x_i, y_i) - \sum_{i=1}^n \log \sum_{y} p(y \mid x_i, W) \, e^{\delta \cdot f(x_i, y)}$$
• Lower-bound the gain:
$$L(W + \delta) - L(W) \ \ge \ \sum_{i=1}^n \delta \cdot f(x_i, y_i) + n - \sum_{i=1}^n \sum_{y} p(y \mid x_i, W) \, e^{\delta \cdot f(x_i, y)} \qquad (\text{from } \log x \le x - 1)$$
$$\ge \ \sum_{i=1}^n \delta \cdot f(x_i, y_i) + n - \sum_{i=1}^n \sum_{y} p(y \mid x_i, W) \sum_k \frac{f_k(x_i, y)}{C} \, e^{\delta_k C} \ = \ A(\delta)$$
(where $C = \max_{i,y} \sum_k f_k(x_i, y)$; the last step uses Jensen's inequality, $e^{\sum_x q_x x} \le \sum_x q_x e^{x}$ for any $q_x \ge 0$ with $\sum_x q_x = 1$)
• Differentiating the lower bound $A(\delta)$:
$$\frac{dA}{d\delta_k} = \sum_{i=1}^n f_k(x_i, y_i) - \sum_{i=1}^n \sum_{y} p(y \mid x_i, W) \, f_k(x_i, y) \, e^{C \delta_k} = H_k - E_k(W) \, e^{C \delta_k}$$
• Setting the derivatives equal to 0 gives the iterative scaling update:
$$\delta_k = \frac{1}{C} \log \frac{H_k}{E_k(W)}$$

Properties of GIS

• $L(W^{(n+1)}) \ge L(W^{(n)})$
• The sequence is guaranteed to converge.
• The convergence can be very slow.
• The running time of each iteration is $O(NPA)$:
– N: the training set size
– P: the number of classes
– A: the average number of features that are active for a given event (a, b).

Improved Iterative Scaling (Berger et al.)

• IIS avoids the global constant $C$ by working with the per-instance feature sum $f^{\#}(x, y) = \sum_k f_k(x, y)$ (using the same $\log x \le x - 1$ and Jensen's inequality steps, with $q_x \ge 0$ and $\sum_x q_x = 1$).
• Each update $\delta_k$ is obtained by solving
$$\sum_{i=1}^n f_k(x_i, y_i) - \sum_{i=1}^n \sum_{y'} p(y' \mid x_i, W) \, f_k(x_i, y') \, e^{\delta_k f^{\#}(x_i, y')} = 0$$

Gradient Ascent Methods: First Order

• We need to maximize $L(W)$, where
$$\frac{dL}{dW} = \sum_{i=1}^n f(x_i, y_i) - \sum_{i=1}^n \sum_{y' \in \mathcal{Y}} f(x_i, y') \, P(y' \mid x_i; W)$$
• Initialization: $W = 0$
• Iterate until convergence:
– Calculate $\Delta = \frac{dL}{dW}\big|_W$
– Calculate $\beta^* = \arg\max_{\beta} L(W + \beta \Delta)$ (line search)
– Set $W \leftarrow W + \beta^* \Delta$

Conjugate Gradient Methods

• (Vanilla) gradient ascent can be very slow.
• Conjugate gradient methods require calculation of the gradient at each iteration, but do a line search in a direction which is a function of the current gradient and the previous step taken.
• Conjugate gradient packages are widely available. In general, they require a function that, given $W$, returns the objective value $L(W)$ and the gradient $\frac{dL}{dW}\big|_W$. And that's about it!

Gradient Ascent Methods: Second Order

• Limited-memory variable metric (LMVM) methods – [Nocedal, 1997] or [Nocedal and Wright, 1999]
• The limited-memory BFGS (L-BFGS or LM-BFGS) algorithm is a member of the broad family of quasi-Newton optimization methods. It uses a limited-memory variation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update to approximate the inverse Hessian matrix.
– Nocedal, J. (1980). "Updating Quasi-Newton Matrices with Limited Storage". Mathematics of Computation 35 (151): 773–782.
• You only have to choose this type of optimizer; its implementation is very complicated (so use an existing package).
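In practice one hands the negative log-likelihood and its gradient to an off-the-shelf optimizer. A hedged sketch using SciPy's L-BFGS-B implementation (an assumed tool, not one named on the slides), reusing hypothetical `neg_log_likelihood` and `neg_gradient` helpers:

import numpy as np
from scipy.optimize import minimize

def fit_log_linear(neg_log_likelihood, neg_gradient, m):
    # Maximize L(W) by minimizing -L(W) with L-BFGS, starting from W = 0
    result = minimize(
        fun=neg_log_likelihood,   # W -> -L(W)
        x0=np.zeros(m),           # initialization: W = 0
        jac=neg_gradient,         # W -> -dL/dW
        method="L-BFGS-B",
    )
    return result.x               # estimated parameter vector W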

Parameter Estimation Approach Matters [Robert Malouf, 2002] @ COLING

Overview

• Log-linear models
• The maximum-entropy property
• Smoothing, feature selection, etc. in log-linear models

Maximum-Entropy Properties of Log-Linear Models

• We define the set of distributions which satisfy the linear constraints implied by the data:
$$\mathcal{P} = \left\{ p : \ \underbrace{\sum_{i=1}^n f(x_i, y_i)}_{\text{Empirical counts}} = \underbrace{\sum_{i=1}^n \sum_{y} f(x_i, y) \, p(y \mid x_i)}_{\text{Expected counts}} \right\}$$
Here, each $p$ is an $n|\mathcal{Y}|$-dimensional vector defining $p(y \mid x_i)$ for all $i, y$.
• Note that at least one distribution satisfies these constraints, i.e.,
$$p(y \mid x_i) = \begin{cases} 1 & \text{if } y = y_i \\ 0 & \text{otherwise} \end{cases}$$

Maximum-Entropy Properties of Log-Linear Models

• The entropy of any distribution is:
$$H(p) = -\frac{1}{n} \sum_{i=1}^n \sum_{y} p(y \mid x_i) \log p(y \mid x_i)$$
• Entropy is a measure of the "smoothness" of a distribution.
• In this case, entropy is maximized by the uniform distribution,
$$p(y \mid x_i) = \frac{1}{|\mathcal{Y}|} \ \text{ for all } i, y$$
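A tiny numeric illustration of entropy as a smoothness measure (Python/NumPy; the distributions are made up): the uniform distribution attains the maximum.

import numpy as np

def entropy(dist):
    # H(p) = - sum_y p(y) log p(y), for one conditional distribution p(. | x_i)
    p = np.asarray(dist, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print(entropy([0.25, 0.25, 0.25, 0.25]))   # uniform over |Y| = 4: log 4 ~ 1.386 (maximal)
print(entropy([0.97, 0.01, 0.01, 0.01]))   # peaked distribution: much smaller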

The Maximum-Entropy Solution

• The maximum-entropy model is
$$p^* = \arg\max_{p \in \mathcal{P}} H(p)$$
• Intuition: find a distribution which
– satisfies the constraints
– is as smooth as possible

Maximum-Entropy Properties of Log-Linear Models

• We define the set of distributions which can be specified in log-linear form:
$$\mathcal{Q} = \left\{ p : \ p(y \mid x_i) = \frac{e^{W \cdot f(x_i, y)}}{\sum_{y' \in \mathcal{Y}} e^{W \cdot f(x_i, y')}}, \ W \in \mathbb{R}^m \right\}$$
Here, each $p$ is an $n|\mathcal{Y}|$-dimensional vector defining $p(y \mid x_i)$ for all $i, y$.
• Define the negative log-likelihood of the data,
$$L(p) = -\sum_{i=1}^n \log p(y_i \mid x_i)$$
and the maximum-likelihood solution
$$q^* = \arg\min_{q \in \bar{\mathcal{Q}}} L(q),$$
where $\bar{\mathcal{Q}}$ is the closure of $\mathcal{Q}$.

Duality Theorem

• There is a unique $q^*$ such that:
– $q^*$ is in the intersection of $\mathcal{P}$ and $\bar{\mathcal{Q}}$
– $q^* = \arg\max_{p \in \mathcal{P}} H(p)$ (the max-ent solution)
– $q^* = \arg\min_{q \in \bar{\mathcal{Q}}} L(q)$ (the max-likelihood solution)
• This implies:
1. The maximum-entropy solution can be written in log-linear form.
2. Finding the maximum-likelihood solution also gives the maximum-entropy solution.

Developing Intuition Using Lagrange Multipliers

• Max-ent problem: find $\arg\max_{p \in \mathcal{P}} H(p)$.
• Equivalent (unconstrained) problem:
$$\max_p \ \inf_{W \in \mathbb{R}^m} L(p, W), \quad \text{where} \quad L(p, W) = H(p) + \sum_{k=1}^m W_k \left( \sum_{i=1}^n f_k(x_i, y_i) - \sum_{i=1}^n \sum_{y} f_k(x_i, y) \, p(y \mid x_i) \right)$$
• Why the equivalence?
$$\inf_{W \in \mathbb{R}^m} L(p, W) = \begin{cases} H(p) & \text{if all constraints are satisfied, i.e., } p \in \mathcal{P} \\ -\infty & \text{otherwise} \end{cases}$$

Developing Intuition Using Lagrange Multipliers

• We can now switch the min and max:
$$\max_p \inf_{W \in \mathbb{R}^m} L(p, W) \ = \ \inf_{W \in \mathbb{R}^m} \max_p L(p, W)$$
• where we define
$$L(W) = \max_p L(p, W)$$

• By differentiating $L(p, W)$ with respect to $p$ and setting the derivative to zero (making sure to include Lagrange multipliers that ensure $\sum_y p(y \mid x_i) = 1$ for all $i$), solving $p^* = \arg\max_p L(p, W)$ gives
$$p^*(y \mid x_i, W) = \frac{e^{\sum_k W_k f_k(x_i, y)}}{\sum_{y'} e^{\sum_k W_k f_k(x_i, y')}}$$
• Also, $L(W) = \max_p L(p, W)$ is the negative log-likelihood of the data under parameters $W$.

To Summarize

• We have shown that
$$\max_{p \in \mathcal{P}} H(p) = \inf_{W \in \mathbb{R}^m} L(W),$$
where $L(W)$ is the negative log-likelihood.
• This argument is pretty informal, as we have to be careful about switching the max and inf, and we need to relate $\inf_{W \in \mathbb{R}^m} L(W)$ to $q^* = \arg\min_{q \in \bar{\mathcal{Q}}} L(q)$.
• See [Della Pietra, Della Pietra, and Lafferty 1997] for a proof of the duality theorem.

Is the Maximum-Entropy Property Useful?

• Intuition: find a distribution which
1. satisfies the constraints
2. is as smooth as possible
• One problem: the constraints are defined by empirical counts from the data.
• Another problem: there is no formal relationship between the maximum-entropy property and generalization(?) (at least none is given in the NLP literature).

Overview

• Log-linear models
• The maximum-entropy property
• Smoothing, feature selection, etc. in log-linear models

Smoothing in Maximum Entropy Models

• Say we have a feature:
$$f_{100}(x, y) = \begin{cases} 1 & \text{if the current word } w_i \text{ is base and } y = \text{Vt} \\ 0 & \text{otherwise} \end{cases}$$
• In the training data, base is seen 3 times, with Vt every time.
• The maximum-likelihood solution satisfies
$$\sum_i f_{100}(x_i, y_i) = \sum_i \sum_y p(y \mid x_i, W) \, f_{100}(x_i, y)$$
⇒ $p(\text{Vt} \mid x_i, W) = 1$ for any history $x_i$ whose current word is base
⇒ $W_{100} \to \infty$ at the maximum-likelihood solution
⇒ $p(\text{Vt} \mid x, W) = 1$ for any test history $x$ whose current word is base

A Simple Approach: Count Cut-Offs

• [Ratnaparkhi 1998] (PhD thesis): include all features that occur 5 times or more in the training data, i.e.,
$$\sum_i f_k(x_i, y_i) \ge 5$$
for all included features $f_k$.

Gaussian Priors

• Modified loss function:
$$L(W) = \sum_{i=1}^n W \cdot f(x_i, y_i) - \sum_{i=1}^n \log \sum_{y' \in \mathcal{Y}} e^{W \cdot f(x_i, y')} - \sum_{k=1}^m \frac{W_k^2}{2\sigma^2}$$
• Calculating gradients:
$$\frac{dL}{dW} = \underbrace{\sum_{i=1}^n f(x_i, y_i)}_{\text{Empirical counts}} - \underbrace{\sum_{i=1}^n \sum_{y' \in \mathcal{Y}} f(x_i, y') \, P(y' \mid x_i; W)}_{\text{Expected counts}} - \frac{1}{\sigma^2} W$$
• We can run conjugate gradient methods as before.
• This adds a penalty for large weights.

The Bayesian Justification for Gaussian Priors

• In Bayesian methods, we combine the log-likelihood with a prior over the parameters:
$$P(W \mid \text{data}) = \frac{P(\text{data} \mid W) \, P(W)}{\int_W P(\text{data} \mid W) \, P(W) \, dW}$$
• The MAP (Maximum A-Posteriori) estimates are
$$W_{MAP} = \arg\max_W \big( \log P(\text{data} \mid W) + \log P(W) \big)$$
• Gaussian prior:
$$P(W) \propto e^{-\sum_k \frac{W_k^2}{2\sigma^2}}, \qquad \log P(W) = -\sum_k \frac{W_k^2}{2\sigma^2} + C$$

Experiments with Gaussian Priors

• [Chen and Rosenfeld, 1998] apply maximum entropy models to language modeling: estimate $P(w_i \mid w_{i-2}, w_{i-1})$.
• Unigram, bigram, and trigram features, e.g.,
$$f_1(w_{i-2}, w_{i-1}, w_i) = \begin{cases} 1 & \text{if the trigram is (the, dog, laughs)} \\ 0 & \text{otherwise} \end{cases}$$
$$f_2(w_{i-2}, w_{i-1}, w_i) = \begin{cases} 1 & \text{if the bigram is (dog, laughs)} \\ 0 & \text{otherwise} \end{cases}$$
$$f_3(w_{i-2}, w_{i-1}, w_i) = \begin{cases} 1 & \text{if the unigram is (laughs)} \\ 0 & \text{otherwise} \end{cases}$$
$$P(w_i \mid w_{i-2}, w_{i-1}) = \frac{e^{\sum_k W_k f_k(w_{i-2}, w_{i-1}, w_i)}}{\sum_{w} e^{\sum_k W_k f_k(w_{i-2}, w_{i-1}, w)}}$$

Experiments with Gaussian Priors

• In a regular (unsmoothed) maxent model, if all n-gram features are included, then it is equivalent to the maximum-likelihood estimates:
$$P(w_i \mid w_{i-2}, w_{i-1}) = \frac{\text{Count}(w_{i-2}, w_{i-1}, w_i)}{\text{Count}(w_{i-2}, w_{i-1})}$$
• [Chen and Rosenfeld, 1998]: with Gaussian priors, they get very good results. The model performs as well as or better than standardly used "discounting methods" such as Kneser-Ney smoothing (see the lecture on language models).

Feature Selection Methods

• Goal: find a small number of features which make good progress in optimizing the log-likelihood.
• A greedy method:
– Step 1: Throughout the algorithm, maintain a set of active features. Initialize this set to be empty.
– Step 2: Choose a feature from outside the set of active features which has the largest estimated impact in terms of increasing the log-likelihood, and add it to the active feature set.
– Step 3: Minimize $L(W)$ with respect to the set of active features. Return to Step 2.

Experimental Results from [Ratnaparkhi 1998] (PhD thesis)

• The task: PP attachment ambiguity
• ME Default: count cut-off of 5
• ME Tuned: count cut-offs vary for 4-tuple, 3-tuple, 2-tuple, and unigram features
• ME IFS: the feature selection method

Maximum Entropy (ME) and Decision Tree (DT) Experiments on PP attachment

Experiment     Accuracy    Training Time    # of Features
ME Default     82.0%       10 min           4028
ME Tuned       83.7%       10 min           83875
ME IFS         80.5%       30 hours         387
DT Default     72.2%       1 min            –
DT Tuned       80.4%       10 min           –
DT Binary      –           1 week           –
Baseline       70.4%       –                –

Maximum Entropy (ME) and Decision Tree (DT) Experiments on text classification

Experiment     Accuracy    Training Time    # of Features
ME Default     95.5%       15 min           2350
ME IFS         95.8%       15 hours         356
DT Default     91.6%       18 hours         –
DT Tuned       92.1%       10 hours         –

Toolkits of MaxEnt

• ME software available on the internet
– YASMET: http://www-i6.informatik.rwth-aachen.de/web/Software/YASMET.html
– yasmetFS: http://www.isi.edu/natural-language/people/ravichan/YASMET/
– OpenNLP MaxEnt: http://opennlp.apache.org/
– Maximum Entropy Modeling Toolkit for Python and C++: http://homepages.inf.ed.ac.uk/lzhang10/maxent_toolkit.html


References

[Altun, Tsochantaridis, and Hofmann, 2003] Altun, Y., I. Tsochantaridis, and T. Hofmann. 2003. Hidden Markov Support Vector Machines. In

Proceedings of ICML 2003.

[Bartlett 1998] P. L. Bartlett. 1998. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network, IEEE Transactions on Information Theory, 44(2): 525-536, 1998.

[Bod 98] Bod, R. (1998).

Beyond Grammar: An Experience-Based Theory of Language. CSLI

Publications/Cambridge University Press.

[Booth and Thompson 73] Booth, T., and Thompson, R. 1973. Applying probability measures to abstract languages.

IEEE Transactions on Computers, C-22(5), pages 442–450.

[Borthwick et al. 98] Borthwick, A., Sterling, J., Agichtein, E., and Grishman, R. (1998). Exploiting Diverse Knowledge Sources via Maximum Entropy in Named Entity Recognition.

Proc. of the Sixth Workshop on Very Large Corpora.

[Collins and Duffy 2001] Collins, M. and Duffy, N. (2001). Convolution Kernels for Natural Language. In

Proceedings of NIPS 14.

[Collins and Duffy 2002] Collins, M. and Duffy, N. (2002). New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron. In

Proceedings of ACL 2002.

[Collins 2002a] Collins, M. (2002a). Discriminative Training Methods for Hidden Markov models: Theory and Experiments with the Perceptron Algorithm. In

Proceedings of EMNLP 2002.

[Collins 2002b] Collins, M. (2002b). Parameter Estimation for Statistical Parsing Models: Theory and Practice of Distribution-Free Methods. To appear as a book chapter.


[Crammer and Singer 2001a] Crammer, K., and Singer, Y. 2001a. On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines. In

Journal of Machine Learning Research, 2(Dec):265-292.

[Crammer and Singer 2001b] Koby Crammer and Yoram Singer. 2001b. Ultraconservative Online Algorithms for Multiclass Problems In

Proceedings of COLT 2001.

[Freund and Schapire 99] Freund, Y. and Schapire, R. (1999). Large Margin Classification using the Perceptron Algorithm. In

Machine Learning, 37(3):277–296.

[Helmbold and Warmuth 95] Helmbold, D., and Warmuth, M. On Weak Learning.

Journal of Computer and System Sciences, 50(3):551-573, June 1995.

[Hopcroft and Ullman 1979] Hopcroft, J. E., and Ullman, J. D. 1979.

Introduction to automata theory, languages, and computation. Reading, Mass.: Addison–Wesley.

[Johnson et al. 1999] Johnson, M., Geman, S., Canon, S., Chi, S., & Riezler, S. (1999). Estimators for stochastic 'unification-based' grammars. In

Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. San Francisco: Morgan Kaufmann.

[Lafferty et al. 2001] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML 01, pages 282-289, 2001.

[Littlestone and Warmuth, 1986] Littlestone, N., and Warmuth, M. 1986. Relating data compression and learnability.

Technical report, University of California, Santa Cruz.

[MSM93] Marcus, M., Santorini, B., & Marcinkiewicz, M. (1993). Building a large annotated corpus of English: The Penn treebank.

Computational Linguistics, 19, 313-330.

[McCallum et al. 2000] McCallum, A., Freitag, D., and Pereira, F. (2000) Maximum entropy markov models for information extraction and segmentation. In

Proceedings of ICML 2000.

[Miller et al. 2000] Miller, S., Fox, H., Ramshaw, L., and Weischedel, R. 2000. A Novel Use of Statistical Parsing to Extract Information from Text. In

Proceedings of ANLP 2000.


[Ramshaw and Marcus 95] Ramshaw, L., and Marcus, M. P. (1995). Text Chunking Using Transformation-Based Learning. In

Proceedings of the Third ACL Workshop on Very Large Corpora, Association for Computational Linguistics, 1995.

[Ratnaparkhi 96] Ratnaparkhi, A. (1996). A maximum entropy part-of-speech tagger. In

Proceedings of the empirical methods in natural language processing conference.

[Schapire et al., 1998] Schapire R., Freund Y., Bartlett P. and Lee W. S. 1998. Boosting the margin: A new explanation for the effectiveness of voting methods.

The Annals of Statistics,

26(5):1651-1686.

[Zhang, 2002] Zhang, T. 2002. Covering Number Bounds of Certain Regularized Linear Function Classes. In

Journal of Machine Learning Research, 2(Mar):527-550, 2002.
