Machine Learning
Introduction
Paola Velardi
1
Course material
• Slides (partly) from: http://www.cs.utexas.edu/users/mooney/cs391L/
• Textbook: Tom Mitchell, Machine Learning, McGraw Hill, 1997.
• Additional material and papers will be supplied during the last part of the course
• Course twiki: http://twiki.di.uniroma1.it/twiki/view/ApprAuto
2
Course Syllabus
1. Concept Learning and the General-to-Specific Ordering
2. Decision Tree Learning
3. Ensemble methods
4. Evaluation methods: experimental and theoretical methods
5. Rule learning
6. Artificial Neural Networks
7. Support Vector Machines
8. Bayesian Learning
9. Instance-based learning
10. Q-learning and Genetic Algorithms
11. Clustering
Except for the first lesson, each algorithm is tried out with the Weka toolkit: http://www.cs.waikato.ac.nz/ml/weka/
Bring your PC with the Weka package installed!
3
Exam
• Written exam on course material
• Course project using Weka
• The project is an application of a ML algorithm (from Weka) to a problem tbd (tentative: classify tweets on the current political elections into 3 categories: media, politician, public)
• Projects can be carried out by teams of 2 students
• “Homework” carried out during the second two hours
4
ML: definitions and introduction
5
What is Learning?
• Herbert Simon: “Learning is any process by
which a system improves performance from
experience.”
• What is the task?
– Classification
– Problem solving
6
Classification
• Assign an object/event to one of a given finite set of categories.
– Medical diagnosis
– Credit card applications or transactions
– Fraud detection in e-commerce
– Worm detection in network packets
– Spam filtering in email
– Recommended articles in a newspaper
– Recommended books, movies, music, or jokes
– Financial investments
– DNA sequences
– Spoken words
– Handwritten letters
– Astronomical images
– Tweets
Example (medical diagnosis): given a set of categories (disease types) and laboratory data (blood test etc.), assign a patient to the appropriate category (disease).
7
Example of classification: healthcare
domain
Starting from: clinical records, each describing a pregnancy and
a delivery. Each record has 215 features.
Learn: how to identify potential risk for an emergency C-section.
Ex: If No previous non-cesarean delivery, and Abnormal 2nd Trimester Ultrasound, and Malpresentation at admission, Then Probability of Emergency C-Section is 0.6
8
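Read as code, a learned rule of this kind is just a conditional over record features. A minimal sketch (the feature names are hypothetical stand-ins; the real records have 215 features):

```python
def emergency_csection_probability(record):
    """One learned rule as a decision function over a clinical record (a dict)."""
    if (not record["previous_non_cesarean_delivery"]          # hypothetical feature names
            and record["abnormal_2nd_trimester_ultrasound"]
            and record["malpresentation_at_admission"]):
        return 0.6        # probability estimated from the training records
    return None           # this rule does not fire; other learned rules would apply
```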
Example of classification: Risk analysis
9
Machine learning as planning/problem
solving tasks
• Performing actions in an environment in order to achieve a goal.
– Solving calculus problems
– Playing checkers, chess, or backgammon
– Driving a car or a jeep
– Flying a plane, helicopter, or rocket
– Controlling an elevator
– Controlling a character in a video game
– Controlling a mobile robot
Though the general problem formulation remains the same, some machine learning algorithms are better suited for classification, others for problem solving.
10
Examples of problem solving tasks
• Game playing (e.g. chess)
• Planning (e.g. robotics, automated driving...)
• Example: given a representation of an environment (a domestic environment) and a command for a robot (e.g. bring the chair from the kitchen to the dining room), find the best plan to execute the command
• Objective: learning to move and perform actions in a given environment
11
Example 2: Automated Driving and
Collision Warning
(Stanford Junior)
Another nice example is Google’s self-driving car
12
How does it work?
• Combination of machine learning and
robotics
• Basically, three key tasks
1) Precise Localization
2) Obstacle Detection
3) Path Planning
• Reinforcement learning is often used (more on this later in the course), combined with traditional planning algorithms (A*)
13
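For the path-planning part, a minimal A* sketch on a toy occupancy grid (the grid, start, and goal are made-up inputs; a real vehicle plans over far richer state spaces):

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 0/1 grid (1 = obstacle), 4-connected moves."""
    def h(p):  # admissible heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

# Example: route around a single obstacle in the middle of a 3x3 grid.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 2)))  # e.g. [(0,0), (0,1), (0,2), (1,2), (2,2)]
```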
YOUR TURN: CAN YOU LIST
THE PROBLEMS YOU SEE FOR
AN AUTOMATED SYSTEM TO
FORMALISE THIS TASK,
GIVEN THE CLINICAL
RECORD EXAMPLE?
14
Why Study Machine Learning?
Engineering Better Computing Systems
• Develop systems that are too difficult/expensive to
construct manually because they require specific detailed
skills or knowledge tuned to a specific task (knowledge
engineering bottleneck).
• Develop systems that can automatically adapt and
customize themselves to individual users.
– Personalized news or mail filter
– Personalized tutoring
• Discover new knowledge from large databases (data
mining).
– Market basket analysis (e.g. diapers and beer)
– Medical text mining (e.g. migraines to calcium channel blockers to
magnesium)
– Twitter mining
15
Why Study Machine Learning?
The Time is Ripe
• Many basic effective and efficient
algorithms available.
• Large amounts of on-line data available.
• Large amounts of computational resources
available.
16
Related Disciplines
• Artificial Intelligence
• Data Mining
• Probability and Statistics
• Information theory
• Numerical optimization
• Computational complexity theory
• Control theory (adaptive)
• Psychology (developmental, cognitive)
• Neurobiology
• Linguistics
• Philosophy
17
Architecture of a learning system
(diagram)
Environment/Experience → Training set <x, C(x)> → Learner (the algorithm to learn C(x)) → Knowledge: a hypothesis h(x) for C(x)
Test set <x, ?> → Performance evaluation: how good is h(x)?
18
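In code, the diagram is a fit-then-evaluate loop. A minimal sketch, with a trivial majority-class learner standing in for the Learner box (the datasets are toy examples):

```python
from collections import Counter

def learn(training_set):
    """Learner: from pairs <x, C(x)>, produce a hypothesis h(x) for C(x).
    Here: a trivial majority-class hypothesis, just to show the data flow."""
    majority = Counter(c for _, c in training_set).most_common(1)[0][0]
    return lambda x: majority

def evaluate(h, test_set):
    """Performance evaluation: how good is h(x) on unseen <x, C(x)> pairs?"""
    correct = sum(1 for x, c in test_set if h(x) == c)
    return correct / len(test_set)

training_set = [("img1", "lion"), ("img2", "lion"), ("img3", "frog")]
test_set = [("img4", "lion"), ("img5", "frog")]
h = learn(training_set)
print(evaluate(h, test_set))   # 0.5 on this toy data
```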
ML = classification / problem solving
• The very first step, given a problem, is to formalise the task: what would we like to learn?
• Generally speaking, we can formulate an ML problem in this way:
Improve on task T, with respect to performance metric P, based on experience E.
19
Defining the Learning Task: examples
T: Playing checkers
P: Percentage of games won against an arbitrary opponent
E: Playing practice games against itself
T: Recognizing hand-written words
P: Percentage of words correctly classified
E: Database of human-labeled images of handwritten words
T: Driving on four-lane highways using vision sensors
P: Average distance traveled before a human-judged error
E: A sequence of images and steering commands recorded while
observing a human driver.
T: Categorize email messages as spam or legitimate.
P: Percentage of email messages correctly classified.
E: Database of emails, some with human-given labels
20
Summary so far
• There is a set of available records for which a classification is known (outcome of a delivery, debtor solvency) (LEARNING EXPERIENCE)
• Input data are records with features (age, credit period...) and values (boolean, real...) (OBJECT REPRESENTATION)
• We need to learn a probabilistic or boolean function (risk-of-emergency, credit-approval) (REPRESENTATION OF THE TARGET FUNCTION)
• The learning algorithm output (MODEL) is a rule (TYPE OF LEARNING ALGORITHM IS RULE LEARNING)
• How do we evaluate how good our model is? (EVALUATION)
21
Problems
• Modeling the domain objects
• Learning experience
• Modeling the target function
• Learning Algorithm
• Evaluation
22
Example: recognizing lions and frogs
Representation: How do we represent
our objects?
• Simple: color! (e.g. a bitmap)
• Less simple: silhouette
24
From what experience?
• Supervised learning
• Unsupervised learning
• Reinforcement learning
25
Learning paradigms
26
Supervised learning
• Either an “expert” (e.g. manually classified examples) or some available database of already classified examples
→ lion
→ frog
27
Unsupervised learning
• No examples are available. The learner must be able to identify distinguishing features that differentiate the various classes
• (clustering)
28
Reinforcement learning
• No examples are available, but some function is provided that associates a reward (or punishment) with a good (bad) move
(example: the learner tries “Frog!!” and gets the feedback “WRONG!!!!”)
29
Other issues (more in this course)
• Modeling the target function: e.g. Pr(x=Lion), a probabilistic function defined in [0,1]
• Learning Algorithm: e.g. Neural Networks
• Evaluation: e.g. number of correct classifications / total classifications
30
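A minimal sketch of these three ingredients together (the probabilities below are made up; a learned model would produce them):

```python
def accuracy(predictions, labels):
    """Evaluation: number of correct classifications / total classifications."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

# Target function modeled as Pr(x = Lion) in [0,1]; here the values are
# made-up numbers standing in for a learned model's output.
probs  = [0.9, 0.2, 0.7, 0.4]            # Pr(x = Lion) for four test images
labels = ["lion", "frog", "lion", "lion"]
predictions = ["lion" if p > 0.5 else "frog" for p in probs]
print(accuracy(predictions, labels))      # 0.75
```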
MACHINE LEARNING AS A
PLANNING TASK: PLAY
CHECKERS
31
Sample Learning Problem
• We now consider a “machine learning as
problem solving” example
• Learn to play checkers from self-play
• We will develop an approach analogous to that used in the first machine learning system, developed by Arthur Samuel at IBM in 1959.
32
Training Experience
• Direct experience: Given sample input and output
pairs for a useful target function C.
– Checker boards labeled with the correct move, e.g. extracted from records of expert play
• Indirect experience: Given feedback which is not
direct I/O pairs for a useful target function.
– Potentially arbitrary sequences of game moves and their
final game results.
• Credit/Blame Assignment Problem: How to assign
credit/ blame to individual moves given only
indirect feedback?
33
Example sequence
(figure: a sequence of checkers board positions; more moves follow)
34
Source of Training Data: possible cases
1. Provided random examples outside of the learner’s control.
Negative examples available or only positive?
2. Good training examples selected by a “benevolent teacher.”
An experienced player selects good and nearly-good moves.
3. Learner can query an oracle about the class (good or bad move) of an unlabeled example in the environment (i.e. obtain from some external source the label for an example).
4. Learner can construct an arbitrary example and query an oracle for its label.
5. Learner can design and run experiments directly in the environment without any human guidance.
35
Training vs. Test Distribution
• Generally assume that the training and test examples are
independently drawn from the same overall
distribution of data.
– IID: Independently and identically distributed
– This simply means that we need to learn and test on “equally representative” data: e.g. it is not good to learn from Mr. X’s games and test on Mr. Y’s games; and it is not even good to train and test ONLY on X & Y’s games, since there might be other gaming strategies that the learner would never see, and would fail to recognize when the system is in operation.
36
Choosing a Target Function
• What function is to be learned (representing C(x))
and how will it be used by the performance system
(ML algorithm choice)?
• For checkers, assume we are given a function for
generating the legal moves for a given board position
and want to decide the best move.
– Could learn a function:
ChooseMove(board, legal-moves) → best-move
– Or could learn an evaluation function, V(board) → R,
that gives each board position a score for how favorable it
is. V can be used to pick a move by applying each legal
move, scoring the resulting board position, and choosing
the move that results in the highest scoring board position.
37
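A sketch of the second option (apply_move and legal_moves are hypothetical helpers; V is any board-scoring function):

```python
def choose_move(board, legal_moves, apply_move, V):
    """Pick the legal move whose resulting board scores highest under V."""
    return max(legal_moves(board),
               key=lambda move: V(apply_move(board, move)))
```

Learning V rather than ChooseMove directly keeps the target function simple: the move-generation logic stays hand-coded, and only the board evaluation is learned.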
Ideal Definition of V(b)
• If b is a final winning board, then V(b) = 100
• If b is a final losing board, then V(b) = –100
• If b is a final draw board, then V(b) = 0
• Otherwise, V(b) = V(b′), where b′ is the highest scoring final board position that is achieved starting from b and playing optimally until the end of the game (assuming the opponent plays optimally as well).
38
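The “otherwise” case amounts to a full game-tree (minimax) computation. A sketch, assuming hypothetical helpers is_final, final_score, and successors:

```python
def ideal_v(board, our_turn=True):
    """Ideal (non-operational) V: explores the whole game tree,
    which is exponential in the number of remaining moves."""
    if is_final(board):                      # hypothetical helper
        return final_score(board)            # +100 win, -100 loss, 0 draw
    values = [ideal_v(b2, not our_turn) for b2 in successors(board)]
    return max(values) if our_turn else min(values)   # both sides play optimally
```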
Approximating V(b)
• Computing V(b) is intractable since it
involves searching the complete exponential
game tree.
• Therefore, this definition is said to be non-operational.
• An operational definition can be computed
in reasonable (polynomial) time.
• Need to learn an operational
approximation to the ideal evaluation
function.
39
Representing the Target Function
• Target function can be represented in many ways:
lookup table, symbolic rules, numerical function,
neural network (a graph).
• There is a trade-off between the expressiveness
of a representation and the ease of learning.
• The more expressive a representation, the better it
will be at approximating an arbitrary function;
however, the more examples will be needed to
learn an accurate function.
40
Example (generic)
Only two examples are sufficient to perfectly learn a linear function, while many examples are needed to approximate a spline.
41
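To make this concrete with polynomials (a numpy sketch; the sample points are made up):

```python
import numpy as np

# Two examples suffice to recover a linear function exactly...
xs = np.array([0.0, 1.0])
ys = 3.0 * xs + 1.0                       # true function: y = 3x + 1
print(np.polyfit(xs, ys, deg=1))          # [3. 1.]: slope and intercept recovered

# ...but a more expressive cubic needs at least four examples; with fewer
# points, many different cubics fit the data equally well.
xs4 = np.linspace(0.0, 1.0, 4)
coeffs = np.polyfit(xs4, 3.0 * xs4 + 1.0, deg=3)   # 4 points pin down a cubic
```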
Linear Function for Representing V(b)
• In checkers, use a linear approximation of the
evaluation function.
V (b) = w0 + w1 × bp(b) + w2 × rp(b) + w3 × bk (b) + w4 × rk (b) + w5 × bt (b) + w6 × rt (b)
– bp(b): number of black pieces on board b
– rp(b): number of red pieces on board b
– bk(b): number of black kings on board b
– rk(b): number of red kings on board b
– bt(b): number of black pieces threatened (i.e. which can
be immediately taken by red on its next turn)
– rt(b): number of red pieces threatened
Learning problem: need to estimate the wi values
42
Example
“Sicilian Opening”
bp(b): number of black pieces on board b = 16
rp(b): number of red pieces on board b = 16
bk(b): number of black kings on board b = 1
rk(b): number of red kings on board b = 1
bt(b): number of black pieces threatened (i.e. which can be immediately taken by red on its next turn) = 0
rt(b): number of red pieces threatened = 0
43
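The linear evaluation function as code, evaluated on the slide’s feature counts with all-ones weights (the weights are what learning must estimate):

```python
def v_hat(w, features):
    """Linear evaluation:
    V(b) = w0 + w1*bp + w2*rp + w3*bk + w4*rk + w5*bt + w6*rt."""
    return w[0] + sum(wi * fi for wi, fi in zip(w[1:], features))

# features = (bp, rp, bk, rk, bt, rt) for the board on this slide
features = (16, 16, 1, 1, 0, 0)
w = [1.0] * 7                    # weights before any learning
print(v_hat(w, features))        # 35.0
```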
How do we learn?
• Direct supervision may be available for the target
function, for a set of points in V(b).
– <<bp=3, rp=0, bk=1, rk=0, bt=0, rt=0>, 100> (win for black)
• With indirect feedback, training values can be estimated
using temporal difference learning (used in reinforcement
learning where supervision is delayed reward, more on
that later in this course).
• In our example, we are provided with sequences of checkers moves. We don’t know the value of intermediate positions, but we know the value of the final board (–100, +100, 0).
44
Temporal Difference Learning
• Estimate training values for intermediate (non-terminal) board positions by the estimated value of their successor in an actual game trace.
Vtrain (b) = V (successor(b))
where successor(b) is the next board position
where it is the program’s move in actual play.
• Values towards the end of the game are initially
more accurate and continued training slowly
“backs up” accurate values to earlier board
positions.
45
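A sketch of this backup over one recorded game (boards holds the positions at the program’s moves; final_value is +100, –100 or 0; v_hat is the current estimate):

```python
def td_training_values(boards, final_value, v_hat):
    """Vtrain(b) = v_hat(successor(b)) for intermediate boards;
    the final board gets its true value (100, -100 or 0)."""
    targets = [v_hat(boards[t + 1]) for t in range(len(boards) - 1)]
    targets.append(final_value)
    return targets  # repeated passes slowly back accurate values up the game
```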
Example: a checkmate sequence
(figure: three consecutive board positions, with backed-up values 90, 95, 100)
The intuition is that the “value” of a move depends on what happens after it. We know the value of the last move (100, -100 or 0) and from that, we “backtrack” through the sequence of moves assigning values.
46
Least Mean Squares (LMS) Algorithm
• A gradient descent algorithm that incrementally
updates the weights of a linear function in an
attempt to minimize the mean squared error
Initialize wi (at random, or wi = 1 for all i)
While error > ε:
For each training example b do:
1) Compute the error:
error(b) = Vtrain(b) - V(b)
2) For each board feature fi, update its weight wi:
wi = wi + c × fi × error(b)
for some small constant (learning rate) c (e.g. 0.1)
47
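A runnable sketch of one LMS step; the numbers reproduce the worked example on slide 49 (a constant feature f0 = 1 is paired with w0):

```python
def lms_update(w, features, v_train, c=0.1):
    """One LMS step: w_i <- w_i + c * f_i * error, with f_0 = 1 for w_0."""
    f = [1.0] + list(features)                      # constant feature for w0
    error = v_train - sum(wi * fi for wi, fi in zip(w, f))
    return [wi + c * fi * error for wi, fi in zip(w, f)], error

w = [1.0] * 7                                       # initialize all weights to 1
w, err = lms_update(w, (3, 0, 1, 0, 0, 0), v_train=100)
print(err)    # 95.0
print(w[1])   # 29.5  (w1: feature bp = 3)
print(w[3])   # 10.5  (w3: feature bk = 1)
```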
LMS Discussion
• Intuitively, LMS executes the following rules:
– If the output for an example is correct, make no change.
– If the output is too high, lower the weights proportional
to the values of their corresponding features, so the
overall output decreases
– If the output is too low, increase the weights
proportional to the values of their corresponding
features, so the overall output increases.
• Under the proper weak assumptions, LMS can be
proven to eventually converge to a set of weights
that minimizes the mean squared error.
48
Example
• Initialize V^ with w0 = w1 = ... = w6 = 1
• Let’s consider a point of the function for which we know the “correct” value of V, e.g.:
V(b(3,0,1,0,0,0)) = 100
Compute the error:
e = V - V^ = 100 - (1 + 1×3 + 1×0 + 1×1 + 1×0 + 1×0 + 1×0) = 95
Update w1 and w3:
w1 = 1 + 0.1×3×95 = 29.5
w3 = 1 + 0.1×1×95 = 10.5
• After the first iteration the wi values quickly grow, thus reducing the error!
49
LMS more formally
50
LMS (2)
51
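A sketch of the standard derivation behind the LMS rule (gradient descent on the squared error; the factor of 2 is absorbed into the learning rate c):

```latex
% Squared error over the training examples, with a constant feature f_0(b) = 1
E(\vec{w}) = \sum_{b} \left( V_{\mathrm{train}}(b) - \hat{V}(b) \right)^2,
\qquad \hat{V}(b) = \sum_{i} w_i \, f_i(b)

% Gradient with respect to each weight
\frac{\partial E}{\partial w_i}
  = -2 \sum_{b} \left( V_{\mathrm{train}}(b) - \hat{V}(b) \right) f_i(b)

% Taking a step against the gradient (constant absorbed into c) gives the rule
w_i \leftarrow w_i + c \, f_i(b) \left( V_{\mathrm{train}}(b) - \hat{V}(b) \right)
```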
Lessons Learned about Learning
• Learning can be viewed as using direct or indirect
experience to approximate a chosen target
function.
• Function approximation can be viewed as a
search through a space of hypotheses
(representations of functions) for one that best fits
a set of training data.
• Different learning methods assume different
hypothesis spaces (representation languages)
and/or employ different search techniques.
52
Various Representations for the
Objective/target function C
• Numerical functions
– Linear regression
– Neural networks
– Support vector machines
• Symbolic functions
– Decision trees
– Rules in propositional logic
– Rules in first-order predicate logic
• Instance-based functions
– Nearest-neighbor
– Case-based
• Probabilistic Graphical Models
– Naïve Bayes
– Bayesian networks
– Hidden Markov Models (HMMs)
– Probabilistic Context Free Grammars (PCFGs)
– Markov networks
53
Various Search Algorithms
• Gradient descent
– Perceptron
– Backpropagation
• Dynamic Programming
– HMM Learning
– PCFG Learning
• Divide and Conquer
– Decision tree induction
– Rule learning
• Evolutionary Computation
– Genetic Algorithms (GAs)
– Genetic Programming (GP)
– Neuro-evolution
54
Evaluation of Learning Systems
• Experimental
– Conduct controlled cross-validation experiments to
compare various methods on a variety of benchmark
datasets.
– Gather data on their performance, e.g. test accuracy,
training-time, testing-time.
– Analyze differences for statistical significance.
• Theoretical
– Analyze algorithms mathematically and prove theorems
about their:
• Computational complexity
• Ability to fit training data
• Sample complexity (number of training examples needed to
learn an accurate function)
55
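A minimal k-fold cross-validation sketch (learn is any function that returns a hypothesis h, as in the architecture slide; assumes at least k labeled examples):

```python
def cross_validate(examples, learn, k=10):
    """Split labeled <x, C(x)> pairs into k folds; train on k-1, test on 1."""
    folds = [examples[i::k] for i in range(k)]     # assumes len(examples) >= k
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        h = learn(train)                           # hypothesis h(x)
        correct = sum(1 for x, c in test if h(x) == c)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k                     # mean test accuracy
```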
Summary
Machine learning “general” tasks: classification,
problem solving
Learning paradigms: supervised, unsupervised,
reinforcement
Sub-problems:
– representation: how to represent domain objects
and the target function
– algorithm selection: how to learn the target
function
– evaluation: how to test the performance of the
learner
56