Likelihood and Information Theoretic Methods in Forest Ecology


Likelihood Methods in Ecology
June 4-6, 2013
Gault Nature Reserve, Saint Hilaire, Quebec
Instructor: Charles Canham
Course Outline
Statistical Inference using Likelihood
- Principles and practice of maximum likelihood estimation
- Know your data – choosing appropriate likelihood functions
- Formulate statistical models as alternate hypotheses
- Find the ML estimates of the parameters of your models
- Compare alternate models and choose the most parsimonious
- Evaluate individual models
- Advanced topics
Likelihood is much more than a set of statistical methods...
(it can completely change the way you ask and answer questions…)
Lecture 1
An Introduction to Likelihood Estimation
- Probability and probability density functions
- Maximum likelihood estimates (versus traditional “method of moments” estimates)
- Statistical inference
- Classical “frequentist” statistics: limitations and mental gyrations...
- The “likelihood” alternative: basic principles and definitions
- Model comparison as a generalization of hypothesis testing
A simple definition of probability for discrete events...
“...the ratio of the number of events of type A to the total number of all possible events (outcomes)...”
The enumeration of all possible outcomes is called the sample space (S).
If there are n possible outcomes in a sample space S, and m of those are favorable for event A, then the probability of event A is given as
P{A} = m/n
Probability defined more generally...
Consider an outcome X from some process that has a set of possible outcomes S:
- If X and S are discrete, then P{X} = X/S
- If X is continuous, then the probability has to be defined in the limit:
$$P\{x_a \le X \le x_b\} = \int_{x_a}^{x_b} g(x)\,dx$$
where g(x) is a probability density function (PDF)
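As a concrete illustration (not part of the course notes), the sketch below approximates P{x_a ≤ X ≤ x_b} by numerically integrating a PDF, here the standard normal density, and cross-checks the result against scipy's cumulative distribution function.

```python
# A minimal sketch: P{a <= X <= b} as the integral of a PDF,
# using the standard normal density as the example g(x).
import math
from scipy.stats import norm  # used only as a cross-check

def g(x, mu=0.0, sigma=1.0):
    """Normal probability density function."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def prob_between(a, b, n=10_000):
    """Approximate the integral of g(x) from a to b with the midpoint rule."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = -1.0, 1.0
print(prob_between(a, b))          # ~0.6827
print(norm.cdf(b) - norm.cdf(a))   # the same probability via the CDF
```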
The Normal Probability Density Function (PDF)
$$\mathrm{prob}(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
μ = mean, σ² = variance
Properties of a PDF:
(1) 0 < prob(x) < 1
(2) ∫ prob(x) dx = 1
[Figure: Normal PDFs with mean = 0 and variance = 0.25, 0.5, 1, 2, 5, and 10; Prob(x) plotted against X from -5 to 5.]
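A minimal sketch of the formula above in Python (the function name normal_pdf is just illustrative), evaluated at x = 0 for the variances shown in the figure:

```python
# The normal PDF as written on the slide:
# prob(x) = 1 / sqrt(2*pi*sigma^2) * exp(-(x - mu)^2 / (2*sigma^2))
import math

def normal_pdf(x, mu, var):
    """Normal probability density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Density at x = 0 for a mean-zero normal, one value per plotted curve
for var in (0.25, 0.5, 1, 2, 5, 10):
    print(f"var = {var:5.2f}  prob(0) = {normal_pdf(0.0, 0.0, var):.3f}")
```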
Common PDFs...
For continuous data:
- Normal
- Lognormal
- Gamma
For discrete data:
- Poisson
- Binomial
- Multinomial
- Negative Binomial
[Figure: Poisson PDFs with m = 2.5, 5, and 10; Prob(x) plotted against x from 0 to 30.]
See McLaughlin (1993) “A compendium of common probability distributions” in the reading list
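For readers working in Python rather than R, all of the listed distributions are available in scipy.stats; the parameterizations in the sketch below are scipy's own, not necessarily the ones used in the course:

```python
# Where the listed PDFs live in scipy.stats (one common Python option).
from scipy import stats

# Continuous data
print(stats.norm.pdf(1.0, loc=0, scale=1))          # Normal
print(stats.lognorm.pdf(1.0, s=0.5))                # Lognormal
print(stats.gamma.pdf(1.0, a=2.0))                  # Gamma

# Discrete data
print(stats.poisson.pmf(3, mu=2.5))                 # Poisson
print(stats.binom.pmf(3, n=10, p=0.3))              # Binomial
print(stats.multinomial.pmf([2, 1, 1], n=4, p=[0.5, 0.25, 0.25]))  # Multinomial
print(stats.nbinom.pmf(3, n=5, p=0.5))              # Negative binomial
```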
Why are PDFs important?
Answer: because they are used to calculate likelihood…
(And in that case, they are called “likelihood functions”)
Statistical “Estimators”
A statistical estimator is a function applied to a sample of data and used to estimate an unknown population parameter
(and an “estimate” is just the result of applying an “estimator” to a sample)
A common estimator for the population mean: $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$
Properties of Estimators
Some desirable properties of “point estimators” (functions used to estimate a fixed parameter):
- Bias: if the average error is zero, the estimator is unbiased
- Efficiency: an estimator with the minimum variance is the most efficient (note: the most efficient estimator is often biased)
- Consistency: as sample size increases, the probability of the estimate being close to the parameter increases
- Asymptotically normal: a consistent estimator whose distribution around the true parameter θ approaches a normal distribution, with standard deviation shrinking in proportion to $1/\sqrt{n}$ as the sample size n grows
Maximum likelihood (ML) estimates
versus
Method of moments (MOM) estimates
Bottom line:
MOM was born in the time before computers, and was OK;
ML needs computing power, but has more desirable properties…
Doing it MOM’s way: Central Moments
If the sample (arithmetic) mean is $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, then:

First central moment: $\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^1 = 0$

Second central moment: $\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2 = \text{sample variance } (s^2)$

Third central moment: $\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^3$, and skew $= \frac{1}{n s^3}\sum_{i=1}^{n} (x_i - \bar{x})^3$

Fourth central moment: $\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^4$, and kurtosis $= \frac{1}{n s^4}\sum_{i=1}^{n} (x_i - \bar{x})^4 - 3$
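A short sketch of these moment formulas in Python, applied (purely for illustration) to the 10-observation sample used later in the lecture:

```python
# Central moments, skew, and kurtosis computed exactly as in the formulas above.
import math

def central_moment(xs, k):
    """k-th central moment: (1/n) * sum((x_i - xbar)^k)."""
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** k for x in xs) / n

xs = [6.11, 6.40, 5.73, 5.71, 5.91, 4.96, 5.36, 6.29, 5.54, 6.02]

var = central_moment(xs, 2)                 # second central moment = sample variance (MOM version)
s = math.sqrt(var)
skew = central_moment(xs, 3) / s ** 3       # third moment scaled by s^3
kurt = central_moment(xs, 4) / s ** 4 - 3   # fourth moment scaled by s^4, minus 3
print(var, skew, kurt)
```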
What’s wrong with MOM’s way?
Nothing, if all you are interested in is calculating properties of your sample…
But MOM’s formulas are generally not the best way¹ to infer estimates of the statistical properties of the population from which the sample was drawn…
For example, the population variance is estimated as
$$\hat{\sigma}^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2$$
(because the second central moment is a biased underestimate of the population variance)
¹ in the formal terms of bias, efficiency, consistency, and asymptotic normality
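A quick simulation sketch of that bias, assuming an arbitrary illustrative population (mean 10, standard deviation 2) and a small sample size of 5:

```python
# Draw many small samples from a known population and compare the two
# variance estimators: 1/n (the second central moment) vs 1/(n-1).
import random

random.seed(1)
true_var = 4.0        # population variance (sd = 2); illustrative values
n, reps = 5, 20_000

mom, corrected = 0.0, 0.0
for _ in range(reps):
    xs = [random.gauss(10.0, 2.0) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    mom += ss / n              # second central moment (biased low)
    corrected += ss / (n - 1)  # usual unbiased estimator
print(mom / reps, corrected / reps, true_var)  # ~3.2, ~4.0, 4.0
```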
The Maximum Likelihood alternative…
Going back to PDFs: in plain language, a PDF allows you to calculate the probability that an observation will take on a value (x), given the underlying (true?) parameters of the population.

Poisson PDF: $P(x) = \frac{e^{-a}\,a^x}{x!}$, where the mean (and variance) $= a$

[Figure: Poisson PDFs with m = 2.5, 5, and 10; Prob(x) plotted against x from 0 to 30.]
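A minimal sketch of the Poisson formula above in Python, evaluated for the three means shown in the figure:

```python
# Poisson PDF: P(x) = exp(-a) * a^x / x!
import math

def poisson_pmf(x, a):
    """Probability of observing count x when the mean (and variance) is a."""
    return math.exp(-a) * a ** x / math.factorial(x)

for a in (2.5, 5, 10):
    print(a, [round(poisson_pmf(x, a), 3) for x in range(0, 6)])
```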
But there’s a problem…
- The PDF defines the probability of observing an outcome (x), given that you already know the true population parameter (θ)
- But we want to generate an estimate of θ, given our data (x)
- And, unfortunately, the two are not identical:
$$P(\theta \mid x) \neq P(x \mid \theta)$$
Fisher and the concept of “Likelihood”...
The “Likelihood Principle”
$$L(\theta \mid x) \propto P(x \mid \theta)$$
In plain English: “The likelihood (L) of the parameter estimates (θ), given a sample (x), is proportional to the probability of observing the data, given the parameters...”
{and this probability is something we can calculate, using the appropriate underlying probability model (i.e. a PDF)}
Calculating Likelihood and Log-Likelihood for Datasets
From basic probability theory:
If two events (A and B) are independent, then P(A,B) = P(A)P(B)
More generally, for i = 1..n independent observations and a vector X of observations (x_i):
$$\text{Likelihood:}\quad L(\theta \mid X) = P(X \mid \theta) = \prod_{i=1}^{n} g(x_i \mid \theta)$$
where $g(x_i \mid \theta)$ is the appropriate PDF
But logarithms are easier to work with, so...
$$\text{Log-likelihood:}\quad \ln L(\theta \mid X) = \sum_{i=1}^{n} \ln g(x_i \mid \theta)$$
A simple example…
A sample of 10 observations…
Assume they are normally distributed, with an unknown population mean and standard deviation.
What is the (log-)likelihood that the mean is 4.5 and the standard deviation is 1.2?
Using the normal PDF
$$\mathrm{prob}(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
with μ = 4.5 and σ = 1.2:

observation   prob(x | μ, σ)   log-likelihood
6.11          0.136            -1.998
6.40          0.095            -2.354
5.73          0.196            -1.629
5.71          0.200            -1.610
5.91          0.166            -1.796
4.96          0.309            -1.174
5.36          0.257            -1.358
6.29          0.110            -2.210
5.54          0.229            -1.475
6.02          0.149            -1.901

likelihood (product of the probabilities) = 2.4964E-08
summed log-likelihood = -17.506

[Figure: the normal PDF with μ = 4.5 and σ = 1.2, plotted over X from 0 to 10.]
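The same calculation can be scripted directly; the sketch below (in Python, with an illustrative helper normal_pdf) reproduces the summed log-likelihood and the product likelihood in the table above:

```python
# Log-likelihood of the 10 observations under a normal PDF with mean 4.5, sd 1.2.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

xs = [6.11, 6.40, 5.73, 5.71, 5.91, 4.96, 5.36, 6.29, 5.54, 6.02]
mu, sigma = 4.5, 1.2

log_lik = sum(math.log(normal_pdf(x, mu, sigma)) for x in xs)
print(log_lik)            # ~ -17.5, the summed log-likelihood in the table
print(math.exp(log_lik))  # ~ 2.5e-08, the product of the individual probabilities
```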
Likelihood “Surfaces”
The variation in likelihood for any given set of parameter values defines a likelihood “surface”...
For a model with just 1 parameter, the surface is simply a curve (aka a “likelihood profile”).
[Figure: a likelihood profile, with log-likelihood (-155 to -147) plotted against the parameter estimate (2.0 to 2.8).]
“Support” and “Support Limits”
Log-likelihood = “Support” (Edwards 1992)
[Figure: the same likelihood profile, with the maximum likelihood estimate marked at the peak and the 2-unit support interval indicated on the parameter axis (log-likelihood from -155 to -147; parameter estimate from 2.0 to 2.8).]
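To make the profile concrete, the sketch below computes a likelihood profile for the mean of the 10-observation sample from the earlier example, holding the standard deviation fixed at 1.2 (an illustrative choice; the figure's numbers come from a different dataset), and reads off the maximum likelihood estimate and the 2-unit support interval:

```python
# Likelihood profile for the mean, with sd held fixed at 1.2.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def log_lik(mu, xs, sigma=1.2):
    return sum(math.log(normal_pdf(x, mu, sigma)) for x in xs)

xs = [6.11, 6.40, 5.73, 5.71, 5.91, 4.96, 5.36, 6.29, 5.54, 6.02]
grid = [4.0 + i * 0.01 for i in range(301)]              # candidate means from 4.0 to 7.0
profile = [(mu, log_lik(mu, xs)) for mu in grid]

best_mu, best_ll = max(profile, key=lambda p: p[1])      # maximum likelihood estimate
# 2-unit support interval: parameter values within 2 log-likelihood units of the maximum
support = [mu for mu, ll in profile if ll >= best_ll - 2]
print(best_mu, best_ll)
print(min(support), max(support))
```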