Review of Probability Theory
CWR 6536
Stochastic Subsurface Hydrology
Random Variable (r.v.)
• A variable (x) that takes on values at random; it may be thought of as a function
of the outcomes of some random experiment.
• The r.v. maps the sample space of the experiment onto the real line.
• The probability with which different values
are taken by the r.v. is defined by the
cumulative distribution function, F(x), or the
probability density function, f(x).
Examples
• Discrete (discontinuous) r.v. – die-tossing experiment
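For instance, a minimal Python sketch (not from the original slides) of the die-tossing experiment, comparing empirical frequencies against the theoretical probability 1/6:

    import numpy as np

    rng = np.random.default_rng(0)
    tosses = rng.integers(1, 7, size=10_000)   # 10,000 fair die tosses
    for face in range(1, 7):
        freq = np.mean(tosses == face)         # empirical frequency of each face
        print(f"P(x = {face}) ~ {freq:.3f} (theory: {1/6:.3f})")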
Examples
• Categorical r.v. – An observation, s(a), that can take on any of a finite
number of mutually exclusive, exhaustive states (sk) , e.g. soil type,
land use, landscape position
• An indicator random variable can be defined:

  $i(\alpha; s_k) = \begin{cases} 1 & \text{if } s(\alpha) = s_k \\ 0 & \text{otherwise} \end{cases}$

• The frequency of occurrence of a state $f(s_k)$ can be determined as the
arithmetic average of the n indicator data $i(\alpha; s_k)$:

  $f(s_k) = \frac{1}{n} \sum_{\alpha=1}^{n} i(\alpha; s_k)$

• The joint frequency of two states $s_k$ and $v_{k'}$ is:

  $f(s_k, v_{k'}) = \frac{1}{n} \sum_{\alpha=1}^{n} i(\alpha; s_k)\, i(\alpha; v_{k'})$
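A minimal Python sketch of this indicator averaging (the observations below are hypothetical):

    import numpy as np

    # hypothetical categorical observations s(alpha) at n = 6 locations
    soil = np.array(["sand", "silt", "loam", "sand", "clay", "silt"])
    land = np.array(["forest", "pasture", "forest", "meadow", "forest", "pasture"])

    i_sand = (soil == "sand").astype(int)      # indicator i(alpha; s_k) for s_k = sand
    print("f(sand) =", i_sand.mean())          # f(s_k): arithmetic average of indicators

    i_forest = (land == "forest").astype(int)  # indicator for v_k' = forest
    print("f(sand, forest) =", (i_sand * i_forest).mean())  # joint frequency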
Frequency Table for Categorical Data

Soil type    Sand    Silt    Loam    Clay
Frequency    20%     33%     25%     22%

Land use     Forest    Pasture    Meadow
Frequency    14%       21%        65%
The probability distribution of categorical data is completely described
by a frequency table
Probability Density Function (pdf)
• The function f(x) is a pdf for the continuous
random variable x, defined over the set of
real numbers R, if
1) $f(x) \ge 0$ for all $x \in R$

2) $\int_{-\infty}^{\infty} f(x)\,dx = 1$

3) $P(a \le x \le b) = \int_{a}^{b} f(x)\,dx$

i.e. $f(x_0)\,\Delta x \approx P(x_0 \le x \le x_0 + \Delta x)$
Cumulative Distribution Function (cdf)
• The cdf of a continuous r.v. x with a pdf f(x) is
given by:
  $F(x_0) = P(x \le x_0) = \int_{-\infty}^{x_0} f(x)\,dx$

• Properties of the cdf:

  Therefore $f(x) = \frac{dF(x)}{dx}$

  $F(x_1) \le F(x_2)$ for $x_1 \le x_2$

  $F(-\infty) = 0$, $F(+\infty) = 1$, $0 \le F(x_1) \le 1$

  $F(x^+) = F(x)$

  $P(a \le x \le b) = \int_{a}^{b} f(x)\,dx = F(b) - F(a)$
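A quick numerical check of the last relation, sketched with an exponential pdf (rate 1, an arbitrary choice):

    from scipy import integrate, stats

    a, b = 0.5, 2.0
    p_int, _ = integrate.quad(stats.expon.pdf, a, b)   # integral of f(x) from a to b
    p_cdf = stats.expon.cdf(b) - stats.expon.cdf(a)    # F(b) - F(a)
    print(p_int, p_cdf)                                # both ~0.4712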
Examples: Continuous r.v.
• uniform distribution
• exponential distribution
• Gaussian distribution
• log-normal distribution
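All four are available in scipy.stats; a brief sketch evaluating each pdf and cdf at a point (the parameter choices are arbitrary):

    from scipy import stats

    x = 1.0
    for name, dist in [("uniform", stats.uniform()),           # U(0, 1)
                       ("exponential", stats.expon()),         # rate 1
                       ("Gaussian", stats.norm(0, 1)),         # N(0, 1)
                       ("log-normal", stats.lognorm(s=1.0))]:  # sigma = 1
        print(f"{name}: f({x}) = {dist.pdf(x):.4f}, F({x}) = {dist.cdf(x):.4f}")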
Moments of a Random Variable
• The pdf (and cdf) summarize all knowledge of the r.v.; however, we almost
never know this much about actual natural phenomena.
• Moments of a r.v. provide a more aggregated description of its behavior,
which is often easier to estimate from field data than the pdf or cdf.
The First Moment
• The expected value (or first moment, or
population mean, or ensemble mean) of a
r.v. is defined as the sum of all the values a
r.v. may take, each weighted by the
probability with which the value is taken
  $\bar{x} = E(x) = \int_{-\infty}^{\infty} x f(x)\,dx$
• This quantity is a single-valued, deterministic summary of the r.v.
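A sketch of this computation by numerical integration, using an exponential pdf with rate 1 (whose mean is known to equal 1):

    import numpy as np
    from scipy import integrate, stats

    # E(x) = integral of x * f(x) dx over the support of the pdf
    mean, _ = integrate.quad(lambda x: x * stats.expon.pdf(x), 0, np.inf)
    print(mean)   # ~1.0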
Properties of the Expectation Operator
• If the pdf f(x) is even (i.e. f(x) = f(-x)), then the expected value is
equal to zero
• The expectation is a linear operator: $E(ax + b) = aE(x) + b$
• The expected value of a function of the r.v.: $E[g(x)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx$
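A quick numerical illustration of the linearity property (the constants a and b are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(3.0, 2.0, size=100_000)   # samples with E(x) = 3
    a, b = 2.0, 5.0
    print(np.mean(a * x + b))                # ~ a*E(x) + b = 11
    print(a * np.mean(x) + b)                # matches: expectation is linear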
Higher Order Moments
  $\overline{x^n} = E(x^n) = \int_{-\infty}^{\infty} x^n f(x)\,dx$
n=1 E[x]=mean (measure of central tendency)
n=2 E[x2]= mean square
n=3 E[x3]= mean cube
Central Moments
  $\mu_n = E[(x - \bar{x})^n] = \int_{-\infty}^{\infty} (x - \bar{x})^n f(x)\,dx$

n=1: $\mu_1 = E[(x - \bar{x})^1] = 0$
n=2: $\mu_2 = E[(x - \bar{x})^2] = \sigma^2$  (variance)
n=3: $\mu_3 = E[(x - \bar{x})^3]$  (skewness)
n=4: $\mu_4 = E[(x - \bar{x})^4]$  (kurtosis)
It can be shown that the full (infinite) set of
moments completely exhausts the statistical
information concerning the r.v., and thus the pdf
can be constructed from the full set of moments
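In practice these moments are estimated from samples; a sketch using a hypothetical skewed data set (note that scipy reports the normalized skewness and excess kurtosis rather than the raw central moments):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.lognormal(mean=0.0, sigma=0.5, size=50_000)   # a positively skewed sample

    print("mean:    ", np.mean(x))          # first moment
    print("variance:", np.var(x))           # mu_2
    print("skewness:", stats.skew(x))       # mu_3 / sigma^3
    print("kurtosis:", stats.kurtosis(x))   # mu_4 / sigma^4 - 3 (excess)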
Joint Probability Distributions
• The joint cdf for two random variables, x
and y, is
  $F_{xy}(x_0, y_0) = P(x \le x_0, y \le y_0) = \int_{-\infty}^{x_0} \int_{-\infty}^{y_0} f_{xy}(x, y)\,dy\,dx$

• where

  $F_{xy}(-\infty, y) = F_{xy}(x, -\infty) = 0, \quad F_{xy}(\infty, \infty) = 1$
Marginal cdfs and pdfs
• cdfs: $F_x(x) = F_{xy}(x, \infty)$, $F_y(y) = F_{xy}(\infty, y)$
• pdfs: $f_x(x) = \int_{-\infty}^{\infty} f_{xy}(x, y)\,dy$, $f_y(y) = \int_{-\infty}^{\infty} f_{xy}(x, y)\,dx$
• conditional pdfs: $f_{x|y}(x \mid y) = \frac{f_{xy}(x, y)}{f_y(y)}$
Moments of Two Random Variables
  $E[xy] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y\, f_{xy}(x, y)\,dx\,dy$

  $\mathrm{Cov}(x, y) = E[(x - \bar{x})(y - \bar{y})] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - \bar{x})(y - \bar{y}) f_{xy}(x, y)\,dx\,dy$

  $E[x \mid y] = \int_{-\infty}^{\infty} x f_{x|y}(x \mid y)\,dx$

  $E[g(x, y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) f_{xy}(x, y)\,dx\,dy$
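A sketch of estimating Cov(x, y) from paired samples; the two variables are made dependent here through a shared, hypothetical noise term:

    import numpy as np

    rng = np.random.default_rng(3)
    z = rng.normal(size=50_000)                 # shared component
    x = z + 0.5 * rng.normal(size=50_000)
    y = 2.0 * z + 0.5 * rng.normal(size=50_000)

    print(np.mean((x - x.mean()) * (y - y.mean())))   # sample Cov(x, y) ~ 2 Var(z) = 2
    print(np.cov(x, y)[0, 1])                         # numpy's (unbiased) estimate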
Statistical Independence vs Uncorrelation
• Two random variables are statistically independent if $f_{xy}(x, y) = f_x(x) f_y(y)$
• Two random variables are uncorrelated if $\mathrm{Cov}(x, y) = 0$, i.e. $E[xy] = E[x]E[y]$
• Two random variables are orthogonal if $E[xy] = 0$
Statistical Independence vs Uncorrelation
• If two r.v. are independent then they are
uncorrelated (but not vice versa)
• If x and y are independent random variables then
g(x) and h(y) are also independent random
variables (this is not generally true if x and y are
merely uncorrelated)
• Correlation measures linear relatedness only
• An exception is jointly normal r.v.s, where uncorrelatedness is equivalent
to independence
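A sketch of the classic counterexample: with x standard normal and y = x², Cov(x, y) = E[x³] = 0, so the two are uncorrelated even though y is completely determined by x:

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.normal(size=200_000)   # x ~ N(0, 1)
    y = x**2                       # y depends deterministically on x
    print(np.cov(x, y)[0, 1])      # sample covariance ~ 0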