Transcript Document

Recap
Random variables

Discrete random variable
• Sample space is finite or has countably many elements
• The probability function f(x) is often tabulated
• Calculation of probabilities:
  P(a < X < b) = Σ_{a<t<b} f(t)

Continuous random variable
• Sample space has infinitely many elements
• The density function f(x) is a continuous function
• Calculation of probabilities:
  P(a < X < b) = ∫_a^b f(t) dt
lecture 3
Mean / Expected value
Definition:
Let X be a random variable with probability /
density function f(x). The mean or expected value of X
is given by
μ  E( X)   x f ( x)
x
if X is discrete, and

μ  E( X)   x f ( x)dx
if X is continuous.
2

Mean / Expected value
Interpretation:
Each value contributes its value multiplied by its probability
– the mean is a weighted average.
Example:
[Figure: bar chart of a probability function f(x) over x = 0, 1, 2, 3; the mean value is 1.5.]
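The weighted-average interpretation can be made concrete. A small sketch (the probabilities below are assumed for illustration, chosen so that the mean is 1.5 as in the figure):

```python
# Weighted average: mu = sum over x of x * f(x)
f = {0: 0.1, 1: 0.4, 2: 0.4, 3: 0.1}  # illustrative probability function
mu = sum(x * p for x, p in f.items())
# mu ≈ 1.5
```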
Mean / Expected value
Example
Problem:
• A private pilot wishes to insure his plane valued at 1 million kr.
• The insurance company expects a loss with the following
probabilities:
• Total loss with probability 0.001
• 50% loss with probability 0.01
• 25% loss with probability 0.1
1. What is the expected loss in kroner?
2. What premium should the insurance company
ask if it wants an expected profit of 3000 kr?
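One way to compute the answers, treating the loss X in kroner as a discrete random variable with the three partial losses above (and no loss otherwise):

```python
plane_value = 1_000_000  # kr
# loss distribution: (fraction of the value lost, probability)
losses = [(1.00, 0.001), (0.50, 0.01), (0.25, 0.1)]
expected_loss = sum(frac * plane_value * p for frac, p in losses)
# expected_loss ≈ 31000 kr (1000 + 5000 + 25000)
premium = expected_loss + 3000  # add the desired expected profit
# premium ≈ 34000 kr
```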
Mean / Expected value
Function of a random variable
Theorem:
Let X be a random variable with probability / density
function f(x). The expected value of g(X) is
μ_g(X) = E[g(X)] = Σ_x g(x)·f(x)
if X is discrete, and
μ_g(X) = E[g(X)] = ∫_{−∞}^{∞} g(x)·f(x) dx
if X is continuous.
Expected value
Linear combination
Theorem: Linear combination
Let X be a random variable (discrete or continuous), and
let a and b be constants. For the random variable aX + b
we have
E(aX+b) = aE(X)+b
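A quick numerical check of the rule on a small probability function (the distribution below is assumed purely for illustration):

```python
f = {0: 0.2, 1: 0.5, 2: 0.3}           # illustrative probability function

def E(g):                              # E[g(X)] = sum over x of g(x) * f(x)
    return sum(g(x) * p for x, p in f.items())

a, b = 3, 7
lhs = E(lambda x: a * x + b)           # E(aX + b)
rhs = a * E(lambda x: x) + b           # a·E(X) + b
# lhs ≈ rhs ≈ 10.3
```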
Mean / Expected value
Example
Problem:
• The pilot from before buys a new plane valued at 2 million kr.
• The insurance company expects losses with the same probabilities as before:
• Total loss with probability 0.001
• 50% loss with probability 0.01
• 25% loss with probability 0.1
1. What is the expected loss for the new plane?
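Since every partial loss scales with the value of the plane, the new loss is 2X where X is the old loss, so by the linear-combination theorem E(2X) = 2·E(X). A sketch:

```python
# loss distribution: (fraction of the value lost, probability)
losses = [(1.00, 0.001), (0.50, 0.01), (0.25, 0.1)]

def expected_loss(value):
    return sum(frac * value * p for frac, p in losses)

old = expected_loss(1_000_000)   # ≈ 31000 kr
new = expected_loss(2_000_000)   # ≈ 62000 kr, i.e. twice the old expected loss
```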
Mean / Expected value
Function of two random variables
Definition:
Let X and Y be random variables with joint probability /
density function f(x,y). The expected value of g(X,Y) is
μ_g(X,Y) = E[g(X,Y)] = Σ_x Σ_y g(x,y)·f(x,y)
if X and Y are discrete, and
μ_g(X,Y) = E[g(X,Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x,y)·f(x,y) dx dy
if X and Y are continuous.
Mean / Expected value
Function of two random variables
Problem:
Burger King sells both via “drive-in” and “walk-in”.
Let X and Y be the fractions of the opening hours that “drive-in” and “walk-in” are busy.
Assume that the joint density of X and Y is given by
f(x,y) = 4xy for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and f(x,y) = 0 otherwise.
The turnover g(X,Y) on a single day is given by
g(X,Y) = 6000·X + 9000·Y
What is the expected turnover on a single day?
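A numerical sketch: approximate E[g(X,Y)] = ∬ (6000x + 9000y)·4xy dx dy over the unit square with a midpoint Riemann sum (exactly, the integral is 6000·E[X] + 9000·E[Y] with E[X] = E[Y] = 2/3, i.e. 10000 kr):

```python
n = 400                       # grid resolution in each direction
h = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h         # midpoint of cell i in x
    for j in range(n):
        y = (j + 0.5) * h     # midpoint of cell j in y
        total += (6000 * x + 9000 * y) * 4 * x * y * h * h
# total ≈ 10000 kr
```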
Mean / Expected value
Sums and products
Theorem: Sum/Product
Let X and Y be random variables. Then
E[X + Y] = E[X] + E[Y]
If X and Y are independent, then
E[X·Y] = E[X]·E[Y]
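For two independent fair dice X and Y, both rules can be checked by brute force over the 36 equally likely outcomes:

```python
vals = range(1, 7)
p = 1 / 36                                 # each (x, y) pair equally likely
E_X = sum(x / 6 for x in vals)             # 3.5 (same for Y by symmetry)
E_sum  = sum((x + y) * p for x in vals for y in vals)
E_prod = sum(x * y * p for x in vals for y in vals)
# E_sum  ≈ 7.0   = E_X + E_X
# E_prod ≈ 12.25 = E_X * E_X
```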
Variance
Definition:
Let X be a random variable with probability / density
function f(x) and expected value μ. The variance of X is
then given by
σ 2  Var ( X)  E [( X  μ)2 ]   ( x  μ)2 f ( x)
x
if X is discrete, and

σ  Var ( X)  E [( X  μ) ]   ( x  μ)2 f ( x) dx
2
2

if X is continuous.
The standard deviation is the positive root of the variance:
σ = √Var(X)
Variance
Interpretation
The variance expresses how dispersed the density /
probability function is around the mean.
[Figure: two probability functions f(x) around the same mean, one with variance 0.5 and one with variance 2; the latter is more spread out over x = 0, …, 4.]
Rewrite of the variance:
σ 2  Var ( X)  E [ X2 ]  μ2
Variance
Linear combinations
Theorem: Linear combination
Let X be a random variable, and let a and b be constants.
For the random variable aX + b the variance is
Var(aX + b) = a²·Var(X)
Examples:
Var(X + 7) = Var(X)
Var(−X) = Var(X)
Var(2X) = 4·Var(X)
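The three examples can be checked by brute force on a small probability function (assumed for illustration):

```python
f = {0: 0.2, 1: 0.5, 2: 0.3}           # illustrative probability function

def var(g):                            # Var(g(X)) from the definition
    mu = sum(g(x) * p for x, p in f.items())
    return sum((g(x) - mu) ** 2 * p for x, p in f.items())

v = var(lambda x: x)                   # Var(X)
# var(lambda x: x + 7) ≈ v             shifting leaves the variance unchanged
# var(lambda x: -x)    ≈ v
# var(lambda x: 2 * x) ≈ 4 * v
```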
Covariance
Definition:
Let X and Y be two random variables with joint probability /
density function f(x,y). The covariance between X and Y is
σ XY  Cov( X, Y)  E[( X  μX )(Y  μY )]   ( x  μX )(y  μY ) f ( x, y)
x
y
if X and Y are discrete, and
 
σ XY  Cov( X, Y)    ( x  μX )(y  μY ) f ( x, y) dx dy
 
if X and Y are continuous.
Covariance
Interpretation
The covariance between X and Y expresses how X and Y vary
together.
Examples: The covariance between
• X = sales of bicycles and Y = sales of bicycle pumps is positive.
• X = trips booked to Spain and Y = the outdoor temperature is negative.
• X = the number of eyes on a red die and Y = the number of eyes on a green die is zero.
Covariance
Properties
Theorem:
The covariance between two random variables X and Y
with means μ_X and μ_Y, respectively, is
σ_XY = Cov(X,Y) = E[X·Y] − μ_X·μ_Y
Notice!
Cov (X,X) = Var (X)
If X and Y are independent random variables, then
Cov (X,Y) = 0
Notice! Cov(X,Y) = 0 does not imply independence!
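A classic illustration of the last point: let X be uniform on {−1, 0, 1} and Y = X². Then Y is completely determined by X, yet the covariance is zero:

```python
xs = [-1, 0, 1]
p = 1 / 3                                # X uniform on {-1, 0, 1}
E_X  = sum(x * p for x in xs)            # 0
E_Y  = sum(x**2 * p for x in xs)         # E[X^2] = 2/3
E_XY = sum(x * x**2 * p for x in xs)     # E[X^3] = 0
cov = E_XY - E_X * E_Y                   # 0, although Y = X^2 depends on X
```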
Variance/Covariance
Linear combinations
Theorem: Linear combination
Let X and Y be random variables, and let a and b be
constants.
For the random variables aX + bY the variance is
Var (aX  bY)  a2 Var ( X)  b2 Var ( Y)  2 a b Cov( X, Y)
Specielt: Var[X+Y] = Var[X] + Var[Y] +2Cov (X,Y)
If X and Y are independent, the variance is
Var[X+Y] = Var[X] + Var[Y]
Correlation
Definition:
Let X and Y be two random variables with covariance
Cov(X,Y) and standard deviations σ_X and σ_Y, respectively.
The correlation coefficient of X and Y is
ρ_XY = Cov(X,Y) / (σ_X·σ_Y)
It holds that
−1 ≤ ρ_XY ≤ 1
If X and Y are independent, then ρ_XY = 0.
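A worked example (not from the slides): for two independent fair dice X and Y, the correlation between X and the sum S = X + Y works out to 1/√2 ≈ 0.707:

```python
import math

pairs = [(x, y) for x in range(1, 7) for y in range(1, 7)]
p = 1 / 36                               # each outcome equally likely

def E(g):                                # expectation over the joint distribution
    return sum(g(x, y) * p for x, y in pairs)

cov = E(lambda x, y: x * (x + y)) - E(lambda x, y: x) * E(lambda x, y: x + y)
sd_X = math.sqrt(E(lambda x, y: x**2) - E(lambda x, y: x)**2)
sd_S = math.sqrt(E(lambda x, y: (x + y)**2) - E(lambda x, y: x + y)**2)
rho = cov / (sd_X * sd_S)
# rho ≈ 0.707 (= 1/sqrt(2))
```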
Mean, variance, covariance
Collection of rules
Multiplication by and addition of constants:
E(aX) = a·E(X)
Var(aX) = a²·Var(X)
Cov(aX, bY) = ab·Cov(X,Y)
E(aX + b) = a·E(X) + b
Var(aX + b) = a²·Var(X)
Sums:
E(X + Y) = E(X) + E(Y)
Var(X + Y) = Var(X) + Var(Y) + 2·Cov(X,Y)
If X and Y are independent:
E(X·Y) = E(X)·E(Y)
Var(X + Y) = Var(X) + Var(Y)
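All of these rules can be verified by brute force over a small joint probability function (the joint distribution below is purely illustrative; X and Y are deliberately dependent so the covariance terms matter):

```python
# joint probability function of a small dependent pair (X, Y)
f = {(0, 0): 0.2, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.3, (2, 1): 0.3}

def E(g):                                 # E[g(X, Y)]
    return sum(g(x, y) * p for (x, y), p in f.items())

def Var(g):                               # Var(g(X, Y)) via E[g^2] - E[g]^2
    return E(lambda x, y: g(x, y) ** 2) - E(g) ** 2

def Cov(g, h):                            # Cov via E[gh] - E[g]E[h]
    return E(lambda x, y: g(x, y) * h(x, y)) - E(g) * E(h)

X = lambda x, y: x
Y = lambda x, y: y
a, b = 2, 5
# each rule in the table can now be checked numerically, e.g.:
assert abs(E(lambda x, y: a * x + b) - (a * E(X) + b)) < 1e-9
assert abs(Var(lambda x, y: x + y) - (Var(X) + Var(Y) + 2 * Cov(X, Y))) < 1e-9
assert abs(Cov(lambda x, y: a * x, lambda x, y: b * y) - a * b * Cov(X, Y)) < 1e-9
```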