Chapter 5a: Functions of Random Variables
Yang Zhenlin
[email protected]
http://www.mysmu.edu/faculty/zlyang/
Chapter 5a Contents
The main purpose of this chapter is to introduce methods for finding the distribution of a function of one or more random variables:
Functions of One Random Variable
--- change-of-variable technique
Functions of Two Random Variables
--- change-of-variable technique
Sum of Independent Random Variables
--- the moment generating function (MGF) technique
Functions of One Random Variable
The case of a continuous random variable.
Definition 5a.1 (Change-of-Variable Technique) Let X be a continuous type random variable with pdf f(x). Let Y = u(X) be a one-to-one transformation of X with inverse function X = v(Y). Then the pdf of Y is given by
$$g(y) = f[v(y)]\,|v'(y)|,$$
where v'(y) is the derivative of v(y). If the possible values of X are c1 < x < c2 and u is increasing, then the possible values of Y are u(c1) < y < u(c2); for a decreasing u, the two endpoints swap.
Example 5a.1. Let X have a gamma distribution with pdf
$$f(x) = \frac{1}{\Gamma(\alpha)\,\theta^{\alpha}}\, x^{\alpha-1} e^{-x/\theta}, \quad 0 < x < \infty.$$
Let Y = e^X. Find the pdf of Y.
Functions of One Random Variable
Solution: Since the inverse function is X = v(Y) = ln Y, we have v'(y) = 1/y. Thus, by Definition 5a.1, the pdf of Y is given by
$$g(y) = f[v(y)]\,|v'(y)| = \frac{1}{\Gamma(\alpha)\,\theta^{\alpha}}\,(\ln y)^{\alpha-1} e^{-(\ln y)/\theta}\cdot\frac{1}{y} = \frac{(\ln y)^{\alpha-1}}{\Gamma(\alpha)\,\theta^{\alpha}\, y^{1+1/\theta}}.$$
Since the support of X is (0, ∞), the support of Y is (1, ∞). The pdf of Y is thus
$$g(y) = \frac{(\ln y)^{\alpha-1}}{\Gamma(\alpha)\,\theta^{\alpha}\, y^{1+1/\theta}}, \quad 1 < y < \infty.$$
The way to see the change-of-variable technique is through the CDF: for an increasing u,
$$G(y) = P\{Y \le y\} = P\{X \le v(y)\} = F[v(y)].$$
Taking derivatives leads to g(y) = f[v(y)]|v'(y)| (a decreasing u gives G(y) = 1 − F[v(y)], and the absolute value covers both cases). So the change-of-variable technique is essentially the CDF technique.
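As a quick numerical sanity check of Example 5a.1 (an illustrative sketch, not part of the slides; the parameter values are arbitrary), one can exploit this very CDF relation, G(y) = F[v(y)], directly:

```python
# Check the CDF relation G(y) = F[v(y)] for Y = exp(X), X ~ Gamma(alpha, theta):
# a sample of Y should pass a KS test against G(y) = F_X(ln y).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, theta = 2.0, 0.5                     # arbitrary demo parameters
y = np.exp(rng.gamma(shape=alpha, scale=theta, size=200_000))

G = lambda v: stats.gamma(a=alpha, scale=theta).cdf(np.log(v))
print(stats.kstest(y, G))                   # large p-value: distributions agree
```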
Functions of One Random Variable
The case of a discrete random variable.
The change-of-variable technique can be applied to a random variable X of the discrete type, but there is a major difference: the pmf p(x) = P{X = x} represents probability, whereas the pdf f(x) does not.
For a one-to-one transformation Y = u(X) with inverse X = v(Y), we can easily see that the pmf g(y) of Y is
$$g(y) = P\{Y = y\} = P\{X = v(y)\} = p[v(y)].$$
The possible values of Y are found directly from the possible values of X through the functional relation Y = u(X).
Example 5a.2. Let X have a Poisson distribution with λ = 4. Find the pmf of Y = X^{1/2}. Since X = Y², we have
$$p(x) = \frac{4^{x} e^{-4}}{x!}, \quad x = 0, 1, 2, \ldots \qquad \text{and} \qquad g(y) = \frac{4^{y^{2}} e^{-4}}{(y^{2})!}, \quad y = 0, 1, \sqrt{2}, \sqrt{3}, \ldots$$
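A short sketch of how the relabeled pmf works in practice (illustrative only, not from the slides):

```python
# The pmf of Y = sqrt(X) for X ~ Poisson(4): probabilities carry over
# unchanged; only the support is relabeled from x to sqrt(x).
import math

lam = 4.0
def p(x):                                    # Poisson(4) pmf
    return lam ** x * math.exp(-lam) / math.factorial(x)

support = [(math.sqrt(x), p(x)) for x in range(20)]
for yval, prob in support[:5]:
    print(f"P(Y = {yval:.4f}) = {prob:.4f}")
print("total mass over first 20 terms:", round(sum(pr for _, pr in support), 6))
```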
Functions of One Random Variable
The case of a non-one-to-one function of a continuous r.v.
The change-of-variable technique requires the function to be one-to-one, and thus cannot be applied when the function is not one-to-one.
However, as noted earlier, the distributions of functions of a random variable are essentially derived from the CDF. Thus, the distribution of a non-one-to-one function can still be derived from the CDF!
We will demonstrate this idea by showing an important result: the square of a standard normal random variable is a gamma r.v. with parameters (1/2, 2). This special gamma r.v. is called the chi-squared random variable with degrees of freedom equal to 1.
Functions of One Random Variable
Example 5a.3. Let Z be a standard normal r.v. and let X = Z². The CDF of X is, for x ≥ 0,
$$G(x) = P\{X \le x\} = P\{-\sqrt{x} \le Z \le \sqrt{x}\} = \int_{-\sqrt{x}}^{\sqrt{x}} \frac{1}{\sqrt{2\pi}}\, e^{-z^{2}/2}\, dz = 2\int_{0}^{\sqrt{x}} \frac{1}{\sqrt{2\pi}}\, e^{-z^{2}/2}\, dz.$$
Letting z = √y, so that dz = 1/(2√y) dy, the CDF becomes
$$G(x) = \int_{0}^{x} \frac{1}{\sqrt{2\pi y}}\, e^{-y/2}\, dy, \quad x \ge 0.$$
Taking the derivative with respect to x, we obtain the pdf of X:
$$g(x) = \frac{1}{\sqrt{2\pi x}}\, e^{-x/2} = \frac{1}{\sqrt{\pi}\, 2^{1/2}}\, x^{1/2-1} e^{-x/2}, \quad x \ge 0.$$
Recognizing that Γ(1/2) = √π, the above is the pdf of a gamma r.v. with parameters (1/2, 2), called the chi-squared with 1 d.f.
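This is easy to confirm numerically (an illustrative sketch, not from the slides):

```python
# Empirically, Z^2 for Z ~ N(0, 1) should match the chi-squared(1)
# distribution, i.e. Gamma(shape = 1/2, scale = 2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_normal(500_000) ** 2

# Kolmogorov-Smirnov test against chi-squared with 1 degree of freedom
print(stats.kstest(x, stats.chi2(df=1).cdf))

# chi-squared(1) and Gamma(1/2, 2) are the same distribution
print(stats.chi2(df=1).pdf(1.0), stats.gamma(a=0.5, scale=2).pdf(1.0))
```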
Functions of Two Random Variables
The above change-of-variable technique can be extended to the case of joint distributions involving two or more random variables. Many interesting problems can be solved this way.
Definition 5a.2 (Change-of-Variable Technique) Let X1 and X2 be two continuous type random variables with joint pdf f(x1, x2). Let Y1 = u1(X1, X2) and Y2 = u2(X1, X2) be two continuous functions with a single-valued inverse: X1 = v1(Y1, Y2) and X2 = v2(Y1, Y2). Then the joint pdf of Y1 and Y2 is
$$g(y_1, y_2) = f[v_1(y_1, y_2),\, v_2(y_1, y_2)]\, |J|,$$
where J, called the Jacobian, is the determinant of the matrix of partial derivatives:
$$J = \begin{vmatrix} \dfrac{\partial v_1(y_1, y_2)}{\partial y_1} & \dfrac{\partial v_1(y_1, y_2)}{\partial y_2} \\[2ex] \dfrac{\partial v_2(y_1, y_2)}{\partial y_1} & \dfrac{\partial v_2(y_1, y_2)}{\partial y_2} \end{vmatrix}.$$
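For transformations like the one in the next example, the Jacobian can also be computed symbolically; a minimal sketch using sympy (an illustration, not part of the slides):

```python
# Jacobian of the inverse map for Y1 = X1 - X2, Y2 = X1 + X2 (Example 5a.4):
# X1 = (Y1 + Y2)/2 and X2 = (Y2 - Y1)/2.
import sympy as sp

y1, y2 = sp.symbols("y1 y2")
v1 = (y1 + y2) / 2                  # inverse map for X1
v2 = (y2 - y1) / 2                  # inverse map for X2

J = sp.Matrix([v1, v2]).jacobian([y1, y2]).det()
print(J)                            # 1/2, so |J| = 1/2 in Definition 5a.2
```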
Functions of Two Random Variables
Example 5a.4. Let X1 and X2 be two independent r.v.s, each with pdf f(x) = e^{−x}, 0 < x < ∞. Consider Y1 = X1 − X2 and Y2 = X1 + X2.
(a) Find the joint pdf of Y1 and Y2.
(b) Find the marginal pdfs of Y1 and Y2, respectively.
Solution:
(a) Since X1 and X2 are independent and have the same distribution,
$$f(x_1, x_2) = f(x_1) f(x_2) = e^{-(x_1 + x_2)}, \quad 0 < x_1 < \infty,\ 0 < x_2 < \infty.$$
From Y1 = X1 − X2 and Y2 = X1 + X2, we obtain
$$X_1 = \frac{Y_1 + Y_2}{2} \quad \text{and} \quad X_2 = \frac{Y_2 - Y_1}{2}, \quad \text{with } J = \begin{vmatrix} 1/2 & 1/2 \\ -1/2 & 1/2 \end{vmatrix} = \frac{1}{2}.$$
The joint pdf of Y1 and Y2 is
$$g(y_1, y_2) = \frac{1}{2}\, e^{-y_2}, \quad -y_2 < y_1 < y_2,\ 0 < y_2 < \infty,$$
Functions of Two Random Variables
where the possible values of Y1 and Y2 can be found as follows:
• Y2 = X1 + X2 implies 0 < Y2 < ∞;
• X1 = (Y1 + Y2)/2 > 0 implies Y1 > −Y2;
• X2 = (Y2 − Y1)/2 > 0 implies Y1 < Y2.
[Figure: the region of (Y1, Y2) values, the wedge in the (y1, y2)-plane bounded below by the lines y2 = −y1 and y2 = y1.]
Functions of Two Random Variables
(b) The marginal pdf of Y2:
$$g_2(y_2) = \int_{-y_2}^{y_2} \frac{1}{2}\, e^{-y_2}\, dy_1 = y_2\, e^{-y_2}, \quad 0 < y_2 < \infty.$$
The marginal pdf of Y1:
$$g_1(y_1) = \begin{cases} \displaystyle \int_{-y_1}^{\infty} \frac{1}{2}\, e^{-y_2}\, dy_2 = \frac{1}{2}\, e^{y_1}, & -\infty < y_1 < 0, \\[2ex] \displaystyle \int_{y_1}^{\infty} \frac{1}{2}\, e^{-y_2}\, dy_2 = \frac{1}{2}\, e^{-y_1}, & 0 \le y_1 < \infty. \end{cases}$$
The latter expression can simply be written as
$$g_1(y_1) = \frac{1}{2}\, e^{-|y_1|}, \quad -\infty < y_1 < \infty.$$
This is called the double exponential pdf.
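Both marginals are easy to verify by simulation (an illustrative sketch, not from the slides); the double exponential is also known as the Laplace distribution:

```python
# Y1 = X1 - X2 for independent Exp(1) variables should be Laplace(0, 1),
# and Y2 = X1 + X2 should be Gamma(shape = 2, scale = 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x1 = rng.exponential(scale=1.0, size=300_000)
x2 = rng.exponential(scale=1.0, size=300_000)

print(stats.kstest(x1 - x2, stats.laplace(loc=0, scale=1).cdf))   # Y1 marginal
print(stats.kstest(x1 + x2, stats.gamma(a=2, scale=1).cdf))       # Y2 marginal
```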
Functions of Two Random Variables
Definition 5a.3. Let X and Y be jointly distributed r.v.s with joint pmf p(x, y) or joint pdf f(x, y). Let u(X, Y) be a continuous function of X and Y. Then u(X, Y) is also a random variable. If X and Y are both discrete,
$$E[u(X, Y)] = \sum_{x} \sum_{y} u(x, y)\, p(x, y),$$
and if X and Y are both continuous,
$$E[u(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} u(x, y)\, f(x, y)\, dx\, dy.$$
If X and Y are jointly distributed r.v.s, then
Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y).
If, further, X and Y are independent, then
Var(X + Y) = Var(X) + Var(Y).
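A small numerical illustration of the variance identity (simulated data, purely for illustration):

```python
# Check Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y) on correlated samples.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)
y = 0.6 * x + rng.normal(size=100_000)        # correlated with x by construction

lhs = np.var(x + y)
rhs = np.var(x) + np.var(y) + 2 * np.cov(x, y, ddof=0)[0, 1]
print(f"Var(X+Y) = {lhs:.4f}, Var(X) + Var(Y) + 2Cov(X,Y) = {rhs:.4f}")
```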
Functions of Two Random Variables
Example 5a.5. The joint probability distribution of variables X and Y is shown in the table below.

           X = 1   X = 2   X = 3
  Y = 1    0.20    0.18    0.12
  Y = 2    0.15    0.09    0.06
  Y = 3    0.07    0.03    0.10

(a) Determine the marginal probability distributions of X and Y.
(b) Are X and Y independent? Explain.
(c) Find the probability mass function of X + Y.
(d) Find P(X ≠ Y).
Solution: (a) The marginal pmfs are:

  x         1      2      3
  pX(x)   0.42   0.30   0.28

  y         1      2      3
  pY(y)   0.50   0.30   0.20
Functions of Two Random Variables
(b) No. Because p(1, 1) = 0.20, but pX(1) pY(1) = 0.42 × 0.50 = 0.21 ≠ 0.20.
(c) Let Z = X + Y; then the pmf of Z is

  z         2      3      4      5      6
  pZ(z)   0.20   0.33   0.28   0.09   0.10
where, for example,
pZ(3) = P(X + Y = 3) = P(X = 1, Y = 2) + P(X = 2, Y = 1) = 0.15 + 0.18 = 0.33.
(d) P(X ≠ Y) = 1 − P(X = Y)
    = 1 − P(X = 1, Y = 1) − P(X = 2, Y = 2) − P(X = 3, Y = 3)
    = 1 − 0.20 − 0.09 − 0.10 = 0.61.
Sum of Independent Random Variables
Recall the Uniqueness Property of the MGF: the MGF of a r.v. X uniquely determines its distribution, and vice versa; e.g., if the MGF of X is the same as that of a normal r.v., then X must be normally distributed.
Using the above property, one can easily see the following results:
Sum of independent binomial r.v.s with the same probability of success π is again a binomial r.v.
Sum of independent Poisson r.v.s is again a Poisson r.v.
Sum of independent exponential r.v.s with the same mean is a gamma r.v.
Sum of independent normal r.v.s is again a normal r.v.
And more...
Sum of Independent Random Variables
Sum of independent normal r.v.s is again a normal r.v.
Recall the MGF of X ~ N(μ, σ²):
$$M(t) = \exp\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right).$$
If one can show that the MGF of a random variable has the same form as above, then one can conclude that this random variable is normal, with mean and variance being, respectively, the quantities in front of t and ½t².
To demonstrate the above result using the MGF technique, consider two independent normal random variables X1 ~ N(μ1, σ1²) and X2 ~ N(μ2, σ2²). Let Y = X1 + X2. The MGF of Y is
$$M_Y(t) = E[e^{tY}] = E[e^{t(X_1 + X_2)}] = E[e^{tX_1} e^{tX_2}] = E[e^{tX_1}]\, E[e^{tX_2}] = M_{X_1}(t)\, M_{X_2}(t),$$
where the fourth equality follows from the independence of X1 and X2.
Sum of Independent Random Variables
It follows that
$$M_Y(t) = \exp\left(\mu_1 t + \tfrac{1}{2}\sigma_1^2 t^2\right) \exp\left(\mu_2 t + \tfrac{1}{2}\sigma_2^2 t^2\right) = \exp\left((\mu_1 + \mu_2)t + \tfrac{1}{2}(\sigma_1^2 + \sigma_2^2)t^2\right).$$
Recognizing that this MGF is of the same form as the MGF of a normal random variable, Y must be normally distributed. In particular,
$$Y \sim N(\mu_1 + \mu_2,\ \sigma_1^2 + \sigma_2^2).$$
This result can easily be extended to the case of many normal r.v.s.
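The product of the two MGFs can also be checked symbolically; a minimal sketch using sympy (an illustration, not part of the slides):

```python
# Multiply the two normal MGFs and confirm the exponent is that of a
# normal MGF with mean mu1 + mu2 and variance s1**2 + s2**2.
import sympy as sp

t, mu1, mu2, s1, s2 = sp.symbols("t mu1 mu2 s1 s2", real=True)
M1 = sp.exp(mu1 * t + sp.Rational(1, 2) * s1**2 * t**2)
M2 = sp.exp(mu2 * t + sp.Rational(1, 2) * s2**2 * t**2)

exponent = sp.expand(sp.log(M1 * M2))    # exp(a)*exp(b) combines to exp(a+b)
print(sp.collect(exponent, t))           # t*(mu1 + mu2) + t**2*(s1**2/2 + s2**2/2)
```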
Sum of Independent Random Variables
Sum of independent binomial r.v.s with the same probability of success π is again a binomial r.v.
To see this, use the MGF technique (whiteboard presentation).
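A brief sketch of the standard MGF argument (not transcribed from the whiteboard): if $X_i \sim \text{Binomial}(n_i, \pi)$ independently for i = 1, 2, then
$$M_{X_i}(t) = \left(1 - \pi + \pi e^t\right)^{n_i}, \qquad M_{X_1 + X_2}(t) = M_{X_1}(t)\, M_{X_2}(t) = \left(1 - \pi + \pi e^t\right)^{n_1 + n_2},$$
which is the MGF of a Binomial(n1 + n2, π) r.v.; by the uniqueness property, X1 + X2 ~ Binomial(n1 + n2, π).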
Sum of Independent Random Variables
Sum of independent Poisson r.v.s is again a Poisson r.v.
To see this, use the MGF technique (whiteboard presentation).
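A brief sketch of the standard argument (not transcribed from the whiteboard): if $X_i \sim \text{Poisson}(\lambda_i)$ independently, then
$$M_{X_i}(t) = \exp\left(\lambda_i (e^t - 1)\right), \qquad M_{X_1 + X_2}(t) = \exp\left((\lambda_1 + \lambda_2)(e^t - 1)\right),$$
the MGF of a Poisson(λ1 + λ2) r.v.; by uniqueness, X1 + X2 ~ Poisson(λ1 + λ2).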
Sum of Independent Random Variables
Sum of independent exponential r.v.s with the same mean is a gamma r.v.
To see this, use the MGF technique (whiteboard presentation).
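A brief sketch of the standard argument (not transcribed from the whiteboard): if X1, ..., Xn are independent exponential r.v.s with common mean θ, then
$$M_{X_i}(t) = (1 - \theta t)^{-1} \ \text{for } t < 1/\theta, \qquad M_{X_1 + \cdots + X_n}(t) = (1 - \theta t)^{-n},$$
the MGF of a gamma r.v. with parameters (n, θ); by uniqueness, the sum is Gamma(n, θ).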
Functions of Normal R.V.s
In Example 5a.3, we have shown that if Z is a standard normal
r.v., then X = Z2 follows a chi-squared distribution with 1 d.f.,
which is seen to be a special gamma r.v.
We have also shown using the MGF technique that the sum of
two independent normal r.v.s is again normally distributed.
There are many other functions of normal r.v.s whose distributions are of interest, particularly in the context of statistical inference: functions of a random sample drawn from a normal population, or functions of two random samples drawn from two independent normal populations, are needed for the purpose of drawing statistical inferences about the normal populations.
We gather these under the general topic "Sampling Distribution", with details presented in Chapter 5b.