
Probability and Statistics

Lecture 5
Dr.-Ing. Erwin Sitompul
President University
http://zitompul.wordpress.com
2013

Chapter 4.2 Variance and Covariance

Variance and Covariance

  The mean or expected value of a random variable X is important because it describes the center of the probability distribution.

However, the mean does not give an adequate description of the shape and variability of the distribution.

[Figure: Distributions with equal means but different dispersions (variability)]

The most important measure of variability of a random variable X is obtained by letting g(X) = (X − μ)².

This variability measure is referred to as the variance of the random variable X.

Chapter 4.2 Variance and Covariance

Variance and Covariance

Let X be a random variable with probability distribution f(x) and mean μ. The variance of X is

σ² = E[(X − μ)²] = Σ_x (x − μ)² f(x),   if X is discrete, and

σ² = E[(X − μ)²] = ∫_{−∞}^{∞} (x − μ)² f(x) dx,   if X is continuous.

The positive square root of the variance, σ, is called the standard deviation of X.
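As a quick illustration of the discrete case, here is a minimal Python sketch; the distribution values below are illustrative, not taken from the slides.

```python
# Variance of a discrete random variable: sigma^2 = sum_x (x - mu)^2 f(x).
# The distribution below is illustrative only.
f = {0: 0.2, 1: 0.5, 2: 0.3}                        # x -> f(x), probabilities sum to 1
mu = sum(x * p for x, p in f.items())               # mean E(X)
var = sum((x - mu) ** 2 * p for x, p in f.items())  # variance
sigma = var ** 0.5                                  # standard deviation
print(mu, var, sigma)                               # 1.1 0.49 0.7
```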

Chapter 4.2 Variance and Covariance

Variance and Covariance

Let the random variable X represent the number of cars that are used for official business purposes on any given workday. The probability distributions for companies A and B are

Company A:
x      1    2    3
f(x)   0.3  0.4  0.3

Company B:
x      0    1    2    3    4
f(x)   0.2  0.1  0.3  0.3  0.1

Show that the variance of the probability distribution for company B is greater than that of company A.

For company A: μ_A = E(X) = (1)(0.3) + (2)(0.4) + (3)(0.3) = 2.0
σ_A² = Σ_x (x − 2)² f(x) = (1 − 2)²(0.3) + (2 − 2)²(0.4) + (3 − 2)²(0.3) = 0.6

For company B: μ_B = E(X) = (0)(0.2) + (1)(0.1) + (2)(0.3) + (3)(0.3) + (4)(0.1) = 2.0
σ_B² = Σ_x (x − 2)² f(x) = (0 − 2)²(0.2) + (1 − 2)²(0.1) + (2 − 2)²(0.3) + (3 − 2)²(0.3) + (4 − 2)²(0.1) = 1.6

Clearly, the variance of the number of cars that are used for official business purposes is greater for company B than for company A.

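A short Python check of the two calculations above (a sketch; the distributions are the ones on this slide):

```python
# Mean and variance of the company A and company B distributions above.
def mean_var(dist):
    mu = sum(x * p for x, p in dist.items())
    return mu, sum((x - mu) ** 2 * p for x, p in dist.items())

company_a = {1: 0.3, 2: 0.4, 3: 0.3}
company_b = {0: 0.2, 1: 0.1, 2: 0.3, 3: 0.3, 4: 0.1}
print(mean_var(company_a))   # (2.0, 0.6)  up to floating-point rounding
print(mean_var(company_b))   # (2.0, 1.6)
```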

Chapter 4.2 Variance and Covariance

Variance and Covariance

The variance of a random variable X is also given by

σ² = E(X²) − μ²

Let the random variable X represent the number of defective parts for a machine when 3 parts are sampled from a production line and tested. The following is the probability distribution of X:

x      0     1     2     3
f(x)   0.51  0.38  0.10  0.01

Calculate the variance σ².

μ = E(X) = (0)(0.51) + (1)(0.38) + (2)(0.10) + (3)(0.01) = 0.61
E(X²) = (0)²(0.51) + (1)²(0.38) + (2)²(0.10) + (3)²(0.01) = 0.87
σ² = E(X²) − μ² = 0.87 − (0.61)² = 0.4979
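The same identity checked in Python for the defective-parts distribution above (a sketch):

```python
# Check of sigma^2 = E(X^2) - mu^2 for the defective-parts distribution above.
f = {0: 0.51, 1: 0.38, 2: 0.10, 3: 0.01}
mu = sum(x * p for x, p in f.items())          # 0.61
ex2 = sum(x * x * p for x, p in f.items())     # 0.87
print(ex2 - mu ** 2)                           # 0.4979
```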

Chapter 4.2 Variance and Covariance

Variance and Covariance

The weekly demand for a drinking-water product, in thousands of liters, from a local chain of efficiency stores is a continuous random variable X having the probability density

f(x) = 2(x − 1),  1 < x < 2;   f(x) = 0 elsewhere.

Find the mean and variance of X.

μ = E(X) = ∫_1^2 x · 2(x − 1) dx = [2x³/3 − x²]_1^2 = 5/3
E(X²) = ∫_1^2 x² · 2(x − 1) dx = [x⁴/2 − 2x³/3]_1^2 = 17/6
σ² = E(X²) − μ² = 17/6 − (5/3)² = 1/18

Chapter 4.2 Variance and Covariance

Variance and Covariance

Let X be a random variable with probability distribution f(x). The variance of the random variable g(X) is

σ²_{g(X)} = E{[g(X) − μ_{g(X)}]²} = Σ_x [g(x) − μ_{g(X)}]² f(x),   if X is discrete, and

σ²_{g(X)} = E{[g(X) − μ_{g(X)}]²} = ∫_{−∞}^{∞} [g(x) − μ_{g(X)}]² f(x) dx,   if X is continuous.

Chapter 4.2 Variance and Covariance

Variance and Covariance

Calculate the variance of g(X) = 2X + 3, where X is a random variable with probability distribution given as

x      0    1    2    3
f(x)   1/4  1/8  1/2  1/8

First, μ_{2X+3} = E(2X + 3) = Σ_x (2x + 3) f(x) = (3)(1/4) + (5)(1/8) + (7)(1/2) + (9)(1/8) = 6

Then,
σ²_{2X+3} = E{[(2X + 3) − μ_{2X+3}]²} = E[(2X + 3 − 6)²] = E[4X² − 12X + 9]
= Σ_x (4x² − 12x + 9) f(x) = (9)(1/4) + (1)(1/8) + (1)(1/2) + (9)(1/8) = 4

Chapter 4.2 Variance and Covariance

Variance and Covariance

Let X be a random variable with density function

f(x) = x²/3,  −1 < x < 2;   f(x) = 0 elsewhere.

Find the variance of the random variable g(X) = 4X + 3 if it is known that the expected value of g(X) = 8.

σ²_{4X+3} = E{[(4X + 3) − 8]²} = E[(4X − 5)²] = E[16X² − 40X + 25]
= ∫_{−1}^{2} (16x² − 40x + 25)(x²/3) dx = (1/3) ∫_{−1}^{2} (16x⁴ − 40x³ + 25x²) dx
= (1/3) [16x⁵/5 − 10x⁴ + 25x³/3]_{−1}^{2} = (1/3) [136/15 − (−323/15)] = 459/45 = 51/5
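A numerical check of this result (a sketch using the midpoint rule):

```python
# Midpoint-rule check of E(4X + 3) and Var(4X + 3) for f(x) = x^2 / 3 on -1 < x < 2.
N = 100_000
a, b = -1.0, 2.0
dx = (b - a) / N
mu_g = ex_g2 = 0.0
for i in range(N):
    x = a + (i + 0.5) * dx
    w = (x * x / 3) * dx          # f(x) dx
    g = 4 * x + 3
    mu_g += g * w
    ex_g2 += g * g * w
print(mu_g, ex_g2 - mu_g ** 2)    # ~8 and ~10.2 (= 51/5)
```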

Chapter 4.2 Variance and Covariance

Variance and Covariance

Let X and Y be random variables with joint probability distribution f(x, y). The covariance of the random variables X and Y is

σ_XY = E[(X − μ_X)(Y − μ_Y)] = Σ_x Σ_y (x − μ_X)(y − μ_Y) f(x, y),   if X and Y are discrete, and

σ_XY = E[(X − μ_X)(Y − μ_Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − μ_X)(y − μ_Y) f(x, y) dx dy,   if X and Y are continuous.

σ_XY > 0: positive correlation
σ_XY < 0: negative correlation

Chapter 4.2 Variance and Covariance

Variance and Covariance

The covariance of two random variables X and Y with means μ_X and μ_Y, respectively, is given by

σ_XY = E(XY) − μ_X μ_Y

Chapter 4.2 Variance and Covariance

Variance and Covariance

Referring back again to the “ballpoint pens” example (see again Lecture 4), find the covariance of X and Y.

μ_X = E(X) = Σ_x x g(x) = (0)(5/14) + (1)(15/28) + (2)(3/28) = 3/4
μ_Y = E(Y) = Σ_y y h(y) = (0)(15/28) + (1)(3/7) + (2)(1/28) = 1/2
E(XY) = Σ_x Σ_y x y f(x, y) = 3/14
σ_XY = E(XY) − μ_X μ_Y = 3/14 − (3/4)(1/2) = −9/56
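A Python check of this covariance (a sketch; the joint probabilities f(x, y) are assumed from the Lecture 4 ballpoint-pens example, since only the marginals appear on this slide):

```python
# Covariance sigma_XY = E(XY) - mu_X mu_Y from a discrete joint distribution.
# Joint probabilities assumed from the Lecture 4 ballpoint-pens example.
from fractions import Fraction as F

f = {(0, 0): F(3, 28), (1, 0): F(9, 28), (2, 0): F(3, 28),
     (0, 1): F(3, 14), (1, 1): F(3, 14), (0, 2): F(1, 28)}
mu_x = sum(x * p for (x, y), p in f.items())       # 3/4
mu_y = sum(y * p for (x, y), p in f.items())       # 1/2
e_xy = sum(x * y * p for (x, y), p in f.items())   # 3/14
print(e_xy - mu_x * mu_y)                          # -9/56
```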

Chapter 4.2 Variance and Covariance

Variance and Covariance

The fraction X of male runners and the fraction Y of female runners who compete in marathon races is described by the joint density function

f(x, y) = 8xy,  0 ≤ y ≤ x ≤ 1;   f(x, y) = 0 elsewhere.

Find the covariance of X and Y.

The marginal densities are g(x) = 4x³, 0 < x < 1 (0 elsewhere), and h(y) = 4y(1 − y²), 0 < y < 1 (0 elsewhere).

μ_X = E(X) = ∫_0^1 x · 4x³ dx = 4/5
μ_Y = E(Y) = ∫_0^1 y · 4y(1 − y²) dy = 8/15
E(XY) = ∫_0^1 ∫_y^1 8x²y² dx dy = 4/9
σ_XY = E(XY) − μ_X μ_Y = 4/9 − (4/5)(8/15) = 4/225

Chapter 4.2 Variance and Covariance

Variance and Covariance

Although the covariance between two random variables does provide information regarding the nature of their relationship, the magnitude of σ_XY does not indicate anything regarding the strength of the relationship, since σ_XY is not scale-free.

This means that its magnitude will depend on the units in which both X and Y are measured.

There is a scale-free version of the covariance, called the correlation coefficient, that is used widely in statistics.

Let X and Y be random variables with covariance σ_XY and standard deviations σ_X and σ_Y, respectively. The correlation coefficient of X and Y is

ρ_XY = σ_XY / (σ_X σ_Y)
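A small simulation illustrating why the correlation coefficient is the scale-free version of the covariance (a sketch; the data are synthetic):

```python
# Rescaling X (e.g., changing its units) rescales the covariance but leaves
# the correlation coefficient unchanged. Synthetic data for illustration only.
import random

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(50_000)]
ys = [x + random.gauss(0, 1) for x in xs]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

def corr(a, b):
    return cov(a, b) / (cov(a, a) ** 0.5 * cov(b, b) ** 0.5)

xs_scaled = [100 * x for x in xs]            # same quantity in different units
print(cov(xs, ys), cov(xs_scaled, ys))       # covariance changes by the factor 100
print(corr(xs, ys), corr(xs_scaled, ys))     # correlation stays ~0.707 in both cases
```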

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

If a and b are constants, then

E(aX + b) = aE(X) + b

Applying the theorem to the discrete random variable g(X) = 2X − 1, rework the carwash example (see again Lecture 4).

E(X) = Σ_x x f(x) = (4)(1/12) + (5)(1/12) + (6)(1/4) + (7)(1/4) + (8)(1/6) + (9)(1/6) = 41/6

E(2X − 1) = 2E(X) − 1 = 2(41/6) − 1 = 38/3 ≈ $12.67
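A Python check of E(2X − 1) = 2E(X) − 1 (a sketch; the f(x) values are those of the carwash distribution shown above):

```python
# E(aX + b) = a E(X) + b, checked on the carwash distribution with exact fractions.
from fractions import Fraction as F

f = {4: F(1, 12), 5: F(1, 12), 6: F(1, 4), 7: F(1, 4), 8: F(1, 6), 9: F(1, 6)}
ex = sum(x * p for x, p in f.items())                 # 41/6
print(sum((2 * x - 1) * p for x, p in f.items()))     # 38/3 (about 12.67)
print(2 * ex - 1)                                     # 38/3 again, by the theorem
```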

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

Let X be a random variable with density function

f(x) = x²/3,  −1 < x < 2;   f(x) = 0 elsewhere.

Find the expected value of g(X) = 4X + 3 by using the theorem presented earlier.

E(X) = ∫_{−1}^{2} x · (x²/3) dx = ∫_{−1}^{2} (x³/3) dx = 5/4
E(4X + 3) = 4E(X) + 3 = 4(5/4) + 3 = 8
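A numerical check of the same linearity property for this density (a sketch, midpoint rule):

```python
# E(4X + 3) = 4 E(X) + 3 for f(x) = x^2 / 3 on -1 < x < 2.
N = 100_000
dx = 3.0 / N
ex = sum((-1 + (i + 0.5) * dx) ** 3 / 3 * dx for i in range(N))  # E(X) = int x * x^2/3 dx
print(ex, 4 * ex + 3)                                            # ~1.25 (= 5/4) and ~8
```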

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

The expected value of the sum or difference of two or more functions of a random variable X is the sum or difference of the expected values of the functions. That is,

E[g(X) ± h(X)] = E[g(X)] ± E[h(X)]

Let X be a random variable with probability distribution as given next. Find the expected value of Y = (X − 1)².

x      0    1    2    3
f(x)   1/3  1/2  0    1/6

E[(X − 1)²] = E(X² − 2X + 1) = E(X²) − 2E(X) + E(1)
E(X) = (0)(1/3) + (1)(1/2) + (2)(0) + (3)(1/6) = 1
E(X²) = (0)(1/3) + (1)(1/2) + (4)(0) + (9)(1/6) = 2
E[(X − 1)²] = 2 − (2)(1) + 1 = 1
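A Python check of this expansion (a sketch; the f(x) values are those reconstructed in the table above):

```python
# E[(X - 1)^2] computed directly and via E(X^2) - 2E(X) + 1, with exact fractions.
from fractions import Fraction as F

f = {0: F(1, 3), 1: F(1, 2), 2: F(0), 3: F(1, 6)}
ex = sum(x * p for x, p in f.items())                 # 1
ex2 = sum(x * x * p for x, p in f.items())            # 2
print(ex2 - 2 * ex + 1)                               # 1
print(sum((x - 1) ** 2 * p for x, p in f.items()))    # 1, directly
```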

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

The weekly demand for a certain drink, in thousands of liters, at a chain of convenience stores is a continuous random variable g(X) = X² + X − 2, where X has the density function

f(x) = 2(x − 1),  1 < x < 2;   f(x) = 0 elsewhere.

Find the expected value for the weekly demand of the drink.

E(X² + X − 2) = E(X²) + E(X) − E(2)
E(X²) = ∫_1^2 x² · 2(x − 1) dx = 17/6,   E(X) = ∫_1^2 x · 2(x − 1) dx = 5/3
E(X² + X − 2) = 17/6 + 5/3 − 2 = 5/2
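A numerical check of the expected demand (a sketch, midpoint rule):

```python
# E(X^2 + X - 2) for f(x) = 2(x - 1) on 1 < x < 2.
N = 100_000
dx = 1.0 / N
total = 0.0
for i in range(N):
    x = 1 + (i + 0.5) * dx
    total += (x * x + x - 2) * 2 * (x - 1) * dx
print(total)    # ~2.5 (= 5/2), i.e. about 2500 liters per week
```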

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

The expected value of the sum or difference of two or more functions of random variables X and Y is the sum or difference of the expected values of the functions. That is,

E[g(X, Y) ± h(X, Y)] = E[g(X, Y)] ± E[h(X, Y)]

Let X and Y be two independent random variables. Then

E(XY) = E(X)E(Y)

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

In producing gallium-arsenide microchips, it is known that the ratio between gallium and arsenide is independent of producing a high percentage of workable wafers, which are the main components of microchips. Let X denote the ratio of gallium to arsenide and Y denote the percentage of workable microwafers retrieved during a 1-hour period. X and Y are independent random variables with the joint density known to be

f(x, y) = x(1 + 3y²)/4,  0 < x < 2, 0 < y < 1;   f(x, y) = 0 elsewhere.

Illustrate that E(XY) = E(X)E(Y).

E(XY) = ∫_0^1 ∫_0^2 xy · x(1 + 3y²)/4 dx dy = ∫_0^1 [x³y(1 + 3y²)/12]_{x=0}^{x=2} dy
= ∫_0^1 2y(1 + 3y²)/3 dy = 5/6

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

 1 2  0 0  1 0 

x

3 12  1 2  0 0

x

2

y

2 )

x

 2

dy x

 0  0 1  4

y

2 )

dxdy

3

y

2 )

dy

 4 3  1 2  0 0  1 0  2

x y

8

y

2 )

x

 2  1 2  0 0

x

 0

dy xy

 1 0 

y

4

y

2 )

dxdy

2

y

2 )

dy

 5 8 Hence, it is proven that ) 6 4 3 5      President University Erwin Sitompul PBST 5/21

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

If a and b are constants, then

σ²_{aX+b} = a² σ²_X = a² σ²

If X and Y are random variables with joint probability distribution f(x, y), then

σ²_{aX+bY} = a² σ²_X + b² σ²_Y + 2ab σ_XY
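A small simulation illustrating the second identity (a sketch; the data are synthetic, chosen only so that X and Y are correlated):

```python
# Empirical check of Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y).
import random

random.seed(2)
xs = [random.gauss(0, 1) for _ in range(100_000)]
ys = [0.5 * x + random.gauss(0, 1) for x in xs]   # correlated with xs
a, b = 3.0, -4.0

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((p - mu) * (q - mv) for p, q in zip(u, v)) / len(u)

zs = [a * x + b * y for x, y in zip(xs, ys)]
lhs = cov(zs, zs)                                  # Var(aX + bY)
rhs = a * a * cov(xs, xs) + b * b * cov(ys, ys) + 2 * a * b * cov(xs, ys)
print(lhs, rhs)                                    # equal up to floating-point rounding
```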

Chapter 4.3 Means and Variances of Linear Combinations of Random Variables

Means of Linear Combinations of X

If X and Y are random variables with variances σ²_X = 2 and σ²_Y = 4 and covariance σ_XY = −2, find the variance of the random variable Z = 3X − 4Y + 8.

σ²_Z = σ²_{3X−4Y+8} = σ²_{3X−4Y} = 9σ²_X + 16σ²_Y − 24σ_XY = (9)(2) + (16)(4) − (24)(−2) = 130

Let X and Y denote the amounts of two different types of impurities in a batch of a certain chemical product, and suppose that X and Y are independent random variables with variances σ²_X = 2 and σ²_Y = 3. Find the variance of the random variable Z = 3X − 2Y + 5.

σ²_Z = σ²_{3X−2Y+5} = σ²_{3X−2Y} = 9σ²_X + 4σ²_Y = (9)(2) + (4)(3) = 30
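A quick Python check of both results (a sketch; it simply evaluates the identity with the given variances and covariance):

```python
# Var(aX + bY + c) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y); the constant c drops out.
def var_linear(a, b, var_x, var_y, cov_xy):
    return a * a * var_x + b * b * var_y + 2 * a * b * cov_xy

print(var_linear(3, -4, 2, 4, -2))   # 130, for Z = 3X - 4Y + 8
print(var_linear(3, -2, 2, 3, 0))    # 30, for Z = 3X - 2Y + 5 with X, Y independent
```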

Chapter 4.4 Chebyshev’s Theorem

Chebyshev’s Theorem

As we already discussed, the variance of a random variable tells us something about the variability of the observations about the mean.

If a variable has a small variance or standard deviation, we would expect most of the values to be grouped around the mean.

The probability that a random variable assumes a value within a certain interval about the mean is greater in this case.

If we think of probability in terms of area, we would expect a continuous distribution with a small standard deviation to have most of its area close to μ.

[Figure: Variability of continuous observations about the mean]

Chapter 4.4 Chebyshev’s Theorem

Chebyshev’s Theorem

We can argue the same way for a discrete distribution. A greater spread of the area in the probability histogram indicates a more variable distribution of measurements or outcomes.

[Figure: Variability of discrete observations about the mean]

Chapter 4.4 Chebyshev’s Theorem

Chebyshev’s Theorem

A Russian mathematician, P. L. Chebyshev, discovered that the fraction of the area between any two values symmetric about the mean is related to the standard deviation.

Chebyshev's Theorem: The probability that any random variable X will assume a value within k standard deviations of the mean is at least 1 − 1/k². That is,

P(μ − kσ < X < μ + kσ) ≥ 1 − 1/k²

Chebyshev's Theorem holds for any distribution of observations, and for this reason the results are usually weak.

The value given by the theorem is a lower bound only. Exact probabilities can be determined only when the probability distribution is known.

The use of Chebyshev's Theorem is relegated to situations where the form of the distribution is unknown.

Chapter 4.4 Chebyshev’s Theorem

Chebyshev’s Theorem and Normal Distribution


Chapter 4.4 Chebyshev’s Theorem

Chebyshev’s Theorem

A random variable X has a mean μ = 8, a variance σ² = 9, and an unknown probability distribution. Find
(a) P(−4 < X < 20)
(b) P(|X − 8| ≥ 6)

(a) P(−4 < X < 20) = P[8 − (4)(3) < X < 8 + (4)(3)] ≥ 1 − 1/4² = 15/16

(b) P(|X − 8| ≥ 6) = 1 − P(|X − 8| < 6) = 1 − P(−6 < X − 8 < 6)
    = 1 − P[8 − (2)(3) < X < 8 + (2)(3)] ≤ 1 − (1 − 1/2²) = 1/4
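A small simulation illustrating that these are bounds only (a sketch; the Gamma distribution below is just one arbitrary choice with mean 8 and variance 9, since the example's distribution is unknown):

```python
# Chebyshev: P(mu - k*sigma < X < mu + k*sigma) >= 1 - 1/k^2 for ANY distribution.
# Here: one arbitrary distribution with mu = 8, sigma = 3 (a Gamma distribution).
import random

random.seed(3)
mu, sigma = 8.0, 3.0
xs = [random.gammavariate(64 / 9, 9 / 8) for _ in range(200_000)]  # mean 8, variance 9

for k in (2, 4):
    inside = sum(mu - k * sigma < x < mu + k * sigma for x in xs) / len(xs)
    print(k, inside, ">=", 1 - 1 / k ** 2)
# k = 4 corresponds to (a): P(-4 < X < 20) >= 15/16; the observed fraction is higher.
# k = 2 corresponds to (b): P(|X - 8| >= 6) <= 1/4; here 1 - inside is well below 1/4.
```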

Probability and Statistics

Homework 5A

1. For the joint probability distribution of the two random variables X and Y as given in the following figure, calculate the covariance of X and Y. (Mo.E5.27 p.0172)

2. The photoresist thickness in semiconductor manufacturing has a mean of 10 micrometers and a standard deviation of 1 micrometer. Bound the probability that the thickness is less than 6 or greater than 14 micrometers. (Mo.S5.25 p05.15)