
Chapter 13 Inference about Comparing Two Populations


13.1 Introduction

• In previous chapters we presented methods designed to make an inference about the characteristics of a single population. We estimated, for example, the population mean, or tested hypotheses about the value of the standard deviation.

• In the real world, however, we often need to study the relationship between two populations. For example:

– We want to compare the effects of a new drug on blood pressure, in which case we can test the relationship between the mean blood pressure of two groups of individuals: those who take the drug, and those who don't.

– We are interested in the effects a certain ad has on voters’ preferences as part of an election campaign. In this case we can estimate the difference in the proportion of voters who prefer one candidate before and after the ad is televised.


13.1 Introduction

• A variety of techniques is presented whose objective is to compare two populations.

• These techniques are designed to compare:

– two population means.

– two population variances.

– two proportions.

13.2 Inference about the Difference between Two Means: Independent Samples

• We'll look at the relationship between the two population means by analyzing the value of μ1 − μ2.

• Two random samples are therefore drawn, one from each population, and the sample means x̄1 and x̄2 are calculated.

• The reason we look at the difference x̄1 − x̄2 is that its sampling distribution is normal, with mean μ1 − μ2. See the next slide for details.

The Sampling Distribution of x̄1 − x̄2

• x̄1 − x̄2 is normally distributed if the (original) population distributions are normal.

• x̄1 − x̄2 is approximately normally distributed if the (original) populations are not normal, but the sample sizes are sufficiently large (greater than 30).

• The expected value of x̄1 − x̄2 is μ1 − μ2.

• The variance of x̄1 − x̄2 is σ1²/n1 + σ2²/n2.

Making an inference about μ1 − μ2

• The statistic is
  Z = [(x̄1 − x̄2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2)

• If x̄1 − x̄2 is normal, then Z is standard normal. So…

• Z can be used to build a confidence interval or test a hypothesis about μ1 − μ2. See next.

Making an inference about μ1 − μ2

• Practically, the "Z" statistic is hardly used, because the population variances are not known.

• Instead, we construct a "t" statistic, replacing the unknown σ1² and σ2² with the sample variances s1² and s2²:
  t = [(x̄1 − x̄2) − (μ1 − μ2)] / √(s1²/n1 + s2²/n2)  (the exact form depends on the case; see next)

Making an inference about μ1 − μ2

• Two cases are considered when producing the t-statistic:

– The two unknown population variances are equal.

– The two unknown population variances are not equal.

Inference about μ1 − μ2: Equal variances

• If the two variances σ1² and σ2² are equal to one another, then s1² and s2² estimate the same value.

• Therefore, we can pool the two sample variances and provide a better estimate of the common population variance, based on a larger amount of information.

• This is done by forming the pooled variance estimate. See next.

Inference about μ1 − μ2: Equal variances

• Calculate the pooled variance estimate by:
  sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2)

To get some intuition about this pooled estimate, note that we can rewrite it as
  sp² = [(n1 − 1)/(n1 + n2 − 2)]·s1² + [(n2 − 1)/(n1 + n2 − 2)]·s2²,
which has the form of a weighted average of the two sample variances. The weights are the relative sample sizes: a larger sample provides a larger weight and thus influences the pooled estimate more. (It might be easier to ignore the '−1' and '−2' terms in the formula in order to see the structure.)

Inference about μ1 − μ2: Equal variances

• Calculate the pooled variance estimate by:
  sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2)

Example: s1² = 25, s2² = 30, n1 = 10, n2 = 15. Then
  sp² = [(10 − 1)(25) + (15 − 1)(30)] / (10 + 15 − 2) = 28.04347
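The pooled-variance arithmetic above can be sketched in plain Python (a minimal sketch, not part of the original slides; the numbers are the slide's example):

```python
def pooled_variance(s2_1, n1, s2_2, n2):
    """Weighted average of two sample variances; weights are the relative d.f."""
    return ((n1 - 1) * s2_1 + (n2 - 1) * s2_2) / (n1 + n2 - 2)

# Slide example: s1^2 = 25, s2^2 = 30, n1 = 10, n2 = 15
sp2 = pooled_variance(25, 10, 30, 15)
print(round(sp2, 5))  # 28.04348
```

Note that when n1 = n2 the weights are equal, and sp² is simply the average of the two sample variances.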

Inference about μ1 − μ2: Equal variances

• Construct the t-statistic as follows:
  t = [(x̄1 − x̄2) − (μ1 − μ2)] / √(sp²(1/n1 + 1/n2)),  d.f. = n1 + n2 − 2

Note how sp² replaces both s1² and s2² in
  t = [(x̄1 − x̄2) − (μ1 − μ2)] / √(sp²/n1 + sp²/n2)

Inference about μ1 − μ2: Unequal variances

• Since σ1² ≠ σ2², we can't produce a single estimate for both variances.

• Thus we use the sample variances in the 't' formula:
  t = [(x̄1 − x̄2) − (μ1 − μ2)] / √(s1²/n1 + s2²/n2)

with degrees of freedom
  d.f. = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1) ]

Which case to use: Equal variances or unequal variances?

• Whenever there is insufficient evidence that the variances are unequal, it is preferable to run the equal-variances t-test.

• This is so because, for any two given samples,
  (d.f. for the equal-variances case) ≥ (d.f. for the unequal-variances case)

Example: Making an inference about μ1 − μ2

• Example 1
– Do people who eat high-fiber cereal for breakfast consume, on average, fewer calories at lunch than people who do not eat high-fiber cereal for breakfast?

– A sample of 150 people was randomly drawn. Each person was identified as a consumer or a non-consumer of high-fiber cereal.

– For each person the number of calories consumed at lunch was recorded.

Example: Making an inference about μ1 − μ2

Calories consumed at lunch (partial data):
  Consumers: 568, 705, 498, 589, 681, 540, 646, 636, 739, 539, 596, 607, 529, 637, 617, 633, 555, …
  Non-consumers: 819, 706, 509, 613, 582, 601, 608, 787, 573, 428, 754, 741, 628, 537, 748, …

Solution:

• The data are quantitative.

• The parameter to be tested is the difference between two means.

• The claim to be tested is: the mean caloric intake of consumers (μ1) is less than that of non-consumers (μ2).

Example: Making an inference about μ1 − μ2

• The hypotheses are:
  H0: μ1 − μ2 = 0
  H1: μ1 − μ2 < 0
  μ1 = mean caloric intake for fiber consumers
  μ2 = mean caloric intake for fiber non-consumers

– To check the relationship between the variances, we use computer output (Xm13-1.xlsx) to find the sample variances. From the data we have s1² = 4103 and s2² = 10,670. It appears that the variances are unequal.

Example: Making an inference about μ1 − μ2

• Solving by hand
– From the data we have:
  x̄1 = 604.02, x̄2 = 633.23, s1² = 4,103, s2² = 10,670

– The degrees of freedom (unequal variances):
  d.f. = (4103/43 + 10670/107)² / [ (4103/43)²/(43 − 1) + (10670/107)²/(107 − 1) ] = 122.6 ≈ 123

Example: Making an inference about μ1 − μ2

• Solving by hand
– H1: μ1 − μ2 < 0. The rejection region is t < −t(α, d.f.) = −t.05,123 = −1.658

– The test statistic:
  t = [(x̄1 − x̄2) − (μ1 − μ2)] / √(s1²/n1 + s2²/n2)
    = (604.02 − 633.23 − 0) / √(4103/43 + 10670/107) = −2.09
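The unequal-variances hand calculation can be reproduced in plain Python (a sketch, stdlib only; the summary statistics are Example 1's, with x̄1 = 604.02 taken from the Excel output):

```python
import math

def unequal_var_t(xbar1, s2_1, n1, xbar2, s2_2, n2, d0=0.0):
    """t statistic and degrees of freedom for the unequal-variances case."""
    v1, v2 = s2_1 / n1, s2_2 / n2
    t = (xbar1 - xbar2 - d0) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Example 1: consumers (n=43) vs. non-consumers (n=107) of high-fiber cereal
t, df = unequal_var_t(604.02, 4103, 43, 633.23, 10670, 107)
print(round(t, 2), round(df, 1))  # -2.09 122.6
```

Since −2.09 < −1.658, the statistic falls in the rejection region, matching the slide's conclusion.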

Example: Making an inference about μ1 − μ2

t-Test: Two-Sample Assuming Unequal Variances
                              Consumers   Non-consumers
Mean                          604.023     633.234
Variance                      4102.98     10669.8
Observations                  43          107
df                            123
t Stat                        -2.09107
P(T<=t) one-tail              0.01929
t Critical one-tail           1.65734
P(T<=t) two-tail              0.03858
t Critical two-tail           1.97944

Conclusion: At the 5% significance level there is sufficient evidence to reject the null hypothesis, and argue that μ1 < μ2.

The p-value approach: .01929 < .05
The rejection-region approach: −2.09107 < −1.65734

Example: Making an inference about μ1 − μ2

• Solving by hand
The confidence interval estimator for the difference between two means when the variances are unequal is
  (x̄1 − x̄2) ± t(α/2) √(s1²/n1 + s2²/n2)
  = (604.02 − 633.23) ± 1.9796 √(4103/43 + 10670/107)
  = −29.21 ± 27.65 = [−56.86, −1.56]

Example: Making an inference about μ1 − μ2

Note that the confidence interval for the difference between the two means falls entirely in the negative region: [−56.86, −1.56]. Even at best the difference between the two means is μ1 − μ2 = −1.56, so we can be 95% confident μ1 is smaller than μ2!

This conclusion agrees with the results of the test performed before.
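The interval above can be checked with a short sketch (the critical value t.025,123 = 1.9796 is taken from the slide's table rather than computed):

```python
import math

# Example 1 summary statistics
xbar1, s2_1, n1 = 604.02, 4103, 43
xbar2, s2_2, n2 = 633.23, 10670, 107
t_crit = 1.9796  # t(.025, 123), from the slide

half_width = t_crit * math.sqrt(s2_1 / n1 + s2_2 / n2)
diff = xbar1 - xbar2
print(round(diff - half_width, 2), round(diff + half_width, 2))  # -56.86 -1.56
```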

Example: Making an inference about μ1 − μ2

• Example 2
– An ergonomic chair can be assembled using two different sets of operations (Method A and Method B).

– The operations manager would like to know whether the assembly times under the two methods differ.

Example: Making an inference about μ1 − μ2

• Example 2 (continued)
– Two samples are randomly and independently selected:
• A sample of 25 workers assembled the chair using design A.
• A sample of 25 workers assembled the chair using design B.
• The assembly times were recorded.

– Do the assembly times of the two methods differ?

Example: Making an inference about μ1 − μ2

Assembly times in minutes (two columns of 25 observations each, Design A and Design B; see Xm13-02).

Solution:

• The data are quantitative.

• The parameter of interest is the difference between two population means.

• The claim to be tested is whether a difference between the two designs exists.

Example: Making an inference about μ1 − μ2

• Solving by hand
– The hypothesis test is:
  H0: μ1 − μ2 = 0
  H1: μ1 − μ2 ≠ 0
Since we ask whether or not the assembly times are the same on average, the alternative hypothesis is two-sided (μ1 ≠ μ2).

– To check the relationship between the two variances we run the F test (Xm13-02). The p-value of the F test = 2(.1496) = .299.
Conclusion: σ1² and σ2² appear to be equal.

Example: Making an inference about μ1 − μ2

• Solving by hand
– To calculate the t-statistic we have:
  x̄1 = 6.288, x̄2 = 6.016, s1² = 0.8478, s2² = 1.3031

  sp² = [(25 − 1)(0.848) + (25 − 1)(1.303)] / (25 + 25 − 2) = 1.076

  t = (6.288 − 6.016 − 0) / √(1.076·(1/25 + 1/25)) = 0.93

  d.f. = 25 + 25 − 2 = 48
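The equal-variances calculation above can be verified with a short Python sketch (stdlib only; the summary statistics are Example 2's):

```python
import math

def pooled_t(xbar1, s2_1, n1, xbar2, s2_2, n2, d0=0.0):
    """Equal-variances t statistic with pooled variance; d.f. = n1 + n2 - 2."""
    sp2 = ((n1 - 1) * s2_1 + (n2 - 1) * s2_2) / (n1 + n2 - 2)
    t = (xbar1 - xbar2 - d0) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, sp2, n1 + n2 - 2

t, sp2, df = pooled_t(6.288, 0.8478, 25, 6.016, 1.3031, 25)
print(round(sp2, 3), round(t, 2), df)  # 1.075 0.93 48
```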

Example: Making an inference about μ1 − μ2

• The two-tail rejection region, for α = 0.05, is
  t < −t(α/2, 48) = −t.025,48 = −2.009  or  t > t.025,48 = 2.009

• The test: since −2.009 < 0.93 < 2.009, there is insufficient evidence to reject the null hypothesis.

[Rejection regions: t < −2.009 and t > 2.009; the observed t = 0.93 lies between them.]

Example: Making an inference about μ1 − μ2

t-Test: Two-Sample Assuming Equal Variances
                              Design-A      Design-B
Mean                          6.288         6.016
Variance                      0.847766667   1.3030667
Observations                  25            25
Pooled Variance               1.075416667
Hypothesized Mean Difference  0
df                            48
t Stat                        0.927332603
P(T<=t) one-tail              0.179196744
t Critical one-tail           1.677224191
P(T<=t) two-tail              0.358393488
t Critical two-tail           2.01063358

Conclusion: From this experiment, it is unclear at the 5% significance level whether the two assembly methods differ in terms of assembly time:
−2.0106 < .9273 < +2.0106
.35839 > .05

Example: Making an inference about μ1 − μ2

A 95% confidence interval for μ1 − μ2 when the two variances are equal is calculated as follows:
  (x̄1 − x̄2) ± t(α/2) √(sp²(1/n1 + 1/n2))
  = (6.288 − 6.016) ± 2.0106 √(1.075·(1/25 + 1/25))
  = 0.272 ± 0.5896 = [−0.3176, 0.8616]

Thus, at the 95% confidence level, −0.3176 < μ1 − μ2 < 0.8616.

Notice: "zero" is included in the confidence interval, and therefore the two mean values could be equal.

Checking the required conditions for the equal variances case (Example 2)

[Histograms of the Design A and Design B assembly times.] The data appear to be approximately normal.

13.4 Matched Pairs Experiment – Dependent Samples

• What is a matched pairs experiment?

• A matched pairs experiment is a sampling design in which every two observations share some characteristic. For example, suppose we are interested in increasing workers' productivity. We establish a compensation program and want to study its efficiency. We could select two groups of workers, measure productivity before and after the program is established, and run a test as we did before.

• But if we believe workers' age is a factor that may affect changes in productivity, we can divide the workers into different age groups, select a worker from each age group, and measure his or her productivity twice: once before and once after the program is established. Each two observations constitute a matched pair, and because they belong to the same age group they are not independent.

13.4 Matched Pairs Experiment – Dependent Samples

Why are matched pairs experiments needed?

The following example demonstrates a situation where a matched pairs experiment is the correct approach to testing the difference between two population means.

13.4 Matched Pairs Experiment

• Example 3
– To investigate the job offers obtained by MBA graduates, a study focusing on salaries was conducted.

– In particular, the salaries offered to finance majors were compared to those offered to marketing majors.

– Two random samples of 25 graduates in each discipline were selected, and the highest salary offer was recorded for each one.

– From the data, can we infer that finance majors obtain higher salary offers than marketing majors among MBAs?

13.4 Matched Pairs Experiment

• Solution
– Compare two populations of quantitative data.

– The parameter tested is μ1 − μ2.

– H0: μ1 − μ2 = 0
  H1: μ1 − μ2 > 0
  μ1 = the mean of the highest salary offered to Finance MBAs
  μ2 = the mean of the highest salary offered to Marketing MBAs

Highest salary offers (partial data):
  Finance: 61,228; 51,836; 20,620; 73,356; 84,186; …
  Marketing: 73,361; 36,956; 63,627; 71,069; 40,203; …

13.4 Matched Pairs Experiment

• Solution – continued
From Xm13-3.xls we have:
  x̄1 = 65,624, x̄2 = 60,423, s1² = 360,433,294, s2² = 262,228,559

t-Test: Two-Sample Assuming Equal Variances
                              Finance      Marketing
Mean                          65624        60423
Variance                      360433294    262228559
Observations                  25           25
Pooled Variance               311330926
Hypothesized Mean Difference  0
df                            48
t Stat                        1.04215119
P(T<=t) one-tail              0.15128114
t Critical one-tail           1.67722419
P(T<=t) two-tail              0.30256227
t Critical two-tail           2.01063358

• Let us assume equal variances. There is insufficient evidence to conclude that Finance MBAs are offered higher salaries than Marketing MBAs.

The effect of large sample variability

• Question
– The difference between the sample means is 65,624 − 60,423 = 5,201.

– So why could we not reject H0 in favor of H1?

The effect of large sample variability

• Answer:
– sp² is large (because the sample variances are large): sp² = 311,330,926.

– A large variance reduces the value of the t statistic, and this is why t does not fall in the rejection region:
  t = [(x̄1 − x̄2) − (μ1 − μ2)] / √(sp²(1/n1 + 1/n2))

Recall that rejection of H0 in this problem occurs when t is sufficiently large (t > tα). A large sp² reduces t, and therefore t does not fall in the rejection region.

The matched pairs experiment

• We are looking for a hypothesis formulation in which the variability of the two samples is reduced.

• By taking matched pairs of observations and testing the differences per pair, we achieve two goals:
– We still test μ1 − μ2 (see explanation next).
– The variability used to calculate the t-statistic is usually smaller (see explanation next).

The matched pairs experiment – Are we still testing μ1 − μ2?

• Yes. Note that the difference between the two means is equal to the mean difference of the pairs of observations.

A short example:
           Group 1   Group 2   Difference
           10        12        −2
           15        11        +4
  Mean1 = 12.5, Mean2 = 11.5
  Mean1 − Mean2 = 1; Mean of Differences = 1

The matched pairs experiment – Reducing the variability

The range of observations in sample A and the range of observations in sample B might markedly differ...

The matched pairs experiment – Reducing the variability

...but the differences between pairs of observations might have much smaller variability. [The range of the differences is much narrower, centered near 0.]

The matched pairs experiment

• Example 4 (Example 3 part II)
– It was suspected that salary offers were affected by students' GPA. Since GPAs were different, so were the salaries (which caused s1² and s2² to increase).

– To reduce this variability, the following procedure was used:
• 25 ranges of GPAs were predetermined.
• Students from each major were randomly selected, one from each GPA range.
• The highest salary offer for each student was recorded.

– From the data presented, can we conclude that Finance majors are offered higher salaries?

The matched pairs hypothesis test

• Solution (by hand)
– The parameter tested is μD (= μ1 − μ2).

– The hypotheses:
  H0: μD = 0
  H1: μD > 0

– The rejection region is t > t.05,25−1 = 1.711 (degrees of freedom = nD − 1).

– The t statistic:
  t = (x̄D − μD) / (sD/√n)

The matched pairs hypothesis test

• Solution (by hand) – continued
– From the data (Xm13-4.xls), calculate the difference for each pair (partial data):

  GPA Group   Finance   Marketing   Difference
  1           95171     89329       5842
  2           88009     92705       -4696
  3           98089     99205       -1116
  4           106322    99003       7319
  5           74566     74825       -259
  6           87089     77038       10051
  7           88664     78272       10392
  8           71200     59462       11738
  9           69367     51555       17812
  10          82618     81591       1027
  …           …         …           …

– Using Descriptive Statistics in Excel we get (for the differences):

  Mean                5064.52
  Standard Error      1329.3791
  Median              3285
  Mode                #N/A
  Standard Deviation  6646.8953
  Sample Variance     44181217
  Kurtosis            -0.659419
  Skewness            0.359681
  Range               23533
  Minimum             -5721
  Maximum             17812
  Sum                 126613
  Count               25

The matched pairs hypothesis test

• Solution (by hand) – continued
– From the data: x̄D = 5,065, sD = 6,647

– Calculate t:
  t = (x̄D − μD)/(sD/√n) = (5065 − 0)/(6647/√25) = 3.81

See conclusion later.
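The paired-differences test above can be sketched in plain Python (stdlib only; `paired_t` works from raw differences, while the final line reproduces the slide's summary-statistics calculation):

```python
import math

def paired_t(differences, mu_d=0.0):
    """t statistic for the mean of paired differences; d.f. = n - 1."""
    n = len(differences)
    xbar = sum(differences) / n
    s2 = sum((d - xbar) ** 2 for d in differences) / (n - 1)
    return (xbar - mu_d) / math.sqrt(s2 / n), n - 1

# Summary form used on the slide: x̄D = 5065, sD = 6647, n = 25
t = (5065 - 0) / (6647 / math.sqrt(25))
print(round(t, 2))  # 3.81
```

Since 3.81 > 1.711, the statistic falls in the rejection region, as the conclusion slide states.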

The matched pairs hypothesis test

Using Data Analysis in Excel:

t-Test: Paired Two Sample for Means
                              Finance    Marketing
Mean                          65438.2    60373.68
Variance                      4.45E+08   4.69E+08
Observations                  25         25
Pearson Correlation           0.952025
Hypothesized Mean Difference  0
df                            24
t Stat                        3.809688
P(T<=t) one-tail              0.000426
t Critical one-tail           1.710882
P(T<=t) two-tail              0.000851
t Critical two-tail           2.063898

Conclusion: There is sufficient evidence to infer, at the 5% significance level, that the Finance MBAs' highest salary offer is, on average, higher than that of the Marketing MBAs.

Recall: the rejection region is t > tα. Indeed, 3.809 > 1.7108; .000426 < .05.

The matched pairs mean difference estimation

Confidence interval estimator of μD:
  x̄D ± t(α/2, n−1)·sD/√n

Example 5: The 95% confidence interval of the mean difference in Example 4 is
  5065 ± 2.064·6647/√25 = 5,065 ± 2,744

The matched pairs mean difference estimation

Using Data Analysis Plus: first calculate the differences for each pair, then run the confidence interval procedure in Data Analysis Plus.

t-Estimate: Mean (Difference)
  Mean                5065
  Standard Deviation  6647
  LCL                 2321
  UCL                 7808

Checking the required conditions for the paired observations case

• The validity of the results depends on the normality of the differences.

[Histogram of the differences, bins from −3000 to 18000; the differences appear approximately normal.]

13.5 Inferences about the ratio of two variances

• In this section we draw inferences about the relationship between two population variances.

• This question is interesting because:
– Variances can be used to evaluate the consistency of processes.
– The relationship between the variances determines which technique is used to test the relationship between the mean values.

Parameter tested and statistic

• The parameter tested is σ1²/σ2².

• The statistic used is
  F = (s1²/σ1²) / (s2²/σ2²)

• The sampling distribution: [s1²/σ1²] / [s2²/σ2²] follows the F distribution with numerator d.f. = n1 − 1 and denominator d.f. = n2 − 1.

Parameter tested and statistic

– Our null hypothesis is always H0: σ1²/σ2² = 1.

– Under this null hypothesis the F statistic becomes F = s1²/s2².

Testing the ratio of two population variances

Example 6 (revisiting Example 1)

In order to test whether having a rich-in-fiber breakfast reduces the amount of caloric intake at lunch, we need to decide whether the variances are equal or not.

Calorie intake at lunch (partial data):
  Consumers: 568, 705, 498, 589, 681, 540, 646, 636, 739, 539, 596, 607, 529, 637, 617, 633, 555, …
  Non-consumers: 819, 706, 509, 613, 582, 601, 608, 787, 573, 428, 754, 741, 628, 537, 748, …

The hypotheses are:
  H0: σ1²/σ2² = 1
  H1: σ1²/σ2² ≠ 1

Testing the ratio of two population variances

• Solving by hand
– The rejection region is
  F > F(α/2, n1−1, n2−1)  or  F < 1/F(α/2, n2−1, n1−1)
  F.025,42,106 ≈ F.025,40,120 = 1.61;  1/F.025,106,42 ≈ 1/F.025,120,40 = .63
  Note: in the lower critical value, the numerator and the denominator degrees of freedom replace one another!

– The F statistic value is F = s1²/s2² = .3845.

– Conclusion: Because .3845 < .63 we can reject the null hypothesis in favor of the alternative hypothesis, and conclude that there is sufficient evidence in the data to argue, at the 5% significance level, that the variances of the two groups differ.
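The by-hand F test can be reproduced with a few lines of Python (a sketch; the critical values 1.61 and .63 are the slide's F-table lookups, not computed here):

```python
# F statistic for comparing two variances (Example 6's numbers)
s2_1, n1 = 4102.98, 43     # consumers
s2_2, n2 = 10669.77, 107   # non-consumers

F = s2_1 / s2_2
# Approximate critical values read from an F table on the slide
upper, lower = 1.61, 0.63
reject = F < lower or F > upper
print(round(F, 4), reject)  # 0.3845 True
```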

Testing the ratio of two population variances

Example 6 (revisiting Example 1), from Data Analysis (see Xm13-1):

The hypotheses are:
  H0: σ1²/σ2² = 1
  H1: σ1²/σ2² ≠ 1

F-Test Two-Sample for Variances
              Consumers     Non-consumers
Mean          604.0232558   633.2336449
Variance      4102.975637   10669.76565
Observations  43            107
df            42            106
F             0.384542245

Estimating the Ratio of Two Population Variances

• From the statistic F = [s1²/σ1²] / [s2²/σ2²] we can isolate σ1²/σ2² and build the following confidence interval:
  (s1²/s2²)·(1/F(α/2, ν1, ν2)) ≤ σ1²/σ2² ≤ (s1²/s2²)·F(α/2, ν2, ν1)
  where ν1 = n1 − 1 and ν2 = n2 − 1.

Estimating the Ratio of Two Population Variances

Example 7
– Determine the 95% confidence interval estimate of the ratio of the two population variances in Example 1.

– Solution
• We find F(α/2, ν1, ν2) = F.025,40,120 = 1.61 (approximately); F(α/2, ν2, ν1) = F.025,120,40 = 1.72 (approximately).
• LCL = (s1²/s2²)[1/F(α/2, ν1, ν2)] = (4102.98/10,669.77)[1/1.61] = .2388
• UCL = (s1²/s2²)[F(α/2, ν2, ν1)] = (4102.98/10,669.77)[1.72] = .6614
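Example 7's interval can be checked with a short sketch (the F-table values are the slide's approximations):

```python
# 95% CI for sigma1^2 / sigma2^2 (Example 7)
s2_1, s2_2 = 4102.98, 10669.77
# Approximate F-table values from the slide: F.025,40,120 and F.025,120,40
f_v1_v2, f_v2_v1 = 1.61, 1.72

ratio = s2_1 / s2_2
lcl = ratio / f_v1_v2
ucl = ratio * f_v2_v1
print(round(lcl, 4), round(ucl, 4))  # 0.2388 0.6614
```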

13.6 Inference about the difference between two population proportions

• In this section we deal with two populations whose data are nominal.

• For nominal data we compare the population proportions of the occurrence of a certain event.

• Examples:
– Comparing the effectiveness of a new drug vs. an old one
– Comparing market share before and after an advertising campaign
– Comparing defective rates between two machines

Parameter tested and statistic

• Parameter
– When the data are nominal, we can only count the occurrences of a certain event in the two populations and calculate proportions.
– The parameter tested is therefore p1 − p2.

• Statistic
– An unbiased estimator of p1 − p2 is p̂1 − p̂2 (the difference between the sample proportions).

Sampling distribution of p̂1 − p̂2

• Two random samples are drawn from two populations.

• The number of successes in each sample is recorded.

• The sample proportions are computed:
  Sample 1: sample size n1, number of successes x1, sample proportion p̂1 = x1/n1
  Sample 2: sample size n2, number of successes x2, sample proportion p̂2 = x2/n2

Sampling distribution of p̂1 − p̂2

• The statistic p̂1 − p̂2 is approximately normally distributed if n1p1, n1(1 − p1), n2p2, n2(1 − p2) are all equal to or greater than 5.

• The mean of p̂1 − p̂2 is p1 − p2.

• The variance of p̂1 − p̂2 is p1(1 − p1)/n1 + p2(1 − p2)/n2.

The z-statistic

  Z = [(p̂1 − p̂2) − (p1 − p2)] / √(p1(1 − p1)/n1 + p2(1 − p2)/n2)

Because p1 and p2 are unknown, we use their estimates instead. Thus,
  n1p̂1, n1(1 − p̂1), n2p̂2, n2(1 − p̂2)
should all be equal to or greater than 5.

Testing p1 − p2

• There are two cases to consider:

Case 1: H0: p1 − p2 = 0
Pool the samples and determine the pooled proportion:
  p̂ = (x1 + x2) / (n1 + n2)
Then:
  Z = (p̂1 − p̂2) / √(p̂(1 − p̂)(1/n1 + 1/n2))

Case 2: H0: p1 − p2 = D (D is not equal to 0)
Keep the sample proportions separate: p̂1 = x1/n1, p̂2 = x2/n2
Then:
  Z = [(p̂1 − p̂2) − D] / √(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2)

Testing p1 − p2 (Case 1)

• Example 8
– Management needs to decide which of two new packaging designs to adopt, to help improve sales of a certain soap.

– A study is performed in two communities:
• Design A is distributed in Community 1.
• Design B is distributed in Community 2.
• The old packaging design is still offered in both communities.

– Design A is more expensive; therefore, to be financially viable, it has to outsell design B.

Testing p1 − p2 (Case 1)

• Summary of the experiment results
– Community 1: 580 packages with new design A sold; 324 packages with the old design sold.
– Community 2: 604 packages with new design B sold; 442 packages with the old design sold.

– Use a 5% significance level and perform a test to find which type of packaging to use.

Testing p1 − p2 (Case 1)

• Solution
– The problem objective is to compare the populations of sales of the two packaging designs.

– The data are nominal (yes/no for the purchase of the new design per customer).

– The hypotheses to test are:
  H0: p1 − p2 = 0
  H1: p1 − p2 > 0

– We identify here Case 1.
  Population 1 – purchases of Design A
  Population 2 – purchases of Design B

Testing p1 − p2 (Case 1)

• Solving by hand
– For a 5% significance level the rejection region is z > zα = z.05 = 1.645.

– From Xm13-08.xls we have the sample proportions:
  p̂1 = 580/904 = .6416, p̂2 = 604/1046 = .5774

– The pooled proportion is
  p̂ = (x1 + x2)/(n1 + n2) = (580 + 604)/(904 + 1046) = .6072

– The z statistic becomes
  Z = (p̂1 − p̂2) / √(p̂(1 − p̂)(1/n1 + 1/n2))
    = (.6416 − .5774) / √(.6072(1 − .6072)(1/904 + 1/1046)) = 2.89
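The Case 1 z statistic above can be sketched in plain Python (stdlib only; numbers from Example 8):

```python
import math

def two_prop_z_case1(x1, n1, x2, n2):
    """z statistic for H0: p1 - p2 = 0, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z_case1(580, 904, 604, 1046)
print(round(z, 2))  # 2.89
```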

Testing p1 − p2 (Case 1)

• Conclusion: At the 5% significance level there is sufficient evidence to infer that the proportion of sales with design A is greater than the proportion of sales with design B (since 2.89 > 1.645).

Testing p1 − p2 (Case 1)

• Excel (Data Analysis Plus)

z-Test: Two Proportions
                          Community 1   Community 2
Sample proportions        0.6416        0.5774
Observations              904           1046
Hypothesized Difference   0
z Stat                    2.89
P(Z<=z) one-tail          0.0019
z Critical one-tail       1.6449
P(Z<=z) two-tail          0.0038
z Critical two-tail       1.96

• Conclusion: Since 2.89 > 1.645, there is sufficient evidence in the data to conclude, at the 5% significance level, that design A will outsell design B.

Testing p1 − p2 (Case 2)

• Example 9 (modifying Example 8)
– Management needs to decide which of two new packaging designs to adopt, to help improve sales of a certain soap.

– A study is performed in two communities:
• Design A is distributed in Community 1.
• Design B is distributed in Community 2.
• The old packaging design is still offered in both communities.

– For design A to be financially viable it has to outsell design B by at least 3%.

Testing p1 − p2 (Case 2)

• Summary of the experiment results
– Community 1: 580 packages with new design A sold; 324 packages with the old design sold.
– Community 2: 604 packages with new design B sold; 442 packages with the old design sold.

• Use a 5% significance level and perform a test to find which type of packaging to use.

Testing p1 − p2 (Case 2)

• Solution
– The hypotheses to test are:
  H0: p1 − p2 = .03
  H1: p1 − p2 > .03

– We identify Case 2 of the test for the difference in proportions (the hypothesized difference is not equal to zero).

Testing p1 − p2 (Case 2)

• Solving by hand
  Z = [(p̂1 − p̂2) − D] / √(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2)
    = [(580/904 − 604/1046) − .03] / √(.642(1 − .642)/904 + .577(1 − .577)/1046) = 1.58

The rejection region is z > z.05 = 1.645.

Conclusion: Since 1.58 < 1.645, do not reject the null hypothesis. There is insufficient evidence to infer that packaging with Design A will outsell that of Design B by 3% or more.
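The Case 2 calculation, where the hypothesized difference D = .03 keeps the proportions unpooled, can be sketched as:

```python
import math

def two_prop_z_case2(x1, n1, x2, n2, d):
    """z statistic for H0: p1 - p2 = d (d != 0); proportions kept separate."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2 - d) / se

z = two_prop_z_case2(580, 904, 604, 1046, 0.03)
print(round(z, 4))  # 1.5467
```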

Testing p1 − p2 (Case 2)

• Using Excel (Data Analysis Plus)

z-Test: Two Proportions
                          Community 1   Community 2
Sample Proportion         0.6416        0.5774
Observations              904           1046
Hypothesized Difference   0.03
z Stat                    1.5467
P(Z<=z) one-tail          0.061
z Critical one-tail       1.6449
P(Z<=z) two-tail          0.122
z Critical two-tail       1.96

Estimating p1 − p2

• Example (estimating the cost per life saved)
– Two drugs are used to treat heart attack victims:
• Streptokinase (available since 1959, costs $460)
• t-PA (genetically engineered, costs $2900)

– The maker of t-PA claims that its drug outperforms Streptokinase.

– An experiment was conducted in 15 countries:
• 20,500 patients were given t-PA
• 20,500 patients were given Streptokinase
• The number of deaths from heart attacks was recorded.

Estimating p1 − p2

• Experiment results
– A total of 1,497 patients treated with Streptokinase died.
– A total of 1,292 patients treated with t-PA died.

• Estimate the cost per life saved by using t-PA instead of Streptokinase.

Estimating p1 − p2

• Solution
– The problem objective: compare the outcomes of two treatments.

– The data are nominal (a patient lived/died).

– The parameter estimated is p1 − p2.
• p1 = death rate with Streptokinase
• p2 = death rate with t-PA

Estimating p1 − p2

• Solving by hand
– Sample proportions:
  p̂1 = 1497/20500 = .0730, p̂2 = 1292/20500 = .0630

– The 95% confidence interval is
  (p̂1 − p̂2) ± 1.96 √(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2)
  = (.0730 − .0630) ± 1.96 √(.0730(1 − .0730)/20500 + .0630(1 − .0630)/20500)
  = .0100 ± .0049
  LCL = .0051, UCL = .0149
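The interval above can be reproduced in plain Python (a sketch; z.025 = 1.96 as on the slide):

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z_crit=1.96):
    """95% confidence interval for p1 - p2 (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2 - z_crit * se, p1 - p2 + z_crit * se

# Deaths: 1497 of 20,500 (Streptokinase) vs. 1292 of 20,500 (t-PA)
lcl, ucl = prop_diff_ci(1497, 20500, 1292, 20500)
print(round(lcl, 4), round(ucl, 4))  # 0.0051 0.0149
```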

Estimating p1 − p2

• Interpretation
– We estimate that between .51% and 1.49% more heart attack victims will survive because of the use of t-PA.

– The difference in cost per patient is 2900 − 460 = $2,440.

– The cost per life saved by switching to t-PA is estimated to be between 2440/.0149 = $163,758 and 2440/.0051 = $478,431.