LGM #12 - Penn State Statistics Department


Presentation 11
More About Significance Tests
Inference Techniques
Confidence Intervals:
1. 1-Proportion
2. 1-Mean
3. Difference between 2-Proportions
4. Difference between 2-Means
Hypothesis Tests:
1. 1-Proportion
2. 1-Mean
3. Difference between 2-Means
4. Difference between 2-Proportions
5. Difference between Paired Means
6. Chi-Square (Relationship between 2 Categorical Variables)
7. Regression (Relationship between 2 Quantitative Variables)
8. ANOVA (Difference between 3 or More Means)
9. Median Tests
General Steps for Hypothesis Tests

Step 1: Determine the H0 and Ha.
The alternative hypothesis, Ha, is the claim regarding the population
parameter that we want to test. There are three possible Ha's: the
parameter is not equal to the null value (two-sided), less than the null value
(one-sided), or greater than the null value (one-sided). The null
hypothesis, H0, claims that nothing is happening. H0 can be stated as the
opposite of Ha or simply as the parameter being equal to the null value.

Step 2: Verify the necessary data conditions; if they are met,
summarize the data into an appropriate test statistic.
The conditions to be verified are the same conditions we have seen in
Chapter 12 for creating C.I.'s.
The test statistic is a standardized statistic, i.e. it is of the form
\[ \text{test statistic} = \frac{\text{sample statistic} - \text{null value}}{\text{null s.e.(sample statistic)}} \]
Under H0 the test statistic follows either a normal distribution
(proportion cases) or a t-distribution with some d.f. (mean cases).
General Steps for Hypothesis Tests

Step 3: Find the p-value.
The p-value is the probability that the test statistic is as large as or larger
than the observed value, in the direction(s) specified by Ha, assuming that H0 is true. Thus, to
get the p-value you need the form of Ha, the value of the test statistic,
and the distribution of the test statistic under H0.

Step 4: Decide whether or not the result is statistically
significant.
The results are statistically significant if the p-value is less than alpha,
where alpha is the significance level (usually 0.05). Based on the p-value we have two possible conclusions:

If p-value < α, we reject the null and claim the alternative is true.
If p-value > α, we fail to reject the null and claim that there isn't enough
evidence to conclude that Ha is true. In this case, we do not claim that the null is
true!
Step 5: Report the conclusion in the context of the
situation.
Finally, write one or two sentences explaining what the conclusion is in
terms of the problem.
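As a quick illustration of Steps 3 and 4, here is a minimal Python sketch (the helper name, the numbers, and the use of SciPy are my own for illustration, not part of the slides) that turns a test statistic into a p-value and a reject / fail-to-reject decision:

```python
from scipy import stats

def decide(test_stat, alternative, null_dist, alpha=0.05):
    """Steps 3-4: get the p-value from the null distribution, then compare to alpha."""
    if alternative == "less":            # Ha: parameter < null value
        p_value = null_dist.cdf(test_stat)
    elif alternative == "greater":       # Ha: parameter > null value
        p_value = null_dist.sf(test_stat)
    else:                                # Ha: parameter != null value (two-sided)
        p_value = 2 * null_dist.sf(abs(test_stat))
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    return p_value, decision

# Hypothetical example: z = 1.8 with a two-sided alternative.
print(decide(1.8, "two-sided", stats.norm()))   # p is about 0.072, so fail to reject
```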
One sample t-test
(one mean or paired data)
In this case we consider the following hypotheses:
H0: µ = µ0   vs. one of the following:
Ha: µ ≠ µ0,  Ha: µ > µ0,  Ha: µ < µ0
We have the following t-test statistic:
\[ t = \frac{\text{sample estimate} - \text{null value}}{\text{null std. error}} = \frac{\bar{x} - \mu_0}{s/\sqrt{n}} \]
Assuming that H0 is true, the test statistic has a t-distribution with (n-1) degrees of freedom, if one of the following conditions is true:
1. The random variable of interest is bell-shaped (in practice, for
small samples the data should show no extreme skewness or
outliers).
2. The random variable is not bell-shaped, but a large random
sample is measured, n ≥ 30.
One sample t-test
(one mean or paired data)

If we denote with T a random variable with a t-distribution with (n-1)
d.f., and t is the value of our test statistic, then

Ha          p-value
µ < µ0      P(T < t)
µ > µ0      P(T > t)
µ ≠ µ0      2P(T > |t|)
We can find the p-value using the t-table (table A3), and draw a
conclusion about the hypotheses in the usual manner.
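For raw data, the whole one-sample (or paired) t-test can be carried out in a few lines of Python. This is only a sketch with made-up data; it assumes a recent SciPy version that supports the `alternative` keyword:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=15.8, scale=1.2, size=40)     # made-up sample of 40 measurements

# Test H0: mu = 16 vs. Ha: mu < 16; this uses t = (xbar - mu0) / (s / sqrt(n)), df = n - 1
t_stat, p_value = stats.ttest_1samp(x, popmean=16, alternative="less")
print(t_stat, p_value)

# The same test from summary statistics only, matching the formula above
xbar, s, n, mu0 = x.mean(), x.std(ddof=1), len(x), 16
t_manual = (xbar - mu0) / (s / np.sqrt(n))
p_manual = stats.t.cdf(t_manual, df=n - 1)       # P(T < t) for the "less than" alternative
print(t_manual, p_manual)
```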
Two-Sample t-test
(difference between two means).
There are two populations (Population 1 and 2) of interest having
unknown means µ1 and µ2 respectively. In this case we consider the
following hypotheses:
H0: µ1 - µ2 = 0 (i.e. no difference)   vs. one of the following:
Ha: µ1 - µ2 < 0,  Ha: µ1 - µ2 > 0,  Ha: µ1 - µ2 ≠ 0
We have the following t-test statistic:
\[ t = \frac{(\bar{x}_1 - \bar{x}_2) - 0}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}} \]
Assuming H0 is true, the test statistic has a t-distribution with
approximate d.f. equal to the minimum of n1-1 and n2-1 if the following
conditions are true:
1. The two samples are independent.
2. Each sample either comes from a bell-shaped population or has a sample
size ≥ 30.
Using the appropriate d.f, we can get the p-value from table A.3 in the
same way as in the one-mean case.
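Here is a minimal sketch of the two-sample test from summary statistics, using the conservative df = min(n1-1, n2-1) described above. The numbers are made up and SciPy is assumed:

```python
import numpy as np
from scipy import stats

# Made-up summary statistics for two independent samples
xbar1, s1, n1 = 7.2, 2.1, 35
xbar2, s2, n2 = 6.4, 1.8, 40

se = np.sqrt(s1**2 / n1 + s2**2 / n2)     # unpooled (null) standard error
t_stat = (xbar1 - xbar2 - 0) / se         # null value is 0 (no difference)
df = min(n1 - 1, n2 - 1)                  # conservative approximate d.f.

p_value = 2 * stats.t.sf(abs(t_stat), df=df)   # two-sided: 2 * P(T > |t|)
print(t_stat, df, p_value)
```

With raw data, `scipy.stats.ttest_ind(x1, x2, equal_var=False)` produces the same t statistic but uses the Welch approximation for the degrees of freedom instead of the conservative minimum.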
Difference between two proportions
Now we will consider two populations having some unknown
proportions of interest, p1 and p2 respectively. In this case we consider
the following hypotheses:
H0: p1 - p2 = 0 (i.e. no difference)   vs. one of the following:
Ha: p1 - p2 < 0,  Ha: p1 - p2 > 0,  Ha: p1 - p2 ≠ 0
The sample statistic we will use is \(\hat{p}_1 - \hat{p}_2\). To get the null standard
error of the statistic we assume that H0 is true, so let p1 = p2 = p. Thus,
instead of using \(\hat{p}_1\) and \(\hat{p}_2\) in the s.e. formula, we substitute them
with an estimate of the common population proportion. That is, we use
the proportion in all available data,
\[ \hat{p} = \frac{n_1 \hat{p}_1 + n_2 \hat{p}_2}{n_1 + n_2}, \]
and we have the null s.e.
\[ \sqrt{\frac{\hat{p}(1-\hat{p})}{n_1} + \frac{\hat{p}(1-\hat{p})}{n_2}}. \]
Difference between two proportions

Thus, we have the following z-test statistic:
\[ z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n_1} + \dfrac{\hat{p}(1-\hat{p})}{n_2}}} = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}} \]
Assuming H0 is true, the test statistic has a standard normal
distribution if the following conditions are true:
1. The two samples are independent.
2. All the quantities \(n_1\hat{p}_1\), \(n_1(1-\hat{p}_1)\), \(n_2\hat{p}_2\), and \(n_2(1-\hat{p}_2)\) are at least 5
and preferably at least 10.
We can obtain the p-value using the standard normal table (Table A.1). If we
denote with Z a random variable with a standard normal distribution,
and z is the value of our test statistic, then

Ha            p-value
p1 - p2 < 0   P(Z < z)
p1 - p2 > 0   P(Z > z)
p1 - p2 ≠ 0   2P(Z > |z|)
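The whole calculation is easy to script. Below is a minimal sketch (the function name and the counts are made up; SciPy is assumed) that uses the pooled proportion for the null standard error, exactly as in the formulas above:

```python
import numpy as np
from scipy import stats

def two_prop_z_test(x1, n1, x2, n2, alternative="two-sided"):
    """z-test of H0: p1 - p2 = 0 using the pooled ("null") standard error."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_hat = (x1 + x2) / (n1 + n2)                              # pooled proportion
    se_null = np.sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))
    z = (p1_hat - p2_hat) / se_null
    if alternative == "less":
        p_value = stats.norm.cdf(z)
    elif alternative == "greater":
        p_value = stats.norm.sf(z)
    else:
        p_value = 2 * stats.norm.sf(abs(z))
    return z, p_value

# Made-up counts: 52 successes out of 120 vs. 38 successes out of 115
print(two_prop_z_test(52, 120, 38, 115, alternative="greater"))
```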
Table For Hypothesis Testing
Type: One Mean (or Paired Mean)
Parameter: µ (or µd);  Statistic: \(\bar{x}\) (or \(\bar{d}\))
Null Std. Error: \(s/\sqrt{n}\) (or \(s_d/\sqrt{n}\))
Test Statistic: t, df = n-1

Type: Difference Between Means
Parameter: µ1 - µ2;  Statistic: \(\bar{x}_1 - \bar{x}_2\)
Null Std. Error: \(\sqrt{s_1^2/n_1 + s_2^2/n_2}\)
Test Statistic: t, df = min(n1-1, n2-1)

Type: One Proportion
Parameter: p;  Statistic: \(\hat{p}\)
Null Std. Error: \(\sqrt{p_0(1-p_0)/n}\)
Test Statistic: z

Type: Difference Between Proportions
Parameter: p1 - p2;  Statistic: \(\hat{p}_1 - \hat{p}_2\)
Null Std. Error: \(\sqrt{\hat{p}(1-\hat{p})/n_1 + \hat{p}(1-\hat{p})/n_2}\), where \(\hat{p} = (n_1\hat{p}_1 + n_2\hat{p}_2)/(n_1 + n_2)\) is the overall proportion, as if we had one big sample!
Test Statistic: z
Detailed Example 1
A box of corn flakes is advertised as containing
16 oz. of cereal. John works for Consumer
Reports and is interested in verifying the
company's claim. He suspects that the average
amount of cereal is less than 16 oz. He takes a
random sample of 100 boxes of cereal and
records the weight of each box. The sample
mean is 15.7 oz and the sample standard
deviation is 1.2 oz.
Example 1: Overview
Before you carry out the hypothesis test it is a good idea
to label the information you have.
Since there is 1 quantitative variable we are doing a
hypothesis test of 1-mean.
The parameter of interest is µ, the mean weight of
cereal in all corn flakes boxes.
The statistic of interest is \(\bar{x}\) = 15.7 oz, the sample mean
weight.
Other information: The sample size n = 100, and the
sample standard deviation s = 1.2 oz.
Example 1: Hypotheses and Conditions
1. Define the null and alternative hypotheses:
Ho: µ = 16oz
Ha: μ < 16oz
2. Check conditions: Either A or B
A) The distribution of weights is normal.
B) The sample size is ≥ 30.
Example 1: Test Statistic and P-Value
3. Calculate the test statistic.
\[ \text{Test Statistic} = \frac{\text{Sample Statistic} - \text{Null Value}}{\text{Null Standard Error}} \]
Null Value = 16, Sample Statistic = 15.7
Null Std. Error = \(s/\sqrt{n}\) = 1.2/10 = .12
Test Statistic = (15.7 - 16)/.12 = -2.5
P-Value = P(T < -2.5). Use Table A.3!
Degrees of freedom: df = n-1 = 100-1 = 99
Example 1: P-value & Conclusion
[Figure: t-distribution with df = 99; the shaded lower tail shows the P-Value = P(T < -2.5).]

Look up the absolute value of the test statistic and the df in the table. If the table does
not have the exact test statistic, then take your best guess OR use the next
smallest test statistic. Using the next larger and next smaller values for the t-statistic in the table you can get an
interval for the p-value.

Table A.3 gives the one-sided p-values for t-tests. For 2-sided hypothesis tests
make sure you multiply the p-value by 2!

In our case, df = 99 and the t-statistic = -2.5. Since 2.5 is not in the table we
can use 2.33, and 90 df (since 99 is not in the table). We get that the p-value is less than .011.
The p-value is < .05 so we REJECT the null hypothesis and we conclude
that the average weight IS less than 16oz.
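As a cross-check of Example 1, the exact p-value (rather than the interval read from Table A.3) can be computed directly; a sketch assuming SciPy is available:

```python
from scipy import stats

xbar, s, n, mu0 = 15.7, 1.2, 100, 16
se = s / n**0.5                        # 1.2 / 10 = 0.12
t = (xbar - mu0) / se                  # -2.5
p_value = stats.t.cdf(t, df=n - 1)     # P(T < -2.5) with 99 d.f.
print(t, p_value)                      # p is about 0.007, consistent with "< .011" from the table
```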
Detailed Example 2
A potential presidential candidate in the 2002 election
wished to know if men and women had equal
preferences for her candidacy versus this not being
the case (she suspected that men were more likely
to favor her). Her pollster polled a random sample
of 400 men and 400 women, with the following
results.

Preference: Would vote for her?
          No     Yes    Total
Men       164    236    400
Women     180    220    400
Total     344    456    800
Example 2: Overview
Here there are 2 categorical variables, both with 2 levels so we
are interested in a test of 2-proportions!
Is the proportion of men who favor the candidate greater than
the proportion of women who favor the candidate?
The parameter of interest is pm-pw.
The sample statistic of interest is \(\hat{p}_m - \hat{p}_w\) = .59 - .55 = .04.
Example 2:
1. Define the null and alternative hypotheses:
Ho: pm = pw, or pm - pw = 0
Ha: pm > pw, or pm - pw > 0
Let's denote pm by p1 and pw by p2.
2. Check the conditions:
All the quantities \(n_1\hat{p}_1\), \(n_1(1-\hat{p}_1)\), \(n_2\hat{p}_2\), and \(n_2(1-\hat{p}_2)\)
are greater than or equal to 10.
Example 2: Test Statistic and P-Value
3. Calculate the test statistic.
\[ \text{Test Statistic} = z = \frac{\text{Sample Statistic} - \text{Null Value}}{\text{Null Standard Error}} \]
Null Value = 0 (since the parameter is p1 - p2)
Statistic: \(\hat{p}_1 - \hat{p}_2\) = 236/400 - 220/400 = 0.04
\[ \hat{p} = \frac{n_1\hat{p}_1 + n_2\hat{p}_2}{n_1 + n_2} = \frac{236 + 220}{800} = 0.57 \]
Null Std. Error = \(\sqrt{\hat{p}(1-\hat{p})(1/n_1 + 1/n_2)} = \sqrt{.57(1-.57)(1/400 + 1/400)} = 0.035\)
\[ \text{Test Statistic} = \frac{(\hat{p}_1 - \hat{p}_2) - 0}{\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n_1} + \dfrac{\hat{p}(1-\hat{p})}{n_2}}} = \frac{.04}{.035} = 1.14 \]
Example 2: P-Value and Conclusion
P-Value = P(Z>1.14) = 1-P(Z<1.14)
= 1-0.8729 = 0.127
P-Value is >.05 so we can NOT reject the null
hypothesis. Therefore, we do NOT have enough
evidence to conclude that men are more likely to
favor the candidate.
MINITAB Output for Example 2
Test and CI for Two Proportions

Sample     X    N   Sample p
Men      236  400   0.590000
Women    220  400   0.550000

Estimate for p(1) - p(2): 0.04
Test for p(1) - p(2) = 0 (vs > 0): Z = 1.14
P-Value = 0.126
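The MINITAB numbers can be reproduced in Python as well; a sketch assuming the statsmodels package is installed (by default its `proportions_ztest` uses the pooled proportion for the standard error, as in the slides):

```python
from statsmodels.stats.proportion import proportions_ztest

counts = [236, 220]   # "Yes" answers for men (sample 1) and women (sample 2)
nobs = [400, 400]     # sample sizes

# Ha: p1 - p2 > 0, i.e. men are more likely to favor the candidate
z, p_value = proportions_ztest(counts, nobs, value=0, alternative="larger")
print(z, p_value)     # roughly z = 1.14 and p-value = 0.126
```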
CI’s and Two-sided Alternatives
When testing the hypotheses
H0: parameter = null value  vs.  Ha: parameter ≠ null value:

If the null value is covered by a (1 - α) CI, the null hypothesis is not
rejected and the test is not statistically significant at level α.

If the null value is not covered by a (1 - α) CI, the null hypothesis is
rejected and the test is statistically significant at level α.

For instance, for a 95% Confidence Interval, (1 - α) = 0.95 = 95%.
So for 95% confidence, the significance level is α = 0.05, which is
the significance level and confidence level used most frequently.
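A small sketch of this duality (made-up data, SciPy assumed): the two-sided one-sample t-test rejects at α = 0.05 exactly when the 95% confidence interval misses the null value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=103, scale=5, size=25)   # made-up sample
mu0 = 100                                   # null value to test

# 95% t confidence interval for the mean
se = stats.sem(x)
ci_low, ci_high = stats.t.interval(0.95, len(x) - 1, loc=x.mean(), scale=se)

# Two-sided one-sample t-test of H0: mu = 100
t_stat, p_value = stats.ttest_1samp(x, popmean=mu0)

print((ci_low, ci_high), p_value)
print("null value inside CI:", ci_low <= mu0 <= ci_high)
print("p-value >= 0.05:     ", p_value >= 0.05)   # the two checks always agree
```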
Problem 13.38 in the Text
Each of the following presents a 95% CI and the
alternative hypothesis of a corresponding hypothesis test.
In each case, state a conclusion for the test, including
the level of significance you are using.
A) CI for µ is (101 to 105), Ha: µ ≠ 100
B) CI for p is (.12 to .28), Ha: p < .10
C) CI for µ1-µ2 is (3 to 15), Ha: µ1-µ2 > 0
D) CI for p1-p2 is (-.15 to .07), Ha: p1-p2 ≠ 0
Possible Errors, Power and
Sample Size Considerations
When we are testing a hypothesis, two out of four possible decisions lead
to an error.
Type I error - We reject H0 when it is true.
Type II error - We fail to reject H0 when Ha is true.

There is an inverse relationship between the probabilities of the two
types of errors. An increase in the probability of a Type I error leads to a
decrease in the probability of a Type II error, and vice versa.

When the alternative hypothesis is true, the probability of making the
correct decision is called the power of the test.

The power increases when the sample size is increased. Can you see why?
(A small simulation illustrating this appears at the end of this section.)

The power increases when the difference between the true population
parameter value and the null value increases. Can you see why?

The hypothesis test may have very low power because of a small data set.

We should also be careful when our conclusions are based on
extremely large samples, since in such cases even a weak
relationship (or a small difference) can be statistically significant.
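To see concretely why power grows with the sample size and with the distance between the true parameter value and the null value, here is a minimal simulation sketch for the cereal-box setting of Example 1. The settings are made up, and it assumes NumPy plus a recent SciPy version that supports the `alternative` keyword:

```python
import numpy as np
from scipy import stats

def estimated_power(true_mu, mu0=16, sigma=1.2, n=30, alpha=0.05, reps=2000, seed=0):
    """Fraction of simulated samples in which H0: mu = mu0 is rejected vs. Ha: mu < mu0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.normal(loc=true_mu, scale=sigma, size=n)
        _, p = stats.ttest_1samp(x, popmean=mu0, alternative="less")
        rejections += (p < alpha)
    return rejections / reps

# Power increases with the sample size ...
print(estimated_power(true_mu=15.7, n=30), estimated_power(true_mu=15.7, n=100))
# ... and with the distance between the true mean and the null value.
print(estimated_power(true_mu=15.9, n=50), estimated_power(true_mu=15.5, n=50))
```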