Problems - Ravanshenas


t-Statistic
The t-test is used to test a hypothesis about an unknown population mean, μ, when the value of σ or σ² is unknown.
Degrees of Freedom
df = n − 1
Assumptions of the t-test
(Parametric Tests)
 1. The values in the sample must consist of independent observations.
 2. The population being sampled must be normal.
 3. Use a large sample (n ≥ 30).
Inferential Statistics
 t-Statistics:
 There are different types of t-statistic:
 1. Single (one) sample t-statistic (test)
 2. Two independent sample t-test, Matched-Subject Experiment, or Between Subject Design
 3. Repeated Measure Experiment, or Related/Paired Sample t-test
Hypothesis Testing
Step 3: Computations/Calculations, or Collect Data and Compute Sample Statistics
FYI: Z-Score for Research
Calculations for t-test
t = (M − μ) / s_M
s_M = (M − μ) / t
M = t · s_M + μ
μ = M − t · s_M
s_M = s / √n  or  √(s² / n)
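To make these relationships concrete, here is a minimal Python sketch (the function name and variable names are my own, not from the slides) that computes s_M and t from summary values:

```python
import math

def one_sample_t(M, mu, SS, n):
    """One-sample t from summary values: s^2 = SS/df, s_M = sqrt(s^2/n), t = (M - mu)/s_M."""
    df = n - 1
    s_squared = SS / df              # sample variance
    s_M = math.sqrt(s_squared / n)   # estimated standard error of M
    t = (M - mu) / s_M
    return t, s_M

# Check with the values from the infant problem that appears later in the deck:
# M = 13, mu = 10, SS = 72, n = 9
print(one_sample_t(13, 10, 72, 9))   # (3.0, 1.0)
```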
FYI: Variability
SS, Standard Deviations, and Variances
Example scores: X = 1, 2, 4, 5
Population:  σ² = SS/N    σ = √(SS/N)
Sample:      s² = SS/(n − 1) = SS/df    s = √(SS/df)
SS = Σ(X − μ)²  (definitional)  or  SS = ΣX² − (ΣX)²/N  (computational)
SS = Sum of Squared Deviations from the Mean
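A short sketch, using nothing beyond plain Python, that applies both SS formulas and the population/sample variance formulas to the scores X = 1, 2, 4, 5 shown above:

```python
# SS, variance, and standard deviation for X = 1, 2, 4, 5,
# treated first as a population of N = 4, then as a sample of n = 4.
X = [1, 2, 4, 5]
N = len(X)
mean = sum(X) / N                                    # 3.0

SS_def = sum((x - mean) ** 2 for x in X)             # definitional: Σ(X − μ)² = 10
SS_comp = sum(x * x for x in X) - sum(X) ** 2 / N    # computational: ΣX² − (ΣX)²/N = 10

pop_var = SS_def / N            # σ² = SS/N = 2.5
sample_var = SS_def / (N - 1)   # s² = SS/df ≈ 3.33
print(SS_def, SS_comp, pop_var, sample_var)
```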
Cohen's d = Effect Size
Use s instead of σ for the t-test:
 d = (M − µ) / s
 s = (M − µ) / d
 M = d · s + µ
 µ = M − d · s
Percentage of Variance Accounted for by the Treatment (similar to Cohen's d), also known as ω² (Omega Squared)
r² = t² / (t² + df)
Percentage of Variance Accounted for by the Treatment
 Percentage of Variance Explained:
 r² = 0.01  →  small effect
 r² = 0.09  →  medium effect
 r² = 0.25  →  large effect
Problems
 Infants, even newborns, prefer to look at attractive faces (Slater, et al., 1998). In the study, infants from 1 to 6 days old were shown two photographs of women's faces. Previously, a group of adults had rated one of the faces as significantly more attractive than the other. The babies were positioned in front of a screen on which the photographs were presented. The pair of faces remained on the screen until the baby accumulated a total of 20 seconds of looking at one or the other. The number of seconds looking at the attractive face was recorded for each infant.
Problems
 Suppose that the study used a sample of n=9 infants and the data produced an average of M=13 seconds looking at the attractive face, with SS=72.
 Set the level of significance at α=.05 for two tails.
 Note that all the available information comes from the sample. Specifically, we do not know the population mean μ or the population standard deviation σ.
 On the basis of this sample, can we conclude that infants prefer to look at attractive faces?
Null Hypothesis
 t-Statistic:
 If the population mean µ and σ are unknown, the statistic of choice is the t-statistic.
 1. Single (one) sample t-statistic (test)
 Step 1
 H0: µ_attractive = 10 seconds
 H1: µ_attractive ≠ 10 seconds
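A sketch of the remaining steps for this problem in Python (SciPy is assumed to be available for the critical value; M = 13, SS = 72, and n = 9 come from the problem slides):

```python
from math import sqrt
from scipy import stats

# Infant attractiveness problem: n = 9, M = 13, SS = 72, H0: mu = 10, alpha = .05 two-tailed
n, M, SS, mu = 9, 13, 72, 10
df = n - 1
s2 = SS / df                # 9.0
s_M = sqrt(s2 / n)          # 1.0
t = (M - mu) / s_M          # 3.0

t_crit = stats.t.ppf(1 - 0.05 / 2, df)   # two-tailed critical value, about 2.306
p = 2 * stats.t.sf(abs(t), df)           # two-tailed p-value
print(f"t({df}) = {t:.2f}, critical = +/-{t_crit:.3f}, p = {p:.4f}")
# |t| exceeds the critical value (p < .05), so H0 is rejected:
# looking times differ from the chance value of 10 seconds.
```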
Problems
 A psychologist has prepared an "Optimism Test" that is administered yearly to graduating college seniors. The test measures how each graduating class feels about its future. The higher the score, the more optimistic the class. Last year's class had a mean score of μ=15. A sample of n=9 seniors from this year's class was selected and tested.
Problems
 The scores for these seniors are 7, 12, 11, 15, 7, 8, 15, 9, and 6, which produce a sample mean of M=10 with SS=94.
 On the basis of this sample, can the psychologist conclude that this year's class has a different level of optimism?
 Note that this hypothesis test will use a t-statistic because the population variance σ² is not known. USE SPSS.
 Set the level of significance at α=.01 for two tails.
Null Hypothesis
 t-Statistic:
 If the population mean µ and σ are unknown, the statistic of choice is the t-statistic.
 1. Single (one) sample t-statistic (test)
 Step 1
 H0: µ_optimism = 15
 H1: µ_optimism ≠ 15
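The slide says to use SPSS; as an alternative sketch, the same one-sample test can be run with SciPy's ttest_1samp on the raw scores listed earlier:

```python
from scipy import stats

# Optimism problem: this year's scores vs. last year's mean mu = 15, alpha = .01 two-tailed
scores = [7, 12, 11, 15, 7, 8, 15, 9, 6]     # M = 10, SS = 94
result = stats.ttest_1samp(scores, popmean=15)
print(f"t({len(scores) - 1}) = {result.statistic:.3f}, p = {result.pvalue:.4f}")
# t(8) is about -4.38, p < .01, so H0 is rejected:
# this year's class shows a different level of optimism.
```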
Chapter 10
Two Independent Sample t-test
Matched-Subject Experiment, or Between Subject Design
 An independent-measures study uses a separate sample to represent each of the populations or treatment conditions being compared.
Two Independent Sample t-test
Null Hypothesis:
 If the population mean µ is unknown, the statistic of choice is the t-statistic.
 Two independent sample t-test, Matched-Subject Experiment, or Between Subject Design
 H0: µ1 − µ2 = 0
 H1: µ1 − µ2 ≠ 0
Problems
 Research results suggest a relationship between the TV viewing habits of 5-year-old children and their future performance in high school. For example, Anderson, Huston, Wright & Collins (1998) report that high school students who regularly watched Sesame Street as children had better grades in high school than their peers who did not watch Sesame Street.
Problems
 The researcher intends to examine this phenomenon using a sample of 20 high school students. She first surveys the students' parents to obtain information on the family's TV viewing habits during the time that the students were 5 years old. Based on the survey results, the researcher selects a sample of n=10
Problems
 students with a history of watching "Sesame Street" and a sample of n=10 students who did not watch the program. The average high school grade is recorded for each student, and the data are as follows. Set the level of significance at α=.05 for two tails.
Problems
Average High School Grade

Watched Sesame St. (1)    Did Not Watch Sesame St. (2)
86                        90
87                        89
91                        82
97                        83
98                        85
99                        79
97                        83
94                        86
89                        81
92                        92
n1 = 10                   n2 = 10
M1 = 93                   M2 = 85
SS1 = 200                 SS2 = 160
Two Independent Sample t-test
Null Hypothesis:
 Two independent sample t-test, Matched-Subject Experiment, or Between Subject Design (non-directional or two-tailed test)
 Step 1.
 H0: µ1 − µ2 = 0
 H1: µ1 − µ2 ≠ 0
Two Independent Sample t-test
Null Hypothesis:
 Two independent sample t-test, Matched-Subject Experiment, or Between Subject Design (directional or one-tailed test)
 Step 1.
 H0: µ_SesameSt ≤ µ_NoSesameSt
 H1: µ_SesameSt > µ_NoSesameSt
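A minimal sketch of the pooled-variance computation for this problem, using only the summary statistics from the data slide (variable names are my own, not from the slides):

```python
from math import sqrt

# Sesame Street problem, from summary statistics (alpha = .05, two-tailed)
n1, M1, SS1 = 10, 93, 200    # watched Sesame Street
n2, M2, SS2 = 10, 85, 160    # did not watch
df = (n1 - 1) + (n2 - 1)                  # 18
sp2 = (SS1 + SS2) / df                    # pooled variance = 20
s_diff = sqrt(sp2 / n1 + sp2 / n2)        # standard error of (M1 - M2) = 2
t = (M1 - M2) / s_diff                    # 4.0
print(f"t({df}) = {t:.2f}")
# The two-tailed critical t(18) at .05 is about 2.101, so H0 is rejected.
```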
Measuring d = Effect Size for the independent-measures t-test
d = (M1 − M2) / √(s_p²)    (s_p² is the pooled variance)
Percentage of Variance Accounted for by the Treatment (similar to Cohen's d), also known as ω² (Omega Squared) and the Coefficient of Determination
r² = t² / (t² + df)
We use the Point-Biserial Correlation when one of our variables is dichotomous; in this case, (1) watched Sesame St. and (2) did not watch Sesame St.
r² = t² / (t² + df)
Problems
 In recent years, psychologists have demonstrated repeatedly that using mental images can greatly improve memory. Here we present a hypothetical experiment designed to examine this phenomenon. The psychologist first prepares a list of 40 pairs of nouns (for example, dog/bicycle, grass/door, lamp/piano). Next, two groups of participants are obtained (two separate samples). Participants in one group are given the list for 5 minutes and instructed to memorize the 40 noun pairs.
Problems
 Participants in another group receive the same list of words, but in addition to the regular instructions, they are told to form a mental image for each pair of nouns (imagine a dog riding a bicycle, for example). Later, each group is given a memory test in which they are given the first word from each pair and asked to recall the second word. The psychologist records the number of words correctly recalled for each individual. The data from this experiment are as follows. Set the level of significance at α=.01 for two tails.
Problems
Data (Number of words recalled)

Group 1 (Images)    Group 2 (No Images)
19                  23
20                  22
24                  15
30                  16
31                  18
32                  12
30                  16
27                  19
22                  14
25                  25
n1 = 10             n2 = 10
M1 = 26             M2 = 18
SS1 = 200           SS2 = 160
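As a sketch, the same independent-measures test can be run on these raw scores with SciPy's ttest_ind, which uses the pooled-variance t by default:

```python
from scipy import stats

# Mental-images memory problem (alpha = .01, two-tailed)
images = [19, 20, 24, 30, 31, 32, 30, 27, 22, 25]      # M1 = 26, SS1 = 200
no_images = [23, 22, 15, 16, 18, 12, 16, 19, 14, 25]   # M2 = 18, SS2 = 160
result = stats.ttest_ind(images, no_images)            # pooled-variance (equal_var=True) by default
print(f"t(18) = {result.statistic:.2f}, p = {result.pvalue:.5f}")
# t(18) = 4.00, p < .01, so H0 is rejected: imagery improves recall.
```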
3. Repeated Measure Experiment, or Related/Paired Sample t-test
Within Subject Experiment Design
 A single sample of individuals is measured more than once on the same dependent variable. The same subjects are used in all of the treatment conditions.
Null Hypothesis
 t-Statistics:
 If the population mean µ is unknown, the statistic of choice is the t-statistic.
 3. Repeated Measure Experiment, or Related/Paired Sample t-test
 For a non-directional or two-tailed test:
 Step 1.
 H0: µ_D = 0
 H1: µ_D ≠ 0
Null Hypothesis
 t-Statistics:
 If the population mean µ is unknown, the statistic of choice is the t-statistic.
 3. Repeated Measure Experiment, or Related/Paired Sample t-test
 For a directional or one-tailed test:
 Step 1.
 H0: µ_D ≤ 0
 H1: µ_D > 0
Problems
Research indicates that the color red increases men's attraction to women (Elliot & Niesta, 2008). In the original study, men were shown women's photographs presented on either a white or a red background. Photographs presented on red were rated significantly more attractive than the same photographs mounted on white.
Problems
In a similar study, a researcher prepares a set of 30 women's photographs, with 15 mounted on a white background and 15 mounted on red. One picture is identified as the test photograph and appears twice in the set, once on white and once on red.
Problems
 Each male participant looks through the entire set of photographs and rates the attractiveness of each woman on a 12-point scale. The data in the next slide summarize the responses for a sample of n=9 men.
 Set the level of significance at α=.01 for two tails.
 Do the data indicate that the color red increases men's attraction to women?
Problems

Participant    White background (X1)    Red background (X2)    D = X2 − X1    D²
A              6                        9                      +3             9
B              8                        9                      +1             1
C              7                        10                     +3             9
D              7                        11                     +4             16
E              8                        11                     +3             9
F              6                        9                      +3             9
G              5                        11                     +6             36
H              10                       11                     +1             1
I              8                        11                     +3             9

ΣD = 27    ΣD² = 99    M_D = ΣD / n
Null Hypothesis
 For a non-directional or two-tailed test:
 Step 1.
 H0: µ_D = 0
 H1: µ_D ≠ 0
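A sketch of this related-samples test in Python, assuming SciPy is available; ttest_rel works directly on the paired X1/X2 scores from the red-background data slide:

```python
from scipy import stats

# Red-background attractiveness problem (alpha = .01, two-tailed)
white = [6, 8, 7, 7, 8, 6, 5, 10, 8]        # X1
red   = [9, 9, 10, 11, 11, 9, 11, 11, 11]   # X2
result = stats.ttest_rel(red, white)        # paired t-test on D = X2 - X1
print(f"t(8) = {result.statistic:.2f}, p = {result.pvalue:.5f}")
# M_D = 3, s_MD = 0.5, t(8) = 6.00, p < .01, so H0 is rejected.
```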
Problems
One technique to help people deal with phobias is to have them counteract the feared object by using imagination to move themselves to a place of safety. In an experimental test of this technique, patients sit in front of a screen and are instructed to relax. Then they are shown a slide of the feared object, for example, a picture of a spider (arachnophobia). The patient signals the researcher as soon as feelings of anxiety begin to arise, and the researcher records the amount of time that the patient was able to endure looking at the slide.
Problems
 The patient then spends two minutes imagining a "safe scene" such as a tropical beach (next slide) before the slide is presented again. If patients can tolerate the feared object longer after the imagination exercise, it is viewed as a reduction in the phobia. The data in the next slide summarize the times recorded from a sample of n=7 patients. Do the data indicate that the imagination technique effectively alters the phobia? Set the level of significance at α=.05 for a one-tailed test.
Problems

Participant    Before imagination (X1)    After imagination (X2)    D = X2 − X1    D²
A              15                         24                        +9             81
B              10                         23                        +13            169
C              7                          11                        +4             16
D              18                         25                        +7             49
E              5                          14                        +9             81
F              9                          14                        +5             25
G              12                         21                        +9             81

ΣD = 56    ΣD² = 502    M_D = ΣD / n
Null Hypothesis
 For a directional or one-tailed test:
 Step 1.
 H0: µ_D ≤ 0 (The amount of time is not increased.)
 H1: µ_D > 0 (The amount of time is increased.)
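A minimal sketch computing the repeated-measures t directly from the difference scores for the phobia problem (plain Python; the decision rule uses the standard one-tailed critical value for df = 6):

```python
from math import sqrt

# Phobia imagination problem (alpha = .05, one-tailed), working from the D scores
D = [9, 13, 4, 7, 9, 5, 9]
n = len(D)                                # 7
MD = sum(D) / n                           # 8.0
SS_D = sum((d - MD) ** 2 for d in D)      # 54.0
s_MD = sqrt((SS_D / (n - 1)) / n)         # about 1.134
t = MD / s_MD                             # about 7.06
print(f"t({n - 1}) = {t:.2f}")
# One-tailed critical t(6) at .05 is about 1.943, so H0 is rejected:
# tolerated viewing time increases after the imagination exercise.
```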
Relationship of Statistical Tests
 Does this diagram make sense to you?
 (Diagram: Structural Model, Multiple Regression, ANOVA, t ratio, Correlation)
Estimation
 The inferential process of using sample statistics to estimate population parameters is called estimation.
 We use estimation: 1. after a hypothesis test when H0 is rejected; 2. when we know there is an effect and simply want to find out how much; 3. when we want some basic information about an unknown population.
 See the logic behind hypothesis tests and estimation.
Estimation
 Point Estimate:
 Interval Estimate:
 µ = M ± t · s_M
 µ1 − µ2 = (M1 − M2) ± t · s_(M1−M2)
 µ_D = M_D ± t · s_MD
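A sketch of the single-mean interval estimate µ = M ± t·s_M, illustrated with the infant-problem summary values at a 95% level (SciPy assumed for the t value):

```python
from math import sqrt
from scipy import stats

# Interval estimate for a single mean, using n = 9, M = 13, SS = 72 from the infant problem
n, M, SS = 9, 13, 72
df = n - 1
s_M = sqrt((SS / df) / n)             # 1.0
t = stats.t.ppf(1 - 0.05 / 2, df)     # about 2.306 for a 95% interval
lower, upper = M - t * s_M, M + t * s_M
print(f"95% interval estimate for mu: {lower:.2f} to {upper:.2f}")   # about 10.69 to 15.31
```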
ANALYSIS OF VARIANCE (ANOVA)
 TESTS FOR DIFFERENCES AMONG TWO OR MORE POPULATION MEANS
 σ² = s² = MS
 MS = Mean Squared Deviation
 Example of ANOVA research: the effect of temperature on recall.
Statistics
Standard Deviations and Variances
Example scores: X = 1, 2, 4, 5
Population:  σ² = SS/N    σ = √(SS/N)
Sample:      s² = SS/(n − 1) = SS/df    s = √(SS/df)
MS = SS/df
Effects of Temperature (IV) on Recall (DV)

MS_between = SS_between / df_between
MS_within = SS_within / df_within
ANOVA
 SS_between = Σ(T²/n) − G²/N
 SS_within = ΣSS (summed over the treatments)
 SS_total = SS_between + SS_within
 df_between = K − 1
 df_within = N − K
 df_total = df_between + df_within
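These formulas translate directly into a short Python sketch (the function and variable names are my own, and the F-ratio line is the standard MS_between/MS_within ratio implied by the MS slides). The demonstration call uses the summary values from the viewing-distance problem that follows:

```python
# ANOVA sums of squares and degrees of freedom from summary values.
# Each treatment is given as a (T, SS, n) tuple; G and N are totals over all scores.
def anova_summary(groups, sum_x_squared):
    G = sum(T for T, SS, n in groups)
    N = sum(n for T, SS, n in groups)
    K = len(groups)
    SS_total = sum_x_squared - G ** 2 / N
    SS_between = sum(T ** 2 / n for T, SS, n in groups) - G ** 2 / N
    SS_within = sum(SS for T, SS, n in groups)
    df_between, df_within = K - 1, N - K
    F = (SS_between / df_between) / (SS_within / df_within)   # F = MS_between / MS_within
    return SS_between, SS_within, SS_total, df_between, df_within, F

# Viewing-distance data from the next problem: T = 5, 10, 25, 20 with n = 5 each, ΣX² = 262
print(anova_summary([(5, 8, 5), (10, 8, 5), (25, 10, 5), (20, 6, 5)], 262))
# SS_between = 50, SS_within = 32, SS_total = 82, df = 3 and 16, F ≈ 8.33
```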
Post Hoc Tests (Post Tests)
 Post hoc tests are additional hypothesis tests that are done after an ANOVA to determine exactly which mean differences are significant and which are not.
Post Hoc Tests (Post Tests)
 Tukey's Honestly Significant Difference (HSD) Test:
 HSD = q · √(MS_within / n)
Problems
 The data in the next slide were obtained from an independent-measures experiment designed to examine people's preferences for viewing distance of a 60-inch high-definition television. Four viewing distances were evaluated, 9 feet, 12 feet, 15 feet, and 18 feet, with a separate group of participants tested at each distance. Each individual watched a 30-minute television program from a specific distance and then completed a brief questionnaire measuring their satisfaction with the experience.
Problems
 One question asked them to rate the viewing distance on a scale from 1 (Very Bad, definitely need to move closer or farther away) to 7 (Excellent, perfect viewing distance). The purpose of the ANOVA is to determine whether there are any significant differences among the four viewing distances that were tested. Before we begin the hypothesis test, note that we have already computed several summary statistics for the data in the next slide. Specifically, the treatment totals (T) and SS values are shown for the entire set of data.
Problems

9 feet     12 feet     15 feet     18 feet
3          4           7           6
0          3           6           3
2          1           5           4
0          1           4           3
0          1           3           4
T1 = 5     T2 = 10     T3 = 25     T4 = 20
SS1 = 8    SS2 = 8     SS3 = 10    SS4 = 6
M1 = 1     M2 = 2      M3 = 5      M4 = 4
n1 = 5     n2 = 5      n3 = 5      n4 = 5

N = 20    G = 60    ΣX² = 262    K = 4
Problems
 Having these summary values simplifies the computations in the hypothesis test, and we suggest that you always compute these summary statistics before you begin an ANOVA.
 Step 1)
 H0: µ1 = µ2 = µ3 = µ4 (There is no treatment effect.)
 H1: (At least one of the treatment means is different.)
 We will set alpha at α = .05
Step 2
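For the computations, a sketch using SciPy's f_oneway on the raw ratings from the data slide; it should reproduce F = MS_between / MS_within of about 8.33 with df = 3, 16:

```python
from scipy import stats

# Viewing-distance ANOVA (alpha = .05): one sample of n = 5 ratings per distance
d9  = [3, 0, 2, 0, 0]
d12 = [4, 3, 1, 1, 1]
d15 = [7, 6, 5, 4, 3]
d18 = [6, 3, 4, 3, 4]
F, p = stats.f_oneway(d9, d12, d15, d18)
print(f"F(3, 16) = {F:.2f}, p = {p:.4f}")
# F is about 8.33, p < .01, so H0 is rejected: satisfaction differs with viewing distance.
```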
Problems
 A human factors psychologist studied three computer keyboard designs. Three samples of individuals were given material to type on a particular keyboard, and the number of errors committed by each participant was recorded. The data are on the next slide.
Problems

Keyboard A    Keyboard B    Keyboard C
0             6             6
4             8             5
0             5             9
1             4             4
0             2             6
T1 = 5        T2 = 25       T3 = 30
SS1 = 12      SS2 = 20      SS3 = 14
M1 = 1        M2 = 5        M3 = 6

N = 15    G = 60    ΣX² = 356

Is there a significant difference among the three computer keyboard designs?
Problems
 Step 1)
 H0: µ1 = µ2 = µ3 (No differences among the computer keyboard designs.)
 H1: (At least one of the computer keyboard designs is different.)
 We will set alpha at α = .01
Step 2
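A corresponding sketch for this problem with SciPy's f_oneway, using the raw error counts from the data slide:

```python
from scipy import stats

# Keyboard-design ANOVA (alpha = .01): number of typing errors per participant
keyboard_a = [0, 4, 0, 1, 0]
keyboard_b = [6, 8, 5, 4, 2]
keyboard_c = [6, 5, 9, 4, 6]
F, p = stats.f_oneway(keyboard_a, keyboard_b, keyboard_c)
print(f"F(2, 12) = {F:.2f}, p = {p:.4f}")
# F is about 9.13, p < .01, so H0 is rejected: error rates differ among the keyboards.
```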
2-WAY ANOVA
Correlation & Regression
Correlation
 Correlation measures the strength and the direction of the relationship between two or more variables.
 A correlation has three components:
– The strength of the coefficient
– The direction of the relationship
– The form of the relationship
 The strength of the coefficient is indicated by the absolute value of the coefficient.
– The closer the value is to 1.0, either positive or negative, the stronger or more linear the relationship.
– The closer the value is to 0, the weaker or less linear the relationship.
Correlation
 The direction of the coefficient is indicated by the sign of the correlation coefficient.
– A positive coefficient indicates that as one variable (X) increases, so does the other (Y).
– A negative coefficient indicates that as one variable (X) increases, the other variable (Y) decreases.
– The form of the relationship: the form of the relationship is linear.
 In correlation, variables are not identified as independent or dependent because the researcher is measuring the one relationship that is mutually shared between the two variables.
– As a result, causality should not be implied with correlation.
Correlation
 Remember, the correlation coefficient can only measure a linear relationship.
 A zero correlation indicates no linear relationship; however, it does not indicate no relationship at all.
 A coefficient of zero rules out a linear relationship,
 but a curvilinear relationship could still exist.
– The scatterplots below illustrate this point:
(Scatterplots: "No Relationship, r = 0" and "No Linear Relationship, r = 0")
The Correlation Is Based on a Statistic Called Covariance
Variance and covariance are used to measure the quality of an item in a test; reliability and validity measure the quality of the entire test.
 σ² = SS/N is used for one set of data.
Variance is the degree of variability of scores from the mean.
The Correlational Method
SS, Standard Deviations, and Variances
Example scores: X = 1, 2, 4, 5
Population:  σ² = SS/N    σ = √(SS/N)
Sample:      s² = SS/(n − 1) = SS/df    s = √(SS/df)
SS = Σ(X − μ)²  or  SS = ΣX² − (ΣX)²/N
SS = Sum of Squared Deviations from the Mean
Variance
Example scores: X = 1, 2, 4, 5
Population:  σ² = SS/N
Sample:      s² = SS/(n − 1) = SS/df
SS = Σ(X − μ)²  or  SS = ΣX² − (ΣX)²/N
SS = Sum of Squared Deviations from the Mean
Covariance
 Correlation is based on a statistic called covariance (COVxy or Sxy).
 r = SP / √(SSx · SSy)
 Covariance is a number that reflects the degree to which two variables vary together.
 Original data:
 X  Y
 1  3
 2  6
 4  4
 5  7
Covariance
COVxy = SP / (n − 1)
There are two ways to calculate SP:
SP = ΣXY − (ΣX · ΣY)/N   (computational formula)
SP = Σ(X − μx)(Y − μy)   (definitional formula)
SP requires two sets of data; SS requires only one set of data.
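A short sketch, in plain Python, that applies both SP formulas, the covariance, and the Pearson r formula to the X/Y data shown above:

```python
# SP, covariance, and Pearson r for the X/Y data from the covariance slides
X = [1, 2, 4, 5]
Y = [3, 6, 4, 7]
n = len(X)
mx, my = sum(X) / n, sum(Y) / n

SP = sum((x - mx) * (y - my) for x, y in zip(X, Y))                 # definitional SP = 6
SP_comp = sum(x * y for x, y in zip(X, Y)) - sum(X) * sum(Y) / n    # computational SP = 6
cov_xy = SP / (n - 1)                                               # sample covariance = 2.0

SSx = sum((x - mx) ** 2 for x in X)                                 # 10
SSy = sum((y - my) ** 2 for y in Y)                                 # 10
r = SP / (SSx * SSy) ** 0.5                                         # Pearson r = 0.6
print(SP, SP_comp, cov_xy, r)
```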
The Correlational Method
 Correlational data can be graphed, and a "line of best fit" can be drawn.
 Correlations:
 1. Pearson
 2. Spearman
 3. Point-Biserial Correlation
 4. Partial Correlation
Types of Correlation
 In correlational research we use continuous variables (interval or ratio scale) for the Pearson correlation (for linear relationships).
 If it is difficult to measure a variable on an interval or ratio scale, then we use the Spearman correlation.
 The Spearman correlation uses ordinal or rank-ordered data.
 The Spearman correlation measures the consistency of a relationship (monotonic relationship). Example on the next slide.
Monotonic Transformation
Ranks are rank-ordered numbers (data) on an ordinal scale; for example, 1, 2, 3, 4 or 2, 4, 6, 8, 10.
 The Spearman correlation can be used to measure the degree of monotonic relationship between two variables.
Example of Monotonic Data
X     Y
22    87
25    102
19    10
6     5
Types of Correlation
 Ex. A teacher may feel confident about rank-ordering students' leadership abilities but would find it difficult to measure leadership on some other scale.
 The Point-Biserial Correlation:
 However, we can use both continuous and discrete variables (data) in the point-biserial correlation (it can be a substitute for the two independent sample t-test).
 In special situations we can use partial correlations.
The Point-Biserial Correlation
 The point-biserial correlation is used to measure the relationship between two variables in situations in which one variable consists of regular, numerical scores (non-dichotomous), but the second variable has only two values (dichotomous).
 We can calculate the correlation from the t-test.
 r² = Coefficient of Determination, which measures effect size (like d)
 r² = t² / (t² + df)
 r = √r²
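As an illustration of this conversion (using t = 4.00 and df = 18, the values obtained for the Sesame Street problem earlier), the calculation is a one-liner:

```python
from math import sqrt

# Point-biserial r from an independent-samples t: r^2 = t^2 / (t^2 + df)
t, df = 4.0, 18                        # from the Sesame Street example
r_squared = t ** 2 / (t ** 2 + df)     # about 0.47: proportion of variance explained
r = sqrt(r_squared)                    # about 0.69
print(f"r^2 = {r_squared:.2f}, r = {r:.2f}")
```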
The Correlational Method
 Correlation is the degree to which events or characteristics vary with each other.
– Measures the strength of a relationship
– Does not imply cause and effect
 The people chosen for a study are its subjects or participants, collectively called a sample.
– The sample must be representative.
A Partial Correlation
 Measures the relationship between two variables while controlling for the influence of a third variable by holding it constant.
 Ex. The correlation between churches and crime.
The Correlational Method
Correlational data can be graphed, and a "line of best fit" can be drawn.
Positive Correlation
Positive correlation: variables change in the same direction.
Negative Correlation
Negative correlation: variables change in the opposite direction.
No Correlation
Unrelated: no consistent relationship.
The Correlational Method
 The magnitude (strength) of a correlation is also important.
– High magnitude = variables vary closely together; points fall close to the line of best fit
– Low magnitude = variables do not vary as closely together; points are loosely scattered around the line of best fit
The Correlational Method
 Direction and magnitude of a correlation are often calculated statistically.
– Called the "correlation coefficient," symbolized by the letter "r"
 Sign (+ or −) indicates direction.
 Number (from 0.00 to 1.00) indicates magnitude:
 0.00 = no consistent relationship
 +1.00 = perfect positive correlation
 −1.00 = perfect negative correlation
 Most correlations found in psychological research fall far short of "perfect."
The Correlational Method
 Correlations can be trusted based on statistical probability.
– "Statistical significance" means that the finding is unlikely to have occurred by chance.
 By convention, if there is less than a 5% probability that findings are due to chance (p < .05), results are considered "significant" and thought to reflect the larger population.
– Generally, confidence increases with the size of the sample and the magnitude of the correlation.
The Correlational Method
 Advantages of correlational studies:
– Have high external validity
– Can generalize findings
– Can repeat (replicate) studies on other samples
 Difficulties with correlational studies:
– Lack internal validity
– Results describe but do not explain a relationship
External & Internal Validity
 External Validity
External validity addresses the ability to generalize your study to other people and other situations.
 Internal Validity
Internal validity addresses the "true" causes of the outcomes that you observed in your study. Strong internal validity means that you not only have reliable measures of your independent (predictor) and dependent (criterion) variables BUT also a strong justification that causally links your independent variables (IV) to your dependent variables (DV).
The Correlational Method: Pearson
 r = SP / √(SSx · SSy)
 Original data:
 X  Y
 1  3
 2  6
 4  4
 5  7
SP requires two sets of data; SS requires only one set of data.
df = n − 2
The Correlational Method: Spearman
 r = SP / √(SSx · SSy), computed on the ranks
 Original data      Ranks
 X  Y               X  Y
 1  3               1  1
 2  6               2  3
 4  4               3  2
 5  7               4  4
SP requires two sets of data; SS requires only one set of data.
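A sketch with SciPy's spearmanr on the same four pairs; it is equivalent to computing the Pearson r on the ranks shown above:

```python
from scipy import stats

# Spearman correlation: Pearson r applied to the ranks of X and Y
X = [1, 2, 4, 5]
Y = [3, 6, 4, 7]
rho, p = stats.spearmanr(X, Y)
print(f"rho = {rho:.2f}")   # 0.80; ranks of X are (1,2,3,4), ranks of Y are (1,3,2,4)
```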
Percentage of Variance Accounted for by the Treatment (similar to Cohen's d) is known as ω² (Omega Squared), also called the Coefficient of Determination
r² = t² / (t² + df)
Coefficient of Determination
 If r = 0.80, then r² = 0.64.
 This means 64% of the variability in the Y scores can be predicted from the relationship with X.
Problems
Test the hypothesis for the following n=4 pairs of scores for a correlation.
r = SP / √(SSx · SSy)
Original data:
X  Y
1  3
2  6
4  4
5  7
Problems
 Step 1)
H0: ρ = 0 (There is no population correlation.)
H1: ρ ≠ 0 (There is a real correlation.)
ρ (rho): the population correlation.
We will set alpha at α = .01
STEP 2
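A sketch of the computation with SciPy's pearsonr, which returns r together with a two-tailed p-value based on df = n − 2:

```python
from scipy import stats

# Pearson correlation test for the n = 4 pairs (alpha = .01, two-tailed)
X = [1, 2, 4, 5]
Y = [3, 6, 4, 7]
r, p = stats.pearsonr(X, Y)
print(f"r = {r:.2f}, p = {p:.2f}")
# r = 0.60, p = 0.40 > .01, so H0 is not rejected: with only n = 4 pairs,
# this correlation is not significant.
```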
Problems
Test the hypothesis for the following set of n=5 pairs of scores for a positive correlation.
Original data:
X   Y
0   2
10  6
4   2
8   4
8   6
Problems
 Step 1)
H0: ρ ≤ 0 (The population correlation is not positive.)
H1: ρ > 0 (The population correlation is positive.)
ρ (rho): the population correlation.
We will set alpha at α = .05
Bi-Variate Regression Analysis
 Bi-variate regression analysis extends correlation and attempts to measure the extent to which a predictor variable (X) can be used to make a prediction about a criterion measure (Y).
(Path diagram: X → Y, with error E)
Bi-variate regression uses a linear model to predict the criterion measure. The formula for the predicted score is:
Y' = a + bX
Bivariate Regression
 The components of the line of best fit (Y' = a + bX) are:
– the Y-intercept (a), the constant
– the slope (b)
– the variable (X)
Bivariate Regression
 The Y-intercept is the average value of Y when X is zero.
– The Y-intercept is also called the constant,
– because this is the amount of Y that is constant or present when the influence of X is null (0).
 The slope is the average change in Y for a corresponding one-unit change in X.
– Thus, the slope represents the direction and intensity of the line.
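A sketch with SciPy's linregress, applied to the X/Y pairs used in the correlation slides (not the GPA example), showing the intercept a and slope b and a predicted score:

```python
from scipy import stats

# Bivariate regression Y' = a + bX on the X/Y data from the correlation slides,
# where b = SP/SSx and a = My - b*Mx
X = [1, 2, 4, 5]
Y = [3, 6, 4, 7]
res = stats.linregress(X, Y)
print(f"Y' = {res.intercept:.2f} + {res.slope:.2f}X")                 # Y' = 3.20 + 0.60X
print(f"prediction for X = 3: {res.intercept + res.slope * 3:.2f}")   # 5.00
```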
Regression and Prediction
Y = bX + a (the regression line)
Bivariate Regression
 Line of best fit: Y' = 2.635 + .204X
With this equation, a predicted score may be made for any value of X within the range of the data.
a = 2.635 (Y-intercept) and b = .204 (slope)
(Plot: High School Grade Point Average on the X-axis vs. First Year Grade Point Average on the Y-axis, with the regression line.)
Multiple Regression Analysis
 Multiple regression analysis is an extension of bi-variate regression, in which several predictor variables are used to predict one criterion measure (Y).
– In general, this method is considered to be advantageous, since seldom can an outcome measure be accurately explained by one predictor variable.
(Path diagram: X1, X2, X3 → Y, with error E)
Y' = a + b1X1 + b2X2 + b3X3
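A sketch of fitting the multiple-regression equation with NumPy's least-squares solver; the X1, X2, X3, and Y values below are made up purely for illustration and do not come from the slides:

```python
import numpy as np

# Multiple regression sketch: Y' = a + b1*X1 + b2*X2 + b3*X3 via least squares.
# Hypothetical predictor and criterion values, for illustration only.
X1 = [2, 4, 5, 7, 8, 10]
X2 = [1, 3, 2, 5, 4, 6]
X3 = [8, 6, 7, 4, 5, 2]
Y  = [10, 14, 15, 20, 21, 26]

design = np.column_stack([np.ones(len(Y)), X1, X2, X3])   # the column of 1s gives the intercept a
coeffs, *_ = np.linalg.lstsq(design, np.array(Y), rcond=None)
a, b1, b2, b3 = coeffs
print(f"Y' = {a:.2f} + {b1:.2f}*X1 + {b2:.2f}*X2 + {b3:.2f}*X3")
```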
Relationship of Statistical Tests
 Does this diagram make sense to you?
 (Diagram: Structural Model, Multiple Regression, ANOVA, t ratio, Correlation)
PARAMETRIC AND NONPARAMETRIC STATISTICAL TESTS
 Parametric tests are more accurate and have 3 assumptions:
 1. Random selection
 2. Independence of observations
 3. The sample is taken from a normal population with a normal distribution
NONPARAMETRIC STATISTICAL TESTS
 CHI-SQUARE: It is like a frequency distribution.
 It is used for comparative studies.
 Ex. Of the two leading brands of cola, which is preferred by most Americans?
CHI-SQUARE
 χ² = Σ(fo − fe)² / fe
 df = C − 1
 fe = n / C
CHI-SQUARE
df = C − 1
Problems
 A psychologist examining art appreciation selected an abstract painting that had no obvious top or bottom. Hangers were placed on the painting so that it could be hung with any one of the four sides at the top. The painting was shown to a sample of n=50 participants, and each was asked to hang the painting in the orientation that looked correct. The following data indicate how many people chose each of the four sides to be placed at the top.

 fo:  Top up (correct)   Bottom up   Left side up   Right side up
      18                 17          7              8
Problems
 The question for the hypothesis test is whether there are any preferences among the four possible orientations. Are any of the orientations selected more (or less) often than would be expected simply by chance?
 We will set alpha at α = .05
Problems
 Step 1)
H0: fo = fe (no preference for any specific orientation)
H1: fo ≠ fe (preference for a specific orientation)
Step 2
df = C − 1
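A sketch of the remaining computation with SciPy's chisquare, using the observed frequencies from the data slide and equal expected frequencies fe = n/C = 12.5:

```python
from scipy import stats

# Painting-orientation problem (alpha = .05): observed counts vs. equal expected counts
observed = [18, 17, 7, 8]               # top up, bottom up, left side up, right side up
expected = [sum(observed) / 4] * 4      # fe = n/C = 50/4 = 12.5 for each orientation
chi2, p = stats.chisquare(observed, expected)
print(f"chi2({len(observed) - 1}) = {chi2:.2f}, p = {p:.4f}")
# chi2(3) is about 8.08, which exceeds the critical value of 7.81 (p < .05),
# so H0 is rejected: the orientations are not chosen equally often.
```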