Faking in personnel selection: Does it matter and can we do anything about it?

Eric D. Heggestad
University of North Carolina - Charlotte

Educational Testing Service Mini-Conference
October 13th & 14th, 2006

Four Questions About Faking in Personnel Selection Contexts

1. Can people fake?
2. Do applicants fake?
3. Does faking matter?
   - I will talk about one project
4. What do we do about it?
   - I will talk about one project

Does faking matter? Effects on Validity and Selection
Mueller-Hanson, Heggestad, & Thornton (2003)

Participants completed personality and criterion measures in a lab setting.
- Personality measure
  - Achievement Motivation Inventory
- Criterion measure
  - A speeded ability test with no time limit
  - Participants could leave when they wanted; opportunity for normative feedback
- Groups
  - Honest (n = 240) vs. faking (n = 204)

Means & Standard Deviations

             Honest Group   Faking Group   Effect Size
Predictor        214.7          225.6          0.41
Criterion         40.5           40.1         -0.05
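
The effect sizes above are standardized mean differences. A minimal sketch of that computation (Cohen's d with a pooled standard deviation); the standard deviations are not reproduced in this transcript, so the data below are placeholders:

import numpy as np

def cohens_d(honest, faking):
    """Standardized mean difference using the pooled standard deviation."""
    honest, faking = np.asarray(honest, float), np.asarray(faking, float)
    n1, n2 = len(honest), len(faking)
    pooled_var = ((n1 - 1) * honest.var(ddof=1) +
                  (n2 - 1) * faking.var(ddof=1)) / (n1 + n2 - 2)
    return (faking.mean() - honest.mean()) / np.sqrt(pooled_var)

# Placeholder data matched to the reported means and group sizes; the SD of 26
# is assumed purely for illustration (the slide does not reproduce the SDs).
rng = np.random.default_rng(0)
honest = rng.normal(214.7, 26.0, size=240)
faking = rng.normal(225.6, 26.0, size=204)
print(round(cohens_d(honest, faking), 2))  # close to (225.6 - 214.7) / 26 ~ 0.42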

Criterion-Related Validity

              Honest Group   Faking Group
Full groups       .17*           .05
Upper third       .20*           .07
Lower third       .26*           .45*

* p < .05
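
A hedged sketch of the subgroup-validity computation behind the table above, assuming the thirds are defined on the predictor distribution (function and variable names are mine):

import numpy as np

def validity_by_thirds(predictor, criterion):
    """Correlate predictor and criterion in the full sample and within the
    upper and lower thirds of the predictor distribution (assumed split)."""
    predictor = np.asarray(predictor, float)
    criterion = np.asarray(criterion, float)
    order = np.argsort(predictor)
    k = len(predictor) // 3
    r = lambda idx: np.corrcoef(predictor[idx], criterion[idx])[0, 1]
    return {
        "full group": r(np.arange(len(predictor))),
        "upper third": r(order[-k:]),
        "lower third": r(order[:k]),
    }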

But Validity is Only Skin Deep

Important to look at selection
- Groups were combined and various selection ratios were examined (sketched below)
- Variables examined
  - Percent of selectees from each group
  - Performance of those selected
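
A rough sketch of that kind of analysis, assuming straightforward top-down selection on the predictor from the pooled honest + faking sample (function and variable names are mine):

import numpy as np

def selection_outcomes(predictor, criterion, is_faking, selection_ratio):
    """Select top-down on the predictor from the combined pool, then report the
    percent of selectees who came from the faking group and the mean criterion
    score of those selected."""
    predictor = np.asarray(predictor, float)
    criterion = np.asarray(criterion, float)
    is_faking = np.asarray(is_faking, bool)
    n_selected = max(1, int(round(selection_ratio * len(predictor))))
    selected = np.argsort(predictor)[::-1][:n_selected]  # highest predictor scores
    return {
        "pct_selectees_faking": 100.0 * is_faking[selected].mean(),
        "mean_criterion_of_selectees": criterion[selected].mean(),
    }

# e.g. selection_outcomes(pred, crit, faking_flag, 0.10) for a 10% selection ratio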

Effects on Selection

[Figure: Percent of selectees drawn from the honest vs. faking groups at selection ratios ranging from 90% down to 10%. Note: the honest group made up 54% of the sample.]

Effects on Selection

[Figure: Mean criterion performance of selectees from the honest vs. faking groups at selection ratios ranging from 90% down to 10%.]

Conclusions

Faking appears to have...
- An impact on the criterion-related validity of our predictor
  - Most noticeably at the high end of the distribution
- An impact on the quality of decisions
  - Low-performing fakers more likely to be selected in top-down contexts

What do we do about faking?

What Do We Do About Faking?
Approach 1: Detection and Correction

Tries to correct faking that has already occurred
- Score corrections
  - Not successful (Ellingson, Sackett, & Hough, 1999; Schmitt & Oswald, 2006)
- IRT work
- Retesting

What Do We Do About Faking?
Approach 2: Prevention

Many prevention strategies
- Warnings
- Subtle items
- Multidimensional forced-choice (MFC) response formats

What is an MFC Format?

Dichotomous quartet format
- Item contains four statements
- Each statement represents a different trait
- 2 statements positively worded, 2 statements negatively worded
- Indicate "Most Like Me" and "Least Like Me"

Example MFC Item

                                                Most Like Me   Least Like Me
Avoid difficult reading material (-)
Only feel comfortable with friends (-)
Believe that others have good intentions (+)
Make lists of things to do (+)

(The respondent places one X in the "Most Like Me" column and one X in the "Least Like Me" column.)
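
One plausible way to represent and score a quartet like this; the +1/-1 most/least keying below is a common convention rather than necessarily the scoring used in these studies, and the trait labels are generic because the slide does not say which trait each statement measures:

# A quartet: four statements, each keyed to a different trait, two keyed
# positively (+1) and two negatively (-1), as described above.
quartet = [
    ("Avoid difficult reading material",         "Trait A", -1),
    ("Only feel comfortable with friends",       "Trait B", -1),
    ("Believe that others have good intentions", "Trait C", +1),
    ("Make lists of things to do",               "Trait D", +1),
]

def score_quartet(scores, most_idx, least_idx):
    """Credit the 'Most Like Me' pick with its keyed value and debit the
    'Least Like Me' pick; the other two statements go unscored."""
    for idx, direction in ((most_idx, +1), (least_idx, -1)):
        _, trait, key = quartet[idx]
        scores[trait] = scores.get(trait, 0) + direction * key
    return scores

# Example: "Make lists of things to do" marked Most Like Me and
# "Avoid difficult reading material" marked Least Like Me.
print(score_quartet({}, most_idx=3, least_idx=0))  # {'Trait D': 1, 'Trait A': 1}

Because each quartet distributes credit only between two of its four statements, the resulting scale scores are interdependent, which is the (partially) ipsative property noted on a later slide.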

MFC Formats

Appears to be faking resistant (Christiansen et al., 1998; Jackson et al., 2000)

Example from Jackson et al. (2000)
- Likert-type format effect size = .95
- MFC format effect size = .32

However...

Normative vs. Ipsative
- MFC measures typically provide partially ipsative measurement
- Selection settings require normative assessment

Also, evaluations have focused on group-level analyses

Forced-Choice as Prevention?
Heggestad, Morrison, Reeve, & McCloy (2006)

Two studies
- Study 1 – Do MFC measures provide normative trait information?
- Study 2 – Are MFC measures resistant to faking at the individual level?

Study 1
Do MFC measures provide normative information?

Participants (n = 307) completed three measures under honest instructions
- NEO-FFI
- IPIP Likert measure
- IPIP MFC measure
  - Conducted three data collections to create this measure

Study 1
Do MFC measures provide normative information?

Logic: If the MFC measure provides normative information, then correspondence between...
- the IPIP-Likert and IPIP-MFC scales should be quite good
- each IPIP measure and the NEO-FFI should be similar

Study 1
Do MFC measures provide normative information?

Correlations

                    IPIP Likert      NEO with       NEO with
                    with IPIP MFC    IPIP Likert    IPIP MFC
Stability               .81             .68            .59
Extroversion            .87             .67            .58
Openness                .75             .76            .65
Agreeableness           .75             .70            .64
Conscientiousness       .83             .81            .71

Study 1
Do MFC measures provide normative information?

We also defined correspondence as the mean absolute percentile difference across measures:

Correspondence = sum over persons of | %tile(Form 1) - %tile(Form 2) | / n
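
A minimal sketch of that index as I read the slide: convert each person's score on each form to a percentile rank, then average the absolute differences over the n participants:

import numpy as np

def mean_percentile_difference(form1_scores, form2_scores):
    """Mean absolute difference in percentile rank between two forms,
    computed across the same n respondents."""
    def percentile_rank(x):
        x = np.asarray(x, float)
        ranks = x.argsort().argsort() + 1  # 1-based ranks; ties broken arbitrarily
        return 100.0 * ranks / len(x)
    diffs = np.abs(percentile_rank(form1_scores) - percentile_rank(form2_scores))
    return diffs.mean()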

Study 1
Do MFC measures provide normative information?

Mean Percentile Rank Differences

                    IPIP Likert      NEO with       NEO with
                    with IPIP MFC    IPIP Likert    IPIP MFC
Stability              14.00           18.29          21.13
Extroversion           11.38           18.61          20.49
Openness               15.22           15.28          18.58
Agreeableness          16.39           17.63          19.31
Conscientiousness      12.61           14.07          16.96

Study 1
Do MFC measures provide normative information?

Conclusions
- MFC seems to do a reasonable job of capturing normative trait information
  - People can be compared directly!

Study 2
Are MFC measures resistant to faking at individual level?

Participants (n = 282) completed three measures
- NEO-FFI under honest instructions
- IPIP Likert under faking instructions
- IPIP MFC under faking instructions

Replication of Previous Findings

Effect Sizes

                    IPIP Likert   IPIP MFC
Stability               0.75         0.61
Extroversion            0.65         0.33
Openness                0.36         0.13
Agreeableness           0.65         0.07
Conscientiousness       1.23         1.20

Study 2
Are MFC measures resistant to faking at individual level?

Logic: If MFC is resistant to faking at the individual level, then...
- correspondence between the NEO-FFI (honest) and the IPIP-MFC should be high (MFC scores should look like honest scores)
and
- correspondence between the NEO-FFI (honest) and the IPIP-Likert (fakeable) should be lower
- correspondence between the IPIP-MFC and the IPIP-Likert should be lower

Study 2
Are MFC measures resistant to faking at individual level?

Correlations

                    IPIP Likert      NEO with       NEO with
                    with IPIP MFC    IPIP Likert    IPIP MFC
Stability               .62             .37            .26
Extroversion            .61             .37            .36
Openness                .59             .53            .55
Agreeableness           .48             .50            .52
Conscientiousness       .68             .40            .39

Study 2
Are MFC measures resistant to faking at individual level?

Mean Percentile Rank Differences

                    IPIP Likert      NEO with       NEO with
                    with IPIP MFC    IPIP Likert    IPIP MFC
Stability              20.23           25.29          28.87
Extroversion           21.09           24.23          26.12
Openness               20.44           21.85          20.69
Agreeableness          24.33           21.54          22.82
Conscientiousness      18.05           23.47          23.75

Study 2
Are MFC measures resistant to faking at individual level?

Conclusion
- MFC not a solution to faking
  - Can fake specific scales
  - Not faking resistant at individual level

Summary and Conclusion

Faking does impact scores
- Changes the nature of the score
- Not likely to have a big effect on criterion-related validity
- Could have notable implications for selection

The dichotomous quartet response format does not offer a viable remedy