Structural equation modeling with Mplus



E. Kevin Kelloway, Ph.D.
Canada Research Chair in
Occupational Health Psychology



Day 1: Familiarization with the Mplus environment – Varieties of regression
Day 2: Introduction to SEM – Path modeling, CFA and latent variable analysis
Day 3: Advanced techniques – Longitudinal data, multi-level SEM, etc.








Day 1 schedule:
0900 - 1000   Introduction: The Mplus Environment
1000 - 1015   Break
1015 - 1100   Using Mplus: Regression
1100 - 1200   Variations on a theme: Categorical, Censored and Count Outcomes
1200 - 1300   Break
1300 - 1400   Multilevel models: Some theory
1400 - 1415   Break
1415 - 1530   Estimating multilevel models in Mplus





What is Mplus?
A statistical modeling program that allows for a wide variety of models and estimation techniques
Explicitly designed to "do everything"
Techniques for handling all kinds of data (continuous, categorical, zero-inflated, etc.)
Allows for multilevel and complex data
Allows the integration of all of these techniques

Observed variables
  x  background variables (no model structure)
  y  continuous and censored outcome variables
  u  categorical (dichotomous, ordinal, nominal) and count outcome variables
Latent variables
  f  continuous latent variables – interactions among f's
  c  categorical latent variables – multiple c's




BASE program – does regression and most versions of SEM
Mixture add-on – adds mixture analysis (using categorical latent variables)
Multilevel add-on – adds the potential for multi-level analysis
Recommendation: the Combo Platter (both add-ons)




Batch processor
Text commands (no graphical interface) and keywords
Commands can come in any order in the file
Three main tasks:
  1. GET THE DATA into Mplus and DESCRIBE IT
  2. ESTIMATE THE MODEL of INTEREST
  3. REQUEST THE DESIRED OUTPUT


10 Commands
• TITLE – provides a title
• DATA (required) – describes the dataset
• VARIABLE (required) – names/identifies variables
• DEFINE – computes/transforms variables
• ANALYSIS – technical details of the analysis
• MODEL – the model to be estimated
• OUTPUT – specifies the output
• SAVEDATA – saves the data
• PLOT – graphical output
• MONTECARLO – Monte Carlo analysis

Comments are denoted by ! and can be anywhere in the file

"is", "are" and "=" can generally be used interchangeably:
  Variable: Names is Bob
  Variable: Names = Bob
  Variable: Names are Bob
"-" denotes a range:
  Variable: Names = Bob1 - Bob5
: ends each command
; ends each line




Step 1: Move your data into a ".dat" (ASCII) file – SPSS or Excel will do this
Step 2: Create the command file with DATA and VARIABLE statements
Step 3 (optional): I always ask for the sample statistics so that I can check the accuracy of data reading
OPEN and RUN Day1 Example 1.inp

TITLE: This is an example of how to read data into
  Mplus from an ASCII file
DATA: file is workshop1.dat;
VARIABLE: NAMES are sex age hours location TL PL
  GHQ Injury;
  USEVARIABLES = tl - injury;
OUTPUT: Sampstat;

Exercise: Include the demographic variables in the analysis
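A minimal sketch of one way to do this exercise, assuming the same workshop1.dat file and variable names as above – simply extend the USEVARIABLES range so it starts at sex rather than tl:

TITLE: Example 1 with the demographic variables included
DATA: file is workshop1.dat;
VARIABLE: NAMES are sex age hours location TL PL GHQ Injury;
  USEVARIABLES = sex - injury;   ! the range now includes the demographics
OUTPUT: Sampstat;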







Mplus output:
Repeats the input instructions – check for the proper N, K and number of groups
Describes the analysis – check for accuracy
Reports the results:
  Fit statistics
  Parameter estimates
  Requested information (sample statistics, standardized parameters, etc.)
NOTE: Not all output is relevant to your analysis





N2Mplus – a freeware program that will read SPSS or Excel files
Will create the data file
Will write the Mplus syntax, which can be pasted into Mplus
Limit of 300 variables
Watch variable name lengths (SPSS allows more characters than Mplus does)
General Goal
To predict one variable (the DV or criterion) from a set of other variables (IVs or predictors). IVs may be (and usually are) intercorrelated. Minimize least squares (minimize prediction error) – maximize R.




The correlation is r = Σ(Zx·Zy)/N
The line of best fit (the OLS regression line) is y = mX + b, where
  the slope m = r(SDy/SDx)
  and the intercept b = Ȳ – m·X̄
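A quick worked example with made-up numbers (not from the workshop data): if r = .50, SDy = 10, SDx = 5, Ȳ = 50 and X̄ = 20, then m = .50 × (10/5) = 1.0 and b = 50 – 1.0 × 20 = 30, so the prediction line is ŷ = 1.0X + 30.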


Extension of bivariate regression to the case of multiple predictors
Predictors may be (and usually are) intercorrelated, so we need to partial variance to determine the UNIQUE effect of each X on Y








To specify a simple linear regression you simply add a MODEL command to the file:
  Model: DV on IV1 IV2 IV3 ... IVx;
You also want to request some specific forms of output to get the "normal" regression information. Useful options are:
  SAMPSTAT – sample statistics for the variables
  STANDARDIZED – standardized parameters
  Savedata: Save = Cooks Mahalanobis;
What predicts GHQ?
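Putting the pieces together, a minimal sketch of what such an input file might look like, assuming the workshop1.dat file and variable names used earlier; the choice of TL and PL as predictors and the saved-data file name are illustrative:

TITLE: Regression predicting GHQ
DATA: file is workshop1.dat;
VARIABLE: NAMES are sex age hours location TL PL GHQ Injury;
  USEVARIABLES = tl pl ghq;
MODEL: ghq on tl pl;            ! GHQ regressed on the two leadership scales
OUTPUT: sampstat standardized;
SAVEDATA: file is regdiag.dat;  ! hypothetical file name
  save = cooks mahalanobis;     ! saves influence/distance diagnostics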






Used typically with a dichotomous outcome (also ordered logistic and probit models)
Similar to regression – generates an overall test of goodness of fit
Generates parameters and tests of parameters
Odds ratios
When the split is 50/50, discriminant analysis and logistic regression should give the same result
When the split varies, logistic is preferred



Likelihood chi-square – baseline-to-model comparisons
Parameter tests (B/SE)
Odds ratio – the increase/decrease in the odds of being in one outcome category when the predictor increases by 1 unit (the exponential of B, exp(B))





Specify the outcome as categorical (can be either binary or ordered)
The default estimator (WLSMV) gives you a probit analysis; changing the estimator to ML or MLR gives you a logistic regression
RUN Day1Example3.inp
To dichotomize the outcome (from a multi-category or continuous measure):
  define: cut injury (1);
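A minimal sketch of what such an input file might contain – variable names follow the earlier workshop1.dat example; the predictors and the cut point are illustrative:

TITLE: Logistic regression for injury
DATA: file is workshop1.dat;
VARIABLE: NAMES are sex age hours location TL PL GHQ Injury;
  USEVARIABLES = tl pl injury;
  CATEGORICAL = injury;        ! declare the outcome as categorical
DEFINE: cut injury (1);        ! dichotomize at 1 if needed
ANALYSIS: ESTIMATOR = ML;      ! ML/MLR give logistic; the default gives probit
MODEL: injury on tl pl;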
An Example
Data from a study of metro transit bus drivers (n = 174)
Data on workplace violence (extent to which one has been hit/kicked, attacked with a weapon, or had something thrown at you); 1 = not at all, 4 = 3 or more times
Data cleaning suggests a highly skewed and kurtotic distribution

Descriptive Statistics (SPSS)
violence: N = 170, Minimum = 1.00, Maximum = 3.00, Mean = 1.2353, Std. Deviation = .37623,
Skewness = 1.900 (Std. Error = .186), Kurtosis = 3.677 (Std. Error = .370)
Valid N (listwise) = 170
Scores pile up at 1 (Not at all)
violence: frequency distribution (SPSS)

Value              Frequency   Percent   Valid Percent   Cumulative Percent
1.00                   104       59.8        61.2             61.2
1.33                    36       20.7        21.2             82.4
1.67                    15        8.6         8.8             91.2
2.00                     8        4.6         4.7             95.9
2.33                     6        3.4         3.5             99.4
3.00                     1         .6          .6            100.0
Valid total            170       97.7       100.0
Missing (System)         4        2.3
Total                  174      100.0
More Estimators
Negative Binomial. This distribution can be thought of as the number of trials required to observe k successes and is appropriate for variables with non-negative integer values. If a data value is non-integer, less than 0, or missing, then the corresponding case is not used in the analysis. The fixed value of the negative binomial distribution's ancillary parameter can be any number greater than or equal to 0. When the ancillary parameter is set to 0, using this distribution is equivalent to using the Poisson distribution.
Normal. This is appropriate for scale variables whose values take a symmetric, bell-shaped distribution about a central (mean) value. The dependent variable must be numeric.
Poisson. This distribution can be thought of as the number of occurrences of an event of interest in a fixed period of time and is appropriate for variables with non-negative integer values. If a data value is non-integer, less than 0, or missing, then the corresponding case is not used in the analysis.
Some Observations on Count Data
Counts are discrete, not continuous
Counts are generated by a Poisson distribution (a discrete probability distribution)
Poisson distributions are typically problematic because they:
  are skewed (by definition non-normal)
  are non-negative (cannot have negative predicted values)
  have non-constant variance – variance increases as the mean increases
BUT… Poisson regressions also make some very restrictive assumptions about the data (i.e., that the underlying rate of the DV is the same for all individuals in the population, or that we have measured every possible influence on the DV)
The Negative Binomial Distribution
Allows for more variance than does the Poisson model (less restrictive assumptions)
Can fit a Poisson model and calculate dispersion (deviance/df). Dispersion close to 1 indicates no problem; if there is overdispersion, use the negative binomial
Poisson, but not negative binomial, is available in Mplus
Zero-Inflated Models
Zero-Inflated Poisson Regression (ZIP regression)
Zero-Inflated Negative Binomial Regression (ZINB regression)
Assumes two underlying processes:
  predict whether one scores 0 or not 0
  predict the count for those scoring > 0

Run the example to obtain a Poisson regression – the outcome is specified as a count variable
To obtain a ZIP regression run Day1 Example5
Note that one can specify different models for occurrence and frequency
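A minimal sketch of the relevant syntax, assuming injury from the earlier data set is the count outcome (the predictor is illustrative):

VARIABLE: NAMES are sex age hours location TL PL GHQ Injury;
  USEVARIABLES = tl injury;
  COUNT = injury;              ! Poisson regression
MODEL: injury on tl;

! For a ZIP regression, declare the count with an inflation part instead:
!   COUNT = injury (i);
! and model the two processes separately:
!   MODEL: injury on tl;       ! frequency (count) part
!          injury#1 on tl;     ! occurrence (inflation) part – predicts being a zero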
An Example
What is the correlation between X and Y?

Descriptive Statistics (SPSS)
x: Mean = 8.0000, Std. Deviation = 4.42396
y: Mean = 8.0000, Std. Deviation = 4.42396

Correlations (Listwise N = 15)
Pearson correlation between x and y = .912** (Sig. 2-tailed = .000)



But within each of the three groups the correlation is zero – the groups differ only in their means:
Group 1: r = 0.0, Mean = 3, N = 5
Group 2: r = 0.0, Mean = 8, N = 5
Group 3: r = 0.0, Mean = 13, N = 5
The overall r = .912 is produced entirely by the between-group mean differences.
Introduction
Multi-level data occurs when responses are grouped (nested) within one or more higher-level units of responses
E.g.:
  Employees nested within teams/groups
  Longitudinal data – observations nested within individuals
Creates a series of problems that may not be accounted for in standard techniques (e.g., regression, SEM, etc.)
Some Problems with Multilevel Data
Individuals within each group are more alike than individuals from different groups (variance is distorted) – a violation of the assumption of independence
We may want to predict level 1 responses from level 2 characteristics (i.e., does company size predict individual job satisfaction?). If we analyse at the lowest level only we under-estimate variance and hence standard errors, leading to inflated Type 1 errors – we find effects where they don't exist
Aggregation to the highest level may distort the variables of interest (or may not be appropriate)
Two Paradoxes
Simpson's – completely erroneous conclusions may be drawn if grouped data, drawn from heterogeneous populations, are collapsed and analyzed as if drawn from a single population
Ecological – the mistake of assuming that the relationship between variables at the aggregated (higher) level will be the same at the disaggregated (lower) level
What are multi-level models?
Essentially an extension of a regression model: Y = mX + b + error
Multilevel models allow for variation in the regression parameters (intercepts (b) and slopes (m)) across the groups comprising your sample
Also allow us to predict that variation – to ask why groups might vary in intercepts or slopes
Intercept differences imply mean differences across groups
Slope differences indicate different relationships (e.g., correlations) across groups
The Multilevel model
Attempting to explain (partition) variance in the DV
Why don't we all score the same on a given variable?
The simplest explanation is error – an individual's score is the grand mean + error
If employees are in groups, then the variance of the level 1 units has at least 2 components – the variance of individuals around the group mean (within-group variance) and the variance of the group means around the grand mean (between-group variance)
This is known as the intercepts-only, "variance components" or "unconditional" model – it is a baseline that incorporates no predictors
The Multilevel model (cont'd)
Can introduce predictors either at level 1 or level 2 or both to further explain variance
Can allow the effects of level 1 predictors to vary across groups (random slopes)
Can examine interactions within and across levels
Can incorporate quadratic terms, etc.
File Handling: Aggregation
To create level 2 observations we often need to aggregate variables to the higher level and to merge the aggregated data with our level 1 data. To aggregate you need to specify [a] the variables to be aggregated, [b] the method of aggregation (sum, mean, etc.) and [c] the break variable (the definition of level 2)
SPSS allows you to aggregate and save group-level data to the current file using the AGGREGATE command
Mplus allows you to do this within the Mplus run
Notes on Aggregation
If you choose to aggregate, then there should be some empirical support (i.e., evidence of similar responses within groups). Some typical measures are:
ICC(1) – the intraclass correlation: the extent to which variance is attributable to group differences. From ANOVA, ICC(1) = (MSb - MSw) / (MSb + (C - 1)MSw), where C = average group size (worked example below)
ICC(2) – the reliability of the group means: ICC(2) = (MSb - MSw) / MSb
Rwg (multiple variants) – indices of within-group agreement
MPLUS calculates the ICC for random intercept models
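A quick worked example with made-up ANOVA results: if MSb = 10, MSw = 4 and the average group size is C = 6, then ICC(1) = (10 - 4) / (10 + 5 × 4) = 6/30 = .20 and ICC(2) = (10 - 4) / 10 = .60 – about 20% of the variance lies between groups, and the group means are moderately reliable.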
Centering Predictors
Centering a variable helps us to interpret the effect of predictors. In the simplest sense, centering involves subtracting the mean from each score (resulting in a distribution of deviation scores that has a mean of 0)
Centering (among other things) helps with convergence by imposing a common scale
GRAND MEAN centering – involves subtracting the sample mean from each score
GROUP MEAN centering – involves subtracting the group mean from each score – must be done manually
Centering (cont'd)
Grand mean – each score is measured as a deviation from the grand mean. The intercept is the score of an individual who is at the mean of all predictors – "the average person"
Group mean – each score is measured as a deviation from the group mean. The intercept is the score of an individual who is at the mean of all predictors in the group – "the average person in group X"
Grand mean centering is the same transformation for all cases – for fixed main effects and overall fit it will give the same results as raw data
Group mean centering is different for each group – different results
Centering (cont'd)
Grand mean – helps model fitting, aids interpretation (a meaningful 0), may reduce collinearity when testing interactions, between model parameters, or with squared effects – but may reduce meaning if raw scores actually "mean something"
Group mean – helps model fitting, can remove collinearity if you are including both group (aggregate) and individual measures of the same construct in the model (aggregate data explains between-group variance and individual-level data explains within-group variance)
A general recommendation
Grand mean centering may be appropriate when the underlying model is either incremental (group effects add to individual-level effects) or mediational (group effects exert influence through the individual)
Group mean centering may be more appropriate when testing cross-level interactions
Hofmann & Gavin (1998) – Journal of Management
Power and Sample Size
How many subjects? = how long is a piece of string?
Calculations are complex and depend on intraclass correlations, sample size, effect size, etc.
In general, power at Level 1 increases with the number of observations and power at Level 2 increases with the number of groups
Hox (2002) recommends 30 observations in each of 30 groups; Heck & Thomas (2000) suggested 20 groups with 30 observations in each
Others suggest that even k = 50 groups is too small
Practical constraints likely rule
Better to have a large number of groups with fewer individuals in each group than a small number of groups with large group sizes
Convergence
Occasionally (about 50% of the time) the program will not converge on a solution and will report a partial solution (i.e., not all parameters)
In my experience lack of convergence is a direct function of sample size (small samples = convergence failures)
The easiest fix is to ensure that this is not a scaling issue – i.e., that all variables are measured on roughly the same metric (standardize)
The single most frustrating aspect of multi-level models

A plan of analysis
1. Ensure data are structured/arranged properly (aggregated, centered, etc.) – most of this can be done in MPLUS
2. Run a null model – the null model estimates a grand-mean-only model and provides a baseline for comparison
3. Run the unconditional model (grouping but no predictors) – assess ICC(1) and whether varying intercepts is appropriate; a low ICC(1) leads one to question the importance of a multilevel model (although this can be controversial)
A plan of analysis (cont'd)
4. Incorporate level 1 predictors. Assess the change in fit, level 1 variance and level 2 variance – starting to move into conditional models; this is equivalent to modeling our data as a series of parallel lines (one for each group) – slopes are the same but intercepts are allowed to vary
5. Allow the slopes to vary. Assess fit, change in variance, etc. Can now also estimate the covariance between intercept and slope effects, which may be of interest
6. Incorporate level 2 predictors – these explain group (team) but not individual-level variance
Testing Models: -2 Log Likelihood
A global test of model adequacy is given by the -2 log likelihood statistic – also known as the model deviance
We can examine the change in deviance as models are made more complex
There is no equivalent difference test under REML (restricted/residual maximum likelihood) estimation
Testing Models: Percentage of variance
There is no direct equivalent to an R-squared because there are multiple portions of variance
Can focus on explaining variance at either the group or the individual level (i.e., reducing the residual)
One useful approach is to calculate the proportion of variance explained at each step of the model:
  (residual variance before the predictor is added - residual variance after it is added) / residual variance before the addition of the predictor
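For example (made-up numbers): if the within-group residual variance is .80 in the unconditional model and drops to .60 after a level 1 predictor is added, the predictor explains (.80 - .60)/.80 = .25, i.e., about 25% of the within-group variance.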

Testing Models: Parameter tests
Statistical tests of the individual parameters
Analogous to the tests of the regression (B) coefficients in regression
Each tests the null hypothesis that the parameter is 0
Implementing the Analysis
Run Day1Example6 to read in the data. Measures include GHQ, transformational leadership and a team identifier
Sample: total N = 851 in 31 locations
Start by estimating the variance components (random intercept only) model:
  On the variable statement specify usevariables = ghq team;
  Specify cluster = team;
  Add an analysis command:
    Analysis: Type = twolevel;
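Putting those pieces together, a minimal sketch of the variance-components input file – the data file name and variable list are hypothetical (substitute those used in Day1Example6), and team appears here only as the cluster identifier:

TITLE: Variance components (random intercept only) model for GHQ
DATA: file is teams.dat;           ! hypothetical file name
VARIABLE: NAMES are team tfl ghq;  ! illustrative variable list
  USEVARIABLES = ghq;
  CLUSTER = team;
ANALYSIS: TYPE = twolevel;
MODEL: %WITHIN%
  ghq;                             ! within-group (level 1) variance
  %BETWEEN%
  ghq;                             ! between-group (level 2) variance of the random intercept
OUTPUT: sampstat;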
Implementing the Analysis (cont'd)
Hypotheses:
• GHQ varies across teams
• GHQ is predicted by leadership
• The effect of leadership on stress varies by location





Run Day1Example6.inp – the variance components model – a random intercept only model
Then add in the within-group predictor TFL:
  Include tfl on the usevariables line
  Specify the centering: centering = grandmean(tfl);
  Specify the within-group model:
    Model: %Within%
      ghq on tfl;
Maybe also try the between-group model:
    Model: %between%
      ghq on tfl;


In Mplus twolevel analyses, variables can be specified as either Within (can only be modeled in the within-group model) or Between (can only be modeled in the between-group model)
Unspecified variables will be used appropriately (if used in the between-group model, MPLUS will calculate the aggregate score on the variable)
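Putting this together with the model above, a sketch of how the declarations might look – variable names follow the workshop example, and tfl is deliberately left undeclared so that Mplus forms its team-level aggregate for the between part:

VARIABLE: NAMES are team tfl ghq;
  USEVARIABLES = ghq tfl;
  CLUSTER = team;
  CENTERING = GRANDMEAN(tfl);   ! tfl is neither WITHIN nor BETWEEN,
                                ! so it can appear in both parts of the model
ANALYSIS: TYPE = twolevel;
MODEL: %WITHIN%
  ghq on tfl;                   ! individual-level effect of leadership
  %BETWEEN%
  ghq on tfl;                   ! effect of the team-average leadership score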


Add "random" to the TYPE statement: Type = twolevel random;
Specify the random slope in the within model as s | y ON x, where s is the name of the slope, y is the DV and x is the predictor, e.g.:
  %Within%
  s | ghq on tfl;
In the between model, allow the random slope to correlate with the random intercept:
  ghq with s;
Predict the random slope (and intercept) from the level 2 predictor:
  s ghq on tfl;
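A sketch of the full random-slope model, again using the workshop variable names (data file name hypothetical):

TITLE: Random slope of GHQ on TFL across teams
DATA: file is teams.dat;                ! hypothetical file name
VARIABLE: NAMES are team tfl ghq;
  USEVARIABLES = ghq tfl;
  CLUSTER = team;
  CENTERING = GRANDMEAN(tfl);
ANALYSIS: TYPE = twolevel random;
MODEL: %WITHIN%
  s | ghq on tfl;                       ! s is the random slope
  %BETWEEN%
  ghq with s;                           ! intercept-slope covariance
  s ghq on tfl;                         ! team-average tfl predicts slope and intercept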

Can use any of the techniques previously discussed
Specify outcomes as binary or ordered (multilevel logistic regression), multilevel Poisson, etc.
Can incorporate multilevel regressions into path or SEM analyses (more about this later)
or SEM analyses (More about this later)