Four Common Misconceptions in Exploratory Factor Analysis
Deborah Bandalos, University of Georgia
Presentation to the EDMS Department, University of Maryland, November 14, 2007
Problems in EFA
In recent reviews of factor analysis applications, researchers have described the state of the art as
“routinely quite poor” (Fabrigar et al., 1999, p. 295),
leading to
“potentially misleading factor analytic results” (Preacher & MacCallum, 2003, p. 14).
Four Common Misconceptions
1. The choice between component and common factor extraction procedures is inconsequential.
2. Orthogonal rotation results in better simple structure.
3. The minimum sample size needed for factor analysis is…(insert your favorite guideline).
4. The “Eigenvalues Greater than One” rule is the best way of choosing the number of factors.
What’s the difference between component and common factor analysis?
Component analysis: all of the variance among the variables is analyzed; the entire correlation (or covariance) matrix is factored.

$$X_{iv} = w_{v1}F_{1i} + w_{v2}F_{2i} + \dots + w_{vf}F_{fi}$$
Common factor analysis: only the shared variance is factored; the diagonal elements of the matrix are replaced with estimates of the shared variance (the communalities).

$$X_{iv} = w_{v1}F_{1i} + w_{v2}F_{2i} + \dots + w_{vf}F_{fi} + w_{vu}U_{iv}$$
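This diagonal substitution is the whole computational difference, and a minimal numpy sketch (my own illustration, not from the talk) makes it concrete; squared multiple correlations are used here as the initial communality estimates, one common choice.

```python
import numpy as np

def component_vs_common_eigvals(R):
    """Contrast the matrices the two methods actually factor.

    R : (p, p) correlation matrix of the observed variables.
    Returns the eigenvalues (largest first) of the full matrix
    (component analysis) and of the reduced matrix (common factor
    analysis).
    """
    # Component analysis: factor R as-is, unities on the diagonal,
    # so all of the variance -- shared and unique -- is analyzed.
    comp = np.linalg.eigvalsh(R)[::-1]

    # Common factor analysis: replace the diagonal with communality
    # estimates; squared multiple correlations, 1 - 1/diag(R^-1),
    # are a standard initial estimate.
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)
    common = np.linalg.eigvalsh(R_reduced)[::-1]

    return comp, common
```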
Differences in purpose
Component analysis is typically recommended for reducing a large number of variables to a smaller number of components; common factor analysis is better suited for identifying the latent constructs underlying the variables.
Arguments against common factor analysis
Factor score indeterminacy
The occurrence of Heywood cases
Greater computational demands
Component and common factor analysis yield the same results, so component analysis should be used because of problems with FA
What is factor score indeterminacy?
Calculation of factor scores is based on equations relating the observed variables to the factors:
$$X_{iv} = w_{v1}F_{1i} + w_{v2}F_{2i} + \dots + w_{vf}F_{fi} + w_{vu}U_{iv}$$
For $v$ variables there are $v$ such equations, so in common factor analysis scores on the $f$ common factors as well as on the $v$ unique factors must be estimated. This results in a total of $f + v$ unknowns that must be estimated from only $v$ equations. The solution is indeterminate not because there is no set of factor scores that can be obtained from the variable scores, but because there are many such sets of factor scores.
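To make the counting concrete (the specific numbers are illustrative, not from the talk): with $v = 10$ observed variables and $f = 2$ common factors, each person's data supply only 10 equations, yet $f + v = 2 + 10 = 12$ factor scores must be recovered from them, so infinitely many sets of factor scores reproduce the observed variables equally well.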
Note that a similar problem does not exist for the component model, because the unique factor scores are not estimated.
Counterarguments to factor score indeterminacy
Researchers are rarely interested in saving and using factor scores.
Confirmatory factor analysis (CFA) can be used to obtain scores that do not suffer from the problem of indeterminacy.
Counterarguments to occurrence of Heywood cases
Heywood cases are negative estimates of the uniquenesses in common factor analysis. Gorsuch (1997) argues that Heywood cases occur only when communalities are iterated, and recommends that communalities be iterated only two to three times to avoid this problem.
Fabrigar et al. (1999) note that Heywood cases “often indicate that a misspecified model has been fit to the data or that the data violate assumptions of the common factor model” (p. 276) and can therefore be seen as having diagnostic value.
Counterarguments to greater computational demands for common factor analysis
Non-issue given the high speed and computational capacity of current computers.
Some continue to argue that with very large numbers of variables the difference could still be an issue (Velicer & Jackson, 1990).
Philosophical differences between component and common factor analysis
Common factor analysis is a latent variable method, but component analysis is not.
Haig (2005) provides a philosophical framework for this view, based on abductive inference in which new information is generated by reasoning from factual premises to explanatory conclusions. In other words, “abduction consists in studying the facts and devising a theory to explain them.” (Peirce, cited in Haig, 2005, p. 305). In the context of EFA, this would typically involve studying the correlations among variables and developing theories that might explain why they are correlated.
According to Haig, component models cannot be viewed as latent variable models.
…an abductive interpretation of EFA reinforces the view that it is best regarded as a latent variable method, thus distancing it from the data reduction method of principal components analysis. From this, it obviously follows that EFA should always be used in preference to PC analysis when the underlying common causal structure of a domain is being investigated. (p. 321).
Results based on such inferences must be subjected to further testing on additional data. In other words, if induction is to have any kind of empirical merit, it must be seen as a hypothesis-generating method and not as a method that produces unambiguous, incorrigible results.
(Mulaik, 1987, p. 299).
This view is amplified by Haig (2005), who explains that …using EFA to facilitate judgments about the initial plausibility of hypotheses will still leave the domains being investigated in a state of considerable theoretical underdetermination. It should also be stressed that the resulting plurality of competing theories is entirely to be expected, and should not be thought of as an undesirable consequence of employing EFA. (Haig, 2005, p. 320).
Do component and common factor analysis yield the same results?
Proponents of component analysis argue that any differences are trivial and possibly the result of extracting too many factors (Velicer & Jackson, 1990, p. 10). However, studies by Widaman (1990, 1993) and Bentler and Kano (1990) have shown that the two analyses can produce very different results when the communalities and/or the number of variables per factor are low.
These two conditions interact: if pattern loadings are at least .8, 3 to 7 variables per factor will suffice to yield estimates from the two methods that are very close; with loadings of .4, 20 to 50 or even more variables per factor would be needed to yield similar estimates.
These results are not surprising given that the difference between the component and common factor models is that the latter model contains a uniqueness term, while the former does not. This implies that conditions in which the uniquenesses of the variables are minimized will lead to greater similarity between the two methods, with higher communalities and larger numbers of variables representing two such conditions.
Widaman (1993) demonstrated that, even with relatively small proportions of uniqueness in the variables (i.e., .4), component analysis resulted in overestimates of the population factor pattern loadings. For models with correlated factors, component analysis also yielded underestimates of the population factor correlations. More generally, unless variable uniquenesses are actually zero, component analysis would be expected to yield estimates of pattern loadings that overestimate the corresponding population loadings.
Orthogonal Rotation Results in Better Simple Structure than Oblique Rotation
It is common knowledge that rotated solutions are easier to interpret. Comrey and Lee (1992), however, point out a misconception about orthogonal vs. oblique rotations: “It is sometimes thought that this retention of statistically independent factors ‘cleans up’ and clarifies solutions, making them easier to interpret. Unfortunately, this intuition is exactly the opposite of what the methodological literature suggests.” (p. 287)
Clear advice to the contrary in the methodological literature. Comrey and Lee (1992): “orthogonal rotations are likely to produce solutions with poorer simple structure when clusters of variables are less than 90 degrees from one another….” (p. 282). Russell (2002): “[orthogonal rotations] often do not lead to simple structure due to underlying correlations between the factors” (p. 1637). Nunnally and Bernstein (1994): “Oblique factors thus generally represent the salient variables better than orthogonal factors” (pp. 498-499).
So why the confusion?
Possible reasons for the misconception
Conflation of the term “simple structure” with the idea of simplicity of interpretation.
For example, Gorsuch (1983) states that “the interpretation of the results may be easier if the factors are uncorrelated” (p. 35). This may be true, but this ease of interpretation may come at the price of simplicity of the factor structure.
It is only when variables on one factor are not correlated with variables on other factors that an orthogonal rotation can yield a simple structure solution.
In any other case it will produce cross-loadings, and the more highly correlated the variables are across factors, the more cross-loadings will be produced. This is because, if variables are correlated across factors and the factors are not correlated, the only way the cross-factor variable correlations can be modeled is through a cross-loading. If the factors are correlated, cross-factor variable correlations can be modeled through the factor correlation.
[Figure omitted: path diagram of variables v1 through v6.]
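The contrast is easy to demonstrate. The sketch below is my own illustration, using the third-party Python factor_analyzer package (the package, its FactorAnalyzer class, and the simulated data are assumptions, not part of the talk): the same two-factor data are rotated orthogonally (varimax) and obliquely (oblimin), and only the orthogonal pattern shows the inflated cross-loadings described above.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party: pip install factor_analyzer

rng = np.random.default_rng(0)
# Two factors correlated at .6; six variables with a clean
# three-per-factor simple structure in the population.
phi = np.array([[1.0, 0.6], [0.6, 1.0]])
f = rng.normal(size=(500, 2)) @ np.linalg.cholesky(phi).T
loadings = np.array([[0.8, 0.0]] * 3 + [[0.0, 0.8]] * 3)
X = f @ loadings.T + 0.6 * rng.normal(size=(500, 6))

for rotation in ("varimax", "oblimin"):  # orthogonal vs. oblique
    fa = FactorAnalyzer(n_factors=2, rotation=rotation)
    fa.fit(X)
    # The varimax pattern shows nontrivial cross-loadings; the
    # oblimin pattern recovers the three-per-factor structure.
    print(rotation, np.round(fa.loadings_, 2), sep="\n")
```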
An oblique solution will “default” to an orthogonal solution if factors really are uncorrelated, but will allow for the factors to be correlated if this is necessary to fit the structure of the variables.
Comrey and Lee (1992): “Given the advantages of oblique rotation over orthogonal rotation, we see little justification for using orthogonal rotation as a general approach to achieving solutions with simple structure” (p. 283). Preacher and MacCallum (2003): “it is almost always safer to assume that there is not perfect independence, and to use oblique rotation instead of orthogonal rotation” (p. 26).
The minimum sample size needed for factor analysis is…(insert your favorite guideline)
Many rules of thumb have been suggested to answer the question of what N is needed. These rules fall into two categories: an absolute value for the minimum N, or a minimum ratio of sample size to number of variables (N:p).
Recent studies of these guidelines by Velicer and Fava (1998), MacCallum et al. (1999), and Hogarty et al. (2005) have all reached the same conclusion: there is no absolute minimum N or N:p ratio.
In the words of MacCallum et al.: We suggest that previous recommendations regarding the issue of sample size in factor analysis have been based on a misconception. That misconception is that the minimum level of N (or the minimum N:p ratio) to achieve stability and recovery of population factors is invariant across studies. We show that such a view is incorrect and that the necessary N is in fact highly dependent on several specific aspects of a given study.
How should we decide on sample size?
Studies cited previously have found that the N necessary to accurately reproduce population pattern/structure loadings is related to:
- the level of communality of the variables being factored (higher communalities result in more accuracy)
- the number of variables per factor (more variables per factor result in more accuracy)
- sample size (larger N results in more accuracy)
As stated by Velicer and Fava (1998): “Rules that related sample size to the number of observed variables were also demonstrated to be incorrect. In fact, for the low-variables conditions considered here, the opposite was true. Increasing p [the number of variables] when the number of factors remains constant will actually improve pattern reproduction for the same sample size.” (p. 244)
These variables also interact!
One important aspect of these studies is that sample size, number of variables per factor, and variable communalities interact in a compensatory manner.
The strongest compensatory effect appears to be that of communality level on sample size. MacCallum et al. (1999) found that with communalities of approximately .7, good recovery of population factors required a sample size of only 100, with three to four variables per factor. If we assume orthogonal factors and/or good simple structure, this level of communality translates into loadings of ≈ .84.
At this level of communality, increasing the number of variables per factor had little effect.
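The loading figures in this section follow from a simple identity: with orthogonal factors and perfect simple structure, a variable's communality is just its single squared loading, so

$$\lambda = \sqrt{h^2}, \qquad \sqrt{.70} \approx .84.$$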
With lower communalities, larger samples of both people and variables are necessary to obtain good recovery.
With communalities lower than .5 (i.e., loadings of ≈ .7), six or seven variables per factor and a sample size “well over 100” would be required to obtain the same level of recovery. Finally, with communalities of less than .5 and only three or four variables per factor, sample sizes of at least 300 are needed. Given that communalities of .5 would mean loadings of .7 for orthogonal factors and/or perfect simple structure, even the last situation is probably not common in applied research.
For example, Henson & Roberts (2006), in a review of 60 applications in measurement-oriented journals, found that the average number of variables was about 6, and the average % of explained variance was about 52%.
Given this percentage of explained variance, it does not appear that the variable loadings could have been very high.
Finally, it should be noted that these results were for a model with 3 factors.
With more factors, larger Ns were needed.
For example, with 7 rather than 3 factors, each measured by 3-4 variables, MacCallum et al. (1999) found that samples of well over 500 were needed to obtain good recovery in the low communality (< .5) condition. As stated by Hogarty et al. (2005): “Overdetermination of factors was also shown to improve the factor analysis solution. We found, however, in comparing results over different numbers of factors and levels of overdetermination, that samples with fewer factors by far yielded the more stable factor solutions.” (p. 224)
The “Eigenvalues Greater than One” Rule is the Best Way of Choosing the Number of Factors
Although this rule is the default in both SPSS and SAS, studies have been unanimous in finding that it is the least accurate guideline available.
Zwick and Velicer (1986): “we cannot recommend the K1 rule for PCA [principal component analysis]” (p. 439). “The eigenvalue greater than one rule was extremely inaccurate and was the most variable of all the methods. Continued use of this method is not recommended” (Velicer, Eaton, & Fava, 2000, p. 68).
What’s wrong with it???
Most studies have found that K1 consistently overestimates the number of components.
To see why, look at the original sources. Guttman (1954) derived three methods for estimating the lower bound for the rank, or dimensionality, of a population correlation matrix. One of these was that the minimum dimension of a correlation matrix with unities on the diagonal is greater than or equal to the number of eigenvalues that are at least one.
Three problems….
The K1 rule was derived for component analysis, not for common factor analysis, so applications to common factor analysis are inappropriate.
Guttman did not suggest K1 as a method of determining the number of components that should be extracted, but rather the number of components that could be extracted.
Guttman’s derivations are based on population data. As noted by Nunnally and Bernstein (1994), the first few eigenvalues in a sample correlation matrix are typically larger than their population counterparts, resulting in extraction of too many components in samples when this rule is used.
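The sampling-error problem is easy to see in a small simulation (my own sketch; the sample size and number of variables are arbitrary choices, not from the talk): for uncorrelated population variables every population eigenvalue is exactly 1, yet several sample eigenvalues exceed 1, so K1 retains components that reflect nothing but sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 20                    # illustrative sample and variable counts
X = rng.normal(size=(n, p))       # population: p uncorrelated variables
R = np.corrcoef(X, rowvar=False)  # sample correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]

# Every population eigenvalue is exactly 1.0, but the first few sample
# eigenvalues are inflated, so K1 "finds" spurious components.
print("K1 retains", int((eigvals > 1).sum()), "components from pure noise")
```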
Kaiser (1960) provided another rationale for the K1 criterion: that components with eigenvalues less than one would have negative Kuder-Richardson, or internal consistency, reliability. However, Cliff (1988) refutes Kaiser’s claim, stating that “…the conclusion made by Kaiser (1960) is erroneous: There is no direct relation between the size of an eigenvalue and the reliability of the corresponding composite.”
What should be used instead of K1???
The parallel analysis (PA) procedure, introduced by Horn (1965).
The idea is that retained components should have eigenvalues greater than those obtained from random data. The steps, with a sketch of the computation following the list:
- A set of random-data correlation matrices is created.
- The average value of their eigenvalues is computed.
- The eigenvalues from the matrix to be factored are compared to those from the random data.
- Only components with eigenvalues greater than those from the random data are retained.
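A compact numpy sketch of this logic (my own illustration for the component-analysis case; O'Connor's macros, cited below, are the fuller implementations):

```python
import numpy as np

def parallel_analysis(X, n_reps=100, seed=0):
    """Horn's (1965) parallel analysis for component retention.

    X : (n, p) data matrix. Returns the number of components whose
    observed eigenvalues exceed the mean eigenvalues of random data
    of the same dimensions.
    """
    n, p = X.shape
    rng = np.random.default_rng(seed)

    # Eigenvalues of the sample correlation matrix, largest first.
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

    # Eigenvalues of correlation matrices computed from random normal
    # data with the same n and p, averaged over replications.
    rand = np.empty((n_reps, p))
    for i in range(n_reps):
        Z = rng.normal(size=(n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
    threshold = rand.mean(axis=0)  # or np.percentile(rand, 95, axis=0)

    # Retain components until an observed eigenvalue fails to beat
    # its random-data counterpart.
    keep = 0
    while keep < p and obs[keep] > threshold[keep]:
        keep += 1
    return keep
```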
Zwick and Velicer (1986) and Velicer, Eaton, and Fava (2000) compared methods for determining the number of factors to retain (K1, the scree test, the Bartlett chi-square test, the minimum average partial procedure, and PA) and found PA to be the most accurate method across all conditions studied.
One practical obstacle is having to create the random data matrices and obtain their eigenvalues. However, O’Connor (2000) has provided macros for both SAS and SPSS that implement PA for both component and factor analysis (available at http://flash.lakeheadu.ca/~boconno2/nfactors.html). Thompson and Daniel (1996) have also provided SPSS code for PA as an appendix to their article.
Random Data Eigenvalues
Root    Mean        95th Prcntyle
  1     1.063295    1.222187
  2      .884561    1.026388
  3      .752150     .878787
  4      .620368     .744596
  5      .513304     .610780
  6      .422605     .496383
  7      .336271     .422468
  8      .257631     .327463
  9      .182296     .243393
 10      .117411     .190074
 11      .038259     .100156
 12     -.025923     .026614
 13     -.085558    -.039277
 14     -.141240    -.101874
 15     -.199311    -.169249
 16     -.251610    -.216527
 17     -.299880    -.258148
 18     -.356702    -.321957
 19     -.412817    -.375849
(“Mean” is the mean of the random-data eigenvalues for each root; “95th Prcntyle” is their 95th percentile.)
Raw Data Eigenvalues
Root    Eigenvalue
  1     10.493467
  2       .684234
  3       .467787
  4       .393812
  5       .333261
  6       .241923
  7       .210225
  8       .201083
  9       .097664
 10       .068366
 11       .022161
 12      -.012714
 13      -.051812
 14      -.068564
 15      -.100923
 16      -.125277
 17      -.138908
 18      -.154161
 19      -.190358
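Reading the two tables together (my gloss on the printed output): only the first raw eigenvalue, 10.493, exceeds its random-data counterparts (mean 1.063, 95th percentile 1.222); the second, .684, already falls below the corresponding random values (.885 and 1.026). PA therefore retains a single factor here.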
Minimum Average Partial (MAP) Procedure
MAP was developed by Velicer (1976). Simulation studies (Velicer, Eaton, & Fava, 2000; Zwick & Velicer, 1982, 1986) found MAP to be nearly as accurate as PA. However, MAP is appropriate only for component analysis, not common factor analysis.
How MAP works
After each component is extracted, the partial correlation matrix (partialing out that component) is computed, and the average squared off-diagonal element of the partialed matrix is obtained.
The number of components retained is the point at which this average squared partial correlation reaches its minimum. The idea is that successive components remove shared variance from the matrix until all that remains is variance shared between only two variables; at that point the average squared partial correlations begin to increase. A sketch of the computation follows.
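A compact numpy sketch of the procedure just described (my own illustration, with numerical safeguards omitted; note that Velicer's 1976 criterion averages squared partials, and a later revision uses fourth powers):

```python
import numpy as np

def map_test(R):
    """Velicer's (1976) minimum average partial (MAP) test.

    R : (p, p) correlation matrix. Returns the number of components
    at which the average squared partial correlation is minimized.
    """
    p = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # largest first

    off = ~np.eye(p, dtype=bool)           # off-diagonal mask
    avg_sq = np.empty(p)
    avg_sq[0] = (R[off] ** 2).mean()       # zero components partialed out
    for m in range(1, p):
        # Partial the first m principal components out of R.
        A = eigvecs[:, :m] * np.sqrt(eigvals[:m])  # component loadings
        R_star = R - A @ A.T
        d = np.sqrt(np.diag(R_star))
        partial = R_star / np.outer(d, d)  # matrix of partial correlations
        avg_sq[m] = (partial[off] ** 2).mean()

    return int(np.argmin(avg_sq))          # index = number of components
```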
How to implement MAP
O’Connor (2000) has provided macros for SPSS and SAS.
Velicer’s Average Squared Correlations
Components    Avg. squared partial r
  0           .300540
  1           .021879
  2           .023623
  3           .030195
  4           .036170
  5           .043225
  6           .048928
  …
 17           .556766
 18          1.000000
The smallest average squared correlation is .021879, so the number of components retained is 1.
Recommendations for determining # of factors
The scree test has been found to be fairly accurate in simulations. Methodologists recommend using more than one criterion. Interpretation and theoretical rationale are most important.
However, it should be noted that the studies cited in this area have only looked at situations in which the factors or components are orthogonal. If factors/components are correlated, these methods may not perform as well.
The table below shows the results of the PA procedure using 2-3 category data generated to have factor correlations of either .3 or .7. As can be seen from the table, the percentage of correct decisions decreases markedly when factors are correlated at .7.

Percentage of correct decisions:
Criterion           Factor correlation .3    Factor correlation .7
95th percentile     80.0                     12.9
Mean                89.4                     20.0
Take-home message
Component vs. common factor analysis
Component and common factor analysis will yield very similar results when communalities and the number of variables per factor are high.
However, these conditions may not often be met in practice.
Two procedures have different purposes, rationales, and philosophical underpinnings, and these should guide the choice between them.
Determining # of factors K1 is inaccurate!
Use PA, MAP, scree: look for convergence Final decision should be based on judgments of interpretability and consistency of the factors with sound theory.
Take home message continued…
Sample size
Unreliable variables contribute to instability in much the same way as do small samples, so choose variables carefully. Although simulations have shown that a smaller N may suffice if variable communalities are high and the variables-per-factor ratio is high, these conditions may not be found in practice. Also, the necessary N increases with the number of factors.
Orthogonal vs. oblique rotations
If factors are not truly uncorrelated, orthogonal rotations result in worse simple structure. If factors really are uncorrelated, oblique rotation will show this, and the researcher can then rerun with orthogonal rotation, if desired. It’s OK to obtain both orthogonal and oblique rotations.
References
Acito, F., & Anderson, R.D. (1986). A simulation study of factor score indeterminacy. Journal of Marketing Research, 23, 111-118.
Benson, J., & Nasser, F. (1998). On the use of factor analysis as a research tool. Journal of Vocational Education Research, 23, 13-23.
Bentler, P.M., & Kano, Y. (1990). On the equivalence of factors and components. Multivariate Behavioral Research, 25, 67-74.
Borgatta, E.R., Kercher, K., & Stull, D.E. (1986). A cautionary note on the use of principal component analysis. Sociological Methods and Research, 15, 160-168.
Cliff, N. (1988). The eigenvalues-greater-than-one rule and the reliability of components. Psychological Bulletin, 103, 276-279.
Cliff, N., & Hamburger, C.D. (1967). The study of sampling errors in factor analysis by means of artificial experiments. Psychological Bulletin, 68, 430-445.
Conway, J.M., & Huffcutt, A.I. (2003). A review and evaluation of exploratory factor analysis practices in organizational research. Organizational Research Methods, 6, 147-168.
Cortina, J.M. (2002). Big things have small beginnings: An assortment of “minor” methodological misunderstandings. Journal of Management, 28, 339-362.
Fabrigar, L.R., Wegener, D.T., MacCallum, R.C., & Strahan, E.J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299.
Gorsuch, R.L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.
Gorsuch, R.L. (1997). Exploratory factor analysis: Its role in item analysis. Journal of Personality Assessment, 68, 532-560.
Guttman, L. (1954). Some necessary conditions for common factor analysis. Psychometrika, 19, 149-161.
Haig, B.D. (2005). Exploratory factor analysis, theory generation, and scientific method. Multivariate Behavioral Research, 40, 303-329.
Hayton, J.C., Allen, D.G., & Scarpello, V. (2004). Factor retention decisions in exploratory factor analysis: A tutorial on parallel analysis. Organizational Research Methods, 7, 191-205.
Henson, R.K., & Roberts, J.K. (2006). Use of exploratory factor analysis in published research. Educational and Psychological Measurement, 66, 393-416.
Hill, R.B., & Petty, G.C. (1995). A new look at selected employability skills: A factor analysis of the Occupational Work Ethic. Journal of Vocational Education Research, 20, 59-73.
Hogarty, K.Y., Hines, C.V., Kromrey, J.D., Ferron, J.M., & Mumford, K.R. (2005). The quality of factor solutions in exploratory factor analysis: The influence of sample size, communality, and overdetermination. Educational and Psychological Measurement, 65, 202-226.
Horn, J.L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179-185.
Kaiser, H.F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141-151.
MacCallum, R.C., Widaman, K.F., Preacher, K.J., & Hong, S. (2001). Sample size in factor analysis: The role of model error. Multivariate Behavioral Research, 36, 611-637.
MacCallum, R.C., Widaman, K.F., Zhang, S., & Hong, S. (1999). Sample size in factor analysis. Psychological Methods, 4, 84-99.
Maraun, M.D. (1996). Metaphor taken as math: Indeterminacy in the factor analysis model. Multivariate Behavioral Research, 31, 517-538.
McArdle, J.J. (1990). Principles versus principals of structural factor analysis. Multivariate Behavioral Research, 25, 81-88.
Mulaik, S.A. (1987). A brief history of the philosophical foundations of exploratory factor analysis. Multivariate Behavioral Research, 22, 267-305.
Nunnally, J.C., & Bernstein, I.H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
O'Connor, B.P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instrumentation, and Computers, 32, 396-402.
Preacher, K.J., & MacCallum, R.C. (2003). Repairing Tom Swift’s electric factor analysis machine. Understanding Statistics, 2, 13-43.
Russell, D.W. (2002). In search of underlying dimensions: The use (and abuse) of factor analysis in Personality and Social Psychology Bulletin. Personality and Social Psychology Bulletin, 28, 1629-1646.
Schneeweiss, H. (1997). Factors and principal components in the near spherical case. Multivariate Behavioral Research, 32, 375-401.
Thompson, B., & Daniel, L.G. (1996). Factor analytic evidence for the construct validity of scores: A historical overview and some guidelines. Educational and Psychological Measurement, 56, 197-208.
Tinsley, H.E.A., & Tinsley, D.J. (1987). Uses of factor analysis in counseling psychology research. Journal of Counseling Psychology, 34, 414-424.
Thurstone, L.L. (1947). Multiple-factor analysis: A development and expansion of The Vectors of Mind. Chicago: University of Chicago Press.
Velicer, W.F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41, 321-327.
Velicer, W.F., Eaton, C.A., & Fava, J.L. (2000). Construct explication through factor or component analysis: A review and evaluation of alternative procedures for determining the number of factors or components. In R.D. Goffin & E. Helmes (Eds.), Problems and solutions in human assessment (pp. 41-71). Boston: Kluwer.
Velicer, W.F., & Fava, J.L. (1998). Effects of variable and subject sampling on factor pattern recovery. Psychological Methods, 3, 231-251.
Velicer, W.F., & Jackson, D.N. (1990). Component analysis versus common factor analysis: Some issues in selecting an appropriate procedure. Multivariate Behavioral Research, 25, 1-28.
Widaman, K.F. (1990). Bias in pattern loadings represented by common factor analysis and component analysis. Multivariate Behavioral Research, 25, 89-96.
Widaman, K.F. (1993). Common factor analysis versus principal components analysis: Differential bias in representing model parameters? Multivariate Behavioral Research, 28, 263-311.
Worthington, R.L., & Whittaker, T.A. (2006). Scale development research: A content analysis and recommendations for best practice. The Counseling Psychologist, 34, 806-838.
Zwick, W.R., & Velicer, W.F. (1982). Factors influencing four rules for determining the number of components to retain. Multivariate Behavioral Research, 17, 253-269.
Zwick, W.R., & Velicer, W.F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99, 432-442.