Factor Analysis


Factor analysis is a mathematical and statistical technique for analyzing differences among units of analysis and the structure of relationships among variables assessing those units. The units of analysis may be persons, groups, organizations, ecological units, or any other justifiable basis of aggregation, although persons are most often the focus of analysis. The chief purpose of the method is the attainment of scientific parsimony, which is achieved by positing a set of latent common factors that underlie the data. The factor model was developed by Charles Spearman (1904a, 1927) as an economical way of describing the correlations among mental test scores observed for persons. Spearman's famous bi-factor model of intelligence held that measures of mental abilities had two major sources: a factor common to all measures of ability, which he called the g-factor (factor of general ability), and a specific component of variation (an s-factor) unique to the test. For example, a test of numerical ability may be affected in part by a general factor of intelligence as well as a factor specific to numerical aptitude. This model, although never the predominant psychological theory of mental tests, has persisted in the culture in the sense that people often believe there is a general factor of intelligence underlying performance across different domains (see Gould 1981 for a critique of this view).

Although Spearman's work did not go very far beyond such a simple model, his approach to model construction and theory testing using tetrad differences has provided the basis for much further work (see, e.g., Glymour et al. 1987). Many contemporaries of Spearman—Cyril Burt, Karl Pearson, Godfrey Thomson, J. C. Maxwell Garnett, and others—working in the fields of human abilities and statistics also contributed to the development of factor analysis. Several worked to modify Spearman's bi-factor model to include multiple factors of intelligence. But the most radical departure from the g-factor view of human intelligence came with Thurstone's (1938) publication of Primary Mental Abilities, in which he demonstrated empirically through the application of multiple factor analysis that several common factors were necessary to explain correlations among measures of human abilities. While Thurstone (1947) is usually credited with the popularization of this more general technique, the concept of multiple factor analysis first arose in the work of Garnett (1919–1920; see Harman 1976).

Multiple factor analysis proved to be a major advance over the Spearman model, which was later to be seen as a special case (the one-factor case). Multiple factor analysis permitted a general solution in which multiple factors (k) could be posited for a set of p variables. Within this framework, two competing research strategies emerged, each resting on distinct principles. One was based on Pearson's principle of principal axes, which was later developed by Hotelling (1933) as the method of principal components. This approach emphasized the objective of "extracting" a maximum of variance from the set of p variables so that the k factors explained as much of the variance in the variables as they could. This tradition still exists in approaches to factor analysis that rely on principal components analysis, and, although many researchers use the technique, they are likely to be unaware of the objectives underlying the approach (see Harman 1976).

In contrast to the strategy of maximizing the variance explained in the variables, the other basic strategy—more squarely in the tradition of Spearman—emphasized the objective of reproducing the observed correlations among the variables. These two objectives—one emphasizing the extraction of maximum variance in the variables and the other emphasizing the fit to the correlations among the variables—eventually became the object of serious debate. However, with time there has emerged a consensus that the "debate" between these approaches rested on a misconception. The method of principal axes, which is the basis of principal components analysis—involving the analysis of a correlation matrix with unities in the diagonal—is now better understood as a computational method rather than a model, whereas the factor analysis approach is now considered a model (see Maxwell 1977).

The early developments in the field of factor analysis and related techniques were carried out primarily by psychometricians. These early developments were followed by many important contributions to estimation, computation, and model construction during the post–World War II period. Some of the most important contributions to the method during the 1950s were made by a sociologist, Louis Guttman. Guttman made important contributions to the resolution of the issue of deciding upon the best number of latent factors (Guttman 1954), the problem of "factor indeterminacy" (Guttman 1955), and the problem of estimating communalities (Guttman 1956), among many others. Guttman (1960) also invented yet a third model, called image analysis, which has a certain elegance but is rarely used (see Harris 1964; Kaiser 1963).

Research workers in many fields made contributions to the problem of deciding how best to represent a particular factor model in a theoretical/geometrical space, via the transformation or rotation of factors. Methods of rotation included the quartimax (Neuhaus and Wrigley 1954), varimax (Kaiser 1958), and oblimax (Harman 1976), among others. Several contributions were made during the early development of factor analysis with respect to the most useful strategies for estimating factor scores (see reviews by Harris 1967 and McDonald and Burr 1967) and for dealing with the problem of assessing factorial invariance (e.g., Meredith 1964a, 1964b; Mulaik 1972). Beginning in the mid-1960s the advances in the field of factor analysis have focused on the development of maximum-likelihood estimation techniques (Lawley and Maxwell 1971; Jöreskog 1967; Jöreskog and Lawley 1968); alternative distribution-free techniques (Bentler 1983, 1989; Bentler and Weeks 1980; Browne 1974, 1984; Browne and Shapiro 1988); the development of confirmatory factor analysis, which permits the setting of specific model constraints on the data to be analyzed (Bentler 1989; Jöreskog 1966, 1967, 1970, 1971a, 1973; Jöreskog and Sörbom 1986); and the development of factor analysis strategies for categoric variables (Christofferson 1975; Jöreskog and Sörbom 1988; Muthén 1983, 1988).

Factor analysis is used extensively by sociologists as a research tool. It is used in at least four related ways. First, it is frequently used as a data reduction or item analysis technique in index construction. Second, it is used as an exploratory device for examining the dimensional structure of content within a well-specified domain. Third, it is used as a confirmatory, hypothesis-testing tool aimed at testing prior hypotheses about the dimensional structure of a set of variables. And fourth, it is used to conceptualize the relationships of multiple indicators of latent variables in a causal modeling framework in which a factor model is assumed for the relationships between latent variables and their indicators. After a brief introduction to each of these four ways in which factor analytic tools are used in sociological research, this discussion covers the basic factor model and issues that arise in its application, either in the exploratory or confirmatory frameworks of analysis.


DATA REDUCTION APPROACHES

When a researcher wishes to build a composite score from a set of empirical variables, factor analysis and related techniques are often useful. Indeed, it is perhaps in this area of "index construction" that factor analysis is most often used by sociologists. There are various related data reduction approaches that fall under the heading of "dimensional analysis" or "cluster analysis," but the basic goal of all these techniques is to perform some decomposition of the data into sets of variables, each of which is relatively independent and homogeneous in content. When factor analysis is used in this way the researcher is essentially interested in determining the sets of linear dependence among a set of variables that are intended to measure the same general domain of content. The factor analysis of such variables may proceed in a number of different ways, but the basic goal is to determine the number of clusters of homogeneous content and the extent of relationship among various clusters or factors. Relationships among factors may be conceptualized either in terms of uncorrelated (or orthogonal) sets of factors or in terms of correlated (or oblique) factors. Such analyses are often supplemented with information on how to build "factor scores," with item-analysis information, such as item-to-total score correlations, and with information estimating the "internal consistency" or "reliability" of such composite scores (see Greene and Carmines 1979; Maxwell 1971).

When using factor analysis and related techniques as a basis for index construction, one of two situations is typically the case. Either the investigator has some a priori basis for expecting that items in a set have a common factor (or common factors) underlying them, and therefore the investigator has certain well-founded expectations that the items can be combined into a scale, or the investigator has no a priori set of hypotheses for what clusters will be found and is willing to let the inherent properties of the data themselves determine the set of clusters. In the first case confirmatory factor models are appropriate, whereas in the second case exploratory methods are mandated. In either case the use of factor analysis as a data reduction tool is aimed at the development and construction of a set of "scores," based on factor analysis, that can then be introduced as variables in research.

Exploratory Factor Analysis. As noted above, in situations where the researcher has no a priori expectations of the number of factors or the nature of the pattern of loadings of variables on factors, we normally refer to applications of factor analysis as exploratory. In the case of exploratory factor analysis the goal is to find a set of k latent dimensions that will best reproduce the correlations among the set of p observed variables. It is usually desirable that k be considerably less than p and as small as possible. In exploratory factor analysis one typically does not have a clear idea of the number of factors but instead begins with uncertainty about what the data will reveal. The most common practice is to find k orthogonal (uncorrelated) dimensions that will best reproduce the correlations among the variables, but there is nothing intrinsic to the factor analytic model that restricts the conceptual domain to several orthogonal dimensions.

Confirmatory Factor Analysis. Confirmatory factor analysis, in contrast, refers to situations in which the investigator wishes to test some hypotheses regarding the structure of relationships in the presence of a strong set of assumptions about the number of factors, the values of the factor pattern coefficients, the presence or absence of correlations of factors, or other aspects of the model. In confirmatory factor analysis it is essential that one begin with a theory that contains enough detailed specification regarding constraints that should be imposed on the data in order to provide such a test, whereas in exploratory factor analysis there is no requirement that one specify the number of factors and expected relationships to be predicted in the data. Confirmatory approaches are thus more theory-driven, whereas exploratory approaches are more data-driven (see Alwin 1990). However, much of the so-called confirmatory factor analysis that is carried out in modern social and behavioral science is in fact exploratory, and much current research would be more realistically appraised if such confusion did not exist. Often, there is considerable tinkering with "confirmatory" models in order to improve their fit to the data, either by removing variables or by loosening up (or "freeing") certain parameters. It is also often the case that the "confirmatory" factor analyses are actually preceded by an exploratory analysis, and then a confirmatory model based on these results is fit to the same data. Although very common, this approach "capitalizes" on chance and gives an illusory sense that one has confirmed (or verified) a particular model. Placed in the proper perspective, there is nothing in principle wrong with the approach, as long as the "test" of the model is cross-validated in other data.

FACTOR ANALYSIS AND MULTIPLE INDICATOR CAUSAL MODELS

In the 1960s and 1970s, with the introduction of causal modeling strategies in social science (see Blalock 1964; Duncan 1966, 1975; Heise 1968), a fundamental shift occurred in the nature and uses of the common factor models by sociologists. Methods and the logic of causal modeling with nonexperimental statistical designs had been around for a long time. Due largely to the influence of Lazarsfeld (1968), causal inference strategies had been prevalent especially among analysts of sample survey data since the 1940s, but the research strategies were based on tabular presentation of data and the calculation of percentage differences. In the early 1960s there was a general infusion of techniques of causal modeling in sociology and other social science disciplines. Path analysis, the principal technique among those newly adopted, was of course invented before 1920 by the great geneticist Sewall Wright, but his contributions were not appreciated by social and behavioral scientists, including the psychometricians responsible for the development of factor analysis. Wright (1921) developed path models as deductive systems for deriving correlations of genetic traits among relatives of stated degree. He also used the method inductively to model complex economic and social processes using correlational data (Wright 1925).

Psychometricians, like Spearman, had been dealing with models that could be thought of as "causal models," which could be understood in Wright's path analysis framework—common factors were viewed as the causes underlying the observed variables—but Spearman and others who developed common factor models were unfamiliar with Wright's work. None of the early psychometricians apparently recognized the possibility of causal relationships among the latent variables of their models, or for that matter among their indicators. However, with the publication of work by Jöreskog (1970) and others working in the "new" field of structural equation models (see Goldberger 1971, 1972; Goldberger and Duncan 1973; Hauser and Goldberger 1971), the convergence and integration of linear models in the path analysis tradition and those in the factor analysis tradition provided a basic "breakthrough" in one of the major analytic paradigms most prevalent in social science. These developments were assisted by the interest in conceptualizing measurement errors within a causal analysis framework. A number of researchers began to incorporate conceptions of measurement error into their causal analyses (Alwin 1973a, 1974; Blalock 1965, 1969, 1970; Costner 1969; Duncan 1972; Heise 1969; Siegel and Hodge 1968), ushering in a new approach that essentially combined factor models and path models.

At about this same time, Karl Jöreskog and his colleagues were developing efficient procedures for estimating the parameters of such models—called LISREL models, named after Jöreskog and his colleagues' computer program, LISREL—and this provided a major impetus for the widespread use of confirmatory approaches to the estimation of structural equation models. Jöreskog's (1967) early contributions to maximum-likelihood factor analysis came to be readily applied by even the most novice analysts. Unfortunately, the widespread availability of these techniques to researchers who do not understand them has led to serious risks of abuse. This can be true of any technique, including the techniques of exploratory factor analysis. In any event, the proper use and interpretation of the results of LISREL-type model estimation is a significant challenge to the present generation of data analysts.


THE COMMON FACTOR MODEL

The formal mathematical properties of the common factor model are well known and can be found in many of the accompanying references. For purposes of exposition, it is useful to review its salient features briefly. Although the model can be most compactly represented using vector and matrix notation, it is normally best to begin with a scalar representation for the data, such as in equation 1. Here the z variables are empirical quantities observed in a sample of units of observation (e.g., persons, groups, ecological units), and for present purposes the variables are standardized to have a mean of zero and standard deviation of unity. This scaling is not a requirement of the model. In fact, in confirmatory factor models, it is often desirable to leave the variables in their original metric, especially when comparing the properties of these models across populations or subpopulations (see Alwin 1988b).
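
In its standard scalar form, consistent with the description in the next paragraph, the model (equation 1) expresses each observed variable as a weighted sum of the k common factors plus a unique part:

```latex
z_i = a_{i1} f_1 + a_{i2} f_2 + \cdots + a_{ik} f_k + u_i , \qquad i = 1, \ldots, p .
```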

According to the common factor model, each observed z variable is, then, a linear function of a set of k latent or unobserved variables and a residual variable, uᵢ (also unobserved), which contains variation specific to that particular variable and random measurement error. The a coefficients in equation 1 are "factor loadings." They reflect the linkage between the "observed" variables and the "unobserved" factors. In the case of uncorrelated factors these loadings equal the correlations of the variables with the factors. The loadings thus provide a basis for interpreting the factors in the model; factors obtain their meaning from the variables to which they are linked and vice versa. Thus, in many investigations the primary objective is to estimate the magnitudes of these factor loadings in order to obtain a meaningful understanding of the nature of the data.

The k latent variables are called common factors because they represent common sources of variation in the observed variables. As such, these common factors are thought to be responsible for covariation among the variables. The unique parts of the variables, by contrast, contribute to lack of covariation among the variables. Covariation among the variables is greater when they measure the same factors, whereas covariation is less when the unique parts of the variables dominate. Indeed, this is the essence of the model—variables correlate because they measure the same things. This was the basis of Spearman's original reasoning about the correlations among tests of mental ability. Those tests correlated because they measured the same general factor. In the general case those variables that correlate do so because of their multiple sources of common variation.

Common variation in the aggregate is referred to as communality. More precisely, a variable's communality is the proportion of its total variation that is due to its common sources of variation. The communality of variable i is denoted hᵢ². A variable's uniqueness, denoted uᵢ², is the complement of the communality; that is, uᵢ² = 1.0 - hᵢ². The uniqueness is thought of as being composed of two independent parts, one representing specific variation and one representing random measurement error variation; that is, uᵢ² = sᵢ² + eᵢ². (This notation follows traditional psychometric factor analysis notation. Each of these quantities is a variance, and thus in the covariance modeling or structural equation modeling tradition they would be represented as variances, such that σᵤᵢ² = σₛᵢ² + σₑᵢ² [see below].) Specific variance is reliable variance, and thus the reliability of variable i can be expressed as rᵢ² = hᵢ² + sᵢ². Unfortunately, because of the presence of specific variance in most variables, it is virtually impossible to use the traditional form of the common factor model as a basis for reliability estimation (see Alwin 1989; Alwin and Jackson 1979). The problem is that the common factor model typically does not permit the partitioning of uᵢ² into its components, sᵢ² and eᵢ². In the absence of specific variance, classical reliability models may be viewed as a special case of the common factor model, but in general it is risky to assume that eᵢ² = uᵢ². Alwin and Jackson (1979) discuss this issue in detail. Some attempts have been made to augment the traditional latent "trait" model inherent in the common factor model by adding "method" factors based on the multitrait–multimethod design of measurement within the framework of confirmatory factor models (see Alwin 1974; Alwin and Jackson 1979; Werts and Linn 1970). This provides a partitioning of the specific variance due to method, but it does not provide a general solution to the problem of handling specific variance.


Returning to the above example (in equation 1), in Spearman's case (the one-factor case) each variable contains a common factor and a specific factor, as shown in equation 2.
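
In the one-factor case, the model (equation 2) reduces each variable to a single common factor plus a wholly specific unique part:

```latex
z_i = a_i f + u_i , \qquad i = 1, \ldots, p .
```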

In this case hᵢ² = aᵢ² and uᵢ² = sᵢ². Spearman's (1927) theory in essence assumes perfect measurement, not unlike most common factor models. However, unlike researchers of today, Spearman was very concerned about measurement errors, and he went to great lengths to correct his observed correlations for imperfections due to random errors of measurement (Spearman 1904b). Thus, when applied to such corrected correlational data, these assumptions may be appropriate.

As can be seen from the equations for Spearman's model (equation 2), the correlation of any two variables zᵢ and zⱼ, rᵢⱼ = E[zᵢzⱼ] (the expected value of the cross-products of the z scores for the two variables), may be written as rᵢⱼ = aᵢaⱼ. For example, if p = 3, the correlations among the variables can be written as r₁₂ = a₁a₂, r₁₃ = a₁a₃, and r₂₃ = a₂a₃. In vector notation (introduced in greater detail below), the common parts of the correlations among the variables of the model are composed of the matrix product AA′. In the case where p = 3, the matrix A is written as in equation 3,

and the product AA′ is written as in equation 4.
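
For the one-factor case with p = 3, these take the standard forms (equations 3 and 4):

```latex
A = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix},
\qquad
A A' = \begin{bmatrix}
a_1^2 & a_1 a_2 & a_1 a_3 \\
a_2 a_1 & a_2^2 & a_2 a_3 \\
a_3 a_1 & a_3 a_2 & a_3^2
\end{bmatrix} .
```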


The variances of the variables are also affected by the common factors, but, as indicated in the foregoing, there is a residual portion of variance containing specific and unreliable variance. In Spearman's model the variance of variable i is as shown in equation 5.
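
With standardized variables, the variance decomposition for variable i (equation 5) takes the standard form:

```latex
\operatorname{var}(z_i) = a_i^2 + u_i^2 = h_i^2 + u_i^2 = 1 .
```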

Then it can be seen that the correlation matrix is equal to R = AA′ + U², where the matrix U² for the p = 3 case is written in matrix form as in equation 6.
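
For p = 3, the diagonal matrix of unique variances (equation 6) is:

```latex
U^2 = \begin{bmatrix}
u_1^2 & 0 & 0 \\
0 & u_2^2 & 0 \\
0 & 0 & u_3^2
\end{bmatrix} .
```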

These results have general applicability, as will be seen below.

ESTIMATION AND TESTING OF THE FACTOR MODEL

Before proceeding to the more general uses of the model, it is important to review the logic behind Spearman's approach. In the general Spearman case, the correlation of two variables is equal to the product of their loadings on the general factor; that is, rᵢⱼ = aᵢaⱼ. Recall that under this model the a coefficients represent the correlations of the variables with the factor. Spearman reasoned therefore that if the model were true (that is, if a single unobserved common factor could account for the correlations among the observed variables), then certain things had to hold in the empirical data.

Spearman reasoned that, if the single-factor model holds, the partial correlation between any two variables, holding constant the underlying common factor, should be zero. This stems from the fact that the numerator of this partial correlation, which is the correlation of the two variables minus the product of their correlations with the factor, is zero, because under the model rᵢⱼ - aᵢaⱼ = aᵢaⱼ - aᵢaⱼ = 0. Of course, it is not possible to calculate such a partial correlation from the data because the factor score, f, does not exist except in the theory. Spearman, however, recognized a specific pattern to the components of the correlations under the model. He noted that, if the single-factor model held for p = 4, the intercorrelations of the variables had to satisfy two independent conditions, referred to by Spearman (1927) as vanishing tetrads, shown in equation 7.
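
For p = 4, the two independent vanishing-tetrad conditions (equation 7) take the standard form:

```latex
r_{12} r_{34} - r_{13} r_{24} = 0 , \qquad r_{12} r_{34} - r_{14} r_{23} = 0 .
```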

Note that the case of p = 3 is a trivial case, since a one-factor model can always be used to describe the intercorrelations among three variables. For p = 5 there are [p(p - 1) (p - 2) (p - 3)]/8 different tetrads (see Harman 1976), which equals fifteen. Not all of the possible tetrad differences formed from these fifteen are independent, and for one factor to explain the correlations, there are p (p - 3)/2 independent tetrad differences. Thus, in the case of five variables there are five tetrad differences that must vanish, and for six there are nine, and so forth.

Although in recent years there has been a revival of interest in Spearman's vanishing tetrads for sets of four variables (Glymour et al. 1987), at the time he developed this logic there was little that could be done computationally with very large problems. Thurstone (1947) developed the centroid method, which was in common use during the 1940s and 1950s, as an approximation to the principal axes approach involved in Spearman's early work; with the development of the high-speed computer, however, principal axes methods became (and remain) quite common in many applications of the model.

In exploratory factor analysis, where the number of factors of the model is not known beforehand, estimation is carried out by way of an eigenvalue/eigenvector decomposition of some matrix, either R or some estimate of R - U². There is a wide variety of types of factor analyses that can be done—principal component factor analysis (which analyzes the first p nonzero components of R), communality-based factor analysis (which analyzes R with a communality estimate in the diagonal), alpha factor analysis, canonical factor analysis, or image analysis (see Harris 1963, 1964). Few developments have been made in these approaches since the 1960s, although there continues to be considerable debate about the desirable properties of these various approaches (e.g., see Widaman 1991).

Perhaps the most important development affecting exploratory factor analysis since the 1960s has been the development of maximum-likelihood factor analysis. Maximum-likelihood estimation, however, requires the prior estimate of the number of factors. These methods are most often discussed in connection with confirmatory factor analysis, although the approach to exploratory factor analysis discussed by Lawley and Maxwell (1971) illustrates how a form of traditional exploratory factor analysis can be done by setting minimal constraints on the model and testing successive hypotheses about the number of factors. A discussion of these models occurs in a subsequent section on confirmatory factor analysis. Before this, a more formal presentation of the factor model in matrix form is given, along with a discussion of several of the longstanding problems that dominate the literature on factor analysis, specifically the problem of estimating communality, the problem of estimating factor scores, and the problem of determining factorial invariance.

THE FACTOR MODEL IN MATRIX NOTATION

We can generalize the model given above for the case of multiple factors, k, in matrix notation. And again, the factor model can be easily represented in terms of the data matrix at hand. (The model can also be written compactly in vector notation for populations of interest. This is the approach taken in the subsequent discussion of confirmatory factor analysis.) The data matrix in this case can be represented as a p by n array of variable scores. Let Z′ symbolize this p by n data matrix. Using this notation, write the common factor model for a set of p variables as Z′ = AF′ + UW′, where Z′ is as defined above, A is a p by k factor pattern matrix (in the case of uncorrelated factors A is also called the factor structure matrix), F′ is a k by n matrix of hypothetical factor scores, U is a p by p diagonal matrix of unique-score standard deviations (defined such that the element uᵢ is the square root of the unique variance, σᵤᵢ²), and W′ is a p by n matrix of hypothetical unique scores. Note that the factors (both common and unique) are never observed—they exist purely at the hypothetical level. Note also that because we have standardized the variables (the z's) to be centered about the mean and to have standard deviations of unity, the factor scores in this model are theoretically standardized in the same fashion. In other words, E(F′F) = Iₖ, and the variances of the unique scores are equal to E(W′W) = U², assumed to be a diagonal matrix.

Traditionally, the factor model assumed that the factors of this model were uncorrelated, that the unique parts of the data (the W) are uncorrelated with the common parts (the F), and that the unique variation in variable i is uncorrelated with the unique variation in variable j, for all i and j. In matrix notation, the factor model assumes that E(F′F) = Iₖ, E(W′W) = U², and E(F′W) = E(W′F) = 0. In other words, the factors of the model are uncorrelated with one another and have variances of unity. Also, the common factors and unique factors are uncorrelated, and the unique factors are uncorrelated among themselves.

This type of notation helps clarify the fact that factor analysis is in effect interested in the "reduced" data matrix, Z′ - UW′, rather than Z′. Consequently, the factor model is concerned with the decomposition of the matrix R - U² (the correlation matrix with communalities in the diagonal) rather than R (the correlation matrix with unities in the diagonal), since in equation 8 we demonstrate the following:
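
A sketch of the demonstration (equation 8), using the same loose expectation notation as the text:

```latex
E\left[(Z' - UW')(Z' - UW')'\right]
= E\left[AF'FA'\right]
= A\,E(F'F)\,A'
= AA'
= R - U^2 .
```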

This demonstrates an often misunderstood fact—namely, that factor analysis focuses on the reduced correlation matrix, R - U², rather than on the correlation matrix (with 1s in the diagonal). As will be clarified below, this is the fact that differentiates factor analysis from principal components analysis—the latter operates on the correlation matrix itself. In factor analysis, then, one must begin with some estimate of I - U², or H², the matrix of communalities, and then work on the decomposition of R - U². This poses a dilemma, since neither the common nor the unique factors are observed, and it is therefore not possible to know U² and H² beforehand. The objective is to come up with an estimate of H² that preserves the positive semidefiniteness of R - U². At the same time, H² is one aspect of what one wants to discover from the analysis, and yet in order to estimate the model one must know this matrix beforehand. The solution to this problem is to begin with an "estimate" of the communalities of the variables; through an iterative procedure new estimates are then obtained, and a solution is reached when some criterion of fit converges.
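
The following sketch illustrates this iterative procedure in NumPy; the function name, starting values, and convergence rule are illustrative choices rather than part of any of the methods cited above:

```python
import numpy as np

def principal_axis_factoring(R, k, n_iter=100, tol=1e-6):
    """Iterated principal-axis factoring of a p x p correlation matrix R."""
    R = np.asarray(R, dtype=float)
    # Initial communality estimates: squared multiple correlations,
    # computed from the diagonal of the inverse correlation matrix.
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)        # R - U^2: communalities in the diagonal
        eigvals, eigvecs = np.linalg.eigh(R_reduced)
        order = np.argsort(eigvals)[::-1][:k]  # keep the k largest roots
        A = eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))
        h2_new = (A ** 2).sum(axis=1)          # updated communalities
        if np.max(np.abs(h2_new - h2)) < tol:
            h2 = h2_new
            break
        h2 = h2_new
    return A, h2                               # loadings (p x k) and communalities

# Example: a small correlation matrix consistent with one common factor.
R = np.array([[1.00, 0.48, 0.40],
              [0.48, 1.00, 0.30],
              [0.40, 0.30, 1.00]])
A, h2 = principal_axis_factoring(R, k=1)
```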


COMMUNALITY ESTIMATION AND THE NUMBER OF FACTORS

Determining the number of factors in exploratory factor analysis is one of the fundamental problems involved in arriving at a solution to the parameters of the factor model. The problem essentially involves determining the rank, k, of the matrix R - U², where these matrices are as defined above. Technically, we want to find a matrix U² that will retain the property of positive semidefiniteness in R - U² with the smallest possible rank (Guttman 1954). The rank of this matrix in this case is the minimum number of factors necessary to reproduce the off-diagonal elements of R. Thus, the problem of determining k is closely related to the communality estimation problem, that is, determining an estimate for the diagonal of R - U², that is, H².

Guttman (1954) outlined the problem of deciding on the number of factors and compared three principles for estimating k via solutions to the communality estimation problem. He described a "weak lower bound" on the number of factors, k₁, as the number of nonnegative roots (eigenvalues) of the matrix R - I. This is equivalent to the number of roots of R greater than or equal to unity, since R and R - I differ only by I, and their roots therefore differ only by unity. Guttman shows that k₁ is a lower bound to k, that is, k ≥ k₁. A second principle, one that also implies another approach to estimating communality, is based on the matrix R - D, where D is a diagonal matrix whose elements (j = 1, . . . , p) equal unity minus the square of the largest correlation of variable j with any of the p - 1 other variables; the corresponding bound, k₂, is the number of nonnegative roots of this matrix. Guttman shows that k₂ is also a lower bound to k, such that k ≥ k₂. A third and perhaps the most common approach to estimating communalities is based on the idea that the squared multiple correlation for each variable, predicted on the basis of all the other variables in the model, is the upper limit on what the factors of the model might reasonably explain. Here one defines the matrix R - C², where C² is a diagonal matrix whose elements Cⱼ² (j = 1, . . . , p) are equal to 1 - rⱼ², where rⱼ² is the squared multiple correlation of variable j with the remaining p - 1 variables; the number of nonnegative roots of this matrix is k₃, and Guttman shows that k₃ is also a lower bound to k. This third lower bound is often referred to as Guttman's strong lower bound, since he showed the following relationships among the lower bounds: k ≥ k₃ ≥ k₂ ≥ k₁. In practice, k₁ may be adequate, but it could be wrong, and, if wrong, it is likely to be too small. The use of k₁ is probably questionable in the general case. The use of k₂ is obsolete and not practicable; it estimates communality in the manner of the Thurstone centroid method, which is only a rough approximation to a least-squares approach. Perhaps the best solution is the choice of k₃. It is less likely to overlook common factors, as k₁ might, since the squared multiple correlation rⱼ² is a lower bound to the communality hⱼ². It should be pointed out that the lower bounds k₁ and k₃ are what distinguish the two main approaches to factor analysis, namely an incomplete principal components decomposition (referred to as principal components factor analysis) and the principal factor method of analysis.
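
The first and third of these bounds are easy to compute directly from R; the sketch below (NumPy; the function name and the use of a simple zero cutoff for "nonnegative" roots are illustrative assumptions) counts the roots of R at or above unity (k₁) and the nonnegative roots of R - C² with squared multiple correlations in the diagonal (k₃):

```python
import numpy as np

def guttman_lower_bounds(R):
    """Guttman's weak (k1) and strong (k3) lower bounds on the number of factors."""
    R = np.asarray(R, dtype=float)
    k1 = int(np.sum(np.linalg.eigvalsh(R) >= 1.0))   # roots of R at or above unity
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))      # squared multiple correlations
    R_c = R.copy()
    np.fill_diagonal(R_c, smc)                       # R - C^2
    # With sample data a small tolerance may be preferable to an exact zero cutoff.
    k3 = int(np.sum(np.linalg.eigvalsh(R_c) >= 0.0))
    return k1, k3
```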

FACTOR ANALYSIS VERSUS PRINCIPAL COMPONENTS ANALYSIS

It was mentioned above that it is not understood well enough that factor analysis is concerned mainly with the matrix R - U² rather than with R. This is in fact one of the things that distinguishes factor analysis from principal components analysis. However, the differences between the two are more fundamental. Factor analysis is based on a model, a particular theoretical view (hypothesis, if you like) about the covariance (or correlational) structure of the variables. This model states (as given above) that the correlation matrix for a set of variables can be partitioned into two parts—one representing the common parts of the data and one representing uniqueness—that is, R = AA′ + U². The factor model states, first, that the off-diagonal elements of AA′ equal the off-diagonal elements of R and, second, that the elements of U² (a diagonal matrix), when added to the diagonal elements of AA′, give the diagonal elements of R. Thus, the factor model posits a set of k hypothetical variables (k << p) that can account for the interrelationships (or correlations) of the variables but not for their total variances.

In contrast to this, principal components is not a model in the same sense—it is best viewed as a method. It is one method for obtaining an initial approximation to the common factor model (see Guttman's weak lower bound, discussed above), but it is extremely important to distinguish such an "incomplete" principal components solution (one associated with the roots of R that are equal to or greater than unity) from the full-rank principal components decomposition of R (see Maxwell 1977).

Any square symmetric nonsingular matrix, for example R (R = R′), can be written in the form R = QD²Q′, where D² is a diagonal matrix of order p containing eigenvalues ordered according to decreasing magnitude, and Q is a matrix of unit-length eigenvectors (as columns) associated with the eigenvalues. Q is an orthonormal matrix, Q′Q = I = QQ′. This model is referred to as the principal components decomposition of R. Typically, one either analyzes a correlation matrix with 1s in the diagonal, or a covariance matrix with variances in the diagonal, in the application of this decomposition. In this model the correlation matrix, R, is formed from a centered or deviation-score data matrix scaled so that the variables have variances of unity. Let Z′ be the p × n data matrix, as above. Note that the expected value of Z′Z equals QD²Q′, since Z′ = QDY′.

If the correlation matrix is of full rank, then there will be p columns in Q. This means that in this case the principal components model involves a transformation of p variables into a set of p orthogonal components. When the correlation matrix is singular, meaning that the rank of the matrix is less than p, the principal components decomposition is said to be incomplete, but from the point of view of factor analysis this is often irrelevant since it is the matrix R - U2 that is of interest to the factor analyst.

If P is a p × r components matrix (P = QD), and r = p, then it is well known that Y′ = (P′P)⁻¹P′Z′ = P⁻¹Z′, where Y′ is the r × n matrix of component scores, P is as defined above, and Z′ is the p × n data matrix involving p variables and n units of observation (e.g., persons, cities, social organizations). In other words, component scores (in contrast to factor scores) are directly calculable.
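
A minimal sketch of this direct computation (NumPy; the function and variable names are illustrative, and Z is taken as an n × p matrix of standardized scores, the transpose of Z′ above):

```python
import numpy as np

def principal_component_scores(Z, r):
    """Direct computation of the first r principal component scores.

    Z is an n x p matrix of standardized variables (observations by variables),
    so that R = Z'Z / n is the correlation matrix analyzed in the text.
    """
    Z = np.asarray(Z, dtype=float)
    n, p = Z.shape
    R = (Z.T @ Z) / n
    eigvals, Q = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1][:r]     # r largest roots and their vectors
    Q, d = Q[:, order], np.sqrt(eigvals[order])
    P = Q * d                                 # components matrix P = QD
    # Y' = (P'P)^{-1} P' Z', equivalently Y = Z Q D^{-1}
    Y = Z @ Q / d
    return Y, P                               # scores (n x r) and components (p x r)
```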


THE ROTATION PROBLEM–CORRELATED VERSUS UNCORRELATED FACTORS

Principal components are by definition uncorrelated with one another. The basic objective of the method is to obtain a set of p orthogonal (uncorrelated) new variables via a linear transformation of the p original variables. Factors are different. Factors may be uncorrelated, and in classical exploratory factor analysis one always begins with a set of uncorrelated factors, but in general this is not a requirement. Indeed, in exploratory factor analysis the factors one obtains are uncorrelated because of the nature of the methods used, but normally one performs a transformation or rotation of these factors to achieve a more pleasing representation for interpretation purposes.

Two types of rotations are available—those that preserve the uncorrelated nature of the factors, such as the varimax and quartimax rotations (see Kaiser 1958; Neuhaus and Wrigley 1954), and those that allow the factors to be correlated. The latter are called "oblique" rotations because they move the factors out of the orthogonal reference into a vector space that reduces the geometric angles between them. Using either of these approaches, the basic goal of rotation is to achieve what Thurstone called simple structure, the principle that variables should simultaneously load highly on one factor and low on all other factors. These rotational approaches are relatively straightforward and are discussed in all of the textbook descriptions of factor analysis.
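
As one concrete illustration of an orthogonal rotation, the sketch below implements a common SVD-based form of the varimax criterion (NumPy; this particular algorithm is a widely used formulation, not one taken from the sources cited here):

```python
import numpy as np

def varimax(A, gamma=1.0, n_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a p x k loading matrix A.

    Returns the rotated loadings and the k x k orthonormal rotation matrix T,
    so that the rotated solution is A @ T.
    """
    A = np.asarray(A, dtype=float)
    p, k = A.shape
    T = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        L = A @ T
        # Gradient of the varimax criterion with respect to the rotation.
        G = A.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        u, s, vt = np.linalg.svd(G)
        T = u @ vt
        d_new = s.sum()
        if d_new < d * (1.0 + tol):           # stop when the criterion stops improving
            break
        d = d_new
    return A @ T, T
```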


FACTORIAL INVARIANCE

Following Thurstone's (1938, 1947) discussions of factor analysis, students of the method have frequently been concerned with the problem of the correspondence between factors identified in separate studies or in subgroups of the same study. Using Thurstone's terminology, a concern with the correspondence of factors refers to the invariance of factors. The concern with factorial invariance has generated an array of methods for comparing factors (see Mulaik 1972). The most common approach to the problem involves the computation of an index of factor similarity for corresponding factors given estimates of a factor model using the same variables in two or more samples. The details of various strategies for estimating factor similarity will not be covered here, as descriptions can be found in a variety of factor analysis textbooks.

These approaches were developed primarily for results obtained from exploratory factor analysis, and it can be argued that the issues of factorial invariance can be more fruitfully addressed using the methods of confirmatory factor analysis (see Jöreskog 1971b; Lawley and Maxwell 1971). The technical aspects of these methods will not be reviewed here, as they have been exposited elsewhere (see Alwin and Jackson 1979, 1981). Suffice it to say that issues of factorial invariance can be phrased, not only with respect to the correspondence of the factor pattern coefficients (the A matrix) across populations, but also with respect to other parameters of the model as well, particularly the matrix of factor interrelationships (correlations and covariances) and the matrix of disturbance covariances.

It is perhaps useful in this context to raise a more general question regarding the nature of factorial invariance that is sought in the analysis of the factorial content of measures. In general there is no consensus regarding whether stronger or weaker forms of invariance are necessary for comparisons across populations or subpopulations. Horn and associates (1983), for example, suggest that rather than the "equivalence" of factor structures across populations, weaker "configurational" forms of invariance are "more interesting" and "more accurate representations of the true complexity of nature." By contrast, Schaie and Hertzog (1985, pp. 83–85) argue that the condition of factorial invariance, that is, "the equivalence of unstandardized factor loadings across multiple groups," is critical to the analysis of differences among groups and developmental changes within groups.

Clearly, these represent extremes along a continuum of what is meant by the question of factorial invariance. On the one hand, for strict comparison of content across groups, it is necessary to have the same units of measurement, that is, invariance of metric. This requires the same variables measured across populations, and some would also argue that such invariance of metric requires that the relationships of the variables and the factors be equivalent across populations (see Jöreskog 1971a). On the other hand, if the same pattern of loadings seems to exist, it may be an example of misplaced precision to require equivalence in the strictest sense. Of course, the resolution of these issues has implications for other uses to which factor analysis is typically put, especially the construction of factor scores and the use of causal modeling strategies to compare substantive processes across groups.


THE PROBLEM OF FACTOR SCORE ESTIMATION


Researchers using the common factor model as a data reduction tool typically engage in the estimation of such models in order to obtain scores based on the factors of the model, which can then be used to represent those factors in further research. As will be shown here, factor scores can never be computed directly (as component scores can be). Factor scores are always estimated, and, due to the nature of the factor model, "estimated factor scores" never correlate perfectly with the underlying factors of the model. An important alternative to factor scores is what have come to be called "factor-based" scores, which are scores derived from the results of the factor analysis, using unit versus zero weightings for the variables instead of the factor score weights derived from one or another method of estimating factor scores. Factor-based scores, which are frequently easier to justify and much more practical, typically correlate so highly with factor score estimates as to make one skeptical of the need for factor score estimates at all (see, e.g., Alwin 1973b).

However, it is important that factor analysts understand the nature of the factor score estimation problem, regardless of whether factor score estimates become any more appealing than the simpler and more stable factor-based scores. The factor score estimation problem can best be seen in terms of an interest in solving for the matrix F′ in the above matrix representation of the common factor model, Z′ = AF′ + UW′. This can be done analytically, but, as will be seen, it is not possible to do so empirically because of the nature of the model. Solving for F′ in this model yields the following representation (without going through all of the necessary steps): F′ = (A′A)⁻¹A′[Z′ - UW′]. The calculations implied by this expression cannot actually be carried out because one never knows W′. This is known as the "factor measurement problem," which means that factor scores cannot be computed directly and must therefore be estimated.

The question, then, becomes whether it is possible to estimate factor scores in a manner that is useful, given what is known—Z, A, and U². Several approaches have been set forth for estimating the factors, all of which involve some transformation of the data matrix Z′ into a set of k scores that vary in their properties (see Harris 1967; McDonald and Burr 1967). Most of these methods bear some resemblance to the analytic solution for F′ above, but there are some technical differences. One of the most commonly misunderstood facts involved in the estimation of factor scores is that the factor pattern coefficient matrix, A, cannot be applied directly to the estimation of the factors; that is, F′ cannot be estimated by A′Z′. This, of course, should be clear from the above representation, but the approach is often used, probably due to ignorance of the more "correct" factor score estimation strategies.

There are four recognized strategies for estimating scores representing the common factors of the model, given Z, A, and U² (Alwin 1973b). All of these approaches are typically discussed for a model such as that described above, namely a set of uncorrelated factors scaled to have 0 means and standard deviations of 1. It is not the purpose of the present discussion to evaluate the properties of these various approaches to factor score estimation, but a brief summary can perhaps provide a guide to the technical literature on this topic. It is important to emphasize that none of these approaches produces factor score "estimates" that are perfectly correlated with the underlying factors of the model. Some of these approaches produce univocal score estimates, meaning that the scores for each factor correlate only with the factor they are intended to measure and not with factors they are not intended to measure. Only one of the approaches produces a set of factor score estimates that reflects the property of uncorrelated factors with unit standard deviations. But it is difficult in practice to evaluate the desirability of any of the properties of factor score estimates.
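
One of the most widely used of these strategies, often called the regression method, is sketched below (NumPy; presented only as an illustration of how such estimates are formed from Z, A, and R for an orthogonal model with standardized variables):

```python
import numpy as np

def regression_factor_scores(Z, A, R=None):
    """Regression-method factor score estimates for an orthogonal factor model.

    Z : n x p matrix of standardized observed variables
    A : p x k factor structure matrix (loadings = variable-factor correlations)
    Returns an n x k matrix of estimated factor scores, F_hat = Z R^{-1} A.
    """
    Z = np.asarray(Z, dtype=float)
    A = np.asarray(A, dtype=float)
    if R is None:
        R = (Z.T @ Z) / Z.shape[0]       # sample correlation matrix
    return Z @ np.linalg.solve(R, A)     # Z R^{-1} A
```

As emphasized above, scores produced this way are estimates; they do not correlate perfectly with the underlying common factors.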


METHODS OF CONFIRMATORY FACTOR ANALYSIS


Confirmatory factor analysis, unlike the methods of exploratory factor analysis, begins with prior knowledge regarding the number of factors and something about the nature of their relationships to the observed variables. In the absence of such knowledge, confirmatory factor analysis is not appropriate. In the typical situation of confirmatory factor analysis, then, the investigator begins with some specific theoretical hypotheses involving a model that can be tested against the data. Naturally, there is an extremely large number of such possible models, so it should be obvious that the techniques cannot easily be used to "search" for the best possible set of restrictions involving k factors (but see Glymour et al. 1987).

The bulk of this review has been devoted to the use of exploratory techniques of factor analysis. This imbalance is perhaps justified, given what is known within sociology about the common factors in our data. Exploratory factor analysis techniques are likely to be much more useful, especially at a stage where less knowledge has been developed. And within a field like sociology, where there is a broad variety of competing concepts and paradigms, exploration of data may often be the most salutary strategy. There are, however, clear-cut instances where the use of confirmatory factor analysis techniques is in order, and the remainder of this discussion focuses on these situations.

Consider the following model for a set of p variables: y = ν + λ η + ε, where ν is a vector of location parameters or constants representing the origins of measurement of the p observed variables, η is a vector of k latent variables or factors, and ε is a vector of random disturbances for the p observed variables. The covariance properties associated with η and ε are basically the same as those discussed in the section on exploratory factor analysis for F and W, except that in general these are not required to be uncorrelated within the common factor set. And, of course, there is no restriction on the metric of the variables; that is, the p variables and k factors are not necessarily standardized to have 0 means and standard deviations of unity. The coefficient matrix, λ, is a matrix of regression coefficients relating the p observed variables to the k latent factors. In the case of a single population, one can easily consider the p variables centered (which would remove the vector of location constants) and scaled to have unit variance, but in the situation where one wants to compare populations neither of these constraints is probably desirable.

The use of these models requires the distinction between constrained and unconstrained parameters. Typically, one refers to parameters of the model as fixed if they are constrained to have a particular value, such as a factor loading of 0 or, in the case of the variance of a latent factor, a variance of unity. By contrast, the unknown parameters of the model, for example the λs, are referred to as free parameters, which means that they are estimated in the model under the constraints specified for fixed parameters. Thus, one speaks of fixed or constrained parameters on the one hand and free or estimable parameters on the other. The major breakthrough in the use of this type of model was the development of computer programs that allow one to fix certain parameters of the model to known quantities while estimating the free parameters under these constraints. The general approach also allows one to specify causal connections among the latent factors of the model, and it allows one to specify correlations among the errors in the variables and the errors in the equations connecting the latent factors.

Consider the factor model for the situation where there are p = 4 variables and k = 2 factors, with the pattern of factor pattern coefficients shown in equation 9, where the first two variables are believed to measure η1 and the third and fourth variables are said to measure η2. This is the kind of situation described by Costner (1969), who developed an approach to confirmatory factor analysis using Spearman's tetrad differences. Of course, there are more efficient estimation strategies than those proposed by Costner. In any event, the point of this example is that the investigator begins not only with a specific number of factors in mind but also with a specific set of assumptions about the pattern of loadings.
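
With zeros entered as fixed parameters, the hypothesized pattern for this example (equation 9) is:

```latex
\lambda = \begin{bmatrix}
\lambda_{11} & 0 \\
\lambda_{21} & 0 \\
0 & \lambda_{32} \\
0 & \lambda_{42}
\end{bmatrix} .
```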



In the general case the covariances and correlations of the common factors of the model, E(ηη′), can be symbolized by ψ (sometimes this matrix is denoted as Φ, but there are many ways in which to symbolize these quantities), and the covariances of the disturbances (or errors) on the variables can be symbolized by θε. Neither of these two matrices, ψ and θε, is required by the model to be diagonal, although in the simplest form of the confirmatory model θε is often assumed to represent a set of uncorrelated disturbances. In more complicated forms of the model, within constraints placed by the identification problem, both of these matrices can be nondiagonal. In either case, the model here is written with the assumption that the investigator has prior theoretical knowledge regarding the number of sources of common variation and that the η vector exhausts those sources.
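
Under these assumptions, the covariance matrix of the observed variables implied by the model can be written in the standard way as:

```latex
\Sigma = E\left[(y - \nu)(y - \nu)'\right] = \lambda \psi \lambda' + \theta_{\varepsilon} .
```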

Any application of this model requires that it be identified, which essentially means that there must be enough independent information within the covariance and correlation structure being analyzed to solve for the unknown parameters of the model. In general, there need to be k² constraints on a particular common factor model, that is, on the parameters in λ and ψ. Other constraints, of course, are possible. Space does not permit the discussion of these matters, but a detailed treatment of these issues can be found elsewhere (see Alwin 1988a; Alwin and Jackson 1979).

The True-Score Models. It can be shown that a well-known class of measurement models that form the basis for classical test theory (Lord and Novick 1968) can be specified as a special case of confirmatory factor analysis (see Alwin and Jackson 1979; Jöreskog 1971a). In brief, by placing particular constraints on the λ and θε matrices of the model, one can estimate the parameters of models that assume the measures are parallel, tau-equivalent, or congeneric. Of course, as indicated earlier, in order to use the common factor model in such a fashion, one must be reasonably sure that there is little or no specific variance in the measures. Otherwise, one runs the risk of confusing reliable specific variance with measurement error variance.
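
Stated in the notation of the model above, the standard constraints are:

```latex
\begin{aligned}
\text{congeneric:} \quad & y_i = \nu_i + \lambda_i \eta + \varepsilon_i , \quad \lambda_i \ \text{and} \ \theta_{\varepsilon_i} \ \text{free} \\
\text{tau-equivalent:} \quad & \lambda_1 = \lambda_2 = \cdots = \lambda_p , \quad \theta_{\varepsilon_i} \ \text{free} \\
\text{parallel:} \quad & \lambda_1 = \cdots = \lambda_p \quad \text{and} \quad \theta_{\varepsilon_1} = \cdots = \theta_{\varepsilon_p}
\end{aligned}
```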

Multitrait–Multimethod Models. In addition to the application of confirmatory factor analysis to the estimation of classical true-score models, several attempts have been made to augment the traditional latent "trait" model, inherent in the classical model, by the addition of "method" factors based on the multitrait–multimethod design of measurement within the framework of confirmatory factor models (see Alwin 1974; Alwin and Jackson 1979; Werts and Linn 1970). This provides a partitioning of the specific variance due to method, but it does not provide a general solution to the problem of handling specific variance. While these models can be very useful for partitioning item-level variance into components due to trait, method, and error, they place relatively high demands on the measurement design. And while the required designs are relatively rare in practice, these models help sensitize the researcher to problems of correlated method error (see, e.g., Costner 1969). Recent work in this area has shown that the multitrait–multimethod model can be fruitfully applied to questions of survey measurement quality, assessing the extent to which correlated measurement errors account for covariation among survey measures (see Alwin 1997; Scherpenzeel 1995).

Multiple-Indicator, Multiple-Cause Models. One of the simplest forms of causal models involving latent common factor models is one in which a single latent endogenous variable having several indicators is determined by several perfectly measured exogenous variables. Jöreskog (1974) and Jöreskog and Goldberger (1975) refer to this as a multiple-indicator, multiple-cause (MIMIC) model. This kind of model has certain similarities to the canonical correlation problem (see Hauser and Goldberger 1971).
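
In its simplest standard form, with x denoting the perfectly measured causes, the model consists of a structural equation for the latent variable and a measurement equation for its indicators:

```latex
\eta = \gamma' x + \zeta , \qquad y = \lambda \eta + \varepsilon .
```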

Analysis of Change—Simplex Models. One type of model that can be viewed as a confirmatory factor model, and is useful in analyzing change with respect to the latent common factors over time, falls under the rubric of simplex models (Jöreskog 1974). Such models are characterized by a series of measures of the same variables separated in time, positing a Markovian (lag-1) process to describe change and stability in the underlying latent variable. This model can be used in situations where there is a single variable measured over time (see Heise 1969) or in situations where there are multiple measures of the latent variable at each time (see Wheaton et al. 1977).
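
For a latent variable measured at occasions t = 1, . . . , T, a standard way of writing the quasi-Markov simplex consistent with this description is:

```latex
\eta_t = \beta_{t,\,t-1}\,\eta_{t-1} + \zeta_t \quad (t = 2, \ldots, T), \qquad
y_{jt} = \lambda_{jt}\,\eta_t + \varepsilon_{jt} .
```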

These models have proven to be valuable in the study of human development and change, especially applied to panel studies of individual lives over time (Alwin 1994, 1995a; Alwin and Krosnick 1991; Alwin et al. 1991; Asendorf 1992). A related set of models in this area is the latent growth curve model, in which levels and trajectories of growth in latent factors can be conceptualized and estimated (e.g., Karney and Bradbury 1995; McArdle 1991; McArdle and Anderson 1990; Willett and Sayer 1994). In such applications the focus is on the nature of growth processes and the correlates/predictors of individual change in latent factors over time. This is a natural extension of traditional methods of confirmatory factor analysis and causal modeling strategies within the context of longitudinal data.

Causal Modeling of Factors. If one obtains multiple measures of factors, for which common factor models are believed to hold, and the factors can be specified to be causally related, then it is possible to use confirmatory techniques to estimate the causal influences of factors on one another. Of course, one must be able to justify these models strongly in terms of theoretical considerations, and there must be considerable prior knowledge (as in the use of confirmatory factor analysis) that justifies the specification of such measurement models. The logic involved in dealing with the linkage between observed and unobserved variables is essentially that involved in confirmatory factor analysis, while the logic applied in dealing with the causal linkages among factors is that involved in path analysis and structural equation modeling. The main point is that the parameters of models that essentially contain two parts—a measurement part specifying a model linking observed and latent variables and a structural model linking the latent variables—can be estimated within the framework of LISREL-type models. The measurement part can typically be viewed within the framework of confirmatory factor analysis, although in some cases an "induced variable" model is more appropriate (Alwin, 1988b).


CONCLUSION

This review is designed to provide an overview of the major issues involved in the use of factor analysis as a research tool, including both exploratory and confirmatory techniques. There are several useful textbook discussions of factor analysis that will aid those who desire further study. Among these, the texts by Gorsuch (1984), Harman (1976), Mulaik (1972), McDonald (1985), and Lawley and Maxwell (1971) are some of the best sources on exploratory factor analysis. More recent texts offer conceptual grounding and practical guidance in the use of confirmatory factor analysis and causal modeling strategies (e.g., Bollen 1989; Loehlin 1992). The newest developments in the area involve advances in the analysis of categorical and ordinal data, statistical estimation in the presence of incomplete data, and the provision of graphical interfaces for ease of specification of causal models. There are now several major competitors among the software packages that can be used to estimate many of the confirmatory factor models discussed here. Although these packages offer largely comparable capabilities, each takes a somewhat distinctive approach. The LISREL approach to the analysis of covariance structures was first made available in the early 1970s; it is now in its eighth version and offers many improvements over its predecessors (Jöreskog and Sörbom 1996a). The LISREL8 program is supplemented by PRELIS (Jöreskog and Sörbom 1996b), which provides data screening and transformation capabilities, and by SIMPLIS (Jöreskog and Sörbom 1993), which provides a simple command language for formulating LISREL-type models. There are several alternatives to the LISREL approach, including EQS (Bentler 1992), AMOS (Arbuckle 1997), and Mplus (Muthén and Muthén 1998), among others, which provide many attractive features (see West 1997).

(see also: Causal Inference Models; Measurement; Multiple Indicator Models; Validity).


references

Alwin, D. F. 1973a "Making Inferences from Attitude—Behavior Correlations." Sociometry 36:253–278.

——1973b "The Use of Factor Analysis in the Construction of Linear Composites in Social Research." Sociological Methods and Research 2:191–214.

——1974 "Approaches to the Interpretation of Relationships in the Multitrait-Multimethod Matrix." In H. L. Costner, ed., Sociological Methodology 1973–74. San Francisco: Jossey-Bass.

——1988a "Structural Equation Models in Research on Human Development and Aging." In K. Warner Schaie, Richard T. Campbell, William Meredith, and Samuel C. Rawlings, eds., Methodological Issues inAging Research. New York: Springer.

——1988b "Measurement and the Interpretation of Coefficients in Structural Equation Models." In J. S. Long. ed., Common Problems/Proper Solutions: Avoiding Error in Quantitative Research. Beverly Hills, Calif.: Sage.

——1989 "Problems in the Estimation and Interpretation of the Reliability of Survey Data." Quality andQuantity 23:277–331.

——1990 "From Causal Theory to Causal Modeling: Conceptualization and Measurement in Social Science." In J. J. Hox and J. De-Jong. Gierveld, eds., Operationalization and Research Stategy. Amsterdam: Swets and Zeitlinger.

——1994 "Aging, Personality and Social Change." In D. L. Featherman, R. M. Lerner, and M. Perlmutter, eds., Life-Span Development and Behavior, vol. 12. Hills-dale, N.J.: Lawrence Erlbaum.

——1995a "Taking Time Seriously: Social Change, Social Character Structure, and Human Lives." In P. Moen, G. H. Elder, Jr., and K. Lüscher, eds., Examining Lives in Context: Perspectives on the Ecology of Human Development. Washington, D.C.: American Psychological Association.

——1995b "Quantitative Methods in Social Psychology." In K. Cook, G. Fine, and J. House, eds., Sociological Perspectives on Social Psychology. New York: Allyn and Bacon.

——1997 "Feeling Thermometers Versus 7-Point Scales—Which Are Better?" Sociological Methods and Research 25:318–340.

——, R.L. Cohen, and T.M. Newcomb 1991 Political Attitudes Over the Life-Span: The Bennington Women After 50 Years. Madison: University of Wisconsin Press.

——, and D. J. Jackson 1979 "Measurement Models for Response Errors in Surveys: Issues and Applications." In K. F. Schuessler, ed., Sociological Methodology 1980. San Francisco: Jossey-Bass.

——, 1981 "Applications of Simultaneous Factor Analysis to Issues of Factorial Invariance." In D. J. Jackson and E. F. Borgatta, eds., Factor Analysis and Measurement in Sociological Research. Beverly Hills, Calif.: Sage.

——, and D. J. Jackson 1982 "Adult Values for Children: An Application of Factor Analysis to Ranked Preference Data." In R. M. Hauser et al. eds., Social Structure and Behavior. New York: Academic Press.

——, and J. A. Krosnick 1991 "Aging, Cohorts and the Stability of Socio-Political Orientations Over the Life-Span." American Journal of Sociology 97:169–195.

——, and R. C. Tessler. 1974 "Causal Models, Unobserved Variables, and Experimental Data." American Journal of Sociology 80:58–86.

Arbuckle, J. L. 1997 Amos Users' Guide. Version 3.6. Chicago: SmallWaters Corporation.

Asendorpf, J. B. 1992 "Continuity and Stability of Personality Traits and Personality Patterns." In J. B. Asendorpf and J. Valsiner, eds., Stability and Change in Development. Newbury Park, Calif.: Sage.

Bentler, P. M. 1983 "Some Contributions to Efficient Statistics for Structural Models: Specification and Estimation of Moment Structures." Psychometrika 48:493–517.

——1989 EQS Structural Equations Program Manual, Version 3.0. Los Angeles: BMDP Statistical Software.

——, and D. G. Weeks 1980 "Linear Structural Equations with Latent Variables." Psychometrika 45:289–308.

Blalock, H. M., Jr. 1964 Causal Inferences in Nonexperimental Research. Chapel Hill: University of North Carolina Press.

——1965 "Some Implications of Random Measurement Error for Causal Inferences." American Journal of Sociology 71:37–47.

——1969 "Multiple Indicators and the Causal Approach to Measurement Error." American Journal of Sociology 75:264–272.

——1970 "Estimating Measurement Error Using Multiple Indicators and Several Points in Time." American Sociological Review 35:101–111.

Bollen, K. A. 1989 Structural Equations with Latent Variables. New York: John Wiley and Sons.

Browne, M. W. 1974 "Generalized Least Squares Estimators in the Analysis of Covariance Structure." South African Statistical Journal 8:1–24.

——1984 "Asymptotically Distribution-Free Methods for the Analysis of Covariance Structures." British Journal of Mathematical and Statistical Psychology 37:62–83.

——, and A. Shapiro 1988 "Robustness of Normal Theory Methods in the Analysis of Linear Latent Variate Models." British Journal of Mathematical and Statistical Psychology 41:193–208.

Christofferson, A. 1975 "Factor Analysis of Dichotomized Variables." Psychometrika 40:5–32.

Costner, H. L. 1969 "Theory, Deduction, and Rules of Correspondence." American Journal of Sociology 75:245–263.

Duncan, O. D. 1966 "Path Analysis: Sociological Examples." American Journal of Sociology 72: 1–16.

——1972 "Unmeasured Variables in Linear Models for Panel Analysis." In H. L. Costner, ed., Sociological Methodology 1972. San Francisco: Jossey-Bass.

——1975 Introduction to Structural Equation Models. New York: Academic Press.

Garnett, J. C. M. 1919–20 "On Certain Independent Factors in Mental Measurement." Proceedings of the Royal Society of London A. 96:91–111.

Glymour, C., R. Scheines, P. Spirtes, and K. Kelly 1987 Discovering Causal Structure: Artificial Intelligence, Philosophy of Science, and Statistical Modeling. New York: Academic Press.

Goldberger, A. S. 1971 "Economics and Psychometrics: A Survey of Communalities." Psychometrika 36:83–107.

——1972 "Structural Equation Models in the Social Sciences." Econometrika 40:979–999.

——, and O. D. Duncan 1973 Structural Equation Models in the Social Sciences. New York: Seminar Press.

Gorsuch, R. L. 1984 Factor Analysis, 2d ed. Hillsdale, N.J.: Lawrence Erlbaum.

Gould, S. J. 1981 The Mismeasure of Man. New York: Norton.

Greene, V. L., and E. G. Carmines 1979 "Assessing the Reliability of Linear Composites." In K. F. Schuessler, ed., Sociological Methodology 1980. San Francisco: Jossey-Bass.

Guttman, L. 1954 "Some Necessary Conditions for Common Factor Analysis." Psychometrika 19:149–161.

——1955 "The Determinacy of Factor Score Matrices with Implications for Five Other Basic Problems of Common-Factor Theory." British Journal of Statistical Psychology 8:65–81.

——1956 "'Best Possible' Systematic Estimates of Communalities." Psychometrika 21:273–285.

——1960 "The Matrices of Linear Least-Squares Image Analysis." British Journal of Statistical Psychology 13:109–118.

Harman, H. H. 1976 Modern Factor Analysis. Chicago: University of Chicago Press.

Harris, C. W. 1963 "Canonical Factor Models for the Description of Change." in C. W. Harris, ed., Problems in Measuring Change. Madison: University of Wisconsin Press.

——1964 "Some Recent Developments in Factor Analysis." Educational and Psychological Measurement 24:193–206.

——1967 "On Factors and Factor Scores. Psychometrika 32:193–379.

Hauser, R. M. 1972 "Disaggregating a Social Psychological Model of Educational Attainment." Social Science Research 1:159–188.

——, and A. S. Goldberger 1971 "The Treatment of Unobservable Variables in Path Analysis." In H. L. Costner, ed., Sociological Methodology 1971. San Francisco: Jossey-Bass.

Heise, D. R. 1968 "Problems in Path Analysis and Causal Inference." In E. F. Borgatta, ed., Sociological Methodology 1969. San Francisco: Jossey-Bass.

——1969 "Separating, Reliability and Stability in Test–Retest Correlation." American Sociological Review 34:93–101.

——1972 "Employing Nominal Variables, Induced Variables, and Block Variables in Path Analysis." Sociological Methods and Research 1:147–174.

——, and G. W. Bohrnstedt 1970 "The Validity, Invalidity, and Reliability of a Composite Score." In E. F. Borgatta and G. W. Bohrnstedt, eds., Sociological Methodology 1970. San Francisco: Jossey-Bass.

Horn, J. L., J. J. McArdle, and R. Mason 1983 "When Is Invariance Not Invariant?: A Practical Scientist's Look at the Ethereal Concept of Factor Invariance." The Southern Psychologist 1:179–188.

Hotelling, H. 1933 "Analysis of a Complex of Statistical Variables into Principal Components." Journal of Educational Psychology 24:417–441, 498–520.

Jöreskog, K. G. 1966 "Testing a Simple Structure Hypothesis in Factor Analysis." Psychometrika 31:165–178.

——1967 "Some Contributions to Maximum-Likelihood Factor Analysis." Psychometrika 32:443–482.

——1970 "A General Method for Analysis of Covariance Structures." Biometrika 56:239–251.

——1971a "Simultaneous Factor Analysis in Several Populations." Psychometrika 36:409–426.

——1971b "Statistical Analysis of Sets of Congeneric Tests." Psychometrika 36:109–133.

——1973 "A General Method for Estimating a Linear Structural Equation System." In A. S. Goldberger and O. D. Duncan, eds., Structural Equation Models in the Social Sciences. New York: Seminar Press.

——1974 "Analyzing Psychological Data by Structural Analysis of Covariance Matrices." In D. H. Kranz, et al., eds., Measurement, Psychophysics, and Neural Information Processing. San Francisco: Freeman.

——, and A. S. Goldberger 1975 "Estimation of a Model with Multiple Indicators and Multiple Causes of a Single Latent Variable." Journal of the American Statistical Association 70:631–639.

——, and D. N. Lawley 1968 "New Methods in Maximum-Likelihood Factor Analysis." British Journal of Mathematical and Statistical Psychology 21:85–96.

——, and D. Sörbom 1986 LISREL: Analysis of Linear Structural Relationships by the Method of Maximum Likelihood. User's Guide. Version 6. Chicago: Scientific Software.

——, and D. Sörbom 1988 PRELIS: A Program for Multivariate Data Screening and Data Summarization (A Preprocessor for LISREL). Version 1.8, 2d ed. Chicago: Scientific Software.

——, and D. Sörbom 1996a LISREL8: User's Reference Guide. Chicago: Scientific Software International.

——, and D. Sörbom 1996b PRELIS2: User's Reference Guide. Chicago: Scientific Software International.

——, and D. Sörbom 1993 LISREL8: Structural Equation Modeling with the SIMPLIS Command Language. Chicago: Scientific Software International.

Kaiser, H. F. 1958 "The Varimax Criterion for Analytic Rotation in Factor Analysis." Psychometrika 23:187–200.

——1963 "Image Analysis." In C. W. Harris, ed., Problems in Measuring Change. Madison: University of Wisconsin Press.

Karney, B. R., and T. N. Bradbury 1995 "Assessing Longitudinal Change in Marriage: An Introduction to the Analysis of Growth Curves." Journal of Marriage and the Family 57:1091–1108.

Lawley, D. N., and A. E. Maxwell 1971 Factor Analysis as a Statistical Method. London: Butterworths.

Lazarsfeld, P. F. 1968 "The Analysis of Attribute Data." In D. L. Sills, ed., International Encyclopedia of the Social Sciences, vol. 15. New York: Macmillan and Free Press.

Loehlin, J. C. 1992 Latent Variable Models: An Introduction to Factor, Path, and Structural Analysis. Hillsdale, N.J.: Lawrence Erlbaum.

Lord, F. M., and M. L. Novick 1968 Statistical Theories of Mental Test Scores. Reading, Mass.: Addison-Wesley.

Maxwell, A. E. 1971 "Estimating True Scores and Their Reliabilities in the Case of Composite Psychological Tests." British Journal of Mathematical and Statistical Psychology 24:195–204.

——1977 Multivariate Analysis in Behavioral Research. London: Chapman and Hall.

McArdle, J.J. 1991 "Structural Models of Developmental Theory in Psychology." Annals of Theoretical Psychology 7:139–160.

McArdle, J.J., and E. Anderson. 1990 "Latent Variable Growth Models for Research on Aging." In J.E. Birren and K.W. Schaie, eds., Handbook of the Psychology of Aging, 3rd ed. New York: Academic Press.

Meredith, W. 1964a "Notes on Factorial Invariance." Psychometrika 29:177–185.

——1964b "Rotation to Achieve Factorial Invariance." Psychometrika 29:187–206.

Mulaik, S. A. 1972 The Foundations of Factor Analysis. New York: McGraw-Hill.

——1975 "Confirmatory Factor Analysis." In D. J. Amick and H. J. Walberg, eds., Introductory Multivariate Analysis for Educational, Psychological, and Social Research. Berkeley: McCutchan Publishing Corp.

Muthén, B. 1978 "Contributions to Factor Analysis of Dichotomous Variables." Psychometrika 43:551–560.

——1983 "Latent Variable Structural Equation Modeling with Categorical Data." Journal of Econometrics 22:43–65.

——1988 LISCOMP—Analysis of Linear Structural Equations with a Comprehensive Measurement Model: A Program for Advanced Research. Version 1.1. Chicago: Scientific Software.

——, and B. O. Muthén 1998 Mplus—The Comprehensive Modeling Program for Applied Researchers. User's Guide, Version 1.0. Los Angeles: Muthén and Muthén.

Neuhaus, J. O., and C. Wrigley 1954 "The Quartimax Method: An Analytical Approach to Orthogonal Simple Structure." British Journal of Statistical Psychology 7:81–91.

Schaie, K. W., and C. Hertzog 1985 "Measurement in the Psychology of Adulthood and Aging." In J. E. Birren and K. W. Schaie, eds., Handbook of the Psychology of Aging. New York: Van Nostrand.

Scherpenzeel, A. 1995 "A Question of Quality: Evaluating Survey Questions By Multitrait–Multimethod Studies." Unpublished doctoral dissertation, University of Amsterdam, The Netherlands.

Siegel, P. M., and R. W. Hodge 1968 "A Causal Approach to Measurement Error." In H. M. Blalock, Jr., and A. B. Blalock, eds., Methodology in Social Research. New York: McGraw-Hill.

Spearman, C. 1904a "'General Intelligence,' Objectively Determined and Measured." American Journal of Psychology 15:201–293.

——1904b "The Proof and Measurement of Association between Two Things." American Journal of Psychology 15:88–103.

——1927 The Abilities of Man. New York: Macmillan.

Thurstone, L. L. 1931 "Multiple Factor Analysis." Psychological Review 38:406–427.

——1938 Primary Mental Abilities. Psychometric Monographs, no. 1. Chicago: University of Chicago Press.

——1947 Multiple Factor Analysis. Chicago: University of Chicago Press.

Werts, C. E., and R. L. Linn 1970 "Path Analysis: Psychological Examples." Psychological Bulletin 74:194–212.

Wheaton, B., et al. 1977 "Assessing Reliability and Stability in Panel Models." In D. R. Heise, ed., Sociological Methodology 1977. San Francisco: Jossey-Bass.

West, Joel 1997 "Structural Equation Software." http://www.gsm.uci.edu/˜joelwest/SEM/Software.htm.

Widaman, K. 1991 "Common Factor Analysis vs. Component Analysis: Differential Bias in Representing Model Parameters." Department of Psychology, University of California, Riverside. Typescript.

Wiley, D. E., and J. A. Wiley 1970 "The Estimation of Measurement Error in Panel Data." American Sociological Review 35:112–117.

Willett, J. B., and A. G. Sayer 1994 "Using Covariance Structure Analysis to Detect Correlates and Predictors of Individual Change Over Time." Psychological Bulletin 116:363–381.


Wright, S. 1921 "Systems of Mating." Genetics 6:111–178.

——1925 Corn and Hog Correlations. U.S. Dept. of Agriculture, Bulletin no. 1300, 1–60. Washington, D.C.: U.S. Government Printing Office.


Duane F. Alwin

Factor Analysis

I STATISTICAL ASPECTS, by A. E. Maxwell
BIBLIOGRAPHY
II PSYCHOLOGICAL APPLICATIONS, by Lloyd G. Humphreys
BIBLIOGRAPHY

I STATISTICAL ASPECTS

In many fields of research–for example, agriculture (Banks 1954), psychology (Burt 1947), economics (Geary 1948), medicine (Hammond 1944; 1955), and the study of accidents (Herdan 1943), but notably in psychology and the other social sciences–an experimenter frequently has scores for each member of a sample of individuals, animals, or other experimental units on each of a number of variates, such as cognitive tests, personality inventories, sociometric and socioeconomic ratings, and physical or physiological measures. If the number of variates is large, or even moderately so, the experimenter may wish to seek some reduction or simplification of his data. One approach to this problem is to search for some hypothetical variates that are weighted sums of the observed variates and that, although fewer in number than the latter, can be used to replace them. The statistical techniques by which such a reduction of data is achieved are known collectively as factor analysis, although it is well to note here that the principal component method of analysis discussed below (see also Kendall & Lawley 1956) has certain special features. The derived variates are generally viewed merely as convenient descriptive summarizations of the observed data. But occasionally their composition is such that they appear to represent some general basic aspects of everyday life, performance or achievement, and in such cases they are often suitably labeled and are referred to as factors. Typical examples from psychology are such factors as “numerical ability,” “originality,” “neuroticism,” and “toughmindedness.” This article describes the statistical procedures in general use for arriving at these hypothetical variates or factors.

Preliminary concepts. Suppose that for a random sample of size N from some population, scores exist on each of p jointly normally distributed variates $x_i$ ($i = 1, 2, \ldots, p$). If the scores on each variate are expressed as deviations from the sample mean of that variate, then an unbiased estimator of the variance of $x_i$ is given by the expression

$$a_{ii} = \frac{1}{N-1} \sum x_i^2,$$

summation being over the sample of size N. Similarly, an unbiased estimator of the covariance between variates $x_i$ and $x_j$ is given by

$$a_{ij} = \frac{1}{N-1} \sum x_i x_j.$$

Note that this is conventional condensed notation. A fuller, but clumsier, notation would use $x_{iv}$ for the deviation ($v = 1, \ldots, N$), so that $\sum x_i^2$ really means $\sum_{v=1}^{N} x_{iv}^2$.

In practice, factor analysis is often used even in cases in which its usual assumptions are known to be appreciably in error. Such uses make the tacit presumption that the effect of the erroneous assumptions will be small or negligible. Unfortunately, nearly nothing is known about the circumstances under which this robustness, or insensitivity to errors in assumptions, is justified. Of course, the formal manipulations may always be carried out; the assumptions enter crucially into distribution theory and optimality of the estimators.

The estimated variances and covariances between the p variates can conveniently be written in square matrix form as follows:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & & \vdots \\ a_{p1} & a_{p2} & \cdots & a_{pp} \end{pmatrix}.$$

Since $a_{ij} = a_{ji}$, the matrix A is symmetric about its main diagonal.

From the terms of A, the sample correlations, $r_{ij}$, between the pairs of variates may be obtained from

$$r_{ij} = \frac{a_{ij}}{\sqrt{a_{ii}\, a_{jj}}},$$

with $r_{ii} = 1$. The corresponding matrix is the correlation matrix.

The partial correlation concept is helpful here. If, to take the simplest case, estimates of the correlations between three variates are available, then the estimated correlation between any two, say $x_i$ and $x_j$, for a given constant value of the third, $x_k$, can be found from the expression

$$r_{ij.k} = \frac{r_{ij} - r_{ik}\, r_{jk}}{\sqrt{(1 - r_{ik}^2)(1 - r_{jk}^2)}}$$

and is denoted by $r_{ij.k}$.

In terms of a correlation matrix, the aim of factor analysis can be simply stated in terms of partial correlations (see Howe 1955). The first question asked is whether a hypothetical random variate $f_1$ exists such that the partial correlations $r_{ij.f_1}$, for all i and j, are zero, within the limits of sampling error, after the effect of $f_1$ has been removed. (If this is so, it is customary to say that the correlation matrix, apart from its diagonal cells, is of rank one, but details will not be given here.) If the partial correlations are not zero, then the question is asked whether two hypothetical random variates, $f_1$ and $f_2$, exist such that the partial correlations between the variates are zero after the effects of both $f_1$ and $f_2$ have been removed from the original matrix, and so on. (If $f_1$ and $f_2$ reduce the partial correlations to zero, then the matrix, apart from its diagonal cells, is said to be of rank two, and so on.) The aim of the procedure is to replace the observed variates with a set of derived variates that, although fewer in number than the former, are still adequate to account for the correlations between them. In other words, the derived variates, or factors, account for the variance common to the observed variates.
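A worked illustration (the numbers are invented for exposition): suppose three variates have correlations of .8, .7, and .6 with a single factor $f_1$, so that $r_{12} = .56$, $r_{13} = .48$, and $r_{23} = .42$. Then, for example,

$$r_{12.f_1} = \frac{.56 - (.8)(.7)}{\sqrt{(1 - .64)(1 - .49)}} = 0,$$

and similarly for the other pairs; a single common factor therefore accounts for all the correlations, and the matrix, apart from its diagonal, is of rank one.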

Historical note. Factor analysis is generally taken to date from 1904, when C. E. Spearman published an article entitled “‘General Intelligence’ Objectively Determined and Measured.” Spearman postulated that a single hypothetical variate would in general account for the intercorrelations of a set of cognitive tests, and this variate was his famous factor “g.” For the sets of tests that Spearman was considering, this hypothesis seemed reasonable. As further matrices of correlations became available, however, it soon became obvious that Spearman’s hypothesis was an oversimplification of the facts, and multiple factor concepts were developed. L. L. Thurstone, in America, and C. Burt and G. H. Thomson, in Britain, were the most active pioneers in this movement. Details of their contributions and references to early journal articles can be found in their textbooks (Thurstone 1935; 1947; Burt 1940; Thomson 1939). These writers were psychologists, and the statistical methods they developed for estimating factors were more or less approximate in nature. The first rigorous attempt by a mathematical statistician to treat the problem of factor estimation (as distinct from principal components) came with the publication in 1940 of a paper by D. N. Lawley entitled “The Estimation of Factor Loadings by the Method of Maximum Likelihood.” Since 1940, Lawley has published other articles dealing with various factor problems, and further contributions have been made by Howe (1955), by Anderson and Rubin (1956 ), and by Rao (1955 ), to mention just a few. Modern textbooks on factor analysis are those of Harman (1960) and Lawley and Maxwell (1963).

While methods of factor analysis, based on the above model, were being developed, Hotelling in 1933 published his principal components model, which, although it bears certain formal resemblances to the factor model proper, has rather different aims. It is widely used today and is described below.

The basic factor equations. The factor model described in general correlational terms above can be expressed more explicitly by the equations

$$x_i = \sum_{s=1}^{k} l_{is}\, f_s + e_i, \qquad i = 1, 2, \ldots, p. \tag{1}$$

In these equations k (the number of factors) is specified; the $f_s$ stand for the factors (generally referred to as common factors, since they usually enter into the composition of more than one variate). The factors are taken to be normally distributed and, without loss of generality, to have zero means and unit variances; to begin with, they will be assumed to be independent. The term $e_i$ refers to a residual random variate affecting only the variate $x_i$. There are p of these $e_i$, and they are assumed to be normally distributed with zero means and to be independent of each other and of the $f_s$. Their variances will be denoted by $v_i$; the diagonal matrix of the $v_i$ is called V. The l-values are called loadings (weights), $l_{is}$ being the loading of the ith variate on the sth factor. The quantities $l_{is}$ and $v_i$ are taken to be unknown parameters that have to be estimated. If a subscript for individual were introduced, it would be added to $x_i$ and $f_s$, but not to $l_{is}$ or $v_i$.

If the population variance-covariance matrix corresponding to the sample matrix A is denoted by C, with elements $c_{ij}$, then it follows from the model that

$$c_{ii} = \sum_{s=1}^{k} l_{is}^2 + v_i \tag{2}$$

and

$$c_{ij} = \sum_{s=1}^{k} l_{is}\, l_{js}, \qquad i \ne j. \tag{3}$$

If the loadings for p variates on k factors are denoted by the p × k matrix L, with transpose L′, eqs. (2) and (3) can be combined in the single matrix equation

$$C = LL' + V. \tag{4}$$

Estimating the parameters in the model. Since the introduction of multiple factor analysis, various approximate methods for estimating the parameters lis and vi have been proposed. Of these, the best known is the centroid, or simple summation, method. It is well described in the textbooks mentioned above, but since the arithmetic details are unwieldy, they will not be given here. The method works fairly well in practice, but there is an arbitrariness in its procedure that makes statistical treatment of it almost impossible (see Lawley & Maxwell 1963, chapter 3). For a rigorous approach to the estimation of the factor parameters, I turn to the method of maximum likelihood, although this decision requires some justification. The maximum likelihood method of factor estimation has not been widely used in the past for two reasons. First, it involves very onerous calculations which were well-nigh prohibitive before the development of electronic computers. Second, the arithmetic procedures available, which were iterative, frequently did not lead to convergent estimates of the loadings. But recently, largely because of the work of the Swedish statistician K. G. Jöreskog, quick and efficient estimation procedures have been found. These methods are still being perfected, but a preliminary account of them is contained in a recent paper (Jöreskog 1966). When they become better known, it is likely that the maximum likelihood method of factor analysis will become the accepted method. An earlier monograph by Jöreskog (1963) is also of interest. In it he links up work by Guttman (1953) on image theory with classical factor analytic concepts (see also Kaiser, in Harris 1963). (The image of a variate is defined as that part of its variance which can be estimated from the other variates in a matrix.)

The first point to note about eqs. (1) is that since the p observed variates $x_i$ are expressed in terms of p + k other variates, namely, the k common factors and the p residual variates, which are not observable, these equations are not capable of direct verification. But eq. (4) implies a hypothesis, $H_0$, regarding the covariance matrix C, which can be tested: that C can be expressed as the sum of a diagonal matrix with positive diagonal elements and a symmetric positive semidefinite matrix with at most k nonzero latent roots; these matrices are respectively V and LL′. The value postulated for k must not be too large; otherwise, the hypothesis would be trivially true. If the $v_i$ were known, it would only be necessary to require k < p, but in the more usual case, where they are unknown, the condition can be shown to be $(p + k) < (p - k)^2$. Since the $x_i$ are assumed to be distributed in a multivariate normal way, the log-likelihood function, omitting a function of the observations, is given by

$$\log L = -\tfrac{1}{2}\, n \left( \log |C| + \sum_{i,j} c^{ij} a_{ij} \right), \tag{5}$$

where n = N − 1, |C| is the determinant of the matrix C, and $c^{ij}$ is the element in the ith row and jth column of its inverse, $C^{-1}$. To find maximum likelihood estimators of $l_{is}$ and $v_i$, (5) is differentiated with respect to them and the results are equated to zero. A difficulty arises, however, when k > 1, for there are then too many parameters in the model for them to be specified uniquely. This can be seen by an examination of eq. (4), for if L is postmultiplied by an orthogonal matrix M, the value of LL′, which is now given by LMM′L′, is unaltered since MM′ = I, the identity matrix. This means that the maximum likelihood method, although it provides a unique set of estimates of the $c_{ij}$, leads to equations for estimating the $l_{is}$ which are satisfied by an infinity of solutions, all equally good from a statistical point of view. In this situation all the statistician can do is to select a particular solution, one that is convenient to find, and leave the experimenter to apply whatever rotation he thinks desirable. Thus the custom is to choose L in such a way that the k × k matrix $J = L'V^{-1}L$ is diagonal. It can be shown that the successive elements of J are the latent roots, in order of magnitude, of the matrix $V^{-1/2}(A - V)V^{-1/2}$, so that for a given value of V, the determination of the factors in the factor model resembles the determination of the principal components in the component model.

The maximization of eq. (5), with the above diagonalization side condition, leads to the equations

$$\hat{V} = \operatorname{diag}\bigl(A - \hat{L}\hat{L}'\bigr) \tag{6}$$

and

$$(A - \hat{V})\,\hat{V}^{-1}\hat{L} = \hat{L}\hat{J}, \tag{7}$$

where circumflex accents denote estimates of the parameters in question. Eq. (7) can usually be solved by iterative methods, and details of those in current use can be found in Lawley and Maxwell (1963), Howe (1955), and Jöreskog (1963; 1966). The calculations involved are onerous, and when p is fairly large, say 12 or more, an electronic computer is essential.
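To make the structure of eqs. (6) and (7) concrete, the following Python sketch implements a deliberately simple alternating scheme (purely illustrative; it is neither Lawley's nor Jöreskog's efficient procedure, and it handles improper solutions only by a crude floor on the uniquenesses):

```python
import numpy as np

def factor_fit_sketch(A, k, n_iter=500, tol=1e-8):
    """Crude alternating solution of eqs. (6)-(7): given V, obtain L from the
    k leading latent roots and vectors of V^(-1/2)(A - V)V^(-1/2); given L,
    reset V = diag(A - LL'). Illustrative only."""
    v = 0.5 * np.diag(A).copy()                  # starting uniquenesses
    for _ in range(n_iter):
        s = np.sqrt(v)
        M = (A - np.diag(v)) / np.outer(s, s)    # V^(-1/2)(A - V)V^(-1/2)
        roots, vecs = np.linalg.eigh(M)          # ascending eigenvalues
        top = np.argsort(roots)[::-1][:k]
        theta = np.clip(roots[top], 0.0, None)
        L = s[:, None] * vecs[:, top] * np.sqrt(theta)   # L = V^(1/2) U diag(sqrt(theta))
        v_new = np.clip(np.diag(A - L @ L.T), 1e-6, None)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return L, np.diag(v)
```

With L and V so obtained, LL′ + V reproduces A up to the residuals that the test described below examines.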

A satisfactory property of the above method of estimation, which does not hold for the centroid and principal component methods, is that it can be shown to be independent of the metric used. A change of scale of any variate xi merely introduces proportional changes in its loadings.

Testing hypotheses on number of factors. In the factor analysis of a set of data the value of k is seldom known in advance and has to be estimated. To begin with, some value of it is assumed and a matrix of loadings L for this value is estimated. The effects of the factors concerned are now eliminated from the observed covariance (or correlation) matrix, and the residual matrix, $A - \hat{L}\hat{L}'$, is tested for significance. If it is found to be statistically significant, the value of k is increased by one and the estimation process is repeated. The test employed is of the large-sample chi-square type, based on the likelihood ratio method of Neyman and Pearson, and is given by

$$n\left( \log\frac{|\hat{C}|}{|A|} + \operatorname{tr}\bigl(\hat{C}^{-1}A\bigr) - p \right), \tag{8}$$

with $\tfrac{1}{2}\{(p - k)^2 - (p + k)\}$ degrees of freedom. A good approximation to expression (8), and one easier to calculate, is

$$n \sum_{i<j} \frac{(a_{ij} - \hat{c}_{ij})^2}{\hat{v}_i\, \hat{v}_j}. \tag{9}$$

There is also some evidence to suggest that the test can be improved by replacing n by

$$n - \tfrac{1}{6}(2p + 5) - \tfrac{2}{3}k.$$

Factor interpretation. As already mentioned, the matrix of loadings, L, given by a factor analysis is not unique and can be replaced by an equivalent set LM where M is an orthogonal matrix. This fact is frequently used by experimenters when interpreting their results, a matrix M being chosen that will in some way simplify the pattern of loadings or make it more intuitively meaningful. For example, M may be chosen so as to reduce to zero, or nearly zero, as many loadings as possible in order to reduce the number of parameters necessary for describing the data. Again, M may be chosen so as to concentrate the loadings of variates of similar content, say verbal tests, on a single factor so that this factor may be labeled appropriately. Occasionally, too, the factors are allowed to become correlated if this seems to lead to more meaningful results.

It is now clear that given a matrix of loadings from some analysis, different experimenters might choose different rotation matrices in their interpretation of the data. This subjective element in factor analysis has led to a great deal of controversy. To avoid subjectivity, various empirical methods of rotation have been proposed which, while tending to simplify the pattern of loadings, also lead to unique solutions. The best known of these are the varimax and the promax methods (for details see Kaiser 1958; Hendrickson & White 1964). But another approach to the problem, proposed independently by Howe (1955), Anderson and Rubin (1956), and Lawley (1958), seems promising. From prior knowledge the experimenter is asked to postulate in advance (a) how many factors he expects from his data and (b) which variates will have zero loadings on the several factors. In other words, he is asked to formulate a specific hypothesis about the factor composition of his variates. The statistician then estimates the nonzero loadings and makes a test of the "goodness of fit" of the factor structure. In this approach the factors may be correlated or uncorrelated, and in the former case estimates of the correlations between them are obtained. The equations of estimation and illustrative examples of their application can be found in Howe (1955) and in Lawley and Maxwell (1963; 1964); the latter gives a quick method of finding approximate estimates of the nonzero loadings.
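As an illustration of one of the analytic rotation criteria mentioned above, the following Python sketch implements the usual singular-value-decomposition iteration for the varimax criterion (an illustrative implementation, not Kaiser's original computational routine):

```python
import numpy as np

def varimax(L, gamma=1.0, n_iter=100, tol=1e-8):
    """Rotate a p x k loading matrix L toward (raw) varimax simple structure."""
    p, k = L.shape
    R = np.eye(k)                      # accumulated orthogonal rotation
    d_old = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        # gradient of the orthomax criterion with respect to the rotation
        B = L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0)))
        U, s, Vt = np.linalg.svd(B)
        R = U @ Vt                     # nearest orthogonal matrix to B
        d = s.sum()
        if d_old != 0.0 and d < d_old * (1.0 + tol):
            break
        d_old = d
    return L @ R, R
```

Setting gamma to zero gives the quartimax criterion in the same family; Kaiser's own procedure additionally normalizes the rows of L by the communalities before rotation.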

Estimating factor scores. As the statistical theory of factor analysis now stands, estimation is a twofold process. First, the factor structure, as described above, of a set of data is determined. In practice, however, it is often desirable to find, in addition, equations for estimating the scores of individuals on the factors themselves. One method of doing this, developed by Thomson, is known as the "regression method." In it the $l_{is}$ are taken to be the covariances between the $f_s$ and the $x_i$, and then for uncorrelated factors the estimation equation is

$$\hat{f} = L' A^{-1} x, \tag{10}$$

or, more simply from the computational viewpoint,

$$\hat{f} = (I + J)^{-1} L' V^{-1} x, \tag{11}$$

where $\hat{f} = (\hat{f}_1, \ldots, \hat{f}_k)'$, $x = (x_1, \ldots, x_p)'$, and, as before, $J = L'V^{-1}L$, and I is the identity matrix. If sampling errors in L and V are neglected, the covariance matrix for the errors of estimates of the factor scores is given by $(I + J)^{-1}$.

If the factors are correlated and their estimated correlation matrix is denoted by P, then eqs. (10) and (11) become, respectively,

$$\hat{f} = P L' A^{-1} x \tag{12}$$

and

$$\hat{f} = (P^{-1} + J)^{-1} L' V^{-1} x, \tag{13}$$

while the errors of estimates are given by $(P^{-1} + J)^{-1}$. An alternative method of estimating factor scores is that of Bartlett (1938). Here, the principle adopted is the minimization, for a given set of observations, of $\sum_{i=1}^{p} e_i^2 / v_i$, which is the sum of squares of standardized residuals. The estimation equation now is

$$\hat{f} = J^{-1} L' V^{-1} x. \tag{14}$$

It is of interest to note that although the sets of estimates gotten by the two methods have been reached by entirely different approaches, a comparison shows that they are simply related. For uncorrelated factors the relationship is

$$\hat{f}_B = (I + J^{-1})\, \hat{f}_R;$$

for correlated factors it is

$$\hat{f}_B = (I + J^{-1} P^{-1})\, \hat{f}_R,$$

where $\hat{f}_R$ and $\hat{f}_B$ denote the regression and Bartlett estimates, respectively.
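A small Python sketch of eqs. (11) and (14) for the uncorrelated-factor case (illustrative only; the loading matrix L, the diagonal residual matrix V, and the centered score vector x are assumed to come from a prior analysis):

```python
import numpy as np

def factor_scores(L, V, x):
    """Regression (Thomson) and Bartlett factor-score estimates."""
    k = L.shape[1]
    Vinv = np.linalg.inv(V)
    J = L.T @ Vinv @ L                                   # J = L'V^{-1}L
    b = L.T @ Vinv @ x
    f_regression = np.linalg.solve(np.eye(k) + J, b)     # eq. (11)
    f_bartlett = np.linalg.solve(J, b)                   # eq. (14)
    return f_regression, f_bartlett
```

The two sets of estimates differ only by the factor $(I + J^{-1})$ noted above.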

Comparing factors across populations. If factors can be viewed as representing "permanent" aspects of behavior or performance, ways of identifying them from one population to another are required. In the past, identification has generally been based on the comparison of matrices of loadings. In the case of two matrices, a common approach, developed by Ahmavaara (1954) and Cattell and Hurley (1962), is to rotate one into maximum conformity in the least squares sense with the other. For example, the matrix required for rotating $L_1$ into maximum conformity with $L_2$, when they both involve the same variates, is obtained by calculating the expression $(L_1'L_1)^{-1}L_1'L_2$ and normalizing it by columns. The factors represented by $L_1$ in its transformed state are likely to be more or less correlated, and estimates of the correlations between them can be obtained from this transformation matrix, standardized so that its diagonal cells are unity. This procedure is fairly satisfactory when the sample covariance matrices involved do not differ significantly. When they do, the problem of identifying factors is more complicated.

A possible approach to it has been suggested by Lawley and Maxwell (1963, chapter 8), who make the assumption that although two covariance matrices, $C_1$ and $C_2$, involving the same variates may be different, they may still have the same L-matrix. This could occur if the two k × k covariance matrices $\Gamma_1$ and $\Gamma_2$ between the factors themselves were different. To keep the model fairly simple, they assume that the residual variances in the populations are in each case V and then set up the equations

$$C_1 = L\,\Gamma_1 L' + V, \qquad C_2 = L\,\Gamma_2 L' + V. \tag{15}$$

For this model Lawley and Maxwell show how estimates of L, V, $\Gamma_1$, and $\Gamma_2$ may be obtained from two sample covariance matrices $A_1$ and $A_2$. They also supply a test for assessing the significance of the difference between the estimates of $\Gamma_1$ and $\Gamma_2$, and also for testing the "goodness of fit" of the model.

The method of principal components

The principal component method of analyzing a matrix of covariances or correlations is also widely used in the social sciences. The components correspond to the latent roots of the matrix, and the weights defining them are proportional to the corresponding latent vectors.

The model can also be stated in terms of the observed variates and the derived components. An orthogonal transformation is applied to the $x_i$ ($i = 1, 2, \ldots, p$) to produce a new set of uncorrelated variates $y_1, y_2, \ldots, y_p$. These are chosen such that $y_1$ has maximum variance, $y_2$ has maximum variance subject to being uncorrelated with $y_1$, and so on. This is equivalent to a rotation of the coordinate system so that the new coordinate axes lie along the principal axes of an ellipsoid closely related to the covariance structure of the $x_i$. The transformed variates are then standardized to give a new set, which will be denoted $z_s$. When this method is used, no hypothesis need be made about the nature or distribution of the $x_i$. The model is by definition linear and additive, and the basic equations are

$$x_i = \sum_{s=1}^{p} w_{is}\, z_s, \qquad i = 1, 2, \ldots, p, \tag{16}$$

where $z_s$ stands for the sth component, and $w_{is}$ is the weight of the sth component in the ith variate. In matrix notation eqs. (16) become

$$x = Wz,$$

where $x = (x_1, \ldots, x_p)'$, $z = (z_1, \ldots, z_p)'$, and W is a square matrix of order p with elements $w_{is}$.

Comparison of eqs. (16) with eqs. (1) shows that in the principal component model residual variates do not appear, and that if all p components are obtained, the sample covariances can be reproduced exactly, that is, A = WW′. Indeed, there is a simple reciprocal relationship between the observed variates and the derived components.

A straightforward iterative method for obtaining the weights $w_{is}$ is given by Hotelling in his original papers; the details are also given in most textbooks on factor analysis. In practice, all p components are seldom found, for a small number generally accounts for a large percentage of the variance of the variates and can be used to summarize the data. There is also a criterion, developed by Bartlett (1950; 1954), for testing the equality of the remaining latent roots of a matrix after the first k have been extracted; this is sometimes used to help in deciding when to stop the analysis.
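In modern practice the weights are usually obtained directly from an eigendecomposition rather than by hand iteration; a minimal Python sketch (illustrative, relying on a library eigensolver rather than the iterative method just described):

```python
import numpy as np

def principal_components(A, k):
    """Return the k largest latent roots of the covariance (or correlation)
    matrix A and the corresponding unit-length weight vectors (columns)."""
    roots, vectors = np.linalg.eigh(A)          # ascending order
    order = np.argsort(roots)[::-1][:k]
    return roots[order], vectors[:, order]
```

The proportion of total variance accounted for by the first k components is the sum of the retained roots divided by the trace of A.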

The principal component method is most useful when the variates $x_i$ are all measured in the same units. Otherwise, it is more difficult to justify. A change in the scales of measurement of some or all of the variates results in the covariance matrix being multiplied on both sides by a diagonal matrix. The effect of this on the latent roots and vectors is very complicated, and unfortunately the components are not invariant under such changes of scale. Because of this, the principal component approach is at a disadvantage in comparison with the proper factor analysis approach.

A. E. MAXWELL

[see also CLUSTERING; DISTRIBUTIONS, STATISTICAL, article on MIXTURES OF DISTRIBUTIONS; LATENT STRUCTURE; STATISTICAL IDENTIFIABILITY.]

BIBLIOGRAPHY

AHMAVAARA, Y. 1954 Transformational Analysis of Factorial Data. Suomalainen Tiedeakatemia, Helsinki, Toimituksia: Annales Series B 88, no. 2.

ANDERSON, T. W.; and RUBIN, HERMAN 1956 Statistical Inference in Factor Analysis. Volume 5, pages 111–150 in Berkeley Symposium on Mathematical Statistics and Probability, Third, Proceedings. Edited by Jerzy Neyman. Berkeley: Univ. of California Press.

BANKS, CHARLOTTE 1954 The Factorial Analysis of Crop Productivity: A Re-examination of Professor Kendall’s Data. Journal of the Royal Statistical Society Series B 16:100–111.

BARTLETT, M. S. 1938 Methods of Estimating Mental Factors. Nature 141:609–610.

BARTLETT, M. S. 1950 Tests of Significance in Factor Analysis. British Journal of Psychology (Statistical Section) 3:77–85.

BARTLETT, M. S. 1954 A Note on the Multiplying Factor for Various χ² Approximations. Journal of the Royal Statistical Society Series B 16:296–298.

BURT, CYRIL 1940 The Factors of the Mind: An Introduction to Factor-analysis in Psychology. Univ. of London Press.

BURT, CYRIL 1947 Factor Analysis and Physical Types. Psychometrika 12:171–188.

CATTELL, RAYMOND B.; and HURLEY, JOHN R. 1962 The Procrustes Program: Producing Direct Rotation to Test a Hypothesized Factor Structure. Behavioral Science 7:258–262.

GEARY, R. C. 1948 Studies in Relationships Between Economic Time Series. Journal of the Royal Statistical Society Series B 10:140–158.

GIBSON, W. A. 1960 Nonlinear Factors in Two Dimensions. Psychometrika 25:381–392.

HAMMOND, W. H. 1944 Factor Analysis as an Aid to Nutritional Assessment. Journal of Hygiene 43:395–399.

HAMMOND, W. H. 1955 Measurement and Interpretation of Subcutaneous Fats, With Norms for Children and Young Adult Males. British Journal of Preventive and Social Medicine 9:201–211.

HARMAN, HARRY H. 1960 Modern Factor Analysis. Univ. of Chicago Press. → A new edition was scheduled for publication in 1967.

HARRIS, CHESTER W. (editor) 1963 Problems in Measuring Change: Proceedings of a Conference. Madison: Univ. of Wisconsin Press. → See especially “Image Analysis” by Henry F. Kaiser.

HENDRICKSON, ALAN E.; and WHITE, PAUL O. 1964 Promax: A Quick Method for Rotation to Oblique Simple Structure. British Journal of Statistical Psychology 17:65–70.

HERDAN, G. 1943 The Logical and Analytical Relationship Between the Theory of Accidents and Factor Analysis. Journal of the Royal Statistical Society Series A 106:125–142.

HORST, PAUL 1965 Factor Analysis of Data Matrices. New York: Holt.

HOTELLING, HAROLD 1933 Analysis of a Complex of Statistical Variables Into Principal Components. Journal of Educational Psychology 24:417–441, 498–520.

HOWE, W. G. 1955 Some Contributions to Factor Analysis. Report No. ORNL-1919, Oak Ridge National Laboratory, Oak Ridge, Tenn. Unpublished manuscript.

JöRESKOG, K. G. 1963 Statistical Estimation in Factor Analysis: A New Technique and Its Foundation. Stockholm: Almqvist & Wiksell.

JöRESKOG, K. G. 1966 Testing a Simple Hypothesis in Factor Analysis. Psychometrika 31:165–178.

KAISER, HENRY F. 1958 The Varimax Criterion for Analytic Rotation in Factor Analysis. Psychometrika 23: 187–200.

KENDALL, M. G.; and LAWLEY, D. N. 1956 The Principles of Factor Analysis. Journal of the Royal Statistical Society Series A 119:83–84.

LAWLEY, D. N. 1940 The Estimation of Factor Loadings by the Method of Maximum Likelihood. Royal Society of Edinburgh, Proceedings 60:64–82.

LAWLEY, D. N. 1953 A Modified Method of Estimation in Factor Analysis and Some Large Sample Results. Pages 35–42 in Uppsala Symposium on Psychological Factor Analysis, March 17–19, 1953. Nordisk Psykologi, Monograph Series, No. 3. Uppsala (Sweden): Almqvist & Wiksell.

LAWLEY, D. N. 1958 Estimation in Factor Analysis Under Various Initial Assumptions. British Journal of Statistical Psychology 11:1–12.

LAWLEY, D. N.; and MAXWELL, ALBERT E. 1963 Factor Analysis as a Statistical Method. London: Butterworth.

LAWLEY, D. N.; and MAXWELL, A. E. 1964 Factor Transformation Methods. British Journal of Statistical Psychology 17:97–103.

MAXWELL, A. E. 1964 Calculating Maximum-likelihood Factor Loadings. Journal of the Royal Statistical Society Series A 127:238–241.

RAO, C. R. 1955 Estimation and Tests of Significance in Factor Analysis. Psychometrika 20:93–111.

SPEARMAN, C. E. 1904 “General Intelligence” Objectively Determined and Measured. American Journal of Psychology 15:201–293.

THOMSON, GODFREY H. (1939) 1951 The Factorial Analysis of Human Ability. 5th ed. Boston: Houghton Mifflin.

THURSTONE, LOUIS L. 1935 The Vectors of Mind: Multiple-factor Analysis for the Isolation of Primary Traits. Univ. of Chicago Press.

THURSTONE, Louis L. 1947 Multiple-factor Analysis. Univ. of Chicago Press. → A development and expansion of Thurstone’s The Vectors of Mind, 1935.

II PSYCHOLOGICAL APPLICATIONS

The essential statistical problem of factor analysis involves reduction or simplification of a large number of variates so that some hypothetical variates, fewer in number, which are weighted sums of the observed variates, can be used to replace them. If psychological experimenters were satisfied with this sole, statistical objective, there would be no problem of psychological interpretation and of meaning of factors. They would simply be convenient abstractions. However, psychologists and psychometricians, starting with Charles Spearman (1904), the pioneer factor analyst, have wanted to go beyond this objective and have thereby created the very large psychological literature in this field. The goal of factor analysts following in the Spearman tradition has been to find not only convenient statistical abstractions but the elements or the basic building blocks, the primary mental abilities and personality traits in human behavior. Such theorists have explicitly accepted chemical elements– sometimes even the periodic table–as their model and factor analysis as the method of choice in reaching their goal.

Factor interpretation and methodology

Factor extraction methods. There are several variations of factor methods, certain of which are more amenable to psychological interpretation than others. For example, the experimenter can start his analysis from a variance-covariance matrix or from a correlational matrix with estimated communalities (discussed below) in the principal diagonal. If he is interested in psychological interpretations of factors, he almost uniformly selects the latter, since use of the variance-covariance procedure results in obtaining factors that contain unknown amounts of common-factor, nonerror-specific, and error components. For purposes of psychological interpretation, including generalizing to new samples of psychological measures, the inclusion of nonerror-specific and error variance in the factors is undesirable. The intercorrelations and communalities, on the other hand, are determined only by the common factors.

The experimenter also has a choice among several methods of factor extraction, including the centroid, principal components (sometimes called principal axes), and maximum likelihood methods. Choice among these is based largely on feasibility criteria. The first was used almost exclusively before the advent of high-speed digital computers. The third is generally acknowledged to be superior statistically to the second, but it is too expensive in time and computing cost to use routinely. The second is at present the method most frequently used by psychologists, since it extracts a maximum amount of variance with each successive factor. The centroid method only approximates this criterion, although frequently it is a close approximation. There is thus no pressing need to redo all previous work involving the centroid method now that computational facilities are available. The maximum likelihood method can and should be used, as a check on conclusions reached with the more economical principal components, when size of matrix and computer availability make it feasible.

The communality problem. When the experimenter elects to analyze correlations and communalities, he must estimate the latter. These communalities represent the proportion of common factor variance in the total variance of a variable: the amount that a variable has in common with other variables in a particular study. Unfortunately, from the methodological viewpoint, there is no way to obtain an unbiased estimate of the communality. Several rule-of-thumb methods are available, and there are theoretically sound upper and lower bounds for the communality estimate.

An unbiased reliability estimate can be used as an upper bound for the estimated communality. Reliability and communality differ to the extent that reliability includes specific nonerror variance. A lower-bound estimate in the population of persons is the squared multiple correlation between each variable and all of the others (Guttman 1954). The reader should note, however, that while this procedure provides a lower-bound estimate for the population, a sample value can be seriously inflated. The multiple correlation coefficient capitalizes on chance very effectively. For example, when the number of variables equals the number of observations, the multiple correlation in the sample is necessarily unity, although the population value may in fact be zero. The investigator who wants a lower-bound estimate may still utilize the Guttman theorem if he estimates the population values from sample values that are corrected for their capitalization on chance.
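The squared multiple correlations that supply this lower bound can be computed directly from the inverse of the correlation matrix; a minimal Python sketch (illustrative only):

```python
import numpy as np

def squared_multiple_correlations(R):
    """Squared multiple correlation of each variable with all the others,
    computed as 1 - 1/diag(R^{-1}); often used as a communality estimate."""
    return 1.0 - 1.0 / np.diag(np.linalg.inv(R))
```

As the text notes, these sample values capitalize on chance, so in small samples they should be corrected before being treated as lower bounds for the population.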

Number of factors. If he is interested in the psychological meaning of his factors, the experimenter has a further choice among criteria for determining the number of factors to retain and interpret. When estimated communalities are employed, no one of the possible criteria is more than a rule of thumb. The various criteria lead to radically different decisions concerning the number of factors to be retained; and different investigators, in applying one, several, or all of these criteria, will reach different conclusions about the number of factors.

One class of criteria for determining the number of factors has been characterized as emphasizing psychological importance without regard to sampling stability. Some investigators use some absolute value of the factor loadings, e.g., .30, either rotated or unrotated or both, without regard to the number of observations on which the correlations are based. A more recent suggestion has been to retain factors whose principal roots were greater than unity (Kaiser & Caffrey 1965). Such criteria appear to make an assumption that the number of observations is very large, so that the factors and loadings that are large enough psychologically are at the same time not the result of sampling error.

A second class of criteria has been characterized as emphasizing the number of observations, even though there are no known sampling distributions for factors or factor loadings. In several related criteria, factor loadings and/or residuals are compared in one way or another with the standard error of correlation coefficients of zero magnitude for the sample size involved. Factoring of the intercorrelations of random normal deviates as a method of obtaining empirical sampling errors has also been used.
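The last idea, factoring random data to calibrate sampling error, can be sketched in Python as follows (an illustrative sketch, not the procedure of any particular author cited here): the latent roots of the observed correlation matrix are compared with the average roots obtained from correlation matrices of random normal deviates of the same dimensions.

```python
import numpy as np

def mean_random_roots(n_obs, n_vars, n_rep=100, seed=0):
    """Average latent roots of correlation matrices computed from random
    normal deviates (n_obs observations on n_vars variables)."""
    rng = np.random.default_rng(seed)
    total = np.zeros(n_vars)
    for _ in range(n_rep):
        X = rng.standard_normal((n_obs, n_vars))
        R = np.corrcoef(X, rowvar=False)
        total += np.sort(np.linalg.eigvalsh(R))[::-1]
    return total / n_rep
```

Observed roots exceeding the corresponding random-data roots are then taken as evidence of non-chance factors.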

A third criterion involves the “psychological meaning” of the rotated factors: the investigator merely states in effect that he is satisfied with the results of his analysis. Since any behavioral scientist of any modest degree of ingenuity can rationalize the random grouping of any set of variables, this does not appear to be a useful criterion scientifically. Without agreement on an objective criterion, however, psychological meaning of the factors tends to be the principal criterion used in deciding upon the number of factors to interpret.

Even in situations where probabilities of alpha and beta errors can be estimated, different investigators, depending on their temperaments or on social consequences, may set quite different standards for such errors. In determining the number of factors, however, there are no objective methods of error estimation, and the range of probabilities of alpha and beta errors resulting from differences among investigators or differences in social consequences is increased several-fold. For example, for one matrix of personality variables two investigators differ by a ratio of four to one in their assessment of the proper number of factors to retain and interpret. The difference between 12 and 3 factors is far from trivial. Such discrepancies reduce factor analysis to a hypothesis formation technique. As a method of discovery of psychological principles, or of hypothesis testing generally, ambiguities of this magnitude cannot be tolerated. The lack of a suitable test for number of factors has opened the door for a great deal of poor research.

Factor rotations. After the factors are extracted, the experimenter has to decide whether to rotate or not. Rotation of axes to psychologically meaningful positions follows inevitably from an interest in finding the psychological elements.

The rotation problem is seen most clearly in the two-factor case. First, the two factors are conceptualized as orthogonal (perpendicular) dimensions extending from values of -1.00 to +1.00. Then the points representing the loadings of the tests on these factors are fixed in the space defined by these dimensions. Imagine now that a pin is inserted at the origin of the two dimensions and that these are now rotated about the pin. Wherever they stop, new coordinates can be determined for the test points. It must be noted that the test points are located as accurately by the new dimensions as by the original ones, and that the intercorrelations of the tests are described with equal accuracy. There are, in point of fact, an infinite number of positions of the coordinates and thus an infinite number of mathematical solutions to the factor problem. The investigator interested in psychological meaning rotates the dimensions into some psychologically unique position. It is important, where possible, that factor descriptions of measures remain stable from sample to sample of either persons or measures, or both. This can be achieved, apparently in the great majority of cases, with an adequate rotational solution.

Rotation is almost uniformly performed when factors obtained are from a correlation matrix having communality estimates in the diagonal. Factors obtained from the variance-covariance matrix, on the other hand, are generally not rotated and are preferred by the experimenter interested in description alone rather than in explanation. The experimenter also has a choice among several different rotational methods, based upon different criteria and leading to either orthogonal or oblique factors.

Orthogonal versus oblique rotation. Orthogonal rotations offer the simplicity of uncorrelated dimensions in exchange for a poorer fit of the test points. Oblique rotations offer a better fit for the test points in exchange for a complexity of correlated dimensions. If oblique rotations are used, the investigator can also elect to factor in the second and perhaps higher orders; i.e., he can factor the intercorrelations among his first-order factors, among his second-order factors, and so on. After factoring in several orders, the investigator also has the option of presenting and interpreting his results in the several orders, or, by means of a simple transformation, he can convert the oblique factors in several orders to orthogonal, hierarchical factors in a single order.

Until the advent of high-speed digital computers, basically the only method for achieving a given rotational result was hand rotation. There are now several computer programs for rotation to either orthogonal or oblique structure.

If the investigator elects an orthogonal solution to his problem, he has a number of programs among which to choose. One of the earlier programs is the quartimax of Neuhaus and Wrigley (1954). This was followed by Kaiser's varimax program (1958). An important difference between the two is that quartimax typically produces a general factor in ability data which is a function of the sampling of test variables, i.e., the general factor may reflect verbal, perceptual, or other specific emphasis, depending upon the nature of the tests sampled. Varimax provides results that are more stable from one test battery to another. This is achieved by a more even distribution of variance among the rotated factors. In the opinion of many investigators, varimax rotations have achieved a near-ultimate status for the orthogonal case, but Schonemann (1964) has now developed a program that he calls varisim, which spreads existing variance more evenly among the several factors than varimax does. Results from the two programs are not completely parallel even for well-defined factors. There is as much rationale for varisim as for varimax. In consequence, the ultimate status of varimax has been dislodged, and we are again faced with a somewhat arbitrary choice among orthogonal rotational methods.

Oblique rotational programs are now fairly numerous and exhibit variability in results comparable to that among orthogonal ones. There is one important difference: no oblique program has as yet achieved the status that varimax once had. Because of the various sources of dissatisfaction with existing programs, there is much more research activity in the area of oblique rotation than in orthogonal rotation. There is still frequent resort to visually guided rotations if the investigator is striving for an oblique structure.

Methodological summary. The investigator who wishes to find psychological meaning in his data, the one who is trying to discover the basic building blocks or causal entities in human behavior, has a difficult task. Important decisions for which there are no sound foundations must be made at several steps in the procedure. Communalities must be estimated; the estimate of the number of factors to be extracted and retained for rotations must be based upon inadequate criteria; and although subjective bias possibly resulting from hand rotations has been eliminated by rotations obtained on high-speed computers, the choice of rotational program among either oblique or orthogonal solutions may lead to quite different results.

In the absence of sound estimation methods, the criterion of replicability is typically offered as a substitute. Replicability is a very important criterion in science generally. When applied to factor analysis, however, one must be aware that seemingly parallel results may have been forced on the data, typically without intention on the part of the experimentalist to do so. For example, considerable congruence of factor patterns can be obtained from the intercorrelations of two independent sets of random normal deviates by extracting as many factors as variables and by rotating to oblique simple structure. The result will be one-to-one correspondence of the factors. The intercorrelations of the factors will differ, but even these differences will not be large, since they are randomly distributed about zero.

Methods of assessing the congruence of factor patterns also leave something to be desired. The most common method by far is that of visual inspection and unaided judgment. The most precise, the correlation between two estimated factor scores in the same sample, is rarely seen. Claimed replication of a factor is frequently without adequate foundation.

Early general factor interpretations

Mental energy. Spearman did not have available any of the above-described techniques for the factor analysis of relationships among variables. Neither did he have access to the multitude of tests now available. He hypothesized that one general factor was sufficient to account for the intercorrelations among his variables, and he developed relatively simple methods to test this hypothesis. (Present methods of multiple factor analysis include Spearman’s single factor as a special case.) In psychological interpretation Spearman is of interest, however, because he interpreted his single factor as “mental energy.” This was considered the sole basis or building block of mental ability or intelligence. [see SPEARMAN.]

Multiple bonds. Spearman’s interpretation was challenged by Godfrey Thomson (1919) and by Edward Thorndike (Thorndike et al. 1926). Thomson proved that correlational matrices having the form required to satisfy Spearman’s one-factor interpretation could also be “explained” by the presence of many overlapping elements. Thorndike discussed connections (bonds) between stimuli and responses as an alternative to Spearman’s mental energy concept. Considering that there are many thousands of stimuli to which a person will respond differentially, and that tests sample these, the extent to which there is overlap in the elements sampled by two measures determines the degree to which they are correlated. If the intercorrelations of several measures have the formal properties necessary for Spearman’s unitary mental energy explanation (one factor), they also can be explained by multiple bonds or overlapping elements (multiplicity of factors). [see THORNDIKE.]

Unitary mental energy is a basic building block, a general influence or “cause”; multiple bonds are a complex of stimulus-response connections that are acquired in a dynamic, complex physical and social environment. Multiple bonds that underlie the behavior under observation cannot be said to cause that behavior in the same sense that mental energy is said to cause intellectual performance.

Recourse to parsimony in this instance is not an acceptable solution, since the two explanations are so different. It should come as no surprise, for example, to learn that Spearman and his followers have stressed genetic bases for intelligence, while the multiple bonds notion lends itself most readily to a stress on environmental forces and learning.

Multiple factors

Thurstone’s primary mental abilities. Although Thurstone (1938) is considered to have broken with Spearman, the break was related only to the number of factors required to account for intelligence. Thurstone considered that some seven to nine factors were sufficient to account for the intercorrelations of the more than fifty tests he used. However, Spearman himself had come to doubt the single-factor explanation; the break was more apparent than real. On the issue of what lies behind factors there was no break. Careful reading of Thurstone’s writings makes it quite clear that to him factors were much more than descriptive devices. Factors were functional unities; their ubiquity strongly suggested genetic determiners; after all, they were called primary mental abilities. [see THURSTONE.]

Ferguson’s learning emphasis. However, just as a single factor can be replaced by multiple overlapping bonds, so also can multiple group factors be replaced by sets of overlapping bonds. One need only assume that environmental pressures and learning come in somewhat separate “chunks.” Demographic differences, e.g., parental occupation, region of the country, rural–urban differences, etc., could account for some of the “chunking” required. Ferguson (1956) has produced a very satisfactory explanation along these lines in which learning and transfer are important variables. Various kinds of learning are facilitated or inhibited by the variety of environments in which children develop. Learning transfers, both positively and negatively, to novel situations. The amount and direction of the transfer are determined by stimulus and environmental similarities. Learning and transfer, along with environmental differences, produce the clustering of measures on which the factors depend. [see LEARNING, article on TRANSFER.]

Physical analogies. Thurstone (1947), in order to convince himself and others that factors were “real,” constructed a factor problem that has attracted a good deal of attention. He showed that if dimensions of boxes were factored, the result was a three-factor solution which could be rotated into a position such that the factors represented the dimensions of length, breadth, and depth, the three basic dimensions of Euclidean space. He also showed that these factors were correlated, i.e., an oblique solution gave a better fit to the data than did an orthogonal one. The obliquity reflects, of course, the fact that the dimensions of man-made boxes tend to be correlated, i.e., long boxes tend to be big boxes.

In a situation more relevant to behavior Cattell and Dickman (1962) have demonstrated that the intercorrelations of the performance of balls in several “tests” yield four factors that can be identified as size, weight, elasticity, and string length. It is clear from this and the preceding example that factor analysis can sometimes identify known physical factors in data.
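
A rough simulation in the spirit of these physical examples can be written in a few lines of numpy. The particular battery of measurements, the sample size, and the correlations among the dimensions below are invented for illustration and are not Thurstone’s or Cattell’s data; taking the measurements in logarithms is a convenience that makes the products of dimensions exactly linear in the three underlying dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Mildly correlated "true" dimensions: long boxes tend to be big boxes.
length = rng.lognormal(1.0, 0.3, n)
width  = np.exp(0.5 * np.log(length) + rng.normal(0, 0.25, n))
height = np.exp(0.3 * np.log(length) + 0.3 * np.log(width) + rng.normal(0, 0.25, n))

# A battery of derived "tests" of the boxes, taken in logarithms.
tests = np.log(np.column_stack([
    length, width, height,
    length * width, length * height, width * height,   # face areas
    length * width * height,                            # volume
    length ** 2 * width,                                 # an arbitrary composite
]))
tests += rng.normal(0, 0.05, tests.shape)               # a little measurement error

R = np.corrcoef(tests, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]
print(np.round(eigvals, 2))   # three roots clearly separated from the rest, which are near zero
```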

One question about these examples is the certitude with which the factors can be identified after rotation, granting that the correct number of factors can be obtained by present methods. Thurstone suggested the criteria of simple structure for the adequacy of rotations. Generally speaking, simple structure is achieved when the number of zero loadings in a factor table has been maximized while increasing the magnitude of loadings on a small number of variables. The application of these criteria to the examples described resulted in clear-cut identification of the three and four factors. It has been shown by Overall (1964), however, that if Thurstone had started with a different set of measurements, the criteria of simple structure for rotations would have led to differently defined factors, i.e., they would not have been the “pure” physical dimensions but would have represented complex combinations of those dimensions.

A more basic question is whether psychological data are similar to physical data, i.e., whether psychological dimensions obtained by factoring are similar to physical dimensions. The demonstration that three or four physical factors, as the case may be, can be recovered from correlational data does not prove that factors in psychological data have a similar functional unity. Not only is Thomson’s alternative explanation theoretically acceptable for multiple factors, but it makes good psychological sense as well. Psychological tests measure performance on each of a series of items. These performances make up the total score. Although Thomson would not have suggested a one-to-one correspondence between item and element or stimulus-response bond, one can conclude that there are at least as many elements represented in a test as there are items. Thus the multiple bonds approach fits the actual measurement situation so well that the adherents of the other point of view must bear the burden of proof, and for psychological, not physical, data.

Guilford’s structure of intellect. The work of J. P. Guilford has been most influential in the factor analysis of human abilities (e.g., 1956). It has increased the small number of primary mental abilities proposed by Thurstone roughly tenfold, but the approach to their interpretation remains much the same. Guilford’s thinking about the nature of factors is modeled very closely after the periodic table of the chemical elements; he has in fact proposed a structure which points out missing factors and has proceeded in his own empirical work to “discover” many of these.

In spite of similarities in thinking about the nature of factors, the discrepancy in numbers between Guilford and Thurstone is highly significant, and it illustrates a basic difficulty with psychological tests and the attempts to find causal entities from the analysis of their intercorrelations. Not only do psychological tests measure performance on a relatively large number of pass-fail items, but there is at present no necessary or sufficient methodological or theoretical basis for deciding which items should be added together to make up a single test score (Humphreys 1962). The number of factors has proliferated in Guilford’s work because he has produced large numbers of homogeneous experimental tests. By additional test construction, making each test more and more homogeneous, the number of factors could be increased still further. As a matter of fact, there is no agreed-upon stopping place short of the individual test item, i.e., a single item represents the maximum amount of homogeneity. This logic results in the same number of primary mental abilities as there are ability test items.

The progression from Thurstone to Guilford can be interpreted as further evidence for the multiple bonds theoretical approach. On the other hand, positing a functional unity inside the organism for each item represents a scientific dead end.

Cattell’s structure of personality. The work of R. B. Cattell has been most influential in the factor analysis of the domain of personality (e.g., 1957). Cattell’s thinking about the character of factors does not differ materially from that of Spearman, Thurstone, and Guilford in that for Cattell, factors are real influences.

The number of identified personality factors has increased considerably under Cattell’s direction. Although measurement problems differ, Cattell’s work parallels that of Guilford with human abilities. Self-report questionnaires present the multiple items problem with yes-no scoring of items. Personality investigators also have the problem of deciding which items should be added together in any given score. A great deal of additional work, however, has been done with rating scales and with so-called objective tests of personality. “Density” of sampling of the test or rating domain, a concept introduced by Cattell, is still involved in the proliferation of factors, even though the mechanism is not that of item selection. Thus, in obtaining ratings, one must decide on the number and overlap in meaning of traits to be rated. One must decide whether to include both extroversion and sociability or, even closer, both ascendance and dominance. While there is no rigorous method to depend on in the sampling of measures, decisions about what will be tested still affect the number of factors and their importance.

Furthermore, it is also typical of many experimental designs that large numbers of variables relative to the number of observations are analyzed; that many of these variables have low reliability and thus low communality; that many factors are retained for rotational purposes; and that rotations are made to an oblique structure. All of these elements contribute to possible capitalization on chance.

It is of interest that Cattell uses as a primary rotational criterion a count of the number of variables in the hyperplane, i.e., the multidimensional plane defined by all factors other than the one in question. (More simply, a measure having a zero loading on a factor is located geometrically someplace in the factor’s hyperplane.) This criterion places a premium on the extraction of a large number of factors relative to the number of measures, on the use of variables of low reliability, and on the use of variables unrelated to the major purpose of the analysis. In the opinion of many critics Cattell has increased the probability of making Type I errors beyond tolerable bounds, although neither he nor his critics can assign a value to alpha in this situation.
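
As an illustration of the hyperplane-count criterion, the sketch below counts, for each factor of a hypothetical loading matrix, the variables whose loadings fall below a conventional cutoff; both the matrix and the .10 threshold are assumptions made only for the example, not values drawn from Cattell’s work.

```python
import numpy as np

def hyperplane_count(loadings, threshold=0.10):
    """For each factor, count the variables lying (approximately) in its hyperplane,
    i.e., the variables whose loading on that factor is essentially zero."""
    return (np.abs(loadings) < threshold).sum(axis=0)

L = np.array([[0.72, 0.05, -0.03],
              [0.65, 0.12,  0.04],
              [0.08, 0.70,  0.02],
              [0.04, 0.61, -0.09],
              [0.02, 0.07,  0.68]])
print(hyperplane_count(L))   # prints [3 2 4]; higher counts are read as better simple structure
```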

A dramatic example of the difficulties that may be involved in typical factor analytic research is given by some data described by Horn (1967). He obtained a good fit to an oblique factor pattern derived from an analysis of ability and personality variables by factoring the intercorrelations of the same number of random normal deviates, based upon the same number of observations, as the psychological variables. This finding highlights the principle that replication of findings may be of little import in factor analytic investigations.
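
The general caution is easy to reproduce in outline: even unstructured random normal deviates yield several eigenvalues greater than 1, and hence several “factors” by the usual retention rule. The sketch below, with an arbitrary sample size and number of variables, illustrates this point; it does not reproduce Horn’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_vars = 100, 20
X = rng.standard_normal((n_obs, n_vars))        # random normal deviates, no structure at all

R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print((eigvals > 1).sum())                      # typically several "factors" by the eigenvalue-1 rule
print(np.round(eigvals[:5], 2))                 # the largest chance eigenvalues
```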

It is also apparent that the essential reason for factor analyzing intercorrelations, to seek some reduction or simplification of data, has not been realized. The number of variables and the number of factors have grown astronomically, and the end is not yet in sight. It is highly possible that the search for psychological meaning, the search for the basic building blocks or elements, has been responsible. If psychological data are different from physical data in important respects, and if the multiple bonds are a more accurate representation of the data than the chemical elements point of view, researchers would profit from taking another look at the reasons why they factor analyze. An economical description of complex data is itself an important scientific goal.

LLOYD G. HUMPHREYS

[Directly related are the entries CLUSTERING; MULTIVARIATE ANALYSIS; TRAITS. Other relevant material may be found in INTELLIGENCE AND INTELLIGENCE TESTING; PSYCHOLOGY, article on CONSTITUTIONAL PSYCHOLOGY; and in the biographies of SPEARMAN and THORNDIKE.]

BIBLIOGRAPHY

CATTELL, RAYMOND B. 1957 Personality and Motivation Structure and Measurement. New York: World.

CATTELL, RAYMOND B.; and DICKMAN, KERN 1962 A Dynamic Model of Physical Influences Demonstrating the Necessity of Oblique Simple Structure. Psychological Bulletin 59:389–400.

FERGUSON, GEORGE A. 1956 On Transfer and the Abilities of Man. Canadian Journal of Psychology 10:121–131.

GUILFORD, J. P. 1956 The Structure of Intellect. Psychological Bulletin 53:267–293.

GUTTMAN, LOUIS 1954 Some Necessary Conditions for Common Factor Analysis. Psychometrika 19:149–161.

HORN, JOHN 1967 On Subjectivity in Factor Analysis. Unpublished manuscript.

HUMPHREYS, LLOYD G. 1962 The Organization of Human Abilities. American Psychologist 17:475–483.

KAISER, HENRY F. 1958 The Varimax Criterion for Analytic Rotation in Factor Analysis. Psychometrika 23:187–200.

KAISER, HENRY F.; and CAFFREY, JOHN 1965 Alpha Factor Analysis. Psychometrika 30:1–14.

NEUHAUS, JACK O.; and WRIGLEY, CHARLES 1954 The Quartimax Method: An Analytical Approach to Orthogonal Simple Structure. British Journal of Statistical Psychology 7:81–91.

OVERALL, JOHN E. 1964 Note on the Scientific Status of Factors. Psychological Bulletin 61:270–276.

SCHONEMANN, P. H. 1964 A Solution of the Orthogonal Procrustes Problem With Applications to Orthogonal and Oblique Rotation. Ph.D. dissertation, Univ. of Illinois.

SPEARMAN, CHARLES 1904 “General Intelligence” Objectively Determined and Measured. American Journal of Psychology 15:201–293.

THOMSON, GODFREY H. 1919 The Proof or Disproof of the Existence of General Ability. British Journal of Psychology 9:321–336.

THORNDIKE, EDWARD L. et al. 1926 The Measurement of Intelligence. New York: Columbia Univ., Teachers College.

THURSTONE, LOUIS L. 1938 Primary Mental Abilities. Univ. of Chicago Press.

THURSTONE, LOUIS L. 1947 Multiple-factor Analysis. Univ. of Chicago Press. → A development and expansion of Thurstone’s The Vectors of Mind, 1935.

Factor Analysis


Factor analysis is usually adopted in social scientific studies for two purposes: (1) reducing the number of variables; and (2) detecting structure in the relationships between variables. When carried out as statistical techniques, the first is often referred to as common factor analysis and the second as component analysis. While factor analysis expresses the underlying common factors for an entire group of variables, it also helps researchers differentiate these factors by grouping variables into different dimensions or factors, each of which is ideally uncorrelated with the others.

A major breakthrough in attitude measurement came with the development of factor analysis (1931) by psychologist L. L. Thurstone (1887–1955). Thurstone introduced multiple-factors theory, which identified seven distinct and primary mental abilities: verbal comprehension, word fluency, number facility, spatial visualization, associative memory, perceptual speed, and reasoning. This theory differed from the more general, less differentiated theories of intelligence that were prevalent at the time and was among the first to show that human beings can be intelligent in different areas. The concept of multiple factors slowly received validation from empirical studies and gradually replaced the unidimensional view of intelligence in social research.

In social science studies, researchers often face a large number of variables. Although it is a good idea for scientists to exhaust all the relevant variables in their research in order to respond thoroughly to research questions, such an approach makes a theory too complex to generalize to empirical applications. For example, a researcher may want to explain delinquent behaviors by exploring all relevant independent variables, such as illegal drug use, harsh parenting, school dropout, school failure, single-parent household, gang affiliation, parent-child bonding, smoking, alcohol use, and many other variables. With so many independent variables, it is difficult to provide a simplified model for a parsimonious explanation of delinquent behavior. A good theoretical explanation should balance completeness and parsimony in its coverage of variables. Factor analysis serves this purpose by reducing the number of variables to a smaller set of common factors that facilitates our understanding of the social problem. Each of these common factors should be the best representative of certain independent variables, and every factor should be, theoretically, independent of the other factors. Researchers substitute these factors for the variables because the factors can explain a similar degree of variance in the dependent variable but are simpler in terms of the number of independent variables. In most cases, the factors found in an analysis do not provide a complete description of the relevant independent variables, but they should be the most important ones, the best way of summarizing a body of data.
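
As a hedged sketch of this kind of reduction, the code below uses scikit-learn's FactorAnalysis to replace ten indicators with three common factors. The variable count, sample size, and random data are placeholders for the observed measures (drug use, school failure, parent-child bonding, and so on) that a researcher would actually use.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300
# Placeholder data: in practice each column would be an observed, standardized indicator.
X = rng.standard_normal((n, 10))

fa = FactorAnalysis(n_components=3)   # ask for a small number of common factors
scores = fa.fit_transform(X)          # n x 3 factor scores replace the 10 variables
loadings = fa.components_.T           # 10 x 3 loadings: which variables define which factor
print(scores.shape, loadings.shape)
```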

Factor analysis may use either correlations or covariances. The covariance cov_ab between two variables, a and b, is their correlation times their two standard deviations: cov_ab = r_ab s_a s_b, where r_ab is their correlation and s_a and s_b are their standard deviations. Any variable's covariance with itself is its variance, the square of its standard deviation. A correlation matrix can thus be thought of as a matrix of variances and covariances of a set of variables that have already been adjusted to a standard deviation of 1. Since a correlation matrix and a covariance matrix can easily be translated into one another, many statistical texts use either one, or both, to illustrate how factor scores are obtained.
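
The relation, and the conversion between a covariance matrix and a correlation matrix, can be checked directly in numpy; the data below are arbitrary and serve only to verify the algebra.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = 0.5 * a + rng.normal(size=200)

cov_ab = np.cov(a, b)[0, 1]
r_ab = np.corrcoef(a, b)[0, 1]
s_a, s_b = a.std(ddof=1), b.std(ddof=1)
print(np.isclose(cov_ab, r_ab * s_a * s_b))    # True: cov_ab = r_ab * s_a * s_b

# Converting a covariance matrix to the corresponding correlation matrix:
C = np.cov(np.column_stack([a, b]), rowvar=False)
D = np.diag(1 / np.sqrt(np.diag(C)))
print(D @ C @ D)                               # the correlation matrix, with unit diagonal
```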

The central theorem of factor analysis, in mathematical terms, is that we can partition a covariance matrix M into a common portion C that is explained by a set of factors and a unique portion R that is unexplained by those factors. In matrix language, M = C + R, which means that each entry in matrix M is the sum of the corresponding entries in matrices C and R. The explained portion C can be further broken down into component matrices C1, C2, C3, and so on, each explained by an individual factor. Each of these one-factor components equals the outer product of a column of factor loadings with itself. A statistical program may retain and rank several such components if it finds more than one with an eigenvalue greater than 1. An eigenvalue is defined as the amount of variance explained by one more factor. Since a component analysis is adopted to summarize a set of data, it would not be meaningful to retain a factor that explains less variance than is contained in a single variable (an eigenvalue of less than 1); statistical programs therefore often apply this rule by default when selecting factors.
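
A small numpy sketch of this partition, using a hypothetical correlation matrix and the eigenvalue-greater-than-1 rule described above:

```python
import numpy as np

# A hypothetical 4 x 4 correlation matrix standing in for M.
M = np.array([[1.00, 0.60, 0.55, 0.10],
              [0.60, 1.00, 0.50, 0.15],
              [0.55, 0.50, 1.00, 0.05],
              [0.10, 0.15, 0.05, 1.00]])

eigvals, eigvecs = np.linalg.eigh(M)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1                                     # the eigenvalue-greater-than-1 rule
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])   # one column of loadings per retained factor

C = loadings @ loadings.T        # common portion explained by the retained factors
R = M - C                        # unique portion left unexplained, so that M = C + R
print(np.round(C, 2))
print(np.round(R, 2))
```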

Principal component analysis is the method most commonly used for this kind of factor analysis and was introduced to achieve representation or summarization. It attempts to reduce p variables to a set of m linear functions of those variables that best describe and summarize the original p. Several conditions need to be satisfied for a set of m factors to serve this purpose. First, the m factors must be mutually uncorrelated. Second, any set of m factors should include the functions of any smaller set. Third, the squared weights defining each linear function must sum to 1, denoting the total variance explained. By using all p components, we get a perfect reconstruction of the original X-scores, while by using the first m (those with the greatest eigenvalues), we get the best reconstruction possible for that value of m and the most simplified model for interpretation.
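
The reconstruction property can be illustrated with a short numpy sketch. The data here are arbitrary standard-normal deviates, so the example shows the algebra rather than any substantive structure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardized X-scores

R = np.corrcoef(Z, rowvar=False)
eigvals, W = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, W = eigvals[order], W[:, order]        # columns of W: weights whose squares sum to 1

scores_all = Z @ W                              # all p = 6 components
print(np.allclose(scores_all @ W.T, Z))         # True: all p components reconstruct Z perfectly

m = 2
approx = Z @ W[:, :m] @ W[:, :m].T              # best reconstruction obtainable from the first m
print(np.round(((Z - approx) ** 2).mean(), 3))  # residual variance not captured by the first m
```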

Statistical programs allow researchers to select how many factors will be retained. Ideally, we want to identify a certain number of factors that would explain or represent all the relevant variables. However, the purpose of factor analysis is not just to find all the statistically significant factors; rather, the factors identified should be meaningful to the researchers and interpreted subjectively by them. If the factors generated are meaningless in terms of the compositions of variables, such a factor analysis is not useful. In general, researchers may use exploratory factor analysis to find statistically significant factors (eigenvalues > 1) if they do not have prior knowledge of what factors may be generated from a number of variables. It is therefore quite common for two researchers to arrive at two different sets of factors even though they used an identical dataset. The question is not who is right or wrong, but whether researchers can adopt a group of factors that leads to a better interpretation of the data. If researchers have prior knowledge (e.g., theories) about the factors, they can limit the number of factors to be generated in statistical programs rather than allowing the programs to determine it. In other words, researchers test whether the proposed variables group into factors as suggested by the theory.

Researchers may use the rotation of a factor-loading matrix to simplify structure in factor analysis. Consider a set of p multiple regressions from p observed variables, wherein each regression predicts one of the variables from all m factors. The standardized coefficients in this set of regressions form a p × m matrix called the factor-loading matrix. We may replace the original factors with a set of linear functions of those factors that yield the same predictions as before, but with a different factor-loading matrix. In practice, this rotated matrix is expected to have a simpler structure that better serves researchers' subjective interpretations.

SEE ALSO Covariance; Eigen-Values and Eigen-Vectors, Perron-Frobenius Theorem: Economic Applications; Methods, Quantitative; Models and Modeling; Regression Analysis; Statistics

BIBLIOGRAPHY

Thurstone, L. L. 1931. Measurement of Social Attitudes. Journal of Abnormal and Social Psychology 26 (3): 249–269.

Cheng-Hsien Lin

factor analysis


factor analysis A family of statistical techniques for exploring data, generally used to simplify the procedures of analysis, mainly by examining the internal structure of a set of variables in order to identify any underlying constructs. The most common version is so-called principal component factor analysis.

In survey data, it is often the case that attitudinal, cognitive, or evaluative characteristics go together. For example, respondents who are in favour of capital punishment may also be opposed to equality of opportunity for racial minorities, opposed to abortion, and may favour the outlawing of trade unions and the right to strike, so that these items are all intercorrelated. Similarly, we might expect that those who endorse these (in the British context) right-wing political values may also support right-wing economic values, such as the privatization of all state-owned utilities, reduction of welfare state benefits, and suspension of minimum-wage legislation. Where these characteristics do go together, they are said either to be a factor, or to load on to an underlying factor, in this case what one might call the factor ‘authoritarian conservatism’.

Factor analysis techniques are available in a variety of statistical packages and can be used for a number of different purposes. For example, one common use is to assess the ‘factorial validity’ of the various questions comprising a scale, by establishing whether or not the items are measuring the same concept or variable. Confronted by data from a battery of questions all asking about different aspects of (say) satisfaction with the government, it may be that individual items dealing with particular economic, political, and social policies, the government's degree of trustworthiness, and the respondent's satisfaction with the President are not related, which suggests that these different aspects are seen as conceptually distinct by interviewees. Similarly, for any given set of variables, factor analysis can determine the extent to which these can be reduced to a smaller set in order to simplify the analysis, without losing any of the underlying concepts or variables being measured. Alternatively, researchers may ask respondents to describe the characteristics of a social attribute or person (such as ‘class consciousness’ or ‘mugger’), and factor-analyse the adjectives applied to see how the various characteristics are grouped.

All these uses are ‘exploratory’, in the sense that they attempt to determine which variables are related to which, without in any sense testing or fitting a particular model. Consequently, as is often the case in this kind of analysis, researchers may have difficulty interpreting the underlying factors on to which the different groups of variables load. Some marvellously imaginative labels have been devised by sociologists who have detected apparent underlying factors but have no clear idea of what these higher-order abstractions might be. Less frequently, however, a ‘confirmatory’ factor analysis is undertaken. Here, the researcher anticipates that a number of items measuring (say) ‘job satisfaction’ all form one factor, and this proposition is then tested by comparing the actual results with a solution in which the factor loading is perfect.

Alternative criteria exist for determining the best method for doing the analysis, the number of factors to be retained, and the extent to which the computer should ‘rotate’ factors to make them easier to interpret. An ‘orthogonal rotation’ yields factors which are unrelated to each other whereas an ‘oblique’ rotation allows the factors themselves to be correlated; and, as might be expected, there is some controversy about which procedure is the more plausible in any analysis. Although there are conventions about the extent to which variables should correlate before any are omitted from a factor, and the amount of variance (see VARIATION) to be explained by a factor before it may be ignored as insignificant, these too are matters of some debate. The general rule of thumb is that there should be at least three variables per factor, for meaningful interpretation, and that factors with an ‘eigenvalue’ of less than one should be discarded. (The latter quantity corresponds to the percentage of variance, on average, explained by the equivalent number of variables in the data, and is thus a standardized measure which allows researchers to eliminate those factors that account for less of the variance than the average variable.) However, even when a factor has an eigenvalue greater than 1, there is little to be gained by retaining it unless it can be interpreted and is substantively meaningful. At that point, statistical analysis ceases, and sociological theory and imagination take over. Moreover, the correlation matrix which is produced for the variables in any set and which yields the data from which factors are extracted, requires for its calculation variables which have been measured at the interval level and have a normal distribution. The use of the technique is therefore often accompanied by disputes as to whether or not these conditions have been met. For a useful introduction by a sociologist see Duane F. Alwin, ‘Factor Analysis’, in E. F. Borgatta and M. L. Borgatta (eds.), Encyclopedia of Sociology (1992). See also MEASUREMENT; PERSONALITY; SCREE TEST.

factor analysis


factor analysis See multivariate analysis.