Published in The American Statistician, 21 (December), 1967, 17-21
Derivation of Theory by Means of Factor Analysis or
Tom Swift and His Electric Factor Analysis Machine
J. Scott Armstrong
Problems in the use of factor analysis for deriving theory are illustrated by means of
an example in which the underlying factors are known. The actual underlying model is
simple and it provides a perfect explanation of the data. While the factor analysis
"explains" a large proportion of the total variance, it fails to identify the known factors
in the model. The illustration is used to emphasize that factor analysis, by itself, may
be misleading as far as the development of theory is concerned. The use of a
comprehensive and explicit a priori analysis is proposed so that there will be
independent criteria for the evaluation of the factor analytic results.
It has not been uncommon for social scientists to draw upon analogies from the physical sciences in their
discussions of scientific methods. They look with envy at some of the mathematical advances in the physical
sciences and one gets the impression that the social sciences are currently on the verge of some major mathematical
advances. Perhaps they are – but there are many social scientists who would disagree. Their position is that we really
don't know enough about what goes into our mathematical models to expect results that are meaningfully related to anything in the “real world.” In other words, the complaint is not that the models are no good or that they
don't really give us optimum results; rather it is that the assumptions on which the model is based do not provide a
realistic representation of the world as it exists. And it is in this area where the social sciences differ from the
physical sciences.
But now, thanks to recent advances in computer technology, and to refinements in mathematics, social
scientists can analyze masses of data and determine just what the world is like. Armchair theorizing has lost some of
its respectability. The computer provides us with objective results.
Despite the above advances, there is still a great deal of controversy over the relevant roles of theorizing
and of empirical analysis. We should note that the problem extends beyond one of scientific methodology; it is also
an emotional problem with scientists. There is probably no one reading this paper who is not aware of the proper
relationship between theorizing and empirical analysis. On the other hand, we all know of others who do not
understand the problem. We are willing to label others as either theorists or empiricists; and we note that these
people argue over the relative merits of each approach.
It may be useful at this point to describe these mythical people. The theorist is a person who spends a great
deal of time in reading and contemplation. He then experiences certain revelations or conceptual breakthroughs from
which his theory is published. When others fail to validate his theory (that is, to demonstrate its usefulness) the
problems are nearly always said to be due to improper specification or measurement.
The empiricist is a person who spends a great deal of time collecting data and talking to computers.
Eventually he uncovers relationships that are significant at the 5% level and he publishes his findings. If he is very
careful and reports only “what the data say,” he will not even have to defend himself when the other 99 people in his
line of work read his study.
While it would appear that the relationships between the theorist and the empiricist should be
complementary, this is not always evident from the literature which is published. Everyone knows that theorists
have existed (and probably much more comfortably) without empiricists; and one now gets the impression that the
empiricist feels little need for the theorist. The data speak for themselves. There is no need for a predetermined
theory because the theory will be drawn directly from the data. An examination of the literature reveals many studies
which seem to fit this category. For example, Cattell (1949) has attempted to discover primary dimensions of culture
by obtaining data on 72 variables for each of 69 national cultures. The 12 basic factors which were obtained seemed
to me to be rather mysterious. They included factors such as cultural assertion, enlightened affluence, thoughtful
industriousness, bourgeois philistinism, and cultural disintegration.
I would now like to draw upon an analogy in the physical sciences1 in order to indicate how science might
have advanced if only computers had been invented earlier. More specifically, we’ll assume that computer
techniques have advanced to the stage where sophisticated data analysis can be carried out rather inexpensively. Our
hero will be an empiricist.
Tom Swift is an operations researcher who has recently been hired by the American Metals Company.
Some new metals have been discovered. They have been shipped to the American Metals Company and now sit in
the basement. AMC is unfamiliar with the characteristics of these metals and it was Tom's job to obtain a short but
comprehensive classification scheme.
Tom hadn't read the literature in geometry, in metallurgy, or in economics, but he did know something
about factor analysis. He also had a large staff.
In fact, all of the 63 objects were solid metallic right-angled parallelepipeds of varying sizes – which is to
say, they looked like rectangular boxes.
Tom instructed his staff to obtain measurements on all relevant dimensions. After some careful
observation, the staff decided that the following measures would provide a rather complete description of the objects:
(a) thickness (g) total surface area
(b) width (h) cross-sectional area
(c) length (i) total edge length
(d) volume (j) length of internal diagonal
(e) density (k) cost per pound
(f) weight
Each of the above measurements was obtained independently (e.g., volume was measured in terms of cubic feet of
water displaced when the object was immersed in a tub).2
Being assured that the measurements were accurate,3 Tom then proceeded to analyze the data in order to
determine the basic underlying dimensions. He reasoned that factor analysis was the proper way to approach the
1 The idea of using data from physical objects is not new. Demonstration analyses have been performed on boxes,
bottles, geometric figures, cups of coffee, and balls. Overall (1964) provides a bibliography on this literature. The
primary concern in these papers has been to determine which measurement models provide the most adequate descriptions of the objects.
2 Actually, the data for length, width, and thickness were determined from the following arbitrary rules:
(a) Random integers from 1 to 4 were selected to represent width and thickness, with the additional provision
that the width ≥ thickness.
(b) A random integer from 1 to 6 was selected to represent length, with the provision that length ≥ width.
(c) A number of the additional variables are merely obvious combinations of length, width, and thickness.
The physical characteristics of the metals were derived from the Handbook of Chemistry and Physics. Nine different
metals were used (aluminum, steel, lead, magnesium, gold, copper, silver, tin, and zinc). Seven parallelepipeds of
each type of metal were created.
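The construction rules in this footnote are concrete enough to sketch in code. The sketch below is a reconstruction of the data-generating process, not Armstrong's original procedure; the density values are only approximate, and the `cost_per_lb` argument and cross-section convention (perpendicular to the length) are assumptions.

```python
import math
import random

# Approximate densities in lb/cu ft for the nine metals (illustrative
# values, not the Handbook of Chemistry and Physics figures).
DENSITIES = {
    "aluminum": 169, "steel": 490, "lead": 708, "magnesium": 109,
    "gold": 1204, "copper": 559, "silver": 655, "tin": 455, "zinc": 445,
}

def make_box(metal, cost_per_lb, rng=random):
    """Generate one parallelepiped following the footnote's rules."""
    # Rule (a): width and thickness are random integers 1-4, width >= thickness.
    thickness = rng.randint(1, 4)
    width = rng.randint(thickness, 4)
    # Rule (b): length is a random integer 1-6, length >= width.
    length = rng.randint(width, 6)
    density = DENSITIES[metal]
    # Rule (c): the remaining variables are obvious combinations of the above.
    volume = length * width * thickness
    weight = density * volume
    surface = 2 * (length * width + length * thickness + width * thickness)
    cross_section = width * thickness          # assumed: section normal to length
    edge = 4 * (length + width + thickness)
    diagonal = math.sqrt(length**2 + width**2 + thickness**2)
    return dict(thickness=thickness, width=width, length=length,
                volume=volume, density=density, weight=weight,
                surface=surface, cross_section=cross_section,
                edge=edge, diagonal=diagonal, cost_per_lb=cost_per_lb)
```

The point of the exercise survives in miniature here: six of the eleven variables are deterministic functions of the first five.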
problem since he was interested in reducing the number of descriptive measures from his original set of 11 and he
also suspected that there was a great deal of multicollinearity in the original data. The California Biomedical 03M
program was used to obtain a principal components solution. The procedure conformed to the following rules:
(a) Only factors having eigenvalues greater than 1.0 were used. (This yielded three factors which
summarized 90% of the information contained in the original 11 variables.)
(b) An orthogonal rotation was performed. This was done since Swift believed that basic underlying
factors are statistically independent of one another.
(c) The factors were interpreted by trying to minimize the overlap of variable loadings on each factor.
(The decision rule to use only those variables with a loading greater than 0.70 utilized all 11
variables, with no overlap in the three-factor rotation.)
Principal components was used since this is the recommended factor analytic method when one is interested in
generating hypotheses from a set of data.
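Swift's procedure (principal components of the correlation matrix, retention of components with eigenvalue above 1.0, and an orthogonal rotation) can be sketched in modern terms. This is a reconstruction in NumPy, not the 03M program itself; varimax is assumed as the orthogonal rotation, and the function names are invented here.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a p x k loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # SVD of the gradient of the varimax criterion.
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt
        var_new = s.sum()
        if var_new < var_old * (1 + tol):
            break
        var_old = var_new
    return loadings @ R

def kaiser_pca(X):
    """Principal components of the correlation matrix, keeping
    components with eigenvalue > 1.0 (Swift's retention rule),
    then varimax-rotating the retained loadings."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return eigvals, varimax(loadings)
```

Nothing in this pipeline knows, or can know, how many dimensions actually generated the data; the eigenvalue cutoff alone decides how many factors come out.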
The factor loadings are shown in Table 1.
Table 1. Three Factor Results (variables with loadings greater than 0.70)

Factor I: (a) thickness, (d) volume, (g) surface area, (b) width, (f) weight, (i) edge length
Factor II: (e) density, (k) cost per pound
Factor III: (c) length, (j) internal diagonal length, (h) cross-sectional area
Tom had a great deal of difficulty in interpreting the factors. Factor II was clearly a measure of the intensity
of the metal. Factor III appeared to be a measure of shortness. But Factor I was only loosely identified as a measure
of compactness.
To summarize then, the three basic underlying factors of intensity, shortness, and compactness summarize
over 90% of the variance found in the original 11 variables. Tom felt that this finding would assist him in some of
his coming projects – one of which was to determine just how the total cost of each of the metallic objects was
derived. In other words, he could develop a regression model with Total Cost as the dependent variable and the three
basic factors as his independent variables.
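Swift's planned follow-up, regressing Total Cost on his factor scores, would look something like the minimal sketch below (the names are hypothetical). Note that the true relation, total cost = weight × cost per pound, is a product, so a model linear in the factors can only approximate it.

```python
import numpy as np

def fit_cost_model(factor_scores, total_cost):
    """Ordinary least squares of total cost on factor scores,
    with an intercept term."""
    X = np.column_stack([np.ones(len(factor_scores)), factor_scores])
    beta, *_ = np.linalg.lstsq(X, total_cost, rcond=None)
    return beta  # [intercept, one coefficient per factor]
```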
3 Another variation would have been to trace the development of the science by having the data be collected first in
ordinal form. Then another researcher skilled in the latest measurement techniques would come along, recognize the
failure of the first study as a “measurement problem” – obtain interval data – then replicate the study.
Let us step back now and analyze what contribution Tom Swift has made to science. Those people who
have read the literature in metallurgy, geometry, and economics will recognize that, in the initial study, all of the
information is contained in five of the original 11 variables – namely length, width, thickness, density, and cost per
pound. The remaining six variables are merely built up from the five “underlying factors” by additions and
multiplications. Since a rather simple model will give a perfect explanation, it is difficult to get excited about a
factor analytic model which “explains” 90.7% of the total information.
The factor analysis was unable to uncover the basic dimensions. It determined that there were three rather
than five basic factors. And the interpretation of these factors was not easy. In fact, one suspects that, had the field
followed along the lines advocated by Swift (by measuring intensity, shortness, and compactness), progress would
have been much slower! The Swift study could easily mislead other researchers.
As one other example of how researchers could be misled, consider the following. Both volume and
surface area load heavily on Factor I. Going back to the original matrix, we find that the correlation
coefficient between surface area and volume is .969. We conclude that, allowing for some measurement error, these
variables are really measuring the same thing and we are just as well off if we know either one of them as when we
know both. This statement is, of course, a good approximation to this set of data. But if we tried to go beyond our
data it is easy to see where the reasoning breaks down. That is, one can construct a very thin right-angled
parallelepiped with surface area equal to that of a cube but with volume much smaller.
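A quick numeric check of that last point (the box dimensions here are arbitrary, chosen only so the surface areas match):

```python
def surface_area(length, width, thickness):
    return 2 * (length * width + length * thickness + width * thickness)

def box_volume(length, width, thickness):
    return length * width * thickness

cube = (4, 4, 4)        # surface area 96, volume 64
slab = (20, 1, 4 / 3)   # long and thin, with the same surface area of 96

print(surface_area(*cube), box_volume(*cube))           # 96 64
print(round(surface_area(*slab)), box_volume(*slab))    # 96, volume ~26.7
```

Equal surface areas, yet the slab holds well under half the cube's volume; the .969 correlation is a property of this particular sample of boxes, not of boxes in general.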
If Mr. Swift had not followed his original “rules” he might have done a little better. Let us say that he
dropped the rule that the eigenvalues must be greater than 1.0. The fourth factor has an eigenvalue of .55; the fifth is
.27 and the sixth is .09. He then rotates four factors, then five, etc. In this case the rotation of five factors showed
that he had gone too far as none of the variables achieved a high loading (.70) on the fifth factor.
The four-factor rotation is interesting, however. This is shown in Table 2.
Table 2. Four Factor Results (variables with loadings greater than 0.70)

Factor I: (a) thickness, (d) volume, (g) surface area, (f) weight
Factor II: (e) density, (k) cost per pound
Factor III: (c) length, (j) internal diagonal length
Factor IV: (b) width, (h) cross-sectional area
Swift's solution includes all variables except total edge length and there is no overlap (in the sense that one
variable loads heavily on more than one factor). The fact that edge length is not included seems reasonable since it is
merely the sum of the length, width, and thickness factors (multiplied by the constant 4, of course).
The rotation of four factors appears to be very reasonable to us – since we know the theory. It is not clear,
however, that Swift would prefer this rotation since he had no prior theory. Factor II once again comes through as
intensity. Factors I, III, and IV may conceivably be named as thickness, length, and width factors. The factors still
do not distinguish between density and cost per pound, however.
An Extension
Not being content with his findings, Swift called upon his staff for a more thorough study. As a result, the
original set of 11 variables was extended to include:
(l) average tensile strength (p) reflectivity
(m) hardness (Mohs scale) (q) boiling point
(n) melting point (r) specific heat at 20°C
(o) resistivity (s) Young's modulus
(t) molecular weight
The results of this principal components study are shown in Table 3.
Table 3. Five Factor Results (variables with loadings greater than 0.70)

Factor I: (d) volume, (g) surface area, (a) thickness, (i) edge length, (b) width, (f) weight, (h) cross-sectional area
Factor II: (l) tensile strength, (s) Young's modulus, (m) hardness, (n) melting point, (q) boiling point
Factor III: (e) density, (r) specific heat, (t) molecular weight, (k) cost per pound
Factor IV: (o) resistivity, (p) reflectivity
Factor V: (c) length, (j) internal diagonal length
Five factors explain almost 90% of the total variance. Swift, with much difficulty, identified the factors as
Impressiveness, Cohesiveness, Intensity, Transference, and Length (reading from I to V, respectively). There seem to
be strange bedfellows within some of the factors. It is difficult to imagine how work in the field would proceed from
this point.
There are, of course, many other variations that Swift could have tried. Mostly these variations would be
derived by using different communality estimates, obtaining different numbers of factors, making transformations of
the original data, and experimenting with both orthogonal and oblique rotations. The point is, however, that without
a prespecified theory Swift has no way to evaluate his results.
The factor analysis might have been useful in evaluating theory. For example, if one of the theorists had
developed a theory that length, width, thickness, density, and cost-per-pound are all basic independent factors, then
the four factor rotation above would seem to be somewhat consistent with the theory. Assuming that one was not
able to experiment but just had to take the data as they came, this approach does not seem unreasonable.
If one does use the factor analytic approach, it would seem necessary to draw on existing theory and
previous research as much as possible. That is to say, the researcher should make prior evaluations of such things as:
(a) What type of relationships exist among the variables? This should lead to a prior specification as
to what transformations are reasonable in order to satisfy the fundamental assumptions that the
observed variables are linear functions of the factor scores and also that the observed variables are
not causally related to one another. Note that, in the example given above, the variables did not
come from a linear model. One can hardly expect all of the variables in the real world to relate to
each other in a linear fashion.
(b) How many factors are expected to show up in the solution?
(c) What types of factors are expected? The analyst should outline his conceptual model in sufficient
detail so that he can make à priori statements about what combinations are reasonable and what
combinations are unreasonable. In operational terms, the analyst should be in a position to
formulate indices on the basis of his theory before he examines the data.
(d) What set of variables should be considered in the original data? Is each variable logically
consistent with the theory?
(e) What relationships are expected to exist between the resulting factors? (e.g. should we expect
them to be orthogonal?)
(f) What are the most meaningful communality estimates for the problem? (The choice here will influence the number of factors which are obtained.)
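On point (f): one standard alternative to the unit communalities implicit in principal components is each variable's squared multiple correlation (SMC) with all the others. A minimal sketch, assuming the correlation matrix is invertible:

```python
import numpy as np

def smc_communalities(corr):
    """Squared multiple correlation of each variable with the rest:
    a common initial communality estimate for principal-axis factoring
    (principal components implicitly uses 1.0 for every variable)."""
    inv = np.linalg.inv(corr)
    return 1.0 - 1.0 / np.diag(inv)
```

Replacing the unit diagonal of the correlation matrix with these smaller values lowers the eigenvalues, which in turn can change how many factors a retention rule keeps.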
Tom Swift’s work would have been much more valuable if he had specified a conceptual model. He would
have been able to present a more convincing argument for his resulting theory had it agreed with his prior model.
Such agreement is evidence of construct validity. In addition, the model might have led to further testing (e.g.,
through the use of other sets of data or by means of other analytic techniques).
I would not like to argue that all factor analytic studies fall into the same category as the Swift study. On
the other hand, there is a large number of published studies which do seem to fit the category. In these studies,
where the data stand alone and speak for themselves, my impression is that it would be better had the studies never
been published. The conclusion that “this factor analytic study has provided a useful framework for further research”
may not only be unsupported – it may also be misleading.
The cost of doing factor analytic studies has dropped substantially in recent years. In contrast with earlier
times, it is now much easier to perform the factor analysis than to decide what you want to factor analyze. It is not
clear that the resulting proliferation of the literature will lead us to the development of better theories.
Factor analysis may provide a means of evaluating theory or of suggesting revisions in theory. This
requires, however, that the theory be explicitly specified prior to the analysis of the data. Otherwise, there will be
insufficient criteria for the evaluation of the results. If principal components is used for generating hypotheses
without an explicit a priori analysis, the world will soon be overrun by hypotheses.
Cattell, R. B. (1949), “The dimensions of culture patterns by factorization of national characters,” Journal of
Abnormal and Social Psychology, 44, 443-469.
Overall, J. E. (1964), “Note on the scientific status of factors,” Psychological Bulletin, 61 (4), 270-276.