Article

Spearman and the origin and development of factor models

Authors:
David J. Bartholomew

Abstract

Spearman invented factor analysis but his almost exclusive concern with the notion of a general factor prevented him from realizing its full potential. The leadership passed to others, notably Thurstone and Thomson, but progress was hampered by inadequate computing facilities and a limited conceptual framework. It is argued that the fitful progress of factor analysis and its slow and incomplete assimilation into the mainstream of statistical theory can be traced to the lack of a clear idea, until relatively recently, of the role of a model in the development of statistical methods. The combination of an appropriate modelling framework with Spearman's original idea provides a base for further development.

... If p > 0.05, the variation is non-significant. Significance is considered at the 0.01 and 0.05 levels when the analysis is two-tailed (Bartholomew 1995; Malik and Hashmi 2017). A lot of research work has been done on the statistical correlation between water quality parameters (Jothivenkatachalam et al. 2010; Kale et al. 2018; Sar et al. 2017; Shivanna et al. 2008; Singh et al. 2003; Tiwary et al. 2018). ...
... Factor analysis was developed by Spearman in 1904 and is the oldest such method (Bartholomew 1995). Factor analysis is a data reduction method used to describe the variability among observed, correlated variables. ...
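To make the two-tailed significance criterion quoted above concrete, here is a minimal Python sketch on invented data (the variable names, sample size and values are hypothetical and not taken from the studies cited):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 105                                    # hypothetical number of groundwater samples
ec = rng.normal(800, 150, n)               # hypothetical electrical conductivity values
tds = 0.65 * ec + rng.normal(0, 40, n)     # hypothetical total dissolved solids values

r, p = stats.pearsonr(ec, tds)             # scipy returns a two-tailed p-value
for alpha in (0.05, 0.01):
    verdict = "significant" if p < alpha else "non-significant"
    print(f"r = {r:.3f}, p = {p:.3g}: {verdict} at the {alpha} level")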
Article
Full-text available
In the present investigation, hydrogeochemistry and multivariate statistical analysis of groundwater quality were assessed from hard rock aquifers of the Deccan trap basalt in the Jalna district of Maharashtra. Groundwater samples (n = 105) were collected from the study area in a systematic grid pattern to avoid bias in sampling. Water quality parameters of all these groundwater samples were analyzed by standard BIS and APHA procedures, using titrimetric methods and sensors. Uranium in all samples was analyzed using an LED fluorimeter. Strict quality assurance and quality control features were adopted in all stages of the study to ensure the quality of the data. The observed sequence of dominance of major cations and anions is Ca2+ > Na+ > Mg2+ > K+ and HCO3− > Cl− > SO42−, respectively. The observed uranium values were in the range of 0.1 to 16.2 µg/L with an average value of 2.04 µg/L, well below the safe limits recommended by WHO and AERB, i.e. 30 and 60 µg/L, respectively. The Piper trilinear diagram indicates that the dominant hydrochemical facies of groundwater in the study area are mixed Ca2+-Na+-HCO3−, Na+-Cl− and Ca2+-HCO3−, while the Gibbs plot infers that host rock-water interaction is the major geochemical process in these aquifers. Correlation analysis, cluster analysis, and factor analysis tests were performed. Groundwater was assessed for its suitability for irrigation purposes using multiple indices such as SAR, RSC, and Na percentage. From the estimated indices, it was found that the groundwater in hard rock aquifers of the Deccan trap basalt is suitable for irrigation purposes.
... Therefore, since students are more successful in courses toward which they develop positive attitudes (Hongwarittorrn and Krairit, 2010), determining their attitudes toward programming is also important. Attitude is a psychological variable that determines human behaviour and cannot be observed directly (Anderson, 1988) (Bartholomew, 1995). However, since exploratory factor analysis is used only for factorial validity and for analysing the measurement model, it is not sufficient on its own for establishing construct validity (Demir and Yurdugül, 2014). ...
Article
Full-text available
The aim of this study is to develop a valid and reliable attitude scale that measures the attitudes of high school students towards programming. The sample of the study consisted of 214 high school students studying in 12 schools in different provinces and districts. A 64-item, 5-point Likert-type scale was prepared after the literature review, and the scale was evaluated by four experts in the related fields in order to determine its content validity. After the evaluation of the judges, a 21-item form was constructed. As a result of exploratory and confirmatory factor analyses, 7 items were discarded and a 3-factor solution consisting of 14 items emerged. The factors were labeled as demand towards programming, faith in the benefit of programming, and interest towards programming. The Cronbach Alpha of the scale was calculated as .89, and the total explained variance of the scale was calculated as 65.71%. Psychometric properties indicated that the scale was valid and reliable.
... A particularly popular dimension reduction technique is Exploratory Factor Analysis (EFA). A precursor of modern EFA was invented by Spearman (Bartholomew, 1995), who developed it to reduce performance scores on a large battery of cognitive ability tests into one, or a small number, of ability factors. EFA models the observed covariance matrix of a set of P variables by assuming there are M < P factors, which predict the values on the observed variables. ...
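As a minimal illustration of the EFA setup described in the excerpt above (M < P factors reproducing the covariance of P observed variables), the following Python sketch fits a factor model to synthetic data; the dimensions and variable names are invented, and scikit-learn's FactorAnalysis is used as a simple stand-in for a full EFA workflow:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
N, P, M = 500, 20, 3                                  # samples, observed variables, factors (illustrative)
F = rng.normal(size=(N, M))                           # latent factor scores
L = rng.normal(size=(M, P))                           # loadings linking factors to variables
X = F @ L + rng.normal(scale=0.5, size=(N, P))        # observed data = factor part + unique noise

fa = FactorAnalysis(n_components=M)                   # assumes the number of factors is known
scores = fa.fit_transform(X)                          # N x M estimated factor scores
loadings = fa.components_                             # M x P estimated loadings
print(scores.shape, loadings.shape)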
Preprint
Full-text available
Dimension reduction is widely used and often necessary to reduce high dimensional data to a small number of underlying variables -- factors or components -- to make data analyses and their interpretation tractable. One popular technique is Exploratory Factor Analysis (EFA), which extracts factors when the underlying factor structure is not known. However, we observe that datasets exist where researchers indeed do not know the factor structure, but do have other relevant a priori knowledge. For instance, cognitive neuroscientists may want to reduce individual differences in brain structure across a large number of regions to a tractable number of factors. In this field, it is well established that the brain displays contralateral symmetry, such that the same regions in the left and right half of the brain will be highly correlated. Here, a) we show the adverse consequences of ignoring such a priori structure in standard factor analysis, b) we propose a technique for exploratory factor analysis with structured residuals (EFAST) which accommodates such a priori structure into an otherwise standard EFA, and c) we apply this technique to a large (N = 647, 68 brain regions) empirical dataset, demonstrating the superior fit and improved interpretability of our approach. We provide an R software package to allow researchers to apply this technique to other suitable datasets.
... He tried to see whether something like 'general intelligence' could explain the correlations among sets of test scores. For an overview of the origins and the development of factor analysis, see Bartholomew (1995). The first statistical treatment of factor analysis was given in Lawley and Maxwell (1971). In factor analysis we have a number r of observed metric variables that we want to express as linear combinations of q latent variables, where q is much less than r. ...
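Written out explicitly, the linear factor model referred to in this excerpt takes the standard form below (a textbook statement, not a quotation from the thesis that follows), with r observed variables x and q << r latent factors f:

% standard linear factor model for r observed variables and q latent factors
x = \mu + \Lambda f + e, \qquad f \sim N_q(0, I_q), \qquad e \sim N_r(0, \Psi), \ \Psi \ \text{diagonal},
% so that the implied covariance matrix of the observed variables is
\operatorname{Cov}(x) = \Sigma = \Lambda \Lambda^{\top} + \Psi .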
Article
Full-text available
Latent variable models are widely used in social sciences in which interest is centred on entities such as attitudes, beliefs or abilities for which there exist no direct measuring instruments. Latent modelling tries to extract these entities, here described as latent (unobserved) variables, from measurements on related manifest (observed) variables. Methodology already exists for fitting a latent variable model to manifest data that is either categorical (latent trait and latent class analysis) or continuous (factor analysis and latent profile analysis). In this thesis a latent trait and a latent class model are presented for analysing the relationships among a set of mixed manifest variables using one or more latent variables. The set of manifest variables contains metric (continuous or discrete) and binary items. The latent dimension is continuous for the latent trait model and discrete for the latent class model. Scoring methods for allocating individuals on the identified latent dimensions based on their responses to the mixed manifest variables are discussed. Item nonresponse is also discussed in attitude scales with a mixture of binary and metric variables using the latent trait model. The estimation and the scoring methods for the latent trait model have been generalized for conditional distributions of the observed variables, given the vector of latent variables, other than the normal and the Bernoulli in the exponential family. To illustrate the use of the mixed model, four data sets have been analyzed. Two of the data sets contain five memory questions, the first on Thatcher's resignation and the second on the Hillsborough football disaster; these five questions were included in BMRBI's August 1993 face-to-face omnibus survey. The third and the fourth data sets are from the 1990 and 1991 British Social Attitudes surveys; the questions which have been analyzed are from the sexual attitudes sections and the environment section respectively.
... For the analysis of the measurement model, methods such as classical test theory (Novick, 1966) or item response theory (Lord and Novick, 1968) are used. Especially in the approach based on classical test theory, factor analysis has been widely used from Spearman (1904) to the present (Bartholomew, 1995). However, factor analysis is used only for the estimation of the measurement model and for factorial validity. ...
Data
Full-text available
The aim of this study is to present a reliable and valid Turkish instrument that meets current needs for measuring middle and high school students' attitudes toward computers (ÖBYT). To this end, Teo's (2008) students' attitudes toward computers scale was adapted into Turkish within the scope of this study. As for the method of the study, the scale was administered to 1678 students attending middle or high school in Ankara. After confirmatory factor analysis, the scale was found to show a three-factor (computer enjoyment, computer importance and computer anxiety), twenty-item structure. The overall Cronbach Alpha and Omega reliability coefficients of the scale were calculated as 0.83 and 0.95, respectively. The findings of the scale and the suggestions based on these findings are presented in more detail in the full text.
... For the analysis of the measurement model, methods such as classical test theory (Novick, 1966) and item response theory (Lord and Novick, 1968) are employed. Especially in the approach based on classical test theory, factor analysis has been widely used from the time of Spearman (1904) to today (Bartholomew, 1995). However, factor analysis is preferred for the estimation of the measurement model and for factorial validity. ...
Article
Full-text available
This study aimed to develop a Turkish scale which is reliable, valid and meets the current requirements for assessing the attitudes of students towards computers. To this end, the students' attitudes towards computers (SATC) scale, originally developed by Teo (2008), was adapted into Turkish. As for the methodology of the study, the scale was administered to a total of 1678 students enrolled in primary or secondary school located in Ankara. After confirmatory factor analysis (CFA) was performed, it was ascertained that the scale consisted of 20 items and 3 factors (computer enjoyment, computer importance and computer anxiety). The Cronbach Alpha and Omega values of the scale were found to be 0.83 and 0.95, respectively. The findings and the implications based on these findings are discussed in more detail in the full paper.
... Around the mid-1900s, when the psychological testing movement in American schools was in full force (Kaplan & Saccuzzo, 2010), English psychologist and statistician Charles Spearman (1939) coined the term "factor analysis" to describe a mathematical approach to determining the underlying patterns of relationships among different test scores. With this procedure, Spearman demonstrated that schoolchildren's performances on a wide range of mental ability tasks were at least moderately intercorrelated on a general (g-factor) mental ability dimension (Bartholomew, 1995). His seminal work on factor analysis has in part contributed to the dramatic rise in the use of tests and measurements in fields like psychology, education, and counseling. ...
Article
Full-text available
This article summarizes the general uses and major characteristics of factor analysis, particularly as they may apply to counseling research and practice. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) are overviewed, including their principal aims, procedures, and interpretations. The basic steps of each type of factor analysis are elucidated. For EFA, the methods of factor extraction (principal component analysis and principal axis factoring), retention, rotation, and naming are summarized. CFA’s basic operations (model specification, testing, and interpretation) are discussed. In conclusion, EFA and CFA are directly applied to the development of a counseling-related instrument.
... Spearman spends part of his 90-page paper defending his one-factor model of general intelligence, arguably his main, seminal contribution to the fields of psychometrics as well as statistical modeling at this time. Nonetheless, Bartholomew's (1995) review paper starts by stating that "Spearman invented factor analysis but his almost exclusive concern with the notion of a general factor prevented him from realizing its full potential." ...
... Spearman [3] originally developed his two-factor model as a way to represent Francis Galton's proposals [2] about cognitive ability, namely that all measures of cognitive ability have something in common: g [12]. Spearman was not really interested in testing Galton's theory, as he firmly believed that g existed by the time he started collecting the data for his study [13]. Nonetheless, the two-factor model that he developed in the process was the foundation for the development of factor analysis. ...
Article
Full-text available
The development of factor models is inextricably tied to the history of intelligence research. One of the most commonly cited scholars in the field is John Carroll, whose three-stratum theory of cognitive ability has been one of the most influential models of cognitive ability in the past 20 years. Nonetheless, there is disagreement about how Carroll conceptualized the factors in his model. Some argue that his model is best represented through a higher-order model, while others argue that a bi-factor model is a better representation. Carroll was explicit about what he perceived to be the best way to represent his model, but his writings are not always easy to understand. In this article, I clarify his position by first describing the details and implications of bi-factor and higher-order models and then show that Carroll's published views are better represented by a bi-factor model.
... Originally invented by Charles Spearman in 1904 [8], [9], factor analysis (FA) harbours a class of latent linear models that factorize observed data into a small number of uncorrelated factors. Denoting the observed and mean-subtracted data by X ∈ R^{M×N} (M: number of features, N: number of independently observed samples), Bayesian factor analysis [10] can be formulated by the following distributions: ...
Conference Paper
Full-text available
With the emergence of high-throughput measurement technologies, we are entering the big data era. Modern data are often generated from heterogeneous, multiple sources, and thus can be called multi-view data. This raises the challenge of effectively integrating such data for decision making and novel knowledge discovery. Matrix factorization methods have historically played important roles in various analyses of single-view data sets. These models have recently been generalized for the analysis of multi-view data in a few areas. In order to better understand the theories and applications of these models, as well as to inspire new studies in this field, in this paper we discuss multi-view matrix factorization models mainly from a Bayesian viewpoint, using the same notation system.
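The distributions mentioned in the citing excerpt above are not reproduced in the snippet; as background, a generic Bayesian factor analysis model for mean-subtracted data X = [x_1, ..., x_N] with K < M factors is usually written as follows (a standard sketch, not necessarily the authors' exact specification):

% generic Bayesian factor analysis: each observed column x_n is generated
% from a K-dimensional latent factor vector z_n via a loading matrix W
x_n \mid z_n \sim N_M(W z_n, \Psi), \qquad z_n \sim N_K(0, I_K), \qquad n = 1, \dots, N,
% with \Psi diagonal and, for example, Gaussian priors on the loadings,
W_{mk} \sim N(0, \sigma_W^2).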
... This set off a somewhat long and acrimonious debate between the "British factor analysis" and "the Thurstonians" about whether or not general or broad/group LVs should be included in their statistical models (Mulaik, 1986). While it turns out that, statistically, Spearman's and Thurstone's models were not all that different (Hedman, 1938), it is probably incorrect to say that their disagreements were unnecessary (e.g., Bartholomew, 1995; Carroll, 1993). It is doubtful they could have had much rapprochement since their underlying philosophies were so divergent (e.g., Haig, 2013). ...
Chapter
Full-text available
In this chapter, I describe the origins of Charles Spearman's Two-Factor Theory (TFT) of intelligence. Culling from Spearman's own scholarship, I show that the TFT was the first scientific intelligence theory to include both a general attribute as well as more specific, narrower attributes (i.e., "group factors"). In addition, I describe how Spearman's approach to studying intelligence attributes differs substantially from much modern scholarship, both in purpose and philosophy, and argue that scholars would be wise to integrate a Spearman-like perspective in their own research.
... Chief among the latter camp was Thurstone (1935, 1938a). Originally the differences were largely believed to be methodological (e.g., tetrad differences vs. centroid criterion), although it has been shown that the approaches are actually quite similar (Bartholomew, 1995). The differences in method, however, were merely outgrowths of fundamentally different philosophies and approaches to intelligence. ...
Article
Charles Spearman and L. L. Thurstone were pioneers in the field of intelligence. They not only developed methods to assess and understand intelligence, but also developed theories about its structure and function. Methodologically, their approaches were not that distinct, but their theories of intelligence were philosophically very different – and this difference is still seen in modern approaches to intellectual assessment. In this article, we describe their theories of intelligence and then trace how these theories have influenced the development and use of intelligence instruments, paying particular attention to score interpretation.
... Pioneered by Spearman (1904), whether the factor analysis model (as illustrated in Fig. 1d and to be further described in the next section) is identifiable has been a classical topic for more than 100 years, from perspectives that are more or less similar to constraints on the second-order statistics obtained from Eq. (9). The well-known TETRAD equations or differences were already discovered in Spearman (1904) and have been used for constructing causal structures not just in Pearl (1986) but also by others (Spirtes and Glymour 2000; Bartholomew 1995; Bollen and Ting 2000). Moreover, Theorem 4.2 in Anderson and Rubin (1956) also gave a necessary and sufficient condition for identifying whether a covariance matrix can be that of a factor analysis model with one factor and three observed variables, which is actually equivalent to Eq. (5) but expressed in a different format. ...
Article
Full-text available
Advances in causal discovery from data have become a widespread topic in machine learning in recent years. In this paper, studies on conditional independence-based causality are briefly reviewed along a line of observable two-variable, three-variable, star-decomposable, and tree-decomposable models, as well as their relationship to factor analysis. Then, developments along this line are further addressed from three perspectives, with a number of issues, especially on learning approximate star-decomposable and tree-decomposable models, as well as their generalisations to block star-causality analysis for factor analysis and block tree-decomposable analysis for linear causal models.
... where Xi is the ith observed variable (i = 1 … k), Fj is the jth common factor (j = 1 … p), and p < k. Ui is the unique part of the variable Xi (the part that cannot be explained by the common factors) (Bartholomew, 1995; Kabacoff, 2015; Spearman, 1904). The coefficient ai (the factor loading) can be considered the contribution of each factor to the composite observed variable. ...
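The equation to which this "where" clause refers is not reproduced in the snippet; under the standard formulation it is the common factor model below (a reconstruction for readability, using the symbols defined in the excerpt):

% common factor model for the i-th observed variable
X_i = a_{i1} F_1 + a_{i2} F_2 + \dots + a_{ip} F_p + U_i, \qquad i = 1, \dots, k, \quad p < k,
% where the a_{ij} are the factor loadings of variable X_i on factor F_j.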
Article
Number concentration is an important index for measuring atmospheric particle pollution. However, tailored methods for data preprocessing and for characteristic and source analyses of particle number concentrations (PNC) are rare, and interpreting the data is time-consuming and inefficient. In this method-oriented study, we develop and investigate some techniques via flexible conditions, C++ optimized algorithms, and parallel computing in R (an open source software for statistics and graphics) to tackle these challenges. The data preprocessing methods include deletion of variables and observations, outlier removal, and interpolation for missing values (NA). Compared with previous methods, they do better in cleaning data and keeping samples, and generate no new outliers after interpolation. Besides, automatic division of PNC pollution events based on relative values suits PNC properties and highlights the pollution characteristics related to sources and mechanisms. Additionally, basic functions of k-means clustering, Principal Component Analysis (PCA), Factor Analysis (FA), Positive Matrix Factorization (PMF), and a newly-introduced model, NMF (Non-negative Matrix Factorization), were tested and compared in analyzing PNC sources. Only PMF and NMF can identify coal heating and produce more explicable results; meanwhile, NMF apportions more distinctly and runs 11–28 times faster than PMF. Traffic is interannually stable in non-heating periods and always dominant. Coal heating's contribution has decreased by 40%–86% in the last 5 heating periods, reflecting the effectiveness of coal burning control.
... Traditionally, the Pearson correlation coefficient is the most widely used for estimating the relationships between climatic and geographic variables (Kadiolu, 2000); however, the present investigation applies the Spearman correlation coefficient, which describes the strength of the monotonic association between the forecasts and the observations (Fallas and Alfaro, 2012). It should be noted that the differences between the two coefficients are very small, in most cases differing only in the decimals of the estimated value; but the Spearman coefficient is a non-parametric, distribution-free technique, which makes its assumptions less strict while remaining robust to the presence of outliers (Bartholomew, 1995; Restrepo and González, 2007). ...
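A small Python sketch on invented data illustrates the point made above about the two coefficients: their values are usually close, but the rank-based Spearman coefficient is less affected by a single outlier (all data and variable names here are hypothetical):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
altitude = rng.uniform(100, 3500, 60)                  # hypothetical station altitudes (m)
rainfall = 0.02 * altitude + rng.normal(0, 10, 60)     # hypothetical precipitation index
rainfall[0] += 300                                     # inject one gross outlier

r_pearson, p_pearson = stats.pearsonr(altitude, rainfall)
r_spearman, p_spearman = stats.spearmanr(altitude, rainfall)
print(f"Pearson  r   = {r_pearson:.3f} (p = {p_pearson:.3g})")
print(f"Spearman rho = {r_spearman:.3f} (p = {p_spearman:.3g})")   # typically closer to the outlier-free value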
Article
Full-text available
The present study seeks to examine the influence of altitude on maximum daily precipitation events in the Pacific basin of the Jubones River (Ecuador), which is known for having a marked difference in altitudinal variation. Precipitation events are assessed based on the 95th and 99th percentiles of a pluviometric data series, classifying them as rainy events and stormy events. The results show that altitude does not influence daily precipitation events, including rainy or very rainy, when events are taken as a total series throughout the year. However, if robust indicators are used for analysis, the results show a relationship between altitude and average rainfall. This relationship is affected by how the study area is classified between wet and dry seasons. In conclusion, examining very rainy and extremely rainy events as an inter-annual monthly daily series shows a high correlation coefficient between these values and altitude values.
... First, we conducted an exploratory factor analysis (EFA) to identify potential latent constructs measuring health professionals' perceptions regarding the effects of hospital accreditation. Originating in the early 1900s with Charles Spearman's interest in developing the Two-Factor Theory [37], factor analysis is a multivariate statistical technique commonly used in psychological research [38], applied psychology [39], and, more recently, in health-related professions [40]. The functions involved in conducting EFA in this paper are available in the "psych" package in R. Given the non-normal distribution of the items, we used principal axis as the extraction method and the "oblimin" rotation, although the extraction method "minres" led to similar results [41]. ...
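The cited study ran its EFA with R's psych package; a rough Python analogue of the same choices (principal-axis extraction with an oblique oblimin rotation), using the third-party factor_analyzer package on synthetic Likert-type data, might look like the sketch below. The package, the data and all names are assumptions for illustration, not the authors' code:

# Assumes `pip install factor_analyzer`; illustrative only, not the cited analysis.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(7)
F = rng.normal(size=(300, 3))                           # latent scores for 300 hypothetical respondents
L = rng.normal(size=(3, 12))                            # loadings for 12 hypothetical items
X = F @ L + rng.normal(scale=0.5, size=(300, 12))       # synthetic item responses

fa = FactorAnalyzer(n_factors=3, method="principal", rotation="oblimin")
fa.fit(X)
print(fa.loadings_)                 # 12 x 3 rotated pattern matrix
print(fa.get_factor_variance())     # variance explained per factor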
Article
Full-text available
Hospital accreditation, as a quality signal, is gaining popularity among low- and middle-income countries, such as Romania, despite its costly nature. Nevertheless, its effectiveness as a quality signal in driving patients' choice of hospital services remains unclear. In this study, we empirically explore the perceptions of both healthcare professionals and patients toward Romanian hospital accreditation and identify perception gaps between the two parties. Exploratory and confirmatory factor analyses were carried out to extract the latent constructs of health professionals' perceived effects of hospital accreditation. The Wilcoxon rank-sum test and Kruskal-Wallis test were used to identify correlations between patients' sociodemographic characteristics and their behavioral intentions when confronted with low-quality services. We found that health professionals believe that hospital accreditation plays a positive role in improving patient satisfaction, institutional reputation, and healthcare service quality. However, we found a lack of awareness of hospital accreditation status among patients, indicating a perception gap regarding the effectiveness of accreditation as a market signal. Our results suggest that the effect of interpersonal trust in current service providers may distract patients from the accreditation status. Our study provides important practical implications for Romanian hospitals on enhancing the quality of the accreditation signal and suggests practical interventions.
... If p > 0.05, the variation is non-significant. Significance is considered at the 0.01 and 0.05 levels when the analysis is two-tailed [35,36]. A lot of research work has been done on the statistical correlation between water quality parameters [37][38][39][40][41][42][43]. ...
Article
Full-text available
The current study examined the spatial distribution of uranium and water quality parameters, along with the subsequent radiological impact due to uranium, in the groundwater of the Buldhana district. The chemo-radio toxicological dose owing to such dissolved uranium is estimated. The water quality parameters are compared with the World Health Organization and Bureau of Indian Standards safe recommended limits and found to be well below them. A correlation study was carried out between uranium and water quality parameters. The spatial distribution is mapped with GIS-based software. The chemo-radio toxicological risks due to uranium for different age groups were calculated. The findings of the study suggest that groundwater of the region is safe for drinking purposes from a chemo-radiological point of view.
... The computational time required to train and test the model is also calculated. Moreover, we utilize other dimensionality reduction techniques, such as principal component analysis (PCA) [81] and factor analysis [82], to compare their performance with our proposed method. All the experiments are performed on a workstation with a 3.5 GHz Intel Core i7-5930k and 64 GB of RAM. ...
Article
Full-text available
COVID-19 is a rapidly spreading viral disease and has affected over 100 countries worldwide. The numbers of casualties and cases of infection have escalated, particularly in countries with weakened healthcare systems. Currently, reverse transcription-polymerase chain reaction (RT-PCR) is the test of choice for diagnosing COVID-19. However, current evidence suggests that COVID-19 infected patients mostly develop a lung infection after coming into contact with the virus. Therefore, chest X-ray (i.e., radiography) and chest CT can be a surrogate in some countries where PCR is not readily available. This has prompted the scientific community to detect COVID-19 infection from X-ray images, and recently proposed machine learning methods offer great promise for fast and accurate detection. Deep learning with convolutional neural networks (CNNs) has been successfully applied to radiological imaging for improving the accuracy of diagnosis. However, the performance remains limited due to the lack of representative X-ray images available in public benchmark datasets. To alleviate this issue, we propose a self-augmentation mechanism for data augmentation in the feature space rather than in the data space, using reconstruction independent component analysis (RICA). Specifically, a unified architecture is proposed which contains a deep convolutional neural network (CNN), a feature augmentation mechanism, and a bidirectional LSTM (BiLSTM). The CNN provides the high-level features extracted at the pooling layer, where the augmentation mechanism chooses the most relevant features and generates low-dimensional augmented features. Finally, the BiLSTM is used to classify the processed sequential information. We conducted experiments on three publicly available databases to show that the proposed approach achieves state-of-the-art results with accuracies of 97%, 84% and 98%. Explainability analysis has been carried out using feature visualization through PCA projection and t-SNE plots.
... It was introduced at the beginning of the 20th century by Spearman.39 Factor analysis offers two different techniques: principal component analysis and correspondence analysis. ...
Article
Full-text available
Introduction: In recent years, the concept of "disease burden" has been given a central role in evaluating patient care, particularly in skin diseases. Measuring patient-reported outcomes (PRO) such as symptoms and disease burden may be useful. Aim: To present a methodology that facilitates the development and validation of burden questionnaires for patients suffering from skin diseases. Methodology: Based on past published burden questionnaires, a methodology for designing skin disease burden questionnaires was to be developed. Results: Based on 16 burden questionnaires developed and published over the last 10 years, the authors propose a standardized methodology for the easy design and validation of disease burden questionnaires for patients with chronic skin diseases. The authors provide detailed guidance for the conception, development and validation of the questionnaires, including reliability, internal consistency, external validity, cognitive debriefing, testing-retesting, translation and cross-cultural adaptation, as well as for statistical analysis. Conclusion: The proposed methodology enhances the design and validation of disease burden questionnaires in dermatology. Burden questionnaires may be used in clinical research as well as in daily clinical practice.
... Factor Analysis (FA) was introduced in 1904 by Charles Edward Spearman and described in 1995 by D. J. Bartholomew [18]. This method allows new variables to be created from a set of original variables. ...
Article
Full-text available
The present contribution is devoted to the theory of fuzzy sets, especially Atanassov Intuitionistic Fuzzy sets (IF sets), and their use in practice. We define the correlation between IF sets and the correlation coefficient, and we bring a new perspective to solving the problem of data file reduction in cases where the input data come from IF sets. We present specific applications of the two best-known methods used to solve the problem of reducing the size of a data file, Principal Component Analysis and Factor Analysis. We examine input data from IF sets from three perspectives: through the membership function, the non-membership function and the hesitation margin. This examination better reflects the character of the input data, and also better captures and preserves the information that the input data carry. In the article, we also present and solve a specific example from practice where we show the behavior of these methods on data from IF sets. The example is solved using the R programming language, which is useful for statistical analysis of data and their graphical representation.
Article
The period 1895-1925 saw the origins and establishment of the fields that came to be called econometrics and psychometrics. I consider what these fields owed to biometry (the statistical approach to the biological problems of evolution) and make some comparisons among all three. I emphasize developments in biology and psychology, for these are less familiar to historians of econometrics. These developments are interesting to contemplate, for the biometricians and psychometricians were already discussing issues associated with the respective roles of statistical analysis and of subject-matter theory, issues that became prominent in econometrics only much later.
Chapter
Factor analysis is a latent variable technique. It reduces the dimension of a data set by assuming that underlying the measured variables is a smaller number of unobservable factors. The measured variables are assumed to be linearly related to the factors, apart from an error term. There are many ways of estimating the loadings that relate the variables to the factors, and predicting the scores of the factors themselves. Solutions are typically nonunique, and are made as “simple” as possible by means of rotation of loadings. Much of factor analysis has been developed within psychology, but it is useful more widely. The technique is often used in an exploratory or descriptive manner, but confirmatory versions are also available.
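The non-uniqueness mentioned here is the usual rotational indeterminacy of the factor solution: any orthogonal rotation of the loading matrix reproduces the observed covariance structure equally well, which is why rotation criteria such as varimax are needed to pick a "simple" solution. In symbols (a standard result, added for clarity):

% for any orthogonal matrix T (T T^{\top} = I), the rotated loadings \Lambda^{*} = \Lambda T
% give the same fitted covariance matrix:
\Sigma = \Lambda \Lambda^{\top} + \Psi = (\Lambda T)(\Lambda T)^{\top} + \Psi .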
Chapter
Full-text available
Psychometrics developed as a means for measuring psychological abilities and attributes, usually via a standardized psychological test. Its emergence as a specialized area of psychology combined entrenched social policy assumptions, existing educational practices, contemporary evolutionary thinking, and novel statistical techniques. Psychometrics came to offer a pragmatic, scientific approach that both fulfilled and encouraged the need to rank, classify, and select people.
Article
Provides a more historically complete picture of the conflict between C. E. Spearman and E. B. Wilson on the 2-factor theory and indeterminacy than earlier chronicles have achieved. In the view of some, the 2-factor theory, and indeed factor analysis as a general technique, was invalidated by Wilson's demonstration of the indeterminacy of the factor solution and only survived by in effect denying the problem. However, a different story emerges from an examination of both the public record and unpublished correspondence between the 2 principals and with other players from 1928–1933. Proponents of factor analysis, although forced to accept the reality of indeterminacy, also found ways to justify adopting Wilson's transformations. "Managed" indeterminacy has proved in practice no serious obstacle to the development of factor analysis. The spirit in which their interchange was conducted illustrates the socially negotiated nature of science. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
We consider the documentation of the working process in the context of matrix computations and multivariate statistical analyses. Our focus is on the way of working: how well does the working process get documented dynamically, or what sort of traces are left behind. These questions are relevant in any area of research. Leaving useful traces while working may save a considerable amount of time, and provide better possibilities for other researchers to comprehend the points of a study. These principles are demonstrated with examples using Survo software and its matrix interpreter.
Article
Full-text available
Purpose – This paper aims to investigate empirically how broadband has been implemented at the business level and what are the potential adoption benchmarks. Several recent studies have called for the development of frameworks of broadband adoption, particularly at the business level, to help policy makers, communities and businesses with their strategic decision-making process. Design/methodology/approach – This paper opens the discussion by presenting concerns and challenges of Internet adoption. Internet adoption is viewed as the current challenge facing businesses, communities and governments. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) techniques are used to create, analyze and develop Internet adoption models. Findings – Based on the Internet usage data from a number of states across the USA, measurement models are developed using EFA and CFA. The findings indicate that for our sample, a three-factor model is the most appropriate for the representation of Internet adoption in the tourism sector, while a five-factor model can best describe Internet adoption in the sample of manufacturing organizations. Research limitations/implications – The availability of data on Internet usage at the business/organizational level is one of the main constraints. Industry/community-specific data can also provide valuable insights about the Internet adoption and support the development of industry/community-specific adoption models. Practical implications – The findings and the employed research method can be used by businesses, communities and government managers and policy makers as benchmarks to examine broadband adoption based on gap-opportunity criteria. Originality/value – This is the first study that provides Internet adoption models based on an empirical study at the business level. The benefits of broadband Internet have been investigated by many researchers in the past decade. There seems to be a consensus among practitioners and scholars about the role of broadband Internet in gaining competitive advantage. However, there have not been any previous studies that investigate how broadband has been implemented and what the potential adoption benchmarks at the business level are.
Book
Full-text available
What is personality? And how can it be scientifically investigated and diagnosed? Above all in the first half of the 20th century, the culturally loaded idea of personality developed great productive force within the psychological sciences. David Keller places at the centre of his systematic study the variety of media and techniques that were mobilized to legitimize 'personality', through experimental processes of searching, as a stable concept of the human sciences. His reconstruction of the relevant research and diagnostic practices, together with an examination of popularizing discourses, makes clear that the search for personality did not only serve to generate knowledge but continually raised new questions. In this way it became an ongoing challenge for the sciences of the human being.
Chapter
Many factors in life are complex and difficult to measure directly. Often add-up scores of separate aspects of the complex factors are used. However, add-up scores do not account for their relative importance, their interactions, or differences in units. Factor analysis accounts for all of this, but is virtually unused in clinical research.
Article
Humans vary considerably in their ability to perform and learn new motor skills. In addition, they respond to different performance and practice conditions in varying ways. Historically, experimental psychologists have characterized these differences as ‘experimental noise’, yet for those who embrace differential psychology, the study of individual differences promises to deepen insights into the processes that mediate motor control and learning. In this paper, we highlight what we know about predicting motor learning based on individual difference characteristics and renew a call made by Lee Cronbach several decades ago to combine the methodologies used by experimental and differential psychologists to further our understanding of how to promote motor learning. The paper provides a brief historical overview of research on individual differences and motor learning followed by a systematic review of the last 20 years of research on this issue. The paper ends by highlighting some of the methodological challenges associated with conducting research on individual differences, as well as providing suggestions for future research. The study of individual differences has important implications for furthering our understanding of motor learning and when tailoring interventions for diverse learners at different stages of practice.
Chapter
MANOVA (multivariate analysis of variance) is used for the analysis of clinical trials with multiple outcome variables. However, its performance is poor if the relationship between the outcome variables is positive. Discriminant analysis is not affected by this mechanism.
Chapter
Traditional statistical tests are unable to handle a large number of variables. The simplest method to reduce large numbers of variables is the use of add-up scores. But add-up scores do not account for the relative importance of the separate variables, their interactions, or differences in units. Principal components analysis and partial least squares analysis account for all of that, but are virtually unused in clinical trials.
Chapter
In clinical trials the research question is often measured with multiple variables, and multiple regression is commonly used for analysis. The problem with multiple regression is that the distances between consecutive levels of the variables are assumed to be equal, while in practice this is virtually never true. Optimal scaling is a method designed to maximize the relationship between a predictor and an outcome variable by adjusting their scales.
Chapter
Major accounts of validity that existed prior to construct validity theory (CVT) are described. The chapter opens with a review of Charles Spearman’s contributions to early psychological testing theory. Particular emphasis is placed on the description of two papers authored by Spearman, both published 1904. Arguably, these works serve as the foundation of “classical” and “modern” test theories, respectively, the latter of which were critical to the articulation and development of CVT. The contributions of other figures prominent in early psychometric theory are also described. The chapter closes with a discussion of how classical conceptions of validity and approaches to validation came under increasing scrutiny as a dramatic re-conceptualization of the concept of ‘validity’ began to emerge toward the mid-twentieth century.
Chapter
Structural equation modeling (SEM) is becoming the most widely used method when multiple constructs, relationships, and latent and observed variables are involved. Two main statistical analysis techniques of SEM can be applied by researchers: Partial Least Squares SEM (PLS-SEM) and covariance-based SEM (CB-SEM). This paper presents a comparison study of the two methods. The comparison is based on a case study of a sustainable supply chain innovation model for the Moroccan industrial field during 2020. The paper presents the conceptual model and a comparison of the two structural analysis techniques in the literature, and then explains the methodology. The last sections consolidate the findings and conclusions, where the CB-SEM technique is performed and compared with PLS-SEM.
Chapter
Latent variables are unmeasured variables, inferred from measured variables. The main purpose for using them is reduction of the number of data variables. Current research increasingly involves multiple variables, and traditional statistical models tend to lose power when too many variables are included. Multiple separate analyses are generally no solution, because of the increased type I errors due to multiple testing. In contrast, a few latent variables replacing multiple manifest variables can be applied. However, a disadvantage is that latent variables are rather subjective, because they depend on subjective decisions to cluster some measured variables and remove others. The current chapter reviews how to construct high quality latent variables, and how they can be successfully implemented in many modern methodologies for data analysis. Three of them will be reviewed.
Article
Our aim is to construct a general measurement framework for analyzing the effects of measurement errors in multivariate measurement scales. We define a measurement model, which forms the core of the framework. The measurement scales in turn are often produced by methods of multivariate statistical analysis. As a central element of the framework, we introduce a new, general method of estimating the reliability of measurement scales. It is more appropriate than the classical procedures, especially in the context of multivariate analyses. The framework provides methods for various topics related to the quality of measurement, such as assessing the structural validity of the measurement model, estimating the standard errors of measurement, and correcting the predictive validity of a measurement scale for attenuation. A proper estimate of reliability is a requisite in each task. We illustrate the idea of the measurement framework with an example based on real data.
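The classical correction for attenuation that this framework revisits is Spearman's well-known formula, shown here as background (the chapter's own, more general estimator is not reproduced):

% classical (Spearman) correction of an observed validity correlation r_{xy}
% for unreliability in both measures, given reliabilities r_{xx} and r_{yy}
r_{x' y'} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}} .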