Advice on Exploratory Factor Analysis
© 2016, Centre for Academic Success, Birmingham City University
Overview
Exploratory Factor Analysis (EFA) is a process used to validate the scales of items in a questionnaire that has not yet been validated. This process is supported by SPSS. Once a questionnaire has been validated, the appropriate process is Confirmatory Factor Analysis, which is supported by AMOS, a ‘sister’ package to SPSS.
There are two forms of EFA, known as Factor Analysis (FA) and Principal Component Analysis (PCA). The reduced dimensions produced by an FA are known as factors, whereas those produced by a PCA are known as components. FA analyses only the common variance that items share with one another, whereas PCA analyses the total variance of the items (common, unique and error variance together). FA is therefore preferable to PCA in the early stages of an analysis because it allows you to measure, for each item, the proportion of its variance that is shared with the other items, known as its communality. As dimension reduction techniques seek to identify items with shared variance, it is advisable to remove any item with a communality score less than 0.2 (Child, 2006). Items with low communality scores may indicate additional factors which could be explored in further studies by measuring additional items (Costello and Osborne, 2005).
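As a rough illustration of this communality screen, the sketch below uses Python with the factor_analyzer package rather than SPSS; the DataFrame name df and the provisional choice of six factors are assumptions made purely for the example.

from factor_analyzer import FactorAnalyzer

# df: pandas DataFrame of item responses (rows = respondents) -- assumed name.
# The number of factors here is provisional; it only needs to be large enough
# to estimate the communalities.
fa = FactorAnalyzer(n_factors=6, rotation=None, method="principal")
fa.fit(df)

communalities = dict(zip(df.columns, fa.get_communalities()))
low = {item: round(h2, 3) for item, h2 in communalities.items() if h2 < 0.2}
print("Candidates for removal (communality < 0.2):", low)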
Before carrying out an EFA, the values of the bivariate correlation matrix of all items should be analysed. High values are an indication of multicollinearity (although they are not a necessary condition). Field (2013: 686) suggests removing one of any pair of items with a bivariate correlation greater than 0.8. There is no statistical means of deciding which item of a pair to remove; the decision should be based on a qualitative interpretation of the items. An additional test is to look at the determinant of the correlation matrix, which should be greater than 0.00001. A lower score might indicate that groups of three or more questions have high intercorrelations, so the threshold for item removal should be reduced until this condition is satisfied.
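These two checks are easy to reproduce outside SPSS. The sketch below is a minimal Python version, assuming the item responses sit in a pandas DataFrame called df; the function name is invented for the example.

import numpy as np
import pandas as pd

def screen_correlations(df: pd.DataFrame, pair_cutoff: float = 0.8,
                        det_cutoff: float = 0.00001):
    """Flag highly correlated item pairs and check the determinant."""
    r = df.corr()
    # unique item pairs whose absolute correlation exceeds the cut-off
    high_pairs = [(a, b, round(r.loc[a, b], 3))
                  for i, a in enumerate(r.columns)
                  for b in r.columns[i + 1:]
                  if abs(r.loc[a, b]) > pair_cutoff]
    det = np.linalg.det(r.to_numpy())
    return high_pairs, det > det_cutoff, det

pairs, det_ok, det = screen_correlations(df)
print(f"{len(pairs)} pairs above 0.8; determinant = {det:.2e} (ok: {det_ok})")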
The next step is to decide on an appropriate extraction method. If you are only dealing with your sample for further analysis (i.e. it is a population in terms of the EFA), use the Principal Axis Factoring method. Otherwise, if you are trying to develop an instrument to be used with other data sets in the future, use a sample-based EFA method such as Maximum Likelihood or Kaiser’s alpha factoring.
Whether to rotate the factors, and the type of rotation, also needs to be decided. An orthogonal rotation can improve the solution relative to the unrotated one, but it forces the factors to be independent of each other. The most popular orthogonal rotation technique is Varimax. An oblique rotation allows a degree of correlation between the factors in order to improve the intercorrelation between the items within each factor. Although Reise et al. (2000) give several reasons why an oblique rotation should be considered, it is more difficult to interpret and is generally only advised when the orthogonal solution is unacceptable. More detail is provided in the section at the end of the worked example.
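To make the distinction concrete, the sketch below fits both an orthogonal (Varimax) and an oblique (oblimin) solution with the factor_analyzer package; df and the choice of six factors are placeholders rather than recommendations.

from factor_analyzer import FactorAnalyzer

# df: DataFrame of item responses (assumed name); 6 factors is a placeholder.
varimax = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
varimax.fit(df)

oblimin = FactorAnalyzer(n_factors=6, rotation="oblimin", method="principal")
oblimin.fit(df)

print(varimax.loadings_.round(2))   # rotated factor matrix (orthogonal)
print(oblimin.loadings_.round(2))   # pattern matrix (oblique)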
The purpose of an EFA is to describe a multidimensional data set using fewer variables. The items which make up these factors (or components) should generally be more strongly correlated with each other than the factors are with each other. Orthogonally rotated factors have zero (in practice negligible) intercorrelation by definition. However, if EFA is being used to select variables to create scales, these derived scales will have non-zero intercorrelations.
The adequacy of the sample size also needs to be assessed. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy is available in SPSS; the minimum acceptable score for this test is 0.5 (Kaiser, 1974). The statistic can also be calculated for each individual item with the same cut-off criterion. If the sample size is less than 300 it is also worth looking at the average communality of the retained items. According to MacCallum et al. (1999), an average value above 0.6 is acceptable for samples of less than 100, and an average value between 0.5 and 0.6 is acceptable for sample sizes between 100 and 200.
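A possible way to obtain both statistics outside SPSS is sketched below, again assuming a DataFrame df and the factor_analyzer package; the six-factor model is only an example.

from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

kmo_per_item, kmo_total = calculate_kmo(df)          # df: item DataFrame (assumed)
print("Overall KMO:", round(kmo_total, 3))           # should be at least 0.5
print("Items with KMO < 0.5:",
      [c for c, k in zip(df.columns, kmo_per_item) if k < 0.5])

# Average communality of the retained items (MacCallum et al., 1999 guideline)
fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(df)
print("Mean communality:", round(fa.get_communalities().mean(), 3))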
Factor loadings are also important. Tabachnick and Fidell (2014) recommend ignoring factor loadings with an absolute value less than 0.32 (representing about 10% of shared variance). Following the advice of Field (2013: 692), we recommend suppressing factor loadings less than 0.3. Retained factors should have at least three items with a loading greater than 0.4, and these items should not cross-load highly on other factors. If the above rules are used for suppression and retention, a consistent cut-off is that no cross-loading should exceed 75% of the item’s highest loading; any items which load on more than two factors would require a lower cut-off value.
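These cut-offs can be applied programmatically once the rotated loading matrix is available. The sketch below assumes a pandas DataFrame of rotated loadings (items by factors), for example built from FactorAnalyzer's loadings_ attribute; the function and column names are invented for the illustration.

import pandas as pd

def screen_loadings(loadings: pd.DataFrame, suppress: float = 0.3,
                    primary_min: float = 0.4, cross_ratio: float = 0.75):
    """Apply the suppression, retention and 75% cross-loading rules."""
    report = []
    for item, row in loadings.abs().iterrows():
        ordered = row.sort_values(ascending=False)
        primary, secondary = ordered.iloc[0], ordered.iloc[1]
        report.append({
            "item": item,
            "primary_factor": ordered.index[0],
            "primary_loading": round(primary, 3),
            "below_suppression": primary < suppress,           # loads nowhere above 0.3
            "weak_primary": primary < primary_min,             # highest loading under 0.4
            "cross_loads": secondary > cross_ratio * primary,  # breaches the 75% rule
        })
    return pd.DataFrame(report)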
There is also a relationship between sample size and acceptable factor loadings. According to Stevens (2012), for a sample size of 100 factor loadings are significant at the 0.01 level when they are larger than 0.512; for a sample of 200 they are significant when larger than 0.364; and for a sample of 300 when larger than 0.298. According to Guadagnoli and Velicer (1988), a factor with four loadings greater than 0.6 is stable for sample sizes greater than 50, and a factor with 10 loadings greater than 0.4 is stable for sample sizes greater than 150.
The number of factors to be retained needs to be decided, and there are different criteria for making this decision. It is probably sensible to start with the SPSS default rule, which retains factors with eigenvalues greater than 1. The item loadings on each factor should then be examined. Any item which does not load above 0.3 on any factor should be removed and the analysis re-run. Items whose highest loading is less than 0.4 should then be removed one at a time, starting with the item with the lowest maximum loading. Cross-factor loadings should then be considered using the cut-off rules described above, and the number of factors adjusted accordingly. All retained factors should have at least three items with a loading greater than 0.4. The proportion of the total variance explained by the retained factors should also be noted; as a general rule this should be at least 50% (Streiner, 1994).
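The eigenvalue rule and the variance-explained check can be sketched as follows, again with factor_analyzer and an assumed DataFrame df; this illustrates the default retention rule and is not a substitute for inspecting the loadings.

import numpy as np
from factor_analyzer import FactorAnalyzer

# Probe the eigenvalues of the correlation matrix first (SPSS default rule).
probe = FactorAnalyzer(rotation=None, method="principal")
probe.fit(df)                                   # df: item DataFrame (assumed)
eigenvalues, _ = probe.get_eigenvalues()
n_factors = int(np.sum(eigenvalues > 1))
print("Factors with eigenvalue > 1:", n_factors)

# Refit with that number of factors and check the variance explained.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(df)
_, _, cumulative = fa.get_factor_variance()
print("Proportion of variance explained:", round(cumulative[-1], 3))  # aim >= 0.5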
If the goal of the analysis is to create scales of unique items, the meaning of the group of unique items which load on each factor should be interpreted to give each factor a name. A PCA without rotation, extracting a single component, should then be run on each group of items in turn. The size of the first eigenvalue and the items loading upon it should be noted, and items with a loading less than 0.4 on this first component should be removed. Once each group has stabilised, the regression scores on the component should be saved. This is a more accurate measure of the scale score than adding the item scores together, as the component loadings may vary considerably between items.
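A minimal version of this single-component PCA, written directly in numpy so that no particular package's conventions are assumed, is sketched below. The scaling of the scores only approximates SPSS's regression method, and the function name is invented.

import numpy as np
import pandas as pd

def first_component(items: pd.DataFrame):
    """Unrotated single-component PCA on one group of scale items."""
    r = items.corr().to_numpy()                  # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(r)         # eigenvalues in ascending order
    first_val = eigvals[-1]                      # largest eigenvalue
    vec = eigvecs[:, -1]
    if vec.sum() < 0:                            # fix the arbitrary sign
        vec = -vec
    loadings = vec * np.sqrt(first_val)          # component loadings
    # standardised component scores (approximate SPSS regression scores)
    z = (items - items.mean()) / items.std(ddof=1)
    scores = z.to_numpy() @ vec / np.sqrt(first_val)
    return first_val, pd.Series(loadings, index=items.columns), scores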
A reliability analysis can then be undertaken for each scale. For further information see our
sheet on Advice on Reliability Analysis with Small Samples.
Finally, as a validation check, compare the average inter-item correlation for each scale with
the inter-scale correlation. The former should be higher than the latter.
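The final validation check can be carried out as below. The dictionary scales, mapping each scale name to its list of item columns in df, is an assumed structure for the example.

import numpy as np
import pandas as pd

def mean_offdiag(corr: pd.DataFrame) -> float:
    """Average of the off-diagonal entries of a correlation matrix."""
    a = corr.to_numpy()
    return a[~np.eye(len(a), dtype=bool)].mean()

# scales: dict mapping scale name -> list of item columns in df (assumed names)
within = {name: round(mean_offdiag(df[items].corr()), 3)
          for name, items in scales.items()}
scale_scores = pd.DataFrame({name: df[items].mean(axis=1)
                             for name, items in scales.items()})
between = mean_offdiag(scale_scores.corr())
print("Average inter-item correlation within each scale:", within)
print("Average inter-scale correlation:", round(between, 3))  # should be lower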
Worked example
171 business men and women responded to a questionnaire on entrepreneurship which was constructed from 8 groups of questions derived from existing questionnaires, comprising a total of 39 questions. Each question used a five-point Likert response scale. As the data from the questionnaire were to be used in a further analysis, it was decided to carry out an Exploratory Factor Analysis using the Principal Axis Factoring technique and a Varimax rotation.
A Pearson bivariate correlation of all the items was carried out in SPSS. A highlighting style
Condition was set for any correlations with an absolute value greater than 0.8 (the condition
shown below appears automatically when the Value is changed to Correlation):
This returned a table of correlations which included 9 unique pairs of correlations with an
absolute value greater than 0.8, with the lowest absolute value being 0.922. As this was
markedly higher than the threshold it was decided to remove one item from each of these
pairs based on a qualitative analysis of the items, leaving 30 items.
An EFA was then run on the remaining 30 items using a Principal Axis Factoring technique
with a Varimax rotation, providing the KMO statistics and determinant of the correlation
matrix, retaining all factors with eigenvalues greater than 1, sorting the factor coefficients by
size and suppressing all factor coefficients less than 0.3:
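The analysis in this example was run through the SPSS dialogs shown in the settings screenshots. For readers who want to reproduce a broadly equivalent run in Python, a hedged sketch using factor_analyzer is given below; items is an assumed DataFrame holding the 30 retained questionnaire items, and method="principal" is used as an approximation to SPSS's Principal Axis Factoring.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# items: DataFrame with the 30 retained questionnaire items (assumed name)
probe = FactorAnalyzer(rotation=None, method="principal")
probe.fit(items)
eigenvalues, _ = probe.get_eigenvalues()
n_factors = int(np.sum(eigenvalues > 1))         # SPSS default retention rule

fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns).round(3)
print(loadings.where(loadings.abs() >= 0.3))     # suppress coefficients below 0.3
print("Determinant:", np.linalg.det(items.corr().to_numpy()))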
The communalities of the initial solution were observed. Two items initially had communalities less than 0.2. These were removed in turn, starting with the smallest, and the communalities were re-observed. Eventually three items were removed, leaving 27 items with communalities greater than 0.2.
This led to a solution comprising 6 factors, each having loadings greater than 0.4 on at least 3 items.

However, several items in the rotated factor matrix cross-loaded on more than one factor. These were removed in turn, starting with the items that loaded on the most factors, had the highest ratio of cross-loading to primary loading and had the lowest maximum loading. In addition, items whose highest factor loading was much less than 0.4 were also removed. This eventually yielded a stable solution with 19 variables and 6 factors, with each factor still having at least 3 items with a loading greater than 0.4 (see below).

The KMO statistic for this solution was 0.805 and the determinant of the correlation matrix was greater than 0.0001. The 6 extracted factors accounted for 69.1% of the total variance in the data.

Even though there were still some cross-loadings on the items, in view of the fact that the questionnaire was constructed from 8 groups of questions it was decided to retain these 19 variables and run a PCA on each of the groups of items indicated by the 6 factors.
The groups of items retained for the PCA are indicated on the figure below.
The settings for a PCA extracting a single component are shown below. The Regression factor scores were saved. The results of the PCA for the 6 groups are shown below.
Group 1: first eigenvalue = 2.961
Group 2: first eigenvalue = 2.266
Group 4: first eigenvalue = 2.003
Group 5: first eigenvalue = 1.942
All six groups were considered acceptable as each had at least 3 component scores greater than 0.7. The Cronbach’s alpha scores for the 6 groups were as follows:

Group               1      2      3      4      5      6
Cronbach’s alpha    0.791  0.744  0.725  0.745  0.728  0.630
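Cronbach's alpha itself is straightforward to compute from the item data. The short function below is a standard implementation of the formula; df and the groups dictionary (group name to item columns) are assumed names for this example.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale (columns = items, rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# groups: dict mapping group name -> list of item columns in df (assumed)
for name, cols in groups.items():
    print(name, round(cronbach_alpha(df[cols]), 3))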
Based on the sample size, the number of items within each scale, the overlap between some groups and these Cronbach’s alpha values, it was decided to retain groups 1 to 5 for further analysis. A validation check was made of these groups’ intercorrelations.

The average intercorrelation was 0.45, which was unacceptably high (it was higher than the average intercorrelation within each group, which was 0.43). It was therefore decided to return to the EFA and save the factor score values from the previous converged solution. As an orthogonal rotation had been used, these factors would have a negligible intercorrelation. Both types of combined variable were used in an analysis of the data, and a qualitative interpretation was made of the scales.
An alternative is to attempt an oblique factor rotation, as discussed below.
Oblique factor rotation
If the orthogonal factor rotation does not lead to an acceptable solution, an oblique rotation can be considered. There are two methods available in SPSS: Direct Oblimin and Promax. Either method is acceptable; however, it is advised that the default values of the coefficients Delta and Kappa are not changed.
A Principal Axis FA with a Direct Oblimin oblique rotation with Delta = 0 was carried out using the same 30 items as the original FA above. Due to problems of slow convergence, the number of iterations was increased to 100.

Again, any items with a communality below 0.2 were removed in turn, starting with the item with the lowest value. Eventually three items were removed.

An oblique rotation creates two additional factor matrices, called the pattern and structure matrices. It is the pattern matrix which needs to be analysed in the same way as the single matrix for orthogonal rotations.
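For completeness, an approximate Python equivalent of this oblique analysis is sketched below using factor_analyzer's oblimin rotation; items is again an assumed DataFrame, the six factors are a placeholder, and the package's loadings_ attribute plays the role of the pattern matrix after an oblique rotation.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# items: DataFrame with the same 30 items used above (assumed name)
fa = FactorAnalyzer(n_factors=6, rotation="oblimin", method="principal")
fa.fit(items)

pattern = pd.DataFrame(fa.loadings_, index=items.columns).round(3)
print(pattern.where(pattern.abs() >= 0.3))        # pattern matrix, suppressed below 0.3

scores = fa.transform(items)                      # regression-style factor scores
factor_corr = np.corrcoef(scores, rowvar=False)
off_diag = factor_corr[~np.eye(factor_corr.shape[0], dtype=bool)]
print("Average absolute inter-factor correlation:", round(np.abs(off_diag).mean(), 3))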
Items were removed in the same manner as before and the analysis was re-run. The number of factors extracted was reduced from 6 to 5 because too few items loaded above 0.4 on the sixth factor. Further variables were removed according to these conditions and also on the grounds of low communality. Eventually the fifth factor was also removed, leaving four factors and 16 items, accounting for 60.9% of the total variance and providing the unique pattern matrix loadings shown. These factors were also interpreted according to their item loadings.

The regression scores were saved and the inter-factor correlations were calculated. The average value was now 0.28. This was considered acceptable and an improvement on the orthogonal rotation, as it was seen to better reflect the actual interaction between the factors.
References
Child, D. (2006) The Essentials of Factor Analysis. 3rd edn. New York: Continuum.
Costello, A. B. and Osborne, J. W. (2005) Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(7), pp. 1-9.
Field, A. (2013) Discovering Statistics using SPSS. 4th edn. London: SAGE.
Guadagnoli, E. and Velicer, W. F. (1988) Relation of sample size to the stability of component patterns. Psychological Bulletin, 103(2), pp. 265-275.
Kaiser, H. F. (1974) An index of factorial simplicity. Psychometrika, 39(1), pp. 31-36.
MacCallum, R. C., Widaman, K. F., Zhang, S. and Hong, S. (1999) Sample size in factor analysis. Psychological Methods, 4(1), pp. 84-99.
Reise, S. P., Waller, N. G. and Comrey, A. L. (2000) Factor analysis and scale revision. Psychological Assessment, 12(3), pp. 287-297.
Stevens, J. P. (2012) Applied Multivariate Statistics for the Social Sciences. 5th edn. London: Routledge.
Streiner, D. L. (1994) Figuring out factors: the use and misuse of factor analysis. Canadian Journal of Psychiatry, 39(3), pp. 135-140.
Tabachnick, B. G. and Fidell, L. S. (2014) Using Multivariate Statistics. 6th edn. Harlow: Pearson.