Science topic
EFA  Science topic
Explore the latest questions and answers in EFA, and find EFA experts.
Questions related to EFA
I'm developing a hierarchical model with 5 dependent, 1 independent, and 2 mediating elements (n = 180).
Through EFA, only 4 factors survived before being tested in CFA, which showed great model fit. However, every item failed the 0.5 cutoff for AVE and CR. Continuing would be meaningless, so I'm wondering: is there anything else I can do?
Previously I analyzed the pool of more than 50 items in a rather unorthodox way and obtained 3 scales; now I am recalculating them as it should be done.
Previously I used Statistica 12.0 and AMOS 23.0. Now, for EFA, I'm using Factor 12.03.01 64-bit (https://psico.fcep.urv.cat/utilitats/factor/Download.html).
So, concerning the number of factors:
- Hull recommends 1 factor,
- MAP recommends 3 factors,
- parallel analysis recommends 5 factors.
As I know from my previous exploration of these items, the 3-factor solution is poorly interpretable and very noisy at the first steps. If I take 5 factors, they are quite understandable, and by removing weak items step by step I would arrive at a structure of 3 or maybe 4 factors (scales).
So, concerning the item selection.
Factor 12.03 gives me a wide range of options. But which of them should be considered statistical criteria for selecting items, which of them are merely common 'decision rules' for selecting items, and how can one justify those decision rules?
Factor 12.03 provides the measure of sampling adequacy, MSA (Lorenzo-Seva & Ferrando, 2021), as one such criterion. I like and appreciate this option, but at the initial steps of item selection it carries the risk of washing out scales with a small number of items and of washing out some unique questions with low communalities.
I would like to use D, the communality-standardized Pratt's measure (Wu & Zumbo, 2017), as a criterion for item selection at the very first steps, because it considers (a) the unique contribution of a factor to an item's observed variance (i.e., Pratt's measure) and (b) the uniqueness of questions.
For example, I have an item, V58. Factor 12.03 suggests that I remove it on the basis of the pre-factor solution.
Item   Normed MSA   95% Confidence Interval   The Pool
58     0.661        (0.493 – 0.726)           Might not work – Revise
I would like to see why V58 should be considered bad.
So here is what I found with further use of Factor 12.03.
Variable   Mean    95% CI          Variance   Skewness   Kurtosis (zero-centered)
s_12_58    3.300   (3.21 – 3.39)   0.885      0.061      0.008
Good
Variable 58
Value   Freq
1        24
2        78
3       320
4       174
5        77
Almost excellent
ROTATED LOADING MATRIX
Variable F 1 F 2 F 3 F 4 F 5
s_12_58 0.160 0.257 0.002 0.065 0.008
The loadings are low. Maybe one should exclude V58, because all of its loadings are below 0.300. That is one of the decision rules (though I don't know how to justify it). Let's look at V58's other characteristics.
UNROTATED LOADING MATRIX
Variable F 1 F 2 F 3 F 4 F 5 Communality
s_12_58 0.076 0.233 0.147 0.016 0.038 0.083
But because of its low communality, V58 should be unique.
This shows up afterwards in the communality-standardized Pratt's measures.
COMMUNALITY-STANDARDIZED PRATT'S MEASURES
Variable F 1 F 2 F 3 F 4 F 5
s_12_58 0.161 0.771 0.000 0.064 0.004
These communality-standardized Pratt's measures for V58 look optimistic, so V58 shouldn't be removed, at least during these first steps.
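As a side note for readers, the communality-standardized Pratt's measures reported by Factor can be reproduced from the pattern loadings and the factor correlation matrix. A minimal sketch in Python with made-up numbers (not the Factor output above):

```python
import numpy as np

# Illustrative pattern loadings B (2 items x 2 factors) and factor
# correlation matrix Phi -- made-up numbers for demonstration only.
B = np.array([[0.60, 0.20],
              [0.10, 0.70]])
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])

# Structure loadings: correlations between items and factors.
S = B @ Phi

# Pratt's measure for item j, factor k: pattern loading * structure loading.
pratt = B * S

# The communality of each item is the row sum of its Pratt's measures.
h = pratt.sum(axis=1)

# Communality-standardized Pratt's measures sum to 1 within each item,
# which is what makes them comparable across items (Wu & Zumbo, 2017).
d = pratt / h[:, None]
print(np.round(d, 3))   # each row sums to 1
```

Because the standardized measures sum to one within an item, a factor's share can stay high even when the item's communality (and hence its loadings) is low — which is exactly why V58 can look optimistic here while failing the 0.300 loading rule.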
And I should assess some residuals.
Largest Negative Standardized Residuals
…
Residual for Var 39 and Var 6 −2.95
Residual for Var 58 and Var 6 −2.70
…
Largest Positive Standardized Residuals
…
Residual for Var 8 and Var 6 2.65
Residual for Var 42 and Var 6 2.58
Residual for Var 52 and Var 23 2.96
Residual for Var 54 and Var 6 3.23
Residual for Var 58 and Var 28 2.63
Residual for Var 58 and Var 52 2.61
And, to my sorrow, the set of indices for detecting correlated residuals (doublets) does not work in Factor 12.03, because the program calculates these indices with all the excluded variables still included.
So, concerning item selection, the questions are:
- If one is so interested in communality-standardized Pratt's measures, how can one formulate or justify a decision rule for them? Is it a decision rule or a statistical criterion?
- When and why should MSA be used, if it probably depends on 'noise' items and on the proportion of items per scale?
- How can one justify the cutoff of 0.300 for loadings?
- What are the decision rules for standardized residuals?
Thank you very much. I'll be happy to receive any advice in this area.
Lorenzo-Seva, U., & Ferrando, P. J. (2021). MSA: The forgotten index for identifying inappropriate items before computing exploratory item factor analysis. Methodology, 17(4), Article 4. https://doi.org/10.5964/meth.7185
Wu, A., & Zumbo, B. (2017). Using Pratt’s Importance Measures in Confirmatory Factor Analyses. Journal of Modern Applied Statistical Methods, 16(2), 81–98. https://doi.org/10.22237/jmasm/1509494700
I am developing a questionnaire for social science. The preliminary questionnaire has 35 items across 6 domains. However, one of the domains is optional (5 items): only those who have been involved in an accident before needed to answer those questions. Unfortunately, my response options did not include 'not applicable'.
When I run the EFA, parallel analysis suggests 5 factors. But this optional domain loads closely with another item with different instructions/ other domains.
So my question is: can I remove this one domain (5 items) from the EFA and run the remaining 30 items? Then re-include the optional domain when I run the CFA? Can I use expert judgment / the importance of that optional domain to the questionnaire to justify retaining it as it is?
(I tried rerunning the 35 items and 6 domains/factors (including the optional domain) using CFA, and the results indicated good convergent and discriminant validity.)
I wonder whether this approach is permissible, or how I should go about it. Thanks.
I am conducting research involving scale development for emotional experiences with Wanghong (Internet-famous) restaurant dining consumption. Following the steps in prior literature, I have already done interviews and an expert review of the measurement scales. Interestingly, the emotional experiences can be categorized into three stages: pre-, during-, and post-dining. I have conducted the first study, with the objective of purifying the scale. I ran one EFA on all the measurement items without considering the three stages, and four factors emerged. To reflect the finding that emotional experiences differ across the three stages, should three separate EFAs be conducted instead? It seems to me that the first way is more methodologically correct, while the second is more theoretically or conceptually correct. I would appreciate any advice on this. Thanks a lot!
Dear Research Scholars, I hope you are doing well!
I am a Ph.D. scholar in Education, currently working on my thesis. Kindly guide me: should EFA be performed on pilot-study data or on the actual research data?
Regards
Muhammad Nadeem
Ph.D. In Education , faculty of Education,
University of Sindh, Jamshoro
I am conducting an exploratory factor analysis, and to determine the number of factors I used parallel analysis.
How can I generate the number of factors correctly in Stata? Or in another tool?
When using parallel analysis in Stata with principal axis factoring, all my eigenvalues from the parallel analysis are lower than 1 (which in this case suggests retaining all factors).
When, out of curiosity, I use principal component factors, or even principal component analysis (I know this is not EFA), it suggests retaining 3 factors (which satisfies me).
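As a cross-check outside Stata, the logic of parallel analysis is easy to reproduce: compare the observed eigenvalues with the average eigenvalues of random data of the same size. A minimal PCA-based sketch in Python (illustrative data; implementations differ in whether they use PCA or PAF-reduced eigenvalues, which is one source of the divergent suggestions):

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_eigenvalues(X):
    """Eigenvalues of the correlation matrix of X, sorted descending."""
    R = np.corrcoef(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]

def parallel_analysis(X, n_sims=100):
    """Retain components whose observed eigenvalue exceeds the mean
    eigenvalue from random normal data of the same shape."""
    n, p = X.shape
    obs = pca_eigenvalues(X)
    rand = np.mean([pca_eigenvalues(rng.standard_normal((n, p)))
                    for _ in range(n_sims)], axis=0)
    return int(np.sum(obs > rand)), obs, rand

# Simulated data with 2 correlated blocks of 3 variables each.
n = 300
f = rng.standard_normal((n, 2))
X = np.column_stack([f[:, 0] + 0.5 * rng.standard_normal(n) for _ in range(3)] +
                    [f[:, 1] + 0.5 * rng.standard_normal(n) for _ in range(3)])

k, obs, rand = parallel_analysis(X)
print("components retained:", k)
```

When the comparison is instead done on PAF-reduced eigenvalues, both the observed and the random eigenvalues shrink (often below 1), so the ">1" intuition no longer applies: each observed value should only be compared against its random counterpart, never against 1.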
I am running a PCA in JASP and SPSS with the same settings; however, the PCA in SPSS shows some negative loadings, while in JASP all of them are positive.
In addition, when running an EFA in JASP, it lets me get results with Maximum Likelihood extraction, while SPSS does not. JASP goes so far with the EFA that I can choose to extract 3 factors and get roughly the results one would expect from previous research. However, SPSS does not run under the Maximum Likelihood setting, regardless of whether I set it to 3 factors or to an eigenvalue criterion.
Has anyone come across the same problem?
UPDATE: Screenshots were updated. The EFA also shows results in SPSS, just without cumulative values, because some values are over 1. But why the difference between positive and negative factor loadings in JASP and SPSS?
Hypothetically, if I would like to validate a scale and need to explore its latent factors first with EFA, followed by a CFA to validate the structure, do I need a new dataset for the CFA? Some argue that randomly splitting one dataset into two is also acceptable. My question is: can I apply EFA to the full dataset and then randomly select half of the dataset to conduct the CFA?
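On the mechanics of splitting: whichever position one takes on reusing the full sample for the EFA, the split itself should be a single random partition so the two halves are disjoint. A minimal sketch in Python with illustrative data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Illustrative dataset: 400 respondents, 10 Likert-type items.
df = pd.DataFrame(rng.integers(1, 6, size=(400, 10)),
                  columns=[f"item{i}" for i in range(1, 11)])

# Shuffle row labels once, then split into two disjoint halves.
idx = rng.permutation(df.index)
efa_half = df.loc[idx[:len(idx) // 2]]
cfa_half = df.loc[idx[len(idx) // 2:]]

# The halves are disjoint and together cover the full sample.
assert set(efa_half.index).isdisjoint(cfa_half.index)
assert len(efa_half) + len(cfa_half) == len(df)
```

Note that running the EFA on all data and the CFA on one half means the CFA cases also informed the EFA, so the CFA is no longer a fully independent confirmation — which is precisely the concern behind the question.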
Do I treat polychoric correlations like I would other correlations when reporting zero, low, or high correlations before running EFA on my scale items? Would there be an instance where if two variables were highly correlated or not correlated at all that I would give pause and not include the items in the EFA that I run?
I hope this updated question is clearer. Thank you.
Hello!
I hope this message finds you well!
I am looking for an experienced psychometrician with expertise in conducting factor analyses (EFA & CFA) and item analyses (e.g., inter-item correlation, Spearman-Brown split-half reliability, and McDonald's omega) for item and scale validation, with knowledge of various regression analyses, who is interested in media psychology studies. I need a hand with the statistical aspects of some of my studies. Please send me a message if you are interested in collaborating.
Many thanks!
When designing the questionnaire for EFA, what do I need to keep in mind when it comes to the order of the questions?
More specifically, does the order of the questions need to be completely randomized or is it generally allowed to still ask questions in topic blocks according to potential factors/constructs I have in mind?
Thanks everyone!
For convenience, I collected data from a single large sample for scale development,
and then randomly split it into two samples for EFA and CFA.
In this case, I am wondering which sample (the total sample, or the CFA sample?) should be used to evaluate the criterion validity and reliability of the newly developed scale.
Hello!
I have a question about the discrepancy between conceptualization and operationalization in my study.
I used the concept of 'multicultural teaching competency' (MTC) (Spanierman et al., 2010). In their paper validating the MTC scale, they conceptualised MTC as awareness, knowledge, and skills.
But when they performed EFA, CFA, and reliability testing, the results showed that knowledge and skills alone are sufficient factors to explain MTC.
The authors explained a few reasons why awareness is not one of the factors.
But how can I justify in my dissertation the discrepancy between their original conceptualisation, in which MTC is composed of awareness, knowledge, and skills, and the outcome of their study, in which only knowledge and skills are sufficient factors for MTC?
I know how to reduce questions with EFA, but I cannot do this with CFA,
and I can only use JASP.
Would appreciate if anyone could give a clear explanation and if possible suggest reading materials or articles that can help me increase my understanding.
I am applying exploratory factor analysis to my data and get the following result from parallel analysis:
Parallel analysis suggests that the number of factors = NA and the number of components = 3
Are components and factors the same thing, or should I interpret the result as meaning that EFA is not possible, only principal component analysis?
By using very simple structure (VSS) I got:
The Velicer MAP achieves a minimum of 0.04 with 6 factors
BIC achieves a minimum of Inf with factors
Sample Size adjusted BIC achieves a minimum of Inf with factors
The scree plot shows 3 factors as suitable. The Exploratory Graph Analysis shows 5 groups.
How many factors shall I use for EFA?
CFA of these factor-score variables is done. There are no validity issues.
My dependent variable is categorical with an ordering.
I would like to know the minimum sample size needed for Confirmatory Factor Analysis. Is it the same as for EFA or not?
We are developing a measure of compassionate care. We have done a literature review and a Delphi study and come up with a 21-item measure across 6 domains. In a second project, we have looked at the statistical properties. One group of over 300 people completed the 21-item measure. We then reduced the number of items, first removing two items for lack of generalisability, and then, because of very high correlations between other items, removing one of each highly correlated pair (following guidance from Field, 2013). We ended up with a 6-item measure, and our subsequent EFA showed a one-factor solution. A subsequent CFA with a different sample completing the 6-item measure showed it was statistically robust. However, only 4 of the 6 domains which the Delphi identified are now included.
My question is... the measure has gone from 21 items to 6 items and lost 2 domains. Statistically it is robust but it looks less like one might expect a measure of compassionate care to look because several items have been lost (e.g. ones about kindness and caring) and theoretically we have lost 2 of the domains which our Delphi identified. Should we (and could we) try to find a middle ground between the short form questionnaire we ended up with and the original 21 item measure? Maybe by adding back in a few of the original items and redoing the EFA? It would mean the CFA we did wouldn't count for this new hybrid measure, but we could perhaps still use it as a longer form alternative to the short form which we have completed a CFA on, and we could at a later date complete a CFA on the longer form.
Or... should I just be following the stats protocol and not trying to mess about with it just because it doesn't look as we initially expected?
Can I split a factor that has been identified through EFA?
N = 102; 4 factors have been identified.
However, one of the 4 actually contains two different ideas that evidently factor together. I am working on explaining how they go together, but it is very easy to explain them as two separate factors.
When I conduct a confirmatory analysis, the model fit is better with them separate... but running a confirmatory analysis on the same sample of subjects used for the exploratory analysis appears to be frowned upon.
I am currently working on my thesis, and I admit I am really having problems with presenting the EFA and CFA results. The objective of my thesis is to establish the validity and reliability of a self-made measure. I have already presented and interpreted the EFA results and started presenting the CFA results, but I am stuck after presenting the figures of the model and the fit indices. What should I present and discuss next? I am asking for an outline so that I can check whether I presented the EFA results correctly and then proceed with the presentation of my CFA results. Your response is highly appreciated.
Are EFA and CCA required for the validity of a formative scale?
Are EFA and CFA required for validity in the case of a formative scale, for which I will be doing PLS-SEM? Thank you in advance.
Hi, everyone!
I just received the comments of a reviewer who said:
You conducted EFA and, based on the EFA results, ran the SEM modeling. You are supposed to conduct a CFA to confirm the EFA results and finalize the measurement model before proceeding to SEM. You can fairly use half of the sample to test the EFA and the other half to test the CFA.
Actually, in my study I used EFA to explore the possible dimensions of the higher-order constructs, and then built a PLS-SEM model with the results of the EFA. However, I don't think I should also do a CFA.
So, how can I answer the reviewer? And is my method wrong?
Thanks!!!
Firstly, I developed an index (low, moderate, and high). For the index development, I first took four indicators with several items having different numbers of categories. After running EFA, I used the variables with loadings above .5 to create the index. Now I want to know: can I perform CFA with variables that have different numbers of categories (e.g., one variable has 3 categories, some have four, some have two)? Or are there any other methods? Thanks a lot.
Hi all,
I'm validating a tool after translation and have a technical question.
I'll do confirmatory factor analysis (CFA) to test the goodness of model fit. Previous studies conducted exploratory factor analysis (EFA) or principal component analysis (PCA) but didn't come up with a consistent structure matrix for the tool. My questions are:
1. Is it appropriate to do PCA only (without factor analysis) to identify the structure matrix? If doing PCA only is adequate, do I still need to split the sample in half for PCA and CFA? Are there any papers to support this choice?
2. If conducting an EFA is necessary, can I still choose 'PCA' as the extraction method? Or should I choose 'Maximum Likelihood' for the factor analysis?
Many thanks!
I performed 200 dental implant procedures, corresponding to 4 implants in each of 50 patients. How can I control for the effect of this variable in my structural equation modelling, given that the data violate the independence assumption?
I am conducting an EFA on a large sample with nearly 100 variables, but no matter what I do, the determinant of the correlation matrix stays at ~0.
What should I do now?
Hello,
I intend to develop a social scale that can be used in conjunction with other dimensions of a modular framework.
Reading the literature, I found three potential social dimensions. I conducted a focus group and came up with 7, 8, and 12 items for them.
Similar to other scales developed for the aforementioned framework, I ran three separate EFAs and retained one factor for each of the first two dimensions, each with 4 items.
The third dimension (with 12 items) retained two factors. However, one item had cross-loadings of 0.387 and 0.390 on the two factors (CLS, BRZ). I know I should have removed it, but I kept this item on the second factor because I wanted the 4-item format for all the dimensions, and theoretically it makes sense given the items.
I have collected another sample to conduct the CFA. It does not give great goodness-of-fit indices, but my main problem is the high correlation between the two factors mentioned above (0.91). My understanding from another thread on ResearchGate is that one can use a second-order CFA to resolve the discriminant validity of highly correlated factors. Is that correct? Can I do it?
You can see the AMOS results attached.
One more question: if I delete one factor altogether, should I go back and repeat the EFA, or can I continue with the 3 factors?
Thank you
P.S.: I am not very knowledgeable in statistics, so I would be grateful if you could explain it a little. Thank you.
Hi,
I have conducted an EFA on three items, and all items load on one factor. I then ran a reliability analysis with the three variables to check internal consistency using Cronbach's alpha.
My question: should I run the reliability analysis before or after the EFA?
Does the order really matter in this case?
Thank you in advance!
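For what it's worth, the reliability analysis and the EFA are computed independently of each other: Cronbach's alpha depends only on the item variances and the variance of the total score, so running it before or after the EFA gives the same number. A quick sketch with simulated data:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Three illustrative items sharing a common component.
rng = np.random.default_rng(1)
common = rng.standard_normal(200)
X = np.column_stack([common + 0.8 * rng.standard_normal(200) for _ in range(3)])

alpha = cronbach_alpha(X)
print(round(alpha, 2))
```

Since nothing in this formula involves the factor solution, the order of the two analyses is a matter of reporting convention, not of the numbers themselves.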
Hello,
Using EFA, I developed a social scale with 4 factors, each having 4 items.
In a new round of data collection I collected three separate samples (150 each) to test if my new scale can show the differences in social ratings between three different product categories.
My questions (considering that the results belong to three different types of products with various levels of social features):
1. Can I use the 450 responses combined for the CFA?
2. Should I analyze them separately (i.e., three CFAs)? What if the model doesn't fit for one of them?
Thank you
Hi,
I did an EFA with oblimin rotation in RStudio, because I would like to know the factor correlations.
Input:
# EFA with 5 factors, oblimin rotation (oblimin requires the GPArotation package)
library(GPArotation)
out.efa <- factanal(na.omit(p1N[, 1:20]), factors = 5, rotation = "oblimin")
print(out.efa, cutoff = 0)
This was the output:
Factor Correlations:
Factor2 Factor1 Factor4 Factor5 Factor3
Factor2 1.000
Factor1 0.293 1.000
Factor4 0.400 0.309 1.000
Factor5 0.267 0.267 0.458 1.000
Factor3 0.467 0.322 0.411 0.368 1.000
Then I also wanted to know the factor correlations between these 5 factors and a single-item factor. So I computed new factor scores based on the output of the EFA. I used SPSS to calculate the correlations, with the following syntax:
Input:
CORRELATIONS
/VARIABLES= f1 f2 f3 f4 ef5 Sitem
/PRINT=TWOTAIL NOSIG FULL
/STATISTICS DESCRIPTIVES
/MISSING=PAIRWISE.
NONPAR CORR
/VARIABLES= f1 f2 f3 f4 f5 Sitem
/PRINT=SPEARMAN TWOTAIL NOSIG FULL
/MISSING=PAIRWISE.
Output:
Factor Correlations:
Factor1 Factor2 Factor3 Factor4 Factor5 Sitem
Factor1 1.000
Factor2 .33** 1.000
Factor3 .39** .43** 1.000
Factor4 .48** .53** .50** 1.000
Factor5 .33** .34** .49** .49** 1.000
Sitem .49** .50** .55** .57** .50** 1.00
** Pearson correlation is significant at the 0.01 level (2tailed).
(I also did spearman but factor correlations did not differ much).
Why are the factor correlations in the second output different from those in the first? Does it have to do with standardisation?
How should I interpret the factor correlations, and which factor correlations should I report?
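A likely explanation for part of the discrepancy: the factanal output is the model's factor correlation matrix (Phi), while the SPSS numbers are correlations of estimated factor scores, and regression-type score estimates are generally correlated with each other more strongly than the factors themselves. This can be shown analytically; a sketch in Python with made-up loadings (not your data):

```python
import numpy as np

# Made-up pattern loadings (6 items, 2 factors) and factor correlation Phi.
L = np.array([[0.7, 0.0],
              [0.6, 0.0],
              [0.8, 0.0],
              [0.0, 0.7],
              [0.0, 0.6],
              [0.0, 0.8]])
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])

# Model-implied item correlation matrix, with unit diagonal.
R = L @ Phi @ L.T
np.fill_diagonal(R, 1.0)

# Thurstone regression factor-score weights: W = R^-1 S, where S = L Phi
# is the structure matrix.
S = L @ Phi
W = np.linalg.solve(R, S)

# Covariance of the estimated factor scores (S' R^-1 S), rescaled to correlations.
C = S.T @ W
d = np.sqrt(np.diag(C))
score_corr = C / np.outer(d, d)

print("model Phi[0,1]  :", Phi[0, 1])
print("score corr[0,1] :", round(score_corr[0, 1], 3))
```

Here the model says the factors correlate 0.30, yet the implied correlation between the regression score estimates is about 0.37; with real data and less determinate factors the inflation can be larger, so the Phi matrix from the EFA is usually the one to report.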
Why are there negative factor loadings in EFA? Even recoding the items does not work; it makes other items negative. Why, and how should one interpret this?
How can all these discrepancies be explained mathematically?
I am working with a scale that could be considered to have 11 dichotomous (0–1) and polytomous (0–2, 0–3, and 0–5) items OR 20 subitems (dichotomous and some polytomous). The sample size is sound (> 700 subjects).
A) If I do an exploratory analysis* with the 11 items on half of the (randomized) sample, I get 2 factors. The confirmatory analysis** with the other half confirms the two factors, presenting good adequacy index values. Some authors also obtained 2 factors, with either similar or different methodology (including item-theory analyses).
B) Moreover, I also tested unidimensionality, as some found only 1 factor. Again, all indices are adequate (ECVI slightly higher).
C) However, if I do an exploratory analysis* with the 20 subitems (similarly to other authors), I get 4 factors. Additionally, just out of curiosity, in the total sample I get 5 factors!
E) There have also been 3-, 4-, and 5-factor solutions in the literature, with either 11, 20, or 30 subitems (all dichotomous).
* Unrotated EFA; maximum likelihood extraction; eigenvalues > 1; based on the tetrachoric correlation matrix, followed by an oblique rotation (promax).
** CFA with the Diagonally Weighted Least Squares method.
Hi,
I performed PCA on a dataset from a small sample of 67 people and reduced my variables from 42 to 16. Then I collected responses from 231 people on those 16 variables. I want to perform CFA and then cluster analysis. Do I need to perform EFA before conducting the CFA? Because if I conduct an EFA on it again, the variable count drops again. Your guidance, please.
Kind Regards,
Ali Abbas
Hi
I have to conduct EFA for the pilot study, but my sample is 60 with 41 items.
I can't run the EFA, as the software says the sample size is inadequate.
What could be the solution?
I can't increase my sample size for the pilot study.
I read below references
Hair et al (1998) give rules of thumb for assessing the practical significance of standardised factor loadings as denoted by either the component coefficients in the case of principal components, the factor matrix (in a single factor model or an uncorrelated multiple factor model) or the pattern matrix (in a correlated multiple factor model).
On the other hand Field (2005) advocates the suggestion of Guadagnoli & Velicer (1988) to regard a factor as reliable if it has four or more loadings of at least 0.6 regardless of sample size. Stevens (1992) suggests using a cutoff of 0.4, irrespective of sample size, for interpretative purposes. When the items have different frequency distributions Tabachnick and Fidell (2007) follow Comrey and Lee (1992) in suggesting using more stringent cutoffs going from 0.32 (poor), 0.45 (fair), 0.55 (good), 0.63 (very good) or 0.71 (excellent).
MacCallum et al. (1999, 2001) advocate that all items in a factor model should have communalities of over 0.60 or an average communality of 0.7 to justify performing a factor analysis with small sample sizes.
For my study, I am taking an already established questionnaire. EFA is used when the factor structure has to be established for a new scale; with a well-established factor structure, CFA is generally used for testing established questionnaires. I have done a pilot study in which I collected data using the full-length questionnaire. However, I wish to delete some items from this already established questionnaire so as to shrink its length for the convenience of future respondents. So should I go for EFA or CFA for the item deletion?
Dear all,
I am conducting research on the impact of blockchain traceability for charitable donations on donation intentions (experimental design with multiple conditions, i.e., no traceability vs. blockchain traceability).
One scale/factor measures “likelihood to donate” consisting of 3 items (dependent variable).
Another ”trust” factor, consisting of 4 items (potential mediator).
And a scale “prior blockchain knowledge” consisting of 4 items (control).
All factors/items are taken from previous research.
The first two (hypothesized) factors are measured on a 7-point Likert scale, ranging from 1 (Strongly disagree) to 7 (Strongly agree). However, 'prior blockchain knowledge' consists of 4 items measured as a semantic differential from −3 to +3. Essentially, all items in my study thus have the same 'scale length' of 7.
My question is: if I perform an EFA/PCA on these data, is this a problem? Is there any gain in recoding the semantic differential to 1–7 after conducting the survey but before the EFA/CFA? Or is this unnecessary or even problematic?
Curious to read your ideas, thank you in advance!
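On the recoding question specifically: moving a −3…+3 semantic differential to 1…7 just adds a constant, and Pearson correlations (on which an EFA/PCA of this kind is based) are invariant under such linear shifts, so the recoding cannot change the factor solution. A quick check:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative semantic-differential item scored -3..+3 and a Likert item 1..7.
sd_item = rng.integers(-3, 4, size=200)
likert = rng.integers(1, 8, size=200)

r_before = np.corrcoef(sd_item, likert)[0, 1]
r_after = np.corrcoef(sd_item + 4, likert)[0, 1]   # recode -3..+3 -> 1..7

assert np.isclose(r_before, r_after)   # correlation invariant under the shift
```

Recoding may still be worthwhile purely for descriptive reporting (so all item means sit on the same 1–7 metric), but it is cosmetic as far as the EFA/PCA is concerned.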
Hi there,
I definitely do need your help!!!
Looking through studies and books, I got a little confused by the different approaches used to conduct factor analyses for reflective scales before running a PLS analysis.
Some recommend carrying out exploratory factor analysis (EFA) using SPSS first, followed by covariance-based confirmatory factor analysis (CB-CFA) using, e.g., AMOS. The items obtained stepwise are then used in PLS for further analyses.
Others are pro EFA (in SPSS) but advise against using CB-CFA (e.g., AMOS) before the PLS analysis, criticizing that they have different underlying assumptions. Instead, they recommend doing the CFA directly in PLS (using the EFA's results).
But even within the field of EFA there seems to be some confusion about which extraction method (principal component vs. principal axis vs. ...) and which rotation procedure (oblique vs. Varimax) are most appropriate when using PLS afterwards.
So, my question: are there any rules, or is there a generally accepted way to conduct EFA and CFA when using PLS? Could you provide me with corresponding references (published articles etc.)?
Hope, someone can help!
Thanks in advance!
The total variance explained tells us how much variance is explained by all factors together. However, if we want to know how much of it is explained by each individual factor, what is the procedure?
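For an orthogonal solution, the variance explained by each factor is the sum of its squared loadings, usually reported as a proportion of the number of standardized items. A minimal sketch with made-up loadings:

```python
import numpy as np

# Illustrative unrotated loading matrix: 5 items, 2 factors.
L = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.6, 0.1],
              [0.1, 0.7],
              [0.2, 0.6]])

p = L.shape[0]

# Sum of squared loadings per factor (the eigenvalue-like quantity for
# orthogonal solutions), and the proportion of total variance explained.
ssl = (L ** 2).sum(axis=0)
prop = ssl / p

print("SS loadings      :", np.round(ssl, 3))
print("% variance each  :", np.round(100 * prop, 1))
print("% variance total :", round(100 * prop.sum(), 1))
```

After an oblique rotation the factors overlap, so these per-factor proportions no longer partition the total cleanly; software then typically reports sums of squared structure loadings instead.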
I am aware that a high degree of normality in the data is desirable when maximum likelihood (ML) is chosen as the extraction method in EFA and that the constraint of normality is less important if principal axis factoring (PAF) is used as the method of extraction.
However, we have a couple of items in which the data are highly skewed to the left (i.e., there are very few responses at the low end of the response continuum). Does that put the validity of our EFAs at risk even if we use PAF?
This is a salient issue in some current research I'm involved in because the two items are among a very small number of items that we would like, if possible, to load on one of our anticipated factors.
Dear researchers
I developed a scale with 6 items, and I want to compare it with a widely used 4-item scale. The two scales have similar validity and reliability, and in the EFA they load on the same factor. In addition, I have done a CFA with these two scales (or rather, these two latent variables), and the correlation between them is 1. Finally, I ran another CFA between two other latent variables and each of these scales (done separately, one at a time), and the correlations between the latent variables and the scales were similar, though not identical.
Are there any suggestions on how I should work ?
Thank you
Dear all,
I am conducting research on the impact of blockchain traceability for charitable donations on donation intentions (experimental design with multiple conditions, i.e., no traceability vs. blockchain traceability).
One scale/factor measures “likelihood to donate” consisting of 3 items (dependent variable).
Another ”trust” factor, consisting of 4 items (potential mediator).
Furthermore, a “perception of quality” consisting of 2 items (control).
And a scale “prior blockchain knowledge” consisting of 4 items (control).
My question is: since all these scales are taken from prior research, is CFA sufficient? Or, since the factors are from different studies (and thus have never been used together in one survey/model) should I start out with an EFA?
For instance, I am concerned that one (or perhaps both) of the 'perception of charity quality' items might also load on the 'trust' scale, e.g., the item 'I am confident that this charity uses money wisely.'
Curious to hear your opinions on this, thank you in advance!
One of my constructs, around knowledge, did not load as a single construct during EFA; it loaded as a number of constructs. I am wondering what the best way is to report this within a thesis. Additionally, can I discuss the scale items of this construct within the findings and discussion chapters? There is a lot of rich data in these responses; it just cannot answer the hypothesis.
I would really appreciate some guidance. Thank you for any advice.
Hi... I have 7 constructs to be measured in my study. Three constructs adopt items from established scales, and the remaining constructs adapt scales from previous studies. My question is: should I run the EFA only on the adapted scales, or on both the adapted and the adopted scales used in my study?
Thank you in advance for your help.
Hello Guys,
I'm performing factor analysis in SPSS on 168 observations of Likert-scale variables. EFA generates two factors: Factor 1 includes 3 variables with loadings of 0.777, 0.785, and 0.421, and both composite reliability and extracted variance are around 0.8. However, Cronbach's alpha for internal consistency is only 0.52.
Factor 2 shows good values on all three measures, so the only problem is the internal consistency of Factor 1. Discriminant validity between the two factors is also good, and the KMO for the whole construct is around 0.75.
In this case, can I consider this "problematic" factor as "valid"? Otherwise, what can I do to solve this issue?
The idea is to use the factors within structural equation modelling later. Thanks to all of you in advance!
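For what it's worth, a pattern of acceptable composite reliability but low alpha often traces to one weak or mis-scored item (the 0.421 loading is a candidate), and recomputing alpha by hand can rule out a scoring problem such as an un-reversed item. Alpha is only a ratio of summed item variances to sum-score variance; a minimal NumPy sketch (illustrative, not tied to SPSS):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)
```

If the hand-computed value matches SPSS, the low alpha is real and likely reflects the weak third item rather than a data-handling error.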
My factor loadings show 3 factors when extraction is based on eigenvalues in SPSS. When I instead fix the number of factors to equal the number of variables in the study, the EFA results are satisfactory, without any cross-loadings.
Can I compare eigenvalues from PAF (EFA) of my data with eigenvalues from parallel analysis (PCA rather than the common-factor approach) to decide how many components to extract in SPSS? If yes, kindly provide a reference to support the argument.
Kind regards,
Ali Abbas
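The comparison at issue can be made concrete. Horn's parallel analysis, in its PCA form, retains components whose observed correlation-matrix eigenvalues exceed the averaged eigenvalues of random data of the same size; a PAF-based version would instead use the reduced correlation matrix (communalities on the diagonal) on both sides, so mixing the two bases is generally discouraged. A minimal NumPy sketch of the PCA form (illustrative, not any package's implementation):

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_iter: int = 100, seed: int = 0):
    """Horn's parallel analysis, PCA basis: keep leading components whose
    correlation-matrix eigenvalues exceed the mean eigenvalues obtained
    from random normal data of the same (n, p) shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        sim = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_iter
    keep = 0                      # count leading eigenvalues above random
    for o, r in zip(obs, rand):
        if o <= r:
            break
        keep += 1
    return keep, obs, rand
```

Swapping `np.corrcoef(...)` for a reduced correlation matrix would give the PAF-consistent variant.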
Hi,
Can we run a CFA on a questionnaire whose items have been translated, reduced, and validated using EFA?
Originally, the questionnaire had 25 items. It was translated and tested on an adult sample. Using EFA, a 3-factor model was found and 5 items were removed from the questionnaire.
Currently, I am planning to conduct CFA on those 20 items. Am I doing it correctly? I am using an adolescent sample, and I found that past research using adolescent samples (from different countries) tested all 25 items and consistently found a 4-factor model (some used EFA, and some used CFA).
What if I just test the 20 items on the adolescent sample without re-analyzing the 25 items?
Respected Researchers,
I am working on urban sustainability and my final objective is to propose a framework of urban sustainability. In this regard, I have used EFA first and want to use CFA, but I learned that SEM has two methods, i.e., CB-SEM and PLS-SEM. If I do not use CB-SEM or PLS-SEM, can I use just CFA in my study? If so, please recommend procedures for conducting CFA.
Is it fair to compare findings from studies where one study has used PCA and the other EFA when the measure is unidimensional?
I'm conducting the translation of a very short scale of 12 items to assess therapeutic alliance in children. I have 61 responses and wonder whether that number of subjects is acceptable for running an exploratory factor analysis. I know there is a suggested minimum of 5 participants per item for EFA and 10 participants per item for CFA. However, the number of participants here seems very small for these analyses. What is your opinion?
Hello everyone
How can I statistically validate a translated sleep questionnaire with a small sample (n = 30, maybe more)?
I have no access to a previously validated questionnaire used in the same population, nor to a clinical sample to verify discriminative validity.
Someone advised me to use Bayesian factor analysis. If it is possible to do CFA with a Bayesian approach at this sample size, can you recommend any books on the matter?
Thank you.
I have translated an SEL scale from English to Urdu for the Pakistani population. Please suggest good ways to establish its psychometric properties. In the initial analysis, EFA shows a different factor structure than the original scale.
Dear experts,
I would like to know whether it is okay/possible to develop two models (one based on theory evaluation and another based on theory development). For example, I have developed a questionnaire based on a model, and now I have two processes. Process 1: EFA (quartimax, which yielded the same result as the model), CFA to confirm the model, and SEM for model testing; this helps me evaluate my theory. I also want to know whether there are any hidden factors, or whether I might arrive at a better model, i.e. theory development grounded in the numbers. So, Process 2: EFA based on parallel analysis with varimax rotation, then CFA, and SEM for Model 2.
Exploratory factor analysis was conducted to determine the underlying constructs of the questionnaire. The results show the % variance explained by each factor. What does % variance mean in EFA? How do I interpret the results? How can I explain the % variance to a non-technical person in simple, non-statistical language?
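In plain language: the % variance for a factor says how much of the total spread in all the item responses that one factor accounts for. If a factor explains 40%, then knowing each person's score on that factor alone reproduces 40% of the variability across all items. For a PCA-style extraction the figure is simply the factor's eigenvalue divided by the total (which equals the number of items, since standardized items each contribute variance 1); common-factor EFA computes it from the reduced correlation matrix, so the numbers differ slightly. A minimal sketch of the PCA version:

```python
import numpy as np

def pct_variance(data: np.ndarray) -> np.ndarray:
    """Percent of total standardized variance carried by each component:
    each eigenvalue of the correlation matrix divided by their sum
    (which equals the number of items)."""
    eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    return 100 * eig / eig.sum()
```

Two perfectly redundant items, for example, yield one component carrying 100% of the variance and a second carrying 0%.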
Hi, I have done EFA on a questionnaire. Initially, there were 16 variables in total. After EFA, I extracted two components with 8 variables.
The total variance explained is 55.9%, all communalities are > 0.4, and the KMO is 0.689. If I delete further items to raise the TVE above 60%, only 6 variables are left across the 2 components, and two of my highly important items are deleted. So, first of all, is a TVE of 55.9% acceptable (my work is survey-based, from industry)?
Secondly, I am using direct oblimin rotation; do I need to report only the pattern matrix in my paper, given that I have also performed CFA and cluster analysis?
Data were collected using a Likert scale, gathering insights from experts, entrepreneurs, and customers.
What are the determinants for measuring the effectiveness of customer loyalty programs in the emerging supermarkets of Bangladesh, conducting EFA and using AMOS software, where data have been collected from more than 350 respondents on a 5-point Likert scale?
What is the best way to measure the effectiveness of supermarkets' customer loyalty programs using AMOS software?
Is the sample size for CFA the same as that for EFA in educational survey research?
I translated a scale from English to Arabic using the back-translation technique. After collecting data, the scale has low reliability (α < 0.4), but when one item is removed, alpha exceeds 0.7. I suspect this particular item has a translation issue, because the original English scale demonstrated high reliability across many published studies.
So is it right to use EFA or PCA to justify removing the troublesome item in my research? This particular item does not load on factor 1 but on factor 2 (the original scale loads on a single factor).
Thank you!
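A quick empirical check that complements the EFA evidence is alpha-if-item-deleted: if dropping exactly one item lifts alpha from below 0.4 to above 0.7, that item will stand out sharply. A minimal NumPy sketch (illustrative; SPSS reports the same statistic in its reliability output):

```python
import numpy as np

def alpha_if_deleted(items: np.ndarray) -> np.ndarray:
    """Cronbach's alpha recomputed with each item left out in turn;
    a sharp rise when one item is dropped flags that item
    (e.g. a mistranslation)."""
    def _alpha(x):
        k = x.shape[1]
        return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum()
                                / x.sum(axis=1).var(ddof=1))
    return np.array([_alpha(np.delete(items, j, axis=1))
                     for j in range(items.shape[1])])
```

Reporting both this statistic and the item's loading on a separate factor gives two converging justifications for removal.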
Hi, I am working on a project about ethical dilemmas. The project requires the development of a new, valid, and reliable questionnaire. We started by collecting items from the literature (n = 57), assessed content validity and removed irrelevant items (n = 46), and piloted the questionnaire to gauge internal consistency. Results showed that the questionnaire has a high level of content validity and internal consistency. We were then asked to perform an exploratory factor analysis to confirm convergent and discriminant validity.
Extraction: PCA
Rotation: varimax
Results: item communalities were higher than 0.6.
KMO = 0.70
Bartlett's test is significant.
Number of extracted factors: 11, with 60% total explained variance.
My issue is that 6 of the factors contain only 2 items each. Should I remove all these items?
Note that the items are varied: each one describes a different situation, and they share only the fact that all are ethical dilemmas. Deleting them would reduce the questionnaire's overall ability to assess participants' level of difficulty with, and the frequency of, such situations.
EFA is a new concept for me; I am really confused by this data.
Do you know of any renowned article published in a Scopus-indexed journal describing which method is best for conducting exploratory factor analysis (EFA) in SPSS: principal component analysis or principal axis factoring?
I am running an EFA on an 8-item questionnaire. Each item is rated from 0 to 10, with higher scores meaning more negative/worse. The sample size is small at n = 81. I am treating the data as ordinal, so I am looking at polychoric correlations. I have done some preliminary analyses but have struck problems: item 7 on the scale has missing response categories (no participants chose 6, 7, 9, or 10), so polychoric correlations could not be calculated. Must I remove this item for the analysis to work, or are there other ways to manage this?
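One common workaround, rather than dropping the item, is to recode the response scale so the empty categories disappear: polychoric estimation only needs thresholds between levels that were actually observed. A minimal sketch of such a recoding, assuming the responses are already numeric:

```python
import numpy as np

def recode_observed(col: np.ndarray) -> np.ndarray:
    """Re-code an ordinal item onto consecutive integers 0..k-1 over
    the categories actually observed, so zero-frequency levels no
    longer appear before polychoric correlations are estimated."""
    _, codes = np.unique(col, return_inverse=True)
    return codes
```

An alternative with the same effect is to merge sparse adjacent categories by hand (e.g. treat all responses of 6 and above as one level); either way, document the recoding when reporting the analysis.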
I collected 109 responses on 60 indicators to measure the status of urban sustainability as a pilot study. As far as I know, I cannot run EFA, since each indicator requires at least 5 responses, but I do not know whether I can run PCA with this limited number of responses. Could you please advise on the applicability of PCA, or suggest any other possible analysis?
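For what it's worth, PCA itself has no latent-variable model and will run for any n greater than 1; the real concern at n = 109 with 60 indicators is that the correlation matrix is noisy, so component loadings should be read descriptively rather than inferentially. A minimal sketch of PCA via the SVD of the standardized data (illustrative only):

```python
import numpy as np

def pca(data: np.ndarray, n_components: int):
    """PCA via SVD of the standardized (z-scored) data matrix.
    Returns component scores and the percent of total variance
    carried by each retained component."""
    z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    pct = 100 * s**2 / (s**2).sum()       # singular values^2 ∝ eigenvalues
    scores = z @ vt[:n_components].T      # project onto leading components
    return scores, pct[:n_components]
```

For a pilot of this size it may also be worth grouping the 60 indicators into theoretically motivated subsets and running PCA within each subset, which improves the case-to-variable ratio.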
Hi,
One tool that I used showed 4 factors in EFA; however, in CFA, a 3-factor model showed the best goodness of fit. Furthermore, the original article that developed the tool also reported 3 factors.
In this case, even though my EFA showed 4 factors, can I say that the tool has 3 factors in my sample and is valid for my sample?
I need your precious thoughts and opinion.
Thank you so much in advance.
Best regards,
Judy