SEM Analysis - Science method

Explore the latest questions and answers in SEM Analysis, and find SEM Analysis experts.
Questions related to SEM Analysis
  • asked a question related to SEM Analysis
Question
1 answer
My study uses the moderated mediation Model B with three moderators. I would like more clarification on calculating the index of moderated mediation and on interpreting the conditional indirect effects. I would really appreciate your guidance, and thank you in advance.
Relevant answer
Answer
The moderated mediation Model B refers to a statistical model where the effect of an independent variable (X) on a dependent variable (Y) is indirect, and is mediated by a mediator variable (M), but this indirect effect is moderated by one or more moderator variables (W). In the case of three moderators, the conditional indirect effect of X on Y through M can be calculated and interpreted as follows:
  1. Index calculation: The index of moderated mediation quantifies how the indirect effect changes per unit change in a moderator; for a moderator of the a path, for example, it is the product of the interaction coefficient on that path and the b path coefficient. To probe the conditional indirect effect, evaluate the moderator variables at specific levels (e.g., mean, -1 SD, +1 SD) and compute the indirect effect of X on Y through M at each combination of those levels.
  2. Conditional indirect effect result interpretation: The conditional indirect effect of X on Y through M is the amount of change in Y, transmitted through M, per unit change in X at the chosen levels of the moderator variables. For example, if the moderators are at their mean levels, the conditional indirect effect describes the indirect effect for a typical (average) case. At -1 SD, it is the indirect effect for individuals with lower-than-average levels of the moderators; at +1 SD, it is the indirect effect for individuals with higher-than-average levels.
The results of the conditional indirect effect can be used to determine how the relationship between X, M, and Y is affected by the moderators. If the conditional indirect effect is significantly different across different levels of the moderator variables, it suggests that the moderator variables play an important role in moderating the indirect effect of X on Y through M.
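For concreteness, here is a minimal lavaan sketch of one conditional indirect effect with a single moderator W on the a path (a simplification of Model B; with three moderators you would add the corresponding product terms and defined parameters). The variable names, the data frame dat, and the assumption that W is standardized are illustrative, not taken from the question.
library(lavaan)
mod <- '
  M ~ a1*X + a2*W + a3*XW       # XW is the product term X*W, computed beforehand
  Y ~ cp*X + b*M
  index    := a3*b              # index of moderated mediation for W
  ind.low  := (a1 + a3*(-1))*b  # conditional indirect effect at W = -1 SD (W standardized)
  ind.mean := (a1 + a3*0)*b     # at the mean of W
  ind.high := (a1 + a3*1)*b     # at W = +1 SD
'
dat$XW <- dat$X * dat$W
fit <- sem(mod, data = dat, se = "bootstrap", bootstrap = 1000)
parameterEstimates(fit, boot.ci.type = "perc")   # bootstrap CIs for the defined effects
A significant index for a given moderator indicates that the indirect effect depends on that moderator; the ind.* estimates are then the quantities you interpret at each moderator level.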
  • asked a question related to SEM Analysis
Question
3 answers
Hello,
I saw that there are two different morphologies in SEM images of PCN-224, namely cubic and spherical. I would appreciate it if someone could explain this to me.
Relevant answer
Answer
Dear Naeimeh,
It is quite normal for the same MOF to have different morphologies; the morphology of a MOF depends on synthesis conditions such as the surfactant, the solvent (both type and ratio), the reactant ratio (metal to ligand), temperature, pH, reaction time, etc. However, it is difficult to give a general answer to this question. Crystal facet arrangement and crystal facet energy are the ultimate causes. You can try searching the relevant reviews on morphology control and you will find your answer.
Some related articles:
  • asked a question related to SEM Analysis
Question
2 answers
I have an SEM study in which some variables were collected with a 7-point scale and others with a 5-point scale. Is there any literature I can look at, or any opinions? Thanks.
Relevant answer
Answer
From my understanding, this shouldn't be a problem at all. With SEM you are essentially looking at covariances between variables. To this end, their absolute means as determined by the number of response options do not matter.
  • asked a question related to SEM Analysis
Question
3 answers
Hello,
I have survey data that I am attempting to use in IBM AMOS to create a SEM UTAUT model. However, during output, for "Result" I get:
Minimum was achieved
The model is probably unidentified. In order to achieve identifiability, it will probably be necessary to impose 7 additional constraints.
Chi-square = 2417.406
Degrees of freedom (corrected for nonidentifiability) = 82
Probability level = .000
I know close to nothing about statistics and am a total newbie when it comes to AMOS, but from what I gather, that chi-square is bad; also, for CMIN/DF I get 29.481, which I also think is not great since it is greater than 3.
The SEM model is supposed to be a UTAUT model. Worst case, can I still use the data as is? If so, how do I interpret it correctly? I have provided the model.
Any help is appreciated. Thank you.
Relevant answer
Answer
In general, you should not interpret the parameter estimates or fit statistics for an underidentified model. They may be completely incorrect/misleading.
SEM is a complex statistical methodology. You mention that you "know close to nothing about statistics." I would say, given that, it is not a good idea for you to use SEM unless you are guided by a statistician who is an expert in SEM and AMOS. There is a lot that can go wrong with SEM.
In your case, you have two latent variables (PerformanceExpectancy and EffortExpectancy) that each have only one indicator (observed/measured variable). These latent variables are not identified unless you add more indicators or fix the error variances of the indicators to a meaningful value. Also, your endogenous (dependent) latent variable does not seem to have an indicator (measured variable) at all.
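For readers working outside AMOS, here is a minimal lavaan sketch of the second point, fixing the error variance of a single-indicator latent variable; the indicator name PE1 and the assumed reliability of .80 are illustrative, not taken from the question.
library(lavaan)
evar <- (1 - 0.80) * var(dat$PE1)   # error variance = (1 - reliability) * observed variance of the indicator
mod <- paste0('
  PerfExp =~ 1*PE1                  # loading fixed to 1 (marker)
  PE1 ~~ ', evar, '*PE1             # residual variance fixed to a meaningful value
')
fit <- sem(mod, data = dat)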
  • asked a question related to SEM Analysis
Question
3 answers
I am trying to conduct an SEM analysis using Mplus7 with planned missing data. But I got this error: one or more variables in the data set have no non-missing values. Check your data and format statement. How can I fix this?
Relevant answer
Answer
It is impossible to know what went wrong by just looking at the input file (syntax). You need to check your data file (exp.dat) to see whether the columns/entries in that data file match with what you have in your NAMES list in the VARIABLE command. I'm almost certain that there must be some sort of mismatch between what's in the exp.dat file and the variable names list in your input file.
You can also try using a reduced set of variables (using the USEVARIABLES option) and requesting ANALYSIS: TYPE = BASIC; to check the descriptive statistics for that reduced set of variables to see if that works. That might bring you closer to identifying the problem.
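Before re-running Mplus, a quick sanity check in R can locate the offending column (a sketch; exp.dat is the file named above, and the missing-data code -999 is an assumption you should replace with your own):
dat <- read.table("exp.dat", header = FALSE, na.strings = "-999")
colSums(!is.na(dat))                # number of non-missing values per column
which(colSums(!is.na(dat)) == 0)    # any column listed here has no non-missing values at all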
  • asked a question related to SEM Analysis
Question
5 answers
In evaluating second-order structure, I have faced two common approaches in applied studies:
1) Some fit the second-order structure in an explicitly hierarchical model: the first-order structure models the indicators and the second-order structure models the [implied] covariance between first-order latent factors.
2) Some fit the second-order structure on the first-order factors that are represented with sum scores (i.e., each subscale as a parcel).
To me, it seems that the first approach is more accurate (as it correctly models the measurement error), but I have some doubts about how to assess the fit of the second-order structure. Fit indices seem to put much more weight on the first-order structure. This goes to the extent that in a large model with a good first-order and a very poor second-order structure, the fit indices tend to show a good fit. Many authors consider the good fit of this hierarchical model as evidence of the good fit for second-order structure as well. (Some authors compare fit indices such as CFI and RMSEA from models with and without second-order factors and if there is little difference they conclude that the second-order structure is a good fit.)
Is this practice OK? And am I missing something here?
And is there any way to use the first approach and still calculate fit indices that exclusively evaluate the second-order structure?
(Something like calculating a chi-square for the discrepancy between the implied covariance matrix from the first-order structure and the implied covariance matrix from the second-order structure and using this for calculating other fit indices!)
Thank you in advance.
Relevant answer
Answer
Dear Ali Zia-Tohidi,
Marsh and Hocevar (1985) proposed a target coefficient T that relates the fit (chi-square) of the model with correlated first-order factors to the fit of the second-order factor model. The target coefficient can thus be used to determine whether a poor model fit is caused by the first-order part or by the second-order structure.
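As an illustration, the target coefficient can be computed from two lavaan fits (a sketch; mod_first, mod_second, and dat are placeholder names for your first-order and second-order model syntax and data):
library(lavaan)
fit_first  <- cfa(mod_first,  data = dat)   # correlated first-order factors
fit_second <- cfa(mod_second, data = dat)   # second-order structure imposed
T_coef <- fitMeasures(fit_first, "chisq") / fitMeasures(fit_second, "chisq")
T_coef   # values close to 1 suggest the second-order factor accounts well for the first-order factor covariances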
Marsh, H. W., & Hocevar, D. (1985). Application of confirmatory factor analysis to the study of self-concept: First-and higher order factor models and their invariance across groups. Psychological bulletin, 97(3), 562.
See also
Cheung, D. (2000). Evidence of a single second-order factor in student ratings of teaching effectiveness. Structural Equation Modeling, 7(3), 442-460.
best
Christoph
  • asked a question related to SEM Analysis
Question
3 answers
Hello!
In general, as a rule of thumb, what is the acceptable value for standardised factor loadings produced by a confirmatory factor analysis?
And, what could be done/interpretation if the obtained loadings are lower than the acceptable value?
How does everyone approach this?
Relevant answer
Answer
Ravisha Jayawickrama: most sources accept standardised factor loadings above 0.4.
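If you work in lavaan, a quick way to flag loadings below such a cutoff (a sketch; mod and dat are placeholders, and 0.4 is just the rule of thumb mentioned above):
library(lavaan)
fit <- cfa(mod, data = dat)
std <- standardizedSolution(fit)
std[std$op == "=~" & abs(std$est.std) < 0.4, ]   # standardized loadings below the cutoff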
  • asked a question related to SEM Analysis
Question
6 answers
Hi All,
I have run a default model; it converges and shows good fit in two data sets (different topics) separately. Now I want to check for a moderating effect using multiple-group analysis in AMOS. However, the unconstrained model, measurement weights model, structural weights model, and structural covariances model are not identified.
My question is:
1. What is the reason for the unidentifiability of the unconstrained model? I am a bit confused, since the default model without multiple-group analysis definitely works.
2. What should I do to make the unconstrained, measurement weights, and structural weights models identified?
3. If the unconstrained, measurement weights, and structural weights models cannot be identified, is there another way to test the moderating effect?
I attached some pictures. Hope someone can enlighten me, thanks so much for the help!
Relevant answer
Answer
Hi Christian,
Thank you very much for your quick answer.
I checked the correlations between the 3 indicators of the LS factor and they are significantly correlated according to SPSS Correlation analysis result (see picture 1).
I estimated the same model for each group separately, without multi-group analysis, before, and it worked well (see pictures 2 and 3). (This is why I ask "Why does an identified SEM model become unidentified after multiple group analysis with AMOS?" in the title of my question.) However, strangely, since I didn't save the output at that time, I re-drew and re-ran the model in the two data sets (P and S groups) separately just now and found that it became unidentified, and I do get a negative factor variance (-0.008, see pictures 4 and 5). I'm really puzzled.
Once again thank you for your suggestion.
Best,
Bo Bo
  • asked a question related to SEM Analysis
Question
6 answers
I am preparing a sample for SEM analysis but lack the facilities for liquid CO2 (critical point drying) and HMDS.
So, is there any other method to dry the sample?
Your suggestions will be highly appreciated.
Relevant answer
Answer
Dehydration means replacing the water with a solvent (ethanol, acetone).
If you then dry by simply evaporating the solvent, the quality of your sample structure will be very poor.
  • asked a question related to SEM Analysis
Question
4 answers
We have finished writing an article about the psychological and emotional dimensions of COVID-19, but it needs revision and final editing in English. If someone has sufficient experience in this field, is fully fluent in English, and is interested, I would appreciate a message.
Relevant answer
Answer
Revered Kharazmi,
Thank you for your information.
I can edit your manuscript for grammar, style, and citations.
This is my email ID drsenapathy@gmail.com
Be in touch.
Regards
Senapathy
Ethiopia
  • asked a question related to SEM Analysis
Question
1 answer
Hi everyone, I'm trying to obtain information about the morphology of small cylinders of GelMA. As they were too thick, we cut them prior to the standard Pt/Pd coating required for SEM imaging. However, this process seems to alter the hydrogel structure, leading to no meaningful data. Would GelMA films work better? We want to try heat-drying the samples instead of freeze-drying them, but my concern is creating a temperature gradient that could alter the porosity. Has anyone ever tried this method?
Relevant answer
Answer
It looks like you used the following method: GelMA hydrogels were placed on copper meshes, flash-frozen in liquid nitrogen, and lyophilized in a vacuum freeze dryer overnight; the specimen surface was then coated with platinum/palladium (Pt/Pd) for SEM observation.
Freeze drying is the best option for this purpose. The Pt/Pd coating is necessary to protect the film from the electron beam, which decomposes organic matter. I would try the SEM after freeze drying, but keep the sample in the chamber for as little time as possible.
  • asked a question related to SEM Analysis
Question
4 answers
Dear experienced SEM users 😊, I'm wondering whether it is possible to do something like a "variance decomposition" of a latent variable in structural equation modelling (SEM)?
Let's imagine we've got some latent factor Y "determined" by two other latent variables X and Z. We have standardized parameter estimates for X->Y (0.5) and Z->Y (-0.4). Is it possible to use these two estimates to say which latent factor is more "important" for determining Y? Ideally, is it possible to say that X accounts for x% of the variability in Y, and Z for z%? Thanks a lot in advance for any hints.
Relevant answer
Answer
With high collinearity among predictor variables, the standardized regression coefficients may become difficult to interpret. See, for example:
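To see why a clean percentage split is generally not possible, here is a rough numerical sketch in R using the standardized estimates from the question and an invented factor correlation r_xz = 0.3:
b_x <- 0.5; b_z <- -0.4; r_xz <- 0.3
r2_total <- b_x^2 + b_z^2 + 2 * b_x * b_z * r_xz   # variance in Y explained jointly by X and Z
r2_total                                           # 0.29 in this example
# the shared term 2*b_x*b_z*r_xz cannot be attributed uniquely to X or Z,
# which is exactly the collinearity problem mentioned above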
  • asked a question related to SEM Analysis
Question
6 answers
My ZnO nanoparticle sample was used for XRD, and I do not have enough sample left for SEM. Can the NP sample be reused for SEM?
Relevant answer
Answer
XRD is a non-destructive method (with the possible exception of biological materials), so a sample used for XRD can normally be reused in other experiments. You yourself should know whether the sample has been subjected to, for example, additional milling, exposure to any reagents, or prolonged air storage (moisture and carbon dioxide), etc.
  • asked a question related to SEM Analysis
Question
9 answers
I synthesized gold nanoparticle thin films under four conditions. Three of them show highly agglomerated islands, as typically observed for metallic particles. The fourth condition, however, shows many circles and round objects (see attachment).
What could these structures be, in your opinion? Are they gold nanoparticles? Why are they so monodispersed?
Relevant answer
Answer
Thank you. Unfortunately I don't have any experience with this technique, but it seems plausible that the particles are trapped in the substrate, as they appear to be in the SEM images. I just looked for some papers on this technique, and it seems that temperature plays a role in the agglomeration; maybe this is your case as well.
To conclude, besides SEM I would perform a chemical analysis (EDX, XPS) to be sure it is gold. Optical spectroscopy may also help; those AuNPs on quartz should have a distinctive absorbance somewhere around 600 nm (a blue/purple-ish look).
  • asked a question related to SEM Analysis
Question
4 answers
I performed an SEM analysis in my research and then modified the research model based on the correlations between the independent variables. Now I need to justify this modification because of the high correlation values, and I am confused about how to justify that change. I have attached the research model before and after modification.
#SEM #Structural_equation_model #correlation
Relevant answer
Answer
Hi Michael,
absolutely; blindly following modification indices can (and often will) make the model fit *better* while making it more causally *misspecified*. Fit is only a means to an end, not the end itself, as completely nonsensical models can nonetheless fit very nicely.
Hayduk, L. A. (2014). Seeing perfectly fitting factor models that are causally misspecified: Understanding that close-fitting models can be worse. Educational and Psychological Measurement, 1-22. https://doi.org/10.1177/0013164414527449
Best,
Holger
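For readers who want to inspect (rather than blindly apply) modification indices, here is a lavaan sketch (mod and dat are placeholders):
library(lavaan)
fit <- sem(mod, data = dat)
mi <- modindices(fit)
head(mi[order(-mi$mi), c("lhs", "op", "rhs", "mi", "epc")], 10)   # largest indices and expected parameter changes
# keep a modification only if it is defensible on theoretical grounds, not merely because it lowers chi-square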
  • asked a question related to SEM Analysis
Question
4 answers
Simpson's paradox is a statistical phenomenon in which the relationship between two variables changes when the population is divided into subcategories. In the following animation, we can see how the linear relationship between two variables is reversed once a third, categorical variable is taken into account. Simpson's paradox highlights the fact that analysts should be diligent to avoid such mistakes.
How to identify this phenomenon in SmartPLS?
Relevant answer
Answer
Hi Rasoul,
the fact that an estimated relationship changes when stratifying on (= controlling or adjusting for!) a third variable doesn't tell you anything by itself; the key issue is whether this variable acts as a confounder (a common cause of X and Y) or a collider (a common effect of X and Y). Both change the relationship between X and Y, but controlling for the confounder UN-biases the effect, whereas controlling for the collider biases the (otherwise unbiased) effect.
I strongly recommend this book that will enlighten you :)
Pearl, J., & MacKenzie, D. (2018). The book of why. Basic books.
HTH
Holger
  • asked a question related to SEM Analysis
Question
4 answers
Dear all,
I am conducting a multiple-group path analysis (with observed variables) using AMOS. The unconstrained model reveals that several path coefficients differ according to their p values.
When I compare the fully constrained model to the unconstrained model, the chi-square difference is non-significant. However, when I compare the models by constraining only one path at a time, several paths turn out to be variant (significant chi-square change).
Should I conclude that the model is invariant based on the comparison between the fully unconstrained and fully constrained models? Or should I use the one-path-at-a-time approach and freely estimate all variant paths? I would like your opinion on the best approach.
PS: Although the chi-square did not change significantly, the CFI decreased in the constrained model. Could that be a reason to examine specific paths for non-invariance?
Thanks
Best
Pinar
Relevant answer
Answer
Unless you have a priori hypotheses about specific paths that would be expected to be non-invariant, I would think that the omnibus chi-square difference approach (comparing the fully constrained vs. unconstrained models globally) is the more appropriate approach. This is somewhat similar to what you would do in ANOVA. You look at the omnibus F test first. You run post hoc tests (pairwise comparisons) only if you obtain a significant F statistic. Otherwise you run into the risk of type-1 error inflation.
In your case with a non-significant overall chi-square difference, you would not compare individual paths.
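The same omnibus-first logic can be sketched in lavaan (AMOS users would use its model comparison table instead); mod is the path model and grp the grouping variable, both placeholders:
library(lavaan)
fit_free  <- sem(mod, data = dat, group = "grp")                                # unconstrained
fit_equal <- sem(mod, data = dat, group = "grp", group.equal = "regressions")   # all paths constrained equal across groups
lavTestLRT(fit_free, fit_equal)                                                 # omnibus chi-square difference test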
  • asked a question related to SEM Analysis
Question
4 answers
I am building a model for the Five Factor Model of personality in AMOS (as measured by BFI-44 in a large >300,000 dataset). I am doing this to test for MI across groups so I can be sure of my conclusions. (using a small sample c. 10,000 for the CFA)
I based the model on preliminary EFA, using this to specify some cross loadings.
All fine... improved model slightly
I then added a method factor (to represent Halo Bias)... again good improvement
The part I am having trouble with is adding an "Acquiescence Bias" factor.
I am attempting to use either:
1. normalised sum of all scores (5point likert) or
2. sum of reverse scored items (normalised)
...it won't run with either of these.
Covariance doesn't seem to make a difference, nor does adding error variables.
When I use a dummy variable for the observed variable, it runs (this is the sum of reversed divided by participant SD).
- I get a slight improvement in fit but I am expecting a lot more.
I get this error:
Minimization was unsuccessful
The results that follow are therefore incorrect.
The model is probably unidentified. In order to achieve identifiability, it will probably be necessary to impose 101 additional constraints.
Chi-square = 222005.662
Degrees of freedom (corrected for nonidentifiability) = 953
Probability level = .000
I am self teaching this over the past 3 days so please forgive my naivety.
Relevant answer
Answer
I managed to fit my model by working more with the EFA to define cross-loadings and adjusting the starting parameters so that the AQU and M (social bias) factors load properly on each item.
If you thought the first model had too much going on, look at this beast!
CFI improved from 0.7 to 0.9.
  • asked a question related to SEM Analysis
Question
4 answers
Hi,
I am looking at the three-way interaction of a latent variable (F) with ordered-categorical indicators (mostly Likert scales) and two observed variables (M1, M2), using Mplus. The following two issues arise:
1. How do I account for the ordered structure of the indicators? Do they need to be defined as categorical?
2. What does this mean for the standardization of the latent construct (F)? Standardizing the indicators does not seem appropriate!
Any advice or guidance is greatly appreciated. Thank you in advance!
Relevant answer
Answer
Joanna, Mplus now provides the option to use cat-LMS. Therefore, you just have to define the indicator variables as being categorical, and then the usual syntax of LMS can be used.
HTH, Karin
  • asked a question related to SEM Analysis
Question
3 answers
Hello everyone, I've already done an SEM-EDX characterization of iron electrodes before and after electrolysis. From these pictures, I can only say the following about the electrode:
- Before treatment: the iron surface is smooth and regular, and the iron signal is dominant.
- After treatment: the iron surface has cracks, holes, and irregularities caused by the electrolysis process; there is also an oxide layer on the surface, meaning the iron has been oxidized, and the oxygen signal is more prominent.
Any additional suggestions to describe this SEM-EDX result?
Also, I have some questions about SEM-EDX :
1. Why is the oxygen level higher after the treatment?
2. How does Fe react with oxygen in water during electrolysis?
3. How to write down the chemical reaction that occurs between Fe and oxygen in water to form Fe-oxide?
4. Talking about the morphology of SEM on the image after treatment, how can a needle-like layer be formed?
5. If correlated with the pourbaix diagram, how to explain the formation process of iron oxidation, when there is water, the presence of dissolved oxygen in the water?
Thank you in advance
Relevant answer
Answer
I assume that the EDS analysis was done over the area of the SEM images shown and the concentrations were obtained by pressing the “QUANT” button of the software.
The spatial resolution of EDS (laterally and in depth) is given by the beam-sample interaction (i.e. beam energy, atomic number, density) and the physics of X-ray emission.
A frequently used estimate is the Kanaya-Okayama range (K Kanaya and S Okayama, J. Phys. D: Appl. Phys. 5(1972) 43-58, DOI: 10.1088/0022-3727/5/1/308).
For pure Fe and 15 keV electrons the electron excitation range is ca. 1 µm, the size of the volume where the X-rays are from is ca. 0.7 µm for Fe K radiation.
EDS quantification assumes that the sample is homogeneous, flat, and thick (compared to the excitation range).
Advanced Spectrum Analysis and Quantification Precision
This presentation by Dr. Jens Rafaelsen was recorded on the EDAX booth at M&M 2018
At 35:45 “If your sample doesn't meet those requirements: don't click the QUANT button: You'll get numbers, they'll be garbage!”
The “Fe-before” sample is likely Fe with a native iron oxide.
Is the sample flat? Is the sample homogeneous (laterally and in depth)? Is the oxide thick?
If the iron oxide is thin (< 0.7 µm), the volume you analyse is composed of iron oxide(s) + iron metal. The prerequisites of the quantification model are not fulfilled.
The “Fe-after” sample is likely Fe with thick iron oxide(s).
Is the sample flat? Is the sample homogeneous (laterally and in depth)?
The prerequisites of the quantification model are not fulfilled.
Is the oxide thick?
If the iron oxide is thick (> 1 µm), the volume you analyse might be composed of the iron oxide(s) formed, but the prerequisites of the quantification model are not fulfilled.
As mentioned by Vladimir Dusevich the EDS spectra should be evaluated carefully. There are small peaks around 2 keV. For “Fe-before” at < 2 keV, for “Fe-after” at ca. 2.3 keV. What’s this?
> Also, I have some questions about SEM-EDX:
Your questions 1-5 are not directly related to EDX.
The chemistry of oxidation of iron (and steels) in aqueous solutions has been studied for decades because of the importance for corrosion and power generation and the results are now textbook knowledge.
I guess for electrolysis a lot of actual work can be found.
For thicker oxides scales composed of different oxides may form. Here, preparation and analysis of cross sections may give more insight.
Preparation methods and analysis will depend on the thickness and stability of the scales (fracturing, preparation of cross section by metallographic methods, FIB cuts; EDS in SEM or TEM?)
  • asked a question related to SEM Analysis
Question
6 answers
I synthesized ZIF-93 by the aqueous-phase method described in the article, but unlike the article, I obtained a tetrahedral structure instead of a rhombic dodecahedron. ZIF-93 is composed of zinc acetate dihydrate and 4-methyl-5-imidazolecarboxaldehyde. Can you tell me what causes the formation of tetrahedra?
Relevant answer
Answer
I have no idea. Unfortunately this is outside of our areas of expertise. Sorry. With best wishes, Frank Edelmann
  • asked a question related to SEM Analysis
Question
3 answers
Hello everybody,
I developed an SEM model. The model fit was good and my other hypothesized pathways were significant and in line with theory, but one pathway shows a seemingly meaningless relationship. My model predicts social anxiety from social stress; the path is significant but negative, which says that a decrease in social stress predicts an increase in social anxiety. What should I do now? Should I exclude the path, stating that it is meaningless, or do something else?
Thank you!
Relevant answer
This is not necessarily meaningless. Stress is of two types: eustress and distress. Not all stress is distress, and you need a certain amount of stress (eustress) to feel well.
  • asked a question related to SEM Analysis
Question
2 answers
Hi Everybody,
I am a research scholar presently working on tourism, conflict and peace studies. I developed a conceptual model and want to use Structural Equation Modelling Approach. I have a total of 4 constructs and 23 indicators in my model. I want to know what should be the minimum sample size for my study to do SEM analysis.
Thanking you very much in advance
Wanie Mehraj
Relevant answer
Answer
Ali A. Al-Allaq: I believe Mehraj Din Wanie 's question is about structural equation modeling (a statistical technique), not microscopy.
Although there are some rough guidelines and rules of thumb in the literature as to the minimum sample size required for SEM, those guidelines lack generality because the sample size depends on many different factors (e.g., model size, number of indicators, factor loadings/reliability of indicators, effect sizes, amount of missing data, normality of data, etc.). In my opinion, the best way to determine the optimal sample size is by running a Monte Carlo simulation with your specific model and the expected parameter values. That way, you can increase the certainty that the sample size estimate will actually apply to your specific case. For some guidance, see
I offer a free mini-course on sample size planning in SEM that you can find here:
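As a bare-bones illustration of the Monte Carlo idea in lavaan (simsem or Mplus offer more convenience), here is a sketch; the population values and N = 300 are invented for illustration only:
library(lavaan)
pop <- ' f1 =~ 0.7*x1 + 0.7*x2 + 0.7*x3
         f2 =~ 0.7*x4 + 0.7*x5 + 0.7*x6
         f2 ~ 0.3*f1 '
mod <- ' f1 =~ x1 + x2 + x3
         f2 =~ x4 + x5 + x6
         f2 ~ f1 '
pvals <- replicate(500, {
  d <- simulateData(pop, sample.nobs = 300)   # unspecified variances are left at lavaan's simulation defaults
  fit <- sem(mod, data = d)
  subset(parameterEstimates(fit), lhs == "f2" & op == "~" & rhs == "f1")$pvalue
})
mean(pvals < .05)   # empirical power for the f1 -> f2 path at N = 300; vary sample.nobs to plan the sample size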
  • asked a question related to SEM Analysis
Question
3 answers
Hello everyone. Requesting your kind suggestion: I'm struggling with the SEM analysis for my doctoral study in AMOS. My modified model showed GFI and AGFI values below the recommended value of 0.90. May I ask which alternative indices you would suggest instead of GFI and AGFI, and which references I should use? Thank you.
Relevant answer
Answer
These indices are fairly outdated anyway. It is more common nowadays to look at the chi-square test of model fit, RMSEA, CFI or TLI, and SRMR. Guidelines were provided by Hu and Bentler (1999) as well as Schermelleh-Engel, Moosbrugger, and Mueller (2003). My personal opinion is that the most relevant information about model (mis)fit is provided by the chi-square test and the model residuals.
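In lavaan, those statistics can be requested in one call (a sketch; mod and dat are placeholders):
library(lavaan)
fit <- cfa(mod, data = dat)
fitMeasures(fit, c("chisq", "df", "pvalue", "rmsea", "cfi", "tli", "srmr"))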
  • asked a question related to SEM Analysis
Question
1 answer
Hi all,
I've done a common latent factor (CLF) test. Could you recommend a threshold value for a good CLF (and, if possible, please add some references)? I used 0.2, as recommended by James Gaskin (on YouTube), and found that 4 out of 17 values exceeded 0.2 (ranging from 0.20 to 0.24). Is that acceptable?
P.S. The test of common method variance is the last step of my data analysis; the other tests (preliminary analysis and SEM) all have good results.
Or do you recommend another threshold value?
Many thanks!
Relevant answer
Answer
Hello Rowena,
I would not trust arbitrary threshold values for somewhat magical indices. It is better to test your model statistically with a chi-square test. Yes, there is an arbitrary threshold involved there too, but that threshold (the .05 p-value) does not tell you when the model is problematic; it represents a cultural agreement on when to regard a deviation from chance as noticeable.
If you tell me more about the model (including the exact question wordings and the intended meaning of the latent variables) and report the model chi-square test and df, I could possibly help :)
All the best
Holger
P.S. One note: Of course you can invent abbreviations like CLF as you like but this makes conversations difficult.
  • asked a question related to SEM Analysis
Question
2 answers
What is the best method for drying microfibrillated and nanofibrillated cellulose (MFC/NFC) for SEM analysis?
I would like to take a clear SEM images for MFC and NFC, from which I expect to see the morphological characteristics of fibrils. Please give your suggestions or comments. Thanks.
Relevant answer
Answer
First, NFC fibrils are too small to be accurately observed with SEM; you need to use AFM for those fibers. The coating thickness required is not insignificant and will obscure any details you may see on the larger-diameter MFC fibers. As cellulose fibers are hygroscopic, it is very unlikely that you will have a completely dry sample for SEM. The technique that I've found works is to let the sample dry in an oven at 50 °C, then store it in a nitrogen atmosphere until it is ready to coat. Once coated, you should have no issues obtaining good scans of your fibers. I coat to approximately 4 nm depth and obtain good-resolution images.
  • asked a question related to SEM Analysis
Question
5 answers
Dear all,
I have a question about a mediation hypothesis interpretation.
We have a model in which the direct effect of X on Y is significant, and its standardized estimate is greater than the indirect effect estimate (X -> M -> Y), which is significant too.
As far as I can understand, it should be a partial mediation, but should the indirect effect estimate be larger than the direct effect estimate to assess a partial mediation effect?
Or is the significance of the indirect effect sufficient to assess the mediation?
THanks in advance,
Marco
Relevant answer
Answer
Marco Marini as far as I know, you must have two conditions both verified for a partial mediation hypothesis to be confirmed:
1 - the indirect effect must be significant (X -> M -> Y) *
2 - the direct effect must be significant (X -> Y)
If both conditions are satisfied, then you have a partial mediation. If condition 1 is satisfied, but not condition 2, then you have a full mediation (i.e., your mediator entirely explains the effect of X over Y).
As Christian Geiser suggested: "Partial mediation simply means that only some of the X --> Y effect is mediated through M".
To my knowledge, the ratio between direct and indirect effect has no role in distinguishing between partial vs. full mediation.
* Please note: "the indirect effect must be significant" doesn't mean that paths a and b must both be significant. All you need is for the product a × b to be significant (ideally judged with bootstrapped confidence intervals).
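A minimal lavaan sketch of that bootstrapped a × b test (the variable names X, M, Y and the data frame dat are generic placeholders):
library(lavaan)
mod <- '
  M ~ a*X
  Y ~ b*M + c*X       # c is the direct effect
  ab    := a*b        # indirect effect
  total := c + a*b
'
fit <- sem(mod, data = dat, se = "bootstrap", bootstrap = 2000)
parameterEstimates(fit, boot.ci.type = "perc")   # judge ab by its bootstrap CI, not a and b separately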
  • asked a question related to SEM Analysis
Question
1 answer
When conducting an SEM analysis, if RMSEA, GFI, and chi-square/df reach the required levels but CFI is only 0.80, can I consider the model a good fit?
Relevant answer
Answer
Model fit assessment should be based on much more information than just the indices you mentioned. First of all, I would pay close attention to the chi-square test of model fit and its p value. If the test is significant, looking at model residual statistics is often useful for determining sources of misfit. Descriptive fit statistics such as RMSEA and CFI tend to be less sensitive to model misspecification, especially in larger samples. Also, a low CFI may indicate that the variables in your model have rather low correlations on average, as this index compares the fit of the target model to that of a baseline independence model.
Also important to look at are the parameter estimates and their standard errors. Are the parameter estimates all proper with the expected sign and do the standard errors look reasonable in terms of their magnitude?
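A lavaan sketch of those follow-up checks (mod and dat are placeholders):
library(lavaan)
fit <- sem(mod, data = dat)
residuals(fit, type = "cor")   # residual correlations: large entries point to sources of misfit
parameterEstimates(fit)        # check the signs of the estimates and the size of the standard errors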
  • asked a question related to SEM Analysis
Question
10 answers
Dear researchers:
Through my reading of some papers, I find that the welding efficiency may reach 100%, so that separation of the metal occurs away from the weld area during the tensile test.
From your point of view, how do you evaluate the microstructure of the weld zone?
With Regards
Relevant answer
Answer
Observation of Microstructure and Mechanical Properties in Heat Affected Zone of As-Welded Carbon Steel by Using Plasma MIG Welding Process
It was seen that the use of plasma MIG welding process has resulted in the refinement of the microstructure in the CGHAZ region, thus improving the mechanical properties of the as-welded SPCC steel. The highest microhardness values were obtained for conventional MIG in the CGHAZ regions. Incorporation of the plasma arc reduces the hardness, potentially increasing the ductility of the joining by plasma MIG welds. Further reduction in hardness can be obtained by decreasing the plasma current values.
  • asked a question related to SEM Analysis
Question
3 answers
Hi everyone, I want to know if someone has ever done SEM analysis of activated carbon using silicon wafers instead of carbon adhesive tape. Thank you.
Relevant answer
Answer
In general, carbon adhesive tape is not good for high-resolution SEM because it usually induces some drift. The easiest way to prepare a stable SEM sample of bulk materials (powders, granules, etc.) is to put a thin layer of conductive adhesive on the surface of a clean stub and then immediately pour on the powder. When the adhesive dries, just remove the excess material with a stream of pure air or nitrogen. That's it.
Good luck!
  • asked a question related to SEM Analysis
Question
2 answers
I have synthesized silver nanoparticles using PVA as surfactant and silver nitrate as precursor. I need to do SEM analysis and hence require the silver sample in powdered form. I tried freeze-drying the solution, but the lyophilised sample is not a powder; it is in a cotton-candy state. I tried to grind it in a mortar and pestle, but the sample gets contaminated and a visible colour change can be seen.
I suspect this cotton-candy state is due to the high percentage of PVA used. Please correct me if I'm wrong; if so, what could be done to obtain a powdered form or to powder the lyophilised sample?
Also, I tried to centrifuge it as part of the purification process: 10,000 rpm for 30 min at 4 °C. I got a small amount of brownish precipitate, literally about the size of a sugar crystal, and the supernatant was yellowish, the colour of the colloidal silver sample.
Is there any other method for purification or to proceed with SEM analysis?
Relevant answer
Answer
Thank you for the answer. I'll try this method.
  • asked a question related to SEM Analysis
Question
7 answers
In confirmatory factor analysis (CFA) in Stata, the first observed variable is constrained by default (its coefficient is fixed to 1, and the latent variable's mean is handled through the constant).
I don't understand what this means, because other software packages report coefficients for all observed variables.
So, I have two questions.
1- Which variable should be constrained in confirmatory factor analysis in stata?
2- Is it possible to have a model without a constrained variable like other software packages?
Relevant answer
Answer
Hello Seyyed,
I guess by "beta" you mean the factor loading? Traditionally, loadings are denoted by lambda, but Stata apparently labels them differently.
The fixation of the "marker variable" is needed a) to assign a metric to the latent variable (that of the marker) and b) to identify the equation system.
As far as I know, it does not matter which variable you choose, as long as it is a valid indicator of the latent variable.
HTH
Holger
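For comparison, the two identification choices look like this in lavaan (a sketch; Stata allows the same reparameterization, though its syntax differs, and mod and dat are placeholders):
library(lavaan)
fit_marker <- cfa(mod, data = dat)                  # default: first loading per factor fixed to 1 (marker variable)
fit_stdlv  <- cfa(mod, data = dat, std.lv = TRUE)   # alternative: factor variances fixed to 1, all loadings estimated
# both parameterizations fit the data identically; only the scaling of the latent variables differs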
  • asked a question related to SEM Analysis
Question
4 answers
Hi! I am trying to prepare hydroxyapatite scaffold samples for SEM imaging of cell growth. I have the Karnovsky's fixative kit but the procedure provided in the tech sheet (attached) is not sufficient for my applications. First, does anyone have a standard protocol for this SEM fixation using Karnovsky's fixative kit? Second, do I need to do the post-fix using OsO4 or is there an alternative method to the post-fix mentioned in the tech sheet? Can I do the fixation procedure without it, followed by the graded ethanol dehydration or will it have a negative impact on my sample preparation?
I would really appreciate any help answering this question. Thanks!
Relevant answer
Answer
If you have a cell monolayer, 30 min is a good time. If you have something like a developing tissue with a lot of collagen, then you need 1 h. HA is soluble in water (very slowly, but still...), so if your culture has started to generate small centers of mineralization, you do not want to keep it too long (days, weeks) in aqueous solutions. On the other hand, prolonged storage in a desiccator can lead to fungal growth; some desiccators are badly infested with fungus and need thorough cleaning and disinfection. In my opinion, the best way to store specimens is once their preparation is complete, i.e., they are dehydrated and coated with a conductive coating.
  • asked a question related to SEM Analysis
Question
7 answers
Hi, everyone!
I just received the comments of a reviewer who said:
you conducted an EFA and, based on the EFA results, ran the SEM. You are supposed to conduct a CFA to confirm the EFA results and finalize the measurement model before proceeding to SEM. You could fairly use half of the sample for the EFA and the other half for the CFA.
Actually, in my study I used the EFA to explore the possible dimensions of the higher-order constructs and then built a PLS-SEM model with the EFA results. However, I don't think I should also do a CFA.
So, how can I answer the reviewer? And is my method wrong?
Thanks!
Relevant answer
Answer
I have also come across this situation. In one of my papers, I used EFA and then the structural model, and recently received review comments that I should run a CFA. The reviewers' position was that if you have a validated scale, you must run a CFA; EFA is appropriate only at the exploration stage. As many experts have already answered this question, let me ask a little more:
1. When reliability and validity can be checked with Cronbach's alpha and expert validity, is it still mandatory to run a CFA to check validity and reliability?
2. If exploratory research extracts factors with their items and loadings, why do we further need to run a CFA?
Looking forward to hearing from experts.
  • asked a question related to SEM Analysis
Question
4 answers
I have a SEM model (with 9 psychological and/or physical activity latent variables) with cross-sectional data in which, guided by theory, different predictor and mediator variables are related to each other to explain a final outcome variable. After verifying the good fit of the model (and after being published), I would like to replicate such a model on the same sample, but with observations for those variables already taken after 2 and after 5 years. My interest is in the quasi-causal relationships between variables (also in directionality), rather than in the stability/change of the constructs. Would it be appropriate to test an identical model in which only the predictor exogenous variables are included at T1, the mediator variables at T2 and the outcome variable at T3? I have found few articles with this approach. Or, is it preferable to use another model, such as an autoregressive cross-lagged (ACL) model despite the high number of latent variables? The overall sample is 600 participants, but only 300 have complete data for each time point, so perhaps this ACL model is too complex for this sample size (especially if I include indicator-specific factors, second-order autoregressive effects, etc.).
Thank you very very much in advance!!
Relevant answer
Answer
Hi there,
a completely different approach would be to run a TETRAD model. This allows you to set some restrictions where you can be sure about the direction (e.g., the autoregressive effects, as well as forbidding reverse effects from t_n to t_n-1) and to explore the rest freely. The model will print a path diagram that shows you three things:
1) clearly supported causal effects (happens only rarely)
2) effects where one side (the arrowhead) is clearly "dependent" but the other end is ambiguous (may be a cause or a consequence)
3) completely ambiguous relationships
TETRAD has existed since the 80s and is remarkably invisible to our field.
Eberhardt, F. (2009). Introduction to the epistemology of causation. Philosophy Compass, 4(6), 913-925.
Malinsky, D., & Danks, D. (2018). Causal discovery algorithms: A practical guide. Philosophy Compass, 13(1), 1-11. https://doi.org/10.1111/phc3.12470
A few final comments
1) 90% of confirmatory SEMs are much too complex and involve dozens and sometimes hundreds of testable implications. Because of that, the models never fit, which (in an almost funny manner) is then used as support for the model ("well, models never fit, so why should mine?"). I would always focus on ONE or TWO essential effects or chains of effects and then try to a) think hard about confounders and b) find potential instruments.
2) Yes, causation needs time to evolve, but most often the time lag is embedded in the measurement; otherwise you would not have any cross-sectional correlations. That is, if the causal lag is similar to the lag embedded in the measure (e.g., "how satisfied are you with your job" will prompt an answer derived from memory) AND/OR the IV is stable, then cross-sectional data will generally allow you to identify causal effects. The key issue is and remains "causal identification", that is, removing confounding biases and potential reverse effects. The latter can be solved with a cross-lagged design, but not the former. That is, you have to think hard about confounding no matter what the temporal design is.
I had a long discussion in the following thread in case you differ in your opinion (which is fine for me):
Best,
Holger
  • asked a question related to SEM Analysis
Question
3 answers
Hi,
Could you please tell me how can we calculate effect size measures for structural equation modelling? Could we do it via AMOS? Are there any practical resources?
Thanks in advance,
Tahani
Relevant answer
Answer
"Effect size - f2 " tells whether a construct has a substantive impact on another one. Guidelines for assessing ƒ2 are (Cohen, 1988): values of 0.02, 0.15, and 0.35, respectively, represent small, medium, and large effects of an exogenous latent variable on an endogenous latent variable.
  • asked a question related to SEM Analysis
Question
9 answers
Hello everyone,
I am surprised to see that PLS-SEM is an accepted tool in different areas: Management, Marketing, Tourism, ..., having become an "alternative" tool to the previously prevalent CB-SEM. I think I'm not wrong if I say that in recent years PLS-SEM is more widely used than CB-SEM in these areas. However, this does not seem to be the case in psychology, where PLS-SEM does not have a significant presence.
Are the objectives or premises really different in these areas of knowledge to justify that PLS-SEM is really valid in some and not in others?
The areas in which PLS-SEM is accepted, are they less rigorous?
Is it a matter of time before PLS-SEM succeeds in displacing CB-SEM in psychology?
I would appreciate if someone could help me understand this.
Thank you
Relevant answer
Thank you, Francisco Arteaga, for the fascinating question, and thanks to the professors for the valuable answers; I agree with them.
Social researchers have access to numerous statistical methods, so choosing the proper technique can be difficult. For instance, when considering structural equation modelling (SEM), selecting between covariance-based SEM (CB-SEM) and variance-based partial least squares (PLS-SEM) can be difficult. The study cited below conducts a direct comparison using the same theoretical measurement and structural models and the same data set. To achieve an acceptable goodness-of-fit, a greater number of indicators had to be eliminated when employing CB-SEM than with PLS-SEM. In addition, composite reliability and convergent validity were typically higher when PLS-SEM was utilized, whereas other metrics such as discriminant validity and beta coefficients were comparable. PLS-SEM performed significantly better than CB-SEM when comparing the variance explained in indicators of the dependent variable. The updated guidelines help researchers determine whether CB-SEM or PLS-SEM is the more appropriate technique to employ.
Reference
Hair Jr, J. F., Matthews, L. M., Matthews, R. L., & Sarstedt, M. (2017). PLS-SEM or CB-SEM: updated guidelines on which method to use. International Journal of Multivariate Data Analysis, 1(2), 107-123.
  • asked a question related to SEM Analysis
Question
2 answers
Articles have only mentioned that cell-laden hydrogel scaffolds were lyophilized before SEM analyses for cell adhesion. However, no details were mentioned.
Relevant answer
Answer
Samples were fixed in 5% glutaraldehyde solution overnight; the fixative was then replaced with fresh sterilised PBS, which was changed three times, before the samples were soaked twice in fresh sterilised deionised water for one hour. Then the CCC samples were frozen at −80 °C for three hours before being placed in a Christ ALPHA 2–4 freeze-dryer for 24 hours.
  • asked a question related to SEM Analysis
Question
9 answers
The particle size of samples for XRD and SEM analyses is crucial for obtaining relevant results.
Relevant answer
Answer
Srikanth Satish Kumar Darapu: Be aware that the mineralogy is likely to change with particle size. Softer, more brittle minerals end up in the finer fractions, while the harder (gangue) materials may be found in the larger size fractions.
  • asked a question related to SEM Analysis
Question
7 answers
In my SEM analysis, all the paths from the constructs to the outcome construct were shown to be non-significant, although the model fit indices were all acceptable. My particular focus is on whether variable A is directly related to variable B, or whether the effect of A is fully mediated by C.
Suspecting that this result reflects a Type II error caused by multicollinearity among the latent constructs, I tried a regression analysis to test whether there is a significant direct effect of A on the outcome variable B. In this regression analysis, the measured variables for A were used. My question is whether this process (using regression analysis to look for a significant direct effect that was not found in the SEM analysis with latent variables) is statistically valid.
Relevant answer
Answer
Hello Hyunsoon,
Shifting from estimated factor scores as an IV to individual constituent manifest variable scores as IVs will do several things (none of which is particularly good):
1. Your comparison (does a path exist among latent variables in sem vs. does some linear combination of manifest variables relate to some other score, regardless of whether it is for a latent or manifest variable) is no longer "apples to apples" or the same research question, so no inference may be made as to whether one is better than another.
2. Regressing your "B" score on multiple A manifest indicators almost guarantees that the weights which might apply from your sem measurement model will be ignored in order to maximize the multiple R/R-squared in your regression. Hence, a relationship may or may not help make the case for the constructs being related.
3. The approach is somewhat like modifying the model in order to yield the results you would like to see, and so any interpretation of p-values (for a model changed after having looked at a prior analysis of the same data) is likely misleading, and the opportunity for overfitting is higher.
Finally, please note that model fit indices are driven by how well estimated model parameters serve to reproduce the observed correlations/covariances among the measured variables. You certainly can have good fit with no significant paths among latent variables, if the latent variables are unrelated.
Good luck with your work.
  • asked a question related to SEM Analysis
Question
3 answers
Dear fellow researchers,
Usually we use lavaan for continuous variables; can we still use lavaan with a categorical variable (e.g., high vs. low ethnic diversity composition)?
Thank you very much!
Best,
Edita
Relevant answer
Answer
Hello Edita,
A categorical variable having only two levels (e.g., coded 0/1) can be used in any linear model as an IV or antecedent variable.
If such a variable is the DV, however, it likely makes more sense to switch from linear to logistic models.
Good luck with your work.
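A small lavaan sketch of both situations (the variable names, including the binary dummy diversity_high, are invented placeholders):
library(lavaan)
mod <- '
  f1 =~ y1 + y2 + y3
  f1 ~ diversity_high        # a 0/1 dummy works as an ordinary predictor in a linear model
'
fit <- sem(mod, data = dat)
# if the indicators themselves are categorical, declare them; lavaan then switches to a WLSMV-type estimator
fit_cat <- sem(mod, data = dat, ordered = c("y1", "y2", "y3"))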
  • asked a question related to SEM Analysis
Question
4 answers
Hello everyone!
I currently have two measurement models. Both are correlated-factor models, and the factors reflect subscales of anxiety-related constructs. The estimator is robust maximum likelihood (to account for the lack of multivariate normality). This is in the context of examining construct independence. The first model keeps all subscales as separate (correlated) factors. The second model clusters factor 1 and factor 2 together, and factor 4 and factor 5 together.
model 1:
f1 =~ item1 + item2 + item3
f2 =~ item4 + item5 + item6
f3 =~ item7 + item8 + item9
f4 =~ item10 + item11 + item12
f5 =~ item13 + item14 + item15
f6 =~ item16 + item17 + item18
f7 =~ item19 + item20 + item21
model 2:
f12 =~ item1 + item2 + item3 + item4 + item5 + item6
f3 =~ item7 + item8 + item9
f45 =~ item10 + item11 + item12 + item13 + item14 + item15
f6 =~ item16 + item17 + item18
f7 =~ item19 + item20 + item21
Nested models are models in which the more restricted model can be obtained from the less restrictive one by constraining some of its parameters. I am new to this, and I have reviewed examples, but I cannot reach a conclusion. I would greatly appreciate an answer and the reasoning behind it!
(Note: based on the fit indices, I already know that the second model does not work; the fit indices are well below the acceptable thresholds and the differences are enormous. But if the models were non-nested, a chi-square difference comparison would not be appropriate, and I want to do the comparison anyway to learn how to do it correctly.)
Relevant answer
Answer
The models are nested. You can see this from the fact that Model 2 can be specified equivalently by setting two factor correlations in Model 1 to 1.0, namely between Factors 1 and 2 and between Factors 4 and 5.
There are two complications in this case, however, when it comes to applying nested-model chi-square testing. First, correlation coefficients of 1.0 are at the boundary of the admissible parameter space for correlations, violating assumptions of the chi-square test. Stoel et al. (Psychological Methods) developed a correction for this. The second complication is the non-normality correction of the chi-square values for each model: you would have to take the scaling correction factor into account when computing chi-square difference tests. It is perhaps easier to simply compare the absolute fit of the models or look at information criteria such as BIC.
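In lavaan, the scaled difference test is applied automatically when both models are fitted with a robust estimator (a sketch; model1 and model2 refer to the two syntaxes above, and dat is a placeholder). Note that this does not implement the boundary correction of Stoel et al.:
library(lavaan)
fit1 <- cfa(model1, data = dat, estimator = "MLR")   # 7-factor model
fit2 <- cfa(model2, data = dat, estimator = "MLR")   # model with the merged factors
lavTestLRT(fit1, fit2)                               # scaled (Satorra-Bentler type) chi-square difference test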
  • asked a question related to SEM Analysis
Question
5 answers
Dear experts,
For my thesis, I want to clarify the difference between a "moderator variable" and a "control variable" in SEM analysis. If they are the same thing, would it be possible for the same variable to play both roles?
Please help me.
Kind regards,
Thathsarani
Relevant answer
Answer
Dear Thathsarani,
there are five types of variables in any causal system (besides your target independent variable X and outcome Y):
1) Confounders, or variables lying on a "backdoor path". Confounders are variables that act as common causes of X and Y and create a spurious/non-causal connection between X and Y when you do not adjust/control for them. This non-causal connection is covariance that is not due to an X-->Y effect; it can create an effect where there is none at all, or bias the effect. The bias can be upward or downward depending on the X effect and the confounding effect (it is quite easy to predict; google "path tracing").
A backdoor path is the path from X to Y that passes through the confounder (see above). If a variable is part of this path but innocent in the confounding process (where the confounder is the guilty one), adjusting for that variable will also eliminate the bias. That is, even if you cannot control for the confounder itself, controlling for such a "surrogate confounder" will do the job.
THIS is what a control variable has to achieve.
2) Mediators are variables that transmit an effect of X on Y. Never accidentally control for them unless you deliberately want to eliminate the indirect effect (e.g., to exclude a certain process and see whether there are other processes or a residual direct effect). That is: take a look at your set of predictors/control variables and check whether there could be a mediator among them that you missed in the first place. Controlling for a mediator causes overcontrol bias (see Elwert for a nice example).
3) Moderators (better term: effect modifiers) are variables that affect the effect of X on Y; in other words, the moderator represents some kind of circumstance or context in which the effect may occur (or not), or that affects the size or direction of the effect. For instance, take the simple example of X = "banging your head against the wall", which results in Y = "headache". This effect is moderated by the dummy variable "helmet" (yes/no).
A moderator CAN also be a confounder (affecting X and Y) OR a mediator (transmitting some portion of the X-Y effect). The key issue is whether it affects the residual X-Y effect that results from simply controlling for the moderator.
4) Instrumental variables, finally, are variables that either a) are a cause of X but have no direct effect on Y, or b) have a relationship with X (but not Y) that is due to a confounder. Instrumental variables are valuable because they can identify a causal X-Y effect (in a two-stage least squares regression or an SEM; see Maydeu-Olivares et al., 2019) even when there is confounding and you cannot control for the confounder or for variables on the backdoor path.
Simply (mis)using instruments as controls will make a confounding bias worse! This is called Z bias or the bias amplification effect.
5) Colliders are variables that are a common EFFECT of X and Y (the opposite of confounders). Controlling for a collider accidentally will lead to a bias. The surprising thing: collider bias also happens when you control for Y itself (or select a subgroup, or stratify on Y) or for an outcome of Y! To illustrate that, take a look at this model (by the way, a brilliant browser tool!), move the mouse over Y (do not click), and type "A" (for adjust) on your keyboard.
This simulates an adjustment process. The bias is signalled by the effect of X turning purple. Now repeat this for Z (after typing "A" on Y again to remove the adjustment) and you will see the same result.
The solution to this puzzle lies in the error terms of Y and Z. To get an unbiased effect, there must not be a correlation between X and the error terms. Y, however, is a collider on the X-e1 path, and Z is a collider on the X-e2 path. Hence, controlling for Y and Z creates the bias. In graph-theoretical parlance, controlling for a collider "opens a closed path", creating a spurious relationship.
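As a hedged side note (not part of the original explanation), the collider mechanism is easy to reproduce in a small R simulation with made-up variables: x has no effect on y, z is a common effect of both, and conditioning on z induces a spurious negative x-y association.
set.seed(1)
n <- 10000
x <- rnorm(n)                        # exposure; has no effect on y
y <- rnorm(n)                        # outcome; independent of x
z <- 0.7 * x + 0.7 * y + rnorm(n)    # collider: common effect of x and y
round(coef(lm(y ~ x)), 3)            # x coefficient is close to 0 (correct)
round(coef(lm(y ~ x + z)), 3)        # x coefficient becomes clearly negative (collider bias)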
I know this is a lot to swallow but really understanding these variables helps a lot.
It should be noted that evaluating the role of these types of variables is a purely theoretical issue. The data will not tell you that.
If you have any question, please ask.
All the best,
Holger
Maydeu-Olivares, A., Shi, D., & Rosseel, Y. (2019). Instrumental variables two-stage least squares (2SLS) vs. maximum likelihood structural equation modeling of causal effects in linear regression models. Structural Equation Modeling: A Multidisciplinary Journal, 26(6), 876-892.
Elwert, F. (2013). Graphical causal models. In S. L. Morgan (Ed.), Handbook of causal analysis for social research. (pp. 245-273). Springer. https://doi.org/10.1007/978-94-007-6094-3
  • asked a question related to SEM Analysis
Question
4 answers
I'm currently doing SEM (Structural Equation Modeling) in R using the lavaan package and found that my data violated the normality and homoscedasticity assumptions. However, I get good CFI and RMSEA values. How is this possible? Does this mean the model is good? Do I still need to check the model assumptions? Thanks in advance.
Relevant answer
Answer
Your CFA/SEM fit statistics and standard errors may be biased due to non-normality when using standard maximum likelihood estimation. In case of non-normality, you can use maximum likelihood estimation with robust test statistics and standard errors such as the Satorra-Bentler correction to avoid/reduce bias.
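In lavaan, this amounts to requesting a robust estimator. A minimal sketch, assuming a hypothetical one-factor model with items y1-y4 and a data frame mydata (all names are placeholders, not taken from the question):
library(lavaan)
model <- 'f =~ y1 + y2 + y3 + y4'                      # hypothetical measurement model
fit <- cfa(model, data = mydata, estimator = "MLM")    # ML with Satorra-Bentler scaled test statistic
summary(fit, fit.measures = TRUE, standardized = TRUE)
# Report the robust (scaled) chi-square, CFI, and RMSEA rather than the uncorrected values.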
  • asked a question related to SEM Analysis
Question
4 answers
Hello everyone
I hope you are doing well
  • AA6061-T6 or AA7075-T6 Al alloy fusion-welded plates contain a FZ (fusion zone) with a dendritic structure. I want the dendrites to be identified separately and in the form of grains (whether they can be called grains or not is another issue). The figure shows the dendritic structure in the FZ, but it is not easy to separate the dendrites from each other. (The figure shows the FZ of fusion-welded AA7075 (not AA6061) etched with Keller's reagent.)
  • What do you suggest as the etchant solution for the SEM investigation of the PMZ and FZ grain boundaries of the AA6061 fusion weld sample?
  • If you have experience in this field, I would appreciate it if you shared it here.
Relevant answer
Answer
You can try a modified Poulton's reagent to etch this alloy. Use a freshly prepared reagent for better results. The composition of the modified Poulton's reagent is as follows:
50 ml Poulton's reagent + 25 ml HNO3 + 1 ml HF + 1 ml H2O.
(Poulton's reagent: 12 ml concentrated HCl + 6 ml HNO3 + 1 ml HF (48%) + 1 ml H2O.)
Etching time: 5 to 10 s.
  • asked a question related to SEM Analysis
Question
2 answers
Multigroup analysis in SEM is an excellent method for estimating measurement invariance across different groups. JASP software has a user-friendly GUI for the R package lavaan with embedded multigroup analysis. My experience is that a new analytical method should be learned not only through the theoretical framework, but also through insight gained from excellent yet simple examples. The following paper presents a simple approach that can be useful for novice researchers applying multigroup analysis in SEM.
1. What is your experience with multigroup analysis in SEM?
2. Which software do you use?
3. Can anyone share the syntax for the constraints in JASP?
Relevant answer
Answer
Agreed, multigroup CFA/SEM is a powerful method for group comparisons with regard to both measurement-related issues and structural (latent variable) parameters. Regarding your second question, I like to use Mplus for multigroup CFA and SEM as it now conveniently automates measurement invariance (MI) testing across groups. When using the setting
ANALYSIS: MODEL = CONFIGURAL METRIC SCALAR;
Mplus will automatically estimate those three models with different levels of MI and compare their fit via chi-square difference tests. I have a free Youtube video here in which I demonstrate this: https://www.youtube.com/watch?v=0vVv-TiJp-4&t=27s
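Regarding the third question: since JASP's SEM module is a GUI for lavaan, the same configural/metric/scalar sequence can be written directly in lavaan syntax. A minimal sketch with a hypothetical one-factor model and a grouping variable "country" (placeholder names, not from the question):
library(lavaan)
model <- 'f =~ x1 + x2 + x3 + x4'    # hypothetical measurement model
fit.configural <- cfa(model, data = mydata, group = "country")
fit.metric     <- cfa(model, data = mydata, group = "country", group.equal = "loadings")
fit.scalar     <- cfa(model, data = mydata, group = "country", group.equal = c("loadings", "intercepts"))
lavTestLRT(fit.configural, fit.metric, fit.scalar)     # chi-square difference tests between invariance levels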
  • asked a question related to SEM Analysis
Question
9 answers
Hello,
I need to synthesize a lithium-rich cathode using the sol-gel method. What is the best molar ratio between citric acid and transition metals in this method? Does citric acid have an effect on particle size?
thanks
Relevant answer
Answer
Dear all, complexing agents indeed also influence the morphology, shape, size, and size distribution. Various studies document their beneficial role. Please have a look at the following documents. My regards
doi: 10.1021/la903470f
DOI:10.3390/batteries6040048
  • asked a question related to SEM Analysis
Question
4 answers
I intend to pursue a Ph.D. and intend to apply Structural Equation Modelling on 'Factors influencing student success'. I have the following potential latent factors:
1. Academic Success Factors
2. Student Success,
3. Student Retention
There are many studies building on Vincent Tinto's 1995 model of student departure, and many factors have been suggested for retaining students. I also intend to build on Tinto's model, but applying SEM.
I am wondering whether the 'success factors' can be modelled through SEM as exogenous variable(s) influencing 'student success' and/or 'student retention'.
Would appreciate any advice, suggestions, or comments on this potential research (I have attached the proposed model).
Relevant answer
Answer
Lucky Sibanda I think that sounds useful - having a model that includes factors from the existing literature so their relative effects can be compared. In terms of pre-enrolment factors, I think Tinto's suggestion of controlling for students' commitment to persist with education from pre-enrolment is important. We have developed a model for measuring this which we hope to release as a working paper soon. We identified subfactors like social pressure to persist, expectation mismatch, and a couple of other things. Let me know if you are interested and maybe I can share it with you. Some aspects of commitment to persist will be culturally specific: in some regions and class groups very high proportions of people go to higher education, so there is more social pressure to get a degree even for those who do not really want one. I think accounting for structural factors pre- and post-enrolment is very important before looking at micro-sociological or psychological factors. Things like family or personal hardship are probably going to be more predictive than any psychological factor, but a model that accounts for structural factors can identify which psychological factors or social supports are protective once a factor that is outside an institution's control has been accounted for.
  • asked a question related to SEM Analysis
Question
4 answers
Hi there, I definitely do need your help!!! Looking through studies and books, I got a little confused by the different approaches used to conduct factor analyses for reflective scales before running a PLS analysis.
Some recommend carrying out exploratory factor analysis (EFA) using SPSS first, followed by covariance-based confirmatory factor analysis (CB-CFA) using e.g. AMOS. The results of these stepwise analyses (the retained items) are then carried over to PLS for further analysis. Others are pro EFA (in SPSS) but advise against using CB-CFA (e.g. AMOS) before the PLS analysis, criticizing that the two approaches have different underlying assumptions; instead, they recommend doing the CFA directly in PLS (using the EFA's results). Even within EFA there seems to be some confusion about which extraction method (principal component vs. principal axis vs. ...) and which rotation procedure (oblique vs. Varimax) are most appropriate when PLS is used afterwards.
So, my question: Are there any rules, or is there a generally accepted way to conduct EFA and CFA when using PLS? Could you provide me with corresponding references (published articles etc.)? Hope someone can help! Thanks in advance!
Relevant answer
Answer
Dear Yinan Li
Please see Table 1 and relevant studies/stages in the following recently published JR article. The article also provides an extensive web appendix, which should help you with EFA and CFA in a PLS-SEM based study. Please note that this is an open/free access article. Good luck!
Syed Mahmudur Rahman, Jamie Carlson, Siegfried P. Gudergan, Martin Wetzels, Dhruv Grewal. (2022). Perceived Omnichannel Customer Experience (OCX): Concept, measurement, and impact. Journal of Retailing, https://doi.org/10.1016/j.jretai.2022.03.003
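If it helps to see the two steps side by side outside SPSS/AMOS, here is a rough sketch in R (hypothetical items q1-q9, a three-factor solution, and a made-up item-to-factor assignment, purely for illustration) using principal axis factoring with an oblique rotation followed by a CFA:
library(psych)     # exploratory factor analysis
library(lavaan)    # confirmatory factor analysis
efa <- fa(mydata, nfactors = 3, fm = "pa", rotate = "oblimin")   # principal axis, oblique rotation
print(efa$loadings, cutoff = 0.4)
cfa.model <- '
  F1 =~ q1 + q2 + q3
  F2 =~ q4 + q5 + q6
  F3 =~ q7 + q8 + q9
'
fit <- cfa(cfa.model, data = mydata)
summary(fit, fit.measures = TRUE, standardized = TRUE)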
  • asked a question related to SEM Analysis
Question
2 answers
Greetings,
According to Hair et al. (2020), Confirmatory Composite Analysis (CCA) for assessing the quality of the measurement model includes nomological and predictive validity steps. Would someone explain how I can apply these in the SmartPLS software and exactly which indices or measures I should extract?
thank you in advance.
Relevant answer
Answer
Hi Omar,
Please see Table 1 and relevant studies/stages in the following recently published JR article. The article also provides an extensive web appendix, which should help you with PLS-SEM based analysis and reporting. Please note that this is an open/free access article. Good luck!
Syed Mahmudur Rahman, Jamie Carlson, Siegfried P. Gudergan, Martin Wetzels, Dhruv Grewal. (2022). Perceived Omnichannel Customer Experience (OCX): Concept, measurement, and impact. Journal of Retailing, https://doi.org/10.1016/j.jretai.2022.03.003
  • asked a question related to SEM Analysis
Question
8 answers
A common threshold for standardized coefficients in structural equation models is 0.1. But is this also valid for first difference models?
Relevant answer
Answer
Jochen Wilhelm I agree very much with your statement that "you should better do more research on the meaning of the variable you are actually analyzing." I think that this is generally desirable for many studies. I also agree that there is a tendency in the social sciences to overemphasize standardized coefficients and to not even report unstandardized coefficients. That is very unfortunate in my opinion, as I believe both are important and have their place.
That being said, there are fields (mine included: psychology) where we are dealing with variables that simply do not have an intuitive metric. Many variables are based on test or questionnaire sum or average scores. People use different tests/questionnaires with different metrics/scoring rules in different studies. What does it mean when, for example, subjective well-being is expected to change by 2.57 for every one-unit change in self-rated health and by 1.24 for every one-unit change in extraversion, when self-rated health is measured on a 0-10 scale and extraversion ranges between 20 and 50?
Standardized estimates can give us a better sense for the "strength" of influence/association in the presence of other predictors than unstandardized coefficients when variables have such arbitrary and not widely agreed upon metrics. The interpretation in standard deviation (SD) units is not completely useless in my opinion, especially since we operate a lot with SD units also in the context of other effect size measures such as Cohen's d. It allows us (often, not always) to see fairly quickly which variables are relatively more important as predictors of an outcome--we may not care so much about the absolute/concrete interpretation or magnitude of a standardized coefficient, but it does matter whether it is .1 or .6.
In addition, in the context of latent variable regression or path models (i.e., structural equation models), unstandardized paths between latent variables often have an even more arbitrary interpretation as there are different ways to identify/fix latent variable scales (e.g., by using a reference indicator or by standardizing the latent variable to a variance of 1). Regardless of the scaling of the latent variables, the standardized coefficients will generally be the same.
This does not mean that I recommend standardized coefficients over unstandardized coefficients. Variance dependence and (non-)comparability across studies/different populations are important issues/drawbacks of standardized coefficients. Unstandardized coefficients should always be reported as well, and they are very useful when variables have clear/intuitive/known metrics such as, for example, income in dollar, age, number of siblings (or pretty much any count), IQ scores, etc. Unstandardized coefficients are also preferable for making comparisons across groups/populations/studies that used the same variables. I would always report both unstandardized and standardized coefficients along with standard errors and, if possible, confidence intervals.
I believe there are many examples of regression or path models in psychology for which standardized coefficients were reported and that did advance our knowledge regarding which variables are more important than others in predicting an outcome.
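As a small, hedged illustration of the arithmetic behind this (the numbers are invented and only mirror the hypothetical example above): a standardized slope is simply the unstandardized slope rescaled by the predictor and outcome standard deviations.
set.seed(1)
health <- runif(300, 0, 10)                     # hypothetical 0-10 self-rated health scale
swb    <- 2.57 * health + rnorm(300, sd = 8)    # hypothetical subjective well-being score
b      <- coef(lm(swb ~ health))["health"]      # unstandardized slope (around 2.57)
b * sd(health) / sd(swb)                        # standardized slope in SD units
coef(lm(scale(swb) ~ scale(health)))[2]         # identical value from regressing z-scores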
  • asked a question related to SEM Analysis
Question
5 answers
If I use SmartPLS to test the structural model, how can I measure the Goodness of Fit Index (GFI)? What are the indices I need to observe to validate the research model?
  • asked a question related to SEM Analysis
Question
12 answers
I have to do SEM analysis of root samples colonized by a selected bacterium. After removing the roots from the pots, how long can I store the samples in 2.5% glutaraldehyde solution? Is there any special treatment to extend the storage period without affecting the colonizing microbes?
Relevant answer
Answer
Thank you so much again, Jessica Tay Ying Ling
It really helped.
  • asked a question related to SEM Analysis
Question
9 answers
I need to analyze plant tissue samples using SEM for which I am following the 2.5 % Glutaraldehyde fixation method. But, I cannot access SEM for a month and so need to store my plant samples. Should I fix my samples first and then refrigerate them? Or should I refrigerate them first for days and fix them before SEM analyses?
Relevant answer
Answer
I am no expert in plant microscopy, but since you still do not have relevant answers: in general, you can store biological specimens for a while in fixative solution (glutaraldehyde) in a refrigerator. Alternatively, fix the specimens, dehydrate them properly in graded alcohol solutions, stop at 70-96% alcohol, and store them in a refrigerator.
  • asked a question related to SEM Analysis
Question
4 answers
I ran my model, which consists of 5 factors, but factors 4 and 5 are not significant in the Pearson correlations. Is this OK? I would like to proceed to SEM.
Any suggestions? Thanks.
Relevant answer
Answer
@Imran Thank you
  • asked a question related to SEM Analysis
Question
22 answers
I am doing research on natural fiber composites. For composite analysis, the tests are as follows:
1. XRD
2.SEM
3.FTIR
4.DMA
5.DSC
6.TGDTA
7.UTM
8.UV Visible
Is it possible for a sample to be used in more than one test?
Please help me
Relevant answer
Answer
Yes, you can use the same sample for several techniques, provided those techniques do not modify the sample, i.e., they are non-invasive. In your case, you can use the same sample for UV-Vis, XRD, SEM, and FTIR.
  • asked a question related to SEM Analysis
Question
2 answers
Hi everyone,
in my SEM study, I have some problems with the Fornell-Larcker criterion for assessing discriminant validity. The square root of the respective AVE is smaller than some of the correlations of this factor with other factors, which is why I assume that discriminant validity is problematic.
I know that it is possible to merge factors and look at cross-loadings; however, this really does not make much sense in my case. Is there any other way of dealing with a lack of discriminant validity, or is it maybe possible to just mention the lack of discriminant validity as a limitation and continue with the interpretation of the model? It's for a Master's thesis, I am not planning to publish in a top journal ;)
Thanks in advance for your help!
Relevant answer
Answer
You could try moving to PLS or to regression analysis in SPSS.
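For completeness, the Fornell-Larcker check described in the question can also be reproduced explicitly. A hedged sketch in R/lavaan with a hypothetical two-factor CFA (factor and item names are placeholders), computing each factor's AVE from the squared standardized loadings and comparing its square root to the latent correlation:
library(lavaan)
model <- '
  F1 =~ a1 + a2 + a3
  F2 =~ b1 + b2 + b3
'
fit <- cfa(model, data = mydata, std.lv = TRUE)
std <- standardizedSolution(fit)
loadings <- subset(std, op == "=~")
ave <- tapply(loadings$est.std^2, loadings$lhs, mean)   # AVE = mean squared standardized loading
sqrt(ave)                                               # should exceed ...
subset(std, op == "~~" & lhs != rhs)                    # ... the factor correlation (F1 ~~ F2)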
  • asked a question related to SEM Analysis
Question
22 answers
Hi, I am running path analysis with latent variables. My model fit indices are good; however, some of the factor loadings are negative. Also, some of the standardized estimates are greater than 1: for example, the path from chemical to N2O is 1.60, and the path from topographical to N2O is -1.03.
Is it all right to have negative loadings in the attached path diagram? How can I correct this diagram?
Thanks
Relevant answer
Answer
I am disappointed that I am late for the party - a very interesting discussion for someone trying to understand SEM like myself. Waqar Ashiq I guess you are done with your study (it's impressive that you had to read widely and eventually improved your model).
Your question of negative loadings has been answered from the discussion above, I suppose. After looking at your two models, I have some questions, perhaps some people who are part of the discussion will clarify here ( Zainudin Awang , Christian Geiser , Marcel Grieger , Karin Schermelleh-Engel , David Eugene Booth )
I will refer to your initial model as Model 1 and the revised model as Model 2.
1. On Model 1, why did you include an error term on the Topographical latent independent variable (exogenous)? I think all the other latent variables should have an error term except this one.
2. Just like the above point raised, on Model 2, there should be no error term on the Landscape latent exogenous variable. Waqar Ashiq , I will be glad to get a source that justifies an error term on an independent latent variable.
3. On Model 1, I see many error terms of observed variables covaried (correlated) even though they belong to different latent variables. For example, e1 is covaried with e9 although they belong to two different latent variables, Topographical and Biological, respectively. I agree with what Imtiaz Ahmad mentioned. However, covarying e1 and e2 is appropriate since they both belong to the latent variable Topographical.
4. Is it advisable to use 2 items per latent variable in a SEM model (like in both models above)? I read somewhere that 3 or more are better.
5. In Model 1, the independent latent variables are not covaried. Is it optional to covary independent latent variables?
  • asked a question related to SEM Analysis
Question
3 answers
Hello.
I am doing path analysis with exogenous variables from my common garden experiments.
For this analysis, I took the following steps:
1) structure hypothetical model with exogenous (A) and endogenous (B, C, D, E, and Z) variables
Z ~ p1*A + p2*B + p3*C + p4*D + p5*E
C ~ p6*B + p9*A
D ~ p7*B + p10*A
E ~ p8*B + p11*A
B ~ p12*A
2) data processing;
2-1) calculate mean values of observed individuals
2-2) standardizing the mean values (mean = 0, sd = 1)
2-3) Dataset consists of 90 < n < 100 values of the above variables.
So I ran cfa from the lavaan package in R using the dataset and path model.
However, I ran into serious trouble with the results:
The model did not fit.
----------------------------------------------------------------------------------
lavaan 0.6-10 ended normally after 27 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 27
Used Total
Number of observations 89 94
Model Test User Model:
Test statistic 129.907
Degrees of freedom 6
P-value (Chi-square) 0.000
Model Test Baseline Model:
Test statistic 273.261
Degrees of freedom 21
P-value 0.000
User Model versus Baseline Model:
Comparative Fit Index (CFI) 0.509
Tucker-Lewis Index (TLI) -0.719
Loglikelihood and Information Criteria:
Loglikelihood user model (H0) -678.739
Loglikelihood unrestricted model (H1) -613.785
Akaike (AIC) 1411.478
Bayesian (BIC) 1478.671
Sample-size adjusted Bayesian (BIC) 1393.464
Root Mean Square Error of Approximation:
RMSEA 0.482
90 Percent confidence interval - lower 0.412
90 Percent confidence interval - upper 0.555
P-value RMSEA <= 0.05 0.000
Standardized Root Mean Square Residual:
SRMR 0.191
----------------------------------------------------------------------------------
1) Chi-square was significant.
2) The CFI and RMSEA values also indicated that the model does not fit.
To improve the fit, I tried modifying the dataset, changing the model, and removing NAs or adding extra values to the dataset, but the misfit did not change.
Here're my questions,
What can I do to fit the model? Change or modify something? Try to use the other functions or packages?
And
Can I run separate multiple regressions for the model instead? And can I then use the coefficients from those regressions in my paper?
Please kindly reply.
Thank you.
Relevant answer
Answer
Your model is overidentified with 6 degrees of freedom. This means that there are six testable restrictions in the model: six possible direct paths between variables are currently not estimated (these paths are implicitly fixed at zero). The high chi-square value indicates that some of these "not estimated"/fixed-to-zero direct paths/effects may be non-zero in the population. Are there any possible direct paths that you omitted in your model that may be non-zero? You can examine this by looking at standardized covariance residuals (these residuals can show you which observed covariances/correlations are underestimated by the model) and/or by looking at model modification indices. The modification indices may point you to those additional paths that should be freely estimated since they differ significantly from zero.
The model cannot properly be estimated via regression because it is overidentified (non-saturated, has > 0 degrees of freedom).
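In lavaan, that diagnostic step might look roughly like the sketch below, assuming the fitted object from your cfa()/sem() call is stored as fit (which additional paths, if any, to free should still be decided on theoretical grounds):
residuals(fit, type = "cor")    # correlation residuals: large values flag underestimated associations
modindices(fit, sort. = TRUE)   # modification indices, largest first; candidates for additional paths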
  • asked a question related to SEM Analysis
Question
8 answers
It is usual to say that causality is time-dependent. However, a reviewer sent this message for a study "The authors state that one limitation is the cross-sectional design, which does not enable establishing a cause-and-effect relationship. However, the paper intends to evaluate causal mechanisms. Please be coherent: if you believe causal effects are not plausible to be evaluated, then the whole conceptualization of your study is unsupported".
Could you help me understand the concept of causality in SEM?
Relevant answer
Answer
I think the bottom line of the reviewer comment is that the cross-sectional design is inadequate for examining the focal research question or hypotheses on causal effects. This is not primarily a stats issue but a matter of research methods in more general terms.
I have seen lots of submissions to academic journals applying sequential designs. That is, researchers introduce a time lag of a few weeks between measuring predictor and criterion, but do not include repeated measures of the focal variables (no panel data). In my view, such designs are a more or less desperate attempt to avoid desk rejection because of cross-sectional data.
There is quite a bit of research suggesting that sequential designs do not perform better than cross-sectional data (see below).
I agree that experimental or truly longitudinal survey data are a much better fit for testing causal effects. In my view, providing evidence for causality with non-experimental data ranges from possible only in limited ways to impossible. I can recommend the chapter by John Antonakis, which discusses among other things the role of third variables. I like his example in which one might conclude that the loud noise (rather than the rifle shot) shatters the discs, because the crack of the noise and the shattering of the discs are so highly correlated. :-)
In my view, leaving the causality obsession (in psychology) aside, there are lots of meaningful and important research questions that can be addressed with correlational data (even cross-sectional data), e.g. regarding the interplay of variables or the relative weight of predictors (no matter if these predictors are the only or real causes).
Best
Oliver
Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2014). Causality and Endogeneity. The Oxford Handbook of Leadership and Organizations. https://doi.org/10.1093/oxfordhb/9780199755615.013.007
O’Laughlin, K. D., Martin, M. J., & Ferrer, E. (2018). Cross-Sectional Analysis of Longitudinal Mediation Processes. Multivariate Behavioral Research, 53(3), 375–402. https://doi.org/10.1080/00273171.2018.1454822
Mitchell, M. A., & Maxwell, S. E. (2013). A Comparison of the Cross-Sectional and Sequential Designs when Assessing Longitudinal Mediation. Multivariate Behavioral Research, 48(3), 301–339. https://doi.org/10.1080/00273171.2013.784696
  • asked a question related to SEM Analysis
Question
2 answers
Hi everyone,
I have longitudinal data for the same set of 300 subjects over seven years. Can I use 'year' as a control variable? Initially, I used a one-way ANOVA and found no significant differences across the seven years in any construct.
Which approach is more appropriate: pooling the time series after the ANOVA (if not significant) or using 'year' as a control variable?
Relevant answer
Answer
I think that when no significant difference is found, it is not appropriate to use 'year' as a control variable. It is better to allow PLS to create its own groups, if any are present in the data.
  • asked a question related to SEM Analysis
Question
7 answers
What are the main differences between XRD & SEM Analysis?
Relevant answer
Answer
XRD analysis determines the crystal structure, i.e., the formation of the desired phase(s). Peak analysis yields the unit-cell volume, and for nanomaterials the crystallite size can also be computed. In contrast, SEM analysis reveals the surface structure: grains, grain boundaries, and porosity.
In brief, XRD provides information about the interior of a material, while SEM gives only surface information.
  • asked a question related to SEM Analysis
Question
5 answers
I want to analyse cross-section samples by SEM whose constituent layers have been applied by dip-coating, using stainless steel (SS) and glass as substrates. However, I do not know whether I have to prepare the samples in a special way for this analysis and, if so, how I should prepare them beforehand without altering the applied layers. I have already tried to cut the glass samples using a diamond-tip cutter, but the cut was neither clean nor precise.
Thank you for your help.
Relevant answer
Answer
If an ion mill (as described by Pierre Caulet) is out of reach, you can use metallographic polishing. To preserve thin layers it is better to make a specimen "sandwich": glue two pieces of the specimen together with a thin layer of epoxy, with the substrates facing outward. Then embed and polish.