
Advanced Statistical Analysis - Science topic

Explore the latest questions and answers in Advanced Statistical Analysis, and find Advanced Statistical Analysis experts.
Questions related to Advanced Statistical Analysis
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
To my knowledge, the total effect in mediation reflects the overall impact of X on Y, including the magnitude of the mediator (M) effects. A mediator is assumed to account for part or all of this impact. In mediation analysis, statistical software typically calculates the total effect as: Total effect = Direct effect + Indirect effect.
When all the effects are positive (i.e., the direct effect of X on Y (c’), the effect of X on M (a), and the effect of M on Y (b)), the interpretation of the total effect is straightforward. However, when the effects have mixed or negative signs, interpreting the total effect can become confusing.
For instance, consider the following model: X: Chronic Stress, M: Sleep Quality, Y: Depression Symptoms. Theoretically, all paths (a, b, c’) are expected to be negative. In this case, the indirect effect (a*b) should be positive. Now, assume the indirect effect is 0.150, and the direct effect is -0.150. The total effect would then be zero. This implies the overall impact of chronic stress on depression symptoms is null, which seems illogical given the theoretical assumptions.
Let’s take another example with mixed signs: X: Social Support, M: Self-Esteem, Y: Anxiety. Here, the paths for a and c’ are theoretically positive, while b is negative. The indirect effect (a*b) should also be negative. If the indirect effect is -0.150 and the direct effect is 0.150, the total effect would again be zero, suggesting no overall impact of social support on anxiety.
This leads to several key questions:
1. Does a negative indirect effect indicate a reduction in the impact of X on Y, or does it merely represent the direction of the association (e.g., social support first improves self-esteem, which in turn reduces anxiety)? If the second case holds, should we consider the absolute value of the indirect effect when calculating the total effect? After all, regardless of the sign, the mediator still helps to explain the mechanism by which X affects Y.
2. If the indirect effect reflects a reduction or increase (based on the coefficient sign) in the impact of X on Y, and this change is explained by the mediator, then the indirect effect should be added to the direct effect regardless of its sign to accurately represent the overall impact of both X and M.
3. My main question is: Should I use the absolute values of all coefficients when calculating the total effect?
Relevant answer
Answer
Yes, the signs of the direct and indirect effects do matter when calculating the total effect in mediation analysis. Here's how the signs influence the total effect:
Breakdown of Effects in Mediation:
  1. Direct Effect: The effect of the independent variable (X) on the outcome variable (Y) without considering the mediator.
  2. Indirect Effect: The effect of X on Y through the mediator (M). This is calculated as the product of the effect of X on M (a) and the effect of M on Y while controlling for X (b): Indirect effect = a × b.
  3. Total Effect: The combined effect of X on Y, accounting for both the direct path and the mediated (indirect) path. It is the sum of the direct and indirect effects: Total effect = Direct effect + Indirect effect.
How Signs Matter:
  • If both the direct effect and the indirect effect have the same sign (both positive or both negative), the total effect will increase in magnitude.
  • If the direct effect and indirect effect have opposite signs, they will work against each other, and the total effect will decrease in magnitude or potentially even change direction (depending on the relative sizes of the effects).
Example:
  1. Positive direct effect and positive indirect effect: Direct effect = +0.5; Indirect effect = a × b = +0.3 × +0.4 = +0.12; Total effect = +0.5 + 0.12 = +0.62.
  2. Negative direct effect and positive indirect effect: Direct effect = -0.5; Indirect effect = +0.12; Total effect = -0.5 + 0.12 = -0.38.
  3. Opposing signs: Direct effect = +0.5; Indirect effect = -0.12 (e.g., if a = -0.3 and b = +0.4); Total effect = +0.5 - 0.12 = +0.38.
Interpretation:
  • The signs of the direct and indirect effects influence whether the mediator amplifies or reduces the overall effect of the independent variable on the outcome.
  • If the signs are opposite, the mediator might be suppressing the effect of X on Y, or even reversing it, depending on the magnitude of the indirect effect.
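To make the arithmetic concrete, here is a minimal R sketch (the variable names X, M, Y and the data frame dat are hypothetical) using lavaan, which reports the signed indirect and total effects directly:
library(lavaan)
model <- '
  M ~ a * X             # path a: X -> M
  Y ~ b * M + cp * X    # path b: M -> Y; cp: direct effect of X on Y
  indirect := a * b     # signed indirect effect
  total := cp + a * b   # total effect = direct + indirect
'
fit <- sem(model, data = dat)
summary(fit)  # estimates, signs and significance of indirect and total
Note that the defined parameters keep their signs; lavaan does not take absolute values, which matches the standard definition of the total effect.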
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
What are the processes to extract ASI microdata with STATA and SPSS?
I have microdata in STATA and SPSS formats. I want to know about the process. Is there any tutorial on YouTube for ASI microdata?
Relevant answer
Answer
Good morning Sir Florian Schütze
Thank you very much for your reply/comment.
I have visited there. I found videos for PLFS and NSS but not for ASI.
From the MoSPI microdata catalog, I have downloaded the data but am unable to get specific variables' quantities. Variables like the number of firms and operated firms I could get, but I am unable to get fixed capital, input, output, and other variables. I merged two blocks and applied the formula, but perhaps there is some mistake, so I am not getting the values.
  • asked a question related to Advanced Statistical Analysis
Question
6 answers
I want to use SPSS Amos to calculate SEM because I use SPSS for my statistical analysis. I have already found some workarounds, but they are not useful for me. For example, using a correlation matrix where the weights are already applied seems way too confusing to me and is really error prone since I have a large dataset. I already thought about using Lavaan with SPSS, because I read somewhere that you can apply weights in the syntax in Lavaan. But I don't know if this is true and if it will work with SPSS. Furthermore, to be honest, I'm not too keen on learning another syntax again.
So I hope I'm not the first person who has problems adding weights in Amos (or SEM in general) - if you have any ideas or workarounds I'll be forever grateful! :)
Relevant answer
Answer
You can see www.Stats4Edu.com
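One concrete route, offered as a hedged sketch rather than a tested recipe: recent lavaan versions (0.6 and later) accept a sampling.weights argument, so weights can be applied without building a weighted correlation matrix. The model syntax and the weight column wt below are hypothetical:
library(lavaan)
model <- '
  latent1 =~ item1 + item2 + item3   # hypothetical measurement model
  latent2 =~ item4 + item5 + item6
  latent2 ~ latent1
'
fit <- sem(model, data = dat, sampling.weights = "wt", estimator = "MLR")
summary(fit, fit.measures = TRUE)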
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
Hi everyone,
If you have written or come across any papers where Generalised Linear Mixed Models are used to examine intervention (e.g., in mental health) efficacy, could you please share the link/s? I'd love to see how the results are laid out and reported.
Thank you!
Relevant answer
Answer
Ravisha Jayawickrama Thanks for the domain info; I have a background in experimental psychology, so it makes perfect sense. :) Now could you share how you specified the model? Holger Steinmetz, that sounds interesting. I'm working on learning a useful approach for the pragmatics of having a single model in those types of situations, i.e., a multivariate response with different distributions (for example, one binomial and the other normal).
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
Hi everyone,
I ran a Generalised Linear Mixed Model to see if an intervention condition (video 1, video 2, control) had any impact on an outcome measure across time (baseline, immediate post-test and follow-up). I am having trouble interpreting the Fixed Coefficients table. Can anyone help?
Also, why are the last four lines empty?
Thanks in advance!
Relevant answer
Answer
Alexander Pabst I would add that the first thing to do is a likelihood ratio test to see whether the model with the fixed effects fits better than a model without them. I see that two of the interaction terms may be significant, but that's contingent on the overall system of variables being 'significant'. Personally I don't use Wald tests; their approximation sometimes isn't very good. I would use stepwise LRTs to determine whether a term (or system of terms) should be included in the model (although in some mixed-model situations one needs to use something like the BIC).
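A minimal sketch of that stepwise LRT idea in R (lme4; the variable names and binomial family are assumptions for illustration):
library(lme4)
# null model: random structure only
m0 <- glmer(outcome ~ 1 + (1 | subject), data = dat, family = binomial)
# full model: condition-by-time fixed effects
m1 <- glmer(outcome ~ condition * time + (1 | subject), data = dat, family = binomial)
anova(m0, m1)  # likelihood ratio test for the whole fixed-effect system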
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
Hello everyone, I hope you're doing well.
I recently conducted a test on simulating near-field reflections, using a measured dataset of OBRIRs from a KEMAR HATS in an anechoic chamber, facing a reflective surface at distances of 0.25 m and 0.5 m, as the hidden reference. I then created a simulated room and generated OBRIRs with the AKTools room simulation software, using near-field HRTFs (matching the 0.25 m and 0.5 m distances) and a far-field HRTF (an overall 2 m measurement).
These were then presented to listeners over headphones with head tracking, convolved with separate male and female voice stimuli that had been modelled to come from the listener's mouth (the listener had to imagine they were speaking). For each comparison, listeners were asked to pick which of the 3 options (the measured OBRIR, the near-field HRTF, the far-field HRTF) they thought was the most real/believable/plausible and then rate it on a scale from 1 to 6 (1 = not at all, 6 = very plausible). In each comparison the options were randomised so that the listener wouldn't get used to picking the same one. This was repeated 3 times for each voice, and then another 3 times for the other distance, giving a total of 12 trials per listener (3 male 0.25, 3 female 0.25, 3 male 0.5, 3 female 0.5).
My hypothesis was that the three options would be equally plausible, so listeners' choices would split evenly (1/3 each). I thought a chi-square test would be suitable; however, it is not valid here, as the data contain multiple answers from each listener.
I can't seem to find a data analysis method that works for this setup. I thought about taking only each listener's initial response for male 0.25, female 0.25, male 0.5, and female 0.5 and comparing those, somehow using chi-square?
I was also intrigued whether the distance and the voice had an effect on which option the listeners preferred.
There does seem to be a slight difference in which option was preferred. Across 22 listeners, the far-field HRTF was chosen 105 times, compared to 71 for the reference and 88 for the near-field. I'm mostly looking for tests that can say whether this is statistically significant, though with a sample size of 22 I doubt I'll be able to make any huge judgements. Some listeners also caught on to which option they preferred and gave the same answer each time; I'm not sure whether I need to exclude this.
Any advice you can provide is greatly appreciated, any further questions or information you need please let me know!
Thank you!
Relevant answer
Answer
For repeated measures data, alternatives to the Chi-square test include:
  • McNemar's test: Specifically for binary repeated measures data.
  • Cochran's Q test: For repeated measures of a binary outcome.
  • Fisher's exact test: Similar to Chi-square but used when sample sizes are small.
These tests are suited for analyzing categorical data across multiple time points or conditions.
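As a minimal base-R illustration of the paired case (hypothetical data; McNemar's test on two repeated binary choices):
chose_ref_t1 <- c(TRUE, TRUE, FALSE, TRUE, FALSE, TRUE)   # hypothetical choices, occasion 1
chose_ref_t2 <- c(TRUE, FALSE, FALSE, TRUE, TRUE, TRUE)   # same listeners, occasion 2
mcnemar.test(table(chose_ref_t1, chose_ref_t2))
# For a binary outcome repeated over more than two occasions, Cochran's Q is
# available in add-on packages (e.g., DescTools::CochranQTest, if installed).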
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
We are looking for a highly qualified researcher with expertise in advanced statistical analysis to contribute to a scientific article to be submitted to a prestigious journal by the end of the year (2024). The article will focus on the adoption of digital innovations in agriculture.
Key responsibilities:
- Carry out in-depth statistical analysis using a provided database (the dataset is ready and available in SPSS format).
- Apply advanced statistical techniques, including structural equation modelling and/or random forest models.
- Work closely with the team to interpret the results and contribute to the manuscript.
The aim is to fully analyse the data and prepare it for publication.
If you are passionate about agricultural innovation and have the necessary statistical expertise, we would like to hear from you.
Relevant answer
Answer
Carlos Parra-López this sounds interesting. I'm interested, but if you like, we can have a preliminary discussion earlier.
  • asked a question related to Advanced Statistical Analysis
Question
1 answer
Hi everyone.
When running a GLMM, I need to turn the data from wide format to the long format (stacked).
When checking for assumptions like normality, do I check them for the stacked variable (e.g., outcomemeasure_time) or for each variable separately (e.g., outcomemeasure_baseline, outcomemeasure_posttest, outcomemeasure_followup)?
Also, when identifying covariates via correlations (Pearson's or Spearman's), do I use the separate variables or the stacked one?
Normality: say normality is violated for outcomemeasure_baseline but not for the others (outcomemeasure_posttest and outcomemeasure_followup), and not for the stacked variable either. In this case, when running the GLMM, do I adjust for normality violations because normality was violated for one of the separate measures?
Covariates: say age was identified as a covariate for outcomemeasure_baseline but not for the others (separately: outcomemeasure_posttest and outcomemeasure_followup, or the stacked variable). In this case, do I include age as a covariate since it was identified as one for one of the separate variables?
Thank you so much in advance!
Relevant answer
Answer
The normality assumption only matters for a model with normally (Gaussian) distributed errors (an LMM): what should approximate normality are the residuals of the model, and you check whether that assumption is reasonable. Since you use the term GLMM, have you selected a model with a different distribution and link function? If these words sound like gibberish, it might help to search the terminology I just used or to find a few introductory articles or books. Best
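On the practical side of the question, the wide-to-long reshape usually looks like this in R (tidyr; the outcome column names are taken from the question, the id column is an assumption):
library(tidyr)
long <- pivot_longer(
  wide_data,
  cols = c(outcomemeasure_baseline, outcomemeasure_posttest, outcomemeasure_followup),
  names_to = "time",
  names_prefix = "outcomemeasure_",
  values_to = "outcome"
)
# Distributional checks then apply to the model residuals, not the raw columns:
# fit <- lme4::lmer(outcome ~ time + (1 | id), data = long); plot(resid(fit))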
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
My student has a question that I cannot answer either. She is analyzing the effect of ICT on labor productivity using an 8-year data panel with 4 independent variables in EVIEWS 13. Frankly, I was quite surprised that the R-squared in her results is 0.94 with only 2 significant variables. Theoretically, such a high R-squared in a simple regression model most likely indicates problems with the model. Recently, I asked her to recalculate the data using STATA, and the results show an R-squared of only 0.51 with exactly the same coefficients.
I've searched some articles about this; some say EVIEWS might be wrong, and some say STATA is wrong. Can someone explain what I should do and which software to use?
note:
1. Some articles say to use the areg command in STATA to obtain a value similar to EVIEWS, but I have doubts because areg is meant for absorbing categorical variables in STATA and does not quite fit a panel regression model.
2. Some say the EVIEWS calculation is wrong.
Relevant answer
Answer
Because the two packages compute R-squared differently for fixed-effects panel models: Stata's xtreg, fe reports the within R-squared, while EViews (like Stata's areg) reports the R-squared of the full regression including the fixed effects. That is why the coefficients match exactly while the R-squared values differ.
  • asked a question related to Advanced Statistical Analysis
Question
6 answers
Hi everyone,
Does anyone have a detailed SPSS (v. 29) guide on how to conduct Generalised Linear Mixed Models?
Thanks in advance!
Relevant answer
Answer
Ravisha Jayawickrama don't thank Onipe Adabenege Yahaya but ChatGPT; you could have gotten the same answer yourself.
  • asked a question related to Advanced Statistical Analysis
Question
1 answer
I'm doing a research proposal and want to compare bilingual people to monolingual people on a dot perspective task.
- The first IV  (IV1) will be language ability with two levels: monolingual (control) /bilingual
- The second IV (IV2) will ONLY be applied to the bilingual group: participants are informed that the avatar is bilingual or not (so two levels again, repeated measures with counterbalancing)
The DV is reaction times in the dot perspective task.
I am just wondering how I would go about analysing this? I was thinking an ANOVA, but as the control group are not exposed to IV2 do I just simply compare the means of all groups?
I want to compare
  1. Control group reaction times to BOTH levels of IV2 combined (overall RT for bilinguals)
  2. Control group reaction times to each level of IV2
  3. Level 1 vs level 2 of IV2 (whether avatar is said to be bilingual or not)
Is it best to split this study into 2 experiments or is it possible to keep it as one and analyse it as one?
Relevant answer
Answer
Hello,
You can use a mixed-design ANOVA with language ability as a between-subjects factor and the avatar's language ability as a within-subjects factor for the bilingual group only. Planned contrasts or post-hoc tests can compare the control group to bilinguals.
Hope this helps
  • asked a question related to Advanced Statistical Analysis
Question
2 answers
Suppose that we have three variables (X, Y, Z). According to past literature, Y mediates the relationship between X & Z, while X mediates the relationship between Y & Z. Can I analyze these interrelationships in a single SEM using a duplicate variable for either X (i.e., Xiv & Xdv) or Y (Yiv & Ydv)?
Relevant answer
Answer
It is possible to use the same variable twice, once as a mediator and once as an independent variable. This methodology enables a more comprehensive examination of the connections inside the model.
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
What are the possible ways of rectifying a lack-of-fit test showing up as significant? Context: optimization of lignocellulosic biomass acid hydrolysis (dilute acid) mediated by nanoparticles.
  • asked a question related to Advanced Statistical Analysis
Question
6 answers
Hello,
I have the following problem. I have made three measurements of the same event under the same measurement conditions.
Each measurement has a unique probability distribution. I have already calculated the mean and standard deviation for each measurement.
My goal is to combine my three measurements to get a general result of my experiment.
I know how to calculate the combined mean: (x_comb = (x1_mean+x2_mean+x3_mean)/3)
I don't know how to calculate the combined standard deviation.
Please let me know if you can help me. If you have any other questions, don't hesitate to ask me.
Thank you very much! :)
Relevant answer
Answer
What is the pooled standard deviation?
The pooled standard deviation is a method for estimating a single standard deviation to represent all independent samples or groups in your study when they are assumed to come from populations with a common standard deviation. The pooled standard deviation is the average spread of all data points about their group mean (not the overall mean). It is a weighted average of each group's standard deviation.
Attached is the formula.
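Since the attachment is not reproduced here, the standard formula for k = 3 groups with sizes n1, n2, n3 and standard deviations s1, s2, s3 is:
s_pooled = sqrt( ((n1 - 1)*s1^2 + (n2 - 1)*s2^2 + (n3 - 1)*s3^2) / (n1 + n2 + n3 - 3) )
With equal group sizes this reduces to sqrt((s1^2 + s2^2 + s3^2)/3). Note this pools only the within-measurement spread; if the three means differ and you want the standard deviation of all raw data around the combined mean, a between-measurement variance term has to be added on top.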
  • asked a question related to Advanced Statistical Analysis
Question
1 answer
Dear Community,
I have a question regarding the use of Partial Least Squares Regression (PLSR). Basically, I am confused about the units. For example, I have land cover for Year 1, Year 2, and Year 3, and Water Balance Components. The units of the Water Balance Components are mm, while the units of each land cover type for years 1, 2, and 3 are square km. I am confused about how the PLSR test will handle the different units.
Alternatively, should I use the % difference in each year, or the % of a particular land cover type relative to the total area of the basin, and similarly convert the water balance variables from mm to percentages?
I am looking for guidance. Please teach me.
Regards
Relevant answer
Answer
In my opinion, normalizing each component will solve the problem!
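For what it's worth, a hedged sketch of that idea in R: standardize each variable (z-scores) so mm and square km end up unitless before PLSR; the pls package can also autoscale internally via scale = TRUE. The object names are hypothetical:
library(pls)
X <- scale(landcover_km2)      # hypothetical matrix of land-cover areas (km^2)
y <- scale(water_balance_mm)   # hypothetical water-balance component (mm)
fit <- plsr(y ~ X, ncomp = 2)  # predictors and response already standardized above
summary(fit)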
  • asked a question related to Advanced Statistical Analysis
Question
1 answer
Hello everyone,
I am performing multiple comparisons at the same time (post hoc tests), but among all the possible p value adjustments available (Bonferroni, Holm, Hochberg, Sidak, Bonferroni-Sidak, Benjamini-Hochberg, Benjamini-Yekutieli, Hommel, Tukey, etc.), I don't know which one to choose... And I want to be statistically correct for the comparisons that I am making in my experiment.
In my experiment, there are 4 groups (say A, B, C, D), but I only want to compare A vs B and C vs D. That's all. So, after performing Wilcoxon tests, the non-parametric equivalent of a t-test (because I have such a small number of replicates per group (n = 6) plus non-normality in some groups), for A vs B and C vs D, I don't know which p-value adjustment should be performed here.
I would like to understand 1. which adjustment I should perform here, and 2. how to decide which test to perform for any other analysis (what is the reasoning).
Thanks in advance for your response,
Relevant answer
Answer
Hi! Was your query answered? I am confused about a similar set up of mine!
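For the two planned comparisons described above (A vs B and C vs D), one minimal base-R pattern is to adjust just those two p-values together; Holm is never less powerful than Bonferroni. The vector names are hypothetical:
p_raw <- c(
  AB = wilcox.test(values_A, values_B)$p.value,
  CD = wilcox.test(values_C, values_D)$p.value
)
p.adjust(p_raw, method = "holm")  # adjusted p-values for the two planned tests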
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
I have a dataset that includes 1,900 companies, and I surveyed 10 employees in each company. There is a question about the risk preference of each employee. Now I need to calculate the ICC1 and ICC2 values for each company. I have already coded each company, so each company has a unique company_id. I have the employee dataset, meaning 19,000 rows, and each employee is matched to a company via company_id. In this case, how do I get the ICC1 and ICC2 values in R? I have been trying for a few days and hope someone can resolve my problem.
Relevant answer
Answer
P.S.: Paul Bliese has a multilevel tutorial for R, where he shows how to calculate the above-mentioned indices, as well as others, since all have their specific problems, which would lead too far to discuss here.
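A minimal sketch along the lines of Bliese's multilevel package (the column names are hypothetical). Note that ICC1 and ICC2 describe the grouping structure as a whole rather than giving one value per company:
library(multilevel)
fit <- aov(risk_preference ~ as.factor(company_id), data = employees)
ICC1(fit)  # proportion of variance attributable to company membership
ICC2(fit)  # reliability of the company means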
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
In the case of the constant (intercept) coefficient, what does a VIF greater than 10 mean? Do all the variables in the model exhibit multicollinearity? How can multicollinearity be reduced? It could be reduced by removing variables with VIF > 10, but I don't know what to do with the constant coefficient.
Thank you very much
Relevant answer
Answer
Looking further - your package may be reporting an uncentred VIF in place of, or in addition to, a centred VIF. There is an apparently unresolved debate in the literature about when or why that's useful. For practical purposes, in most regressions a high uncentred VIF is likely not problematic. I've never seen an uncentred VIF used in a published paper ...
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
I want to repeat a statistical function (like lm(), glm(), or glmgee()) for many variables. But it does not work for statistical functions (example 1), while it works for simple functions (example 2).
Important: I do not mean multivariate regression and using cbind()!
Example 1:
a = rnorm(10, 5, 1)
b = rnorm(10, 7, 1)
c = rnorm(10, 9, 1)
d = rnorm(10, 10, 1)
i = list(a, b, c)
for (x in i) {
lm(x~d)
}
Example 2:
a = rnorm(10, 5, 1)
b = rnorm(10, 7, 1)
c = rnorm(10, 9, 1)
d = rnorm(10, 10, 1)
i = list(a, b, c)
for (x in i) {
plot(x+d)
}
You can check in this site: https://rdrr.io/snippets/
Relevant answer
Answer
You need to save the output in an object. In the case of lm(), the best is to save the results in a list. This is achieved automatically if you'd use lapply() instead of for().
models <- lapply(i, function(x) lm(x~d))
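The equivalent explicit loop, for completeness: inside for() the result of lm() is neither printed nor kept, so it must be stored (or printed) explicitly:
models <- vector("list", length(i))
for (k in seq_along(i)) {
  models[[k]] <- lm(i[[k]] ~ d)   # store each fit
}
summary(models[[1]])  # inspect the first model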
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
Dear all,
I hope this message finds you well. I am currently in the process of applying for an Alexander von Humboldt Foundation fellowship, and I am actively seeking a host professor in Germany who shares my research interests and expertise.
As an experienced epidemiologist, my primary research focus lies in the fields of obesity and diabetes from a life course perspective. Over the years, I have honed my skills in the intricate handling of complex data and advanced statistical analysis, including the application of multilevel growth models and causal mediation analysis.
I would be honored to explore the possibility of collaborating with you as my host professor in Germany. Your expertise and research interests align well with my background, making you an ideal candidate for this partnership.
If you are open to discussing the potential of hosting me as a fellow in your research group, I would greatly appreciate the opportunity to engage in a more detailed conversation about our research synergies.
Thank you for considering my inquiry, and I look forward to your response.
Best,
Jie
Relevant answer
Dear Jie Zhang
You're welcome. Wish you the best.
Regards.
  • asked a question related to Advanced Statistical Analysis
Question
10 answers
There are a lot of researchers who go by the book: the right approach, writing results and observations in their field of work, proving the existing information or suggesting improvements in the experiment for better analysis, and so on; very hard-working. But then there are others, the crazy thinkers, always suggesting things with little backup from existing experiments or known facts, always radical in their understanding of results, and these people mostly get dismissed as a blip by the first category of researchers.
So, if I may ask your opinion: who would you back for hitting gold, the one who is methodical and hardworking, or the crazy thinker?
Relevant answer
Answer
I agree with your contention that some ideas initially strike most people as 'crazy' in both technical and nontechnical fields. Examples from nontechnical fields include: opposing slavery, gun control, democracy, women voting, environmentalism, climate change, etc. Examples from technical fields include: mRNA vaccines (COVID-19 vaccines from Moderna and Pfizer), prions (self-replicating proteins), continental drift, quasicrystals, Josephson junctions (SQUIDs), quantum mechanics, the personal computer, the Internet, the airplane, radio, TV, electricity, etc. One person's 'crazy' idea may eventually become widely accepted, and even commercially important. And don't forget, many 'crazy' ideas originated from by-the-book investigations: the idea of the quantum of energy arose from Max Planck's tireless attempts to explain the shape of the blackbody curve using classical thermodynamics, and superconductivity in some metals was the result of a rather pedestrian check of the electrical conductivity of metals at liquid-helium temperatures; no one expected superconductivity and no theory predicted it.
I really like your question.
Regards,
Tom Cuff
  • asked a question related to Advanced Statistical Analysis
Question
10 answers
Is it possible to run a regression of both secondary and primary data in the same model? I mean, when the dependent variable is primary data to be sourced via questionnaire and the independent variable is secondary data to be gathered from published financial statements?
For example: if the topic is capital budgeting moderators and shareholders' wealth (SHW), the capital budgeting moderators are proxied by inflation, management attitude to risk, economic conditions, and political instability, while SHW is proxied by market value, profitability, and retained earnings.
Relevant answer
Answer
There should be a causal effect of the independent variables on the dependent variable in regression analysis. Primary data gathered through a questionnaire for the dependent variable would be influenced by current happenings, while the independent variables based on secondary data were influenced by past or historical happenings. Therefore, there would not be true linkages between the independent variables and the dependent variable, and running a regression with both secondary and primary data in the same model would not give you the best outcome.
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
Hi all,
I am trying to calculate the curvatures of the cornea and compare them with Pentacam values. I have the Zernike equation in polar coordinates (Zfit = f(r, theta)). Can anybody let me know the equations for calculating the curvatures?
Thanks & Regards.
Nithin
Relevant answer
I think you can try something like this
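For reference (stated as standard differential geometry, not as the content of the attachment): writing the fitted surface as z = f(x, y) via x = r*cos(theta), y = r*sin(theta), the mean and Gaussian curvatures are
H = ((1 + fy^2)*fxx - 2*fx*fy*fxy + (1 + fx^2)*fyy) / (2*(1 + fx^2 + fy^2)^(3/2))
K = (fxx*fyy - fxy^2) / (1 + fx^2 + fy^2)^2
where fx, fy are the first and fxx, fxy, fyy the second partial derivatives of the Zernike fit. The principal curvatures (and hence the radii of curvature to compare with Pentacam) follow as k1,2 = H ± sqrt(H^2 - K).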
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
Hi! I have a dataset with a list of sponge species (Porifera) and the number of specimens found for each species at three different sites. I add here a sample from my dataset. Which test should I use to compare the three sites, showing both which species were found at each site and their abundance? I was also thinking of a visual representation showing just the difference between sites in terms of species diversity (not abundance), so that one can see which species occurred at only one site and which at more than one. For this last purpose I thought about doing an MDS, but I am not sure whether it is the right method, how to do it in R, or how to set up the dataset. Can you help me find a script that also shows the required shape of the dataset? Any advice in general would be great! Thank you!
Relevant answer
Answer
Hi,
I wonder why you would like to ignore the abundance information?
Based on a species-by-site abundance matrix, you could calculate a dissimilarity matrix (if the abundance data should be considered, I would use Bray-Curtis dissimilarity) and conduct Mantel tests between the three dissimilarity matrices to test for correlations between the species compositions of the three sites.
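A short, hedged R sketch with the vegan package (data layout: one row per site, one column per species, cells = specimen counts; the file and object names are hypothetical). Note that an ordination such as metaMDS() needs more than three sites to be stable, so with three sites the dissimilarity matrices themselves are the more honest summary:
library(vegan)
abund <- read.csv("sponges.csv", row.names = 1)    # rows = sites, columns = species
vegdist(abund, method = "bray")                    # abundance-based dissimilarity
vegdist(abund, method = "jaccard", binary = TRUE)  # presence/absence (diversity only)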
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
Is it possible to test a 2x2x2 design where the first two variables are manipulated high/low categories and the third variable is a measured continuous variable?
Would it be suitable to convert the measured continuous variable to a categorical variable to create a 2x2x2 design?
If so, I would now have 8 categories with multiple high/low combinations.
What test would I use to identify differences across these groups in a dependent variable if I want to hypothesize that the DV varies as a function of the high/low values of the (third) categorical variable?
Relevant answer
Answer
In addition to the loss of power that Kelvyn Jones mentioned, when you carve a quantitative variable into categories, the fitted values are forced to follow an artificial step function. The attached image shows the relationship between age (X) and total cholesterol (Y). Notice that when X is carved into 3 categories, the fitted values are forced to follow a step function. Notice that people near a cut-point who have tiny differences in age have quite large differences in fitted values. And notice that people who fall at opposite ends of an age category have the same fitted value, despite having fairly large age differences. With that in mind, the fitted values from the linear regression model make a lot more sense, I think! HTH.
  • asked a question related to Advanced Statistical Analysis
Question
7 answers
I am conducting a study with 3 IVs (POP, SOE, PI) and 1 DV (COO). Two of the IVs (POP and SOE) are manipulated high/low variables, making 4 groups. However, the third IV (PI) is a measured, continuous variable, which means I cannot manipulate it to create high/low conditions.
Should I convert the continuous IV (PI) into high/low conditions to make a 2x2x2 design?
If yes, what values for the high/low conditions would I enter into my data sheet?
If no, what options do I have for my analysis?
Someone told me it is not a good idea to convert the continuous third IV into a categorical variable; they said my options are either hierarchical regression analysis or multiple regression analysis with interaction terms.
I would like to mention that I also want to see the interactive effects of all three IVs on the DV, not only combinations of 2 IVs. I want to hypothesize that COO will be highest for the combination of high POP, high SOE, and high PI. Alternatively, the COO outcome should vary with high PI when POP and SOE are high.
I would appreciate suggestions to gain clarity on the best approach and the tests my study needs. For any analysis, what values do I enter in my data sheet for the high/low levels of the two categorical IVs?
Relevant answer
Answer
Sundas Azim, In your study with three independent variables (IVs) and one dependent variable (DV), it is generally not recommended to convert a continuous IV like PI into a categorical variable, as doing so can lead to a loss of information and statistical power. Instead, you can employ hierarchical regression analysis or multiple regression analysis with interaction terms to examine the interactive effects of all three IVs on the DV.
To investigate your hypothesis that COO will be highest for the combination of high POP, high SOE, and high PI, you can create interaction terms for these conditions and include them in your regression model. For instance, you can create a variable that multiplies the values of POP, SOE, and PI; similarly, you can create interaction terms for the other combinations you want to explore. This approach allows you to assess the impact of each IV while considering their interactive effects, avoiding the need to categorize the continuous variable PI.
In your data sheet, you would enter the coded values for POP and SOE, and for PI you would enter the measured values. Ensure you center your continuous IVs (subtract the mean from each score) to aid in the interpretation of interaction effects; a sketch follows below. This approach will provide a more robust and informative analysis for your research question.
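A compact sketch of that recommendation in R (the data frame and column names are hypothetical): effect-code the two manipulated IVs, mean-center PI, and fit the full three-way interaction:
dat$POP_c <- ifelse(dat$POP == "high", 0.5, -0.5)  # effect coding of manipulated IVs
dat$SOE_c <- ifelse(dat$SOE == "high", 0.5, -0.5)
dat$PI_c  <- dat$PI - mean(dat$PI)                 # mean-center the continuous IV
fit <- lm(COO ~ POP_c * SOE_c * PI_c, data = dat)
summary(fit)  # the POP_c:SOE_c:PI_c term tests the three-way interaction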
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
Greetings,
I am currently in the process of conducting a Confirmatory Factor Analysis (CFA) on a dataset consisting of 658 observations, using a 4-point Likert scale. As I delve into this analysis, I have encountered an interesting dilemma related to the choice of estimation method.
Upon examining my data, I observed a slight negative kurtosis of approximately -0.0492 and a slight negative skewness of approximately -0.243 (please refer to the attached file for details). Considering these properties, I initially leaned towards utilizing the Diagonally Weighted Least Squares (DWLS) estimation method, as existing literature suggests that it takes into account the non-normal distribution of observed variables and is less sensitive to outliers.
However, to my surprise, when I applied the Unweighted Least Squares (ULS) estimation method, it yielded significantly better fit indices for all three factor solutions I am testing. In fact, it even produced a solution that seemed to align with the feedback provided by the respondents. In contrast, DWLS showed no acceptable fit for this specific solution, leaving me to question whether the assumptions of ULS are being violated.
In my quest for guidance, I came across a paper authored by Forero et al. (2009; DOI: 10.1080/10705510903203573), which suggests that if ULS provides a better fit, it may be a valid choice. However, I remain uncertain about the potential violations of assumptions associated with ULS.
I would greatly appreciate your insights, opinions, and suggestions regarding this predicament, as well as any relevant literature or references that can shed light on the suitability of ULS in this context.
Thank you in advance for your valuable contributions to this discussion.
Best regards, Matyas
Relevant answer
Answer
Thank you for your question. I have searched the web for information about the Diagonally Weighted Least Squares (DWLS) and Unweighted Least Squares (ULS) estimators, and I have found some relevant sources that may help you with your decision.
One of the factors that you should consider when choosing between DWLS and ULS is the sample size. According to Forero et al. (2009), DWLS tends to perform better than ULS when the sample size is small (less than 200), but ULS tends to perform better than DWLS when the sample size is large (more than 1000). Since your sample size is 658, it falls in the intermediate range, where both methods may provide similar results.
Another factor that you should consider is the degree of non-normality of your data. According to Finney and DiStefano (2006), DWLS is more robust to non-normality than ULS, especially when the data are highly skewed or kurtotic. However, ULS may be more efficient than DWLS when the data are moderately non-normal or close to normal. Since your data have slight negative skewness and kurtosis, it may not be a serious violation of the ULS assumptions.
A third factor that you should consider is the model fit and parameter estimates. According to Forero et al. (2009), both methods provide accurate and similar results overall, but ULS tends to provide more accurate and less variable parameter estimates, as well as more precise standard errors and better coverage rates. However, DWLS has higher convergence rates than ULS, which means that it is less likely to encounter numerical problems or estimation errors.
Based on these factors, it seems that both DWLS and ULS are reasonable choices for your data and model, but ULS may have some advantages over DWLS in terms of efficiency and accuracy. However, you should also check the sensitivity of your results to different estimation methods, and compare them with other criteria such as theoretical plausibility, parsimony, and interpretability.
I hope this answer helps you with your analysis. If you need more information, you can refer to the sources that I have cited below.
Forero, C. G., Maydeu-Olivares, A., & Gallardo-Pujol, D. (2009). Factor analysis with ordinal indicators: A Monte Carlo study comparing DWLS and ULS estimation. British Journal of Mathematical and Statistical Psychology.
Finney, S. J., & DiStefano, C. (2006). Non-normal and categorical data in structural equation modeling. In Structural equation modeling: A second course.
Good luck
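One practical way to run the sensitivity check suggested above is to fit the same model under both estimators and compare fit indices side by side; a hedged lavaan sketch (one hypothetical factor block, indicators declared ordinal):
library(lavaan)
model <- 'F1 =~ i1 + i2 + i3 + i4'   # hypothetical factor block
ord <- c("i1", "i2", "i3", "i4")
fit_dwls <- cfa(model, data = dat, ordered = ord, estimator = "DWLS")
fit_uls  <- cfa(model, data = dat, ordered = ord, estimator = "ULS")
fitMeasures(fit_dwls, c("cfi", "tli", "rmsea", "srmr"))
fitMeasures(fit_uls,  c("cfi", "tli", "rmsea", "srmr"))
# Robust variants ("WLSMV", "ULSMV") are commonly reported with ordinal data.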
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
I have a longitudinal model and the stability coefficients for one construct change dramatically from the first and second time point (.04) to the second and third time point (.89). I have offered a theoretical explanation for why this occurs, but have been asked about potential model bias.
Why would this indicate model bias? (A link to research would be helpful).
How can I determine whether the model is biased or not? (A link to research would be helpful).
Thanks!
Relevant answer
Answer
That makes sense. Are you comparing the cross-lagged panel (auto)regression (path) coefficients to zero-order correlations? This could be part of the issue (explain the "discrepancy"/low autoregressive stability coefficient). Regression coefficients are not equal to zero-order (bivariate) correlations. The regression coefficients take the correlation with other independent variables into account. This may explain why the autoregressive "stability" coefficients in your model look very different from the zero-order correlations. It is impossible to know without looking at your data and model in more detail.
The model fit does not look completely horrible at first sight but the chi-square test is significant and the RMSEA value is a bit high. I would take a look at model residuals and/or modification indices to find out where the model may be misspecified.
  • asked a question related to Advanced Statistical Analysis
Question
1 answer
Hello !
Please, regarding the significance level of a test, I would like to know when we can increase the significance level α to 0.1.
Thank you in advance.
Relevant answer
Answer
The significance level of a test is a predetermined threshold used to determine whether the evidence from the sample provides enough support to reject the null hypothesis. It is typically denoted by the symbol α. The significance level is relevant in several ways: the Type I error rate, the confidence level, decision-making, and sample size determination.
In summary, the significance level guides the researcher in interpreting the test results and making informed decisions regarding the null and alternative hypotheses.
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
First, I sent these numbers to the Bard chatbot:
6020 5603 54830 6750 23000 12370 10000
It responded with this:
You have listed 7 numbers.
  • 6020
  • 5603
  • 54830
  • 6750
  • 23000
  • 12370
  • 10000
The sum of these numbers is 103573.
The average of these numbers is 14825.
The median of these numbers is 6750.
The mode of these numbers is 10000, which appears twice.
The range of these numbers is 23000 - 10000 = 13000.
The standard deviation of these numbers is 3844.74.
Is there anything else I can help you with?
Then I asked it to standardize them, and the response was this:
Sure. Standardizing the numbers will mean subtracting the mean and then dividing by the standard deviation. This will transform the numbers so that they have a mean of 0 and a standard deviation of 1.
The standardized numbers are:
  • -2.18
  • -2.88
  • 2.44
  • -1.38
  • 3.22
  • -0.78
  • 0.00
----------------------------------------
Bing Chat's response to the same question was this:
I can standardize the numbers you provided. Standardization is the process of transforming data to have a mean of 0 and a standard deviation of 1. This is done by subtracting the mean from each data point and then dividing by the standard deviation.
Here are the standardized values for the numbers you provided:
  • 6020: -0.705
  • 5603: -0.801
  • 54830: 2.891
  • 6750: -0.524
  • 23000: 0.693
  • 12370: -0.191
  • 10000: -0.363
Is there anything else I can help with? 😊
------------------------------------------
And Perplexity's response was this:
Based on the search results, it is possible to standardize the numbers provided by converting them into z-scores. Standardizing data is a common practice in statistics to compare variables that have different scales or units of measurement. To standardize the numbers provided, one would need to calculate the mean and standard deviation of the data and then transform each value to its corresponding z-score
Here are the steps to standardize the numbers:
  1. Calculate the mean of the data: Mean = (6020 + 5603 + 54830 + 6750 + 23000 + 12370 + 10000) / 7 = 15038.43
  2. Calculate the standard deviation of the data: Standard deviation = sqrt(((6020 - 15038.43)^2 + (5603 - 15038.43)^2 + (54830 - 15038.43)^2 + (6750 - 15038.43)^2 + (23000 - 15038.43)^2 + (12370 - 15038.43)^2 + (10000 - 15038.43)^2) / 6) = 17523.22
  3. Calculate the z-score for each value: Z-score = (X - Mean) / Standard deviation
Z-score for 6020 = (6020 - 15038.43) / 17523.22 = -0.706
Z-score for 5603 = (5603 - 15038.43) / 17523.22 = -0.768
Z-score for 54830 = (54830 - 15038.43) / 17523.22 = 2.238
Z-score for 6750 = (6750 - 15038.43) / 17523.22 = -0.426
Z-score for 23000 = (23000 - 15038.43) / 17523.22 = 0.455
Z-score for 12370 = (12370 - 15038.43) / 17523.22 = -0.152
Z-score for 10000 = (10000 - 15038.43) / 17523.22 = -0.853
By standardizing the numbers, we have transformed them into a common scale that allows for easier comparison and analysis. It is important to note that the interpretation of z-scores depends on the context and the distribution of the data.
-----------------------------------------------------
So, as you can see, it can't even get the sum of the data correct. My question is: why can't these models get the sum of these numbers right, even when they apply all the correct steps?
Relevant answer
Answer
There are seven numbers, yet it stated that 10000 appears twice, which would mean there are eight numbers. In the calculation of the average the denominator is 7, which means that 10000 cannot appear twice. The range was calculated as 23000 - 10000 instead of 54830 - 5603 = 49227. The sum of those numbers is 118573, not 103573. Are you sure about those numbers?
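The whole exchange is easy to verify in R:
x <- c(6020, 5603, 54830, 6750, 23000, 12370, 10000)
sum(x)             # 118573
mean(x)            # 16939
range(x)           # 5603 54830, so the range is 49227
round(scale(x), 3) # z-scores: (x - mean(x)) / sd(x)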
  • asked a question related to Advanced Statistical Analysis
Question
2 answers
Hello, could someone assist me in interpreting the results of the sequential Mann-Kendall Sneyer test? Indeed, according to Dufek (2008: Precipitation variability in São Paulo State, Brazil), "In the absence of any trend, the graphical representation of the direct series (u(t)) and the backward series (u'(t)) obtained with this method yields curves that overlap several times." In my case, I observe two to three overlaps, often with sequences that exhibit significant trends. Should I also conclude that there is an absence of trends in my dataset?
Relevant answer
Answer
The sequential Mann-Kendall test, also known as the Mann-Kendall-Sneyers (MKS) test, is a variation of the Mann-Kendall test that aims to detect trends in time series data. The test involves comparing the original time series to its reverse version to identify potential trends. The graphical representation of the direct series (u(t)) and the backward series (u'(t)) can provide insights into the presence or absence of trends. However, the interpretation can be nuanced.
Dufek (2008) suggests that if there is no trend, the curves of the direct and backward series will overlap several times. In your case, you observe two to three overlaps, often with sequences that exhibit significant trends. This situation requires careful consideration:
  1. Overlaps: The fact that you observe overlaps in the curves suggests that there might be a lack of consistent and significant trends in your dataset. If you're seeing two to three overlaps, it could indicate a certain level of fluctuation without a clear upward or downward trend. However, it's important to consider the magnitude and duration of these overlaps. Short overlaps might be less indicative of a lack of trend than longer ones.
  2. Significant Trends: The presence of sequences with significant trends might complicate the interpretation. Significant trends imply that some portions of the data are exhibiting systematic changes over time. The presence of these trends could be in contrast to the overlaps you observe.
  3. Complex Patterns: Time series data can exhibit complex patterns that might not be captured by a single test or method. Overlapping curves and significant trends suggest that the behavior of your data might be more intricate than a simple upward or downward trend.
  4. Data Context: Consider the context of your data and the subject matter. Sometimes, fluctuations and variations might be inherent to the process being studied, and these might not necessarily indicate a clear trend.
In conclusion, while the observation of overlaps in the graphical representation of the sequential Mann-Kendall test might suggest a lack of clear trends, the presence of significant trends in some segments complicates the interpretation. It's important to analyze the trends, magnitudes, and durations of both overlaps and significant trends while considering the broader context of your data and the subject matter you're studying. If possible, consulting with a statistician or subject-matter expert might help you make a more informed interpretation of your findings.
  • asked a question related to Advanced Statistical Analysis
Question
6 answers
In plant breeding, what are the uses of the discriminant function?
Relevant answer
Answer
The discriminant function technique involves the development of a selection criterion based on a combination of various characters, and it aids the breeder in indirect selection for genetic improvement in yield. In plant breeding, the selection index refers to a linear combination of characters associated with yield.
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
I am looking for a graphical tool, like Visual Basic, that lets me attach R code to interactive graphical buttons and text boxes.
For example, I want to design a Windows application with a graphical interface for calculating body mass index (BMI). I want two boxes for entering weight and height and a button to run. When the button is clicked, I want the code below to be run.
BMI <- box1/(box2^2)
Relevant answer
Answer
R in Power BI ?
It should not contain complex R syntaxes though.
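For completeness, the usual R-native route to exactly the GUI described in the question is the shiny package (not mentioned above); a minimal sketch:
library(shiny)
ui <- fluidPage(
  numericInput("weight", "Weight (kg):", value = 70),
  numericInput("height", "Height (m):", value = 1.75),
  actionButton("run", "Run"),
  textOutput("bmi")
)
server <- function(input, output) {
  bmi <- eventReactive(input$run, {
    input$weight / (input$height^2)   # the BMI <- box1/(box2^2) logic
  })
  output$bmi <- renderText(paste("BMI:", round(bmi(), 1)))
}
shinyApp(ui, server)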
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
Some of the people who consult us are only users of statistics, while others are the ones who develop statistics, and we would love for people to use it correctly.
But I believe that many arrive late, always after the experimentation, asking "what statistical procedure can I apply?". Perhaps they do not know that they should always come with the question or the hypothesis they wish to answer or verify, since that would allow a better answer. On the other hand, some come with simple queries, but usually a statistics class is given as the answer, which I feel in some cases is late. In some cases it is extremely necessary, but in others it opens a debate that leads to serendipity. Wouldn't it be better to try to advise them in a more precise way? I read you:
Relevant answer
Answer
precisely: two sides of the same coin.
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
Dear colleagues,
I analyzed my survey data using binary logistic regression, and I am trying to assess the results by looking at the p-value, B, and Exp(B) values. However, the task is also to specify the significance of the marginal effects. How to interpret the results of binary logistic regression considering the significance of the marginal effects?
Best,
Relevant answer
Answer
To specify the significance of the marginal effects in binary logistic regression analysis, you can interpret the results by examining the p-values, B (coefficient estimates), and Exp(B) (exponentiated coefficient estimates) values. The p-value indicates the statistical significance of each predictor variable's effect on the outcome variable. A low p-value (typically less than 0.05) suggests a significant effect. The B values represent the estimated change in the log-odds of the outcome for a one-unit change in the predictor, with positive values indicating a positive association and negative values indicating a negative association. Exp(B) provides the odds ratio, which quantifies the change in odds for a one-unit increase in the predictor. An Exp(B) greater than 1 indicates an increased odds of the outcome, while a value less than 1 implies a decreased odds. By considering the significance of the marginal effects, you can determine the direction, magnitude, and statistical significance of the predictor variables' impacts on the binary outcome variable in your logistic regression analysis.
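A hedged R sketch of the marginal-effects step (the model is hypothetical; the margins package mimics Stata's margins command, and the by-hand lines show what an average marginal effect is for a continuous predictor):
fit <- glm(y ~ x1 + x2, data = dat, family = binomial)

library(margins)
summary(margins(fit))   # average marginal effects with SEs, z and p values

# By hand for continuous x1: AME = mean over observations of dP/dx1
b  <- coef(fit)
Xb <- predict(fit, type = "link")
mean(dlogis(Xb)) * b["x1"]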
  • asked a question related to Advanced Statistical Analysis
Question
6 answers
I constructed a linear mixed-effects model in Matlab with several categorical fixed factors, each having several levels. Fitlme calculates confidence intervals and p values for n-1 levels of each fixed factor compared to a selected reference. How can I get these values for other combinations of factor levels? (e.g., level 1 vs. level 2, level 1 vs. level 3, level 2 vs. level 3).
Thanks,
Chen
Relevant answer
Answer
First, to change the reference level, you can specify the order of items in a categorical array:
categorical(A, [1, 2, 3], {'red', 'green', 'blue'}) or
categorical(A, [3, 2, 1], {'blue', 'green', 'red'})
Second, you can specify the appropriate hypothesis matrix for the coefTest function to compare every pair of categories.
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
Has anyone conducted a meta-analysis with Comprehensive Meta-Analysis (CMA) software?
I have selected: comparison of two groups > means > Continuous (means) > unmatched groups (pre-post data) > means, SD pre and post, N in each group, Pre/post corr > finish
However, it is asking for pre/post correlations which none of my studies report. Is there a way to calculate this manually or estimate it somehow?
Thanks!
Relevant answer
Answer
Yes, it is possible to estimate the pre-post correlation coefficient in a meta-analysis using various methods, such as imputing a value or using a range of plausible values. Here are a few options:
  1. Imputing a value: If none of your studies report the pre-post correlation, you can impute a value based on previous research or assumptions. A commonly used estimate is a correlation coefficient of 0.5, which assumes a moderate positive relationship between the pre and post-measures. However, it is important to note that this value may not be appropriate for all studies or research questions.
  2. Using a range of plausible values: Another option is to use a range of plausible correlation coefficients in the analysis, rather than a single value. This can help to account for the uncertainty and variability in the data. A common range is 0 to 0.8, which covers a wide range of possible correlations.
  3. Contacting study authors: If possible, you can try to contact the authors of the included studies to request the missing information or clarification about the pre-post correlation coefficient. This can help to ensure that the analysis is based on accurate and complete data.
Once you have estimated the pre-post correlation coefficient, you can enter it into the appropriate field in the CMA software and proceed with the analysis. It is important to carefully consider the implications of the chosen correlation coefficient and to conduct sensitivity analyses to test the robustness of the results.
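For point 1, the place where the imputed correlation actually enters the computation is the standard deviation of the change scores (this is the standard relationship, given e.g. in the Cochrane Handbook):
SD_change = sqrt( SD_pre^2 + SD_post^2 - 2*r*SD_pre*SD_post )
Conversely, if any study reports all three SDs, the implied correlation can be back-solved and used as an estimate for the others:
r = (SD_pre^2 + SD_post^2 - SD_change^2) / (2*SD_pre*SD_post)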
  • asked a question related to Advanced Statistical Analysis
Question
6 answers
Hello,
I’m working with ANCOVA models in R-studio. I’ve constructed a model as follows:
fit<-aov(outcome~factor1+cov1+cov2+cov3)
Where “outcome” is a normally distributed continuous variable; “factor1” has 3 levels; “cov1” and “cov2” are continuous variables; and “cov3” is a 2-level variable.
The model fits well, but I want to perform multiple comparisons between the levels of my factor. That is:
1 vs 2
2 vs 3
3 vs 1
Therefore I’ve been trying the “glht” function:
postHocs<-glht(fit, linfct = mcp(factor = "Tukey"))
And I receive this error:
Error: unexpected '=' in "postHocs<-glht(fit[…]
 
I’ve also tried to use the function “as.factor” on my "factor" to avoid problems related to the type of variables, but I get the same error.
 
I will appreciate any help
Thanks in advance!
Joan.
Relevant answer
Answer
Romeo Alberto Saldaña-Vázquez , but note that glht() doesn't conduct a Tukey test. "tukey" there just means "all pairwise comparisons".
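For reference, a pattern that should run, assuming a data frame named dat (hypothetical); note that mcp() must be keyed by the variable's actual name in the model formula (factor1 here, not the word "factor"), and that an "unexpected '='" parse error in R often traces back to a non-ASCII character (e.g., smart quotes) pasted into the console:
library(multcomp)
dat$factor1 <- as.factor(dat$factor1)
fit <- aov(outcome ~ factor1 + cov1 + cov2 + cov3, data = dat)
postHocs <- glht(fit, linfct = mcp(factor1 = "Tukey"))
summary(postHocs)   # all pairwise comparisons: 1-2, 1-3, 2-3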
  • asked a question related to Advanced Statistical Analysis
Question
2 answers
Hello everyone,
I'm going to conduct a meta-analysis of psychological interventions relevant to a topic via Comprehensive Meta-Analysis (CMA) software. I have a few questions/points for clarification:
- From my understanding, I should only meta-analyse interventions that have used a pre-test, post-test (with and/or without follow-up) design, as meta-analysing post-test only designs with the others is not effective. Is my understanding correct?
- Can I combine between-subjects and within-subjects designs together or do I need to meta-analyse them separately?
Thanks in advance!
Relevant answer
Answer
Hello Ravisha,
If cases are randomly assigned to treatment condition, there's no reason that post-only design results should be considered uninformative.
Designs with pre-post measures can offer the added benefits of: (a) allowing for estimation of change (though unless scores are completely reliable, the change scores will be less reliable than either the pre- or post- score by itself); or (b) pre-scores can be used as a covariate, to adjust for randomly occurring differences across groups.
One noted threat to pre-post designs is that if the interval separating them is too short, the post-results, and therefore group comparisons, can be biased, especially with measures of affect.
Ultimately, the answer depends on what your target ES might be: If it is post-treatment differences across groups/conditions, then either design can contribute. You could estimate ES separately by study type to see whether inclusion of pre-test appears to account for differences.
If it is strictly pre-post change, then post-only designs can't contribute (again, though, note the caveats above).
Good luck with your work.
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
I have ordinal data on happiness of citizens from multiple countries (from the European Value Study) and I have continuous data on the GDP per capita of multiple countries from the World Bank. Both of these variables are measured at multiple time points.
I want to test the hypothesis that countries with a low GDP per capita will see more of an increase in happiness with an increase in GDP per capita than countries that already have a high GDP per capita.
My first thought to approach this is that I need to make two groups; 1) countries with low GDP per capita, 2) countries with high GDP per capita. Then, for both groups I need to calculate the correlation between (change in) happiness and (change in) GDP per capita. Lastly, I need to compare the two correlations to check for a significant difference.
I am stuck, however, on how to approach the correlation analysis. For example, I don't know how to (and whether I even have to) include the repeated measures from the different time points at which the data were collected. If I just base my correlations on one time point, I feel like I am not really testing my research question, considering I am talking about an increase in happiness and an increase in GDP, which is a change over time.
If anyone has any suggestions on the right approach, I would be very thankful! Maybe I am overcomplicating it (wouldn't be the first time)!
Relevant answer
Answer
Collect the two variables at the same time points; after collecting N samples over time, perform a regression analysis on them and you will obtain the correlation coefficient.
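A more direct way to test the moderation hypothesis, while respecting the repeated measures, is a mixed model with an interaction term. A minimal sketch, assuming the lme4 package and hypothetical column names (happiness is treated here as approximately continuous, which is a simplification for ordinal data):
library(lme4)
# random intercept per country handles the repeated survey waves
fit <- lmer(happiness ~ gdp * gdp_group + wave + (1 | country), data = dat)
summary(fit)  # the gdp:gdp_group term tests whether the GDP slope differs between low- and high-GDP groups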
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
Hello! I would like to address the experts regarding a question about conducting a statistical analysis using only nominal variables. Specifically, I would like to compare the responses of survey participants who answered the question of whether they take certain medications ("Yes" or "No"), and analyze the data against different criteria such as education level, economic status, marital status, etc. I have conducted a Chi-squared test to determine if there is a significant difference between the variables, but now I would like to compare the answers on whether or not this medicine is taken within each group, for example within the education variable (higher, secondary, vocational and basic education). Is there a statistical test similar to Tukey's test that is suitable for nominal variables? I would also like to know if it is possible to create a column chart with asterisks above the columns indicating the significant differences between them based on this test for nominal variables.
I usually use Statistica StatSoft and R studio. But none of my attempts to do post-hoc for nominal variables analysis on any of them were successful. In R studio I tried pairwise.prop.test(cont_table, p.adjust.method = "bonferroni")
But I got an error:
Error in pairwise.prop.test(cont_table, p.adjust.method = "bonferroni") :
'x' must have 2 columns
I assume this is because one of my variables has more than two groups.
What should I do?
Thank you in advance for your help!
Relevant answer
Answer
In attachment an R script with the BH post-hoc test based on Benjamini & Hochberg (1995). You could replace this with Bonferroni, but in my opinion this last method is too conservative.
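For what it is worth, pairwise.prop.test() expects a matrix with one row per group and exactly two columns (successes and failures), which is probably why it complained about the shape of the table. A minimal sketch with hypothetical counts:
# rows = education level, columns = takes medication (yes / no)
cont_table <- matrix(c(30, 70,
                       45, 55,
                       25, 75,
                       20, 80),
                     ncol = 2, byrow = TRUE,
                     dimnames = list(c("higher", "secondary", "vocational", "basic"),
                                     c("yes", "no")))
pairwise.prop.test(cont_table, p.adjust.method = "BH")  # or "bonferroni", if preferred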
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
The variables I have (vegetation index and plant disease severity scores) were not normal. So, I did a log10(y+2) transformation of the vegetation index and a sqrt(log10(y+2)) transformation of the plant disease severity score. Plant disease severity is on the scale 0, 10, 20, 30, ..., 100 and was scored based on visual observations. Even after the combined transformation, the disease severity data are non-normal, but the transformation improves the CV in simple linear regression.
Can I proceed with the parametric test, a simple linear regression between the log transformed vegetation index (normally distributed) and combined transformed (non-normal) disease severity data?
Relevant answer
Answer
Why would these variables have to be normal? As far as I understand your problem, a logistic model might do well. You can try it with my software "FittingKVdm", but if you can send me some data, I can try it for you.
  • asked a question related to Advanced Statistical Analysis
Question
6 answers
Hi everyone! I need to examine interactions between categorical and continuous predictors in a path analysis model. What strategy would be more accurate: 1) including the categorical variable, the continuous one, and their interaction as separate terms, or 2) running a multigroup analysis?
I have the same problem with several models. For instance, examining potential differences in the effects of executive function (continuous predictor) on reading comprehension (outcome variable) among children from different grades (categorical predictor).
Thank you so much for your help!
Relevant answer
Answer
Very helpful paper with references:
Best,
Wadie
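For readers comparing the two strategies, here is a minimal lavaan sketch under assumed variable names (reading, ef and grade in a data frame dat; all hypothetical):
library(lavaan)
# Strategy 1: product term (grade coded numerically, or as dummies if nominal)
dat$ef_grade <- dat$ef * dat$grade
fit1 <- sem('reading ~ ef + grade + ef_grade', data = dat)
# Strategy 2: multigroup model; the ef -> reading path is estimated per grade
fit2 <- sem('reading ~ ef', data = dat, group = "grade")
summary(fit2)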
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
I want to study the relationship between parameters for physical activity in a lifespan and the outcome of pain (binary). I have a longitudinal data with four measurement, hence repeated measures.
Should I use a GEE or a mixed model? And does anyone have guidance on how to rearrange my dataset so it will fit these methods? I have tried the GEE with long data and with wide data, but I keep getting errors.
To clarify, my outcome is binary (at the last measurement) and further my independent variables are measured at four times (with the risk of them being correlated).
Relevant answer
Answer
Yes, that would be correct.
As your outcome/ dependent measure is only at one time point you would not have to consider time in relation to the outcome, so not a longitudinal model (no variation over time to model).
That is not to say that time may or may not be important in your research question. If trends/ differences/ averages in the repeated measures of independent variables are important in relation to your outcome then you can find ways to incorporate these things into your modelling strategy (in the way that you choose to use your repeated independent measures - being guided by research questions).
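As one concrete way to do what is suggested above, the repeated independent measures can be summarised per person and entered into an ordinary logistic regression. A minimal sketch with hypothetical column names (pa_t1 to pa_t4 for the four activity measurements, pain for the binary outcome):
dat$pa_mean  <- rowMeans(dat[, c("pa_t1", "pa_t2", "pa_t3", "pa_t4")])  # average exposure
dat$pa_trend <- dat$pa_t4 - dat$pa_t1                                   # crude change over the period
fit <- glm(pain ~ pa_mean + pa_trend, family = binomial, data = dat)
summary(fit)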
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
How can I define a graphics space to make plots like the attached figure below using the graphics package in R?
I need help locating each position (centering) using the "mar" argument.
Reghais.a
Relevant answer
Answer
You can use layout() to define a matrix of plots with different heights/widths. In your case, this will produce a layout similar to your picture:
m <- rbind(c(0,1,0), c(2:4))   # plot 1 fills the middle cell of the top row; plots 2-4 fill the bottom row
layout(m, widths = c(1,1,1.5), heights = c(1,1))   # zeros in m are left empty
par(oma = c(3,3,3,3), mar = c(0,0,0,0), las = 1, xaxs = "i", yaxs = "i")
plot(NA, xlim = c(-1, 9), ylim = c(-1, 4), xaxt = "n", yaxt = "n")   # top panel
axis(2, at = 0:4)       # left axis
axis(3, at = 0:4 * 2)   # top axis
plot(NA, xlim = c(0, 4), ylim = c(0, 3.5), xaxt = "n", yaxt = "n")   # bottom-left panel
axis(1, at = 0:4)
axis(2)
plot(NA, xlim = c(-1, 9), ylim = c(0, 3.5), xaxt = "n", yaxt = "n")  # bottom-middle panel (no axes drawn)
plot(NA, xlim = c(0, 7), ylim = c(0, 3.5), xaxt = "n", yaxt = "n")   # bottom-right panel
axis(1, at = 0:7)
axis(4)                 # right axis
  • asked a question related to Advanced Statistical Analysis
Question
6 answers
I am stuck with the problem below; it would be a pleasure to have your ideas.
I've written the same program in two languages, Python and R, but each came to a completely different result. Before jumping to a conclusion, I declare that:
- The code in both languages has been checked multiple times, is correct, and represents the same thing.
- The packages used in the two languages are the same version.
So, what do you think?
The code is about applying deep neural networks for time series data.
Relevant answer
Answer
Good morning. Without the code it is difficult to know where the difference comes from. I do not use Python, I work in R, but maybe the difference is due to the data-splitting stage: did you try to set the same seed for the random number generator, for example seed(1234)? (If my memory is good, this function also exists in the Python language.) Were your results and evaluation metrics totally different? In that case, maybe there is a reliability issue in your model. You should check your data preparation and feature selection.
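To make that suggestion concrete, a minimal sketch of where seeds would be set (the TensorFlow line is an assumption about the stack being used):
set.seed(1234)                       # base R RNG (affects sample(), data splits, etc.)
# tensorflow::set_random_seed(1234)  # if the R tensorflow/keras packages drive the network
# Python analogues: random.seed(1234); numpy.random.seed(1234); tf.random.set_seed(1234)
Note that even with identical seeds, the two languages' generators will not produce the same random splits, so truly matched results across languages usually require exporting the same split indices from one environment to the other.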
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
Hi, I am looking for a way to derive standard deviations from estimated marginal means using mixed linear models with SPSS. I already figured where SPSS provides the pooled SD to calculate the SMD, however, I still need the SD of the means. Any help is appreciated!
Relevant answer
Answer
I was unsure how to pool SD from the SE without knowing N. A method I found used the "baseline SD" for each group.
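For reference, the standard error of a group mean and its standard deviation are linked by $SE = SD/\sqrt{n}$, hence $SD = SE \times \sqrt{n}$, which is why the group size n is needed to recover the SD. For model-based estimated marginal means this back-calculation is only an approximation, since their SEs also reflect the covariance structure of the mixed model.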
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
I have data in a 30 × 1 matrix. Is it possible to find the best optimized value by using the gradient descent algorithm? If yes, please share the procedure or a link to the detailed background theory behind it; it will be helpful for me to proceed further in my research.
Relevant answer
Answer
It depends on the cost function and the model that you are using. Gradient descent will converge to the optimal value (or very close to it) of the training loss function, given a properly set learning rate, if the optimization problem is convex with respect to the parameters. That is the case for linear regression using the mean squared error loss, or logistic regression using cross entropy. For the case of neural networks with several layers and non-linearities none of these loss functions make the problem convex, therefore there is no guarantee that you will find the optimal value. The same would happen if you used logistic regression with the mean squared error instead of cross entropy.
An important thing to note is that when I talk about the optimal value, I mean the value that minimizes the loss in your training set. It is always possible to overfit, which means that you find the optimal parameters for your training set, but those parameters make inaccurate predictions on the test set.
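A minimal sketch of the convex case described above, fitting a constant model to a 30 × 1 data vector by gradient descent on the mean squared error (all values hypothetical):
set.seed(1)
x <- rnorm(30, mean = 5)   # hypothetical 30 x 1 data
c_hat <- 0                 # initial guess
lr <- 0.1                  # learning rate
for (i in 1:200) {
  grad  <- -2 * mean(x - c_hat)   # gradient of mean((x - c_hat)^2)
  c_hat <- c_hat - lr * grad
}
c(c_hat, mean(x))   # gradient descent converges to the sample mean, the convex optimum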
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
I want to display the bivariate distribution of two (laboratory) parameters in sets of patients. I have available N and the mean ± SD of the first and second parameters. I am looking for software that could draw a bivariate distribution (an ellipse) from the given parameters. Can someone help me? Thank you.
Relevant answer
Answer
Dear Dr. Gaško,
I'm glad to hear that. You are very welcome.
Best wishes.
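For readers with the same need: the mean ± SD of each parameter alone do not determine the ellipse; a correlation between the two parameters must also be known or assumed. A minimal R sketch, assuming the 'ellipse' package and made-up values:
library(ellipse)
m <- c(5, 10); s <- c(1, 2); r <- 0.5   # hypothetical means, SDs, and assumed correlation
Sigma <- matrix(c(s[1]^2,      r*s[1]*s[2],
                  r*s[1]*s[2], s[2]^2), nrow = 2)
plot(ellipse(Sigma, centre = m, level = 0.95), type = "l")   # 95% ellipse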
  • asked a question related to Advanced Statistical Analysis
Question
10 answers
Hi,
There is an article for which I want to know which statistical method was used: regression or Pearson correlation.
However, they don't say which one. They show the correlation coefficient and standard error.
Based on these two parameters, can I know if they use regression or Pearson correlation?
Relevant answer
Answer
Not sure I understand your question. If there is a single predictor and by regression you mean linear OLS regression, then the r is the same. Can you provide more details?
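A related check that sometimes settles this: with a single predictor, the OLS slope and Pearson's r are linked by $b = r \cdot (s_y / s_x)$, so if the paper also reports the standard deviations of the two variables, you can verify whether the reported coefficient is the raw slope or r itself.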
  • asked a question related to Advanced Statistical Analysis
Question
8 answers
How can I run the bootstrap method to estimate the error rate in linear discriminant analysis using R code?
Best
reghais.A
Relevant answer
Answer
Using R code, the bootstrap method can estimate the error rate in linear discriminant analysis. First, the data must be split into a training set and a test set and then normalized. The lda() function can then be used to run the calculations twice, with CV=TRUE for the first run to get predictions of class membership derived from leave-one-out cross-validation. The second run should use CV=FALSE to get predictions of class membership based on the entire training set. The true error rate estimator BT2 of the restricted linear or quadratic discriminant analysis can be calculated using the dawai package in R. Finally, resampling methods such as bootstrapping can be used to estimate the test error rate.
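A minimal out-of-bag bootstrap sketch with MASS::lda() and the built-in iris data (this illustrates the resampling idea in the last sentence, not the BT2 estimator from the dawai package):
library(MASS)   # for lda()
set.seed(123)
B <- 200
err <- numeric(B)
for (b in 1:B) {
  idx <- sample(nrow(iris), replace = TRUE)   # bootstrap (in-bag) indices
  oob <- setdiff(seq_len(nrow(iris)), idx)    # out-of-bag observations
  fit <- lda(Species ~ ., data = iris[idx, ])
  err[b] <- mean(predict(fit, iris[oob, ])$class != iris$Species[oob])
}
mean(err)   # out-of-bag bootstrap estimate of the misclassification rate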
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
Relevant answer
Answer
I've only glanced quickly at those two resources, but are you sure they are addressing the same thing? Yates' (continuity) correction as typically described entails subtracting 0.5 from |O−E| before squaring in the usual equation for Pearson's Chi2. E.g.,
But adding 0.5 to each cell in a 2x2 table is generally done to avoid division by 0 (e.g., when computing an odds ratio), not to correct for continuity (AFAIK). This is what makes me wonder if your two resources are really addressing the same issues. But as I said, I only had time for a very quick glance at each. HTH.
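For reference, the correction described above is usually written as
$\chi^2_{\text{Yates}} = \sum_i \frac{(|O_i - E_i| - 0.5)^2}{E_i}$
whereas adding 0.5 to every cell of a 2x2 table (the Haldane-Anscombe adjustment for odds ratios) serves the separate purpose of avoiding division by zero.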
  • asked a question related to Advanced Statistical Analysis
Question
1 answer
How can I add robust 97.5% confidence ellipses to the variation diagrams (XY, ilr-transformed) in the robCompositions or compositions packages?
Best
Azzeddine
Relevant answer
Answer
For general benefit, I have checked a group of packages that can add the robust 97.5% confidence ellipses. Here they are, by package and function:
1. ellipse(), using the package 'ellipse'
2. ellipses, using the package 'rrcov'
3. ellipses, using the package 'cluster'
  • asked a question related to Advanced Statistical Analysis
Question
8 answers
Res. Sir/ Madam,
I am working as a Scientist (Horticulture) and my research focus is the improvement of tropical and semi-arid fruits. I am also interested in working out the role of nutrients in fruit-based cropping systems.
Looking for collaborators from the fields of Genetics and Plant Breeding, Horticulture, Agricultural Statistics, Soil Science and Agronomy.
Currently working on genetic analysis of fruit traits in Jamun (Indian Blackberry).
Relevant answer
Try to publish on your own then you have complete control. Collaborators will steal your data and treat you badly :)
  • asked a question related to Advanced Statistical Analysis
Question
10 answers
I am testing hypotheses about relationships between CEA and Innovation Performance (IP). If I am testing the relationship of one construct, say Management Support, to IP, is it OK to use simple linear regression? Or should I test it in a multiple regression with all the constructs?
  • asked a question related to Advanced Statistical Analysis
Question
2 answers
What are current recommendations for reporting effect size measures from repeated measures multilevel model?
Concerning analytical approach, I have followed procedure by Garson (2020) with matrix for repeated measures: diagonal, and matrix for random effects: variance components.
In advance, thank you for your contributions.
Relevant answer
Answer
You can use standard procedures for the fixed-effects estimates, as they are akin to regression model estimates if the response is continuous. Things are more complicated if the response is categorical.
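As one commonly reported complement for mixed models, the marginal and conditional R² of Nakagawa & Schielzeth can be obtained in R; a minimal sketch, assuming the lme4 and performance packages and placeholder names (dat, y, time, id):
library(lme4)
library(performance)
fit <- lmer(y ~ time + (1 | id), data = dat)
r2(fit)   # marginal R2 (fixed effects) and conditional R2 (fixed + random effects)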
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
Merry Christmas everyone!
I used the Interpersonal Reactivity Index (IRI) subscales Empathic Concern (EC), Perspective Taking (PT) and Personal Distress (PD) in my study (N = 900). When I calculated Cronbach's alpha for each subscale, I got .71 for EC, .69 for PT and .39 for PD. The value for PD is very low. The analysis indicated that if I deleted one item, the alpha would increase to .53, which is still low but better than .39. However, as my study does not focus mainly on the psychometric properties of the IRI, what kind of arguments can I make to say the results are still valid? I did say the findings (for PD) should be interpreted with caution, but what else can I say?
Relevant answer
Answer
A scale reliability of .39 (and even .53!) is very low. Even if your main focus is not on the psychometric properties of your measures, you should still care about those properties. Inadequate reliability and validity can jeopardize your substantive results.
My recommendation would be to examine why you get such a low alpha value. Most importantly, you should first check whether each scale (item set) can be seen as unidimensional (measuring a single factor). This is usually done by running a confirmatory factor analysis (CFA) or item response theory analysis. Unidimensionality is a prerequisite for a meaningful interpretation of Cronbach's alpha (alpha is a composite reliability index for essentially tau-equivalent measures). CFA allows you to test the assumption of unidimensionality/essential tau equivalence and to examine the item loadings.
Also, you can take a look at the item intercorrelations. If some items have low correlations with others, this may indicate that they do not measure the same factor (and/or that they contain a lot of measurement error). Another reason for a low alpha value can be an insufficient number of items.
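A minimal CFA sketch for one subscale in lavaan, under assumed item names (pd1 to pd7 in a data frame dat; both are placeholders):
library(lavaan)
model <- 'PD =~ pd1 + pd2 + pd3 + pd4 + pd5 + pd6 + pd7'
fit <- cfa(model, data = dat)
summary(fit, fit.measures = TRUE, standardized = TRUE)   # check fit indices and item loadings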
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
I have done my qPCR experiments and they gave me some results. I used the ΔΔCt method and calculated 2^(−ΔΔCt), transformed my data to base-10 logarithms, and separated my samples into controls and patients. I want to ask: if I see, for example, a fold change 4 times higher in patients for my gene of interest, should I use a one-tailed or a two-tailed t-test? And what if the distribution is not normal: should I do a non-parametric test, or can I remove the outliers and do the t-test? I am very confused by this statistical conundrum.
Relevant answer
Answer
If your data are not normally distributed, you should use a non-parametric statistical test such as the Wilcoxon rank-sum test (equivalently, the Mann-Whitney U test) to compare the expression levels between the two groups.
Regarding one-tailed vs. two-tailed: a one-tailed test specifies the direction of the effect (positive or negative) in advance, whereas a two-tailed test covers both directions at the same time.
Best...
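A minimal sketch of both options, under assumed column names (logfc for the log10 fold change, group for control vs. patient):
shapiro.test(dat$logfc[dat$group == "patient"])   # rough normality check, per group
t.test(logfc ~ group, data = dat)                 # if roughly normal
wilcox.test(logfc ~ group, data = dat)            # Wilcoxon rank-sum / Mann-Whitney U otherwise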
  • asked a question related to Advanced Statistical Analysis
Question
13 answers
Dear all,
I have conducted a research about snake chemical communication where I test the reaction of a few adult snake individuals (both males and females) to different chemical compounds. Every individual is tested 3 times with each of the compounds. Basically, I put a soaked paper towel in each of the individual terrariums and record the behavior for 10 minutes with a camera. The compounds are presented to the individuals in random order.
My grouping variable represents the reactions to each of the compounds for each of the sexes. For example, in the grouping variable I have categories titled “male reactions to compound X”, “male reactions to compound Y” etc. I have three dependent variables as follows: 1) whether there is an interest towards the compound presented or not (binary), 2) chin rubbing behavior recorded (I record how many times this behavior is exhibited) and 3) tongue-flick rate (average tongue-flicks per minute). The distribution is not normal.
What I would like to test is 1) whether there is a difference in the behavior between males and females, 2) whether there is a difference between the behavior of males snakes to the different compounds (basically if males react more to compound X, rather than to compound Y) and the same goes for females, and finally 3) whether males exhibit different behavior to different types of compounds (I want to combine for example compounds X, Y and Z, because they are lipids and A, B and C, because they are alkanes and check difference in male responses).
I thought that PERMANOVA will be enough, since it is a multivariate non-parametric test, but two reviewers wrote that I have to use Generalized linear mixed models, because of the repeated measures (as mentioned, I test each individual with each of the compounds 3 times). They think there might be some individual differences that could affect the results if not taken into consideration.
Unfortunately, I am a newbie in GLMM, and I do not really see how such model can help me answer my questions and test the respective hypotheses. Could you, please, advise me on that? And how should I build the data matrix in order to test for such differences?
Isn’t it also possible to check for differences between individuals with the Friedman test and then use PERMANOVA?
Thank you very much in advance!
Relevant answer
Answer
In general, PERMANOVA tests the effect of parallel variables on the organism; it is roughly equivalent to running separate one-way ANOVAs. A GLMM, by contrast, models the combined effect of all factors, and from it you can derive the contribution of each variable, i.e. the magnitude of each factor's effect. You can think of PERMANOVA as the parallel effect of several factors, while the GLMM gives their combined effect. GLMMs are straightforward to fit in R.
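A minimal GLMM sketch matching the design described in the question, assuming the lme4 package and hypothetical column names (interest, compound, sex, individual):
library(lme4)
# binary "interest" response; the random intercept per snake handles the repeated trials
fit <- glmer(interest ~ compound * sex + (1 | individual),
             data = dat, family = binomial)
summary(fit)
# chin-rub counts could use family = poisson; tongue-flick rates, a linear mixed model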
  • asked a question related to Advanced Statistical Analysis
Question
7 answers
Holzinger (Psychometrika, vol. 9, no. 4, Dec. 1944) and Thurstone (Psychometrika, vol. 10, no. 2, June 1945; vol. 14, no. 1, March, 1949) discussed an alternative method for factoring a correlation matrix. The idea was to enter several clusters of items (tests) in the computer program beforehand, and then test them, optimize them and produce the residual matrix (which may show the necessity of further factoring). These clusters could stem from theoretical and substantive considerations, or from an inspection of the correlation matrix. It was an alternative to producing one factor at a time until the residual matrix becomes negligible, and was attractive because it spared much calculation time for the computers in that era. That reason soon lapsed but the method is still interesting as an alternative kind of confirmatory factor analysis.
My problem is: I would like to know the exact procedure (especially the one by Holzinger), but I cannot get hold of these three original publications (except the first two pages) other than at considerable expense, nor can I find a thorough discussion of it in another publication, except perhaps in H.H. Harman (1976): Modern Factor Analysis, Section 11.5, but that book has disappeared from the university library, while on Google Books it is incomplete. Does anyone have a copy of these publications, or is he/she familiar with this type of factor analysis?
Relevant answer
Answer
In the last few months, a colleague of mine has written a version of the PCO program in R. The first impressions are good, but we need a few more months to test it and prepare a publication about it.
  • asked a question related to Advanced Statistical Analysis
Question
14 answers
Please share this question with an expert in statistics if you don't know the answer.
I am stuck here. I am working on a therapy and trying to evaluate the changes in biomarker levels. I selected 5 patients and analysed their biomarker levels prior to therapy, after the first therapy, and then after the second therapy. The mean values differ, but because of the large differences in the standard deviations, the ANOVA gives non-significant results, as in the table below.

Group     Sample Size   Mean     Standard Deviation   SE of Mean
vb bio    5             314.24   223.53627            99.96846
cb1 bio   5             329.7    215.54712            96.3956
CB II     5             371.6    280.77869            125.56805

So I want to know from the good statisticians who are well aware of clinical trial studies:
Am I performing the statistics correctly?
Should I worry about the non-significant results?
Which statistical tests should I use?
How should I represent my data for publication purposes?
Please be elaborate in your answers, and try to teach as if you are teaching someone new to the field.
Relevant answer
Answer
Massimo Sivo, it is very nice of you to try to help your colleague. However, as was mentioned earlier, you should understand the experimental design very well before you have sufficient information to construct an appropriate statistical model (one that can then provide meaningful insights). To understand the design of a clinical trial it is not enough to understand some statistics; you should also understand some medicine.
From what Shahnawaz Ahmad Wani described, one can by no means tell what was actually measured, how, and why. It is extremely challenging to sample 5 patients with appropriate controls and make any kind of biologically meaningful inference about the population of patients: tons of confounders and no power to take them into account. From how the problem was presented, I imagine such (confounder) data were not collected. Using such data and shoving them into, e.g., a mixed model to account for repeated measures (and the dependence of some parameters) will most likely give meaningless results. Even if everything was done right, failing to account for a biologically meaningful confounder would still make the results meaningless. For example, Shahnawaz Ahmad Wani told us he is measuring some kind of biochemical parameter in the blood of female patients. Many parameters in the blood change with the phase of the menstrual cycle, and it is not clear whether the authors analysed this. Without accounting for such a serious confounder the study makes no sense. Furthermore, how would you model the dependence between venous blood and "tissue blood"? The variables are most definitely related, but we cannot imagine the nature of their dependence without having much more information.
So, before asking for data, you should really ask for an explanation of the design, to make sure all the confounders were ruled out at the level of design, as this is the only scenario in which several lines of code or several clicks in some statistical software will help you in any way to make some kind of inference.
Please be mindful about this kind of stuff; uncertain and misleading conclusions can be very dangerous in medicine, and it is healthier for the community to talk about them than to just generate some numbers from the data.
Best,
jan
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
I have a distribution map produced with presence-only data, and there is a set of presence records that was not included in the model at all. How can I evaluate the agreement between these excluded presence records and the predicted values at the corresponding points of the potential distribution map? You can also think of it like this: I have two columns; the first column contains only 1s, the second contains the predicted values. Which method would be the best approach to examine the relationship between these two columns?
Relevant answer
Answer
I'm not sure I fully understand your question but when you have a column with a constant value (e.g., 1), this constant by definition cannot covary with another column/variable. A constant does not have any variance and therefore, the covariance/correlation with another variable will also be zero by definition.
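A one-line illustration of that point (the rnorm() values are arbitrary; any second vector behaves the same):
cor(rep(1, 10), rnorm(10))   # returns NA with a warning: the first column has zero variance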
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
Hello, I currently have a set of categorical variables, coded as Variable A,B,C,etc... (Yes = 1, No = 0). I would like to create a new variable called severity. To create severity, I know I'll need to create a coding scheme like so:
if Variable A = 1 and all other variables = 0, then severity = 1.
if Variable B = 1 and all other variables = 0, then severity = 2.
So on, and so forth, until I have five categories for severity.
How would you suggest I write a syntax in SPSS for something like this?
Relevant answer
Answer
* Create a toy dataset to illustrate.
NEW FILE.
DATASET CLOSE ALL.
DATA LIST LIST / A B C D E (5F1).
BEGIN DATA
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
1 1 0 0 0
0 1 1 0 0
0 0 1 1 0
0 0 0 1 1
1 0 2 0 0
END DATA.
IF A EQ 1 and MIN(B,C,D,E) EQ 0 AND MAX(B,C,D,E) EQ 0 severity = 1.
IF B EQ 1 and MIN(A,C,D,E) EQ 0 AND MAX(A,C,D,E) EQ 0 severity = 2.
IF C EQ 1 and MIN(B,A,D,E) EQ 0 AND MAX(B,A,D,E) EQ 0 severity = 3.
IF D EQ 1 and MIN(B,C,A,E) EQ 0 AND MAX(B,C,A,E) EQ 0 severity = 4.
IF E EQ 1 and MIN(B,C,D,A) EQ 0 AND MAX(B,C,D,A) EQ 0 severity = 5.
FORMATS severity (F1).
LIST.
* End of code.
Q. Is it possible for any of the variables A to E to be missing? If so, what do you want to do in that case?
  • asked a question related to Advanced Statistical Analysis
Question
9 answers
I am using -corr2data- to simulate raw data from a correlation matrix. However, some of the variables I need should be binary. How can I convert them?
Is it possible to convert the higher values to 1 (and the other ones to 0) in a way that preserves the same mean? How should I do it?
Is there a way in R?
(I want to perform a GSEM on a correlation matrix.)
(I know the -faux- package in R, but my problem is that only some of [not all of] my variables are binary.)
Relevant answer
Answer
Maybe the attached is what you mean. David Booth
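If the goal is the thresholding described in the question, a minimal R sketch (the target prevalence p and the simulated variable are hypothetical):
set.seed(42)
x <- rnorm(1000)                    # hypothetical simulated continuous variable
p <- 0.3                            # target mean of the binary version
cut_off <- quantile(x, probs = 1 - p)
x_bin <- as.integer(x > cut_off)    # higher values -> 1, the rest -> 0
mean(x_bin)                         # approximately 0.3
Note that dichotomizing will generally attenuate the variable's correlations with the others, so the original correlation matrix will no longer be reproduced exactly.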
  • asked a question related to Advanced Statistical Analysis
Question
3 answers
Hello, I currently have a set of categorical variables, coded as Variable A,B,C,etc... (Yes = 1, No = 0). I would like to create a new variable called severity. To create severity, I know I'll need to create a coding scheme like so:
if Variable A = 1 and all other variables = 0, then severity = 1.
if Variable B = 1 and all other variables = 0, then severity = 2.
So on, and so forth, until I have five categories for severity.
How would you suggest I write a syntax in SPSS for something like this? Thank you in advance!
Relevant answer
Answer
Ange, I think the easiest way for you to find an answer to your question would be to google something such as "SPSS recode variables YouTube". You'll probably find several sites that demonstrate what you want to do.
All the best with your research.
  • asked a question related to Advanced Statistical Analysis
Question
5 answers
I am creating a hypothetical study in which two drugs are being tested. Thus I have taken 60 participants and randomly split them into three groups: drug A, drug B, and a control group. A YBOCS score will be taken before the trial, after the trial has ended, and then again at a 3-month follow-up. Which statistical test should I use to compare the three groups and to find out which was most effective?
Relevant answer
Answer
What do you mean "hypothetical study?" Is this a homework question?
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
For example:
If there are 40 species identical between two sites, they are the same. However, two sites can each have 40 species, but none in common. So by species number they are identical, but by species composition they are 0% alike.
How can I calculate or show the species composition of the two sites over time?
Relevant answer
You use beta diversity (β), which is a measure of the difference in composition of species between locations :)
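A minimal sketch of a composition-based comparison, assuming the vegan package and a hypothetical site-by-species presence/absence matrix:
library(vegan)
comm <- rbind(site1 = c(1, 1, 0, 1, 0),
              site2 = c(0, 1, 1, 1, 1))
vegdist(comm, method = "jaccard", binary = TRUE)   # 0 = identical composition, 1 = nothing shared
Computing this for each pair of time points lets you track how the two sites' compositions diverge or converge over time.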
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
During the lecture, the lecturer mentioned the properties of frequentist estimators, as follows:
Unbiasedness is only one of the frequentist properties — arguably, the most compelling from a frequentist perspective and possibly one of the easiest to verify empirically (and, often, analytically).
There are however many others, including:
1. Bias-variance trade-off: we would consider as optimal an estimator with little (or no) bias; but we would also value ones with small variance (i.e. more precision in the estimate), So when choosing between two estimators, we may prefer one with very little bias and small variance to one that is unbiased but with large variance;
2. Consistency: we would like an estimator to become more and more precise and less and less biased as we collect more data (technically, when n → ∞).
3. Efficiency: as the sample size increases indefinitely (n → ∞), we expect an estimator to become increasingly precise (i.e. its variance to reduce to 0, in the limit).
Why does the frequentist approach have these kinds of properties, and can we prove them? I think these properties could be applied to many other statistical approaches.
Relevant answer
Answer
Sorry, Jianhing, but I think you have misunderstood something in the lecture. Frequentist statistics is an interpretation of probability, assigned on the basis of many repetitions of a random experiment.
In this setting, one designs functions of the data (also called statistics) which estimate certain quantities from the data. For example, the probability p of a coin landing heads is estimated from n independent trials with the same coin by just counting the fraction of heads. This is then an estimator for the parameter p.
Each estimator should have desirable properties, such as unbiasedness, consistency, efficiency, low variance and so on. Not every estimator has these properties, but in principle one can prove whether a given estimator has them.
So, it is not a characteristic of frequentist statistics as a whole, but a property of an individual estimator based on frequentist statistics.
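These properties can also be checked empirically by simulation; a minimal sketch for the coin example above (all numbers arbitrary):
set.seed(1)
p <- 0.3                                          # true P(heads)
est <- replicate(10000, mean(rbinom(50, 1, p)))   # estimator: sample proportion, n = 50
mean(est)   # close to p, so the estimator is (empirically) unbiased
var(est)    # about p*(1-p)/50 = 0.0042; it shrinks as n grows, illustrating consistency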
  • asked a question related to Advanced Statistical Analysis
Question
4 answers
Assuming that a researcher does not know the nature of the population distribution (its parameters or its type, e.g. normal, exponential, etc.), can the sampling distribution indicate the nature of the population distribution?