Advanced Statistical Analysis - Science topic
Explore the latest questions and answers in Advanced Statistical Analysis, and find Advanced Statistical Analysis experts.
Questions related to Advanced Statistical Analysis
Greetings,
I am currently in the process of conducting a Confirmatory Factor Analysis (CFA) on a dataset consisting of 658 observations, using a 4-point Likert scale. As I delve into this analysis, I have encountered an interesting dilemma related to the choice of estimation method.
Upon examining my data, I observed a slight negative kurtosis of approximately -0.0492 and a slight negative skewness of approximately -0.243 (please refer to the attached file for details). Considering these properties, I initially leaned towards utilizing the Diagonally Weighted Least Squares (DWLS) estimation method, as existing literature suggests that it takes into account the non-normal distribution of observed variables and is less sensitive to outliers.
However, to my surprise, when I applied the Unweighted Least Squares (ULS) estimation method, it yielded significantly better fit indices for all three factor solutions I am testing. In fact, it even produced a solution that seemed to align with the feedback provided by the respondents. In contrast, DWLS showed no acceptable fit for this specific solution, leaving me to question whether the assumptions of ULS are being violated.
In my quest for guidance, I came across a paper authored by Forero et al. (2009; DOI: 10.1080/10705510903203573), which suggests that if ULS provides a better fit, it may be a valid choice. However, I remain uncertain about the potential violations of assumptions associated with ULS.
I would greatly appreciate your insights, opinions, and suggestions regarding this predicament, as well as any relevant literature or references that can shed light on the suitability of ULS in this context.
Thank you in advance for your valuable contributions to this discussion.
Best regards, Matyas
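If the analysis happens to be run in R, the comparison described above can be reproduced with a minimal lavaan sketch; the one-factor model and the variable names below are placeholders, not the actual instrument.
library(lavaan)

model <- 'F1 =~ x1 + x2 + x3 + x4'   # placeholder one-factor model

fit_dwls <- cfa(model, data = dat, ordered = c("x1", "x2", "x3", "x4"), estimator = "DWLS")
fit_uls  <- cfa(model, data = dat, ordered = c("x1", "x2", "x3", "x4"), estimator = "ULS")

# Compare the two estimators on the same fit indices
fitMeasures(fit_dwls, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
fitMeasures(fit_uls,  c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))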
I have a longitudinal model, and the stability coefficients for one construct change dramatically from the first-to-second time point (.04) to the second-to-third time point (.89). I have offered a theoretical explanation for why this occurs, but I have been asked about potential model bias.
Why would this indicate model bias? (A link to research would be helpful).
How can I determine whether the model is biased or not? (A link to research would be helpful).
Thanks!
Hello, could someone assist me in interpreting the results of the sequential Mann-Kendall test (Sneyers)? According to Dufek (2008: Precipitation variability in São Paulo State, Brazil), "In the absence of any trend, the graphical representation of the direct series (u(t)) and the backward series (u'(t)) obtained with this method yields curves that overlap several times." In my case, I observe two to three overlaps, often in sequences that exhibit significant trends. Should I nevertheless conclude that there is an absence of trends in my dataset?

First, I sent these numbers to the Bard chatbot:
6020 5603 54830 6750 23000 12370 10000
It responded with this:
You have listed 7 numbers.
- 6020
- 5603
- 54830
- 6750
- 23000
- 12370
- 10000
The sum of these numbers is 103573.
The average of these numbers is 14825.
The median of these numbers is 6750.
The mode of these numbers is 10000, which appears twice.
The range of these numbers is 23000 - 10000 = 13000.
The standard deviation of these numbers is 3844.74.
Is there anything else I can help you with?
Then I asked it to standardize them, and the response was this:
Sure. Standardizing the numbers will mean subtracting the mean and then dividing by the standard deviation. This will transform the numbers so that they have a mean of 0 and a standard deviation of 1.
The standardized numbers are:
- -2.18
- -2.88
- 2.44
- -1.38
- 3.22
- -0.78
- 0.00
----------------------------------------
Bing Chat responded to the same question with this:
I can standardize the numbers you provided. Standardization is the process of transforming data to have a mean of 0 and a standard deviation of 1. This is done by subtracting the mean from each data point and then dividing by the standard deviation.
Here are the standardized values for the numbers you provided:
- 6020: -0.705
- 5603: -0.801
- 54830: 2.891
- 6750: -0.524
- 23000: 0.693
- 12370: -0.191
- 10000: -0.363
Is there anything else I can help with? 😊
------------------------------------------
And Perplexity's response was this:
Based on the search results, it is possible to standardize the numbers provided by converting them into z-scores. Standardizing data is a common practice in statistics to compare variables that have different scales or units of measurement. To standardize the numbers provided, one would need to calculate the mean and standard deviation of the data and then transform each value to its corresponding z-score
Here are the steps to standardize the numbers:
- Calculate the mean of the data: Mean = (6020 + 5603 + 54830 + 6750 + 23000 + 12370 + 10000) / 7 = 15038.43
- Calculate the standard deviation of the data: Standard deviation = sqrt(((6020 - 15038.43)^2 + (5603 - 15038.43)^2 + (54830 - 15038.43)^2 + (6750 - 15038.43)^2 + (23000 - 15038.43)^2 + (12370 - 15038.43)^2 + (10000 - 15038.43)^2) / 6) = 17523.22
- Calculate the z-score for each value: Z-score = (X - Mean) / Standard deviation
Z-score for 6020 = (6020 - 15038.43) / 17523.22 = -0.706
Z-score for 5603 = (5603 - 15038.43) / 17523.22 = -0.768
Z-score for 54830 = (54830 - 15038.43) / 17523.22 = 2.238
Z-score for 6750 = (6750 - 15038.43) / 17523.22 = -0.426
Z-score for 23000 = (23000 - 15038.43) / 17523.22 = 0.455
Z-score for 12370 = (12370 - 15038.43) / 17523.22 = -0.152
Z-score for 10000 = (10000 - 15038.43) / 17523.22 = -0.853
By standardizing the numbers, we have transformed them into a common scale that allows for easier comparison and analysis. It is important to note that the interpretation of z-scores depends on the context and the distribution of the data.
-----------------------------------------------------
So, as you can see, they can't even get the sum of the data correct. My question is: why can't these models get the sum of these numbers right, even when they apply all the correct steps?
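For reference, the same calculation can be done directly in R, so the chatbot outputs above can be checked against it:
x <- c(6020, 5603, 54830, 6750, 23000, 12370, 10000)

sum(x)                  # the sum of the seven numbers
mean(x)                 # the mean
sd(x)                   # the sample standard deviation (divides by n - 1)
(x - mean(x)) / sd(x)   # the standardized values (z-scores); scale(x) gives the same result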
Hello !
Regarding the significance level of a test, I would like to know when we can increase the significance level α to 0.1.
Thank you in advance.
In plant breeding, what are the uses of the discriminant function?
I am looking for a graphical tool, in the style of Visual Basic, for wiring R code to interactive graphical buttons and text boxes.
For example, I want to design a Windows application with a graphical interface for calculating body mass index (BMI). I want two boxes for weight and height input and a Run button. When the button is clicked, I want the code below to be run.
BMI <- box1/(box2^2)
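One commonly suggested option for this in R is the shiny package. Below is a minimal sketch of the BMI example; the input names (weight, height) stand in for box1 and box2, and units of kg and m are assumed.
library(shiny)

ui <- fluidPage(
  numericInput("weight", "Weight (kg)", value = 70),
  numericInput("height", "Height (m)", value = 1.75),
  actionButton("run", "Run"),
  textOutput("bmi")
)

server <- function(input, output) {
  result <- eventReactive(input$run, {
    input$weight / (input$height^2)   # BMI <- box1/(box2^2)
  })
  output$bmi <- renderText(paste("BMI:", round(result(), 1)))
}

shinyApp(ui, server)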
Some of the people who consult us are only users of statistics, while others are the ones who develop statistics, and we would love for people to use it correctly.
But, I believe, many arrive late, always after the experiment has been run, asking "what statistical procedure can I do or apply?". Perhaps they do not know that they should always consult with the question or hypothesis they wish to answer or verify in hand, since that would allow a better answer.
On the other hand, some come with simple queries, but usually a statistics class is given as an answer, which I feel in some cases comes too late. In some cases it is extremely necessary, but in others it opens a debate that leads to serendipity.
Wouldn't it be better to try to advise them in a more precise way?
I look forward to reading your replies:
I've got a data set and I want to calculate R² for a linear regression equation from another study.
For example, I have derived an equation from my data (with its R²) and I want to test how equations from other studies perform on my data (and thus calculate R² for them). Then, I want to compare the R² from my data set with the R² from the derivation studies.
Do you have any software for this? Could any common statistical software cope with this task (e.g. SPSS or SAS)? Do you know of any tutorials on YouTube for this?
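No special software is needed for this; any package that can compute predictions will do. A rough sketch in R, where the data frame, column names and the published coefficients are all placeholders:
# Published equation from the other study: y = a + b * x (placeholder coefficients)
a <- 1.2
b <- 0.8

pred   <- a + b * df$x                     # predictions of the external equation on your data
ss_res <- sum((df$y - pred)^2)             # residual sum of squares
ss_tot <- sum((df$y - mean(df$y))^2)       # total sum of squares
r2_external <- 1 - ss_res / ss_tot         # R² of the external equation on your own data
r2_external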
Dear colleagues,
I analyzed my survey data using binary logistic regression, and I am assessing the results by looking at the p-value, B, and Exp(B) values. However, the task also requires specifying the significance of the marginal effects. How should I interpret the results of a binary logistic regression when considering the significance of the marginal effects?
Best,
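Regarding the marginal effects asked about above: if the same model can also be fitted in R, one option is the margins package, which reports average marginal effects with standard errors and p-values; the model and variable names below are placeholders.
library(margins)

fit <- glm(outcome ~ age + gender + income, family = binomial, data = survey)
ame <- margins(fit)     # average marginal effects for each predictor
summary(ame)            # AME, SE, z, p-value and confidence interval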
I constructed a linear mixed-effects model in Matlab with several categorical fixed factors, each having several levels. Fitlme calculates confidence intervals and p values for n-1 levels of each fixed factor compared to a selected reference. How can I get these values for other combinations of factor levels? (e.g., level 1 vs. level 2, level 1 vs. level 3, level 2 vs. level 3).
Thanks,
Chen
Has anyone conducted a meta-analysis with Comprehensive Meta-Analysis (CMA) software?
I have selected: comparison of two groups > means > Continuous (means) > unmatched groups (pre-post data) > means, SD pre and post, N in each group, Pre/post corr > finish
However, it is asking for pre/post correlations which none of my studies report. Is there a way to calculate this manually or estimate it somehow?
Thanks!
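On the missing pre/post correlation mentioned above: a common fallback is to assume a value (often around 0.5) and run a sensitivity analysis over a plausible range, since the SD of the change score follows directly from the pre and post SDs and that correlation. A sketch in R with placeholder values:
sd_pre  <- 10
sd_post <- 12
r       <- 0.5    # assumed pre/post correlation; repeat with e.g. 0.3 and 0.7 as a sensitivity check

sd_change <- sqrt(sd_pre^2 + sd_post^2 - 2 * r * sd_pre * sd_post)
sd_change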
Hello everyone,
I'm going to conduct a meta-analysis of psychological interventions relevant to a topic via Comprehensive Meta-Analysis (CMA) software. I have a few questions/points for clarification:
- From my understanding, I should only meta-analyse interventions that have used a pre-test, post-test (with and/or without follow-up) design, as meta-analysing post-test only designs with the others is not effective. Is my understanding correct?
- Can I combine between-subjects and within-subjects designs together or do I need to meta-analyse them separately?
Thanks in advance!
I have ordinal data on happiness of citizens from multiple countries (from the European Value Study) and I have continuous data on the GDP per capita of multiple countries from the World Bank. Both of these variables are measured at multiple time points.
I want to test the hypothesis that countries with a low GDP per capita will see more of an increase in happiness with an increase in GDP per capita than countries that already have a high GDP per capita.
My first thought to approach this is that I need to make two groups; 1) countries with low GDP per capita, 2) countries with high GDP per capita. Then, for both groups I need to calculate the correlation between (change in) happiness and (change in) GDP per capita. Lastly, I need to compare the two correlations to check for a significant difference.
I am stuck, however, on how to approach the correlation analysis. For example, I don't know how to (and whether I even have to) include the repeated measures from the different time points at which the data were collected. If I just base my correlations on one time point, I feel like I am not really testing my research question, considering that I am talking about an increase in happiness and an increase in GDP, which is a change over time.
If anyone has any suggestions on the right approach, I would be very thankful! Maybe I am overcomplicating it (wouldn't be the first time)!
Hello! I would like to ask the experts about conducting a statistical analysis using only nominal variables. Specifically, I would like to compare the responses of survey participants who answered the question of whether they take certain medications ("Yes" or "No"), and analyze the data against different criteria such as education level, economic status, marital status, etc. I have conducted a chi-squared test to determine whether there is a significant difference between the variables, but now I would like to compare the answers on whether or not this medicine is taken across the groups, for example within the education variable (higher, secondary, vocational and basic education). Is there a statistical test, similar to Tukey's test, that is suitable for nominal variables? I would also like to know whether it is possible to create a column chart with asterisks above the columns indicating the significant differences between them, based on such a test for nominal variables.
I usually use Statistica (StatSoft) and RStudio, but none of my attempts to run a post-hoc analysis for nominal variables in either of them was successful. In RStudio I tried: pairwise.prop.test(cont_table, p.adjust.method = "bonferroni")
But I got an error:
Error in pairwise.prop.test(cont_table, p.adjust.method = "bonferroni") :
'x' must have 2 columns
I assume that this is because one of my variables has more than two groups.
What should I do?
Thank you in advance for your help!
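The error arises because pairwise.prop.test() expects, for each group, the counts of "successes" and "failures", i.e. a matrix with exactly two columns. A sketch with made-up counts for the four education groups:
yes <- c(higher = 40, secondary = 55, vocational = 30, basic = 20)   # "takes the medication: Yes" (made up)
no  <- c(higher = 60, secondary = 45, vocational = 70, basic = 80)   # "takes the medication: No" (made up)

counts <- cbind(yes, no)   # rows = education groups, two columns = Yes/No counts
pairwise.prop.test(counts, p.adjust.method = "bonferroni")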
The variables I have, vegetation index and plant disease severity scores, were not normally distributed. So I applied a log10(y+2) transformation to the vegetation index and a sqrt(log10(y+2)) transformation to the plant disease severity score. Plant disease severity is on a scale of 0, 10, 20, 30, ..., 100 and was scored based on visual observations. Even after the combined transformation, the disease severity data are non-normal, but the transformation improves the CV in simple linear regression.
Can I proceed with a parametric test, i.e. a simple linear regression between the log-transformed vegetation index (normally distributed) and the combined-transformed (non-normal) disease severity data?
Hi everyone! I need to examine interactions between categorical and continuous predictors in a path analysis model. Which strategy would be more accurate: 1) including the categorical variable, the continuous one, and their interaction as separate terms, or 2) running a multigroup analysis?
I have the same problem with several models. For instance, examining potential differences in the effect of executive function (continuous predictor) on reading comprehension (outcome variable) among children from different grades (categorical predictor).
Thank you so much for your help!
I want to study the relationship between physical activity parameters across the lifespan and a pain outcome (binary). I have longitudinal data with four measurements, hence repeated measures.
Should I use a GEE or a mixed model? And does anyone have guidance on how to rearrange my dataset so that it fits these methods? I have tried the GEE with the data in both long and wide format, but I keep getting errors.
To clarify, my outcome is binary (at the last measurement), and my independent variables are measured at four time points (with the risk of them being correlated).
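For the data-rearrangement part of the question, a rough sketch of going from wide to long format and fitting a GEE is shown below; the data frame and column names are assumptions, and it presumes the variables entering the model are available at each wave.
library(tidyr)
library(geepack)

# wide_df is assumed to have columns: id, pain, activity_t1 ... activity_t4
long <- pivot_longer(wide_df,
                     cols = starts_with("activity_t"),
                     names_to = "time",
                     values_to = "activity")

fit <- geeglm(pain ~ activity + time,
              id = id, family = binomial, corstr = "exchangeable",
              data = long)
summary(fit)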
How can I define a graphics space to make plots like the attached figure below using the graphics package in R?
I need help placing (centring) each panel using the "mar" argument.
Reghais.a
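Without seeing the attached figure, here is a generic base-graphics sketch: layout() divides the device into panels, and par(mar = c(bottom, left, top, right)) (in lines of text) controls the margins of each panel, which is what shifts or centres each plot within its region. The panel arrangement below is only an assumption.
layout(matrix(c(1, 2,
                3, 3), nrow = 2, byrow = TRUE))   # two panels on top, one wide panel below

par(mar = c(4, 4, 2, 1))                          # bottom, left, top, right margins (lines)
plot(rnorm(50), main = "Panel 1")
par(mar = c(4, 4, 2, 1))
plot(rnorm(50), main = "Panel 2")
par(mar = c(5, 4, 2, 1))
plot(rnorm(100), type = "l", main = "Panel 3")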
I am stuck on the problem below; it would be a pleasure to have your ideas.
I've written the same program in two languages, Python and R, but each gives a completely different result. Before jumping to a conclusion, I note that:
- Every part of the code in the two languages has been checked multiple times, is correct, and represents the same thing.
- The packages used in the two languages are the same version.
So, what do you think?
The code is about applying deep neural networks for time series data.
Hi, I am looking for a way to derive standard deviations from estimated marginal means using mixed linear models with SPSS. I already figured where SPSS provides the pooled SD to calculate the SMD, however, I still need the SD of the means. Any help is appreciated!
I have data in a 30 x 1 matrix. Is it possible to find the best optimised value using a gradient descent algorithm? If yes, please share the procedure or a link to the detailed background theory behind it; it will help me proceed further with my research.
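As an illustration only, gradient descent can certainly be run on a single 30 x 1 vector; the sketch below minimises the mean squared deviation, whose optimum is simply the sample mean, so the result can be verified directly.
set.seed(1)
x  <- rnorm(30, mean = 5)     # stand-in for the 30 x 1 data vector
m  <- 0                       # initial guess for the value being optimised
lr <- 0.1                     # learning rate (step size)

for (i in 1:200) {
  grad <- -2 * mean(x - m)    # gradient of mean((x - m)^2) with respect to m
  m <- m - lr * grad          # descent step
}

c(gradient_descent = m, sample_mean = mean(x))   # the two should agree closely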
I want to display the bivariate distribution of two (laboratory) parameters in sets of patients. I have available the N and mean ± SD of the first and second parameters. I am looking for software that could draw a bivariate distribution (an ellipse) from the given parameters. Can someone help me? Thank you.
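One caveat: an ellipse for a bivariate normal needs the two means, the two SDs and a correlation, and the correlation is not part of the available summary data, so it has to be assumed. A sketch in R with the ellipse package (all values are placeholders):
library(ellipse)

m1 <- 5.2;  sd1 <- 1.1    # mean and SD of parameter 1 (placeholders)
m2 <- 140;  sd2 <- 15     # mean and SD of parameter 2 (placeholders)
r  <- 0.4                 # assumed correlation between the two parameters

plot(ellipse(r, scale = c(sd1, sd2), centre = c(m1, m2), level = 0.95),
     type = "l", xlab = "Parameter 1", ylab = "Parameter 2")
points(m1, m2, pch = 3)   # mark the centre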
Hi,
There is an article for which I want to know which statistical method was used: regression or Pearson correlation.
However, the authors don't say which one. They show the correlation coefficient and its standard error.
Based on these two parameters, can I tell whether they used regression or Pearson correlation?
How can I run the bootstrap method to estimate the error rate in linear discriminant analysis using R code?
Best
reghais.A
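A rough sketch of one way to do this, using the iris data as a stand-in and a simple out-of-bag bootstrap estimate of the misclassification rate:
library(MASS)

set.seed(123)
B   <- 200
err <- numeric(B)
for (b in 1:B) {
  idx  <- sample(nrow(iris), replace = TRUE)     # bootstrap sample of row indices
  fit  <- lda(Species ~ ., data = iris[idx, ])   # LDA fitted on the bootstrap sample
  oob  <- iris[-unique(idx), ]                   # out-of-bag observations
  pred <- predict(fit, oob)$class
  err[b] <- mean(pred != oob$Species)            # misclassification rate on the out-of-bag data
}
mean(err)                                        # bootstrap (out-of-bag) estimate of the error rate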
Dear all,
I want to know your opinions. There is also a good paper on this.
How can I add robust 97.5% confidence ellipses to the variation diagrams (XY, ilr-transformed) using the robCompositions or compositions packages?
Best
Azzeddine
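I am not sure of a ready-made function in robCompositions for exactly this plot, so here is a generic sketch instead: ilr-transform the composition, estimate a robust centre and covariance (MCD), and draw the 97.5% ellipse over the ilr coordinates. The package choices, part names and the three-part composition are all assumptions.
library(compositions)   # acomp(), ilr()
library(robustbase)     # covMcd()
library(ellipse)        # ellipse()

comp <- acomp(my_data[, c("A", "B", "C")])   # hypothetical 3-part composition
z    <- unclass(ilr(comp))                   # the two ilr coordinates as a plain matrix
rob  <- covMcd(z)                            # robust centre and covariance (MCD)

plot(z, xlab = "ilr 1", ylab = "ilr 2")
lines(ellipse(rob$cov, centre = rob$center, level = 0.975), col = "red")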
Res. Sir/ Madam,
I am working as a Scientist (Horticulture) and my research focus is the improvement of tropical and semi-arid fruits. I am also interested in working out the role of nutrients in fruit-based cropping systems.
I am looking for collaborators from the fields of Genetics and Plant Breeding, Horticulture, Agricultural Statistics, Soil Science and Agronomy.
I am currently working on genetic analysis of fruit traits in Jamun (Indian blackberry).
I am testing hypotheses about the relationships between CEA and Innovation Performance (IP). If I am testing the relationship of one construct, say management support, to IP, is it OK to use simple linear regression? Or should I test it in a multiple regression with all the constructs?
What are current recommendations for reporting effect size measures from repeated measures multilevel model?
Concerning analytical approach, I have followed procedure by Garson (2020) with matrix for repeated measures: diagonal, and matrix for random effects: variance components.
In advance, thank you for your contributions.
Merry Christmas everyone!
I used the Interpersonal Reactivity Index (IRI) subscales Empathic Concern (EC), Perspective Taking (PT) and Personal Distress (PD) in my study (N = 900). When I calculated Cronbach's alpha for each subscale, I got .71 for EC, .69 for PT and .39 for PD. The value for PD is very low. The analysis indicated that if I deleted one item, the alpha would increase to .53, which is still low but better than .39. However, as my study does not focus mainly on the psychometric properties of the IRI, what kind of arguments can I make to say the results are still valid? I did say that findings (for the PD) should be taken with caution, but what else can I say?
I have done my qPCR experiments and obtained some results. I used the DDCt method, calculated 2^(-DDCt), transformed my data to the base-10 logarithm, and separated my samples into controls and patients. I want to ask: if I see, for example, a fold change 4 times higher in patients for my gene of interest, should I use a one-tailed or a two-tailed t-test? And what if the distribution is not normal: should I run a non-parametric test, or can I remove the outliers and do the t-test? I am very confused by this statistical conundrum.
Dear all,
I have conducted a research about snake chemical communication where I test the reaction of a few adult snake individuals (both males and females) to different chemical compounds. Every individual is tested 3 times with each of the compounds. Basically, I put a soaked paper towel in each of the individual terrariums and record the behavior for 10 minutes with a camera. The compounds are presented to the individuals in random order.
My grouping variable represents the reactions to each of the compounds for each of the sexes. For example, in the grouping variable I have categories titled “male reactions to compound X”, “male reactions to compound Y” etc. I have three dependent variables as follows: 1) whether there is an interest towards the compound presented or not (binary), 2) chin rubbing behavior recorded (I record how many times this behavior is exhibited) and 3) tongue-flick rate (average tongue-flicks per minute). The distribution is not normal.
What I would like to test is 1) whether there is a difference in the behavior between males and females, 2) whether there is a difference between the behavior of males snakes to the different compounds (basically if males react more to compound X, rather than to compound Y) and the same goes for females, and finally 3) whether males exhibit different behavior to different types of compounds (I want to combine for example compounds X, Y and Z, because they are lipids and A, B and C, because they are alkanes and check difference in male responses).
I thought that PERMANOVA would be enough, since it is a multivariate non-parametric test, but two reviewers wrote that I have to use generalized linear mixed models because of the repeated measures (as mentioned, I test each individual with each of the compounds 3 times). They think there might be individual differences that could affect the results if not taken into consideration.
Unfortunately, I am a newbie with GLMMs, and I do not really see how such a model can help me answer my questions and test the respective hypotheses. Could you please advise me on that? And how should I build the data matrix in order to test for such differences?
Isn’t it also possible to check for differences between individuals with the Friedman test and then use PERMANOVA?
Thank you very much in advance!
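For what the reviewers seem to have in mind, a rough sketch of a GLMM in R with one row per trial and the individual as a random intercept is shown below; the column names are assumptions, and each response type gets its own model.
library(lme4)

# Binary "interest" response (interest shown yes/no per trial)
fit_interest <- glmer(interest ~ sex * compound + (1 | individual),
                      family = binomial, data = trials)

# Count of chin-rubbing events per trial
fit_chin <- glmer(chin_rubs ~ sex * compound + (1 | individual),
                  family = poisson, data = trials)

summary(fit_interest)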
Holzinger (Psychometrika, vol. 9, no. 4, Dec. 1944) and Thurstone (Psychometrika, vol. 10, no. 2, June 1945; vol. 14, no. 1, March, 1949) discussed an alternative method for factoring a correlation matrix. The idea was to enter several clusters of items (tests) in the computer program beforehand, and then test them, optimize them and produce the residual matrix (which may show the necessity of further factoring). These clusters could stem from theoretical and substantive considerations, or from an inspection of the correlation matrix. It was an alternative to producing one factor at a time until the residual matrix becomes negligible, and was attractive because it spared much calculation time for the computers in that era. That reason soon lapsed but the method is still interesting as an alternative kind of confirmatory factor analysis.
My problem is: I would like to know the exact procedure (especially the one by Holzinger), but I cannot get hold of these three original publications (except the first two pages) without great expense, nor can I find a thorough discussion of the method in another publication, except perhaps in H. H. Harman (1976), Modern Factor Analysis, Section 11.5, but that book has disappeared from the university library, while on Google Books it is incomplete. Does anyone have a copy of these publications, or is anyone familiar with this type of factor analysis?
Please share this question with an expert in statistics if you don't know the answer.
I am stuck here. I am working on a therapy and trying to evaluate the changes in biomarker levels. I selected 5 patients and analysed their biomarker levels prior to therapy, then after the first therapy, and then after the second therapy. When I apply ANOVA, the group means differ, but because of the large differences in their standard deviations I am getting non-significant results, as in the table below.
Group     Sample Size   Mean     Standard Deviation   SE of Mean
vb bio    5             314.24   223.53627            99.96846
cb1 bio   5             329.7    215.54712            96.3956
CB II     5             371.6    280.77869            125.56805
I would like to hear from statisticians who are well versed in clinical trial studies.
Please suggest:
Am I performing the statistics correctly?
Should I worry about the non-significant results, or not?
What statistical tests should I use?
How should I present my data for publication purposes?
Please be elaborate in your answers, and try to explain as if you were teaching someone new to this field.
I have a distribution map produced with only presence data, and there is a certain amount of presence data that was not included in the model in any way. How can I evaluate the agreement between the presence data not included in the model and the predicted values corresponding to these points on the potential distribution map? One can also think of it like this: I have two columns; the first column has only values of 1, and the second column has the predicted values. Which method would be the best approach to examine the relationship between these two columns?
Hello, I currently have a set of categorical variables, coded as Variable A,B,C,etc... (Yes = 1, No = 0). I would like to create a new variable called severity. To create severity, I know I'll need to create a coding scheme like so:
if Variable A = 1 and all other variables = 0, then severity = 1.
if Variable B = 1 and all other variables = 0, then severity = 2.
So on, and so forth, until I have five categories for severity.
How would you suggest I write a syntax in SPSS for something like this?
I am using -corr2data- to simulate raw data from a correlation matrix. However, some of the variables that I need should be binary. How can I convert them?
Is it possible to convert the higher values to 1 (and the others to 0) in such a way that the same mean is reached? How should I do it?
Is there a way in R?
(I want to perform a GSEM on a correlation matrix)
(I know the -faux- package in R, but my problem is that only some of [not all of] my variables are binary.)
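Since R was mentioned, a rough sketch of one way to do the dichotomisation there: simulate from the correlation matrix, then cut each variable that should be binary at the quantile that reproduces the desired proportion of 1s. The matrix and the target proportion below are placeholders, and note that dichotomising attenuates the correlations involving that variable, so the achieved matrix will no longer match the input exactly.
library(MASS)

set.seed(42)
R <- matrix(c(1.0, 0.4, 0.3,
              0.4, 1.0, 0.5,
              0.3, 0.5, 1.0), nrow = 3)          # placeholder correlation matrix
dat <- as.data.frame(mvrnorm(n = 1000, mu = rep(0, 3), Sigma = R))
names(dat) <- c("x1", "x2", "y")

p_target <- 0.30                                 # desired proportion of 1s for the binary variable
cutoff   <- quantile(dat$y, probs = 1 - p_target)
dat$y_bin <- as.integer(dat$y > cutoff)
mean(dat$y_bin)                                  # check the achieved proportion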
I am creating a hypothetical study in which two drugs are being tested. I have taken 60 participants and randomly split them into three groups: drug A, drug B and a control group. A YBOCS score will be taken before the trial, after the trial has ended, and then again at a 3-month follow-up. Which statistical test should I use to compare the three groups and to find out which drug was most effective?
For example:
If 40 species are identical between two sites, the sites are the same in that respect. However, two sites can each have 40 species but none in common. So by species number they are identical, but by species composition they are 0% alike.
How can I calculate or show the species composition of the two sites over time?
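Presence/absence similarity indices are the usual answer here; for example, the Jaccard index is the proportion of the combined species list that the two sites share, and it can be computed per time point to follow the change over time. A sketch with the vegan package and made-up data:
library(vegan)

site_A <- c(sp1 = 1, sp2 = 1, sp3 = 0, sp4 = 1)   # presence/absence at site A (made up)
site_B <- c(sp1 = 0, sp2 = 1, sp3 = 1, sp4 = 1)   # presence/absence at site B (made up)
comm   <- rbind(site_A, site_B)

d <- vegdist(comm, method = "jaccard", binary = TRUE)   # Jaccard dissimilarity
1 - d                                                   # Jaccard similarity (shared fraction)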
During a lecture, the lecturer mentioned the following properties of frequentist estimators:
Unbiasedness is only one of the frequentist properties — arguably, the most compelling from a frequentist perspective and possibly one of the easiest to verify empirically (and, often, analytically).
There are however many others, including:
1. Bias-variance trade-off: we would consider as optimal an estimator with little (or no) bias, but we would also value ones with small variance (i.e. more precision in the estimate). So when choosing between two estimators, we may prefer one with very little bias and small variance over one that is unbiased but has large variance;
2. Consistency: we would like an estimator to become more and more precise and less and less biased as we collect more data (technically, when n → ∞).
3. Efficiency: as the sample size increases indefinitely (n → ∞), we expect an estimator to become increasingly precise (i.e. its variance reduces to 0 in the limit).
Why do frequentist estimators have these kinds of properties, and can we prove them? I think these properties can be applied to many other statistical approaches.
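For the bias-variance trade-off in point 1, the standard way to make it precise is the mean squared error decomposition (stated here in LaTeX as a reminder, not as anything specific to the lecture):
\mathrm{MSE}(\hat{\theta}) = \mathbb{E}\big[(\hat{\theta} - \theta)^2\big] = \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}(\hat{\theta})^2, \qquad \mathrm{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta
so an estimator with a small bias but a much smaller variance can have a lower MSE than an unbiased estimator with a large variance, which is exactly the preference described in point 1.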
Assuming that a researcher does not know the nature of the population distribution (the parameters or the type, e.g. normal, exponential, etc.), is it possible for the sampling distribution to indicate the nature of the population distribution?
According to the central limit theorem, the sampling distribution of the mean is likely to be normal, so the exact population distribution cannot be known from it. Is the shape of the distribution for a large sample size enough, or does it have to be inferred logically based on other considerations?
Am I missing some points? Any lead or literature will help.
Thank you
Hello,
I have a variable in a dataset with ~500 answers; it essentially contains participants' answers to an essay question. I am interested in how many words each individual used in the task, and I cannot seem to find a function in R to count each participant's words in that particular variable.
Is there a way to do this in R? Any packages you think could help me do this? Thank you so much in advance!
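A rough sketch in base R, assuming the data frame is called dat and the essay column is called answer (both names are placeholders): split each answer on whitespace and count the pieces.
dat$word_count <- lengths(strsplit(trimws(dat$answer), "\\s+"))   # words per participant

# Equivalent with stringr, if preferred:
# library(stringr)
# dat$word_count <- str_count(dat$answer, "\\S+")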
Can you still have a good model despite a p-value < .05 for the Hosmer-Lemeshow goodness-of-fit test? Are there any alternative tests in SAS or R?
For example, I want to compare BMI between two groups.
When I use the Shapiro-Wilk test to check the normality of BMI in each group, one group is normally distributed and the other is not.
Which test should I use: the t-test or the Mann-Whitney test?
A few years ago, in a conversation that remained unfinished, a statistics expert told me that percentage variables should not be taken as a response variable in an ANOVA. Does anyone know if this is true, and why?
Hi all,
As a non-statistician, I have a (seemingly) complicated statistical question on my hands that I'm hoping to gather some guidance on.
For background, I am studying the spatial organization of a biological process over time (14 days), in a roughly-spherical structure. Starting with fluorescence images (single plane, no stacks), I generate one curve per experimental day that corresponds to the average intensity of the process as I pass through the structure; this is in the vein of line intensity profiling for immunofluorescence colocalization. I have one curve per day (see attached) and I'm wondering if there are any methods that can be used to compare these curves to check for statistical differences.
Any direction to specific methods or relevant literature is deeply appreciated, thank you!
Cheers,
Matt
Edit to add some additional information: the curves to be analyzed will be averages of curves generated from multiple biological replicates, and will therefore have error associated with them. Across the various time points and conditions, the number of values per curve ranges roughly from 200 to 1000 (one per pixel).

We are altering our original analytical method to save time and cost, and we are trying to find a sound footing for saying whether the method results are the same or significantly different. We are taking actual samples and analyzing them both with the original analytical method that we validated and with the altered method. I am not very knowledgeable about statistics, but is there a statistical way to say whether the methods are producing results that are the same or significantly different? Or is there a more common approach for determining whether two analytical methods are equivalent? I have attached the results from each of the variations we tried along with the original method results.
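One common starting point for this kind of method comparison is a paired analysis of the same samples measured by both methods (a Bland-Altman style look at the mean bias and limits of agreement, plus a paired test of the mean difference). A sketch in R with placeholder values, since the attached results are not reproduced here:
orig <- c(10.1, 9.8, 10.4, 10.0, 9.9)    # results from the original (validated) method (placeholders)
alt  <- c(10.0, 9.7, 10.6, 10.1, 9.8)    # results from the altered method on the same samples (placeholders)

d <- alt - orig
mean(d)                                  # mean bias between the two methods
mean(d) + c(-1.96, 1.96) * sd(d)         # approximate 95% limits of agreement
t.test(alt, orig, paired = TRUE)         # paired test of the mean difference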
What are the different steps and procedures of a meta-analysis, with a complete example?
What is pooled prevalence?
I have two datasets (measured and modelled).
I want to calculate 95% confidence intervals for RMSE.
Can anyone please help me with this?
Thank you in advance
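One straightforward option is a percentile bootstrap over the paired measured/modelled values; a sketch with simulated stand-in data:
set.seed(1)
measured <- rnorm(100, mean = 10, sd = 2)            # stand-in for the measured values
modelled <- measured + rnorm(100, mean = 0, sd = 1)  # stand-in for the modelled values

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

B <- 2000
boot_rmse <- replicate(B, {
  idx <- sample(length(measured), replace = TRUE)    # resample the pairs
  rmse(measured[idx], modelled[idx])
})
quantile(boot_rmse, c(0.025, 0.975))                 # percentile bootstrap 95% CI for the RMSE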
Greetings Fellow Researchers,
I am a newbie at using survival analysis (Cox regression). In my dataset, 10-40% of cases have missing values (depending on the variables I include in the analysis). Based on this, I have two questions:
1. Are there any recommendations on an acceptable percentage of cases dropped from the analysis (because of missing values)?
2. Should I impute the missing values for all the cases that would be dropped (let's say a maximum of 40%)?
Thank you so much for your time and kind consideration.
Best,
Sarang
I'm doing a germination assay of 6 Arabidopsis mutants under 3 different ABA concentrations in solid medium. I have 4 batches. Each batch has 2 plates for each mutant and 3 for the wild type, and each plate contains 8-13 seeds. Some seeds and plates were lost to contamination, so I don't have the same sample size for each mutant in each batch, and in some cases a mutant is no longer present in a batch. I recorded the germination rate per mutant after a week and expressed it as a percentage. I'm using R. How can I best analyse the data to test whether the mutations affect the germination rate in the presence of ABA?
I've two main questions:
1. Do I consider each seed as a biological replicate with a categorical result (germinated/not germinated), or each plate with a numerical result (% germination)?
2. I compare treatments within each genotype. Should I compare mutant against wild type within each treatment, the treatments against each other within each mutant, or both?
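One possible way to frame the model, sketched below with assumed column names: treat each plate as the unit, use the counts of germinated and non-germinated seeds as a binomial response (so unequal seed numbers are handled automatically), and let batch enter as a random effect.
library(lme4)

# 'plates' is assumed to have one row per plate with counts and design columns
fit <- glmer(cbind(germinated, not_germinated) ~ genotype * aba_conc + (1 | batch),
             family = binomial, data = plates)
summary(fit)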
For individual responses, I do not know how to calculate the value against which we have to check for outliers.
Please help in this regard.
I need suggestions for groundwater-assessment articles that used discriminant analysis in their study, as well as guidance on how to apply this analysis in R.
Reghais.A
Thanks
I'm trying to construct a binary logistic regression model. The first model includes 4 predictor variables and its intercept is not statistically significant. Meanwhile, in the second model, I exclude one variable from the first model and the intercept is significant.
The consideration I have here is that:
The pseudo-R² of the first model is better at explaining the data than that of the second model.
Any suggestion as to which model I should use?
I need a statistical result to test my hypothesis, but my N isn't large enough to put into Cochran's formula.
Besides that, I cannot collect more data to solve this issue. Do you know any other reliable method that fits this situation?
300 participants in my study viewed 66 different moral photos and had to make a binary choice (yes/no) in response to each. There were 3 moral photo categories (22 positive images, 22 neutral images and 22 negative images). I am running a multilevel logistic regression (we manipulated two other aspects of the images) and have found unnaturally high odds ratios (see below). We have no missing values. Could anyone please help me understand what the results below might mean? I understand I need to approach them with extreme caution, so any advice would be highly appreciated.
Yes choice: morally negative compared morally positive (OR=441.11; 95% CI [271.07,717.81]; p<.001)
Yes choice: morally neutral compared to morally positive (OR=0.94; 95% CI [0.47,1.87]; p=0.86)
It should be noted that when I plot the data, very few participants chose yes in response to the neutral and positive images; almost all yes responses were given to the negative images.
Hi,
I have 4 animals (A, B, C & D) and did behavioural observations for 90 days. The parameters that I want to test are temperature and time (whether the temperature/time affects their behaviour or not). I categorized temperature as cool vs. hot, and time as morning vs. afternoon.
The questions are:
1) Is it correct to use a non-parametric test, since the number of subjects is small (n < 10)?
2) If I want to compare the behaviour between the individual animals, do I use a paired t-test or an independent t-test?
3) If I want to look at the interaction between individual subject x temperature x time, is the data considered dependent or independent?
In my research I need to calculate the correlation between two variables. Each variable was measured 100 times for each subject (N = 20), so in total I have 2000 data points, of which every 100 belong to the same participant.
I have calculated a simple bivariate correlation on these 2000 data points without taking the participant effects into account. I know that this is wrong and that the participant effects should be accounted for, but I haven't managed to find a way to do this (in SPSS). Any help would be highly appreciated.
I am using an ARDL model; however, I am having some difficulties interpreting the results. I found that there is cointegration in the long run. I have provided pictures below.
I have long-term rainfall data and have calculated Mann-Kendall test statistics using the XLSTAT trial version (an add-in for MS Excel). There are options for an asymptotic test and a continuity correction in the XLSTAT drop-down menu.
- What do the terms "asymptotic" and "continuity correction" mean?
- When and under what circumstances should we apply it?
- Is there any assumption on time series before applying it?
- What are the advantages and limitations of these two processes?
In confirmatory factor analysis (CFA) in Stata, the first observed variable is constrained by default (beta coefficient = 1, mean of the latent variable = constant).
I don't know why this is, because other software packages report beta coefficients for all observed variables.
So, I have two questions.
1. Which variable should be constrained in confirmatory factor analysis in Stata?
2. Is it possible to have a model without a constrained variable, as in other software packages?
For my bachelor thesis I'm conducting a study on the relationship between eye movements and memory. One of the hypotheses is that the number of fixations made during the viewing of a movie clip will be positively related to memory for that movie clip.
Each participant viewed 100 movie clips, and the number of fixations were counted for each movie clip for each participant. Later participants' memory of the clips were tested and each movie was categorized as "remembered" or "forgotten" for each participant.
So, for each participants there are 100 trials with the corresponding number of fixations and categorization as "remembered" or "forgotten".
My first idea was to do a paired-samples t-test (to compare the number of fixations between "remembered" and "forgotten"), but I didn't find a way to do that in SPSS with this file format, as there are 100 rows for each participant. I thought of calculating the average number of fixations for the remembered vs. forgotten movies per participant and doing a t-test on these means (one mean per participant for each category), but this way the means get distorted, because some subjects remember far more clips than others (so the "mean of the means" is not the same as the overall mean).
Now I'm thinking that doing a t-test might not be appropriate at all, and that logistic regression would be a better choice (to see how well the number of fixations predicts whether a clip will be remembered vs. forgotten), but I didn't manage to find out how to do this in SPSS for a within-subject design with multiple trials per participant. Any help/suggestions would be highly appreciated.
Do serial correlation, autocorrelation and seasonality mean the same thing, or are they different terms? If they are different, what are the exact differences with respect to the field of statistical hydrology? What are the different statistical tests to determine (quantify) the serial correlation, autocorrelation and seasonality of a time series?
Hi
I have applied several conditional independence testing methods:
1- Fisher's exact test
2- Monte-Carlo Chi-sq
3- Yates correction of Chi-sq
4- CMH
The number of distinct feature segments that reject independence (the null hypothesis) is different for each method. Which method is more reliable, and why?
(The data satisfies the prerequisite of all of these methods)
I have six kinds of compounds which I tested for antioxidant activity using the DPPH assay and also for anticancer activity on five types of cell lines, so I have two groups of data:
1. Antioxidant activity data
2. Anticancer activity data (5 types of cancer cell line)
Each dataset consists of 3 replications. Which correlation test is the most appropriate to determine whether there is a relationship between the two activities?
I want to draw a graph of predicted probabilities vs. observed probabilities. For the predicted probabilities I use the R code below. Is this code OK or not?
Could anyone tell me how I can get the observed probabilities and draw a graph of predicted vs. observed probabilities?
analysis10 <- glm(Response ~ Strain + Temp + Time + Conc.Log10
                  + Strain:Conc.Log10 + Temp:Time,
                  family = binomial(link = "logit"), data = df)
predicted_probs <- data.frame(probs = predict(analysis10, type = "response"))
I have attached that data file
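For the observed probabilities, one common approach is a calibration-style plot: bin the predicted probabilities, take the observed proportion of successes in each bin, and plot observed against predicted. A rough sketch, assuming Response is coded 0/1 and the predictions are spread widely enough to form ten bins:
df$pred <- predict(analysis10, type = "response")
df$bin  <- cut(df$pred,
               breaks = quantile(df$pred, probs = seq(0, 1, 0.1)),
               include.lowest = TRUE)

obs_p  <- tapply(df$Response, df$bin, mean)   # observed proportion of successes per bin
pred_p <- tapply(df$pred, df$bin, mean)       # mean predicted probability per bin

plot(pred_p, obs_p, xlab = "Predicted probability", ylab = "Observed proportion",
     xlim = c(0, 1), ylim = c(0, 1))
abline(0, 1, lty = 2)                         # 45-degree reference line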
Four homogeneity tests, namely the Standard Normal Homogeneity Test (SNHT), the Buishand Range (BR) test, the Pettitt test and the Von Neumann Ratio (VNR) test, are applied to find the break point. Of these, SNHT, BR and Pettitt give the time stamp at which the break occurs, whereas VNR measures the amount of inhomogeneity. Multiple papers have made the claim that "SNHT finds break points at the beginning and end of the series, whereas the BR and Pettitt tests find break points in the middle of the series."
Is there any mathematical proof behind that claim? Is there any peer-reviewed work (journal article) that has proved the claim, or any paper that has cross-checked it?
Let me say that I have 100 years of data; then what does the start of the time series mean: the first 10 years, the first 15 years, or the first 20 years? How should one come to a conclusion?