Replication - Science topic

Explore the latest questions and answers in Replication, and find Replication experts.
Questions related to Replication
  • asked a question related to Replication
Question
2 answers
What is the appropriate ANOVA model for the following experimental design: the effect of four different concentrations of compound X on microorganisms? Each concentration has three jars, and three replicate samples are collected from each jar. Samples are drawn weekly for 18 weeks.
Relevant answer
Answer
You would actually need a rather complex model.
One aspect is that you have repeated measures, in that you are taking samples from the same experimental units across 18 weeks.
Another aspect is that you have three samples from each jar, so usually you would use a model that includes sample within jar (nested effects).
You might be able to simplify things depending on what you need to know.
It may be that the effect of jar isn't meaningful, so that you can treat three samples from three jars as nine simple samples.
It may be that you don't need to account for the 18 weeks of samples, but only look at the final week for your statistics. In this case, you could still plot and present the data over 18 weeks, which may be of interest, without having to run statistics over all 18 weeks. Again, it depends on what you want to know.
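If the jar effect turns out to be negligible and only the final week is analysed, the design collapses to a one-way ANOVA. A minimal pure-Python sketch with made-up final-week counts (the four concentration groups and their values are hypothetical; nine observations per group, i.e. 3 jars x 3 samples with the jar effect ignored):

```python
import statistics

# Hypothetical final-week counts for four concentrations of compound X,
# nine observations each (3 jars x 3 samples, jar effect ignored).
groups = {
    "0x": [12, 14, 11, 13, 12, 15, 13, 14, 12],
    "1x": [10, 11, 9, 10, 12, 11, 10, 9, 11],
    "2x": [8, 7, 9, 8, 7, 8, 9, 7, 8],
    "4x": [5, 6, 4, 5, 6, 5, 4, 6, 5],
}

def one_way_anova_F(groups):
    """Return (F, df_between, df_within) for a one-way layout."""
    all_vals = [v for g in groups.values() for v in g]
    grand = statistics.mean(all_vals)
    k, N = len(groups), len(all_vals)
    # Between-group sum of squares: spread of group means around grand mean.
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2
                     for g in groups.values())
    # Within-group sum of squares: spread of values around their group mean.
    ss_within = sum((v - statistics.mean(g)) ** 2
                    for g in groups.values() for v in g)
    df_b, df_w = k - 1, N - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

F, df_b, df_w = one_way_anova_F(groups)
print(F, df_b, df_w)
```

In practice you would use R or statsmodels, which also give the p-value; the sketch only shows where the F ratio comes from.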
  • asked a question related to Replication
Question
1 answer
I'm planning an experimental design to investigate the gene expression of mussels under thermal stress conditions, to compare their responses.
The experiment includes a control tank and a treatment tank, with enough mussels to sample 4 time points (60 mussels per time point per treatment).
In our facilities, I have two separate recirculation systems (with heaters and coolers) with two tanks each.
Since the two tanks of one recirculating system are connected, they cannot be considered isolated tanks and therefore are not replicates. I'm planning to distribute the control samples between the two tanks of system 1 and the treatment samples between the two tanks of system 2, and at each time point to sample from both tanks of one system and pool them together (e.g. 30 samples from tank 1 and 30 samples from tank 2 of the control system).
My question is: do I need tank replicates for this experiment? Or can I sample three groups of 60 mussels per time point and per treatment between the two tanks of one system, i.e. have sample replicates within the control-temperature and treatment-temperature tanks at each time point?
Thanks!
Relevant answer
Answer
I assume the two tanks in system 1 cannot be independently temperature controlled and the same for system 2.
If the two systems have identical tanks and control/circulation mechanisms, then you could assume no tank effect and hence no need for tank replicates. If there are differences, you could repeat the whole experiment swapping the control and treatment tank systems.
  • asked a question related to Replication
Question
1 answer
I have a compound (C23N3OH27, molecular weight 361.48) and need to reproduce some earlier results, but the results are not coming out the same. I am evaluating cell viability (K562 and KG1) with resazurin (24 hours of plating at 20,000 cells/100 uL, 24 hours of treatment in 100 uL, 4 hours with 20 uL of resazurin), and the results lead us to believe the compound does not induce death in any case. Concentrations tested: 30 uM, 20 uM, 10 uM, 5 uM, 1 uM. I have already checked cellular metabolism, the resazurin itself, and interaction of the compound with resazurin, and none of these explains why the results do not repeat. I suspect it could be my dilution; I used a colleague's table that performs the calculation automatically. Could someone help me do the dilution directly so I can check whether it is correct? I have 5 g of the compound in powder, which was dissolved in 2305.34 uL of 100% DMSO, which according to the table gave a 6,000 uM solution; I don't know if that's correct.
Note: my controls (+/-) are responding well, so I don't believe it's the resazurin or the plating.
Thanks for all contributions!
I have attached the dilution table below.
Relevant answer
Answer
Sorry! I did not understand the calculations from the Excel sheet, as it is very complicated.
“Could someone help me to do the dilution directly?”
Yes, let me make it simple.
The Molecular weight of the compound (C23N3OH27) is 361.48.
Then follow the sequence below.
361.48g -------- 1L -------- 1M
361.48g --------- 1L ------- 1000mM
0.36148g ---------- 1L ------ 1mM
361.48mg -------- 1000ml ------ 1mM
3.6148 mg ----------- 10ml -------- 1mM
So, weigh 3.6148 mg of the compound into 10 ml of 100% DMSO to give a 1 mM stock.
You may prepare working solutions (30uM, 20uM, 10uM, 5uM, 1uM) as follows.
You may use the formula: C1V1=C2V2
C1= Concentration of stock solution (1mM)
V1= Volume of stock solution (X)
C2= Concentration of working solution (30uM)
V2= Volume of working solution (say 1ml)
Then,
1mM x X = 30uM x 1ml
1000uM x X = 30uM x 1ml
30/1000 = 0.03ml of stock i.e., add 30ul of stock solution to 970ul of media to give 1ml of 30uM working solution.
Similarly,
For 20uM
20/1000 = 0.02 ml of stock i.e., add 20ul of stock solution to 980ul of media to give 1ml of 20uM working solution.
For 10uM
10/1000= 0.01ml of stock i.e., add 10ul of stock solution to 990ul of media to give 1ml of 10uM working solution.
For 5uM
5/1000 = 0.005ml of stock i.e., add 5ul of stock solution to 995ul of media to give 1ml of 5uM working solution.
For 1uM
1/1000= 0.001ml of stock i.e., add 1ul of stock solution to 999ul of media to give 1ml of 1uM working solution.
Since 1ul is a very minute quantity to pipette, it may lead to error. So, you may dilute the stock by 1:10 to make a diluted stock (0.1mM). Then take 10ul of diluted stock (0.1mM) and add to 990ul of media to obtain 1uM working solution. Use this calculation for 1uM working solution instead of the above.
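The C1V1 = C2V2 steps above can also be scripted, so each working-solution volume can be checked without the spreadsheet; a minimal sketch:

```python
# Reproducing the working-solution volumes via C1*V1 = C2*V2.
STOCK_UM = 1000.0  # 1 mM stock, expressed in uM

def stock_volume_ul(target_um, final_ul=1000.0, stock_um=STOCK_UM):
    """Volume of stock (ul) needed for a working solution: V1 = C2*V2/C1."""
    return target_um * final_ul / stock_um

for c in (30, 20, 10, 5, 1):
    v = stock_volume_ul(c)
    # e.g. 30 uM: 30 ul stock + 970 ul media, as in the calculation above
    print(f"{c} uM: {v:g} ul stock + {1000 - v:g} ul media")
```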
Best.
  • asked a question related to Replication
Question
2 answers
I want to measure the root volume of ramie plants grown in pots. I have replicated the trial, and I want to calculate the root volume of each plant in each replication. The procedure is unclear, and I want to get the expert's suggestions and guidelines. Any easy way to do this?
Relevant answer
Answer
Think of this: the volume of the root system is related to its weight, so use the weight of the root system as a proxy for its volume.
Use a coarse substrate that will detach easily from the roots.
Sample plants in a time sequence of 2, 4, and 8 weeks to get a growth curve.
Use at least 4 to 8 plants at each time. Soak the root system and gently remove the substrate. Weigh the fresh weight of the system and then dry it to get the dry weight.
Make sure you have both a control and treatments according to your hypothesis.
  • asked a question related to Replication
Question
1 answer
Dear all,
I'm working on the finer details of my experimental design, and have some questions regarding bridging channels for TMT based experiments.
I have two conditions to test, across nine biological replicates, in order to run as one 18-plex TMT-pro experiment.
I am aware that one or more bridging channels with pooled samples can be used to combine multiple TMT mixtures; however, a colleague has mentioned that a bridging channel should also be considered for normalisation even when only one set is used.
Does anyone have any experience using a bridging channel for normalisation in a single mixture? Is it worth sacrificing one or more biological replicates for?
I will be using MSstatsTMT for normalisation and summarisation.
Sam
Relevant answer
Answer
As an update to this discussion, I have decided to reduce my sample size and incorporate a pooled reference channel. Mostly to open up the possibility of integrating additional samples and conditions in the future.
Sam
  • asked a question related to Replication
Question
1 answer
Hi all,
I am optimising a ThT assay protocol for a-syn aggregation. Even when I perform 5 replicates, the curves show different lag times for the same sample. Is there a better way to ensure reproducibility between replicates?
Currently I have tried 100 uM and 20 uM of wild-type monomeric alpha-synuclein, shaking at 800 cpm, with and without the addition of NaCl. The ThT concentration in the final solution is 20 uM.
Kindly refer to the image attached for 20uM monomer with 100mM NaCl (as referenced from literature). Kindly ignore the timestamp, as these were all transferred from a different plate reader (so t=0 is actually after around 48 hours of shaking elapsed).
Or is it OK to just take the average of these curves? This does not sound right to me, as they all have different lag times.
Hope someone can advise!
Thanks and Regards,
Mathangi
Relevant answer
Answer
One point I forgot to include: I used a 384-well plate with 50 uL per well. As shown in the image attached to this comment, I started filling my wells from row 2 of the plate, but I forgot to add PBS/water to the surrounding wells to counter evaporation. I noticed that my first and last wells (denoted by arrows) show a decrease in the lag phase compared with the wells in between. Not sure if this might be a reason.
  • asked a question related to Replication
Question
3 answers
I need to conduct a feeding trial on broilers with 5 dietary treatments and 3 replications. How many broilers should be used in total, and how many in each treatment?
Relevant answer
Answer
The experimental unit is the replicate (and not the birds in that replicate).
If you have one sex only, ten to twelve birds per replicate may be sufficient (see e.g. the attached paper), but with mixed sexes the chance of an uneven distribution of males and females over the replicates, which increases variability, must be taken into account. In that case I would choose some 30-50 animals per replicate.
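The headcount then follows directly from treating the replicate (pen) as the experimental unit; a quick check assuming 12 birds per replicate (single sex, a hypothetical choice within the range above):

```python
# Headcount check: the replicate (pen) is the experimental unit,
# not the individual birds inside it.
treatments = 5
replicates_per_treatment = 3
birds_per_replicate = 12  # single sex; use ~30-50 for mixed sexes

birds_per_treatment = replicates_per_treatment * birds_per_replicate
total_birds = treatments * birds_per_treatment
print(birds_per_treatment, total_birds)
```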
  • asked a question related to Replication
Question
1 answer
How do you proceed when, among your biological replicates, you have mixed sample distributions?
In my case, 2 out of 3 biological replicates are normally distributed, while the third one is not.
What kind of statistics shall I use?
Thank you for your help
Relevant answer
Answer
Hi,
For mixed distributions in biological replicates, consider non-parametric tests, data transformation, bootstrap methods, or robust statistical methods.
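Of the options listed, the bootstrap is the easiest to sketch without extra libraries. A percentile bootstrap confidence interval for the difference in group means, with hypothetical replicate values (note the outlier making one group non-normal):

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical readings; the control set contains an outlier,
# mimicking a non-normal replicate.
control = [4.1, 4.3, 3.9, 4.0, 4.2, 9.5]
treatment = [5.0, 5.4, 5.1, 4.9, 5.3, 5.2]

def bootstrap_ci_diff(a, b, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for mean(b) - mean(a)."""
    diffs = []
    for _ in range(n_boot):
        ra = [random.choice(a) for _ in a]  # resample with replacement
        rb = [random.choice(b) for _ in b]
        diffs.append(statistics.mean(rb) - statistics.mean(ra))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci_diff(control, treatment)
print(lo, hi)
```

If the interval excludes zero, the difference is unlikely to be a resampling artefact; no normality assumption is needed.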
Hope this helps.
  • asked a question related to Replication
Question
3 answers
I am particularly interested in replicating, or creating, research connected to primary music education for children in Jamaica. I have some Orff training and lots of teaching experience. Although I'll be training student teachers, I'd like to collaborate to share best practice.
Relevant answer
Answer
Sure thing! Primary schools in the Caribbean region have indeed been a subject of research. Various studies explore educational strategies, cultural influences, and challenges faced in this vibrant setting. To uncover this treasure trove of knowledge, check out academic databases like ERIC, JSTOR, or Google Scholar. You'll be navigating through an academic jungle, but fear not! With some research prowess and a dash of Caribbean spirit, you'll unearth the information you seek! Happy hunting, and may your scholarly adventures be as delightful as a sunny day at the beach! 🌴📚
  • asked a question related to Replication
Question
1 answer
Hi all, I am interested in performing bulk RNA sequencing on primary human cells that have been cultured in absence/presence of certain types of drugs. I know n=6 is quoted as an acceptable number for cell lines and genetically identical mouse samples, but I can imagine the number of replicates needs to be higher when using primary human cells taken from a variety of donors. I am struggling to find any comparable published studies so I was wondering if anyone here had an idea/some experience with this? Many thanks!
Relevant answer
Answer
For RNA-seq it largely depends on the effect size you are hoping to resolve. I've found Table 1 in the following paper to be helpful here:
So to call a fold change of 2 statistically significant 98% of the time, do N=5 with 30M reads per sub-library.
As the table points out, it also depends on your sequencing depth which gives you more or less confidence.
For your case if you are also concerned about large variance between replicates, you could do pre-NGS qPCR validation of a couple genes you are interested in to gauge the dispersion between replicates and between treatment groups.
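That pre-NGS qPCR idea can be sketched as a simple dispersion check: compute the coefficient of variation per group from relative-expression values. The numbers below are hypothetical donor replicates:

```python
import statistics

# Hypothetical qPCR relative-expression values (e.g. 2^-ddCt) for one
# gene of interest across donor replicates in two groups.
untreated = [1.00, 1.35, 0.80, 1.10, 0.95, 1.25]
treated = [2.10, 3.40, 1.80, 2.60, 2.20, 3.00]

def cv(values):
    """Coefficient of variation: sample SD as a fraction of the mean."""
    return statistics.stdev(values) / statistics.mean(values)

print(f"untreated CV: {cv(untreated):.2f}")
print(f"treated   CV: {cv(treated):.2f}")
```

A high CV between donors relative to the between-group fold change is a warning that more biological replicates (or deeper sequencing) will be needed.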
  • asked a question related to Replication
Question
13 answers
I am a beginner with SAS, and especially with orthogonal contrasts. My experiment involves 4 rates of nitrogen (23, 46, 69 and 92 kg N) at 3 times of application, plus a control, for bread wheat. The trial was conducted in the field as an RCBD with three replications. The different responses are labelled as variables 1-39, as depicted in the SAS command I just prepared.
My treatments are:-
N-rates= 4
N application time =3
Control=1
Total treatments= 13
Thank you for your recommendation!
Relevant answer
Answer
Dear Alemayehu,
I understand you very well and it is very common! We usually add (0,0) or control as a pilot treatment for the reason I mentioned earlier. You can call me at 0918766289 if there are any points I can help you with! For the midlands (Woina Dega) and highlands (Dega) of Ethiopia, we know that factorial combination of 0 N and 0 P is already not recommended as it needs high rates of these nutrients as you reasonably set your experiment!
  • asked a question related to Replication
Question
3 answers
We are trying to compare results from a cell culture analysis. We have done 3 or 4 replicates per condition, which is common practice.
Student's t-test is often used in similar publications; however, I'm not sure it is the best option, as with 3 replicates per condition we cannot assume a normal distribution.
Which statistical test would you consider optimal under such experimental conditions?
Relevant answer
Answer
Whether normality can be assumed or not has nothing to do with the sample size. The reasonableness of that assumption follows from the understanding of the response variable and the processes leading to the data (the "data-generating process"). Variables that represent concentrations or intensities (gene or protein expression and the like) are usually assumed log-normally distributed (there is vast theoretical and empirical evidence in support). Time-to-event variables are typically assumed Gamma- or Weibull-distributed; count variables (e.g. the number of clones growing on a plate) are assumed to have a Poisson or negative-binomial distribution. Just to give a few common cases.
Of course, it is recommended to cross-check that the actually observed data do not grossly contradict a presumably reasonable assumption. Usually one uses residual diagnostic plots for this purpose, or compares information criteria of model fits that use different distributional assumptions. But this is possible only when there is a reasonable amount of data available. Your sample size might be too small to go this way, so all you have is your understanding of the variable / data-generating process.
In any case it is NOT recommended to use separate, unrelated t-tests to compare conditions, because the information about the variance is then taken only from the two groups being compared, and all the valuable information provided by the other groups is ignored (note that t-tests are used when the variance is needed for the test and has to be estimated from the observed data). It is much better to estimate the variance from all the data, using a model that includes all the groups (the special case of a model assuming a normally distributed variable with a categorical predictor is known as an ANOVA model, as Yasser Al Zaim wrote; other options are for instance Gamma or "quasi" models that also need to estimate a variance parameter), and to perform so-called post-hoc tests that use this "pooled variance estimate". Again, in the case of a normally distributed variable these would be post-hoc t-tests.
This procedure uses the additional assumption that the variance is independent of all predictors in the model (e.g. that the variance is identical in all your groups). Under the (valid) assumption of a normal distribution, this should also be the case. If it is not, then this is a strong hint that you are missing something in your model: the data from the different groups are not really comparable because they were obtained under systematically different conditions. In biological experiments, inhomogeneous variance ("heteroscedasticity") is typically a sign that the assumption of a normal distribution is not reasonable. You often observe that the variance and the mean are positively correlated, a strong hint that the log-normal distribution is a much more reasonable assumption. I don't know any practical case where the normal assumption in the presence of heteroscedasticity would make sense. But if you encountered such a strange case, you might consider using regression weights, which is still better than using individual variance estimates.
Eventually, if you have several (2+) groups, you might be interested in screening for some difference between some of the groups. In this case, choosing the "significant" p-value among the set of tests performed will accumulate the type-I errors of all the individual tests (including those that failed to reach significance), and this must be considered in the interpretation of the selected p-values. Typically, this is done by a correction for multiple testing: either a more stringent level of significance is chosen for comparing the p-values against, or the p-values are multiplied by a correction factor. There are many ways a correction for multiple testing can be done. The simplest is the Bonferroni correction, which is typically sufficient when the number of tests is relatively small (3-5). With more tests, the Bonferroni-Holm procedure has more power. And there are further, computationally more complicated procedures, such as Tukey's HSD.
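To make the "pooled variance estimate" point concrete, here is a pure-Python sketch with made-up triplicate values. Only the t statistics and the Bonferroni-adjusted significance level are computed; p-values would additionally need a t distribution (e.g. from scipy.stats):

```python
import math
import statistics

# Hypothetical measurements, 3 replicates per condition.
groups = {
    "ctrl": [10.1, 9.8, 10.3],
    "A": [12.0, 11.6, 12.4],
    "B": [10.4, 10.0, 10.6],
}

# Pooled within-group variance (the ANOVA mean-square error),
# estimated from ALL groups, not just the pair being compared.
k = len(groups)
N = sum(len(g) for g in groups.values())
ss_within = sum((v - statistics.mean(g)) ** 2
                for g in groups.values() for v in g)
mse = ss_within / (N - k)  # residual df = N - k = 6 instead of 4

def posthoc_t(a, b):
    """t statistic for mean(a) - mean(b) using the pooled MSE."""
    se = math.sqrt(mse * (1 / len(a) + 1 / len(b)))
    return (statistics.mean(a) - statistics.mean(b)) / se

pairs = [("A", "ctrl"), ("B", "ctrl"), ("A", "B")]
alpha_bonf = 0.05 / len(pairs)  # Bonferroni-adjusted level
for x, y in pairs:
    print(x, "vs", y, posthoc_t(groups[x], groups[y]))
print("compare each |t| (df =", N - k, ") against alpha =", alpha_bonf)
```

The gain over separate t-tests is the larger residual degrees of freedom for the variance estimate, which matters a lot with only 3 replicates per condition.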
  • asked a question related to Replication
Question
2 answers
How far can artificial intelligence simulate and replicate human capabilities? Can it extend to human abilities such as discovery and inspiration?
Is the scientific approach capable of answering this question at present, or should we employ a rational reasoning approach? What would that rational reasoning approach be?
Relevant answer
Answer
Dear Hossein Mohammadi, nice and deep question. I have tried to formulate my position in https://www.academia.edu/44503746/Becoming_artificial_intelligent_from_a_controllers_point_of_view_Contents. I am looking forward to your view.
Kind regards
Rob
  • asked a question related to Replication
Question
7 answers
Hi All
Due to the high cost of RNA-seq per sample, do you think it would be correct to pool three to four biological replicates and send this pooled sample for RNA-seq?
Relevant answer
Answer
Dear all,
thanks for the insightful commentaries of my colleagues. However, I beg to differ in some details, as this problem goes a bit deeper. (Just my $0.02.)
1. Throwing three samples into one is, of course, a bad idea because you waste resources. Why not make three differently barcoded libraries and then send them for sequencing on one lane? You do not lose information, and you can always ask for more reads if your sequencing depth is not sufficient. Thus you save on sequencing costs but keep all the options.
2. Do not make technical replicates. If you master the technique they will be more or less identical. If you have technical problems, no replicate will help you anyway.
3. If you run biological replicates, I wouldn't use the classical R programs. Most assume that there is a "true" value that you can't measure because of random variation in your method/sample. However, that is not exactly what happens in nature. Imagine you derive three transgenic cell lines with an inducible transcription factor to find target genes. Now you compare TFon/TFoff three times and get the following values:
         TFon   TFoff
geneA
sampleA: 1000     100
sampleB:  100      10
sampleC:   10       1
It is clear that geneA is very interesting. However, if you define samples A, B, and C as a triplicate, most analysis programs will throw this gene out because the baseline expression has a higher variation than the overall difference between on and off.
Alas, geneA may be a perfect and important target gene, since you do not control the overall concentration of the transcription factor in the transgenic cells. So it may be perfectly OK that you see this large variation in base expression; fold change is what counts here. In biology a value very frequently depends on more than one factor, and not all can be controlled. Classical statistics fails in these cases.
Therefore I'd recommend running barcoded libraries, evaluating each one individually, and looking for the intersection of genes that come up as interesting in all three instances. Then follow up on these.
In the end no statistics can replace the good old biological confirmatory experiment anyway (although "wet" biology seems to be out of fashion nowadays).
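The point that fold change, not absolute level, is what counts here can be made concrete with the numbers from the table above:

```python
import math

# The hypothetical geneA counts from the three transgenic lines above.
tf_on = {"sampleA": 1000, "sampleB": 100, "sampleC": 10}
tf_off = {"sampleA": 100, "sampleB": 10, "sampleC": 1}

# Per-sample fold change (and log2 fold change) for geneA:
# baseline varies 100-fold between lines, yet induction is
# a consistent 10-fold in every one of them.
for s in tf_on:
    fc = tf_on[s] / tf_off[s]
    print(s, fc, math.log2(fc))
```

Working on the log2 scale (as limma/DESeq2-style tools do internally) is what makes such ratio consistency visible despite the wildly different baselines.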
Good luck with your experiment.
Best
Robert
  • asked a question related to Replication
Question
3 answers
The FACE experiment is relatively expensive, making it difficult to include multiple replicates for each treatment. This presents a challenge: are the final carbon cycling results due to the treatments themselves or to inherent variation among the sample points?
Does this require comparing with baseline data? How to do it?
Relevant answer
Answer
You are welcome
  • asked a question related to Replication
Question
1 answer
I believe physics master's programs need innovation to overcome the slow progress in solving theoretical problems in physics, in the form of PhD-replicating standards.
Imagine a student who has to write an essay about the paradoxes of special relativity, with a PhD-like literature review, and outline ideas to explain them with his own insights.
Or a similar topic about unitary quantum mechanics / spacetime quantization approaches such as CST.
Or a topic about how space has time properties, and how its static vs dynamic fluidity gives rise to spacetime phenomena.
Although some modules would have a more traditional mathematical emphasis, 60% would be conceptual, at a pre-PhD level.
Relevant answer
Answer
I believe physics curriculum designers are either too conservative or agnostic about the pedagogic structure.
Since critical thinking is the main feature of master's programs, physicists who insist on problem-solving or advanced skills-based curricula for master's degrees are in fact not being loyal to pedagogic theories or to learning at the graduate level.
This might have consequences: beyond the highest-IQ students, others are left at a disadvantage, even if they acquire those skills. Master's level education was always about critical thinking, and this is gained better via essay-based projects at the cutting edge of the field. Knowing physics does not imply critical thinking at a higher level, as is narcissistically assumed.
  • asked a question related to Replication
Question
1 answer
In my study there are 2 intervention groups and 1 control group. In the ANCOVA I take trait scores as covariates and state scores as the dependent variable (so I compare the differences between the groups after the intervention).
Is there a statistical way to check whether there was a change in the control group? I am replicating a study; its authors assumed there was no effect in the control group, but what if I am not sure (because the control design could have an effect)? Within the ANCOVA, I only know that the results differ, but not whether the changes differ, right?
Relevant answer
Answer
Inga Siebert In the context of ANCOVA (Analysis of Covariance), controlling for change within the control group can be achieved by including the pre-intervention scores as a covariate. This approach allows you to account for any pre-existing differences between the groups and assess the intervention effect more accurately.
Typically, ANCOVA involves including pre-intervention scores as a covariate alongside the group variable (intervention vs. control) to adjust for baseline differences. By including the pre-intervention scores as a covariate, you are essentially controlling for the initial level of the outcome variable before the intervention took place.
To address your concern about checking for change within the control group specifically, you can compare the pre- and post-intervention scores separately for the control group. This analysis will allow you to evaluate whether any significant changes occurred within the control group over time. You can use paired t-tests or a similar statistical test to compare the pre- and post-intervention scores within the control group.
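That within-control paired comparison can be sketched in a few lines; the pre/post scores below are hypothetical, and only the t statistic is computed (scipy.stats.ttest_rel would also return the p-value):

```python
import math
import statistics

# Hypothetical pre/post state scores for the control group (n = 8).
pre = [52, 48, 50, 55, 47, 51, 49, 53]
post = [53, 47, 52, 56, 48, 50, 51, 54]

def paired_t(pre, post):
    """Paired t statistic for post - pre (df = n - 1)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

t = paired_t(pre, post)
print(t)  # compare against the critical t value with df = 7
```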
If you find significant changes within the control group, it suggests that factors other than the intervention may have influenced the outcome. This finding would warrant further investigation and might require additional adjustments in your analysis or interpretation of the results.
Remember that in ANCOVA, the primary focus is on comparing the intervention groups while controlling for pre-intervention differences. However, examining and reporting any observed changes within the control group can provide valuable insights into the overall dynamics of the study.
It is important to note that the appropriateness of the ANCOVA assumptions should also be assessed, such as the assumptions of linearity, homogeneity of regression slopes, and normality of residuals. These assumptions ensure the validity and reliability of the ANCOVA results.
In summary, to control for change within the control group in ANCOVA, include the pre-intervention scores as a covariate and assess any significant changes within the control group separately. This approach helps in evaluating the specific effects of the intervention while considering potential changes within the control group.
I hope this explanation clarifies your question. Should you have any further inquiries, please feel free to ask.
  • asked a question related to Replication
Question
1 answer
I was challenged to identify which microscopy technique was used in five different images. The only information provided was the image itself; everything else, including the scale bars, was omitted.
One of the images was assigned as a TEM image of a freeze-fracture replica of a cell, but I have doubts about it, as it resembles a SEM secondary-electron image of the surface of a porous inorganic material.
I'm not sure whether it is possible to differentiate these two kinds of images from the images alone, with no other information. I'm not a microscopist or a specialist in cell biology, but I am a user of electron microscopy in the materials science research field.
Can someone help me with this question?
Relevant answer
Answer
Shadowing is a normal means of distinguishing a secondary-electron SEM image from a TEM image. A secondary-electron SEM image shows shadowing because the secondary-electron collector usually sits to one side of the probe's path.
However, potential confusion can arise with a carbon film: its thickness varies with the variable surface angles often found on the fracture of interest, which can and often do collect varying amounts of carbon.
But carbon films should show similar grain or particle structure and size, while SEM images of differing surfaces should differ in particle size or structure.
Also, the shadowing should be different.
Such can be a challenge.
  • asked a question related to Replication
Question
6 answers
Hello, I am currently observing the increase in water activity and moisture content of 4 different formulas over the course of 6 months, with each formula having 3 replicates, i.e. a total of 12 samples per month or 72 samples over 6 months. Ideally I would like to use two-way ANOVA, as this looks like a factorial design with 2 factors, one with 4 levels and the other with 6 levels. All the replicates are normally distributed. However, on applying Levene's test for equality of variances, I found that some replicates are not homoscedastic.
The questions are:
- If Levene's mean test shows that some data do not have equal variances, but Levene's median test (Brown-Forsythe) shows that all data have equal variances, can I use BF's median test and move on using two-way ANOVA? If so, are there any changes regarding what post-hoc test I use?
- If it turns out I cannot use BF median test as replacement for Levene's mean test, thus the assumption of equal variances is violated, can I still use ANOVA? If yes, should I use the mean of the replicates for the ANOVA test and what post-hoc test is appropriate? If no, what other tests can I use and what post-hoc test is appropriate?
Relevant answer
Answer
If Levene's mean test shows unequal variances while Levene's median test (Brown-Forsythe) shows equal variances, it is generally recommended to use the median test as a more robust alternative. In this case, you can proceed with conducting a two-way ANOVA using the median test results. Regarding the post-hoc test, you can still use traditional post-hoc tests like Tukey's honestly significant difference (HSD) or Bonferroni correction, as they do not rely on the assumption of equal variances.
However, if the assumption of equal variances is violated and you cannot use the median test as a replacement, you can still consider using ANOVA. In this situation, you may apply transformations (e.g., log transformation) to the data to address heteroscedasticity. Alternatively, you can employ non-parametric tests such as Kruskal-Wallis test, which does not require the assumption of equal variances. For post-hoc analysis with non-parametric tests, you can use pairwise comparison tests such as Dunn's test or the Mann-Whitney U test with appropriate adjustments for multiple comparisons.
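For intuition, the Brown-Forsythe variant is just a one-way ANOVA on absolute deviations from each group's median; a pure-Python sketch with toy replicate values (in practice scipy.stats.levene(center='median') does this and also returns the p-value):

```python
import statistics

# Hypothetical replicate sets (e.g. water activity for three samples).
groups = [
    [2.1, 2.4, 2.2],
    [3.0, 3.8, 3.1],
    [2.6, 2.5, 2.9],
]

def brown_forsythe_F(groups):
    """Levene's test with group medians: ANOVA F on |x - median|."""
    z = [[abs(x - statistics.median(g)) for x in g] for g in groups]
    allz = [v for g in z for v in g]
    grand = statistics.mean(allz)
    k, N = len(z), len(allz)
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in z)
    ssw = sum((v - statistics.mean(g)) ** 2 for g in z for v in g)
    return (ssb / (k - 1)) / (ssw / (N - k))

print(brown_forsythe_F(groups))  # small F -> no evidence of unequal spread
```

Using the median instead of the mean is what makes the test robust to non-normality, which is why it is often preferred over the classic mean-centred Levene test.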
  • asked a question related to Replication
Question
3 answers
I want to do a meta-analysis on how plant diversity affects soil carbon storage. For some studies, plant diversity was considered a categorical variable. The authors set plant diversity gradients and many replicates. These results have mean, replicates, and standard error. For other studies, plant diversity was considered a continuous variable. The authors investigated dozens of plots, which had different species richness and soil carbon storage, and no replicates and mean. The results were represented using a scatter diagram plus a regression line. I wonder how I should solve this problem. Should my research only include those studies with replicates, mean and standard error, discarding the latter?
Relevant answer
Answer
When conducting a meta-analysis with studies that have both categorical and continuous independent variables, such as in your case with plant diversity, you can approach the analysis in the following way:
  1. Separate the studies: Divide the studies into two groups based on whether plant diversity was treated as a categorical or continuous variable. This will help maintain clarity and enable you to handle each type appropriately.
  2. Data extraction: Extract relevant data from each study, including the effect size (e.g., correlation coefficient, mean difference), sample size, standard error or variance, and any other necessary information for meta-analysis.
For studies with categorical variables:
  • Collect data on means, standard errors, and sample sizes for each level of plant diversity. This allows you to compute effect sizes (e.g., mean difference, standardized mean difference) for each comparison of interest.
For studies with continuous variables:
  • Obtain the necessary statistics for each study, such as correlation coefficients or regression slopes, which represent the relationship between plant diversity and soil carbon storage.
  3. Effect size calculation: Calculate effect sizes for each study based on the available data. For studies with categorical variables, you can compute effect sizes using standardized mean differences or other appropriate measures. For studies with continuous variables, you can use correlation coefficients or regression slopes as effect sizes.
  4. Meta-analysis: Conduct separate meta-analyses for each group of studies (categorical and continuous) using appropriate statistical methods. For categorical studies, you can use techniques such as meta-analysis of mean differences or meta-regression to analyze the effect sizes. For continuous studies, meta-analysis of correlation coefficients or regression slopes can be employed.
  5. Subgroup analysis: If there is sufficient heterogeneity within each group, you can perform subgroup analyses based on other factors, such as study design, geographic location, or other relevant variables, to explore potential sources of variation and assess their impact on the overall effect.
  6. Interpretation: Finally, interpret the results of your meta-analysis, taking into account the combined effect sizes from both groups of studies and any subgroup analyses conducted. Consider the limitations of the included studies, potential biases, and other relevant factors that may affect the overall findings.
It's important to note that the inclusion or exclusion of studies should be based on the research question, relevance, and quality of the studies rather than solely on the availability of certain statistical measures. The decision to include studies with different data formats should be guided by the research objectives and the comparability of the studies within each group.
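For the continuous-variable studies, the classic way to pool correlation coefficients is Fisher's z transform with inverse-variance weights. A minimal fixed-effect sketch with made-up r and n values:

```python
# Pool correlation coefficients via Fisher's z transform, fixed-effect model,
# inverse-variance weights. All r and n values are hypothetical.
import math

studies = [  # (correlation r, sample size n) per study
    (0.45, 30),
    (0.30, 50),
    (0.55, 25),
]

# Fisher's z: z = atanh(r), with approximate variance 1/(n - 3).
zs      = [math.atanh(r) for r, n in studies]
weights = [n - 3 for r, n in studies]          # inverse of var = 1/(n - 3)

z_pooled  = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
se_pooled = 1.0 / math.sqrt(sum(weights))
r_pooled  = math.tanh(z_pooled)                # back-transform to r

print(f"pooled r = {r_pooled:.3f} "
      f"(95% CI {math.tanh(z_pooled - 1.96*se_pooled):.3f} "
      f"to {math.tanh(z_pooled + 1.96*se_pooled):.3f})")
```

In practice a random-effects model (e.g. DerSimonian-Laird) is usually preferred for ecological data, but the z-transform step is the same.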
  • asked a question related to Replication
Question
6 answers
Can someone suggest some research from project management journals that includes variables, a conceptual framework, and a research methodology, so that I can replicate it in another demographic area?
Relevant answer
Answer
First, study the latest published work in impact-factor journals in your area of interest. Find study gaps and then, after topic approval from your thesis supervisor, start writing the proposal following the thesis guidelines of your university's research office.
  • asked a question related to Replication
Question
1 answer
Hello everybody! I did a small REST2 simulation (only 6 replicas and 400 ps) and would like to know if these results are "acceptable".
Repl average probabilities:
Repl 0 1 2 3 4 5
Repl .14 .18 .14 .15 .25
Repl number of exchanges:
Repl 0 1 2 3 4 5
Repl 30 36 28 28 47
Repl average number of exchanges:
Repl 0 1 2 3 4 5
Repl .15 .18 .14 .14 .23
Thanks!
Relevant answer
Answer
REST2 stands for Replica Exchange with Solute Tempering and it is a sampling enhancement algorithm of molecular dynamics (MD) that uses fewer replicas but achieves higher sampling efficiency than standard temperature exchange algorithm. REST2 works by rescaling the force field parameters of the solute region according to a temperature parameter that varies across replicas. REST2 can be used for various applications such as conformation sampling, free energy calculation, and free energy landscape exploration.
To interpret your results, look at the average exchange probabilities and the number of exchanges between neighbouring replicas (the five values sit between the six replica columns, i.e. they describe the 0-1, 1-2, 2-3, 3-4 and 4-5 pairs). Ideally you want the acceptance probabilities to be roughly uniform across pairs, so that configurations can flow freely up and down the ladder, and the number of exchanges to be high enough to ensure good mixing and ergodicity of the replicas.
Based on your results, the acceptance probabilities are not very uniform: the 1-2 pair (0.18) and especially the 4-5 pair (0.25) are higher than the others (0.14-0.15). This may indicate that your scaling levels are not well distributed, or that your simulation is not long enough to have equilibrated. The average acceptance rates, 0.14-0.23 per attempt, are also on the low side of the commonly targeted 0.2-0.3 range; a longer run and/or adjusted spacing of the effective temperatures should improve mixing.
  • asked a question related to Replication
Question
3 answers
Hello everyone,
I am trying to determine relative expression values for specific genes in different life-stages of my organisms (adult, larval, and microfilaria). For each of the three life-stages, I have three biological replicates and performed all PCRs in triplicate. I have two reference genes to compare with my genes of interest, but I am unsure how to calculate fold expression changes if there are no treatment groups/control groups with the ΔΔCt method since I am only assessing life-stage differences in expression. Also, with two reference genes, I am unsure at which point in the analysis I need to account for this.
Any advice would be greatly appreciated!
Relevant answer
Answer
For multiple reference genes, you calculate either the arithmetic mean (for Cq values) or the geometric mean (for linearised relative quantities). This is because Cq values are approximately normally distributed, whereas linearised RQ values are lognormal.
Also, well done for using two references: this is very good practice.
For your actual data, you don't actually need to calculate absolute values (and indeed absolute values won't necessarily tell you much more than relative values + "rough eyeballing of the raw Cq" will): what you're interested in is the difference in expression between your different life stages, and this is an entirely relative comparison. You could, in essence, pick a single life stage (any life stage) as your "control" and then relate all your other life stages to that.
So, here's a basic workflow: you have one GOI and two REF for larval, microfilaria and adult, with three biological replicates in each, PCR'd in triplicate.
  • Calculate the mean Cq values per gene, per sample (i.e. the average of your triplicate reactions). Look for any outliers and discard accordingly (this is why you do this in triplicate: it's not uncommon to get "22.3, 25.4, 25.3" or similar, and the approach is generally just that "assume the 22.3 is a well where something went weird, so discard")
  • Calculate the per-sample mean REF Cq, which can be arithmetic, since you're in log space (where PCR data is normally distributed): so for adult #1, that's 0.5*( adult#1 REF1 Cq + adult#1 REF2 Cq). This is your "normalisation factor" (NF) for that sample. At this point it's worth looking at your NF values for your dataset as a whole and confirming that they're pretty consistent: anything with very low or very high NFs might be an outlier, and any consistent deviation (i.e. "larval NFs are always 3 cycles higher") is a sign your references are not good references.
  • Calculate the per-sample normalized GOI expression (adult#1 GOI Cq - adult#1 NF). These are your dCt values, and...honestly, this is where you can stop. All your comparisons will be between dCt values, so you don't actually need to do any more data manipulation. You can, if you wish, invert everything (flip the sign, so 3.4 becomes -3.4) because for dCt low values indicate high expression, while "high values = high expression" is more intuitive. Just put "-dCt" on the Y axis.
  • Note that it is very good practice to present qPCR data like this (i.e. in log space) because here changes both up and down (10 fold up, 10 fold down, etc) will be given equal apparent weighting in your plots.
  • Additional changes: if you want to standardise your data such that one of your groups is distributed around "0", you can do that. It won't actually change any of your data, because it will be an en bloc transformation. For this (since you're still in log space), you determine the arithmetic mean of whichever group you want as your 'reference' and then just subtract that mean from all data (usually you would invert your data first, to give -dCt as above). Then just plot that. If you picked "adult" as your 'reference', then all your adult values will now cluster around 0, while larval and microfilaria values will be either more or less than that.
If you want to share some sample data, I'm happy to work through an excel version of...well, all the above, if you like.
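In the meantime, the workflow above can be sketched in a few lines of Python (all Cq values below are made up):

```python
# One GOI and two reference genes per sample: NF = arithmetic mean of the
# reference Cqs, dCt = GOI Cq - NF, then an optional en bloc shift so the
# chosen 'reference' group (adult here) clusters around 0. Hypothetical data.
samples = {  # sample -> (GOI mean Cq, REF1 mean Cq, REF2 mean Cq)
    "adult_1":  (24.1, 18.0, 19.0),
    "adult_2":  (24.3, 18.2, 19.1),
    "larval_1": (22.0, 18.1, 18.9),
    "larval_2": (22.2, 17.9, 19.2),
}

dct = {}
for name, (goi, ref1, ref2) in samples.items():
    nf = 0.5 * (ref1 + ref2)   # normalisation factor (mean of reference Cqs)
    dct[name] = goi - nf       # lower dCt = higher expression

# Flip the sign so that "higher = more expressed" for plotting.
minus_dct = {k: -v for k, v in dct.items()}

# En bloc shift: subtract the mean of the 'reference' group (adult).
adult_mean = sum(minus_dct[k] for k in ("adult_1", "adult_2")) / 2
ddct = {k: v - adult_mean for k, v in minus_dct.items()}
print(ddct)   # adult samples now cluster around 0
```

With these toy numbers the larval samples come out roughly 2 cycles above adult, i.e. about four-fold higher expression.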
  • asked a question related to Replication
Question
1 answer
Dear All,
I have three columns in Stata: db, dgdpmillionpkr, and districts. dgdpmillionpkr has 110 observations. db has two id values: db = 1 for the first 110 observations of dgdpmillionpkr and db = 2 for the replicated 110 observations appended below the first 110. I want to show the district names for the first 110 observations and the dgdpmillionpkr values of the respective districts for the remaining 110. How can I code/do this in Stata? Essentially, I want to show labels (district names) and their values in order to visualize my dgdpmillionpkr data on a map. I am stuck here; any help will be greatly appreciated. Thanks
Relevant answer
Answer
A single Stata variable cannot be string for some observations and numeric for others, and tostring does not accept subscripts, so a forvalues loop over individual observations will not work. The cleanest solution is to build a separate string label variable:
/* If districts is numeric with value labels attached, decode it first: */
/* decode districts, generate(districts_s) */
/* Build the map label: district names for db==1, values for db==2 */
generate str40 maplabel = ""
replace maplabel = districts if db == 1
replace maplabel = string(dgdpmillionpkr, "%12.0fc") if db == 2
/* Describe the value variable itself */
label variable dgdpmillionpkr "GDP (in millions of PKR)"
This creates maplabel, which holds the district name for the first 110 observations and the formatted dgdpmillionpkr value for the replicated 110, and you can pass it to your mapping command to display both names and values. Note that "replace maplabel = districts" assumes districts is already a string variable; use the decode line first if it is not.
I hope that will help; please recommend my reply if you find it useful. Thanks!
  • asked a question related to Replication
Question
2 answers
I'm performing RNA-seq data analysis. I want to do healthy vs disease_stage_1, Healthy vs disease_stage_2, and Healthy vs disease_stage_3. In the case of healthy, disease_stage_1, disease_stage_2, and disease_stage_3 data sets, I have 19, 7, 8, and 15 biological replicates respectively.
Does this uneven number of replicates affect the data analysis?
Should I use an equal number of replicates for every dataset, e.g. 7 biological replicates each (as the lowest number of replicates here is 7)?
Relevant answer
Answer
Alwala Nehansh Thank you for the clarification.
  • asked a question related to Replication
Question
1 answer
I have been working with A. tumefaciens for several months. I have generated different vectors (with kanamycin resistance as the bacterial selection marker) that I would like to test in A. rhizogenes (strain K599). I transformed my cells by electroporation following this strain's suggested settings, but I am not getting any resistant colonies. I think that the origin of replication in my vectors, which works in tumefaciens, might not be compatible with rhizogenes. I am using the pVS1 origin of replication. I have considered testing RK2 or pRIA4 instead. I wonder if someone in the community has worked with K599 and has any suggestions.
Relevant answer
Answer
In the end, the people who gave me the R. rhizogenes (strain K599) did not know it was contaminated with an unknown bacteria that had taken over the original culture, so that is why the pVS1 ori was not working. Just for those who are looking for answers about the compatible oris, R. rhizogenes, and A. tumefaciens strains are compatible with the following origins of replication: RK2, pVS1, and pRIA4.
  • asked a question related to Replication
Question
4 answers
I am currently working on my end-of-year research project, which tests whether DNA can be found in a secondary transfer when multiple transfers have occurred. One of my replicates has a Ct value of 0. Do I include this, or take the average of the remaining replicates? When I calculated the fold change of my replicates it came out at 20.29.
Relevant answer
Answer
You might need to provide more experimental design information.
Are you dealing with very low expected [template], such that Cqs are often 30+ (in which case "no amplification" genuinely implies "no target"), or are you dealing with fairly robust [template], i.e. Cqs of 20-30 (in which case "no amplification" means "freak weird event")?
If the former, the extent of quantitative info you can glean is going to be more limited anyway (stochastic template numbers will be inherently variable), and you should flag the well as "no amplification".
If the latter, you just ignore it.
  • asked a question related to Replication
Question
1 answer
I am trying to conduct a replication study (hierarchical multiple regression), but evidently I can't find anything to replicate (I have found a number of overseas studies, or studies involving other demographics).
It appears a gap exists in the literature.
If anyone can find a study for my research I would be much obliged.
Relevant answer
Personality and Individual Differences
Volume 127, 1 June 2018, Pages 54-60
Resilience and Big Five personality traits: A meta-analysis
Atsushi Oshio a, Kanako Taku b, Mari Hirano c, Gul Saeed d
  • asked a question related to Replication
Question
2 answers
I am doing research with 5 replicates. My data are as follows: measurements of rice shoots, where the variables (41mm, 41mmm, etc.) are the distance of the well from the source in a 6-well dish pack. I wanted to use SPSS for my post hoc Tukey test to speed up the job rather than doing it manually, but I do not know how to do it properly. If there are any videos or tutorials I can follow, it would be much appreciated.
Relevant answer
Answer
You can compare means here.
Go to "Analyze" in the top menu bar, select "Compare Means", and choose "One-Way ANOVA". Set your measurement as the dependent variable and the distance as the factor, then click the "Post Hoc..." button and tick "Tukey".
  • asked a question related to Replication
Question
4 answers
Hi everyone, I have some trouble finding the correct method for statistical analysis. I was thinking about a two-tailed paired T test, but that only considers the mean value of my replicates and not the distribution of the individual replicates as well.
My data set consists of 4 groups that are divided based on percentages (together 100%).
These groups are dependent on one variable (control, A, B, C, D, E and F) and I want to know whether condition A, B, C etc. is significantly different from the control.
I have 3 replicates of the experiment (with some measurement variance).
Relevant answer
Answer
I added a figure to the script.
  • asked a question related to Replication
Question
2 answers
I have 300 wheat lines phenotyped over two years with two treatments and four replications (CRD model). Could anyone please tell me which R package is best suited for BLUP calculation?
Relevant answer
Answer
ASReml would be the best package to get BLUPs. You may use the following code:
BLUP <- summary(model, coef = TRUE)$coef.random
  • asked a question related to Replication
Question
1 answer
I've grown kenaf plants at four different Cd concentrations: 0, 100, 250, and 400 uM. Using SPAD, I measured the content of chlorophyll in the third leaf from the top. Even though there is a noticeable decrease in plant growth under Cd stress compared to the control, I have noticed an increase in chlorophyll content under all Cd treatments compared to the control. Please assist me in interpreting these results. I grew plants in a hydroponic culture tray with three replicates, each tray containing 12 plants. I chose three plants from each tray to test the chlorophyll content.
Relevant answer
Answer
It may be that priming with metal treatments increases chlorophyll biosynthesis in the leaves, much as seed priming with a biotic or abiotic stress can improve the seed germination percentage.
  • asked a question related to Replication
Question
4 answers
Hi, I am aiming to replicate an aerosol-assisted CVD method of fabricating BiFeO3. The method calls for 18 vol% nitric acid to dissolve the Bi precursor in triple-distilled water. I was wondering whether it would be possible to use citric acid at a higher concentration, as I already have it in stock and it is less hazardous. I apologise if this is a stupid question; I'm not a chemist.
Relevant answer
Answer
Solubilty of bismuth nitrate pentahydrate, Bi(NO3)3·5H2O, at 'ordinary temperature' is given as 80.37g (as Bi(NO3)3) in 100 cm3 of HNO3 aq. 2.3 N, and 86.86g (as Bi(NO3)3) in 100 cm3 of HNO3 aq. 0.922 N; after R. Dubrissay, Compt. Rend. 153 (1911), 1077, apud: A. Seidell, "Solubilities of Inorganic and Organic Compounds", 2nd ed., Van Nostrand Comp., New York, 1919, p. 151.
About predicting the pH of citric acid solutions for approx. pH < 4.0 ― cf. my post at: https://www.researchgate.net/post/How-to-prepare-a-citric-acid-solution-of-pH-25-and-3-from-100-citric-acid-powder
  • asked a question related to Replication
Question
4 answers
Hi all,
I just tried using a pre-coated ELISA plate for the first time, and I have a question about the results. I started with a dilution curve for two samples to find the best dilution for my target protein. The absorbances of all the technical replicates are very consistent, and after using the standard curve to calculate the concentrations they are still consistent. However, after multiplying the calculated concentrations by their dilution factors, I got the highest concentrations at the highest dilution, with concentrations decreasing as the dilution decreased. I added a photo to show what I mean. What could be happening here? It seems like a technical error, but I don't know what the source would be.
Thanks.
Relevant answer
Answer
It may be that you are in a non-linear range, or at least partly, which might explain the anomalies.
If you have sufficient sample, I would try doing something like a 2-fold dilution series and read the raw numbers from the machine (along with the calculated numbers); maybe by comparing them you can see where things are going wrong.
  • asked a question related to Replication
Question
1 answer
I have been reading many papers but struggling to find a clear explanation for how to interpret the data apart from lower FP - faster tumbling, higher FP slower tumbling. I've not been taught on this topic, but have been given data to analyse as part of my honours project.
The data is on fluorescence polarisation with heparin. I also have random spikes in the data: FP reads low at a certain concentration of heparin, but one replicate reads a very high FP compared with the rest of the replicates at that concentration. Is there an explanation for this?
Any help would be greatly appreciated. Thank you.
Relevant answer
Answer
Fluorescence polarization (FP) is a technique used to measure the rotation of a fluorescent molecule in a solution. The principle of FP is that a molecule with a larger molecular weight will rotate more slowly and have a higher FP value, while a molecule with a smaller molecular weight will rotate more quickly and have a lower FP value.
When interpreting FP data, there are a few key things to consider:
  1. Tumbling rate: As you mentioned, a lower FP value corresponds to a faster tumbling rate, while a higher FP value corresponds to a slower tumbling rate. This can be used to infer information about the size and shape of a molecule or the interaction between molecules.
  2. Anisotropy: In addition to tumbling rate, FP can also be used to measure the degree of anisotropy in a solution. Anisotropy refers to the degree of alignment of molecules in a solution, and a higher anisotropy value indicates that the molecules are more aligned. Anisotropy can be influenced by factors such as temperature, viscosity, and the presence of other molecules.
  3. Standard curves: To interpret FP data quantitatively, it's important to generate standard curves using known standards or controls. This allows you to relate the FP value to a specific concentration or amount of a molecule in the solution.
  4. Instrumentation: Finally, it's important to consider the instrumentation used to collect the FP data. Factors such as excitation wavelength, emission wavelength, and instrument sensitivity can all influence the accuracy and precision of FP measurements.
Overall, interpreting FP data requires a combination of understanding the underlying principles of the technique, careful consideration of the experimental conditions, and appropriate data analysis and quantitation. It's always a good idea to consult with a mentor or experienced researcher in the field to ensure that you're interpreting the data correctly.
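For reference, the two quantities usually reported in such experiments, polarization P and anisotropy r, are computed from the parallel and perpendicular emission intensities; a tiny numeric sketch with hypothetical intensities:

```python
# Polarization P = (I_par - I_perp) / (I_par + I_perp)
# Anisotropy  r = (I_par - I_perp) / (I_par + 2*I_perp)
# Intensity values below are arbitrary, made-up numbers.
i_par  = 120.0   # emission intensity, polarizer parallel to excitation
i_perp = 80.0    # emission intensity, polarizer perpendicular

p = (i_par - i_perp) / (i_par + i_perp)          # polarization
r = (i_par - i_perp) / (i_par + 2.0 * i_perp)    # anisotropy

print(f"P = {p:.3f}, r = {r:.3f}")
```

Many plate readers report FP in millipolarization units (mP), which is simply P multiplied by 1000, so a "spike" in one replicate usually traces back to an outlier in one of the two raw intensity channels.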
  • asked a question related to Replication
Question
4 answers
Hello,
I have performed some recombineering protocols and realised that the chances of my plasmid being in a multimeric state are quite high.
I previously designed 7 primer pairs that will produce alternating amplicons of 500 and 700 bp around my recombineered plasmid (which is 35kb) just so that I could get an idea that no weird recombination events occurred when looking at it in a gel.
Anyways, I did the 7 PCR reactions on a control with the original plasmid, and they produced the expected pattern, but when performing it on my miniprep-purified plasmid I was obtaining a lot of bands of all sorts of sizes (larger and shorter than expected amplicon). Funny thing is that these multiple bands seemed to follow the same pattern in all my replicates (different pattern for each primer of course, but same throughout the different colonies tested) which makes me rule out the possibility of salt contaminants affecting primer binding etc. I thought it might be bacterial genomic contamination that was being amplified, so I performed a CsCl-ethidium bromide density gradient to purify it and sent it off for sequencing.
But now Im wondering, would a multimeric plasmid yield multiple bands if amplified with a single pair of primers?
By the way, I can't run it on a gel to assess if it's multimeric because of its large size 35kb, although I am going to ask if anyone at my lab has a pulse field gel electrophoresis just in case.
Thanks!
Relevant answer
Hi all,
Thank you for your answers,
I did a restriction digest with enzymes that cut multiple times and indeed, this plasmid has recombined in all sorts of ways except the one I was planning on...
I don't know if any of you have practiced recombineering before, but if you have I would really appreciate your advice regarding how to reduce unwanted recombination events in this type of cloning.
I am using an L-arabinose-inducible plasmid for the λRed system. Are NEB 10-beta cells good for these protocols, or would Stbl3 be a better option? Also, would co-electroporating my plasmid at very low concentrations together with the linear dsDNA into E. coli (which contains the induced λRed system plasmid) help avoid these undesired recombinations?
Any other thoughts or help on how to avoid this?
Thanks!
  • asked a question related to Replication
Question
2 answers
Hello to everyone,
I've had several discussions with my colleagues about setting up field experiments to be replicated in different environments.
We agree that each experiment must have exactly the same experimental design to ensure data comparability.
I've been told that these experiments must also have exactly the same randomization. I don't agree, because I believe it is the experimental design itself that ensures data comparability. Below I attach a drawing to better explain the issue:
In the attached file, I have the same experimental design between locations, with the same randomization within subplots. Shouldn't we randomize the treatments (i, ii, iii and iv) within each subplot? Does it make sense to have an exact copy of experimental fields?
Thanks in advance!
Relevant answer
Answer
It is the randomization that matters; whether the realized layouts are the same or different across locations, I see no problem. Regards.
  • asked a question related to Replication
Question
5 answers
How do you replicate the Finland education model for a country like India and make it better for personalisation.
Relevant answer
Answer
Mikhail is right! And note that he says nothing about replicating the model of a rich country with greater technical and financial resources. He offers an idea for India and deliberately avoids making comparisons with the incomparable. Good luck, best regards, André. (I love India, I would go there first, and I love Finland, the safest country in the world.)
  • asked a question related to Replication
Question
2 answers
Dear All
I have scored plant height and spike length in three replications over two years.
The ANOVA model was:
genotypes + replication + years + R*G + R*Y + R*G*Y
I found highly significant differences between the two years and no significant G*Y interaction.
The correlation between 2021 and 2022 for plant height is 0.99. I am really astonished that I have highly significant differences between the two years in plant height and yet such a high correlation between the two years. The same trend was also found for SL (spike length).
Relevant answer
Answer
Correlation and the year effect in ANOVA answer different questions. The correlation between years measures whether genotypes perform consistently relative to one another, i.e. whether their ranking is preserved, while the year term in the ANOVA tests whether the overall means differ between years. If the second year shifts every genotype up or down by a similar amount, you get a highly significant year effect together with a near-perfect correlation (0.99) and a non-significant G*Y interaction, which is exactly your situation: the environments differ, but each genotype's relative performance is the same in both years.
  • asked a question related to Replication
Question
4 answers
I have been doing a lot of qPCRs, and one of my genes returns a lot of "No Ct" values. A colleague suggested going back into the Aria software and changing the settings so that the two technical replicates are not automatically averaged but instead shown as two individual values, to see whether any values were there, and for some there were, which is great.
My question is: is this a widely accepted thing to do?
Relevant answer
Answer
Ahmad Al Khraisat that is a really great point to include in my methods so that it's completely transparent. As this is more of a cool side project and not the sole focus of my master's project I think it's acceptable. In a perfect world with lots of time and money, I would spend time optimizing the assay.
Really appreciate your help
  • asked a question related to Replication
Question
4 answers
Hi,
I found a plasmid that has both a pSa ori and a ColE1 ori. Does anyone know how this kind of plasmid replicates in E. coli? Could it make mistakes during replication? Thanks for your kind help.
Relevant answer
Answer
Thanks, I got it. It is not a Ti plasmid; it should be a plasmid dimer. The different origins are functional in different host strains, but I know the pSa ori can also work in E. coli. Maybe in E. coli the ColE1 ori works preferentially.
  • asked a question related to Replication
Question
7 answers
When computing descriptive statistics, should I use means or individual replications?
Relevant answer
Answer
The mean is a descriptive statistic but replicates are meaningless in this context.
  • asked a question related to Replication
Question
4 answers
I've used a drug that I am suspecting increases cell migration. I've got data of area at 0 hour and 24h of a scratch assay. I've got many replicates but can't seem to figure out the best strategy for statistical analysis. Any suggestion with some brief explanation?
Relevant answer
Answer
Dear Hammad
First, record the number of migrated cells (or the change in scratch area) in every replicate for treated and untreated samples using image-analysis tools (e.g. ImageJ and the like). Then you can use an unpaired t-test to make the statistical comparison. However, if you have employed more than one concentration of the drug of interest, you should use one-way ANOVA (followed by Tukey's post hoc test) to compare the performance of the different concentrations (including zero concentration = untreated group). I suggest using GraphPad Prism as a simple and straightforward statistical package.
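A minimal sketch of the suggested unpaired t-test in Python (hypothetical percent-closure values for one drug concentration vs. control, using scipy.stats.ttest_ind):

```python
# Unpaired two-sided t-test on percent wound closure at 24 h.
# All replicate values below are made up for illustration.
from scipy import stats

control = [32.0, 35.5, 30.8, 33.9, 31.2]   # % closure, untreated
treated = [48.1, 52.3, 50.0, 47.5, 51.8]   # % closure, drug-treated

t_stat, p_val = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_val:.5f}")
```

With more than one concentration, the same data layout feeds directly into scipy.stats.f_oneway for the one-way ANOVA mentioned above.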
  • asked a question related to Replication
Question
3 answers
I recently performed qRT-PCR on my samples with an internal control (ACT8). I am a bit concerned that the SD of the sample technical replicates is around 0.5.
Is the data still valid, given that the difference between technical replicate values is less than 0.3?
Relevant answer
Answer
You're welcome :)
  • asked a question related to Replication
Question
1 answer
I am looking for collaboration in science teaching and learning, specifically in physics, because we have amassed many materials that can be replicated and implemented. I am looking at countries that have perennial problems with science equipment and modalities; advanced classrooms will not benefit from my materials, as we developed them amid a scarcity of modalities. We are willing to share them for free. We also look forward to publishing the results of implementation for dissemination to a wider set of users.
Relevant answer
Answer
Dear Physicist Sotero Ontal Malayao, Jr.,
I think it is a good idea to contact the Philippine Department of Education, share your ideas, and collaborate with them in this regard, since this educational agency has so many affiliations with global/foreign educational institutions, universities, organizations like the United States and others.
All the best.
Respectfully,
Dr. Zoncita Del Mundo Norman, retired (SFUSD)
  • asked a question related to Replication
Question
3 answers
I want to use an alpha lattice design in SAS. It is a multi-location, multi-year trial: 4 locations, 2 years (4 x 2 = 8 environments), and 45 genotypes.
2 replications
5 blocks/rep.
Thanks in advance.
Relevant answer
Answer
Hello
I need the SAS commands for genotypic correlation.
  • asked a question related to Replication
Question
1 answer
I'm trying to replicate an older protocol that used Promega PCR Master Mix (2x), using the master mix I have on hand (AmpliTaq Gold 360)
Relevant answer
Answer
Hi Kari
in the enclosed safety data sheet you'll find the information.
all the best
fred
  • asked a question related to Replication
Question
3 answers
I performed a TaqMan qPCR assay for miRNA in tissue samples and found that two of my technical replicates are always aligned while the third goes way off in value. Why is that?
Relevant answer
Answer
Mohammad Alzeyadi Thanks for the information. It seems the instrument had some issues.
  • asked a question related to Replication
Question
8 answers
The number of samples is 30 with three replicates.
Relevant answer
Answer
From your regression model you get the estimate (2.1434) and the standard error of the slope. From the estimate, subtract 1 (2.1434-1 = 1.1434). Divide this value by the standard error. The resulting ratio is a t value. You can get the corresponding p-value from a t-table with n-2 degrees of freedom, where n is the number of data points used to estimate the regression line.
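Numerically, with the estimate from the question and a hypothetical standard error and sample size, the test looks like this:

```python
# Test H0: slope = 1 using a t ratio, as described above.
# The slope is taken from the question; the standard error and n are
# hypothetical placeholders -- substitute your regression output.
from scipy import stats

slope    = 2.1434   # estimated slope from the regression
se_slope = 0.25     # hypothetical standard error of the slope
n        = 30       # number of data points used to fit the line

t_ratio = (slope - 1.0) / se_slope             # (estimate - 1) / SE
p_value = 2 * stats.t.sf(abs(t_ratio), n - 2)  # two-sided, n - 2 df
print(f"t = {t_ratio:.3f}, p = {p_value:.4f}")
```

The same two lines replace a manual t-table lookup for any hypothesized slope value, not just 1.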
  • asked a question related to Replication
Question
1 answer
Hi :)
I'm trying to replicate a protocol contained in the following paper: doi:10.1152/jn.00511.2016
I'd need to measure the median frequency of spontaneous oscillations of the membrane potential. To do so, I would like to calculate the discrete Fourier transform from a recording of spontaneous Vm oscillations and then the median frequency from a first-order interpolation of the cumulative probability of the power-spectral densities from 0.1 to 100 Hz.
I don't know how to perform this kind of calculations in Origin Pro software or Matlab: could you please help me with suggestions? Is there any simple code you know to start from?
Thanks,
Relevant answer
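For what it's worth, a minimal NumPy sketch of the described computation (the synthetic Vm trace and sampling rate are assumptions; the same steps translate directly to Matlab or Origin):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                          # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)         # 10 s synthetic recording
# Fake Vm trace: a 5 Hz oscillation plus noise (stands in for real data)
vm = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)

# Power spectral density from the discrete Fourier transform
freqs = np.fft.rfftfreq(vm.size, d=1 / fs)
psd = np.abs(np.fft.rfft(vm)) ** 2

# Restrict to 0.1-100 Hz as in the protocol
band = (freqs >= 0.1) & (freqs <= 100)
f_band, p_band = freqs[band], psd[band]

# Cumulative probability of the PSD, then first-order (linear)
# interpolation to the frequency at cumulative probability 0.5
cum = np.cumsum(p_band) / p_band.sum()
median_freq = np.interp(0.5, cum, f_band)
print(median_freq)      # close to 5 Hz for this synthetic trace
```

In Matlab the same pipeline is fft/cumsum/interp1; the key point is normalising the cumulative PSD to 1 before interpolating at 0.5.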
  • asked a question related to Replication
Question
2 answers
The Background of the Question and a Suggested Approach
Consider that, e.g., a tensile strength test has been performed with, say, three replicate specimens per specimen type on an inhomogeneous or anisotropic material like wood. Why do the strength property determinations typically not consider the number of collected data points? As a simplification, imagine, e.g., that replicate specimen 1 fails at 1.0 % strain with 500 collected data points, replicate 2 at 1.5 % strain with 750 data points and replicate 3 at 2.0 % strain with 1 000 data points. For the sake of argument, let us assume that the replicates with a lower strain are not defective specimens, i.e., they are accounted for by natural variation(s). Would it not make sense to use the ratio of the collected data points per replicate specimen (i.e., the number of data points a given replicate specimen has divided by the total number of data points for all replicates of a given specimen type combined) as a weighting factor to potentially calculate more realistic results? Does this make sense if one were to, e.g., plot an averaged stress-strain curve that considers all replicates by combining them into one plot for a given specimen type?
Questioning of the Weighting
Does this weighting approach introduce bias and a significant error(s) in the results by emphasising the measurements with a higher number of data points? For example, suppose the idea is to average all repeat specimens to describe the mechanical properties of a given specimen type. In that case, the issue is that the number of collected data points can vary significantly. Therefore, the repeat specimen with a higher number of data points is emphasised in the weighted averaged results. Then again, if no weighting is applied, then, e.g., there are 500 more data points between replicates 1 and 3 in the above hypothetical situation, i.e., the averaging is still biased since there is a 500 data point difference in the strain and other load data and, e.g., replicate 3 has some data points that neither of the preceding replicates has. Is the “answer” such that we assume a similar type of behaviour even when the recorded data vary, i.e., the trends of the stress-strain curves should be the same even if the specimens fail at different loads, strains, and times?
Further Questions and Suggestions
If this data-point-based weighting of the average mechanical properties is by its very nature an incorrect approach, should at least the number of collected data points or time taken in the test per replicate be reported to give a more realistic understanding of the research results? Furthermore, when averaging the results from repeat specimens, the assumption is that the elapsed times in the recorded data match the applied load(s). However, this is never the case with repeat specimens; matching the data meticulously as an exact function of time is tedious and time-consuming. So, instead of just weighting the data, should the data be somehow normalised with respect to the elapsed time of the test in question? Consider that the overall strength of a given material might, e.g., have contributions from only one repeat specimen that simply took much longer to fail, as is the case in the above hypothetical example.
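One pragmatic alternative to data-point weighting is to resample every replicate onto a common strain grid before averaging, so each specimen contributes equally regardless of how many points its logger happened to record. A rough Python sketch with invented linear curves (all numbers are assumptions mirroring the hypothetical example):

```python
import numpy as np

# Three synthetic replicates mirroring the hypothetical example:
# failure at 1.0/1.5/2.0 % strain with 500/750/1000 logged points.
curves = [
    (np.linspace(0, 1.0, 500),  400 * np.linspace(0, 1.0, 500)),
    (np.linspace(0, 1.5, 750),  380 * np.linspace(0, 1.5, 750)),
    (np.linspace(0, 2.0, 1000), 360 * np.linspace(0, 2.0, 1000)),
]

# Common strain grid up to the SHORTEST replicate, so the average
# never extrapolates; each replicate then carries equal weight.
max_common = min(strain.max() for strain, _ in curves)
grid = np.linspace(0, max_common, 200)
avg_stress = np.mean(
    [np.interp(grid, strain, stress) for strain, stress in curves], axis=0
)
print(avg_stress[-1])   # mean stress at the last common strain (1.0 %)
```

Resampling decouples the averaging from the logging rate, which is arguably the root of the weighting dilemma; beyond the shortest replicate's failure strain the average is simply undefined rather than dominated by one specimen.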
Relevant answer
Answer
Yes, of course, because our target is to produce strong, rigid materials and avoid carbon as much as possible.
  • asked a question related to Replication
Question
2 answers
Hello all,
I would like to export all the data that has been recorded in my records modules to Excel, either as different Excel files or as one file with a different sheet per replication. Would you have an idea of how to do so?
Thank you in advance !
Relevant answer
Answer
Thank you, I really appreciate both the question and the answer. They are both important.
  • asked a question related to Replication
Question
5 answers
I have 2 experimental groups, each with 3-5 biological replicates.
I want a statistical test that can compare parameter x between the two experimental groups.
Within each biological replicate I have ~2000-4000 cells that I want to factor into the analysis. How should I do this? Taking the mean for each biological replicate and performing a t test to compare the experimental groups seems inappropriate since the variance of the data is not accounted for. Similarly, factoring in all the data points and performing t tests between the groups seems inappropriate.
Any advice would be much appreciated.
Relevant answer
Answer
I think multilevel analysis is unnecessarily sophisticated here. The result will be identical to the much simpler analysis of the 3-5 averages per group.
Both approaches (using averages or multilevel models) assume that the distribution of the response across all cells of a replicate is "homogeneous". A pattern that changes (e.g., a unimodal distribution splitting up into a bimodal one) is not accounted for. If this is an interesting biological phenomenon, you should go for an entirely different analysis (as this is an entirely different question).
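The simpler analysis described above can be sketched as follows (all data synthetic; 4 replicates and ~3000 cells each are assumptions standing in for the poster's setup):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in: 2 groups x 4 biological replicates x ~3000 cells.
group_a = [rng.normal(10.0, 2.0, 3000) for _ in range(4)]
group_b = [rng.normal(11.0, 2.0, 3000) for _ in range(4)]

# Collapse each biological replicate to a single mean, then compare
# the replicate means; the replicate, not the cell, is the unit here.
means_a = [cells.mean() for cells in group_a]
means_b = [cells.mean() for cells in group_b]

t_stat, p_value = stats.ttest_ind(means_a, means_b)
print(p_value)
```

Treating each of the thousands of cells as an independent observation would wildly inflate the apparent sample size; averaging first keeps the degrees of freedom honest.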
  • asked a question related to Replication
Question
1 answer
what is the threshold values for selecting or rejecting the gene as reference gene? how much intergroup variability can be tolerated?
Relevant answer
Answer
That's not a valid concern. If you have different amounts of starting materials, you would expect to have different Ct values in your reference gene between BIOLOGICAL samples. That's the entire point of including this control.
What you should be worried about are TECHNICAL replicates. R squared values should be as close to 0.99 as possible to publish.
  • asked a question related to Replication
Question
4 answers
Is it proposed to use replicates (and if yes, how many) when doing spatial omics, using the same type of tissues but from different animals within the same phylum?
Relevant answer
Answer
Sure, costs are relevant, but much more relevant is the clear definition of a statistical model that maps meaningful biological features. It is simply not really clear how to do that in a multidimensional space.
If you find some kind of pattern (however you define it), it is certainly good practice to repeat the entire experiment and analysis at least once (better twice or thrice) to see whether the pattern you identified recurs (more or less) robustly. If the observed pattern suggests some biological interpretation, you can go back to the biology and test the hypothesized effects with new experiments (knock-ins, knock-outs, blocking, competing, histology, etc.).
  • asked a question related to Replication
Question
3 answers
Experimental set-up:
I have recorded plant performance values in triplicate on two plants from a 4x3x2 factorial designed experiment, over 7 days
As I have only two biological replicates per treatment, does this negate any meaningful statistics?
Thanks very much for the communities help!
Relevant answer
Answer
In other words, there are ways to publish your results, even here, in RG net, if they somehow make sense.
  • asked a question related to Replication
Question
5 answers
Often, we see work published and replicated by open-access review articles. Shouldn't we focus on original ideas to promote science?
Relevant answer
Answer
As far as I am concerned, a lot of work has to be replicated. A single paper or study (with often a small sample) is not conclusive enough. Hence, there is benefit from replication - and reviews, more broadly.
To answer your second question - about hunting for ideas and going for it, I agree that is what I do as well. I am now (thankfully) more experienced and can develop the methodology better for addressing the question(s) at hand.
  • asked a question related to Replication
Question
1 answer
It's a bit of a silly question but is there an explanation why E. coli has so many termination sites? Logically, one or two would be enough.
Is it just in case one of the replication forks progresses much faster than the other, so that the two forks don't meet in the middle?
Maybe a weird question but I was wondering...
Relevant answer
Answer
The presence of several ter sites for each replication fork ensures that replication termination occurs and reflects a degree of redundancy, supported by the highly conserved nature of ter sites and their highly specific cognate binding capabilities.
Please refer to the link attached below for more information.
The ter elements are asymmetric patterns of DNA that act as protein binding sites. These elements are situated in the terminus region, approximately opposite the origin of replication. The binding of specific proteins to ter elements provides a trap for the proceeding replication fork, catching the replication fork as it passes. There are several ter elements responsible for stopping each replication fork, with each of these elements being specific for the fork passing in one direction only, that is, they have functional polarity. The ter-protein complex responsible for catching the clockwise replication fork will allow the anticlockwise fork to proceed unchecked, until it is stopped by its own anticlockwise facing ter element fork trap.
Replication fork traps have been identified in multiple species possessing circular chromosomes, including E. coli. Fork traps prevent over replication of the bacterial chromosome and stall a faster fork in the case that one side of the replication was proceeding faster than the other.
Best.
  • asked a question related to Replication
Question
1 answer
Hi! I'm trying to replicate this synthesis for Copper nanoparticles using as a capping agent PEG 10,000, however I'm having trouble finding the right amount of each reagent in grams, can anyone please help me with that?
"In a typical preparation process, CuCl2·2H2O aqueous solution was prepared by dissolving CuCl2·2H2O (10 mmol) in 50 ml deionized water. A flask containing CuCl2·2H2O aqueous solution was heated to 80 ◦C in an oil bath with magnetic stirring. A 50 ml L-ascorbic acid aqueous solution of various concentrations (0.4, 0.6, 0.8 and 1.0 M) was added dropwise into the flask while stirring. The mixture was kept at 80 ◦C until a dark solution was obtained. The resulting dispersion was centrifuged at 8000 rpm for 15 min. The supernatant was placed under ambient conditions for 2 months"
Relevant answer
Answer
Calculate the ratio of reactants according to the reaction.
Cu2+ +2e =Cu
2ascorbic acid -2e = 2dehydroascorbic acid
file:///C:/Users/user/Downloads/inorganics-10-00102-v2.pdf
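To turn the quoted amounts into grams, multiply moles by molar mass; a quick sketch (molar masses rounded from standard atomic weights):

```python
# Molar masses in g/mol (rounded from standard atomic weights)
M_CuCl2_2H2O = 63.55 + 2 * 35.45 + 2 * 18.02   # about 170.49 g/mol
M_ascorbic_acid = 176.12                        # C6H8O6

# 10 mmol CuCl2.2H2O for the 50 mL copper solution
m_salt = 0.010 * M_CuCl2_2H2O
print(f"CuCl2.2H2O: {m_salt:.2f} g")            # about 1.70 g

# 50 mL of L-ascorbic acid at each quoted concentration
for conc in (0.4, 0.6, 0.8, 1.0):
    m_aa = conc * 0.050 * M_ascorbic_acid
    print(f"{conc} M: {m_aa:.2f} g in 50 mL")
```

So roughly 1.70 g of the copper salt, and 3.52 g to 8.81 g of L-ascorbic acid in 50 mL depending on the target concentration. The quoted protocol does not state a PEG 10,000 amount, so that quantity cannot be derived from it.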
  • asked a question related to Replication
Question
1 answer
I have noticed that there are single microscopic slide/slip chambers (Cytodyne, Flexflow, IBIDI) and many studies have used these chambers. I wondered how it is possible to have more robust data by using a single fluid flow chamber (1 replicate) and a control?
Relevant answer
Answer
Hi Mustafa,
our tech support team is happy to help with your question but would need a bit more info on your research question, experimental setup, etc. Please get in touch via Email: techsupport@ibidi.com
  • asked a question related to Replication
Question
5 answers
I am studying the replication of a management process in the big four audit companies (EY, PwC, KPMG, and Deloitte). This process is relatively similar across these companies. So, is it a single case of replicating the process in the big four? or is it a multiple case study?
Note: I am not looking for variances between cases, as there aren't any; I am looking into how the process is replicated in these firms, and I am considering them as one unit.
Relevant answer
Answer
It is normal to repeat the audit process, but in other semi-expanded ways
  • asked a question related to Replication
Question
2 answers
I'm currently designing a new experiment to measure RNA expression of several genes in mouse samples. I normally use the 2-ddCt method to compare control versus treated animals (5 vs 5 animals, for instance) and a regular t test (using the 5 biological replicates per group). But in this case I'd like to evaluate the expression of some genes only in liver NK cells, and the quantity of RNA I can get is really small, so I'm pooling 3 livers together as one pooled sample. If I have only 6 animals, I will get only 2 biological replicates, and therefore the stats will be really poor (Control 1 and 2 average / Treated 1 and 2 average). I repeated the same RT-PCR twice with those same samples and used 5 replicates per sample in each PCR. Is that of any help to get a better statistical analysis? What is the best (but correct) way to statistically analyse that data? Could I analyse each RT-PCR separately (technical replicates) and then compare those to each other? Or is it correct to use the 5 replicates as 5 "samples" for the t test analysis? Any suggestions would be really appreciated.
Relevant answer
Answer
I don't think that you win any statistical power by repeating the measurements of the same old samples in a second run. It does not provide any further information about the biological variance, but adds the complexity that dCt values are not comparable between different runs (you would have to correct for this, and that does not seem possible with just a single technical replicate).
Pooling animals in a single sample ("biological replicate") to be measured is something you can certainly do. This will reduce the standard deviation by the square root of the pool size to
sd(x_pooled) = sd(x)/sqrt(poolsize)
and the standard error to
se = sd(x_pooled) / sqrt(n_pooled)
se = (sd(x)/sqrt(poolsize)) / sqrt(n/poolsize)
se = (sd(x)/sqrt(poolsize)) / (sqrt(n)/sqrt(poolsize))
se = sd(x) / sqrt(n)
which is the same as without pooling. The drawback of pooling comes with the loss of degrees of freedom (df) for the statistical analysis. Instead of n values you have only n/poolsize values. In your case, using 6+6 = 12 individual (independent) values would give you 12-2 = 10 df for the test, corresponding to a critical t (alpha=0.05) of 2.2. After pooling 3 animals, you have only 2+2 = 4 individual values, giving only 4-2 = 2 df and a critical t of 4.3. This makes tests less powerful. With very small sample sizes, the critical t increases rapidly and strongly with pooling, so pooling is often not advantageous when the sample size is already small.
Just for comparison: if you had 50 animals per group, the unpooled experiment would give you 50+50-2 = 98 df with t_crit = 2.0; pooling 5 animals would reduce the analysis costs to 1/5th and still give you 10+10-2 = 18 df with t_crit = 2.1, which is only slightly larger than for the unpooled experiment.
I'd suggest a WTA (whole transcriptome amplification) rather than pooling to get enough material for qPCR.
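The critical t values used in this comparison can be checked directly with SciPy:

```python
from scipy import stats

def t_crit(df, alpha=0.05):
    """Two-sided critical t value for the given degrees of freedom."""
    return stats.t.ppf(1 - alpha / 2, df)

# 6+6 animals unpooled: 10 df; pooled in threes: 2+2 samples -> 2 df
print(round(t_crit(10), 2))   # 2.23
print(round(t_crit(2), 2))    # 4.3
# 50 animals per group: 98 df; pooled in fives: 10+10 samples -> 18 df
print(round(t_crit(98), 2))   # 1.98
print(round(t_crit(18), 2))   # 2.1
```

The jump from 2.23 to 4.3 when going from 10 df to 2 df is exactly the power penalty described above.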
  • asked a question related to Replication
Question
5 answers
Within a project about geographical traceability of horticultural products, we would like to apply classification models to our data set (e.g. LDA) to predict if it is possible to correctly classify samples according to their origin and based on the results of 20-25 different chemical variables.
We identified 5 cultivation areas and selected 41 orchards (experimental units) in total. In each orchard, 10 samples were collected (each sample from a different tree). The samples were analyzed separately. So, at the end, we have the results for 410 samples.
The question is: do the 10 samples per orchard have to be considered pseudoreplicates, since they belong to the same experimental unit (even if collected from independent trees)? Should the LDA be performed considering 41 replicates (the 41 orchards, taking the average of the 10 samples), or should we run it on the whole dataset?
Thank you for your help.
Relevant answer
Answer
Nick VL Serão Thank you for this solution. I have been looking for this answer. But do you know how to accomplish this on R?
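For what it's worth, the orchard-level aggregation can be sketched in Python with scikit-learn (all data below are invented stand-ins; the same logic ports to R):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in: 41 orchards x 10 samples x 20 chemical variables,
# with 5 cultivation areas as class labels (all values invented).
n_orchards, n_samples, n_vars = 41, 10, 20
areas = rng.integers(0, 5, size=n_orchards)
X = rng.normal(size=(n_orchards, n_samples, n_vars)) + areas[:, None, None]

# Treat the orchard as the experimental unit: average its 10
# within-orchard samples, leaving one row per orchard (41 x 20).
X_orchard = X.mean(axis=1)

lda = LinearDiscriminantAnalysis().fit(X_orchard, areas)
print(lda.score(X_orchard, areas))   # resubstitution accuracy
```

With only 41 experimental units, cross-validation that leaves whole orchards out would give a far more honest accuracy estimate than resubstitution, and it avoids the pseudoreplication issue of letting samples from one orchard appear in both training and test sets.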
  • asked a question related to Replication
Question
3 answers
I want to replicate the Newey-West HAC OLS regression results from EViews in R. I have used the NeweyWest() function in R, but I get an error.
I want to apply the default settings that EViews uses for Newey-West, but I can't understand how to apply them. Please help me with the code.
Relevant answer
Answer
Try to use the function NeweyWest from the sandwich library.
Newey-West HAC Covariance Matrix Estimation
Description
A set of functions implementing the Newey & West (1987, 1994) heteroscedasticity and autocorrelation consistent (HAC) covariance matrix estimators.
1. Usage
NeweyWest(x, lag = NULL, order.by = NULL, prewhite = TRUE, adjust = FALSE, diagnostics = FALSE, sandwich = TRUE, ar.method = "ols", data = list(), verbose = FALSE)
2. Arguments
x : a fitted model object.
lag : integer specifying the maximum lag with positive weight for the Newey-West estimator. If set to NULL floor(bwNeweyWest(x, ...)) is used.
order.by : Either a vector z or a formula with a single explanatory variable like ~ z. The observations in the model are ordered by the size of z. If set to NULL (the default) the observations are assumed to be ordered (e.g., a time series).
prewhite : logical or integer. Should the estimating functions be prewhitened? If TRUE or greater than 0 a VAR model of order as.integer(prewhite) is fitted via ar with method "ols" and demean = FALSE. The default is to use VAR(1) prewhitening.
kernel : a character specifying the kernel used. All kernels used are described in Andrews (1991). bwNeweyWest can only compute bandwidths for "Bartlett", "Parzen" and "Quadratic Spectral".
adjust : logical. Should a finite sample adjustment be made? This amounts to multiplication with n/(n-k) where n is the number of observations and k the number of estimated parameters.
diagnostics : logical. Should additional model diagnostics be returned? See vcovHAC for details.
sandwich : logical. Should the sandwich estimator be computed? If set to FALSE only the middle matrix is returned.
ar.method : character. The method argument passed to ar for prewhitening (only, not for bandwidth selection).
data : an optional data frame containing the variables in the order.by model. By default the variables are taken from the environment which the function is called from.
verbose : logical. Should the lag truncation parameter used be printed?
weights : numeric. A vector of weights used for weighting the estimated coefficients of the approximation model (as specified by approx). By default all weights are 1 except that for the intercept term (if there is more than one variable).
3. Details
NeweyWest is a convenience interface to vcovHAC using Bartlett kernel weights as described in Newey & West (1987, 1994). The automatic bandwidth selection procedure described in Newey & West (1994) is used as the default and can also be supplied to kernHAC for the Parzen and quadratic spectral kernel. It is implemented in bwNeweyWest which does not truncate its results - if the results for the Parzen and Bartlett kernels should be truncated, this has to be applied afterwards. For Bartlett weights this is implemented in NeweyWest.
To obtain the estimator described in Newey & West (1987), prewhitening has to be suppressed.
  • asked a question related to Replication
Question
1 answer
We've consulted definitions of repeat and replicate measures, but thought we'd put this question to the Researchgate world nonetheless.
The hypothetical case involves 30 pairs of subjects, chosen at random, in 30 regions of the world, with none of them knowing of the others. In each trial, Subject A taps out a number of identical sounds on a table top -- e.g. "tap tap tap", so that Subject B can hear the sounds. After two seconds, Subject B taps a number of taps in response -- either the same or a different number. Some of the 30 pairs of subjects exchange only one series of taps, while others exchange more. In total, there are 100 sent-and-responded pairs in the dataset. Does this mean there are 100 repeated measurements and 30 replicates? Or is it the other way around? Or something else?
Relevant answer
Here you have our mouse experiment with repeat measurements. It may give you ideas.
  • asked a question related to Replication
Question
5 answers
Consider FACS data from two groups (A and B, to be compared), each containing N biological replicates (2N FACS plots in total). The outcome of the analysis is that a cell is either positive or negative for a marker. At the end, one can make the following contingency table by adding (or averaging) cells from the N replicates for each group:
                          Group A   Group B
Number of positive cells  A_pos     B_pos
Number of negative cells  A_neg     B_neg
I guess one can perform Fisher's exact test to check whether the positive cells (or negative cells) are more likely to be in one of the two groups.
1) If one does so, what is the use of biological replicates?
2) Should one average the number of cells per group or add cells from N replicates in each group?
3) Is there any other appropriate way to perform such an analysis: for example, calculating the percentage of positive cells for each replicate and then checking whether the mean percentage (from N replicates) differs significantly between the two groups?
Relevant answer
Answer
Yes, you can, because Fisher's exact test is one of the important and precise tests. It allows statistical testing of unrelated (independent) qualitative data, and its aim is to determine the significance of the difference in proportions between those variables.
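For reference, Fisher's exact test on such a 2x2 table is a one-liner in SciPy (the counts below are invented for illustration):

```python
from scipy.stats import fisher_exact

# Rows: positive / negative cells; columns: group A / group B
table = [[120, 80],
         [380, 420]]
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)
```

Whether to pool counts across replicates or instead compare per-replicate percentages (option 3 in the question) is arguably the more important choice: pooling discards the biological replication, while the per-replicate route keeps that variability in the test.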
  • asked a question related to Replication
Question
3 answers
I want to analyze my RT-qPCR experiments. I want to compare the expression of genes in cell types A, B and C, and I have these cell types from individuals 1, 2 and 3 (these are my biological replicates, n=3). I have a reference gene (housekeeping gene, GAPDH), but in this case I don't have a control sample. For example, I could normalize all values for each individual 1, 2 or 3 to cell type A from that individual. But this feels wrong, because the standard deviation for cell type A will be nearly 0 and a statistical comparison with this cell type will not be possible.
Would it also be possible to choose the lowest value I measured across all samples from all three biological replicates and then normalize every value to it? Or to apply the delta-delta Ct method and just normalize the values to the reference gene (housekeeping gene) without normalizing to a control sample?
Relevant answer
Answer
Where is the problem to simply compare dCt values between the three groups?
Differences could be tested with Fisher's LSD or Tukey's HSD (both will control the family-wise error-rate across all three possible comparisons).
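As a sketch of the suggested comparison, recent SciPy versions (1.8 and later) ship tukey_hsd; the dCt values below are invented:

```python
from scipy.stats import tukey_hsd

# Hypothetical dCt values for cell types A, B, C (n = 3 individuals each)
dct_a = [2.1, 2.4, 2.2]
dct_b = [3.0, 3.3, 3.1]
dct_c = [1.0, 1.2, 0.9]

res = tukey_hsd(dct_a, dct_b, dct_c)
print(res.pvalue)   # pairwise p-value matrix, family-wise error controlled
```

Working directly on dCt values sidesteps the missing control sample entirely, since the comparison between cell types needs no further normalisation.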
  • asked a question related to Replication
Question
2 answers
Dear all,
we have TMT10plex data on four samples and one pooled reference sample, structure is
TMT1: Pool-Pool-S1-S1-S2-S2-S3-S3-S4-S4
TMT2: Pool-Pool-S1-S1-S2-S2-S3-S3-S4-S4
...so there are 4 replicates for both the pooled reference and for each of the samples, however split across two individual TMT runs (with two technical replicates each, for the record). What is the best framework for joining these to get significance of enrichment? After all, we have 4x2 replication, which I would like to use. Any suggestions or references highly appreciated!
Christof
Relevant answer
Answer
Hi Christof,
I am assuming you are hoping to bridge the 2 TMT batches (plus the technical replicates) for a joint analysis. If that is the case, I would recommend looking at the MSstatsTMT package in R.
We used this to bridge 2 TMT 16-plex batches (with 3 technical replicates each) into one combined analysis.
MSstatsTMT website - https://msstats.org/msstatstmt/ (the guide on setting up the annotation file may have the framework you are looking for)
Best wishes!
Stephanie
  • asked a question related to Replication
Question
3 answers
Looking for advice on how to reduce variability between technical replicates when performing ELISAs of lysates of supernatants from bacterial recombinant expression strains. Currently incubating plates statically at room temperature for 1 hour after adding standards and biological samples. Would extending this incubation or mild shaking improve results?
Relevant answer
Answer
Thanks, y'all. Standards are tight, so I don't think it's pipetting error. But mixing sounds good. Will also increase capture antibody.
  • asked a question related to Replication
Question
4 answers
Hello,
I want to create a glycerol stock in a 96-well format so I can directly replicate it into a 96-well plate for growing and future experiments. I expect to do the replications fairly frequently. I'd really appreciate it if people can share what solution works the best for them in a similar situation.
In particular:
- what plate works better: deep well (2 mL) or the normal (~ 200 uL) ones
- do you fully thaw the plate before using it, or do you scrape from the frozen stock? If the latter, how does resealing work on the cold and potentially wet surface?
- Any other tips to ensure no cross-contamination between the wells?
Thank you in advance!
Nina
Relevant answer
Answer
I don't remember. It's been a while that I worked with it.