Randomized Clinical Trials - Science topic

Questions related to Randomized Clinical Trials
  • asked a question related to Randomized Clinical Trials
Question
3 answers
I want to study the effects of a pharmacological treatment (antidepressants) on quality of life in oncology patients. Apart from a depression diagnosis, which would be a prerequisite for administering the treatment, I need another screening tool to confirm that the patient is functioning well enough to give true and valid answers later in the main tests. I am therefore looking for a tool, validated in a clinical setting, that can detect any cognitive impairment due to a psychiatric condition or induced by a substance (e.g., high doses of morphine).
Relevant answer
Answer
Hello, I'm developing a tool that may help with that; it works with ID and colour-pigmentation areas, and one of its modules could apply here. I'd be happy to help.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
Hi researchers,
I am doing a meta-analysis for one of my systematic reviews. From your expertise, could you please suggest whether we can meta-analyse two-arm RCTs and multi-arm RCTs together?
Thank you
Relevant answer
Answer
Hi Preet,
It seems to me that what you are looking for is a network meta-analysis.
Since you have multi-arm trials, you probably have multiple treatments, and network meta-analysis is an extension of meta-analysis to a network of treatments. You can easily find many references for network meta-analysis; I just give a review here.
It could also be that your arms are the same treatment at different dosages, or that you would like to combine groups. In that case, have a look at the Cochrane guidance.
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Hello,
The researchers plan to conduct a randomized controlled trial (RCT) comparing a control group with an intervention group. They have provided a minimal clinically important difference (MCID) for length of stay (LOS) of one day. However, the standard deviation is extremely high at 120, resulting in an unreasonably large sample size requirement of approximately 500,000 participants, which is not feasible for a prospective RCT.
The issue arises from the high variability in LOS. I have set the power at the conventional level of 0.8 and the significance level (α) at 0.05.
Could you please help me address this issue?
Thank you!
Relevant answer
Answer
Thank you for your answer. Yes, narrowing the characteristics of the trial group to bring down the SD might be an option, but this would reduce generalizability and could make recruitment difficult.
I suggested running a pilot study instead of a full RCT because a power analysis is a necessary requirement for a full RCT. Given the high variability in LOS, the required sample size would be extremely large, making a full RCT impractical at this stage.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
The article "Low Risk of Hyperprogression with First-Line Chemoimmunotherapy for Advanced Non-Small Cell Lung Cancer: Pooled Analysis of 7 Clinical Trials" pools 7 clinical trials in its analysis. Does that mean the authors ran those 7 trials themselves, or is there a way to obtain that kind of information?
Relevant answer
Answer
You need to look for the "data availability" section when looking for patient data. It usually specifies whether the data are openly available (e.g., in GEO), deposited in repositories that require an access request, available upon reasonable request from the corresponding author, or not available.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
Can significant results in the primary outcome of a pilot RCT (n = 50) with prespecified confirmatory analysis be interpreted as class 1 evidence as it would have been in a full scale trial? That means, is it possible to conclude from a pilot trial that a significant effect in the primary outcome means that the intervention improves the targeted outcome or is it still necessary to run a larger full-scale trial to confirm the hypothesis again?
Relevant answer
Answer
Significant results obtained in the primary outcome of a pilot RCT (n = 50) with prior confirmatory analysis cannot be interpreted as sufficient evidence as would be the case in a full trial. Pilot trials are primarily designed to assess feasibility, methods and procedures. If the result is positive, a larger, definitive trial may then be considered. Pilot trials are not intended to provide definitive conclusions about the effectiveness of an intervention.
To obtain sufficient evidence, it is necessary to conduct a large, full trial that includes a sufficient sample size (and therefore sufficient power) to validate the results and reduce sampling bias. The results of a pilot trial may indicate a promising trend, but they must be confirmed by a more rigorous and larger trial to be considered conclusive.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
In an RCT, we have already determined the sample size based on one interim analysis for an adaptive design; however, we are now planning to add one more interim analysis for safety. Do I need to re-estimate the sample size?
Thanks!
  • asked a question related to Randomized Clinical Trials
Question
3 answers
As sample size calculation is extremely important for an RCT, are there any recommendations for simple tools? For example, can G*Power be used for the calculation for an RCT?
Relevant answer
Answer
Standard practice seems to be to power the study to detect specific between-group comparisons on the primary outcome or outcomes. If there is no clustering (i.e., it isn't multi-site, etc.), this reduces to one or more t-tests, or to ANCOVA if you have baseline primary outcome measures as covariates (which typically increases power). So you could in principle treat this as one or more t-tests (in the absence of clustering) and use something like G*Power. If the outcome is dichotomous, you might use a chi-square test or similar.
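For the dichotomous-outcome case mentioned above, the usual normal-approximation formula for comparing two independent proportions can be sketched in a few lines of Python (the event rates below are illustrative, not from any study in this thread):

```python
from math import ceil
from statistics import NormalDist

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing
    two independent proportions with a two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical z for alpha/2
    z_b = NormalDist().inv_cdf(power)           # z for the target power
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(num / (p1 - p2) ** 2)

# e.g. event rates of 30% vs 50% at alpha = .05, power = .80
print(n_per_group_two_proportions(0.30, 0.50))  # → 91
```

Dedicated tools such as G*Power apply small corrections (continuity, exact tests), so their answers can differ slightly from this approximation.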
  • asked a question related to Randomized Clinical Trials
Question
5 answers
I am doing a systematic review, and I am measuring risk of bias with RoB 2 for RCTs and ROBINS-I for non-RCTs. My question is: for single-arm studies, can I use ROBINS-I? I am not sure how to answer the questions for the confounding domain in this case.
Thank you!
Relevant answer
Answer
You can try ROBINS-E for single-arm studies, or try to adapt ROBINS-I to fit your situation. You should mention any adaptation or modification in the methodology section.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
Due to the unavailability of papers, if we consider an RCT, a case-control study, an opinion article, and grey unpublished literature all together in a systematic review, knowing that this will not give 100% quality assurance and comes with its own limitations, would you go ahead?
Or is it better to do a scoping review?
Looking forward to expert opinions.
Many thanks
Punitha
Relevant answer
Answer
Hi, I have faced this situation many times, and I also often see systematic reviews of studies that are case series (prospective/retrospective) only. In my own work I often take the decision to include non-RCTs that are prospective in design and, if possible, control for confounding by matching or by propensity scores, and seek the best means of assessing risk of bias. It certainly wouldn't be an anomaly if you performed a review of studies that were mainly case series. The Cochrane Library provides guidance on the decision whether to include non-randomised studies, and I would look there first, I think.
  • asked a question related to Randomized Clinical Trials
Question
11 answers
In my RCT study design I have two groups: a control group and an intervention group.
Each group has two time points: pre-management and post-management.
Can I use repeated-measures ANOVA?
How can I describe the results in a table?
Relevant answer
Answer
1. Repeated measures ANOVA assumes multiple groups or conditions, which is not the case here.
A one-factor repeated measures (RM) ANOVA assumes that there are k = 2 or more measurements of the DV for each independent subject. When k = 2, the F-test from a one-factor RM ANOVA is equivalent to a paired t-test, t² = F. (Try it if you don't believe it!)
The design Ali Itimad described has k = 2 measurements of the DV for each independent subject, but it also has two independent groups of subjects (control & intervention). One option for that design is what I would call a 2x2 mixed design ANOVA*, with Group as a between-Ss factor and Time (pre, post) as a within-Ss (or repeated measures) factor. But another option is ANCOVA, as suggested by Jos Feys (and seconded by me).
* I know that some folks (and maybe some disciplines) refer to any ANOVA model that has at least one RM factor as a repeated measures ANOVA. This is a very imprecise description that can cause a lot of confusion, IMO. I think it is wise to spell out explicitly how many factors there are, including which factors are between-Ss factors and which ones are within-Ss factors. YMMV.
PS- Thank you for acknowledging your oversight in not citing the source of the material you posted earlier. As I have said in other threads, I firmly believe that posting AI-generated content without proper attribution is just another form of academic dishonesty. And I wish people would stop doing it!
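The t² = F equivalence mentioned above is easy to verify numerically. The sketch below uses made-up pre/post scores for eight subjects and computes both statistics from scratch:

```python
from math import sqrt
from statistics import mean, stdev

# Pre/post scores for n = 8 subjects (made-up numbers).
pre  = [10, 12, 9, 14, 11, 13, 10, 12]
post = [13, 14, 10, 17, 13, 16, 12, 15]
n, k = len(pre), 2

# Paired t-test: t = mean(diff) / (sd(diff) / sqrt(n)).
diffs = [b - a for a, b in zip(pre, post)]
t = mean(diffs) / (stdev(diffs) / sqrt(n))

# One-factor repeated-measures ANOVA with k = 2 conditions.
gm = mean(pre + post)
ss_cond = n * sum((mean(c) - gm) ** 2 for c in (pre, post))      # condition SS
ss_subj = k * sum((mean(p) - gm) ** 2 for p in zip(pre, post))   # subject SS
ss_tot  = sum((x - gm) ** 2 for x in pre + post)
ss_err  = ss_tot - ss_cond - ss_subj
F = (ss_cond / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))

print(abs(t * t - F) < 1e-9)  # → True (t² equals F when k = 2)
```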
  • asked a question related to Randomized Clinical Trials
Question
5 answers
Hi all!
I have a question & any help would be appreciated (especially Dr. Holger Steinmetz & Dr. Jan Antfolk).
Suppose we are conducting a Meta-analysis & looking at 2 Dichotomous outcomes. Outcome 1 is reported in 14 studies (all observational, no RCT) with different measures (OR= 5/14, RR= 2/14, PR= 2/14, % Raw data=5/14). Outcome 2 is reported in 3 studies (OR=1/3, HR=1/3, Raw data=1/3).
Which Effect size to use for each outcome & how to convert one into another?
Relevant answer
Answer
I agree with Heba Ramadan, choose either RR, OR or Hazard ratio (HR), depending on what makes most sense and what is more applicable for your research question and commonly reported measures in that field. For some cases, OR or Hazard ratio may be more applicable, while RR is much more intuitive to interpret.
Check out this material for more info:
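As a small worked illustration of moving between raw 2×2 counts, an OR with its 95% CI, and an RR (via the Zhang & Yu 1998 conversion, which needs the baseline risk in the unexposed group), with entirely hypothetical counts:

```python
from math import exp, log, sqrt

# Hypothetical 2x2 table: events / non-events in exposed and unexposed.
a, b = 10, 90   # exposed:   events, non-events
c, d = 5, 95    # unexposed: events, non-events

odds_ratio = (a * d) / (b * c)
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
ci = (exp(log(odds_ratio) - 1.96 * se_log_or),
      exp(log(odds_ratio) + 1.96 * se_log_or))

# Zhang & Yu (1998) conversion of an OR to an RR, using the
# baseline risk p0 in the unexposed group.
p0 = c / (c + d)
risk_ratio = odds_ratio / ((1 - p0) + p0 * odds_ratio)

print(round(odds_ratio, 3), round(risk_ratio, 3))  # → 2.111 2.0
```

Here the conversion recovers the RR exactly (risks of 10% vs 5% give RR = 2); with adjusted ORs the conversion is only approximate.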
  • asked a question related to Randomized Clinical Trials
Question
2 answers
We intend to integrate qualitative methods in a clinical trial that evaluates the risks and benefits of a certain intervention/drug. I am just wondering if you have examples of studies exploring risks and benefits of a drug/intervention using qualitative methods.
Relevant answer
Answer
This kind of mixed methods is sometimes referred to as an "embedded" design because the qualitative portion of the project takes place within the context of the larger intervention study.
Vicky Plano-Clark has written about this design, so you could look at her articles, as well as the section on embedded designs in the textbook on mixed methods by Creswell and Plano-Clark.
  • asked a question related to Randomized Clinical Trials
Question
6 answers
Can someone please assist me with sample size calculation for an RCT with two groups, control and intervention? Is there a method utilizing ANCOVA? Which software is best? Assume I have met all the assumptions to run the ANCOVA.
Thank you kindly
Hashim
Relevant answer
Answer
I have found ANCOVA a bit of a pain to do power calculations for. In the past I have used simulations, which is flexible but time consuming. More recently I've used Superpower in R:
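Short of full simulation, Borm et al.'s (2007) approximation offers a quick cross-check: an ANCOVA with a baseline covariate correlated r with the outcome needs roughly (1 − r²) times the two-sample t-test sample size. A sketch with illustrative inputs (d and r below are assumptions, not values from this thread):

```python
from math import ceil
from statistics import NormalDist

def n_per_group_ttest(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample t-test."""
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2

# Borm et al. (2007) design factor: multiply the t-test n by (1 - r**2).
d, r = 0.5, 0.6               # assumed effect size and baseline-outcome correlation
n_t = n_per_group_ttest(d)
n_ancova = n_t * (1 - r ** 2)

print(ceil(n_t), ceil(n_ancova))  # → 63 41
```

This shows why baseline adjustment pays off: with r = 0.6 the required n per group drops by roughly a third.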
  • asked a question related to Randomized Clinical Trials
Question
5 answers
Dear Colleagues
Which type of regression analysis is best for testing the effect of a treatment on single/multiple outcomes?
The dependent variable is continuous, such as thickness of the Achilles tendon (in mm).
The independent variable is categorical (treatment/no treatment).
Best regards
Relevant answer
Answer
Hello dear researcher, I agree with Martin's opinion.
good luck with your research
  • asked a question related to Randomized Clinical Trials
Question
2 answers
I received comments from a reviewer on my manuscript, which has an RCT design in which I compared pre and post data with a paired t-test. The reviewer commented:
"Effect size indices and 95% CI could be presented."
What does this mean, and how can I present them?
Any sample article or other help would be appreciated.
Thank you
Relevant answer
Answer
The reviewer suggests incorporating effect size indices, like Cohen's d, and their 95% confidence intervals alongside your pair t-test results in the RCT manuscript. Effect size indices quantify the practical significance, with Cohen's d offering a standardized measure based on the mean difference and standard deviation. Including 95% confidence intervals provides a range for the true effect size. This additional information enhances the interpretation of your findings, offering readers a more nuanced understanding of both statistical and practical significance in the pre-post data comparison.
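For paired data, one common choice is d_z (the mean difference divided by the SD of the differences) with a normal-approximation confidence interval. The SE formula below is an approximation, and the difference scores are made up for illustration:

```python
from math import sqrt
from statistics import mean, stdev

# Pre-post difference scores for n = 10 participants (made-up numbers).
diffs = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
n = len(diffs)

d = mean(diffs) / stdev(diffs)          # Cohen's d for paired data (d_z)
se = sqrt(1 / n + d ** 2 / (2 * n))     # approximate standard error of d
ci = (d - 1.96 * se, d + 1.96 * se)     # normal-approximation 95% CI

print(round(d, 2), round(se, 2))  # → 2.01 0.55
```

Reporting would then read something like "d_z = 2.01, 95% CI [0.93, 3.09]" alongside the t-test result.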
  • asked a question related to Randomized Clinical Trials
Question
2 answers
This is because the sample size is small and some specific characteristics are required.
The inclusion and exclusion criteria were indicated. I acknowledged that this might lead to limited generalisability of the findings.
Relevant answer
Answer
If you want to have separate experimental and control groups, you can use "random assignment" from the original sample. But be sure your overall sample size is large enough to ensure that you have the power to detect significance. If you are not familiar with assessing the power of a test, the most widely used tool is g*Power
  • asked a question related to Randomized Clinical Trials
Question
1 answer
In my RCT, one of the exclusion criteria included the following: "Participants did not complete more than two modules." Is this related to the intent-to-treat principles, the gold standard for RCTs, or could we use it even if the study design doesn't adhere to intent-to-treat principles?
Relevant answer
Answer
If the exclusion is based on something that happens to participants after randomization, such as not completing the intended course of study, you should retain them in the study for an intention to treat analysis. You can do an RCT without using intention-to-treat analysis but if there are a substantial number of dropouts related to treatment compliance (or similar) your randomly formed groups rapidly become non-random because of something that may be related to your treatment. It introduces bias because the processes leading to dropout will be different in the control group. You must at least acknowledge this limitation and consider the implications.
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Many clinical trialists integrate qualitative and/or mixed methods research as part of their clinical trial projects. Could you please share your experiences and thoughts on the challenges of integrating these methodologies in clinical trials, and how to address them?
Relevant answer
Answer
This kind of design is sometimes referred to as "embedding" and I have attached an article that uses this approach. My personal opinion is that most of the designs I have seen with clinical trials fall into two of the classic categories in mixed methods, either exploratory sequential (qual --> QUAN) or explanatory sequential (QUAN --> qual). In the first case, qualitative methods are used to help create aspects of the trial. In the second case, aspects of the trial are followed by qualitative methods to help understand the outcomes of the trial.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
I have two datasets (both with experimental group and control group) which measure the same construct with two different forms due to age differences. All groups completed the measure at a pretest and post-test. The summation score of each individual at each time point was calculated.
Form A (10 items):
Exp group Pretest Post-test
Control group Pretest Post-test
Form B (6 items):
Exp group Pretest Post-test
Control group Pretest Post-test
I would like to transform the raw score into Z-score and aggregate the data from two groups, so that I can evaluate if there is any pre-post change in this construct. I wonder which mean and standard deviation to use for the calculation. Here are some of my considerations.
Option 1: Overall M and SD of both groups at pretest (T1)
The assumption is that the pretest M and SD represent the population without intervention. The post-test Z-score should reflect how much the score varies from the population mean at baseline (when Z = 0). My main concern is whether this ignored the differences between the time points where the data is collected (i.e. the M at the two time points may be different) and I can't attribute the pre-post difference to the intervention.
Option 2: Aggregating all the data and calculating a single overall M and SD
The assumption is all the data collected are from the same population/distribution, which is not true for the experimental group (as they received the intervention). However, the time-point difference seems to be considered and the M should be between T1 and T2.
Option 3: Use the overall M and SD for pretest score and use the control group M and SD for post-test score
This is a weird one, which I am not comfortable with. The overall M and SD represent the population at T1, while the post-test score of control group represent the population at T2.
May I have some advice on which option would be appropriate?
Relevant answer
Answer
Do you have paired data, i.e., the same individuals' pre- and post-test scores?
If so, work on the means and SDs of the differences.
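For the mechanics of Option 1 from the question (standardizing both time points against the pooled pretest mean and SD, with the caveats already noted there), a sketch with made-up Form A sum scores:

```python
from statistics import mean, stdev

def z_against_pretest(pre, post):
    """Standardize both time points against the pooled pretest mean/SD
    (Option 1), so scores from different forms share a common metric."""
    m, s = mean(pre), stdev(pre)
    return [(x - m) / s for x in pre], [(x - m) / s for x in post]

# Made-up Form A sum scores (experimental + control pooled at pretest).
pre_a  = [20, 22, 18, 25, 21, 19, 23, 24]
post_a = [24, 25, 19, 29, 23, 21, 27, 28]
z_pre, z_post = z_against_pretest(pre_a, post_a)

# Pretest z-scores are centred at 0 by construction; the post-test
# mean z then reads as change in pretest-SD units.
print(abs(mean(z_pre)) < 1e-9, round(mean(z_post), 2))  # → True 1.22
```

Each form (A and B) would be standardized against its own pretest distribution before pooling; as the question notes, this metric cannot by itself attribute pre-post change to the intervention.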
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Relevant answer
Answer
Sasidhar Duggineni
thanks for your comment. To be clear, I am not saying stop all RCTs or that they are not of value, but rather that they often get lumped together and praised as the best when some research designs are poor or the questions require something else.
  • asked a question related to Randomized Clinical Trials
Question
6 answers
I have performed a double-blind, placebo-controlled, randomized clinical trial with N=63 (32 in the placebo group and 31 in the intervention group). After submitting my manuscript, I received major revisions. In one of the comments, the reviewer asked me to justify the sample size. The reviewer's comment was: "Sample size justification required proper drafting"
I do not know how to justify the sample size of my research
Sample size calculation
For the calculation of sample size, the significance level and statistical power were set at 5% and 80%, respectively. Following the study by Mesri Alamdari N and colleagues, we used 0.23 nmol/L as the change in mean (d) and 0.29 nmol/L as the standard deviation of MDA, the primary outcome. Based on the formula, we needed a minimum of 25 participants per group. Allowing for a dropout rate of 20% during the clinical trial, 30 subjects per group were considered.
Thanks in advance
Kind regards
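The sample-size calculation described above can be reproduced with the standard normal-approximation formula for comparing two means:

```python
from statistics import NormalDist
from math import ceil

# Reproducing the calculation described above: alpha = .05 (two-sided),
# power = .80, delta = 0.23 nmol/L, sd = 0.29 nmol/L (MDA, primary outcome).
z = NormalDist().inv_cdf
alpha, power, delta, sd = 0.05, 0.80, 0.23, 0.29

n_per_group = ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2)
n_with_dropout = round(n_per_group * 1.2)   # inflate by 20% dropout, as in the text

print(n_per_group, n_with_dropout)  # → 25 30
```

A justification paragraph can then simply cite the source of delta and sd, the alpha/power conventions, and this formula.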
Relevant answer
Answer
Ma'Mon Abu Hammad, I noticed that in another thread (https://www.researchgate.net/post/MANCOVA-some_assumptions_not_met_Where_next) you stated explicitly that your response was generated by ChatGPT. Thank you for doing that. The response you posted above looks (to me, at least) like it was also generated via ChatGPT. If so, please label it as such (as you did in that other thread). Thanks for considering.
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Recently, we have wanted to run a trial for an interesting clinical problem that should involve blinding and a placebo control. What should we pay attention to at the beginning of a new RCT?
Relevant answer
Answer
Dear Dr Anand
Thanks a lot for your suggestion.
Best regards,
Ping Hu
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Any suggestions on calculating sample size for a superiority RCT in R or SAS?
I have used the 'pwr' package, which gives me a different result from the one I got from an online calculator (riskcalc.org).
My parameters are:
control: change in mean 20 ± 5
treatment: change in mean 15 ± 5
dropout: 20%
power: 0.9
Any suggestion would be appreciated.
Relevant answer
Answer
Hello Hanish,
Presuming that your threshold for clinical significance to detect between groups is 5 units or more (20 vs. 15), and that the SD for each group is 5 points, then your study is intended to determine if a group difference of at least 1 SD (5 units or more on a scale having an SD of 5 units) exists.
To mimic the two-stage process outlined by Chou & Liu (2004), then you could estimate N by evaluating a directional hypothesis test at target risk level of alpha/2 (see )
Via the freely available program G*Power (t-test, difference between two means), with ES = 1.0, alpha = .025, power = .90, and allocation ratio (n2/n1) = 1, the resultant N is 46 cases. With maximum attrition of 20%, that implies a starting N of at least 58 cases (29 per group).
Good luck with your work.
  • asked a question related to Randomized Clinical Trials
Question
7 answers
Is there free software to calculate a sample size for an RCT with more than two arms?
Could anyone provide an article commenting on the calculation of sample size for an RCT with more than two arms?
Thank you
Relevant answer
Answer
We should compare two interventions at a time. Suppose the arms are a, b, and c. We obtain a sample size based on a and b, then a sample size based on b and c, then a sample size based on a and c. Whichever of the three sample sizes is the highest, we take that sample size for each of the three arms. This method ensures an adequate sample in each arm.
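The take-the-largest-pairwise-n procedure described above can be sketched as follows (hypothetical means and a common SD; note that a multiplicity adjustment such as Bonferroni, e.g. alpha/3, would increase these numbers):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(m1, m2, sd, alpha=0.05, power=0.80):
    """Normal-approximation n per group for a two-sample comparison of means."""
    z = NormalDist().inv_cdf
    d = abs(m1 - m2) / sd                      # standardized difference
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

# Arms a, b, c with hypothetical means and a common SD of 5.
means = {"a": 10, "b": 12, "c": 15}
pairs = [("a", "b"), ("b", "c"), ("a", "c")]
sizes = {p: n_per_group(means[p[0]], means[p[1]], 5) for p in pairs}

# Each arm gets the largest pairwise n, driven here by a vs b
# (the smallest between-arm difference).
print(sizes, max(sizes.values()))
```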
  • asked a question related to Randomized Clinical Trials
Question
2 answers
How to calculate sample size for multi-arm (three arm) RCT?
Relevant answer
Answer
To calculate sample size for a three-arm randomized controlled trial (RCT), you will need to first determine the desired level of power, the effect size, and the number of arms. Then, use a sample size calculator to estimate the sample size for each arm. For example, if the desired power is 0.80, the effect size is 0.15, and the number of arms is three, then the sample size for each arm would be approximately 220.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
Randomized clinical trials have become universally accepted as the standard procedure for comparing treatments. However, there is no guarantee that the subjects allocated to the different treatment groups will be similar in all important characteristics. Can anyone explain the main cause when we get imbalanced characteristics between the treatment groups? Is it a mistake in the randomization step? And what can we do to make them balanced?
Relevant answer
Answer
Unfortunately, a random selection of patients can be associated with a difference between the groups just by chance (and the more characteristics you describe, the greater the risk of finding a difference at a type 1 error rate of 0.05). Just imagine flipping a coin 5 times in a row: you could, just by chance, get the same result every time, and it would not mean that the coin is biased (if you repeated the flips hundreds of times, you would approach the 0.5 probability of heads or tails). It is exactly the same for patient characteristics: just by chance, you can get an imbalanced dataset in a random selection of patients.
If it is anticipated that a specific risk factor or characteristic will be critical in the subsequent analysis of your trial (e.g., tumor stage), you can block patients by this characteristic at inclusion. This improves the repartition of patients with respect to the blocking factor and helps ensure an adequate sample size.
The last option is to control for this factor in your analysis a posteriori. However, you could then be at risk of lacking power in your trial (especially because, most of the time, the RCT sample size is determined a priori without accounting for a specific imbalance on a specific characteristic).
see the paper below for more information on blocking (Efird J. Blocked randomization with randomly selected block sizes. Int J Environ Res Public Health. 2011 Jan;8(1):15-20. doi: 10.3390/ijerph8010015. Epub 2010 Dec 23. PMID: 21318011; PMCID: PMC3037057.)
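Blocked randomization with randomly selected block sizes, as in the Efird paper cited above, can be sketched as follows (arm labels and block sizes are illustrative):

```python
import random

def blocked_allocation(n, block_sizes=(2, 4), seed=42):
    """Permuted-block randomization with randomly chosen block sizes
    (cf. Efird 2011), allocating participants to arms 'A' and 'B'."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        size = rng.choice(block_sizes)       # pick a block size at random
        block = ["A", "B"] * (size // 2)     # equal allocation within the block
        rng.shuffle(block)                   # random order within the block
        allocation.extend(block)
    return allocation[:n]                    # truncate the final partial block

alloc = blocked_allocation(24)
# Group sizes can differ by at most half of the final (truncated) block.
print(alloc.count("A"), alloc.count("B"))
```

Random block sizes make the next assignment harder to predict than fixed-size blocks, which helps preserve allocation concealment.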
  • asked a question related to Randomized Clinical Trials
Question
4 answers
I have three reports that I want to include in my systematic review. These all present qualitative findings from the original RCT, and they include qualitative data from the intervention group only.
Relevant answer
Answer
One option that the Cochrane group supports using is the CASP Qualitative Research checklist: https://casp-uk.net/images/checklist/documents/CASP-Qualitative-Studies-Checklist/CASP-Qualitative-Checklist-2018_fillable_form.pdf
Another useful resource is Section 21.8 "Assessing methodological strengths and limitations of qualitative studies" from the Cochrane group: https://training.cochrane.org/handbook/current/chapter-21
  • asked a question related to Randomized Clinical Trials
Question
5 answers
As two of the team members contributed equally to the manuscript preparation in a two-year study, I would like to assign two first authors to the manuscript, if possible.
Relevant answer
Answer
Peterson K. Ozili Arsalan Haneef Malik Medhat Elsahookie Thank you for your responses. But I found several examples (not considering systematic reviews and meta-analyses) showing that co-first authorship is possible. Thank you again for your responses, but please read Masoumeh's answer above. Best, Mahdi
*example:
  • asked a question related to Randomized Clinical Trials
Question
2 answers
My systematic review has 9 unique RCT studies and 4 additional reports which are secondary analyses of the parent RCTs. Considering that the reports are from the same parent studies, do I need to show them individually in the quality analysis table?
I chose Cochrane rob2 as my quality assessment tool of choice.
Relevant answer
Answer
Zafar Khan In general where you have 1 RCT reported in X papers you present one risk of bias assessment for the entire study, so that is an accepted approach. However, the risk of bias may vary for different outcomes reported in different papers (or indeed within the same paper) - for example, because methods of outcome assessment or follow-up rates differed. In this case, it may be an entirely appropriate thing to do a per-outcome risk of bias assessment - but that also applies when you have several outcomes reported in one paper. This is the approach recommended for assessing the risk of bias in observational studies. However, it could all get rather tedious - my suggestion is to use your judgement and do what best reflects the quality of the evidence you have and if a single risk of bias assessment reasonably reflects overall bias (and your primary outcome in particular) that is OK.
  • asked a question related to Randomized Clinical Trials
Question
4 answers
I am going to do research using the baseline data from an RCT to analyze the association between variables. What study design is suitable in this case?
How do I calculate the sample size of the study according to the suggested study design? Or should I use the same sample size as the RCT?
Relevant answer
Answer
We can't really say that there are subtypes of cross-sectional studies, but they can be used for descriptive and analytical purposes (even though the analytical purpose only serves to set up hypotheses that need further investigation).
  • asked a question related to Randomized Clinical Trials
Question
6 answers
Dear RG Family,
I am working on certain aluminide-based coating systems for anti-corrosion applications. I performed different electrochemical tests, including EIS and Tafel, after immersion in 3.5% NaCl for different intervals (from 1 hour to 100 days). For 1 hour, I am getting the normal charge-transfer resistance (Rct) value (up to a few thousand ohms) after circuit fitting. As the immersion time increases, the impedance value was expected to increase due to the formation of a sacrificial layer. However, in my case, I observed an exponential increase in Rct after 100 days of immersion. The new Rct value is in the millions, about ten thousand times that after 1 hour of immersion (the Nyquist plot is attached). Even after repeating the tests with a different specimen, I got a similar trend. A few Tafel plots also complement the EIS results, as the corrosion rate decreased significantly after immersion. As far as I understand, I am on the right track and obtaining a significant reduction in corrosion rate. My question is: am I missing something? What significance do these results really have?
Sincerely,
Relevant answer
Answer
I would first check the setup used for the measurements. With resistances of tens or hundreds of megaohms, it is hard to imagine recording a Tafel curve.
Stefan Krakowiak
  • asked a question related to Randomized Clinical Trials
Question
1 answer
I used cochrane-rob2 for the RCTs. What is the best tool for secondary data analysis of a parent RCT?
Relevant answer
Answer
Hi,
There was an attempt made in this article, which was published in German:
Use Google Translate to understand it; STROBE has most of the items on the checklist.
Swart E, Schmitt J. STandardized Reporting Of Secondary data Analyses (STROSA)—Vorschlag für ein Berichtsformat für Sekundärdatenanalysen [STandardized Reporting Of Secondary data Analyses (STROSA)—a recommendation]. Z Evid Fortbild Qual Gesundhwes. 2014;108(8-9):511-6. German. doi: 10.1016/j.zefq.2014.08.022
Swart E, Bitzer EM, Gothe H, Harling M, Hoffmann F, Horenkamp-Sonntag D, Maier B, March S, Petzold T, Röhrig R, Rommel A, Schink T, Wagner C, Wobbe S, Schmitt J. A Consensus German Reporting Standard for Secondary Data Analyses, Version 2 (STROSA-STandardisierte BerichtsROutine für SekundärdatenAnalysen). Gesundheitswesen. 2016 Sep;78(S 01):e145-e160. doi: 10.1055/s-0042-108647
  • asked a question related to Randomized Clinical Trials
Question
11 answers
There is a difference between the researchers when choosing the terms Effectiveness and Efficacy for Randomized Controlled Trials. While checking the previously published research studies the authors have used both. The existing literature states "Efficacy can be defined as the performance of an intervention under ideal and controlled circumstances, whereas effectiveness refers to its performance under ‘real-world' conditions". Therefore, which is the correct term for randomized controlled trials? Efficacy or Effectiveness. I kindly request the experts to share your expertise.
Relevant answer
Answer
We have used the word efficacy for some nutrient media that are used in the microbiology and public-health laboratory for the isolation of organisms from clinical and environmental samples.
  • asked a question related to Randomized Clinical Trials
Question
4 answers
Hello,
I'm in the process of writing a literature review on the topic of cough assessment for specific populations. The goal is to retrieve norm values for those populations before treatment, so many different study types will be included.
Because outcome values after an intervention, and the intervention itself, are not of interest to my research question, my questions are:
1. How do I assess the quality of the different study types?
2. Do I apply checklists like the Newcastle-Ottawa Scale to the entire study, even though the main objectives (the interventions or outcomes) of most of the studies are not going to be reviewed?
3. Does anyone have experience or ideas?
Many thanks,
Laura
Relevant answer
Answer
Thank you! All the answers were very helpful!
  • asked a question related to Randomized Clinical Trials
Question
7 answers
HI everyone,
I am reading a diary study for the first time and I am not sure how to appraise it. What should I look for?
The study was conducted alongside an RCT, so it included the participants of the RCT.
Should I appraise it as every other qualitative study? Looking for elements of credibility, confirmability, reflexivity etc.?
In addition, should the authors report on saturation, and whether they achieved it or not? Since the number of participants was set by the RCT, what could they have done if they had not reached saturation?
thank you
Relevant answer
Answer
Theoretical sampling is desirable but not required for grounded theory. What is required is that the analysis and the data collection occur together. So, if the authors waited until the end of the time period and then collected all the diaries before they analyzed them, that would not fit with grounded theory.
  • asked a question related to Randomized Clinical Trials
Question
3 answers
Hello, everyone. In a study with 1 independent variable with 2 levels and 2 dependent variables, is it right to use MANOVA for the analysis? If the overall test is significant, should the post hoc comparisons use multiple independent t tests? Looking forward to your reply. Thanks.
Relevant answer
Answer
Hello Bette Jingyi Wu. Are you able to tell us what the dependent variables are? Also, do you have one multivariate question, or do you have two univariate questions? Putting that another way, do you care about group differences on both DVs separately, or do you care only if the groups differ on some linear combination of the two DVs? I think that more often than not, people (mistakenly) use MANOVA when they really have 2 (or more) univariate questions.
Here are a couple of relevant articles you could take a look at.
See also the attached excerpt from Frank Harrell's regression book for another interesting possibility.
HTH.
  • asked a question related to Randomized Clinical Trials
Question
2 answers
I have calculated Cox Regression in SPSS (HR) but is there any way of calculating RR in SPSS?
Relevant answer
Bhogaraju Anand Thank you very much. It was a great help. :)
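As a supplement for readers of this thread: the relative risk (with its standard large-sample confidence interval) is simple enough to compute by hand from a 2×2 table. Below is a minimal Python sketch; the four cell counts are made up purely for illustration. (As far as I know, SPSS can also produce cohort relative risks directly via Crosstabs with the "Risk" statistic enabled.)

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk from a 2x2 table:
    a = exposed with event,   b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # 95% CI computed on the log scale (standard large-sample formula)
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Hypothetical counts: 20/100 events in the exposed, 10/100 in the unexposed
rr, ci = relative_risk(20, 80, 10, 90)
print(round(rr, 2), tuple(round(x, 2) for x in ci))  # RR = 2.0
```

If the CI excludes 1, the risk difference between the arms is nominally significant at the 5% level.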
  • asked a question related to Randomized Clinical Trials
Question
3 answers
The posttest-only control group design is a basic experimental design where participants get randomly assigned to either receive an intervention or not, and then the outcome of interest is measured only once after the intervention takes place in order to determine its effect. This design differs from the pretest-posttest randomized controlled trial by requiring no measurements before the intervention. I request the experts to share opinions on this.....
Relevant answer
Answer
Hi Ramesh
I strongly agree with Torres’s comment on your query.
The RCT research design is mostly used by pharmaceutical companies in evaluating the efficacy of newly developed therapeutic drugs.
  • asked a question related to Randomized Clinical Trials
Question
3 answers
When conducting a systematic review and meta-analysis of observational studies, we included two studies which are two secondary analyses of the same original RCT, but each reports different numbers for the same outcomes. I think that the inclusion of both enriches our analysis, as each study reports a different sample size and numbers.
I ask if this inclusion is right or we will get comments on this point by the reviewers?
Relevant answer
Answer
No, you can't add two post-hoc analyses of the same trial to the same review; this is considered duplication of data. The reason behind different numbers for the same outcomes can be different populations: for example, one post hoc investigating (drug X for hypertension) efficacy in patients with diabetes mellitus and the other in patients with chronic renal failure. There are other reasons for the different numbers, but in my perspective, pooling two post hocs from the same trial is not valid unless the outcomes are different.
  • asked a question related to Randomized Clinical Trials
Question
2 answers
We are running a systematic review of methods used to assess toxicity from glucocorticoids in inflammatory conditions (Prospero registration: CRD42022346875). We are in the process of screening and selecting papers for full-text review. As part of the quality appraisal we are thinking of using the MMAT tool, but several studies (e.g., DOI: 10.1111/jep.12884, DOI: 10.1016/j.jclinepi.2019.03.008) present some conflicting findings as to the tool's usefulness. These studies conclude that additional validation research on the MMAT is still needed. In our review we will include some mixed methods studies but mostly quantitative studies (e.g., RCTs, observational, cross-sectional, validation, etc.). Given the broad nature of the study, we think this tool seems appropriate; yet we have doubts about its validity. Any previous experience or thoughts would be much appreciated.
Relevant answer
Answer
David C. Coker I get your point. It seems your answer points toward what works best for the research topic and the results someone wants to generate. I am still inclined to use more structured methods, as these have been tested and examined in different contexts, but I think what you say is useful: not to get caught up in the checklists.
thank you.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
Dear Colleagues,
I am currently supervising a pilot RCT to examine the effectiveness and feasibility of a new neuroplastic treatment for stuttering/stammering with my postgraduate (Ms Hilary Mc Donagh).
We have run into a difficulty regarding deciding what is a suitable control for this study.
Our understanding is that placebos are potentially not recommended in a behavioural intervention because teaching a no benefit behaviour that has a no benefit consequence is possibly unethical. One suggestion would be to have a no treatment 'waiting list' period before the participants begin their 6-8 week trial, however, I am worried that participants then benefit from the Hawthorne Effect before the start of the trial which may make it more difficult to prove the efficacy of the new innovative intervention.
I suppose the question is what controls have people ever used in the past for RCT's involving treatments for stammering/stuttering.
We will welcome all advice.
Thanks Ken
Relevant answer
Answer
In my opinion, a solution could be to test the efficacy of your new therapeutic intervention against that of the standard rehabilitation program usually provided by the public/national healthcare system to stuttering/stammering patients. This would be useful even to do a cost-effectiveness analysis of your new intervention when comparing it to a proper benchmark.
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Dear all,
I am performing a two-arm parallel cluster randomised controlled trial. Each cluster (each cluster is a kindergarten) will be randomly assigned to one of two arms. Outcomes will be measured at the level of individuals (who are kindergarten employees). I want to perform a form of restricted random assignment of clusters, to ensure balance in terms of number of clusters and number of individuals in the two arms. There are about 200 clusters, and the total sample size (individuals) is approximately 1300. However, the number of individuals within each cluster varies a lot (i.e., the cluster sizes vary). The number of clusters and individuals are known prior to the randomisation.
One approach I came across goes like this:
1. Rank-order clusters in terms of the number of individuals within each cluster (i.e., the cluster sizes).
2. Create blocks of kindergartens which contain similar number of individuals (according to their rank). Block size will then be 2.
3. Within each block, randomly assign one cluster to the intervention arm (with the remaining cluster being assigned to the control arm).
See attached file for a table that exemplifies this approach. Is this an appropriate approach, or are there some other concerns I am missing? Of note, I do not want to balance the two arms in terms of some predefined covariate; just balance arms in terms of number of clusters and cluster sizes.
All comments appreciated!
Best regards,
Lasse Bang
Relevant answer
Answer
Thank you for your response Peter!
I agree - we could consider using larger blocks - I've read that smaller blocks are preferable, but I guess it depends on how well-matched the pairs are (i.e., if difficult to find comparable pairs then larger blocks could be better). I also agree that we should consider including the strata in the analysis.
Best,
-Lasse
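For anyone following this thread, the matched-pair assignment described in the question (rank clusters by size, form blocks of two adjacent-size clusters, randomise within each block) can be sketched in a few lines of Python. The kindergarten IDs and sizes below are hypothetical, and `pair_randomise` is just an illustrative name:

```python
import random

# Hypothetical cluster sizes: kindergarten ID -> number of employees
clusters = {"K01": 3, "K02": 12, "K03": 5, "K04": 11, "K05": 7, "K06": 8}

def pair_randomise(clusters, seed=None):
    rng = random.Random(seed)
    ranked = sorted(clusters, key=clusters.get, reverse=True)  # rank by cluster size
    arms = {"intervention": [], "control": []}
    # form blocks of 2 adjacent-size clusters; randomise within each block
    for i in range(0, len(ranked), 2):
        block = ranked[i:i + 2]
        rng.shuffle(block)
        arms["intervention"].append(block[0])
        if len(block) > 1:
            arms["control"].append(block[1])
    return arms

print(pair_randomise(clusters, seed=1))
```

Because each block contributes exactly one cluster to each arm, the arms end up balanced on the number of clusters and approximately balanced on total individuals; larger blocks (as discussed in the replies) would trade some of that balance for less predictability.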
  • asked a question related to Randomized Clinical Trials
Question
3 answers
Hi mates, I'm looking for a risk of bias tool that can be used in interventional studies (either RCT or non-RCT).
I'm doing a meta-analysis about exercise and inflammatory markers in the elderly. I have some RCT studies and some non-RCT; what tool do you advise?
Thanks for any help. I'm a newbie. Best regards, Luís Silva.
Relevant answer
Answer
Please check this article; it is very helpful
  • asked a question related to Randomized Clinical Trials
Question
2 answers
A patient is taken off a treatment because the outcome value of interest dropped below value B. For whatever reason the exact outcome value is missing. I need to impute it to avoid bias and to reduce my confidence intervals.
Is multiple imputation something I can use and if yes, how should I adjust it? This is obviously Missing Not At Random. If not multiple imputation, what else can I do? Is there a standard approach? Non-random attrition should be a very common thing in RCTs.
Relevant answer
Answer
I don't know much about imputation, but it seems to me that MI only gives the same result as a complete-case analysis if you impute values with the same mean as the mean of the observed values. That would probably not be the case if you impute values that are all lower than that value B. From how I understand the question, it may even be the case that all reported values are higher than that value B.
Question remains what should you impute?
Irina Titova: do you have ideas on the distribution of those missing values?
If you only know that it is lower than B, then maybe you should analyze two scenarios, one where imputed values are drawn from a distribution that centers toward B and another from a distribution that centers toward A.
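The two-scenario sensitivity analysis suggested above can be sketched in a few lines; everything numeric here (the observed outcomes, the thresholds A and B, the number of dropouts) is hypothetical, and a uniform distribution is just one simple choice of imputation distribution:

```python
import random
import statistics

# Hypothetical data: observed outcomes, plus 3 patients withdrawn when
# their value fell below B (exact values missing -> MNAR)
observed = [12.1, 13.4, 11.8, 12.9, 14.0, 13.2]
n_missing = 3
B = 10.0   # withdrawal threshold
A = 6.0    # plausible lower bound for the missing values

def scenario_mean(center, spread, seed):
    """Impute the missing values around a chosen center (one sensitivity scenario)."""
    rng = random.Random(seed)
    imputed = [rng.uniform(center - spread, center + spread) for _ in range(n_missing)]
    return statistics.mean(observed + imputed)

optimistic = scenario_mean(center=B - 0.5, spread=0.5, seed=42)   # centered just below B
pessimistic = scenario_mean(center=A + 0.5, spread=0.5, seed=42)  # centered near A
print(round(optimistic, 2), round(pessimistic, 2))
```

If the trial's conclusion holds under both the optimistic and the pessimistic scenario, the MNAR dropout is less of a worry; if the scenarios disagree, that disagreement is itself the finding to report.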
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Hi, what are the possible confounding factors for "financial incentives and smoking cessation"?
Relevant answer
Answer
Maybe the financial status/salary, etc.
  • asked a question related to Randomized Clinical Trials
Question
3 answers
Hello everyone!
We are developing a phase I randomized clinical trial in 18 healthy volunteers, aimed at testing the safety and pharmacokinetics of an i.v. drug. However, we want to test two different doses of the drug (doses A and B), and each dose is to be administered with a specific infusion rate: dose A will be administered at X ml/min, and dose B at Y ml/min.
We need to randomize the 18 volunteers with a 2:1 ratio (active drug vs placebo), in blocks of size 6. However, to maintain the blind, we would also need two different infusion rates for the placebo (X and Y).
What do you think is the best way to randomize the volunteers in this study?
One way could be to randomize the patients in a 2 x 2 factorial design: one axis to assign drug vs placebo, and the other axis to assign the drug dose with its infusion rate, maintaining a 2:1 ratio for the first axis and a 1:1 ratio for the second axis, in blocks of size 6. A second way could be to randomize "three treatments" (dose A with infusion rate X, dose B with infusion rate Y, and placebo) in a 1:1:1 ratio, in blocks of size 6, and then to randomize the patients assigned to placebo in blocks of size two (or without blocks) to infusion rate X or Y.
What do you think is the best manner to randomize in methodological terms? In the case of the first way, Do we need to test the interaction between dose and infusion rate? Do you have another idea to randomize the patients in this study?
Thank you so much for your suggestions and help.
Relevant answer
Answer
Hi,
As there are only 18 units and a single-dose study without repetition of treatments, why not go for a simple randomization procedure with a random number generator into the three groups (A, B, C)? Two of the groups receive the active drug, which gives the 2:1 ratio. Groups A, B and C could themselves be randomized.
Here is a good book on this subject.
Rosenberger, William F.; Lachin, John M. Randomization in Clinical Trials: Theory and Practice, 2nd ed. (Wiley Series in Probability and Statistics). John Wiley & Sons, 2016.
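For illustration, the second scheme described in the question (three treatment labels, 1:1:1 within permuted blocks of six, which automatically gives the 2:1 active:placebo ratio since two of the three labels are active drug) can be sketched as below. This is only a sketch; a real trial would use validated randomisation software with a concealed allocation list.

```python
import random

def block_randomise(n, seed=None):
    """Permuted blocks of 6 with a 2:1 active:placebo ratio.
    Each block holds 2 x dose A, 2 x dose B, 2 x placebo; the placebo
    assignments would later be split between infusion rates X and Y
    (e.g. in sub-blocks of 2) to keep the blind."""
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n:
        block = ["A", "A", "B", "B", "P", "P"]
        rng.shuffle(block)  # permute within the block
        schedule.extend(block)
    return schedule[:n]

schedule = block_randomise(18, seed=7)
print(schedule)
```

Because every block of six contains exactly two of each label, the 2:1 active:placebo ratio is guaranteed after every sixth enrolment.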
  • asked a question related to Randomized Clinical Trials
Question
7 answers
Dear all ,
I came across an RCT: High potency multistrain probiotic improves liver histology in non-alcoholic fatty liver disease (NAFLD): a randomised, double-blind, proof of concept study.
I searched for what POC means: "A proof of concept is meant to determine the feasibility of the idea or to verify that the idea will function as envisioned". The source is https://www.techtarget.com/searchcio/definition/proof-of-concept-POC
I wonder what it means in that RCT study?
  • asked a question related to Randomized Clinical Trials
Question
3 answers
I am writing a systematic review where I will be including RCT and also randomized trials without the control group. Which risk of bias will be best to use for both please?
Relevant answer
Answer
Gracie Pretty It is a standard to use the Version 2 of the Cochrane risk-of-bias tool for randomized trials (RoB 2). And, it is expected by PRISMA reporting also.
  • asked a question related to Randomized Clinical Trials
Question
4 answers
Hi
How can I do filtration for RCT studies in Google Scholar for a systematic review?
Relevant answer
Hi Jumanah
You should include RCT in your search terms, i.e., add RCT (with AND) to your search string, because as far as I know there is no study-type filter in Google Scholar. You can also use Advanced Search in Google Scholar.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
Hi everyone,
I'm working with RCT data: 4 treatment groups and 4 control groups. The experiment ran 30 minutes for a control group, then 30 minutes for a treatment group, then 30 minutes for a control group, and so on until there were 4 treatment and 4 control groups. All treatments were the same.
I'm now exploring the data.
Please guide me:
1) Do I explore the data for the treatment and control groups separately, or treat the whole dataset as one?
2) If I have some outliers in the treatment or control groups, should I drop them?
3) What analyses can be conducted on such a dataset, e.g., difference-in-differences, treatment effects, regression (logit/probit), etc.?
Please share if you have a book specifically on data analysis for RCTs or experiments.
Looking forward to your guidance.
Thank you
  • asked a question related to Randomized Clinical Trials
Question
2 answers
I'm performing an RCT with cancer patients, with two distinct groups: both groups have cancer patients. I have planned to carry out the intention-to-treat if a patient does not continue in the study. One of the patients left the study for palliative care, and died due to the disease. Should I carry out the intention-to-treat with this specific patient?
Relevant answer
Answer
Death is one of the most important (if not the most important) outcomes, so I believe it should be included in the Intent to Treat analysis. Unless you have a good justification (already in the research protocol) for not including it.
  • asked a question related to Randomized Clinical Trials
Question
3 answers
We know that in a cross-over trial there is a washout period, after which the new drug/intervention is given to the opposite group. Why then is it classed as an RCT? What is the purpose of randomization here?
Relevant answer
Answer
In crossover trials, the two groups are randomised to decide the treatment sequence provided to them, i.e. Intervention-washout-placebo or Placebo-washout-intervention. Randomisation will improve interindividual comparisons and improve the comparability of time-varying covariates.
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Per-protocol analysis is a method used in RCTs. Are there any disadvantages of per-protocol analysis compared with intention-to-treat analysis?
Relevant answer
Answer
Suraj Kapoor - there are major disadvantages in a per-protocol analysis vs intention to treat. In very simple terms, the two groups compared in a per-protocol analysis are not formed by randomisation alone - they are formed by random allocation to groups plus whatever process led to them not following the protocol. The potential for bias is large. In most cases, the factors that may generate bias are unmeasured and so cannot be quantified. Conceptually, intention to treat mimics the process of real-world treatment and so can estimate average treatment effects when a treatment is used. Of course, in reality the world is more complex than that, and per-protocol analysis has a role in estimating effects in treated individuals. However, I would say that the intrinsic bias means any causal inference must be weaker and, if possible, it should be secondary to an intention-to-treat analysis.
  • asked a question related to Randomized Clinical Trials
Question
3 answers
When we have 2 experimental groups (with different treatments) and 1 control group, Is that an RCT or clinical trial or a three-arm Randomized Controlled Trial?
Relevant answer
Answer
A randomized controlled trial is a trial in which participants are properly randomised into the various arms, irrespective of the number of intervention arms compared with the control group.
  • asked a question related to Randomized Clinical Trials
Question
2 answers
I am conducting a systematic review about a newly developed psychological intervention for children, but all studies available are of exploratory/feasibility nature and I am not sure how to evaluate/analyse their results.
I am wondering if anyone is aware of guidelines or articles regarding the decision process of going from a pilot/feasibility stage to full RCT?
I am sure this is a relevant question in many departments, deciding if the results from exploratory studies justify going further with expensive RCTs.
I have been looking into the Cochrane Library etc. to see how they evaluate evidence, but what I can find mostly concerns making clinical guidelines based on the evidence available (RCTs and other sources of evidence).
If anyone knows anything about this process, guidelines for going from exploratory to RCTs, I would be immensely grateful since I feel a bit stuck on this question (how to analyse the results from feasibility/exploratory intervention studies, what criteria can be used for deciding to go ahead or not with RCTs etc..)
Relevant answer
Answer
Unfortunately, there is (as is so often the case in scientific research) no pre-determined answer for your question. You will have to consider, decide and substantiate the criteria for inclusion in your study. You might find this specific part of the Cochrane handbook you already consulted specifically helpful in the process:
  • asked a question related to Randomized Clinical Trials
Question
1 answer
Dear researchers, I am planning to create an evidence map for a subject. What types of studies should be included in the evidence map (systematic review, meta-analysis, RCT, ...)? Is the data presented using only a bubble chart? Can you recommend a resource on how it is technically done?
Relevant answer
Answer
  • asked a question related to Randomized Clinical Trials
Question
3 answers
In one study, a person in each treatment group converted from MCI to dementia. The study eliminated them from the analysis, treating them as statistical outliers. I know that the two analyses for RCTs are per protocol and intention to treat. It did not use an ITT analysis. It was a pilot study with a control group. Subjects were randomized. Are there other analyses suitable for this case?
Relevant answer
Answer
They did not include all the participants from baseline.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
Power analysis in RCT studies?
Relevant answer
Answer
Onur Ertugrul this is an interesting question and my answer is no, I know of no such source. I think more important than the question itself is to consider why are you asking? If it is because you want to say it is OK for a RCT to have a power of 70% then it is for you to answer. The answer entirely depends on the context.
Leaving aside the fundamental absurdity of "null hypothesis"-based "significance testing", the issue is one of competing risks. 80% power was seen as the standard for a long time - giving you an 80% chance of finding a 'true' difference of a given size if that was indeed the case in the population. So you are implicitly accepting a 20% risk of falsely concluding no difference when there is one. You balance this with a 5% (conventional) risk of rejecting the null (no difference) when it is true. Many reasonably argue that these risks ought to be the same (so power should be 95%), but the consequences of different errors differ, so there is no absolute reason why the risks should be equal.
In my view the problem largely disappears if you think in terms of estimation and CI width - how precisely do you need to estimate a parameter (difference) in order to usefully inform decision making? At 80% power you are implicitly accepting a lot of imprecision - which might or might not be OK. At 70% you are accepting more.
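To make the estimation-and-precision point concrete, here is a small sketch of how the half-width of a 95% CI for a difference in two means shrinks with the per-group sample size (normal approximation with a known SD; SD = 10 is an arbitrary illustrative value):

```python
from statistics import NormalDist

def ci_half_width(sd, n_per_group, conf=0.95):
    """Approximate half-width of a CI for a difference of two means
    (equal group sizes, known-SD normal approximation)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)   # e.g. 1.96 for 95%
    se = sd * (2 / n_per_group) ** 0.5             # SE of the difference
    return z * se

for n in (25, 50, 100, 200):
    print(n, round(ci_half_width(sd=10, n_per_group=n), 2))
```

Quadrupling the sample size only halves the CI width, which is why "how precise is precise enough?" has to be answered before the trial, not after.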
  • asked a question related to Randomized Clinical Trials
Question
4 answers
I had read there were various ways to handle missing data and the modern imputation methods might be the best solution to handle missing data. However, following planned statistical analysis plan in our RCT protocol, Last observation carried forward (LOCF) method was chosen.
I had done complete case analysis/ per protocol analysis using repeated measure ANOVA. As for the power of the study, we had achieved the minimum sample size (since we had included attrition rate during sample size calculation).
As for the ITT analysis, I had a problem with drop-outs with missing values at some timepoints, and also drop-outs missing all measurements including baseline (no data at all). So, is it okay to use mean imputation to impute the data for those who had no data at all, combined with the LOCF method for drop-outs who have a baseline value?
Please advise, I will really appreciate it since statistics always making me confuse. Thank you.
Relevant answer
In the protocol, it said we will use LOCF for drop-outs. Unfortunately, it did not specify what to do for drop-outs with no baseline value. So, is it OK if we use mean imputation for this case?
  • asked a question related to Randomized Clinical Trials
Question
4 answers
Hi, I am doing a systematic review to examine the effect of surgical procedure A vs surgical procedure B on patients by using patient-reported outcome scores.
There is only 1 RCT, and the rest are case series for surgical procedure A or surgical procedure B with a mean of 50 patients each.
  1. Can I do a meta-analysis with these case series? Or should I just say there is not enough evidence to do it, as there is only 1 RCT?
  2. Can I pool the mean postoperative PROM and mean preoperative PROM and check for the difference in improvement for surgical procedure A vs procedure B? Or would this be a huge no-no in the world of statistics, as one is assuming the populations' baseline characteristics are the same?
  3. Is it right to say that even if I do No. 2 I cannot compare the results of A and B, as the population demographics are different?
  4. If you would not recommend doing No. 2 and No. 3, what should I do with my case series? Is there a good way of presenting the results in a graph, or can I only present them in a table?
Please understand that most case series done by surgeons are usually retrospective and done within their unit. My aim is to highlight what we know so far and the future, where I will be doing large data-registry work.
I understand I will use the GRADE approach to check for bias.
Relevant answer
Answer
Due to the nature of case series, I would not recommend any further analysis, as there is no control group or any comparison. So no, you can't do a pooled analysis on the results of these case series, as there are too many differences between these cases that might generate further bias. I would simply do a simple comparison using tables and elaborate on it in the discussion section.
  • asked a question related to Randomized Clinical Trials
Question
4 answers
I have questions about the primary and secondary outcomes if you have an experience in randomized control trial (RCT)
If an outcome variable is measured by multiple methods, do I need to designate primary and secondary outcomes because one variable is measured through different tools?
If one outcome variable is measured at two different time points, should I indicate which time point is the primary outcome?
Relevant answer
Answer
Dears
Thank you for sharing your experiences. To make it more clear, the outcome variable is "adherence". There are a lot of measurement tools for adherence, while there is no gold standard method of adherence measurement. Thus, it is recommended to use multiple methods to measure adherence to enhance the validity of the data. In this scenario, should I specify which method of measurement will be used as the primary and which as the secondary outcome? One problem with having more than one primary outcome is the need for adjusted analysis, because it increases the false-positive rate. But can one variable, 'adherence', be used as both a primary and a secondary outcome because it is measured by different methods?
  • asked a question related to Randomized Clinical Trials
Question
4 answers
I can understand the importance of having an outcome assessor independent and blinded of intervention for a Randomised Controlled Trial (RCT), but when conducting a Case Study investigating a new Clinical Health intervention is it also important to NOT allow the principal researcher to carry out baseline and post-intervention outcome scales/measures?
My understanding is that the principal objective of a case study is to establish the feasibility of a new intervention rather than its effectiveness and, more importantly, to ensure that the participant does not experience any adverse effects. Because of this, the significance of any non-feasibility scales is minimal, so would it be poor design for the principal researcher to administer the outcome scales/measures?
Thank you for your advice regarding this question!
Ken
Relevant answer
Answer
To a degree this will be determined by the tools you are using to evaluate outcome. If the tools have high inter-observer reliability then it would be ideal to have an independent observer
  • asked a question related to Randomized Clinical Trials
Question
8 answers
I am planning a cross-over design study (RCT) on effect of a certain supplement/medicine on post-exercise muscle pain. There hasn't been any similar study to recent date on the effect of this medicine (or similar medicines) on post-exercise muscle pain. However, some studies have been conducted for effect of this medicine on certain conditions such as hypertension.
As far as I have searched, formulas for estimating sample size need information (such as standard deviation, mean, effect size, etc.) from similar studies conducted before.
Is there any way to estimate a sample size for my RCT under the aforementioned conditions?
Relevant answer
Answer
The calculation of the sample size depends on the variance in the results. Software such as G*Power can help you calculate the sample size based on the mean difference and the variance between the groups.
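The standard normal-approximation formula behind such calculations needs only the smallest difference worth detecting (delta) and an assumed SD, which in the questioner's situation could be borrowed from the related studies on other conditions (e.g. hypertension). A minimal sketch; delta = 2 and SD = 3 are made-up pain-score values:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided, two-sample comparison of means
    (normal approximation): n = 2 * (z_{1-a/2} + z_{1-b})^2 * sd^2 / delta^2."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2
    return math.ceil(n)

# e.g. detect a 2-point difference in a post-exercise pain score, assuming SD = 3
print(n_per_group(delta=2, sd=3))
```

Note this is the parallel-group formula; a cross-over design, with each participant as their own control, generally needs fewer participants because the within-subject variance replaces the between-subject variance.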
  • asked a question related to Randomized Clinical Trials
Question
3 answers
Suppose an RCT with 30 participants in each arm. Group A receives CBT sessions; Group B, as a waitlist control, does not. Both groups complete a depression questionnaire at baseline, and after Group A have had their CBT, both groups complete the questionnaire again. The change in depression scores is the outcome of interest. However, let's say 5 individuals from Group A drop out and so do not have the second time-point data, and likewise 2 individuals from Group B drop out. If the study wants to carry out an intention-to-treat analysis, how does this work? Is missing data computed for the dropped-out participants (by some method)? Are group means of the depression scores just calculated as normal and the difference examined, despite the group size discrepancy at the end time-point? Or are the participants that dropped out excluded entirely from calculations?
I have gotten very confused! Many thanks for any help!
Relevant answer
Answer
ITT is by default the preferred "gold standard" analysis in most superiority intervention trials that assess efficacy, as it yields conservative estimates. The dictum is "once randomized, always analyzed". But in certain situations, you may want to additionally present the per-protocol analysis. Whatever analysis you present, it would be prudent to specify it a priori while registering the protocol, to dispel concerns about HARKing.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
If we want to publish an RCT protocol, which journals are recommended? And are there any indexed journals publishing protocols for free?
Relevant answer
Answer
All trials need to be registered according to the ICMJE recommendations. The trial can be registered at multiple sites, including https://clinicaltrials.gov
Many journals have started to publish protocols along with the publication of the trial. You can check out the following article for the same.
Spence O, Hong K, Onwuchekwa Uba R, Doshi P. Availability of study protocols for randomized trials published in high-impact medical journals: A cross-sectional analysis. Clin Trials. 2020;17(1):99-105. doi:10.1177/1740774519868310
Our group has published several studies based on randomized controlled trials. I have shared it with you below. Thanks
Mehta S, Wang D, Kuo CL, et al. Long-term effects of mini-screw-assisted rapid palatal expansion on airway. Angle Orthod. 2021;91(2):195-205. doi:10.2319/062520-586.1
  • asked a question related to Randomized Clinical Trials
Question
4 answers
I am designing a small RCT for pediatric depression. If participants need continued treatment after the intervention, and before 6-week follow-up measures are administered, ethically we will need to provide continued treatment and forgo follow-up data collection for that individual. Given our sample will be quite small (a total of 30 participants in the study), what would our options be in this scenario? Would imputation be a possibility if there are few such cases? Similarly, participants may drop out of our waitlist control group because they need immediate treatment. How have other researchers planned around these kind of ethical dilemmas in the grant proposal stage?
Relevant answer
Answer
Hello, If any patient needs treatment, ethically, and morally the necessary treatment should be provided. The way to handle these type of situations in a trial have to be stated in the protocol before the start of the trial. The intention-to-treat analysis can be used in which you can compute the stats even after some participants opt out of the study for certain reasons. I have attached a link below to one such paper that explains it in detail. Hope this is helpful.
Intention-to-treat analysis in clinical trials: principles and practical importance - PubMed (nih.gov)
  • asked a question related to Randomized Clinical Trials
Question
3 answers
Degenerative rotator cuff tear is a common entity in the elderly population. I would like to ask: what are the indications for operative intervention in the rotator cuff, other than a tear of the cuff with rotator cuff weakness?
Relevant answer
Answer
Hi,
I think the treatment algorithm published by Robert Z. Tashjian, MD, in “Epidemiology, Natural History, and Indications for Treatment of Rotator Cuff Tears,” Clin Sports Med 31 (2012) 589–604, is useful.
Treatment algorithm for rotator cuff disease
Group I — initial nonoperative treatment
  • Tendonitis
  • Partial-thickness tears (except maybe larger bursal-sided tears)
  • Maybe small (<1 cm) full-thickness tears
Group II — consider early surgical repair
  • All acute full-thickness tears (except maybe small [<1 cm] tears)
  • All chronic full-thickness tears in a young (<65) age group (except maybe small [<1 cm] tears)
Group III — initial nonoperative treatment
  • All chronic full-thickness tears in an older (>65 or 70) age group
  • Irreparable tears (based on tear size, retraction, muscle quality, and migration)
Also, the recommendations of the AAOS published in JBJS are very good, but we have to recognize that the more recent literature presents good results for rotator cuff repair in elderly people when the right technique for poor-quality tissue is used.
  • asked a question related to Randomized Clinical Trials
Question
4 answers
Hi all,
I wonder, from the methodological point of view, is it correct to include RCTs together with observational studies in a meta-analysis if they report the same comparison? And are there any steps to take before doing so?
Thanks
Relevant answer
Answer
Aedrian Abrilla
Thank you very much for your reply; it is very sensible. We actually have the same issue, with only one RCT while the rest are observational studies.
I have found this article, which argues that including observational studies alongside RCTs is preferable to excluding them and will decrease the risk of bias:
  • asked a question related to Randomized Clinical Trials
Question
9 answers
I am doing an RCT in New Zealand and one of the before and after measures I want is family relationships. The intervention will last 6-8 weeks, so I am looking for an instrument that can pick up changes within this timeframe. It would be good but not essential if the questionnaire also asked about the quality of life. The intervention will involve NZ adolescents and one adult family member, so having both perspectives would be ideal.
Is there a validated questionnaire that's suitable?
Relevant answer
Answer
  • asked a question related to Randomized Clinical Trials
Question
5 answers
I would be interested in your thoughts on this new clinical trial.
Thanks
Phuoc-Tan
Phase II RCT to Assess Efficacy of Intravenous Administration of Oxytocin in Patients Affected by COVID-19 (OsCOVID19)
Relevant answer
Answer
Unfortunately this was withdrawn.
Therefore, at present, as far as I know, there are no clinical trials for this possible treatment.
Please see papers on this topic below:
Oxytocin and COVID-19 papers:
Cardiovascular protective properties of oxytocin against COVID-19
Stephani C. Wanga and Yu-Feng Wang
Life Sci. 2021 Jan 26 : 119130.
doi: 10.1016/j.lfs.2021.119130
Oxytocin May be Superior to Gliptins as a Potential Treatment for Diabetic COVID-19 Patients
Phuoc-Tan Diep
SciMedicine Journal
Doi: 10.28991/SciMedJ-2020-02-SI-10
Is there an underlying link between COVID-19, ACE2, oxytocin and vitamin D?
Diep P
Medical Hypotheses (2020) 110360
Hypothesis: Oxytocin is a direct COVID-19 antiviral
Phuoc-Tan Diep, Khojasta Talash, Violet Kasabri
Medical Hypotheses 145, 110329, 2020
Oxytocin as a Potential Adjuvant against COVID-19 Infection
Pratibha Thakur, Renu Shrivastava and Vinoy Kumar Shrivastava,
Endocrine, Metabolic & Immune Disorders - Drug Targets (2020) 20: 1. https://doi.org/10.2174/1871530320666200910114259
Can intravenous oxytocin infusion counteract hyperinflammation in COVID-19 infected patients?
Benjamin Buemann, Donatella Marazziti and Kerstin Uvnäs-Moberg
Oxytocin's Anti-inflammatory and Pro-immune functions in COVID-19: A Transcriptomic Signature Based Approach
Ali S. Imami et al.
Brain oxytocin: how puzzle stones from animal studies translate into psychiatry.
Grinevich, V., Neumann, I.D. Mol Psychiatry (2020). https://doi.org/10.1038/s41380-020-0802-9
Oxytocin as a potential defence against Covid-19?
Amélie Soumier and Angela Sirigu
Oxytocin, a possible treatment for COVID-19? Everything to gain, nothing to lose
Phuoc Tan Diep, Benjamin Buemann, Kerstin Uvnäs-Moberg
  • asked a question related to Randomized Clinical Trials
Question
7 answers
What methodology do you favor in dealing with causal inference when an RCT is impossible, and why? It's a general, open discussion.
To note: I did not find a topic identical to this, but there are probably a few similar ones raised as questions with more specific context.
  • asked a question related to Randomized Clinical Trials
Question
7 answers
I have undertaken an RCT and, given multiple irregular time-point measures of the DV, have used linear mixed models to analyse the results and include moderator/predictor variables.
CONSORT is quite emphatic about the reporting of effect sizes. However, SPSS does not produce these for linear mixed models, as far as I am aware.
Also, I have not seen many papers report them either. The one that I have references this paper as to how they calculated them:
But looking at the equation, I am not sure which bits correspond to which parts of my SPSS output.
Can anyone simplify this method?
Provide me with another way of calculating effect sizes from LMM?
Or provide me with evidence and rationale as to why effect sizes are not needed when reporting LMMs?
Many many thanks!
Relevant answer
Which of the two continuous IVs is taken as a random effect? What is the group sample size, and are there nested variables? Perhaps I can then help further.
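On the original question about effect sizes for linear mixed models: one widely used approach (Feingold, 2009, Psychological Methods) forms a d-type effect size by dividing the model-implied group difference at the end of the study by the raw baseline SD. A minimal sketch, with hypothetical numbers of the kind you would read off an SPSS "Estimates of Fixed Effects" table:

```python
def lmm_effect_size_d(time_by_group_coef, study_duration, raw_baseline_sd):
    """d-type effect size for a growth-model RCT (Feingold, 2009):
    the model-implied group difference at end of study, divided by the
    pooled baseline SD of the raw scores (not a model-based SD)."""
    return (time_by_group_coef * study_duration) / raw_baseline_sd

# Hypothetical values: time-by-group interaction of -0.8 points/week,
# a 6-week trial, and a pooled baseline SD of 12 points.
d = lmm_effect_size_d(-0.8, 6, 12.0)
print(round(d, 2))  # -0.4
```

The time-by-group coefficient is the interaction term in the fixed-effects table; the baseline SD comes from the descriptives of the raw scores, so no extra model output is needed.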
  • asked a question related to Randomized Clinical Trials
Question
2 answers
Hello! I'm a Physical Therapy student currently working on research about stabilizing spoons and their effects on people with PD and ET. I was hoping some of you might know of any systematic review or RCT articles that I could use as references for my study?
Relevant answer
Answer
Thank you,
Aedrian Abrilla
! I did some research beforehand; however, I was trying to look for RCT or systematic review studies and couldn't send a full-text request for the article. Thank you again! The links you posted surely helped.
  • asked a question related to Randomized Clinical Trials
Question
12 answers
I've conducted an RCT in which I'm testing the effect of a group mindfulness intervention on depressive symptoms. Only one group was running at a time so there were four study waves, with each wave of participants being randomized to intervention or control. Outcomes were measured bi-weekly for 6 months. I'm testing the effect of intervention using PROC MIXED in SAS with bi-weekly assessments nested within participant identified in the repeated statement.
A reviewer has suggested that I include treatment wave as a random factor in the model. However, the interaction between treatment and study wave (as fixed effects) is not even close to significant (p = .99), suggesting that the effect of treatment is the same across waves. Is this sufficient justification to keep my analyses as they are and not include treatment wave as a random factor? Thanks!
Relevant answer
Answer
For me there are two issues here - conceptual (in relation to target of inference) and practical (what can be estimated).
In relation to the Centre's advice (which I did not write!), likelihood procedures can have real problems of estimation (achieving convergence) when the number of groups is small. Full uncertainty modelling via (say) MCMC does not usually suffer from this, but you usually end up with wide (asymmetric) credible intervals for the estimated variance, as this term cannot go negative (producing a positively skewed posterior distribution). You may then not be able to say much about any differences.
Conceptually, if your classification is 'fixed' such that the categories exhaust the possibilities (there are only two types of school, such that private and public are not a sample of all possible types of school), then I would include a dummy in the fixed part of the model to get the difference in the means. Conversely, if the categories are representative of wider entities (eg schools), then I would treat it as a random classification and as a level in a multilevel model. You are then estimating the variance summarising unexplained between-school variation in general.
Returning to the original question, I do not see the four waves as being meaningfully representative of all possible waves, so I would include as a set dummies in the fixed part. I am then trying to infer to those 4 specific waves which might have certain characteristics (eg pre and post Covid) that might affect the results for the key variable of interest. Of course, I could see some making the opposite case!
  • asked a question related to Randomized Clinical Trials
Question
8 answers
In summary, I am designing a quality improvement project to increase guideline adherence. After taking a baseline test, I will randomize the participants into control and intervention groups based on their baseline score, so that the mean scores of the groups are comparable. Then I will educate the intervention group and do a post-experiment test in the end of the study and compare the groups.
So is this a randomized clinical trial? Can it somehow be considered a controlled before-after study? And most importantly, do I absolutely need to register this study on clinicaltrials.gov for it to be publishable?
Relevant answer
Answer
I also need such information.
  • asked a question related to Randomized Clinical Trials
Question
3 answers
Many clinicians have noted nonspecific pain-modulating effects of injecting NaCl 0.9%, sterile water, or another sterile solution that is considered inactive (used in the control group).
It is hypothesized that subcutaneous and intracutaneous injections modulate pain through:
1/ Dry needling effect: the effect of the needle penetrating the skin and/or muscle tissue, such as 1A/ the bleeding effect (blood contains platelets and growth factors), 1B/ the trigger point effect when needling myofascial trigger points, 1C/ the gate control effect, 1D/ an effect on neuroinflammation and TRPV1 receptors
2/ Volume effect: expansion of the extracellular space stimulates peripheral nerve endings
3/ Placebo effect (the placebo effect of an injection is bigger than that of a pill, but the effect seems to lessen after repeated sessions)
Relevant answer
Answer
In a clinical setting it is probably mostly explained by #3, the placebo effect. A patient expects to feel better when they take a pill or are injected with something, because for their entire life before the sham treatment that has been the case. So, they get the sham treatment and feel better the first time because they have the mindset that they will. But they figure out after a few times that nothing is actually happening, which explains the tapering effect over multiple treatments (extinction).
  • asked a question related to Randomized Clinical Trials
Question
5 answers
I'm aware that a t-test could be used between means at each time point; but isn't it wrong to analyse each time point separately (increasing Type I error)?
Do I have to include time as a factor, in which case do I have to use an ANCOVA? Or a factorial ANCOVA? I'm also reading about linear mixed-effects modelling...
Any help figuring out the right test will be really appreciated! Many thanks!
Relevant answer
Answer
Hello, I think you can solve your problem by using a repeated-measures ANOVA with a within-subjects factor (time) and a between-subjects factor (group). If you have a lot of time points (for example, 10+), it would be better to use a longitudinal approach.
I advise you to use the Greenhouse-Geisser or Huynh-Feldt correction in case of sphericity violation.
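To make the Greenhouse-Geisser correction concrete: the correction factor epsilon can be computed directly from the covariance matrix of the repeated measures, and epsilon = 1 means sphericity holds while the lower bound is 1/(k-1). A self-contained sketch with illustrative numbers (a compound-symmetric matrix, for which sphericity is satisfied exactly):

```python
def greenhouse_geisser_epsilon(S):
    """Greenhouse-Geisser epsilon from the k x k covariance matrix S of
    the k repeated measures: eps = tr(S*)^2 / ((k-1) * tr(S* S*)),
    where S* is S double-centered (S* = C S C with C = I - J/k).
    eps = 1 means sphericity holds; the lower bound is 1/(k-1)."""
    k = len(S)
    row = [sum(r) / k for r in S]          # row means (= column means, S symmetric)
    grand = sum(row) / k                   # grand mean
    # double-center: S*_ij = S_ij - rowmean_i - rowmean_j + grandmean
    Sc = [[S[i][j] - row[i] - row[j] + grand for j in range(k)] for i in range(k)]
    tr = sum(Sc[i][i] for i in range(k))
    ss = sum(Sc[i][j] ** 2 for i in range(k) for j in range(k))  # tr(S* S*)
    return tr * tr / ((k - 1) * ss)

# Compound-symmetric covariance (variance 2, covariance 0.5 everywhere):
S = [[2.0, 0.5, 0.5, 0.5],
     [0.5, 2.0, 0.5, 0.5],
     [0.5, 0.5, 2.0, 0.5],
     [0.5, 0.5, 0.5, 2.0]]
print(greenhouse_geisser_epsilon(S))  # 1.0 — compound symmetry satisfies sphericity
```

The corrected test then multiplies both ANOVA degrees of freedom by epsilon, which is what SPSS does in the "Greenhouse-Geisser" row of its output.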
  • asked a question related to Randomized Clinical Trials
Question
3 answers
I'm doing a systematic review and meta-analysis of RCTs and thinking about using the GRADE approach to report the level of evidence, but I didn't use the RoB 2 tool to appraise my studies (I used the PEDro tool). Can I still use this approach, or is it mandatory to use RoB 2 with GRADE?
Relevant answer
Answer
Numerous tools exist to evaluate the quality/risk of bias in randomized trials and observational studies. The GRADE Handbook describes the key criteria used in the GRADE approach. If your tool matches these criteria, you can use it, but I would suggest going for the RoB 2 tool for RCTs.
  • asked a question related to Randomized Clinical Trials
Question
1 answer
I would be interested to receive views on the methodology used by Jager and Leek to estimate the false discovery rate in particular medical journals based on reported p-values in the absence of knowledge of the baseline prior probabilities that the study hypotheses are true.
The title of the paper is 'An estimate of the science-wise false discovery rate and application to the top medical literature'.
I am not interested in any biases related to the choice of journals or restriction to p-values in abstracts. The associated limitations are acknowledged in the paper. It is the methodology that concerns me.
Relevant answer
Answer
In clinical medicine, the science-wise false discovery rate is probably, very probably, the highest in the field of migraine/primary headache, the most extensively investigated disorder, known for over 8 millennia and currently affecting approximately one billion people.
Methodology cannot be dissected apart from hypotheses, observations, experiments in humans or animals, the scope and working of institutional ethics committees, or the concept and conduct of clinical trials (including randomized clinical trials), with the untrammelled but unethical use of the placebo both as a comparator and as a mask for clinical events. All human endeavour is subjective; no human endeavour is completely objective. The intrusion of mathematics into bioscience via biostatistics is a most unfortunate epiphenomenon that has emerged over the last 50-70 years, one that has converted clinical scientists (of all specializations) into fundamentally data scientists.
  • asked a question related to Randomized Clinical Trials
Question
5 answers
I am currently running a meta-analysis exploring effects of mindfulness interventions on creativity. I am using JASP software to run my meta-regression, which needs effect size and standard error.
I understand that the standard error is the standard deviation divided by the square root of the sample size, but which standard deviation do I use in this equation? I am looking at some data collected pre and post intervention, and also data comparing control groups to intervention groups. So, would I use the post-intervention SD when computing the standard error for pre/post RCTs, and the experimental-group SD when computing the standard error for control/experimental RCTs?
Any help on this would be great - thank you!
Relevant answer
Answer
In biomedical journals, Standard Error of Mean (SEM) and Standard Deviation (SD) are used interchangeably to express the variability. However, they measure different parameters. SEM quantifies uncertainty in estimate of the mean whereas SD indicates dispersion of the data from mean.
In other words, SD characterizes typical distance of an observation from distribution center or middle value. If observations are more disperse, then there will be more variability. Thus, a low SD signifies less variability while high SD indicates more spread out of data.
On the other hand, SEM by itself does not convey much useful information. Its main function is to help construct confidence intervals (CI). CI is the range of values that is believed to encompass the actual (“true”) population value. This true population value usually is not known, but can be estimated from an appropriately selected sample. Wider CIs indicate lesser precision, while narrower ones indicate greater precision.
In conclusion, SD quantifies the variability, whereas SEM quantifies uncertainty in estimate of the mean. As readers are generally interested in knowing the variability within sample and not proximity of mean to the population mean, data should be precisely summarized with SD and not with SEM.
In general, the use of the SEM should be limited to inferential statistics where the author explicitly wants to inform the reader about the precision of the study, and how well the sample truly represents the entire population.
Kindly refer to these citations for additional information:
1. What to use to express the variability of data: Standard deviation or standard error of mean? https://doi.org/10.4103/2229-3485.100662
2. Empowering statistical methods for cellular and molecular biologists. https://doi.org/10.1091/mbc.E15-02-0076
3. Error bars in experimental biology. https://doi.org/10.1083/jcb.200611141
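One caveat for the original question: for a between-groups effect size, the standard error is not simply SD/sqrt(n) — the standardized mean difference has its own SE formula (see Borenstein et al., Introduction to Meta-Analysis). A sketch with hypothetical group summaries:

```python
import math

def smd_and_se(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups and its standard error
    (standard formulas, e.g. Borenstein et al., Introduction to
    Meta-Analysis). sp is the pooled SD of the two groups."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, se

# Hypothetical study: intervention mean 105 (SD 10, n 30) vs control
# mean 100 (SD 10, n 30).
d, se = smd_and_se(105, 10, 30, 100, 10, 30)
print(round(d, 2), round(se, 3))  # 0.5 0.262
```

For pre/post (change-score) effect sizes, you instead need the SD of the change scores, which requires the pre-post correlation; if that correlation is not reported, a sensitivity analysis over plausible values (e.g. 0.5-0.8) is common practice.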
  • asked a question related to Randomized Clinical Trials
Question
3 answers
For a meta-analysis of interventions composed of only 1 RCT, 5 single-arm (SA) and 5 double-arm observational studies (DA), what is the best way to account for the lack of control arms amongst the SAs? More specifically, is it possible to 'donate' control arms from the RCT / DAs to allow for comparison of data within the recipient SAs?
Some literature proposes matching a control arm from an RCT with an SA (Zhang et al - doi: 10.1016/S1499-3872(12)60209-4); however, I'm unsure whether this method adjusts for the inherent differences in selection bias between the two designs. Please help?
I would be incredibly grateful for any advice, thank you!
Relevant answer
Answer
I would include the RCT and the 5 double-arm observational studies and exclude all single-arm studies.
  • asked a question related to Randomized Clinical Trials
Question
3 answers
I'm in the process of doing a meta-analysis and have encountered some problems with the RCT data. One of my outcomes is muscle strength. In one study, I have three different measurements of muscle strength for the knee joint (isometric, concentric, eccentric). I wonder how to enter these data into the meta-analysis. If I enter them separately, I artificially inflate the sample size (n). The best approach would probably be to combine them within this one study, because in the other studies included in the meta-analysis the authors report only one strength measurement.
Thank you all for any help.
Relevant answer
Answer
Yes, I recommend that you either combine the three outcomes into one measure, or choose the one outcome that most resembles the muscle strength outcome in the other studies. If you included all three measures, this study would have an artificially greater weight in the meta-analysis, and the study samples would not be independent, since the study population would be included three times.
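The combining step can be done with the composite-outcome formula of Borenstein et al. (Introduction to Meta-Analysis, chapter on multiple outcomes), which needs an assumed correlation r between the outcome measures — for three strength tests on the same knee a fairly high value is plausible. A sketch with made-up effect sizes and variances:

```python
import math

def combine_outcomes(effects, variances, r):
    """Combine m correlated within-study effect sizes into one composite
    (Borenstein et al.). The composite is the simple mean; its variance
    is (1/m^2) * sum_ij r_ij * sqrt(V_i * V_j), with r_ii = 1 and r_ij = r
    for i != j. `r` is the assumed between-outcome correlation."""
    m = len(effects)
    mean_effect = sum(effects) / m
    var = 0.0
    for i in range(m):
        for j in range(m):
            rij = 1.0 if i == j else r
            var += rij * math.sqrt(variances[i] * variances[j])
    return mean_effect, var / m**2

# Made-up numbers: three strength outcomes d = 0.40, 0.50, 0.60,
# each with variance 0.04, assumed inter-outcome correlation 0.7.
e, v = combine_outcomes([0.40, 0.50, 0.60], [0.04, 0.04, 0.04], r=0.7)
print(e, v)  # 0.5 and 0.032
```

Note the composite variance (0.032) is larger than variance/3 (0.0133), which is exactly the point: treating the three outcomes as independent would overstate the study's precision.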
  • asked a question related to Randomized Clinical Trials
Question
24 answers
Hi, I have a simple question.
I am hoping to perform a power analysis/sample size estimation for an RCT. We will be controlling for baseline symptoms and using post-treatment or change scores as our outcome variable, i.e., we will use an "ANCOVA" design, which has been shown to increase power: https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-019-3671-2
Would anybody be able to point me towards the best tool for sample size estimation for such a model?
thanks!
Relevant answer
Answer
In response to --> so why adjusting?
In a true experiment with random allocation to groups (i.e., an RCT) that has both baseline and follow-up measures on the outcome variable, the principal reason for including the baseline measure as a covariate is to reduce the error term. Variability in the follow-up measure (i.e., the DV) that is accounted for by the linear relationship between baseline and follow-up scores is partialled out of the error term. The cost is 1 df. But that cost is usually more than made up for by the reduction in SSerror.
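A simple way to size the ANCOVA design is to compute the usual two-sample size and multiply by (1 - rho^2), where rho is the baseline-outcome correlation (Borm, Fransen & Lemmens, 2007). A stdlib-only sketch; it ignores the small "+1 per group" correction Borm et al. recommend, and the inputs (d = 0.5, rho = 0.5) are purely illustrative:

```python
import math
from statistics import NormalDist

def n_per_group_ancova(d, rho, alpha=0.05, power=0.80):
    """Per-group n for a two-arm trial with baseline adjustment.
    Standard two-sample normal-approximation formula, then scaled by
    the ANCOVA design factor (1 - rho^2) of Borm et al. (2007).
    d = standardized effect size, rho = baseline-outcome correlation."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    zb = z.inv_cdf(power)           # e.g. 0.84 for 80% power
    n_anova = 2 * (za + zb) ** 2 / d ** 2
    return math.ceil(n_anova), math.ceil(n_anova * (1 - rho ** 2))

n_plain, n_adj = n_per_group_ancova(d=0.5, rho=0.5)
print(n_plain, n_adj)  # 63 48 — the covariate cuts the requirement by ~25%
```

The same adjustment can be applied on top of any standard tool (G*Power, PASS, etc.): compute the unadjusted two-group n, then multiply by (1 - rho^2).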
  • asked a question related to Randomized Clinical Trials
Question
3 answers
Hi!
I am currently conducting a systematic review on interventions, but in my inclusion criteria for study designs I also included single-group pre-post studies under quasi-experimental designs.
I conducted a meta-analysis on the studies that used an RCT or CCT design. However, is it possible to conduct a separate meta-analysis on the single-group pre-post studies and pool the effect sizes together? Or should I report the individual studies and their respective effect sizes?
Any help would be much appreciated!
Relevant answer
Answer
It is fine to qualitatively (and critically) discuss the most relevant results of pre-post studies eligible for inclusion in your review. However, as a reviewer, I would focus my attention on controlled studies of the effects of the intervention, in particular on high-quality randomized trials, if any. Always keep in mind the so-called "evidence pyramid" and the scope of your research work.
Feel free to take a look at our brief guide to help healthcare workers understand how literature reviews are structured and how their results should be interpreted:
  • asked a question related to Randomized Clinical Trials
Question
4 answers
Are the most recent papers on randomized clinical trials needed to conduct a legitimate meta-analysis, or will papers from decades back still provide optimum results?
Relevant answer
Answer
Thank you
Aedrian Abrilla
and Nicolò Zarotti for your valuable time in responding to the question asked.
  • asked a question related to Randomized Clinical Trials
Question
5 answers
For my undergraduate dissertation, I am comparing 2 different interventions that have not been compared directly in head-to-head randomised trials. For this reason I am looking into possibly doing a network meta-analysis, with the 2 interventions being compared to a common control in the form of a wait-list or treatment as usual.
I've currently found just 5 RCTs (3 vs 2), so I am wondering whether this type of analysis is still appropriate?
Relevant answer
Answer
You should try
  • asked a question related to Randomized Clinical Trials
Question
3 answers
The RoB 2 tool assesses the risk of bias in randomized clinical trials, and the ROBINS-I tool in non-randomized studies of interventions, such as cohort studies, case-control studies, and non-randomized clinical trials. But my question is: if there are clinical trials and cohort studies that do not have a control group, can the ROBINS-I tool be applied? Or is there a more suitable tool?
Thank you.
  • asked a question related to Randomized Clinical Trials
Question
4 answers
Greetings;
I've recently published a preprint for a COVID treatment RCT, which appeared under the "Dental" specialty instead of "Infectious diseases (ID)". This is an issue, as it doesn't appear in searches for ID. Is there any way to modify this categorization?
Relevant answer
Answer
Learn from this situation: next time, check the keywords of your research before it is sent for publication.
  • asked a question related to Randomized Clinical Trials
Question
8 answers
An RCT study checking the pre-intervention and post-intervention effects on motor capacity and motor performance in the same population or group: which statistical test best analyzes this research?
Relevant answer
Answer
Pre-test/post-test research is one of many forms of quasi-experimental design. The appropriate choice of statistical test depends on the design (field of study, randomized trial vs. survey) and the type of response variable.
1) You may dichotomize the response (DV) and use logistic regression. 2) You may use the difference (post − pre) in a regression approach.
And, 3) you can compare the means: for normally distributed variables, a paired t-test would be appropriate; for non-normally distributed paired data, the Wilcoxon signed-rank test would be appropriate (the Kruskal-Wallis test is for three or more independent groups). Continuous data are often summarised by their mean and standard deviation (SD), and the paired t-test compares the means of the two samples of related data, the pre and post time points.
4) A one-way ANCOVA would be best if you take intervention type as the factor (between-subjects variable) and the post-intervention scores as the dependent variable; pre-intervention scores make a good covariate.
5) If you treat it as a repeated measure, use repeated-measures ANOVA if the assumptions are met.
The specific design, research question, number of participants, and type of response variable together determine the analysis (given that the assumptions are met) and the statistical test to be used.
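As an illustration of the paired t-test option, the t statistic can be computed by hand from the pre/post differences; the p-value then comes from the t distribution with n - 1 df (e.g. via scipy.stats, which the stdlib does not provide). The data below are invented:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post scores on the same subjects:
    t = mean(diff) / (sd(diff) / sqrt(n)), with df = n - 1."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))
    return t, n - 1

# Invented motor-performance scores for 5 subjects:
pre = [10, 12, 11, 14, 13]
post = [12, 14, 12, 16, 14]
t, df = paired_t(pre, post)
print(round(t, 2), df)  # 6.53 4
```

Look up the two-sided p-value for t = 6.53 on 4 df; here it is well below 0.05, so the pre-to-post change would be significant for this toy data.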
  • asked a question related to Randomized Clinical Trials
Question
3 answers
I am doing a crossover RCT.
I used a between-subjects repeated-measures ANOVA to assess the carryover effect. Is this OK?
In this case, should I consider a p-value < 0.10 as a sign of a carryover effect?
Relevant answer
Answer
If the treatment order is randomized among subjects, then a carryover effect can be controlled for.
The statistical test you use doesn't "assess" the carryover effect. A repeated-measures ANOVA with a randomized treatment order controls for it.
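For reference, the classical pre-test for carryover in a 2x2 crossover (Grizzle, 1965) compares the subjects' period totals (period 1 + period 2) between the two sequence groups with a two-sample t test; because this test has low power, it is traditionally run at alpha = 0.10, which is where the p < 0.10 convention comes from. A sketch with invented data:

```python
import math
from statistics import mean, stdev

def carryover_t(seq_ab, seq_ba):
    """Grizzle-type carryover test for a 2x2 crossover. Each entry is a
    subject's (period1, period2) outcome pair; the subject totals are
    compared between the AB and BA sequence groups with a pooled
    two-sample t test. Returns (t statistic, df)."""
    tot1 = [p1 + p2 for p1, p2 in seq_ab]
    tot2 = [p1 + p2 for p1, p2 in seq_ba]
    n1, n2 = len(tot1), len(tot2)
    sp2 = ((n1 - 1) * stdev(tot1) ** 2 + (n2 - 1) * stdev(tot2) ** 2) / (n1 + n2 - 2)
    t = (mean(tot1) - mean(tot2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Invented data: 3 subjects per sequence, (period1, period2) outcomes.
t, df = carryover_t([(10, 12), (11, 13), (9, 11)],
                    [(10, 11), (12, 13), (11, 12)])
print(round(t, 3), df)  # -0.612 4
```

Note that this two-stage procedure has well-known problems (Freeman, 1989), which is why many statisticians prefer to rely on an adequate washout period rather than a preliminary carryover test.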
  • asked a question related to Randomized Clinical Trials
Question
3 answers
If the interventions are compared pre and post in RCTs, can an NMA be done?
Relevant answer
Answer
Here is a reference for different software tools for NMA:
  • asked a question related to Randomized Clinical Trials
Question
21 answers
I have an RCT with three primary measures: M01, M02, M03, and each of these measures are presented randomly, but intact, to the participants at baseline, 30-days, and 60-days.
Let's say I get really good results from RCT01, and I get funded for RCT02, and RCT02 is just like RCT01, except I add two new measures.
RCT01: M01, M02, M03
RCT02: M01, M02, M03, M04, M05
If everything stays the same from RCT01 to RCT02 (except for the added measures), can I combine the data for M01, M02, and M03 in the analysis to potentially increase the power of my results?
If the answer is yes, is there a good citation that supports this?
Relevant answer
Answer
I will offer one last piece of advice. The lead author of that paper lists his email at the bottom of col. 3, p.1. Why not get his opinion? Be sure that you let Prof Daniel Wright and me know his opinion so that we can adjust our research design thinking to the best available options. Both of us can be contacted here at Researchgate. You would earn our undying gratitude by helping us in our professional endeavors. I look forward to hearing from you. Best wishes for a successful second award, David Booth
  • asked a question related to Randomized Clinical Trials
Question
6 answers
1. What type of literature should I read for writing a successful concept note?
2. Is there any literature which will help me decide a research question/objective for designing a study will be appropriate for the evaluation of community-based primary prevention models for NCDs?
3. For convincing the donor, as a student researcher, is it better that I narrow down to a specific prevention model, such as mass health promotion to reduce an NCD, e.g. hypertension?
a) Do you think a baseline survey of knowledge regarding prevention vs. an end-line survey would be a strong method of evaluation?
b) If I try to do an RCT, which literature will help me find recent or interesting community-based primary prevention interventions?
(This is for an assignment)
Thank you
Relevant answer
Answer
OK great. Here are two our books that have case studies on NCDs
good luck
Mohamed
  • asked a question related to Randomized Clinical Trials
Question
4 answers
We are testing whether screening for and management of depression during pregnancy would improve the birth outcome. We will randomize the clusters (health facilities), but at baseline we will only screen for depression in the intervention group, assuming a similar prevalence in the control group. However, we will measure the birth outcome in both groups in order to make the comparison. Would this be considered a cluster RCT? Also, any further suggestions for improvement?
Relevant answer
Answer
Is everyone in the cluster (health facility) going to get the same treatment, and is the determination of which health facility gets the treatment random? If so, that is a cluster RCT. You need as many clusters as possible.
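When planning a cluster RCT like this, the sample size you would need under individual randomization must be inflated by the design effect DE = 1 + (m - 1) * ICC, where m is the average cluster size and ICC the intracluster correlation. A sketch with hypothetical inputs (400 mothers needed under individual randomization, about 21 per facility, ICC = 0.05):

```python
import math

def cluster_sample_size(n_individual, m, icc):
    """Inflate an individually-randomized sample size for cluster
    randomization. DE = 1 + (m - 1) * ICC is the design effect for
    average cluster size m and intracluster correlation ICC; the
    cluster-design total is n_individual * DE."""
    de = 1 + (m - 1) * icc
    return de, math.ceil(n_individual * de)

de, n_total = cluster_sample_size(400, m=21, icc=0.05)
print(de, n_total)  # design effect 2.0 -> 800 mothers needed
```

Even a seemingly small ICC doubles the requirement here, which is why the advice above — recruit as many clusters as possible (smaller m, more facilities) — directly reduces the design effect.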
  • asked a question related to Randomized Clinical Trials
Question
9 answers
I am in an evidence-based practice class working on my PICOT question. My concern is that I don't know what my timeline should be: 6 weeks? 12 weeks? I am looking for RCT articles on this!
Relevant answer
Answer
In addition to what I have already said, physical exercise also releases serotonin, which is also very helpful against depression.
  • asked a question related to Randomized Clinical Trials
Question
4 answers
Hello everyone!
My question is pretty basic, but as this is not really my field (only did a bit of statistics during my first year at uni), I don't know the answer to it.
I would like to find out whether an intervention was effective in improving maternal sensitivity. I have two groups, control and intervention, and two measures (pre-intervention and post-intervention) for each. I understood I have to do a t-test. My question is: between which of the 4 is the t-test done? Is it between the pre- and post-intervention measures? Is it between the post-test control group and the post-test intervention group? Is it both? Thank you all so much!
Relevant answer
Answer
You should read a standard text book on RCTs. If you are just looking at differences you could simply subtract the post scores from the pre scores and use an independent t test (but first check the assumptions). If you are looking at the relationship between the pre and post scores you may need to use an ANCOVA or a multiple linear regression or a mixed factor ANOVA.
  • asked a question related to Randomized Clinical Trials
Question
10 answers
I currently have a number of Postgraduates working on systematic literature reviews and we have reached the 'Search Strategy' stage. Our reviews focus on stroke rehabilitation, treatment of stuttering/stammering in adults and lymphoedema post breast cancer. We only have access to the free database Pubmed, and to Science Direct in our academic institution as we do not have a School of Medicine.
Before we go looking for collaborators with access to CINAHL, EMBASE, OVID MEDLINE, etc. (and I notice that most published systematic literature reviews search 3-4 different databases), when can I be confident that I have searched sufficiently to not have missed any important RCTs?
Thanks Ken
Relevant answer
Answer
I recommend following Cochrane's guidelines (https://training.cochrane.org/handbook/current). You cannot go wrong following them. You have to use MEDLINE, CENTRAL, and Embase. Read Chapter 4 in the handbook for additional recommendations more specific to your research question.
  • asked a question related to Randomized Clinical Trials
Question
3 answers
I am writing a synopsis for my dissertation on a topic in which I am comparing 2 endodontic sealers clinically (RCT). There are no clinical studies comparing them in the literature so far. I am having difficulty calculating the sample size and need guidance.
Relevant answer
Answer
I have a primary outcome measured on a reliable scale with intra- and inter-observer reliability, but I am unable to find a suitable sample size to defend my dissertation.
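With no head-to-head studies available, a common fallback is to power the trial on success rates taken from separate single-arm reports or a pilot for each sealer. A sketch using the standard two-proportion formula; the success rates (90% vs 70%) are purely hypothetical placeholders:

```python
import math
from statistics import NormalDist

def n_per_group_props(p1, p2, alpha=0.05, power=0.80):
    """Per-group n to compare two proportions (normal approximation):
    n = (z_{1-a/2} + z_power)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2.
    With no prior comparative data, p1 and p2 can be taken from
    single-arm reports or a pilot study."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    n = (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(n)

print(n_per_group_props(0.90, 0.70))  # 59 per group for these assumed rates
```

Since the assumed rates drive everything, it is worth tabulating n over a range of plausible p1/p2 values and justifying the chosen pair from the single-arm literature in the synopsis.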
  • asked a question related to Randomized Clinical Trials
Question
7 answers
The Newcastle-Ottawa scale is good for cohort and case-control observational studies, but I am doing a meta-analysis with both randomized and non-randomized clinical trials. Some of the non-randomized trials have single arms (without a comparison group), and I don't think the Newcastle-Ottawa scale can be used here.
Relevant answer
Answer
I think the best option in this case is the Cochrane Collaboration tool for assessing risk of bias for RCTs, and the Risk Of Bias In Non-randomized Studies - of Interventions (ROBINS-I) tool for non-randomized studies.
  • asked a question related to Randomized Clinical Trials
Question
5 answers
In a Factorial Design of Experiments, each factor has different levels, and one level can be considered the base level. The cases/specimens sharing this base level can be considered a control group. Also, randomization is similar to a combination of all possible levels of all factors. In this sense, RCTs and FDoE seem similar. What's your opinion?
Relevant answer
Answer
On the topic of "control groups":
Indeed, in a factorial design, each factor has a distinct control group. That's why I mentioned that you need at least one control group in an RCT. The RCT-logic still holds.
If you have a full factorial design, you could say that the group which represents the combination of the baselines of all factors, is semantically the "most authentic" control group. However, statistically, as you mention, the comparison is within the factors. At the same time, the analysis considers the interaction effects between the factors. That is the beauty of factorial designs.
For a very thorough, very well written, and highly applicable introduction into RCTs (and, as an extension, also factorial designs) I can highly recommend Gerber's and Green's book https://wwnorton.com/books/9780393979954
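To illustrate the point about within-factor comparisons and interactions: in a 2x2 full factorial, the main effects and the interaction all come from the four cell means, with the (0, 0) cell playing the role of the "most authentic" control mentioned above. A toy example with made-up cell means:

```python
def factorial_2x2_effects(y00, y10, y01, y11):
    """Main effects and interaction for a 2x2 factorial from cell means.
    y_ab = mean outcome with factor A at level a and factor B at level b;
    (a, b) = (0, 0) is the combination of both baseline ('control') levels."""
    a_main = ((y10 + y11) - (y00 + y01)) / 2   # effect of A, averaged over B
    b_main = ((y01 + y11) - (y00 + y10)) / 2   # effect of B, averaged over A
    interaction = (y11 - y01) - (y10 - y00)    # effect of A at b=1 minus at b=0
    return a_main, b_main, interaction

# Made-up cell means: control 10, A alone 13, B alone 12, A+B 19.
print(factorial_2x2_effects(10, 13, 12, 19))  # (5.0, 4.0, 4)
```

Here the interaction (4) says A's effect is larger when B is present (7 vs 3) — exactly the kind of information a plain two-arm RCT of A alone cannot provide.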