Randomized Clinical Trials - Science topic
Questions related to Randomized Clinical Trials
I want to study the effects of a pharmacological treatment (antidepressants) on quality of life in oncology patients. Apart from a depression diagnosis, which would be a prerequisite for administering the treatment, I need an additional screening tool to confirm that patients are functioning well enough to give true and valid answers later in the main tests. For this reason I am looking for a tool validated in clinical settings that can detect cognitive impairment due to a psychiatric condition or induced by a substance (e.g., high doses of morphine).
Hi Researchers
I am doing a meta-analysis for one of my systematic reviews. From your expertise, could you please advise whether we can pool standard RCTs and multi-arm RCTs together in the same meta-analysis?
Thank you
Hello,
The researchers plan to conduct a randomized controlled trial (RCT) to compare a control group with an intervention group. They have provided a minimal clinically important difference (MCID) for length of stay (LOS) of one day. However, the standard deviation is extremely high at 120, resulting in an unreasonably large sample size requirement of approximately 500,000 participants, which is not feasible for a prospective RCT.
The issue arises from the high variability in LOS. I have set the power at the conventional level of 0.8 and the significance level (α) at 0.05.
Could you please advise how to address this issue?
Thank you!
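For a two-sample comparison of means, the required n per group scales with (SD/MCID)²; with SD = 120 and an MCID of 1 day that ratio is 14,400, which is exactly what drives the ~500,000 figure. A minimal stdlib sketch of the standard normal-approximation formula (illustrative only, using the numbers from the question):

```python
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """n per group for a two-sample comparison of means
    (normal approximation)."""
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2

# MCID = 1 day, SD = 120 days: roughly 226,000 per group
print(round(n_per_group(delta=1, sd=120)))

# n scales with (sd/delta)^2, so halving the residual SD
# (e.g., by adjusting for baseline covariates) quarters n
print(round(n_per_group(delta=1, sd=60)))
```

Common remedies therefore target the (SD/MCID)² term rather than the formula: adjust for baseline covariates (ANCOVA) to reduce the residual SD, analyse log-transformed LOS if it is right-skewed, or switch to a median-based or dichotomised endpoint (e.g., LOS above some threshold).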
The article "Low Risk of Hyperprogression with First-Line Chemoimmunotherapy for Advanced Non-Small Cell Lung Cancer: Pooled Analysis of 7 Clinical Trials" pools 7 clinical trials for analysis. Does that mean the authors ran those 7 trials themselves, or is there a way to obtain trial-level data like that?
Can significant results in the primary outcome of a pilot RCT (n = 50) with a prespecified confirmatory analysis be interpreted as class 1 evidence, as they would be in a full-scale trial? In other words, can one conclude from a pilot trial that a significant effect in the primary outcome means the intervention improves the targeted outcome, or is a larger full-scale trial still necessary to confirm the hypothesis?
In an RCT, we have already determined the sample size based on one interim analysis for an adaptive design; however, we are planning to add one more interim analysis for safety. Do I need to re-estimate the sample size?
Thanks!
As sample size calculation is extremely important for an RCT, are there any recommendations for simple tools? For example, can G*Power be used for RCT calculations?
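G*Power is a reasonable free choice, and the arithmetic it performs for a two-group comparison is simple enough to sanity-check by hand. A small sketch of the inverse calculation, achieved power for a given per-group n, using a normal approximation (dedicated software uses the noncentral t distribution, so expect small differences):

```python
from statistics import NormalDist

def power_two_sample(n, d, alpha=0.05):
    """Approximate power of a two-sided two-sample test for
    standardized effect size d with n participants per group."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    ncp = d * (n / 2) ** 0.5          # noncentrality parameter
    return nd.cdf(ncp - z_alpha)

# Classic benchmark: d = 0.5 needs roughly 64 per group for 80% power
print(round(power_two_sample(64, 0.5), 3))
```

Checking a known benchmark like this is a quick way to validate whatever tool you settle on.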
I am doing a systematic review, and I am assessing risk of bias with RoB 2 for RCTs and ROBINS-I for non-RCTs. My question is: for single-arm studies, can I use ROBINS-I? I am not sure how to answer the questions in the confounding domain in this case.
Thank you!
If, due to the unavailability of papers, we consider an RCT, a case-control study, an opinion article, and grey unpublished literature all together in a systematic review, knowing that this will not give 100% quality assurance and comes with its own limitations, would you go ahead?
Or is it better to do a scoping review?
Looking forward to expert opinions
Many thanks
Punitha
In my RCT design I have two groups: a control group and an intervention group.
Each group has two time points: pre-management and post-management.
Can I use repeated-measures ANOVA?
How can I describe the results in a table?
Hi all!
I have a question & any help would be appreciated (especially Dr. Holger Steinmetz & Dr. Jan Antfolk).
Suppose we are conducting a meta-analysis and looking at two dichotomous outcomes. Outcome 1 is reported in 14 studies (all observational, no RCTs) with different measures (OR in 5/14, RR in 2/14, PR in 2/14, raw percentages in 5/14). Outcome 2 is reported in 3 studies (OR in 1/3, HR in 1/3, raw data in 1/3).
Which effect size should be used for each outcome, and how can one measure be converted into another?
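For pooling, effect sizes are usually converted to a common metric (typically the log odds ratio) before meta-analysis. A sketch of two standard conversions, computing an OR from raw 2x2 counts and moving between OR and RR given a baseline risk (the Zhang-Yu approximation); the counts below are made up for illustration:

```python
def or_from_counts(a, b, c, d):
    """Odds ratio from a 2x2 table: exposed (a events / b non-events)
    vs unexposed (c events / d non-events)."""
    return (a / b) / (c / d)

def rr_to_or(rr, p0):
    """Convert a risk ratio to an odds ratio given the baseline
    (unexposed) risk p0."""
    p1 = rr * p0
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

def or_to_rr(odds_ratio, p0):
    """Zhang & Yu (1998) approximation: RR from OR and baseline risk."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

odds = or_from_counts(20, 80, 10, 90)   # hypothetical 2x2 counts
print(odds)                              # OR = 2.25
print(or_to_rr(odds, p0=0.10))           # RR = 2.0 at 10% baseline risk
```

Note that OR-RR conversion requires an assumed baseline risk, and HRs can only be treated as approximate RRs when the outcome is rare; those assumptions should be stated in the methods.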
We intend to integrate qualitative methods in a clinical trial that evaluates the risks and benefits of a certain intervention/drug. I am just wondering if you have examples of studies exploring risks and benefits of a drug/intervention using qualitative methods.
Can someone please assist me with sample size calculation for an RCT with two groups, control and intervention? Is there a method utilizing ANCOVA? Which software is best, assuming all the assumptions required to run an ANCOVA hold?
Thank you kindly
Hashim
Dear Colleagues
Which type of regression analysis is best for testing the effect of treatment on single/multiple outcomes?
The dependent variable is continuous, such as thickness of the Achilles tendon (in mm).
The independent variable is categorical (treatment/no treatment).
Best regards
I received comments from a reviewer on my manuscript, which has an RCT design in which I compared pre and post data with a paired t-test. The reviewer commented:
Effect size indices and 95% CI could be presented.
What does this mean, and how can I present it?
Any sample article or help would be appreciated.
Thank you
This is because the sample size is small and some specific characteristics are required.
The inclusion and exclusion criteria were indicated. I acknowledged that this might limit the generalisability of the findings.
In my RCT, one of the exclusion criteria was: "Participants did not complete more than two modules." Is this related to the intention-to-treat principle, the gold standard for RCTs, or could we use it even if the study design does not adhere to intention-to-treat?
Many clinical trialists integrate qualitative and/or mixed methods research as part of their clinical trial projects. Could you please share your experiences and thoughts on the challenges of integrating these methodologies in clinical trials, and how to address them?
I have two datasets (both with experimental group and control group) which measure the same construct with two different forms due to age differences. All groups completed the measure at a pretest and post-test. The summation score of each individual at each time point was calculated.
Form A (10 items):
Exp group Pretest Post-test
Control group Pretest Post-test
Form B (6 items):
Exp group Pretest Post-test
Control group Pretest Post-test
I would like to transform the raw score into Z-score and aggregate the data from two groups, so that I can evaluate if there is any pre-post change in this construct. I wonder which mean and standard deviation to use for the calculation. Here are some of my considerations.
Option 1: Overall M and SD of both groups at pretest (T1)
The assumption is that the pretest M and SD represent the population without intervention. The post-test Z-score should reflect how much the score varies from the population mean at baseline (when Z = 0). My main concern is whether this ignored the differences between the time points where the data is collected (i.e. the M at the two time points may be different) and I can't attribute the pre-post difference to the intervention.
Option 2: Aggregating all the data and calculating a single overall M and SD
The assumption is all the data collected are from the same population/distribution, which is not true for the experimental group (as they received the intervention). However, the time-point difference seems to be considered and the M should be between T1 and T2.
Option 3: Use the overall M and SD for the pretest scores and the control group's M and SD for the post-test scores
This is a weird one, which I am not comfortable with. The overall M and SD represent the population at T1, while the control group's post-test M and SD represent the population at T2.
May I have some advice on which option would be appropriate?
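Option 1 (standardising both time points against the pooled pretest distribution) can be sketched as follows; the scores are hypothetical Form A sums, just to make the mechanics concrete:

```python
from statistics import mean, stdev

def z_scores(scores, ref_mean, ref_sd):
    """Standardize scores against a fixed reference mean/SD."""
    return [(s - ref_mean) / ref_sd for s in scores]

# Hypothetical Form A summation scores
pre_exp  = [20, 22, 25, 19, 24]
pre_ctrl = [21, 23, 20, 22, 25]
post_exp = [28, 30, 27, 26, 31]

# Option 1: reference distribution = everyone at pretest (T1)
ref = pre_exp + pre_ctrl
m1, s1 = mean(ref), stdev(ref)

# Pre-post change for the experimental group, in baseline SD units
change = [post - pre for pre, post in
          zip(z_scores(pre_exp, m1, s1), z_scores(post_exp, m1, s1))]
print(round(mean(change), 2))
```

Because both time points share the same reference distribution, the pre-post difference is expressed in baseline SD units; as the question notes, attributing that change to the intervention still requires comparing it against the control group's change over the same period.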
I have performed a double-blind, placebo-controlled, randomized clinical trial with N=63 (32 in the placebo group and 31 in the intervention group). After submitting my manuscript, I received major revisions. In one of the comments, the reviewer asked me to justify the sample size. The reviewer's comment was: "Sample size justification required proper drafting"
I do not know how to justify the sample size of my research
Sample size calculation
For the calculation of sample size, the significance level and statistical power were set at 5% and 80%, respectively. Following the study of Mesri Alamdari N and colleagues, we used 0.23 nmol/L as the change in mean (d) and 0.29 nmol/L as the standard deviation of MDA, the primary outcome. Based on the formula, we needed a minimum of 25 participants per group. Allowing for a dropout rate of 20% during the clinical trial, 30 subjects per group were considered.
Thanks in advance
Kind regards
Recently, we have been planning a trial for an interesting clinical problem that should involve blinding and a placebo control. What should we pay attention to at the start of a new RCT?
Any suggestions on calculating sample size for a superiority RCT in R or SAS?
I have used the 'pwr' package, which gives me a different result from the one I got from an online calculator (riskcalc.org).
My parameters are:
control: change in mean 20 ± 5
treatment: change in mean 15 ± 5
dropout: 20%
power: 0.9
Any suggestion would be appreciated.
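With these parameters the standardized effect size is d = (20 − 15)/5 = 1.0, so discrepancies between pwr and online calculators usually come down to one- vs two-sided tests, normal vs noncentral-t formulas, or where the dropout inflation is applied. A stdlib sketch of the normal approximation for comparison:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.90):
    """n per group for standardized effect size d (normal approximation)."""
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2

d = (20 - 15) / 5                      # standardized effect size = 1.0
n = ceil(n_per_group(d))               # ~22 per group before dropout
n_enrol = ceil(n / (1 - 0.20))         # inflate enrolment for 20% dropout
print(n, n_enrol)
```

R's pwr uses the noncentral t, which typically adds a participant or two per group relative to this approximation; differences much larger than that usually mean the two tools are answering slightly different questions (sidedness, allocation ratio, or dropout handling).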
Is there free software to calculate a sample size for an RCT with more than two arms?
Could anyone provide an article discussing sample size calculation for an RCT with more than two arms?
Thank you
How do I calculate the sample size for a multi-arm (three-arm) RCT?
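For a three-arm trial, one common (and conservative) approach is to power each pairwise comparison at a Bonferroni-corrected alpha; dedicated software also offers less conservative options (e.g., Dunnett's test for comparisons against a shared control). A sketch with hypothetical delta = 5 and SD = 10:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80, n_comparisons=1):
    """n per group for a mean difference delta, splitting alpha across
    the planned pairwise comparisons (Bonferroni; conservative)."""
    z = NormalDist().inv_cdf
    a = alpha / n_comparisons
    return ceil(2 * (z(1 - a / 2) + z(power)) ** 2 * (sd / delta) ** 2)

# Two-arm reference vs. a three-arm trial with 3 pairwise comparisons
print(n_per_group(5, 10))                    # two-arm case
print(n_per_group(5, 10, n_comparisons=3))   # three-arm, Bonferroni-adjusted
```

The adjusted alpha inflates n per group by roughly a third here; whether that penalty is warranted depends on how many of the pairwise comparisons are actually confirmatory hypotheses.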
Randomized clinical trials have become universally accepted as the standard procedure for comparing treatments. However, there is no guarantee that the subjects allocated to the different treatment groups will be similar in all important characteristics. Can anyone explain what the main cause is when we get imbalance between the treatment groups? A mistake in the randomization step? And what can we do to restore balance?
I have three reports that I want to include in my systematic review. They all present qualitative findings from the original RCT, and they only include qualitative data for the intervention group.
As two of the team members contributed equally to the preparation of the manuscript during a two-year study, I would like to assign two first authors to the manuscript, if that is possible.
My systematic review has 9 unique RCT studies and 4 additional reports which are secondary analysis of the parent RCT. Considering that the reports are from the same parent studies, do I need to show them individually in the quality analysis table?
I chose Cochrane rob2 as my quality assessment tool of choice.
I am going to conduct research using the baseline data from an RCT to analyze the association between variables. What study design is suitable in this case?
How should the sample size be calculated under the suggested study design? Or should I use the same sample size as the RCT?
Dear RG Family,
I am working on certain aluminide-based coating systems for anti-corrosion applications. I performed different electrochemical tests, including EIS and Tafel, after immersion in 3.5% NaCl for different intervals (from 1 hour to 100 days). For 1 hour, I am getting a normal charge transfer resistance (Rct) value (up to a few thousand ohms) after circuit fitting. As the immersion time increases, the impedance value was expected to increase due to the formation of a sacrificial layer. However, in my case, I observed an exponential increase in Rct after 100 days of immersion. The new Rct value is in the millions, about ten thousand times that at 1 hour of immersion (the Nyquist plot is attached). Even after repeating the tests with a different specimen, I got a similar trend. A few Tafel plots also corroborate the EIS results, as the corrosion rate decreased significantly after immersion. As far as I understand, I am on the right track and obtaining a significant reduction in corrosion rate. My question is: am I missing something? What significance do these results really have?
Sincerely,

I used Cochrane RoB 2 for the RCTs. What is the best tool for a secondary data analysis of a parent RCT?
There is a difference between the researchers when choosing the terms Effectiveness and Efficacy for Randomized Controlled Trials. While checking the previously published research studies the authors have used both. The existing literature states "Efficacy can be defined as the performance of an intervention under ideal and controlled circumstances, whereas effectiveness refers to its performance under ‘real-world' conditions". Therefore, which is the correct term for randomized controlled trials? Efficacy or Effectiveness. I kindly request the experts to share your expertise.
Hello,
I'm in the process of writing a literature review on the topic of cough assessment for specific populations. The goal is to retrieve norm values for those populations before treatment, so many different study types will be included.
Because neither the outcome values after an intervention nor the intervention itself are of interest to my research question, my questions are:
1. How do I assess the quality of the different study types?
2. Do I apply checklists like the Newcastle-Ottawa Scale to the entire study, even though the main objectives (the interventions or outcomes) of most of the studies are not going to be reviewed?
3. Does anyone have experiences or ideas?
3. Does anyone have experiences or ideas?
Many thanks,
Laura
Hi everyone,
I am reading a diary study for the first time and I am not sure how to appraise it. What should I look for?
The study was conducted alongside an RCT, so it included the participants of the RCT.
Should I appraise it like every other qualitative study, looking for elements of credibility, confirmability, reflexivity, etc.?
In addition, should the authors report on saturation, and whether they achieved it or not? Since the number of participants was set by the RCT, what could they have done if they had not reached saturation?
thank you
Hello everyone. In a study with one independent variable with two levels and two dependent variables, is it right to use MANOVA for the analysis? If the overall test is significant, should post hoc comparisons use multiple independent t-tests? Looking forward to your replies. Thanks.
I have calculated Cox Regression in SPSS (HR) but is there any way of calculating RR in SPSS?
The posttest-only control group design is a basic experimental design in which participants are randomly assigned either to receive an intervention or not, and the outcome of interest is then measured only once, after the intervention, in order to determine its effect. This design differs from the pretest-posttest randomized controlled trial in requiring no measurements before the intervention. I request the experts to share their opinions on this.
When conducting a systematic review and meta-analysis of observational studies, we included two studies that are secondary analyses of the same original RCT, but each reports different numbers for the same outcomes. I think that including both enriches our analysis, as each study reports a different sample size and different numbers.
Is this inclusion correct, or will we get reviewer comments on this point?
We are running a systematic review of methods used to assess toxicity from glucocorticoids in inflammatory conditions (PROSPERO registration: CRD42022346875). We are in the process of screening and selecting papers for full-text review. For the quality appraisal we are thinking of using the MMAT, but several studies (e.g., DOI: 10.1111/jep.12884, DOI: 10.1016/j.jclinepi.2019.03.008) present some conflicting findings about the tool's usefulness, concluding that additional validation research on the MMAT is still needed. Our review will include some mixed-methods studies but mostly quantitative studies (e.g., RCTs, observational, cross-sectional, validation, etc.). Given the broad nature of the study, we think this tool is appropriate, yet we have doubts about its soundness. Any previous experience or thoughts would be much appreciated.
Dear Colleagues,
I am currently supervising a pilot RCT to examine the effectiveness and feasibility of a new neuroplastic treatment for stuttering/stammering with my postgraduate (Ms Hilary Mc Donagh).
We have run into a difficulty regarding deciding what is a suitable control for this study.
Our understanding is that placebos are potentially not recommended in a behavioural intervention, because teaching a behaviour with no benefit and no beneficial consequence is possibly unethical. One suggestion would be a no-treatment 'waiting list' period before participants begin their 6-8 week trial; however, I am worried that participants would then benefit from the Hawthorne effect before the start of the trial, which may make it more difficult to demonstrate the efficacy of the new intervention.
I suppose the question is: what controls have people used in the past for RCTs involving treatments for stammering/stuttering?
We will welcome all advice.
Thanks Ken
Dear all,
I am performing a two-arm parallel cluster randomised controlled trial. Each cluster (each cluster is a kindergarten) will be randomly assigned to one of two arms. Outcomes will be measured at the level of individuals (who are kindergarten employees). I want to perform a form of restricted random assignment of clusters, to ensure balance in terms of number of clusters and number of individuals in the two arms. There are about 200 clusters, and the total sample size (individuals) is approximately 1300. However, the number of individuals within each cluster varies a lot (i.e., the cluster sizes vary). The number of clusters and individuals are known prior to the randomisation.
One approach I came across goes like this:
1. Rank-order clusters in terms of the number of individuals within each cluster (i.e., the cluster sizes).
2. Create blocks of kindergartens that contain similar numbers of individuals (according to their rank). Block size will then be 2.
3. Within each block, randomly assign one cluster to the intervention arm (with the remaining cluster being assigned to the control arm).
See attached file for a table that exemplifies this approach. Is this an appropriate approach, or are there some other concerns I am missing? Of note, I do not want to balance the two arms in terms of some predefined covariate; just balance arms in terms of number of clusters and cluster sizes.
All comments appreciated!
Best regards,
Lasse Bang
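The pair-matching approach described above can be sketched in a few lines; the cluster sizes and seed below are hypothetical, and a real trial would pre-specify the procedure (ideally with an independent statistician running it):

```python
import random

def pair_matched_assignment(cluster_sizes, seed=2024):
    """Restricted randomisation: rank clusters by size, form consecutive
    pairs, and randomly assign one cluster of each pair to intervention
    and the other to control. (An odd leftover cluster is left unassigned
    here; with an even number of clusters this does not arise.)"""
    rng = random.Random(seed)
    ranked = sorted(cluster_sizes, key=cluster_sizes.get, reverse=True)
    arms = {}
    for i in range(0, len(ranked) - 1, 2):
        a, b = ranked[i], ranked[i + 1]
        if rng.random() < 0.5:
            a, b = b, a
        arms[a], arms[b] = "intervention", "control"
    return arms

# Hypothetical kindergartens with varying employee counts
sizes = {f"kg{i}": s for i, s in enumerate([3, 25, 7, 12, 9, 18, 5, 14])}
arms = pair_matched_assignment(sizes)
for arm in ("intervention", "control"):
    n_clusters = sum(1 for a in arms.values() if a == arm)
    n_people = sum(sizes[c] for c, a in arms.items() if a == arm)
    print(arm, n_clusters, n_people)
```

With 200 clusters this yields 100 pairs, guaranteeing a 100/100 cluster split, and because paired clusters have similar sizes, the number of individuals per arm is also approximately balanced.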
Hi all, I'm looking for a risk of bias tool that can be used in interventional studies (either RCT or non-RCT).
I'm doing a meta-analysis on exercise and inflammatory markers in the elderly. I have some RCT and some non-RCT studies; what tool do you advise?
Thanks for any help; I'm a newbie. Best regards, Luís Silva.
A patient is taken off a treatment because the outcome value of interest dropped below value B. For whatever reason the exact outcome value is missing. I need to impute it to avoid bias and to reduce my confidence intervals.
Is multiple imputation something I can use and if yes, how should I adjust it? This is obviously Missing Not At Random. If not multiple imputation, what else can I do? Is there a standard approach? Non–random attrition should be a very common thing in RCTs.
Hi, what are the possible confounding factors for "financial incentives and smoking cessation"?
Hello everyone!
We are developing a phase I randomized clinical trial in 18 healthy volunteers, aimed at testing the safety and pharmacokinetics of an i.v. drug. However, we want to test two different doses of the drug (doses A and B), and each dose is to be administered at a specific infusion rate: dose A will be administered at X ml/min, and dose B at Y ml/min.
We need to randomize the 18 patients with a 2:1 ratio (active drug vs placebo), in blocks of size 6. However, to maintain the blind, we also would need two different infusion rates for the placebo (X and Y).
What do you think is the best way to randomize the volunteers in this study?
One way could be to randomize the patients in a 2 x 2 factorial design: one axis to assign drug vs placebo, and the other axis to assign the drug dose with its infusion rate, maintaining a 2:1 ratio for the first axis and a 1:1 ratio for the second axis, in blocks of size 6. A second way could be to randomize "three treatments" (dose A with infusion rate X, dose B with infusion rate Y, and placebo) in a 1:1:1 ratio, in blocks of size 6, and then to randomize the patients assigned to placebo, in blocks of size two (or without blocks), to infusion rate X or Y.
What do you think is the best manner to randomize in methodological terms? In the case of the first way, Do we need to test the interaction between dose and infusion rate? Do you have another idea to randomize the patients in this study?
Thank you so much for your suggestions and help.
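The "second way" described above (three treatments 1:1:1 in blocks of 6, which preserves the overall 2:1 active:placebo ratio since doses A and B are both active, followed by a sub-randomisation of placebo to infusion rate) can be sketched with permuted blocks; the arm labels and seeds are illustrative, and a production trial would use a validated randomisation system with concealed allocation:

```python
import random

def block_randomise(n, arms, block, seed=42):
    """Permuted-block randomisation: each block contains every arm an
    equal number of times, in random order."""
    rng = random.Random(seed)
    per_block = block // len(arms)
    sequence = []
    while len(sequence) < n:
        blk = arms * per_block
        rng.shuffle(blk)
        sequence.extend(blk)
    return sequence[:n]

# Three treatments 1:1:1 in blocks of 6 -> 12 active vs 6 placebo (2:1)
seq = block_randomise(18, ["A", "B", "placebo"], block=6)

# Placebo participants then get infusion rate X or Y in sub-blocks of 2
rates = block_randomise(seq.count("placebo"), ["X", "Y"], block=2, seed=7)
print(seq)
print(rates)
```

This keeps every block of 6 perfectly balanced across the three arms, and the placebo sub-randomisation guarantees 3 placebo infusions at each rate, which supports the blinding.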
Dear all ,
I came across an RCT: High potency multistrain probiotic improves liver histology in non-alcoholic fatty liver disease (NAFLD): a randomised, double-blind, proof of concept study.
I looked up what POC means: "A proof of concept is meant to determine the feasibility of the idea or to verify that the idea will function as envisioned" (source: https://www.techtarget.com/searchcio/definition/proof-of-concept-POC).
I wonder what it means in that RCT study?
I am writing a systematic review in which I will be including RCTs and also randomized trials without a control group. Which risk of bias tool would be best to use for both, please?
Hi
How can I filter for RCT studies in Google Scholar for a systematic review?
Hi everyone,
I'm working with RCT data: 4 treatment groups and 4 control groups. The experiment ran for 30 minutes with a control group, then 30 minutes with a treatment group, then 30 minutes with a control group, and so on until there were 4 treatment and 4 control groups. All treatments were the same.
I'm now exploring the data.
Please guide me:
1) Do I explore the data for treatment and control groups separately, or treat the whole dataset as one?
2) If I have some outliers in the treatment or control groups, should I drop them?
3) What analyses can be conducted on such a dataset, e.g., difference-in-differences, treatment effects, regression (logit, probit), etc.?
Please share if you have a book specifically on data analysis for RCTs or experiments.
Looking forward to your guidance.
Thank you.
I'm performing an RCT with cancer patients, with two distinct groups: both groups have cancer patients. I have planned to carry out the intention-to-treat if a patient does not continue in the study. One of the patients left the study for palliative care, and died due to the disease. Should I carry out the intention-to-treat with this specific patient?
We know that in a cross-over trial there is a washout period and the new drug/intervention is then given to the opposite group. Why is it still classified as an RCT? What is the purpose of randomization here?
Per-protocol analysis is a method used in RCTs. Are there any disadvantages of per-protocol analysis compared with intention-to-treat analysis?
When we have 2 experimental groups (with different treatments) and 1 control group, Is that an RCT or clinical trial or a three-arm Randomized Controlled Trial?
I am conducting a systematic review about a newly developed psychological intervention for children, but all studies available are of exploratory/feasibility nature and I am not sure how to evaluate/analyse their results.
I am wondering if anyone is aware of guidelines or articles regarding the decision process of going from a pilot/feasibility stage to full RCT?
I am sure this is a relevant question in many departments, deciding if the results from exploratory studies justify going further with expensive RCTs.
I have been looking into the Cochrane Library etc. to see how they evaluate evidence, but what I can find is mostly about making clinical guidelines based on the available evidence (RCTs and other sources of evidence).
If anyone knows anything about this process, guidelines for going from exploratory to RCTs, I would be immensely grateful since I feel a bit stuck on this question (how to analyse the results from feasibility/exploratory intervention studies, what criteria can be used for deciding to go ahead or not with RCTs etc..)
Dear researchers, I am planning to create an evidence map for a subject. What types of studies should be included in the evidence map (systematic reviews, meta-analyses, RCTs, etc.)? Is the map presented using only a bubble chart?
Can you recommend a resource on how it is technically done?
In one study, one person in each treatment group converted from MCI to dementia. The study eliminated them from the analysis, treating them as statistical outliers. I know that the two standard analyses for RCTs are per protocol and intention to treat; this study did not use an ITT analysis. It was a pilot study with a control group, and subjects were randomized. Are there other analyses suitable for this case?
I had read that there are various ways to handle missing data and that modern imputation methods might be the best solution. However, following the planned statistical analysis plan in our RCT protocol, the last observation carried forward (LOCF) method was chosen.
I have done a complete-case/per-protocol analysis using repeated-measures ANOVA. As for the power of the study, we achieved the minimum sample size (since we included the attrition rate in the sample size calculation).
For the ITT analysis, I have a problem with dropouts: some have missing values at individual time points, and some are missing all measurements, including baseline (no data at all). So, is it okay to use mean imputation for those who have no data at all, combined with the LOCF method for dropouts who have a baseline value?
Please advise; I would really appreciate it, since statistics always confuses me. Thank you.
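LOCF itself is mechanically simple, as the sketch below shows with hypothetical scores; note that it cannot do anything for a participant with no observations at all, which is exactly why the question of a fallback arises (and why mixed models or multiple imputation are generally preferred to both LOCF and mean imputation):

```python
def locf(rows):
    """Last observation carried forward: fill each missing (None) value
    with the participant's most recent observed value."""
    filled = []
    for row in rows:
        last, out = None, []
        for v in row:
            if v is not None:
                last = v
            out.append(last)
        filled.append(out)
    return filled

# Hypothetical scores at baseline, week 4, week 8
data = [
    [10, 12, None],    # dropped out after week 4 -> carries 12 forward
    [9, None, None],   # only baseline observed -> carries 9 forward
    [11, 13, 15],      # complete case
]
print(locf(data))      # [[10, 12, 12], [9, 9, 9], [11, 13, 15]]
```

If the protocol locks you into LOCF, a transparent option is to report the protocol-specified LOCF analysis as primary and add a sensitivity analysis (e.g., multiple imputation or a mixed model) showing that the conclusions do not hinge on the imputation method.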
Hi, I am doing a systematic review to examine the effect of surgical procedure A vs surgical procedure B on patients, using patient-reported outcome (PROM) scores.
There is only 1 RCT; the rest are case series for surgical procedure A or surgical procedure B, with a mean of 50 patients per series.
- Can I do a meta-analysis with these case series? Or should I just say there is not enough evidence to do it, as there is only 1 RCT?
- Can I pool the mean postoperative PROM and mean preoperative PROM and check the difference in improvement for procedure A vs procedure B? Or would this be a huge no-no in the world of statistics, since it assumes the populations' baseline characteristics are the same?
- Is it right to say that even if I do no. 2, I cannot compare the results of A and B because the population demographics are different?
- If you would not recommend doing no. 2 and no. 3, what should I do with my case series? Is there a good way of presenting the results in a graph, or can I only present them in a table?
Please understand that most case series done by surgeons are usually retrospective and done within their unit. My aim is to highlight what we know so far and point to the future, where I will be doing large data-registry work.
I understand I will use the GRADE approach to check for bias.
I have questions about primary and secondary outcomes, if you have experience with randomized controlled trials (RCTs).
If an outcome variable is measured by multiple methods, do I need to designate primary and secondary outcomes because the one variable is measured with different tools?
If an outcome variable is measured at two different time points, should I indicate which time point is the primary outcome?
I can understand the importance of having an outcome assessor independent and blinded of intervention for a Randomised Controlled Trial (RCT), but when conducting a Case Study investigating a new Clinical Health intervention is it also important to NOT allow the principal researcher to carry out baseline and post-intervention outcome scales/measures?
My understanding is that the principal objective of a case study is to establish the feasibility of a new intervention rather than its effectiveness, and more importantly to ensure that the participant does not experience any adverse effects. Because of this, the significance of any non-feasibility scales is minimal, so would it be a poor design for the principal researcher to administer the outcome scales/measures?
Thank you for your advice regarding this question!
Ken
I am planning a cross-over RCT on the effect of a certain supplement/medicine on post-exercise muscle pain. To date there has not been any similar study of the effect of this medicine (or similar medicines) on post-exercise muscle pain. However, some studies have examined the effect of this medicine on other conditions, such as hypertension.
From my searches, formulas for estimating sample size need information (such as standard deviation, mean, effect size, etc.) from similar studies conducted previously.
Is there any way to estimate a sample size for my RCT under these conditions?
Suppose an RCT with 30 participants in each arm. Group A receives CBT sessions; Group B, as a waitlist control, does not. Both groups complete a depression questionnaire at baseline, and after Group A have had their CBT, both groups complete the questionnaire again. The change in depression scores is the outcome of interest. However, suppose 5 individuals from Group A drop out and so do not have second time-point data, and likewise 2 individuals from Group B drop out. If the study wants to carry out an intention-to-treat analysis, how does this work? Is missing data imputed for the dropped-out participants (by some method)? Are group means of the depression scores just calculated as normal and the difference examined, despite the group size discrepancy at the end time-point? Or are the participants that dropped out excluded entirely from the calculations?
I have gotten very confused! Many thanks for any help!
If we want to publish an RCT protocol, which journals are recommended? Are there any indexed journals that publish protocols for free?
I am designing a small RCT for pediatric depression. If participants need continued treatment after the intervention, and before 6-week follow-up measures are administered, ethically we will need to provide continued treatment and forgo follow-up data collection for that individual. Given our sample will be quite small (a total of 30 participants in the study), what would our options be in this scenario? Would imputation be a possibility if there are few such cases? Similarly, participants may drop out of our waitlist control group because they need immediate treatment. How have other researchers planned around these kind of ethical dilemmas in the grant proposal stage?
Degenerative rotator cuff tear (RCT) is a common entity in the elderly population. What are the indications for operative intervention in the rotator cuff other than a cuff tear with rotator cuff weakness?
Hi all,
From a methodological point of view, is it correct to include RCTs together with observational studies in a meta-analysis if they report the same comparison? And are there any steps to take before doing so?
Thanks
I am doing an RCT in New Zealand and one of the before and after measures I want is family relationships. The intervention will last 6-8 weeks, so I am looking for an instrument that can pick up changes within this timeframe. It would be good but not essential if the questionnaire also asked about the quality of life. The intervention will involve NZ adolescents and one adult family member, so having both perspectives would be ideal.
Is there a validated questionnaire that's suitable?
I would be interested in your thoughts on this new clinical trial.
Thanks
Phuoc-Tan
Phase II RCT to Assess Efficacy of Intravenous Administration of Oxytocin in Patients Affected by COVID-19 (OsCOVID19)
What methodology do you favor for dealing with causal inference when an RCT is impossible, and why? It's a general, open discussion.
To note: I didn't find a topic identical to this, but probably a few similar ones have been raised as questions with more specific context.
I have undertaken an RCT and, given multiple irregular time-point measures of the DV, have used linear mixed models to analyse the results and include moderator/predictor variables.
CONSORT is quite emphatic about the reporting of effect sizes. However, SPSS does not produce these for linear mixed models, as far as I am aware.
Also, I have not seen many papers report them either. The one that I have references this paper as to how they calculated them:
But looking at the equation, I am not sure which bits correspond to which parts of my SPSS output.
Can anyone simplify this method?
Provide me with another way of calculating effect sizes from LMM?
Or provide me with evidence and rationale as to why effect sizes are not needed when reporting LMMs?
Many many thanks!
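SPSS doesn't expose this directly, but if you can read the variance components off your SPSS output (the random-intercept variance and the residual variance), one widely used approach is to divide the treatment fixed effect by the square root of the summed variance components to get a d-type standardized effect. Below is a minimal Python sketch of that calculation on simulated data, assuming a random-intercept model; the variable names (`dv`, `group`, `time`) are illustrative, not from your study.

```python
# Sketch: d-type effect size from a random-intercept linear mixed model.
# d = (treatment fixed effect) / sqrt(random-intercept var + residual var)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_sub, n_time = 60, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_sub), n_time),
    "time": np.tile(np.arange(n_time), n_sub),
    "group": np.repeat(rng.integers(0, 2, n_sub), n_time),
})
subject_effect = np.repeat(rng.normal(0, 1, n_sub), n_time)
df["dv"] = 0.6 * df["group"] + subject_effect + rng.normal(0, 1, len(df))

fit = smf.mixedlm("dv ~ group + time", df, groups=df["id"]).fit()
total_var = float(fit.cov_re.iloc[0, 0]) + fit.scale  # intercept var + residual var
d = fit.fe_params["group"] / np.sqrt(total_var)
print(f"standardized effect d = {d:.2f}")
```

The same arithmetic works by hand from SPSS's "Estimates of Covariance Parameters" table, so you don't need to re-fit the model in another package.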
Hello! I'm a Physical Therapy student currently working on research about stabilizing spoons and their effects on people with PD and ET. I was hoping some of you may know of systematic review or RCT articles that I could use as references for my study?
I've conducted an RCT in which I'm testing the effect of a group mindfulness intervention on depressive symptoms. Only one group was running at a time so there were four study waves, with each wave of participants being randomized to intervention or control. Outcomes were measured bi-weekly for 6 months. I'm testing the effect of intervention using PROC MIXED in SAS with bi-weekly assessments nested within participant identified in the repeated statement.
A reviewer has suggested that I include treatment wave as a random factor in the model. However, the interaction between treatment and study wave (as fixed effects) is not even close to significant (p = .99), suggesting that the effect of treatment is the same across waves. Is this sufficient justification to keep my analyses as they are and not include treatment wave as a random factor? Thanks!
In summary, I am designing a quality improvement project to increase guideline adherence. After a baseline test, I will randomize the participants into control and intervention groups based on their baseline scores, so that the mean scores of the groups are comparable. Then I will educate the intervention group, administer a post-experiment test at the end of the study, and compare the groups.
So is this a randomized clinical trial? Can it somehow be considered a controlled before-after study? And most importantly, do I absolutely need to register this study on clinicaltrials.gov for it to be publishable?
Many clinicians have noted nonspecific pain-modulating effects of injecting NaCl 0.9%, sterile water, or another sterile solution that is considered inactive (used in the control group).
It is hypothesized that subcutaneous and intracutaneous injections modulate pain through:
1/ Dry needling effect: the effect of the needle penetrating the skin and/or muscle tissue, such as: 1A/ the bleeding effect (blood contains platelets and growth factors), 1B/ the trigger point effect when needling myofascial trigger points, 1C/ the gate control effect, 1D/ effects on neuroinflammation and TRPV1 receptors
2/ Volume effect: expansion of the extracellular space stimulates peripheral nerve endings
3/ Placebo effect (the placebo effect of an injection is bigger than that of a pill, but the effect seems to lessen after repeated sessions)
I'm aware that a t-test could be used between means at each time-point, but isn't it wrong to analyse each time point separately (increasing the type 1 error)?
Do I have to include time as a factor, in which case do I have to use an ANCOVA? Or a factorial ANCOVA? I'm also reading about linear mixed effects modelling...
Any help figuring out the right test will be really appreciated! Many thanks!
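For a two-group pre/post design, one common choice that avoids per-time-point t-tests is an ANCOVA: regress the post score on group with the baseline score as a covariate, so the group difference is tested once. Here is a minimal sketch on simulated data (all numbers and names are illustrative):

```python
# Sketch of the ANCOVA option for a two-group pre/post RCT:
# post ~ group + baseline, testing the group coefficient once.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 30                                            # per group
baseline = rng.normal(20, 4, 2 * n)
group = np.repeat([0, 1], n)                      # 0 = control, 1 = treatment
post = 0.7 * baseline - 3.0 * group + rng.normal(0, 3, 2 * n)

df = pd.DataFrame({"post": post, "group": group, "baseline": baseline})
fit = smf.ols("post ~ group + baseline", data=df).fit()
print(fit.params["group"], fit.pvalues["group"])  # adjusted group effect and p
```

A linear mixed model generalizes this to more than two time points; with only pre and post, ANCOVA is usually the simpler and more powerful option.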
I'm doing a systematic review and meta-analysis of RCTs and thinking about using the GRADE approach to report the level of evidence, but I didn't use the RoB 2 tool to appraise my studies (I used the PEDro scale). Can I still use GRADE, or is RoB 2 mandatory for it?
I would be interested to receive views on the methodology used by Jager and Leek to estimate the false discovery rate in particular medical journals based on reported p-values in the absence of knowledge of the baseline prior probabilities that the study hypotheses are true.
The title of the paper is 'An estimate of the science-wise false discovery rate and application to the top medical literature'.
The paper is available at https://academic.oup.com/biostatistics/article/15/1/1/244509
I am not interested in any biases related to the choice of journals or restriction to p-values in abstracts. The associated limitations are acknowledged in the paper. It is the methodology that concerns me.
I am currently running a meta-analysis exploring effects of mindfulness interventions on creativity. I am using JASP software to run my meta-regression, which needs effect size and standard error.
I understand standard error is standard deviation divided by the square root of the sample size, but which standard deviation do I use in this equation? I am looking at some data collected pre and post intervention, and also data comparing control groups to intervention groups. So, would I use the post-intervention SD when computing the standard error for pre/post RCTs? And the experimental group SD when computing the standard error for control/experimental RCTs?
Any help on this would be great - thank you!
For a meta-analysis of interventions composed of only 1 RCT, 5 single-arm (SA) and 5 double-arm observational studies (DA), what is the best way to account for the lack of control arms amongst the SAs? More specifically, is it possible to 'donate' control arms from the RCT / DAs to allow for comparison of data within the recipient SAs?
Some literature proposes matching a control arm from an RCT with an SA (Zhang et al - doi: 10.1016/S1499-3872(12)60209-4), however, I'm unsure if this method adjusts for inherent differences in selection bias between the two designs. Please help?
I would be incredibly grateful for any advice, thank you!
I'm in the process of doing a meta-analysis and have encountered some problems with the RCT data. One of my outcomes is muscle strength. In one study, I have three different measurements of muscle strength for the knee joint (isometric, concentric, eccentric). I wonder how to enter the data into the meta-analysis. If I enter them separately, I artificially inflate the sample size (n). The best approach would probably be to combine them within this one study, because in the other included studies the authors give only one strength measurement.
Thank you all for any help.
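Combining within the study is indeed the usual recommendation. One sketch of the Borenstein et al. approach: average the three effect sizes and compute the variance of that composite, which requires an assumed correlation between the outcomes (the r = 0.7 below is purely illustrative, not from your data):

```python
# Sketch: composite effect size for m correlated outcomes within one study.
# variance = (1/m^2) * (sum Vi + sum_{i != j} r * sqrt(Vi * Vj))
def combine_outcomes(effects, variances, r):
    m = len(effects)
    mean_effect = sum(effects) / m
    var = sum(variances)
    for i in range(m):
        for j in range(m):
            if i != j:
                var += r * (variances[i] * variances[j]) ** 0.5
    return mean_effect, var / m**2

# e.g. isometric, concentric, eccentric knee-strength SMDs from one study
eff, var = combine_outcomes([0.5, 0.5, 0.5], [0.04, 0.04, 0.04], r=0.7)
print(eff, round(var, 3))  # 0.5 0.032
```

A sensitivity analysis over a range of plausible r values is a common way to handle the fact that the true between-outcome correlation is rarely reported.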
Hi, I have a simple question.
I am hoping to perform a power analysis/sample size estimation for an RCT. We will be controlling for baseline symptoms and using post-treatment or change scores as our outcome variable, i.e. we will use an "ANCOVA" design, which has been shown to increase power: https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-019-3671-2
Would anybody be able to point me towards the best tool for sample size estimation for such a model?
thanks!
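One practical shortcut from that same literature (Borm, Fransen & Lemmens, 2007): compute the ordinary two-sample t-test sample size, then multiply by (1 - rho^2), where rho is the assumed baseline-outcome correlation. A sketch in Python (the d = 0.5 and rho = 0.5 are illustrative placeholders, not recommendations):

```python
# Sketch: ANCOVA sample size via the Borm et al. (2007) design factor.
# n_ancova ≈ n_ttest * (1 - rho^2), rho = baseline-outcome correlation.
import math
from statsmodels.stats.power import TTestIndPower

d, rho = 0.5, 0.5
n_ttest = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
n_ancova = math.ceil(n_ttest * (1 - rho**2))
print(n_ancova)  # per-group n for the ANCOVA design
```

With these inputs the plain t-test needs about 64 per group, while the ANCOVA design needs about 48, which illustrates the power gain the linked paper describes. G*Power can do the same via its ANCOVA module if you prefer a GUI.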
Hi!
I am currently conducting a systematic review on interventions, but for my inclusion criteria for study designs I included single group pre-post studies as well under quasi-experimental designs.
I conducted a meta-analysis on the studies that used an RCT or CCT design. However, is it possible to conduct a separate meta-analysis on the single-group pre-post studies and pool those effect sizes together? Or should I report the individual studies and their respective effect sizes?
Any help would be much appreciated!
Are the most recent randomized clinical trial papers needed to conduct a legitimate meta-analysis, or will trials from decades back still provide valid results?
For my undergraduate dissertation, I am looking at comparing 2 different interventions that have not been compared directly within head-to-head randomised trials. I am looking into possibly doing a network meta-analysis for this reason with the 2 interventions being compared to a comparable control in the form of a wait-list or treatment as usual.
I've currently found just 5 RCTs (3 vs 2), so I'm wondering whether this type of analysis is still appropriate?
The RoB 2 tool assesses the risk of bias in randomized clinical trials, and the ROBINS-I tool in non-randomized studies of interventions, such as cohort studies, case-control studies, and non-randomized clinical trials. But my question is: if there are clinical trials and cohort studies that do not have a control group, can the ROBINS-I tool be applied? Or is there a more suitable tool?
Thank you.
Greetings;
I've recently published a preprint for a COVID treatment RCT, which appeared under the "Dental" specialty instead of "Infectious diseases (ID)". This is an issue as it doesn't appear in searches for ID. Is there any way to modify this categorization?
In an RCT that checks pre-intervention and post-intervention effects on motor capacity and motor performance in the same population or group, which statistical test best analyzes this research?
I am doing a crossover RCT.
I used a between-subjects repeated measures ANOVA to assess the carryover effect. Is this OK?
In this case, should I consider a p-value < 0.10 as a sign of a carryover effect?
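For reference, a common alternative for a 2x2 crossover is the Grizzle-type carryover test: compare each subject's period-1 + period-2 total between the two sequence groups with an independent t-test (conventionally at the 10% level, as you mention). A sketch on simulated data (all numbers illustrative):

```python
# Sketch: Grizzle-type carryover test for a 2x2 crossover.
# Under no carryover, the period totals have the same mean in both sequences.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 20
# sequence AB: treatment then control; sequence BA: control then treatment
ab = np.column_stack([rng.normal(12, 2, n), rng.normal(10, 2, n)])
ba = np.column_stack([rng.normal(10, 2, n), rng.normal(12, 2, n)])

t, p = stats.ttest_ind(ab.sum(axis=1), ba.sum(axis=1))
print(round(p, 3))
```

Note that this test has low power, which is the usual justification for the lenient 0.10 threshold.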
If the interventions are compared pre and post in an RCT, can an NMA be done?
I have an RCT with three primary measures: M01, M02, M03, and each of these measures is presented randomly, but intact, to the participants at baseline, 30 days, and 60 days.
Let's say I get really good results from RCT01, and I get funded for RCT02, and RCT02 is just like RCT01, except I add two new measures.
RCT01: M01, M02, M03
RCT02: M01, M02, M03, M04, M05
If everything stays the same from RCT01 to RCT02 (except for the added measures), can I combine the data for M01, M02, and M03 in the analysis to potentially increase the power of my results?
If the answer is yes, is there a good citation that supports this?
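If pooling individual-level data from the two trials is defensible (same protocol, population, and measures), one common sketch is a one-stage IPD-style analysis: stack both datasets and include a study indicator so trial-level differences are adjusted for. The data and names (`m01`, `treat`, `study`) below are illustrative:

```python
# Sketch: pooled analysis of M01 across two trials with a fixed study effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)

def make_trial(study, n, shift):
    treat = rng.integers(0, 2, n)
    return pd.DataFrame({
        "study": study,
        "treat": treat,
        "m01": shift + 2.0 * treat + rng.normal(0, 3, n),
    })

pooled = pd.concat([make_trial("RCT01", 80, 0.0), make_trial("RCT02", 80, 1.5)])
fit = smf.ols("m01 ~ treat + C(study)", data=pooled).fit()
print(fit.params["treat"])  # treatment effect pooled across trials
```

Whether this counts as confirmatory rather than exploratory depends on whether the pooling was prespecified, which is worth checking against the IPD meta-analysis literature.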
1. What type of literature should I read for writing a successful concept note?
2. Is there any literature that will help me decide on a research question/objective for designing a study appropriate for the evaluation of community-based primary prevention models for NCDs?
3. For convincing a donor as a student researcher, is it better that I narrow down to a specific prevention model, such as mass health promotion to reduce an NCD, e.g. hypertension?
a) Do you think a baseline survey of knowledge regarding prevention vs an end-line survey would be a strong method of evaluation?
b) If I try to do an RCT, which literature will help me find out which community-based primary prevention interventions are recent or interesting?
(This is for an assignment)
Thank you
We are testing whether screening for and management of depression during pregnancy would improve birth outcomes. We will randomize by cluster (health facility), but at baseline we will only screen for depression in the intervention group, assuming a similar prevalence in the control group. However, we will measure birth outcomes in both arms in order to make the comparison. Would this be considered a cluster RCT? Also, any further suggestions to improve it?
I am in an evidence based class working on my PICOT question. My concern is I don't know what my timeline should be. 6 weeks? 12 weeks? Looking for RCT articles for this!
Hello everyone!
My question is pretty basic, but as this is not really my field (only did a bit of statistics during my first year at uni), I don't know the answer to it.
I would like to find out whether an intervention was effective in improving maternal sensitivity. I have two groups, control and intervention, and two measures (pre-intervention and post-intervention) for each. I understood I have to do a t-test. My question is: between which of the 4 is the t-test done? Is it between the pre- and post-intervention scores of the intervention group? Is it between the post scores of the control group and the post scores of the intervention group? Is it both? Thank you all so much!
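One standard answer is to compute each participant's change score (post minus pre) and run a single independent-samples t-test between the two groups, rather than separate tests on each pair of means. A minimal sketch on simulated data (all numbers illustrative):

```python
# Sketch: between-group t-test on change scores for a two-group pre/post design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 40
pre_ctrl = rng.normal(50, 8, n)
post_ctrl = pre_ctrl + rng.normal(0, 4, n)   # control: no true change
pre_int = rng.normal(50, 8, n)
post_int = pre_int + rng.normal(4, 4, n)     # intervention: mean change of 4

t, p = stats.ttest_ind(post_int - pre_int, post_ctrl - pre_ctrl)
print(round(t, 2), round(p, 4))
```

An ANCOVA (post score regressed on group plus baseline) is an often more powerful alternative that answers the same question.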

I currently have a number of Postgraduates working on systematic literature reviews and we have reached the 'Search Strategy' stage. Our reviews focus on stroke rehabilitation, treatment of stuttering/stammering in adults and lymphoedema post breast cancer. We only have access to the free database Pubmed, and to Science Direct in our academic institution as we do not have a School of Medicine.
Before we go looking for collaborators with access to CINAHL, EMBASE, OVID MEDLINE etc, (and I notice that most published systematic literature reviews search 3-4 different databases) when can I be confident that I have searched sufficiently to not have missed any important RCT's?
Thanks Ken
I am writing a synopsis for my dissertation on a topic in which I'm comparing 2 endodontic sealers clinically (RCT). There are no clinical studies comparing them in the literature so far. I am having difficulty calculating the sample size. I need guidance.
The Newcastle-Ottawa Scale is good for cohort and case-control observational studies, but I am doing a meta-analysis with both randomized and non-randomized clinical trials. Some of the non-randomized trials have single arms (without comparison), and I don't think the Newcastle-Ottawa Scale can be used here.
In Factorial Design of Experiments (FDoE), each factor has different levels, and one level can be considered the base level. The cases/specimens sharing this base level can be considered a control group. Also, randomization is similar to the combination of all possible levels of all factors. In this sense, RCT and FDoE seem similar. What's your opinion?